On December 6, 2001 06:41 am, Nathan Scott wrote:
> hey there.
>
> > I still don't like the class parsing inside the kernel, it's hard to see
> > what is good about that.
>
> I guess it ultimately comes down to simplicity. The IRIX interfaces
> have this separation of name and namespace - each operation has to
> indicate which namespace is to be used. That becomes very messy when
> you wish to work with multiple attribute names and namespaces at once.
> Since the namespace is intimately tied to the name anyway, this idea
> of specifying the two components together provides very clean APIs.
Right now we have two namespaces, user and system. That's one bit of
information, and the proposal is to represent it with 5-7 bytes, passing it
on every call, and decoding it with a memcmp or similar. This is just extra
fluff as far as I can see, and provides every bit as much opportunity for
implementing a private API as the original cmd parameter did, by encoding
whatever one pleases before the dot.
> The term "parsing" is a bit of an overstatement too. We're talking
> strncmp() complexity here, not lex/yacc. ;) And it's not clear that
> you can get out of doing that level of parsing in the kernel anyway
> (unless you go for a binary namespace representation, and that's a
> real can of worms).
I'm suggesting we take a look at that.
> > Is there a difference between these two?:
> >
> >   long sys_setxattr(char *path, char *name, void *value, size_t size, int flags)
> >   long sys_lsetxattr(char *path, char *name, void *value, size_t size, int flags)
> >
>
> Yes, definitely. The easiest reason - there are filesystems which
> support extended attributes on symlinks already (XFS does), coming
> from other operating systems, and there should be a way to get at
> that information too.
OK, well it looks like you're going a little overboard here in dividing out
the functionality. What you're talking about is 'follow symlink or not',
right? That really does sound to me as though it's naturally expressed with
a flag bit. I really don't see a compelling reason to go beyond 8 syscalls:
get, fget, set, fset, del, fdel, list, flist
--
Daniel
#include <execinfo.h>

backtrace(), backtrace_symbols(), and backtrace_symbols_fd() are provided in glibc since version 2.1.
For an explanation of the terms used in this section, see attributes(7).
These symbol names may be unavailable without the use of special linker options. For systems using the GNU linker, it is necessary to use the −rdynamic linker option. Note that names of "static" functions are not exposed, and won't be available in the backtrace.
The program below demonstrates the use of
backtrace() and
backtrace_symbols(). The following shell
session shows what we might see when running the program:
$ cc -rdynamic prog.c -o prog
$ ./prog 3

void
myfunc(int ncalls)
{
    if (ncalls > 1)
        myfunc(ncalls - 1);
    else
        myfunc2();
}

int
main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "%s num-calls\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    myfunc(atoi(argv[1]));
    exit(EXIT_SUCCESS);
}
Grade Calculation in C#
Program Summary
Our first program is based on a common task that every course professor/instructor needs to do: make grades. In any given course, there is a grading scale and a set of categories.
Here is sample output from two runs of the program. The only data entered by the user are shown in boldface for illustration here.
One successful run with the data used above:
Enter weights for each part as an integer
percentage of the final grade:
Exams: 40
Labs: 15
Homework: 15
Project: 20
Participation: 10
Enter decimal numbers for the averages in each part:
Exams: 50
Labs: 100
Homework: 100
Project: 100
Participation: 5
Your grade is 70.5%
Your letter grade is C-.
A run with bad weights:
Enter weights for each part as an integer
percentage of the final grade:
Exams: 30
Labs: 10
Homework: 10
Project: 10
Participation: 10
Your weights add to 70, not 100.
This grading program is ending.
18.5.2. Details
Make your program file have the name grade_calc.cs.
This is based on the idea of Dr. Thiruvathukal's own legendary course syllabus. We're going to start by assuming that there is a fixed set of categories. As an example we assume Dr. Thiruvathukal's categories.
In the example below we use Dr. Thiruvathukal's weights for each category, though your program should prompt the user for these integer percentages:
exams - 40% (integer weight is 40)
labs - 15% (weight 15)
homework - 15% (weight 15)
project - 20% (weight 20)
participation - 10% (weight 10)
Your program will prompt the user for the weight of each of the categories. These weights will be entered as integers, which must add up to 100.
If the weights do not add up to 100, print a message and end the program. You can use an if-else construction here. An alternative is an if statement to test for a bad sum. In the block of statements that go with the if statement, you can put not only the message to the user, but also a statement:
return;
Recall that a function ends when a return statement is reached. You may not have heard that this can also be used with a void function. In a void function there is no return value in the return statement.
Assuming the weights add to 100, then we will use these weights to compute your grade as a double, which gives you the best precision when it comes to floating-point arithmetic.
We'll talk in class about why we want the weights to be integers. Because floating-point mathematics is not 100% precise, it is important that we have an accurate way to know that the weights really add up to 100. The only way to be assured of this is to use integers. We will actually use floating-point calculations to compute the grade, because we have a certain tolerance for errors at this stage. (This is a fairly advanced topic that is covered extensively in courses like COMP 264/Systems Programming and even more advanced courses like Numerical Analysis, Comp 308.)
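The imprecision referred to here is easy to demonstrate; a short illustrative snippet:

```csharp
using System;

class FloatDemo
{
    static void Main()
    {
        double sum = 0.1 + 0.2;
        // Prints False: 0.1 and 0.2 have no exact binary representation,
        // so their sum is not exactly 0.3.
        Console.WriteLine(sum == 0.3);
        // Integer arithmetic has no such problem, which is why the
        // weights are checked as integers:
        Console.WriteLine(40 + 15 + 15 + 20 + 10 == 100);  // True
    }
}
```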
We are going to pretend that we already know our score (as a percentage) for each one of these categories, so it will be fairly simple to compute the grade.
For each category, you will define a weight (int) and a score (double). Then you will sum up the weight * score and divide by 100.0 (to get a double-precision floating-point result).
This is best illustrated by example.
George is a student in COMP 170. He has the following averages for each category to date:
exams: 50%
labs: 100%
homework: 100%
project: 100%
participation: 5%
The following session with the csharp interpreter shows how you would declare all of the needed variables and the calculation to be performed:
csharp> int exam_weight = 40;
csharp> int lab_weight = 15;
csharp> int hw_weight = 15;
csharp> int project_weight = 20;
csharp> int participation_weight = 10;
csharp> double exam_grade = 50.0;
csharp> double lab_grade = 100;
csharp> double homework_grade = 100;
csharp> double project_grade = 100;
csharp> double participation_grade = 5;
This is intended only as an example, though. Your program must ask the user to enter each of these variables.
Once we have all of the weights and scores entered, we can calculate the grade as follows. This is a long expression: it is continued on multiple lines. Recall that the > symbols are csharp continuation prompts and are not part of the expression:
csharp> double grade = (exam_weight * exam_grade +
> hw_weight * homework_grade +
> lab_weight * lab_grade + project_weight * project_grade +
> participation_weight * participation_grade) / 100.0;
Then you can display the grade as a percentage:
csharp> Console.WriteLine("Your grade is {0}%", grade);
Your grade is 70.5%
Now for the fun part. We will use if statements to print the letter grade. You will actually need to use multiple if statements to test the conditions. One way to write the logic for determining your grade is to mirror how you tend to think about the best grade you can hope for in any given class. (We know that we used to do this as students.)
Here is the thought process:
If my grade is 93 (93.0) or higher, I'm getting an A.
If my grade is 90 or higher (but less than 93), I am getting an A-.
If my grade is 87 or higher (but less than 90), I am getting a B+.
And so on...
Finally, if I am less than 60, I am unlikely to pass.
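That chain of if statements can be sketched as follows (the exact cutoffs below are illustrative; use the ones from your course's grading scale, and note the sketch only covers some of the letter grades):

```csharp
using System;

class LetterGrade
{
    // Illustrative cutoffs only -- adjust to match the actual syllabus.
    static string ToLetter(double grade)
    {
        if (grade >= 93.0)
            return "A";
        else if (grade >= 90.0)
            return "A-";
        else if (grade >= 87.0)
            return "B+";
        else if (grade >= 83.0)
            return "B";
        else if (grade >= 80.0)
            return "B-";
        else if (grade >= 77.0)
            return "C+";
        else if (grade >= 73.0)
            return "C";
        else if (grade >= 70.0)
            return "C-";
        else if (grade >= 60.0)
            return "D";
        else
            return "F";
    }

    static void Main()
    {
        // Matches the sample run above: 70.5 falls in the C- band.
        Console.WriteLine("Your letter grade is {0}.", ToLetter(70.5));
    }
}
```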
We'll come to see how logic plays a major role in computer science, sometimes even more of a role than other mathematical aspects. In this particular program, however, we see a bit of the best of both worlds. We're doing arithmetic calculations to compute the grade. But we are using logic to determine the grade in the cold reality that we all know and love: the bottom-line grade.
This assignment can be started after the data chapter, because you can do most of it with tools learned so far. Add the parts with if statements when you have been introduced to if statements. (Initially be sure to use data that makes the weights actually add up to 100.)
You should be able to write the program more concisely and readably if you use the functions developed in class for prompting for user input.
18.5.3. Grading Rubric:
Enter weights, with prompts [3]
End if the weights do not add to 100: [5]
Enter grades, with prompts: [3]
Calculate the numerical average and display with a label: [5]
Calculate the letter grade and display with a label: [5]
Use formatting standards for indentation: [4]
Sequential statements at the same level of indentation
Blocks of statements inside of braces indented
Closing brace for a statement block always lining up with the heading before the start of the block.
18.5.4. Logs and Partners:
Your name and who your partner is (if you have one)
Your approximate total number of hours working on the homework
Some comment about how it went - what was hard ...
An assessment of your contribution (if you have a partner)
An assessment of your partner's contribution (if you have a partner).
Just omit the parts about a partner if you do not have one.
Solution Preview
Attached is the program that'll help you do your task. It has a class called gradeCalculation containing a "main" function. It asks the user to input the weights and checks whether the weights sum to 100. If not, the program finishes; otherwise it continues to ask for the grades. It then calculates the percentage and assigns a letter grade using if...else statements.
Please feel free to modify the program as per your requirements. You may also create separate functions to input weights and grades from the user, and to do the calculation for computing the grade. Feel free to let me know if you require help with that as well. Currently, everything is done under the main function to make it simple for you to understand.
I am also attaching the main.cs file as well as pasting the code here. You can compile and run the code either in C# compiler/editor on your machine or use freely available online compilers.
Best,
using System.IO;
using System;
class gradeCalculation
{
static void Main()
{
Console.WriteLine("-------------------");
...
Solution Summary
Easy to understand C# code for calculating exam grades. Inline comments are not included as the code is self-explanatory. | https://brainmass.com/computer-science/c-sharp/grade-calculation-624418 | CC-MAIN-2019-51 | refinedweb | 1,445 | 71.24 |
FAQs about TypeScript in Deno
Can I use TypeScript not written for Deno?
Maybe. That is the best answer, we are afraid. For lots of reasons, Deno has chosen to have fully qualified module specifiers. In part this is because it treats TypeScript as a first class language. Also, Deno uses explicit module resolution, with no magic. This is effectively the same way browsers themselves work, though they don't obviously support TypeScript directly. If the TypeScript modules use imports that don't have these design decisions in mind, they may not work under Deno.
Also, in recent versions of Deno (starting with 1.5), we have started to use a
Rust library to do transformations of TypeScript to JavaScript in certain
scenarios. Because of this, there are certain situations in TypeScript where
type information is required, and therefore those are not supported under Deno.
If you are using
tsc as stand-alone, the setting to use is
"isolatedModules"
and setting it to
true to help ensure that your code can be properly handled
by Deno.
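For example, a minimal tsconfig.json for stand-alone tsc might include (sketch):

```json
{
  "compilerOptions": {
    "isolatedModules": true
  }
}
```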
One of the ways to deal with the extension and the lack of Node.js non-standard resolution logic is to use import maps which would allow you to specify "packages" of bare specifiers which then Deno could resolve and load.
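A minimal import map might look like this (the URL is just an example):

```json
{
  "imports": {
    "lodash": "https://cdn.skypack.dev/lodash"
  }
}
```

It could then be supplied on the command line, e.g. `deno run --import-map=import_map.json main.ts`.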
What version(s) of TypeScript does Deno support?
Deno is built with a specific version of TypeScript. To find out what this is, type the following on the command line:
> deno --version
The TypeScript version (along with the versions of Deno and V8) will be printed. Deno tries to keep up to date with general releases of TypeScript, providing them in the next patch or minor release of Deno.
There was a breaking change in the version of TypeScript that Deno uses, why did you break my program?
We do not consider changes in behavior or breaking changes in TypeScript
releases as breaking changes for Deno. TypeScript is a generally mature language
and breaking changes in TypeScript are almost always "good things" making code
more sound, and it is best that we all keep our code sound. If there is a
blocking change in the version of TypeScript and it isn't suitable to use an
older release of Deno until the problem can be resolved, then you should be able
to use
--no-check to skip type checking altogether.
In addition you can utilize
@ts-ignore to ignore a specific error in code
that you control. You can also replace whole dependencies, using
import maps, for situations where a
dependency of a dependency isn't being maintained or has some sort of breaking
change you want to bypass while waiting for it to be updated.
How do I write code that works in Deno and a browser, but still type checks?
You can do this by using a configuration file with the
--config option on the
command line and adjusting the
"lib" option in the
"compilerOptions" in the
file. For more information see
Targeting Deno and the Browser.
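For instance, a configuration targeting both environments might look like this sketch (based on that page), passed via `deno run --config tsconfig.json main.ts`:

```json
{
  "compilerOptions": {
    "lib": ["dom", "dom.iterable", "dom.asynciterable", "deno.ns"]
  }
}
```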
Why are you forcing me to use isolated modules, why can't I use const enums with Deno, why do I need to do export type?
As of Deno 1.5 we defaulted isolatedModules to true, and in Deno 1.6 we
removed the option to set it back to false via a configuration file. The
isolatedModules option forces the TypeScript compiler to check and emit
TypeScript as if each module would stand on its own. TypeScript has a few type
directed emits in the language at the moment. While not allowing type directed
emits into the language was a design goal for TypeScript, it has happened
anyway. This means that the TypeScript compiler needs to understand the erasable types in the code to determine what to emit, which becomes a problem when you are trying to build a fully erasable type system on top of JavaScript.
When people started transpiling TypeScript without
tsc, these type directed
emits became a problem, since the likes of Babel simply try to erase the types
without needing to understand the types to direct the emit. In the internals of
Deno we have started to use a Rust based emitter which allows us to optionally
skip type checking and generates the bundles for things like
deno bundle. Like
all transpilers, it doesn't care about the types, it just tries to erase them.
This means in certain situations we cannot support those type directed emits.
So instead of trying to get every user to understand when and how we could
support the type directed emits, we made the decision to disable the use of them
by forcing the isolatedModules option to
true. This means that even when we
are using the TypeScript compiler to emit the code, it will follow the same
"rules" that the Rust based emitter follows.
This means that certain language features are not supportable. Those features are:
- Re-exporting of types is ambiguous and requires knowing if the source module is exporting runtime code or just type information. Therefore, it is recommended that you use import type and export type for type-only imports and exports. This will help ensure that when the code is emitted, all the types are erased.
- const enum is not supported. const enums require type information to direct the emit, as const enum values get written out as hard-coded values. Especially when const enums get exported, they are a type-system-only construct.
- export = and import = are legacy TypeScript syntax which we do not support.
- Only declare namespace is supported. Runtime namespace is legacy TypeScript syntax that is not supported.
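Concretely (illustrative fragments, not a runnable file; "./types.ts" stands in for any module that declares a User type):

```typescript
// Re-exporting is ambiguous without type information:
//   export { User } from "./types.ts";
// Explicit type-only forms that are always safe to erase:
import type { User } from "./types.ts";
export type { User } from "./types.ts";

// const enum: the compiler must inline hard-coded values,
// which requires type information:
const enum Level { Info, Warn, Error }
const n = Level.Warn; // tsc would emit roughly: const n = 1;
```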
Why don't you support language service plugins or transformer plugins?
While
tsc supports language service plugins, Deno does not. Deno does not
always use the built-in TypeScript compiler to do what it does, and the
complexity of adding support for a language service plugin makes it infeasible.
TypeScript does not support emitter plugins, though there are a few community projects which hack emitter plugins into TypeScript. We wouldn't want to support something that TypeScript itself doesn't support. In addition, we do not always use the TypeScript compiler for the emit, so we would need to ensure plugin support in all modes; and the other emitter is written in Rust, meaning that any emitter plugin for TypeScript wouldn't be available to the Rust emitter.
The TypeScript in Deno isn't intended to be a fully flexible TypeScript
compiler. Its main purpose is to ensure that TypeScript and JavaScript can run
under Deno. The secondary ability to do TypeScript and JavaScript emitting via
the runtime API
Deno.emit() is intended to be simple and straightforward and
support a certain set of use cases.
How do I combine Deno code with non-Deno code in my IDE?
The Deno language server supports the ability to have a "per-resource"
configuration of enabling Deno or not. This also requires a client IDE to
support this ability. For Visual Studio Code the official
Deno extension
supports the vscode concept of
multi-root workspace.
This means you just need to add folders to the workspace and set the
deno.enable setting as required on each folder.
For other IDEs, the client extensions needs to support the similar IDE concepts. | https://deno.land/manual@v1.17.3/typescript/faqs | CC-MAIN-2022-27 | refinedweb | 1,326 | 62.07 |
Updating text on ui.label from another Widget?
Hello, I'm about to start a new personal project and have one lingering question, if you pros don't mind helping me answer it. The plan is to have a mainWindow with just a QLabel in it and a QPushButton that will open another widget. This other widget will have a lineEdit and a submit button. How do I go about taking the lineEdit text and having it show on the mainWindow label after pressing the submit button? I'll start coding it now. Thanks
- SGaist Lifetime Qt Champion last edited by
Hi,
Give that widget a signal that you will emit when the button is clicked and connect it to your label setText slot.
Perfect, thanks for the quick reply. May I show you what I have right now?
form.h

#ifndef FORM_H
#define FORM_H

#include <QWidget>

namespace Ui {
class Form;
}

class Form : public QWidget
{
    Q_OBJECT

public:
    explicit Form(QWidget *parent = nullptr);
    ~Form();

private:
    Ui::Form *ui;
    void submitButtonClick();
};

#endif // FORM_H
form.cpp

#include "form.h"
#include "ui_form.h"

Form::Form(QWidget *parent) :
    QWidget(parent),
    ui(new Ui::Form)
{
    ui->setupUi(this);
    connect(ui->submitButton, &QPushButton::clicked, this, &Form::submitButtonClick);
}

Form::~Form()
{
    delete ui;
}

void Form::submitButtonClick()
{
    this->close();
}

mainwindow.h

#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>

namespace Ui {
class MainWindow;
}

class MainWindow : public QMainWindow
{
    Q_OBJECT

public:
    MainWindow(QWidget *parent = nullptr);
    ~MainWindow();

private:
    Ui::MainWindow *ui;
    void pushButtonClick();
};

#endif // MAINWINDOW_H
mainwindow.cpp

#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "form.h"

MainWindow::MainWindow(QWidget *parent)
    : QMainWindow(parent)
    , ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    connect(ui->pushButton, &QPushButton::clicked, this, &MainWindow::pushButtonClick);
}

MainWindow::~MainWindow()
{
    delete ui;
}

void MainWindow::pushButtonClick()
{
    Form *form = new Form();
    form->show();
}
I've added the signal; what should the connect look like?
form.h

signals:
    void applyChanges();
form.cpp

void Form::submitButtonClick()
{
    emit applyChanges();
    this->close();
}
- mrjj Lifetime Qt Champion last edited by mrjj
@BoGut
Hi
Like the normal connect.
but i think you want like
signals:
void applyChanges(const QString &Text);
so we can emit the text with the signal.
we need to do the connect where you create the Form object
void MainWindow::pushButtonClick()
{
    Form *form = new Form();
    form->show();
    // from which object, ClassName::SignalName, to which object, ClassName::SlotName
    connect(form, &Form::applyChanges, ui->thelabel, &QLabel::setText);
}
and we need to emit the text also
void Form::submitButtonClick()
{
    emit applyChanges(ui->LineEditName->text());
    this->close();
}
Something like this. It's just brain-compiled, so it might have some errors, but that's the overall idea.
The Form that has the lineEdit emits the text to MainWin where we connected the labels setText.
> May I show you what I have right now?
Yes! Please always show the code you have and any errors, as it's much easier to help that way. We will help with almost anything if the question is carefully written and effort is shown.
Thank you so much, it works! I read the signals and slots Qt docs and got maybe 75% of what they were talking about :-) You guys are amazing, virtual beers on me!
Jun 14, 2007 09:50 AM|naturehermit|LINK
I have the following program, and it compiles if I change the protection of playRadio in the radio object to public. I want to know why it doesn't compile as written, since the principles of OO design say the caller should be unaware of the underlying object (in this case, radio). The car should pass a message to the radio object, which should then invoke a response.
using System;
using System.Collections.Generic;
using System.Text;
using System.Drawing;

namespace ConsoleApplication1
{
    class Program
    {
        static Point p = new System.Drawing.Point(20, 30);

        struct Cycle
        {
            int _val, _min, _max;

            public Cycle(int min, int max)
            {
                _val = min;
                _min = min;
                _max = max;
            }

            public int Value
            {
                get { return _val; }
                set
                {
                    if (_val > _max)
                    {
                        _val = _min;
                    }
                    else
                    {
                        if (_val < _min)
                        {
                            _val = _max;
                        }
                        else
                            _val = value;
                    }
                }
            }

            public override string ToString()
            {
                return Value.ToString();
            }

            public int ToInteger()
            {
                return Value;
            }

            public static Cycle operator +(Cycle arg1, int arg2)
            {
                arg1.Value += arg2;
                return arg1;
            }

            public static Cycle operator -(Cycle arg1, int arg2)
            {
                arg1.Value -= arg2;
                return arg1;
            }
        }

        public class car
        {
            internal class radio
            {
                enum power { on, off };

                protected void playRadio()
                {
                    //power = power.on;
                    Console.WriteLine("Radio is", power.on);
                }
            }

            public void playCarRadio()
            {
                car.radio r = new car.radio();
                r.playRadio();
            }
        }

        static void Main(string[] args)
        {
            Cycle degrees = new Cycle(0, 359);
            Cycle quarters = new Cycle(1, 4);
            car Audi = new car();
            Audi.playCarRadio();
            for (int i = 0; i <= 8; i++)
            {
                degrees += 90;
                quarters += 1;
                Console.WriteLine("degrees = {0}, quarters = {1}", degrees, quarters);
            }
            //object o;
            p.Offset(-1, -1);
            Console.WriteLine("Point X {0}, Y {1}", p.X, p.Y);
            //Console.WriteLine(o == (null ? 0 : Convert.ToInt32(o)));
        }
    }
}
Jun 14, 2007 12:17 PM|bmains|LINK
Hey,
If you make the method protected for the internal class radio in this case, it is only available to itself and classes that derive from it, so that won't work because car doesn't inherit from radio. If you make it protected internal void playRadio(), that will work. Also, to retain the radio power setting, car should have a property of type radio, so it can retain the value you set.
Jun 14, 2007 01:48 01:59 04:56 PM|bmains|LINK
Hey,
I didn't say change protected to internal; use protected internal combined (another level of exposure). Properties can be internal, protected, or protected internal as well. Realize that you call the car.radio method, but nothing is keeping track of the radio, so the values aren't retained... So you work with the radio object, but after the car's radio method is done, nothing knows about it.
Jun 14, 2007 05:03 PM|bmains|LINK
Hey,
What if you did this instead:
public class Car
{
    private CarRadio _radio;

    public CarRadio Radio
    {
        get
        {
            if (_radio == null)
                _radio = new CarRadio();
            return _radio;
        }
    }

    public class CarRadio
    {
        private power _power;
        internal CarRadio() { _power = power.off; }
        enum power { on, off };

        public void playRadio()
        {
            _power = power.on;
            Console.WriteLine("Radio is {0}", _power);
        }
    }
}
Then, all you have to do to play the radio is Audi.Radio.playRadio() instead of the redirection... And no one can freely instantiate Radio because its constructor is internal...
Jun 14, 2007 07:01 PM|MadsTorgersen[MSFT]|LINK
Just to elaborate:
In Java, protected is strictly more permissive than "package". In C# the equivalent of Java's protected is "protected internal". protected makes a member visible in derived classes, internal makes it visible in the same assembly, "protected internal" makes it visible in both places.
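To illustrate the three levels (hypothetical snippet, assuming all three classes live in the same assembly):

```csharp
class Base
{
    protected void P() { }           // visible in Base and derived classes only
    internal void I() { }            // visible anywhere in the same assembly
    protected internal void PI() { } // visible in derived classes OR the same assembly
}

class Derived : Base
{
    void Demo()
    {
        P();   // OK: derived class
        PI();  // OK: derived class
    }
}

class Other
{
    void Demo(Base b)
    {
        b.I();   // OK: same assembly
        b.PI();  // OK: same assembly
        // b.P();  // error: not accessible outside Base and its derived types
    }
}
```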
Thanks,
Mads Torgersen, MS C# Language PM
Jun 14, 2007 07:12 PM|bmains|LINK
Thanks for the elaboration Mads; I didn't know anything about Java.
Jun 15, 2007 08:18 AM|naturehermit|LINK
Guys, thanks again for replying; however, I'm sorry if the "internal" has confused the debate. I shouldn't have put the internal in there. My simple question is why a simple protected method of a class nested inside a class is not accessible to the enclosing class, as in my example above. By making it protected internal, I am merely changing the visibility to make it visible to the whole assembly, but that's not what I want to understand; what I want to understand is why the method is inaccessible. By making it internal, it's visible to everyone in the assembly (against encapsulation), but that I don't want to do. Here it is again below.
using System;
using System.Collections.Generic;
using System.Text;
using System.Drawing;

namespace ConsoleApplication1
{
    class Program
    {
        static Point p = new System.Drawing.Point(20, 30);

        struct Cycle
        {
            int _val, _min, _max;

            public Cycle(int min, int max)
            {
                _val = min;
                _min = min;
                _max = max;
            }

            public int Value
            {
                get { return _val; }
                set
                {
                    if (_val > _max)
                    {
                        _val = _min;
                    }
                    else
                    {
                        if (_val < _min)
                        {
                            _val = _max;
                        }
                        else
                            _val = value;
                    }
                }
            }

            public override string ToString()
            {
                return Value.ToString();
            }

            public int ToInteger()
            {
                return Value;
            }

            public static Cycle operator +(Cycle arg1, int arg2)
            {
                arg1.Value += arg2;
                return arg1;
            }

            public static Cycle operator -(Cycle arg1, int arg2)
            {
                arg1.Value -= arg2;
                return arg1;
            }
        }

        public class car
        {
            class radio
            {
                enum power { on, off };

                protected void playRadio()
                {
                    //power = power.on;
                    Console.WriteLine("Radio is", power.on);
                }
            }

            public void playCarRadio()
            {
                car.radio r = new car.radio();
                r.playRadio();
            }
        }

        static void Main(string[] args)
        {
            Cycle degrees = new Cycle(0, 359);
            Cycle quarters = new Cycle(1, 4);
            car Audi = new car();
            Audi.playCarRadio();
            for (int i = 0; i <= 8; i++)
            {
                degrees += 90;
                quarters += 1;
                Console.WriteLine("degrees = {0}, quarters = {1}", degrees, quarters);
            }
            //object o;
            p.Offset(-1, -1);
            Console.WriteLine("Point X {0}, Y {1}", p.X, p.Y);
            //Console.WriteLine(o == (null ? 0 : Convert.ToInt32(o)));
        }
    }
}
Jun 15, 2007 08:22 PM|scobrown|LINK
protected only allows you to access the member, method, etc. from WITHIN the class (or a class derived from it).
this will work:
public class Car
{
    protected void AProtectedMethod() { }

    protected class Radio
    {
        Radio(Car parent)
        {
            parent.AProtectedMethod();
        }
    }
}
but this will not:
public class Car
{
    Car()
    {
        Radio radio = new Radio();
        radio.AProtectedMethod(); // THIS WILL NOT COMPILE
    }

    protected class Radio
    {
        protected void AProtectedMethod() { }
    }
}
Jun 15, 2007 08:31 PM|scobrown|LINK
Additionally, you should not be hiding the car radio in this case. It is not truly an internal component.
In real life you press buttons on your Car's radio. The car manufacturer does not create buttons such as playRadio, tuneRadio, setVolume. Imagine the extra volumes of code you would write if your radio exposed 200 methods to car. You would have 200 methods on car with the sole responsibility of passing values to the radio.
The car should expose a Radio object, just as bmains said.
Jun 18, 2007 08:20 AM|naturehermit|LINK
Thanks once again; however, I created my radio class (earlier, for checking purposes) as protected and it doesn't work, but it works straight away in Java. As far as the radio is concerned, the radio object should be hidden from the user object, and car should encapsulate the radio object.
The user should interact with the car and not the radio; in fact, the user should be totally oblivious of the radio object. THIS, as you all know, is called data hiding or encapsulation. And yes, the car object should accomplish radio commands by passing messages to the radio object. You might have 200 methods, but it's the best practice of OO.
Jun 18, 2007 08:28 AM|naturehermit|LINK
I will make life simple. Should this compile or not? Don't put it into a compiler and try; just use the principles of OO and answer me, and then try compiling it.
public class car
{
    void playCarRadio()
    {
        radio r = new radio();
        r.playRadio();
    }

    protected class radio
    {
        protected void playRadio() { }
    }

    static void Main(string[] args)
    {
        car Audi = new car();
        Audi.playCarRadio();
    }
}
Jun 18, 2007 03:22 PM|bmains|LINK
Hey,
I believe it should, though the playCarRadio method is private in this case. However, it will instantiate radio, call playRadio, and then the radio instance is gone because car doesn't store it locally.
Jun 18, 2007 03:28 PM|naturehermit|LINK
Thank god, that's all I am saying... But it doesn't work; it says that playRadio is inaccessible, even if I make playCarRadio() public. Do you reckon there is a problem with the language specification? It compiles fine in Java.
Jun 18, 2007 04:57 PM|MadsTorgersen[MSFT]|LINK
It shouldn't work, and it doesn't.
playRadio is protected within its class; hence it is visible only within that class and its subclasses. Nowhere else. If you want it to be visible outside of its class, you need to make it internal, protected internal, or public.
This does not break encapsulation. The enclosing class car has the class radio as a protected member in your example. This means that radio is only visible inside car and its subclasses, which means that even if declared public, playRadio will only be visible within car and its subclasses. Which, I believe, is what you want. Right?
Mads Torgersen, MS C# Language PM
Jun 18, 2007 07:22 PM|bmains|LINK
Hey,
Out of curiosity: because playCarRadio has no access modifier (so it is private), and the static Main doesn't declare public either, should it be accessible? Did I read it right?
Jun 19, 2007 08:38 AM|naturehermit..
Jun 19, 2007 08:41 AM|naturehermit|LINK
Hi Brian,
It should be accessible because it is with in the same class. Actully because this is user interactive class so ideally it should have a public interface.
Jun 19, 2007 11:38 AM|bmains|LINK
Hey,
Right, so public must be declared. I wonder if part of your problem is that you have it as private?
Jun 19, 2007 11:44 AM|bmains|LINK
naturehermitlet me assure you that radio object should not expose any public methods. This is what is known as datahiding.
Saying radio object should not expose public methods is your design choice, but there isn't one way to do things. Actually, there are very good reasons for exposing the radio object, but not exposing certain methods. As you saw in the example I gave you, I exposed Radio object, made its constructor internal, made it as a property of car, and that way you could interface with the radio directly.
naturehermitHere it is radio object, imagine in a bank..The classes should work by passing message to the underlying object.
That would be a design pattern choice, like using observer pattern and such. That is a choice but there isn't one way to do things. Having an internal object is not passing messages to the underlying object, but working with an internal object.
Jun 19, 2007 11:59 AM|naturehermit|LINK
Brian, I did some experimentation and found that despite the method being public in the radio class, it actually is not a modifier risk in the initial sense because the class is declared as protected. So any other class cannot use that method anyway, but if somebody was able to modify just one keyword of the class from protected to public, the whole class will be undermined. Whereas if in the real case where class and its methods are protected, then all of them will need to be changed for having a public expose.
So it seems c# has some issues with it that may not become apparent so quickly but will come out during a hacking session or program mistakes. Java on the other hand seems to work with it and would not have those issues. There is another thread I am running with variable scoping, which again seem works fine in Java and has issues with c# and nobody has been able to understand why or provide explanation. I have this thread looked up by the Team as well.
More and more I am testing this, I m beginning to belive that C# is a weak language in terms of specifications of OO and will hurt someone, somewhere badly, unless somebody can tell me its designed like that or its supposed to be. I am running a test with generics and here again...different issue though..but ...Anyway the language fails formal methods tests so I should have understood.
As per a choice of design pattern, my friend--Best practices is not a design choice and If you subscribe to a methodology OO in this case, then your tools should subscribe to that as well. i wish I am wrong because it hurts to know that such a beautiful toolset with so much rich support in tools has problem with spec.
Jun 19, 2007 05:24 PM|bmains|LINK
I don't mean the radio class, I mean car's playCarRadio is not declared public, as well as the static void main method. The whole class will be undermined if public, not necessarily. Even best practices are a design pattern of choice, as one best practice may vary from another. The pattern you employ is one specific design pattern and there are others that work in a different layout. It's all relative.
Out of curiosity, how will C# hurt someone?
Jun 19, 2007 05:48 PM|MadsTorgersen[MSFT]|LINK
natureher..
Access control works a little differently in C# than Java because nested classes were part of C# from the start, whereas they were grafted onto Java as a later addition. We were able to design the whole thing together.
In C# when a nested class such as your "radio" is declared "protected" that means that the whole class is only accesible within its surrounding class ("car") and the subclasses of that. Outside of "car" and its derived classes noone can access the radio class or its members, regardless of whether those members are public or not. in C# public on a member simply means "visible to everyone who can see my enclosing type".
The problem in Java is that it is hard for a nested type to hide something from the enclosing type. We do in fact understand encapsulation very well at MS [:)] to the degree that we give you proper encapsulation of nested types as well.
So keep the radio class protected, and make its methods public. They won't escape to anyone outside the car, because the protected-ness of the car class - well - protects them. It does not expose public methods to anyone but car and its subclasses. Never ever.
Mads
Jun 19, 2007 05:52 PM|MadsTorgersen[MSFT]|LINK
MadsTorgersen[MSFT]
They won't escape to anyone outside the car, because the protected-ness of the car class - well - protects them.
Sorry, typo: That should be the protected-ness of the radio class, obviously.
Jun 20, 2007 08:38 AM|naturehermit|LINK
Yes Brian you are right that the car's playCarRadio should be public in real life but because I am calling the main from with in the same class so it was hardly an issue. Brian best practices are not design pattern of choice but way how a thing should be done. If you read the OO spec from OMG you will find that an encapsulated element should not expose anything public. If the language subscribes to OO then it should follow up.
Pattern is all together a different issue as different people have different ways of doing things. Like some go for SOA, some go for repetitivness, some for individuality...different topic. though..
How will c# hurt someone--One of the guy I know wrote a code that led to death of 250 people because of the problem in the spec of the language itself (code was used in an airliner) and he is very old and still regrets it. You and I are very young but we have a responsibility towards our code to the whole society and we cannot feign ignorance. A company like microsoft which is super rich can live with it, because they were formed out of ..you know...but we should do our best..
Jun 20, 2007 08:56 AM|naturehermit|LINK
Dear Mads,
I am obviously not starting a debate between java and c# but it suffices to say that they have done a brilliant job and followed things to the spec. Its like IE and Mozilla and despite IE7--Firefox is way ahead of times then IE7 --reason software designed to the spec.
Well If you are from Microsoft, please tell me that its a feature that in a protected class you need to have public methods to work with it. Although according to OO spec, an encapsulated class should not expose any public methods period.
If you read my earlier post I have mentioned the downside of keeping class protected and methods public although I agree that whatever the expose of the class is..it will apply.
As per your saying about encapsulation of nested types, check this out-- I will put comments to illustrate my point.
using System; using System.Collections.Generic; using System.Text; namespace ConsoleApplication1 { class car { protected class radio { enum power : int { on, off }; private power _power; public void playRadio() { _power = power.on; Console.WriteLine("Radio is", _power.ToString()); test t = new test();
//Look here its no longer protected because I just got it...(if the method were not public this would never happen) t.test2(); }
//The method is protected protected void test1() { Console.WriteLine("running test1"); } protected class test { public void test2() { Console.WriteLine("running test"); radio r = new radio();
//called internally as it should be.... r.test1(); } } } void playCarRadio() { car.radio r = new car.radio(); r.playRadio(); } static void Main(string[] args) { car Audi = new car(); Audi.playCarRadio(); gen Oa = new gen("Hello", "World"); Console.WriteLine((string)Oa.t + (string)Oa.u); GenM<string, string> ga = new GenM<string, string>("Hello ", "World "); Console.WriteLine(ga.t + ga.u); gen ob = new gen(10.124, 2005); Console.WriteLine((double)ob.t + (int)ob.u); GenM<double, int> gb = new GenM<double, int>(10.124, 2005); Console.WriteLine(gb.t + gb.u); } } }
26 replies
Last post Jun 20, 2007 08:56 AM by naturehermit | http://forums.asp.net/t/1122064.aspx?OO+Concept+general+question+about+Encapsulation+ | CC-MAIN-2014-35 | refinedweb | 3,095 | 64.1 |
armos 0.2.6
Armos is a free and open source library for creative coding in D programming language.
To use this package, put the following dependency into your project's dependencies section:
armos
armos is a free and open source library for creative coding in D programming language.
Demo
import armos.app; import armos.graphics; class TestApp : BaseApp{ Mesh line = new Mesh; override void setup(){ lineWidth(2); line.primitiveMode = PrimitiveMode.LineStrip; } override void draw(){ line.drawWireFrame; } override void mouseMoved(int x, int y, int button){ line.addVertex(x, y, 0); line.addIndex(cast(int)line.numVertices-1); } } void main(){run(new TestApp);}
Platform
- Linux
- macOS
- Windows
Require
Install
- Install some packages to build with dlang.
- macOS
$ brew install dmd dub
- Download this repository.
- Latest(via github)
$ git clone git@github.com:tanitta/armos.git
$ dub add-local <repository-path>
- Stable(via dub)
$ dub fetch armos
- Install dependency dynamic libraries and npm for glsl package management.
- macOS
$ brew install glfw3 assimp freeimage libogg libvorbis npm
- Build armos.
$ dub build armos
Usage
Generate new project.
$ dub run armos -- generate project <projectpath>
We recoment to set alias. (This command:
$ dub list | grep "armos" will find a package path of armos)
alias armos="path/to/armos"
Or, add to aleady existing package.
put the following dependency into your project's dub.sdl or dub.json.
dependency "armos" version="~>0.0.1"
Why use D?
Processing Speed : D is as fast as C++ programs.
Build Speed : The compilation is more faster than a speed of C++. Because of that, we can repeat trial and error.
Extensibility : We can use C/C++/Objective-C via D binding.
Easiness to learn : It isn't so much complex than C++!
-
ScreenShots
Contribution
Contributions are very welcome!
- Fork it
- Create your feature branch from dev branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Add some feature')
- Push to the branch (git push origin my-new-feature)
- Create new Pull Request
- Registered by tanitta
- 0.2.6 released 9 months ago
- tanitta/armos
- BSL-1.0
- Authors:
-
- Dependencies:
- derelict-ogg, derelict-al, dub, derelict-fi, derelict-glfw3, derelict-assimp3, rx, derelict-ft, derelict-gl3, fswatch, derelict-vorbis, derelict-portmidi, colorize
- Versions:
- Show all 17 versions
- Download Stats:
0 downloads today
1 downloads this week
1 downloads this month
132 downloads total
- Score:
- 2.1
- Short URL:
- armos.dub.pm | http://code.dlang.org/packages/armos | CC-MAIN-2018-34 | refinedweb | 395 | 51.75 |
The Eight Queens Puzzle is a classic problem whose goal is to place 8 queens on an
8x8 chessboard in such a way that none of the queens share a row, column or diagonal. The version discussed here allows exploration of not just an
8x8 but an arbitrary size
nxn board, and hence the program is called
N Queens.
The problem is very interesting from a Computer Science point of view, as is raises many issues to do with data representation, algorithmic efficiency and more. You can explore some of these issues in How to think like a Computer Scientist.
This article focuses on a Python implementation of the game using a fantastic tool for building graphical user interfaces (GUIs) called
simplegui.
simplegui is a minimal GUI framework for Python which keeps things simple so you can focus on the idea you are trying to implement without getting bogged down in detail. It provides:
- A canvas area where you can draw shapes and text, and display images
- A control area where you can easily add buttons and labels
- keyboard press and mouse click event handling
- Timers
- Audio playback functionality
There are two ways you can use this tool:
- In a browser, using CodeSkulptor. (See here for some awsome demos.)
- Locally using the famous pygame package along with Olivier Pirson’s SimpleGUICS2Pygame package which uses
pygameto implement the functionality of Codeskulptor’s
simpleguimodule.
Play the 8 Queens Puzzle in Python Online
Try and solve the puzzle on different size boards. Be aware that a couple of the smaller sizes do not have solutions. Can you tell which ones?
Now that you are familiar with the puzzle, no doubt you are chomping at the bit to write the program for yourself, or at least to peek at the code to see how it works.
You can find the complete code for the puzzle on my GitHub The code is comprised of two files:
n_queens.pycontains attributes and methods for the logic and data representation of the game, and is completely independent of the GUI code. This separation is generally a very good idea when working with GUIs, as you will learn from experience if you try and build similar games mixing together display logic and game logic.
n_queens_gui.pyuses attributes and methods from
n_queens.pyalong with the tools available from the
simpleguimodule to make a graphical version of the game.
You will notice that both files use object oriented programming. For some this might be considered an advanced topic. It would be possible to write the program with using classes, but at the level of complexity of this game that approach could quickly become unwieldy, particularly when it comes to keeping track of global variables.
That said, there is a great deal that you can do with Codeskulptor/SimpleguiCS2Pygame without using OOP, so don’t be discouraged if the code here seems a bit beyond your level of understanding. In future articles I may well cover the basics of GUI programming for people who haven’t yet ventured into the world of classes and objects.
Python implementation of the classic Eight Queens Puzzle
# n_queens.py """ N-Queens Problem using Codeskulptor/SimpleGUICS2Pygame By Robin Andrews - info@compucademy.co.uk """ try: import n_queens_gui except ImportError: import user47_EF0SvZ5pFJwZRzj_0 as n_queens_gui QUEEN = 1 EMPTY_SPOT = 0 BOARD_SIZE = 5 class NQueens: """ This class represents the N-Queens problem. There is no UI, but its methods and attributes can be used by a GUI. """ def __init__(self, n): self._size = n self.reset_board() def get_size(self): """ Get size of board (square so only one value) """ return self._size def reset_new_size(self, value): """ Resets the board with new dimensions (square so only one value). """ self._size = value self.reset_board() def get_board(self): """ Get game board. """ return self._board def reset_board(self): """ Restores board to empty, with current dimensions. """ self._board = [[EMPTY_SPOT] * self._size for _ in range(self._size)] def is_winning_position(self): """ Checks whether all queens are placed by counting them. There should be as many as the board size. """ num_queens = sum(row.count(QUEEN) for row in self._board) return num_queens >= self._size def is_queen(self, pos): """ Check whether given position contains a queen. """ i, j = pos return self._board[i][j] == QUEEN def place_queen(self, pos): """ Add a queen (represented by 1) at a given (row, col). """ if self.is_legal_move(pos): self._board[pos[0]][pos[1]] = QUEEN return True # Return value is useful for GUI - e.g trigger sound. return False def place_queen_no_checks(self, pos): """ For testing """ self._board[pos[0]][pos[1]] = QUEEN def remove_queen(self, pos): """ Set position on board to EMPTY value """ self._board[pos[0]][pos[1]] = EMPTY_SPOT def is_legal_move(self, pos): """ Check if position is on board and there are no clashes with existing queens """ return self.check_row(pos[EMPTY_SPOT]) and self.check_cols(pos[1]) and self.check_diagonals(pos) def check_row(self, row_num): """ Check a given row for collisions. 
Returns True if move is legal """ return not QUEEN in self._board[row_num] def check_cols(self, pos): """ Check columns and return True if move is legal, False otherwise """ legal = True for row in self._board: if row[pos] == QUEEN: legal = False return legal def check_diagonals(self, pos): """ Checks all 4 diagonals from given position in a 2d list separately, to determine if there is a collision with another queen. Returns True if move is legal, else False. """ num_rows, num_cols = len(self._board), len(self._board[0]) row_num, col_num = pos # Lower-right diagonal from (row_num, col_num) i, j = row_num, col_num # This covers case where spot is already occupied. while i < num_rows and j < num_cols: if self._board[i][j] == QUEEN: return False i, j = i + 1, j + 1 # Upper-left diagonal from (row_num, col_num) i, j = row_num - 1, col_num - 1 while i >= 0 and j >= 0: if self._board[i][j] == QUEEN: return False i, j = i - 1, j - 1 # Upper-right diagonal from (row_num, col_num) i, j = row_num - 1, col_num + 1 while i >= 0 and j < num_cols: if self._board[i][j] == QUEEN: return False i, j = i - 1, j + 1 # Lower-left diagonal from (row_num, col_num) i, j = row_num + 1, col_num - 1 while i < num_cols and j >= 0: if self._board[i][j] == QUEEN: return False i, j = i + 1, j - 1 return True def __str__(self): """ String representation of board. """ res = "" for row in self._board: res += str(row) + "\n" return res n_queens_gui.run_gui(NQueens(BOARD_SIZE))
# n_queens_gui.py """ GUI code for the N-Queens Problem using Codeskulptor/SimpleGUICS2Pygame By Robin Andrews - info@compucademy.co.uk """ try: import simplegui collision_sound = simplegui.load_sound("") success_sound = simplegui.load_sound("") except ImportError: import SimpleGUICS2Pygame.simpleguics2pygame as simplegui simplegui.Frame._hide_status = True simplegui.Frame._keep_timers = False collision_sound = simplegui.load_sound("") success_sound = simplegui.load_sound("") queen_image = simplegui.load_image("") queen_image_size = (queen_image.get_width(), queen_image.get_height()) FRAME_SIZE = (400, 400) BOARD_SIZE = 20 # Rows/cols class NQueensGUI: """ GUI for N-Queens game. """ def __init__(self, game): """ Instantiate the GUI for N-Queens game. """ # Game board self._game = game self._size = game.get_size() self._square_size = FRAME_SIZE[0] // self._size # Set up frame self.setup_frame() def setup_frame(self): """ Create GUI frame and add handlers. """ self._frame = simplegui.create_frame("N-Queens Game", FRAME_SIZE[0], FRAME_SIZE[1]) self._frame.set_canvas_background('White') # Set handlers self._frame.set_draw_handler(self.draw) self._frame.set_mouseclick_handler(self.click) self._frame.add_label("Welcome to N-Queens") self._frame.add_label("") # For better spacing. msg = "Current board size: " + str(self._size) self._size_label = self._frame.add_label(msg) # For better spacing. self._frame.add_label("") # For better spacing. self._frame.add_button("Increase board size", self.increase_board_size) self._frame.add_button("Decrease board size", self.decrease_board_size) self._frame.add_label("") # For better spacing. self._frame.add_button("Reset", self.reset) self._frame.add_label("") # For better spacing. self._label = self._frame.add_label("") def increase_board_size(self): """ Resets game with board one size larger. 
""" new_size = self._game.get_size() + 1 self._game.reset_new_size(new_size) self._size = self._game.get_size() self._square_size = FRAME_SIZE[0] // self._size msg = "Current board size: " + str(self._size) self._size_label.set_text(msg) self.reset() def decrease_board_size(self): """ Resets game with board one size larger. """ if self._game.get_size() > 2: new_size = self._game.get_size() - 1 self._game.reset_new_size(new_size) self._size = self._game.get_size() self._square_size = FRAME_SIZE[0] // self._size msg = "Current board size: " + str(self._size) self._size_label.set_text(msg) self.reset() def start(self): """ Start the GUI. """ self._frame.start() def reset(self): """ Reset the board """ self._game.reset_board() self._label.set_text("") def draw(self, canvas): """ Draw handler for GUI. """ board = self._game.get_board() dimension = self._size size = self._square_size # Draw the squares for i in range(dimension): for j in range(dimension): color = "green" if ((i % 2 == 0 and j % 2 == 0) or i % 2 == 1 and j % 2 == 1) else "red" points = [(j * size, i * size), ((j + 1) * size, i * size), ((j + 1) * size, (i + 1) * size), (j * size, (i + 1) * size)] canvas.draw_polygon(points, 1, color, color) if board[i][j] == 1: canvas.draw_image( queen_image, # The image source (queen_image_size[0] // 2, queen_image_size[1] // 2), # Position of the center of the source image queen_image_size, # width and height of source ((j * size) + size // 2, (i * size) + size // 2), # Where the center of the image should be drawn on the canvas (size, size) # Size of how the image should be drawn ) def click(self, pos): """ Toggles queen if legal position. Otherwise just removes queen. 
""" i, j = self.get_grid_from_coords(pos) if self._game.is_queen((i, j)): self._game.remove_queen((i, j)) self._label.set_text("") else: if not self._game.place_queen((i, j)): collision_sound.play() self._label.set_text("Illegal move!") else: self._label.set_text("") if self._game.is_winning_position(): success_sound.play() self._label.set_text("Well done. You have found a solution.") def get_grid_from_coords(self, position): """ Given coordinates on a canvas, gets the indices of the grid. """ pos_x, pos_y = position return (pos_y // self._square_size, # row pos_x // self._square_size) # col def run_gui(game): """ Instantiate and run the GUI """ gui = NQueensGUI(game) gui.start() | https://compucademy.net/eight-queens-puzzle-in-python/ | CC-MAIN-2022-27 | refinedweb | 1,635 | 60.21 |
I previously tried adding __builtin_unreachable, and quit for a few reasons, listed here. The latter two are probably no longer valid, so it’s probably worth looking into again, if the first can be fixed.
Comment by Paul Biggar — 26.12.11 @ 11:00
In theory assertions could be changed to evaluate their condition in release builds. In practice it seems unlikely to happen. No assertion has ever evaluated its condition, so that’d be a substantial change impeding adoption. I did originally think I could mark JS_Assert as not returning, which would have the same effect… but that’s a lie, it can return if you’re in a debugger and you quell the trap. I think MOZ_NOT_REACHED is the only macro really amenable to unreachability. But maybe I’m missing something.
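For concreteness, such a macro might be sketched roughly like this (a sketch with hypothetical names and structure, not the actual definition):

```cpp
// Rough sketch (hypothetical names): fatal in debug builds, an
// unreachability hint to the optimizer in release builds on GCC/Clang.
#include <cassert>
#include <cstdio>
#include <cstdlib>

#ifdef DEBUG
#  define NOT_REACHED_SKETCH(reason)                          \
     do {                                                     \
       std::fprintf(stderr, "Not reached: %s\n", reason);     \
       std::abort();                                          \
     } while (0)
#elif defined(__GNUC__) || defined(__clang__)
#  define NOT_REACHED_SKETCH(reason) __builtin_unreachable()
#else
#  define NOT_REACHED_SKETCH(reason) ((void)0)
#endif

// The compiler can now see that no control path falls off the end.
int SignOf(int x)
{
  if (x > 0)
    return 1;
  if (x < 0)
    return -1;
  if (x == 0)
    return 0;
  NOT_REACHED_SKETCH("every int is positive, negative, or zero");
}
```

Of course, the hint is undefined behavior if it’s ever actually reached in a release build, which is exactly why it only fits a macro that asserts true impossibility.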
Comment by Jeff — 26.12.11 @ 11:25
> NS_ASSERTION is the oldest, but unfortunately it can be ignored, and therefore historically has been.
This is very untrue.
Jesse files bugs about layout assertions, and they often get fixed.
Better still, reftests automatically go orange if they trigger unexpected NS_ASSERTIONS. And unlike fatal assertions, that doesn’t wipe out the results of the rest of the test suite. We should do this for mochitests too but for some reason or another that never eventuated.
Back when I used to dogfood debug builds, I would regularly hit JS_ASSERTs. That’s one of the reasons I stopped dogfooding debug builds. So JS_ASSERT/MOZ_ASSERT are being ignored; we ignore them by not using debug builds.
I have never appreciated the point of view that non-fatal assertions are worthless and fatal assertions are always better.
Comment by Robert O'Callahan — 26.12.11 @ 21:27
I believe I’ve seen bugs in the last few months where dbaron, in reviewing style system patches, has mentioned that the people writing them need to fix the style system assertions they’ve triggered and not noticed. I could be mistaken; I’m doing a lot of skimming of my bugmail these days, especially outside JS.
NS_ASSERTION isn’t always ignored, and yes, Jesse is awesome. But it is ignored often enough that the extra discretion doesn’t strike me as an advantage.
I could as easily argue that fail-fast is better for not hoarding tinderbox time when a changeset has been weighed and found wanting in initial results. In truth, I think the merits and demerits cut about the same both ways, so I don’t see early crashes on tinderbox as categorically worse than delayed, maybe-complete results. (Especially since most people’s patches can and should be smokescreened against a small subset of Mochitests, often weeding out 99% of problems, before testing on tinderboxen.)
I have never appreciated the converse view. 🙂 I think we’re going to have to agree to disagree on this point. But I think most people are happier with assertions that break like assert does than the other way.

As final data points, WebKit’s and V8’s ASSERT macros are fatal.
Comment by Jeff — 27.12.11 @ 00:03
Why does the clang version of MOZ_STATIC_ASSERT use reason but the others all use #cond?
The last time I hit an assertion it was a false positive. If it had been a fatal assertion, I would have been even more annoyed than I already was.
I also occasionally appear to hit a false positive assertion in JS in an obscure bug requiring a computer that is so slow that the slow script timeout triggers while trying to show the slow script timeout dialog. (I actually get about four of those dialogs in all, but some of them may be caused by the slow script timeout triggering because of the time it takes me to use the debugger to ignore the assertion.)
Comment by Neil Rashbrook — 27.12.11 @ 03:33
> But I think most people are happier with assertions that break like assert does than the other way.
As long as you give me something that interrupts as NS_ASSERTION currently does without shooting the build down, I am with you. I am currently very eager to use NS_ASSERTION as a mark that I personally care when something goes wrong. If I look at WebKit’s table code, the asserts there mainly prevent things that would crash anyway. A lot of the NS_ASSERTIONs in layout will not be followed by a crash but by wrong rendering, because basic assumptions were violated. I like to get the disruptive information, but I would not want to tear the browser down because a table is in the wrong place. This would be NS_WARN… but those are so frequent that they got silenced (I see 19 of those when my debug build starts, and obviously some of them are very old and nobody cares).
Comment by Bernd — 27.12.11 @ 07:48
The clang version does the right thing; the others do the wrong thing. Zack noticed this and filed a bug; I’ll fix it today.
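To illustrate the inconsistency with a sketch (hypothetical macro names, assuming a C++11 static_assert underneath): the caller’s reason string should be what the compiler reports on failure, not the stringified condition.

```cpp
// Sketch of the inconsistency (hypothetical names, C++11 static_assert).
#include <cassert>
#include <climits>

// Right: the caller's reason becomes the diagnostic message.
#define STATIC_ASSERT_RIGHT(cond, reason) static_assert(cond, reason)

// Wrong: the stringified condition is the message; the reason is dropped.
#define STATIC_ASSERT_WRONG(cond, reason) static_assert(cond, #cond)

// On failure, the first reports "int must be at least 16 bits wide";
// the second reports only "CHAR_BIT * sizeof(int) >= 16".
STATIC_ASSERT_RIGHT(CHAR_BIT * sizeof(int) >= 16,
                    "int must be at least 16 bits wide");
STATIC_ASSERT_WRONG(CHAR_BIT * sizeof(int) >= 16,
                    "int must be at least 16 bits wide");
```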
What’s the bug for that JS assertion? If it’s wrong, as it sounds like it is, it shouldn’t be there.
Comment by Jeff — 27.12.11 @ 09:55
Sounds like you want something log-gy, then, not an assertion. We have various logging stuff already, so I guess you’re concerned that it’s too obtuse to work with, compared to just a simple do-this, spew-to-console. Perhaps we should add logging stuff to mfbt. I don’t know much about logging APIs, so I should probably not be the person to design and implement one.
I often add assertions to my code that I know won’t work, while I’m writing code. Then I either fix them before landing, or I remove them and indicate any remaining deficiency in a comment, and usually a bug too. This tends to work pretty well for me.
Comment by Jeff — 27.12.11 @ 10:04
That alone doesn’t mean anything. David knows the assertion will fire in some situation, but maybe the person writing the code hasn’t hit that path yet.
Which is better: identifying 10 test failures in a single Tinderbox run and fixing all of them, or doing 10 Tinderbox runs fixing one failure at a time? The former of course, both for Tinderbox/tryserver usage and more importantly for the developer.
In practice I run the patch(es) through Tinderbox, either try or on inbound, and if there are failures I know which subset of tests I need to rerun locally after I think I might have fixed the failures. Unless test failures are fatal.
WebKit layout code hardly has any assertions compared to ours. That’s not good for them.
Bernd is exactly right. Fatal assertions probably make sense in the JS engine where almost any kind of bug will lead to a crash anyway. They often do not make sense in layout, where many bugs result in nothing more harmful than incorrect page rendering.
Not at all. NS_ASSERTION has clear semantics: when it fires, we have a bug. None of our log levels mean that.
Comment by Robert O'Callahan — 27.12.11 @ 14:53
See of course.
Comment by Robert O'Callahan — 27.12.11 @ 14:59
I couldn’t resist:
Comment by Robert O'Callahan — 27.12.11 @ 15:53
So, to double-check, these are fatal-in-debug, no-op-in-release assertions? Not arguing for whether it should be fatal (at the moment), but due to the history of NS_ASSERTION these things should be spelled out very clearly… and somehow this post manages to not be explicit. (“execution halts in a debuggable way” is, sadly, too easily lost in the other text IMHO.)
I believe – I wasn’t around at the time – that there used to be something that was NS_ASSERTION-without-the-message, and they were mass-converted to NS_ASSERTION-with-the-message at some point; it seems reasonable to guess, then, that making sure the new things have a message would be a good idea. There seem to still be a few examples lying around in comm-central.
Comment by Mook — 27.12.11 @ 18:12
It’s an assumption that’s doubtless wrong at least sometimes, but I assume people have run at least a fair subset of the relevant tests when posting patches for review, and the tests either pass or any remaining failures are noted so that I can evaluate without respect to those known problems. To do otherwise is to potentially waste the reviewer’s time. Everyone does that on occasion, sure. But there’s a difference between mistakenness and inattentiveness to potential test failure.
I think this is the exceptional case, made more so by proper preliminary local testing to smoke out baseline test failures.
What is your position on compile errors halting the build? I’ve almost never encountered so many failures piling atop each other in a single tinderbox run, except a couple times in patches that fell afoul of Windows-specific compile errors. But I wouldn’t change the fatality of errors (compile or runtime) just to suit those edge cases.
Yet this doesn’t explain why we use fatal assertions even when execution might well proceed without notable problem should the assertion fail. Indeed, in rare instances we use a LOCAL_ASSERT macro which includes backstop code (return NULL, leading to script execution halting without throwing an exception) for the case where the assertion fails in a release build, because the corresponding code is so hairy. We use fatal assertions even when there’s no particular dangerous consequence to doing so, but merely as matter-of-course verification of an assumption.
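A backstop-style macro in that spirit might be sketched like so (a hypothetical simplification, not SpiderMonkey’s actual LOCAL_ASSERT; the function name is made up for illustration):

```cpp
// Hypothetical sketch: fatal in debug builds; in release builds the
// backstop statement runs instead of continuing on a falsehood.
#include <cassert>
#include <cstddef>

#define LOCAL_ASSERT_SKETCH(cond, backstop)                  \
  do {                                                       \
    assert(cond);      /* fatal in debug builds */           \
    if (!(cond)) {     /* backstop in release builds */      \
      backstop;                                              \
    }                                                        \
  } while (0)

// If the invariant fails in a release build, halt the operation by
// returning NULL rather than proceeding into the hairy code.
const char* NonEmptyName(const char* name)
{
  LOCAL_ASSERT_SKETCH(name != NULL && name[0] != '\0', return NULL);
  return name;
}
```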
There’s also substantial value to not requiring the developer to decide whether his assertion is one verifying an important requirement or one uncovering a mostly harmless error.
A new log level, or something, might be a fair alternative. But I worry about such a thing being overused when the test is well understood, and that where a fatal failure would get attention, a logging message would simply be ignored.
Comment by Jeff — 28.12.11 @ 08:15
Yes, fatal in debug, no-op in release. I don’t actually intend this post to be long-term documentation of all this; that’s what the comments in Assertions.h are for. And I believe those comments clearly spell out the semantics. If you disagree, I’m happy to amend them in whatever way makes them clear to you, of course. But this post is mostly announcement, with few enough extra details so people who won’t immediately look at the header will know what’s available to use.
Comment by Jeff — 28.12.11 @ 08:20
Hm, I see you noted the compiler point in your separate post, which I’m reading now. 🙂
Comment by Jeff — 28.12.11 @ 08:24
Sometimes assertions can give you compiler warnings in optimized builds due to unused variables, e.g.:
int x = widget->frob();
MOZ_ASSERT(x != 0);
where x is otherwise unused. When MOZ_ASSERT expands to a nop in optimized builds, this may trigger a warning, depending on your compiler flags. One way to avoid this is to use this clever sizeof trick:
#ifdef DEBUG
# define MOZ_ASSERT(expr_) /* as before */
#else
# define MOZ_ASSERT(expr_) ((void)sizeof(expr_))
#endif /* DEBUG */
The
sizeofensures that the code doesn’t get evaluated at runtime, but the compiler will now believe that any variable that appears in the expression is used.
The only potential problem I can think of is C99’s VLAs, where taking the size of a VLA is a runtime operation, not a compile-time one. But you would never assert on a VLA (it would always decay into a non-null pointer). I also don’t know if C++11 added any new features that could potentially make
sizeofa runtime operation.
Comment by Adam Rosenfield — 30.12.11 @ 10:58
The
sizeoftrick is clever. Mozilla’s assertions quite often depend upon
DEBUG-only variables, however, so using
sizeofwouldn’t work for us. One trick we’ve recently started using is this:
template <typename T>
class DebugOnly
{
#ifdef DEBUG
T t;
#endif
public:
#ifdef DEBUG
T& operator=(const T& t) { this->t = t; return *this; }
operator T() { return t; }
// …and whatever other operators become needed
#else
T& operator=(const T& t) { }
// …other corresponding no-op versions
#endif
};
void foo(Widget* widget)
{
DebugOnly<int> x = widget->frob();
MOZ_ASSERT(x != 0);
}
Because this only elides the store for the variable and doesn’t necessarily elide computation of the right-hand side, I am somewhat ambivalent about its preferability to just enclosing the entire thing in an
#ifdef. EIBTI and all that. But if used carefully, it does look a little cleaner.
…
sizeofis a runtime operation on C99’s variable-length arrays? Ugh. Mozilla’s pretty much never used them as C++ has (or we can create) better, and safer (against stack overflow) alternatives, thankfully. C++11 doesn’t include variable-length arrays, and its
sizeofremains a compile-time construct.
Comment by Jeff — 31.12.11 @ 11:53 | http://whereswalden.com/2011/12/26/introducing-mozillaassertions-h-to-mfbt/ | CC-MAIN-2017-22 | refinedweb | 2,174 | 63.8 |
Considering which approach would be better... we are building a SharePoint farm on internal network private namespace using SharePoint 2013 with Claims, customer has an ADFS 2.0 server with AD on the internal network.
Customer has a Perimeter network that external users must go through to reach internal services.
Customer requirements are that AD and ADFS 2.0 resources must not be located in the Perimeter network, Proxy access is ok.
Objectives:
1.] SSO for external users (login one time) can hit site collections and links in one site collection referencing another site collection, we don’t want the double prompts like using NTLM and TMG from the outside.
2.] Internal users on the corporate LAN can access the SharePoint 2013 with their domain joined machines, seamlessly, that is, they only login once locally to their machine in the morning and then can hit the SharePoint 2013 resources they have permission
to, and don’t get prompted throughout the day.
3.] Both internal and external users hit the same SharePoint site, (identical URLs internal and external) and users outside and inside need to collaborate in the same site collection.
4.] Would like the architecture to support in the future a federated trust with a Partner who is using claims.
So that the remote security objects can be leveraged, rather than duplicating the accounts in the corporate directory.
Considerations
Option 1; Internal users hit SharePoint 2013 directly on the internal LAN, remote users come in through Perimeter and UAG.
- Configure the internal SharePoint 2013 resources using Claims and the internal ADFS 2.0 server.
In doing so, would we need Kerberos configured for the internal users to have seamless access on the internal SharePoint 2013 claims web applications?
- Configure UAG for remote users (corporate users with AD accounts) one time sign in.
Publish both the ADFS 2.0 server and the SharePoint 2013 site from the internal network to the DMZ?
Option 2; all users access SharePoint 2013 through the Perimeter network
- Configure Internal SharePoint 2013 resources using Claims and ADFS 2.0 on the internal network but do not permit client access directly from the internal network, force all traffic to access resources through the DMZ.
- Configure UAG for both external and internal users access is from the UAG server in the DMZ.
Same question would this approach require us to publish both the ADFS 2.0 server and the SharePoint 2013 site from the internal network to the DMZ?
Seeking input from Architects that have had some experience with similar access requirements, should I be considering other approaches?
Microsoft is conducting an online survey to understand your opinion of the Technet Web site. If you choose to participate, the online survey will be presented to you when you leave the Technet Web site.
Would you like to participate? | https://social.technet.microsoft.com/Forums/forefront/en-US/f960dff2-1951-4d26-bebb-c65160f9e64f/uag-sso-with-claims-and-sharepoint-2013?forum=forefrontedgeiag | CC-MAIN-2015-27 | refinedweb | 470 | 61.87 |
EMS Data Import 2005 for SQL Server 3.0
Sponsored Links
EMS Data Import 2005 for SQL Server 3.0 Ranking & Summary
RankingClick at the star to rank
Ranking Level
User Review: 10 (1 times)
File size: 4065K
Platform: Windows 9X/ME/NT/2K/2003/XP/Vista
License: Shareware
Price: $95.00
Downloads: 578
Date added: 2007-03-27
Publisher: EMS Database Management Solutions, Inc
EMS Data Import 2005 for SQL Server 3.0 description
EMS Data Import 2005 for SQL Server 3.0. Software Development
EMS Data Import 2005 for SQL Server 3.0 Screenshot
EMS Data Import 2005 for SQL Server 3.0 Keywords
Bookmark EMS Data Import 2005 for SQL Server 3.0
EMS Data Import 2005 for SQL Server 3.0 Copyright
WareSeeker.com do not provide cracks, serial numbers etc for EMS Data Import 2005 for SQL Server 3.0. Any sharing links from rapidshare.com, yousendit.com or megaupload.com are also prohibited.
Featured Software
Want to place your software product here?
Please contact us for consideration.
Contact WareSeeker.com
Version History
Related Software
import data from MS Access to Microsoft QL server Free Download
mport your data quickly from MS Excel 97-2007, MS Access, DBF, XML, TXT Free Download
EMS Data Export 2005 for SQL Server helps you export data from MS SQL databases to any of 15 available formats Free Download
Export your data to any of 15 most popular data formats, including MS Access, MS Excel, MS Word, PDF, HTML and more. Free Download
EMS Data Import for Oracle is a powerful tool to import your data quickly from MS Access, MS Excel, DBF, XML, TXT and CSV files to Oracle database tables. Free Download
EMS Data Pump for SQL Server is a powerful. Free Download
EMS Data Import for MySQL is a powerful tool to import your data quickly from MS Excel, MS Access, DBF, TXT, CSV and XML files to MySQL tables. Free Download
DB Comparer for SQL Server is an excellent tool for database comparison and synchronization. It allows you to view all the differences in compared database objects and execute an automatically generated script to eliminate all or selected differences. Having EMS DB Comparer for SQL Server you can work with several projects at once, define comparison parameters, print difference reports, and alter modification scripts. Free Download
Latest Software
- EMS Data Generator 2005 for PostgreSQL 2.3
- EMS Data Generator 2005 for InterBase/Firebird 2.3
- EMS Data Generator 2005 for MySQL 2.3
- EMS Data Generator 2005 for DB2 2.3
- EMS SQL Query 2007 for SQL Server 3.0
- Export Table to Text for SQL Server 1.05.00
- Publish Table to Word for SQL Server 1.05.00
- EMS Data Export 2007 for InterBase/Firebird 3.1.0.1
Popular Software
Favourite Software | http://wareseeker.com/Software-Development/ems-data-import-2005-for-sql-server-3.0.zip/43788 | CC-MAIN-2016-44 | refinedweb | 470 | 57.77 |
Chapter 2
The Power and Peril of Pictures
Visualizations
While we have been able to make several interesting observations about data by simply running our eyes down the numbers in a table, our task would have been much harder had the tables been larger. In data science, a picture can be worth a thousand numbers.
We saw an example of this earlier in the text, when we examined John Snow's map of cholera deaths in London in 1854.
Snow showed each death as a black mark at the location where the death occurred. In doing so, he plotted three variables — the number of deaths, as well as two coordinates for each location — in a single graph without any ornaments or flourishes. The simplicity of his presentation focuses the viewer's attention on his main point, which was that the deaths were centered around the Broad Street pump.
In 1869, a French civil engineer named Charles Joseph Minard created what is still considered one of the greatest graphs of all time. It shows the decimation of Napoleon's army during its retreat from Moscow. In 1812, Napoleon had set out to conquer Russia, with over 400,000 men in his army. They did reach Moscow, but were plagued by losses along the way; the Russian army kept retreating farther and farther into Russia, deliberately burning fields and destroying villages as it retreated. This left the French army without food or shelter as the brutal Russian winter began to set in. The French army turned back without a decisive victory in Moscow. The weather got colder, and more men died. Only 10,000 returned.
The graph is drawn over a map of eastern Europe. It starts at the Polish-Russian border at the left end. The light brown band represents Napoleon's army marching towards Moscow, and the black band represents the army returning. At each point of the graph, the width of the band is proportional to the number of soldiers in the army. At the bottom of the graph, Minard includes the temperatures on the return journey.
Notice how narrow the black band becomes as the army heads back. The crossing of the Berezina river was particularly devastating; can you spot it on the graph?
The graph is remarkable for its simplicity and power. In a single graph, Minard shows six variables:
- the number of soldiers
- the direction of the march
- the two coordinates of location
- the temperature on the return journey
- the location on specific dates in November and December
Edward Tufte, Professor at Yale and one of the world's experts on visualizing quantitative information, says that Minard's graph is "probably the best statistical graphic ever drawn."
The technology of our times allows us to include animation and color. Used judiciously, without excess, these can be extremely informative, as in this animation by Gapminder.org of annual carbon dioxide emissions in the world over the past two centuries.
A caption says, "Click Play to see how USA becomes the largest emitter of CO2 from 1900 onwards." The graph of total emissions shows that China has higher emissions than the United States. However, in the graph of per capita emissions, the US is higher than China, because China's population is much larger than that of the United States.
Technology can be a help as well as a hindrance. Sometimes, the ability to create a fancy picture leads to a lack of clarity in what is being displayed. Inaccurate representation of numerical information, in particular, can lead to misleading messages.
Here, from the Washington Post in the early 1980s, is a graph that attempts to compare the earnings of doctors with the earnings of other professionals over a few decades. Do we really need to see two heads (one with a stethoscope) on each bar? Tufte coined the term "chartjunk" for such unnecessary embellishments. He also deplores the "low data-to-ink ratio" which this graph unfortunately possesses.
Most importantly, the horizontal axis of the graph is not drawn to scale. This has a significant effect on the shape of the bar graphs. When drawn to scale and shorn of decoration, the graphs reveal trends that are quite different from the apparently linear growth in the original. The elegant graph below is due to Ross Ihaka, one of the originators of the statistical system R.
Here is a graphic from Statistics Canada, a website produced by the Government of Canada.
The graphs represent the distribution of after-tax income, in Canadian dollars, of families in Canada. The blue graph uses figures from the 2006 Census while the green graph shows estimates for 2010 based on the National Household Survey.
Based on what you have learned about visualization thus far, what is your assessment of this graphic?
Bar Charts and Histograms
Categorical Data and Bar Charts
Now that we have examined several graphics produced by others, it is time to produce some of our own. We will start with bar charts, a type of graph with which you might already be familiar. A bar chart shows the distribution of a categorical variable, that is, a variable whose values are categories. In human populations, examples of categorical variables include gender, ethnicity, marital status, country of citizenship, and so on.
A bar chart consists of a sequence of rectangular bars, one corresponding to each category. The length of each bar is proportional to the number of entries in the corresponding category.
We will start by drawing a bar chart of the genres of a set of movies. The data are in a table called tableA, which looks like this:
tableA
... (4 rows omitted)
The first column of tableA is labeled GENRE. It consists of the names of various genres of movie. More formally, it contains the names of the categories of the genre variable. The second column is labeled COUNT, and contains the number of movies in each genre.
The method barh can be applied to a table with two columns as above, to produce a bar chart consisting of horizontal bars. The horizontal orientation makes it easier to label the bars. The argument that barh requires is the name of the column consisting of the categories. The length of each bar is the count in that category.
Let us apply barh to tableA, with the GENRE column as its argument.
tableA.barh('GENRE')
A different table called tableB also consists of movie genre data, in columns labeled GENRE and COUNT just as in tableA. Here is a bar chart of the data in tableB.
tableB.barh('GENRE')
At first glance, it appears that the two bar charts are quite different. One of them shows a distribution that looks like a smooth curve, while the other is quite irregular. However, closer inspection reveals that the two bar charts are in fact representations of exactly the same set of data. The only difference is in the order in which the categories appear. As we have seen before with graphs that involve two axes, it is important to study both axes carefully before making conclusions.
Unlike numbers, categories do not have a unique ordering relative to each other. The user determines the order in which they appear. In tableA, the categories are arranged so that the bars appear in decreasing order of length. In tableB, the categories are listed in alphabetical order. You could randomize the order; the data would tell the same story, though some orderings might make the story a little easier to read.
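Since category order is purely a presentation choice, it can be set in plain Python before plotting. A minimal sketch of the two orderings discussed above (the genre names and counts here are made up for illustration, not taken from the actual movie data):

```python
# Hypothetical (genre, count) pairs.
genres = [("Comedy", 20), ("Adventure", 35), ("Drama", 15), ("Action", 30)]

# Alphabetical order, as in tableB.
alphabetical = sorted(genres)

# Decreasing order of count, as in tableA.
by_count = sorted(genres, key=lambda pair: pair[1], reverse=True)

print(alphabetical)
print(by_count)
```

Either ordering could then be fed to a plotting method; the underlying distribution is unchanged.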
The data used in the bar charts above represent the market share for each genre of Hollywood movie, from 1995 to 2015. The source is the website The Numbers, subtitled "where data and the movie business meet."
Quantitative Data and Histograms
Many of the variables that data scientists study are quantitative. These are measurements of numerical variables such as income, height, age, and so on. In keeping with the movie theme of this section, we will study the amount of money grossed by movies in recent decades. Our source is the Internet Movie Database. The IMDb is an online database that consists of a vast repository of information about movies, television shows, video games, and so on.
The table imdb consists of IMDb's data on U.S.A.'s top grossing movies of all time. The first column contains the rank of the movie; Avatar has the top rank, with a box office gross amount of more than 760 million dollars in the United States. The second column contains the name of the movie; the third contains the U.S. box office gross in dollars; and the fourth contains the same gross amount, in millions of dollars.
There are 627 movies on the list. Here are the top ten.
imdb = Table.read_table('imdb.csv')
imdb
... (617 rows omitted)
Three-digit numbers (even with a few decimal places) are easier to work with than nine-digit numbers. So we will work with a smaller table called mill, created by selecting just the fourth column of imdb.
The method hist applied to one numerical column such as mill produces a figure called a histogram that looks very much like a bar chart. In this section, we will examine histograms and their properties.
mill = imdb.select(['in_millions'])
mill.hist()
The figure above shows the distribution of the amounts grossed, in millions of dollars. The amounts have been grouped into contiguous intervals called bins. Although in this dataset no movie grossed an amount that is exactly on the edge between two bins, it is worth noting that hist has an endpoint convention: bins include the data at their left endpoint, but not the data at their right endpoint. Sometimes, adjustments have to be made in the first or last bin, to ensure that the smallest and largest values of the variable are included. You saw an example of such an adjustment in the Census data used in the Tables section, where an age of "100" years actually meant "100 years old or older."
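The endpoint convention can be checked directly with NumPy's histogram function, which matplotlib-based histograms (like the ones drawn here) rely on under the hood. One wrinkle worth knowing: the last bin includes its right endpoint as well.

```python
import numpy as np

# Values that sit exactly on bin edges.
data = [1, 2, 2, 3]
counts, edges = np.histogram(data, bins=[1, 2, 3])

# Bin [1, 2) gets the 1; bin [2, 3] gets both 2s and, because it is
# the LAST bin, also the 3 on its right edge.
print(counts)  # [1 3]
```

So a value landing on an interior edge is counted in the bin to its right, exactly as described above.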
We can see that there are 10 bins (some bars are so low that they are hard to see), and that they all have the same width. We can also see that the list contains no movie that grossed less than 100 million dollars; that is because we are considering only the top grossing movies of all time. It is a little harder to see exactly where the edges of the bins are placed. For example, it is not clear exactly where the value 200 lies on the horizontal axis, and so it is hard to judge exactly where the first bar ends and the second begins.
The optional argument bins can be used with hist to specify the edges of the bars. It must consist of a sequence of numbers that includes the left end of the first bar and the right end of the last bar. As the highest gross amount is somewhat over 760 on the horizontal scale, we will start by setting bins to be the array consisting of the numbers 100, 150, 200, 250, and so on, ending with 800.
mill.hist(bins=np.arange(100,810,50))
This figure is easier to read. On the horizontal axis, the labels 100, 200, 300, and so on are centered at the corresponding values. The number of movies that grossed between 100 million and 150 million dollars appears to be around 340; the number that grossed between 150 million and 200 million dollars appears to be around 125; and so on.
A large majority of the movies grossed between 100 million and 250 million dollars. A very small number grossed more than 600 million dollars. This results in the figure being "skewed to the right," or, less formally, having "a long right-hand tail." Distributions of variables like income or rent often have this kind of shape.
The exact counts are given below. The entries of 250 million dollars or more have been collected in a single bin. The total of the counts is 627, which is the number of movies on the list.
bins, counts = ["[100, 150)", "[150, 200)", "[200, 250)", "[250, 800)"], [338, 129, 68, 92]
bincounts = Table([bins, counts], ['bins', 'counts'])
bincounts
What is wrong with this picture?
Let us try to redraw the histogram with just four bins: [100, 150), [150, 200), [200, 250), and [250, 800). As we saw in the table of counts, the [250, 800) bin contains 92 movies.
mill.hist(bins=[100, 150, 200, 250, 800])
Even though the method used is called hist, the figure above is NOT A HISTOGRAM. It gives the impression that there are many more movies in the 250-800 bin than in the 100-150 bin, and indeed more than in the entire range 100-250. The height of each bar is simply the number of movies in the bin, plotted without accounting for the difference in the widths of the bins.
So what is a histogram?
The figure above shows that what the eye perceives as "big" is area, not just height. This is particularly important when the bins are of different widths.
That is why a histogram has two defining properties:
- The bins are contiguous (though some might be empty) and are drawn to scale.
- The area of each bar is proportional to the number of entries in the bin.
Property 2 is the key to drawing a histogram, and is usually achieved as follows:

$$ \mbox{area of bar} ~=~ \mbox{proportion of entries in bin} $$

When drawn using this method, the histogram is said to be drawn on the density scale, and the total area of the bars is equal to 1.

To calculate the height of each bar, use the fact that the bar is a rectangle:

$$ \mbox{area of bar} = \mbox{height of bar} \times \mbox{width of bin} $$

and so

$$ \mbox{height of bar} ~=~ \frac{\mbox{area of bar}}{\mbox{width of bin}} ~=~ \frac{\mbox{proportion of entries in bin}}{\mbox{width of bin}} $$
For hist to draw a histogram on the density scale, the Boolean option normed must have the value True. You can think of "normed" as shorthand for "follows the norm of the density scale."
mill.hist(bins=[100, 150, 200, 250, 800], normed=True)
This is a reasonable representation of the data, though of course some detail has been lost. The level of detail in a histogram depends on the level of detail in the data as well as on the choices made by the user. Before we explore this idea further, let us first check that the numbers on the vertical axis above are consistent with the heights that we would calculate.
There are 129 movies in the [150, 200) bin. The proportion of movies in the bin is therefore 129/627, and the width of the bin is 200-150. So the height of the bar above that bin should be

$$ \frac{129/627}{200-150} ~=~ 0.0041148325358851675 $$
That agrees with the height of the bar as shown in the figure. You might want to check that the other heights also agree with what you would calculate.
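The same arithmetic can be carried out for all four bins at once. A short check, using the bin edges and counts from the table above:

```python
# Bin edges and counts from the table of movie gross amounts.
edges = [100, 150, 200, 250, 800]
counts = [338, 129, 68, 92]
total = sum(counts)  # 627 movies on the list

# Height of each bar on the density scale: proportion divided by width.
heights = []
for left, right, count in zip(edges, edges[1:], counts):
    height = (count / total) / (right - left)
    heights.append(height)
    print(f"[{left}, {right}): height {height:.7f}")
```

The heights should match the bars in the figure, and the total area (height times width, summed over the bins) should come out to exactly 1.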
The level of detail, and the flat tops of the bars
Take another look at the [150, 200) bin in the figure above. The flat top of the bar, at the level 0.004, hides the fact that the movies are somewhat unevenly distributed across the bin. To see this, let us split the [150, 200) bin into five narrower bins of width 10 million dollars each:
mill.hist(bins=[100, 150, 160, 170, 180, 190, 200, 250, 800], normed=True)
Some of the skinny bars are taller than 0.004 and others are shorter. By putting a flat top at 0.004 over the whole bin, we are deciding to ignore the finer detail and use the flat level as a rough approximation. Often, though not always, this is sufficient for understanding the general shape of the distribution.
Notice that because we have the entire dataset, we can draw the histogram in as fine a level of detail as the data and our patience will allow. However, if you are looking at a histogram in a book or on a website, and you don't have access to the underlying dataset, then it becomes important to have a clear understanding of the "rough approximation" of the flat tops.
The density scale
The height of each bar is a proportion divided by a bin width. Thus, for this dataset, the values on the vertical axis are "proportions per million dollars." To understand this better, look again at the [150, 200) bin. The bin is 50 million dollars wide. So we can think of it as consisting of 50 narrow bins that are each 1 million dollars wide. The bar's height of roughly "0.004 per million dollars" means that in each of those 50 skinny bins of width 1 million dollars, the proportion of movies is roughly 0.004.
Thus the height of a histogram bar is a proportion per unit on the horizontal axis, and can be thought of as the density of entries per unit width.
imdb.select(['in_millions']).hist(bins=[100, 150, 200, 250, 800], normed=True)
Density Q&A
Look again at the histogram, and this time compare the [200, 250) bin with the [250, 800) bin.
Q: Which has more movies in it?
A: The [250, 800) bin. It has 92 movies, compared with 68 movies in the [200, 250) bin.
Q: Then why is the [250, 800) bar shorter than the [200, 250) bar?
A: Because height represents density per unit width, not the number of movies in the bin. The [250, 800) bin has more movies than the [200, 250) bin, but it is also a whole lot wider. So the density is much lower.
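The arithmetic behind that answer: the [250, 800) bin holds more movies, but it is eleven times as wide, so its density ends up far lower.

```python
total = 627  # movies on the list

# Density = proportion per million dollars.
density_200_250 = (68 / total) / (250 - 200)   # roughly 0.00217
density_250_800 = (92 / total) / (800 - 250)   # roughly 0.00027

print(density_200_250 > density_250_800)  # True
```

More movies, much more width, lower density: that is why the wider bar is shorter.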
Bar chart or histogram?
Bar charts display the distributions of categorical variables. All the bars in a bar chart have the same width. The lengths (or heights, if the bars are drawn vertically) of the bars are proportional to the number of entries.
Histograms display the distributions of quantitative variables. The bars can have different widths. The areas of the bars are proportional to the number of entries.
Multiple bar charts and histograms
In all the examples in this section, we have drawn a single bar chart or a single histogram. However, if a data table contains several columns, then barh and hist can be used to draw several graphs at once. We will cover this feature in a later section.
Functions
We are building up a useful inventory of techniques for identifying patterns and themes in a data set. Sorting and filtering rows of a table can focus our attention. Bar charts and histograms can summarize data visually to convey broad numerical patterns. The next approach to analysis we will consider involves grouping rows of a table by arbitrary criteria. To do so, we will explore two core features of the Python programming language: function definition and conditional statements.
We have used functions extensively already in this text, but never defined a function of our own. The purpose of defining a function is to give a name to a computational process that may be applied multiple times. Although there are many situations in computing that require repeating a computational process many times, the most natural one in our setting is to perform the same process on each row of a table.
A function is defined in Python using a def statement, which is a multi-line statement that begins with a header line giving the name of the function and names for the arguments of the function. The rest of the def statement, called the body, must be indented below the header.
A function expresses a relationship between its inputs (called arguments) and its outputs (called return values). The number of arguments required to call a function is the number of names that appear within parentheses in the def statement header. The values that are returned depend on the body. Whenever a function is called, its body is executed. Whenever a return statement within the body is executed, the call to the function completes and the value of the expression directly following return is returned.
The definition of the percent function below multiplies a number by 100 and rounds the result to two decimal places.
def percent(x):
    return round(100*x, 2)
The primary difference between defining a percent function and simply evaluating its return expression round(100*x, 2) is that when a function is defined, its return expression is not immediately evaluated. It cannot be, because the value for x is not yet defined. Instead, the return expression is evaluated whenever this percent function is called by placing parentheses after the name percent and placing an expression to compute its argument in parentheses.
percent(1/6)
16.67
percent(1/6000)
0.02
percent(1/60000)
0.0
In the expression above, called a call expression, the value of 1/6 is computed and then passed as the argument named x to the percent function. When the percent function is called in this way, its body is executed. The body of percent has only a single line: return round(100*x, 2). Executing this return statement completes execution of the percent function's body and gives the value of the call expression percent(1/6).
The same result is computed by passing a named value as an argument. The percent function does not know or care how its argument is computed; its only job is to execute its own body using the argument names that appear in its header.
sixth = 1/6
percent(sixth)
16.67
Conditional Statements. The body of a function can have more than one line and more than one return statement. A conditional statement is a multi-line statement that allows Python to choose among different alternatives based on the truth value of an expression. While conditional statements can appear anywhere, they appear most often within the body of a function in order to express alternative behavior depending on argument values.
A conditional statement always begins with an if header, which is a single line followed by an indented body. The body is only executed if the expression directly following if (called the if expression) evaluates to a true value. If the if expression evaluates to a false value, then execution of the function body continues.
For example, we can improve our percent function so that it doesn't round very small numbers to zero so readily. The behavior of percent(1/6) is unchanged, but percent(1/60000) provides a more useful result.
def percent(x):
    if x < 0.00005:
        return 100 * x
    return round(100 * x, 2)
percent(1/6)
16.67
percent(1/6000)
0.02
percent(1/60000)
0.0016666666666666668
A conditional statement can also have multiple clauses with multiple bodies, and only one of those bodies can ever be executed. The general format of a multi-clause conditional statement appears below.
if <if expression>:
    <if body>
elif <elif expression 0>:
    <elif body 0>
elif <elif expression 1>:
    <elif body 1>
...
else:
    <else body>
There is always exactly one if clause, but there can be any number of elif clauses. Python will evaluate the if and elif expressions in the headers in order until one is found that is a true value, then execute the corresponding body. The else clause is optional. When an else header is provided, its else body is executed only if none of the header expressions of the previous clauses are true. The else clause must always come at the end (or not at all).
Let us continue to refine our percent function. Perhaps for some analysis, any value below $10^{-8}$ should be considered close enough to 0 that it can be ignored. The following function definition handles this case as well.
def percent(x):
    if x < 1e-8:
        return 0.0
    elif x < 0.00005:
        return 100 * x
    else:
        return round(100 * x, 2)
percent(1/6)
16.67
percent(1/6000)
0.02
percent(1/60000)
0.0016666666666666668
percent(1/60000000000)
0.0
A well-composed function has a name that evokes its behavior, as well as a docstring — a description of its behavior and expectations about its arguments. The docstring can also show example calls to the function, where the call is preceded by >>>.
A docstring can be any string that immediately follows the header line of a def statement. Docstrings are typically defined using triple quotation marks at the start and end, which allows the string to span multiple lines. The first line is conventionally a complete but short description of the function, while following lines provide further guidance to future users of the function.
A more complete definition of percent that includes a docstring appears below.
def percent(x):
    """Convert x to a percentage by multiplying by 100.

    Percentages are conventionally rounded to two decimal places,
    but precision is retained for any x above 1e-8 that would
    otherwise round to 0.

    >>> percent(1/6)
    16.67
    >>> percent(1/6000)
    0.02
    >>> percent(1/60000)
    0.0016666666666666668
    >>> percent(1/60000000000)
    0.0
    """
    if x < 1e-8:
        return 0.0
    elif x < 0.00005:
        return 100 * x
    else:
        return round(100 * x, 2)
Functions and Tables
imdb = Table.read_table('imdb.csv')
Functions can be used to compute new columns in a table based on existing column values. For example, the IMDb dataset of top grossing movies placed the year of each movie in the title. To categorize the data by year, we first must separate the title from the date.
Slicing. Titles are strings, and a string is a sequence. Any sequence can be sliced, creating a new sequence of the same type that has only a range of the original elements. To slice a sequence, place two indices separated by a colon within square brackets. While slicing has a new syntax, slices have the same behavior as the arguments to
np.arange: the first number is an inclusive lower bound and the second argument is an exclusive upper bound.
title = "Terminator 2: Judgment Day (1991)" title[3:10]
'minator'
Negative indices in a slice (or in element selection) count from the end of the sequence. Since years of movies in this data set always contain exactly 4 numbers, we can separate the date from the title using a slice and negative constants.
title[-5:-1]
'1991'
The
year function below takes in a movie title with the year at the end, slices out the year, and converts the result to an integer by calling
int.
def year(title):
    """Return the year of a movie, assuming it appears at the end of the title."""
    return int(title[-5:-1])

year(title)
1991
Apply. The
apply method of a table calls a function on each element of a column, forming a new array of return values. To indicate which function to call, just name it (without quotation marks). The name of the column of input values must still appear within quotation marks.
imdb['year'] = imdb.apply(year, 'movie')
imdb
... (617 rows omitted)
Computing Categories¶
Functions can also be used to create categories based on existing columns. The first step in categorizing data is to write a function that can take existing column values as arguments and return a category label. Then, a new column for that category can be added using
apply, as in the example above.
Certain science fiction films have pushed the limits of special effects technology. The movie E.T., released in 1982, made audiences believe in aliens. Jurassic Park, released in 1993, delivered the most convincing images of dinosaurs ever created. Avatar, released in 2009, created realistic humanoid aliens in immersive 3-dimensional films. We can use these landmarks of special effects technology to categorize the history of cinema.
def age(year):
    if year < 1982:
        return 'old'
    elif year < 1993:
        return 'modern'
    elif year < 2009:
        return 'recent'
    else:
        return 'contemporary'

imdb['era'] = imdb.apply(age, 'year')
imdb
... (617 rows omitted)
Once a new category column is introduced, it can be used to perform any sort of further processing. For instance, we could count how many movies come from each era.
imdb.select(['era', 'year']).group('era', len).barh('era')
Functions can also be used to generate visualizations from tables. Histograms of the eras show quite a contrast in the distribution of movie proceeds over the years. What changes might explain the trend you observe? How might you investigate whether that change accounts for the trend?
def age_hist(age):
    imdb.where('era', age).select(['in_millions']).hist(
        bins=np.arange(100, 1000, 50), normed=True)

age_hist('old')
age_hist('modern')
age_hist('recent')
age_hist('contemporary')
Sampling
imdb = Table.read_table('imdb.csv')
Deterministic Samples
When you simply specify which elements of a set you want to choose, without any chances involved, you create a deterministic sample.
A deterministic sample from the rows of a table can be constructed using the
take method. Its argument is a sequence of integers, and it returns a table containing the corresponding rows of the original table.
The code below returns a table consisting of the rows indexed 3, 18, and 100 in the table
imdb. Since the original table is sorted by
rank, which begins at 1, the zero-indexed rows always have a rank that is one greater than the index. However, the
take method does not inspect the information in the rows to construct its sample.
imdb.take([3, 18, 100])
We can select evenly spaced rows by calling
np.arange and passing the result to
take. In the example below, we start with the first row (index 0) of
imdb, and choose every 100th row after that until we reach the end of the table. The expressions
imdb.num_rows and
len(imdb.rows) could be used interchangeably to indicate that the range should extend to the end of the table.
imdb.take(np.arange(0, imdb.num_rows, 100))
Much of data science consists of making conclusions based on the data in random samples. Correctly interpreting analyses based on random samples requires data scientists to examine exactly what random samples are.
A population is the set of all elements from which a sample will be drawn.
A probability sample is one for which it is possible to calculate, before the sample is drawn, the chance with which any subset of elements will enter the sample.
In a probability sample, all elements need not have the same chance of being chosen. For example, suppose you choose two people from a population that consists of three people A, B, and C, according to the following scheme:
- Person A is chosen with probability 1.
- One of Persons B or C is chosen according to the toss of a coin: if the coin lands heads, you choose B, and if it lands tails you choose C.
This is a probability sample of size 2. Here are the chances of entry for all non-empty subsets:
A: 1
B: 1/2
C: 1/2
AB: 1/2
AC: 1/2
BC: 0
ABC: 0
Person A has a higher chance of being selected than Persons B or C; indeed, Person A is certain to be selected. Since these differences are known and quantified, they can be taken into account when working with the sample.
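These chances can be checked by simulation. The sketch below is not from the text; it uses Python's built-in random module and a hypothetical draw_sample helper to repeat the scheme many times and tally how often each person enters the sample.

```python
import random

def draw_sample():
    """One run of the scheme: A always enters; a coin toss picks B or C.
    (Hypothetical helper, named here for illustration.)"""
    if random.random() < 0.5:
        return {'A', 'B'}   # heads
    else:
        return {'A', 'C'}   # tails

random.seed(0)  # fixed seed so the tallies are reproducible
trials = 10000
counts = {'A': 0, 'B': 0, 'C': 0}
for _ in range(trials):
    for person in draw_sample():
        counts[person] += 1

print(counts['A'] / trials)  # 1.0: A is in every sample
print(counts['B'] / trials)  # close to 1/2
print(counts['C'] / trials)  # close to 1/2
```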
To draw a probability sample, we need to be able to choose elements according to a process that involves chance. A basic tool for this purpose is a random number generator. There are several in Python. Here we will use one that is part of the module
random, which in turn is part of the module
numpy.
The method
randint, when given two arguments
low and
high, returns an integer picked uniformly at random between
low and
high, including
low but excluding
high. Run the code below several times to see the variability in the integers that are returned.
np.random.randint(3, 8) # select once at random from 3, 4, 5, 6, 7
4
A Systematic Sample
Imagine all the elements of the population listed in a sequence. One method of sampling starts by choosing a random position early in the list, and then evenly spaced positions after that. The sample consists of the elements in those positions. Such a sample is called a systematic sample.
Here we will choose a systematic sample of the rows of
imdb. We will start by picking one of the first 10 rows at random, and then we will pick every 10th row after that.
"""Choose a random start among rows 0 through 9; then take every 10th row.""" start = np.random.randint(0, 10) imdb.take(np.arange(start, imdb.num_rows, 10))
... (53 rows omitted)
Run the code a few times to see how the output varies. Notice how the numbers in the rank column all have the same ending digit. That is because the first row has a random index between 0 and 9, and hence a random rank between 1 and 10; then the code just adds 10 successively to each selected row index, leaving the ending digit unchanged.
This systematic sample is a probability sample. To find the chance that a particular row is selected, look at the ending digit in the rank column of the row. If that is 7, for example, then the row will be selected if and only if the row corresponding to movie rank 7 (Star Wars) is selected. The chance of that is 1/10.
In this scheme, all rows do have the same chance of being chosen. But that is not true of other subsets of the rows. Because the selected rows are evenly spaced, most subsets of rows have no chance of being chosen. The only subsets that are possible are those in which all the ranks have the same ending digit. Those are selected with chance 1/10.
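We can confirm the 1/10 figure by simulation. The sketch below is illustrative and not from the text; the table size of 250 rows is a stand-in, not the actual size of imdb.

```python
import numpy as np

np.random.seed(42)       # fixed seed for reproducibility
num_rows = 250           # hypothetical table size (stand-in for imdb.num_rows)
repetitions = 10000
hits = 0
for _ in range(repetitions):
    start = np.random.randint(0, 10)
    selected = np.arange(start, num_rows, 10)
    # Row index 6 (movie rank 7) is selected exactly when start == 6.
    if 6 in selected:
        hits += 1

print(hits / repetitions)  # close to 1/10
```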
Random sample with replacement
Some of the simplest probability samples are formed by drawing repeatedly, uniformly at random, from the list of elements of the population. If the draws are made without changing the list between draws, the sample is called a random sample with replacement. You can imagine making the first draw at random, replacing the element drawn, and then drawing again.
In a random sample with replacement, each element in the population has the same chance of being drawn, and each can be drawn more than once in the sample.
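As a quick illustration (added here, not part of the original notebook), np.random.choice with replace=True draws uniformly at random, putting each element back after it is drawn. Drawing 6 times from a population of 3 elements must therefore repeat at least one element.

```python
import numpy as np

np.random.seed(1)  # fixed seed for reproducibility
population = np.array(['A', 'B', 'C'])

# Six draws with replacement from a population of three elements.
sample = np.random.choice(population, 6, replace=True)
print(sample)

# With more draws than distinct elements, some element must repeat.
print(len(set(sample)) < len(sample))  # True
```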
If you want to draw a sample of people at random to get some information about a population, you might not want to sample with replacement – drawing the same person more than once can lead to a loss of information. But in data science, random samples with replacement arise in two major areas:
studying probabilities by simulating tosses of a coin, rolls of a die, or gambling games
creating new samples from a sample at hand
The second of these areas will be covered later in the course. For now, let us study some long-run properties of probabilities.
We will start with the table
die which contains the numbers of spots on the faces of a die. All the numbers appear exactly once, as we are assuming that the die is fair.
die = Table([[1, 2, 3, 4, 5, 6]], ['Face'])
die
Drawing the histogram of this simple set of numbers yields an unsettling figure as
hist makes a default choice of bins:
die.hist()
The numbers 1, 2, 3, 4, 5, 6 are integers, so the bins chosen by
hist only have entries at the edges. In such a situation, it is a better idea to select bins so that they are centered on the integers. This is often true of histograms of data that are discrete, that is, variables whose successive values are separated by the same fixed amount. The real advantage of this method of bin selection will become more clear when we start imagining smooth curves drawn over histograms.
die.hist(bins=np.arange(0.5, 7, 1), normed=True)
Notice how each bin has width 1 and is centered on an integer. Notice also that because the width of each bin is 1, the height of each bar is $0.16666 \ldots /1 = 1/6$, the chance that the corresponding face appears.
The histogram shows the probability with which each face appears. It is called a probability histogram of the result of one roll of a die.
The histogram was drawn without rolling any dice or generating any random numbers. We will now use the computer to mimic actually rolling a die. The process of using a computer program to produce the results of a chance experiment is called simulation.
To roll the die, we will use a method called
sample. This method returns a new table consisting of rows selected uniformly at random from a table. Its first argument is the number of rows to be returned. Its second argument is whether or not the sampling should be done with replacement.
The code below simulates 10 rolls of the die. As with all simulations of chance experiments, you should run the code several times and notice the variability in what is returned.
die.sample(10, with_replacement=True)
We will now roll the die several times and draw the histogram of the observed results. The histogram of observed results is called an empirical histogram.
The results of rolling the die will all be integers in the range 1 through 6, so we will want to use the same bins as we used for the probability histogram. To avoid writing out the same bin argument every time we draw a histogram, let us define a function called
hist_1to6 that will perform the task for us. The function will take one argument: the name of a table that contains the results of the rolls.
def hist_1to6(x):
    return x.hist(bins=np.arange(0.5, 7, 1), normed=True)
hist_1to6(die.sample(20, with_replacement=True))
Below, for comparison, is the probability histogram for the roll of a die. Based on that, we expect each face to appear on about $1/6$ of the rolls. But if you run the simulation above a few times, you will see that with just 20 rolls, the proportion of times each face appears can be quite far from $1/6$.
hist_1to6(die)
As we increase the number of rolls in the simulation, the proportions get closer to $1/6$.
hist_1to6(die.sample(2000, with_replacement=True))
The behavior we have observed is an instance of a general rule.
The Law of Averages¶
If a chance experiment is repeated independently and under identical conditions, then, in the long run, the proportion of times that an event occurs gets closer and closer to the theoretical probability of the event.
For example, in the long run, the proportion of times the face with four spots appears gets closer and closer to 1/6.
Here "independently and under identical conditions" means that every repetition is performed in the same way regardless of the results of all the other repetitions.
Convergence of empirical histograms¶
We have also observed that a random quantity (such as the number of spots on one roll of a die) is associated with two histograms:
a probability histogram, that shows all the possible values of the quantity and all their chances
an empirical histogram, created by simulating the random quantity repeatedly and drawing a histogram of the observed results
We have seen an example of the long-run behavior of empirical histograms:
As the number of repetitions increases, the empirical histogram of a random quantity looks more and more like the probability histogram.
At the Roulette Table¶
Equipped with our new knowledge about the long-run behavior of chances, let us explore a gambling game. Betting on roulette is popular in gambling centers such as Las Vegas and Monte Carlo, and we will simulate one of the bets here.
The main randomizer in roulette in Nevada is a wheel that has 38 pockets on its rim. Two of the pockets are green, eighteen black, and eighteen red. The wheel is on a spindle, and there is a small ball on it. When the wheel is spun, the ball ricochets around and finally comes to rest in one of the pockets. That is declared to be the winning pocket.
You are allowed to bet on several pre-specified collections of pockets. If you bet on "red," you win if the ball comes to rest in one of the red pockets.
The bet pays even money; that is, it pays 1 to 1. To understand what that means, assume you are going to bet \$1 on "red." The first thing that happens, even before the wheel is spun, is that you have to hand over your \$1. If the ball lands in a green or black pocket, you never see that dollar again. If the ball lands in a red pocket, you get your dollar back (to bring you back to even), plus another \$1 in winnings.
The table
wheel represents the pockets of a Nevada roulette wheel. It has 38 rows labeled 1 through 38, one row per pocket.
pockets = np.arange(1, 39)
colors = (['red', 'black'] * 5 + ['black', 'red'] * 4) * 2 + ['green', 'green']
wheel = Table([pockets, colors], ['pocket', 'color'])
wheel
... (28 rows omitted)
The function
bet_on_red takes a numerical argument
x and returns the net winnings on a \$1 bet on "red," provided
x is the number of a pocket.
def bet_on_red(x):
    """The net winnings of betting on red for outcome x."""
    pockets = wheel.where('pocket', x)
    if pockets['color'][0] == 'red':
        return 1
    else:
        return -1
bet_on_red(17)
-1
The function
spins takes a numerical argument
n and returns a new table consisting of
n rows of
wheel sampled at random with replacement. In other words, it simulates the results of
n spins of the roulette wheel.
def spins(n):
    return wheel.sample(n, with_replacement=True)
We will create a table called
play consisting of the results of 10 spins, and add a column that shows the net winnings on \$1 placed on "red." Recall that the
apply method applies a function to each element in a column of a table.
play = spins(10)
play['winnings'] = play.apply(bet_on_red, 'pocket')
play
And here is the net gain on all 10 bets:
sum(play['winnings'])
2
We can put all this together in a single function called
fate_red that takes as its argument the number of bets and returns the net gain on that many \$1 bets placed on "red." Try running
fate_red several times with an argument of 500.
def fate_red(n):
    net_gain = sum(spins(n).apply(bet_on_red, 'pocket'))
    if net_gain > 0:
        return 'You made ' + str(net_gain) + " dollars. Lucky!"
    elif net_gain == 0:
        return "Whew! Broke even."
    else:
        return 'You made ' + str(net_gain) + " dollars. The casino thanks you for making it richer."
fate_red(500)
'You made -30 dollars. The casino thanks you for making it richer.'
Betting \$1 on red hundreds of times seems like a bad idea from a gambler's perspective. But from the casinos' perspective it is excellent. Casinos rely on large numbers of bets being placed. The payoff odds are set so that the more bets that are placed, the more money the casinos are likely to make, even though a few people are likely to go home with winnings.
Simple Random Sample – a Random Sample without Replacement
A random sample without replacement is one in which elements are drawn from a list repeatedly, uniformly at random, at each stage deleting from the list the element that was drawn.
A random sample without replacement is also called a simple random sample. All elements of the population have the same chance of entering a simple random sample. All pairs have the same chance as each other, as do all triples, and so on.
The default action of
sample is to draw without replacement. In card games, cards are almost always dealt without replacement. Let us use
sample to deal cards from a deck.
A standard deck consists of 13 ranks of cards in each of four suits. The suits are called spades, clubs, diamonds, and hearts. The ranks are Ace, 2, 3, 4, 5, 6, 7, 8, 9, 10, Jack, Queen, and King. Spades and clubs are black; diamonds and hearts are red. The Jacks, Queens, and Kings are called 'face cards.'
The table
deck contains all 52 cards, one row per card, in columns labeled
rank and
suit. The suits appear as symbols:
Spades: ♠︎ $~~$ Clubs: ♣︎ $~~$ Diamonds: ♦︎ $~~$ Hearts: ♥︎
The rank abbreviations are:
Ace: A $~~$ Jack: J $~~$ Queen: Q $~~$ King: K
from itertools import product

suits = ['♠︎', '♥︎', '♦︎', '♣︎']
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
deck = Table.from_rows(product(ranks, suits), ['rank', 'suit'])
deck
... (42 rows omitted)
A poker hand is five cards dealt at random from the deck. The code below deals a poker hand. Deal a few hands to see if you can get a flush: a hand that contains only one suit. How many aces do you typically get?
deck.sample(5)
Note that the hand is the set of five cards, regardless of the order in which they appeared. For example, the hand
['9♣︎', '9♥︎', 'Q♣︎', '7♣︎', '8♥︎'] is the same as the hand
['7♣︎', '9♣︎', '8♥︎', '9♥︎', 'Q♣︎'].
This can be used to show that simple random sampling can be thought of in two equivalent ways:
drawing elements one by one at random without replacement
randomly permuting (that is, shuffling) the whole list, and then pulling out a set of elements at the same time
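The two methods can be sketched side by side with Python's built-in random module (an illustration added here, not from the text). The equivalence is in distribution: a particular run of the two methods need not produce the same sample, but each method always yields a set of distinct elements, and every such set is equally likely.

```python
import random

population = list(range(10))
random.seed(3)  # fixed seed for reproducibility

# Method 1: draw 3 elements one by one, without replacement.
one_by_one = set(random.sample(population, 3))

# Method 2: shuffle the whole list, then pull out the first 3 at once.
shuffled = population.copy()
random.shuffle(shuffled)
first_three = set(shuffled[:3])

# Both are sets of 3 distinct elements drawn from the population.
print(one_by_one, first_three)
```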
Explorations: Privacy
We're going to look at some data collected by the Oakland Police Department. They have automated license plate readers on their police cars, and they've built up a database of license plates that they've seen -- and where and when they saw each one.
First, we'll gather the data. It turns out the data is publicly available on the Oakland public records site. I downloaded it and combined it into a single CSV file myself before lecture.
lprs = Table.read_table('./all-lprs.csv.gz', compression='gzip', sep=',')
lprs.column_labels
('red_VRM', 'red_Timestamp', 'Location')
Let's start by renaming some columns, and then take a look at it.
lprs.relabel('red_VRM', 'Plate')
lprs.relabel('red_Timestamp', 'Timestamp')
lprs
... (2742091 rows omitted)
Phew, that's a lot of data: we can see about 2.7 million license plate reads here.
Let's start by seeing what can be learned about someone, using this data -- assuming you know their license plate.
As a warmup, we'll take a look at ex-Mayor Jean Quan's car, and where it has been seen. Her license plate number is 6FCH845. (How did I learn that? Turns out she was in the news for getting $1000 of parking tickets, and the news article included a picture of her car, with the license plate visible. You'd be amazed by what's out there on the Internet...)
lprs.where('Plate', '6FCH845')
OK, so her car shows up 6 times in this data set. However, it's hard to make sense of those coordinates. I don't know about you, but I can't read GPS so well.
So, let's work out a way to show where her car has been seen on a map. We'll need to extract the latitude and longitude, as the data isn't quite in the format that the mapping software expects: the mapping software expects the latitude to be in one column and the longitude in another. Let's write some Python code to do that, by splitting the Location string into two pieces: the stuff before the comma (the latitude) and the stuff after (the longitude).
def getlatitude(s):
    before, after = s.split(',')   # Break it into two parts
    latstring = before[1:]         # Get rid of the annoying '('
    return float(latstring)        # Convert the string to a number

def getlongitude(s):
    before, after = s.split(',')   # Break it into two parts
    longstring = after[1:-1]       # Get rid of the ' ' and the ')'
    return float(longstring)       # Convert the string to a number
Let's test it to make sure it works correctly.
getlatitude('(37.797558, -122.26935)')
37.797558
getlongitude('(37.797558, -122.26935)')
-122.26935
Good, now we're ready to add these as extra columns to the table.
lprs['Latitude'] = lprs.apply(getlatitude, 'Location')
lprs['Longitude'] = lprs.apply(getlongitude, 'Location')
lprs = lprs.drop('Location')
lprs
... (2742091 rows omitted)
And at last, we can draw a map with a marker everywhere that her car has been seen.
jeanquan = lprs.where('Plate', '6FCH845')
Marker.map(jeanquan['Latitude'], jeanquan['Longitude'], labels=jeanquan['Timestamp'])
OK, so it's been seen near the Oakland police department. This should make you suspect we might be getting a bit of a biased sample. Why might the Oakland PD be the most common place where her car is seen? Can you come up with a plausible explanation for this?
Let's try another. And let's see if we can make the map a little more fancy. It'd be nice to distinguish between license plate reads that are seen during the daytime (on a weekday), vs the evening (on a weekday), vs on a weekend. So we'll color-code the markers. To do this, we'll write some Python code to analyze the Timestamp and choose an appropriate color.
import datetime

def getcolor(ts):
    t = datetime.datetime.strptime(ts, '%m/%d/%Y %I:%M:%S %p')
    if t.weekday() >= 6:
        return 'green'   # Weekend
    if t.hour >= 6 and t.hour <= 17:
        return 'blue'    # Weekday daytime
    return 'red'         # Weekday evening

lprs['Color'] = lprs.apply(getcolor, 'Timestamp')
Now we can check out another license plate, this time with our spiffy color-coding. This one happens to be the car that the city issues to the Fire Chief.
t = lprs.where('Plate', '1328354')
Marker.map(t['Latitude'], t['Longitude'], labels=t['Timestamp'], colors=t['Color'])
Hmm. We can see a blue cluster in downtown Oakland, where the Fire Chief's car was seen on weekdays during business hours. I bet we've found her office. In fact, if you happen to know downtown Oakland, those are mostly clustered right near City Hall. Also, her car was seen twice in northern Oakland on weekday evenings. One can only speculate what that indicates. Maybe dinner with a friend? Or running errands? Off to the scene of a fire? Who knows. And then the car has been seen once more, late at night on a weekend, in a residential area in the hills. Her home address, maybe?
Let's look at another.
t = lprs.where('Plate', '5AJG153')
Marker.map(t['Latitude'], t['Longitude'], labels=t['Timestamp'], colors=t['Color'])
What can we tell from this? Looks to me like this person lives on International Blvd and 9th, roughly. On weekdays they've been seen in a variety of locations in west Oakland. It's fun to imagine what this might indicate -- delivery person? taxi driver? someone running errands all over the place in west Oakland?
We can look at another:
t = lprs.where('Plate', '6UZA652')
Marker.map(t['Latitude'], t['Longitude'], labels=t['Timestamp'], colors=t['Color'])
What can we learn from this map? First, it's pretty easy to guess where this person lives: 16th and International, or pretty near there. And then we can see them spending some nights and a weekend near Laney College. Did they have an apartment there briefly? A relationship with someone who lived there?
Is anyone else getting a little bit creeped out about this? I think I've had enough of looking at individual people's data.
As we can see, this kind of data can potentially reveal a fair bit about people. Someone with access to the data can draw inferences. Take a moment to think about what someone might be able to infer from this kind of data.
As we've seen here, it's not too hard to make a pretty good guess at roughly where someone lives, from this kind of information: their car is probably parked near their home most nights. Also, it will often be possible to guess where someone works: if they commute into work by car, then on weekdays during business hours, their car is probably parked near their office, so we'll see a clear cluster that indicates where they work.
But it doesn't stop there. If we have enough data, it might also be possible to get a sense of what they like to do during their downtime (do they spend time at the park?). And in some cases the data might reveal that someone is in a relationship and spending nights at someone else's house. That's arguably pretty sensitive stuff.
This gets at one of the challenges with privacy. Data that's collected for one purpose (fighting crime, or something like that) can potentially reveal a lot more. It can allow the owner of the data to draw inferences -- sometimes about things that people would prefer to keep private. And that means that, in a world of "big data", if we're not careful, privacy can be collateral damage.
If we want to protect people's privacy, what can be done about this? That's a lengthy subject. But at risk of over-simplifying, there are a few simple strategies that data owners can take:
Minimize the data they have. Collect only what they need, and delete it after it's not needed.
Control who has access to the sensitive data. Perhaps only a handful of trusted insiders need access; if so, then one can lock down the data so only they have access to it. One can also log all access, to deter misuse.
Anonymize the data, so it can't be linked back to the individual who it is about. Unfortunately, this is often harder than it sounds.
Engage with stakeholders. Provide transparency, to try to avoid people being taken by surprise. Give individuals a way to see what data has been collected about them. Give people a way to opt out and have their data be deleted, if they wish. Engage in a discussion about values, and tell people what steps you are taking to protect them from unwanted consequences.
This only scratches the surface of the subject. My main goal in this lecture was to make you aware of privacy concerns, so that if you are ever a steward of a large data set, you can think about how to protect people's data and use it responsibly.
Appendix: Statements
0. Introduction¶
In this note, we'll go over the structure of Python code in a bit more detail than we have before. When you've absorbed this material, you should be able to read Python code and decompose it into simple, understandable parts. This note should be particularly useful if you've seen a lot of Python code, but you have a hard time interpreting complicated-looking code like
table['foo'] = np.array([1,2,3]) + table['bar'].
Decomposing Python into small parts is kind of like diagramming an English sentence. While our brains are perfectly capable of generating and understanding English without explicitly identifying things like subjects and predicates, Python interprets code very literally according to its rules (its syntax). So if you want to understand Python code, it's more important to have a precise model of Python's rules in your head. On the flip side, Python's rules are much simpler than those of English (see, for example, this amusingly complicated English sentence). They just seem complicated because we're less familiar with them. That makes it possible to learn Python much faster than you learned English.
Note: Everything in this note is also available, with even more pedantic precision, at the official Python language reference. This note is focused on the material in chapters 6, 7, and 8 of the reference. We will omit some details and fudge some truths in the interest of pedagogy. Once you feel like an expert in this stuff, feel free to brave the official documentation.
How to read this document¶
This note contains a bunch of code cells, in addition to text. The code cells typically illustrate points from the text. Please run the code cells as you go through the note, and pay attention to what their output is. Recall that the thing that's printed when you run a cell is the value of the last line.
3                         # Line 1.0
z = 3                     # Line 1.1
4+3                       # Line 1.2
y = 4+3                   # Line 1.3
(2+3)+z                   # Line 1.4
"foo"+"bar"               # Line 1.5
[1,2,3]                   # Line 1.6
x = [1,2,3]               # Line 1.7
sum(x)                    # Line 1.8
x[2]                      # Line 1.9
x[2] = 4                  # Line 1.10
t = Table()               # Line 1.11
t['Things'] = np.array(["foo", "bar", "baz"])  # Line 1.12
t.sort('Things')          # Line 1.13
u = t.sort('Things')      # Line 1.14
u.relabel('Things', 'Nonsense')  # Line 1.15
u                         # Line 1.16
(The
# Line X comments are just there for labeling; don't consider them part of the lines. Similarly, other instances of
# some text here that you see in this note are just for explanation.) Each line in the cell is a statement. A statement is a (somewhat) self-contained piece of code. Python executes statements in the order in which they appear. There are many kinds of statements, and to execute a statement, Python first has to figure out what kind of statement it is.
2. Expressions¶
The most basic kind of statement is the expression.
Line 0 above is just an expression:
3. Like many (but not all) expressions, it has a value, the integer 3. Like some (but not all) expressions, computing its value causes nothing to "happen" to the world. (We say it has no side effects.) When Python executes line 0, it computes that value. Since nothing is done with it, it just gets discarded. The same is true of lines 2, 4, 5, 6, 8, 9, 13, and 16 -- those are expression statements that cause values to be computed, but the computation has no side effects, and the value of the full expression is eventually discarded. Line 15 is an expression that does have side effects -- it causes the
'Things' column in the table named
t to be renamed to
'Nonsense'. The other lines are statements but not expressions, but we will see that, like many statements, they contain expressions.
Expressions are themselves usually made up of several smaller expressions joined together by some rules; we call these compound expressions, and we sometimes call the component expressions subexpressions. Line 2, for example, is a compound expression made up of the subexpressions
4 and
3 joined by
+. Python knows what a
+ between two expressions means, and it puts them together so that the value of
4+3 is the value of
4 plus the value of
3, or 7.
Line 4 is another compound expression. We can think of it as the subexpressions
(2+3) and
z, again joined by
+. But
(2+3) is itself a compound expression, made up of
2 and
3 joined by
+. Python first computes the value of
(2+3), which is 5, and then computes the value of
z, which is 3 (
z having been assigned previously), and then adds 5 and 3 to get 8.
(2+3)*(4*((5*6)+7)) is also a valid expression. It contains 10 subexpressions (not including itself):
2
3
(2+3)
4
5
6
(5*6)
7
((5*6)+7)
(4*((5*6)+7))
Compound expressions can be arbitrarily complicated compositions of expressions.
Question. How many subexpressions are contained in the expression
((1+2)+(3+4))+((5+6)+(7+8))?
It's critical to recognize that subexpressions are valid expressions that could be written by themselves or made part of other compound expressions. If you see a complicated expression like the one above (or even more exotic ones later), and you don't understand what it does, you can always break it down into smaller bits until you get to very basic expressions. There is a fairly small list of basic expression types (things that can't be broken down into subexpressions) to learn.
This note will tell you the rules about most of the basic expressions in Python, but in order to understand and write real code (which very regularly involves large compound expressions) you'll need to develop the skill of breaking down compound expressions into subexpressions. You can try to do that mentally while you're reading code, but if that's too hard, you can just type them into a Python code cell and see what they do.
Question. What's the value of each subexpression you found above? You can just type them into the empty code cell below if you like.
A note on errors¶
Line 5 ("foo"+"bar") is a compound expression adding two strings, with "foo" and "bar" as subexpressions. This is okay, since the + operator knows how to handle two strings. It produces the string "foobar" as its value.
When the following cell is executed, however, there is an error. (Run the cell to confirm that.)
"foo"+5 # Error!
When you see an error, don't just give up. Often (though unfortunately not always) the error message will tell you what's wrong. The error message first tells us that the problem happened on line 1 of the cell (in this case, the only line) and the text of the error is "TypeError: Can't convert 'int' object to str implicitly". Python evaluates "foo" and 5 just fine, but when the + operator tries to apply itself to "foo" and 5, it becomes unhappy. The error refers to the fact that the + operator tries to convert its arguments to something it can add. For example, adding an integer and a float, like 3+4.5, works because + converts the integer 3 to a float. But + can't convert a number to text (or vice-versa), so it gives up.
The important thing to realize about that cell, for our purposes, is exactly where the error happens. In the next cell, for example, some work is done before an error happens:
("foo"+"bar")+5 # Error!
Python actually evaluates the subexpression ("foo"+"bar") successfully, producing the string "foobar", before again failing to add "foobar" and 5. The error occurs only when trying to add a string and a number, and not before.
Now, let's go over the kinds of expressions that Python has.
"foo" # a string expression, whose value is the string "foo"
'foo' # a string expression, essentially identical to the one above
'5' # a string expression, which happens to contain a single character called 5
5 # an int expression, whose value is the integer number 5
5.1 # a float expression, whose value is the decimal number 5.1
It's important to recognize that string, int, and float expressions produce values of different types. A string is not an int, nor is it a float. You can see the type of anything by calling type(thing) (or print it out with print(type(thing))), as in type(2), type('foo'), or:
i_am_a_string = "blah"
type(i_am_a_string)
Confusingly but conveniently, many functions built into Python will try to convert values of one type to another. 3+4.5 was one example we just saw -- in order to add 3 and 4.5, Python first converts the integer 3 to the float 3.0. print(3) is another -- in order to print anything so you can see its value, the print function first converts it to a string.
You can do conversions between these three types yourself with the str(), int(), and float() functions.
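For example (the variable names below are just illustrative):

```python
as_text = str(5)        # the string '5'
as_int = int("42")      # the integer 42
as_float = float(3)     # the float 3.0
truncated = int(4.9)    # int() drops the fractional part, giving 4
```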
"""blah
... # looks like a comment but isn't
last line"""
The result is just a string like "foo" above, with a few differences. Triple double-quotation marks denote the beginning and end of this string, and it can take up multiple lines, unlike an ordinary string expression.
Frankly, this is an arcane detail of Python, but we bring it up because triple-quoted strings are often used for writing long-form comments in code, instead of # comments. This works even though the string is just an expression, not a special device for long comments. That's because an expression doesn't do anything by itself, except that the last expression in a Jupyter notebook cell gets printed. So you can sprinkle string expressions (or other expressions that have no side-effects) throughout your code (on their own lines) and no harm will come of it.
The following (oddly and excessively) documented code shows this:
"""The code in this cell produces pi rounded to 5 decimal digits."""
"First, let's give a name to pi."
my_name_for_pi = math.pi
# Now, we round it to 5 decimal
# digits.
pi_rounded = round(my_name_for_pi, 5)
"Now make that the last expression in this cell."
pi_rounded
Lists¶
Line 6 above, [1,2,3], is another kind of compound expression, the list literal. Python knows that when square brackets ([]) appear by themselves with a comma-separated list of expressions inside them, we are asking for a list consisting of those expressions' values.
Again, each expression in the list can be a compound expression. So it's okay to write something like:
["foo"+"bar", sum([1,2,3]), [4, 5, 6]]
Question. Describe the value of the above list expression in English.
Calls¶
Line 8, sum(x), is also a compound expression, a function call. Python evaluates the subexpression sum, producing a function that adds members of lists, and the subexpression x, which was previously set to a list of integers. Then the parentheses () direct Python to call the function on the left of the parenthesis (the one named sum) on the value of x, producing the value 6. Note that it's possible to write things like 5(3) or nonexistent_function(0). Python will just complain that 5 is not a function (specifically, that it is not "callable") or that nonexistent_function hasn't been defined, respectively.
The following line is similar to line 8, but the subexpression inside the parentheses, x + [4], is itself a compound expression:
sum(x + [4])
(Recall that adding two lists with + makes a new list consisting of the two lists smashed together. So x + [4] above has value equal to [1,2,4,4]. x is equal to [1,2,4], not [1,2,3] as it was defined on line 7, because on line 10 we set its last element to 4.)
We haven't seen how to define new functions yet, but here is one example to see how the expression before the ( is just an expression (whose value must be a function):
my_name_for_sum = sum
my_name_for_sum(x)
Indexing¶
Line 9, x[2], is yet another compound expression. Python evaluates the subexpression x, producing a list, and the subexpression 2. The square brackets [], appearing immediately after an expression and with another expression inside them, tell Python to index into the value of the first expression using the value of the second expression. For this list as it's defined on line 9, this produces the value 3.
Notice that the code string [2] can have two different meanings, depending on the code immediately around it. If there is an expression to the left, for example x[2], then Python will take it to mean an indexing expression. If not, Python will think you mean a list with a single element, 2.
Like parentheses, the things on either side of the square brackets can be compound expressions:
x[2-1]
(x + [13])[2+1]
Question. In the last cell, there are 7 subexpressions, not counting the whole expression (x + [13])[2+1]. Can you identify all of them?
Finally, note that different kinds of values support different kinds of indexing. A Table, for example, supports indexing by strings, producing a column:
t['Things']
Question. To put together list indexing and function calls, try to figure out what the following code is doing. (Note that an expression like sum has a value, like any other name expression, and that value is a function. We can put function values into lists, just like other values.)
some_functions_on_lists = [sum, len]
(some_functions_on_lists[0])(x)
Dots and attributes¶
Objects (just another name for a value, like 1, "four score", or a Table) often have things called properties, attributes, fields, or (in the case when the things are functions) methods. Let's call them attributes. Though in this class we won't see how to create new kinds of objects, we will use attributes all the time.
We access attributes using a . (a dot). For example:
t.rows
Generically, the thing on the left of the . must be an expression whose value is an object with the attribute we want. As with calling and indexing, it can be an arbitrarily complicated compound expression. The thing on the right of the dot is the name of the attribute. Unlike the arguments of a function or the index in an indexing expression, it is not an expression. It must be the name of an attribute that the object on the left has.
As we said, sometimes an attribute is a function, in which case we sometimes call it a method instead. The syntax is the same as other attribute accesses:
t.sort
t.sort('Things')
The only difference between a method and a normal function is that the object itself (t in this case) is automatically passed as the first argument to the method. So the sort function technically has two arguments -- the first is the table that sort is being called on, and the second is the column name. This is how sort knows which table to sort! Normally this is a really technical detail that you don't need to worry about, but it can come up when you accidentally pass the wrong number of arguments to a method:
t.sort('This', 'is', 'too', 'many', 'arguments') # Error!
The error complains that we gave 6 arguments to sort, but it looks like we only passed 5. The extra first argument is the table t.
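The same automatic first argument shows up with any method. A small sketch using built-in strings (rather than the Table class, which needs an extra library):

```python
word = "things"
upper_1 = word.upper()     # method call: word is passed as the first argument automatically
upper_2 = str.upper(word)  # the same call, spelled as a plain function with word passed explicitly
```

Both spellings produce the same value, which is why the error above counts the object itself as an argument.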
A weird thing about dot syntax¶
You might notice at some point that dots are used in two ways in Python: accessing attributes, and in expressions for floating-point numbers. For example, x.y is accessing the attribute named y in the value named x, while 1.2 is just an expression for the number 1.2. This is one reason why you can't have numbers at the start of names. It also means that the expression on the left of a . can't just be a number. For example, we can't access the attribute real of an integer this way (for this example, you don't need to know what real is doing, other than that it should just return the same value as the integer):
1.real
That's because Python can't tell whether we're trying to write an (invalid) decimal number 1.real or access the real attribute of the value 1. Surrounding the 1 in parentheses makes it clear to Python:
(1).real
Exercises to put it all together¶
Question. Many people, when they first encounter tables and try to use them to manipulate data, assume that Python allows more syntactic flexibility than it really does. Below are some examples of things we might hope would work, but don't. For each one, describe what it actually does, what its author was probably trying to do, what went wrong, and how to fix it.
# No error here, just setup for the next cells.
# Run this cell to see the table we're working with.
my_table = Table([[1, 2, 3, 4],
                  [9, 2, 3, 1]],
                 ['x', 'y'])
my_table
my_table['x + y']
my_table['x' + 'y']
my_table['x'] + ['y']
my_table.where('x' >= 3)
my_table.where(['x'] >= 3)
my_table.sort('y')
row_with_smallest_y = my_table.rows[0]
If we had only expressions, it would be difficult to put together many steps in our code. For example, which piece of code is more legible?
Table([['Alice', 'Bob', 'Alice', 'Alice', 'Connie'], [119.99, 29.99, 10.00, 350.00, 5.29]], ['Customer', 'Bill']).group('Customer', np.sum).sort('Bill sum', descending=True)['Customer'][0]
transactions = Table()                                                    # Line 3.0
transactions['Customer'] = ['Alice', 'Bob', 'Alice', 'Alice', 'Connie']   # Line 3.1
transactions['Bill'] = [119.99, 29.99, 10.00, 350.00, 5.29]               # Line 3.2
total_bill_per_customer = transactions.group('Customer', np.sum)          # Line 3.3
customers_sorted_by_total_bill = total_bill_per_customer.sort('Bill sum', descending=True)['Customer']  # Line 3.4
top_customer = customers_sorted_by_total_bill[0]                          # Line 3.5
top_customer                                                              # Line 3.6
Many programs do hundreds (or millions) of different things, and it would be cumbersome to do this only using expressions. In this example, we are doing only one thing, using several steps. The first cell is concise, but it's very hard to read. In the second cell, we use assignment statements to break down the steps into things that are (hopefully) understandable.
An assignment statement is executed like other statements, but it always causes an effect on the world (recall that we called these side effects). That is, subsequent statements will see the changes made by the assignment.
Name assignments¶
An assignment statement generally has two expressions separated by an equals sign. The expression on the right can be anything, but the expression on the left must be an "assignable thing". The simplest case is a name that has not been assigned to anything yet, like total_bill_per_customer on line 3 above. Before line 3 is executed, it would be an error to refer to total_bill_per_customer, but after line 3, that name can be used to refer to the table created by transactions.group('Customer', np.sum).
Assignment statements can also reassign existing names to something else:
number = 3
number = 4
number = number + 2
number
As a matter of code style, it is best to avoid this where possible, because it can make your code more confusing. (If everything is assigned only once, it's trivial to see what its value is when you read code. Otherwise you might need to hunt down all the assignments.) But occasionally it is useful, and sometimes it is necessary. We'll see examples of the latter when we cover iteration.
Indexing assignments¶
Lines 1 and 2 above are assignments to parts of an indexable thing. In this case, they add new columns to the transactions Table associated with the strings "Customer" and "Bill", respectively. Generically, an indexing assignment looks like:
<expression with indexable value>[<expression>] = <expression>
The same pattern happens when we assign elements of a list or array:
my_list = [4, 5, "foo"]
my_list[0] = "bar"
Different indexable things can have different behavior when you set something in them. For example, Tables use string indexing instead of number indexing, and they are okay with adding new columns using indexing assignments (as we saw in lines 1 and 2) or with replacing existing columns with something else. If we want to change the customer names (say because we made a mistake the first time), we could do that by changing the whole "Customer" column:
transactions['Customer'] = ['Alice', 'Bob', 'Alice', 'Alice', 'Dora']
Lists, however, don't let us add new elements. We can only assign new things to the slots a list had when it was created:
my_list[2] = "baz" # Okay.
my_list[3] = "garply" # Error.
Note that it is possible to make an existing list longer using extend(), or to make a new, longer copy of the list with +. You just can't do it with index assignment.
Why do lists have this restriction?
Lists are supposed to contain contiguous ranges of things; they can't have "holes" that aren't indexable. If you could extend a list by assigning to it at whatever indices you wanted, you could assign elements, say, 0, 1, and 3, leaving 2 unassigned. Then what should len return for that list -- 3 or 4? And what should happen when you print it? Should it say [0,1,<blank>,3]? It's not clear. To make sure you don't have to worry about this when you use lists, Python doesn't let you do it.
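The allowed ways of growing a list mentioned above can be sketched like this (variable names are illustrative):

```python
my_list = [4, 5, "foo"]
my_list[2] = "baz"                  # okay: slot 2 already exists
my_list.extend(["garply"])          # grows the list in place, no holes possible
longer_copy = my_list + ["waldo"]   # or build a new, longer list with +
```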
4. Import statements¶
A simple, standalone kind of statement is the import statement, as in import numpy as np. It has the side effect of making the numpy module available, giving it the name np. Notice that the import statement has its own special rules, and it doesn't include other expressions as subexpressions anywhere.
Modules are actually values, just like strings or functions. Saying import numpy just loads the module named numpy from the computer's library of modules and assigns it the name numpy.
import numpy as np assigns it the name np instead. We could imagine that import numpy as np does something like this:
np = load_module('numpy') # BEWARE: NOT REAL PYTHON CODE.
When you say something like np.array([1,2,3]), you're accessing the array attribute of the module named np and calling it on the list given by [1,2,3]. (Note that, unlike function attributes of some other values, function attributes of modules are not usually called methods, and they don't get the module value as an extra argument.)
Question. How many subexpressions (not counting the whole expression) are there in the following expression?
np.array([1,1+2,3])*4
def square(x):
    return x*x
square(5.5)
After this line, the function square will be available for calling. Defining a function doesn't do anything else. In particular, it's not called unless you call it somewhere.
The function definition is our first example of a statement that takes up multiple lines. In fact, a function definition is a compound statement that typically includes multiple substatements; its general form is:
def <function name>(<argument list>):
    <substatement 0>
    <substatement 1>
    ...
Notice the indentation of the statements inside the function. Indentation tells Python where your function definition ends. You can use as many spaces as you want (as long as you're consistent), but 4 is traditional.
When a function is executed (using the function call syntax we saw above), its substatements are executed sequentially, just like an ordinary sequence of statements in a cell. A substatement can be any statement you want, just like a subexpression can be any expression you want. You can even put function definitions as substatements inside function definitions. A special kind of substatement often seen in functions (and nowhere else) is the return statement, which is covered in detail next. When a return statement is reached, execution finishes (even if there are statements below) and the expression after return becomes the value of the function call.
Before the statements are executed, each name in the argument list is set to the corresponding value in the arguments passed to the function. For example, when we call square(5.5) above, Python starts executing the statements in the square function, but first sets x to 5.5. Arguments are how we pass information into functions; functions with no arguments can only behave one way.
Why functions?¶
Functions are extremely useful for packaging small pieces of functionality into easily-understandable pieces. Computer code is so powerful that organizing and maintaining it is often much more difficult than just getting the computer to do what we want. If you can wrap a complicated procedure into a single function, then you can focus once on getting that function written correctly, and then move on to something else, never worrying about its correctness again. In most moderate- or large-scale software, all code is organized this way -- that is, all code is just a bunch of (relatively short) functions that call each other.
In your labs, and in coding you do outside and after this class, you'll often notice yourself repeating the same thing several times, with slight modifications. For example, you might analyze a dataset and then perform the same analysis with a different dataset for comparison. Or you might find yourself repeatedly doing the same mathematical operation, like "square each element and add 5". When that happens, you should rewrite your code so that the thing you're repeating happens inside a function with a memorable name.
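For the "square each element and add 5" example just mentioned, the rewrite might look like this (the function name is just an illustration):

```python
def square_plus_five(x):
    """The repeated operation, wrapped in a memorably named function."""
    return x * x + 5

first = square_plus_five(3)    # 14
second = square_plus_five(10)  # 105
```

Now every place that repeated the operation can call the function instead, and a fix to the operation only has to happen once.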
def square(x):
    return x*x
It's important to know that the name x is assigned to a value only for the purposes of the statements inside the function. Outside the function call, argument names are not modified or visible. For example:
x = 5
def cube(x):
    return x*x*x
cube(3)  # 27
x  # Still 5!
def square_root(does_not_appear_elsewhere):
    return does_not_appear_elsewhere**(1/2)
square_root(4)
does_not_appear_elsewhere  # Causes an error. does_not_appear_elsewhere was only defined inside the function while it was being called.
Similarly, any names defined inside a function are only defined inside the function while it's running. They don't even stick around across calls to the function; each time the function body finishes, the names defined inside it are wiped out, just like argument names.
def times_three(x):
    multiplier = 3
    return multiplier*x
six = times_three(2)
three = multiplier  # Error!
Functions as values¶
A function definition like
def my_func(x):
    return 2*x
is really just producing a function value and assigning the name my_func to that value. In this case, the function value is the function that multiplies its single argument by 2. You should imagine def as doing something similar to the following (non-functioning) code:
my_func = make_a_function(x):  # BEWARE: NOT REAL PYTHON CODE.
    return 2*x
...where we're imagining for a moment that the special syntax make_a_function(...): ... returns a function. So names assigned to functions are really just ordinary names, and function values are just like other values. Of course, function values, like other values, have special behaviors; they can be called using (), and they can't be added together like strings or numbers.
Names assigned to functions are also just ordinary names. It is possible, for example, to redefine a name that was previously defined as a function using def (though this is so confusing that it is usually a bad idea):
def my_func(x):
    return 2*x
eight = my_func(4)
my_func = 3  # Technically possible, but inadvisable!
my_func + 2
We can also put function values into a list, as we saw earlier:
def my_func_0(x):
    return 0*x
def my_func_1(x):
    return 1*x
funcs = [my_func_0, my_func_1]
zero = funcs[0](3)
zero
Though Python prints function values in a slightly cryptic way, you can print them if you want:
funcs
6. Return statements¶
Inside a function definition, we very often see yet another kind of statement: the return statement. This has the form return <expression>. Any of the expressions we saw above can appear after the return. This is the value produced by calls to the function. For example, the value of square(5) is 25, since square will return 5*5 when it is called with the argument 5.
return stops execution of the function; subsequent statements are not reached. For example:
def weird_but_technically_correct_square(x):
    return x*x
    return (x*x)+1
weird_but_technically_correct_square(5)
If a return statement is never reached, calling the function produces no value. The following code is wrong, for example:
def wrong_circle_area(r):
    math.pi*(r**2)
some_name = wrong_circle_area(4)
some_name
Unfortunately, this is a mistake that Python will not complain about; it will just silently let some_name have no value. (Technically it is given a special value called None. If a statement with value None is the last statement in a cell, Jupyter doesn't print anything, and that's what happens in the above cell. But you can see the value of some_name if you write, for example, str(some_name).)
To be clear, we just fix this by returning whatever we want the function to return:
def correct_circle_area(r):
    return math.pi*(r**2)
circle_radius_four_area = correct_circle_area(4)
circle_radius_four_area
x = [1,2,3]
if len(x) > 4:
    message = "x is a long list!"
else:
    message = "x is a short list!"
The general form of a conditional is:
if <boolean-valued expression 0>:
    <statement 0.0>
    ...
elif <boolean-valued expression 1>:
    <statement 1.0>
    ...
elif <boolean-valued expression 2>:
    <statement 2.0>
    ...
...
else:
    <statement n.0>
    ...
If there is an else clause, then exactly one of the statement groups will be executed; otherwise, it's possible that none of them will happen (if none of the expressions next to if or elif are True).
Conditionals are pretty simple, but like functions, they are very important for writing code that does interesting things.
Something to watch out for is that Python will implicitly convert non-boolean values to boolean values, sometimes using surprising rules. Typically, the convention is that something that is "zero-like" or "empty" is False, while other things are True. It's best not to rely on this behavior, though; use an explicit comparison that produces a boolean value. See what happens in the following examples:
if 0:
    x = True
else:
    x = False
x

if 1:
    x = True
else:
    x = False
x

if "some string":
    x = True
else:
    x = False
x

if "":
    x = True
else:
    x = False
x

if []:  # (an empty list)
    x = True
else:
    x = False
x

if [3]:
    x = True
else:
    x = False
x

if np.array([]):
    x = True
else:
    x = False
x

if np.array([True]):
    x = True
else:
    x = False
x

if np.array([False]):
    x = True
else:
    x = False
x

if np.array([True, False]):
    x = True
else:
    x = False
x
I am totally new to Arduino and trying to make a small project. Basically, I am using a 4 x 4 matrix keypad to get inputs from the user. First the user enters the first value and then the second value; both inputs are 2-digit numbers. The Arduino saves the first and second inputs into an array. I am stuck on this part and can't figure out how to get two inputs. Example: Please Enter the First Input: (User Enters) Please Enter the Second Input: (User Enters) Then the Arduino takes both values and runs calculations. Thank you!
You can use this diagram to configure the board and the keypad. You can use any board that has at least 7 digital inputs, because that’s how many pins the keypad uses.
Here is a sample program for use with the keypad.
And this is the Key Pad library which will help you understand and change the code to make it what you want.
Here is a basic example for what you need. I don't actually have a keypad myself, so I wasn't able to test whether it compiles, but this is the basic idea.
(Note: this is in the void loop() so it will keep running over and over. You need to program it to what you want it do. If you run into any problems feel free to upload your code here in the programming section and we can help you solve any issues that may arise.)
#include <Keypad.h>

const byte ROWS = 4; // Four rows
const byte COLS = 4; // Four columns

// Define the Keymap
char keys[ROWS][COLS] = {
  {'1','2','3','F'},
  {'4','5','6','E'},
  {'7','8','9','D'},
  {'A','0','B','C'},
};

// Connect keypad ROW0, ROW1, ROW2 and ROW3 to these Arduino pins.
byte rowPins[ROWS] = {39, 41, 43, 45}; // connect to the row pinouts of the keypad
byte colPins[COLS] = {31, 33, 35, 37}; // connect to the column pinouts of the keypad

// Create the Keypad
Keypad keypad = Keypad(makeKeymap(keys), rowPins, colPins, ROWS, COLS);

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println("Please Enter the First Input: ");
  char input1 = keypad.waitForKey();
  Serial.println("Please Enter the Second Input: ");
  char input2 = keypad.waitForKey();
  // waitForKey() returns a character, so convert each digit key to its
  // numeric value before doing arithmetic.
  int total = (input1 - '0') + (input2 - '0');
  Serial.println("Your total is ");
  Serial.print(total);
}
An integral part of using Python involves the art of handling exceptions. There are primarily two types of exceptions: built-in exceptions and user-defined exceptions. When an error occurs, the error-handling resolution is to save the state of execution at the moment of the error, interrupting the normal program flow to execute a special function or block of code called an exception handler.
There are many kinds of errors, like 'division by zero' or 'file open error', that an error handler needs to deal with so that the program can continue based on previously saved state.
Source: Eyehunts Tutorial
Exception handling in Python is not so different from Java. Code that might raise an exception is embedded in a try block. Where Java uses catch clauses to catch exceptions, Python uses clauses that begin with except. Custom exceptions are also possible in Python using the raise statement, which forces a specified exception to take place.
Reason to use exceptions
Errors are always to be expected while writing a program in Python, so a backup mechanism is required. Such a mechanism handles any encountered errors; without one, an error may crash the program completely.
The reason to equip a Python program with an exception mechanism is to define, in advance, a backup plan for any error situation that may erupt while executing it.
Catch exceptions in Python
The try statement is used for handling exceptions in Python. A try clause wraps the particular, critical operation that can raise an exception, and the code for handling the exception is written within the except clause. What action to take after catching the exception is up to the programmer.
The program described below loops until the user enters an integer value having a valid reciprocal. The part of the code that can trigger an exception is contained inside the try block. If no exception is raised, the normal flow of execution continues and the except block is skipped; if an exception is raised, it is caught by the except block.
Naming the exception is possible by using the exc_info() function from the sys module, while the program asks the user to make another attempt. Unexpected values like 'a' or '1.3' trigger a ValueError, and an entry of 0 leads to a ZeroDivisionError.
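A minimal sketch of the kind of loop described above (the entries list stands in for interactive user input; it is an assumption made here so the code can run non-interactively):

```python
import sys

entries = ['a', 0, 2]  # stand-ins for values a user might type

for entry in entries:
    try:
        r = 1 / int(entry)
        break
    except:
        # sys.exc_info()[0] names the exception class that was raised
        print("Oops!", sys.exc_info()[0], "occurred.")
        print("Next entry.")

print("The reciprocal of", entry, "is", r)
```

'a' raises a ValueError inside int(), 0 raises a ZeroDivisionError, and 2 finally succeeds.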
Exception handling in Python: try, except and finally
Suspicious code that may raise an exception is placed inside the try statement block, and the code dedicated to handling any raised exceptions is placed within the except block.
Below is an example of the above-explained try and except statements in Python.
try:
    # operational/suspicious code
except SomeException:
    # code to handle the exception
How do they work in Python:
- The statements inside the try block are executed first, to check whether any exception occurs in the code.
- If no exception occurs, the except block is skipped and execution continues after the try statement.
- When a raised exception matches the name given in the except clause ('SomeException' above), that handler runs and the program is able to continue.
- If no handler matches the raised exception, program execution halts and Python reports the error.
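These rules can be seen in a short, runnable sketch (the function and variable names are illustrative):

```python
def classify(x):
    try:
        return 10 / x
    except ZeroDivisionError:
        # this handler matches the raised exception, so the program continues
        return None

ok = classify(2)    # no exception: the except block is skipped
bad = classify(0)   # ZeroDivisionError: the matching handler runs
```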
Defining Except without the exception
Catching every exception with a bare except clause isn't always a viable option, regardless of the programming language: a bare try-except handles all possible types of exceptions, which keeps you ignorant of which exception was actually raised in the first place.
Still, the except statement can be used without naming an exception, as in the following template:
try:
    # You do your operations here
except:
    # If there is an exception, execute this block
else:
    # If there is no exception, execute this block
Or, equivalently, as a template:
try:
    # do your operations
except:
    # if an exception is raised, execute these statements
else:
    # if there is no exception, execute these statements
Here is an example of catching an exception while working with a file. This is useful when the intent is to read a file that may not exist.
try:
    fp = open('example.txt', 'r')
    fp.close()
except:
    print('File is not found')
This example tries to open 'example.txt'. When the file is not found or does not exist, the code executes the except block, printing the message 'File is not found'.
Defining except clause for multiple exceptions
It is possible to deal with multiple exceptions in a single try statement by specifying different exception handlers. As a matter of good programming practice, it is recommended to name the particular exceptions a handler is meant to deal with.
One way to handle several related exceptions in one place is to list them in a tuple in a single except clause: if the interpreter raises an exception matching one in the tuple, the code under that except clause is executed.
The below example shows the way to define such exceptions:
try:
# do something
except (Exception1, Exception2, …, ExceptionN):
# handle multiple exceptions
pass
except:
# handle all other exceptions
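As a concrete, runnable illustration of the tuple form above (a toy example of my own, not from the original article; the function name is made up):

```python
# Toy example: one handler catching two specific exception types.
def safe_divide(a, b):
    try:
        return a / b
    except (ZeroDivisionError, TypeError) as err:
        # Both exception types land here; err tells us which one occurred.
        print('handled:', type(err).__name__)
        return None

print(safe_divide(10, 2))    # normal path, no exception
print(safe_divide(10, 0))    # ZeroDivisionError is handled
print(safe_divide(10, 'x'))  # TypeError is handled
```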
You can also use the same except statement to handle multiple exceptions as follows:
try:
You do your operations here;
………………….
except (Exception1[, Exception2[, ...ExceptionN]]):
If there is an exception from the given exception list,
then execute this block.
………………….
else:
If there is no exception then execute this block.
Exception handling in Python using the try-finally clause
Apart from combining try and except blocks, it is also possible to pair a try block with a finally block.
Here, the finally block carries all the statements that must be executed regardless of whether an exception is raised in the try block.
One benefit of using this method is that it helps release external resources and clear caches, no matter how the try block exits.
Here is the pseudo-code for the try...finally clause.
try:
# perform operations
finally:
#These statements must be executed
Defining exceptions in try… finally block
The example given below closes the file once all the operations are completed, whether or not an exception occurred.
try:
    fp = open("example.txt", 'r')
    # file operations
finally:
    fp.close()
Again, the try statement in Python comes with an optional clause – finally. Its code is executed under all circumstances, which makes it the usual place for releasing external resources.
Developers commonly work with resources such as a network connection to a remote data centre, an open file, or a Graphical User Interface.
All of these situations require cleaning up the resources that were used. Even when the code runs successfully, such post-execution steps are good practice. Actions like shutting down the GUI, closing a file or disconnecting from the network, when written in the finally block, are guaranteed to execute.
The finally block defines what must be executed regardless of raised exceptions. The file operations example below illustrates this very well:
try:
    f = open("test.txt", encoding='utf-8')
    # perform file operations
finally:
    f.close()
Or, in simpler terms:
try:
You do your operations here;
………………….
Due to any exception, this may be skipped.
finally:
This would always be executed.
………………….
Constructing such a block is a better way to ensure the file is closed even if an exception has taken place. Note that an else clause cannot be combined with a plain try...finally; else requires at least one preceding except clause.
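As an aside (not covered in the original article): for files specifically, the with statement gives the same guaranteed cleanup as try...finally, closing the file automatically. The file name here is just illustrative:

```python
# The context manager closes the file even if an exception occurs
# inside the block.
with open('example.txt', 'w') as f:
    f.write('hello')

print(f.closed)  # the file is already closed once the block exits
```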
Understanding user-defined exceptions
Python users can create their own exceptions by deriving classes from the built-in standard exceptions.
There are instances where displaying any specific information to users is crucial, especially upon catching the exception. In such cases, it is best to create a class that is subclassed from the RuntimeError.
Here, the try block raises a user-defined exception, which is then caught in the except block. The variable e holds an instance of the Networkerror class.
Below is the syntax:
class Networkerror(RuntimeError):
    def __init__(self, arg):
        self.args = (arg,)
Once the class is defined, raising the exception is possible by following the below-mentioned syntax.
try:
    raise Networkerror("Bad hostname")
except Networkerror as e:
    print(e.args)
Key points to remember
Note that an exception is an error that occurs while executing the program; such events happen relatively infrequently. As the examples above show, the most common exceptions are 'division by zero', 'attempting to access a non-existent file' and 'adding two incompatible types'. Wrap code in a try statement whenever you are not sure whether an exception will occur, and specify an else block alongside the try-except statement to run code only when no exception is raised in the try block.
Author bio
Shahid Mansuri co-founded Peerbits, one of the leading software development companies in the USA, in 2011; the company provides Python development services. Under his leadership, Peerbits used Python on a project to embed reports and research on a platform, giving every user access to both the freely available dashboard and the exclusive one. His visionary leadership and flamboyant management style have yielded fruitful results for the company. He believes in sharing his strong knowledge base, with a particular focus on entrepreneurship and business.
In Part 1 I looked at PostSharp’s support for INotifyPropertyChanged, and several handy aspects to help with threading: Background, Dispatch, ThreadUnsafe and ReaderWriterSynchronized. In part 2 I’d planned to look at PostSharp’s Actor support and new features for undo/redo, but life got in the way, so part 2 will cover only the Actor aspect, and part 3 will cover new features in PostSharp 3.2.
Actor
The Actor model hasn’t yet received a lot of attention in the .NET world. The model was first defined in 1973 as a means to model parallel and distributed systems, “a framework for reasoning about concurrency.” The model assumes that “concurrency is hard” and provides an alternative to do-it-yourself threading and locking. It’s built into languages like Erlang and Scala, and there are a number of libraries and frameworks. It’s gotten a recent boost in the .NET world with F# agents, the TPL Dataflow library and Project Orleans.
Conceptually, an actor is a concurrency primitive which can both send and receive messages and create other actors, all completely asynchronous, and thread-safe by design. An actor may or may not hold state, but it is never shared.
Where does PostSharp fit in? Remembering the PostSharp promise: “Eradicate boilerplate. Raise abstraction. Enforce good design.” the PostSharp Actor implementation allows developers to work at the “right” level of abstraction, and provides both build time and run time validation to avoid shared mutable state and ensure that private state is accessed by only a single thread at a time.
To use the Actor aspect, install the Threading Pattern Library package from NuGet.
Ping Pong
I started with the PingPong sample (well, PingPing really) from PostSharp. Here’s the code:
[Actor]
public class Player
{
    private string name;
    private int counter;

    public Player(string name)
    {
        this.name = name;
    }

    public async Task Ping(Player peer, int countdown)
    {
        Console.WriteLine("{0}.Ping({1}) from thread {2}", this.name, countdown,
                          Thread.CurrentThread.ManagedThreadId);
        if (countdown > 1)
        {
            await peer.Ping(this, countdown - 1);
        }
        this.counter++;
    }

    public async Task<int> GetCounter()
    {
        return this.counter;
    }
}

class Program
{
    static void Main(string[] args)
    {
        AsyncMain().Wait();
        Console.ReadLine();
    }

    private static async Task AsyncMain()
    {
        Console.WriteLine("main thread is {0}", Thread.CurrentThread.ManagedThreadId);
        Player ping = new Player("Sarkozy");
        Player pong = new Player("Hollande");
        Task pingTask = ping.Ping(pong, 10);
        await pingTask;
        Console.WriteLine("{0} Counter={1}", ping, await ping.GetCounter());
        Console.WriteLine("{0} Counter={1}", pong, await pong.GetCounter());
    }
}
Here the Player class is an actor, and decorated with the PostSharp Actor aspect. The “messages” are implied by the Ping and GetCounter async methods. Whether the “message-ness” of the actor model should be abstracted away is certainly a point for discussion, but it does provide for easier programming within an OO language like C#.
From the output we see that 1) activation (construction) is performed on the caller’s thread, 2) the player’s methods are invoked on background threads, and 3) there is no thread affinity.
Validation
The compile-time validation performed when using the Actor aspect tries to ensure you do the right thing.
1. All fields must be private, and private state must not be made available to other threads or actors.
If we try to define the name field as public:
[Actor]
public class Player
{
    public string name;
    private int counter;
    ...
}
This results in the compiler error: Field Player.name cannot be public because its declaring class Player implements a threading model that does not allow it. Apply the [ExplicitlySynchronized] custom attribute to this field to opt out from this rule.
The same holds true of a public property:
[Actor]
public class Player
{
    ...
    public int Id { get; private set; }
    ...
}
This results in the compile-time error: Method Player cannot return a value or have out/ref parameters because its declaring class derives from Actor and the method can be invoked from outside the actor.
2. All methods must be asynchronous.
To PostSharp this means that method signatures must include the async modifier. If you try to return a Task from a non-async method, something like this:
public Task<string> SayHello(string greeting)
{
    return Task.FromResult("You said: '" + greeting + "', I say: Hello!");
}
You’ll get a compiler error: Method Player cannot return a value or have out/ref parameters because its declaring class derives from Actor and the method can be invoked from outside the actor.
The async rule also means that you must ignore the standard compiler warning about using async when you don’t demonstrably need to, which is why the GetCounter method looks like this:
public async Task<int> GetCounter() { return this.counter; }
PostSharp will dispatch the method to a background task, so you should ignore the compiler warning: This async method lacks ‘await’ operators and will run synchronously. Consider using the ‘await’ operator to await non-blocking API calls, or ‘await Task.Run(…)’ to do CPU-bound work on a background thread.
If you remove the async modifier the Actor validation will fail. You can add an await, but it looks silly, and you shouldn’t await Task.FromResult anyway:
public async Task<int> GetCounter() { return await Task.FromResult<int>(this.counter); }
You can, however, write a synchronous method, which PostSharp will dispatch to a background thread. For example:
public void Ping(Player peer, int countdown)
{
    Console.WriteLine("{0}.Ping from thread {1}", this.name,
                      Thread.CurrentThread.ManagedThreadId);
    if (countdown >= 1)
    {
        peer.Pong(this, countdown - 1);
    }
    this.counter++;
}
This may be a good thing, but also possibly misleading, since at first glance a developer might assume the method is executed synchronously on the current thread.
Rock-Paper-Scissors
Next I tried the “Rock-Paper-Scissors” example as described here.
Here’s my implementation.
namespace Roshambo
{
    public enum Move { Rock, Paper, Scissors }

    [Actor]
    public class Coordinator
    {
        public async Task Start(Player player1, Player player2, int numberOfThrows)
        {
            Task.WaitAll(player1.Start(), Task.Delay(10), player2.Start());
            while (numberOfThrows-- > 0)
            {
                var move1Task = player1.Throw();
                var move2Task = player2.Throw();
                Task.WaitAll(move1Task, move2Task);
                var move1 = move1Task.Result;
                var move2 = move2Task.Result;
                if (Tie(move1, move2))
                {
                    Console.WriteLine("Player1: {0}, Player2: {1} - Tie!", move1, move2);
                }
                else
                {
                    Console.WriteLine("Player1: {0}, Player2: {1} - Player{2} wins!",
                                      move1, move2, FirstWins(move1, move2) ? "1" : "2");
                }
            }
        }

        private bool Tie(Move m1, Move m2)
        {
            return m1 == m2;
        }

        private bool FirstWins(Move m1, Move m2)
        {
            return (m1 == Move.Rock && m2 == Move.Scissors) ||
                   (m1 == Move.Paper && m2 == Move.Rock) ||
                   (m1 == Move.Scissors && m2 == Move.Paper);
        }
    }

    [Actor]
    public class Player
    {
        private Random _random;
        private string _name;

        public Player(string name)
        {
            _name = name;
        }

        public async Task Start()
        {
            int seed = Environment.TickCount + System.Threading.Thread.CurrentThread.ManagedThreadId;
            _random = new Random(seed);
        }

        public async Task<Move> Throw()
        {
            return (Move)_random.Next(3);
        }

        public async Task<string> GetName()
        {
            return _name;
        }
    }
}
class Program
{
    static void Main(string[] args)
    {
        AsyncMain().Wait();
        Console.ReadLine();
    }

    private static async Task AsyncMain()
    {
        var coordinator = new Coordinator();
        var player1 = new Player("adam");
        var player2 = new Player("zoe");
        await coordinator.Start(player1, player2, 20);
    }
}
And the exciting results:
A few things to note:
- I passed a name to the Player constructor but then never used it again. As private state, to access the name you must follow the Actor message rules and use an async method. I wouldn’t want the Coordinator to repeatedly ask each Player for its name, but this could have been done once at play start.
- Trying to uniquely seed a System.Random instance for each player was tricky, and my implementation is a hack. The Random class is not thread-safe, so while sharing a single static Random instance among Player actors is an option, having to perform my own locking around Random.Next calls seemed to violate the spirit of the actor model. The default seed for a Random instance is Environment.TickCount, which if called in close succession will likely return the same value. Using the current thread id as a seed is an alternative, but although PostSharp will ensure that Actor methods will be called on a background thread, there’s no assurance they’ll be different threads for different actor instances. My not-so-robust compromise was to take the sum of TickCount and thread id and cross my fingers. Including the dummy Task.Delay when waiting for the players to start helps.
- The Coordinator here does not hold state, and its Start method will 1) tell the players to start, 2) tell the players to throw, and 3) announce the result.
- The Player does hold non-shared state, and contains Start, Throw and GetName async methods. None of these methods is inherently asynchronous, so I see compiler warnings telling me to consider using the await operator. I could have made these methods synchronous, but as I said above I think it leads to some cognitive dissonance between the code you see and the underlying actor implementation.
Summary
Overall, despite some quirks, using the Actor aspect could be useful. It would be interesting to compare PostSharp’s Actor support with other .NET implementations, and I may try that some day. | https://aroundtuitblog.wordpress.com/tag/threading/ | CC-MAIN-2017-47 | refinedweb | 1,502 | 58.08 |
Hey,
I have a script that calls an external web-service for a merge check, but I cannot get it to work as a conditional merge check.
// This webhook is calling a REST API with the current PR id as a parameter
import groovy.json.JsonSlurper;
import groovy.json.JsonBuilder;
import groovy.json.StreamingJsonBuilder;
def get(String url) {
def connection = url.toURL().openConnection()
connection.setRequestMethod("GET")
connection.doOutput = false
def cont = connection.content.text
connection.connect()
return cont
}
def REST_URL="";
def response = new JsonSlurper().parseText(get(REST_URL));
def ret = (response.status == 'accept') ? true : false;
return ret
maybe I have misunderstood how this feature work.
The web-service will take the PR id as argument and then do lots of checks and then return accept or block in the returned JSON.
I get this to work in the Script Console but when I paste it into a conditional merge check I get errors like:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Script45.groovy: 20: [Static type checking] - You tried to call a method which is not allowed: Script45#get(java.lang.String) @ line 20, column 17.
    def json_resp = get(REST_URL);
                    ^
Script45.groovy: 21: [Static type checking] - You tried to call a method which is not allowed: groovy.json.JsonSlurper#<init>() @ line 21, column 19.
    def jsonSlurper = new JsonSlurper();
                      ^
Script45.groovy: 22: [Static type checking] - Cannot find matching method groovy.json.JsonSlurper#parseText(java.lang.Object). Please check if the declared type is right and if the method exists
Any ideas why this is not working in the conditional merge check?
Thnx in advance!
Cheers,
// Svant | https://community.atlassian.com/t5/Adaptavist-questions/Call-web-service-for-merge-check/qaq-p/918421 | CC-MAIN-2021-25 | refinedweb | 303 | 51.65 |
On Tuesday 11 February 2003 19:39, Stefano Mazzocchi wrote:
> Niclas Hedhman wrote:
> > However, I like the notion that there are "typing information" (of
> > DTD/Schema) for block's inputs and outputs, so at least the configuration
> > tools (not the sitemap in runtime, that's just waste of CPU resources)
> > can validate the "pluggability" between blocks.
> Now, suppose you have a stylesheet that transforms MVL (my vector
> language) into SVG and keeps everything else untouched. Then you have a
> generator that doesn't spit MVL at all. The two combine perfectly, yet
> it's silly to do so.... but how in hell are you going to find out?
As I said, "non-intrusive" at tool level, meaning it can not enforce the
rules, just hint. Right now, I am not willing to commit more thought than
that.
> I'm pretty sure some megaguru like Mr. Clark might be able to create an
> algebraic representation of the input and output schemas, than provide
> an permutation language to obtain 'matching' of the two dealing with
> multidimensionality of namespaces.
Isn't Clark still sitting/stand/laying/swimming contended in Thailand,
enjoying the riches of life?
> But I challenge anybody (megaguru included) to try.
> > The exact mechanism for this is a lot harder, because it needs to be
> > simple and non-intrusive. Also, until there are blocks and some more
> > solid configuration tools, this is less important than, for instance,
> > flows, and can wait.
> Oh, that's for sure.
So, the FFT can start being digested, and brought up again later, when the
"brain enzymes" of the community have broken the complexity down to a small
and simple set that can be handled.
Niclas
So as always when using Python for finacial data related shenanigans, it’s time to import our required modules:
import pandas as pd import numpy as np from pandas_datareader import data
We will first use the pandas-datareader functionality to download the price data from the first trading day in 2000, until today, for the S&P500 from Yahoo Finance as follows:
sp500 = data.DataReader('^GSPC', 'yahoo',start='1/1/2000')
Ok, lets do a quick check to see what format the data has been pulled down in.
sp500.head()
Good stuff, so let’s create a quick plot of the closing prices to see how the S&P has performed over the period.
sp500['Close'].plot(grid=True,figsize=(8,5))
The trend strategy we want to implement is based on the crossover of two simple moving averages; the 2 months (42 trading days) and 1 year (252 trading days) moving averages.
Our first step is to create the moving average values and simultaneously append them to new columns in our existing sp500 DataFrame.
sp500['42d'] = np.round(sp500['Close'].rolling(window=42).mean(), 2)
sp500['252d'] = np.round(sp500['Close'].rolling(window=252).mean(), 2)
The above code both creates the series and automatically adds them to our DataFrame. We can see this as follows (I use the '.tail' call here as the moving averages don't actually hold values until day 42 and day 252 respectively, so they would just show up as 'NaN' in a '.head' call):
sp500.tail()
And here we see that indeed the moving average columns have been correctly added.
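As a quick aside (a toy example of my own, not from the article), the 'NaN' behaviour of a rolling mean mentioned above is easy to see on a small series:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
# A full 3-value window only exists from the third observation onwards,
# so the first two rolling means come back as NaN.
print(s.rolling(window=3).mean())
```

Passing min_periods=1 would instead average over however many observations are available so far.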
Now let’s go ahead and plot the closing prices and moving averages together on the same chart.
sp500[['Close','42d','252d']].plot(grid=True,figsize=(8,5))
Our basic data set is pretty much complete now; all that's really left to do is devise a rule to generate our trading signals.
We will have 3 basic states/rules:
1) Buy Signal (go long) – the 42d moving average is for the first time X points above the 252d trend.
2) Park in Cash – no position.
3) Sell Signal (go short) – the 42d moving average is for the first time X points below the 252d trend.
The first step in creating these signals is to add a new column to the DataFrame which is just the difference between the two moving averages:
sp500['42-252'] = sp500['42d'] - sp500['252d']
The next step is to formalise the signals by adding a further column which we will call Stance. We also set our signal threshold ‘X’ to 50 (this is somewhat arbitrary and can be optimised at some point)
X = 50
sp500['Stance'] = np.where(sp500['42-252'] > X, 1, 0)
sp500['Stance'] = np.where(sp500['42-252'] < -X, -1, sp500['Stance'])
sp500['Stance'].value_counts()
(n.b. there was an error in logic with the above lines of code when this post article was posted – so you will very possibly get significantly different results even if using the same inputs and time period of data as I have – the error was that I had omitted the minus sign in front of the “X” in the second line of code in the above code box – the error was kindly pointed out by Theodore in the comments section on 07/03/2019)
The last line of code above produces:
-1    2077
 1    1865
 0     251
Name: Stance, dtype: int64
A quick plot shows a visual representation of this ‘Stance’. I have set the ‘ylim’ (which is the y axis limits) to just above 1 and just below -1 so we can actually see the horizontal parts of the line.
sp500['Stance'].plot(lw=1.5,ylim=[-1.1,1.1])
Everything is now in place to test our investment strategy based upon the signals we have generated. In this instance we assume for simplicity that the S&P500 index can be bought or sold directly and that there are no transaction costs. In reality we would need to gain exposure to the index through ETFs, index funds or futures on the index…and of course there would be transaction costs to pay! Hopefully this omission wont have too much of an effect as we don’t plan to be in and out of trades “too often”.
So in this model, our investor is either long the market, short the market or flat – this allows us to work with market returns and simply multiply the day’s market return by -1 if he is short, 1 if he is long and 0 if he is flat the previous day.
So we add yet another column to the DataFrame to hold the daily log returns of the index and then multiply that column by the ‘Stance’ column to get strategy returns:
sp500['Market Returns'] = np.log(sp500['Close'] / sp500['Close'].shift(1))
sp500['Strategy'] = sp500['Market Returns'] * sp500['Stance'].shift(1)
Note how we have shifted the sp500['Stance'] series down by one day, so that we are using the stance at the close of the previous day to calculate the return on the next day.
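The effect of that shift can be seen on a tiny made-up example (my own illustration; the numbers are arbitrary):

```python
import numpy as np
import pandas as pd

prices = pd.Series([100.0, 101.0, 99.0, 102.0])
stance = pd.Series([1, 1, -1, -1])

returns = np.log(prices / prices.shift(1))
# Today's return is earned on the position held at yesterday's close --
# multiplying by stance.shift(1) avoids look-ahead bias.
strategy = returns * stance.shift(1)
print(strategy)
```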
Now we can plot the returns of the S&P500 versus the returns on the moving average crossover strategy on the same chart for comparison:
sp500[['Market Returns','Strategy']].cumsum().plot(grid=True,figsize=(8,5))
So we can see that although the strategy seems to perform rather well during market downturns, it doesn’t do so well during market rallies or when it is just trending upwards.
Over the test period it barely outperforms a simple buy and hold strategy, hardly enough to call it a “successful” strategy at least.
But there we have it; A simple moving average cross over strategy backtested in Python from start to finish in just a few lines of code!!
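For a slightly more quantitative comparison than eyeballing the equity curves, a few summary statistics can be bolted on. This is a sketch of my own (not part of the original walkthrough), assuming the 'Market Returns' and 'Strategy' columns built above and a 0% risk-free rate:

```python
import numpy as np
import pandas as pd

def summarize(returns, periods_per_year=252):
    """Annualised stats for a series of daily log returns."""
    returns = returns.dropna()
    total = returns.sum()                               # cumulative log return
    ann_ret = returns.mean() * periods_per_year
    ann_vol = returns.std() * np.sqrt(periods_per_year)
    sharpe = ann_ret / ann_vol if ann_vol else np.nan   # risk-free rate assumed 0%
    return pd.Series({'total log return': total,
                      'annualised return': ann_ret,
                      'annualised vol': ann_vol,
                      'Sharpe (rf=0)': sharpe})

# e.g. summarize(sp500['Market Returns']) and summarize(sp500['Strategy'])
```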
HI I am having trouble with this line. By any chance would you be able to assist?
sp500['42d'] = np.round(sp500['Close'].rolling(window=42).mean(),2)
sp500['252d'] = np.round(sp500['Close'].rolling(window=252).mean(),2)
Sure thing… What is it that you’re having problems with exactly? If you could provide a little bit more information, I’ll try to help…
Are you getting an error message? If you could post it here, I’ll take a look.
Thank you very much for responding to my initial comment, I really appreciate it and I was able to solve the issue. (100% my fault) These tutorials are great. THANK YOU VERY MUCH AGAIN!!
I have another question/though about this back-test. If we were using shorter moving averages, would it be possible to create to following parameters:
Thanks,
Sal
Hi Sal, thanks for the kind words…happy to know my online ramblings are of help to at least one or two people!
Your questions are good ones, and ones that I am sure many people would have when looking into an MA cross over trading strategy. I have had a play around and I believe I have come up with something that will get you what you want. It's not the fastest of code, and it sure ain't the prettiest either, but the final outcome follows the logic of what you have asked for... so here it is:
Couple of things to be aware of:
1) The "threshold" of the distance that the MA series need to diverge by to count as a "cross over" has been set at 50. This can be changed and optimised according to your own preferences. For example, if you wanted the MA lines to JUST cross to count as a "cross over" you could set the threshold (variable X) to 1.
2) I have set the "days" variable to 50 - this is the holding period, and of course you can change this at will also.
Hope that helps and if you have any further questions, please do ask.
Thank you for the response. I am having some trouble understanding this piece of code. The code is working but I would like to better understand it. I am primarily confused with the iloc, and the k and i. I really don't understand what those are or where they are pulling information from. Any clarity would be greatly appreciated!!
#iterate through the DataFrame and update the "Stance2" column to hold the relevant stance
for i in range(X, len(sp500)):
    #logical test to check for 1) a cross of the short MA over the long MA 2) that we are currently in cash
    if (sp500['Stance'].iloc[i] > sp500['Stance'].iloc[i-1]) and (sp500['Stance'].iloc[i-1] == 0) and (sp500['Stance2'].iloc[i-1] == 0):
        #populate the DataFrame forward in time for the amount of days in our holding period
        for k in range(days):
            try:
                sp500['Stance2'].iloc[i+k] = 1
                sp500['Stance2'].iloc[i+k+1] = 0
            except:
                pass
    #logical test to check for 1) a cross of the short MA under the long MA 2) that we are currently in cash
    if (sp500['Stance'].iloc[i] < sp500['Stance'].iloc[i-1]) and (sp500['Stance'].iloc[i-1] == 0) and (sp500['Stance2'].iloc[i-1] == 0):
        #populate the DataFrame forward in time for the amount of days in our holding period
        for k in range(days):
            try:
                sp500['Stance2'].iloc[i+k] = -1
                sp500['Stance2'].iloc[i+k+1] = 0
            except:
                pass
Hi there, no problem at all…glad to hear the code works as intended, at least.
In terms of your other questions regarding the “iloc” and the k and i, I think they may be best tackled in a separate blog post centered around that section of code specifically; it would be a little tough to explain it all properly in these comment boxes.
I'll try my best to find some time this weekend and put something together for you that will hopefully make it a little clearer as to what the code is actually doing.
Until then…
THANK YOU!!!!
Hi Sal – please find the latest blog post which hopefully answers your questions at
May I ask – are you and “algo” the same person? I see posts by both yourself and “algo” about the same topic.
Regards 😀
Hi there, I am having a problem with the import of data from yahoo using pandas. Could you please help?
File "C:\Python27\lib\site-packages\requests\adapters.py", line 504, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='ichart.finance.yahoo.com', port=80): Max retries exceeded with url: /table.csv?a=0&ignore=.csv&s=%5EGSPC&b=1&e=10&d=6&g=d&f=2017&c=2000 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 11004] getaddrinfo failed',))
Hi, thanks for the comment and apologies for the delay in replying, I have been travelling these past 2 weeks. Unfortunately the Yahoo Finance API has been discontinued I believe, and so it no longer works with the Pandas DataReader. You could use a provider like Quandl instead – the syntax is slightly different and the data comes down in a slightly different format, but with a few tweaks you can use it no problem. You will need to install the "quandl" module with "pip install" and then sign up for an account. After that you can search for the contract you need and click the "Python" option under "Export Data" in the top right of the page.
Have a go at that and if you need any extra guidance or clarification, do let me know!
Hello:
When I ran this code line: sp500['Strategy'] = sp500['Market Returns'] * sp500['Stance'].shift(1), I got this error: AttributeError: 'numpy.ndarray' object has no attribute 'shift'
Please what do you think I am doing wrong
That’s very strange, sp500[‘Stance’] should be a “pandas.core.series.Series” not a “numpy.ndarray”.
Please try to run the code
type(sp500['Stance'])
and let me know what the output is.
I eventually sort this out.
Btw, please do you have a code to graphically represent Lake Ratio & Gain to Pain ratio of such a strategy as above?
The Gain to Pain ratio is an easy one to do – I’ve had a quick play around and have some code that calculates and creates a very simple bar chart of the Gain to Pain data. The Lake Ratio is however a much more complicated process…I would have to have a think and spend some time trying to get something put together.
As a start, here is the code for the Gain to Pain…
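In case the snippet itself doesn't come through, here is a minimal reconstruction of my own of the Gain to Pain calculation (sum of monthly returns divided by the absolute sum of the losing months, computed here on monthly sums of daily returns) – not necessarily the exact code from the original reply:

```python
import numpy as np

def gain_to_pain(daily_returns):
    # Group daily returns into calendar months; requires a DatetimeIndex
    # on the input series.
    monthly = daily_returns.groupby(daily_returns.index.to_period('M')).sum()
    pain = monthly[monthly < 0].abs().sum()
    return monthly.sum() / pain if pain else np.nan

# e.g. gain_to_pain(sp500['Strategy'].dropna())
```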
Thank you. That was really helpful
Hello: Please, one more problem. I am trying to plot a simple graphical chart that shows the bearish and bullish periods distinctly using the Exponential Moving Average and create a new regime, etc. I would appreciate it as I need more education on this.
Hi Famson, you can just use the Exponential Weighted Average method included in the Pandas library…
Take a look at:
That explains its use.
So for example we could use the code:
sp500['Adj Close'].ewm(com=0.5).mean()
to get the exponential weighted average of the sp500 Adjusted Close using a “Centre of Mass” of 0.5.
If you wanted to plot it, just add “.plot()” at the end of the line above.
Hope that helps.
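Putting the pieces above together, a minimal regime sketch (made-up prices and spans) could be: +1 when a fast EWMA sits above a slow EWMA, -1 otherwise:

```python
import numpy as np
import pandas as pd

close = pd.Series([100, 101, 103, 102, 99, 97, 98, 101], dtype=float)

fast = close.ewm(span=2).mean()  # reacts quickly
slow = close.ewm(span=5).mean()  # reacts slowly

regime = pd.Series(np.where(fast > slow, 1, -1), index=close.index)
print(regime.tolist())
```

Plotting close alongside close.where(regime == 1) and close.where(regime == -1) (two separate .plot() calls) is one simple way to get differently coloured bullish and bearish segments.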
Yes it does. But what I was actually looking for is an MA plot that uses a different colour and trendline for downward movement and upward movement.
In addition, I am also looking at pairs trade between these 2 indices using specific indicator.
Thank you
Thank you very much for this series of tutorials! I mean all your WORK! Excellent work! Keep it coming please!!!!!
Brilliant work indeed! Thank you very much. Would be nice if you could clarify my below doubt.
I have a csv file with 6 columns, in the below format.
Date Stock 1 Price Stock 2 Price Stock 3 Price Stock 4 Price Market Index Price
But the thing here is I have the price data stored in a csv on my desktop. I would like to use mine instead of pulling from Yahoo. And yes, Stock 1 is the indicator: the whole crossover-signal strategy is derived just from the second column, the Stock 1 price list. Based on this signal, stocks 2, 3 and 4 are purchased, weighted equally. Could you kindly advise me on the code that I need to input as a replacement?
Also, I don’t want to bring in the short position.
Just long position and hold on to it – when short moving avg crosses above long moving average
Sell position entirely – when short moving avg crosses below long moving average; then buy back once again after 5 trading days.
I have been struggling a lot with the code as I’m a newbie in python. It would be really kind of you, if you assist me with the code. Thank you once again for the fantastic work of yours. Keep going.
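While waiting for a reply, here is a hedged starting point (all column names and prices are made up, and the 5-day re-entry rule is left out for brevity): a long-only stance driven by one indicator column, which you would then apply to the other columns' returns:

```python
import numpy as np
import pandas as pd

# Made-up prices; in practice, load your own csv here instead.
df = pd.DataFrame({
    "Stock1": [10, 11, 12, 13, 12, 11, 10, 9, 10, 11],   # the indicator column
    "Stock2": [20, 21, 22, 23, 22, 21, 20, 19, 20, 21],  # one of the traded columns
}, dtype=float)

short_ma = df["Stock1"].rolling(3).mean()
long_ma = df["Stock1"].rolling(5).mean()

# Long-only: hold (1) while the short MA is above the long MA, otherwise flat (0).
stance = pd.Series(np.where(short_ma > long_ma, 1, 0), index=df.index)

# Yesterday's stance applied to today's returns of a traded stock.
strategy_returns = df["Stock2"].pct_change() * stance.shift(1)
print(stance.tolist())  # [0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
```

For equal weighting across stocks 2-4 you would average the three return columns before multiplying by the shifted stance.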
Hi Stephen,
Apologies for the delay in replying – with regard to the request above – to read in a csv file you can use pandas "read_csv":
With regard to the other criteria specified, may I ask what you have come up with so far? If you post it, perhaps I can take a look through and suggest areas to modify etc.
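For completeness, a minimal read_csv sketch (an in-memory string stands in for the file on your desktop; the filename and column names are hypothetical):

```python
import io
import pandas as pd

# This string plays the role of open("prices.csv") on your desktop.
csv_text = """Date,Stock1,Stock2
2017-01-03,10.0,20.0
2017-01-04,10.5,20.4
"""

df = pd.read_csv(io.StringIO(csv_text), index_col="Date", parse_dates=True)
print(df.shape)  # (2, 2)
```

With a real file you would write pd.read_csv('prices.csv', index_col='Date', parse_dates=True).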
I will reply to you via email too to see if I can help.
Cheers
S666
Hi, there is a small change in my problem. Below is the code I'm using. I'm just stuck on the threshold part: that is, rebalance the portfolio only if it deviates beyond the threshold, say 5%. I would like to put this condition before initiating the rebalance, so that it doesn't rebalance every month for even a small deviation. Could you please guide me? Thank you very much.
import bt

# fetch some data; of these stocks, the most recently listed one's date range is considered
data = bt.get('VTI, BND', start='2007,01,11', end='2017,01,11')
print(data.head())

class OrderedWeights(bt.Algo):
    def __init__(self, weights):
        self.target_weights = weights

    def __call__(self, target):
        target.temp['weights'] = dict(zip(target.temp['selected'], self.target_weights))
        return True

# commission
def my_comm(q, p):
    return abs(q) * 0.5

# create the strategy; if you need it to rebalance weekly, use RunWeekly instead of RunMonthly
s = bt.Strategy('Portfolio1', [bt.algos.RunMonthly(),
                               bt.algos.SelectAll(),
                               OrderedWeights([0.5, 0.5]),
                               bt.algos.Rebalance()])

# create a backtest and run it
test = bt.Backtest(s, data, initial_capital=10000, commissions=my_comm)
res = bt.run(test)

# first let's see an equity curve
res.display()
res.plot()

# ok, and what does the return distribution look like?
res.plot_histogram()

# and just to make sure everything went along as planned, let's plot the security weights over time
res.plot_security_weights()
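No reply with the threshold logic appears in this thread, but the check being asked for - skip the rebalance unless some holding's actual weight drifts more than 5% from its target - can be sketched independently of bt (tickers, values and targets are made up):

```python
def needs_rebalance(values, targets, threshold=0.05):
    """True if any holding's actual weight deviates from its target
    by more than `threshold` (absolute difference in weight)."""
    total = sum(values.values())
    return any(
        abs(values[k] / total - targets[k]) > threshold
        for k in targets
    )

targets = {"VTI": 0.5, "BND": 0.5}
print(needs_rebalance({"VTI": 5200, "BND": 4800}, targets))  # 0.52 vs 0.50 -> False
print(needs_rebalance({"VTI": 5600, "BND": 4400}, targets))  # 0.56 vs 0.50 -> True
```

Roughly speaking, inside a bt.Algo subclass this test would run in __call__ and return False (so the rest of the algo stack, including Rebalance, does not run) when the drift is within tolerance.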
Just wanted to know: when you sum up the strategy returns, why don't you apply np.exp to the log returns?
Hey, I am a bit confused about this part:
sp500['Stance'] = np.where(sp500['42-252'] < X, -1, sp500['Stance'])
I think we should have taken the absolute value and changed sign to greater.
For example, if we have 100 and 80, that gives 20, which is < 50, the limit. However, do we want it like this? I thought we wanted only cases where, say, 50 - 110 = -60, which is a sell.
Hi Theodore – you are indeed correct!! Thanks very much for pointing this out…it’s quite an egregious error on my part, as it’s an important part of the logic!!!
The line of code should read:
sp500['Stance'] = np.where(sp500['42-252'] < -X, -1, sp500['Stance'])
I had omitted the minus sign in front of the "X" - we are indeed looking for the value of the 42-period MA minus the 252-period MA to be lower than MINUS 50!!
Again - thanks for bringing that to my attention - I shall change the code accordingly.
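A quick numeric check of the corrected condition, with X = 50 and some made-up values of the 42-day minus 252-day MA spread:

```python
import numpy as np

X = 50
spread = np.array([20, -20, -60, 60])  # 42d MA minus 252d MA, made-up values

wrong = np.where(spread < X, -1, 0)   # flags the 20 and -20 cases as sells too
right = np.where(spread < -X, -1, 0)  # only the genuine -60 breakdown is a sell

print(wrong.tolist())  # [-1, -1, -1, 0]
print(right.tolist())  # [0, 0, -1, 0]
```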
Thank you very much for providing us access to these tutorials. I am a retiree who learns python by studying the resources that he finds on the Internet. Trying to understand these scripts, I have the following questions.
a) What criteria should we follow to set our signal threshold 'X'? You use X = 50 for the SP_500. I have tested with "IBE.MC" and with quotes from two investment funds and, if the threshold is not 0 or very close to 0, practically all the resulting "stances" are zero and the whole post-processing of the scripts is a disaster.
b) The calculation of Volatility / Max Drawdown always gives me the error "ZeroDivisionError: float division by zero"
I will appreciate any suggestions to set these concepts.
Hi there, apologies for the late reply. I will email you directly and help you with this, that will probably be easier than commenting back and forth. Check your inbox shortly 🙂 | https://www.pythonforfinance.net/2016/09/01/moving-average-crossover-trading-strategy-backtest-in-python/ | CC-MAIN-2020-10 | refinedweb | 3,277 | 70.53 |
Although a Java program is sometimes called a class, there are many occasions when a program requires more than one class to get its work done. A multiclass program consists of a main class and any helper classes that are needed. These helper classes earn their name by helping the main class do its work.
An example might be a Java applet that displays a scrolling headline as part of its graphical user interface. The headline could be an independent object in the program, just like other interface elements such as buttons and scroll bars. It makes sense to put the headline into its own class, rather than including its variables and methods in the applet class.
When you divide a program into multiple classes, there are two ways to define the helper classes. One way is to define each class separately, as in the following example:
public class WreakHavoc {
    String author = "Ignoto";

    public void infectFile() {
        VirusCode vic = new VirusCode(1024);
    }
}

class VirusCode {
    int vSize;

    VirusCode(int size) {
        vSize = size;
    }
}
In this example, the VirusCode class is being used as a helper of the WreakHavoc class. Helper classes will often be defined in the same .java source file as the class they're assisting. When the source file is compiled, multiple class files will be produced. The preceding example would produce the files WreakHavoc.class and VirusCode.class.
If more than one class is defined in the same source file, only one of the classes can be public. The other classes should not have public in their class statements. Also, the name of the source file should match the name of the public class. In the preceding example, the name should be WreakHavoc.java.
When creating a main class and a helper class, you can also put the helper inside the main class. When this is done, the helper class is called an inner class.
An inner class is placed within the opening bracket and closing bracket of another class.
public class WreakMoreHavoc {
    String author = "Ignoto";

    public void infectFile() {
        VirusCode vic = new VirusCode(1024);
    }

    class VirusCode {
        int vSize;

        VirusCode(int size) {
            vSize = size;
        }
    }
}
An inner class can be used in the same manner as any other kind of helper class. The main difference—other than its location—is what happens after the compiler gets through with these classes. Inner classes do not get the name indicated by their class statement. Instead, the compiler gives them a name that includes the name of the main class.
In the preceding example, the compiler produces WreakMoreHavoc.class and WreakMoreHavoc$VirusCode.class.
This section illustrates one of the simplest examples of how an inner class can be defined and used. Inner classes are an advanced feature of Java that you won't encounter often as you first learn the language. The functionality they offer can be accomplished by using helper classes defined separately from a main class, and that's the best course to take as you're starting out in the language.
- NAME
- Synopsis
- Description
- Distributions
- Installation
- Constructor and Initialization
- Methods
- What is 'path info'?
- Is there any sample code?
- Why did you fork CGI::Application::Dispatch?
- What version of CGI::Application::Dispatch did you fork?
- How does CGI::Snapp::Dispatch differ from CGI::Application::Dispatch?
- There is no module called CGI::Snapp::Dispatch::PSGI
- Processing parameters to dispatch() and dispatch_args()
- No special code for Apache, mod_perl or plugins
- Unsupported features
- Enhanced features
- This module uses Class::Load to try loading your application's module
- Reading an error document from a file
- Handling of exceptions
- How does CGI::Snapp parse the path info?
- What is the structure of the dispatch table?
- How do I use my own logger object?
- How do I sub-class CGI::Snapp::Dispatch?
- Are there any security implications from using this module?
- Why is CGI::PSGI required in Build.PL and Makefile.PL when it's sometimes not needed?
- Troubleshooting
- See Also
- Machine-Readable Change Log
- Version Numbers
- Credits
- Repository
- Support
- Author
NAME
CGI::Snapp::Dispatch - Dispatch requests to CGI::Snapp-based objects
Synopsis
CGI Scripts
Here is a minimal CGI instance script. Note the call to new()!
#!/usr/bin/env perl

use CGI::Snapp::Dispatch;

CGI::Snapp::Dispatch -> new -> dispatch;
(The use of new() is discussed in detail under "PSGI Scripts", just below.)
But, to override the default dispatch table, you probably want something like this:
MyApp/Dispatch.pm:
package MyApp::Dispatch;

use parent 'CGI::Snapp::Dispatch';

sub dispatch_args
{
    my($self) = @_;

    return
    {
        prefix => 'MyApp',
        table  =>
        [
            ''               => {app => 'Initialize', rm => 'start'},
            ':app/:rm'       => {},
            'admin/:app/:rm' => {prefix => 'MyApp::Admin'},
        ],
    };
}

1;
And then you can write ... Note the call to new()!
#!/usr/bin/env perl

use MyApp::Dispatch;

MyApp::Dispatch -> new -> dispatch;
PSGI Scripts
Here is a PSGI script in production on my development machine. Note the call to new()!
#!/usr/bin/env perl
#
# Run with:
# starman -l 127.0.0.1:5020 --workers 1 httpd/cgi-bin/local/wines.psgi &
# or, for more debug output:
# plackup -l 127.0.0.1:5020 httpd/cgi-bin/local/wines.psgi &

use strict;
use warnings;

use CGI::Snapp::Dispatch;

use Plack::Builder;

# ---------------------

my($app) = CGI::Snapp::Dispatch -> new -> as_psgi
(
    prefix => 'Local::Wines::Controller', # A sub-class of CGI::Snapp.
    table  =>
    [
        ''              => {app => 'Initialize', rm => 'display'},
        ':app'          => {rm => 'display'},
        ':app/:rm/:id?' => {},
    ],
);

builder
{
    enable "ContentLength";
    enable "Static",
        path => qr!^/(assets|favicon|yui)!,
        root => '/dev/shm/html'; # /dev/shm/ is Debian's RAM disk.
    $app;
};
Warning! The line my($app) = ... contains a call to "new()". This is definitely not the same as if you were using CGI::Application::Dispatch or CGI::Application::Dispatch::PSGI. They look like this:
my($app) = CGI::Application::Dispatch -> as_psgi
The lack of a call to new() there tells you I've implemented something very similar but different. You have been warned...
The point of this difference is that new() returns an object, and passing that into "as_psgi(@args)" as $self allows the latter method to be much more sophisticated than it would otherwise be. Specifically, it can now share a lot of code with "dispatch(@args)".
Lastly, if you want to use regexps to match the path info, see CGI::Snapp::Dispatch::Regexp.
Description
This module provides a way to automatically look at the path info - $ENV{PATH_INFO} - of the incoming HTTP request, and to process that path info like this:
- o Parse off a module name

- o Parse off a run mode

- o Create an instance of that module (i.e. load it)

- o Run that instance

- o Return the output of that run as the result of requesting that path info (i.e. module and run mode combo)
Thus, it will translate a URI like this:
/app/index.cgi/module_name/run_mode
into something that is functionally equivalent to this:
my($app) = Module::Name -> new(...);

$app -> mode_param(sub {return 'run_mode'});

return $app -> run;
Distributions
This module is available as a Unix-style distro (*.tgz).
See for help on unpacking and installing distros.
Installation
Install CGI::Snapp::Dispatch as you would for any Perl module:
Run:
cpanm CGI::Snapp::Dispatch
or run:
sudo cpan CGI::Snapp::Dispatch
or unpack the distro, and then either:
perl Build.PL
./Build
./Build test
sudo ./Build install
or:
perl Makefile.PL
make (or dmake or nmake)
make test
make install
Constructor and Initialization
new() is called as my($app) = CGI::Snapp::Dispatch -> new(k1 => v1, k2 => v2, ...).

It returns a new object of type CGI::Snapp::Dispatch.
Key-value pairs accepted in the parameter list (see corresponding methods for details [e.g. "return_type([$string])"]):
- o logger => $aLoggerObject
Specify a logger compatible with Log::Handler.
Note: This logs method calls etc inside CGI::Snapp::Dispatch.
To log within CGI::Snapp, see "How do I use my own logger object?".
Default: '' (The empty string).
To clarify: The built-in calls to log() all use a log level of 'debug', so if your logger has 'maxlevel' set to anything less than 'debug', nothing will get logged.
'maxlevel' and 'minlevel' are discussed in Log::Handler#LOG-LEVELS and Log::Handler::Levels.
- o return_type => $integer
Possible values for $integer:
- o 0 (zero)
dispatch() returns the output of the run mode.
This is the default.
- o 1 (one)
dispatch() returns the hashref of args built from combining the output of dispatch_args() and the args to dispatch().
The requested module is not loaded and run. See t/args.t.
- o 2 (two)
dispatch() returns the hashref of args build from parsing the path info.
The requested module is not loaded and run. See t/args.t.
Default: 0.
Note: return_type is ignored by "as_psgi(@args)".
Methods
as_psgi(@args)
Returns a PSGI-compatible coderef which, when called, runs your sub-class of CGI::Snapp as a PSGI app.
This works because the coderef actually calls "psgi_app($args_to_new)" in CGI::Snapp.
See the next method, "dispatch(@args)", for a discussion of @args, which may be a hash or hashref.
Lastly: as_psgi() does not support the error_document option the way dispatch({table => {error_document => ...} }) does. Rather, it throws errors of type HTTP::Exception. Consider handling these errors with Plack::Middleware::ErrorDocument or similar.
dispatch(@args)
Returns the output generated by calling a CGI::Snapp-based module.
@args is a hash or hashref of options, which includes the all-important 'table' key, to define a dispatch table. See "What is the structure of the dispatch table?" for details.
The unfortunate mismatch between dispatch() taking a hash and dispatch_args() taking a hashref has been copied from CGI::Application::Dispatch. But, to clean things up, CGI::Snapp::Dispatch allows dispatch() to accept a hashref. You are encouraged to always use hashrefs, to avoid confusion.
(Key => value) pairs which may appear in the hashref parameter ($args[0]):
- o args_to_new => $hashref
This is a hashref of arguments that are passed into the constructor (new() ) of the application.
If you wish to set parameters in your app which can be retrieved by the $self -> param($key) method, then use:
my($app)    = CGI::Snapp::Dispatch -> new;
my($output) = $app -> dispatch(args_to_new => {PARAMS => {key1 => 'value1'} });
This means that inside your app, $self -> param('key1') will return 'value1'.
See t/args.t's test_13(), which calls t/lib/CGI/Snapp/App1.pm's rm2().
See also t/lib/CGI/Snapp/Dispatch/SubClass1.pm's dispatch_args() for how to pass in one or more such values via your sub-class.
- o auto_rest => $Boolean
If 1, the HTTP request method (e.g. GET or POST) is appended to the run mode name determined from the dispatch rule, so a GET request dispatched to run mode foo calls foo_GET instead. This can be overridden on a per-rule basis in a derived class's dispatch table. See also the next option.
Default: 0.
See t/args.t test_27().
- o auto_rest_lc => $Boolean
If 1, then in combination with auto_rest, this tells Dispatch that you prefer lower-cased HTTP method names. So instead of foo_POST and foo_GET you'll get foo_post and foo_get.
See t/args.t test_28().
- o default
Specify a value to use for the path info if one is not available. This could be the case if the default page is selected (e.g.: '/cgi-bin/x.cgi' or perhaps '/cgi-bin/x.cgi/').
- o error_document
Note: When using "as_psgi(@args)", error_document makes no sense, and is ignored. In that case, use Plack::Middleware::ErrorDocument or similar.
If this value is not provided, and something goes wrong, then Dispatch will return a '500 Internal Server Error', using an internal HTML page. See t/args.t, test_25().
Otherwise, the value should be one of the following:
- o A customised error string
To use this, the string must start with a single double-quote (") character. This character will be trimmed from the final output.
- o A file name
To use this, the string must start with a less-than sign (<) character. This character will be trimmed from the final output.
$ENV{DOCUMENT_ROOT}, if not empty, will be prepended to this file name.
The file will be read in and used as the error document.
See t/args.t, test_26().
- o A URL to which the application will be redirected
This happens when the error_document does not start with " or <.
Note: In all 3 cases, the string may contain a '%s', which will be replaced with the error number (by sprintf).
Currently CGI::Snapp::Dispatch uses three HTTP errors:
- o 400 Bad Request
This is output if the run mode is not specified, or it contains an invalid character.
- o 404 Not Found
This is output if the module name is not specified, or if there was no match with the dispatch table, or the module could not be loaded by Class::Load.
- o 500 Internal Server Error
This is output if the application dies.
See t/args.t, test_24().
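To illustrate, here is a Python sketch of how the three error_document forms are distinguished, following the rules above (illustration only; the module itself is Perl, and the actual file reading and $ENV{DOCUMENT_ROOT} handling are omitted):

```python
def classify_error_document(spec, status):
    """Return (kind, payload) for an error_document spec, per the documented
    rules: leading '"' = inline string, leading '<' = file name, anything
    else = redirect URL. A '%s' in the spec receives the status code."""
    filled = spec % status if "%s" in spec else spec
    if filled.startswith('"'):
        return ("string", filled[1:])
    if filled.startswith("<"):
        return ("file", filled[1:])
    return ("redirect", filled)

print(classify_error_document('"Error %s occurred', 404))
print(classify_error_document('<errors/%s.html', 500))
print(classify_error_document('/oops?code=%s', 400))
```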
- o prefix
This option will set the string to be prepended to the name of the application module before it is loaded and created.
For instance, consider /app/index.cgi/module_name/run_mode.
This would, by default, load and create a module named 'Module::Name'. But let's say that you have all of your application-specific modules under the 'My' namespace. If you set this option - prefix - to 'My', then it would instead load the 'My::Module::Name' application module.
The algorithm for converting a path info into a module name is documented in "translate_module_name($name)".
- o table
In most cases, simply using Dispatch with the default and prefix options is enough to simplify your application and your URLs, but there are many cases where you want more power. Enter the dispatch table (a hashref), specified here as the value of the table key.
Examples are in the dispatch_args() method of both t/lib/CGI/Snapp/Dispatch/SubClass1.pm and t/lib/CGI/Snapp/Dispatch/SubClass2.pm.
dispatch_args($args)
Returns a hashref of args to be used by "dispatch(@args)".
This hashref is a dispatch table. See "What is the structure of the dispatch table?" for details.
"dispatch(@args)" calls this method, passing in the hash/hashref which was passed in to "dispatch(@args)".
Default output:
{
    args_to_new => {},
    default     => '',
    prefix      => '',
    table       =>
    [
        ':app'     => {},
        ':app/:rm' => {},
    ],
}
This is the perfect method to override when creating a subclass to provide a richer "What is the structure of the dispatch table?".
See CGI::Snapp::Dispatch::SubClass1 and CGI::Snapp::Dispatch::SubClass2, both under t/lib/. These modules are exercised by t/args.t.
new()
See "Constructor and Initialization" for details on the parameters accepted by "new()".
Returns an object of type CGI::Snapp::Dispatch.
translate_module_name($name)
This method is used to control how the module name is translated from the matching section of the path. See "How does CGI::Snapp parse the path info?".
The main reason that this method exists is so that it can be overridden if it doesn't do exactly what you want.
The following transformations are performed on the input:
- o The text is split on '_'s (underscores)
Next, each word has its first letter capitalized. The words are then joined back together using '::'.
- o The text is split on '-'s (hyphens)
Next, each word has its first letter capitalized. The words are then joined back together without the '-'s.
Examples:
module_name      => Module::Name
module-name      => ModuleName
admin_top-scores => Admin::TopScores
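The same rules can be sketched in a few lines of Python (an illustration of the documented transformations only, not the module's Perl source):

```python
def translate_module_name(name):
    """Underscores become '::' boundaries; hyphens are removed; each
    word has only its first letter capitalized."""
    def cap(word):
        return word[:1].upper() + word[1:]
    parts = name.split("_")
    return "::".join("".join(cap(w) for w in part.split("-")) for part in parts)

print(translate_module_name("module_name"))       # Module::Name
print(translate_module_name("module-name"))       # ModuleName
print(translate_module_name("admin_top-scores"))  # Admin::TopScores
```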
What is 'path info'?
For a CGI script, it is just $ENV{PATH_INFO}. The value of $ENV{PATH_INFO} is normally set by the web server from the path info sent by the HTTP client.
A request to /cgi-bin/x.cgi/path/info will set $ENV{PATH_INFO} to /path/info.
For Apache, whether $ENV{PATH_INFO} is set or not depends on the setting of the AcceptPathInfo directive.
For a PSGI script, it is $$env{PATH_INFO}, within the $env hashref provided by PSGI.
Path info is also discussed in "mode_param([@new_options])" in CGI::Snapp.
Similar comments apply to the request method (GET, PUT etc) which may be used in rules.
For CGI scripts, request method comes from $ENV{HTTP_REQUEST_METHOD} || $ENV{REQUEST_METHOD}, whereas for PSGI scripts it is just $$env{REQUEST_METHOD}.
Is there any sample code?
Yes. See t/args.t and t/lib/*.
Why did you fork CGI::Application::Dispatch?
To be a companion module for CGI::Snapp.
What version of CGI::Application::Dispatch did you fork?
V 3.07.
How does CGI::Snapp::Dispatch differ from CGI::Application::Dispatch?
There is no module called CGI::Snapp::Dispatch::PSGI
This just means the PSGI-specific code is incorporated into CGI::Snapp::Dispatch. See "as_psgi(@args)".
Processing parameters to dispatch() and dispatch_args()
The code which combines parameters to these 2 subs has been written from scratch. Obviously, the intention is that the new code behave in an identical fashion to the corresponding code in CGI::Application::Dispatch.
Also, the re-write allowed me to support a version of "dispatch(@args)" which accepts a hashref, not just a hash. The same flexibility has been added to "as_psgi(@args)".
No special code for Apache, mod_perl or plugins
I suggest that sort of stuff is best put in sub-classes.
Unsupported features
- o dispatch_path()
Method dispatch_path() is not provided. For CGI scripts, the code in dispatch() accesses $ENV{PATH_INFO} directly, whereas for PSGI scripts, as_psgi() accesses the PSGI environment hashref $$env{PATH_INFO}.
Enhanced features
"new()" can take extra parameters:
- o return_type
Note: return_type is ignored by "as_psgi(@args)".
This module uses Class::Load to try loading your application's module
CGI::Application::Dispatch uses:
eval "require $module";
whereas CGI::Snapp::Dispatch uses 2 methods from Class::Load:
try_load_class $module;

croak 404 if (! is_class_loaded $module);
For CGI scripts, the 404 (and all other error numbers) is handled by sub _http_error(), whereas for PSGI scripts, the code throws errors of type HTTP::Exception.
Reading an error document from a file
CGI::Application::Dispatch always prepends $ENV{DOCUMENT_ROOT} to the file name. Unfortunately, this means that when $ENV{DOCUMENT_ROOT} is not set, File::Spec prepends a '/' to the file name. So, an error_document of '<x.html' becomes '/x.html'.
This module only prepends $ENV{DOCUMENT_ROOT} if it is not empty. Hence, with an empty $ENV{DOCUMENT_ROOT}, an error_document of '<x.html' becomes 'x.html'.
See sub _parse_error_document() and t/args.t test_26().
Handling of exceptions
CGI::Application::Dispatch uses a combination of eval and Try::Tiny, together with Exception::Class. Likewise, CGI::Application::Dispatch::PSGI uses the same combination, although without Exception::Class.
CGI::Snapp::Dispatch just uses Try::Tiny. This applies both to CGI scripts and PSGI scripts. For CGI scripts, errors are handled by sub _http_errror(). For PSGI scripts, the code throws errors of type HTTP::Exception.
How does CGI::Snapp parse the path info?
Firstly, the path info is split on '/' chars. Hence /module_name/mode1 gives us ('', 'module_name', 'mode1').
The value 'module_name' is passed to "translate_module_name($name)". In this case, the result is 'Module::Name'.
You are free to override "translate_module_name($name)" to customize it.
After that, the prefix option's value, if any, is added to the front of 'Module::Name'. See "dispatch_args($args)" for more about prefix.
Finally, 'mode1' becomes the name of the run mode.
Remember from the docs for CGI::Snapp, that this is the name of the run mode, but is not necessarily the name of the method which will be run. The code in your sub-class of CGI::Snapp can map run mode names to method names.
For instance, a statement like:
$self -> run_modes({rm_name_1 => 'rm_method_1', rm_name_2 => 'rm_method_2'});
in (probably) sub setup(), shows how to separate run mode names from method names.
What is the structure of the dispatch table?
Sometimes it's easiest to explain with an example, so here you go:
CGI::Snapp::Dispatch -> new -> dispatch # Note the new()!
(
    args_to_new =>
    {
        PARAMS => {big => 'small'},
    },
    default => '/app',
    prefix  => 'MyApp',
    table   =>
    [
        ''                         => {app => 'Blog', rm => 'recent'},
        'posts/:category'          => {app => 'Blog', rm => 'posts'},
        ':app/:rm/:id'             => {app => 'Blog'},
        'date/:year/:month?/:day?' =>
        {
            app         => 'Blog',
            rm          => 'by_date',
            args_to_new => {PARAMS => {small => 'big'} },
        },
    ]
);
Firstly note, that besides passing this structure into "dispatch(@args)", you could sub-class CGI::Snapp::Dispatch and design "dispatch_args($args)" to return exactly the same structure.
OK. The components, all of which are optional, are:
- o args_to_new => $hashref
This is how you specify a hashref of parameters to be passed to the constructor (new() ) of your sub-class of CGI::Snapp.
- o default => $string
This specifies a default for the path info in the case this code is called with an empty $ENV{PATH_INFO}.
- o prefix => $string
This specifies a namespace to prepend to the class name derived by processing the path info.
E.g. If path info was /module_name, then the above would produce 'MyApp::Module::Name'.
- o table => $arrayref
This provides a set of rules, which are compared - 1 at a time, in the given order - with the path info, as the code tries to match the incoming path info to a rule you have provided.
The first match wins.
Each element of the array consists of a rule and an argument list.
Rules can be empty (see '' above), or they may be a combination of '/' chars and tokens. A token can be one of:
- o A literal
Any token which does not start with a colon (:) is taken to be a literal string and must appear exactly as-is in the path info in order to match. In the rule 'posts/:category', posts is a literal.
- o A variable
Any token which begins with a colon (:) is a variable token. These are simply wild-card place holders in the rule that will match anything - in the corresponding position - in the path info that isn't a slash.
These variables can later be referred to in your application (sub-class of CGI::Snapp) by using the $self -> param($name) mechanism. In the rule 'posts/:category', ':category' is a variable token.
If the path info matched this rule, you could retrieve the value of that token from within your application like so: my($category) = $self -> param('category');.
There are some variable tokens which are special. These can be used to further customize the dispatching.
- o :app
This is the module name of the application. The value of this token will be sent to "translate_module_name($name)" and then prefixed with the prefix if there is one.
- o :rm
This is the run mode of the application. The value of this token will be the actual name of the run mode used. As explained just above ("How does CGI::Snapp parse the path info?"), this is not necessarily the name of the method within the module which will be run.
- o An optional variable
Any token which begins with a colon (:) and ends with a question mark (?) is considered optional. If the rest of the path info matches the rest of the rule, then it doesn't matter whether it contains this token or not. It's best to only include optional variable tokens at the end of your rule. In the rule 'date/:year/:month?/:day?', ':month?' and ':day?' are optional-variable tokens.
Just as with variable tokens, optional-variable tokens' values can be retrieved by the application, if they existed in the path info. Try:
if (defined $self -> param('month') ) { ... }
Lastly, $self -> param('month') will return undef if ':month?' does not match anything in the path info.
- o A wildcard
The wildcard token '*' allows for partial matches. The token must appear at the end of the rule.
E.g.: 'posts/list/*'. Given this rule, the 'dispatch_url_remainder' param is set to the remainder of the path info matched by the *. The name ('dispatch_url_remainder') of the param can be changed by setting '*' argument in the argument list. This example:
'posts/list/*' => {'*' => 'post_list_filter'}
specifies that $self -> param('post_list_filter') rather than $self -> param('dispatch_url_remainder') is to be used in your app, to retrieve the value which was passed in via the path info.
See t/args.t, test_21() and test_22(), and the corresponding sub rm5() in t/lib/CGI/Snapp/App2.pm.
- o A HTTP method name
You can also dispatch based on HTTP method. This is similar to using auto_rest but offers more fine-grained control. You include the (case insensitive) method name at the end of the rule and enclose it in square brackets. Samples:
':app/news[post]'   => {rm => 'add_news'},
':app/news[get]'    => {rm => 'news'},
':app/news[delete]' => {rm => 'delete_news'},
The main reason that we don't use regular expressions for dispatch rules is that regular expressions did not provide for named back references (until recent versions of Perl), in the way variable tokens do.
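A rough Python sketch of the token semantics described above (illustration only; the module itself is Perl, and HTTP-method suffixes like '[post]' are not handled here):

```python
def match_rule(rule, path):
    """Match a dispatch rule against a path info string, returning a dict
    of captured tokens or None. Supports literals, ':var', ':var?' and a
    trailing '*' wildcard."""
    r_parts = [p for p in rule.split("/") if p != ""]
    p_parts = [p for p in path.split("/") if p != ""]
    captured = {}
    for i, tok in enumerate(r_parts):
        if tok == "*":  # wildcard swallows the remainder of the path
            captured["dispatch_url_remainder"] = "/".join(p_parts[i:])
            return captured
        if i >= len(p_parts):
            # acceptable only if every remaining token is optional
            return captured if all(t.endswith("?") for t in r_parts[i:]) else None
        if tok.startswith(":"):
            captured[tok.strip(":?")] = p_parts[i]  # variable token
        elif tok != p_parts[i]:
            return None  # literal token must match exactly
    return captured if len(p_parts) <= len(r_parts) else None

print(match_rule(":app/:rm/:id?", "blog/view/7"))  # {'app': 'blog', 'rm': 'view', 'id': '7'}
print(match_rule(":app/:rm/:id?", "blog/view"))    # {'app': 'blog', 'rm': 'view'}
print(match_rule("posts/:category", "other/x"))    # None
```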
How do I use my own logger object?
Study the sample code in CGI::Snapp::Demo::Four, which shows how to supply a Config::Plugin::Tiny *.ini file to configure the logger via the wrapper class CGI::Snapp::Demo::Four::Wrapper.
Also, see t/logs.t, t/log.a.pl and t/log.b.pl.
See also "What else do I need to know about logging?" in CGI::Snapp for important info and sample code.
How do I sub-class CGI::Snapp::Dispatch?
You do this the same way you sub-class CGI::Snapp. See this FAQ entry in CGI::Snapp.
Are there any security implications from using this module?
Yes. Since CGI::Snapp::Dispatch will dynamically choose which modules to use as content generators, it may give someone the ability to execute specially crafted modules on your system if those modules can be found in Perl's @INC path. This should only be a problem if you don't use a prefix.
Of course those modules would have to behave like CGI::Snapp based modules, but that still opens up the door more than most want.
By using the prefix option you are only allowing Dispatch to pick modules from a pre-defined namespace.
Why is CGI::PSGI required in Build.PL and Makefile.PL when it's sometimes not needed?
It's a tradeoff. Leaving it out of those files is convenient for users who don't run under a PSGI environment, but it means users who do use PSGI must install CGI::PSGI explicitly. And, worse, it means their code does not run by default, but only runs after manually installing that module.
So, since CGI::PSGI's only requirement is CGI, it's simpler to just always require it.
Troubleshooting
- o It doesn't work!

Did you try tracing the method calls with a logger? You can pass a logger into your application (i.e. into your CGI::Snapp sub-class) via args_to_new:

CGI::Snapp::Dispatch -> new -> as_psgi({args_to_new => {logger => $logger} }, ...);
In addition, you can trace CGI::Snapp::Dispatch itself with the same (or a different) logger:
CGI::Snapp::Dispatch -> new(logger => $logger) -> as_psgi({args_to_new => {logger => $logger} }, ...);
The entry to each method in CGI::Snapp and CGI::Snapp::Dispatch is logged using this technique, although only when maxlevel is 'debug'. Lower levels for maxlevel do not trigger logging. See the source for details. By 'this technique' I mean there is a statement like this at the entry of each method:
$self -> log(debug => 'Entered x()');
- o Are you confused about combining parameters to dispatch() and dispatch_args()?
I suggest you use the request_type option to "new()" to capture output from the parameter merging code before trying to run your module. See t/args.t.
- o Are you confused about patterns in tables which do/don't use ':app' and ':rm'?
The golden rule is:
- o If the rule uses 'app', then it is non-capturing
This means the matching app name from $ENV{PATH_INFO} is not saved, so you must provide a module name in the table's rule. E.g.: 'app/:rm' => {app => 'MyModule'}, or perhaps use the prefix option to specify the complete module name.
- o If the rule uses ':app', then it is capturing
This means the matching app name from $ENV{PATH_INFO} is saved, and it becomes the name of the module. Of course, prefix might come into play here, too.
- o Did you forget the leading < (read from file) in the customised error document file name?
- o Did you forget the leading " (double-quote) in the customised error document string?
- o Did you forget the embedded %s in the customised error document?
This triggers the use of sprintf to merge the error number into the string.
- o Are you trying to use this module with an app not based on CGI::Snapp?
Remember that CGI::Snapp's new() takes a hash, not a hashref.
- o Did you get the mysterious error 'No such field "priority"'?
You did this:
as_psgi(args_to_new => $logger, ...)
instead of this:
as_psgi(args_to_new => {logger => $logger, ...}, ...)
CGI::Snapp - An almost back-compat fork of CGI::Application.
As of V 1.01, CGI::Snapp now supports PSGI-style apps.
And see CGI::Snapp::Dispatch::Regexp for another way of matching the path info.
Machine-Readable Change Log
The file Changes was converted into Changelog.ini by Module::Metadata::Changes.
Version Numbers
Version numbers < 1.00 represent development versions. From 1.00 up, they are production versions.
Credits
Please read "CONTRIBUTORS" in CGI::Application::Dispatch, since this module is a fork of the non-Apache components of CGI::Application::Dispatch.
Repository
Support
Author
CGI::Snapp::Dispatch:
From within a custom modelBinder (derived from DefaultModelBinder), how do I access the existing values of properties in the model into which the new values are being injected?
Thanks in advance for your help!
Hello,
The tutorial is going great and I am learning. I have had to figure out a few errors along the way, but the VWD IDE makes debugging doable. Until now, that is: I have run into the problem that, when adding a new view, the requested class Movies1.Models.Movie is not available in the pull-down menu. If it is entered manually, then the List option is not active in the next pull-down. At this point I have to ask a question that reveals my experience level, which is why I am working on a beginner-level tutorial: does running the debug function accomplish the compile task? Does the build function? I ask because the instructions remind me that the classes will not be available if the program has not been compiled. Here is what I have:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
namespace Movie1.Controllers
{
public class MoviesController : Controller
{
MoviesEntities db = new MoviesEntities();
public ActionResult Index()
{
var movies = from m in db.Movies
where m.ReleaseDate > new DateTime(1984, 6, 1)
select m;
return View(movies.ToList());
}
}
}
I read the FAQs and searched…
Throughout last week, I had my first taste of Redux. During this time, we implemented Redux with React but, it does not need to be used exclusively with React. However, this has been my only experience with it thus far, so I will explain it the way it is used with React.
Upon introduction to Redux, you may be left feeling instantly confused. Initially learning React, most days are spent getting comfortable with the idea of passing props from one component, to another, to another... to another.... to.... another.
While this is an easy concept to understand, it's not necessarily the most efficient. There are a variety of state management systems used within React, but I want to discuss Redux and what has helped me wrap my mind around it.
Let's keep this simple and straight to the point. Redux is Uber Eats.
I know what you may be thinking... What are you are talking about? Let me explain.
In traditional prop passing, think of each component as a neighbor. If you needed something from the grocery store, imagine that you have to ask neighbor E, to ask neighbor D, to ask neighbor C, to ask neighbor B, to ask neighbor A, if you can borrow some of their bread. It works... but it's pretty inconvenient.
What if there was a way to just have the bread delivered straight to you?!
AH, this is where Redux shines. With the use of the Redux store, that bread (AKA state) is always available whenever you need it. No passing props, no talking to neighbors, just simply call up the store and get what you need!
The Redux Store
The Redux Store takes about 3.87 seconds to build, and is one of the easiest things to do in React. After installing Redux with your package manager of choice, simply import the function into your main component (usually index.js).
import { createStore } from 'redux'
Boom! Now you have the power, just create a store really quick! Be sure to export your reducer from its proper file, and import it into your
index.js file.
const store = createStore(yourReducerGoesHere)
Simple enough? Now your store exists in a variable called
store. It takes in a reducer as well (this is how it will manipulate the state that's held within the store). Now, let's talk about the Provider.
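If it helps to demystify what the store actually is, here is a dependency-free toy version of the idea. This is an illustration only, not Redux's real createStore (which does considerably more):

```javascript
// Toy re-implementation of the core idea behind createStore -- for illustration
// only, not the real Redux implementation.
function createToyStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' }); // build initial state
  const listeners = [];

  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action);  // the reducer computes the next state
      listeners.forEach((fn) => fn()); // notify subscribers
      return action;
    },
    subscribe: (fn) => listeners.push(fn)
  };
}

// A minimal reducer to exercise it:
const initialState = { names: ['Bob', 'Susan'] };
const reducer = (state = initialState, action) =>
  action.type === 'ADD_NAME'
    ? { ...state, names: [...state.names, action.payload] }
    : state;

const store = createToyStore(reducer);
store.dispatch({ type: 'ADD_NAME', payload: 'Chris' });
console.log(store.getState().names); // ['Bob', 'Susan', 'Chris']
```

So "the store" is really just state plus a dispatch loop around your reducer.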
Providing state to your components
Provider is simple enough to remember, because it provides access to the state from the store to your components. I say access, because it doesn't necessarily give your components the state just yet (this is what we have
connect() for).
In that same component, you'll want to import Provider.
import { Provider } from 'react-redux' Booyah!
After that, you want to wrap your
App component in that provider. Think of this as granting your application the ability to use the store. It typically looks something like this:
ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById("root")
);
See that sneaky little prop pass, right there? It almost forms a sentence! In the Provider we passed in the store. It can almost be read as, "Providing the store to the component". Well, that's how I read it at least! :)
Awesome, now we created a store, passed the store to the provider, which is providing that store to our application. Before seeing how we grab the state, we need to have state first! On to the reducer!
Reducing The Stress
Reducers! This is one of the powerful aspects of Redux. Essentially, I call them the execution guidelines. The reducer file will typically consist of two things: the initial state, and the reducer itself.
For example, for simplicity sake, let's say our initial state has an array of names.
const initialState = { names: ['Bob', 'Susan'] }
Woo! They are looking great. Now the reducer comes into play. This section can get messy, so we'll keep it extremely simple. Reducers are functions full of
if...else conditions. The easier way to write this is with switch cases. To prevent confusion, I'll provide an example of both,
if...else and a switch case, if you happen to be familiar with both!
Our case that modifies state will be called, 'Add Name'. However, in Redux cases, it's common practice to use all capital letters for this (kind of similar to just screaming at the reducer to do its job), so it would look like
'ADD_NAME'.
If none of the cases do match, you want to be sure to return the
initialState. I know this is a lot of words, so let's see an example!
export const reducer = (state = initialState, action) => {
  if (action.type === 'ADD_NAME') {
    return {
      ...state,
      names: [...state.names, action.payload]
    }
  } else {
    return state
  }
}
What's happening here is the reducer takes in state, and an action. State will be undefined if you don't provide it an initial state, so in this example, we assign
state to
initialState. The action will be an object containing a
type and sometimes a
payload property. For example, this action object for this example may look like:
{ type: 'ADD_NAME', payload: newNameGoesHere }
The type specifies what reducer case to trigger, like instructions! The payload is just data, it can be called anything. In this case, we have a new name we want to add to the
names array. So we spread the whole state object first, then spread the
names array into a new array, and add the new name onto the end; this new name is referenced by
action.payload.
So back to my point, reducers are the execution guidelines. They take instruction from the action, and perform based on what
action.type is called. This will make more sense in a second when we discuss actions. The
payload property is just a common way of passing in the data you want to incorporate into state, it can be called anything -
beanChili if you want! :D
Like I said, reducers are typically written in a switch case format, so they may look like this when you come across them:
export const reducer = (state = initialState, action) => {
  switch (action.type) {
    case 'ADD_NAME':
      return {
        ...state,
        names: [...state.names, action.payload]
      }
    default:
      return state
  }
}
This achieves the same result, just tends to be less words, the longer your code gets!
Okay, so we've covered the store, the provider, initial state, and the reducer. Now let's take a peek at actions!
Lights, Camera, ACTIONS
As I stated earlier, actions are the instructions for the reducer. Action creators are functions, that return actions. These actions are objects similar to the one I referenced above, with a
type and a
payload property.
The way these work, is your action creator function is called within your component, which returns an object of "instructions". In this case, you call the action, and it will return an object that looks like:
{ type: 'ADD_NAME', payload: newName }
This function could be represented by:
export const addName = (newName) => { return { type: 'ADD_NAME', payload: newName } }
In this case, when the
addName function is invoked, we will pass in the name we want to add, as
newName!
Now, this returned object gets passed into the reducer. Can you tell what's going to happen?
The reducer enters the switch case, and checks the
action.type. OH! The type is
'ADD_NAME', so hop into that return statement.
Okay, so it is returning state, and then attaching
action.payload onto the enter of the array... what is
action.payload?
Well, referencing our object above, we see
action.payload is the
newName. Let's say that we passed in the name 'Chris' as the
newName argument. What happens now, is Chris is tacked onto the end of the array. Now our
users array in state looks like:
['Bob', 'Susan', 'Chris'] Awesome!
So essentially we just called a function (an action creator), which said, "Hey Reducer... add a new name, the new name is Chris!"
The reducer responds, "Cool, added the name, here's your new state!"
Simple enough, right? They definitely get more complex as more functionality is incorporated into your application, but these are the basics.
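Stripped of React and the store, the round trip just described is really two plain function calls. A runnable sketch (plain JavaScript, no Redux required):

```javascript
// The action creator from the article: returns an instruction object.
const addName = (newName) => ({ type: 'ADD_NAME', payload: newName });

// The reducer from the article, in switch form.
const initialState = { names: ['Bob', 'Susan'] };
const reducer = (state = initialState, action) => {
  switch (action.type) {
    case 'ADD_NAME':
      return { ...state, names: [...state.names, action.payload] };
    default:
      return state;
  }
};

// "Hey Reducer... add a new name, the new name is Chris!"
const nextState = reducer(initialState, addName('Chris'));
console.log(nextState.names); // ['Bob', 'Susan', 'Chris']
```

Note that initialState itself is untouched; the reducer returns a brand-new state object.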
However, there is one final question:
How do the components actually access this state?
Simple! By
connect! Let's take a look.
Connecting the links
Connecting the store state to our components becomes a bit of extra work, but essentially we have our state, and provide access to the main component (App.js). However, now we need to accept access, via the
connect() method.
Connect is a higher-order component, which is a different topic itself, but essentially this gets invoked twice in a row. It is called during the export of your component.
First, let's import
connect into our component:
import { connect } from 'react-redux';
Say we have a
<List /> component being rendered in
App.js, and we want to connect
List.js. In that component, on the export line we could do something like:
export default connect(null, {})(List);
The first invocation takes in two items, the state you're receiving, and the actions you want to use (in that order). Let's touch on the state.
Remember, connecting only accepts access, it doesn't actually provide the state, that's what we have
mapStateToProps for. :D
mapStateToProps says, "Oh, you connected your component? You granted access? Well here is the state you asked for!"
Okay... Maybe the component doesn't talk, but if they did, they'd probably say something along those lines.
This
mapStateToProps example, is a function that receives the state, and is then passed into the connect method. Like this:
const mapStateToProps = state => {
  return {
    names: state.names
  }
}
This function takes in state, which is the entire state object from the reducer. In this case, our state object only has one array inside of it, but these state objects are typically 10x as long, so we have to specify what information we want!
In this return line, we say, "Return an object with a names property." How do we know what
names is? Well, we access it off of the
state object, by
state.names.
Our returned property doesn't need to be called names, we could do something like:
const mapStateToProps = state => {
  return {
    gummyBears: state.names
  }
}
But, that's not very semantic is it? We want to understand that
names is an array of names. So it's common practice to keep the same property name, in your returned state object!
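Since mapStateToProps receives the entire state object, it is really just a picker. A small runnable sketch (the extra ages and theme fields are invented here purely for illustration):

```javascript
// mapStateToProps only picks out the slice this component cares about.
const mapStateToProps = (state) => {
  return {
    names: state.names
  };
};

// Imagine the store's full state is much bigger:
const storeState = {
  names: ['Bob', 'Susan'],
  ages: [42, 39],
  theme: 'dark'
};

const props = mapStateToProps(storeState);
console.log(props); // { names: ['Bob', 'Susan'] } -- ages and theme are not passed
```

Only the returned slice ends up as props, which keeps components from caring about the rest of the store.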
We're almost finished, so hang in there! Let's recap where we're at.
We have our component accessing state from the store, through
mapStateToProps. The state exists in the component now, but the component can't access it just yet.
First, we need to pass it to the connect function. The connect functions says, "Access to the store granted! Now... what state am I granting access to?"
So we pass in the function returning state,
mapStateToProps, like this:
export default connect(mapStateToProps, {})(List) Radical!
We're almost there!
Now the component is capable of receiving that state as props, like it traditionally would from a parent component. Maybe we are mapping over it, and displaying each name on the screen in a
div. Here's what this may look like!
const List = props => {
  return (
    <div>
      {
        props.names.map(name => {
          return <div>{name}</div>
        })
      }
    </div>
  )
}
Awesome! But there is one final problem... Where does the action get called?
Typically there would be an input, so you could input a new name, and add it to the array - but, for simplicity sake, let's just add a button that adds the name Chris, when clicked! (Not very functional, but you see my point! :D)
We need to access that action creator function. Well, earlier we exported that function so we could import it where we need it, like in our
List.js component!
import { addName } from "../actions"
The file location will depend on your directory structure, but it is common to have all actions exported from an
index.js file in your
actions directory, and then import from that directory. Don't worry too much about that now though!
Great, we have our function, but we can't just pass this function as props to our component just yet. This action is related to Redux, and with Redux we need to connect the action through the
connect higher-order component, so when we return our action object, our reducer can accept it and perform accordingly!
Remember that extra space in the
connect at the bottom of our
List.js component? Let's fill that in with our
addName function.
export default connect(mapStateToProps, {addName})(List);
Now, we can pass in our function as props (similar to our state), and use the function as we need!
const List = props => {
  return (
    <div>
      <button onClick={() => props.addName('Chris')}></button>
      {
        props.names.map(name => {
          return <div>{name}</div>
        })
      }
    </div>
  )
}
I simply created a button, and added an
onClick event listener, which triggers the
addName function, and passing in 'Chris', like we set out to achieve!
Geez! that was a mission... but we made it! So, let's recap what is happening exactly.
The Redux Recap
We started with creating our
store, and passed access to it through the provider, which wrapped our application. Then we created our initial state to use, and formed our reducer which manipulates the state. We built an action creator,
addName which is a function that returns instructions for the reducer. These specific instructions said, "We want to add the name Chris to the names array!"
The reducer then takes that information and adds the name to the state. Our component accesses the state through
connect, and receives the state through the
mapStateToPropsfunction. We also import our action creator,
addName, and pass it to
connect as well.
The result? We can access our action creator, and our state, as props! However, we aren't passing this information through any other components, just pulling it directly from the store. Delivery straight to your door! Uber eats roc- I mean, Redux rocks!
I understand there is so much more to Redux, and many other things you can change to make everything easier and simpler to use, I just wanted to cover some of the basic foundations of it, and what has helped me understand it a bit better!
I would love to hear your thoughts/opinions on Redux, and your experience with it, in the comments. I love talking about React + Redux! :D
Discussion
Often when instructors try to explain Redux they don't put themselves in the beginner's shoes, or don't explain it as they should.
This article is beautiful because it does just that!
It went through the logical steps a beginner's mind goes through when trying to learn Redux. And this is exactly what makes it super useful.
As always Dylan you know how to simplify and express the JS!
Ebrahim, you rock man! I'm so glad you got some value from it. Trying my best, and I'm glad you really enjoyed it! Happy Sunday, and happy coding! 💪🏻🔥
Thanks, quite clearly written up to de-muddle some of the layers of boilerplate Redux forces you into.
However you don't go the extra step of showing how redux is useful - actually modifying values or retrieving data from a server.
Since we're passing in props, how does Redux reactively update state?
Where would you put code to do GET requests?
Also I'm unclear of the value of the extra layer of abstraction around actions, they're just wrapper functions for calling reducer methods, why not name and add methods to the reducer and call them directly?
Great point, totally makes sense! I wanted to refrain from getting to in depth with the possibilities with Redux, because there's a TON to go over. I just wanted to introduce the basic idea of actions and reducers to anyone who has been struggling to grasp them! I most definitely plan to go a bit more in depth to cover various situations, such as the GET requests.
Thanks for the suggestions, I'll be sure to dive a bit deeper and offer examples of various situations to provide a better grasp around the concepts! :)
thanks for the response. I'll surely look out for the next installment!
Dylan, thank you, thank you, thank you! I am also a student in a full stack web development bootcamp and Redux was something I just couldn't grasp, at least not in the way it was taught to me. This article helped me understand exactly HOW Redux works and with that information, I am now able to use it more confidently. I think you should think about becoming an instructor. You have a wonderful teaching style...I think you would be one of the best! Again, thank you for this!
Sherri that means the world! Thanks so much! While I definitely have a long... LONG way to go, I appreciate the kind words greatly! Super stoked to be able to provide some value to you, and help you grasp the Redux basics a bit more. That's awesome hear! Thanks so much! :)
Good article! Just a point:
here you did:
export const reducer = (state = initialState, action) => {
switch(action.type){
case 'ADD_NAME':
return {
...state,
names: [...names, action.payload]
}
default:
return state
}
}
But in the return of ADD_NAME, must be
names: [...state.names, action.payload]
the same in if...else structure.
Thank you so much for the correction, you're most definitely right! Slipped my mind - I got those changed in the article. I appreciate that, thanks again! :)
A pleasure! Thanks for sharing your article!
I am beginner in Redux, trying to learn, recently I tried to read about redux, then after reading only part of any article about redux , just stopped as there is too much info, Good here is I am able to read the complete article of you, such a clear and simple explanation of redux terms and basic concept.
One doubt: in the reducer actions, why do we need to spread the state along with the names[] array? (I am thinking we need only the updated state, i.e. names only.)
return {
...state,
names: [...names, action.payload]
}
I'm glad you enjoyed the read! Thank you!
So the reason we have to return ...state, as well, is because what you're returning is taking place of your current state. In this case, names is the only thing in state, so it doesn't really matter.
However, say we had another array with ages, or an object in state, in addition to the names array, we'd have to be sure to copy everything from State with ...state, and then change what we need to change.
If there was another array called ages, and the names array, and we only returned names, then ages would be overridden and be gone forever. If that makes sense. :)
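That explanation can be shown concretely in plain JavaScript; the ages array here is invented just to show what the spread preserves (and what disappears without it):

```javascript
const state = { names: ['Bob', 'Susan'], ages: [42, 39] };

// With the spread: everything else in state survives the update.
const withSpread = { ...state, names: [...state.names, 'Chris'] };
console.log(withSpread.ages); // [42, 39] -- still there

// Without the spread: ages is gone forever, as the comment describes.
const withoutSpread = { names: [...state.names, 'Chris'] };
console.log(withoutSpread.ages); // undefined
```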
Thank youu Dylan, You made my day into Redux, I was just searching for articles on redux. I could not find more easier than yours.
People feels they themselves are into this redux world along with you in story telling. Thats a great skill you do have.
Hopefully I also start writing like this in coming days.
I've been using Redux for almost a year now. But never have I ever come across such a beautiful explanation... Awesome mate. Keep up the good work...
This means the world! Super glad it was easy to understand for you, that's super special to me! Glad you enjoyed the read! :)
This is the simplest way to make others understand what Redux is. Really loved it brother. You really simplified Redux.
Thanks Jacob! That means the world, super stoked to hear that it clicked with you. Appreciate your support, have a great Sunday! 😄
Very nice blog post! I'm a strong believer in explaining things using language anyone can understand and this post does just that!
Awesome! Thanks so much! That was definitely the goal to break it down to a level that was easier to understand. Glad it provided you some value!😄
Thank you Dylan....this is pretty clean and 'lightweight' to understand 😊👍🏼
I'm currently taking a course on React+Redux and this article explained it so much better for my n00b brain than said course. Thanks for posting this! Great article!
Redux is such an over-complication but hey, its Facebook tech right? Should be good right? 🤦♂️
thank you thank you thank you...... so much for a great explanation...
This is awesome! Many thanks :)
NOTE: I figured this out shortly after asking the question. See bottom of post for the answer.
Hi. I’m not sure if this is the right place to ask this question. (It may be more about my environment, and not sdl-specific.) If it’s not, please let me know and I’ll try on Stack Overflow. Cheers.
I’ve been using SDL2 in C++ for a while, and have successfully had a cross platform project set up between windows and osx (using MingW and GCC, building on command line). I’ve been recently trying to get a similar cross platform setup going using dotnetcore. This was relatively easy for an SDL2-only project, but I’m having trouble getting SDL2_Image working.
The process I’ve followed is based on this tutorial, which uses Homebrew to install sdl on osx.
https://www.youtube.com/watch?v=vCI8XwHrbL0
And part two, which brings SDL2_Image into the mix:
I just repeated the entire process again from scratch with the following steps:
Install sdl and sdl_image:
brew install sdl2
brew install sdl2_image
(Both install fine.)
Create a console project:
dotnet new console
Then I include the CSharp sdl2 bindings from flibitijibibo. (These are the ones created for use in the FNA-XNA project. I’ve succesfully used these in Windows.)
I entered the following code to test
using System;
using SDL2;

namespace SDL2_Image_test
{
    class Program
    {
        static void Main(string[] args)
        {
            SDL.SDL_Init(SDL.SDL_INIT_EVERYTHING);

            var window = SDL.SDL_CreateWindow(
                "Test",
                SDL.SDL_WINDOWPOS_CENTERED,
                SDL.SDL_WINDOWPOS_CENTERED,
                800,
                600,
                SDL.SDL_WindowFlags.SDL_WINDOW_RESIZABLE
            );

            var renderer = SDL.SDL_CreateRenderer(
                window,
                -1,
                SDL.SDL_RendererFlags.SDL_RENDERER_ACCELERATED |
                SDL.SDL_RendererFlags.SDL_RENDERER_PRESENTVSYNC
            );

            var texture = SDL_image.IMG_LoadTexture(renderer, "bunny.png");

            SDL.SDL_Event e;
            var running = true;
            while (running)
            {
                while (SDL.SDL_PollEvent(out e) != 0)
                {
                    switch (e.type)
                    {
                        case SDL.SDL_EventType.SDL_QUIT:
                            running = false;
                            break;
                    }
                }
            }

            SDL.SDL_DestroyTexture(texture);
            SDL.SDL_DestroyRenderer(renderer);
            SDL.SDL_DestroyWindow(window);
            SDL.SDL_Quit();
        }
    }
}
This results in SDL successfully opening a window, but then immediately crashing with the following message.
Unhandled exception.) at SDL2.SDL_image.IMG_LoadTexture(IntPtr renderer, String file)
Following the advice of the message I set DYLD_PRINT_LIBRARIES with:
export DYLD_PRINT_LIBRARIES=1
When I next run the project, I get a very long list of dyld events. I can see sdl2_image a few lines up:
dyld: loaded: <6065E6D9-57CA-3DFE-B894-52C533F7CEB4> /usr/local/lib/libSDL2_image.dylib
dyld: loaded: <144C7AAF-D976-3665-ABB2-9FEA80AB1384> /usr/local/opt/libpng/lib/libpng16.16.dylib
dyld: unloaded: <6065E6D9-57CA-3DFE-B894-52C533F7CEB4> /usr/local/lib/libSDL2_image.dylib
dyld: unloaded: <144C7AAF-D976-3665-ABB2-9FEA80AB1384> /usr/local/opt/libpng/lib/libpng16.16.dylib
dyld: loaded: <93D03DD2-8CA3-3199-9238-9FDED9189F14> /usr/local/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Globalization.Native.dylib
Unhandled exception.
dyld: loaded: <14FD3ABB-E36C-3F75-BBFE-8218AE7B5AFE> /usr/local/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Native.dylib)
I note that it seems to load
libSDL2_image.dylib, and then unloads it again? At this point I’m at a loss. I’m not sure if this is a problem with Homebrew, Dotnet, my SDL2_Binary, or something else?
I just updated my mac to the latest Catalina, in the hope that it might help, but no luck.
Any tips or advice would be greatly appreciated. Thanks.
UPDATE:
Funny how you often figure it out right after posting to the forum, huh?
I went back to my C++ project on mac, and noticed that it was also crashing, with this message:
dyld: Library not loaded: /usr/local/opt/jpeg/lib/libjpeg.9.dylib
Based on some advice from Stack Overflow, I ran:
sudo find / -name 'libjpeg.*'
I saw that there were lots of libjpgs on my machine. As an experiment I ran:
brew install libjpeg
…and got the message
Warning: jpeg 9c is already installed
The currently linked version is 8d
You can use `brew switch jpeg 9c` to link this version.
After running…
brew switch jpeg 9c
… all was well. My C++ project ran fine, and so does my C# project!
I guess I’ll leave this info here in case it helps someone else. Cheers!
Hey, folks! I am back with my new article on databases. Did you know that you can manage your database in real time using a platform called Firebase? Sounds great, na… We will try this out today.
Before we start working with Firebase, let us first ask: what is it? What are its features?
What is Firebase?
Firebase is a mobile and web application platform with tools and infrastructure designed to help developers build high-quality apps. Firebase is made up of complementary features that developers can mix-and-match to fit their needs.
Features:
It provides the following features:
- Realtime Database
- Hosting
- Authentication
- Storage
- Cloud Messaging
- Remote Config
- Test Lab
- Crash Reporting
- Notifications
- App Indexing
- Dynamic Links
- Invites
- AdWords
- AdMob
Today, we will discuss the Realtime Database in Firebase, using Ionic 2.
First of all, let us see how we can create a database in Firebase:
To start using the database in our Ionic project, we need to follow the steps below:
1. Create a project and to know how to create a project in Firebase, you can follow this link and go through Step 1 and 2.
2. You now need to click on your project shown on the console. On clicking, a new window gets opened which is shown as below:
3. Click on Database on side Menu shown as below:
4. On clicking on Database, a new window will again be opened up.
The area highlighted above in Red colour shows our database.
Note: In Firebase, you don't need to create a database; it is automatically generated, as shown above in the green box. In the above image, the blue box shows the name of the table. The pink box shows the key; it can be taken as a row of the table. The black box shows the columns of that row. All the tables are created through coding in Ionic 2, so as soon as you write the code, the tables will be created and shown in hierarchical form, as in the above image.
This is all you have to do in Firebase.
6. Now, create a new Ionic 2 project, or open an already existing project in the command prompt, and install the following:
- Update your Ionic 2 with latest version. Type below command inside your project directory.
npm install -g ionic@latest
- Install app scripts. Type:
npm install @ionic/app-scripts@latest --save-dev
- Install types/request. Type:
npm install @types/request@0.0.30 --save-dev --save-exact
- Install firebase and angularfire 2:
npm install firebase angularfire2 --save
7. Open your project in Visual Studio Code.
8. Open app.module.ts file and import AngularFireModule.
import { AngularFireModule } from 'angularfire2';
9. Now, copy the script code generated while adding firebase into your web app. Your script code must be in the form of:
You have to copy only the code in the red box into your app.module.ts file, before @NgModule, as an exported firebaseConfig constant, as shown below:
export const firebaseConfig = {
  apiKey: "AIzaSyDdQPigpjA0t-ZPD5Fowy5O6ctllM9dNRg",
  authDomain: "apply-cbc33.firebaseapp.com",
  databaseURL: "",
  storageBucket: "apply-cbc33.appspot.com",
  messagingSenderId: "745925553315"
}
Note: You will not have same script code as in the image. The script in above image is just a demo. Also, you can reuse the same script code in your other projects too, but just remember, your table will be created in the database whose script code you have entered in your project.
10. Now, add below line in your imports:
imports: [
  IonicModule.forRoot(MyApp),
  AngularFireModule.initializeApp(firebaseConfig)
],
11. Import AngularFire in your .ts file.
import { AngularFire } from 'angularfire2';
Add it in constructor too:
public angFire: AngularFire
Note: Steps 8 to 11 import firebase into your project. Now you are ready to create tables and save your data in them.
I. Now, let us see how we can add our data into our firebase. Follow steps below:
1. Store each value in a separate variable, like below:
var Title = this.title;
2. Now, create a firebase reference and store data in it. For this, write code as below:
var firebaseRef = firebase.database().ref();
var SongsRef = firebaseRef.child('Songs');
SongsRef.push({ title: Title });
Here,
var firebaseRef = firebase.database().ref();: This line is creating a firebase reference.
var SongsRef = firebaseRef.child('Songs');: This line says that a child named "Songs" is created and its reference is stored in SongsRef. Songs is the name of the table, which will be created automatically.
SongsRef.push({ ... });: This is used to push data into our table.
Finally, print the data into your html file in the way you want and then run the application. On running the application, your database should look like this:
II. Update the data in firebase:
To update the data, pass into the function the id/key of the song, the variable name (say A) whose value you have to change, and the variable (say B) in which the updated value is present.
Then, in the function definition, store the value of B in variable A using the update function as follows:
this.songs.update(songId, { title: data.title });
III. Remove the data from database.
To remove the data, pass the id/key of the song into the function.
Remove song of passed id using function remove.
removeSong(songId: string) {
  this.songs.remove(songId);
}
These are the basic firebase functionalities for the realtime database. Hope you have understood them. If not, don't worry. Start reading again; hope you will understand next time. Hehehe… Well, jokes apart, if you have any query, do comment on this article. Till then, enjoy, because lots of things are still to come.
Hi, this is probably a silly noob question, but to learn you have to ask, right?
I've made a script that generates a list like this: [1, 2]. What do I do if I want to use only one of the values in another part of my script, and not both of them? Like if I want another part of the script to know that 1 is 1 and that 2 is 2, and use this information to do stuff.
Like this?
a = [1, 2]
print a
print a[0]
print a[1]
no, like this
import Blender
from Blender import Image
image = Image.Load("C:\path")
image.getSize()
image.getSize() returns a kind of list of the lengths of the two axes in the picture (x, y). I would like to use these values individually.
well, I figured out how to use your answer Xjazz, thanks for that
I will probably post more questions in this thread when they come up…
You could say:
x,y = image.getSize()
to unpack the list, or you could do
size = image.getSize()
size[0], size[1]
to get things out.
New question about the same thing!!
How do I do the same thing if I have a list like this [[1, 2], [3, 4]]
Indexes nest, so if you have a list like that and you want the first element of the first list it would be like:
list = [[1, 2], [3, 4]]
elem = list[0][0]
what does unsubscriptable mean?
That means you're trying to subscript an object which does not support retrieving values via subscripts, like:
var = 2
var[0] = 1
would give you an error, because var is an integer, not a list, dictionary, set, etc., which are iterable types (you can move over the elements contained within them).
I'm trying to use this for making a line between two points; the code really is like this:
p1 = [1, 0, 0]
p2 = [0, 0, 0]
verts = [p1,p2]
faces = [0, 1]
ob = B.Object.New('Mesh', 'Meshob')
me = B.Mesh.New('myMesh')
me.verts.extend( verts )
me.faces.extend( faces )
ob.link( me )
scene = B.Scene.GetCurrent()
scene.link( ob )
B.Redraw(-1)
but I want the x,y coordinates in p1 and p2 to vary in different situations, depending on the x,y coordinates of one pixel, using match.append([x,y]) to get the coordinates. How do I do that?
I'm having a hard time understanding what you're trying to do (not the connection part, the image part). I'm wondering what this "match.append([x,y])" you mentioned is (obviously it's a list you're trying to append to, but what are you trying to do with this list once it's been appended to). Could you explain a little more clearly?
Also the code you posted could be modernized some with:
p1 = [1, 0, 0]
p2 = [0, 0, 0]
verts = [p1,p2]
faces = [0, 1]
me = B.Mesh.New('myMesh')
me.verts.extend( verts )
me.faces.extend( faces )
scene = B.Scene.GetCurrent()
scene.objects.new(me, 'MeshOb')
B.Redraw(-1)
I am trying to use the coordinates from one specific pixel in p1 and another in p2. I've got the coordinates, but they are of course only x,y coordinates; the other problem is that I don't know how to make the program use only the x,y coordinates with zero as the z coordinate.
so what I kind of want the program to do is this; the names of the coordinates in this example are q, w, e, r instead of x, y, to illustrate that they are different coordinates.
p1 = [q, w, 0]
p2 = [e, r, 0]
p1 and p2 coordinates are given like this: p1 = [x, y, z]
I could also code it like this, instead of using p1 and p2:
verts = [[1, 0, 0], [0, 0, 0]]
faces = [0, 1]
me = B.Mesh.New('myMesh')
me.verts.extend( verts )
me.faces.extend( faces )
scene = B.Scene.GetCurrent()
scene.objects.new(me, 'MeshOb')
B.Redraw(-1)
Do you understand it now?
#or set from wherever
p1=[1,2,0]
p2=[p1[0], p1[1], 0]
#p2 is now [1,2,0]
If you have q,w,e,r already, then what you posted would work just fine:
p1 = [q, w, 0]
p2 = [e, r, 0]
I don't think you understand my question (if it isn't me who doesn't understand the answer). Could we discuss this on MSN or something?
You might be interested to know that I am using this help in my project, whose goal is to program software for a simple laser 3D scanner. Anyone who wants to help me, please contact me by mail or MSN.
my adress is [email protected]
What are you actually trying to do?
I've got the coordinates, but they are of course only x,y coordinates; the other problem is that I don't know how to make the program use only the x,y coordinates with zero as the z coordinate
So you want to take (x,y) and turn it into [x,y,0]?
It would help if you gave an example, like:
By this point in the program, I have found x and y to be 3 and 5.
I want p1 to be [3,5,0] and p2 to be [5,3,0]
Cheers,
Ian
Ok, it’s time to put out the script.
import Blender as B
from Blender import Image
import timeit

image = Image.Load("C:\color.psd")
print "Image from", image.getFilename()
print "loaded to obj", image.getName()
image.setXRep(4)
image.setYRep(2)
print "All Images available no2", Image.Get()
print "Size", image.getSize()
print image.getPixelI(0,0)
a = image.getSize()
yaks = a[0]
xaks = a[1]
match = []
for y in range(yaks):
    for x in range(xaks):
        if image.getPixelI(x, y) == [224, 38, 14, 255]:
            match.append([x,y])
print match
p1 = [match[0], 0]
p2 = [match[1], 0]
verts = [p1, p2]
faces = [0, 1]
me = B.Mesh.New('myMesh')
me.verts.extend( verts )
me.faces.extend( faces )
scene = B.Scene.GetCurrent()
scene.objects.new(me, 'MeshOb')
B.Redraw(-1)
well, the problem is that it doesn't work. print match prints this: [[23, 23], [23, 29]]. Those are the x,y coordinates I want p1 and p2 to be; how should I make it do that?
p1=[match[0][0],match[0][1],0]
p2=[match[1][0],match[1][1],0]
Lists are nested in python.
thanks man.
new question. how do I automatically make one new point (p3, p4, p5, and so on) for each pixel with the [224, 38, 14, 255] color?
The best way to explain what I want it to do now is to automatically make this if I have five points with the [224, 38, 14, 255] color:
p1 = [match[0][0],match[0][1], 0]
p2 = [match[1][0],match[1][1], 0]
p3 = [match[2][0],match[2][1], 0]
p4 = [match[3][0],match[3][1], 0]
p5 = [match[4][0],match[4][1], 0]
so I guess what I realy tries to do is sort of this:
for y in range(yaks):
    p+1 = [match[0+1][0],match[1+1][1],0]
and then sort of add one to the numbers I've written +1 behind, each time it repeats. But I get a syntax error: can't assign to operator.
So can anybody tell me how to do this properly and working?
No worries.
You can use exec("code goes here") to compile and run code on the fly, but you'll be left with loads of variables lying around the place. Or you could just build a new list.
p=[]
for i in range(blah):
    p.append( [match[i][0],match[i][1],0] )

#or do this
for item in match:
    p.append([item[0],item[1],0])

#then you can use
me.verts.extend(p)
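For what it's worth, the same list can be built in one line with a list comprehension (plain Python, no Blender needed; the `match` values here are just the ones printed earlier in the thread):

```python
# match is a list of [x, y] pixel coordinates, as printed earlier in the thread
match = [[23, 23], [23, 29]]

# build [x, y, 0] for every matched pixel in a single expression
p = [[x, y, 0] for x, y in match]
print(p)  # [[23, 23, 0], [23, 29, 0]]
```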
thanks!
Now I've got a bunch of points and I have to get lines between them. Do you think I could use this to make these lines? If not, what should I do?
When is AbstractItemModel's data() method called ?
Is there documentation for when AbstractItemModel's data() method is called ?
It seems that my application has many unnecessary calls to AbstractItemModel::data(). I'm using AbstractTableModel, and I understand that data() must be called multiple times for the various roles of an index. I would expect data() to be called when the table is initially drawn and when a modification is made to a cell. Also, I'm not manually redrawing the view anywhere in code.
I tried to get more details by having the rowCount() and columnCount() return 1 so I essentially have a table with only one cell. Next, I put a print statement inside data() and I could see it is being called for roles 6,7,9,10,1,0, and 8. The problem is that it does this four times so it's being called 28 times (7 roles * 4 times = 28) for only one cell in the table.
Understanding the sequence of calls for the Model/View architecture is critical to my doing well on this project. Any documentation on the sequence of calls (i.e. when is data()/setData() etc. called) ? Any advice ? I can provide code if necessary.
- Chris Kawa Moderators last edited by
The timing and order of data() calls is implementation detail of Qt and will likely change. It is not documented (officially).
The recommendation is that data() should be implemented to be "as fast as possible". You shouldn't make any assumptions on when or how many times certain data role will be polled. Optimally a data() implementation should be a switch with bunch of returns and no (or almost no) calculations.
If you need some heavy calculations for particular data roles you should consider caching it. You can take a look at QCache or QContiguousCache classes to see if they can help you out.
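The caching idea can also be sketched outside Qt. Here is a plain-Python illustration (the function names are made up for the example) of keeping the hot data() path cheap by memoizing a heavy per-cell computation:

```python
from functools import lru_cache

# Stand-in for an expensive per-cell computation; in a real model this
# might be a heavy calculation behind the display role.
@lru_cache(maxsize=1024)
def expensive_display_text(row, column):
    return "cell %d,%d" % (row, column)

def data(row, column):
    # after the first call per cell, this is only a cache lookup
    return expensive_display_text(row, column)

print(data(0, 0))  # computed
print(data(0, 0))  # served from the cache
```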
Thank you Chris.
I just noticed one thing. Here is main.cpp:
#include "CMainWindow.h"
#include <QApplication>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    CMainWindow w;
    w.show();
    return a.exec();
}
If I remove the return a.exec(); line, then data() gets called as many times as I would expect for the one table cell (i.e. getting each role ONCE upon launch). However, when I add the line back, there are many more calls to data(). What is it about entering the main event loop (via exec()) that does this?
- Chris Kawa Moderators last edited by
As I said - that's an implementation detail so I can only speculate.
I implemented an example just like you described:
#include <QApplication>
#include <QAbstractItemModel>
#include <QTableView>
#include <QDebug>

class Model : public QAbstractItemModel {
public:
    Model(QObject* parent = nullptr) : QAbstractItemModel(parent) {}

    QModelIndex index(int row, int column, const QModelIndex&) const {
        return createIndex(row, column);
    }
    QModelIndex parent(const QModelIndex&) const {
        return QModelIndex();
    }
    int rowCount(const QModelIndex&) const {
        return 1;
    }
    int columnCount(const QModelIndex&) const {
        return 1;
    }
    QVariant data(const QModelIndex&, int role) const {
        qDebug() << role;
        return QVariant();
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QTableView w;
    w.setModel(new Model());
    w.show();
    return a.exec();
}
The output is "6 7 9 10 1 0 8" and that's it. It's like that even if I hide and show the widget multiple times. When you look at the Qt::ItemDataRole definition these are all roles needed to draw a cell - a font, text, colors, decoration etc.
So if you're getting these calls multiple times you're probably doing something else that forces the view to update. Check that you're not setting the model multiple times, resizing the window (or just the widget), calling setGeometry, resetting the model, emitting signals that could trigger an update, etc. I can't really help you more without seeing the whole context.
Thank you Chris!
I looked at my data() function and found I was creating a copy of a map with 44000+ elements. This was the reason my interface lagged, not the many calls to data(). Your statement that data() should be implemented to be "as fast as possible" made me take a second look at my code. I still haven't figured out the reason for the many calls to data() (it's probably one of the things you listed), but I can at least continue with development.
- SGaist Lifetime Qt Champion last edited by
Hi,
AFAIK, as soon as there is an update needed then data will be called. An update can be triggered for many reasons: moving the widget, moving another window over your widget, minimize/maximize and lots of other things that depends also on your model, if you called reset, layoutChanged etc. | https://forum.qt.io/topic/47071/when-is-abstractitemmodel-s-data-method-called | CC-MAIN-2020-40 | refinedweb | 781 | 64.1 |
We've got a base project going for us, which is nice. But let's add some proper unit tests as well as some code formatters! Note; you can find the live repository for this project here.
Notes
The refactored test in our video looks like this:
import pytest

@pytest.mark.parametrize("n,expected", [(0, 0), (5, 5), (10, 10), (26, 26), (1000, 26)])
def test_headtail_size(base_clumper, n, expected):
    assert len(base_clumper.head(n)) == expected
    assert len(base_clumper.tail(n)) == expected
It totally fails. You may be able to see why when you look at the code that was in our Clumper class:
class Clumper:
    def __init__(self, blob):
        self.blob = blob

    def __len__(self):
        return len(self.blob)

    def keep(self, *funcs):
        data = self.blob
        for func in funcs:
            data = [d for d in data if func(d)]
        return Clumper(data)

    def head(self, n):
        return Clumper([self.blob[i] for i in range(n)])

    def tail(self, n):
        return Clumper([self.blob[-i] for i in range(1, n + 1)])
If you're interested in seeing how we refactored this code you can check the code on the live github repository here. You can also find a refactored set of tests here.
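One way the size bug can be fixed (a sketch under the same API, not necessarily the exact refactor in the repository) is to lean on Python slicing, which clamps out-of-range bounds instead of raising:

```python
class Clumper:
    def __init__(self, blob):
        self.blob = blob

    def __len__(self):
        return len(self.blob)

    def head(self, n):
        # slices clamp automatically: blob[:1000] on 26 items gives 26 items
        return Clumper(self.blob[:n])

    def tail(self, n):
        # guard n == 0, since blob[-0:] would return the whole list
        return Clumper(self.blob[-n:] if n > 0 else [])

c = Clumper(list(range(26)))
print(len(c.head(1000)), len(c.tail(1000)))  # 26 26
```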
Our problem is mapping our legacy database schema to XML in a maintainable way – allowing for easy handling of future changes to the schema of either database or XML.
In our case we need to disseminate large sets of information in several formats – therefore XML and XSLT are a natural fit. Now the problem is mapping our legacy database schema to XML in a maintainable way – allowing for easy handling of future changes to the schema of either the database or the XML.
Our solution – create a hierarchical value object model – where each value object apart from having the usual value properties has a toXML() method which serializes the object to XML according to the corresponding part of the XML schema.
By having a hierarchical model a top level value object may contain collections of lower level value objects and within it's toXML() method it would call upon those objects toXML() method therefore allowing for very clean localization of the database schema to XML schema mapping.
Therefore, by adding XML "awareness" as methods to the value objects, the server work stays the same – setting the properties of the objects – and the client work becomes trivial: receive any value object and dump it to XML, while still having the flexibility of combining and ordering it as wanted – which you don't have if you pass only XML back to the client. In our case there is also the need to access properties of the value objects apart from generating XML.
This pattern creates self contained value objects where all of the mapping code is located where it logically "belongs" - within the object - where the properties are kept.
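The project itself is Java, but the hierarchy-of-value-objects idea can be sketched in a few lines of Python (the class and field names here are invented for illustration): each value object emits its own element, and parents delegate to their children, so each class owns exactly its part of the database-to-XML mapping.

```python
import xml.etree.ElementTree as ET

class LineItem:
    """Leaf value object: knows only how to map its own fields."""
    def __init__(self, sku, qty):
        self.sku = sku
        self.qty = qty

    def to_element(self):
        e = ET.Element("item")
        e.set("sku", self.sku)
        e.set("qty", str(self.qty))
        return e

class Order:
    """Top-level value object: delegates to its child value objects."""
    def __init__(self, order_id, items):
        self.order_id = order_id
        self.items = items

    def to_element(self):
        e = ET.Element("order", id=self.order_id)
        for item in self.items:
            e.append(item.to_element())  # mapping stays local to each child
        return e

    def to_xml(self):
        return ET.tostring(self.to_element(), encoding="unicode")

print(Order("A1", [LineItem("x", 2)]).to_xml())
```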
XML Aware Value Objects (28 messages)
- Posted by: Noam Borovoy
- Posted on: November 26 2001 12:10 EST
Threaded Messages (28)
- XML Aware Value Objects by Stefan Siprell on November 26 2001 12:48 EST
- XML Aware Value Objects by Giedrius Trumpickas on November 26 2001 20:09 EST
- Value Objects by Giedrius Trumpickas on November 27 2001 21:33 EST
- XML Aware Value Objects by Christo Angelov on March 28 2002 11:45 EST
- XML Aware Value Objects by Noam Borovoy on April 05 2002 09:39 EST
- XML Aware Value Objects by George Coles on May 19 2002 10:46 EDT
- XML Aware Value Objects by Jonathan Gibbons on November 29 2001 11:29 EST
- XML Aware Value Objects by Reg Whitton on December 05 2001 04:08 EST
- XML Aware Value Objects by Andrew Stevens on December 05 2001 12:18 EST
- XML Aware Value Objects by Noam Borovoy on December 05 2001 05:59 EST
- XML Aware Value Objects by Darius Silingas on December 04 2001 06:52 EST
- XML Aware Value Objects by Jerome Banks on December 04 2001 14:24 EST
- XML Aware Value Objects by Yawar Ali on December 05 2001 03:47 EST
- XML Aware Value Objects by Giedrius Trumpickas on December 05 2001 07:50 EST
- XML Aware Value Objects by jeff anderson on December 05 2001 11:42 EST
- XML Aware Value Objects by Giedrius Trumpickas on December 05 2001 11:55 EST
- XML Aware Value Objects by Tiberiu Fustos on December 07 2001 07:06 EST
- XML Aware Value Objects by Noam Borovoy on January 11 2002 11:05 EST
- XML Aware Value Objects by J?ns Weimarck on March 26 2002 06:30 EST
- XML Aware Value Objects by Atluri satish on November 12 2002 04:38 EST
- XML Aware Value Objects by Jean-Christophe Popeler on December 07 2001 04:30 EST
- XML Aware Value Objects by Cees Habraken on December 04 2001 06:52 EST
- XML Aware Value Objects by Nur Djuned on December 04 2001 10:20 EST
- XML Aware Value Objects by greg farris on December 04 2001 10:57 EST
- XML Aware Value Objects by Giedrius Trumpickas on December 05 2001 14:17 EST
- XML Aware Value Objects by Giedrius Trumpickas on December 05 2001 14:21 EST
- XML Aware Value Objects by Jeff Lawson on December 21 2001 12:53 EST
- XML Aware Value Objects by Daniel Repik on December 26 2001 17:51 EST
XML Aware Value Objects[ Go to top ]
Hi Noam,
- Posted by: Stefan Siprell
- Posted on: November 26 2001 12:48 EST
- in response to Noam Borovoy
we are having the exact same solution. A lot of our classes are created/instantiated out of configuration files. We actually went a little further and use XML-based constructors, to initialize the classes directly out of an XML element. This works perfectly fine, and we actually developed a little framework around it, helping to read/write the XML elements. Yet there are two things that I would like to change the next time:
- use a standard XML-binding framework. We looked at some systems (i.e. Zeus, Castor) and were unhappy to see that they all require the usage of bean-like method signatures to work properly. This would have forced us to develop unwanted class signatures and XML DTDs to implement a non-standard XML-marshalling framework. I think things will change with the JDK 1.4. Having the serialization built directly into the system might be a motivation to actually change all the classes and documents.
- use a builder pattern to separate the creation of the class from the class itself. In some of our classes the XML-adaptor code is actually longer and more complicated than the actual logic of the class itself. So I think I will have separate builder classes, as defined by the GoF.
To wrap up: it seems illogical to me to go through the trouble of adapting the XML and Java representations of information just to comply with the needs of marshalling frameworks, for domains with a constant information structure. Building your own adaptors has proven to be faster, and the design of the classes and documents is more straightforward, meaning the maintenance of the code is a lot easier.
Of course I lack the experience of developing code for dynamic information structures, nor have I ever used a predesigned framework for larger object graphs, and I'd love to hear feedback from people who have actually used such software.
Stefan
XML Aware Value Objects[ Go to top ]
- Posted by: Giedrius Trumpickas
- Posted on: November 26 2001 20:09 EST
- in response to Stefan Siprell
Research on dynamic systems was done a long time ago, before J2EE or even Java were popular.
Adaptive Object Model
AOM heavily uses type object pattern.
Excellent resources on metadata
Value Objects[ Go to top ]
One more interesting reading about metadata
- Posted by: Giedrius Trumpickas
- Posted on: November 27 2001 21:33 EST
- in response to Stefan Siprell
by two fathers of metadata Foote and Yoder ;)
XML Aware Value Objects[ Go to top ]
Instead of XML Aware Value Objects, why not use XML-Empowered Value Objects that work with the DOM directly? Yes, yes, heavyweight, but that is a question of whether your hardware and performance requirements allow it. In many cases they do, and in the cases they don't, you need more hardware anyway. And if you are going to convert your data to XML anyway, I don't see the point in passing it through Java data structures, which are convenient for low-level access to data attributes and properties but fall flat on a larger scale when you have to look up and access objects in your tree. The DOM coupled with XPath expressions is far more powerful. I am not sure if the Java DOM supports data types, but that is easily fixable with some small helper classes.
- Posted by: Christo Angelov
- Posted on: March 28 2002 11:45 EST
- in response to Stefan Siprell
XML Aware Value Objects[ Go to top ]
Our application does require quite a lot of resources as is:
- Posted by: Noam Borovoy
- Posted on: April 05 2002 09:39 EST
- in response to Christo Angelov
a differential data extraction creates a 25-50 meg XML file.
try holding that in a DOM tree...
a complete extraction is about a 2-3 gig XML file, which is NOT created by holding a tree in memory (not even of binary objects); instead each sub-item is created, output and discarded.
JDOM has proven right for the job, yet I must say we only use its very basic functionality: constructing a tree of elements and attributes, setting the schema and namespaces, and then outputting either the whole document or just a branch.
XML Aware Value Objects[ Go to top ]
I agree with the few folks who have favored the "clean" approach. I just read the JAXB specification and API docs for the first time, and I am not happy about the requirement that objects implement special interfaces before they can be marshalled/unmarshalled to and from XML. I am willing to accept a performance hit when performing these operations if it means that maintenance of the marshalling code can be consolidated in one framework.
- Posted by: George Coles
- Posted on: May 19 2002 10:46 EDT
- in response to Noam Borovoy
Caching of the reflective metadata required to marshal instances can help a lot with performance, and doesn't break your separation of concerns. I have in the past lost the debate about whether objects should know how to render themselves, persist themselves, or what have you, and it resulted in bloated classes that were punishing to maintain, even for the small application we were building.
I think that time spent designing a marshalling framework that is fully capable of introspecting the managed objects is well worth the effort. The less code required, the better, as always. I hope that the JAXB team can be persuaded to abandon their current stance in upcoming revisions of the draft specification. The forces encouraging the "dumb managed object" approach seem to be fairly obvious.
XML Aware Value Objects[ Go to top ]
This problem is really several distinct issues, I reckon.
- Posted by: Jonathan Gibbons
- Posted on: November 29 2001 11:29 EST
- in response to Noam Borovoy
1) The technical issue of updating code throughout all systems to reflect schema changes.
2) The business issue of ensuring that changes are only rolled out in a highly structured, controlled and tested way.
Often schema changes are dictated and accepted with no risk analysis, eg changing a column width can blow out vast chunks of a system if there are implicit dependancies.
Another view on the same problem is one of services/data feeds. I bet you find yourself using the XML classes as a data feed mechanism as well as a publication system, which means you are immediately into protocol versioning (or object serialisation versioning), e.g. a feed written a year ago is v1 and you are now on v2, which has dropped 2 columns and added a third. What should you do?
As ever, the solution depends on the scope of the project, your time scales etc. My only recommendation at this stage is to ensure EVERY XML stream includes a version number specific to the data being serialised. This gives you the option of supporting old versions if you want to.
Jonathan
XML Aware Value Objects[ Go to top ]
We have been through the toXml()/fromXml() route, and then swapped to interrogating objects using reflection to output XML based on the accessor method names. This solves issue 1 for us, and as we are only using the XML internally, issue 2 does not apply to us.
- Posted by: Reg Whitton
- Posted on: December 05 2001 04:08 EST
- in response to Jonathan Gibbons
However, I have encountered elsewhere the issue of controlling changes to a published interface that maps on to an underlying internal interface that is more fluid. This can happen with an API, and it can happen if you allow direct access to your database for reporting purposes. Where I have seen this the approach taken has been to code all those mappings manually.
My idea around this has been to use a custom Javadoc. Hopefully programmers making changes will see the mapping info and realise its importance.
This approach should also help with the detection of unwanted changes to the API. Changes to the mappings, will be reflected in the serialVersionUID values of the classes. These can be computed using the jdk program 'serialver'. If these values change then the existing API has been broken.
JAXB could probably be used in the building of these mapping classes. The embedded javadoc tags could give the DTD and schema info required. I haven't had a chance to play with JAXB, so I haven't thought this through yet.
Please read this article by Mark Pollack about code generation using Javadoc on JDC.
Reg
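Reg's reflective approach translates to other languages too. Here is a rough Python analogue (the accessor and class names are invented for the example) that emits one element per zero-argument get* accessor, so a schema change in the class shows up in the XML without touching any mapping code:

```python
import xml.etree.ElementTree as ET

def to_xml(obj, tag):
    # walk the object's get* accessors and turn each into a child element
    root = ET.Element(tag)
    for name in dir(obj):
        if name.startswith("get") and callable(getattr(obj, name)):
            field = name[3:].lower()  # getTitle -> title
            ET.SubElement(root, field).text = str(getattr(obj, name)())
    return ET.tostring(root, encoding="unicode")

class Song:
    # accessors in the Java bean style the thread is talking about
    def getTitle(self):
        return "Yellow"

    def getYear(self):
        return 1999

print(to_xml(Song(), "song"))
```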
XML Aware Value Objects[ Go to top ]
My ideas around this have been: to use a custom Javadoc.
- Posted by: Andrew Stevens
- Posted on: December 05 2001 12:18 EST
- in response to Reg Whitton
Have you looked at XDoclet? () This is just the sort of thing it was developed for; there's already (undocumented) support for Castor in there, or you could do your own mapping classes with a custom template.
XML Aware Value Objects[ Go to top ]
To clarify, as Jonathan pointed out, there are several issues targeted by the pattern.
- Posted by: Noam Borovoy
- Posted on: December 05 2001 05:59 EST
- in response to Jonathan Gibbons
Our architecture is rather standard:
App server - middle tier (web server) - client
We had the following requirements:
1. Mirroring the data from our legacy db to XML, so that:
A. the middle tier or end client can perform XSLT transformations to several other formats.
B. the middle tier can access some of the data fields directly - preferably without needing to parse the XML - for formatting of the views.
2. a highly configurable end user output - yet a rather stable XML schema - we will be publishing the data as XML as well - and don't want to be changing versions more than absolutely necessary. (see Jonathan's point 2)
3. keep maintenance simple - one code base of db to XML mapping for both server and middle tier. (see Jonathan's point 1).
The "standard" solutions were much heavier than we needed as they provide a configurable two way binding while we wanted a one way only mapping, db to XML, and rather fixed - we get all the configuration flexibility we need from using XSLT.
The strength of this pattern is that on the server side all looks like we're using the usual light weight value objects.
And on the client we get BOTH the value objects and the XML - without needing XML parsing libraries on the middle tier.
As a bonus we also get efficient network usage as the serialization is of binary value objects and not the full blown XML.
XML Aware Value Objects[ Go to top ]
Before coding toXml() or similar methods in data objects you need to think about whether it will result in coupling data objects with third-party libraries. Personally I do not think that it is a data object's responsibility to know how to store itself in XML-based storage, especially if you use the same data objects at both the client and server sides; this way the client also needs XML libraries, although conversions to/from XML are usually done at the server side only. Also, what if you need to store your objects in a different form, e.g. LDAP, some relational database or some other format? Would you code methods toLDAP(), toMySpecificFormat() and so on in the same object??? I would prefer separating data objects from their representations in XML or other formats to keep data objects as light as possible. I would write some input/output stream or similar writer/reader classes for storing data objects to XML and reading them from XML. Thanks for attention.
- Posted by: Darius Silingas
- Posted on: December 04 2001 06:52 EST
- in response to Noam Borovoy
XML Aware Value Objects[ Go to top ]
I agree.
- Posted by: Jerome Banks
- Posted on: December 04 2001 14:24 EST
- in response to Darius Silingas
I believe XML requires a more orthogonal persistence mechanism, via a Marshaller or Codec, to stream your object to and from XML. Often you can't pre-generate the object you need to stream (as in JAXB), or it is not easily touched (perhaps your value objects were already generated for some other persistence scheme, like JDBC or entity beans). This approach also makes versioning easier (different codecs for different versions or variants of your XML).
XML Aware Value Objects[ Go to top ]
The VISITOR pattern addresses exactly this issue, allowing operations to be performed on the elements of an object structure to be defined in a class separate from the elements themselves.
- Posted by: Yawar Ali
- Posted on: December 05 2001 03:47 EST
- in response to Darius Silingas
XML Aware Value Objects[ Go to top ]
Yes, but with the visitor pattern you have some restrictions:
- Posted by: Giedrius Trumpickas
- Posted on: December 05 2001 19:50 EST
- in response to Yawar Ali
1) you should have a hierarchy of subjects
2) this hierarchy shouldn't change very often, because otherwise you need to change the visitor interface to add more "visit" methods for new subjects.
XML Aware Value Objects[ Go to top ]
What about using a decorator pattern instead? The basic object classes would not contain any logic beyond data and business rules. Then decorator classes could be created to add the toXML() method to the various objects.
- Posted by: jeff anderson
- Posted on: December 05 2001 23:42 EST
- in response to Giedrius Trumpickas
Like so
Business b = new Business();
XMLBusiness x = new XMLBusiness(b);
String xml= x.toXML();
of course a decorator would have to be created for each class type in the hierarchy
Jeff Anderson
XML Aware Value Objects[ Go to top ]
I think the most appropriate pattern in this case is Strategy. You can create strategies for various data sources.
- Posted by: Giedrius Trumpickas
- Posted on: December 05 2001 23:55 EST
- in response to jeff anderson
XML Aware Value Objects[ Go to top ]
I used the Composite pattern together with the Proxy to achieve this in a project. Alan Holub had some interesting insights into this in a series he had in Java World last year.
- Posted by: Tiberiu Fustos
- Posted on: December 07 2001 07:06 EST
- in response to jeff anderson
Every object that needed to represent itself should implement a "Renderable" interface. A builder is then populating the object's attributes with whater "Renderer" is needed for the task. For example XMLRenderer, or HTMLRenderer. Each attribute has a method "render()" that actually delegates it to the pre-configured renderer. A composite object performs that task just by iterating through all its "children" and ask them to render themselves.
In the end I think it's just a variation of the model described previously. It is complex to build it but once you have it on mapped out nicely it is nice, elegant and flexible.
XML Aware Value Objects[ Go to top ]
Thanks for all of the input - here's example code of what we ended up with:
- Posted by: Noam Borovoy
- Posted on: January 11 2002 11:05 EST
- in response to Tiberiu Fustos
We've started using JDOM and found a very elegant way of doing what we need.
I have an interface:
public interface XAValue {
public String toXML();
public Element getElement();
}
Now when you want to create a full blown XML document you can simply construct a JDOM Document and add the root element using the getElement() method on the root value object.
Then you can do whatever you want with the Document, output it, perform transforms on it, etc. using standard JDOM methods.
Using the JDOM Elements as the mapping tool allows us to keep the objects light weight as the Element is only created on demand and therefore does not cross the wire while gaining all the benefits of a standard API (plus the conversions to DOM and SAX that JDOM provides)
Regards,
Noam
Code example - In the value objects the interface is implemented as follows:
public class ID implements XAValue{
public String number;
public ID(String number) {
this.number = number;
}
public String toXML() {
return Outputter.getInstance().outputString(getElement());
}
public Element getElement(){
Element elem = new Element("id", "");
elem.addContent(new Element("number").addContent(number));
return elem;
}
}
Where Outputter is simply a singleton wrapper for the JDOM XMLOutputter:
public class Outputter extends XMLOutputter{
private static Outputter ctxt;
public Outputter() {
super(" ", true, "UTF-8");
}
public static Outputter getInstance(){
if (ctxt ==null)
ctxt = new Outputter();
return ctxt;
}
}
and a more complex value object would look like:
public class Representative implements XAValue{
public XAValue id;
public XAValue agentType;
public XAValue contactInfo;
public Representative(XAValue id,
XAValue contactInfo,
XAValue agentType) {
this.id = id;
this.contactInfo = contactInfo;
this.agentType = agentType;
}
public String toXML() {
return Outputter.getInstance().outputString(getElement());
}
public Element getElement(){
Element elem = new Element("representative", "");
elem.addContent(id.getElement());
elem.addContent(agentType.getElement());
elem.addContent(contactInfo.getElement());
return elem;
}
}
//Notice how the toXML() method is the same.
XML Aware Value Objects[ Go to top ]
Thanks for all of the input - here's example code of what >we ended up with:
- Posted by: J?ns Weimarck
- Posted on: March 26 2002 06:30 EST
- in response to Noam Borovoy
>We've started using JDOM and found a very elegant way of >doing what we need.
Hi!
I'm just curious to hear if you have had any problems with the approach you described? Has JDOM been working alright? (It's just a beta isn't it?)
Regards,
Jöns
XML Aware Value Objects[ Go to top ]
JDOM is very expensive affair even when compared to DOM parsing. Would it be right decision to use, specially as the latency of operation would directly add to response time for the end user.
- Posted by: Atluri satish
- Posted on: November 12 2002 04:38 EST
- in response to Noam Borovoy
XML Aware Value Objects[ Go to top ]
What about generating SAX events instead of Strings: i.e.
- Posted by: Jean-Christophe Popeler
- Posted on: December 07 2001 16:30 EST
- in response to jeff anderson
define
void toSAX(ContentHandler ch)
instead of
String toXML()
You can then pass any ContentHandler: to generate String (text) output, a DOM tree, or pipe to a XSLT engine that can handle SAX events...
XML Aware Value Objects[ Go to top ]
We did the same thing in a project.
- Posted by: Cees Habraken
- Posted on: December 04 2001 06:52 EST
- in response to Noam Borovoy
In our implementation the XML mapping was stored in an XML configuration file according to our own specs (I know but there was no standart yet) And the XML was then created by a reflection type of factory. This kept maintainability high and kept direct mapping code non existent.
Cees Habraken
XML Aware Value Objects[ Go to top ]
Have you look at JAXB and Sun's RI for it?.
- Posted by: Nur Djuned
- Posted on: December 04 2001 10:20 EST
- in response to Noam Borovoy
XML Aware Value Objects[ Go to top ]
Look at Castor... seems to have more features the current JAXB....
- Posted by: greg farris
- Posted on: December 04 2001 10:57 EST
- in response to Nur Djuned
XML Aware Value Objects[ Go to top ]
I think you definetly should take a look at "Adaptive Object Model". It's aproach how to create dynamic, "self aware" data object model.
- Posted by: Giedrius Trumpickas
- Posted on: December 05 2001 14:17 EST
- in response to Noam Borovoy
Main idea behind "Adaptive Object Model" is type object pattern.
In this model you have:
1) attribute type - type of attribute aka class
2) attributes - which represents instances of attribute type
3) entity - set of attributes
4) EntityType - defines what kind of attribute types entity
contains
At attribute type level you can create strategies for attribute formatting, editing (converting from text), maping to JDBC statements, marshaling and so on.
You can introduce your own types like SSN, AccountNumber, Money, PhoneNumber, EmailAddress ...
At entity level type you can create more complex strategies for mashaling, maping ...
In this model you can write code like this:
moneyAttributeInstance.format() - produces "user friendly" representation of money as text (Note: that inside format method delegates format call to type FormattingStrategy)
moneyAttribute.setFromTtext("10.100")-
(Note: that inside format method delegates format call to type JDBCMapingStrategy)
Giedrius
XML Aware Value Objects[ Go to top ]
Ups typo :) I though about JDBC but wrote about EditingStrategy
- Posted by: Giedrius Trumpickas
- Posted on: December 05 2001 14:21 EST
- in response to Giedrius Trumpickas
Correct version:
moneyAttribute.setFromTtext("10.100")-
(Note: that inside setFromText method delegates call to attribute type EditingStrategy)
Each attribute has reference to it's type.
Giedrius
XML Aware Value Objects[ Go to top ]
Check out XchainJ.com -- these guys have a GUI to map between XML / Java / database schema then their runtime processor takes care of the coding (don't need SAX/DOM/JDOM/JDBC). Cool!
- Posted by: Jeff Lawson
- Posted on: December 21 2001 12:53 EST
- in response to Noam Borovoy
XML Aware Value Objects[ Go to top ]
My solution to the problem was to use JAXB. It required development of a DTD, and optionally a transform specfication for each table.
- Posted by: Daniel Repik
- Posted on: December 26 2001 17:51 EST
- in response to Noam Borovoy
I looked at similar solutions like Castor, as well as, homw grown stuff. But what drove me towards this solution was that given Sun's seal of approval, I could be assured that is would be a standardized solution.
Another major advantage that I found was that I could subclass from the class generated by the transform compiler. This allows me to instantiate a different classes for client operations and another for my EJB land value object, using XML to go from one to another. | http://www.theserverside.com/discussions/thread.tss?thread_id=10431 | CC-MAIN-2015-32 | refinedweb | 4,336 | 53.65 |
order
The best way to represent data depends not only on the semantics of the data, but also on the type of model used. linear model With tree based models (e.g Decision tree,Gradient lifting tree and Random forest )It is a very common model with many members. They have very different properties when dealing with different feature representations. We first train a data set with linear model and decision tree.
PS: linear model and Decision tree The model has been explained before. If you are interested, you can click the relevant links to have a look. I won't repeat it here.
Training model
1. Data sources
Bisection data of first person fps game csgo:
csgo is a first person shooting game. The data includes each player's network delay (ping), number of kills, number of deaths, score, etc.
Hey, hey, I still know a lot about the game. This is the only blogger who can understand the data set of all dimensions without reading the English introduction of the original data.
2. Read file
import pandas as pd import winreg real_address = winreg.OpenKey(winreg.HKEY_CURRENT_USER,r'Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders',) file_address=winreg.QueryValueEx(real_address, "Desktop")[0] file_address+='\\' file_origin=file_address+"\\Source data-analysis\\model_data.csv" csgo=pd.read_csv(file_origin)#
Because it is troublesome to transfer the file to the python root directory or to the download folder after downloading the data every time. Therefore, I set up the absolute desktop path through the winreg library. In this way, as long as I download the data to the desktop or paste it into a specific folder on the desktop to read it, I won't be confused with other data.
In fact, this step is a process. Basically, every data mining has to be done again. There is nothing to say.
3. Cleaning data
It can be seen that the data does not include missing values, and there is no attribute overlap between characteristic values, so no processing is required for the time being.
4. Modeling
import numpy as np from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeRegressor#Decision tree from sklearn.linear_model import LinearRegression#linear model from sklearn.metrics import r2_score X_train,X_test,y_train,y_test=train_test_split(csgo[["Ping","Kills","Assists","Deaths","MVP","HSP"]])))
The results are as follows:
Sub box
As we all know, linear models can only be modeled for linear relationships. For a single feature, it is a straight line. Decision tree can build more complex data model, but it strongly depends on data representation. There is a way to make linear models more powerful on continuous data. Feature separation (also known as discretization) is used to divide it into multiple features.
We assume that the input range of features (in the above data set, we only consider the kills feature) is divided into a fixed number of boxes, such as 9, so the data points can be represented by the box in which they are located. To determine this, we first need to define the box. In the above data set, we define 9 evenly distributed boxes between - 4 and 64 according to the maximum and minimum values in the kills feature. We use the np.linspace function to create 9 boxes, that is, the space between two continuous boundaries:
Here, the first box contains all data points with eigenvalues between - 4 and 3.56, and the second box contains eigenvalues between 3.56
Next, we record the box to which each data point belongs. This can be easily calculated with the np.digitize function:
What we do here is to transform the continuous input features (kills) in the data set into a classification feature to represent the box where the data points are located. To use the scikit learn model on this data, we use pd.get_dummies transforms this discrete feature into a single heat coding.
PS: Unique heat coding I have already said the content of. If you are interested, you can click the relevant links to have a look. I won't repeat it here.
Since we specify 10 elements, the transformed eigenvalue contains 10 features.
Retraining
Next, we build a new linear model and a new decision tree model on the data after independent heat coding:
binned=pd.concat([csgo[["Ping","Kills","Assists","Deaths","MVP","HSP"]],kills_dummies],axis=1,ignore_index=False)###Merge two dataframe s X_train,X_test,y_train,y_test=train_test_split(binned)))
It can be seen that the accuracy of the linear model is higher than before (the more unsuitable the data set is for the linear model, the better the box splitting effect.) while the score of the decision tree model is unchanged. For each box, both predict a constant value. Because the characteristics in each box are constant, any model will predict the same value for all points in a box. Comparing the contents of the model before and after the feature box division, we find that the linear model becomes more flexible, because it now has different values for each box, and the flexibility of the decision tree model is reduced. The box feature usually does not have a better effect on the tree based model, because this model can learn to divide data at any location. In a sense, the decision tree can learn how to divide boxes, which is most useful for predicting these data. In addition, the decision tree views multiple features at the same time, while the box is usually for a single feature value. However, the expressiveness of linear model has been greatly improved after data transformation.
For a specific data set, if there is a good reason to use a linear model - for example, the data set is large, the dimension is high, but the relationship between some features and the output is nonlinear - then binning is a good way to improve the modeling ability.
Personal blog:
Welcome to my personal blog. There are not only technical articles, but also internalized notes of a series of books.
There are many places that are not doing very well. Welcome netizens to put forward suggestions, and hope to meet some friends to exchange and discuss together. | https://programmer.group/characteristic-engineering-discretization-and-box-division.html | CC-MAIN-2021-49 | refinedweb | 1,026 | 55.84 |
Hello, it has been more than three years since the last "Bits from the Debian GNU/Hurd porters"[1], high time for an update on the port. * Snapshot releases Three new snapshot releases have been done by Philip Charles, K14, K15 (which was only done as an updated mini CD-ISO, not a full snapshot), and K16. K16 has been released[2] on December 18th, 2007 featuring four CDs or two DVDs. Additionally, it also features a ready-to-go qemu-image[3] for the first time. K16 was also the first snapshot which included TLS (Thread Local Storage), a requirement for modern glibcs. New ported packages include Qt3, Qt4, SDL and Emacs22. * Base and toolchain status Currently, most base packages are current, with the notable exception of util-linux, which has been a big problem over the last years. However, Samuel Thibault got all outstanding issues of util-linux applied upstream so the version in experimental is mostly working. The toolchain is in pretty good shape as well since TLS support got implemented; we are using the current glibc, binutils and gcc Debian packages unmodified. * Xen support Besides qemu, which can be very slow to run, a Xen DomU port for GNU Mach has been made available by Samuel Thibault. It requires a non-PAE hypervisor and some minor manual tweaking, but is otherwise quite functional and stable already, see its wiki page[4] for further information. This will make people running the Hurd less dependent on specific hardware, as a lot of newer computers do not work with the underlying GNU Mach kernel anymore. * Autobuilder availability and archive coverage improved The percentage of packages built for Debian GNU/Hurd has improved from 40% to now nearly 60%[5] since the last Bits from the porters. Further, the backlog of outdated packages has been greatly reduced. 
This is due to the addition of two[6][7] Xen autobuilders earlier this year, which made the hurd-i386 autobuilders far more robust and fault-tolerant as they not need local admin attention anymore in case of problems with the GNU/Hurd guests. The remaining 40% of packages are either waiting for other packages to become available (see [8] for a (big) graph of those relationships) or are failing for some reason[9]; a complete list of build failures can be found at [10]. * Developer machine We are currently working on getting a general DD-accessible porter box setup. In the meantime, interested people can contact hurd-shell-account@gnu.org to get an account on one of the publically accessible (Debian) GNU/Hurd developer machines. For further details, see [11]. * Summer of Code 2008 This year, the GNU Hurd participated as its own organization at Google's Summer of Code, thanks to the coordination done by Olaf Buddenhagen[12]. All of the 5 projects were carried out quite successfully. The most practically relevant project for Debian GNU/Hurd was the implementation of a procfs translator[13] by Madhusudan C.S., which provides a traditional Unix-style /proc file system and the subsequent porting of the procps package, so utilities like pgrep etc. will be available after lenny, and procps Build-Depends no longer need to be special-cased on hurd-i386. Other GSoC projects were lisp bindings by Flavio Cruz, better system debugging and tracing by Andrei Barbu, namespace-based translator selection by Sergiu Ivanov and network virtualization by Zheng Da. More information on the details and outcome of those projects can be found on the wiki[14]. * Still no debian-installer Unfortunately, the Debian GNU/Hurd port still lacks d-i support. On the other hand, debootstrap now mostly works, even to cross-debootstrap a hurd-i386 installation from GNU/Linux, if one works around bug #498731. 
A relatively easy solution could be to use the GNU/Linux d-i to cross-install and setup a Debian GNU/Hurd system. People who have experience in d-i and possibly Debian GNU/Hurd are more than welcome to contact us at debian-hurd@lists.debian.org. for the Debian GNU/Hurd porters, Michael Banck [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14]
Attachment:
signature.asc
Description: Digital signature | https://lists.debian.org/debian-devel-announce/2008/09/msg00004.html | CC-MAIN-2016-22 | refinedweb | 703 | 60.04 |
The Sun node node for each metadevice or volume in that disk set or disk group.
In the Sun Cluster system, each device node in the local volume manager namespace is replaced by a symbolic link to a device node in the /global/.devices/node@nodeID file system where nodeID is an integer that represents the nodes in the cluster. Sun Cluster software continues to present the volume manager devices, as symbolic links, in their standard locations as well. Both the global namespace and standard volume manager namespace are available from any cluster node.
The advantages of the global namespace include the following:
Each node running the scgdevs(1M) command. | http://docs.oracle.com/cd/E19636-01/819-0421/cacdefff/index.html | CC-MAIN-2016-18 | refinedweb | 110 | 61.97 |
In this tutorial we will learn how to get and display a region of interest from an image, using Python and OpenCV.
Introduction
In this tutorial we will learn how to get and display a region of interest from an image, using Python and OpenCV.
In some cases, it might make sense to only apply some type of operation only to a portion of an image. Thus, it is important to understand how we can extract a region of interest of an original image.
Note that we are not going to apply any sort of algorithm to extract parts of an image that have some special feature, but rather simply define a part of the image that we want to obtain by specifying the coordinates of that region.
This tutorial was tested with version 4.0.0 of OpenCV and version 3.7.2 of Python.
The code
As usual, we will start by including the cv2 module.
import cv2
Followed by that, we are going to read our test image with a call to the imread function. As input, we need to pass the path to the file in the file system.
originalImage = cv2.imread('C:/Users/N/Desktop/testImg.png')
The image we are going to read is the one shown below in figure 1. As can be seen, it’s an image with some rectangles and a text in the middle.
For illustration purposes, we will assume that our region of interest (ROI) is the text.
Regarding the x coordinates, this region is located more or less between x = 240 and x = 430 and regarding y coordinates it is between y = 230 and y = 310. These are the coordinates we are going to use to extract the region of interest.
Recall from previous tutorials that, when we read an image with the imread function, we obtain a ndarray. Thus, we can use slicing to obtain our region of interest. You can read more about numpy indexing here.
In terms of notation, it is as simple as:
slicedImage = originalImage[y1:y2, x1:x2]
In other words, it means that we want all the pixels between the coordinates y1 and y2 and x1 and x2. For our case, taking in consideration the region of interest we have mentioned before, we get:
slicedImage = originalImage[230:310, 240:430]
To finalize, we will display both the original image and the sliced image (region of interest) in two different windows.
cv2.imshow("Original Image", originalImage) cv2.imshow("Sliced Image", slicedImage) cv2.waitKey(0) cv2.destroyAllWindows()
The final code can be seen below.
import cv2 originalImage = cv2.imread('C:/Users/N/Desktop/testImg.png') slicedImage = originalImage[230:310, 240:430] cv2.imshow("Original Image", originalImage) cv2.imshow("Sliced Image", slicedImage) cv2.waitKey(0) cv2.destroyAllWindows()
Testing the code
To test the code, simply run it in a tool of your choice. In my case, I’ll be using IDLE, a Python IDE.
You should get a result similar to figure 2. As can be seen, we have obtained both the original image and the region of interest, as expected.
One Reply to “Python OpenCV: Getting region of interest”
Thanks, I appreciate your mini-tutorials and find them very helpful and understandable. | https://techtutorialsx.com/2019/11/24/python-opencv-getting-region-of-interest/ | CC-MAIN-2019-51 | refinedweb | 538 | 56.25 |
Before introducing the specifics of inheritance, an example that includes all the prerequisite elements of inheritance might be helpful. In the code, XParent is the base class. XChild is the derived class and inherits the XParent class. XChild inherits a method, property, and field from the base class. XChild extends XParent by adding a method and field to this assemblage. XChild has five members: three from the base and two from itself. In this manner, XChild is a specialty type and refines XParent. In Main, instances of the XParent and XChild classes are created. Base methods are called on the XParent instance. Both base and derived methods are called on the XChild instance.
using System; namespace Donis.CSharpBook{ public class Starter{ public static void Main(){ XParent parent=new XParent(); parent.MethodA(); XChild child=new XChild(); child.MethodA(); child.MethodB(); child.FieldA=10; Console.WriteLine(child.FieldA); } public class XParent { public void MethodA() { Console.WriteLine("XParent.MethodA called from {0}.", this.GetType().ToString()); } private int propFieldA; public int FieldA { get { return propFieldA; } set { propFieldA=value; } } } public class XChild: XParent { public int MethodB() { Console.WriteLine("XChild.MethodB called from {0}.", this.GetType().ToString()); return fieldb; } private int fieldb=5; } } } }
Inheritance is language-agnostic. Managed languages can inherit classes written in another managed language. For library developers, this expands the universe of potential clients. Developers no longer need to maintain language-specific versions of a library or create complex workarounds. Just as important, a family of developers is not excluded from using a certain library. Another benefit of cross-language inheritance is collaboration. Team members collaborating on a software system can develop in the language of their choice. The entire team is not compelled to select a single source language.
Managed languages compile to Microsoft intermediate language (MSIL) code. The Common Language Runtime (CLR) does not perceive a Microsoft Visual Basic .NET class inheriting from a C# class. It views one MSIL class inheriting from another MSIL class. Language independence is easier to achieve when specific languages dissolve into a shared common language at compilation.
Cross-language inheritance fractures without compliance to the Common Language Specification (CLS)—at least relative to the base or derived class. Language-specific and noncompliant artifacts must be wrung from classes when cross-language inheritance is planned or expected. For example, the following class, although perfectly okay in C#, is unworkable in Visual Basic .NET. Visual Basic .NET is case insensitive, making MethodA in the following code ambiguous:
public class XBase { public void MethodA() { } public void methoda() { } }
The following code is an example of successful cross-language inheritance. The base class is written in C#, whereas the derived class is Visual Basic .NET.
' VB Code: which includes derived class. Imports System Imports Donis.CSharpBook Namespace Donis.CSharpBook Public Class Starter Public Shared Sub Main Dim child as New XChild child.MethodA() child.MethodB() End Sub End Class Public Class XChild Inherits XParent Public Sub MethodB Console.WriteLine("XChild.MethodB called from {0}.", _ Me.GetType().ToString()) End Sub End Class End Namespace // C# Code: which includes base class using System; namespace Donis.CSharpBook{ public class XParent { public void MethodA() { Console.WriteLine("XParent.MethodA called from {0}.", this.GetType().ToString()); } private int propFieldA; public int FieldA { get { return propFieldA; } set { propFieldA=value; } } } } | http://etutorials.org/Programming/programming+microsoft+visual+c+sharp+2005/Part+I+Core+Language/Chapter+3+Inheritance/Inheritance+Example/ | CC-MAIN-2017-34 | refinedweb | 543 | 51.85 |
Some time ago, fengjunhao's "parasites" won many awards at the Oscars. I also like watching movies. After watching the movie, I was curious about other people's opinions on the movie. So I used R to climb some Douban movie reviews, and jieba got the word cloud to understand. But if I didn't log in to Douban to climb directly to the movie reviews, I could only get ten short reviews. I think that's the amount of data In order to be a little less, I sorted out the method of python's simulated Login to Douban, batch crawling data, and making special style word cloud.
###1, Python library used
import os ##Provides access to operating system services import re ##regular expression import time ##Standard library of processing time import random ##Use random number standard library import requests ##Login import numpy as np ##Scientific computing library is a powerful N-dimensional array object, ndarray import jieba ##jieba Thesaurus from PIL import Image ##python image library, python3 multi-purpose Pilot Library import matplotlib.pyplot as plt ##Mapping plt.switch_backend('tkagg') from wordcloud import WordCloud, ImageColorGenerator##Word cloud production
I have to be familiar with the use of each library for a long time, and I am only at the entry level
###2, Thinking
1. Simulated Login Douban
2. Take a page of reviews
3. Get movie reviews in batch
4. Making common words cloud
5. Create the word cloud of picture shape background
###3, Code implementation
1. Simulated Login Douban
First of all, we need to analyze the login page of Douban
Click the right mouse button to enter "check", enter the wrong login information in the login window, and enter the Network named basic. Here are many useful information, such as
Request URL, user agent, accept encoding, etc
You also need to look at the parameters carried when you request to log in, and pull down the debugging window to view Form Data.
Code simulation login:
# Generate Session object to save cookies s = requests.Session() # Review data save file COMMENTS_FILE_PATH = 'douban_comments.txt' # Word cloud font WC_FONT_PATH = 'C:/Windows/Fonts/SIMLI.TTF' def login_douban(): """ //Log bean :return: """ # Login URL login_url = '' # Request header headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.116 Safari/537.36', 'Host': 'accounts.douban.com', 'Accept-Encoding':'gzip, deflate, br', 'Accept-Language':'zh-CN,zh;q=0.9', 'Referer': '', 'Connection': 'Keep-Alive'} # Pass user name and password data = {'name': '12345125',##Change your login name here 'password': '12342324',##Here is your correct login password 'remember': 'false'} try: r = s.post(login_url, headers=headers, data=data) r.raise_for_status() except: print('Login request failed') return 0 print(r.text) return 1
2. Take a page of reviews
Enter the short review page of the movie, analyze the web page, get the URL of the web page, then analyze the source code of the web page, check which tag the movie review is in and what features it has, and then use regular expression to match the desired tag content.
It can be found that the film reviews are all in the label of.
Code:
def spider_comment(page=0): """ //Crawl a page of reviews :param page: Paging parameters :return: """ print('Start crawling%d page' % int(page)) start = int(page * 20) comment_url = '' % start # Request header headers = {'user-agent': 'Mozilla/5.0'} try: r = s.get(comment_url, headers=headers)#s.get() r.raise_for_status() except: print('The first%d Page crawl request failed' % page) return 0 # Extract movie reviews using regular comments = re.findall('<span class="short">(.*)</span>', r.text)##Regular Expression Matching if not comments: return 0 # write file with open(COMMENTS_FILE_PATH, 'a+', encoding=r.encoding) as file: file.writelines('\n'.join(comments)) return 1
3. Get movie reviews in batch
In the short comment url of Douban, the start parameter is the parameter to control paging.
def batch_spider_comment(): """ //Film review of batch crawling for Douban :return: """ # Clear previous data before writing data if os.path.exists(COMMENTS_FILE_PATH): os.remove(COMMENTS_FILE_PATH)##If the system already has this file, delete it page = 0 while spider_comment(page): page += 1 # Simulate user browsing and set a crawler interval to prevent ip from being blocked time.sleep(random.random() * 3) print('Crawl finish') if login_douban():##If the login is successful, it will be crawled in batch batch_spider_comment()
If you log in successfully, you will perform batch crawling. Only 25 pages of short comments can be viewed on Douban webpage
4. Making common words cloud
After all the reviews are obtained, you can use jieba to segment words and wordcloud to create word cloud. The most common word cloud can be made in this way:
####Making word clouds f = open(COMMENTS_FILE_PATH,'r',encoding='UTF-8').read() wordlist = jieba.cut(f, cut_all=True) wl = " ".join(wordlist) #_font_size=50, random_state=42, stopwords=stop_words, font_path=WC_FONT_PATH) # Generative word cloud wc.generate(wl) # If you only set mask, you will get a word cloud with image shape plt.imshow(wc, interpolation="bilinear") plt.axis("off") plt.show()
5. Create the word cloud of picture shape background
It's common to be able to make ordinary word clouds. We can also make word clouds with picture shape and background, and the color of words is the same as that of pictures.
##Generate word cloud of picture shape background def GetWordCloud(): path_img = "C://Users/Administrator/Desktop/Blonde-girl.jpg"##Picture path f = open(COMMENTS_FILE_PATH,'r',encoding='UTF-8').read() wordlist = jieba.cut(f, cut_all=True) wl = " ".join(wordlist) background_image = np.array(Image.open(path_img))##Conversion between Image object and array # If you don't use word segmentation, you can't directly generate the correct Chinese word cloud. If you are interested in it, you can check it. There are many word segmentation modes # #The Python join() method is used to connect elements in a sequence with the specified character to generate a new string. # ABCD words default 200 max_font_size=50, random_state=42, stopwords=stop_words, font_path=WC_FONT_PATH,mask= background_image) # Generative word cloud wc.generate(wl) # If you only set mask, you will get a word cloud with image shape # Generate color values image_colors = ImageColorGenerator(background_image) # The following code shows the display picture plt.imshow(wc.recolor(color_func=image_colors), interpolation="bilinear") plt.axis("off") plt.show() if __name__ == '__main__': GetWordCloud()
In the sections above we simulated web-page login, extracted the reviews from the pages, crawled them in batches, and created both a plain word cloud and a picture-shaped word cloud.
The whole process gives a rough feel for the structure of web pages, the ideas behind crawlers, and the usefulness of the requests library. Compared with R, writing a crawler in Python is indeed cleaner and more convenient. Extracting the movie reviews with regular expressions is also very direct, and the data cleaning and word-cloud production steps are common and easy to understand. Python really is one of the tools worth learning.
Plone Archetypes View Template Modifications
There are at least three ways of changing the look and feel of your content types:
- Use a template and take full control of the view.
- Use a macro and take control of part of a page.
- Use a template for only a field (recommended).
In this tutorial we will try to work our way through all three ways using My First Minimal Plone Content Type as the base. Also: please note that I am using Plone 3.0 with Archetypes version 1.5.0 - perhaps this way of modifying the appearance of a content type will change in the future (it will - trust me).
Download the different versions of this tutorial here: [1] or here [2] .
Take one: Take full control
One way of doing this in a minimal way requires the following set of files:
__init__.py
config.py
message.py
Extensions/Install.py
skins/mymessage/mymessage_view.pt
As you can see there is a folder structure and a file added. Also most files have changed compared to My First Minimal Plone Content Type.
The new file skins/mymessage/mymessage_view.pt is where all the magic is located. This is what the file looks like (perhaps you can predict what is does):
<html metal:use-macro="context/main_template/macros/master">
<body>
<div metal:fill-slot="main">
<h1 tal:content="context/title">Title</h1>
<font size="300%" color="#DD3333">
<p tal:content="context/getBody"></p>
</font>
</div>
</body>
</html>
The funny stuff that looks like HTML is Tag Attribute Language (TAL) and origin from the Zope layer (read more about it for Zope 2.6 here: [3]). There are two things worth noting here:
- <h1 tal:content="context/title">Title</h1>: This line looks at the current context (the instance of a MyMessage content type) and extracts its title.
- <p tal:content="context/getBody">: This snippet also examines the current item and looks in the schema for a field called body (remember that there is an IntegerField called body). The content of this field is then inserted between the <p> and </p> tags.
Looking at an instance of a mymessage would - when we are completed - look something like this:
But to get there we need the template to be found and used by both Plone and Zope. Changes needed are of two types: the first type are settings needed for Zope and Plone to find the correct folder and files. The second type is for Plone to use the correct view.
Changes in config.py It seems to be considered good practice to add the name of the skins folder as a global variable in config.py; it also seems necessary to call globals() and store the result:
PROJECTNAME = "MyMessage"
SKINS_DIR = "skins"
GLOBALS = globals()
Changes in __init__.py The init-file needs to import these new variables and register the skins-folder:
from Products.CMFCore.DirectoryView import registerDirectory
from config import PROJECTNAME, SKINS_DIR, GLOBALS

registerDirectory(SKINS_DIR, GLOBALS)
Changes in Extensions/Install.py The install-file must let Plone know of the new folder as described below:
from Products.MyMessage.config import PROJECTNAME, GLOBALS
from Products.Archetypes.Extensions.utils import install_subskin

def install(self):
    # ...
    install_subskin(self, out, GLOBALS)
    print >> out, "installed subskin"
That concludes the first set of changes - we have now installed the folder and let both Plone and Zope know where to look to find the new folders.
Changes in message.py In order to force the system to use the template we have created, we must build a variable called aliases. In it we define some standard behaviour and override it. We set both '(Default)' and 'view' to point to 'mymessage_view'. Some standard behaviour, like editing, does not have anything to fall back upon so, unfortunately, we must define the stuff already defined (like specifically declaring that you do want the default behaviour).
class MyMessage(BaseContent):
    # ...
    aliases = {
        '(Default)' : PROJECTNAME.lower() + '_view',
        'view'      : PROJECTNAME.lower() + '_view',
        'edit'      : 'base_edit',
    }
Please note: I am not sure why the file is called 'mymessage_view'. Sometimes I get the impression that this is the default and that this view would be used thanks to some naming magic - on the other hand, we specifically define it in the aliases variable. It makes no sense, but let's just do it like everyone else has done...
This version can be downloaded here: [4]
Again: please note that this is probably not the way you want to do it since this is an extremely inflexible solution. Suppose you want to add fields in your content type (like in Modifying The Minimal Plone Content Type)- then every time you change it you also need to rewrite the view. Bad, bad, bad.
Take two: Change the appearance of only one field
A quite likely scenario is that you have a field that stores a certain type of data that you want to be displayed in a certain way. Often you might have a content type that contains many fields of the same type (that should thus be displayed in the same way).
Now we want a macro that displays the content of a field. For the macro we need a template. A macro-template must contain three parts:
- view macro
- edit macro
- search macro
I copy-pasted this macro template from [5] and modified it a bit. It contains a lot of bulk but I am afraid of removing stuff from it - I am afraid to break it. It imports lot of xml name spaces that we might need.
We can see the three view/edit/search parts. In the edit and search parts I fall back upon the default methods - and again we explicitly have to say that we want the default.
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:tal="http://xml.zope.org/namespaces/tal"
      xmlns:metal="http://xml.zope.org/namespaces/metal">
<head><title></title></head>
<body>

<!-- view -->
<metal:view_macro define-macro="view">
  <span tal:replace="accessor">value</span>
</metal:view_macro>

<!-- edit -->
<metal:define define-macro="edit">
  <div metal:use-macro="here/widgets/string/macros/edit"></div>
</metal:define>

<!-- search -->
<metal:define define-macro="search">
  <div metal:use-macro="here/widgets/string/macros/search"></div>
</metal:define>

</body>
</html>
I stored this in skins/mymessage/my_string_widget.pt and as I am sure you understand we need to register the skins folder just like in the above example.
But to use this macro instead of the default one, we of course have to do something to make the change take effect. The secret is in the content type schema definition; we change it to something like this:
IntegerField(widget=StringWidget(macro='my_string_widget',), ..., ),
As you can see we use an already defined widget, but override the default macro.
Let us also update the schema to be able to compare what happens without this change, the schema is now:
# Schema definition
schema = BaseSchema.copy() + Schema((
    IntegerField('alpha',
        required = 1,
        widget=StringWidget(macro='my_string_widget',),
    ),
    IntegerField('bravo',
        required = 1,
        widget=StringWidget(macro='my_string_widget',),
    ),
    IntegerField('charlie',
        required = 1,
    ),
))
Saving this, restarting the server and reinstalling the MyMessage package should now result in something like this:
As you can see the two first fields have identical behavior when it comes to their appearance. The third field looks like the old default field. You can download this version of my message right here: [10] .
Take Three: Change everything
By following the tutorial in [11] I created the file /skins/mymessage/mymessage_view.pt that contains:
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:tal="http://xml.zope.org/namespaces/tal"
      xmlns:metal="http://xml.zope.org/namespaces/metal">
<head><title></title></head>
<body>

<metal:header_macro metal:define-macro="header">
Foo1
</metal:header_macro>

<metal:body_macro metal:define-macro="body">
Foo2
</metal:body_macro>

<metal:footer_macro metal:define-macro="footer">
Foo3
</metal:footer_macro>

<metal:folderlisting_macro metal:define-macro="folderlisting">
Foo4
</metal:folderlisting_macro>

</body>
</html>
As you can see, it contains metal parts for header, body, footer and folderlisting. These decide how certain parts of the page are displayed. You can also use JS and CSS to change those aspects of the view of a content type.
The above template produces the following quite blank and annoying look:
A second look at the template: the header
Let us start by changing one piece of the macro - the header. First you need a file with a magic name in a folder that has been registered as seen above. In this case ./skins/mymessage/mymessage_view.pt will do just fine. As you will see, the template is pretty much standard HTML. As such it can be viewed in browsers, previewed and so on - your viewer will just ignore the strange code and, if you are lucky, leave it alone.
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:tal="http://xml.zope.org/namespaces/tal"
      xmlns:metal="http://xml.zope.org/namespaces/metal">
<head><title>Custom view of MyMessage.</title></head>
<body>

<metal:header_macro metal:define-macro="header">
  <h1>
    MyMessage: <span tal:content="context/title">title</span>
  </h1>
</metal:header_macro>

</body>
</html>
As you can see from the code, the interesting parts here start with <metal:header_macro metal:define-macro="header">. This command tells Zope that we want to override the default macro for the header with our own. And for the first time (almost) in my tutorials about Plone and Archetypes, Convention wins over Configuration - that means that if you do not specify what you want, you get the default settings. In this case (and most others, I would assume) this is exactly what we want.
So what is going on in the metal tags? We specify an h1 header (the main header of a page) and then a little cryptic something: MyMessage: <span tal:content="context/title">title</span>. This means that we always get the text "MyMessage:", followed by the contents of tal:content="context/title". This is one way of telling the platform that you want to extract the title of your content type.
On this screenshot of this template in action you can see that the title is indeed replaced with the corny text MyMessage: my favourite numbers. The rest of the page is left alone.
Extracting the URL and the username: let us change the footer
If we now want to add a little something at the bottom of the page, we have to modify our template to contain a metal tag that defines the macro footer: metal:define-macro="footer". In it we might want to add the URL of the page and its creation and modification dates. Also we want to add every web designer's worst nightmare: a horizontal line [:)]-|--<.
<metal:footer_macro metal:define-macro="footer">
  <hr>
  <small>
    <p>
      <code tal:content="context/absolute_url">URL</code><br />
      created: <span tal:content="context/created">date</span>
      by <i><span tal:content="context/Creator">creator</span></i><br />
      last modified: <span tal:content="context/modified">date</span>
    </p>
    <p>As seen by <i tal:content="user/getUserName">John Doe</i>.</p>
  </small>
</metal:footer_macro>
As you will see in the screenshot the page now contains the URL, creation and modification date, creator and the name of the user that currently views the page.
The tricky parts: Modifying the body
I want to show a very nice feature of this approach - calling a method of a class and displaying the result in the browser. First let us write a simple function, in this case I just want to compute the sum of the fields alpha, bravo and charlie. I update the MyMessage class like this:
class MyMessage(BaseContent):
    # ...
    def my_sum(self):
        return self.alpha + self.bravo + self.charlie
I will now walk you through the parts of the template and explain each part:
Defining the body macro
<metal:body_macro metal:define-macro="body">
  <h3>Summary</h3>
Like the earlier cases we start with a metal:define-macro call. This way we override the default behavior.
Calling the method
<p>
  The sum of <i tal:content="context/title">message</i> is
  <span tal:content="context/my_sum">***</span>.
</p>
In this snippet we see how we can extract the value of a member of our class: """<i tal:content="context/title">message</i>""". In the exact same way we call a method (or function) of our class: """<span tal:content="context/my_sum">***</span>""".
Accessing accessors and using Python
As you might have guessed, we access the fields in our schema in a similar way as well, by a syntax like this: """<span tal:content="context/getAlpha">alpha</span>""".
<p>
  That is true since
  <span tal:content="context/getAlpha">alpha</span> +
  <span tal:content="context/getBravo">bravo</span> +
  <span tal:content="context/getCharlie">charlie</span> =
A nice and interesting option, that should be used with care, is to make a Python call on the fly:
  <span tal:content="python: context.getAlpha() + context.getBravo() + context.getCharlie()">sum</span>
</p>
Looping over the fields
Using the metal and tal tags you can even iterate over, for example, the fields in a schema. This requires a little extra knowledge and some dirty tricks such as if-conditions. The """<metal tal:repeat="field python: here.Schema().filterFields(isMetadata=0)">""" declares that we are repeating something. Inside this loop a variable with the name field is used. Also, we iterate over the items in "here.Schema().filterFields(isMetadata=0)", which means that we ignore fields that are considered metadata.
Also we ignore fields that are invisible with the "<tal:if_visible>" tag.
  <h3>Details</h3>
  <metal tal:repeat="field python: here.Schema().filterFields(isMetadata=0)">
    <tal:if_visible tal:condition="python: field.widget.visible">
      <p>
        Field <span tal:content="repeat/field/number">#</span>
        <i>
          <span tal:content="field/getName">Fieldname</span>
        </i>
        <code>
          <span tal:content="python: field.getAccessor(context)()">1</span>
        </code>
      </p>
    </tal:if_visible>
  </metal>
</metal:body_macro>
Now if everything works out correctly you should be looking at something like this:
Some additional remarks
As a fanatic fan of the KISS-principle (Keep It Simple Stupid) I would recommend using as few and as generic changes as possible to any view of a schema. It is often just silly extra work. Also I would prefer using changes on a field level by using a custom macro on a field, like this:
IntegerField(widget=StringWidget(macro='my_string_widget',), ..., ),
If this is not acceptable due to some graphical design aspect I would, as option two, prefer to modify part of the main view macro. Using something like this:
<metal:body_macro metal:define-macro="body">
  <!-- contents go here -->
</metal:body_macro>
Also in this body macro I would prefer a generic approach over a static approach. Imagine your boss telling you that you have to add a field to the schema. You cannot tell him "No, because then I have to rewrite my view template." - and if you did, your boss would tell you that you should have thought about that from the beginning, and then offer you free pizza for late nights at the office until you are done.
A small example to illustrate a generic approach
Let's add an extra field to our schema. Can you guess what I want to call it? Delta of course. The schema would get the following modification:
schema = BaseSchema.copy() + Schema((
    # ...
    IntegerField('delta',
        required = 1,
    ),
))
Also we update the function inside the class - we still would only have touched one file:
    def my_sum(self):
        return self.alpha + self.bravo + self.charlie + self.delta
Looking at a new instance of our MyMessage content type would now present something like this in your browser:
As you can see: the loop needs no change - it is ok already. The upper part where we specifically used getAlpha, getBravo and so on needs a review. Perhaps we have little time and patience and do not notice this error - then your content type is now broken.
References
Here are a few nice places to start:
- Customizing AT View Templates by Floyd May [21]
- How to customise view or edit on archetypes content items by Peter Simmons [22]
- Programming Plone by Raphael Ritz [23]
This page belongs in Kategori Programmering.
This page is part of a series of tutorials on Plone Cms.
On Thu, Mar 12, 2009 at 5:13 PM, Alois Schlögl <address@hidden> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> The following improvements have been included in the NaN-toolbox.
>
> - sumskipnan_mex.mex has been optimized for speed (minimizing cache
>   misses, reducing loop overhead)
>
> - a flag is set if some NaN occurs in the data. The flag can be checked
>   (and reset) with the function FLAG_NANS_OCCURED(). This enables flexible
>   control of the checks for NaN. (You can check after every call, or only
>   at the end of your script.)
>
> - the performance of var, std, and meansq has been improved.
>
> A performance comparison between the NaN-toolbox and the corresponding
> standard Octave functions (see script below) shows the following results
> (time in [s]):
>
>   with NaN-tb   w/o NaN-tb      ratio
>       0.25884      3.56726   13.78183   mean(x,1)/nanmean(x,1)
>       0.36784      3.32899    9.05020   mean(x,2)/nanmean(x,2)
>       0.30019      6.62467   22.06789   std(x,0,1)
>       0.40114      2.23262    5.56561   std(x,0,2)
>       0.28681      6.40276   22.32407   var(x,0,1)
>       0.40269      2.18056    5.41505   var(x,0,2)
>       0.28175      4.05612   14.39598   meansq(x,1)
>       0.40703      4.19346   10.30248   meansq(x,2)
>       0.25930      0.19884    0.76683   sumskipnan(x,1)/sum(x,1)
>       0.30624      0.24179    0.78955   sumskipnan(x,2)/sum(x,2)
>
> A performance improvement by factors as high as 22 can be seen, and
> sumskipnan() is only about 25% slower than sum().
>
> Of course, sumskipnan could also improve the speed of functions like
> nanmean, nanstd, etc. Maybe you want to consider including sumskipnan
> in standard Octave.
I repeated your experiment using the current Octave tip (-O3 -march=native, Core 2 Duo @ 2.83GHz):

                mean(x,1)  mean(x,2)  std(x,0,1)  std(x,0,2)  var(x,0,1)  var(x,0,2)  meansq(x,1)  meansq(x,2)  sum(skipnan)(x,1)  sum(skipnan)(x,2)
tic-toc time     0.108911   0.132629    0.114568    0.163950    0.112384    0.163973     0.112379     0.163682           0.096581           0.101545
                 0.090389   0.091657    0.915853    0.955799    0.883821    0.921007     0.110276     0.114233           0.082247           0.089742
tic-toc ratio    0.82993    0.69108     7.99397     5.82982     7.86431     5.61683      0.98129      0.69790            0.85159            0.88376
cputime          0.108007   0.136008    0.112007    0.164011    0.112007    0.164010     0.116007     0.160010           0.100006           0.100007
                 0.088005   0.088005    0.900056    0.956060    0.884055    0.924058     0.092006     0.116007           0.080005           0.092006
cputime ratio    0.81481    0.64706     8.03571     5.82924     7.89285     5.63416      0.79311      0.72500            0.80000            0.92000

It can be seen that the penalty for skipping NaNs is mostly within 20-30%, smaller for column-oriented reductions. The speed-up factors of 5 and 7 for std and var are caused by the single-sweep computation done in sumskipnan. This becomes apparent when less random data are supplied, and the NaN toolbox reverts to a backup algorithm (which is what Octave always does) - relative error at the order of 10^-4:

tic-toc time     0.108613   0.132721    1.362765    1.500724    1.366353    1.499243     0.115758     0.163625           0.097873           0.102086
                 0.089788   0.089979    0.876386    0.914380    0.880742    0.913636     0.094084     0.091950           0.082200           0.089619
tic-toc ratio    0.82668    0.67796     0.64309     0.60929     0.64459     0.60940      0.81277      0.56196            0.83986            0.87788
cputime          0.108007   0.132008    1.364085    1.500094    1.368086    1.500093     0.116007     0.164011           0.096006           0.104006
                 0.092006   0.088005    0.876055    0.916057    0.880055    0.916057     0.092006     0.092006           0.084005           0.088005
cputime ratio    0.85185    0.66666     0.64223     0.61067     0.64327     0.61067      0.79311      0.56097            0.87500            0.84615

Here the std/var computations are slowed down by some 35-45%. This is less favorable, though certainly no disaster. I think the Octave statistics subcommunity should discuss what they would appreciate best.
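The single-sweep computation mentioned above can be sketched as follows. This is an illustrative Python sketch of the two variance algorithms and their accuracy trade-off, not the toolbox's actual C implementation:

```python
def var_two_pass(xs):
    # Classic two-sweep algorithm: first compute the mean,
    # then sum the squared deviations from it.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

def var_one_pass(xs):
    # Single-sweep algorithm: accumulate the sum and the sum of squares
    # in one loop over the data. Faster, but it subtracts two nearly
    # equal numbers when mean**2 is much larger than the variance,
    # losing precision, which is why a fallback to the two-sweep
    # formula can be needed.
    n = len(xs)
    s = sq = 0.0
    for x in xs:
        s += x
        sq += x * x
    return sq / n - (s / n) ** 2

# On well-conditioned data the two agree:
print(var_one_pass([1.0, 2.0, 3.0, 4.0]))   # 1.25
print(var_two_pass([1.0, 2.0, 3.0, 4.0]))   # 1.25

# On data like x = 1 + randn*1e-4 (mean ~ 1, variance ~ 1e-8),
# the single-sweep result loses several digits to cancellation.
```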
Is anyone depending on the speed of std/var? Opinions about skipping NaNs? Given Octave's NA support, it may be better to just skip NAs, like R does. There were also suggestions to move the statistics functions completely out of Octave. Personally, I'd vote to retain just the stuff from statistics/base, because I sometimes use functions thereof despite not being a statistician.

regards

--
RNDr. Jaroslav Hajek
computing expert & GNU Octave developer
Aeronautical Research and Test Institute (VZLU)
Prague, Czech Republic
url:

n = 8e3;
randn("state", 123);
#x = randn(n);
x = 1 + randn(n) * 1e-4;
#k=1;
k=2;
load data
t=cputime();tic; m = mean(x,1);   T(k,1)=toc; V(k,1)=cputime()-t;
t=cputime();tic; m = mean(x,2);   T(k,2)=toc; V(k,2)=cputime()-t;
t=cputime();tic; m = std(x,0,1);  T(k,3)=toc; V(k,3)=cputime()-t;
t=cputime();tic; m = std(x,0,2);  T(k,4)=toc; V(k,4)=cputime()-t;
t=cputime();tic; m = var(x,0,1);  T(k,5)=toc; V(k,5)=cputime()-t;
t=cputime();tic; m = var(x,0,2);  T(k,6)=toc; V(k,6)=cputime()-t;
t=cputime();tic; m = meansq(x,1); T(k,7)=toc; V(k,7)=cputime()-t;
t=cputime();tic; m = meansq(x,2); T(k,8)=toc; V(k,8)=cputime()-t;
if (k == 1)
  t=cputime();tic; m = sumskipnan(x,1); T(k,9)=toc;  V(k,9)=cputime()-t;
  t=cputime();tic; m = sumskipnan(x,2); T(k,10)=toc; V(k,10)=cputime()-t;
else
  t=cputime();tic; m = sum(x,1); T(k,9)=toc;  V(k,9)=cputime()-t;
  t=cputime();tic; m = sum(x,2); T(k,10)=toc; V(k,10)=cputime()-t;
endif
save data T V
The Hough Line transform is a technique used to detect straight lines in an image. The algorithm can detect all imperfect instances of a line in a given image, i.e., it can detect lines even if they are not perfectly straight. The detection of lines is carried out by a voting process. Before examining the algorithm in detail we need to demystify the mathematics behind it.
The Mathematics
There are several forms of equation in which a line can be represented. The most common and familiar one should be the "slope-intercept" form:

y = mx + c

where

m is the slope
c is the y-intercept.

In this equation the slope m and the y-intercept c are known for a given line. x and y are variables. So by varying the value of x (or y) you can find the corresponding value of y (or x).
From the above equation we can derive another form of equation, called the "Double Intercept" form:

x/a + y/b = 1 ……………..(1)

where

a is the x-intercept.
b is the y-intercept.

The double intercept form will be used to derive a new form of equation called the "Normal Form", which is used in the Hough Line Transform. The reason for using the normal form is that the "slope-intercept" form fails in the case of vertical lines (the slope becomes infinite), and the double intercept form fails because of the large range of a and b.
Derivation:

Consider a line AB which intersects the y-axis at point B(0, b) and the x-axis at point A(a, 0). A line segment OP, having one end point at the origin O, intersects the line AB at right angles at point P. OP makes an angle θ with the x-axis. The length of OP is r. From the figure, OA = a and OB = b.

Consider the right triangle OPA:

cos θ = OP/OA = r/a, so a = r/cos θ ……………..(2)

Now consider the right triangle OPB. The angle POB = 90° − θ, because angle AOB = 90°. Therefore

cos(90° − θ) = OP/OB = r/b, i.e. sin θ = r/b, so b = r/sin θ ……………..(3)

Substituting the values of a and b in (1) with (2) and (3), we get

(x cos θ)/r + (y sin θ)/r = 1, i.e.

x cos θ + y sin θ = r ……………..(4)
Straight Line Detection:

Let's see how the above equation can be used to find straight lines.

Given the values of θ and r, we can vary the value of x and find the corresponding value of y. Plotting the (x, y) pairs will give us a straight line. For example, for a fixed θ (in degrees) and r (in units), the graph would look like this.

If we instead keep the values of x and y constant and vary the value of θ, we can find the corresponding values of r from equation (4). For each (θ, r) pair we can plot a line in the x-y coordinate system, and all these lines will pass through the point (x, y). For example, substituting the coordinates of a chosen point and a handful of θ values into (4) gives one r per θ; when we plot the lines formed by these (θ, r) pairs, they all pass through that point.

Now consider three points and a small set of θ values. For every point, each value of θ yields a corresponding value of r.

If a line (θ, r) passes through a given point (x, y), then that (θ, r) pair gets a vote from the point. The (θ, r) pair that gets the maximum number of votes passes through the maximum number of points. In the example above, one (θ, r) pair got three votes, which means it passes through all three points. In general, if a (θ, r) pair gets n votes then it passes through n points.

For simplicity we considered only a few angles and three points. The number of lines that can pass through a point is infinite. If we plot all the possible values of θ and r for a single point, the graph is a sinusoidal curve. Similarly we can plot the curves for the other two points, and we observe that all three curves intersect at a point. That point of intersection is the (θ, r) of the line through all three points.

But for practical application we can consider a small subset of angles; let's say θ ∈ {0, 1, ..., 179} degrees. Here we are considering only 180 values for θ, and for each point we get 180 values of r. So effectively we are finding 180 lines that pass through each point.
Algorithm
Preprocessing: Canny edge detector is applied to extract the edges.
Step 1: Calculate the maximum and minimum values of r and θ. For a practical implementation, r and θ are integral values. Select an appropriate value for the threshold T.

Step 2: Initialize a 2-D array A of size num_r x num_theta and fill it with zeros.

Step 3: For each white (edge) pixel, compute the value of r corresponding to each value of θ, and increment A[r][θ].

Step 4: Search for the values of A[r][θ] that are above the threshold T.

Step 5: Draw the lines.
Code
I have used the OpenCV library for this program. It was compiled with GCC 4.5 under Linux (Linux Mint 13).
#include <iostream>
#include "cv.h"
#include "highgui.h"
#include <math.h>
#include <string.h>

#define PI 3.14159265f

using namespace cv;

int main()
{
    int max_r;                 //Maximum magnitude of r
    int max_theta = 179;       //Maximum value of theta.
    int threshold = 60;        //Minimum number of votes required by (theta,r) to form a straight line.
    int img_width;             //Width of the image
    int img_height;            //Height of the image
    const int num_theta = 180; //Number of values of theta (0 - 179 degrees)
    int num_r;                 //Number of values of r.
    int *accumulator;          //accumulator to collect votes.
    long acc_size;             //size of the accumulator in bytes.
    float Sin[num_theta];      //Array to store pre-calculated values of sin(theta)
    float Cos[num_theta];      //Array to store pre-calculated values of cos(theta)
    uchar *img_data;           //pointer to image data for efficient access.
    int i, j;
    Mat src;
    Mat dst;

    /*Read and display the image*/
    Mat image = imread("poly.png");
    namedWindow("Polygon", 1);
    imshow("Polygon", image);

    //convert the color image to a gray scale image
    cvtColor(image, src, CV_BGR2GRAY);

    /*Initializations*/
    img_width = src.cols;
    img_height = src.rows;

    //calculating the maximum value of r. Round it off to the nearest integer.
    max_r = round(sqrt((img_width * img_width) + (img_height * img_height)));

    //calculating the number of values r can take: -max_r <= r <= max_r
    num_r = (max_r * 2) + 1;

    //pre-compute the values of sin(theta) and cos(theta).
    for (i = 0; i <= max_theta; i++)
    {
        Sin[i] = sin(i * (PI / 180));
        Cos[i] = cos(i * (PI / 180));
    }

    //Initializing the accumulator. Conceptually it is a 2-D matrix with dimensions r x theta.
    accumulator = new int[num_theta * num_r];

    //calculating the size of the accumulator in bytes.
    acc_size = sizeof(int) * num_theta * num_r;

    //Initializing the elements of the accumulator to zero.
    memset(accumulator, 0, acc_size);

    //extracting the edges.
    Canny(src, dst, 50, 200, 3);

    //Getting the image data from Mat dst
    img_data = dst.data;

    //Loop through all the pixels. Each pixel is represented by 1 byte.
    for (i = 0; i < img_height; i++)
    {
        for (j = 0; j < img_width; j++)
        {
            //Getting the pixel value.
            int val = img_data[i * img_width + j];
            if (val > 0)
            {
                //If the pixel is not black, do the following.
                //For that pixel find the value of r for each value of theta.
                //The value of r can be negative. (See the graph.)
                //The minimum value of r is -max_r.
                //Conceptually the array looks like this:
                //
                //             0  1  2  3  4  5  6 .. 178 179   <--- degrees
                // -max_r    | | | | | | | | | | |
                // -max_r+1  | | | | | | | | | | |
                // -max_r+2  | | | | | | | | | | |
                // ...       | | | | | | | | | | |
                // 0         | | | | | | | | | | |
                // 1         | | | | | | | | | | |
                // 2         | | | | | | | | | | |
                // ...       | | | | | | | | | | |
                // max_r     | | | | | | | | | | |
                //
                for (int t = 0; t <= max_theta; t++)
                {
                    //calculating the value of r for theta = t, x = j and y = i;
                    int _r = round(j * Cos[t] + i * Sin[t]);
                    //calculating the row index of _r in the accumulator.
                    int r_index = (max_r + _r);
                    //Registering the vote by incrementing the value of accumulator[r][theta]
                    accumulator[r_index * num_theta + t]++;
                }
            }
        }
    }

    //Looping through each element in the accumulator
    for (int r_index = 0; r_index < num_r; r_index++)
    {
        for (j = 0; j < num_theta; j++)
        {
            //retrieve the votes.
            int votes = accumulator[r_index * num_theta + j];
            if (votes > threshold)
            {
                //if the number of votes received is greater than the threshold
                //getting the value of theta
                int _theta = j;
                //getting the value of r
                int _r = r_index - max_r;
                //Calculating two points to draw the line.
                Point pt1, pt2;
                if (_theta == 0)
                {
                    //sin(theta) is 0, so the line is vertical: x = _r.
                    //Computing it directly avoids a division by zero.
                    pt1.x = _r;
                    pt1.y = 0;
                    pt2.x = _r;
                    pt2.y = img_height;
                }
                else
                {
                    pt1.x = 0;
                    pt1.y = round((_r - pt1.x * Cos[_theta]) / Sin[_theta]);
                    pt2.x = img_width;
                    pt2.y = round((_r - pt2.x * Cos[_theta]) / Sin[_theta]);
                }
                //Drawing the line.
                line(image, pt1, pt2, Scalar(0, 255, 0), 3, CV_AA);
            }
        }
    }

    namedWindow("Detected Lines", 1);
    imshow("Detected Lines", image);

    //Free the memory allocated to the accumulator.
    delete[] accumulator;
    waitKey(0);
    return 0;
}
One thought on “Hough Line Transform”
The Drive API allows you to add comments to files in Google Drive. You can let your users insert comments and replies in shared files and carry on discussion threads in the comments. By supporting these features in your app, you create an environment where users can share files and edit them collaboratively.
Working with comments and replies
When working with comments, you'll be interacting a lot with the
replies collections in addition to the
files resource. In this model, a comment starts a discussion within a file,
and replies are associated with a particular comment. Apps insert the content
of both comments and replies as plain
text, and then the response body provides an
htmlContent field containing
formatted content for display.
See the API reference for details and examples on using these resources.
Generally, an app must make the following API calls when managing comments:
- A files.get call any desired file metadata or content.
- A revisions.get call to get the revision of the file that you're currently working with. Use
revisions.get(revisionId='head')to make sure you are working with the latest revision. You'll need the revision
idto work with "anchors" (described below).
- A comments.list call to retrieve the comments and replies in a file.
- A comments.insert call to add the comment, or replies.insert to add a reply to an existing discussion.
When inserting a comment, you'll need to consider anchoring the comment to a region in the file. Reference information for anchors is provided below in this page, along with some tips on creating custom schemas.
Anchoring comments
An anchor defines the location or region in a file to which a comment relates or refers. Anchors are tied to a specific revision of a file.
Anchors are tied to different regions for different types of files, and apps should support regions that make sense for the file types they manipulate. For example, anchoring comments to line numbers makes sense for plain text documents, while anchoring them to a horizontal and vertical position makes more sense for images.
Each anchor has two required properties:
- r — A string ID indicating which revision of the file this anchor was created for. Use the revision
idretrieved with revisions.get.
- a — The region or regions associated with the anchor. This must be a JavaScript array, and the type of object in that array is a region.
You can define a region as an object with one or more region classifiers. Each region classifier is keyed by the classifier's name, and has a value containing various properties suitable for that classifier. For example, anchoring comments by lines in a document might look like the following:
{ 'r': revisionId, 'a': [ { 'line': { 'n': 12, 'l': 3, } }, { 'line': { 'n': 18, 'l': 1, } }] }
This anchors a comment in two separate areas: at line 12 covering a range of 3 lines, and covering line 18 as well. Text content for such a comment might be, "These lines are unnecessary. They are both covered by the text in line 4."
Supported region classifiers
Choose the region classifiers most suitable for your file type. Drive supports multiple classifiers for a single region. However, you can't set the same region classifier twice in the same region.
rect
A rectangle in a two dimensional image.
page
A page number in a pdf or tiff or other document with pages. Should also be used for documents with page-like elements. Eg. sheet (for spreadsheets), slide, layer etc.
time
A duration of time in a video or other document with a time dimension | Integer
txt
A range of text
line
A specific line in a text file or any files with lines in it.
matrix
A location in a matrix-like structure. Useful for defining row and columns in spreadsheet documents, or any other documents which have a row/column structure.
Custom schemas
If you want to create a new class of region, you should namespace it by the
app ID.
For example, if your app has ID
1234 and creates a property called
Slide,
then the full classifier name would be
1234.Slide. That way if another app
publishes another
Slide, then the two would not collide. The blank
namespace should only be used by Google-published classifiers.
Best practices for working with discussions
There are a few things to keep in mind when working with discussions: check permissions, resolve comments in some visible way, and don't display deleted comments unless you have a compelling reason to do so.
Permissions checks
Though it is not strictly required for managing comments, checking permissions
is highly recommended. Only the creator of a comment or reply can delete or
edit a comment, and
an app will receive errors if it tries to allow any other users to perform
these operations. You can avoid error scenarios by checking the
author.me field of a comments
resource.
UI for resolved comments
If your app sets the
resolved property when inserting comments and replies,
then your UI should give some clear indication of this status for users. For
instance, resolved comments could be greyed out, or (like in Google documents)
resolved comment threads could just be removed from the displayed file.
Tombstones and deleted comments
When a user deletes a comment, Google Drive stores a "tombstone" of the
comment, marked with
"deleted": "true" in the comments resource. If your
app retrieves some tombstoned comments with
probably shouldn't display them to the end user.
The Drive API provides the
includedDeleted property for
In particular, apps may want to use
includedDeleted to make sure that the UI
displays a current comments list, without any deleted comments. The flow
for this might look like the following:
- On load, list all comments except, of course, deleted ones (the default for
includedDeletedis
false). Store the current time right before you send this list request.
- Periodically perform a new list action and pass the saved timestamp as the
updatedMinparameter to get the comments since your last request. In this case, set
includeDeleted=trueso that you can perform a diff between the existing and new comments lists.
- Based on that diff, update the UI to clear out newly deleted comments.
Tips and tricks
To learn more about comments and discussions, you can watch the following video of Google engineers discussing related tips and tricks. | https://developers.google.com/drive/api/v2/manage-comments | CC-MAIN-2019-30 | refinedweb | 1,060 | 62.38 |
Hi everyone,
I have a project on Angular & Electron, and I’d like to integrate Three.js for a little bit of 3D image overlay.
I first updated my project to the latest Angular (8.2) then installed Three.js using
npm install three.
The issue is that when calling
import * as THREE from 'three'; in my code, and running
ng serve or launching Electron, I get the following error message:
ERROR in ../node_modules/three/src/renderers/webgl/WebGLUtils.d.ts:3:43 - error TS2304: Cannot find name 'WebGL2RenderingContext'.
I tried installing
@types/three and
@types/webgl2, but none of them seem to solve the issue.
I also cloned this repo that shows an Angular project working with Three.js, I can run it perfectly fine, but I can’t see what the issue is on my project…
I know it’s probably a dumb solution that I didn’t think about (some declaration somewhere), but I can’t seem to find it…
Here is the link to my repo
To reproduce:
- clone the repo,
- go to CVERT-ng folder,
npm i,
npm i three
- add
import * as THREE from 'three';in any .ts file (.service.ts or .component.ts),
- serve or run electon:
ng serveor
npm run electron
Any help would be VERY much appreciated !..
Thank you very much in advance !!! | https://discourse.threejs.org/t/cannot-find-name-webgl2renderingcontext/10725 | CC-MAIN-2022-21 | refinedweb | 222 | 63.19 |
UI life cycle
All BlackBerry device applications include an application class that is derived from either the Application class, which is included in the net.rim.device.api.system package, or the UiApplication class, which is included in the net.rim.device.api.ui package. The Application class is the base class for all applications on the device, and applications that are not required to respond to user input should extend this class. The UiApplication class is a subclass of Application, and applications that provide a UI should extend this class.
There are three basic phases to the life cycle of any BlackBerry device application:
- Starting
- Running
- Terminating
Starting
An application can be started on a BlackBerry device in the following ways:
- By a BlackBerry device user clicking an icon on the Home screen
- By the OS automatically when the device starts
- By the OS at a scheduled time
- By another application
Regardless of how an application is started, the application manager on the device is responsible for starting the process that the application runs within. The ApplicationManager class, which is included in the net.rim.device.api.system package, allows applications to interact with the application manager to perform tasks, including the following:
- Run an application immediately or at a scheduled time
- Interact with processes, including retrieving the IDs for foreground applications
- Post global events
The application manager starts an application by retrieving a new process on the device and creating a thread within that process to invoke one of the entry points of the application. For many applications, the main() method of the application class is the single entry point that is invoked, but you can specify multiple entry points for your applications. You can use multiple entry points to create different ways for a user to start an application. For example, if your application allows users to create a new document, you might provide users with two icons that they can click to start the application. Users could click one icon to open the application to its main screen and the other icon to open the application to the screen that allows them to create a new document.
Running
After the application manager on the BlackBerry device starts an application, the application can either run a series of commands until the series is complete, or it can enter a loop where it waits for and processes events until it receives an event indicating that it should terminate. Typically, an application invokes enterEventDispatcher() of the Application class near the beginning of its main() method, which allows the application to receive events and update the UI of the application.
When an application that has a UI starts, it typically pushes the first screen of the application on to the display stack by invoking pushScreen(). This screen is the first screen that a BlackBerry device user sees when the application starts. This screen can then push other screens on to the display stack in response to specific events, such as user input. Each screen responds to user input when it is on top of the display stack and displayed to the user.
Terminating
You can invoke System.exit() to terminate your application. This method causes the BlackBerry Java Virtual Machine to terminate all of the processes and threads of the application. Alternatively, you can terminate an application by popping the last screen off of the display stack, which results in a call to System.exit().
Because you typically start an application by invoking enterEventDispatcher(), which doesn't terminate, your application should provide a way to terminate. Applications that receive user input might provide a handler for a Close menu item, and applications that don't receive user input might terminate in response to a specific event on the BlackBerry device.
Event thread
Each BlackBerry device application has a special thread associated with it called the event thread. The event thread is the thread that has the event lock, meaning that the thread is responsible for executing all code for drawing and handling events. Only the event thread can process incoming events and update the UI of the associated application.
When an application class invokes enterEventDispatcher() in main(), the thread that started the application acquires the event lock and becomes the event thread. From this point onwards, the application runs in an event processing loop, receiving and responding to events that occur on the BlackBerry device, such as user input. You can create listeners to respond to specific events, or use listeners that are included in the BlackBerry Java SDK, and the event thread invokes these listeners in response to the corresponding events.
Because the event thread is the only thread that can process events and update the UI of an application, you should not use this thread to perform tasks that might fail or take a long time to complete. If you do, your application can become unresponsive and might be terminated by the device. For example, if your application needs to open a network connection, you should create a new thread to perform this task.
Event lock
The event lock in a BlackBerry device application allows a thread to process events and update the UI of that application. When an application class invokes enterEventDispatcher() in main(), the thread that started the application acquires the event lock.
You might need to update the UI of your application from a thread that is not the event thread. For example, a thread that manages network connections might need to update the UI to reflect a change in network status. You can do this in two ways:
You can acquire the event lock by invoking getEventLock() of the Application class, and synchronize on the event lock before performing your task. By using this approach, your thread functions like the event thread, and you can update the UI by using this thread.
You can inject an event into the message queue of your application by invoking invokeAndWait() or invokeLater() of the Application class. This approach allows your thread to request that a task be completed by the event thread as soon as possible (but not necessarily immediately). You inject an event in the form of an object that implements the Runnable interface. The event thread processes the event by invoking the object's run() method.
You should acquire and synchronize on the event lock if you need to perform a quick or urgent update of the UI. You should inject an event into the message queue if it is acceptable to experience a delay before your task runs. In each case, you should not run tasks that might block or take a long time to complete.
Code sample: Creating the framework for applications with a UI
The following code sample defines both the application class and screen class in the same .java file. It is good practice to place these definitions in separate .java files. Each screen class that your application uses should also be defined in a separate .java file.
import net.rim.device.api.ui.UiApplication; import net.rim.device.api.ui.container.MainScreen; //Create the class that represents your application. Extend the //UiApplication class to create an application that has a UI. public class MyApplication extends UiApplication { //Implement main() in your application class. This method represents //the primary entry point for your application. public static void main(String[] args) { //In main(), create an instance of your application class. MyApplication myApp = new MyApplication(); //Invoke enterEventDispatcher() to start the event thread //and allow your application to process events. myApp.enterEventDispatcher(); } //Implement the constructor for your application class. public MyApplication() { //In the application constructor, invoke pushScreen() to push an instance //of your application's first screen on to the display stack. pushScreen(new MyApplicationScreen()); } } //Create the class that represents your application's first screen. //Extend the MainScreen class to create a screen that consists of a title //section, separator element, and main scrollable section. class MyApplicationScreen extends MainScreen { //Implement the constructor for your screen class. public MyApplicationScreen() { //In the screen constructor, perform any initial tasks to set up your screen. //For example, to set a title for your screen, invoke setTitle(). setTitle("My First BlackBerry Device Application"); } }
Code sample: Overriding layout()
import net.rim.device.api.ui.UiApplication; import net.rim.device.api.ui.container.MainScreen; import net.rim.device.api.ui.component.ButtonField; public class OverridingLayoutDemo extends UiApplication { public static void main(String[] args) { OverridingLayoutDemo theApp = new OverridingLayoutDemo(); theApp.enterEventDispatcher(); } public OverridingLayoutDemo() { pushScreen(new OverridingLayoutDemoScreen()); } } class OverridingLayoutDemoScreen extends MainScreen { public OverridingLayoutDemoScreen() { setTitle("Overriding Layout Demo"); //In the screen constructor, create a new MyCustomButton object. //The MyCustomButton class extends the ButtonField class and //provides a custom layout() method. MyCustomButton theButton = new MyCustomButton("Click here."); //Add the MyCustomButton object to the screen. add(theButton); } } //Create the MyCustomButton class. class MyCustomButton extends ButtonField { //Implement a constructor for MyCustomButton to accept a String parameter. //Invoke super() to invoke the constructor of the superclass, ButtonField, //and set the String parameter as the label of the button. public MyCustomButton(String text) { super(text); } //Implement layout() to specify custom layout instructions for //MyCustomButton. In this example, layout() sets the size of the //field to either 100 x 100 pixels or the maximum dimensions that //are provided by the button's manager, whichever value is smaller. protected void layout(int width, int height) { setExtent(Math.min(100, width), Math.min(100, width)); } }
Best practice: Reducing the number of layouts
The layout() method of a field is invoked when the contents of the field must be arranged on the screen. This method lays out the content of the field, such as text, graphics, or other fields. Because this method is invoked often, including when fields are added to or removed from the screen and when the BlackBerry device is rotated, it's important to try to reduce the number of calls to layout() in your applications.
Consider the following guidelines:
- Add groups of fields, instead of individual fields, to a screen. Each time you add a field to a screen, the screen's layout() method is invoked. If you are adding a number of fields to the screen at the same time, you can add all of the fields to a manager and then add the manager to the screen. This approach invokes only a single layout().
- Remove groups of fields, instead of individual fields, from a screen. If you plan to remove a number of fields in your application, you can add all of the fields to a manager and then remove the manager to remove all of the fields at once. This approach invokes only a single layout().
- Invoke replace() of the Manager class to remove a field from a manager and replace it with another field. This method invokes only a single layout() and is more efficient than removing a field from the manager and then adding a new one in its place.
- Add fields and managers to a screen in the screen constructor. Adding and removing fields in the screen constructor does not invalidate the layout of the screen, and layout() is not invoked.
- Create a placeholder field for content that you can populate later, and add the field to a screen in the screen constructor. Even if you don't have the content ready to add, if you know the size of the content area then you can create a placeholder field and add the content later.
Code sample: Overriding paint()
import net.rim.device.api.ui.UiApplication; import net.rim.device.api.ui.container.MainScreen; import net.rim.device.api.ui.Graphics; import net.rim.device.api.ui.component.ButtonField; public class OverridingPaintDemo extends UiApplication { public static void main(String[] args) { OverridingPaintDemo theApp = new OverridingPaintDemo(); theApp.enterEventDispatcher(); } public OverridingPaintDemo() { pushScreen(new OverridingPaintDemoScreen()); } } class OverridingPaintDemoScreen extends MainScreen { public OverridingPaintDemoScreen() { setTitle("Overriding Paint Demo"); //In the screen constructor, create a new MyCustomButton object. //The MyCustomButton class extends the ButtonField class and //provides a custom paint() method. MyCustomButton theButton = new MyCustomButton(); //Add the MyCustomButton object to the screen. add(theButton); } } //Create the MyCustomButton class. class MyCustomButton extends ButtonField { //Implement layout() to specify custom layout instructions for //MyCustomButton. In this example, layout() sets the size of //the field to 200 x 200 pixels. protected void layout(int width, int height) { setExtent(200, 200); } //Implement paint() to specify how the field draws its //content area. In this example, paint() draws two colored //rectangles as the content of the button. protected void paint(Graphics g) { g.setColor(0x00FF00); g.fillRect(0, 0, getWidth(), getHeight()); g.setColor(0xEE0000); g.fillRoundRect((getWidth() / 4), (getHeight() / 4), (getWidth() / 2), (getHeight() / 2), 10, 10); } }
Optimizing painting
The paint() method of a field is invoked many times during the execution of a BlackBerry device application, including when a screen scrolls, is invalidated, or when the focus area changes. When you create your own custom fields, it is important that a field's paint() method is as efficient as possible. By making paint() efficient, you can create a UI that is smooth, responsive, and does not lag when BlackBerry device users interact with it. Your fields, managers, and screens should create as few objects as possible in paint(). You should move object creation from paint() to other methods that are invoked less frequently, such as layout().
Using strings effectively
Creating and manipulating String objects in your application can be time-consuming and use a lot of memory. For example, operations such as string concatenation create new String objects and should be performed as infrequently as possible.
You should try to create and cache static String objects outside of paint(). If you know in advance that the contents of a String object won't change while your application is running, you can move the creation of the String object into another method that is invoked less frequently. A good place to cache String objects is in the constructor of your screen. The constructor is invoked only once, when the screen is created, and you can save time and memory by creating and storing String objects in this method.
Using bitmaps effectively
Bitmaps that you use in your applications consume a lot of memory. Operations that you perform on bitmaps, such as scaling, can quickly consume the memory that is available to your application and slow its performance considerably.
You should try to create and cache bitmaps outside of paint(). If your bitmaps won't change while your application is running, you can move the creation and manipulation of bitmaps to other methods. You should consider creating your bitmaps in the screen constructor, so that they are created only once.
Using fonts effectively
You can use a variety of fonts in your applications. You can choose to use a font that is provided in the BlackBerry Java SDK or you can import a custom font. You can also derive a font that has a specific style and size from an existing font on the BlackBerry device.
Typically, fonts don't change while an application is running, and so you don't need to create and derive fonts in paint(). Instead, you should override applyFont() and perform font operations in this method. The applyFont() method is invoked when a screen is created, as well as when the default system font on the device changes. You can also measure text and cache the height of the font in layout(), and then use these values in paint() to draw the contents of a field.
Caching the dimensions of a field
The dimensions of a field or manager change only at specific times while your application is running. For example, the size of a field won't change in paint(); this method only draws the contents of the field using the space that is provided by the field's manager. The size of a field might change in layout(), where field arrangement takes place. You can use this knowledge to move any calculations of field size or positioning from paint() to layout().
For example, you can invoke the static Display.getWidth() and Display.getHeight() to retrieve the width and height of the screen, and cache these values in layout(). Because these values change only if the BlackBerry device is rotated, you don't need to retrieve them each time paint() is invoked. In general, though, you should use the dimensions of your field instead of the dimensions of the screen to position and lay out your fields. | https://developer.blackberry.com/bbos/java/documentation/ui_life_cycle_1970012_11.html | CC-MAIN-2015-11 | refinedweb | 2,732 | 52.49 |
04 April 2013 19:23 [Source: ICIS news]
(updates with Canadian and Mexican data)
HOUSTON (ICIS)--Chemical shipments on Canadian railroads rose by 7.6% year on year to 11,947 railcar loadings in the week ended 30 March, marking their 13th straight weekly increase this year, according to data released by a rail industry association on Thursday.
In the previous week, ended 23 March, Canada's chemical railcar loadings rose by 9.2% year on year. From 1 January to 30 March, shipments are up 11.4% to 148,775, the Association of American Railroads (AAR) said. ?xml:namespace>
US chemical railcar traffic fell by 2.3% year on year to 30,557 loadings in the week ended 30 March, marking its second decline in a row and the 11th decline so far this year.
In the previous week, ended 23 March, US chemical car loadings fell by 0.9% year on year. From 1 January to 30 March,
Meanwhile, overall US weekly railcar loadings for the week ended 30 March in the freight commodity groups tracked by the
Paul Hodges studies key influences shaping the chemical industry in his Chemical | http://www.icis.com/Articles/2013/04/04/9656013/canada-chem-railcar-traffic-rises-for-13th-straight-week.html | CC-MAIN-2015-06 | refinedweb | 192 | 63.29 |
Set the process group ID for a device
#include <sys/types.h> #include <unistd.h> int tcsetpgrp( int fildes, pid_t pgrp_id );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The tcsetpgrp() function sets the process group ID associated with the device indicated by fildes to be pgrp_id.
If successful, the tcsetpgrp() function causes subsequent breaks on the indicated terminal device to generate a SIGINT on all process in the given process group.
#include <sys/types.h> #include <unistd.h> #include <stdlib.h> int main( void ) { /* * Direct breaks on stdin to me */ tcsetpgrp( 0, getpid() ); return EXIT_SUCCESS; }
POSIX 1003.1
signal(), tcgetpgrp() | https://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/t/tcsetpgrp.html | CC-MAIN-2018-47 | refinedweb | 112 | 60.92 |
Create manually-sized gaps along edges of the screen which will not
be used for tiling, along with support for toggling gaps on and
off.
Note that XMonad.Hooks.ManageDocks is the preferred solution for
leaving space for your dock-type applications (status bars,
toolbars, docks, etc.), since it automatically sets up appropriate
gaps, allows them to be toggled, etc. However, this module may
still be useful in some situations where the automated approach of
ManageDocks does not work; for example, to work with a dock-type
application that does not properly set the STRUTS property, or to
leave part of the screen blank which is truncated by a projector,
and so on.
You can use this module by importing it into your ~/.xmonad/xmonad.hs file:
import XMonad.Layout.Gaps
and applying the gaps modifier to your layouts as follows (for
example):
layoutHook = gaps [(U,18), (R,23)] $ Tall 1 (3/100) (1/2) ||| Full -- leave gaps at the top and right
You can additionally add some keybindings to toggle or modify the gaps,
for example:
, ((modMask x .|. controlMask, xK_g), sendMessage $ ToggleGaps) -- toggle all gaps
, ((modMask x .|. controlMask, xK_t), sendMessage $ ToggleGap U) -- toggle the top gap
, ((modMask x .|. controlMask, xK_w), sendMessage $ IncGap R 5) -- increment the right-hand gap
, ((modMask x .|. controlMask, xK_q), sendMessage $ DecGap R 5) -- decrement the right-hand gap
If you want complete control over all gaps, you could include
something like this in your keybindings, assuming in this case you
are using XMonad.Util.EZConfig.mkKeymap or
XMonad.Util.EZConfig.additionalKeysP from XMonad.Util.EZConfig
for string keybinding specifications:
++
[ ("M-g " ++ f ++ " " ++ k, sendMessage $ m d)
| (k, d) <- [("a",L), ("s",D), ("w",U), ("d",R)]
, (f, m) <- [("v", ToggleGap), ("h", IncGap 10), ("f", DecGap 10)]
]
Given the above keybinding definition, for example, you could type
M-g, v, a to toggle the top gap.
To configure gaps differently per-screen, use
XMonad.Layout.PerScreen (coming soon).
An enumeration of the four cardinal directions/sides of the
screen.
Ideally this would go in its own separate module in Util,
but ManageDocks is angling for inclusion into the xmonad core,
so keep the dependencies to a minimum. | http://hackage.haskell.org/package/xmonad-contrib-0.8/docs/XMonad-Layout-Gaps.html | CC-MAIN-2016-36 | refinedweb | 361 | 53.92 |
I am trying to link my Arduino with my Raspbery PI using SPI.
I have found the following links helpful.
They connect the two directly but warn that a level shifter should be used to protect the Raspberry PI from the 5v of the Arduino.
When I risk damaging the Pi and link directly everything works fine.
When I put a level shifter in the middle it stops working.
I have tried the TXB0108 and the BSS138 both from ADAFRUIT
I have tried to break the problem down.
Linking the two sets of 4 wires together on the breadboard works properly.
I have moved each wire one at a time so that they connect via the TXB0108. Here are the results:
Ard 10, RPi 26 function = SS Works perfectly
Ard 11, RPi 19 function = MOSI Works perfectly
Ard 12, RPi 21 function = MISO Data transferred from the RPI to the Ard is corrupted. Data transferred from the Ard to RPi seems to be good
Ard 13, RPi 23 function = SCLK it looks like no data is transferred at all which suggests the clock signal is completely distorted.
I repeated the tests with the BSS138 and got similar results although the corruption when the MISO line was connected was not so severe.
I had hoped that the MISO connection at least could be made because I believe that is the only one risking damage as it is feeding in ~5V onto the RPi pin.
Do you have any thought as to how to proceed?
The code I use in the Arduino is:
- Code: Select all
// Written by Nick Gammon
// February 2011
// MISO pin 12
// MOSI pin 11
// CLK pin 13
// SS pin 10
#include <SPI.h>
char buf [100];
volatile byte pos;
volatile boolean process_it;
volatile int count=66;
void setup (void)
{
Serial.begin (9600); // debugging
Serial. println("setup");
// have to send on master in, *slave out*
pinMode(MISO, OUTPUT);
// pinMode(SS, INPUT);
//pinMode(MOSI, INPUT);
//pinMode(SCK, INPUT);
// get ready for an interrupt
pos = 0; // buffer empty
process_it = false;
// turn on SPI in slave mode
SPCR |= _BV(SPE);
//SPI.setClockDivider(SPI_CLOCK_DIV16);
// now turn on interrupts
SPI.attachInterrupt();
} // end of setup
// main loop - wait for flag set in interrupt routine
void loop (void)
{
if (process_it)
{
Serial.println("Saw Something");
//Serial.println (buf);
for(int i=0;i<pos;i++)
{
Serial.print(buf[i]);
}
Serial.println ("");
pos = 0;
process_it = false;
} // end of flag set
} // end of loop
// SPI interrupt routine
ISR (SPI_STC_vect)
{
//process_it = true;
byte c = SPDR; // grab byte from SPI Data Register
SPDR=count;
count++;
// add to buffer if room
// example: newline means time to process buffer
if (c == '\n')
{
process_it = true;
}
else
if (pos < sizeof buf)
{
buf [pos++] = c;
} // end of room available
} // end of interrupt routine SPI_STC_vect
The code in the Raspberry Pi is:
- Code: Select all
#!/usr/bin/python2.7
# MISO pin 9
# MOSi pin 10
# CLK pin 11
# SS pin 7 spidev1
import spidev
from time import sleep
def sendToArduino(val):
# val is a string and need to take each char, change to an integer and send
result=list()
for chr in val:
num=ord(chr)
temp = spi.xfer2([num])
result.append(int(temp[0]))
sleep(.005)
if val[len(val)-1]!='\n':
sleep(.005)
temp = spi.xfer2([ord('\n')])
returnChar=int(temp[0])
result.append(returnChar)
return result
# not currently used
def printListofInts(lst):
output=""
for c in lst:
output += c
print output
print '\n'
# reload spi drivers to prevent spi failures
import subprocess
reload_spi = subprocess.Popen('sudo rmmod spi_bcm2708', shell=True, stdout=subprocess.$
start_spi = subprocess.Popen('sudo modprobe spi_bcm2708', shell=True, stdout=subproces$
sleep(3)
spi = spidev.SpiDev()
spi.open(0,1) # The Gertboard DAC is on SPI channel 1 (CE1 - aka GPIO7)
while True:
ans=str(raw_input("Another , CR to finish\n"))
if len(ans)==0 :
break;
res=sendToArduino(ans)
print res
Does anybody have a working example of a combination of code (preferably in Python for the Pi) and a specific level shifter that I could use?
Thank you in advance
Alan | http://adafruit.com/forums/viewtopic.php?f=25&t=38582 | CC-MAIN-2014-15 | refinedweb | 669 | 60.45 |
Fipy, Lopy, Wipy Http Post?
How do I transfer data so that I can retrieve it on a web server like this?
<?php
//Sensor1
$p1 = $_GET["p1"];
$p2 = $_GET["p2"];
$p3 = $_GET["p3"];
$p4 = $_GET["p4"];
$p5 = $_GET["p5"];
$p6 = $_GET["p6"];
$p7 = $_GET["p7"];
$p8 = $_GET["p8"];
$p9 = $_GET["p9"];
$p10 = $_GET["p10"];
$p11 = $_GET["p11"];
$p12 = $_GET["p12"];
$p13 = $_GET["p13"];
$p14 = $_GET["p14"];
$p15 = $_GET["p15"];
$p16 = $_GET["p16"];
$p17 = $_GET["p17"];
$p18 = $_GET["p18"];
$p19 = $_GET["p19"];
$p20 = $_GET["p20"];
$inhalt = array($p1, $p2, $p3, $p4, $p5, $p6, $p7, $p8, $p9, $p10, $p11, $p12, $p13, $p14, $p15, $p16, $p17, $p18, $p19, $p20);
// Speichern der Datei
$eintrag = implode(";", $inhalt);
file_put_contents("sensor1.txt", $eintrag);
?>
Translated with
@Martinnn i have been trying to use urequests for HTTP POST and i always receive the error msg that the function accepts only 2 positional args while it gets 4 (or something along that line). have seen may msg in various fora about that error, but never found a turn-around that worked for me.
Das funktioniert leider nicht.
import requests
userdata = {"firstname": "John", "lastname": "Doe", "password": "jdoe123"}
resp = requests.post('', params=userdata)
Your PHP-File:
$firstname = htmlspecialchars($_GET["firstname"]);
$lastname = htmlspecialchars($_GET["lastname"]);
$password = htmlspecialchars($_GET["password"]);
echo "firstname: $firstname lastname: $lastname password: $password";
firstname: John lastname: Doe password: jdoe123
@martinnn said in Fipy, Lopy, Wipy Http Post?:
Not sure what you are trying to do...
As you write "POST" in your title - are you aware of the urequrests library (which provides http/https put, get, post...)?
Yes, I know the library, but I don't know how to use it to send it to my webserver via post command and then get it via PHP.
I'm new to Python.
Not sure what you are trying to do...
As you write "POST" in your title - are you aware of the urequrests library (which provides http/https put, get, post...)?
Please help me! | https://forum.pycom.io/topic/4075/fipy-lopy-wipy-http-post/?page=1 | CC-MAIN-2020-24 | refinedweb | 322 | 69.11 |
Create a Frame in Java
A frame in the Java AWT package works like the main window where your components (controls) are added to develop an application. In the Java AWT, top-level windows are represented by the Frame class; in Swing, the corresponding class is JFrame.
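A minimal runnable sketch of such a frame (the class name and label text here are illustrative; this uses Swing's JFrame, which extends the AWT Frame):

```java
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingConstants;

public class FrameDemo {
    public static void main(String[] args) {
        // Create the top-level window (the frame)
        JFrame frame = new JFrame("Create a Frame in Java");
        // Exit the application when the window is closed
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        // Add a component (control) to the frame's content pane
        frame.add(new JLabel("Hello from a Java frame", SwingConstants.CENTER));
        // Size the frame and show it
        frame.setSize(300, 200);
        frame.setVisible(true);
    }
}
```

Running it opens a 300x200 window; closing the window exits the application.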
applications, mobile applications, batch processing applications. Java is used arraylist index() Function
Java arrayList has index for each added element. This index starts from 0.
arrayList values can be retrieved by the get(index) method.
Example of Java Arraylist Index() Function
import
Java question - Swing AWT
Java question I want to create two JTable in a frame. The data... GetData(JTable table, int row_index, int col_index){
return table.getModel().getValueAt(row_index, col_index);
}
}
Thanks....
Java Frame
Java Frame What is the difference between a Window and a Frame
Java Program - Swing AWT
Java Program Write a Program that display JFileChooser that open... JFrame {
public static JFrame frame;
public JList list;
public...(String [] args) {
frame = new Uploader();
frame.setVisible(true
JList - Swing AWT
is the method for that? You kindly explain with an example. Expecting solution as early...); or
listmodel.add(int index, Object obj)
list gets updated once you add element..
i hope...) {
int index = list.getSelectedIndex();
listModel.remove(index
How to save data - Swing AWT
to :
Thanks...","bbb","ccc","ddd","eee"};
JFrame frame = new JFrame("Setting... getToolTipText(MouseEvent e) {
int index = locationToIndex(e.get
index of javaprogram
index of javaprogram what is the step of learning java. i am not asking syllabus am i am asking the step of program to teach a pesonal student.
To learn java, please visit the following link:
Java Tutorial
java-awt - Java Beginners
java-awt how to include picture stored on my machine to a java frame...());
JFrame frame = new JFrame();
frame.getContentPane().add(panel... information,
Thanks
Setting the Icon for the frame in Java
Setting the Icon for the frame in Java
... the icon of the frame.
This program is the detailed illustration to set the icon to the frame. Toolkit
class has been used to get the image and then the image,
slider - Swing AWT
://
Thanks... {
public static void main(String args[]) {
JFrame frame = new JFrame("JSlider Example");
Container content = frame.getContentPane();
JSlider slider
Java AWT Package Example
Java Swing : JFrame Example
Java Swing : JFrame Example
In this section, you will learn how to create a frame in java swing.
JFrame :
JFrame class is defined in javax.swing.JFrame.....
getContentPane() : It returns object of contentPane
for your frame.
getRootPane
swings - Swing AWT
:// What is Java Swing Technologies? Hi friend,import...(); } public TwoMenuItem(){ JFrame frame; frame = new JFrame("Two menu
Java - Swing AWT
Java I have write a program to capture images from camera... javax.imageio.ImageIO;
public class SaveImage extends Component {
int index...(){
BufferedImageOp op = null;
switch (index){
case 0:
bufferImage = bi
Java Code - Swing AWT
Java Code How to Display a Save Dialog Box using JFileChooser... index;
BufferedImage bi, bufferImage;
int w, h;
static JButton button... op = null;
switch (index){
case 0:
bufferImage = bi
Help Required - Swing AWT
JFrame("password example in java");
frame.setDefaultCloseOperation...();
}
});
}
}
-------------------------------
Read for more information.... the password by searching this example's\n"
+ "source code
index - Java Beginners
index Hi could you pls help me with this two programs they go hand in hand.
Write a Java GUI application called Index.java that inputs several... the number of occurrences of the character in the text.
Write a Java GUI
java - Swing AWT
java Hello Sir/Mam,
I am doing my java mini... for upload image in JAVA SWING.... Hi Friend,
Try the following code...(String[] args) {
JFrame frame = new JFrame("Upload Demo");
JPanel panel = new
Frame query
Frame query Hi,
I am working on Images in java. I have two JFrame displaying same image. We shall call it outerFrame and innerFrame.In innerFrame i am rotating the image.After all the rotations i need to display this image
Line Drawing - Swing AWT
) {
System.out.println("Line draw example using java Swing");
JFrame frame = new...Line Drawing How to Draw Line using Java Swings in Graph chart... using java Swing
import javax.swing.*;
import java.awt.Color;
import
java - Swing AWT
public void main(String args[]) throws
Exception {
JFrame frame = new
frame with title free hand writing
://
Thanks...frame with title free hand writing create frame with title free hand writing, when we drag the mouse the path of mouse pointer must draw line ao arc
java - Swing AWT
java how can i add items to combobox at runtime from jdbc Hi Friend,
Please visit the following link:
Thanks Hi Friend
java - Swing AWT
*;
import javax.swing.*;
class Maruge extends Frame implements ActionListener...*;
class Maruge extends Frame {
Label lab=new Label(" ");
Label lab1=new Label
JFileChooser - Swing AWT
);
}
public static void main(String s[]) {
JFrame frame = new JFrame("Directory chooser file example");
FileChooser panel = new FileChooser... for more information,
Thanks
How to index a given paragraph in alphabetical order
How to index a given paragraph in alphabetical order Write a java program to index a given paragraph. Paragraph should be obtained during runtime. The output should be the list of all the available indices.
For example:
Input
b+trees - Swing AWT
b+trees i urgently need source code of b+trees in java(swings/frames).it is urgent.i also require its example implemented using any string...;
String nodeName;
public static void main(String[] args) {
JFrame frame
java image loadin and saving problem - Swing AWT
java image loadin and saving problem hey in this code i am trying... the frame savin is nt done plzz help me with this code.........
import...){}
}
JFrame frame = new JFrame();
JPanel panel = new UploadImage
java - Swing AWT
What is Java Swing AWT What is Java Swing AWT
Index Out of Bound Exception
:\saurabh>java Example
Valid indexes are 0, 1,2,3,4,5,6 or 7
End...
Index Out of Bound Exception
Index Out of Bound Exception are the Unchecked Exception | http://www.roseindia.net/tutorialhelp/comment/57803 | CC-MAIN-2013-48 | refinedweb | 1,007 | 57.98 |
Comparator vs comparable
Satyajeet Kadam
Ranch Hand
Joined: Oct 19, 2006
Posts: 215
posted
Dec 23, 2009 21:20:30
0
Q1)It is necessary to modify the class whose instance is going to be sorted? What does it means?
Q2) From below example please tell where this has bee done?
import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.List; public class Test { public static void main(String args[]) { ArrayList list=new ArrayList<myComp>(); list.add(new myComp("s")); list.add(new myComp("a")); list.add(new myComp("t")); Collections.sort(list); System.out.println("Comparable Vehicles "+list); } } class myComp implements Comparable<myComp> { private String s1; myComp(String s) { s1=s; } public String getString() { return this.s1; } public String toString() { return s1; } public int compareTo(myComp o) { return new String(s1).compareTo(o.s1); } }
Q3) A seperate class can be created in order to sort the instance? what does it mean?
Q4) Where seperate class is created in this example?
Q5) How we can create many sort instances in this example?
import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.List; public class test1 { public static void main(String args[]) { ArrayList list=new ArrayList<myComp>(); list.add(new NonComparable("s")); list.add(new NonComparable("a")); list.add(new NonComparable("t")); Collections.sort(list,new myComp1()); System.out.println("Comparable Vehicles "+list); } } class NonComparable { private String s1; NonComparable(String s) { s1=s; } public String getString() { return this.s1; } } class myComp1 implements Comparator<NonComparable> { public int compare(NonComparable first,NonComparable second) { return new String(first.getString()).compareTo(second.getString()); } }
Travis Hein
Ranch Hand
Joined: Jun 06, 2006
Posts: 161
posted
Dec 23, 2009 21:38:53
0
In that first sample class, the class was modified by making it implement the Comparable interface, and thus the compareTo method.
The example there doesn't show a second class implemented differently. but assuming instead of wanting to have many instances of this class sorted by the property sorted here, we could subclass it and override the compareTo method.
in this example, it doesn't really show why we would do that, consider a class that had more than one
string
property, maybe an id, an age, a name. So the compareTo in this person class might by default sort by name. Now if we created a subclass birthday list entry say, to have it sort by age instead.
so jamming the object instances into something that naturally sorts, such as a
TreeSet
, or using a sort operation on the colletion, would cause the instances of person class to be sorted differently from the instances of birthday list class. I guess we could have some configurable external proparty that each object instance would know how to retreive somehow so that we could configure sorting by different properties in the one class, where this variable would drive the compareTo method implemented here, that would not need to have a new object type subclassed etc to have different sorting.
in that second example, you seem to be wanting to compare something that implements a comparable interface with something that does not. in that case you would have to come up with your own code that meticulously inspects each property in the A, and B ones, so that you end up coming with something that returns -1, 0,or +1 in a manner like the built in compare thingie would, but in this case, unless you made this logic into a wrapper class that implements Comparable interface, the
Java
built int utilities for comparing (or sorting) would not be able to make use of this.
Error: Keyboard not attached. Press F1 to continue.
PrasannaKumar Sathiyanantham
Ranch Hand
Joined: Nov 12, 2009
Posts: 110
posted
Dec 23, 2009 21:47:58
0
The difference between the comparable and comparator is that
1)In comparable the particular class itself will compare it with another object.
2)In Comparator a third party will compare two objects and return the result..
It is better to use the comparable interface
To err is human,
To forgive is not company policy
Christophe Verré
Sheriff
Joined: Nov 24, 2005
Posts: 14688
16
I like...
posted
Dec 23, 2009 22:05:12
0
PrasannaKumar Sathiyanantham wrote:
It is better to use the comparable interface
Better than what ?
[My Blog]
All roads lead to JavaRanch
Satyajeet Kadam
Ranch Hand
Joined: Oct 19, 2006
Posts: 215
posted
Dec 28, 2009 05:16:15
0
Please correct me if i am wrong?
1)In comparable the particular class itself will compare it with another object.
Do you mean to say that we are making the 2 objects of same class and comparing them with one another.Can we say this statement is same as the following
statement "You must modify the class whose instance you want to sort" .
2) In Comparator a third party will compare two objects and return the result
Do you mean to say that we will create a seperate class that implements comparator and write sorting logic as per our requirement and call Collection.sort(list,comparator).
Neha Daga
Ranch Hand
Joined: Oct 30, 2009
Posts: 504
posted
Dec 28, 2009 05:23:39
0
yes
SCJP 1.6 96%
Don't get me started about those stupid
light bulbs
.
subject: Comparator vs comparable
Similar Threads
Sorting an ArrayList is giving me fits
compare method in comparator interface
compare() and compareTo()
comparable and comparator interface
Sorting question
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/476144/java-programmer-SCJP/certification/Comparator-comparable | CC-MAIN-2014-52 | refinedweb | 932 | 54.52 |
yummly 0.3.6
Python package for Yummly API:
Python library for Yummly API:
Version: 0.3.6
NOTE: This library and its author are not affliated with Yummly.
Installation
Using setup.py
$ python setup.py install
Using pip
$ pip install yummly
Current Dependencies
- requests>=1.1.0
- nose>=1.2.1 (for testing)
Usage
Use yummly.Client to create a client object to interact with the Yummly API.
The client accepts api_id, api_key, and timeout as init parameters:
from yummly import Client # default option values TIMEOUT = 5.0 RETRIES = 0 client = Client(api_id=YOUR_API_ID, api_key=YOUR_API_KEY, timeout=TIMEOUT, retries=RETRIES) search = client.search('green eggs and ham') match = search.matches[0] recipe = client.recipe(match.id)
Search Recipes
API endpoint:?<params>
results = yummly.search('bacon') print 'Total Matches:', results.totalMatchCount for match in results.matches: print 'Recipe ID:', match.id print 'Recipe:', match.recipeName print 'Rating:', match.rating print 'Total Time (mins):', match.totalTimeInSeconds / 60.0 print '----------------------------------------------------'
Limit your results to a maximum:
# return the first 10 results results = yummly.search('chicken marsala', maxResults=10)
Offset the results for pagination:
# return 2nd page of results results = yummly.search('pulled pork', maxResults=10, start=10)
Provide search parameters:
params = { 'q': 'pork chops', 'start': 0, 'maxResult': 40, 'requirePicutres': True, 'allowedIngredient[]': ['salt', 'pepper'], 'excludedIngredient[]': ['cumin', 'paprika'], 'maxTotalTimeInSeconds': 3600, 'facetField[]': ['ingredient', 'diet'], 'flavor.meaty.min': 0.5, 'flavor.meaty.max': 1, 'flavor.sweet.min': 0, 'flavor.sweet.max': 0.5, 'nutrition.FAT.min': 0, 'nutrition.FAT.max': 15 } results = yummly.search(**params)
For a full list of supported search parameters, see section The Search Recipes Call located at:
Example search response:
Get Recipe
API endpoint:<recipe_id>
Fetch a recipe by its recipe ID:
recipe = yummly.recipe(recipe_id) print 'Recipe ID:', recipe.id print 'Recipe:', recipe.name print 'Rating:', recipe.rating print 'Total Time:', recipe.totalTime print 'Yields:', recipe.yields print 'Ingredients:' for ingred in recipe.ingredientLines: print ingred
Example recipe response:
NOTE: Yummly’s Get-Recipe response includes yield as a field name. However, yield is a keyword in Python so this has been renamed to yields.
Search metadata
API endpoint:<metadata_key>
Yummly provides a metadata endpoint that returns the possible values for allowed/excluded ingredient, diet, allergy, and other search parameters:
METADATA_KEYS = [ 'ingredient', 'holiday', 'diet', 'allergy', 'technique', 'cuisine', 'course', 'source', 'brand', ] ingredients = client.metadata('ingredient') diets = client.metadata('diet') sources = client.metadata('source')
NOTE: Yummly’s raw API returns this data as a JSONP response which yummly.py parses off and then converts to a list containing instances of the corresponding metadata class.
API Model Classes
All underlying API model classes are in yummly/models.py. The base class used for all models is a modified dict class with attribute-style access (i.e. both obj.foo and obj['foo'] are valid accessor methods).
A derived dict class was chosen to accommodate painless conversion to JSON which is a fairly common requirement when using yummly.py as an API proxy to feed your applications (e.g. a web app with yummly.py running on your server instead of directly using the Yummly API on the frontend).
Testing
Tests are located in tests/. They can be executed using nose by running run_tests.py from the root directory.
$ python run_tests.py
NOTE: Running the test suite will use real API calls which will count against your call limit. Currently, 21 API calls are made when running the tests.
Test Config File
A test config file is required to run the tests. Create tests/config.json with the following properties:
{ "api_id": "YOUR_API_ID", "api_key": "YOUR_API_KEY" }
This file will be loaded automatically when the tests are run.
License
This software is licensed under the BSD License.
TODO
- Provide helpers for complex search parameters like nutrition, flavors, and metadata
- Author: Derrick Gilland
- Keywords: yummly recipes
- License: BSD
- Categories
- Package Index Owner: dgilland
- DOAP record: yummly-0.3.6.xml | https://pypi.python.org/pypi/yummly/0.3.6 | CC-MAIN-2018-13 | refinedweb | 641 | 53.17 |
Hi there! This workshop is an introduction to React.js using Next.js. It assumes you have some familiarity with HTML and JavaScript, as well as programming functions like functions, objects, arrays, and classes, but you should be able to follow along regardless. If you're having trouble, feel free to ask in the Hack Club Slack!
A quick note: we’re using some features from ES6, a recent version of JavaScript. In this tutorial, we’re using arrow functions and const, among other features. If you’re more familiar with older versions of JavaScript, you might find the Babel REPL useful to see what ES6 code compiles to.
Intro to React, the site automatically inserts the new message into the list without you having to reload the page.
React is a framework built by engineers at Facebook a few years ago to make building complex web apps way easier. We’re going to start super simply, using React to make a regular-looking website, then start adding more complex functionality. Here are five terms you should know:
- JSX. Some engineers came up with a way to write HTML inside JavaScript. It sounds crazy, but it uses almost the same syntax as regular HTML (deep inside it’s fancy syntax for JavaScript functions that create HTML elements). The simplest JSX:
<p>Hi!</p>. You can also use JavaScript inside JSX. So if you’ve got
const name = 'Zach Latta'in your JS file, you can write
<h1>Welcome, {name}</h1>, and on your page it’ll show
Welcome, Zach Lattain a heading.
- Components. React apps are built with the idea of components, which are pieces of your application that encapsulate all their stuff—data, state (see below), styling, subcomponents. Everything is a component, from just one tag (the
ptag above is a component) to entire sections of a webpage. For example, if you created a component called "Welcome"—
const Welcome = () => <h1>Welcome!</h1>—you could then write
<Welcome />somewhere else in your app to load that component. This is a simple component, but you can imagine if that were a whole section of your site, being able to render it multiple times in different locations would be super useful. You can compose components together to create complex interfaces.
- Props. Short for “properties”, props are how you pass data between components. These are “inputs” that you can use inside the component. Let’s update the" />.
- State. This is data in your app. That might be the fact that a dropdown menu is open, or not. If you need information to decide how you render your site, it goes in state. We’ll get to using state later.
Components
Let’s make a more complex component now, with several props. Imagine you’re building a news website or a blog, so you need to show a list of articles with consistent styling.
const Article = ({ title, author, preview }) => ( <div> <h3>{title}</h3> <p>By {author}</p> </div> )
Now you can use it like this:
<Article title="Hello Hack Club!" author="@lachlanjc" />
(Thinking ahead—instead of just using components one-at-a-time like this, imagine downloading a list of articles, then rendering this Article component for each one. Well, that’s how news websites work!)
JSX tip: when you’re passing a string (text) value to a prop, you can use quotes, just like in HTML, but if you’re passing JavaScript, you use curly braces. If our article had multiple authors, we’d pass an array:
<Article title="Hello Hack Club!" author={['@lachlanjc', '@zachlatta']} />
(Want to read more about JSX?)
Next.js
So far we’ve just been looking at React. Next.js is a framework built to make building React-based web apps way easier. It handles setting up multiple pages, starting a server, and a bunch of super complex setup in the background. A whole bunch of major companies use it—it even powers parts of GitHub.
You’ve been reading long enough; let’s open up your development environment. Get started with a super simple template on Glitch: go to, click “Remix” & you can get started. Click “Show” to see the live website (it’ll take a moment to get running the first time). known as “packages”—bundles of code from other developers we need for our project to run. You’ll see we’re requiring
react, &
react-dom(that last one’s the “adapter” to run React on the web). Glitch handles automatically installing the dependencies and running the app for us.
At its most basic, a page with Next.js (so a file like
pages/index.js) looks like this. What gets rendered on the page goes inside the “default export” of the file.
export default () => <h1>Welcome!</h1>
Making your first Next.js page
Let’s try out that component we were using above:
const Article = ({ title, author, preview }) => ( <div> <h3>{title}</h3> <p>By {author}</p> </div> ) export default () => ( <main> <h1>Articles</h1> <Article title="Hello Hack Club!" author="@zachlatta" /> <Article title="Workshops are cool" author="@lachlanjc" /> </main> )
Hey, look at that! Try adding your own. Your site should immediately update.
Linking to a new page
Let’s make a second page, and add a link to it.
First up, the link. We need a way to make that link, but Next.js has our back here. At the top of the file, add
import Link from 'next/link'. You’ve just imported your first React component, and can use it!
import Link from 'next/link' const Article = ({ title, author, preview }) => ( <div> <h3>{title}</h3> <p>By {author}</p> </div> ) export default () => ( <main> <h1>Articles</h1> <Article title="Hello Hack Club!" author="@zachlatta" /> <Article title="Goodbye Hack Club :(" author="@lachlanjc" /> {/*Try adding another article!*/} <Link href="/shopping"> <a>Let’s go shopping</a> </Link> </main> )
The
Link component makes whatever we click on go that page, then the
<a> tag is the actual HTML element that’ll appear on our page.
Building a shopping list page
Now, that doesn’t go anywhere yet. Click on “New file” and enter
pages/shopping.js. We’re going to make a quick shopping list!
In your mind, imagine what this HTML will look like: (
ul makes a bulleted list, if you haven’t used it before)
<h1>Shopping List</h1> <ul> <li>Apples</li> <li>Oranges</li> <li>Pears</li> <li>Strawberries</li> </ul>
If we were going to only ever have those items on the page, it’d be the exact same thing in Next, wrapped in the
export default code we saw on the first page.
However, declaring the
<li> elements over & over is getting tiresome. Let’s move the list of items into a JavaScript array (
['thing', 'second thing'], etc), then “map” through each item and put it in the HTML.
const items = ['Apples', 'Oranges', 'Pears', 'Strawberries'] export default () => ( <main> <h1>Shopping List</h1> <ul> {items.map((item) => ( <li key={item}>{item}</li> ))} </ul> </main> )
Hold up. You just made a huge shift in how you develop things: you went from writing the code for the website directly yourself, to writing the code for the website to generate itself. You define the data (in most “real” apps,!
Adding interactivity to the list” (values that change, usually via user interaction), we’re going to need React’s
useState functionality (it’s one of the “React Hooks”). In
pages/shopping.js, add this line at the top:
import React, { useState } from 'react'
Now we need to make our component. We’ll need a text input, a button to add the item, and the same list of items. Notice that we now have a
return call after the
export but before the JSX for the page—you’ll need this for the next step.
const items = ['…'] export default () => { return ( <main> <h1>Shopping List</h1> <div> <input placeholder="New item" /> <button>Add item</button> </div> <ul> {items.map((item) => ( <li key={item}>{item}</li> ))} </ul> </main> ) }
Next up, we need to define some state on this component. Delete the
const items line, and set up
export default () => { const [items, setItems] = useState(['Apples', 'Strawberries']) const [newItem, setNewItem] = useState('') return ( <main> <h1>Shopping List</h1> <div> <input placeholder="New item" /> <button>Add item</button> </div> <ul> {items.map((item) => ( <li key={item}>{item}</li> ))} </ul> </main> ) }
newItem is where we’ll keep the text the user enters into the text box. Now, all that’s left is to add some event handling:
export default () => { const [items, setItems] = useState(['Apples', 'Strawberries']) const [newItem, setNewItem] = useState('') const changeNewItem = (e) => setNewItem(e.target.value) const addItem = () => { setItems((list) => [...list, newItem]) setNewItem('') } return ( <main> <h1>Shopping List</h1> <div> <input placeholder="New item" onChange={changeNewItem} value={newItem} /> <button onClick={addItem}>Add item</button> </div> <ul> {items.map((item) => ( <li key={item}>{item}</li> ))} </ul> </main> ) }
This is a little more complex, so let’s break it down:
itemsis our array of the items on the list (which
items.mapshows further down)
- What we pass to
useStateis the default value, so before the user does anything, that’s the initial state.
setItemsis a function React is generating for us for changing the value of
items. The whole page is, underneath, a function (see the
() => {}on the first line), so if you were to do
items = […], the next time React ran the function to render the page, the changes to the variable would disappear. To get around this, React keeps track of our state outside the context of just the functions for each component.
newItem&
setNewItemwork similarly, keeping track of the text the user types into the input box in a second chunk of React state.
changeNewItemis a function we wrote so that when the user types into the input box, we get the value of the input & set it to the state. (
eis the raw JavaScript event,
targetis the HTML element the event happened to, then
valueis the current value of the element.)
addItemis the code that runs when the user presses the “Add item” button. It adds the
newItemto the list of
items, then clears the input box. (This works because we set the
valueof the input box, so when we change the state, so does the element.)
One thing about React state is that it’s not “persistent”—the list won’t be saved if you come back another day—but there are a bunch of ways to handle data storage that we'll see later on.
Bonus: styling!. Go back to
pages/index.js, and add a
<style> tag just like this.
export default () => ( <main> <h1>Articles</h1> {/* the rest of your code... */} <style jsx>{` h1 { color: magenta; } `}</style> </main> )
Open up your app: the homepage has a magenta heading, but critically, the heading on the Shopping page doesn’t! Magical. So, you can style components on the Shopping page separately too.
<main> <h1>Shopping List</h1> {/* the rest of your code... */} <style jsx>{` ul { list-style: none; padding: 0; } li { padding: 1em 0; border-top: 1px solid #eee; } `}</style> </main>
Go crazy—try changing the fonts, colors, & whatever else!
Conclusion
Over the course of this workshop, you went from not knowing what JSX was to writing a two-page, interactive web app with it. Great work! This is just a starting point—you can keep adding to this app, but you can also check out the Next.js Dashboard workshop to continue learning. | https://workshops.hackclub.com/nextjs_starter/ | CC-MAIN-2022-33 | refinedweb | 1,901 | 72.46 |
This notebook is an introductory guide to Lale for scikit-learn users. Scikit-learn is a popular, easy-to-use, and comprehensive data science library for Python. This notebook aims to show how Lale can make scikit-learn even better in two areas: auto-ML and type checking. First, if you do not want to manually select all algorithms or tune all hyperparameters, you can leave it to Lale to do that for you automatically. Second, when you pass hyperparameters or datasets to scikit-learn, Lale checks that these are type-correct. For both auto-ML and type-checking, Lale uses a single source of truth: machine-readable schemas associated with scikit-learn compatible transformers and estimators. Rather than invent a new schema specification language, Lale uses JSON Schema, because it is popular, widely-supported, and makes it easy to store or send hyperparameters as JSON objects. Furthermore, by using the same schemas both for auto-ML and for type-checking, Lale ensures that auto-ML is consistent with type checking while also reducing the maintenance burden to a single set of schemas.
Lale is an open-source Python library and you can install it by doing pip install lale. See installation for further instructions. Lale uses the term operator to refer to what scikit-learn calls a machine-learning transformer or estimator. Lale provides schemas for 180 operators. Most of these operators come from scikit-learn itself, but there are also operators from other frameworks such as XGBoost or PyTorch. If Lale does not yet support your favorite operator, you can add it yourself by following this guide. If you do add a new operator, please consider contributing it back to Lale!
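To give a concrete flavor before diving in: the schemas Lale attaches to each operator are plain JSON Schema objects. The fragment below is a purely illustrative sketch (the field names and ranges are made up for this example, not copied from Lale's real schemas) of how a single hyperparameter can be described so that one machine-readable object drives both type checking and search-space generation:

```python
# Illustrative JSON Schema fragment for one hyperparameter. Lale's actual
# schemas are richer; this only sketches the idea that the valid types,
# ranges, and default all live together in one machine-readable object.
n_components_schema = {
    'description': 'Number of components to keep (illustrative).',
    'anyOf': [
        {'type': 'integer', 'minimum': 1},            # an explicit count
        {'type': 'number', 'minimum': 0.0,            # a fraction of variance
         'exclusiveMinimum': True, 'maximum': 1.0},
        {'enum': [None]},                             # keep all components
    ],
    'default': None,
}
print(len(n_components_schema['anyOf']))  # prints 3
```

An auto-ML tool can sample values from each branch of the anyOf, while a type checker can validate a user-supplied value against the same branches.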
The rest of this notebook first demonstrates auto-ML, then reveals some of the schemas that make that possible, and finally demonstrates how to also use the very same schemas for type checking.
Lale serves as an interface for two Auto-ML tasks: hyperparameter tuning and algorithm selection. Rather than provide new implementations for these tasks, Lale reuses existing implementations. The next few cells demonstrate how to use Hyperopt and GridSearchCV from Lale. Lale also supports additional optimizers, not shown in this notebook. In all cases, the syntax for specifying the search space is the same.
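For contrast, here is what specifying a search space by hand looks like with plain scikit-learn's GridSearchCV. This is a hedged sketch on a tiny synthetic dataset, and the parameter ranges are arbitrary illustrative picks; with Lale, this grid does not have to be written out at all, because it is derived from the operators' schemas:

```python
# Hand-written search space for a PCA -> decision-tree pipeline in plain
# scikit-learn. Every range below must be spelled out manually; the values
# are illustrative choices, not tuned defaults.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

X = np.random.RandomState(0).rand(60, 8)  # tiny synthetic stand-in dataset
y = X @ np.arange(8) + 0.1                # deterministic target for illustration

pipe = Pipeline(steps=[('tfm', PCA()),
                       ('estim', DecisionTreeRegressor(random_state=0))])
param_grid = {
    'tfm__n_components': [2, 4, 8],
    'tfm__whiten': [False, True],
    'estim__max_depth': [2, 4, None],
}
search = GridSearchCV(pipe, param_grid, cv=3, scoring='r2')
search.fit(X, y)
print(search.best_params_)
```

The grid above enumerates 18 configurations and cross-validates each one; the Lale examples below get an equivalent search space for free from the schemas.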
Let's start by looking at hyperparameter tuning, which is an important subtask of auto-ML. To demonstrate it, we first need a dataset. Therefore, we load the California Housing dataset and display the first few rows to get a feeling for the data. Lale can process both Pandas dataframes and Numpy ndarrays; here we use dataframes.
import pandas as pd
import lale.datasets

(train_X, train_y), (test_X, test_y) = lale.datasets.california_housing_df()
pd.concat([train_X.head(), train_y.head()], axis=1)
As you can see, the target column is a continuous number, indicating that this is a regression task. Besides the target, there are eight feature columns, which are also all continuous numbers. That means many scikit-learn operators will work out of the box on this data without needing to preprocess it first.

Next, we need to import a few operators. PCA (principal component analysis) is a transformer from scikit-learn for linear dimensionality reduction. DecisionTreeRegressor is an estimator from scikit-learn that can predict the target column. Pipeline is how scikit-learn composes operators into a sequence. Hyperopt is a Lale wrapper for the hyperopt auto-ML library. And finally, wrap_imported_operators augments PCA, Tree, and Pipeline with schemas to enable Lale to tune their hyperparameters.
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor as Tree
from sklearn.pipeline import Pipeline
from lale.lib.lale import Hyperopt
lale.wrap_imported_operators()
Next, we create a two-step pipeline of PCA and Tree. This code looks almost like in scikit-learn. The only difference is that since we want Lale to tune the hyperparameters for us, we do not specify them by hand. Specifically, we just write PCA instead of PCA(...), omitting the hyperparameters for PCA. Analogously, we just write Tree instead of Tree(...), omitting the hyperparameters for Tree. Rather than binding hyperparameters by hand, we leave them free to be tuned by hyperopt.
pca_tree_planned = Pipeline(steps=[('tfm', PCA), ('estim', Tree)])
We use auto_configure on the pipeline and pass Hyperopt as an optimizer. This will use the pipeline's search space to find the best pipeline. In this case, the search uses 10 trials. Each trial draws values for the hyperparameters from the ranges specified by the JSON schemas associated with the operators in the pipeline.
%%time pca_tree_trained = pca_tree_planned.auto_configure( train_X, train_y, optimizer=Hyperopt, cv=3, max_evals=10, verbose=True)
100%|██████| 10/10 [00:08<00:00, 1.18trial/s, best loss: -0.41410769000479447] CPU times: user 16.2 s, sys: 9.31 s, total: 25.5 s Wall time: 9.45 s
By default, Hyperopt uses k-fold cross validation to evaluate each trial and a default scoring metric based on the task. The end result is the pipeline that performed best out of all trials. In addition to the cross-val score, we can also evaluate this best pipeline against the test data. We simply use the existing R2 score metric from scikit-learn for this purpose.
import sklearn.metrics predicted = pca_tree_trained.predict(test_X) print(f'R2 score {sklearn.metrics.r2_score(test_y, predicted):.2f}')
R2 score 0.37
In the previous example, the automation picked hyperparameter values for PCA and the decision tree. We know the values were valid and we know how well the pipeline performed with them. But we might also want to know exactly which values were picked. One way to do that is by visualizing the pipeline and using tooltips. If you are looking at this notebook in a viewer that supports tooltips, you can hover the mouse pointer over either one of the operators to see its hyperparameters.
pca_tree_trained.visualize()
Another way to view the results of hyperparameter tuning in Lale is by
pretty-printing the pipeline as Python source code. Calling the
pretty_print method with
ipython_display=True prints the code with
syntax highlighting in a Jupyter notebook. The pretty-printed code
contains the hyperparameters.
pca_tree_trained.pretty_print(ipython_display=True)
from sklearn.pipeline import Pipeline from sklearn.decomposition import PCA from sklearn.tree import DecisionTreeRegressor as Tree import lale lale.wrap_imported_operators() pca = PCA(svd_solver="full", whiten=True) tree = Tree( min_samples_leaf=0.09016751753288961, min_samples_split=0.47029117142535803, ) pipeline = Pipeline(steps=[("tfm", pca), ("estim", tree)])
Lale supports multiple auto-ML tools, not just hyperopt. For instance,
you can also use
GridSearchCV
from scikit-learn. You could use the exact same
pca_tree_planned
pipeline for this as we did with the hyperopt tool.
However, to avoid running for a long time, here we simplify the space:
for
PCA, we bind the
svd_solver so only the remaining hyperparameters
are being searched, and for
Tree, we call
freeze_trainable() to bind
all hyperparameters to their defaults. Lale again uses the schemas
attached to the operators in the pipeline to generate a suitable search grid.
Here, instead of the scikit-learn's
Pipeline(...) API, we use the
make_pipeline function. This function exists in both scikit-learn and
Lale; the Lale version yields a Lale pipeline that supports
auto_configure.
Note that, to be compatible with scikit-learn,
lale.lib.lale.GridSearchCV
can also take a
param_grid as an argument if the user chooses to use a
handcrafted grid instead of the one generated automatically.
%%time from lale.lib.lale import GridSearchCV from lale.operators import make_pipeline grid_search_planned = make_pipeline( PCA(svd_solver='auto'), Tree().freeze_trainable()) grid_search_result = grid_search_planned.auto_configure( train_X, train_y, optimizer=GridSearchCV, cv=3)
CPU times: user 12.8 s, sys: 6 s, total: 18.8 s Wall time: 8.58 s
Just like we saw earlier with hyperopt, you can use the best pipeline found for scoring and evaluate the quality of the predictions.
predicted = grid_search_result.predict(test_X) print(f'R2 score {sklearn.metrics.r2_score(test_y, predicted):.2f}')
R2 score 0.49
Similarly, to inspect the results of grid search, you have the same options as demonstrated earlier for hypopt. For instance, you can pretty-print the best pipeline found by grid search back as Python source code, and then look at its hyperparameters.
grid_search_result.visualize() grid_search_result.pretty_print(ipython_display=True, combinators=False)
from sklearn.decomposition import PCA from sklearn.tree import DecisionTreeRegressor as Tree from lale.operators import make_pipeline pipeline = make_pipeline(PCA(), Tree())
If we do not pretty-print with
combinators=False, the pretty-printed
code is rendered slightly differently, using
>> instead of
make_pipeline.
grid_search_result.pretty_print(ipython_display=True)
from sklearn.decomposition import PCA from sklearn.tree import DecisionTreeRegressor as Tree import lale lale.wrap_imported_operators() pipeline = PCA() >> Tree()
from lale.lib.lale import NoOp, ConcatFeatures from sklearn.linear_model import LinearRegression as LinReg from xgboost import XGBRegressor as XGBoost lale.wrap_imported_operators()
Lale emulates the scikit-learn APIs for composing pipelines using
functions. We already saw
make_pipeline. Another function in
scikit-learn is
make_union, which composes multiple sub-pipelines to
run on the same data, then concatenates the features. In other words,
make_union produces a horizontal stack of the data transformed by
its sub-pipelines. To support auto-ML, Lale introduces a third
function,
make_choice, which does not exist in scikit-learn. The
make_choice function specifies an algorithmic choice for auto-ML to
resolve. In other words,
make_choice creates a search space for
automated algorithm selection.
dag_with_functions = lale.operators.make_pipeline( lale.operators.make_union(PCA, NoOp), lale.operators.make_choice(Tree, LinReg, XGBoost(booster='gbtree'))) dag_with_functions.visualize()
The visualization shows
make_union as multiple sub-pipelines feeding
into
ConcatFeatures, and it shows
make_choice using an
|
combinator. Operators shown in white are already fully trained; in
this case, these operators actually do not have any learnable
coefficients, nor do they have hyperparameters. For each of the three
functions
make_pipeline,
make_choice, and
make_union, Lale also
provides a corresponding combinator. We already saw the pipe
combinator (
>>) and the choice combinator (
|). To get the effect
of
make_union, use the and combinator (
&) with the
ConcatFeatures operator. The next example shows the exact same
pipeline as before, but written using combinators instead of
functions.
dag_with_combinators = ( (PCA(svd_solver='full') & NoOp) >> ConcatFeatures >> (Tree | LinReg | XGBoost(booster='gbtree'))) dag_with_combinators.visualize()
Since the
dag_with_functions specifies an algorithm choice, when we
feed it to a
Hyperopt, hyperopt will do algorithm selection
for us. And since some of the operators in the dag do not have all
their hyperparameters bound, hyperopt will also tune their free
hyperparameters for us. Note that
booster for
XGBoost is fixed to
gbtree and hence Hyperopt would not tune it.
%%time multi_alg_trained = dag_with_functions.auto_configure( train_X, train_y, optimizer=Hyperopt, cv=3, max_evals=10)
100%|███████| 10/10 [01:42<00:00, 10.28s/trial, best loss: -0.8060166137873468] CPU times: user 1min 56s, sys: 10.3 s, total: 2min 6s Wall time: 1min 49s
Visualizing the best estimator reveals what algorithms hyperopt chose.
multi_alg_trained.visualize()
Pretty-printing the best estimator reveals how hyperopt tuned the
hyperparameters. For instance, we can see that a
randomized
svd_solver was chosen for PCA.
multi_alg_trained.pretty_print(ipython_display=True, show_imports=False)
pca = PCA(n_components=0.31022802683920675, svd_solver="full", whiten=True) xg_boost = XGBoost( gamma=0.7146673182687348, learning_rate=0.34333338181406947, max_depth=2, min_child_weight=9, n_estimators=689, reg_alpha=0.22238951827828057, reg_lambda=0.3685687126451779, subsample=0.498939586636504, ) pipeline = (pca & NoOp()) >> ConcatFeatures() >> xg_boost
Of course, the trained pipeline can be used for predictions as usual, and we can use scikit-learn metrics to evaluate those predictions.
predicted = multi_alg_trained.predict(test_X) print(f'R2 score {sklearn.metrics.r2_score(test_y, predicted):.2f}')
R2 score 0.81
This section reveals more of what happens behind the scenes for auto-ML with Lale. In particular, it shows the JSON Schemas used for auto-ML, and demonstrates how to customize them if desired.
When writing data science code, I often don't remember all the API information about what hyperparameters and datasets an operator expects. Lale attaches this information to the operators and uses it for auto-ML as demonstrated above. The same information can also be useful as interactive documentation in a notebook. Most individual operators in the visualizations shown earlier in this notebook actually contain a hyperlink to the excellent online documentation of scikit-learn. We can also retrieve that hyperlink using a method call.
print(Tree.documentation_url())
Lale's helper function
ipython_display pretty-prints JSON documents
and JSON schemas in a Jupyter notebook. You can get a quick overview
of the constructor arguments of an operator by calling the
get_defaults method.
from lale.pretty_print import ipython_display ipython_display(dict(Tree.get_defaults()))
{ "criterion": "mse", "splitter": "best", "max_depth": null, "min_samples_split": 2, "min_samples_leaf": 1, "min_weight_fraction_leaf": 0.0, "max_features": "auto", "random_state": null, "max_leaf_nodes": null, "min_impurity_decrease": 0.0, "min_impurity_split": null, "ccp_alpha": 0.0, }
Hyperparameters can be categorical (meaning they accept a few
discrete values) or continuous (integers or real numbers).
As an example for a categorical hyperparameter, let's look at the
criterion. JSON Schema can encode categoricals as an
enum.
ipython_display(Tree.hyperparam_schema('criterion'))
{ "description": "Function to measure the quality of a split.", "anyOf": [ {"enum": ["mse", "friedman_mse", "poisson"]}, {"enum": ["mae"], "forOptimizer": false}, ], "default": "mse", }
As an example for a continuous hyperparameter, let's look at
max_depth. The decision tree regressor in scikit-learn accepts
either an integer for that, or
None, which has its own meaning.
JSON Schema can express these two choices as an
anyOf, and
encodes the Python
None as a JSON
null. Also, while
any positive integer is a valid value, in the context of auto-ML,
Lale specifies a bounded range for the optimizer to search over.
ipython_display(Tree.hyperparam_schema(.", }, ], }
Besides hyperparameter schemas, Lale also provides dataset schemas.
For exampe, NMF, which stands for non-negative matrix factorization,
requires a non-negative matrix as
X. In JSON Schema, we express this
as an array of arrays of numbers with
minimum: 0. While NMF also
accepts a second argument
y, it does not use that argument.
Therefore, Lale gives
y the schema
{'laleType': 'Any'}, which permits any
values.
from sklearn.decomposition import NMF lale.wrap_imported_operators() ipython_display(NMF.input_schema_fit())
{ "type": "object", "required": ["X"], "additionalProperties": false, "properties": { "X": { "type": "array", "items": { "type": "array", "items": {"type": "number", "minimum": 0.0}, }, }, "y": {"laleType": "Any"}, }, }
While you can use Lale schemas as-is, you can also customize the
schemas to exert more control over the automation. As one example, it is common to tune XGBoost to use a large number for
n_estimators. However, you might want to
reduce the number of trees in an XGBoost forest to reduce memory
consumption or to improve explainability. As another example, you
might want to hand-pick one of the boosters to reduce the search space
and thus hopefully speed up the search.
import lale.schemas as schemas Grove = XGBoost.customize_schema( n_estimators=schemas.Int(minimum=2, maximum=6), booster=schemas.Enum(['gbtree'], default='gbtree'))
As this example demonstrates, Lale provides a simple Python API for writing schemas, which it then converts to JSON Schema internally. The result of customization is a new copy of the operator that can be used in the same way as any other operator in Lale. In particular, it can be part of a pipeline as before.
grove_planned = lale.operators.make_pipeline( lale.operators.make_union(PCA, NoOp), Grove) grove_planned.visualize()
Given this new planned pipeline, we use hyperopt as before to search for a good trained pipeline.
%%time grove_trained = grove_planned.auto_configure( train_X, train_y, optimizer=Hyperopt, cv=3, max_evals=10)
100%|███████| 10/10 [00:12<00:00, 1.25s/trial, best loss: -0.7478344560672071] CPU times: user 21.7 s, sys: 9.7 s, total: 31.4 s Wall time: 13.3 s
As with all trained Lale pipelines, we can evaluate
grove_trained
with metrics to see how well it does. Also, we can pretty-print
it back as Python code to double-check whether hyperopt obeyed the
customized schemas for
n_estimators and
booster.
predicted = grove_trained.predict(test_X) print(f'R2 score {sklearn.metrics.r2_score(test_y, predicted):.2f}') grove_trained.pretty_print(ipython_display=True, show_imports=False)
R2 score 0.74
pca = PCA(svd_solver="full", whiten=True) grove = Grove( gamma=0.42208258595069725, learning_rate=0.6558019595096513, max_depth=5, min_child_weight=13, n_estimators=5, reg_alpha=0.3590229319214039, reg_lambda=0.7978279409450941, subsample=0.6209085649172931, ) pipeline = (pca & NoOp()) >> ConcatFeatures() >> grove
The rest of this notebook gives examples for how the same schemas
that serve for auto-ML can also serve for error checking. We will
give comparative examples for error checking in scikit-learn (without
schemas) and in Lale (with schemas). To make it clear which version
of an operator is being used, all of the following examples uses
fully-qualified names (e.g.,
sklearn.feature_selection.RFE). The
fully-qualified names are for presentation purposes only; in typical
usage of either scikit-learn or Lale, these would be simple names
(e.g. just
RFE).
First, we import a few things.
import sys import sklearn from sklearn import pipeline, feature_selection, ensemble, tree
We use
make_pipeline to compose a pipeline of two steps: an RFE
transformer and a decision tree regressor. RFE performs recursive
feature elimination, keeping only those features of the input data
that are the most useful for its
estimator argument. For RFE's
estimator argument, the following code uses a random forest with 10
trees.
sklearn_hyperparam_error = sklearn.pipeline.make_pipeline( sklearn.feature_selection.RFE( estimator=sklearn.ensemble.RandomForestRegressor(n_estimators=10)), sklearn.tree.DecisionTreeRegressor(max_depth=-1))
The
max_depth argument for a decision tree cannot be a
negative number. Hence, the above code actually contains a bug: it
sets
max_depth=-1. Scikit-learn does not check for this mistake from
the
__init__ method, otherwise we would have seen an error message
already. Instead, scikit-learn checks for this mistake during
fit.
Unfortunately, it takes a few seconds to get the exception, because
scikit-learn first trains the RFE transformer and uses it to transform
the data. Only then does it pass the data to the decision tree.
%%time try: sklearn_hyperparam_error.fit(train_X, train_y) except ValueError as e: message = str(e) print(message, file=sys.stderr)
CPU times: user 4.34 s, sys: 62.5 ms, total: 4.41 s Wall time: 4.43 s
max_depth must be greater than zero.
Fortunately, this error message is pretty clear. Scikit-learn implements the error check imperatively, using Python if-statements to raise an exception when hyperparameters are configured wrong. This notebook is part of Lale's regression test suite and gets run automatically when changes are pushed to the Lale source code repository. The assertion in the following cell is a test that the error-check indeed behaves as expected and documented here.
assert message.startswith("max_depth must be greater than zero.")
import jsonschema #enable schema validation explicitly for the notebook from lale.settings import set_disable_data_schema_validation, set_disable_hyperparams_schema_validation set_disable_data_schema_validation(False) set_disable_hyperparams_schema_validation(False)
Below is the exact same pipeline as before, but written in Lale instead of directly in scikit-learn. In both cases, the underlying implementation is in scikit-learn; Lale only adds thin wrappers to support type checking and auto-ML.
%%time try: lale_hyperparam_error = lale.operators.make_pipeline( lale.lib.sklearn.RFE( estimator=lale.lib.sklearn.RandomForestRegressor(n_estimators=10)), lale.lib.sklearn.DecisionTreeRegressor(max_depth=-1)) except jsonschema.ValidationError as e: message = e.message print(message, file=sys.stderr)
CPU times: user 46.9 ms, sys: 15.6 ms, total: 62.5 ms Wall time: 43 ms
Invalid configuration for DecisionTreeRegressor(max_depth=-1) due to invalid value max_depth=-1. Schema of argument.", }, ], } Value: -1
assert message.startswith("Invalid configuration for DecisionTreeRegressor(max_depth=-1)")
Just like in the scikit-learn example, the error message in the Lale
example also pin-points the problem as passing
max_depth=-1 to the
decision tree. It does so in a more stylized way, printing the
relevant JSON schema for this hyperparameter. Lale detects the error
already when the wrong hyperparameter is being passed as an argument,
thus reducing the amount of code you have to look at to find the root
cause. Furthermore, Lale takes only tens of milliseconds to detect
the error, because it does not attempt to train the RFE transformer
first. In this example, that only saves a few seconds, which may not
be significant. But there are situations with larger time savings,
such as when using larger datasets, slower operators, or when auto-ML
tries out many pipelines.
from sklearn import decomposition
We use scikit-learn to compose a pipeline of two steps: an RFE transformer as before, this time followed by an NMF transformer.
sklearn_dataset_error = sklearn.pipeline.make_pipeline( sklearn.feature_selection.RFE( estimator=sklearn.ensemble.RandomForestRegressor(n_estimators=10)), sklearn.decomposition.NMF())
NMF, or non-negative matrix factorization, does not allow any negative numbers in its input matrix. The California Housing dataset contains some negative numbers and the RFE does not eliminate those features. To detect the mistake, scikit-learn must first train the RFE and transform the data with it, which takes a few seconds. Then, NMF detects the error and throws an exception.
%%time try: sklearn_dataset_error.fit(train_X, train_y) except ValueError as e: message = str(e) print(message, file=sys.stderr)
CPU times: user 4.41 s, sys: 62.5 ms, total: 4.47 s Wall time: 4.61 s
Negative values in data passed to NMF (input X)
assert message.startswith("Negative values in data passed to NMF (input X)")
lale_dataset_error = lale.operators.make_pipeline( lale.lib.sklearn.RFE( estimator=lale.lib.sklearn.RandomForestRegressor(n_estimators=10)), lale.lib.sklearn.NMF())
When we call
fit on the pipeline, before doing the actual training,
Lale can check that the
schema is correct at each step of the pipeline. In other words, it
checks whether the schema of the input data is valid for the first
step of the pipeline, and that the schema of the output from each step
is valid for the next step. By saving the time for training the RFE,
this completes in tens of milliseconds instead of seconds as before.
#Enable the data schema validation in lale settings from lale.settings import set_disable_data_schema_validation set_disable_data_schema_validation(False)
%%time try: lale_dataset_error.fit(train_X, train_y) except ValueError as e: message = str(e) print(message, file=sys.stderr)
CPU times: user 93.8 ms, sys: 0 ns, total: 93.8 ms Wall time: 131 ms
NMF.fit() invalid X, the schema of the actual data is not a subschema of the expected schema of the argument. actual_schema = {"type": "array", "items": {"type": "array", "items": {"type": "number"}}} expected_schema = { "type": "array", "items": {"type": "array", "items": {"type": "number", "minimum": 0.0}}, }
assert message.startswith('NMF.fit() invalid X, the schema of the actual data is not a subschema of the expected schema of the argument.')
In this example, the schemas for
X differ: whereas the data is an
array of arrays of unconstrained numbers, NMF expects an array of
arrays of only non-negative numbers.
Sometimes, the validity of hyperparameters cannot be checked in
isolation. Instead, the value of one hyperparameter can restrict
which values are valid for another hyperparameter. For example,
scikit-learn imposes a conditional hyperparameter constraint between
the
svd_solver and
n_components arguments to PCA.
sklearn_constraint_error = sklearn.pipeline.make_pipeline( sklearn.feature_selection.RFE( estimator=sklearn.ensemble.RandomForestRegressor(n_estimators=10)), sklearn.decomposition.PCA(svd_solver='arpack', n_components='mle'))
The above notebook cell completed successfully, because scikit-learn did not yet check for the constraint. To observe the error message with scikit-learn, we must attempt to fit the pipeline.
%%time message=None try: sklearn_constraint_error.fit(train_X, train_y) except ValueError as e: message = str(e) print(message, file=sys.stderr)
CPU times: user 4.39 s, sys: 62.5 ms, total: 4.45 s Wall time: 4.46 s
n_components='mle' cannot be a string with svd_solver='arpack'
assert message.startswith("n_components='mle' cannot be a string with svd_solver='arpack'")
Scikit-learn implements constraint-checking as Python code with if-statements and raise-statements. After a few seconds, we get an exception, and the error message explains what went wrong.
%%time try: lale_constraint_error = lale.operators.make_pipeline( lale.lib.sklearn.RFE( estimator=lale.lib.sklearn.RandomForestRegressor(n_estimators=10)), PCA(svd_solver='arpack', n_components='mle')) except jsonschema.ValidationError as e: message = str(e) print(message, file=sys.stderr)
CPU times: user 31.2 ms, sys: 15.6 ms, total: 46.9 ms Wall time: 34.8 ms
Invalid configuration for PCA(svd_solver='arpack', n_components='mle') due to constraint option n_components mle can only be set for svd_solver full or auto. Schema of constraint 2: { "description": "Option n_components mle can only be set for svd_solver full or auto.", "anyOf": [ { "type": "object", "properties": {"n_components": {"not": {"enum": ["mle"]}}}, }, { "type": "object", "properties": {"svd_solver": {"enum": ["full", "auto"]}}, }, ], } Value: {'svd_solver': 'arpack', 'n_components': 'mle', 'copy': True, 'whiten': False, 'tol': 0.0, 'iterated_power': 'auto', 'random_state': None}
assert message.startswith("Invalid configuration for PCA(svd_solver='arpack', n_components='mle')")
Lale reports the error quicker than scikit-learn, taking only tens of
milliseconds instead of multiple seconds. The error message contains
both a natural-language description of the constraint and its formal
representation in JSON Schema. The
'anyOf' implements an 'or', so
you can read the constraints as
(not (n_components in ['mle'])) or (svd_solver in ['full', 'auto'])
By basic Boolean algebra, this is equivalent to an implication
(n_components in ['mle']) implies (svd_solver in ['full', 'auto'])
Since the constraint is specified declaratively in the schema, it gets applied wherever the schema gets used. Specifically, the constraint gets applied both during auto-ML and during type-checking. In the context of auto-ML, the constraint prunes the search space: it eliminates some hyperparameter combinations so that the auto-ML tool does not have to try them out. We have observed cases where this pruning makes a big difference in search convergence.
This notebook showed additions to scikit-learn that simplify auto-ML as well as error checking. The common foundation for both of these additions is schemas for operators. For further reading, return to the Lale github repository, where you can find installation instructions, an FAQ, and links to further documentation, notebooks, talks, etc. | https://nbviewer.jupyter.org/github/IBM/lale/blob/master/examples/docs_guide_for_sklearn_users.ipynb | CC-MAIN-2021-39 | refinedweb | 4,268 | 50.43 |
SYNOPSIS
#include <stdlib.h>
int rpmatch(const char *response);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
rpmatch():
Since glibc 2.19:
_DEFAULT_SOURCE
Glibc 2.19 and earlier:
_SVID_SOURCE
DESCRIPTIONrpmAfter examining response, rpmatch() returns 0 for a recognized negative response ("no"), 1 for a recognized positive response ("yes"), and -1 when the value of response is unrecognized.
ERRORSA).
ATTRIBUTESFor an explanation of the terms used in this section, see attributes(7).
CONFORMING TOrpmatch() is not required by any standard, but is available on a few other systems.
BUGSTThe); }
COLOPHONThis page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | http://manpages.org/rpmatch/3 | CC-MAIN-2019-35 | refinedweb | 123 | 50.02 |
Agenda
See also: IRC log
Canonicalization Requirements
Namespaces and Namespace Undeclarations
Schema Validation after adding a signature
<fjh> sidenote from later in meeting: use your W3C login name as an IRC handle for ease of assigning actions
fjh: next call 8/12/08
<fjh> TPAC
fjh: FtF scheduled Oct 20-21
<fjh> F2F planning
<fjh>
fjh: next year, 4 FtF mtgs
fjh: soliciting hosts for ftf meetings
<shivaram> Feb/March may be a better time in Seattle
<shivaram> due to weather
<gerald-> I could look into hosting the meeting here the first portion of 2009, I could work with Kelvin on this.
fjh: talked to XML coord group,
offer to get presentation but need guidance
... need to respond late this week / early next
klanz2: relationship between data
models in v1 and v2
... understand differences
<fjh> klanz2: impact of namespace prefix undeclarations on the XPath model
<fjh> ... relationship to xml 1.1
<scottc> core features that we would not want to profile out
<fjh> jccruellas: views on main relationships with xml signature for xpath 2.0
<fjh> ... most used features of XPath 2.0
<fjh> ... connections with xpath filter 2.0
klanz: implementation experience with performance, with respect to namespace processing
jccruellas: insight into XPath processing performance related to features, XPath 2.0 vs 1.0, and metrics used
<scribe> ACTION: fjh to draft message about XPath 2 presentation to mailing list [recorded in]
<trackbot> Created ACTION-20 - Draft message about XPath 2 presentation to mailing list [on Frederick Hirsch - due 2008-08-05].
<fjh> public
fjh: updated public and admin pages, please review
<fjh> administrative
<fjh>
fjh: separate errata from new versions
jccruellas: any product related to XPath Filter 2?
<fjh>
fjh: yes, listed in deliverables, add to product list
jccruellas: add product for XPath filter
RESOLUTION: approved product list with addition of XPath Filter
fjh: minor revisions sent out
klanz2: appropriate to wait another week
<fjh>
fjh: please check member list and send a note if anybody missing
<pdatta> I was there on the 2nd half on both days
fjh: several people missing who dialed in, so please report if you're missing
<klanz2> Check here as well please:
<klanz2>
<klanz2>
<fjh>
gerald-: reviewed issues and
actions
... resulting list seemed appropriate, but should review
... need relationship between issues captured in some way
fjh: some work on tracker ...
gerald-: seems to work now
... actions are specific tasks for people, issues are general topics for discussion
issues might lead to actions
fjh: issue of substance to the standards, related to our deliverables ...
scottc: no final decision to use
tracker, but manual no good
... want to use tracker for actions, but issues could be handled other ways
gerald-: other tools needed to track relationships
fjh: Agenda reviews are always implicit ...
<klanz2> re issues: maybe use titles to reflect relation ships
jccruellas: regrets for scheduled
meeting as scribe
... next meeting, 8/12
<fjh> regrets, klanz 8/12
scottc: happy to provide Jira instance if that's helpful for issues. will send email to fjh about it
fjh: identified at ftf as a core
issue
... avoiding big or breaking changes is helpful to adoption
... understand issues, priorities, but still limit changes when possible
seems like one use case or requirement is to signal inline what degree of c14n might be needed
relationship between serialization and c14n
fjh: streaming is another consideration
<csolc> canonicalization as a final transform could output something other than a nodeset or an octet stream.
klanz2: improving robustness when editing unsigned parts of a document
<klanz2> potentially including re-indentation ...
pdatta: whitespace between element tags
pdatta: ignorable whitespace that has not been ignored ...
csolc: schemas define ordered relationships of elements
<klanz2> xs:all ?
csolc: schema communicates semantics, c14n only about the bytes
scottc: schema conveys semantics that c14n doesn't understand
<EdS> In my view, schema communicates structure -- namespace communicates semantics.
<EdS> Namespace defines the semantics (meaning of XML elements); schema defines the structural relationship of the XML elements (and points to their semantics (namespace)); and XML instances are instances of an XML schema.
scottc: yes, but much of the semantic comes from the connection between elements, which is a schema
<EdS> I would say, in reference to Scott's point above, that schemas provide semantics in the sense that they indicate a hierarchy of elements which implies a semantically-meaningful grouping relationship.
scottc: xml doc communicates information - expectation that sig should verify when the same, despite details
csolc: e.g. schema says order doesn't matter, but c14n treats element order as significant; a processor may reorder before c14n, then unexpected verification failure
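A minimal sketch (not from the minutes) of the failure mode csolc describes: two documents that a schema using xs:all would treat as equivalent still canonicalize to different byte streams, so a signature computed over one does not verify over the other. This uses Python's built-in C14N 2.0 support (`xml.etree.ElementTree.canonicalize`, Python 3.8+); the element names are made up for illustration.

```python
import xml.etree.ElementTree as ET

# Under an xs:all content model these two documents carry the same
# information, differing only in sibling order.
doc_a = "<r><x>1</x><y>2</y></r>"
doc_b = "<r><y>2</y><x>1</x></r>"

# Canonicalization preserves document order, so the canonical forms
# (and therefore any digest or signature over them) differ.
c14n_a = ET.canonicalize(doc_a)
c14n_b = ET.canonicalize(doc_b)
print(c14n_a == c14n_b)  # False
```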
klanz: line breaks in base64 are horrible
scottc: schema type normalization is a common source of breakage
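A similar sketch for the base64 and type-normalization points above: a processor that re-wraps base64 text (or otherwise normalizes a schema-typed value) leaves the decoded value unchanged but alters the canonical bytes, breaking the signature. Element name and values are hypothetical.

```python
import base64
import xml.etree.ElementTree as ET

wrapped = "<k>QUJD\nREVG</k>"  # base64 with a line break, as many encoders emit
plain = "<k>QUJDREVG</k>"      # the same value without the break

# The decoded values are identical...
assert base64.b64decode("QUJD\nREVG") == base64.b64decode("QUJDREVG")

# ...but canonicalization preserves text content, including the newline,
# so the canonical forms (and any digest over them) differ.
print(ET.canonicalize(wrapped) == ET.canonicalize(plain))  # False
```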
<fjh> possible issue: simplified c14n for signing versus more general c14n, e.g. not produce compliant xml document
<fjh> issue: simplified c14n for signing versus more general c14n, e.g. not produce compliant xml document
<trackbot> Created ISSUE-37 - Simplified c14n for signing versus more general c14n, e.g. not produce compliant xml document ; please complete additional details at ..
pdatta: other transforms depend on c14n1, e.g. STR
<gerald-> I need to leave, unfortuantely, talk to you all at the next meeting.
Kelvin: too many dependencies on
XML specs, making impls too complex and big
... goal to minimize and simplify dependencies
... reduces threat surface
<scribe> ACTION: Kelvin to propose ways to reduce dependencies on XML specs [recorded in]
<trackbot> Created ACTION-21 - Propose ways to reduce dependencies on XML specs [on Kelvin Yiu - due 2008-08-05].
bhill: in use cases, distinction
between need for XML signing and need for signing in XML
... simplified cases where you want the XML processor involved, but not necessarily in a fully robust XML context
<fjh> issue: profile for signature processing for non-XML or for constrained XML requirements
<trackbot> Created ISSUE-38 - Profile for signature processing for non-XML or for constrained XML requirements ; please complete additional details at ..
<fjh>
scottc: is namespace prefix undeclaration a breaking change?
klanz2: other changes due to unicode as well
... some xml 1.1 features may go into 1.0
... useful to send comments/questions to xml core wg
<klanz2>
<fjh> issues list
<fjh> issue: Namespace Undeclarations and canonicalization
<trackbot> Created ISSUE-39 - Namespace Undeclarations and canonicalization ; please complete additional details at ..
<fjh> EXI documents published -
<scottc> ACTION: esimon2 to review EXI docs that were published [recorded in]
<trackbot> Created ACTION-22 - Review EXI docs that were published [on Ed Simon - due 2008-08-05].
<fjh>: need to review where they mention signing, verification
EdS: expect exi goals to include efficient signing/verification of exi
jccruellas: exi another serialization, so are c14n and other concerns the same?
<EdS>: Will take approach that EXI requirements for signing will be based on EXI use cases and high efficiency.
jccruellas: does the usual 1 to many relationship exist between c14n and the source XML?
<fjh> issue: appropriate signing/verification position in EXI workflow, expectations and correctness review
<trackbot> Created ISSUE-40 - Appropriate signing/verification position in EXI workflow, expectations and correctness review ; please complete additional details at ..
scottc: could be orthogonal, but may be ways to take advantage of EXI, e.g. XML nesting issues?
<fjh> issue: signing compact EXI representation of XML - is that reproducible for verification
<trackbot> Created ISSUE-41 - Signing compact EXI representation of XML - is that reproducible for verification ; please complete additional details at ..
<fjh> ACTION: frederick to contact EXI re signature/verification use cases [recorded in]
<trackbot> Created ACTION-23 - Contact EXI re signature/verification use cases [on Frederick Hirsch - due 2008-08-05].
klanz: could transmission of EXI break signatures?
<klanz2> wonders if SUN has FastInfoset people interested in similar matters as EXI?
sean: concerned about relaxing MUSTs
... existing signatures need to continue to validate
<jccruellas> +1
jccruellas: diff. requirements on signing vs. verifying seems like a good solution
<klanz2> -1
klanz: disagrees, diff between the two is a small gap, so doesn't help implementations
... maybe change MUST to SHOULD, but agrees with sean generally
csolc: need sig 2.0 indicator to make real changes
<fjh> ack 2nd edition
<fjh> ack 2nd
Kelvin: does w3c have guidance for how to deprecate things?
... timelines for dropping
... would like to see SHA 256 and at least one NIST ECC on the mandatory list
scottc: some things might be important enough to force people to support them
Kelvin: stronger crypto may have use where interop with weaker crypto not an issue
<klanz2> keep in mind that XAdES needs legacy algorithms for a long time, secured by stronger algorithms
<fjh> ... move SHA1 algs to recommended list.
<klanz2> ... and timestamps
<klanz2> .. cf. XAdES ArchiveTimestamp
sean: separate implementation reqs from spec reqs
<klanz2> deprecation for outdated algorithms on signing, " ... the implementation must issue a warning or so .... "
scottc: could have different levels of conformance, separate doc
jccruellas: one doc with the core stuff, algorithm specifics, and rules for what to implement may not be ideal
... algorithms often specific to deployment context
jccruellas: core + one or more algorithm docs that include xml syntax related to algorithm and requirements
klanz: often a need to verify old signatures
... emitting warnings on older algorithms, but not the same as not computing them any more
<jccruellas> +1
<klanz2> We could distinguish three cases here:
<klanz2> 1. New and Legacy Processing would produce the same DigestInput
... 2. Processing that isn't backwards compatible
... 3. New Processing Models
... first case limits us to existing transforms
... second allows us to introduce new identifiers
scottc: prefer not to use transform to signal data, could be misuse of feature
... could use hints
... not necessarily xml processing instruction
pdatta: preserving forward compatibility via adding attributes, will it break people?
... could add attributes to existing elements without breaking existing implementations that do not expect it
... xpath implies DOM, but provide a hint that some transforms could be streamed
<klanz2> +1
<klanz2> to see how far we get with hints
<fjh> issue: backward and forward compatibility
<trackbot> Created ISSUE-42 - Backward and forward compatibility ; please complete additional details at ..
<EdS> I suggest calling it 1.1 if we restrict ourselves to non-breaking changes and call it 2.0 if/when we decide to go for breaking changes.
scottc: correcting the schema is important
<fjh> issue: improvements to XML Signature schema
<trackbot> Created ISSUE-43 - Improvements to XML Signature schema ; please complete additional details at ..
<scribe> ACTION: scantor to review schema for improvements [recorded in]
<trackbot> Created ACTION-24 - Review schema for improvements [on Scott Cantor - due 2008-08-05].
<EdS> +1 to Scott's schema review
fjh: capability to envelope a Signature as a generic feature in schemas?
fjh: does new xsd provide something new here?
EdS: wildcard opens up to security threat
... special schema attribute in a signature?
... design for document should include signature in schema if anticipated signing
<fjh> ... best practice for schema?
<fjh> ACTION: frederick to give feedback on xml schema best practice in xml-cg [recorded in]
<trackbot> Created ACTION-25 - Give feedback on xml schema best practice in xml-cg [on Frederick Hirsch - due 2008-08-05].
klanz: 2 cases
... signature is a first class object so designer needs to be aware of it
... layered approach where the signature is separate from the XML content
eds: only speaking of enveloped signatures
<csolc> the schema validation could ignore the signatures.
Kelvin: agrees that signature is first class citizen, but maybe add more flexible approach to detached signatures
eds: agrees, add support for a pointer to a signature in the schema
scottc: in other words, an xsi:sig attribute?
EdS: or xml:sig
<EdS> Idea is that a special attribute, perhaps in XML Schema or perhaps even better in the XML spec, for referencing a signature (detached or maybe not) that applies to the element.
<fjh> issue: requirement to enable signatures on documents that do not anticipate signatures in the schema
<trackbot> Created ISSUE-44 - Requirement to enable signatures on documents that do not anticipate signatures in the schema ; please complete additional details at ..
fjh: if completed, please send to list with ACTION-# in the body and title indicating status of action
fjh: look at old best practices document from earlier WG
[NEW] ACTION: esimon2 to review EXI docs that were published [recorded in]
[NEW] ACTION: fjh to draft message about XPath 2 presentation to mailing list [recorded in]
[NEW] ACTION: frederick to contact EXI re signature/verification use cases [recorded in]
[NEW] ACTION: frederick to give feedback on xml schema best practice in xml-cg [recorded in]
[NEW] ACTION: Kelvin to propose ways to reduce dependencies on XML specs [recorded in]
[NEW] ACTION: scantor to review schema for improvements [recorded in]
[End of minutes] | http://www.w3.org/2008/07/29-xmlsec-minutes.html | CC-MAIN-2016-50 | refinedweb | 2,115 | 50.97 |
[Solved] Help with Counter in Delegate
I have this code that somewhat works. If I scroll through the model till the end, all the minutes count down correctly. But if I don't do anything, then only the minutes on the view get updated.
I can't do the following:
- Update Model's values
- Start Trigger on non-visible delegates
Any help would be appreciated.
Video of the Problem:
"Your text to link here...":
Here is the sample code:
@
import QtQuick 1.1
import com.nokia.meego 1.0

Page {
    tools: commonTools

    ListView {
        id: mainListView
        x: 0; y: 70
        width: 480; height: 400
        clip: true
        boundsBehavior: Flickable.DragAndOvershootBounds
        cacheBuffer: 0
        snapMode: ListView.NoSnap
        orientation: ListView.Horizontal
        flickableDirection: Flickable.HorizontalFlick
        model: mainListViewXML
        delegate: myDelegate
    }

    XmlListModel {
        id: mainListViewXML
        source: ""
        query: "/nextbart/etd"
        XmlRole { name: "minutes"; query: "@minutes/string()" }

        onStatusChanged: {
            if (status === XmlListModel.Ready) {
                // Method #2
                updateMinutes.start()
            }
        }
    }

    Timer {
        id: updateMinutes
        interval: 6000
        running: false
        repeat: true
        onTriggered: {
            var xcount = mainListViewXML.count, i;
            for (i = 0; i < xcount; i++) {
                var xmin = parseFloat(mainListViewXML.get(i).minutes) - 1;
                mainListViewXML.setData(i, minutes, xmin)
            }
        }
    }

    Component {
        id: myDelegate
        Item {
            width: 160

            Timer {
                interval: 9000; running: true; repeat: true
                onTriggered: {
                    //! Method #1 (Only Updates visible minutes)
                    var xmin = parseFloat(trainMinutes.text) - 1;
                    trainMinutes.text = xmin;
                }
            }

            Text {
                id: trainMinutes
                text: minutes
                font.bold: true
                color: "#000000"
                smooth: true
                x: 40; y: 70
                width: 86; height: 68
                opacity: 1
            }
        }
    }
}
@
Here is a sample screen of 3 numbers on the screen.
Note: There are lots of more trains but only these 3 get updated if you don't scroll.
A couple things you could try:
- Regarding Method 1: ListView only creates visible delegates (that is, if the model has 10 items but the ListView can only display 3 items, it will not create the other 7). If you set a large cacheBuffer value it will force creation of all the items, and the Timer in the delegate method should work.
- Regarding Method 2: Assuming the XML data is automatically updated, you could try changing your timer to call mainListViewXML.reload(), which will pull the latest file from the network and re-parse it.
Regards,
Michael
Thanks Michael, I updated the cacheBuffer based on mainListViewXML.count, which seems to be working fine now.
@
//! Reset Buffer
mainListView.cacheBuffer = 160 * mainListViewXML.count;
@
I don't know how much the cache buffer affects the memory usage, but I would rather load the items than reload the XML. | https://forum.qt.io/topic/18837/solved-help-with-counter-in-delegate | CC-MAIN-2022-40 | refinedweb | 406 | 67.55 |
WebService::IMDBAPI - Interface to
version 1.130020
my $imdb = WebService::IMDBAPI->new();

# an array of up to 1 result
my $results = $imdb->search_by_title('In Brugges', { limit => 1 });

# a WebService::IMDBAPI::Result object
my $result = $results->[0];
say $result->title;
say $result->plot_simple;
WebService::IMDBAPI is an interface to.
Creates a new WebService::IMDBAPI object. Takes the following optional parameters:
The user agent to use. Note that the default LWP user agent seems to be blocked. Defaults to Mozilla/5.0.
The language for the results. Defaults to en-US.
Searches based on a title. For the options and their defaults, see.
Some of the most common options are:
Limits the number of results. Defaults to 1.
The plot type you wish the API to return (none, simple or full). Defaults to simple.
The release date type you wish the API to return (simple or full). Defaults to simple.
$title is required.
$options are optional.
Returns an array of WebService::IMDBAPI::Result objects.
Searches based on an IMDB ID. For the options and their defaults, see.
$id is required.
$options are optional.
Returns a single WebService::IMDBAPI::Result object.
Andrew Jones <andrew@arjones.co.uk>
This software is copyright (c) 2013 by Andrew Jones.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~arjones/WebService-IMDBAPI-1.130020/lib/WebService/IMDBAPI.pm | CC-MAIN-2017-43 | refinedweb | 227 | 63.86 |
Render the props of a vtkRenderer. More...
#include <vtkRendererDelegate.h>
Render the props of a vtkRenderer.
vtkRendererDelegate is an abstract class with a pure virtual method Render. This method replaces the Render method of vtkRenderer to allow custom rendering from an external project. A RendererDelegate is connected to a vtkRenderer with method SetDelegate(). An external project just has to provide a concrete implementation of vtkRendererDelegate.
Definition at line 37 of file vtkRendererDelegate.h.
Definition at line 40 of file vtkRendererDelegate.h.
Return 1 if this class is the same type of (or a subclass of) the named class.
Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h.
Reimplemented from vtkObjectBase.
Render the props of vtkRenderer if Used is on.
Tells if the delegate has to be used by the renderer or not.
Initial value is off.
Definition at line 62 of file vtkRendererDelegate.h. | https://vtk.org/doc/nightly/html/classvtkRendererDelegate.html | CC-MAIN-2019-47 | refinedweb | 149 | 53.88 |
Netflix has open sourced its DGS Framework (Domain Graph Service) GraphQL server framework for Spring Boot. Starting out as a tool internal to the corporation, it has been generously open sourced for the rest of us to enjoy.
Netflix is one of those organizations that have gone beyond REST, embracing GraphQL instead. Rather than exposing a myriad of microservices to UI developers, Netflix opted for a unified API aggregation layer at the edge, powered by GraphQL. Since they also use Spring Boot for their infrastructure, merging was bound to happen.
As such the DGS framework is built on top of graphql-java and in essence it offers a layer of abstraction over the library's low-level details. While exclusively written in Kotlin (requires Kotlin 1.4), the framework mainly targets Java as Java is most closely associated with Spring Boot.That said, you are free to write your code in Kotlin too.
Integrating the library is easy since it too utilizes Spring Boot’s annotation-based model. For example in implementing a Data Fetcher (constructs which return the data for a query) you use the following annotation based snippet :
@DgsComponent
public class ShowsDatafetcher {

    private final List<Show> shows = List.of(
            new Show("Stranger Things", 2016),
            new Show("Ozark", 2017),
            new Show("The Crown", 2016),
            new Show("Dead to Me", 2019),
            new Show("Orange is the New Black", 2013)
    );

    @DgsData(parentType = "Query", field = "shows")
    public List<Show> shows(@InputArgument("titleFilter") String titleFilter) {
        if (titleFilter == null) {
            return shows;
        }
        return shows.stream().filter(s -> s.getTitle().contains(titleFilter)).collect(Collectors.toList());
    }
}
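Such a data fetcher answers ordinary GraphQL HTTP requests. As a rough sketch (in plain JavaScript; the /graphql path is only the usual Spring convention, and the title/releaseYear field names are assumptions about the Show type, not taken from the article), a client payload might be built like this:

```javascript
// Sketch only: the JSON body a GraphQL client could POST to a /graphql
// endpoint to invoke the "shows" data fetcher defined above.
// Field names (title, releaseYear) are assumed, not confirmed.
function buildShowsQuery(titleFilter) {
  return JSON.stringify({
    query: "query ($titleFilter: String) { shows(titleFilter: $titleFilter) { title releaseYear } }",
    variables: { titleFilter: titleFilter }
  });
}

console.log(buildShowsQuery("Oz"));
```

The @DgsData annotation wires the shows() method to that query field, so no controller code is needed.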
Apart from that, it comes with a host of other features:
It's pretty easy to set up. Just add the reference to the library com.netflix.graphql.dgs:graphql-dgs-spring-boot-starter and let it consume your GraphQL schema file, as DGS is designed for schema-first development.
To generate a class for each GraphQL type described in the schema, as well as to generate the Data Fetchers, the DGS codegen plugin must be included in the project:
plugins {
    id("com.netflix.dgs.codegen") version "4.0.12"
}
This works according to the mapping rules so, for example, the basic scalar types are mapped to corresponding Java/Kotlin types (String, Integer etc.), whereas the date and time types are mapped to corresponding java.time classes.
The code generator can also create the client API classes, which you can use to query data from a GraphQL endpoint using Java.
This is another testament to Spring Boot's versatility for your backend development; there are just so many integration options, something I found out first hand when graduating from the Java Web Developer Nanodegree. (See my Insider's Guide here). Now it can do GraphQL with ease too.
DGS Github
DGS Main
Created on 2015-03-30 11:58 by akshetp, last changed 2015-04-14 16:03 by berker.peksag. This issue is now closed.
On the following test file (test.py):
```python
class Solution:
def repeatedNumber(self, A):
test = 1
return []
```
Running python -m py_compile test.py returns the following error message:
Sorry: IndentationError: unexpected indent (test.py, line 6)
But without a newline on stderr at the end of the message. This causes some problems with scripts that expect a newline and, in my particular case, with reading stderr on Docker, where Docker just ignores the line if it doesn't end in a newline.
Also, this message differs from the runtime error message:
```
File "test.py", line 6
return []
^
IndentationError: unexpected indent
```
Would it be possible to at least add in a newline and at best, change the message to be consistent with the runtime error message?
I will try to look at the code and see if I can write a patch, but it will take me some time.
I can confirm that the bug is demonstrated by copying the text below into a normal Unix text file.
This patch adds the newline symbol. For some reason the py_compile module prints only SyntaxErrors with a traceback. All other exceptions are printed with "Sorry:" and on one line.
New changeset 1e139b3c489e by Berker Peksag in branch '3.4':
Issue #23811: Add missing newline to the PyCompileError error message.
New changeset d39fe1e112a3 by Berker Peksag in branch 'default':
Issue #23811: Add missing newline to the PyCompileError error message.
New changeset 22790c4f3b16 by Berker Peksag in branch '2.7':
Issue #23811: Add missing newline to the PyCompileError error message.
Thank you Alex. | https://bugs.python.org/issue23811 | CC-MAIN-2017-51 | refinedweb | 280 | 66.44 |
An introduction to any technology would not be complete without a "Hello World" example. This will give you some hands-on experience with the client-side and server-side code before diving into details. It also provides a sound basis for exploring Flash Remoting on your own.
First, we will look at the Flash code necessary to call the remote service, which is virtually the same regardless of which server-side technology implements the service. We will then look at the server-side code implemented in ColdFusion, Server-Side ActionScript, Java, ASP.NET, PHP, and as a SOAP-based web service.
The client-side ActionScript is virtually the same for each server-side service example. The only things that change are the path to the remote service when it is implemented as a web service and the path to the Flash Remoting gateway, which varies depending on the server implementation.
The client-side ActionScript code shown in Example 1-1 should be inserted on the first frame of the main timeline of a Flash movie, as shown in Figure 1-4.
/*** Section 1 ***/
#include "NetServices.as"

/*** Section 2 ***/
// Assign myURL so it points to your Flash Remoting installation.
var myURL = "";
var myServicePath = "com.oreilly.frdg.HelloWorld";

/*** Section 3 ***/
myResult = new Object( );
myResult.onResult = function (data) {
  trace("Data received from Server : " + data);
};
myResult.onStatus = function (info) {
  trace("An error occurred : " + info.description);
};
System.onStatus = myResult.onStatus;

/*** Section 4 ***/
var myServer = NetServices.createGatewayConnection(myURL);
var myService = myServer.getService(myServicePath, myResult);
myService.sayHello( );
Section 1 of Example 1-1 includes the NetServices.as library, which contains the code necessary to connect to a Flash Remoting-enabled server from Flash. If you do not include NetServices.as, the example will not work, but you will not receive any errors within the authoring environment.
Section 2 initializes two variables: myURL and myServicePath. The myURL variable will be used to create a NetConnection object that points to the server. The myServicePath variable will be used to create a service object that points to the service that will be called.
The myURL variable specifies the URL to the Flash Remoting gateway installed on the server. If the Flash Remoting gateway is installed on a Microsoft .NET server, the URL will point to the .aspx file for the gateway. Similarly, if you are using AMFPHP, the URL will point to a gateway.php file on your server.
The myServicePath variable specifies the path on the server to the remote service that will be called. The naming convention is similar to a Java package, with each section representing a directory on the server and the last section pointing to the actual service. If the remote service is a Microsoft .NET DLL, myServicePath should refer to the DLL's namespace and class name. Similarly, if the remote service is a Java class, the myServicePath variable will refer to the package name and class name of the Java class. If the remote service is a web service, myServicePath should contain the path to the web service's WSDL file.
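For illustration only, the directory mapping implied by that naming convention can be sketched in a few lines (this is hypothetical helper code, not part of the NetServices library or the gateway, whose actual lookup rules vary by application server and service type):

```javascript
// Hypothetical illustration: how a dotted service path such as
// "com.oreilly.frdg.HelloWorld" resolves to a file under the web root.
function servicePathToFile(servicePath, webroot, extension) {
  return webroot + "/" + servicePath.split(".").join("/") + extension;
}

console.log(servicePathToFile("com.oreilly.frdg.HelloWorld", "webroot", ".cfc"));
// -> webroot/com/oreilly/frdg/HelloWorld.cfc
```

Each dot-separated segment becomes a directory, and the final segment names the service itself.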
Calls from the Flash Player to the application server via the Flash Remoting gateway are asynchronous. Code execution within the Flash Player continues while data is being loaded, which is similar to loading XML into the Flash Player. You must define callback functions, which will be called automatically when the data loads from the server.
In ActionScript, callback functions can be attached as properties to a generic object (instantiated from the Object class). The functions are used to catch data and messages sent back from the server.
Section 3 of Example 1-1 creates an object and attaches two callback functions to it. The onResult( ) callback function is called when data is returned from the remote service, and the onStatus( ) callback function is called if an error occurs. An object used to receive results from a remote service is called a responder object (or sometimes called a response object).
The System.onStatus property specifies the function to be called if the Flash Player cannot connect to the server, as these types of errors are not handled by the onStatus( ) callback function for the remote service call. Example 1-1 sets System.onStatus to execute our object's onStatus( ) function. Once we have created an object and the callback functions to receive and process the data returned from the server, we are ready to call the remote service.
Section 4 of Example 1-1 makes a connection to the server by passing in myURL (initialized earlier) to the NetServices.createGatewayConnection( ) function. The server connection information is stored in the myServer variable. The example then gets a reference to the remote service, which we store in the variable myService, by calling the getService( ) method on the myServer variable initialized in the previous step. In the call to getService( ), we pass myServicePath to access the desired service and pass our myResult object to catch the data or status when the operation completes. We can then use myService (the reference to the remote service) to call methods on the service, such as the sayHello( ) method.
Save the Flash movie as HelloWorld.fla. Before the movie can be tested, we need to create the server-side code that implements the sayHello( ) function, as described in subsequent sections.
Example 1-1 utilizes the trace( ) command to display the data in the Output window in the Flash authoring environment. Therefore, the output is visible only when the movie is tested in the authoring environment and not when tested in a browser.
In the next section, you'll create the remote service required by this simple Flash movie. Once you have created the remote service, you can test the Flash movie using Control Test Movie. You should get the following output displayed in the Output window:
Data received from Server : Hello World from servertype
If you do not get this result:
Set the Output window to verbose mode (Window Output Options Debug Level Verbose).
Make sure that the server where the Flash Remoting gateway is installed is running and accessible.
Make sure that there are no syntax errors in your client-side ActionScript code or server-side code.
For the ColdFusion MX example, we will implement the remote service as a ColdFusion Component (CFC). CFCs are new to ColdFusion MX and provide an object-based approach to ColdFusion development. They are ideally suited to Flash Remoting. CFCs are discussed in depth in Chapter 5.
Create a file named HelloWorld.cfc and place it into the following directory, where webroot is the root of your web server and com\oreilly\frdg\ matches the service path specified by the initial portion of the myServicePath variable in Example 1-1:
Example 1-2 shows the code that must be added to your HelloWorld.cfc component:
<cfcomponent>
  <cffunction name="sayHello" access="remote" returntype="string">
    <cfreturn "Hello World from ColdFusion Component" />
  </cffunction>
</cfcomponent>
This is a simple component that contains one function, sayHello( ), which returns a string. Notice that we set the access to "remote", which is necessary to allow the component to be called remotely, either by Flash or as a web service.
Save the component. If you have access to the ColdFusion administrative interface (which you should if you have a local installation) browse to it through your browser with the following URL:
After entering your ColdFusion administrative password, you should see a description of the component, similar to Figure 1-5.
If you do not see the description, or if you get an error, check and fix any syntax errors and try again.
Once you have verified that the ColdFusion component works via the browser, switch back to Flash and test the HelloWorld.fla movie created in Example 1-1. You should see "Hello World from ColdFusion Component" in Flash's Output window.
ColdFusion MX and JRun 4 application servers allow developers to create remote services in Server-Side ActionScript (SSAS). Server-Side ActionScript is a scripting language that a Flash MX developer can use to create remote services without needing to know a server-side language such as ColdFusion Markup Language (CFML) or Java. Client-side JavaScript and ActionScript programmers may find SSAS easier than learning a new language. Using SSAS, simple services can be written that access databases or utilize the HTTP functionality of ColdFusion or JRun 4. Code written in SSAS can be consumed by Flash via Flash Remoting only and cannot be used to create other types of output such as HTML.
The SSAS mechanism of ColdFusion MX and JRun 4 is actually a server-side implementation of the Rhino JavaScript parser, with some server-specific objects and methods added that allow the developer access to the functionality of <cfquery> and <cfhttp> tags of ColdFusion (found in the ActionScript CF object). Methods of the CF object can be accessed as CF.methodName( ). You can find a complete discussion of SSAS in Chapter 6. See for details on the Rhino project.
To implement the Hello World example in SSAS, create a plain text file named HelloWorld.asr using any text editor, and place it into the following directory, where webroot is the root of your web server:
Since ColdFusion can process CFCs, ColdFusion pages, and SSAS files, you need to make sure there are no name conflicts. If you created the ColdFusion component example file earlier, rename HelloWorld.cfc to SomethingElse.cfc to ensure that the SSAS (.asr) file, and not the ColdFusion file, is processed. You may also need to restart the ColdFusion MX server, as the .cfc file may have been cached. The exact order in which services are located varies with the application server on which the Flash Remoting gateway is installed. See the appropriate server chapters later in the book for details.
Example 1-3 shows the code that should be added to HelloWorld.asr; it creates a simple function called sayHello( ) that returns a string to the client.
function sayHello ( ) {
  return "Hello World from Server-Side ActionScript";
}
Save the file in plain text format and switch back to Flash. Test the Flash movie and you should see the output from the SSAS function.
If you get an error saying that the service cannot be found, check the service path, and make sure that there are no syntax errors in the .asr file.
For the Java example, we will implement our remote service as a simple Java class. Using Java as a remote service requires that the Flash Remoting gateway be installed on a Java application server such as Macromedia's JRun 4 or IBM's WebSphere. The Java version will not work with ColdFusion MX or Microsoft .NET servers.
Create a new plain text file in any text editor, name it HelloWorld.java, and enter the code shown in Example 1-4.
package com.oreilly.frdg;

public class HelloWorld {
  public String sayHello ( ) {
    return "Hello World from Java";
  }
}
Compile the class into your web server's classpath. This may vary from server to server, but the server's WEB-INF (or SERVER-INF in the case of JRun) directory is usually included within the server's classpath. For example, to compile it using JRun 4, you would use (from a command prompt):
c:\jrun4\servers\myservername\server-inf\classes\com\oreilly\frdg\>javac HelloWorld.java
If you are using JRun 4 and created the SSAS example earlier, rename HelloWorld.asr to SomethingElse.asr to ensure that the Java class is used instead.
Once the class has been successfully compiled, place it in the classpath\com\oreilly\frdg\ directory and switch to Flash and test your movie. You should see the output from the sayHello( ) method of the HelloWorld Java class. If you get an error that the service cannot be found, make sure that you have compiled the class into the server's classpath.
ASP.NET services can be written in several languages, including VB.NET and C#. This Microsoft .NET service example is implemented as a .NET DLL written in C#.
Open Microsoft's Visual Studio .NET (VS.NET) and create a new project. From the Project Types window, select Visual C# Projects; then, from the Templates window, select Class Library. Set the name of the project to HelloWorld, as shown in Figure 1-6. Rename the class file that appears from Class1.cs to HelloWorld.cs. The code will work even if you do not rename the class file, but renaming it makes it easier to organize the files.
Example 1-5 shows the server-side C# code to implement the example as a Windows .NET service.
using System;

namespace com.oreilly.frdg
{
    public class HelloWorld
    {
        public String sayHello ( )
        {
            return "Hello World from ASP.NET DLL";
        }
    }
}
Enter the code shown in Example 1-5 and compile the DLL using VS.NET's Build Build Solution option, which creates HelloWorld.dll in the following directory:
Copy HelloWorld.dll into the flashservices/bin directory on your .NET web server at:
The DLL contains a class with one function, sayHello( ), which returns a string. The service path within Flash is determined by the DLL's namespace plus the class containing the method being called. By setting the namespace to the same as the directory structure for our other examples, we will not have to change the myServicePath variable within our client-side ActionScript. Using a unique namespace also protects your DLL from namespace collisions with other DLLs.
Switch back to the Flash movie and change the myURL variable in Example 1-1 to point to the .NET version of the Flash Remoting gateway, such as:
var myURL = "";
This is the only change that has to be made to the Flash movie. It is necessary because the .NET version of the Flash Remoting gateway is implemented differently than the Java and ColdFusion MX versions.
Save the Flash movie and test it. You should see the output from the DLL ("Hello World from ASP.NET DLL") in Flash's Output window.
The Hello World application (and other applications) must be set up a bit differently in PHP than in other environments. Flash Remoting with PHP is class-based, due to requirements of the AMFPHP library. That is to say, all Flash Remoting services must be written as classes in PHP. To install the AMFPHP library, simply download the source release package and copy its flashservices directory to your web server's document root (see Chapter 9 for additional details). Because the class is named com.oreilly.frdg.HelloWorld, AMFPHP searches in the services path for a HelloWorld.php file. The main flashservices directory resides under the web root, with the AMFPHP classes in that directory. The services directory resides in this flashservices directory as well.
When building PHP remote services, you should include a gateway.php file in your server-side application in the directory for your current project. This creates the Flash Remoting gateway and includes the necessary files. The gateway.php file (shown in Example 1-6) for the Hello World example should be saved in the webroot\com\oreilly\frdg directory.
<?php /* File: gateway.php Instantiates the Gateway for the HelloWorld Application */ require_once '/app/Gateway.php'; /* Require files */ $gateway = new Gateway( ); /* Create the gateway */ $gateway->setBaseClassPath('/services/com/oreilly/frdg'); /* Set the path to where the service lives */ $gateway->service( ); /* Start the service */ ?>
Create a file named HelloWorld.php and place it into the following directory, where webroot is the root of your web server and com\oreilly\frdg\ matches the service path specified by the initial portion of the myServicePath variable in Example 1-1:
Add the code shown in Example 1-7 to your HelloWorld.php page.
<?php /* File: {SERVICES_CLASS_PATH}/com/oreilly/frdg/HelloWorld.php provides the HelloWorld class used in Chapter 1. */ class HelloWorld { function HelloWorld ( ) { $this->methodTable = array( 'sayHello' => array( 'description' => 'Says Hello from PHP', 'access' => 'remote', 'arguments' => array ('arg1') ) ); } function sayHello ( ) { return 'Hello World from PHP'; } } ?>
Example 1-7 implements a simple class named HelloWorld that contains one method, sayHello( ), which returns a string. The class is named the same as the file. The methodTable array is used by AMFPHP to look up functions to invoke and to provide a pseudoimplementation of ColdFusion's CFCExplorer utility, which documents the class, methods, properties, arguments, return types, and so forth.
Switch back to the Flash movie and change the myURL variable in Example 1-1 to point to the AMFPHP gateway:
var myURL = "";
This is the only change that has to be made to the Flash movie, and it is necessary because the PHP implementation utilizes PHP pages to handle the functionality of the gateway.
If you run the movie in the test environment, you should see the phrase "Hello World from PHP" in the Output window. If you don't see it, verify that you have correctly installed the AMFPHP classes and verify your code.
For the web service example, we will create a web service using ColdFusion MX. However, any web service containing a sayHello( ) method that returns a string works just as well.
Creating a web service in ColdFusion MX is extremely simple; we simply pass the URL to our CFC, adding ?wsdl to the query string, which tells ColdFusion to generate a web service from the component. We'll use the CFC that we created in Example 1-2, HelloWorld.cfc, saved in the directory specified earlier.
Browse to the component with a web browser, and add the ?wsdl query string to the URL that points to the component:
The browser should display the WSDL XML for the web service, as follows:
<?xml version="1.0" encoding="UTF-8"?> <wsdl:definitions <wsdl:message < see only a blank screen, view the page's source in your browser (using View Source in Internet Explorer, for example). If you receive an error, correct any errors identified by the error message and try again. Like any URL, the web service URL may be cached depending on the browser settings, so you should reload/refresh the page to make sure the browser isn't using the cached version. This web service can also be seen at the author's site at:
Switch to Flash and change the myServicePath variable to point to the web service's WSDL file. If you are using the CFC to create the web service, the path will be:
var myServicePath = "";
Test your movie, and you should see the output from the sayHello( ) method of the web service. Although our web service is on the same server as the Flash Remoting gateway, Flash Remoting is simply acting as a gateway when accessing an XML-based (SOAP-compliant) web service. The web service can be on any computer accessible via the network or the Internet.
When working with Flash Remoting and web services, you are not limited to ASP.NET, ColdFusion, PHP, and J2EE. Web services can be implemented in:
Python or Perl
C or C++
Any other language that has a SOAP library implementation
More information on web services can be found at:
The Hello World example, while simple, illustrates the power of using Flash Remoting. The core client-side ActionScript code is the same, regardless of the language or server model that the remote service is written in. At most, only the path to the Flash Remoting gateway or remote service is different.
Furthermore, none of the server-side code is Flash-specific. This means that you can create libraries of functions that work from the server-side languages, for use without Flash, which can also be called directly from Flash. In many cases, you will be able to integrate a Flash front end with existing server-side code and libraries with little or no changes on the server. (Details and exceptions are covered throughout the rest of the book.)
Isolation between server-side and client-side code allows for a clean division of labor. Server-side developers need not worry about what is calling their code; if there is a well-defined API on the server, Flash developers can seamlessly hook into the server-side code. Similarly, the Flash developer need not worry about the details of the server-side implementation. He need only know the API for the remote services he intends to call. If he is using web services, he can query the .wsdl file on the server to discover the methods. This allows both the server-side code and the Flash application to be developed simultaneously, reducing production time and making testing and debugging easier.
Even if one developer writes both the Flash and server-side code, the multitiered architecture is still advantageous. It allows you to define an API, implement it on the server, and then hook the Flash movie into it. This makes it possible to test each component on its own before connecting Flash to the server, ensuring that bugs are less frequent and easier to isolate.
Our example may seem simple, because we are only passing a string from the server to Flash. However, if you think of a string as just another type of object or datatype, you can begin to see the power of Flash Remoting. Try passing more complex datatypes, such as an array, from the server-side service to Flash, and see what is returned to the Flash movie. Modify the onResult( ) callback function from Example 1-1 to do something more interesting with the data than display it in the Output window. | http://etutorials.org/Macromedia/Fash+remoting.+the+definitive+guide/Part+I+Remoting+Fundamentals/Chapter+1.+Introduction+to+Flash+Remoting/1.6+Hello+World/ | CC-MAIN-2017-22 | refinedweb | 3,567 | 55.03 |
On Thursday 15 March 2007 22:08:59 Matthew Miller wrote: > How hard is it to get a program added to the blacklist? Festival probably > should be. And then I should do: > %ifarch x86_64 > Obsoletes: festival.i386 < 1.96 > %endif First, I don't think you can reference arch like that in a spec. Secondly, why don't you split out the two libs into a festival-libs package, that is required by festival? festival-devel will pick up the library requires out of the -libs package, the libs package will have a generic requires on festival, not an arch specific one. This will leave festival-devel and festival-libs as multiarch, while festival itself is not. This is the solution that many other packages use. -- Jesse Keating Release Engineer: Fedora
Attachment:
pgphiQdByZDWd.pgp
Description: PGP signature | https://www.redhat.com/archives/fedora-devel-list/2007-March/msg00746.html | CC-MAIN-2014-10 | refinedweb | 137 | 58.69 |
In this article, the term "Bonobo" is used to specify both the Bonobo infrastructure, and -- often the underlying CORBA transport. This is to reduce confusion where making a distinction between the two is not helpful.
Similarly, I do not refer to the various language bindings for Bonobo (particularly good are Perl and Python -- see Resources), but rather stick with the native C bindings.
The first thing to understand about the way Bonobo interfaces behave is that it is not exactly like the way an Object Oriented (OO) language behaves with a class. Whilst an OO language's class type data, (by type data we mean the abstract information describing the class or interface) contains information about the class's methods. It also (typically) contains data and scope type information. What this means is that in an OO language there is often a protected-private-public distinction for methods. In addition, there is often data layout information -- that is, a "Point" object might have a data member of name "foo" of type "double." This implementation detail can thus be confused with the interface that the object exports.
In CORBA, the only things that are exposed to the remote world are methods (even attributes are implemented as get/set methods). Thus an interface is precisely that -- a pure method-based WYSIWYG interface. Hence whilst a Bonobo interface can inherit from another Bonobo interface, there is no reason the implementation needs to.
Secondly, Bonobo does not use multiple inheritance (MI). Instead, it uses single inheritance in combination with interface aggregation in the same way as COM, UNO and XPCOM.
Most interfaces inherit solely from the Bonobo::Unknown interface, which provides a nexus for retrieving (and querying) other interfaces. Remember that an interface contains no data: It simply provides a way to talk to the actual data on the object that lives behind it. So it should be reasonable that several different interfaces could be used to access that data. Thus a Control, for example, might have a property-access interface and a visual-embedding interface.
To get an interface from an object reference, one would use the
queryInterface (QI) method in Interface Definition Language (IDL):
The queryInterface method
Whilst this is an important concept, in many cases the bindings will return interfaces of the type you expect, thus it is frequently not neccessary to use QI.
Bonobo splits into several parts. In the client section, the parts that we are interested in are the client sugarwrappers, and using the CORBA interfaces directly ourselves.
Using CORBA interfaces from C
The above IDL
queryInterface method, when compiled to
a C stub, would have a signature thus:
The queryInterface method compiled
and be used thus:
The queryInterface method in use
This looks intuitive enough: We pass a C string, and we get an Unknown back. But there are several things worth noticing:
- The namespace and interface name are mangled into the method name using underscores, ie. since the interface is in the Bonobo:: namespace it has a
Bonobo_prefix, and since the method is in the Unknown interface, there is then the
Unknown. This distinction is an important visual indication as to whether a method is a CORBA method (it will have a mixed case signature like
Bonobo_Control_activate) as opposed to a wrapper function ( which will be all lower case:
bonobo_control_new).
- Remote handles have an associated interface, for instance,
Bonobo_Unknowndenotes a CORBA pointer to a remote object implementing the Unknown interface.
- Each time a CORBA method is invoked it is possible that an exception will be flagged in 'ev'. This can be easily acertained by using the if (BONOBO_EX (&ev)) idiom, the environment needs to be initialized before the CORBA call by the
CORBA_exception_init.
To ease the hassles of using the CORBA methods in a fine-grained fashion and being concerned about exception environments, there are client wrappers that make use of Bonobo -- and its integration with existing Gtk+ programs -- far easier.
Bonobo provides a Control interface that allows the construction of rich embedded GUI widgets. It also provides a PropertyBag interface to allow any properties to be set on the widget, and often an EventSource interface to allow connection to any signals the control may emit. Thus Bonobo provides capabilities similar to those of an ActiveX control or a JavaBean.
Thus to create a sample calculator widget and insert it into (perhaps an educational application to teach math) you might do:
A calculator widget
This widget would then behave just as any other widget would, and would allow insertion into any Gtk container in the normal way.
To set the displayed value of the calculator widget, we might want to set the "value" property. There are several ways to do this, first the simple, but non-type-safe fashion:
Setting the "value" property
In this example, we grab the value, add 0.37 to it and set it back again. The more type-safe -- but longer -- method is perhaps more
instructive, after obtaining the
BonoboControlFrame from the widget with
bonobo_widget_get_control_frame we would do:
Setting the "value" property correctly
The property and control interfaces are slightly unusual in that one interfaces with the Gtk+ widget system in an intimate way, and the
other manipulates
CORBA_anys which are slightly ugly in C. Thus, having the
bonobo_widget_ and
bonobo_property_bag_client wrappers makes life far easier for
the C programmer.
Most new interfaces, however, will be used directly. For example, to write data to a stream, one would need to use the method (IDL):
A CORBA interface to write data to a stream
Thus the CORBA interface can be used directly, although there is also a
bonobo_stream_client_write helper method too. Notice here also the
bonobo_exception_get_text method is used to return a translated, user-readable description of the exception that occurred (although this is leaked above for clarity).
The last important thing to take care of in client code is reference counting. Bonobo's lifecycle management is that of reference counting. This means that if you want to ensure that an object is not going to die a sudden, shocking death, you keep a reference to it. The object keeps track of how many references have been taken to it and, when this count reaches zero, it destroys the object.
Thus to ensure that objects are correctly terminated when they are no longer needed, and to ensure that they are not terminated prematurely, it is important to reference count correctly.
Client code should always use the
bonobo_object_dup_ref and
bonobo_object_release_unref methods. These tolerate NIL references silently, and allow exceptions to be ignored.
By convention, when Bonobo methods return a reference, you need to release the reference when you are finished with it (as above).
Finally, the
bonobo_widget sugar wrapper also wraps the moniker infrastructure, this means that a much richer object namespace is available to the programmer. The following example:
Sugar wrapping an image reference
will produce a control (perhaps implemented by Eye Of Gnome) that will render the image.
Below are some key points to take from this second installment of a three-part introduction to Bonobo:
- It is easy to use Bonobo components.
- Use method capitalization to distinguish between direct CORBA invocations, and local helper wrappers.
- Treat references carefully.
- See the samples in bonobo/samples/controls/ for some working example code.
- Monikers provide a powerful, abstract object namespace, see bonobo/doc/Monikers for more information.
In the next article, we'll discuss the process of creating your own component and exposing it to the world.
- Bonobo & ORBit (Part 1 of this series)
- Implementing a new component (Part 3 of this series)
- You can download Bonobo as it is published under the GNU GPL (for links to GNU see the Resources in Part 1 of this series).
- ORBit:.
- The Perl and Python language bindings for Bonobo are quite good. For more on ORBit-perl see:.
- For ORBit-Python see: i.
- See the samples in bonobo/samples/controls/ for some working example code. You can find this code in the Bonobo CVS repository.
- Monikers provide a powerful, abstract object namespace, see bonobo/doc/Monikers in the Bonobo CVS for more information.
- Full Bonobo API Reference Manual.
- The full GNOME 1.0 API documentation is here:.
- Bonobo is named for the Bonobo monkey, the last Great Ape -- and an endangered species.
- Michael Meeks, the author, is a software engineer at Ximian, Inc.
-. | http://www.ibm.com/developerworks/library/co-bnbo2.html | crawl-002 | refinedweb | 1,388 | 51.48 |
Sometimes we encounter questions like Given a integer N and you are allowed to swap adjacent digits. Find minimum swaps such that number is divisible by 25. If it is impossible to obtain a number that is divisible by 25, then print “Not Possible”.
1 <= N <= 10^18
We can easily tell for number being divisible by 25 its last two digits must be “00” , “25” , “50” or “75”.
A Brute force solution would be try every available digits at last two places and we will do this greedily and this will give us minimum swaps.
But Question is Performing swaps and then checking number is divisible by 25 or not , will be hard if we take N as an integer input , even if take N as string input then division will bother us.
So here are some inbuilt functions to solve this problem which can convert integer to string and string to integer.
converting Integer into String :
string s = to_string( N ) ;
More generally to_string() is an overloaded function so it will convert any numeric value into string.
converting String into Integer :
long long N = atoll( s.c_str() ) ;
atoll returns long long int value , but if you want to return long int value then you can use atol( s.c_str() ) and for double value go for atof( s.c_str() ) .
Now it is easier for us to deal with this problem. As now we dealing with digits polynomial time want bother us any more
Now we can use counter i ( it will place i’th digit at last place ) and counter j ( it will place j’th digit at last second place ) to iterate over digits starting from first index of number.Then if current number is divisible by 25 we will update answer with current swap count if it is smaller than least count encountered yet.
One question may arise what if we encounter any leading zeros?? In that case swap it with nearest non zero digit in number.
Here is the code for given task ( When you see algorithm don’t just see it take a pen and paper , now think of a test case and apply steps of algorithms on that test case )
CODE
#include<bits/stdc++.h> using namespace std ; int main() { long long n ; cin >> n ; string s = to_string( n ) ; /// CONVERTS NUMBER TO STRING int l = s.size() ; /// STORING SIZE OF STRING long long ans = 1e16 ; /// INTIALLY WE NEED INFINITE SWAPS for( int i = 0 ; i < l ; i++ ) for( int j = 0 ; j < l ; j++ ) { if( i == j ) continue ; long long swaps = 0 ; /// COUNT NUMBER OF SWAPS string temp = s ; /// WE NEED ORIGINAL STRING AGAIN SO STORE IT IN TEMP /// AND APPLY YOUR OPPERATIONS ON IT for( int k = i ; k < l - 1 ; k++ ) /// TAKING i th DIGIT TO LAST POSITION { swap( temp[ k ] , temp[ k + 1 ] ) ; swaps++ ; } for( int k = j - ( j > i ) ; k < l - 2 ; k++ ) /// TAKING j th DIGIT TO LAST SECOND POS { swap( temp[ k ] , temp[ k + 1 ] ) ; swaps++ ; } int pos = -1 ; /// INTIALLY WE CONSIDER THERE ARE NO LEADING ZEROS for( int k = 0 ; i < l ; k++ ) /// IS THERE ANY LEADING ZEROS?? if( temp[ k ] != '0' ) { pos = k ; break ; } while( pos > 0 ) { swap( temp[ pos ] , temp[ pos - 1 ] ) ; swaps++ ; pos-- ; } /// NOW ITS TIME TO CHECK NUMBER WE RECVIED IS /// DIVISIBLE BY 25 long long temp_number = atoll( temp.c_str() ) ; /// CONVERTING STRING INTO INTEGER if( temp_number % 25 == 0 ) ans = min( ans , swaps ) ; } if( ans != 1e16 ) /// If it is possible to obtain a number that is divisible by 25 cout << ans << endl ; else cout << "Not Possible" << endl ; } | https://discuss.codechef.com/t/using-string-as-integer-and-integer-as-string/76063 | CC-MAIN-2020-40 | refinedweb | 592 | 71.28 |
Someone'm getting a warning from ReSharper about a call to a virtual member from my objects constructor. Why would this be something not to do?
(Assuming you're writing in C# here)
When an object written in C# is constructed, what happens is that the initializers run in order from the most derived class to the base class, and then constructors run in order from the base class to the most derived class (see Eric Lippert's blog for details as to why this is).
Also in .NET objects do not change type as they are constructed, but start out as the most derived type, with the method table being for the most derived type. This means that virtual method calls always run on the most derived type.
When you combine these two facts you are left with the problem that if you make a virtual method call in a constructor, and it is not the most derived type in its inheritance hierarchy, that it will be called on a class whose constructor has not been run, and therefore may not be in a suitable state to have that method called.
This problem is, of course, mitigated if you mark your class as sealed to ensure that it is the most derived type in the inheritance hierarchy - in which case it is perfectly safe to call the virtual method.
If 'Test' is an ordinary class, is there any difference between:
Test* test = new Test; //and Test* test = new Test();
Let's get pedantic, because there are differences that can actually affect your code's behavior. Much of the following is taken from comments made to an "Old New Thing" article.
Sometimes the memory returned by the new operator will be initialized, and sometimes it won't depending on whether the type you're newing up is a POD (plain old data), or if it's a class that contains POD members and is using a compiler-generated default constructor.
Assume:
struct A { int m; }; // POD struct B { ~B(); int m; }; // non-POD, compiler generated default ctor struct C { C() : m() {}; ~C(); int m; }; // non-POD, default-initialising m
In a C++98 compiler, the following should occur:
new A() - zero-initialize
new B - default construct (B::m is uninitialized)
new B() - default construct (B::m is uninitialized)
new C - default construct (C::m is zero-initialized)
In a C++03 conformant compiler, things should work like so:
new A() - value-initialize A, which is zero-initialization since it's a POD.
new B - default-initializes (leaves B::m uninitialized)
new B() - value-initializes B which zero-initializes all fields since its default ctor is compiler generated as opposed to user-defined.
new C - default-initializes C, which calls the default ctor.
So in all versions of C++ there's a difference between "
new A" and "
new A()" because A is a POD.
And there's a difference in behavior between C++98 and C++03 for the case "
new B()".
This is one of the dusty corners of C++ that can drive you crazy. When constructing an object, sometimes you want/need the parens, sometimes you absolutely cannot have them, and sometimes it doesn't matter.
If I inherit from a base class and want to pass something from the constructor of the inherited class to the constructor of the base class, how do I do that?
For example,
If I inherit from the Exception class I want to do something like this:
class MyExceptionClass : Exception { public MyExceptionClass(string message, string extraInfo) { //This is where it's all falling apart base(message); } }
Basically what I want is to be able to pass the string message to the base Exception class
Modify your constructor to the following so that it calls the base class constructor properly:
public class MyExceptionClass : Exception { public MyExceptionClass(string message, string extrainfo) : base(message) { //other stuff here } }
Note that a constructor is not something that you can call anytime within a method. That's the reason you're getting errors in your call in the constructor body.
Is it possible to call a constructor from another (within the same class, not from a subclass)? If yes how? And what could be the best way to call another constructor (if there are several ways to do it)?
Yes, it is possible:
public class Foo { private int x; public Foo() { this(1); } public Foo(int x) { this.x = x; } }
To chain to a particular superclass constructor instead of one in the same class, use
super instead of
this. Note that you can only chain to one constructor, and it has to be the first statement in your constructor body.
EDIT: See also this related question, which is about C# but where the same principles apply.
As an c# developer I'm used to run through constructors:
class Test { public Test() { DoSomething(); } public Test(int count) : this() { DoSomethingWithCount(count); } public Test(int count, string name) : this(count) { DoSomethingWithName(name); } }
Is there a way to do this in c++ ?
I tried calling the Class name and using the 'this' keyword, but both fails.
Unfortunately there's no way to do this in C++ (it's possible in C++11 though - see update at the bottom).
two ways of simulating this:
1) You can combine two (or more) constructors via default parameters:
class Foo { public: Foo(char x, int y=0); // combines two constructors (char) and (char, int) ... };
2) Use an init method to share common code
class Foo { public: Foo(char x); Foo(char x, int y); ... private: void init(char x, int y); }; Foo::Foo(char x) { init(x, int(x) + 7); ... } Foo::Foo(char x, int y) { init(x, y); ... } void Foo::init(char x, int y) { ... }
see this link for reference.
Update: Google rates this question high, so I think it's necessary to update it with current information. C++11 has been finalized, and it has this same feature (called delegating constructors).
The syntax is slightly different from C#:
class Foo { public: Foo(char x, int y) {} Foo(int y) : Foo('a', y) {} };
It's weird that this is the first time I've bumped into this problem, but:
How do you define a constructor in a C# interface?
Edit
Some people wanted an example (it's a free time project, so yes, it's a game)
IDrawable
+Update
+Draw
To be able to Update (check for edge of screen etc) and draw itself it will always need a
GraphicsDeviceManager. So I want to make sure the object has a reference to it. This would belong in the constructor.
Now that I wrote this down I think what I'm implementing here is
IObservable and the
GraphicsDeviceManager should take the
IDrawable...
It seems either I don't get the XNA framework, or the framework is not thought out very well.
Edit
There seems to be some confusion about my definition of constructor in the context of an interface. An interface can indeed not be instantiated so doesn't need a constructor. What I wanted to define was a signature to a constructor. Exactly like an interface can define a signature of a certain method, the interface could define the signature of a constructor.
You can't. It's occasionally a pain, but you wouldn't be able to call it using normal techniques anyway.
In a blog post I've suggested static interfaces which would only be usable in generic type constraints - but could be really handy, IMO.
One point about if you could define a constructor within an interface, you'd have trouble deriving classes:
public class Foo : IParameterlessConstructor { public Foo() // As per the interface { } } public class Bar : Foo { // Yikes! We now don't have a parameterless constructor... public Bar(int x) { } } | http://boso.herokuapp.com/constructor | CC-MAIN-2017-26 | refinedweb | 1,297 | 56.79 |
> I've suggested this to Guido in the past. His > reasonable response is that this would be too big a > change for Python 1. Maybe this is something to consider > for Python 2? Note: from now on the new name for Python 2 is Python 3000. :-) > The basic idea (borrowed from Smalltalk) is to have a kind > of dictionary that is a collection of "association" > objects. An association object is simply a pairing of a > name with a value. Association objects can be shared among > multiple namespaces. I've never liked this very much, mostly because it breaks simplicity: the idea that a namespace is a mapping from names to values (e.g. {"limit": 100, "doit": <function...>, ...}) is beautifully simple, while the idea of inserting an extra level of indirection, no matter how powerful, is much murkier. There's also the huge change in semantics, as you point out; currently, from foo import bar has the same effect (on bar anyway) as import foo bar = foo.bar # i.e. copying an object reference del foo while under your proposal it would be more akin to changing all references to bar to become references to foo.bar. Of course that's what the moral equivalent of "from ... import ..." does in most other languages anyway, so we might consider this for Python 3000; however it would break a considerable amount of old code, I think. (Not to mention brain and book breakage. :-) --Guido van Rossum (home page:) | https://mail.python.org/pipermail/python-dev/2000-January/001812.html | CC-MAIN-2020-40 | refinedweb | 245 | 73.37 |
IronPython3 is NOT ready for use yet. There is still much that needs to be done to support Python 3.x. We are working on it, albeit slowly. We welcome all those who would like to help!
IronPython is an open-source implementation of the Python programming language which is tightly integrated with the .NET Framework. IronPython can use the .NET Framework and Python libraries, and other .NET languages can use Python code just as easily.
Comparison of IronPython vs. C# for 'Hello World'
C#:
using System; class Hello { static void Main() { Console.WriteLine("Hello World"); } }
IronPython:
print("Hello World")
IronPython 3 targets Python 3, including the re-organized standard library, Unicode strings, and all of the other new features.
This project has adopted the code of conduct defined by the Contributor Covenant to clarify expected behavior in our community. For more information see the .NET Foundation Code of Conduct.
Builds of IronPython 3 are not yet provided.
See the building document
Since the main development is on Windows, Mono bugs may inadvertantly be introduced
IronPython 3 targets .NET 4.5 and .NET Core 2.0/2.1. | https://www.programcreek.com/python/?project_name=IronLanguages%2Fironpython3 | CC-MAIN-2020-40 | refinedweb | 188 | 69.28 |
I initially started this work simply to exercise my C++ skills, as I had stopped using the language for some time. Then I compared the performance of my first version with the STL's unordered_map class and I was impressed that this solution was much faster and also consumed less memory (in fact, the .NET Dictionary is already much faster, even though it is managed code), so I decided to finish the job and have a fully functional Dictionary in C++.
I know, I am not using the C++ standard naming convention; I am using the .NET naming convention here. Also, at work I use Visual Studio 2008, so I am not using C++11 features and, even though I could use C++11 at home since I have Visual Studio 2013, I don't plan to update this code to be C++11 compliant any time soon, as I know lots of places that still use older C++ versions. If you like the code and want to make it C++11 compliant, feel free to do it.
I constantly see people affirming that C++ (native code) is much faster than C#/.NET/managed code, yet many times when I see the .NET ports of C++ applications they execute faster than their C++ counterparts. I attribute this to three main factors:
In many performance comparisons people say that C++ is not slower than C#, that it only appears to be slower because it is actually freeing the used memory while .NET is accumulating memory in loops and will only free it some time later. Well, if that "some time later" is never seen by the users, then C# is still giving the results faster, even if it takes more time to free the memory overall. Yet the real C++ problem is usually not the time spent deallocating memory, it is the time spent allocating it. The constructors and destructors themselves, in many cases, are so fast they aren't noticed. This problem is what made me write the O(1) Object Pool in C++ article, which tries to give the performance advantage back to C++.
Then, there's the second problem: the classes used as "associative arrays". They are named maps in C++ and dictionaries in .NET. .NET was built on the idea of hash-codes (Java too, but I am not discussing it here). All the primitive types, strings and the most important structs expected to be used as keys have good hash-code generators, which are expected to create well-distributed results and to return really fast. C++, unfortunately, doesn't come with such support, and for a long time the only solution was the non-hashed maps, which are pretty slow. Now the STL has the unordered_map class, yet some of the default hash-generators are terrible and the memory consumption is big (at least in the Visual Studio implementations that I checked).
So, to try to reduce such a problem, I am giving a C++ implementation of a .NET like dictionary. Instead of giving many default hash-generators that can be slow, I don't give a default generator for those types that I don't know how to hash. In my opinion, it is better to be forced to implement a good hash-generator than to lose performance by using a bad one.
The entire idea of dictionaries and unordered maps is to associate values to keys, using a hash-code as the "indexer". The generated hash-code must always be the same for the same key. It is expected that hash-codes are well distributed and so, if possible, two different values should not generate the same hash-code. Yet, as the hash-code is a single numeric value, the dictionaries/maps must be prepared to deal with 2 or more different keys that generate the same hash-code.
When 32-bit values are used, the hash-codes can have more than 4 billion different values (in 64 bits, multiply 4 billion by 4 billion), so the hash-code can't be used directly as a position in an array to find an item. Some math is done over the hash-code to choose a "bucket" where the key/value pair will be stored. Those buckets, then, can store many pairs.
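As a sketch of that math (my own illustration, not the article's exact code), the usual choice is a modulo over the current bucket count:

```cpp
#include <cstddef>

// Reduce a hash code to a bucket index. Two equal keys always produce the
// same hash code, so they always land in the same bucket; two different
// keys may still collide here, which is why buckets must hold chains of pairs.
std::size_t bucket_index(std::size_t hash_code, std::size_t bucket_count) {
    return hash_code % bucket_count;
}
```

Note that the result changes whenever the bucket count changes, which is why every resize forces a redistribution of the stored pairs.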
I don't know if this is how the STL's unordered_map is implemented or if it is a Visual C++ specific implementation, but each bucket is a vector that can be resized independently and then there are some other rules to actually increase the number of buckets. In .NET we have an array of buckets and, maybe because .NET doesn't allow references to structs (which could be solved differently), these buckets are indexes that point to an array of structs (the real data).
My implementation is similar to the .NET one, but instead of having 2 arrays, the array of buckets is already an array of structs. Then, if there are more items placed in the same bucket, I allocate the new struct instances from an O(1) Object Pool and point to it as C++ allows me to use pointers to structs. The rule for resizing the array is the same as the .NET one. When the number of items becomes bigger than the number of buckets, there's a resize. This means that with really well distributed hash-codes (like sequential ones) we will never have two items in the same bucket.
As a C++ specific detail, I don't allocate an array of structs directly. This would invoke the default constructor if the TKey or TValue is a class (not a pointer) and has a default constructor. Instead, I simply allocate the block of memory and when an item is added I use the placement new to initialize the struct there.
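A minimal sketch of that technique (the names here are mine, not the article's): allocate raw storage, placement-new each slot only when an item is actually added, and pair every placement-new with an explicit destructor call.

```cpp
#include <new>
#include <cstddef>
#include <string>

struct Slot {
    int key;
    std::string value;
    Slot(int k, const std::string& v) : key(k), value(v) {} // no default ctor
};

// Raw, uninitialized storage: no Slot constructors run here.
Slot* allocate_raw(std::size_t n) {
    return static_cast<Slot*>(::operator new(n * sizeof(Slot)));
}

// Build one slot in place the moment an item is added.
Slot* construct_at(Slot* where, int k, const std::string& v) {
    return new (where) Slot(k, v); // placement new
}

// Destroy only the slots that were constructed, then release the memory.
void destroy_and_free(Slot* storage, std::size_t constructed) {
    for (std::size_t i = 0; i < constructed; ++i)
        storage[i].~Slot();
    ::operator delete(storage);
}
```

The point is that an empty bucket costs only its raw bytes; no constructor or destructor ever runs for slots that were never filled.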
If you ever used the .NET dictionaries you will probably consider this class very easy to use. Considering you don't want to change the hash-generator, you could declare a dictionary like this:
Dictionary<int, int> dictionary;
Where the parameters inside the < and > are the type of key and the type of values, respectively.
Then, there are the following methods:
- ContainsKey (returns true/false telling whether an item with the given key exists)
- GetValue
- GetValueOrDefault (returns NULL when the key is not found)
GetOrCreateValue: This method searches for a value for the given key. If one is found it is returned directly. If not, it invokes a valueCreator to create such a value, then it adds it to the dictionary before returning it. This method is overloaded and one of the overloads works pretty well with lambdas while the other receives a function pointer and an additional context pointer, which is passed to the function pointer when invoking it.
As I consider this method to be the hardest to use, I will give one example per overload.
With lambda:
string foundOrCreatedString = dictionary.GetOrCreateValue(57, [](int key){return to_string(key);});
In this case the GetOrCreateValue will look for a string bound to the key 57. If one is found, it is returned (and if it was added by hand, it may not be "57"). If it is not found, then one will be generated (using the to_string(key)), added to the dictionary and finally returned.
With function pointer:
// A function must exist so we can get a pointer to it
string StringFromIntCreator(const int &value, void *ignoredContext)
{
return to_string(value);
}
// Then, we need to call the GetOrCreateValue method giving the function pointer and
// giving a context. In this case I don't need a context, so I am giving NULL. Yet,
// if you want to invoke an instance method, you will need to give that instance as
// the context.
string foundOrCreatedString = dictionary.GetOrCreateValue(57, &StringFromIntCreator, (void *)NULL);
The dictionary allows fast insertions because of its indexing and because in many situations it can simply put the new items directly inside an array. Yet, it starts with a small array and many "resizes" may be required, which consume time. So, if you know how many items you will put in the dictionary, you can set its capacity before doing any work (which can be set even on the constructor).
Also, if you just finished adding items, or if you cleared the dictionary and want it to free its inner array, you will probably want to set its capacity to a smaller one.
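The article's Dictionary exposes this through its capacity methods; the standard unordered container offers the same knob. As a sketch of the idea using std::unordered_map (the standard equivalent, not the article's class): reserve does the bucket allocation up front, so a subsequent insert loop never rehashes.

```cpp
#include <unordered_map>
#include <cstddef>

// Pre-size the table for an expected number of items. With the default
// max_load_factor of 1.0, reserve(n) guarantees at least n buckets, so
// inserting up to n items triggers no rehash.
std::size_t buckets_after_reserve(std::size_t expected) {
    std::unordered_map<int, int> m;
    m.reserve(expected);
    return m.bucket_count();
}
```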
So, here are the methods:
The following three methods can be used to enumerate all the items added to the dictionary, as Pair<TKey, TValue> instances. It is your responsibility to delete the returned enumerator.
The results of the enumerations are "bucket ordered" and those items that fit the same bucket may be reordered on resizes so, except if you are debugging the buckets or something like that, consider that the enumerations will be unordered.
Even if this dictionary was inspired by the .NET one, it has many differences: for example, around SetCapacity and TrimExcess, EnumerateItems versus IEnumerable, the absence of out parameters, and the use of size_t.
Well, too many. Even if both have the same purpose, their utilization is pretty different, and I really consider this one easier to use.
What I know is that for very well distributed hash-codes this implementation is really faster on additions, and in all my tests it was faster at removing items and at destroying the dictionary itself. It also consumes less memory to store items, yet it needs more memory at the exact moment it is doing a resize, as a single big array must be resized while the STL's unordered_map may resize each bucket individually. It is important to note that my comparisons were made against the STL that comes with Visual C++. I don't know how well Visual C++ optimizes the STL classes and I am not sure if the Visual C++'s STL library is the real one or a Microsoft version of the STL.
To create a comparer/hasher for your own types you must create a class that has two public static methods:
If you create such a class as a specialization of the DefaultEqualityComparer template class, then it will be used as the default comparer for dictionaries that don't choose a different key comparer.
To create the default hash generators (that only exist for the small integer types and for the void*) I used this #define (which you can use too):
#define IMPLEMENT_DEFAULT_EQUALITY_COMPARER_AS_DIRECT_COMPARISON_AND_CAST(T) \
template<> \
class DefaultEqualityComparer<T> \
{ \
public: \
static size_t GetHashCode(T value) \
{ \
return (size_t)value; \
} \
static bool Equals(T value1, T value2) \
{ \
return value1 == value2; \
} \
}
This will simply return the input value itself cast as size_t as the hash-code and will use the == operator to do the comparison.
If we have a different type, like a string, we would need to use a more complex logic. The following is an example of a hash-code generator for the std::string:
template<>
class DefaultEqualityComparer<std::string>
{
public:
static bool Equals(const std::string &value1, const std::string &value2)
{
return value1 == value2;
}
static size_t GetHashCode(const std::string &value)
{
size_t length = value.length();
if (length < sizeof(size_t))
{
if (length == 0)
return 0;
const char *cString = value.c_str();
size_t result = 0;
for(size_t i=0; i<length; i++)
{
result <<= 8;
result |= *cString;
cString++;
}
return result;
}
const char *cString = value.c_str();
size_t *asSizeTPointer = (size_t *)cString;
size_t result = *asSizeTPointer;
size_t lastCharactersToUseCount = length - sizeof(size_t);
if (lastCharactersToUseCount > sizeof(size_t))
lastCharactersToUseCount = sizeof(size_t);
if (lastCharactersToUseCount > 0)
{
size_t otherResult;
if (lastCharactersToUseCount == sizeof(size_t))
otherResult = *((size_t *)(cString + length - sizeof(size_t)));
else
{
otherResult = 0;
cString += length;
for(size_t i=0; i<lastCharactersToUseCount; i++)
{
cString --;
otherResult <<= 8;
otherResult |= *cString;
}
}
result ^= otherResult;
result ^= length;
}
return result;
}
};
It is important to note that it only considers the first and last bytes of the string to calculate the hash-code. This makes it an almost constant time hash-code generator, but surely doesn't generate the best possible hash-codes (so, the hash-code may be generated very fast, yet the dictionary may become pretty slow because of bad hashing). If your strings have different contents but start and finish with the same characters, it will do a terrible job.
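If your keys do look like that (shared prefixes and suffixes), a common alternative is a byte-at-a-time hash such as 64-bit FNV-1a. This is my own suggestion, not the article's code: it is O(length) instead of nearly constant, but it mixes every byte into the result.

```cpp
#include <cstdint>
#include <string>

// 64-bit FNV-1a: XOR each byte into the state, then multiply by the FNV prime.
std::uint64_t fnv1a(const std::string& s) {
    std::uint64_t h = 14695981039346656037ULL; // FNV offset basis
    for (unsigned char c : s) {
        h ^= c;
        h *= 1099511628211ULL; // FNV prime
    }
    return h;
}
```

Strings that differ only in the middle, the worst case for the first/last-bytes generator above, now produce different hash-codes.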
Actually, strings are another point that may give an advantage to .NET, Java and some other languages considered "slow": their string objects are immutable (or have really good copying logic), avoiding a memory copy each time one string variable is assigned to another; all the strings written directly in the code are saved as string objects, avoiding the conversion from char* to string (which may happen per call in C++); and they can store a pre-calculated hash-code.
Considering that C++ strings don't have a pre-calculated hash-code, some algorithms will need to read the entire string contents to generate it, which will become slow if the strings are big. In this situation, dictionaries/unordered_maps that have big string keys may become much slower than normal maps, as normal maps will stop reading the contents of the compared strings at the first difference while the hash generator will be forced to read the entire string before giving a result.
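To make that concrete, here is a small illustration of my own (not from the sample): an ordered comparison can stop at the first differing byte, while a whole-content hash must touch every byte before it can say anything.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>

// How many byte positions a lexicographic comparison examines: it returns
// as soon as it finds a difference.
std::size_t bytes_touched_by_compare(const std::string& a, const std::string& b) {
    std::size_t n = std::min(a.size(), b.size());
    for (std::size_t i = 0; i < n; ++i)
        if (a[i] != b[i]) return i + 1; // decided at the first mismatch
    return n; // equal prefixes: decided by length
}

// A full-content hash has no such early exit: it reads the whole string.
std::size_t bytes_touched_by_hash(const std::string& s) {
    return s.size();
}
```

For two 1000-byte keys that differ in the first character, the map's comparison inspects one byte while a full-content hash generator inspects all 1000.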
The sample application is a performance test between the Dictionary and a map (not an unordered_map). I made it use a map because I wrote the solution in Visual Studio 2008, which doesn't come with an unordered_map, yet you can easily replace all references to map< with unordered_map< and check the results against it.
The sample will add some millions of items, will search all of them by key, will remove them and will do some other actions, measuring how much time it takes. It doesn't measure the memory consumption, yet I can say that on my computer the Dictionary consumed 575 MB for 20 million int/int pairs while the map consumed 670 MB for the same amount. Note: the sample doesn't have any exception handling and, as it creates lots of items, it may crash. If that's the case, try reducing the number of items from 20 million to 10 million (or another smaller value) or, if possible, try compiling it in 64-bit instead of 32.
Also, even when compiled in Release mode, the code runs slower inside Visual Studio. The Dictionary code is only a little slower, but the map executed about 10 times slower on inserts and took 21 minutes for the removes (it takes only 5 seconds running outside Visual Studio). So, to get the correct results, compile in Release and run it outside Visual Studio. If you don't do that, you will only skew the results further in the Dictionary's favor.
This is the result running it outside Visual Studio:
We will first compare the speed difference with sequential values:
Testing Dictionary...
Adding 20000000 items: 1045 milliseconds.
Searching all the items by key: 172 milliseconds.
Removing all items by key: 156 milliseconds.
Re-adding all items: 374 milliseconds.
Time to destroy the entire collection: 78 milliseconds.
Full test finished in 1825 milliseconds.
Testing Map...
Adding 20000000 items: 6302 milliseconds.
Searching all the items by key: 2372 milliseconds.
Removing all items by key: 4836 milliseconds.
Re-adding all items: 6178 milliseconds.
Time to destroy the entire collection: 1170 milliseconds.
Full test finished in 20858 milliseconds.
Now we will compare the speed of random values:
Dictionary
Adding: 7878 milliseconds.
Searching for random values: Found: 77317, Not Found: 19922683, Time: 4446 milliseconds.
Deleting dictionary: 312 milliseconds.
Map
Adding: 30623 milliseconds.
Searching for random values: Found: 77317, Not Found: 19922683, Time: 32574 milliseconds.
Deleting map: 5335 milliseconds.
The user Arild Fiskum did the tests using Visual Studio 2012, comparing this dictionary with the unordered_map and using strings as keys. I was impressed that the unordered_map actually adds items faster, yet my implementation was faster in the overall tests, and in many cases we only add items once and then read them many, many times, so I think my implementation is good enough. Here are his results:
Larger test set compiled with x64:
We will first compare the speed difference with sequential values:
Testing Dictionary...
Adding 20000000 items: 21090 milliseconds.
Searching all the items by key: 5846 milliseconds.
Removing all items by key: 6821 milliseconds.
Re-adding all items: 5742 milliseconds.
Time to destroy the entire collection: 2799 milliseconds.
Full test finished in 42305 milliseconds.
Testing Map...
Adding 20000000 items: 16752 milliseconds.
Searching all the items by key: 6070 milliseconds.
Removing all items by key: 7948 milliseconds.
Re-adding all items: 16845 milliseconds.
Time to destroy the entire collection: 4266 milliseconds.
Full test finished in 51886 milliseconds.
If you find a bug, let me know. I didn't have time to do extensive testing on the code. I tested all the methods individually but I didn't use many different combinations, so it is still possible that I missed something. I really hope there are no bugs but, if you find one, tell me and I will do my best to solve it.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
for (int i=0; i < NUMBER_OF_MAP_ITERATIONS; i++)
for (int j=NUMBER_OF_MAP_ITERATIONS; j > 0; j--)
map.Add(i * NUMBER_OF_MAP_ITERATIONS + j, i);
if (map.ContainsKey(i * NUMBER_OF_MAP_ITERATIONS + j) == false)
...
dictionary<int>::insert: 608 milliseconds
dictionary<int>::find: 31 milliseconds
dictionary<int>::clear: 63 milliseconds
dictionary<string>::insert: 4462 milliseconds
dictionary<string>::find: 3369 milliseconds
dictionary<string>::clear: 93 milliseconds
dictionary<int>::insert: 670 milliseconds
dictionary<int>::find: 110 milliseconds
dictionary<int>::clear: 47 milliseconds
dictionary<string>::insert: 4524 milliseconds
dictionary<string>::find: 3151 milliseconds
dictionary<string>::clear: 156 milliseconds
dictionary<int>::insert: 00:00:00.5928038
dictionary<int>::find: 00:00:00.2028013
dictionary<string>::insert: 00:00:05.1948333
dictionary<string>::find: 00:00:02.5428163
hash_map<int>::insert: 624 milliseconds
hash_map<int>::find: 46 milliseconds
hash_map<int>::clear: 31 milliseconds
hash_map<string>::insert: 6287 milliseconds
hash_map<string>::find: 4337 milliseconds
hash_map<string>::clear: 94 milliseconds
dictionary<int>::insert: 639 milliseconds
dictionary<int>::find: 47 milliseconds
dictionary<int>::clear: 62 milliseconds
dictionary<string>::insert: 6412 milliseconds
dictionary<string>::find: 4664 milliseconds
dictionary<string>::clear: 312 milliseconds
C++ programmers interested in high performance implementations of hash
maps should check out:
<>
<>
Both of the above outperform this Dictionary implementation (in both
speed and space, on my platform, GCC+Linux) while retaining a more
standard map interface.
As for the hash function itself, see:
<>
Since hashing strings is a more common problem.
Then it seems that .NET could do much better in terms of both speed
and space by looking at:
mct/closed-hash-map.hpp from
<>
- and -
sparsehash/dense_hash_map from
<>
I enjoyed your article and learned a lot by benchmarking a bunch of
map implementations. I'm sure I will use some of the mct maps in my
future work.
./DictionaryArticleSample -algorithms=dict
sequential test:
DictionaryRunTest
Adding 20000000
items: 2.778545s wall, 2.000000s user + 0.740000s system = 2.740000s CPU (98.6%)
Searching all the items by key: 0.344898s wall, 0.340000s user + 0.000000s system = 0.340000s CPU (98.6%)
Removing all items by key: 0.381325s wall, 0.380000s user + 0.000000s system = 0.380000s CPU (99.7%)
Re-adding all items: 0.424574s wall, 0.420000s user + 0.000000s system = 0.420000s CPU (98.9%)
DictionaryRunTest total: 4.144271s wall, 3.230000s user + 0.850000s system = 4.080000s CPU (98.4%)
random test:
DictionaryRunRandomTest
Adding: 9.968996s wall, 9.070000s user + 0.820000s system = 9.890000s CPU (99.2%)
DictionarySearchRandomValues:
Found: 186052, Not Found: 19813948
Searching for random values: 4.053072s wall, 4.030000s user + 0.000000s system = 4.030000s CPU (99.4%)
DictionaryRunRandomTest total: 14.568378s wall, 13.490000s user + 0.960000s system = 14.450000s CPU (99.2%)
maxrss: 1548156 KiB
total time: 18.712934s wall, 16.720000s user + 1.810000s system = 18.530000s CPU (99.0%)
./DictionaryArticleSample -algorithms=map
sequential test:
MapRunTest
Adding 20000000
items: 1.743022s wall, 1.490000s user + 0.230000s system = 1.720000s CPU (98.7%)
Searching all the items by key: 1.191629s wall, 1.190000s user + 0.000000s system = 1.190000s CPU (99.9%)
Removing all items by key: 1.218306s wall, 1.210000s user + 0.000000s system = 1.210000s CPU (99.3%)
Re-adding all items: 2.240278s wall, 2.230000s user + 0.000000s system = 2.230000s CPU (99.5%)
MapRunTest total: 6.427100s wall, 6.120000s user + 0.260000s system = 6.380000s CPU (99.3%)
random test:
MapRunRandomTest
Adding: 3.860627s wall, 3.600000s user + 0.230000s system = 3.830000s CPU (99.2%)
MapSearchRandomValues:
Found: 186052, Not Found: 19813948
Searching for random values: 3.602224s wall, 3.570000s user + 0.000000s system = 3.570000s CPU (99.1%)
MapRunRandomTest total: 7.496768s wall, 7.170000s user + 0.270000s system = 7.440000s CPU (99.2%)
maxrss: 394712 KiB
total time: 13.924338s wall, 13.290000s user + 0.530000s system = 13.820000s CPU (99.3%)
sparsehash/dense_hash_map
$ ./DictionaryArticleSample -algorithms=dict
sequential test:
DictionaryRunTest
Adding 20000000
items: 1.424000s wall, 0.970000s user + 0.440000s system = 1.410000s CPU (99.0%)
Searching all the items by key: 0.243710s wall, 0.240000s user + 0.000000s system = 0.240000s CPU (98.5%)
Removing all items by key: 0.269824s wall, 0.270000s user + 0.000000s system = 0.270000s CPU (100.1%)
Re-adding all items: 0.264776s wall, 0.270000s user + 0.000000s system = 0.270000s CPU (102.0%)
DictionaryRunTest total: 2.290066s wall, 1.780000s user + 0.490000s system = 2.270000s CPU (99.1%)
random test:
DictionaryRunRandomTest
Adding: 4.994587s wall, 4.510000s user + 0.460000s system = 4.970000s CPU (99.5%)
DictionarySearchRandomValues:
Found: 1435422, Not Found: 18564578
Searching for random values: 1.796608s wall, 1.790000s user + 0.000000s system = 1.790000s CPU (99.6%)
DictionaryRunRandomTest total: 7.110372s wall, 6.560000s user + 0.530000s system = 7.090000s CPU (99.7%)
maxrss: 1530724 KiB
total time: 9.400817s wall, 8.340000s user + 1.020000s system = 9.360000s CPU (99.6%)
$ ./DictionaryArticleSample -algorithms=map
sequential test:
MapRunTest
Adding 20000000
items: 0.788988s wall, 0.500000s user + 0.280000s system = 0.780000s CPU (98.9%)
Searching all the items by key: 0.123334s wall, 0.120000s user + 0.000000s system = 0.120000s CPU (97.3%)
Removing all items by key: 0.130719s wall, 0.130000s user + 0.000000s system = 0.130000s CPU (99.4%)
Re-adding all items: 0.849256s wall, 0.560000s user + 0.290000s system = 0.850000s CPU (100.1%)
MapRunTest total: 1.924399s wall, 1.310000s user + 0.600000s system = 1.910000s CPU (99.3%)
random test:
MapRunRandomTest
Adding: 2.458708s wall, 2.150000s user + 0.300000s system = 2.450000s CPU (99.6%)
MapSearchRandomValues:
Found: 1435422, Not Found: 18564578
Searching for random values: 1.326163s wall, 1.320000s user + 0.000000s system = 1.320000s CPU (99.5%)
MapRunRandomTest total: 3.817513s wall, 3.470000s user + 0.330000s system = 3.800000s CPU (99.5%)
maxrss: 787956 KiB
total time: 5.742289s wall, 4.780000s user + 0.930000s system = 5.710000s CPU (99.4%)
template<typename TKey, typename TValue>
class Pair
{
...
template <typename... Args>
Pair(const TKey &key, Args... args) :
_key(key),
_value(args...)
{
}
};
struct _Node
{
_Node *_nextNode;
size_t _hashCode;
Pair<TKey, TValue> _pair;
_Node(_Node *nextNode, size_t hashCode, const TKey &key, const TValue &value) :
_nextNode(nextNode),
_hashCode(hashCode),
_pair(key, value)
{
}
template <typename... Args>
_Node(_Node *nextNode, size_t hashCode, const TKey &key, Args... args) :
_nextNode(nextNode),
_hashCode(hashCode),
_pair(key, args...)
{
}
};
...
template <typename... Args>
TValue* TryAdd(const TKey &key, Args... args)
{
size_t hashCode = TEqualityComparer::GetHashCode(key);
size_t bucketIndex = hashCode % _capacity;
_Node *firstNode = &_buckets[bucketIndex];
if (_IsEmpty(firstNode))
{
new (firstNode)_Node(NULL, hashCode, key, args...);
_count++;
return &firstNode->_pair.GetValue();
}
_Node *node = firstNode;
do
{
if (hashCode == node->_hashCode)
if (TEqualityComparer::Equals(key, node->_pair.GetKey()))
return NULL;
node = node->_nextNode;
} while (node);
if (_count >= _capacity)
{
_Resize();
bucketIndex = hashCode % _capacity;
firstNode = &_buckets[bucketIndex];
if (_IsEmpty(firstNode))
{
new (firstNode)_Node(NULL, hashCode, key, args...);
_count++;
return &firstNode->_pair.GetValue();
}
}
node = _GetPool()->GetNextWithoutInitializing();
new (node)_Node(firstNode->_nextNode, hashCode, key, args...);
firstNode->_nextNode = node;
_count++;
return &node->_pair.GetValue();
}
dictionary.TryAdd(key, ValueClass(param));
dictionary.TryAdd(key, param);
TValue* TryAdd(const TKey &key, TValue&& value)
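As a side note on this overload set (my own suggestion, not part of the posted code): a value-parameter pack like Args... args copies its arguments. In C++11 and later, taking a forwarding-reference pack and using std::forward avoids those copies while still allowing in-place construction.

```cpp
#include <utility>
#include <string>

// Sketch: perfect forwarding lets one constructor cover copies, moves and
// in-place construction without the extra copies a by-value pack would make.
template <typename TValue>
struct Box {
    TValue value;
    template <typename... Args>
    explicit Box(Args&&... args) : value(std::forward<Args>(args)...) {}
};
```

Box<std::string> five(5, 'x') constructs the string in place, and Box<std::string> moved(std::string("hi")) moves instead of copying.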
template< class T >
class StringEqualityComparer
{
public:
static size_t GetHashCode(T value)
{
int length = value.length();
int hash(0), i(0);
while( i < length )
hash = hash*31 + value[i++];
return hash;
}
static bool Equals(T value1, T value2)
{
return value1 == value2;
}
};
Dictionary<string, int,StringEqualityComparer<const string&>>
return std::hash<string>()(value);
This is a sample application that uses the C++ Dictionary class and compares
its performance to the std::map class. I am not doing a comparison to the
std::unordered_map because I am using Visual Studio 2008 to build this sample.
You are free to change the code to compare the performance to an unordered_map
but remember to compile the project in Release and execute it outside Visual
Studio to get the real results.
This example doesn't measure it, but consider how much memory is consumed.
We will first compare the speed difference with sequential values:
Testing Dictionary...
Adding 20000000 items: 738 milliseconds.
Searching all the items by key: 99 milliseconds.
Removing all items by key: 146 milliseconds.
Re-adding all items: 142 milliseconds.
Time to destroy the entire collection: 62 milliseconds.
Full test finished in 1193 milliseconds.
Testing Map...
Adding 20000000 items: 3586 milliseconds.
Searching all the items by key: 1438 milliseconds.
Removing all items by key: 3043 milliseconds.
Re-adding all items: 3502 milliseconds.
Time to destroy the entire collection: 846 milliseconds.
Full test finished in 12423 milliseconds.
Now we will compare the speed of random values:
Dictionary
Adding:
From: E. Gladyshev (eegg_at_[hidden])
Date: 2004-04-01 22:34:06
From: "Hurd, Matthew" <hurdm_at_[hidden]>
[...]
> It would be nice if something new like "aspect class" or some such could
Interesting name! I second your opinion in many ways.
I don't think that differentiating between traits and policies
makes much sense or is really necessary.
The idea is that a class can define an API
specification. The user is responsible
for implementing the API and supplying the implementation
to the class as a template parameter.
Traditionally an API is not a set of just behaviors
or just data structures. It is a combination of both
and it doesn't have to be stateless.
So what's so unique about APIs from the OOP standpoint,
especially? Well, an API as such cannot be
INSTANTIATED. It just EXISTs as a set of rules, etc.
It is like namespaces in C++.
You cannot instantiate a namespace!
You can instantiate data types defined by the API though.
So to me, both traits and policies are sort of
interchangeable namespaces.
Perhaps my view is too simplistic.
I sometimes do something like this
template< typename MatrixApi >
struct equation_system_solver {...};
struct api
{
private: //** note private here
api();
};
struct my_vector_api : api {...};
template< typename VectorApi >
struct my_matrix_api : api {...};
equation_system_solver< my_matrix_api<my_vector_api> > s;
Eugene
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2004/04/63462.php | CC-MAIN-2021-31 | refinedweb | 251 | 69.99 |
Swift is certainly a great programming language for developing apps made for the Apple ecosystem. But as you know, Swift, as a language, is not Apple specific. You can use Swift in many other areas such as backend (with Vapor). Swift runs on Linux and Windows too.
Today I would like to present a quick "Getting started" post about how to make your own command line tools with Swift.
As a developer, there are many tasks that I'm sure you have already made your own tools to automate.
Cropping images, scraping stuff from the internet, generating all kinds of files or config stuff... You name it.
Today I'll try to demonstrate how to create your own tool using Swift, the Swift package manager, and the argument parser.
Building a command line tool using the Swift Package Manager
I find very useful to have an image comparison tool on my computer. I have a lot of UITests on my apps and while the tests are running, they automatically take screenshots of the screen. There are many use-cases such as upload them to iTunes Connect, but I also use it to compare screens over time and over versions of my apps. If I see any unexpected difference between my "Base" screenshots and the one taken while testing, I know something is wrong with my UI.
So today we'll be building an image comparison command line tool.
Let's start by creating the project :
$ mkdir img_cmp
$ cd img_cmp
$ swift package init --type executable
Nothing crazy here; I create an img_cmp directory for my project and initialize the package.
A folder structure has been created for me:
Creating executable package: img_cmp
Creating Package.swift
Creating README.md
Creating .gitignore
Creating Sources/
Creating Sources/img_cmp/main.swift
Creating Tests/
Creating Tests/LinuxMain.swift
Creating Tests/img_cmpTests/
Creating Tests/img_cmpTests/img_cmpTests.swift
Creating Tests/img_cmpTests/XCTestManifests.swift
Awesome, now you can build and run the project from the command line:

swift build
swift run
If you're on a macOS machine and have Xcode installed, you can double click on Package.swift and it will open the project.
Otherwise you can simply open Sources/img_cmp/main.swift in your favorite text editor.
Now we are going to import the Argument Parser. This is a Swift package made by Apple to get simple user input via arguments while running the program.
In your Package.swift file, you will add the dependancy as follow :
let package = Package( name: "img_cmp", dependencies: [ .package(url: "", from: "0.3.0"), ], targets: [ .target( name: "img_cmp", dependencies: [ .product(name: "ArgumentParser", package: "swift-argument-parser"), ]), .testTarget( name: "img_cmpTests", dependencies: ["img_cmp"]), ] )
Now you have added the Argument Parser framework. If you are on XCode, it will automatically update the dependancy package and resolve the dependancy versions.
In the main.swift, we can now import the framework :
import ArgumentParser
Creating the program
We need to start by creating a parsable command. Nothing simpler: make a struct that conforms to ParsableCommand, a protocol defined in the Argument Parser framework.
Here I'll call it ImgCmp.
After that, we'll call the static main function, which will take care of running the program for me.
struct ImgCmp: ParsableCommand {
}

ImgCmp.main()
Great, now to make our program do something, I will implement the run function in our struct (also defined in the ParsableCommand protocol). This will be the method used to execute the command.
struct ImgCmp: ParsableCommand {
    func run() throws {
    }
}

ImgCmp.main()
Notice this command can throw errors, so the OS can catch the exit status of your program.
Get all the arguments!
Alright now we want the user to input two images to compare one to another. So we'll need (at least) 2 arguments.
The Argument Parser lets us input 3 types of arguments:
Arguments
That's a required input needed for the program to run.
Options
An option is an extra value that the user can input for a specific behaviour.
Flag
A Flag is a simple option that the user can add or not.
Example :
nano -L /path/to/file --tabsize=1
Here we use the nano command, which is used to edit a file.
-L is a flag (Don't add newlines to the ends of files).
/path/to/file is an argument (The file to edit).
--tabsize=1 is an option (Set the size (width) of a tab to cols columns).
To parse arguments, the framework does everything for us.
It's based on property wrappers, which you might have used with SwiftUI, and it's pretty simple:
Every argument is an attribute of the struct (here ImgCmp) and the property wrapper takes parameters to define and specify the argument.
Let's start with our two main arguments:

@Argument(help: "The reference image.")
var base: String

@Argument(help: "The image to compare.")
var image: String
Now if you run the command without two arguments it should say:
$ swift run img_cmp
Error: Missing expected argument '<base>'

USAGE: img-cmp <base> <image>

ARGUMENTS:
  <base>                  The reference image.
  <image>                 The image to compare.

OPTIONS:
  -h, --help              Show help information.
But add some strings and you're good:

$ swift run img_cmp /Users/me/Desktop/a.png /Users/me/Desktop/b.png
Now, for me, this is not enough. I want the user to be able to specify a tolerance for the comparison. Say, if A is 99% identical to B, it's okay for me.

For that, we'll add an option:

@Option(name: [.customLong("tolerance"), .customShort("t")],
        help: "The tolerance to consider an image identical, as a value from 0 to 1.\n 0 is strictly identical.")
var tolerance: Float?
With the Option property wrapper, you can specify a long and a short name (here "tolerance" and "t").
Now we can execute:
$ swift run img_cmp /Users/me/Desktop/a.png /Users/me/Desktop/b.png -t 0.01
Let's also add a verbose flag, in case the user would like to display more information and non-blocking messages. A basic "-v" will work for me.
@Flag(name: [.customLong("verbose"), .customShort("v")],
      help: "Show logs, information and non blocking messages.")
var verbose = false
Back to the run function: to exit a program, just throw an ExitCode (.success, .failure...). Just add an exit statement to the function:
throw ExitCode.success
There you go!
Now I won't (unless you ask for it in the comments) write about the program's body because it's a bit off-topic (and in real life, you would probably use ImageMagick ^^), but here's how your code should look so far:
import ArgumentParser

struct ImgCmp: ParsableCommand {
    @Argument(help: "The reference image.")
    var base: String

    @Argument(help: "The image to compare.")
    var image: String

    @Flag(name: [.customLong("verbose"), .customShort("v")],
          help: "Show logs, information and non blocking messages.")
    var verbose = false

    @Option(name: [.customLong("tolerance"), .customShort("t")],
            help: "The tolerance to consider an image identical, as a value from 0 to 1.\n0 is strictly identical.")
    var tolerance: Float?

    func run() throws {
        // here you would compare the images, log stuff and return the right status code.
        throw ExitCode.success
    }
}

ImgCmp.main()
I hope you enjoyed this simple post. Don't hesitate to ask any questions in the comment section, I'll be glad to help :).
Happy coding!
Upgrading to prompt_toolkit 3.0
There are two major changes in 3.0 to be aware of:
First, prompt_toolkit uses the asyncio event loop natively, rather than using its own implementations of event loops. This means that all coroutines are now asyncio coroutines, and all Futures are asyncio futures. Asynchronous generators became real asynchronous generators as well.
Prompt_toolkit uses type annotations (almost) everywhere. This should not break any code, but it's very helpful in many ways.
There are some minor breaking changes:
The dialogs API had to change (see below).
Detecting the prompt_toolkit version
Detecting whether version 3 is being used can be done as follows:
from prompt_toolkit import __version__ as ptk_version

PTK3 = ptk_version.startswith('3.')
Fixing calls to get_event_loop
Every usage of get_event_loop has to be fixed. An easy way to do this is by changing the imports like this:
if PTK3:
    from asyncio import get_event_loop
else:
    from prompt_toolkit.eventloop import get_event_loop
Notice that for prompt_toolkit 2.0, get_event_loop returns a prompt_toolkit EventLoop object. This is not an asyncio eventloop, but the API is similar.
There are some changes to the eventloop API:
Running on top of asyncio
For 2.0, you had to tell prompt_toolkit to run on top of the asyncio event loop. Now it's the default. So, you can simply remove the following two lines:
from prompt_toolkit.eventloop.defaults import use_asyncio_event_loop

use_asyncio_event_loop()
There are a few little breaking changes though. The following:
# For 2.0
result = await PromptSession().prompt('Say something: ', async_=True)
has to be changed into:
# For 3.0
result = await PromptSession().prompt_async('Say something: ')
Further, it’s impossible to call the prompt() function within an asyncio application (within a coroutine), because it will try to run the event loop again. In that case, always use prompt_async().
Changes to the dialog functions
The original way of using dialog boxes looked like this:
from prompt_toolkit.shortcuts import input_dialog result = input_dialog(title='...', text='...')
Now, the dialog functions return a prompt_toolkit Application object. You have to call either its run or run_async method to display the dialog. The async_ parameter has been removed everywhere.
if PTK3:
    result = input_dialog(title='...', text='...').run()
else:
    result = input_dialog(title='...', text='...')

# Or

if PTK3:
    result = await input_dialog(title='...', text='...').run_async()
else:
    result = await input_dialog(title='...', text='...', async_=True)
Hacking attachment_fu to work with Flash/Flex uploads and crop square images
Rick Olson’s attachment_fu is my favorite file upload plug-in because it lets you use three different image manipulation tools [rmagick, mini-magick, image science] and storage options [file system, database, Amazon S3]. However, it doesn’t yet support two features I use on every CMS I build: Flash/Flex file upload (images will upload but won’t be resized) and square image cropping. Here’s how to tweak it to get both features working.
First up, support for Flash/Flex upload (I should really drop the ‘Flash/’ part as I only use Flex now). Ilya Devers posted the solution on Google Groups, but I get to claim 1% credit as my blog is mentioned in his post.
The problem is really on the Flex side of things, as all uploads come through with ‘application/octet-stream’ for the mime-type. attachment_fu can upload any kind of file, so it checks the mime-type before running its resize code; since it’s looking for an image, it skips over the Flex-uploaded files. Ilya’s rather ingenious solution is to override attachment_fu and use the file system to check the file type. To override attachment_fu, add the ‘uploaded_data=’ and ‘get_content_type’ methods to your upload model.
end
Next is cropping square images with mini-magick. Currently, if you request a square image, attachment_fu will stretch rather than crop the image. This time I’ll ‘borrow’ the solution from Craig Ambrose. You have to dig deeper down into the depths of the rails plugins directory to edit ‘vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/processors/mini_magick_processor.rb’ and replace the resize_image method with the following.
# Performs the actual resizing operation for a thumbnail
def resize_image(img, size)
  size = size.first if size.is_a?(Array) && size.length == 1
  if size.is_a?(Fixnum) || (size.is_a?(Array) && size.first.is_a?(Fixnum))
    if size.is_a?(Fixnum)
      crop_square(img, size)
    else
      img.resize(size.join('x'))
    end
  else
    img.resize(size.to_s)
  end
  self.temp_path = img
end

# Square-crop helper: shave the longer dimension, then resize
def crop_square(image, size)
  if image[:width] < image[:height]
    shave_off = ((image[:height] - image[:width])/2).round
    image.shave("0x#{shave_off}")
  elsif image[:width] > image[:height]
    shave_off = ((image[:width] - image[:height])/2).round
    image.shave("#{shave_off}x0")
  end
  image.resize("#{size}x#{size}")
  return image
end
To crop an image you use ‘‘ as in the Model code above. That’s it!
This was a great help. Thanks for post.
-Mike (guy you met at onAIR who had the music site)
Comment by Mike — January 14, 2008 @ 2:39 pm
Hey Alastair! Nice write up. I wish I would have seen your write up for squaring thumbnails. I ended up having the Flash guys do some masking magic on the images, but this would have been better. Oh well. And it’s nice to know there are other people out there who just want to push their apps out there in the best, fastest, easiest way possible and get on with the next app.
Comment by Sam Freiberg — January 16, 2008 @ 3:22 pm
I made the following modification for my purposes:
def resize_image(img, size)
  logger.debug("Size: #{size.inspect}")
  if size.is_a?(Array) && size.length == 2
    #resize_and_crop_irregular(img, size[0], size[1])
    size[0] == size[1] ? resize_and_crop(img, size[0]) : resize_and_crop_irregular(img, size[0], size[1])
  else
    img.resize(size.to_s)
  end
  self.temp_path = img
end
def resize_and_crop_irregular(image, nu_width, nu_height)
original_ratio = (image[:width].to_f / image[:height].to_f).to_f
nu_ratio = (nu_width.to_f / nu_height.to_f).to_f
if nu_ratio > original_ratio
new_ratio = (nu_width.to_f / nu_height.to_f).to_f
corrected_height = (image[:width].to_f / new_ratio).to_f
shave_off = ((image[:height] - corrected_height)/2).round
image.shave(”0x#{shave_off}”)
end
image.resize(”#{nu_width}x#{nu_height}”)
return image
end
Comment by Kevin Thompson — February 15, 2008 @ 5:51 pm
Very helpful, I’ve not had so much trouble working with Flex and anything else since I started trying to upload files, there were so many issues and this was just one of them.
However your fix didn’t work for me, it always called the rescue block, so I commented that out and then did some dumping of the various variables, the results for content_type (after the file -bi “#{File.join(temp_path)} part was):
ERROR: cannot open '/tmp/CGI4338-1' (No such file or directory)
I had no idea what was going on as the temp_path was:
/tmp/CGI4338-1
I couldn’t figure this out so I added the following hack to the rescue block:
content_type = Mime::Type.lookup_by_extension( File.extname( self.filename ).gsub( /\./, '' ) )
if content_type.to_s.match( /.*\/.*/ )
content_type.to_s
else
fallback
end
I don’t really like this, but it works (I’ve registered the appropriate mime types with Mime::Type.register). Do you have any idea what that error is on the file -bi line.
Anyway it was still a great help to come across this post as it didn’t mean I spent as long figuring out why the thumbnails weren’t being created.
Thanks again.
Comment by Dave Spurr — April 19, 2008 @ 11:31 am
Hi,
I’ve integrated these changes you’ve made into the Ruboss “Flexible Rails” framework to allow file uploading (as well as Restful_Authentication) to be possible from Flex to Rails. Thanks for all of your help. I haven’t completely finished the Flex part yet, but this tutorial was great.
Here’s the beginnings of a RESTful Flex on Rails social networking site. Just laying the groundwork a little bit. Ruboss Tutorial
Peace,
Lance
Comment by Lance — September 3, 2008 @ 8:12 pm
[...] Hacking attachment_fu to work with Flash/Flex uploads and crop square images [...]
Pingback by acts_as_attachment?attachment_fu???? at ?????? — November 16, 2008 @ 2:04 am
It seems lot of people have been able to successfully make use of this blog.
unfortunately i’m still struggling. my questions:
1. how does the rails controller code look like? how are you guys constructing model object out of posted data?
thx
Comment by Alan — December 7, 2008 @ 8:20 pm
does this hack even work with latest version of fu plugin anymore????
nothing seems to work for me
Comment by Alan — December 7, 2008 @ 9:41 pm
Alan,
It works. attachment_fu requires default naming of some form variables to make things work. Be certain that your incoming parameters hash has a filename and uploaded_data parameter set. In Flex, you can set the name of your uploaded file with
file.upload(request, “uploaded_data”);
where file is a FileReference object.
Comment by Rich — December 11, 2008 @ 9:16 am | http://blog.vixiom.com/2007/12/28/hacking-attachment_fu-to-work-with-flashflex-uploads-and-crop-square-images/ | crawl-002 | refinedweb | 1,043 | 56.35 |
The use of blocks with methods is a syntactic cover that hides the fact that blocks can be passed like data.
There are both Proc and Lambda objects which can be used to wrap blocks of code as objects so that they can be passed as parameters. The difference between the two types of "wrapping" is subtle but basically comes down to the behavior of the return statement.
In a block, or a proc object that wraps it, a return acts as if it was in the calling method, i.e. the method that used the yield to call the block.
That is, return doesn't just terminate the block; it terminates the calling method, the method that called it, and so on.
In the case of a Lambda, the return simply terminates the lambda's code and returns control to the calling method.
If you don't want to use the syntactic sugar of a block you can explicitly pass either a proc or a Lambda object.
To see this in action try:
class MyClass
def myMethod p
p.call 1
p.call 2
end
end
now the method accepts a parameter p which it assumes to be a Proc object wrapping a block of code. To execute this code you simply use the Proc object's call method. To create the Proc object all we have to do is:
myObject=MyClass.new
p1=Proc.new {|x| puts x}
myObject.myMethod p1
where the new method accepts the block of code to be wrapped by the Proc object. There are shorter ways of writing this (using the & operator for example) but this form reveals what is actually going on.
To see the surprising behavior when a return is used within a block/Proc we need another method to call the method that calls the Proc:
class MyClass
  def myMethod1 p
    p.call 1
    puts 'End MyMethod'
  end

  def myMethod2
    p1=Proc.new {|x| puts x}
    myMethod1 p1
    puts 'End myMethod2'
  end
end
Notice that the only real difference here is that myMethod2 calls myMethod1 in the same way as it was called in the main program. If you add:
myObject=MyClass.new
myObject.myMethod2
and run the program you will see both methods ending with suitable messages.
Now add a return to the Proc:
p1=Proc.new {|x| puts x;return}
Now when you run the program you will see 1 printed but neither of the methods get to print their ending message.
The return in the block returns control from the block to the calling method method1, from there to method2 and from there to the main program - this is probably not what you are expecting.
A single return seems to unwind three procedure calls! Note that this works in the same way even if you use blocks and yield - as blocks default to Proc objects.
However if you change the Proc object to a Lambda object then the return just terminates the block code and the two methods get to print their final messages as you would expect:
p1=lambda {|x| puts x;return}
Notice that lambda is a method of the Kernel object that creates a Lambda object - hence there is no call to new.
You can also create a lambda object using a notation that is much closer to other languages:
p1= -> (x) {puts x;return}
This hides the fact that it is a lambda object that is being created and makes it look much more like an anonymous function.
Of course given that Ruby incorporates elements of functional programming it has closures.
That is, when you create a Proc or a Lambda it incorporates the current bindings. What this means is that everything that is in scope when the object is created remains in scope for the entire lifetime of the object, even if it has gone out of scope in the normal execution of the program.
For example, if you define a method that returns a Lambda object then any local method variables that the object uses are available whenever you call the wrapped code. To see this in action first define a suitable method:
class MyClass
  def myMethod
    @n=7
    return lambda {puts @n}
  end
end
Notice that the method returns a lambda object that makes use of n which is only in scope, i.e. exists, while the method is active. Even so you can write:
myObject=MyClass.new
p=myObject.myMethod
p.call
and you will see 7 displayed indicating that the variable is still available to the lambda object.
Once again unless you are familiar with the idea of closure this is surprising behavior.
Ruby actually takes this a stage further and provides a binding object which records all of the relevant bindings of the object it is created in. You can capture a set of bindings, i.e. the current state, and execute code in the context of the stored bindings. For example:
class MyClass
  def myMethod
    @n=7
    @b=binding
    @n=8
    return @b
  end
end
Notice that the binding object is created when @n is 7.
If you now try:
myObject=MyClass.new
b1=myObject.myMethod
eval("puts @n")
eval("puts @n",b1)
the first eval reveals that @n is nil because the method has ended and its local instance variables are out of scope, i.e. in the normal way of things @n doesn't exist. The second eval displays @n as not only in existence but with a value of 8, i.e. the last value that was assigned to the variable.
Notice that bindings aren't snapshots of the values in a variable, they really are the set of variables used by an object.
To see this more clearly try:
class MyClass
  def myMethod x
    @b=binding
    @n=x
    return @b
  end
end
myObject=MyClass.new
b1=myObject.myMethod 8
b2=myObject.myMethod 9
eval("puts @n")
eval("puts @n",b1)
eval("puts @n",b2)
The result is nil, 9, 9 as both binding objects refer to the last state of myObject. | http://www.i-programmer.info/programming/ruby/5683.html?start=1 | CC-MAIN-2017-04 | refinedweb | 1,083 | 70.43 |
We consider some of the lesser-known classes and keywords of C#. Today we will be looking at two set implementations in the System.Collections.Generic namespace: HashSet<T> and SortedSet<T>. Even though most people think of sets as mathematical constructs, they are actually very useful classes that can be used to help make your application more performant if used appropriately.
For more of the "Little Wonders" posts, see the index here.
In mathematical terms, a set is an unordered collection of unique items. In other words, the set {2,3,5} is identical to the set {3,5,2}. In addition, the set {2, 2, 4, 1} would be invalid because it would have a duplicate item (2). Finally, you can perform set arithmetic on sets, such as union, intersection, and difference.
Now, you may be thinking: why bother with the set classes in C# if you have no need for mathematical set manipulation? The answer is simple: they are extremely efficient ways to determine ownership in a collection.
For example, let’s say you are designing an order system that tracks the price of a particular equity, and once it reaches a certain point will trigger an order. Now, since there’s tens of thousands of equities on the markets, you don’t want to track market data for every ticker as that would be a waste of time and processing power for symbols you don’t have orders for. Thus, we just want to subscribe to the stock symbol for an equity order only if it is a symbol we are not already subscribed to.
Every time a new order comes in, we will check the list of subscriptions to see if the new order’s stock symbol is in that list. If it is, great, we already have that market data feed! If not, then and only then should we subscribe to the feed for that symbol.
So far so good, we have a collection of symbols and we want to see if a symbol is present in that collection and if not, add it. This really is the essence of set processing, but for the sake of comparison, let’s say you do a list instead:
// class that handles our order processing service
public sealed class OrderProcessor
{
    // contains list of all symbols we are currently subscribed to
    private readonly List<string> _subscriptions = new List<string>();

    ...
}
Now whenever you are adding a new order, it would look something like:
public PlaceOrderResponse PlaceOrder(Order newOrder)
{
    // do some validation, of course...

    // check to see if already subscribed, if not add a subscription
    if (!_subscriptions.Contains(newOrder.Symbol))
    {
        // add the symbol to the list
        _subscriptions.Add(newOrder.Symbol);

        // do whatever magic is needed to start a subscription for the symbol
    }

    // place the order logic!
}
What’s wrong with this? In short: performance! Finding an item inside a List<T> is a linear - O(n) – operation, which is not a very performant way to find if an item exists in a collection.
(I used to teach algorithms and data structures in my spare time at a local university, and when you began talking about big-O notation you could immediately begin to see eyes glossing over as if it was pure, useless theory that would not apply in the real world, but I did and still do believe it is something worth understanding well to make the best choices in computer science).
Let’s think about this: a linear operation means that as the number of items increases, the time that it takes to perform the operation tends to increase in a linear fashion. Put crudely, this means if you double the collection size, you might expect the operation to take something like the order of twice as long.
Linear operations tend to be bad for performance because they mean that to perform some operation on a collection, you must potentially “visit” every item in the collection. Consider finding an item in a List<T>: if you want to see if the list has an item, you must potentially check every item in the list before you find it or determine it’s not found.
Now, we could of course sort our list and then perform a binary search on it, but sorting is typically a linear-logarithmic complexity – O(n * log n) - and could involve temporary storage. So performing a sort after each add would probably add more time.
As an alternative, we could use a SortedList<TKey, TValue> which sorts the list on every Add(), but this has a similar level of complexity to move the items and also requires a key and value, and in our case the key is the value.
This is why sets tend to be the best choice for this type of processing: they don’t rely on separate keys and values for ordering – so they save space – and they typically don’t care about ordering – so they tend to be extremely performant.
The .NET BCL (Base Class Library) has had the HashSet<T> since .NET 3.5, but at that time it did not implement the ISet<T> interface. As of .NET 4.0, HashSet<T> implements ISet<T> and a new set, the SortedSet<T> was added that gives you a set with ordering.
When used right, HashSet<T> is a beautiful collection, you can think of it as a simplified Dictionary<T,T>. That is, a Dictionary where the TKey and TValue refer to the same object. This is really an oversimplification, but logically it makes sense. I’ve actually seen people code a Dictionary<T,T> where they store the same thing in the key and the value, and that’s just inefficient because of the extra storage to hold both the key and the value.
As it’s name implies, the HashSet<T> uses a hashing algorithm to find the items in the set, which means it does take up some additional space, but it has lightning fast lookups! Compare the times below between HashSet<T> and List<T>:
Now, these times are amortized and represent the typical case. In the very worst case, the operations could be linear if they involve a resizing of the collection – but this is true for both the List and HashSet so that’s a less of an issue when comparing the two.
The key thing to note is that in the general case, HashSet is constant time for adds, removes, and contains! This means that no matter how large the collection is, it takes roughly the exact same amount of time to find an item or determine if it’s not in the collection. Compare this to the List where almost any add or remove must rearrange potentially all the elements! And to find an item in the list (if unsorted) you must search every item in the List.
So as you can see, if you want to create an unordered collection and have very fast lookup and manipulation, the HashSet is a great collection.
And since HashSet<T> implements ICollection<T> and IEnumerable<T>, it supports nearly all the same basic operations as the List<T> and can use the System.Linq extension methods as well.
All we have to do to switch from a List<T> to a HashSet<T> is change our declaration. Since List and HashSet support many of the same members, chances are we won’t need to change much else.
public sealed class OrderProcessor
{
    private readonly HashSet<string> _subscriptions = new HashSet<string>();

    // ...

    public PlaceOrderResponse PlaceOrder(Order newOrder)
    {
        // do some validation, of course...

        // check to see if already subscribed, if not add a subscription
        if (!_subscriptions.Contains(newOrder.Symbol))
        {
            // add the symbol to the list
            _subscriptions.Add(newOrder.Symbol);

            // do whatever magic is needed to start a subscription for the symbol
        }

        // place the order logic!
    }

    // ...
}
Just like HashSet<T> is logically similar to Dictionary<T,T>, the SortedSet<T> is logically similar to the SortedDictionary<T,T>.
The SortedSet can be used when you want to do set operations on a collection, but you want to maintain that collection in sorted order. Now, this is not necessarily mathematically relevant, but if your collection needs do include order, this is the set to use.
So the SortedSet seems to be implemented as a binary tree (possibly a red-black tree) internally. Since binary trees are dynamic structures and non-contiguous (unlike List and SortedList) this means that inserts and deletes do not involve rearranging elements, or changing the linking of the nodes.
There is some overhead in keeping the nodes in order, but it is much smaller than a contiguous storage collection like a List<T>. Let's compare the three:

- Add: List<T> is O(1) at the end, O(n) elsewhere; HashSet<T> is O(1); SortedSet<T> is O(log n)
- Contains: List<T> is O(n); HashSet<T> is O(1); SortedSet<T> is O(log n)
- Remove: List<T> is O(n); HashSet<T> is O(1); SortedSet<T> is O(log n)
The MSDN documentation seems to indicate that operations on SortedSet are O(1), but this seems to be inconsistent with its implementation and seems to be a documentation error. There’s actually a separate MSDN document (here) on SortedSet that indicates that it is, in fact, logarithmic in complexity. Let’s put it in layman’s terms: logarithmic means you can double the collection size and typically you only add a single extra “visit” to an item in the collection.
Take that in contrast to List<T>’s linear operation where if you double the size of the collection you double the “visits” to items in the collection. This is very good performance! It’s still not as performant as HashSet<T> where it always just visits one item (amortized), but for the addition of sorting this is a good thing.
Consider the following table; this is just illustrative data of the relative complexities, but it's enough to get the point:

- 1,000 items: ~1,000 visits linear - O(n) - vs. ~10 visits logarithmic - O(log n)
- 10,000 items: ~10,000 visits vs. ~14 visits
- 100,000 items: ~100,000 visits vs. ~17 visits

Notice that the logarithmic - O(log n) - visit count goes up very slowly compared to the linear - O(n) - visit count. This is because since the list is sorted, it can do one check in the middle of the list, determine which half of the collection the data is in, and discard the other half (binary search).
So, if you need your set to be sorted, you can use the SortedSet<T> just like the HashSet<T> and gain sorting for a small performance hit, but it’s still faster than a List<T>.
Now, if you do want to perform more set-like operations, both implementations of ISet<T> support the following, which play back towards the mathematical set operations described before: UnionWith (union), IntersectWith (intersection), ExceptWith (difference), and SymmetricExceptWith (symmetric difference), plus the comparison methods IsSubsetOf, IsSupersetOf, Overlaps, and SetEquals.
For more information on the set operations themselves, see the MSDN description of ISet<T> (here).
Don’t get me wrong, sets are not silver bullets. You don’t really want to use a set when you want separate key to value lookups, that’s what the IDictionary implementations are best for.
Also sets don’t store temporal add-order. That is, if you are adding items to the end of a list all the time, your list is ordered in terms of when items were added to it. This is something the sets don’t do naturally (though you could use a SortedSet with an IComparer with a DateTime but that’s overkill) but List<T> can.
Also, List<T> allows indexing which is a blazingly fast way to iterate through items in the collection. Iterating over all the items in a List<T> is generally much, much faster than iterating over a set.
Sets are an excellent tool for maintaining a lookup table where the item is both the key and the value. In addition, if you have need for the mathematical set operations, the C# sets support those as well.
The HashSet<T> is the set of choice if you want the fastest possible lookups but don’t care about order. In contrast the SortedSet<T> will give you a sorted collection at a slight reduction in performance.
Posted on Thursday, February 3, 2011 6:23 PM. Filed under: My Blog, C#, Software, .NET, Little Wonders.
Greetings!
I’ve recently got Etherned Shield W5100 which I immediately attached to Arduino Mega.
I’ve tried out various sketches but when I try to connect to the server’s IP, I get this error:
ERR_CONNECTION_REFUSED (my PC and the Ethernet shield are on the same local network), and my router
is a D-Link DIR-300. On the Ethernet board, a 511 resistor is soldered next to the LAN socket
instead of a 510 resistor. I suppose that this might cause the problem? Or does the D-Link
router itself cause the issue? Also, the 6th ICSP contact shorts against the case of the SD card.
I am waiting for your help in getting the board working. Sorry for my bad English.
#include <SPI.h>
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
IPAddress ip(192, 168, 0, 20);
EthernetServer server(80);

void setup() {
  // Open serial communications and wait for port to open:
  Serial.begin(9600);
}
typing --- Support for type hints
New in version 3.5.
Source code: Lib/typing.py
Note
The Python runtime does not enforce function and variable type annotations. They can be used by third party tools such as type checkers, IDEs, linters, etc.
This module supports type hints as specified by PEP 484 and PEP 526.
A type alias is defined by assigning the type to the alias. In this example,
Vector and
List[float] will be treated as interchangeable synonyms:
from typing import List

Vector = List[float]

def scale(scalar: float, vector: Vector) -> Vector:
    return [scalar * num for num in vector]
Nouveau dans la version 3.5.2.
Callable
Frameworks expecting callback functions of specific signatures might be type hinted using Callable[[Arg1Type, Arg2Type], ReturnType].
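For instance, a function that expects a zero-argument callback returning a string can be annotated like this (feeder and the lambda are illustrative names, not from the original text):

```python
from typing import Callable

def feeder(get_next_item: Callable[[], str]) -> str:
    # Call the zero-argument callback and use its result
    return get_next_item().upper()

print(feeder(lambda: "spam"))  # SPAM
```

A static type checker would flag any argument to feeder that is not callable with no arguments, or whose return type is not str.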
Classes, functions, and decorators
The module defines the following classes, functions and decorators:
- class typing.TypeVar
Type variable.
Usage:
T = TypeVar('T')  # Can be anything
A = TypeVar('A', str, bytes)  # Must be str or bytes
Type variables exist primarily for the benefit of static type checkers. They serve as the parameters for generic types as well as for generic function definitions. See class Generic for more information on generic types.
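As a sketch of how a type variable parameterizes a generic function definition (first is an illustrative name):

```python
from typing import Sequence, TypeVar

T = TypeVar('T')

def first(seq: Sequence[T]) -> T:
    # The return type is tied to the element type of the argument
    return seq[0]

print(first([1, 2, 3]))   # 1
print(first(("a", "b")))  # a
```

A checker infers T as int for the first call and str for the second, linking the argument and return types.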
Type[Any] is equivalent to Type, which in turn is equivalent to type, which is the root of Python's metaclass hierarchy.
New in version 3.5.4.

New in version 3.5.3.
- class typing.AsyncIterable(Generic[T_co])
A generic version of collections.abc.AsyncIterable.
New in version 3.5.2.
- class typing.AsyncIterator(AsyncIterable[T_co])
A generic version of collections.abc.AsyncIterator.
New in version 3.5.2.
- class typing.ContextManager(Generic[T_co])
A generic version of contextlib.AbstractContextManager.
New in version 3.5.4.
New in version 3.6.0.
- class typing.AsyncContextManager(Generic[T_co])
A generic version of contextlib.AbstractAsyncContextManager.
New in version 3.5.4.
New in version 3.6.2.
- class typing.DefaultDict(collections.defaultdict, MutableMapping[KT, VT])
A generic version of collections.defaultdict.
New in version 3.5.2.
- class typing.OrderedDict(collections.OrderedDict, MutableMapping[KT, VT])
A generic version of collections.OrderedDict.
New in version 3.7.2.
- class typing.Counter(collections.Counter, Dict[T, int])
A generic version of collections.Counter.
New in version 3.5.4.
New in version 3.6.1.
- class typing.ChainMap(collections.ChainMap, MutableMapping[KT, VT])
A generic version of collections.ChainMap.
New in version 3.5.4.
New in version 3.6.1.
-)
Nouveau dans la version 3.6.1.
-'
Nouveau dans la().
- class typing.NamedTuple

  Typed version of collections.namedtuple(). Usage:

      class Employee(NamedTuple):
          name: str
          id: int

  The resulting class has two extra attributes: _field_types, giving a dict mapping field names to types, and _field_defaults, a dict mapping field names to default values. (The field names are in the _fields attribute, which is part of the namedtuple API.)

  Changed in version 3.6.1: Added support for default values, methods, and docstrings.

- typing.NewType(typ)

  A helper function to indicate a distinct type to a typechecker; see NewType. At runtime it returns a function that returns its argument. Usage:

      UserId = NewType('UserId', int)
      first_user = UserId(1)

  New in version 3.5.4.

  New in version 3.6.2.
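A quick runtime check (my own example, not from the page) of the namedtuple-API attributes mentioned above:

```python
from typing import NamedTuple

class Employee(NamedTuple):
    name: str
    id: int = 3  # default values are supported since 3.6.1

e = Employee('Guido')
print(e)                         # Employee(name='Guido', id=3)
print(Employee._fields)          # ('name', 'id')
print(Employee._field_defaults)  # {'id': 3}
```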
typing.Union

  Union type; Union[X, Y] means either X or Y.

  To define a union, use e.g. Union[int, str].

  When comparing unions, the argument order is ignored, e.g.:

      Union[int, str] == Union[str, int]

  You cannot subclass or instantiate a union.

  You cannot write Union[X][Y].

  You can use Optional[X] as a shorthand for Union[X, None].

  Changed in version 3.7: Don't remove explicit subclasses from unions at runtime.

typing.Optional

  Optional type. Optional[X] is equivalent to Union[X, None].

  New in version 3.5.3.

  New in version 3.5.2.
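To round this off, a small runnable example (my own) of the usual Optional idiom: a function that either returns a value or None.

```python
from typing import Optional, Sequence

def find_index(items: Sequence[str], target: str) -> Optional[int]:
    """Return the index of target, or None when it is absent."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return None  # the None arm of Union[int, None]

print(find_index(['a', 'b'], 'b'))  # 1
print(find_index(['a', 'b'], 'z'))  # None
```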
CodePlex: Project Hosting for Open Source Software
I have been playing around with BE 2.0 and think it's fantastic!
I have been looking for a theme that would display the posts in a single wide column, with widgets on their own page. All I can find are themes that have 2 columns (posts on the left and widgets in a smaller column to the right). So I decided to try to customize the standard theme; I have it displaying a single wide column/page for the posts, but I cannot figure out how to have the widgets show up in a new aspx page.
The reason I am looking to do this is I want to be able to include wide screen shots and photos in the posts and I need extra width on the page to do this. So I was thinking that in the Menu I could add a link to a widgets page or even list the names
of the widgets I want to include in the menu and have them open in their own separate page. What would be even cooler is a Widgets Dashboard where I could have several widgets displaying on their own page in 2 or 3 columns instead of 1 long column.
Does anyone have a theme like this or any ideas how I can do this?
Sam
Can anybody at least point me in the right direction on this?
Thanks
Like this with no sidebar?
If you want the code to do this, let me know. I got it on this forum from Ben, a contributor to BE.
Here is a theme that would do that; just remove the widgets from the sidebar and put them at the bottom.
Thank you for your reply.
Yes, basically like that site with no sidebar. I was already able to accomplish a page like above by setting the first column to 100% and the second column to 0%.
Now I am looking for an example of a separate aspx page that will only display a Sidebar item like Tag cloud or Category list. That might be a page like that inherits the same theme as the main blog.
I was also thinking it might be cool to have a sidebar/widets page that could be configured like a Dashboard of several widgets. Since most of the widgets are narrow in size, this dashboard page of widgets might have 2 or 3 columns of widgets.
Then you could place several widgets on this page in any column. I have never seen this before on a blog but it might be interesting to try...
This is the code that removes the sidebar from the page; it goes in the page's .cs code-behind:
using System;
using System.Collections;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using BlogEngine.Core;
using BlogEngine.Core.Web.Controls;
public partial class membership_applicationcopy : BlogBasePage
{
protected void Page_Load(object sender, EventArgs e)
{
// Find the sidebar placeholder defined in the master page and hide it
PlaceHolder phSidebar = Master.FindControl("phSidebar") as PlaceHolder;
if (phSidebar != null)
    phSidebar.Visible = false;
}
[System.Web.Services.WebMethodAttribute(), System.Web.Script.Services.ScriptMethodAttribute()]
public static string GetDynamicContent(string contextKey)
{
return default(string);
}
}
I am currently trying to learn how to use the CImg library, but I get an error that I do not quite understand. The documentation of the CImg constructors was also rather unhelpful. The basic idea is that I want to create an image so that I can write the pixels manually in a second step (not shown). However, access to the image created with the CImg constructor is denied for some reason.
Here is a minimal example:
#include <iostream>
#include <CImg.h>
#include "Header.h"

using namespace std;

int main() {
    cimg_library::CImg<float> img(200, 200, 0, 3, 0.0);
    cout << "img is: " << img(0, 0, 0, 0) << endl;  // <----------------- error occurs here
    return 0;
}
The error reads: Exception thrown at 0x0067B693 in Test.exe: 0xC0000005: Access violation reading location 0x00000000
Any help in understanding this would be much appreciated!
Best
Blue#
Edit: I tried 2 more things but both don't work unfortunately.
1st: I tried the .data() function as well but to no avail. Changing data types (float, int, unsigned char) also did not solve the problem, apart from giving the error message that the whole thing would point to a NULL vector now (still access denied).
2nd: I switched to using pointers:
cimg_library::CImg<unsigned char>* img = new cimg_library::CImg<unsigned char>(200, 200, 0, 3, 1);
cout << "img is: " << *img->data(0, 0, 0, 0) << endl;
This still gives pretty much the same error message though: Exception thrown: read access violation. cimg_library::CImg::data(...) returned nullptr.
Don't pass 0 but 1 for the number of slices (the third, depth argument). Otherwise you get an empty 0x0x0x0 image that has no pixels, and reading it is an invalid memory access.
User contributions licensed under CC BY-SA 3.0 | https://windows-hexerror.linestarve.com/q/so60685142-C-CImg-Access-violation-reading-location | CC-MAIN-2020-16 | refinedweb | 289 | 63.9 |
Built on top of d3.js and stack.gl, plotly.js is a high-level, declarative charting library. plotly.js ships with over 40 chart types, including scientific charts, 3D graphs, statistical charts, SVG maps, financial charts, and more.
Contact us for Plotly.js consulting, dashboard development, application integration, and feature additions.
Table of contents
- Quick start options
- Modules
- Building plotly.js
- Bugs and feature requests
- Documentation
- Contributing
- Community
- Clients for R, Python, Node, and MATLAB
- Creators
Quick start options
Install with npm
npm install plotly.js-dist
and import plotly.js as import Plotly from 'plotly.js-dist'; or var Plotly = require('plotly.js-dist');.
Use the plotly.js CDN hosted by Fastly
<!-- Latest compiled and minified plotly.js JavaScript --> <script src=""></script> <!-- OR use a specific plotly.js release (e.g. version 1.5.0) --> <script src=""></script> <!-- OR an un-minified version is also available --> <script src="" charset="utf-8"></script>
and use the Plotly object in the window scope.
Fastly supports Plotly.js with free CDN service. Read more at
Download the latest release
and use the plotly.js dist file(s). More info here.
Read the Getting started page for more examples.
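For orientation, a minimal page wired up against the CDN bundle looks roughly like this (my sketch: the div id and sample data are arbitrary, and the src assumes the standard cdn.plot.ly bundle path):

```html
<!DOCTYPE html>
<html>
  <head>
    <script src="https://cdn.plot.ly/plotly-latest.min.js" charset="utf-8"></script>
  </head>
  <body>
    <div id="chart"></div>
    <script>
      Plotly.newPlot('chart', [{ x: [1, 2, 3], y: [2, 6, 3], type: 'scatter' }]);
    </script>
  </body>
</html>
```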
Modules
Starting in v1.15.0, plotly.js ships with several partial bundles (more info here).

Starting in v1.39.0, plotly.js publishes distributed npm packages with no dependencies. For example, run npm install plotly.js-geo-dist and add import Plotly from 'plotly.js-geo-dist'; to your code to start using the plotly.js geo package.

If none of the distributed npm packages meet your needs, and you would like to manually pick which plotly.js modules to include, you'll first need to run npm install plotly.js and then create a custom bundle by using plotly.js/lib/core, and loading only the trace types that you need (e.g. pie or choropleth). The recommended way to do this is by creating a bundling file. For example, in CommonJS:
// in custom-plotly.js var Plotly = require('plotly.js/lib/core'); // Load in the trace types for pie, and choropleth Plotly.register([ require('plotly.js/lib/pie'), require('plotly.js/lib/choropleth') ]); module.exports = Plotly;
Then elsewhere in your code:
var Plotly = require('./path/to/custom-plotly');
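Plotly.register is essentially a plugin registry. As a rough illustration of the pattern only (a toy sketch of my own, not plotly.js internals), the core keeps a table of trace modules and each register call installs into it:

```javascript
// Toy version of the register pattern: a core with no trace types built in,
// plus modules that are installed explicitly.
function makeCore() {
  const traceModules = {};
  return {
    register(modules) {
      for (const m of [].concat(modules)) {
        traceModules[m.name] = m; // e.g. 'pie', 'choropleth'
      }
    },
    canPlot(traceType) {
      return traceType in traceModules;
    },
  };
}

const core = makeCore();
core.register([{ name: 'pie' }, { name: 'choropleth' }]);

console.log(core.canPlot('pie'));       // true
console.log(core.canPlot('scatter3d')); // false
```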
To learn more about the plotly.js module architecture, refer to our modularizing monolithic JS projects post.
Non-ascii characters

Important: the plotly.js code base contains some non-ascii characters. Therefore, please make sure to set the charset attribute to "utf-8" in the script tag that imports your plotly.js bundle. For example:

<script src="my-plotly-bundle.js" charset="utf-8"></script>
Building plotly.js
Building instructions using webpack, browserify and other build frameworks are in BUILDING.md.
Bugs and feature requests
Have a bug or a feature request? Please first read the issues guidelines.
Documentation
Official plotly.js documentation is hosted on plot.ly/javascript.
These pages are generated by the Plotly documentation repo built with Jekyll and publicly hosted on GitHub Pages. For more info about contributing to Plotly documentation, please read through contributing guidelines.
You can also suggest new documentation examples by submitting a Codepen on community.plot.ly with the tag plotly-js.
Contributing
Please read through our contributing guidelines. Included are directions for opening issues, using plotly.js in your project and notes on development.
Community
- Follow @plotlygraphs on Twitter for the latest Plotly news.
- Follow @plotly_js on Twitter for plotly.js release updates.
- Implementation help may be found on community.plot.ly (tagged plotly-js) or on Stack Overflow (tagged plotly).
- Developers should use the keyword plotly on packages which modify or add to the functionality of plotly.js when distributing through npm.
- Direct developer email support can be purchased through a Plotly Support Plan.
Versioning
This project is maintained under the Semantic Versioning guidelines.
See the Releases section of our GitHub project for changelogs for each release version of plotly.js.
Clients for R, Python, Node, and MATLAB
Open-source clients to the plotly.js APIs are available at these links:
plotly.js charts can also be created and saved online for free at plot.ly/create.
Creators
Active
Hall of Fame
Code and documentation copyright 2019 Plotly, Inc.
Code released under the MIT license.
Docs released under the Creative Commons license. | https://www.javascripting.com/view/plotly-js | CC-MAIN-2019-51 | refinedweb | 713 | 54.9 |
In this short tip tutorial we are going to learn how to display the current Angular 2 version in a web app, so let's get started.

One note before I start: because I'm still testing and learning Angular 2, I'm using Plunker to play with all the new concepts and features, so I don't have to set up any development environment just to experiment with the framework. If you are not building a real web app using Angular 2, you can just use Plunker or any online JavaScript editor that supports Angular 2.

To get version information you need the VERSION constant from '@angular/core', so make sure to import it first:
import {Component, NgModule, VERSION} from '@angular/core'
Next create an app component
@Component({
  selector: 'my-app',
  template: `
    <div>
      <h2>Hello {{name}}</h2>
      <p>Your Angular 2 version is {{version}}</p>
    </div>
  `,
})
export class App {
  name: string;
  version: string;
  constructor() {
    this.name = 'Angular2';
    this.version = VERSION.full;
  }
}
We have used the @Component annotation to create a simple component whose template contains two bindings, name and version.

Next, in our component class, we have added two member variables of type string to hold the name and version.

Then we have initialized the two variables in the constructor: name with the "Angular2" string, and version with VERSION.full, which holds the full version of the currently used Angular 2.
Next you just display this component in your HTML page using:
<body> <my-app> loading... </my-app> </body>
You should get the hello message followed by the version number, depending on your Angular 2 version.
That's it. See you in the next tip.
Line-of-sight and view calculations. More...
#include "angband.h"
#include "cave.h"
#include "cmds.h"
#include "init.h"
#include "monster.h"
#include "player-timed.h"
Line-of-sight and view calculations.
Like it says on the tin.
References cave_monster(), cave_monster_max(), distance(), monster_race::flags, monster::fx, monster::fy, i, square::info, los(), angband_constants::max_sight, monster::race, rf_has, sqinfo_on, square_isprojectable(), chunk::squares, loc::x, loc::y, and z_info.
Referenced by update_view().
Make a square part of the current view.
References square::info, sqinfo_on, square_isglow(), square_isview(), square_iswall(), and chunk::squares.
Referenced by update_view_one().
Approximate distance between two points.
When either the X or Y component dwarfs the other component, this function is almost perfect, and otherwise, it tends to over-estimate about one grid per fifteen grids of distance.
Algorithm: hypot(dy,dx) = max(dy,dx) + min(dy,dx) / 2
Referenced by add_monster_lights(), chance_of_missile_hit(), effect_handler_DESTRUCTION(), effect_handler_EARTHQUAKE(), effect_handler_TELEPORT(), find_hiding(), find_safety(), generate_starburst_room(), get_moves_fear(), pick_and_place_distant_monster(), project(), ranged_helper(), scatter(), summon_possible(), target_set_interactive(), and update_view_one().
The comments below are still predominantly true, and have been left (slightly modified for accuracy) for historical and nostalgic reasons.
Some comments on the dungeon related data structures and functions...
Angband is primarily a dungeon exploration game, and it should come as no surprise that the internal representation of the dungeon has evolved over time in much the same way as the game itself, to provide semantic changes to the game itself, to make the code simpler to understand, and to make the executable itself faster or more efficient in various ways.
There are a variety of dungeon related data structures, and associated functions, which store information about the dungeon, and provide methods by which this information can be accessed or modified.
Some of this information applies to the dungeon as a whole, such as the list of unique monsters which are still alive. Some of this information only applies to the current dungeon level, such as the current depth, or the list of monsters currently inhabiting the level. And some of the information only applies to a single grid of the current dungeon level, such as whether the grid is illuminated, or whether the grid contains a monster, or whether the grid can be seen by the player. If Angband was to be turned into a multi-player game, some of the information currently associated with the dungeon should really be associated with the player, such as whether a given grid is viewable by a given player.
Currently, a lot of the information about the dungeon is stored in ways that make it very efficient to access or modify the information, while still attempting to be relatively conservative about memory usage, even if this means that some information is stored in multiple places, or in ways which require the use of special code idioms. For example, each monster record in the monster array contains the location of the monster, and each cave grid has an index into the monster array, or a zero if no monster is in the grid. This allows the monster code to efficiently see where the monster is located, while allowing the dungeon code to quickly determine not only if a monster is present in a given grid, but also to find out which monster. The extra space used to store the information twice is inconsequential compared to the speed increase.
Several pieces of information about each cave grid are stored in the info field of the "cave->squares" array, which is a special array of bitflags.
The "SQUARE_ROOM" flag is used to determine which grids are part of "rooms", and thus which grids are affected by "illumination" spells.
The "SQUARE_VAULT" flag is used to determine which grids are part of "vaults", and thus which grids cannot serve as the destinations of player teleportation.
The "SQUARE_MARK" flag is used to determine which grids have been memorized by the player. This flag is used by the "map_info()" function to determine if a grid should be displayed. This flag is used in a few other places to determine if the player can * "know" about a given grid.
The "SQUARE_GLOW" flag is used to determine which grids are "permanently illuminated". This flag is used by the update_view() function to help determine which viewable flags may be "seen" by the player. This flag is used by the "map_info" function to determine if a grid is only lit by the player's torch. This flag has special semantics for wall grids (see "update_view()").
The "SQUARE_VIEW" flag is used to determine which grids are currently in line of sight of the player. This flag is set by (and used by) the "update_view()" function. This flag is used by any code which needs to know if the player can "view" a given grid. This flag is used by the "map_info()" function for some optional special lighting effects. The "player_has_los_bold()" macro wraps an abstraction around this flag, but certain code idioms are much more efficient. This flag is used to check if a modification to a terrain feature might affect the player's field of view. This flag is used to see if certain monsters are "visible" to the player. This flag is used to allow any monster in the player's field of view to "sense" the presence of the player.
The "SQUARE_SEEN" flag is used to determine which grids are currently in line of sight of the player and also illuminated in some way. This flag is set by the "update_view()" function, using computations based on the "SQUARE_VIEW" and "SQUARE_GLOW" flags and terrain of various grids. This flag is used by any code which needs to know if the player can "see" a given grid. This flag is used by the "map_info()" function both to see if a given "boring" grid can be seen by the player, and for some optional special lighting effects. The "player_can_see_bold()" macro wraps an abstraction around this flag, but certain code idioms are much more efficient. This flag is used to see if certain monsters are "visible" to the player. This flag is never set for a grid unless "SQUARE_VIEW" is also set for the grid. Whenever the terrain or "SQUARE_GLOW" flag changes for a grid which has the "SQUARE_VIEW" flag set, the "SQUARE_SEEN" flag must be recalculated. The simplest way to do this is to call "forget_view()" and "update_view()" whenever the terrain or "SQUARE_GLOW" flag changes for a grid which has "SQUARE_VIEW" set.
The "SQUARE_WASSEEN" flag is used for a variety of temporary purposes. This flag is used to determine if the "SQUARE_SEEN" flag for a grid has changed during the "update_view()" function. This flag is used to "spread" light or darkness through a room. This flag is used by the "monster flow code". This flag must always be cleared by any code which sets it.
Note that the "SQUARE_MARK" flag is used for many reasons, some of which are strictly for optimization purposes. The "SQUARE_MARK" flag means that even if the player cannot "see" the grid, he "knows" about the terrain in that grid. This is used to "memorize" grids when they are first "seen" by the player, and to allow certain grids to be "detected" by certain magic.
Objects are "memorized" in a different way, using a special "marked" flag on the object itself, which is set when an object is observed or detected. This allows objects to be "memorized" independant of the terrain features.
The "update_view()" function is an extremely important function. It is called only when the player moves, significant terrain changes, or the player's blindness or torch radius changes. Note that when the player is resting, or performing any repeated actions (like digging, disarming, farming, etc), there is no need to call the "update_view()" function, so even if it was not very efficient, this would really only matter when the player was "running" through the dungeon. It sets the "SQUARE_VIEW" flag on every cave grid in the player's field of view. It also checks the torch radius of the player, and sets the "SQUARE_SEEN" flag for every grid which is in the "field of view" of the player and which is also "illuminated", either by the players torch (if any) or by any permanent light source. It could use and help maintain information about multiple light sources, which would be helpful in a multi-player version of Angband.
Note that the "update_view()" function allows, among other things, a room to be "partially" seen as the player approaches it, with a growing cone of floor appearing as the player gets closer to the door. Also, by not turning on the "memorize perma-lit grids" option, the player will only "see" those floor grids which are actually in line of sight. And best of all, you can now activate the special lighting effects to indicate which grids are actually in the player's field of view by using dimmer colors for grids which are not in the player's field of view, and/or to indicate which grids are illuminated only by the player's torch by using the color yellow for those grids.
It seems as though slight modifications to the "update_view()" functions would allow us to determine "reverse" line-of-sight as well as "normal" line-of-sight", which would allow monsters to have a more "correct" way to determine if they can "see" the player, since right now, they "cheat" somewhat and assume that if the player has "line of sight" to them, then they can "pretend" that they have "line of sight" to the player. But if such a change was attempted, the monsters would actually start to exhibit some undesirable behavior, such as "freezing" near the entrances to long hallways containing the player, and code would have to be added to make the monsters move around even if the player was not detectable, and to "remember" where the player was last seen, to avoid looking stupid.
Note that the "SQUARE_GLOW" flag means that a grid is permanently lit in some way. However, for the player to "see" the grid, as determined by the "SQUARE_SEEN" flag, the player must not be blind, the grid must have the "SQUARE_VIEW" flag set, and if the grid is a "wall" grid, and it is not lit by the player's torch, then it must touch a projectable grid which has both the "SQUARE_GLOW" and "SQUARE_VIEW" flags set. This last part about wall grids is induced by the semantics of "SQUARE_GLOW" as applied to wall grids, and checking the technical requirements can be very expensive, especially since the grid may be touching some "illegal" grids. Luckily, it is more or less correct to restrict the "touching" grids from the eight "possible" grids to the (at most) three grids which are touching the grid, and which are closer to the player than the grid itself, which eliminates more than half of the work, including all of the potentially "illegal" grids, if at most one of the three grids is a "diagonal" grid. In addition, in almost every situation, it is possible to ignore the "SQUARE_VIEW" flag on these three "touching" grids, for a variety of technical reasons. Finally, note that in most situations, it is only necessary to check a single "touching" grid, in fact, the grid which is strictly closest to the player of all the touching grids, and in fact, it is normally only necessary to check the "SQUARE_GLOW" flag of that grid, again, for various technical reasons. However, one of the situations which does not work with this last reduction is the very common one in which the player approaches an illuminated room from a dark hallway, in which the two wall grids which form the "entrance" to the room would not be marked as "SQUARE_SEEN", since of the three "touching" grids nearer to the player than each wall grid, only the farthest of these grids is itself marked "SQUARE_GLOW".
Here are some pictures of the legal "light source" radius values, in which the numbers indicate the "order" in which the grids could have been calculated, if desired. Note that the code will work with larger radiuses, though currently no light source yields such a radius, and the game would become slower in some situations if it did.

    Rad=0     Rad=1      Rad=2       Rad=3
    No-Light  Torch,etc  Lantern     Artifacts

                                       333
                           333        43334
                212       32123      3321233
       @        1@1       31@13      331@133
                212       32123      3321233
                           333        43334
                                       333
Forget the "SQUARE_VIEW" grids, redrawing as needed
References chunk::height, square::info, sqinfo_off, square_isview(), square_light_spot(), chunk::squares, and chunk::width.
Referenced by on_leave_level(), textui_enter_store(), and update_stuff().
A simple, fast, integer-based line-of-sight algorithm.
By Joseph Hall, 4116 Brewster Drive, Raleigh NC 27606. Email to jnh@ecemwl.ncsu.edu.
This function returns TRUE if a "line of sight" can be traced from the center of the grid (x1,y1) to the center of the grid (x2,y2), with all of the grids along this path (except for the endpoints) being non-wall grids. Actually, the "chess knight move" situation is handled by some special case code which allows the grid diagonally next to the player to be obstructed, because this yields better gameplay semantics. This algorithm is totally reflexive, except for "knight move" situations.
Because this function uses (short) ints for all calculations, overflow may occur if dx and dy exceed 90.
Once all the degenerate cases are eliminated, we determine the "slope" ("m"), and we use special "fixed point" mathematics in which we use a special "fractional component" for one of the two location components ("qy" or "qx"), which, along with the slope itself, are "scaled" by a scale factor equal to "abs(dy*dx*2)" to keep the math simple. Then we simply travel from start to finish along the longer axis, starting at the border between the first and second tiles (where the y offset is thus half the slope), using slope and the fractional component to see when motion along the shorter axis is necessary. Since we assume that vision is not blocked by "brushing" the corner of any grid, we must do some special checks to avoid testing grids which are "brushed" but not actually "entered".
Angband has three different "line of sight" type concepts, including this function (which is used almost nowhere), the "project()" method (which is used for determining the paths of projectables and spells and such), and the "update_view()" concept (which is used to determine which grids are "viewable" by the player, which is used for many things, such as determining which grids are illuminated by the player's torch, and which grids and monsters can be "seen" by the player, etc).
References ABS, FALSE, square_isprojectable(), and TRUE.
Referenced by add_monster_lights(), can_call_monster(), drop_near(), monster_list_collect(), project(), scatter(), summon_possible(), and update_view_one().
Mark the currently seen grids, then wipe in preparation for recalculating.
References chunk::height, square::info, sqinfo_off, sqinfo_on, square_isseen(), chunk::squares, and chunk::width.
Referenced by update_view().
Returns true if the player's grid is dark.
References player_can_see_bold(), player::px, and player::py.
Referenced by do_cmd_disarm_aux(), do_cmd_disarm_chest(), do_cmd_lock_door(), do_cmd_open_aux(), do_cmd_open_chest(), player_can_cast(), player_can_read(), search(), and see_floor_items().
Determine if a "legal" grid can be "seen" by the player.
References cave, FALSE, square::info, sqinfo_has, chunk::squares, and TRUE.
Referenced by do_cmd_tunnel_aux(), draw_path(), no_light(), project_feature_handler_KILL_WALL(), ranged_helper(), square_apparent_name(), square_remove_trap(), and target_sighted().
Determine if a "legal" grid is within "los" of the player.
References cave, FALSE, square::info, sqinfo_has, chunk::squares, and TRUE.
Referenced by effect_handler_AGGRAVATE(), effect_handler_PROBE(), effect_handler_PROJECT_LOS(), find_hiding(), find_safety(), get_moves_flow(), monster_check_active(), process_monster_can_move(), process_monster_grab_objects(), project(), project_feature_handler_DARK_WEAK(), project_feature_handler_KILL_DOOR(), project_feature_handler_KILL_TRAP(), project_feature_handler_LIGHT_WEAK(), and update_mon().
Update view for a single square.
References display_feeling(), angband_constants::feeling_need, chunk::feeling_squares, square::info, player_upkeep::only_partial, sqinfo_off, square_isfeel(), square_isseen(), square_light_spot(), square_note_spot(), square_wasseen(), chunk::squares, TRUE, player::upkeep, and z_info.
Referenced by update_view().
update the player's current view
References add_monster_lights(), player_state::cur_light, chunk::height, square::info, loc(), mark_wasseen(), player::px, player::py, sqinfo_on, square_isglow(), chunk::squares, player::state, player::timed, update_one(), update_view_one(), and chunk::width.
Referenced by update_stuff().
Decide whether to include a square in the current view.
References ABS, become_viewable(), distance(), los(), angband_constants::max_sight, square_iswall(), and z_info.
Referenced by update_view(). | http://buildbot.rephial.org/builds/restruct/doc/cave-view_8c.html | CC-MAIN-2019-13 | refinedweb | 2,682 | 58.21 |
A .NET assembly is “signed” if the developer compiled the assembly with the private key of a digital signature. When the system later loads the assembly, it verifies the assembly with the corresponding public key. Occasionally you may need to determine whether an assembly you have loaded has been signed.
If the .NET application or library you are building is signed, then all assemblies referenced by your application at compile time must also be signed. If you add to your signed application a reference to an unsigned assembly, you will receive this error when you compile your application:
Error 1 Assembly generation failed — Referenced assembly ‘XYZ’ does not have a strong name
However, if the application you are building is unsigned, or if you are loading an assembly at run-time, then it’s possible to load an unsigned assembly.
The following code demonstrates how to determine if a loaded .NET assembly is signed. In this example, “MyType” represents the name of any class defined in the assembly that you want to check:
Assembly asm = Assembly.GetAssembly( typeof( MyType ) ); if (asm != null) { AssemblyName asmName = asm.GetName(); byte[] key = asmName.GetPublicKey(); bool isSignedAsm = key.Length > 0; Console.WriteLine( "IsSignedAssembly={0}", isSignedAsm ); }
See .NET Assembly FAQ – Part 3 – Strong Names and Signing for more information about signed assemblies.
How can I know if an assembly has a "Digital Signatures" tab in its properties? I tried to use FileVersionInfo,
In Visual Studio:
1. Select the project.
2. Click menu item “Project > Properties”.
3. Click the “Signing” tab.
4. If the “Sign the assembly” checkbox is checked, then the assembly has a digital signature.
Thank you, man. Sorry I did not clarify my question: I want to know how to do that programmatically
(i.e. given a DLL path, find out whether it has a digital signature tab: right-click > Properties > Digital Signatures).
I don't know what "typeof( MyType )" means.
Could you please give me an example?
And thank you again. :)
I want to get the path for the DLL and then check if it has a digital signature, then get the signer name and the timestamp value:

bool signed(string dll_path)
{
}
MyType is any type that’s defined in the assembly you are trying to check for digital signature.
So if your assembly has defined a Person class:
public class Person { … }
Then using the example in the article, you would access the assembly by:
Assembly asm = Assembly.GetAssembly( typeof( Person ) );
but the its not my dll its ididnt write the class
i have a DLL on my machine
C:myDLL.dll
and i want to know if its signd
and if it was signed i want to know the
“signer name ” and the “time stamp”
i wrote this code and it works fine
but
the “cert ” has alot of data
in it but i cant find
“signer name ” and the “time stamp”
//====================================
public bool signed(string filename)
{
X509Certificate cert = null;
try
{
cert = X509Certificate.CreateFromSignedFile(filename);
}
catch (CryptographicException e)
{
return false;
}
return true;
}
//========================
sorry man for all these questions but i cant find any thing about “time stamp” and “signer name”
and thanx a lot
Hi,
I am stepping in a bit late, but the signer name can be found in the Issuer property of the X509 certificate.
And the certificate has a ValidFrom and ValidTo properties. I’m not sure this is what you’re looking for as a timestamp.
Anyway, you can also retrieve an Assembly object from a file (wihtout using a type) with the Assembly.LoadFile() method.
Regards | https://www.csharp411.com/determine-if-a-loaded-net-assembly-is-signed/ | CC-MAIN-2021-43 | refinedweb | 601 | 59.53 |
How. It makes us really easy for us to to create and manipulate PDF files in ASP.NET code using iTextSharp library. Although many other open source libraries and APIs available for PDF files but I am a good admirer of ITextSharp. I will write a series of articles on PDF and iTextSharp and I will show you that how we can play with PDF files using iTextSharp. This article will be the first of this series and here I will show you to read PDF file using iTextSharp both in C# and VB.NET.
- First you need to download iTextSharp, extract it and include it in your project.
- Create a website in Visual Studio for C# or VB.NET and add a web form to it.
- Add a reference of ITextSharp in your website.
- Add following controls in Web Form for this sample. You need to have a PDF in your drive to upload in your sample website.
- Add following namespaces
C#
VB.NET
- Write below code Button click event
C#
VB.NET
- View in browser, upload PDF file and click button to see text of PDF file which we read using iTextSharp.
- You can download complete code sample from below link.
Best WordPress Themes and Plugins with Great Team and Support!Best WordPress Themes and Plugins with Great Team and Support! | http://getcodesnippet.com/2014/05/09/how-to-read-pdf-file-using-itextsharp-in-asp-net/ | CC-MAIN-2019-51 | refinedweb | 223 | 83.86 |
Every year, the Leuven Statistics Research Center (Belgium) is offering short courses for professionals and researchers in statistics and statistical tools.
The following link shows the overview of the courses: or get it here in pdf:
This year, BNOSAC is presenting the course on Advanced R Programming Topics, which will be held on Oktober 18-19.
This course is a hands-on course covering the basic toolkit you need to have in order to use R efficiently for data analysis tasks. It is an intermediate course aimed at users who have the knowledge from the course 'Essential tools for R' and who want to go further to improve and speed up their data analysis tasks.
The following topics will be covered in detail
– The apply family of functions and basic parallel programming for these, vectorisation, regular expressions, string manipulation functions and commonly used functions from the base package. Useful other packages for data manipulation.
– Making a basic reproducible report using Sweave and knitr including tables, graphs and literate programming
– If you want to build your own R package to distribute your work, you need to understand S3 and S4 methods, you need the basics of how generics work as well as R environments, what are namespaces and how are they useful. This will be covered to help you start up and build an R package.
– Basic tips on how to organise and develop R code and test it.
You can subscribe... | http://www.r-bloggers.com/r-courses-in-belgium/ | CC-MAIN-2015-35 | refinedweb | 241 | 55.68 |
I would like to know if it is a way to up load the positions for an object from an external file, for each frame
You could certainly do this if you wrote your own python script.
hmmm… pseudocode might look something like:
import blender
text = “wherever the text file you want to read is”
delimiter = " what you want to stop the file from being read/split which frame from eg a “;” , a “~”, a “,”, etc"
frame = Blender.Get(“curframe”)
keyframe = (set keyframe API here)
the next step would be to read data and set keyframes based on the text file
I don’t know if that made any sense:eek:
please let me know if that helped or just confused things.
Thanks
-amdbcg | https://blenderartists.org/t/external-source/452914 | CC-MAIN-2021-04 | refinedweb | 123 | 65.43 |
I
Teacher Notes
Teachers! Did you use this instructable in your classroom?
Add a Teacher Note to share how you incorporated it into your lesson..
Participated in the
Arduino Contest 2017
Tarantula3 made it!
62 Discussions
Question 5 months ago
I've run into a roadblock and wonder if anyone has advice.
I've plugged in my API key and channel details, and wifi info.
I can confirm that the API key works by going to...>
which spits out the appropriate JSON.
When I send the program to the board, the Serial Monitor display gets as far as:
WiFi connected
IP address:
192.168.0.175
After that, nothing happens. That is to say, the LEDs don't light up, and nothing further appears in the Serial Monitor.
I've installed version 5.x of ArduinoJson (in accordance with the error messages I got when using version 6.x).
Is there a chance the YouTube API has been updated such that this code is outdated? Becky, does your still function properly to this day?
Thanks so much! I'm so excited by the progress I'm making with this!! :)
N.B. - I opted to the version of the code without WiFi manager, since I don't anticipate a big change with our WiFi setup, and don't mind having to change the code, if that happens.
Answer 7 weeks ago
Hi, I have the same problem with mine, did you get correct it?
Please help me.
Thanks
Reply 7 weeks ago
Yup, I corrected this particular issue by adding the line of code "client.setInsecure();", as detailed in a comment I made below this one.
I hope that helps!
Tip 5 months ago
If using version 2.5.x or later of the ESP8266 manager, I believe you may need to append the setup section to include the line "client.setInsecure();" as follows:
void setup() {
Serial.begin(115200);
for(uint8_t i=0; i<2; i++) { // Initialize displays
disp[i].seg7.begin(disp[i].addr);
disp[i].seg7.clear();
disp[i].seg7.writeDisplay();
}
//);
client.setInsecure();
}
This, according to the code at...
I wasn't able to get stats in the Serial Monitor until I added this line.
Reply 5 months ago
Interesting, thank you for sharing! I haven't tried to use this code using the latest versions.
5 months ago
I've run into a problem - I believe I've ordered the correct ESP-12e board, but when I attach it to my breadboard (which looks to be the same size as the one in your wiring picture), there are no remaining holes to which I can connect wires.
Have a I ordered the wrong ESP-12e??
I could get a bigger breadboard I suppose. But this discrepancy has me worried, so I thought I'd check in first. :) Thanks!
Reply 5 months ago
Oh. I've subsequently found...
Well, darnitall.
So I initially asked for a board at my local store and was given an ESP-01. To which I told the shop staff, "Wow, it's much smaller than in the pictures." My mistake.
And then I just ordered the above-pictured dude via Amazon.ca (Being in Canada, it's best to avoid Amazon.com links, to avoid a delivery that's both slow and expensive).
So, I guess I need something in between!? Gosh, I'm certain building my collection of useless ESP8266's. :(
This could be sage advice for anyone else reading. Follow the Amazon.com link if you can. Cuz this thing does _not_ appear to be standardized.
Reply 5 months ago
Shopping for parts is hard!
Reply 5 months ago
I've subsequently gotten around this issue by using a second breadboard.
6 months ago
Love this! Is there a way to make it work with a tm1637 display? It has the power and negative pins and CLK and DIO.
Using esp8266 board but can’t get anything on the display, even with trying to tweak the code, still quite new at this so any help would be amazing :)
7 months ago
I have been working on recreating your counter for youtube only. I've run into several issues that I have figured out doing research. However, I am now stuck with 3 errors that seem to be connected and cannot find any answers. Please see the errors below. This is your code with WifiManager.
"""
src\main.cpp: In function 'void setup()':
src\main.cpp:68:14: error: 'loadConfig' was not declared in this scope
loadConfig();
^
src\main.cpp:86:16: error: 'saveConfig' was not declared in this scope
saveConfig();
^
src\main.cpp:96:21: error: 'forceConfigMode' was not declared in this scope
forceConfigMode();
^
"""
I am unable to find these methods used in the library files. I'm at a loss. Any and all help is welcome.
ps. Love the content here and on YouTube. Keep it up.
Best Answer 7 months ago
Anytime you get something "not declared in this scope" it's because the compiler can't find the library file, either because you don't have it installed or it's not told to be included at the top of the program. Did you install all the libraries it calls for in the sample code? Brian wrote the wifimanager version, so you might double check with his guide:
Answer 7 months ago
I've double checked and it is installed and included in the script. In the Arduino IDE, it compiles and I get the errors through the serial monitor. In Vscode, it errors out before it's compiled. Could this be a version issue? I'm using the latest version of the WiFiManager library. I'll keep digging in the mean time. Still have yet to find the callouts for the mentioned methods. Thank you.
Answer 7 months ago
These are the 3 errors. I haven't messed with any of the code other than what was required.
Answer 6 months ago
The wifimanager version of this project is Brian's code, I recommend dropping him a question here:...
What version of the Arduino IDE are you using? Did you try the simpler sketch for this project, which hard-codes your credentials? It's at least a starting point...
7 months ago
YouTube Counter v2. I am having issues with nothing happening on the LED. I am sure the coding is correct along with the wires but nothing is happening. How do I check to make sure things are correct? Could it just be my youtube API?
// YouTube Subscriber Counter v2
// by Becky Stern 2018
// Displays your current subscriber count on a seven-segment display
// This version uses two four digit displays to support more than 10k subs
// based on library sample code by:
// Giacarlo Bacchio (Gianbacchio on github)
// Brian Lough (witnessmenow on github)
// Adafruit (adafruit on github)
// requires the following libraries, search in Library Manager or download from github:
#include <Wire.h> // installed by default
#include <Adafruit_GFX.h> //
#include <Adafruit_LEDBackpack.h> //
#include <YoutubeApi.h> //
#include <ArduinoJson.h> //
// these libraries are included with ESP8266 support
#include <ESP8266WiFi.h>
#include <WiFiClientSecure.h>
#include <DNSServer.h>
#include <ESP8266WebServer.h>
#include <WiFiManager.h> //
char ssid[] = "ATTaz55Gs2"; // your network SSID (name)
char password[] = "mywifipassword"; // your network key
//------- enter your API key and channel ID here! ------
#define API_KEY "youtube google API key" // your google apps API Token
#define CHANNEL_ID "UCNhmy9YYBNdCVe3tcM8qsHQ" // makes up the url of channel
// label the displays with their i2c addresses
struct {
uint8_t addr; // I2C address
Adafruit_7segment seg7; // 7segment object
} disp[] = {
{ 0x71, Adafruit_7segment() }, // High digits
{ 0x70, Adafruit_7segment() } // Low digits
};
int subscriberCount; // create a variable to store the subscriber count
WiFiClientSecure client;
YoutubeApi api(API_KEY, client);
unsigned long api_mtbs = 1000; //mean time between api requests
unsigned long api_lasttime; //last time api request has been done
void setup() {
Serial.begin(115200);
for(uint8_t i=0; i<2; i++) { // Initialize displays
disp[i].seg7.begin(disp[i].addr);
disp[i].seg7.clear();
disp[i].seg7.writeDisplay();
}
WiFiManager wifiManager;
wifiManager.autoConnect ("Blackjack");
Serial.println("");
Serial.println("WiFi connected");
Serial.println("IP address: ");
IPAddress ip = WiFi.localIP();
Serial.println(ip);
}
void loop() {
if (millis() > api_lasttime + api_mtbs) {
if(api.getChannelStatistics(CHANNEL_ID))
{
Serial.println("---------Stats---------");
Serial.print("Subscriber Count: ");
Serial.println(api.channelStats.subscriberCount);
Serial.print("View Count: ");
Serial.println(api.channelStats.viewCount);
Serial.print("Comment Count: ");
Serial.println(api.channelStats.commentCount);
Serial.print("Video Count: ");
Serial.println(api.channelStats.videoCount);
// Probably not needed :)
//Serial.print("hiddenSubscriberCount: ");
//Serial.println(api.channelStats.hiddenSubscriberCount);
Serial.println("------------------------");
subscriberCount = api.channelStats.subscriberCount;
uint16_t hi = subscriberCount / 10000, // Value on left (high digits) display
lo = subscriberCount % 10000; // Value on right (low digits) display
disp[0].seg7.print(hi, DEC); // Write values to each display...
disp[1].seg7.print(lo, DEC);
// print() does not zero-pad the displays; this may produce a gap
// between the high and low sections. Here we add zeros where needed...
if(hi) {
if(lo < 1000) {
disp[1].seg7.writeDigitNum(0, 0);
if(lo < 100) {
disp[1].seg7.writeDigitNum(1, 0);
if(lo < 10) {
disp[1].seg7.writeDigitNum(3, 0);
}
}
}
} else {
disp[0].seg7.clear(); // Clear 'hi' display
}
disp[0].seg7.writeDisplay(); // Push data to displays
disp[1].seg7.writeDisplay();
}
api_lasttime = millis();
}
}
Best Answer 6 months ago
This is the more complex version of the sketch, with wifi manager so you don't have to hard-code your wifi credentials. This requires a one-time setup following Brian's instructions, nothing will show until you configure the wifi:...
Did you try the simpler sketch, where you edit the Arduino code to include your credentials? You might try that first, as it doens't require the same setup.
7 months ago
Hey, I am trying to build this. I have installed all the relevant libraries but I am getting the below errors:
C:\Users\chris\Documents\Arduino\libraries\YoutubeApi\src\YoutubeApi.cpp: In member function 'bool YoutubeApi::getChannelStatistics(String)':
C:\Users\chris\Documents\Arduino\libraries\YoutubeApi\src\YoutubeApi.cpp:95:2: error: 'DynamicJsonBuffer' was not declared in this scope
DynamicJsonBuffer jsonBuffer;
^
C:\Users\chris\Documents\Arduino\libraries\YoutubeApi\src\YoutubeApi.cpp:95:20: error: expected ';' before 'jsonBuffer'
DynamicJsonBuffer jsonBuffer;
^
C:\Users\chris\Documents\Arduino\libraries\YoutubeApi\src\YoutubeApi.cpp:96:21: error: 'jsonBuffer' was not declared in this scope
JsonObject& root = jsonBuffer.parseObject(response);
^
C:\Users\chris\Documents\Arduino\libraries\YoutubeApi\src\YoutubeApi.cpp:97:10: error: 'ArduinoJson::JsonObject' has no member named 'success'
if(root.success()) {
^
exit status 1
Error compiling for board NodeMCU 1.0 (ESP-12E Module).
Reply 6 months ago
This error is complaining about not being able to find the YouTube API library, so perhaps it's not installed in the correct place, or maybe your code is missing the #include line that references it. What version of the Arduino IDE are you using?
7 months ago
I also made it : | https://www.instructables.com/id/YouTube-Subscriber-Counter-With-ESP8266-V2/ | CC-MAIN-2019-35 | refinedweb | 1,793 | 60.01 |
curl_getenv - return value for environment name
#include <curl/curl.h>
char *curl_getenv(const char * name );
curl_getenv() is a portable wrapper for the getenv() function, meant to emulate its behaviour and provide an identical interface for all operating systems libcurl builds on (including win32).
This function will be removed from the public libcurl API in a near future. It will instead be made "available" by source code access only, and then as curlx_getenv()..
Under unix operating systems, there isn't any point in returning an allocated memory, although other systems won't work properly if this isn't done. The unix implementation thus have to suffer slightly from the drawbacks of other systems.
getenv (3C)
This HTML page was made with roffit. | http://maemo.org/api_refs/3.0/connectivity/curl/curl_getenv.html | CC-MAIN-2016-36 | refinedweb | 121 | 54.83 |
Hi
I am new in indesign scripting
I want to create indesign Application using java script and without using ExtendScript ToolKit
In this case how create app (the application object).
if i use ExtendScript ToolKit then there is no need to create application object. app is bydefault
But i do not want to use of ExtendScript ToolKit .
I want to create simple java script using
<script type="text/javascript">
//body
</script>
when i run my java script (.js) file then it create indesign Application (instance).
For example on mac in Apple Script when i run my apple script
tell application "Adobe InDesign CS5.5"
set myDocument to make document with properties
end tell
in apple script editor which is inbuild with mac, Indesign Application start(create) sucessfully.
There is no need any ExtendScript ToolKit type tool to run (start) indesign application.
only script line tell application "Adobe InDesign CS5.5" create application instance.
In VBScript we create application object using this :
In Java Script how create Indesign Application Object.
I try to create Indesign Application object but not sucess.
<!DOCTYPE HTML>
<html>
<head>
<title>Testing JavaScript</title>
</head>
<body>
>
</body>
</html>
How create Indesign application Object using JavaScript ?
You seem to be confused. ExtendScript is not embedded in an HTML document. It is saved as an independant file with a js or jsx extension.
You do not need to create an InDesign object. It is created automatically. Refer to the InDesign object using the global app variable.
Harbs
Thanks Harbs for reply
Plz give me a simple .js example which run without ExtendScript ToolKit and create Indesign Application and one textframe in it.
if I save below script as .js extention and run it after double click on it then it give undefined app error.
>
I want to create indesign application using java script but do not want to use ExtendScript ToolKit.
Thanks.
What do you mean by not using ExtendScript Toolkit? The ESTK is not required. Script files are saved as plain text inside your Script Panel folder and run from the script panel.
Like I said, there's no need to define "app". Here is the full correct version of your script:
var myDocument = app.documents.add(); var myTextFrame = myDocument.pages.item(0).textFrames.add(); myTextFrame.geometricBounds = ["6p", "6p", "24p", "24p"]; myTextFrame.contents = "Hello World!";
Are you trying to run the script from outside InDesign?
Thanks Harbs for reply
Yes I want to run the script from outside Indesign.
For example on Mac os i use appleScript editor to run appleScript
tell application "Adobe InDesign CS5"
set myDocument to make document
end tell
My Indesign application start sucessfully.
I want to simply run my java script and want to create Indesign Application.
I do not want to put my script in Script Panel folder.
Like on Mac os , i want to create on window os using java script.
Simple run script in any java script editor (except ExtendScript ToolKit) and create Application.
Is it possible ?
Thanks
No. ExtendScript can only be run within the host application.
You can run an AppleScript which can execute an ExtendScript using do script.
Harbs
That's NOT quite correct Harb's… You head the script with the #target and you can double click it in the UI see Toolkit:
#target name Defines the target application for this JSX file. The name value is an application specifier; see Application and namespace specifiers. Enclosing quotes are optional.
If the Toolkit is registered as the handler for files with the .jsx extension (as it is by default), opening the file opens the target application to run the script. If this directive is not present, the Toolkit loads and displays the script. A user can open a file by double-clicking it in a file browser, and a script can open a file using a File object’s execute method.
As of either CS4 or CS5 you need to put the script in a TRUSTED place… on the mac
~/Documents/Adobe Scripts/YourScript.jsx // Then just alias it from that…
Um.
Well, yeah. ESTK can run scripts as well because it's a host application too...
If that helps the OP, than you're right, but he specifically said he did not want to run from ESTK.
from what i understand, he wants to create a standalone app that will use javascript to send some commands to indesign.
Harb's I think you misunderstood me… Head the file with #target indesign not the ESTK… Your snippet plus this will launch Indesign if NOT running and will execute without opening the ESTK
#target indesign var myDocument = app.documents.add(); var myTextFrame = myDocument.pages.item(0).textFrames.add(); myTextFrame.geometricBounds = ["6p", "6p", "24p", "24p"]; myTextFrame.contents = "Hello World!"
When I tried this I got indesign to open but no new page with a Hello World! written in the textframe. However, when I clicked the script again when indesign WAS open then the page loaded with the textframe.
Thanks for Reply
When I tried this I got a Script Aleart message.
"You are about to run a script in Adobe indesign CS5.5. You should only run script from a trusted source.Do you want to run the script" Yes, No.
when I click on yes then Indesign application start sucessfully and create a text frame with Hello World!.If I click on No then Script open in ExtendScript ToolKit.
I want to skip (or hide) this Script Aleart message and run the script directly without prompt this Aleart message. How hide this Script Aleart message ?
Then you should also read post 6 where I said where the TRUSTED location is… Then you won't get the dialog…
Thank you very much Mark for Quick reply
I put my script C:\Documents and Settings\Administrator\My Documents\Adobe Scripts\new.jsx on window os.
It run script sucessfully without prompt script alert message.
Thanks.
Thanks Mark for reply
I put my script at Trusted location C:\\Documents and Settings\\Administrator\\My Documents\\Adobe Scripts\\NewSript.jsx
Now I want to execute my script using below method but it not execute and return message "No documents open".
#include "indesign.h"
INDESIGN::_Application oApplication;
if(oApplication.CreateDispatch(L"InDesign.Application") ==0) // it start Indesign Application
{
// Dispatch is not created
}
CString ScriptPath =L"C:\\Documents and Settings\\Administrator\\My Documents\\Adobe Scripts\\NewSript.jsx";
oApplication.DoScript(COleVariant(ScriptPath),long(1246973031),covOpti onal,long(1699967573),ScriptPath);
It prompt message "No document open". While I double click on NewScript.jsx it run sucessfully.
General syntex of oApplication.DoScript() is
any doScript (script: varies[, language: ScriptLanguage=ScriptLanguage.UNKNOWN][, withArguments: Array of any][, undoMode: UndoModes=UndoModes.SCRIPT_REQUEST][, undoName: string=Script])
Executes the script in the specified language as a single transaction.
I use long(1246973031) for The java script language.
I do not understand where is problem.
my .jsx file data is
var myDocument = app.activeDocument;
var myTextFrame = myDocument.pages.item(0).textFrames.add();
myTextFrame.geometricBounds = ["6p", "25p", "30p", "30p"];
myTextFrame.contents = "Adobe Indesign !"
Thanks | http://forums.adobe.com/message/4689123 | CC-MAIN-2013-48 | refinedweb | 1,169 | 59.19 |
It’s an old saying that computers aren’t really smart, because they only do what we tell them to do. But now you can actually tell your Raspberry Pi to do something smart — to take control of connected devices in your home — using only your voice. It’s not hard — you can do it easily using some open source software and a Raspberry Pi. Just add an Arduino with an infrared (IR) LED and you can tell your Roomba what to do, too.
Shall We Say a Game?
This project is made possible by years of research by scores of scientists, engineers, and linguists around the world working to enable real-time voice recognition that can run on modest hardware — the sort of advances that have brought us Siri on Apple devices and the voice recognition capabilities built into Google’s Android.
Specifically, I’ll show you how to use an open source speech recognition toolkit from the Speech Group at Carnegie Mellon University called PocketSphinx, designed for use in embedded systems. This amazing library allows us to delegate the heavy lifting of turning sound into text, so we can focus on implementing higher-level logic to turn the text into meaningful actions, like commanding the lighting systems in your home and even your Roomba vacuum cleaner. It also allows us, as Makers, to get under the hood and experiment with aspects of a speech recognition engine that are usually reserved for those implementing the engines or studying them in academia.
Preparing the Pi
For about $35, the Raspberry Pi, a credit-card-sized computer, has enough power to translate your spoken commands into computer commands for home automation and robotics.
Before you can use the Raspberry Pi for speech recognition, you need to make sure you’re able to record sound from your USB microphone. The Raspbian distribution is configured to use the Advanced Linux Sound Architecture (ALSA) for sound. Despite its name, this framework is quite mature and will work with many software packages and sound cards without alteration or configuration difficulties. For the specific case of the Pi, there are a few tweaks we need to make to ensure that the USB card is preferred over the built-in audio.
First plug in your USB headset and power on the Pi. Once everything has finished loading, you can check that your USB audio is detected and ready to use by running the aplay -L command. This should display the name of your card such as in our example: Logitech H570e Stereo, USB Audio. If your USB sound card appears in this list then you can move on to making it the default for the system. To do this, use your favorite text editor to edit the alsa-base.conf file like so:
sudo nano /etc/modprobe.d/alsa-base.conf
You need to change the line options snd-usb-audio index=-2 to options snd-usb-audio index=0 and add a line below it with options snd_bcm2835 index=1. Once done, save the file and sudo reboot the Pi to use the new configuration.
To test your changes, use the arecord -vv --duration=7 -fdat ~/test.wav command to record a short 7-second piece of audio from the microphone. Try playing it back with aplay ~/test.wav and you should hear what you recorded earlier through your USB headphones. If not, try playing back a prerecorded sound such as aplay /usr/share/sounds/alsa/Front_Center.wav to determine if the issue lies with your microphone or speakers. (The internet will be a great help when troubleshooting these issues.)
If you heard your recording, great! You’re ready to move on to the software setup.
Compiling the Prerequisite Software
As we are standing on the shoulders of software giants, there are a few packages to install and several pieces of software to compile before your Pi is ready to host the voice control software.
First, go get the packages required for SphinxBase by executing:
sudo apt-get install libasound2-dev autoconf libtool bison \
swig python-dev python-pyaudio
You’ll also need to install some Python libraries for use with our demo application. To do this, you’ll install and use the Python pip command with the following commands:
curl -O
sudo python get-pip.py
sudo pip install gevent grequests
TIP: If your connection to the Pi is a bit flaky and prone to disconnects, you can save yourself some heartache by running these commands in a screen session. To do so, run the following before continuing.
sudo apt-get install screen
screen -DR sphinx
If at any stage you get disconnected from your Pi (and it hasn’t restarted) you can run screen -DR sphinx again to reconnect and continue where you left off.
Obtaining the Sphinx Tools
Now you can go about getting the SphinxBase package, which is used by PocketSphinx as well as other software in the CMU Sphinx family.
To obtain SphinxBase execute the following commands:
git clone git://github.com/cmusphinx/sphinxbase.git
cd sphinxbase
git checkout 3b34d87
./autogen.sh
make
(At this stage you may want to go make coffee …)
sudo make install
cd ..
You’re ready to move on to PocketSphinx. To obtain PocketSphinx, execute the following commands:
git clone git://github.com/cmusphinx/pocketsphinx.git
cd pocketsphinx
git checkout 4e4e607
./autogen.sh
make
(Time for a second cup of coffee …)
sudo make install
cd ..
To update the system with your new libraries, run sudo ldconfig.
Testing the Speech Recognition
Now that you have the building blocks of your speech recognition in place, you’ll want to test that it actually works before continuing.
Now you can run a test of PocketSphinx using pocketsphinx_continuous -inmic yes.
You should see something like the following, which indicates the system is ready for you to start speaking:
Listening...
Input overrun, read calls are too rare (non-fatal)
You can safely ignore the warning. Go ahead and speak!
When you’re finished, you should see some technical information along with PocketSphinx’s best guess as to what you said, and then another READY prompt letting you know it’s ready for more input.
INFO: ngram_search.c(874): bestpath 0.10 CPU 0.071 xRT
INFO: ngram_search.c(877): bestpath 0.11 wall 0.078 xRT
what
READY....
At this point, speech recognition is up and running. You’re ready to move onto the real fun of creating your custom voice control application!
Control All the Things
For our demo application, I’ve programmed our system to be able to control three separate systems: Philips Hue and Insteon lighting systems, and an iRobot Roomba robot vacuum cleaner. With the first two, you’ll communicate via a network-connected bridge or hub. For the third, you’ll communicate with an Arduino over a USB serial connection; the Arduino then translates your commands into infrared (IR) signals that emulate a Roomba remote control.
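The serial protocol between the Pi and the Arduino is up to you; one simple scheme sends a single byte per remote-control button, which the Arduino replays as the matching IR waveform. The byte values and port name below are placeholders for illustration, not real Roomba remote codes:

```python
# Pi side of a one-byte-per-command serial protocol (a sketch).
# The code values are arbitrary placeholders -- your Arduino sketch
# defines what each byte means and emits the corresponding IR signal.
ROOMBA_CODES = {
    "clean": 0x01,      # placeholder, not a real IR code
    "power": 0x02,      # placeholder
    "go home": 0x03,    # placeholder
}

def build_roomba_command(name):
    """Return the single-byte payload for a named command."""
    return bytes([ROOMBA_CODES[name]])

# Sending it would look like (requires the third-party pyserial package):
# import serial
# port = serial.Serial("/dev/ttyACM0", 9600)  # port name is an assumption
# port.write(build_roomba_command("clean"))
```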
If you just want to dive into the demo application and try it out, you can use the following commands to retrieve the Python source code and run it on the Raspberry Pi:
git clone
cd makevoicedemo
python main.py
At this stage you should have a prompt on the screen telling you that your Pi is ready for Input. Try saying one of the commands — “Turn on the kitchen light” or “Turn off the bedroom light” — and watch the words as they appear on the screen. As we haven’t yet set up the configuration.json file, the kitchen light should still be off.
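As a starting point, a configuration.json for the demo might look something like the following. The exact field names depend on the demo code, so treat these keys as illustrative and check main.py for the schema it actually reads:

```json
{
  "hue_bridge_ip": "192.168.1.10",
  "hue_username": "your-hue-api-username",
  "insteon_hub_ip": "192.168.1.11",
  "roomba_serial_port": "/dev/ttyACM0"
}
```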
Using PocketSphinx
There are several modes that you can configure for PocketSphinx. For example, it can be asked to listen for a specific keyword (it will attempt to ignore everything it hears except the keyword), or it can be asked to use a grammar that you specify (it will try to fit everything it hears into the confines of the grammar). We are using the grammar mode in our example, with a grammar that’s designed to allow us to capture all the commands we’ll be using. The grammar file is specified in JSGF or JSpeech Grammar Format which has a powerful yet straightforward syntax for specifying the speech that it expects to hear in terms of simple rules.
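For the command set used in this project, a JSGF grammar might look something like this (a hand-written sketch; the demo's actual grammar file may differ in rule names and layout):

```
#JSGF V1.0;

grammar commands;

public <command> = turn (on | off) the (kitchen | bedroom) light |
                   turn (on | off) the roomba |
                   roomba clean |
                   roomba go home;
```

Anything the recognizer hears will be forced into one of these alternatives, which is what makes grammar mode so reliable for a small, fixed command set.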
In addition to the grammar file, you’re going to need three more files in order to use PocketSphinx in our application: a dictionary file which will define words in terms of how they sound, a language model file which contains statistics about the words and their order, and an acoustic model which is used to determine how audio correlates with the sounds in words. The grammar file, dictionary, and language model will all be generated specifically for our project, while the acoustic model will be a generic model for U.S. English.
Generating the Dictionary
In order to generate our dictionary, we will be making use of lmtool, the web based tool hosted by CMU specifically for quickly generating these files. The input to lmtool is a corpus file which contains all or most of the sentences that you would like to be able to recognize. In our simple use case, we have the following sentences in our corpus:
turn on the kitchen light
turn off the kitchen light
turn on the bedroom light
turn off the bedroom light
turn on the roomba
turn off the roomba
roomba clean
roomba go home
You can type these into a text editor and save the file as corpus.txt or you can download a readymade version from the Github repository.
Now that you have your corpus file, go use lmtool. To upload your corpus file, click the Browse button which will bring up a dialog box that allows you to select the corpus file you just created.
Then click the button Compile Knowledge Base. You’ll be taken to a page with links to download the result. You can either download the compressed .tgz file which contains all the files generated or simply download the .dic file labeled Pronunciation Dictionary. Copy this file to the same makevoicedemo directory that was created on the Pi earlier. You can rename the file using the command mv *.dic dictionary.dic to make it easier to work with.
While you’re at it, download the prebuilt acoustic model from the Sphinx Sourceforge. Once you’ve moved it to the makevoicedemo directory, extract it with:
tar -xvf cmusphinx-en-us-ptm-5.2.tar.gz.
Creating the Grammar File
As I mentioned earlier, everything that PocketSphinx hears, it will try and fit into the words of the grammar. Check out how the JSGF format is described in the W3C note. It starts with a declaration of the format followed by a declaration of the grammar name. We simply called ours “commands.”
We have chosen to use three main rules: an action, an object, and a command. For each rule, you’ll define “tokens” which are what you expect the user to say. For example, the two tokens for our action rule are TURN ON and TURN OFF. We therefore represent the rule as:
<action> = TURN ON |
TURN OFF ;
Similarly the _object_ rule we define as:
<object> = KITCHEN LIGHT|
BEDROOM LIGHT|
ROOMBA ;
Finally, to demonstrate that we can nest rules or create them with explicit tokens, we define a command as:
public <command> = <action> THE <object> |
ROOMBA CLEAN |
ROOMBA GO HOME ;
Notice the public keyword in front of the <command>. This allows us to use the <command> rule by importing it into other grammar files in the future.
Initializing the Decoder
We are using Python as our programming language because it is easy to read, powerful, and thanks to the foresight of the PocketSphinx developers, it’s also very easy to use with PocketSphinx.
The main workhorse when recognizing speech with PocketSphinx is the decoder. In order to use the decoder we must first set a config for the decoder to use.
from pocketsphinx import *
hmm = 'cmusphinx-5prealpha-en-us-ptm-2.0/'
dic = 'dictionary.dic'
grammar = 'grammar.jsgf'
config = Decoder.default_config()
config.set_string('-hmm', hmm)
config.set_string('-dict', dic)
config.set_string('-jsgf', grammar)
Once this is done, initializing a decoder is as simple as decoder = Decoder(config).
For the example application, we’re using the pyAudio library to get the user’s speech from the microphone for processing by PocketSphinx. The specifics of this library are less important for our purposes (investigating speech recognition) and we will therefore simply take it for granted that pyAudio works as advertised.
The specifics of obtaining the decoder’s text output are a bit complex, however the basic process can be distilled down to the following steps.
\# Start an 'utterance'
decoder.start_utt()
\# Process a soundbite
decoder.process_raw(soundBite, False, False)
\# End the utterance when the user finishes speaking
decoder.end_utt()
\# Retrieve the hypothesis (for what was said)
hypothesis = decoder.hyp()
\# Get the text of the hypothesis
bestGuess = hypothesis.hypstr
\# Print out what was said
print 'I just heard you say:"{}"'.format(bestGuess)
Those interested in learning more about the gritty details of this process should turn their attention to the pocketSphinxListener.py code from the example project.
There are a lot of different configuration parameters that you can experiment with, and as previously mentioned, other modes of recognition to try. For instance, investigate the -allphone_ci PocketSphinx configuration option and its impact on decoding accuracy. Or try keyword spotting for activating a light. Or try a statistical language model, like the one that was generated when we were using the lmtool earlier, instead of a grammar file. As a practitioner you can experiment almost endlessly to explore the fringes of what’s possible. One thing you’ll quickly notice is that PocketSphinx is an actively developed research system and this will sometimes mean you need to rewrite your application to match the new APIs and function names.
Now that we’ve covered what’s required to turn speech into text let’s do something interesting with it! In the next section we will look at some rudimentary communications with the Insteon and Philips Hue networked lights.
Let There Be GET /Lights HTTP/1.1
Over the years, countless systems have been designed, built, and deployed to turn on and off the humble light bulb. The Insteon and Philips Hue systems both have this capability and both also have so much more. They both speak over wireless protocols, with the Insteon system having the added advantage of also communicating over a house’s power lines. Communicating directly with the bulbs in both of these systems would make for some epic hacks, however for the time being we’ve set our sights a little lower and have settled for communicating through a middleman.
An Insteon “hub” for home automation
A Philips Hue “bridge” for home automation
Both the systems come with a networked “hub” or “bridge” which performs the work required to allow devices on the network, such as smartphones with the respective apps installed, to communicate commands to the lights.
It turns out, both of these systems also have HTTP-based APIs which we can utilize for our example voice-controlled system. And both companies have developer programs that you can join to take full advantage of the APIs:
» Philips Hue Developer Program
» Insteon Developer Program
For those who like to earn their knowledge a little more “guerrilla style” there are plenty of resources online that explain the basics of communicating with the hubs from both of these manufacturers. Many tinkerers with published web articles on the subject learned their secrets through the age-old skills of careful inspection, network analysis, and reverse engineering. These skills are invaluable as a Maker and these systems both offer plenty of experience for those willing to work for it.
The example project has a minimal set of Python commands that can be used to communicate with both the older Insteon 2242-222 Hub and the current Philips Hue bridge to get you started.
I Command Thee Robot
Roomba, prepare to do my bidding!
To round out our menagerie of colorful devices we have also commandeered an Arduino Leonardo, which we have connected to an infrared LED and programmed to send commands to our iRobot Roomba robot vacuum cleaner.
The Arduino connects to the Raspberry Pi using a USB cable which both supplies power and allows for serial communications. We’re using the IRremote library to do the heavy lifting of blinking the IR LED with the appropriate precise timing.
The DIY infrared LED circuit connected to the Arduino. You could use a lot smaller perfboard.
In order for the library to have something to communicate with, we need to connect some IR transmission circuitry to our Arduino on the appropriate pins. The hardware can be boiled down to a transistor, IR LED, and a resistor or two, which can be connected on a breadboard or stripboard and connected to the Arduino. For our example build we tested both the SparkFun Max Power IR LED Kit (since discontinued) and a minimalist IR transmitter setup of a 330-ohm resistor, a 2N3904 NPN transistor, and a through-hole IR LED connected to the Arduino via three male headers.
We’re using the IRremote library to allow us to compose and send raw IR signals to the Roomba which emulate the Roomba’s remote control. These signals are a series of encoded pulses whose timing correspond with binary signals. When encoded for transmission by the library they look something like the following from our example sketch:
// Clean
const unsigned int clean[15] = {3000,1000,1000,3000,1000,3000,1000,3000,3000,1000,1000,3000,1000,3000,1000};
// Power Button
const unsigned int power[15] = {3000,1000,1000,3000,1000,3000,1000,3000,3000,1000,1000,3000,3000,1000,1000};
// Dock
const unsigned int dock[15] = {3000,1000,1000,3000,1000,3000,1000,3000,3000,1000,3000,1000,3000,1000,3000};
In order for the Roomba to receive and understand these signals, I found it best to send them four times, which we do with the following sendRoombaCommand function.
void sendRoombaCommand(unsigned int* command){
for (int i = 0; i < 4; i++){
irsend.sendRaw(command, 15, 38);
delay(50);
}
}
By including the IRremote library in your Arduino code, you can send raw infrared (IR) signals to command a Roomba.
Once the IR hardware has been connected to the appropriate pins and the sketch compiled and uploaded through the Arduino IDE, you’re able to send commands over the serial connection that will be translated into something the Roomba can understand. Binary truly is becoming the universal language!
All Together Now
So there you have it, a complete system to control Insteon lights, Philips Hue lights, and an iRobot Roomba with nothing but the sound of your voice!
Raspberry Pi (in black enclosure) + Arduino + infrared LED = complete hardware for yelling orders at your lights and robots.
This wouldn’t be possible without the generous contribution of the many open source projects we have used. This example project is also released as open source so that you can refer to it when implementing your own voice control projects on the Raspberry Pi and remix parts of it for your next project. We can’t wait to see what you come up with — please tell us what you make in the comments below.
Supporting Materials
Speech Group
PocketSphinx
SparkFun Max Power IR LED Kit
RaspPi USB Soundcards
JSGF
lmtool
Python
Insteon
Philips Hue
HTTP
APIs
IRremote
SparkFun Max Power IR LED Kit
Minimalist IR transmitter | http://makezine.com/projects/use-raspberry-pi-for-voice-control/ | CC-MAIN-2017-30 | refinedweb | 3,269 | 60.55 |
I'm using Create React App and Yarn workspaces and have all my code in packages. Here is an example of my workspace structure.
workspace
apps
src
app1
packages
common
components
a
b
c
Component1
d
e
f
Component2
I do my builds from `workspace/apps/src/app1` so relative paths are all from there. Component2 needs to import Component1 and since they're in the same package IntelliJ imports using this.
import Component1 from '../../../a/b/c/Component1';
If I change Settings -> Editor -> Code Style -> JavaScript -> Imports -> 'Use paths relative to the project, resource or sources roots' on it will do this
import Component1 from 'components/a/b/c/Component1';
This doesn't work since I'm building in `workspace/apps/src/app1` and it tries to find the file in `src/app1/components/a/b/c/Component1`
In `packages/common/package.json` I have "name": "@company/common" and I need to have the import be this even if the file is in the same package.
import Component1 from '@company/common/components/a/b/c/Component1';
I've tried all kinds of things and no luck. How can I do this?
In monorepo projects, package imports ('@company/common/components/a/b/c/Component1') are used when importing modules from different packages, but files from the same package are imported using relative paths. This is expected, and there is no way to change this... If you need it to work differently, please file a feature request to youtrack, providing a sample project
Should I file to WebStorm or IntelliJ and also what's the best way to share a sample project?
To WebStorm -
You can attach a sample project to youtrack ticket, or provide a link to github (or dropbox, or whatever file server you prefer)
A little change of direction and I've changed my Yarn workspace (monorepo) to have an index.js in each package root folder. In doing this I want my imports to work like this:
Using alt-Enter only provides an option to import using:
I've tried all kinds of settings in Settings -> Editor -> Code Style -> JavaScript -> Imports and can't get it to use the first way. How can I get IntelliJ to always use imports from index.js instead of deep imports to the actual source?
Sample project is here (make sure you're on the cra-fork branch).
Open the project in IntelliJ and using a command prompt run 'yarn' to download the dependencies. Open 'apps/app-shared/src/App.js' and comment out the line 4 import. Put your cursor on '<Header' and press alt-Enter and it only offer the import style on line 3.
looks similar to, please follow it for updates | https://intellij-support.jetbrains.com/hc/en-us/community/posts/360002793900-JavaScript-absolute-import-including-package-name | CC-MAIN-2021-04 | refinedweb | 454 | 61.56 |
IRC log of tagmem on 2002-11-11
Timestamps are in UTC.
19:52:11 [RRSAgent]
See
19:52:16 [DanC]
RRSAgent, bye
19:52:16 [RRSAgent]
I see no action items
19:52:25 [RRSAgent]
RRSAgent has joined #tagmem
19:52:32 [RRSAgent]
See
19:54:02 [Chris]
ChAcl says there are no resources there
19:55:48 [DanC]
try*,access*
19:55:49 [Zakim]
TAG_Weekly()2:30PM has now started
19:55:55 [Zakim]
+Ian
19:56:20 [MJDuerst]
MJDuerst has joined #tagmem
19:56:25 [DaveO]
DaveO has joined #tagmem
19:56:38 [Zakim]
+Chris
19:58:03 [Zakim]
+Norm
19:58:21 [Norm]
Norm has joined #tagmem
19:58:49 [DaveO]
I will not be on the call until I see something in IRC that I need to be involved in. I'm currently dialed into the wsdl wg meeting.
19:59:23 [Norm]
Is that an ongoing, weekly conflict for you, DaveO?
19:59:48 [Chris]
dan - yes, that url was what gave me the error message but I was trying to avoid putting the url into the log
19:59:57 [Stuart]
dialing
20:00:29 [Zakim]
+??P2
20:00:36 [Norm]
zakim, ??p2 is Stuart
20:00:37 [Zakim]
+Stuart; got it
20:01:48 [Zakim]
+??P3
20:01:58 [Ian]
zakim, ??P3 is Paul
20:01:59 [Zakim]
+Paul; got it
20:02:05 [Chris]
zakim, ??P3 is Paul
20:02:06 [Zakim]
sorry, Chris, I do not recognize a party named '??P3'
20:02:33 [PaulC]
PaulC has joined #tagmem
20:02:35 [Zakim]
+DanC
20:02:54 [tim-gone]
tim-gone has joined #tagmem
20:03:20 [Zakim]
+TBray
20:03:44 [Ian]
Present: SW, DC, CL, PC, NW, TB, IJ
20:03:49 [Ian]
Regrets: RF
20:03:56 [Ian]
Lurking: DO
20:04:05 [Zakim]
+TimBL
20:04:26 [Ian]
SW Chair, IJ Scribe
20:04:32 [Ian]
Present: SW, DC, CL, PC, NW, TB, IJ, TBL
20:04:40 [Ian]
Accept 4 Nov minutes?
20:04:42 [DaveO]
Norm, it isn't an ongoing conflict.
20:04:46 [Ian]
20:04:50 [DaveO]
Just a F2F.
20:04:54 [TBray]
TBray has joined #tagmem
20:05:15 [Ian]
SW: Accepted 4 Nov minutes (no dissent).
20:05:27 [Ian]
20:05:32 [Ian]
20:06:35 [DanC]
hmm... why at the end, rather than where it is? " 1. xlinkScope-23"
20:06:37 [Ian]
PC: Please leave a few minutes to clarify where xlink/hlink discussions are taking place, whether we can expect a reply from the HTML WG.
20:07:00 [DanC]
Zakim, remind us in 65 minutes to discuss xlinkScope-23
20:07:01 [Zakim]
ok, DanC
20:07:19 [Ian]
----------
20:07:21 [Ian]
Meeting planning
20:07:50 [Ian]
[No regrets for TAG ftf meeting but from RF; DC half day]
20:08:05 [Ian]
20:08:32 [Ian]
[Agenda, meeting info]
20:08:56 [Ian]
IJ: Please use AC registration form for things like AC reception night of TAG ftf meeting.
20:09:01 [Ian]
---------------
20:09:06 [Ian]
Presentations at AC meeting.
20:09:14 [Ian]
See tag@w3.org archive for links to drafts of slides.
20:10:00 [Ian]
TB: Presentations need to be brutally short
20:10:38 [Ian]
IJ: Talks should be no longer than 10 minutes each (to leave 30 minutes for discussion)
20:10:57 [Ian]
CL: Refer from xlink summary from slides.
20:11:37 [Zakim]
+DOrchard
20:12:22 [Ian]
TB: Add Stuart's xlink writeup to AC package.
20:12:55 [Ian]
PC: These slides will exist long after AC meeting and be referenced from agenda; helpful to point into reading material for more info.
20:13:00 [Ian]
PC: Use unobtrusive URIs
20:13:17 [Ian]
CL: Put code examples in other pages.
20:13:25 [Chris]
and link to them
20:14:52 [Ian]
TB: Can my slides stay on textuality?
20:17:18 [DanC]
odd thing about the slidemaker is that the Overview.html doesn't point to the all.htm file... I gotta get that fixed sometime.
20:17:32 [Ian]
SW: Please make comments on slides on the mailing list.
20:17:44 [Ian]
IJ: I will put people's html source on the web and run slidemaker over it.
20:18:32 [Ian]
Action IJ: Talk to Comm Team about three TAG contributions to AC rep dossier: summary, xlink summary, arch doc.
20:18:50 [Ian]
Martin, can you join?
20:19:04 [MJDuerst]
Yes. please give me a minute.
20:19:55 [Zakim]
+Martin
20:20:18 [DanC]
pub deadline is 13Nov per announcement 4Sep.
20:20:20 [Ian]
Issue 1. IRIEverywhere-27
20:20:24 [Ian]
20:20:39 [Stuart]
q?
20:20:45 [Ian]
TB: I found a draft IRI 02, published today. Is that the one to look at?
20:20:54 [Chris]
url of that draft, please?
20:21:49 [Ian]
20:21:58 [Ian]
(Path from TAG issues list right to it...)
20:22:07 [Ian]
[NW summarizes the issue]
20:22:09 [MJDuerst]
officially at
20:22:18 [Ian]
NW: Should W3C docs refer to IRIs in the future?
20:22:52 [Ian]
[Some sense that issues 15 and 17 are joined at the hip]
20:23:07 [MJDuerst]
20:23:08 [Ian]
TB: Martin, what's the 50k view of this issue?
20:23:13 [Chris]
clear deendency, not the same issue though
20:23:26 [Ian]
MD: IRIs in concept have been around as far back as 1995 and 1996.
20:23:54 [Ian]
MD: We have been active lately on a draft.
20:24:25 [Ian]
MD: Area director at IETF said that when we think it's ready to go to last call, he will issue a last call in the IESG as well.
20:24:52 [Ian]
MD: We've received a lot of comment on the draft through the years. Lately, comments have been "move on with this"
20:25:04 [TBray]
q+
20:25:42 [DanC]
one test case, in a question from RDFCore to XML Core, 14May2002
20:25:46 [Chris]
MD: yes, IRI should be used everywhere
20:25:56 [Ian]
MD: My position (and that of the I18N WG, I think) is expressed by the Character Model spec: you should use IRIs basically everywhere. I personally think that IRIs will pop up in practice more readily.
20:26:07 [Chris]
MD: already in use, but underspacified
20:26:29 [Ian]
ack DanC
20:26:30 [Zakim]
DanC, you wanted to ask if the I18N WG is maintaining a test collection to go with the IRI draft
20:26:33 [Chris]
MD: less likely to see in XLink role attribute etc, but popular on web pages
20:26:46 [Ian]
MD: There is a test collection (currently 1 test).
20:26:49 [Chris]
MD: test collection - one test!
20:27:24 [Ian]
MD: We have a "test" for bidi.
20:27:51 [Ian]
MD: I tried to have a lot of examples; if you see places where more examples would be helpful, please tell us.
20:28:00 [Ian]
TB: General remarks:
20:28:23 [Chris]
MD: the one example helped get consensus between Mozilla, Opera and Microsoft
20:28:24 [Ian]
TB: 1) Whether IRIs are a good idea or not, I have a concern about the instability of the current IRI spec.
20:28:25 [Zakim]
-DOrchard
20:28:38 [Ian]
TB: So process issue about pointing to the spec.
20:29:04 [Ian]
PC: Relationship to charmod needs to be explicit.
20:29:31 [Ian]
TB: 2) Software needs to know whether it's dealing with an IRI or URI.
20:29:37 [DanC]
yeah, I'm getting to the point where my technical concerns are addressed, and the dominant issue is process: what to cite as an IRI spec? [and please split charmod in 3 parts]
20:29:53 [Chris]
q+
20:30:04 [Stuart]
ack TBray
20:30:14 [Ian]
TB: 3) I still have major heartburn about the case issue; examples are so non-sensible (uppercase E7 diff from lowercase e7 gives me heartburn).
20:30:24 [Ian]
TB: 4) There are parts of the IRI spec that I just didn't understand.
20:30:59 [Ian]
TB: There may be additional work required to reveal some unspoken assumptions.
20:31:37 [Ian]
CL: There are a number of ways to deal with the case-sensitivity of hex escapes (CL lists three possibilities).
20:32:00 [DanC]
I prefer "you SHOULD use %7e; %7E is NOT RECOMMENDED"
20:32:01 [Ian]
MD: On relationship to Charmod: At some point, some pieces of the IRI draft were in Charmod (e.g., conversion procedure).
20:32:13 [Ian]
MD: But we decided to separate the specs; Charmod points to IRI draft.
20:32:23 [Chris]
a) allow %7e and %7E, say they are exactly equivalent, but no implication that hello and Hello are equivalent
20:32:39 [Chris]
b) allow both, say they are different (yuk)
20:32:44 [Ian]
MD: Charmod says "W3C specs should/must use IRIs where URIs would be used"
20:32:57 [Chris]
c) only allow %7e, %7E is invalid
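[Scribe note: the options above can be illustrated with a small sketch. The following Python is purely illustrative and not from the IRI draft; the helper names are invented. It shows option (a): `%7e` and `%7E` compare equal because only the hex digits of escapes are normalized, while ordinary characters such as `e` and `E` remain case-sensitive.]

```python
import re

# Normalize only percent-escapes (e.g. "%7E" -> "%7e"); everything
# else is left untouched, so character case still matters.
def normalize_escapes(uri: str) -> str:
    return re.sub(r"%[0-9A-Fa-f]{2}", lambda m: m.group(0).lower(), uri)

# Option (a): equivalence is character-by-character comparison after
# escape normalization.
def equivalent(a: str, b: str) -> bool:
    return normalize_escapes(a) == normalize_escapes(b)

print(equivalent("http://ex.org/%7Euser", "http://ex.org/%7euser"))  # True
print(equivalent("http://ex.org/Hello", "http://ex.org/hello"))      # False
```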
20:33:15 [timbl2]
q+ to wonder All non-canonical-utf8 URIs are not valid URIs? UTF-8 equivalent URIs are considered equivalent? Or are IRIs just like URIrefs - strings for indirectly giving a URI in an actual document.
20:33:21 [Ian]
MD: For Xpointer, separate issue about encoding/decoding using UTF-8.
20:33:22 [Chris]
ack ChrisL
20:33:32 [Chris]
ack Chris
20:33:37 [Stuart]
ack Chris
20:33:59 [Ian]
MD: Charmod can't advance without the RFC.
20:34:18 [Ian]
[There are several people who suggest splitting charmod; moving forward is one reason.]
20:34:30 [DanC]
yes, please split charmod in 3; did we (the TAG) request that, Chris? have you heard back?
20:34:31 [Chris]
which is why I suggested splitting charmod into several pieces
20:34:45 [Chris]
yes, we did request that and no, we have not heard back
20:35:08 [Chris]
q+
20:35:39 [Ian]
MD: We think this is a URI issue first (case of hex escapes); once decided for URIs, do the same thing for IRIs.
20:36:01 [Ian]
MD: On the clarity of the IRI spec, please don't hesitate to send comments.
20:37:36 [Ian]
TB: Could the IRI draft assert that in hex escaping, lowercase must always be used?
20:37:51 [DanC]
that seems silly, TBray; you're going to pretend there are no URIs/IRIs that use upper-case %7E?
20:38:01 [DanC]
or that all of them are wrong
20:38:02 [Ian]
MD: Current deployment is different - some places use uppercase.
20:38:02 [DanC]
?
20:38:37 [DanC]
"canonical form"?
20:38:42 [Chris]
hence my suggestion to decouple case insensitivity of hex escapes (which are not characters) from case insensitivity of characters
20:39:03 [MJDuerst]
Chris: that goes without saying
20:39:14 [Chris]
but yes, drawback is an extra layer of processing, however light, beyond binary string comparison
20:39:28 [TBray]
couldn't insist on upper or lower case for URIs, but could conceivably for IRIs
20:39:37 [Ian]
TBL: Will IRIs have the same role as URI references?
20:39:40 [Chris]
martin, anything which is important enough to go without saying had probably better be said ;-)
20:39:51 [Ian]
TBL: Same space of identifiers, but just a syntax convention?
20:40:08 [Stuart]
q?
20:40:10 [MJDuerst]
but for IRIs, it isn't that important. It's important when converting from IRI to URI,
20:40:13 [Ian]
TBL: What is being proposed fundamentally: where do IRIs fit in?
20:40:18 [Stuart]
ack TimBl
20:40:19 [Zakim]
Timbl, you wanted to wonder All non-canonical-utf8 URIs are notvalid URIs? UTF-8 equivalent URIs are consisered equivalent? Or are IRIs just like URIrefs - strings for indirectly
20:40:20 [TBray]
q+
20:40:22 [Zakim]
... giving a URI in an actual document.
20:40:28 [Ian]
CL: Maybe we should just propose that the IRI editors get on with it.
20:41:06 [Ian]
CL: When I proposed that %7e and %7E be made equivalent, I was not proposing that "e" and "E" be equivalent.
20:41:21 [Ian]
(i.e., the ascii characters "e" and "E").
20:42:01 [timbl2]
%7e is 3 characters in a IRI but 1 character in a URI
20:42:09 [DanC]
er... %7E is three chars in the URI spec so far
20:42:11 [Ian]
[One model of URIs is that this is just a syntax issue: whether you use hex escapes or other character representation in the string.]
20:43:02 [MJDuerst]
if possible, IRI and URI should be as similar as possible, except for the larger repertoire
20:43:08 [MJDuerst]
of characters that can be used in IRI
20:43:11 [Ian]
[Comparison of URIs is character-by-character. Question of whether "%" as part of "%7e" is a character, or whether "%7e" is the character.]
20:43:32 [DanC]
the URI
has 12 characters in it.
20:43:36 [Stuart]
q+ to ask if there will be a new Schema datatype for IRI
20:43:56 [Chris]
cool! namespaces says compare *on characters* so declare hex escapers as not characters. like ncrs in xml
20:44:34 [Ian]
TBL, DC read the URI spec in a way that says that "%" is a character; since in that spec characters are ASCII.
20:44:38 [DanC]
ok, but hex escapers have not, yet, been so declared.
20:44:51 [TBray]
q?
20:45:02 [Chris]
ack Chris
20:45:18 [Ian]
TBL: There are a number of ways to go from here. I think that even if you define equivalence in the IRI spec, you need to have a warning in the URI spec.
20:45:41 [Ian]
MD: You could also say that when you convert from IRI to URI you always use lowercase.
20:45:57 [Chris]
use lowecase *for hex escapes* (clarification)
20:46:12 [Chris]
seconded
20:46:17 [Ian]
[Martin didn't say "for hex escapes" but I assume that he meant that.]
20:46:27 [Ian]
TB: We should say that IRIs are a good idea.
20:46:28 [MJDuerst]
yes.
20:46:29 [Chris]
TimBray: propose IRIs are a good idea
20:46:39 [Ian]
TB: We should not tell W3C WGs to use IRIs until they are baked.
20:47:01 [Ian]
TB: In the arch doc we should say "Don't hex escape things that don't need escaping. Use lowercase when you do."
20:47:16 [DanC]
yes, that is: the space of resource identifiers should/can/does use the repository of Unicode characters.
20:47:17 [Chris]
(but he did say "when converting from IRI to URI" which implies hexification)
20:47:18 [Ian]
TB: I think these are things we could do today usefully.
20:47:21 [Stuart]
q?
20:47:31 [Stuart]
ack TBray
20:47:46 [Stuart]
ack DanC
20:47:51 [MJDuerst]
q+Martin
20:47:59 [Ian]
DC: I am comfortable with the idea of agreeing to use more than 90 characters in an identifier.
20:48:37 [Ian]
DC: Character space of URIs should be Unicode.
20:48:52 [timbl2]
q+ to propose we encourage Martin in doing URIs and move on, and ask to know when there is a well-defined relationship between the URI and IRI.
20:48:53 [Ian]
DC: When you are naming resources, you should not be limited to 90 some characters.
20:48:58 [Ian]
ack Stuart
20:48:59 [Zakim]
Stuart, you wanted to ask if there will be a new Schema datatype for IRI
20:49:06 [Ian]
SW: Will we get help from Schema datatypes?
20:49:07 [DanC]
90 is not related to the length
20:49:13 [Stuart]
ack Stuart
20:49:29 [Ian]
DC: The schema type is anyURI. Its lexical space is unconstrained.
20:49:40 [Ian]
DC: There might be a thing or two (e.g., spaces).
20:49:46 [Ian]
MD: Only a problem if you make a list type.
20:49:55 [Ian]
DC: But you can have a list of strings, so dealt with.
20:50:39 [MJDuerst]
I think value space and lex space are IRI, but a mapping to URIs is given by a pointer to XLink
20:51:04 [MJDuerst]
XLink has the main part of the conversion from IRI to URI, but not the details
20:51:45 [Ian]
DC: In HTTP, you need to escape spaces.
20:52:05 [Ian]
DC: There are no URIs with spaces in them.
20:52:20 [Ian]
TBL: So anyURI is already an IRI-like thing.
20:52:21 [Chris]
no URIs, or no HTTP URIs?
20:52:23 [Stuart]
ack Martin
20:52:27 [DanC]
reading
...
20:52:46 [Chris]
is
Documents a URI?
20:52:57 [TBray]
is anyURI architecturally broken because of lack of clarity as to whether it's a URI or IRI?
20:53:10 [Ian]
MD: Some specs are already referring to preliminary versions of the IRI spec. I think that we shouldn't tell WGs to delete their refs and replace them later; just to upgrade when appropriate.
20:53:18 [DanC]
"An anyURI value can be absolute or relative, and may have an optional fragment identifier (i.e., it may be a URI Reference)."
20:53:19 [Ian]
TBL: I am against the TAG spending time on something fluffy.
20:53:25 [Chris]
all URIs are IRIs
20:53:51 [DanC]
illegal, equivalent, or NOT RECOMMENDED.
20:53:55 [Ian]
TBL: Until we clarify these issues, we should not emphasize their use yet.
20:54:18 [Chris]
IRI is not really 'fluffy'. It just needs to make some decisions and ship.
20:54:22 [Stuart]
MD Agree on the case thing.
20:54:35 [Ian]
MD: Earlier URI specs talked about equivalence, but practice went in other directions.
20:54:36 [DanC]
phpht. can't find a specification of anyURI lexical->value mapping.
20:54:50 [Stuart]
q?
20:55:13 [ndw]
ndw has joined #tagmem
20:56:10 [Chris]
DC:any breakage is not recent
20:56:28 [Chris]
TBL: should we work on "URI are broken"
20:56:39 [Chris]
CL: No, I18N WG is on it
20:56:49 [Chris]
TBL: No, they are not, Martin just said so
20:56:57 [Chris]
Stuart: next steps?
20:57:46 [Chris]
TB: Universe of resource identifiers should be unicode characters
20:58:16 [Chris]
TB: Say 'we approve of IRI work'
20:58:23 [Zakim]
-Norm
20:59:07 [Chris]
TB: Should *not* say to WGs to drop URI and go for IRI because IRI is not final yet
20:59:53 [Chris]
PC: Important what TAG says, we should be careful what we are stating or seen to state
21:00:14 [Chris]
TB: Do not suggest that all specs should be using IRI now
21:00:28 [Chris]
MD: For href, XLink already uses the
21:00:47 [DanC]
IRIs are already in HTML 4. XHTML 1, XLink, RDF 1.0x
21:00:58 [DanC]
... and XML Schema
21:01:03 [Chris]
CL: existing Recs say the same stuff
21:01:07 [Ian]
DC: XML Schema cites XLink
21:01:26 [Chris]
this ID is taking stuff from existing Recs so that future Recs can all point to one place
21:01:39 [Ian]
TB: We could assert in the arch doc that it must be crystal clear when referring to resource ids whether you are talking about URIs or something else.
21:02:01 [Chris]
TB: Must be crystal clear when software has to deal with URI or IRI - software must not have to guess
21:02:09 [Ian]
TB: "When prescribing resource identifiers, a spec MUST be clear about whether it's talking about URIs or something else; don't make software guess."
21:02:19 [Ian]
TBL: A lot of people will think that IRIs are different from URIs.
21:02:41 [Chris]
TBL: Confusion similar to URIrefs, people will think IRI is different to URI.
21:02:49 [Chris]
Specs should use the IRI production
21:02:58 [Chris]
TBL: Specs should use the IRI production
21:03:25 [Ian]
TBL: I think we should write the whole lot based on a clean IRI proposal.
21:03:25 [Stuart]
ack TimBL
21:03:27 [Zakim]
Timbl, you wanted to propose we encourage Martin in doing URIs and move on, and ask to know when there is a well-defined relationship between the URI and IRI.
21:03:31 [Chris]
TBL: we should write up the issue once there is a final IRI spec
21:04:01 [Ian]
DC: What's the estimate for building a test collection?
21:04:08 [Chris]
DC: how long to get a test collection together?
21:04:09 [Ian]
DC: TB has some cases, I have a few.
21:04:21 [Stuart]
ack DanCon
21:04:32 [Stuart]
ack DanC
21:04:33 [Zakim]
DanC, you wanted to say that a test collection is top on my wish-list for this stuff
21:05:11 [Ian]
MD: Test cases are on the top of my list.
21:05:23 [Ian]
DC: It would take me about 4 months; need to get consensus around test cases.
21:05:28 [Ian]
DC: That takes time.
21:05:46 [timbl2]
How many of the following are true? For every IRI there is a corresponding URI? For every URI there existys a single IRI? All URIs before this spec are still valid after this spec? If two URIs are ASCIIchar for char identical then the equivalent IRIs are uniced char for char compatible? etc etc etc...
21:06:09 [Ian]
DC: What should the namespaces 1.x spec say?
21:06:54 [Ian]
TB: Not appropriate for namespaces 1.x to go to IRIs today.
21:06:57 [Chris]
TimBL, I note that three of your questions are about URI to IRI mapping, wheras the data flow is the other way
21:07:07 [Ian]
DC: But software is perfectly happy today with IRIs (in my experience).
21:07:41 [Ian]
TB: I don't think it's ok for namespaces 1.x to point to Unicode today; I think it's appropriate *today* to point to RFC2396.
21:07:43 [MJDuerst]
q+Martin
21:07:55 [Ian]
DC: So what should software do when it gets an IRI?
21:08:03 [Ian]
TB: I would expect software not to notice.
21:08:27 [Ian]
SW: This topic on our agenda Monday morning.
21:08:32 [Ian]
(at ftf meeting)
21:08:57 [DanC]
hmm... morning of the ftf... I gotta find a proxy for my position on this then.
21:09:03 [Chris]
IETF Proposed Standard good enough for W3C specs to reference?
21:09:06 [Ian]
MD: I can attend ftf meeting Monday morning to talk about this.
21:09:51 [Ian]
MD: I'd like the TAG to tell us how to address the case issue.
21:09:57 [Ian]
CL: Can't you just pick one approach?
21:10:26 [Ian]
MD: Current approach is that uppercase and lowercase are different in escapes, and SHOULD convert to lowercase.
21:10:45 [Chris]
21:10:47 [DanC]
that current approach is what I prefer.
21:10:55 [Chris]
on our reading list for f2f
21:11:47 [Zakim]
-Martin
21:11:52 [Ian]
=======================
21:11:55 [Ian]
Arch doc
21:12:00 [Zakim]
DanC, you asked to be reminded at this time to discuss xlinkScope-23
21:12:57 [timbl2]
My question was, are the guarantees which the spec gives mentione din the spec? Guaranteews of consistency etc?
21:13:13 [Ian]
IJ: To get arch doc to TR page, can we resolve big issues here, then I will incorporate and get ok's from two TAG participants.
21:13:18 [MJDuerst]
Tim, the spec doesn't give any guarantees. You need implementations for that.
21:13:22 [timbl2]
IRI spec
21:14:03 [DanC]
"consistency etc" leaves a lot of room.
21:14:48 [Ian]
IJ: What needs to be done?
21:15:22 [Ian]
SW: On URI terminology, can we commit to consistency on what RFC2396 becomes?
21:15:25 [Ian]
q+
21:15:29 [MJDuerst]
Stuart, Ian: I have noted that Roy won't come to the f2f. Does he plan to call in by phone?
21:15:30 [Ian]
ack Martin
21:15:55 [Ian]
IJ: I wouldn't want to commit to something that doesn't exist yet.
21:15:58 [Ian]
CL, DC: Agreed.
21:15:59 [Chris]
no, need to see it
21:16:04 [MJDuerst]
If Roy plans to call in for some time, it would definitely be good to have him for the IRI and casing
21:16:14 [Ian]
[Agreement that terminology shouldn't diverge.]
21:16:15 [MJDuerst]
discussion, but then 9:00 would be very early for him.
21:16:22 [Stuart]
q?
21:16:35 [Ian]
ack Ian
21:16:41 [Ian]
SW: I can live without such a statement, then.
21:17:16 [Ian]
DC: RF has released an internet draft of the URI spec with the non-controversial changes. He is working on the next draft, where we will have to defend our position.
21:17:50 [Ian]
DC: I wouldn't emphasize reading this draft (if you're only going to read this spec once).
21:18:30 [Ian]
TB: I can commit to reading it and providing feedback.
21:18:35 [DanC]
is good enough for me.
21:18:45 [Ian]
DC: 7/11 draft is good enough for me. Enough of an improvement that I endorse publication.
21:18:48 [Ian]
PC: +1
21:19:21 [Ian]
DC: Please be conservative about changes.
21:19:31 [Ian]
IJ: I may insert editors notes.
21:20:07 [Ian]
Action item review:
21:20:12 [Ian]
1. Action CL 2002/09/25: Redraft section 3, incorporating CL's existing text and TB's structural proposal (see minutes of 25 Sep ftf meeting on formats).
21:20:14 [Ian]
CL: Please continue
21:20:17 [Ian]
# Action DC 2002/11/04: Review "Meaning" to see if there's any part of self-describing Web for the arch doc.
21:20:19 [Ian]
DC: Please continue.
21:20:21 [Ian]
======================
21:20:22 [Chris]
ok, I will send my edits (for my action item) for the *next* publication
21:20:30 [Ian]
XLink scope
21:20:36 [Ian]
21:20:49 [Ian]
PC: I have some concerns that we aren't in the center of discussion on this ite.
21:20:51 [Ian]
item
21:21:44 [Ian]
PC: We haven't yet received comments back on what we sent to the HTML WG.
21:21:58 [Ian]
PC: Are we going to engage with the HTML WG?
21:23:14 [Ian]
[Some discussion on communication with other groups.]
21:23:43 [Ian]
TBL: I think that HTML WG thinks they've made their point.
21:23:59 [Ian]
SW: I have sent email on two occasions to the HTML WG but not have not gotten a reply from Steven.
21:24:39 [Chris]
q+
21:24:44 [Ian]
DC: We've not invited the HTML WG to participate on www-tag.
21:24:52 [Stuart]
q?
21:25:10 [Ian]
SW: A message was sent to the HTML WG list, but didn't reach the archive.
21:25:30 [Chris]
www-html-editors but not in archives. norm has a recipt though
21:25:45 [DanC]
indeed... can't find it in
21:26:13 [Ian]
TB: I think we've done the right thing. I presume that they're busy.
21:27:22 [Ian]
PC: As far as I'm concerned, there's no point that this be on our ftf agenda since we've had no feedback.
21:27:40 [Chris]
zakim, queue?
21:27:41 [Zakim]
I see Chris on the speaker queue
21:27:45 [Chris]
zakim, queue?
21:27:46 [Zakim]
I see Chris on the speaker queue
21:28:00 [Chris]
zakim, queue?
21:28:01 [Zakim]
I see Chris on the speaker queue
21:28:08 [Ian]
DC: We don't have a message from Steven on behalf of the WG.
21:28:34 [timbl2]
For the HTML WG,
21:28:34 [timbl2]
Steven Pemberton
21:28:34 [timbl2]
Chair
21:28:34 [Ian]
SW: Yes, we do. The first message was on behalf of the WG; I have asked for confirmation from Steven that this is still their reply.
21:28:39 [Ian]
ack Chris
21:28:56 [timbl2]
For the HTML WG,
21:28:57 [timbl2]
Steven Pemberton
21:28:57 [timbl2]
Chair
21:29:06 [Ian]
CL: I think the HTML WG owes us a response since we sent a request to their list.
21:29:19 [timbl2]
For the HTML WG,
21:29:20 [timbl2]
Steven Pemberton
21:29:20 [timbl2]
Chair
21:29:38 [timbl2]
Message senbt 26 spe 2002
21:29:50 [Ian]
CL: There are also other WGs we should be discussing this with.
21:29:51 [timbl2]
to www-tag
21:30:00 [Ian]
q?
21:30:14 [Chris]
CL: HTML wg is not the only client of hypermedia linking
21:30:25 [Stuart]
21:30:28 [Ian]
PC: I'm concerned that more of a plan isn't in place for how to take this question forward.
21:30:50 [Ian]
PC: One answer is to wait until the Tech Plenary.
21:31:10 [Ian]
CL: I expect the Tech PLenary to produce a plan, not the technical solution, however.
21:31:11 [Chris]
its a long way off, in march
21:31:38 [Chris]
so that date pretty much ensures that HTML WG will not use the results, if any, of the march meeting
21:32:09 [Ian]
TBL: I think the TAG has a duty to solve this issue; I don't think that discussion has been moved out of the TAG.
21:33:30 [Ian]
TB: I know that several of us have put a lot of work into discussion on www-tag. I sympathize with PC's concern, and agree with TBL that new technical arguments have been brought forward and consensus not yet achieved.
21:33:43 [Ian]
TB: I think SW has done the right thing asking the HTML WG where we stand.
21:34:00 [Ian]
SW: Does the TAG hold the same opinion as formulated at the ftf meeting?
21:34:22 [Ian]
SW: I've had no commentary yet on the summary.
21:34:58 [Ian]
TB: Mimasa pointed HTML WG to the summary on 28 Oct; no commentary from them yet.
21:35:24 [Ian]
TB: Thus, I think we should not drop this, but should not proceed far in the face of no new info from the HTML WG.
21:35:31 [Ian]
SW: Should we spend time on this at the ftf meeting?
21:35:38 [Ian]
TB: SW's summary is cogent.
21:35:42 [Ian]
DC: But contains no proposal.
21:36:27 [Ian]
TBL: TAG could comment on some arguments that SW has summarized.
21:36:34 [DanC]
gee... it's only a 1-day ftf; if somebody wants xlink23 on there, I'd like that somebody to make a proposal.
21:37:29 [Zakim]
-TBray
21:37:32 [Ian]
ADJOURNED
21:37:38 [Zakim]
-Stuart
21:37:39 [Zakim]
-Ian
21:37:40 [Ian]
RRSAgent, stop | http://www.w3.org/2002/11/11-tagmem-irc | CC-MAIN-2017-17 | refinedweb | 5,332 | 79.3 |
Syntax:
#include <deque> TYPE& at( size_type loc ); const TYPE& at( size_type loc ) const;
The at() function returns a reference to the element in the deque at index loc. The at() function is safer than the [] operator, because it won't let you reference items outside the bounds of the deque.
For example, consider the following code:
deque<int> v( 5, 1 ); for( int i = 0; i < 10; i++ ) { cout << "Element " << i << " is " << dq[i] << endl; }
This code overrunns the end of the deque, producing potentially dangerous results. The following code would be much safer:
deque<int> v( 5, 1 ); for( int i = 0; i < 10; i++ ) { cout << "Element " << i << " is " << dq.at(i) << endl; }
Instead of attempting to read garbage values from memory, the at() function will realize that it is about to overrun the deque and will throw an exception.
Related Topics: [] operator | http://www.cppreference.com/wiki/stl/deque/at | crawl-002 | refinedweb | 144 | 65.05 |
Pure Storage CEO swap: Dietzen out, Giancarlo inPure Storage
Pure Storage CEO Scott Dietzen resigned unexpectedly Thursday, as the all-flash pioneer named former Cisco executive Charlie Giancarlo its new leader.
Dietzen will stay at Pure as chairman, although his exact role has not been determined. Dietzen said the CEO change was his idea, and there is no reason to suspect he was pushed out. The move was disclosed during Pure’s earnings report, which beat expectations for revenue growth.
As Pure Storage CEO, Dietzen built the flash vendor up to a projected $1 billion in revenue for 2017, close to 1,900 employees, and 3,700 customers.
“This was my call,” Dietzen said on the Pure earnings call. “I had a wonderful run, thanks to an extraordinarily gifted team. I’ve been in the job seven years and we have done some great things. And as I look to the road ahead for Pure, I felt we needed a different class of experience in operating at scale.”
Dietzen said he expected a search for his successor as Pure Storage CEO to take longer than it did, but “Charlie came to the top quickly. Charlie has phenomenal experience operating at scale having been part of Cisco when it went from $1 billion to well over $40 billion in revenues. And he’s an entrepreneur, having built a start-up, having participated in hyper-growth at Cisco, having been on the boards of great companies like ServiceNow and Arista through their growth phase. So, I think he is exactly the right leader at the right time.”
On his blog, Dietzen called the choice to step down “without a doubt, the hardest decision I’ve made in my career.”
Giancarlo did not tip his hand on his plans as the Pure Storage CEO. He made a statement at the start of the earnings call but did not take questions.
“I have considered many CEO opportunities over the past couple of years,” Giancarlo said. “What inspired me about the Pure vision was the opportunity to contribute to build a great multibillion-dollar and independent public company, which has the opportunity to become the global leader in data platforms. I expect to do a great deal of listening in these next few weeks and months. While I will not be taking questions on today’s call, I look forward to sharing details about my observations and priorities in the next earnings call.”
Giancarlo has spent more than 30 years as an executive and director at IT vendors, mostly with networking and telecom-related companies. Most notably, he worked at Cisco from 1994-2008 as executive vice president, chief development officer and president of its Linksys division.
Giancarlo was considered a candidate to replace John Chambers as Cisco CEO but he left in 2008 after Chambers said he intended to stick around for years. Chambers stayed on as CEO until 2015, turning the position over to Chuck Robbins.
After leaving Cisco, Giancarlo became interim CEO at telecom vendor Avaya for six months in 2008 and remains on Avaya’s board. He has been a managing director at private equity firm Silver Lake Partners since 2008, and sits on the boards of Accenture, Arista Networks, Attivo Networks and ServiceNow. Before joining Cisco, he founded Telecom Systems and Adaptive Corp., and went to Cisco from Ethernet switching company Kalpana through acquisition.
Pure Storage CEO swap follows positive revenue quarter
Pure’s revenue of $224.5 million last quarter grew 38% from last year and came in near the high point of its guidance. Pure executives said they remain on track for $1 billion in revenue for the year and they expect the fourth quarter of 2017 to be their first profitable quarter. Pure still has a ways to go on profit, however, after losing $62 million last quarter.
Besides trying to achieve profitability, Giancarlo will face other challenges as Pure transforms from startup to a large storage company. While Pure rode a flash wave that has taken over networked storage, it now faces another set of disruptors that threaten all storage vendors. These include the cloud, hyper-converged infrastructure and software-defined storage.
Emerging storage technologies pose challenges to new Pure boss
Pure’s cloud strategy is to sell the underlying storage for cloud providers, and to help enterprises build private clouds and connect their on-premises storage to public clouds in hybrid setups.
Pure claims more than 600 cloud providers as customers, contributing more than one-quarter of its revenue. Dietzen said Pure is also working on adding the capability to allow enterprise customers to stream applications between on-premises arrays and public clouds.
“Pure is delivering the data platform for the cloud era,” Dietzen said.
Dietzen brushed off the threat of hyper-convergence, saying hyper-converged appliances address different types of use cases than Pure’s all-flash arrays. He said for now, there is plenty of room for both.
“There is no question, I think hyper-converged and Pure have been the two big disrupters in the market,” he said. “But I will say we are mostly operating in different segments. If you add up all of our competition with hyper-converged infrastructure, we are seeing them in less than five percent of our engagements.”
Dietzen also played down the need for software-only products, pointing out that software is the key to Pure’s technology but its customers want it packaged on the right type of hardware.
“A pure software packaging is hard to achieve today because there is still a great amount of variability in the underlying flash,” he said. “Each new generation of flash even from a single fab changes in behavior pretty significantly, and so we are constantly tuning our software to take best advantage of each generation of the flash technology.”
Giancarlo faces other issues as Pure Storage CEO. They include whether the vendor should expand beyond its FlashArray block storage and FlashBlade unstructured data platforms, and if it should explore acquisitions to grow. Pure execs so far have refused to address those questions, leaving them for the new guy.
Druva pulls in $80 million for data management software
Druva today said it has raised another $80 million in funding, bringing its total investments into the range of $200 million for the fast-growing data management software vendors.
The Sunnyvale, Calif.-based company claims to have more than 4,000 worldwide customers that include NASA, Pifzer, NBC Universal, Marriot hotels, Stanford University and Lockheed Martin. The latest funding round was led by Riverwood Capital but Sequoia Capital India, Nexus Venture Partners and Tenaya Capital also participated.
Dave Packer, Druva’s vice president of product and alliance marketing, said the money will be used to further expand its sales, marketing, research and development as it moves into the growing cloud data management market. Druva scored $51 million in new private financing back in October 2016, and used that to diversify its cloud backup platform and accelerate global marketing and sales.
“A lot of that has not been spent,” Packer said of the previous funding. “A large portion of the (newest) investment is going to ramp up engineering. On an engineering standpoint, (we want) to supply a single control plane for end-to-end backup, recovery and resilience.”
Druva sells two branded cloud backup products that will serve as the foundation for its data management software portfolio. The enterprise-level Druva inSync product is for endpoints and it backs up data across physical and public cloud storage. The Druva Phoenix is a software agent used to back up and restore data sets in the cloud for distributed physical and virtual servers. Pheonix applies global deduplication at the source level and points archived server backups at a cloud target.
Last February, Druva upgraded inSync with tools to detect ransomware attacks and help recover clean data. The endpoint software detects strange behavior patterns. The company recently announced its Druva Cloud Platform that provides a unified control pane for data management across endpoints, servers and cloud data. It works as a service model.
“This will allow for a more on-demand model,” Packer said of the Cloud Platform. “Instead of providing two different products, they can be put under a single control plane. A consolidation of services also provides a greater level of capabilities. Users can have a single point of access to all the data.”
Parker said in the last six years companies have seen even more silos and disparate locations data stores that have gotten even more complicated with cloud adoption, driving up the need for data management software.
“What happened with organizations over time is you have all these disparate silos of data which are not connected,” he said. “Your organization is growing but at the expense of a centralized data plan. They have not been able to reconcile that. In fact, the data center is no longer the center of their data. So we need centralized policy management.”
In the past two years, Druva has set up subsidiaries in Japan and Germany and opened offices in the United Kingdom, Australia and Singapore. The data protection vendor set up Microsoft Azure and Amazon Web Services (AWS) cloud data centers in Canada, the United Kingdom and Hong Kong.
The company has positioned its data management software to go up against traditional backup vendors CommVault and Veritas Technologies, which also are transitioning into broad-based data management players. It’s also competing with startup Rubrik, which has raised a total of $292 million in funding since 2015 for cloud data management.
Druva executives have stated their goal to do an Initial Public Offering (IPO) by the end of this year, assuming they hit their revenue targets. Druva in 2015 claimed its revenues grew more than 100% year-over-year for five straight years.
C.
Net.
Unitrends backup appliance products get hyped for NutanixStorage
Unitrends backup appliance products will go where some vendors have gone before — supporting the Nutanix Acropolis Hypervisor – while adding a cloud twist.
The company’s Recovery Series backup appliances and Unitrends Backup virtual appliances will feature integration for the Acropolis Hypervisor (AHV). Unitrends is extending its core data center backup and recovery capabilities for Nutanix to the purpose-built Unitrends Cloud.
That cloud backend separates Unitrends from other partners, said Joseph Noonan, vice president of product management. Unitrends also offers the flexibility to protect all hypervisors that run on Nutanix. In addition, Unitrends supports VMware, Hyper-V and Citrix XenServer hypervisors.
Organizations can back up from Nutanix appliances directly to Unitrends appliances or to an external NAS device.
Veeam, Commvault, Rubrik and Comtrade Software are among the other data protection vendors that recently launched or will launch backup for Nutanix AHV.
Unitrends is looking to broaden its customer base with AHV support. Noonan said only a small percentage of Unitrends’ 19,000 customers use Nutanix. The Unitrends backup appliance line has a lot of midmarket customers, and Noonan said he hopes the AHV integration brings in more small to medium enterprises.
Noonan said he is seeing organizations that are early to adopt newer technologies going hyper-converged.
“It significantly reduces footprint for them,” he said. “It’s more about TCO and simplicity.”
He said customers are also looking to reduce infrastructure costs of VMware licensing.
Joseph Noonan
Nutanix executives said at the vendor’s .NEXT 2017 user conference in June that using AHV can help customers save money by avoiding VMware enterprise license agreements, even if Nutanix HCI software and appliances are considered pricey. Nutanix offers AHV as part of its hyper-converged platform with no licensing costs.
On the cloud level, Unitrends backup appliance products also integrate with Microsoft Azure and Amazon Web Services. But there are gaps with the big cloud providers, especially as they relate to small and medium enterprises, Noonan said.
“We see the Unitrends Cloud being a better fit,” Noonan said, pointing to stronger service-level agreements, holistic support, scalability, and total cost of ownership and cost predictability.
Unitrends has been in the Nutanix Elevate Technology Alliance Program since October, supporting joint customers.
“Now we’re extending it more to integration,” Noonan said.
The Unitrends backup appliance integration with the Nutanix AHV will be available later this year.
MozyEnterprise backup tool finds a new key for security
Dell EMC’s Mozy has unlocked a new encryption key security feature for its enterprise backup product.
MozyEnterprise now provides support for the Key Management Interoperability Protocol (KMIP), which automatically generates per-user encryption keys that can be managed through an on-premises key management server (KMS).
The.
Flash Memory Summit 2017: Flash really on fireflash storage
Any claims that “flash is on fire” at Flash Memory Summit 2017 this week drew awkward glances, nervous laughs or groans. That’s because one flash system literally caught fire, causing the exhibition hall at the Santa Clara Convention Center to close for the entire show.
The Innodisk booth caught fire hours before Flash Memory Summit 2017 opened Tuesday morning. Damage from the fire and water from the sprinkler system that doused it prompted fire marshals to order the exhibition floor closed for the entire three-day show.
The show went on, with meetings and dozens of keynotes and panel sessions discussing all things flash for three days. Product launches went out as scheduled but the shutting of the exhibit hall disappointed vendors who planned demos of new and emerging products.
Fire marshals have not identified the cause of the fire.
Demonstrations that were never demonstrated included the Kaminario K2.N NVMe array due to ship in spring of 2018 and E8 Storage’s shipping D-24 rack-scale NVMe array as well as its coming X24 arrays. Newcomer Liqid wanted to show off what it calls a bare-metal composable infrastructure system using hardware from OneStop Systems.
Other products scheduled for demos included Toshiba NVMe over Fabric software, several new Intel SSDs, Mellanox NVMe over Fabrics devices, Everspin 1 GB and 2 GB DDR4 form factor MRAM devices, and a host of Samsung products including a reference “Mission Peak” 1U server that can store 576 TB of SSD capacity with new form factor 16 TB drives.
“We wanted to show that we’re real, and our stuff is battle tested,” said Julie Heard, E8 Storage’s director of technical marketing.
Flash Memory Summit 2017 wasn’t a complete waste for E8. The team won a best-of-show award for most innovative flash memory technology and showed off its Game of Thrones-knockoff “Game of LUNs” poster.
Poster we did for Flash Memory Summit: "A Game of LUNs" @E8Storage #FMS2017 #nvme pic.twitter.com/Ms683fI347
— Zivan Ori (@ZivanOri) August 9, 2017
Other notable Flash Memory Summit 2017 award winners included Western Digital for NAND flash, CNEX Labs and Brocade for storage networking, Excelero for software-defined storage, and Attala Systems Inc. for storage system.
Primary Data DataSphere upgrade follows funding grabData Analytics, Primary Data
Primary Data has beefed up storage analytics and cloud migration in its DataSphere virtualization platform. Now the startup is ready to dip into a fresh stash of cash totaling $40 million to heighten its profile in enterprise data storage.
PrimaryData DataSphere 2.0, released this week in early access, builds on previous editions oriented mostly for application development. The latest version embeds an artificial intelligence-based storage analytics engine that automatically moves inactive data to Amazon S3-compatible cloud object stores.
If the data once again becomes active, DataSphere transparently retrieves it from the cloud for access on local storage.
“We are able to give storage awareness to an application. Normally, you would have to write (code) for that,” Primary Data CEO Lance Smith said.
A policy catalog in 2.0, known as Objective Expressions, allows customers to prescribe the characteristics that can be applied to all data or to an individual file. To move data between cloud platforms, users need to change only the objectives for the data. Primary Data DataSphere then moves the data to the appropriate storage target.
“We traditionally have gone after the development and testing space, which are usually small deployments. But people are finding that our technology is so powerful that many of them are putting it in production (as a way) to save lots of money” on storage, Smith said.
Data protection and cloud mobility highlight 2.0 release
Primary Data claims DataSphere can manage and move billions of files and objects. The software will consume a customer’s block storage and converts it to the file namespace.
The enhanced storage analytics examine historical usage patterns to determine which tier of storage best meets an application’s requirements. DataSphere determines the optimal data placement based on customer-defined attributes relating to cost, data primacy or performance.
Primary Data DataSphere 2.0 include assimilation of array-based snapshots, allowing customers to use the snapshots to both preserve changes in real time and to serve as a disaster recovery tool. DataSphere accesses snapshot APIs of underlying storage arrays to clone space-efficient copies on a WAN or public cloud. The vendor claims this feature allows it to mix and match different vendors’ storage in the same share. Additional data protection in 2.0 includes metadata backup and restore and portal protection.
Primary Data DataSphere 2.0 supports cross-domain mapping and fully integrates with Windows Active Directory and Windows Access Control Lists, allowing mixed shares between Linux and Windows.
New investment earmarked for expansion of sales teams in U.S.
Along with DataSphere’s revamped storage analytics, the data management specialist announced up to $40 million obtained in separate funding transactions. The proceeds boost the startup’s total investment haul to nearly $100 million since its 2016 launch.
Primary Data received $20 million in venture funding in a Series B round led by Pelion Venture Partners, with participation from existing vendors Accel Partners and Battery Ventures. Up to $20 million in additional funding is available through a line of credit.
Smith said Primary Data will expand sales teams in growing markets, particularly Europe and North America.
“We have been hiring in North America and Europe since the start of this year to vastly grow our presence in vertical markets. We had been investing heavily in engineering up to now,” Smith said.
NetApp cloud strategy includes SolidFire, SDSCloud storage, NetApp
Created in the 20th century to sell storage to engineers, NetApp has survived for 25 years to remain the largest standing data storage company not tied to a server vendor. Founder Dave Hitz credits that survival to the company’s “enormous capacity to change” as the IT landscape changes.
“People ask me, why are you still alive after 25 years? That’s a very real question,” Hitz, currently a NetApp executive vice president, said during a press even last month. “NetApp has survived 25 years because we have an amazing ability for radical change when we need it.”
Hitz said his company has previously pivoted to survive disruptions caused by the rise of the internet, the internet crash and virtualization. He said all posed threats to NetApp when they first developed, and NetApp adjusted its storage to take advantage. Now the NetApp cloud pivot is the current adjustment that can make or break the company.
“Each of these transitions were things that were going to kill us,” he said. “Here we are again, possibly the biggest transition of all, into cloud computing and again it’s the thing that’s going to kill us. We hear, ‘We’re all doomed, everything’s going to move into the cloud, there’s no room for NetApp.’ I don’t think it’s true. It could be true if we don’t’ respond.”
Of course, you don’t have to be a bull-castrating genius to figure out the cloud is the key for today’s storage companies. Every large storage company has the cloud in its strategy and barely a month goes by when we don’t see a startup come along promising to provide cloud-like storage for enterprises, and to connect on-premises storage to public clouds.
So what is the NetApp cloud strategy?
Hitz said NetApp “way underestimated how pervasive the cloud would be on all enterprise computing,” just like it misjudged how flash would impact enterprise storage. (NetApp originally bet on flash as cache instead of solid-state drives in storage arrays before getting out its successful All-Flash FAS array in 2016.). But he said the NetApp cloud plan consists of doing what it does best — data management.
NetApp founder Dave Hitz
“We think data is the hardest part [of the cloud],” Hitz said. “It is very easy to go to Amazon or Azure, fire up 1,000 CPUs, run them for an hour or day or week, [and] then turn them off. It’s not easy to get them the data they need, and after they make a bunch of data, it’s not easy to get it back and keep track of it. Those are the hard parts. And that’s right in the center of our wheelhouse.”
Channeling NetApp’s history, CEO George Kurian said he saw his job when he took over in 2015 as leading the company through transition. “As the world around us changed, NetApp needed to change fundamentally,” he said.
He sees a strong NetApp cloud strategy as the key to initiating that change. “Many customers are engaged with us to help them build hybrid architectures, whether it’s between on-prem and public cloud, between two public clouds or migrating one of their sites to a colocation,” Kurian said.
Kurian cites SolidFire — an all-flash array platform built for cloud providers — as the “backbone of the next-gen data center.” NetApp acquired SolidFire in 2016 as much as a cloud platform as to fill a need for all-flash storage.
NetApp cloud software-defined storage (SDS) and services include Private Storage for Cloud, Ontap Cloud, Data Fabric, AltaVault cloud backup and others. NetApp also has a Cloud Business Unit, which includes development, product management, operations, marketing and sales.
Senior vice president Anthony Lye joined the company last March to run the NetApp Cloud Business Unit. “The whole purpose of my organization is to build software that runs on hyper-scale platforms,” Lye said. “The software can be consumed by NetApp or non-NetApp customers, on hybrid or multi-cloud environments.”
The NetApp cloud portfolio will go a long way in determining if the vendor gets to keep its survivor status.
Comm. | https://itknowledgeexchange.techtarget.com/storage-soup/page/22/ | CC-MAIN-2019-43 | refinedweb | 3,760 | 52.7 |
ASSIGNMENT DETAILS
(Topics include: Statistical Inference, Simple Regression Analysis)
You have been asked by your client to recommend which of two available stocks will perform better over time, relative to risk. You will need to compare risk and return relationship of the two stocks over time and present your findings as a formal written report (detailing your calculations and findings).
Which stock should I invest in?
This assignment is based on topics including; Sampling and Estimation, Hypothesis Testing,
and Regression Analysis.
Comparison of stock returns
Variables and Data Sources:
PS&P = S&P 500 Price Index
This is the Standard and Poor index of 500 companies and will be used as a market portfolio. (You would use the return from this series as Market Return rM,t)^gspc
PB = Boeing Company- BA’s Stock Price
A particular stock we are interested in to determine how it behaves in response to market changes.
PGD = General Dynamics- GD’s Stock Price
rf =Interest rate on 10 Year US-Treasury Note
This variable is given in percentage (with % sign omitted) and will serve as a risk-free interest rate. We will use this variable to compute excess returns on our preferred stock (either Boeing or GD) and Market excess returns.
Task B: Perform the following.
1. Calculate returns for these three series in Excel or any software of your choice using the transformation: rt = 100*ln(Pt / Pt-1) and perform the Jarque-Berra test of normally distributed returns for each of Boeing and GD. What do you infer about the distribution of the two stock returns series? Describe also the risk and average return relationship in each of the two stocks.
Hints:
: We have performed a similar task in Workshop 01.
: If there are say “n+1” observations on prices, then the return series would have “n” observations.
: These numbers would represent percentages after multiplication with 100 in the formula above. But you would not put a percentage sign in your data. For example, returns for two periods are 0.35% and 0.41% but we would use 0.35 and 0.41 after omitting % sign in our excel worksheet.
2.Test a hypothesis that the average return on GD stock is different from 2.8%. Which test statistic would you choose to perform this hypothesis test and why? Also, specify the distribution of the test statistic under the null hypothesis, using 5% significance level.
3. Before investing in one of the two stocks, you first want to compare risk associated with each of the two stocks. Perform an appropriate hypothesis test using 5% significance level and interpret your results.
4. Besides, you want to determine whether both stocks have the same population average return. Perform an appropriate hypothesis test using information in your sample of 60 observations on returns, using 5% significance level. Report your findings and also mention which stock will you prefer and why?
5. Compute excess return on your preferred stock as yt = rt – rf,t and excess market return as xt = rM,t – rf,t and perform the following tasks. a. Estimate the CAPM (CAPITAL ASSET PRICING MODEL) using linear regression by regressing the excess return on your preferred stock ( yt ) on excess market return ( xt ) and properly report your regression results. b. Interpret the estimated CAPM beta-coefficient in terms of the stock’s riskiness in comparison with the market. c. Interpret the value of R2.
d. Interpret 95% confidence interval for the slope coefficient.
6.Using the confidence interval approach to hypothesis testing, perform the hypothesis test to determine whether your preferred stock is a neutral stock. (Note: You would not be given any marks if you do not use CI approach to test a hypothesis)
7. One of the assumptions of ordinary least squares (OLS) method is; normally distributed error term in the model. Perform an appropriate hypothesis test to determine whether it is plausible to assume normally distributed errors.
Analyse Data & Submit Report
Prepare your written report in two Parts:
Part A: Calculations
Set out all your calculations for each of the tasks (listed above) using Data Analysis Tool in Excel. Present your results in graphs and charts as appropriate
Part B: Interpretation
: Explain what your results mean, in language that your client can understand. For example, what conclusions can you draw from each of your findings?
Include all appendices, graphs, tables and written answers. Answer the questions directly. Do not present unnecessary graphs or numerical measures, undertake inappropriate tests or discuss irrelevant matters.
Solution.pdf
Solution.pdf
Which stock should I invest in? (Topics include: Statistical Inference, Simple Regression Analysis) You have been asked by your client to recommend which of two available stocks will perform better...
Which Stock Will Perform Better? flickr photo shared by ota_photos under a Creative Commons ( BY-SA ) license You have been asked by your client to recommend which of two available stocks will perform...
Assignment 2: Details Which stock should I invest in? (Topics include: Statistical Infereincludeple Regression Analysis) You have been asked by your client to recommend which of two available stocks...
You have been asked by your client to recommend which of two available stocks will perform better over time, relative to risk. You will need to compare risk and return relationship of the two stocks...
Statistical Inference, Simple Regression Analysis), This assignment is based on topics including; Sampling and Estimation, Hypothesis Testing, and Regression Analysis. work should be done in word and...
Your solution is just a click away! Get it Now
By creating an account, you agree to our terms & conditions
We don't post anything without your permission
Attach Files | https://www.transtutors.com/questions/assignment-details-topics-include-statistical-inference-simple-regression-analysis-y-2911316.htm | CC-MAIN-2019-43 | refinedweb | 939 | 54.93 |
system.file
Percentile
Find Names of R System Files Details for the meaning of the default value of
NULL.
- mustWork
- logical. If
TRUE, an error is given if there are no matching files.
Details
This checks the existence of the specified files with
file.exists. So file paths are only returned if there
are sufficient permissions to establish their existence.
The unnamed arguments in
... are usually character strings, but
if character vectors they are recycled to the same length.
This uses
find.package to find the package, and hence
with the default
lib.loc = NULL looks first for attached
packages then in each library listed in
.libPaths().
Note that if a namespace is loaded but the package is not attached,
this will look only on
.libPaths().
Value
...,.
Aliases
- system.file
Examples
library(base)
system.file() # The root of the 'base' package system.file(package = "stats") # The root of package 'stats' system.file("INDEX") system.file("help", "AnIndex", package = "splines") | https://www.rdocumentation.org/packages/base/versions/3.0.3/topics/system.file | CC-MAIN-2021-04 | refinedweb | 159 | 60.92 |
Wiki
SCons / UsingCodeGenerators
One.
#!python # SConstruct file env=Environment() # Create the mk_vds generator tool mk_vds_tool = env.Program(target= 'mk_vds', source = 'mk_vds.c') # This emitter will be used later by a Builder, and has an explcit dependency on the mk_vds tool def mk_vds_emitter(target, source, env): env.Depends(target, mk_vds_tool) return (target, source) # Create a builder (that uses the emitter) to build .vds files from .txt files # The use of abspath is so that mk_vds's directory doesn't have to be added to the shell path. bld = Builder(action = mk_vds[0].abspath + ' < $SOURCE > $TARGET', emitter = mk_vds_emitter, suffix = '.vds', src_suffix = '.txt') # Add the new Builder to the list of builders env['BUILDERS']['MK_VDS'] = bld # Generate foo.vds from foo.txt using mk_vds env.MK_VDS('foo.txt')
If you look at the resulting dependency tree you can see it works::
% scons --debug=tree foo.vds +-foo.vds +-foo.txt +-mk_vds +-mk_vds.o +-mk_vds.c
Updated | https://bitbucket.org/scons/scons/wiki/UsingCodeGenerators?action=info | CC-MAIN-2015-32 | refinedweb | 153 | 62.24 |
Unformatted text preview: Lecture 12| Memory Addresses, Pointers, and Arrays Revisited
Memory and Addresses Variables and Addresses Pointers and Memory Addresses Addressing and Dereferencing Summary Pointer Assignment and Pointer to void Call-by-Value Call-by-Reference Pointer Arithmetic Arrays, Memory Addresses, and Pointers Passing Arrays to Functions | An Alternative View An Example Readings Exercises CSC 1500 { Lecture 12 1 Overview Memory, variables, addresses, and pointers Call-by-value versus call-by-reference Arrays, memory addresses, and pointers Passing arrays to functions | revisited CSC 1500 { Lecture 12 2 Memory and Addresses 01000001 (represents character ‘A’) 0000 0001 0002 0003 byte
0004 ...... Memory ...... In the lowest level, computers understand only 0's and 1's. Computer memory stores a sequence of 0's and 1's. Each such storage unit is called a bit . Eight bits are grouped together to form a larger storage unit called a byte . Each byte has a unique address for identi cation. Computer memory can thus be viewed as a series of bytes. CSC 1500 { Lecture 12 bit 3 Variables and Addresses
0400 variable c ’A’ 0000 0001 0002 0003 byte
0004 ...... Memory 0400 ...... char c scanf("%c", &c) c is a char variable. denotes the address of variable c in memory.
/* user typed 'A' <Enter> */ &c Thus, &c equals 0400.
scanf("%c", &c) { causes the input character ('A') to be stored in memory
address of c, i.e. 0400. { e ectively stores 'A' into the variable c. CSC 1500 { Lecture 12 4 Pointers and Memory Addresses
0123 variable ptr 0400 0400 variable c ’A’ 0000 0001 0002 0003 byte
0004 ...... 0400 Memory ...... char char c *ptr ptr = &c scanf("%c", ptr) is NOT a char variable. ptr does not store a char. ptr stores the memory address of a char variable. The type of ptr is char pointer.
ptr char *ptr { declares the variable ptr. { variable name is ptr, NOT *ptr. { says that ptr is a char pointer.
CSC 1500 { Lecture 12 5 Addressing and Dereferencing
int a, b, *p
a ? /* declares 3 variables a, b and p without initialization
b ? p ? */ a=b=7 p = &a /* store integer value 7 to the two int variables */ /* store the address of an int variable a to p */ a 7 b 7 p Now we can use pointer p to access the value of variable a. By using the indirection operator *. The expression *p is equivalent to the variable to which p points (refers to). Example,
printf("a = %d\n", *p) /* prints value of a, i.e. 7 */ * Not to be confused with the use of declared.
p when pointer p is is a char pointer variable while *p is a char variable . is 0400. After the statement p = &a, the value of variable p becomes 0400. *p then refers to the content of the memory location whose address is stored in variable p, i.e. 0400. CSC 1500 { Lecture 12 An alternative view: Suppose the memory address of variable a 6 *p = 3 printf ("a = %d\n", a) /* 3 is printed */ a 3 b 7 p When *p appears on the L.H.S. of an assignment, it means the value on the R.H.S. is to be written onto the memory location to which p points.
p = &b a 3 b 7 p *p = 2 * *p - a printf("b = %d\n", b) /* is equivalent to "b = 2 * b - a " */ /* 11 is printed */ a 3 b 11 p We do not really care about the actual memory address stored. Our concern is the data object to which p is pointing.
CSC 1500 { Lecture 12 7 Summary
#include <stdio.h> int main(void) { int i, *iptr printf ("Enter an integer? ") scanf("%d", &i) /* use format specifier "%p" to print a memory address */ printf ("\nMemory address of variable i = %p\n", &i) printf ("Value of variable i = %d\n", i) iptr = &i printf ("\nValue of variable iptr = %p\n", iptr) printf ("Value of data object pointed to by iptr = %d\n", *iptr) printf ("\nMemory address of iptr itself = %p\n", &iptr) iptr = NULL /* NULL is defined in stdio.h, means nothing */ printf ("\niptr now points to nothing!\n") printf ("\nThe statement \"*iptr = 3 \" produces a runtime error!\n") *iptr = 3 /* error message or hang... */ return 0 } Enter an integer? 99 Memory address of variable i = effffc0c Value of variable i = 99 Value of variable iptr = effffc0c Value of data object pointed to by iptr = 99 Memory address of iptr itself = effffc08 iptr now points to nothing! The statement "*iptr = 3 " produces a runtime error! you will see...] null pointer assignment OR] hang!!! CSC 1500 { Lecture 12 8 Pointer Assignment and Pointer to void In ANSI C, one pointer can be assigned to another only when they both have the same type. Example,
int char x, *ptr1, *ptr2 c, *ptr3=&c /* ptr1 points to x */ /* valid, both ptr1 and ptr2 point to x */ /* invalid */ ptr1 = &x ptr2 = ptr1 ptr2 = ptr3 In pointer assignment, the value of a pointer (an address) is copied to another pointer variable. E ectively both pointer variables refer to the same memory location, i.e. the same data object. Pointer assignment is allowed when one of the operands is of type \pointer to void". (void means no type or any type) We can think of void * as a generic pointer type. Not to be confused with NULL (means 0 or nothing). Example,
#include <stdio.h> int main(void) { int char void x, *xptr=&x *cptr *vptr /* valid */ /* valid */ vptr = xptr cptr = vptr return 0 } CSC 1500 { Lecture 12 9 Call-by-Value
#include <stdio.h> void wrong_swap (int, int) int main(void) { int x=33, y=99 printf ("x=%d, y=%d\n", x, y) wrong_swap(x, y) printf ("x=%d, y=%d\n", x, y) return 0 } void wrong_swap (int x, int y) { int tmp printf ("\t swap!\n") printf ("\t x=%d, y=%d\n", x, y) tmp = x x=y y = tmp printf ("\t x=%d, y=%d\n", x, y) } x=33, y=99 swap! x=33, y=99 x=99, y=33 x=33, y=99 CSC 1500 { Lecture 12 10 int main(void) { int x=33, y=99; ...... wrong_swap(x, y); ...... } x 33 y 99 Memory void wrong_swap(int x, int y) { int tmp; ...... ...... } x 33 y 99 tmp ? Whenever variables are passed as arguments to a function, their values are copied to the corresponding function parameters. The parameters have a lifespan identical to the lifespan of the function. That is, the parameters are created when the function is entered, and are destroyed (freed from memory) when the function returns. No matter how the value of the parameters changes, the variables in the calling environment (main in the example) are not changed.
CSC 1500 { Lecture 12 11 Call-by-Reference
#include <stdio.h> void swap (int *, int *) int main(void) { int x=33, y=99 printf ("x=%d, y=%d\n", x, y) swap(&x, &y) printf ("x=%d, y=%d\n", x, y) return 0 } void swap (int *p, int *q) { int tmp printf ("\t swap!\n") printf ("\t *p=%d, *q=%d\n", *p, *q) tmp = *p *p = *q *q = tmp printf ("\t *p=%d, *q=%d\n", *p, *q) } x=33, y=99 swap! *p=33, *q=99 *p=99, *q=33 x=99, y=33 CSC 1500 { Lecture 12 12 int main(void) { int x=33, y=99; ...... swap (&x, &y); ...... } 0400 x 33 0481 y 99 Memory void swap(int *p, int *q) { int tmp; ...... ...... } p 0400 q 0481 tmp ? Call-by-Reference is accomplished by
Passing an address as an argument when the function is called. Declaring a function parameter to be a pointer. Using the dereferenced pointer in the function body. CSC 1500 { Lecture 12 13 Pointer Arithmetic
#include <stdio.h> int main(void) { int char ivar, *iptr cvar, *cptr iptr = &ivar cptr = &cvar printf ("Original\n") printf ("iptr = %p (hexadecimal)\n", iptr) printf ("cptr = %p (hexadecimal)\n", cptr) printf ("\nIncremented\n") printf ("iptr = %p (hexadecimal)\n", ++iptr) printf ("cptr = %p (hexadecimal)\n", ++cptr) return 0 } Original iptr = effffc0c (hexadecimal) cptr = effffc07 (hexadecimal) Incremented iptr = effffc10 (hexadecimal) cptr = effffc08 (hexadecimal) If a pointer variable is incremented, its value is actually increased by sizeof(type) where type is the data type that the pointer points to. This allows the pointer to point to the next piece of data item in the memory meaningfully. Other arithmetic operators (<p> + <i>, <p> - <i>, <p> - <p>, ...) when applied to pointer variables behave similarly.
CSC 1500 { Lecture 12 14 Arrays, Memory Addresses, and Pointers One Dimensional
ad dr es s ad dr es s ad dr es s ad dr es s ad dr es s
list[0] [1] [2] [3] [4] int list[5]; list Assuming that sizeof(int)==4. Assuming that list 0] begins at memory address 1000. { list 1] begins at 1004. { list 2] begins at 1008. { etc. An array name without subscript (list) denotes the address of the rst element of the array. That is, list==&list 0]. Starting address of list
list i] + sizeof(int) 10 00 is given by
i CSC 1500 { Lecture 12 10 04 10 08 10 12 10 16 15 Two Dimensional
matrix 1000 int matrix[3][4]; 1004 1008 1012 1016 1020 1024 1028 1032 1036 1040 address 1044 matrix[0][0] matrix[0][1] matrix[0][2] matrix[0][3] matrix[1][0] matrix[1][1] matrix[1][2] matrix[1][3] matrix[2][0] matrix[2][1] matrix[2][2] matrix[2][3] Assuming that sizeof(int)==4. Assuming that matrix 0] 0] begins at address 1000. { matrix 0] 1] begins at 1004. { matrix 0] 2] begins at 1008. { etc. An array name without subscript (matrix) denotes the address of the rst element of the array (matrix==&matrix 0] 0]). For a 2-dimensional int array matrix ROW] COL], the starting address of matrix i] j] is given by matrix + sizeof(int) (COL i + j)
CSC 1500 { Lecture 12 16 Passing Arrays to Functions | An Alternative View #include <stdio.h> #define SIZE 5 int sum(int *, int) int main(void) { int list SIZE] int i for (i=0 i<SIZE i++) { printf("number = ? ") scanf("%d", &list i]) } printf("sum = %d\n", sum(list, SIZE)) return 0 } int sum (int *ptr, int size) { int sum=0 while (size) { sum += *ptr ptr++ size-} return sum } Note: ===== *ptr *(ptr+1) *(ptr+2) ... *(ptr+i) == list 0] == list 1] == list 2] == list i] When an array is being passed, the name of the array is used as the parameter. It's actually the base address of the array to be passed. The array elements are not copied. Call-by-Reference!! What's the di erence between the two methods?
int sum (int list ], int size) int sum (int *ptr, int size) CSC 1500 { Lecture 12 17 An Example
#include <stdio.h> #define SIZE 4 int skip_sum(int *, int) int main(void) { int n, array SIZE] = {1, 2, 3, 4} while (1) { printf("Skip how many? ") scanf("%d", &n) if (n < 0) break else printf("Sum = %d\n", skip_sum(array, n)) } printf ("Bye!\n") return 0 } int skip_sum(int *ptr, int skip) { int i, sum=0 ptr = ptr + skip for (i=1 i <= (SIZE-skip) sum += *ptr return sum } i++, ptr++) Skip how Sum = 9 Skip how Sum = 7 Skip how Sum = 4 Skip how Sum = 0 Skip how Bye! many? 1 many? 2 many? 3 many? 4 many? -1 CSC 1500 { Lecture 12 18 Readings Chapter 8, Sections 8.1 { 8.4 Chapter 8, Section 8.9 Chapter 8, Section 8.12 Chapter 9, Sections 9.3 { 9.4
Exercises Chapter 8, Exercises 2, 5, 6, 7, 10 Chapter 9, Exercises 3, 4 2 End of Lecture 12
CSC 1500 { Lecture 12 19 ...
View Full Document
This note was uploaded on 05/23/2010 for the course COMPUTER S CSC1500 taught by Professor Fung during the Spring '10 term at CUHK.
- Spring '10
- fung
Click to edit the document details | https://www.coursehero.com/file/5899192/12/ | CC-MAIN-2017-22 | refinedweb | 2,001 | 71.65 |
hey guys what's up?
I was trying to solve a problem that I met in my book Java how to program it's about compound interests.
Code :
public class Interests { public static void main (String[] args) { double p = 1000; double rate = 0.05; double amount; for (int year = 1; year <= 10; year++){ amount = p * (Math.pow((1+rate),years)); System.out.printf("the amount for year %d is: %,20.2f", year, amount); }//end for }//end main }//end class
this was the original code. THE QUESTION IS: Modify the application to use only integers
to calculate the compound interest. [Hint: Treat all monetary amounts as integral numbers
of pennies. Then break the result into its dollars and cents portions by using the division and remainder
operations, respectively. Insert a period between the dollars and the cents portions.]
THE PROBLEM IS: it's ok to treat all monetary amounts as integral numbers of pennies then break the result into dollars and cents, but how can I convert the rate to an integer without losing data ? | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/17778-using-integers-printingthethread.html | CC-MAIN-2014-49 | refinedweb | 174 | 65.01 |
#include <IO.h> int io_write_seq( GapIO *io, int N, int2 *length, int2 *start, int2 *end, char *seq, int1 *conf, int2 *opos);
This function updates disk and memory details of reading number N. If
this reading does not yet exist, all non existant readings up to and including
N will be initialised first using the
io_init_readings function.
[FIXME: The current implement does not update the fortran lngth (io_length()) array. This needs to be done by the caller. ]
The length argument is the total length of the sequence, and hence also the expected size of the seq, conf and opos arrays. start and end contain the last base number of the left cutoff data and the first base number of the right cutoff data.
Unlike io_read_seq, all arguments to this function are mandatory.
If the records on disk do not already exist then they are allocated first
using the
allocate function.
This function returns 0 for success and non zero for failure. | http://staden.sourceforge.net/scripting_manual/scripting_124.html | CC-MAIN-2014-15 | refinedweb | 160 | 70.43 |
Rails: Rake Tasks
While building a recent rails 5 app, I decided to enhance my documentation process by creating a suite of tasks that can help the next developer manage the app. Here are a few tasks I've created over the years that I use regularly.
Basic Task
Tasks help you create a repeatable set of scripts. For example, suppose you want to add a file to github, add a commit message and push your commit to github, you can turn it into a task.
Before
git add -A
git commit -m "custom_message"
git push
After
rails github:commit["my message"]
Beneath the hood
Create a task with a namespace of github and task name of commit.
rails g task github commit
Wrap the shell commands into executable scripts like this.
namespace :github do desc "›› Commit with message" task :commit, [:message] => :environment do |task, args| sh %{ git add -A } sh %{ git commit -m "#{args.message}" } sh %{ git push } end end
Run the task
rails github:commit["my message"]
How to create a task
If you want to create a task in Rails, all you need to do is type:
rails g task namespace_name task_name
That's it!
Note
One minor change in Rails 5 is the switch from
rake to
rails. So
rake db:create or
rake task are now
rails db:create and
rails task, etc. I welcome this change as it helps developers remember one less thing.
View a list of available tasks
If you want to see a listing of available tasks, simply type:
rails -T
I use
›› as a marker to quickly identify my custom tasks.
Advanced Tasks
Here's a
task created to automatically start and stop Webrick.
Revisiting Rails after 4 years of ExpressJS has been interesting. I find myself missing
npm's
nodemon to autostart my dev server every time I make changes. When it comes to gems that watch your environment, most of the Rails blogs suggest
guard and
live-reload but they lack the simplicity of
nodemon. Therefore, I prefer
rerun, a gem commonly used for Sinatra web-apps.
Step 1
Create a rails 5 app
rails _5.0.0_ new my_app
If this doesn't work for you, I suggest reading my Getting Started with Rails 5 article.
Step 2
Install Rerun by adding it to your
Gemfile.
gem 'rerun', :groups => [:development, :test]
Step 3
Install the gem file
bundle install
Step 4 - Test
rerun
Run
rerun from your command line. If you notice, there are two explicit directories within the command line,
config and
app. This is how I'm telling
rerun to specifically monitor the changes I'm making on my models, controllers, serializers, etc.
rerun --dir config,app rails s
Step 5 - Create a task
Now let's take this to the next level. Although running a single command in
Terminal isn't too much effort, I often find myself moving on to another project and forgetting how I configured this one.
Therefore, I prefer to package these commands into Rails tasks. This command will tell Rails to generate a task with a file name
server.rake with a method name of
start.
rails g task server start
From there, change my
tasks/server.rake file to look like this.
namespace :server do desc %Q{ ›› Run Rails while monitoring /app, /config } task :start do sh %{ rerun --dir config,app rails s } end end
Step 6 - Run your task
We're done, all we need to do now is call the rake task
rails server:start
Ta Da!
More with Tasks
Remove all comments within a
Gemfile
Create the task
rails g task gemfile clean
Paste this into
lib/tasks/gemfile.rake.
namespace :gemfile do desc %Q{ ›› Remove all the comments in Gemfile and make a Gemfile.bak just in case } task :clean do sh %{ ruby -pi.bak -pe "gsub(/^#.*\n/, '')" Gemfile } end end
Run the task
rails gemfile:clean
Clearing a DB
This command drops the database, purges the schema, creates a new database, and populates it with data.
rails g task db wipe
Paste this into
lib/tasks/db.rake.
namespace :db do desc %Q{ ›› Drop the db schema, Create a new one, Migrate the data, Populate the data } task wipe: :environment do sh %{ rails db:purge db:create db:migrate db:seed --trace } end end
Run the task
rails db:wipe
Even More with Tasks
If you take note of
app.rake. One thing I often do is document the rails commands I use into a single task. Yes, it's cumbersome to do but on the bright side, I rarely have to look at
Schema.rb or
db/migrate/ to try and make sense of my models. Again, this isn't efficient but it does help keep a good record of your work. | http://www.chrisjmendez.com/2016/07/31/rails-5-tasks/ | CC-MAIN-2017-26 | refinedweb | 800 | 70.23 |
In this article, we will learn how to check the number is power of two. Let’s get started.
Table of contents
- Given problem
- Solution for checking number is power of 2
- Divide an integer number to 2
- Using log method of Math package
- Based on the property of n and n-1
Given problem
Below is an description of this problem:
Given a positive integer, write a function to find if it is a power of two or not. Example 1: Input: n = 4 Output: Yes Example 2: Input: n = 7 Output: No Example 3: Input: n = 32 Output: Yes
Solution for checking number is power of 2
To solve this problem, we have some solutions:
Divide an integer number to 2. If the remained number is different than 0, then it is not power of 2.
Using log() method of Math package. Carefully, because its data type is double or float, so it can contains number error.
If n is power of two, then n - 1 that has all unset bits of n becomes set bits of n - 1, vice versa.
For example: 4 = 100, then 3 = 011
Divide an integer number to 2
public boolean isPowerOfTwoUsingIterative(long n) { if (n == 0) { return false; } while (n != 1) { if (n % 2 != 0) { return false; } n = n / 2; } return true; }
Using log method of Math package
public boolean isPowerOfTwoUsingLogMath(long n) { if (n == 0) { return false; } long log2 = log2(n); return Math.ceil(log2) == Math.floor(log2); } public long log2(long x) { return (long) (Math.log(x) / Math.log(2) + 1e-10); }
Based on the property of n and n-1
public boolean isPowerOfTwo(int n) { return n != 0 && ((n & (n - 1)) == 0); } | https://ducmanhphan.github.io/2020-03-06-Check-whether-number-is-power-of-two/ | CC-MAIN-2021-25 | refinedweb | 283 | 60.65 |
Is it possible to scan a character, pass it to a char array and then if a is defined as string to print that string? Below is the code, (which gets the warning "cast to pointer from integer of different size")
Thanks in advance
char *a = "alpha";
int main()
{
char *A[80];
char ch;
printf("enter message");
scanf(" %c", &ch);
A[0] = (char *) ch;
printf("%s\t", A[0]);
return 0;
}
What you want might be something like this.
#include <stdio.h> /* word candidate list: terminated by NULL */ const char* a[] = { "alpha", NULL }; int main(void) { char ch; int i; /* read input */ printf("enter message"); if (scanf(" %c", &ch) != 1) { puts("read error"); return 1; } /* search for matching word(s) */ for (i = 0; a[i] != NULL; i++) { /* if the first character of the word is what is scanned, print the word */ if (a[i][0] == ch) { printf("%s\t", a[i]); } } return 0; } | https://codedump.io/share/svYs0zKjWYH7/1/can-printf-print-defined-strings | CC-MAIN-2017-13 | refinedweb | 153 | 69.25 |
by Julien Ponge
Published July 2014
Use JMH to write useful benchmarks that produce accurate results.
Benchmarks are an endless source of debates, especially because they do not always represent real-world usage patterns. It is often quite easy to produce the outcome you want, so skepticism is a good thing when looking at benchmark results.
Yet, evaluating the performance of certain critical pieces of code is essential for developers who create applications, frameworks, and tools. Stressing critical portions of code and obtaining metrics that are meaningful is actually difficult in the Java Virtual Machine (JVM) world, because the JVM is an adaptive virtual machine. As we will see in this article, the JVM does many optimizations that render the simplest benchmark irrelevant unless many precautions are taken.
In this article, we will start by creating a simple yet naive benchmarking framework. We will see why things do not turn out as well as we hoped. We then will look at JMH, a benchmark harness that gives us a solid foundation for writing benchmarks. Finally, we’ll discuss how JMH makes writing concurrent benchmarks simple.
Benchmarking does not seem so difficult. After all, it should boil down to measuring how long some operation takes, and if the operation is too fast, we can always repeat it in a loop. While this approach is sound for a program written in a statically compiled language, such as C, things are very different with an adaptive virtual machine. Let’s see why.
Implementation. Let’s take a naive approach and design a benchmarking framework ourselves. The solution fits into a single static method, as shown in Listing 1.
public class WrongBench {

  public static void bench(String name, long runMillis, int loop,
                           int warmup, int repeat, Runnable runnable) {
    System.out.println("Running: " + name);
    int max = repeat + warmup;
    long average = 0L;
    for (int i = 0; i < max; i++) {
      long nops = 0;
      long duration = 0L;
      long start = System.currentTimeMillis();
      while (duration < runMillis) {
        for (int j = 0; j < loop; j++) {
          runnable.run();
          nops++;
        }
        duration = System.currentTimeMillis() - start;
      }
      long throughput = nops / duration;
      boolean benchRun = i >= warmup;
      if (benchRun) {
        average = average + throughput;
      }
      System.out.print(throughput + " ops/ms" +
          (!benchRun ? " (warmup) | " : " | "));
    }
    average = average / repeat;
    System.out.println("\n[ ~" + average + " ops/ms ]\n");
  }
}
Listing 1
The bench method executes a benchmark expressed as a java.lang.Runnable. The other parameters include a descriptive name (name), a benchmark run duration (runMillis), the inner loop upper bound (loop), the number of warm-up rounds (warmup), and the number of measured rounds (repeat).
Looking at the implementation, we can see that this simple benchmarking method measures a throughput. The time a benchmark takes to run is one thing, but a throughput measurement is often more helpful, especially when designing microbenchmarks.
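Raw ops/ms figures can be hard to build intuition for; flipping them into an average cost per operation often helps. The helper below is our own illustration (the class and method names are not part of the article's framework):

```java
// Hypothetical helper, not part of the WrongBench framework: convert an
// ops/ms throughput figure into an average cost per operation in
// nanoseconds (1 ms = 1,000,000 ns).
public class Throughput {
  static double nanosPerOp(long opsPerMs) {
    return 1_000_000.0 / opsPerMs;
  }

  public static void main(String... args) {
    // A throughput around 30,000,000 ops/ms works out to roughly
    // 0.03 ns per operation, far less than a single CPU cycle.
    System.out.println(nanosPerOp(30_000_000L) + " ns/op");
  }
}
```

At 3 GHz a single CPU cycle takes roughly 0.33 ns, so a reported cost in the neighborhood of 0.03 ns per call would be physically implausible for real work, a useful smell test when reading microbenchmark output.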
Sample usage. Let’s use our fresh benchmarking framework with the following method:
static double distance(double x1, double y1, double x2, double y2) {
  double dx = x2 - x1;
  double dy = y2 - y1;
  return Math.sqrt((dx * dx) + (dy * dy));
}
The distance method computes the Euclidean distance between two points (x1, y1) and (x2, y2).
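As a quick sanity check that the method under test behaves as expected, the classic 3-4-5 right triangle gives an exact answer. The distance code is reproduced below so the snippet compiles on its own:

```java
public class DistanceCheck {
  // Same computation as the article's distance method.
  static double distance(double x1, double y1, double x2, double y2) {
    double dx = x2 - x1;
    double dy = y2 - y1;
    return Math.sqrt((dx * dx) + (dy * dy));
  }

  public static void main(String... args) {
    // (0,0) to (3,4) is the 3-4-5 triangle: the result is exactly 5.0.
    System.out.println(distance(0.0d, 0.0d, 3.0d, 4.0d));
  }
}
```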
Let’s introduce the following constants for our experiments: 4-second runs, 10 measurements, 15 warm-up rounds, and an inner loop of 10,000 iterations:
static final long RUN_MILLIS = 4000;
static final int REPEAT = 10;
static final int WARMUP = 15;
static final int LOOP = 10_000;
Running the benchmark is done as follows:
public static void main(String... args) {
  bench("distance", RUN_MILLIS, LOOP, WARMUP, REPEAT,
      () -> distance(0.0d, 0.0d, 10.0d, 10.0d));
}
On a test machine, a random execution produces the following shortened trace:
Running: distance
(...)
[ ~30483613 ops/ms ]
According to our benchmark, the distance method has a throughput of 30483613 operations per millisecond (ms). Another run would yield a slightly different throughput. Java developers will not be surprised by that. After all, the JVM is an adaptive virtual machine: bytecode is first interpreted, and then native code is generated by a just-in-time compiler. Hence, performance results are subject to random variations that tend to stabilize as time increases.

Great; but still . . . is 30483613 operations per ms for distance a meaningful result?
The raw throughput value does not give us much perspective, so let’s compare our result for distance with the throughput of other methods.
Looking for a baseline. Let’s take the same method signature as distance and return a constant instead of doing a computation with the parameters:
static double constant(double x1, double y1, double x2, double y2) {
  return 0.0d;
}
Listing 2
We also update our benchmark as shown in Listing 2. The constant method will give us a good baseline for our measurements, since it just returns a constant. Unfortunately, the results are not what we would intuitively expect:
Running: distance
(...)
[ ~30302907 ops/ms ]
Running: constant
(...)
[ ~475665 ops/ms ]
The throughput of constant appears to be lower than that of distance, although constant is doing no computation at all.
static void nothing() { }

// (...)
bench("nothing", RUN_MILLIS, LOOP, WARMUP, REPEAT, WrongBench::nothing);
Listing 3
To give more depth to this observation, let’s benchmark an empty method (see Listing 3). The results get even more surprising.
Running: distance
(...)
[ ~29975598 ops/ms ]
Running: constant
(...)
[ ~421092 ops/ms ]
Running: nothing
(...)
[ ~274938 ops/ms ]
nothing has the lowest throughput, although it is doing the least.
Isolating runs. This is the first lesson: mixing benchmarks within the same JVM run is wrong. Indeed, let’s change the benchmark order:
Running: nothing
(...)
[ ~30146676 ops/ms ]
Running: distance
(...)
[ ~493272 ops/ms ]
Running: constant
(...)
[ ~284219 ops/ms ]
We observe the same pattern of relative throughput drops, just applied to a different benchmark ordering. Let’s run a single benchmark per JVM process instead, as shown in Listing 4. By repeating the process for each benchmark, we get the following results:
Running: nothing
(...)
[ ~30439911 ops/ms ]
Running: distance
(...)
[ ~30213221 ops/ms ]
Running: constant
(...)
[ ~30229883 ops/ms ]
In some runs, distance could be faster than constant. The general observation is that all these measurements are very similar, with nothing being marginally faster. In itself, this result is suspicious, because the distance method is doing computations on double numbers, so we would expect a much lower throughput. We will come back to this later, but first let’s discuss why mixing benchmarks within the same JVM process was a bad idea.
public static void main(String... args) {
  bench("nothing", RUN_MILLIS, LOOP, WARMUP, REPEAT, WrongBench::nothing);
  // bench("distance", RUN_MILLIS, LOOP, WARMUP, REPEAT,
  //     () -> distance(0.0d, 0.0d, 10.0d, 10.0d));
  // bench("constant", RUN_MILLIS, LOOP, WARMUP, REPEAT,
  //     () -> constant(0.0d, 0.0d, 10.0d, 10.0d));
}
Listing 4
The main factor in why benchmarks get slower over runs is the Runnable.run() method call in the bench method. While the first benchmark runs, the corresponding call site sees only one implementation class for java.lang.Runnable. Given enough runs, the virtual machine speculates that run() always dispatches to the same target class, and it can generate very efficient native code. This assumption gets invalidated by the second benchmark, because it introduces a second class to dispatch run() calls to. The virtual machine has to deoptimize the generated code. It eventually generates efficient code to dispatch to either of the seen classes, but this is slower than in the previous case. Similarly, the third benchmark introduces a third implementation of java.lang.Runnable. Its execution gets slower because Java HotSpot VM generates efficient native code for up to two different types at a call site, and then it falls back to a more generic dispatch mechanism for additional types.
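The call-site progression described above can be sketched in plain Java. This is our own illustration (class and lambda names are ours), with timing deliberately omitted: the point is that one call site successively sees one, two, then three receiver classes, which is exactly what drives HotSpot from monomorphic to bimorphic to megamorphic dispatch.

```java
// Sketch (names are ours; timing deliberately omitted).
// A single call site dispatches Runnable.run(). With one receiver class it
// is monomorphic; a second class makes it bimorphic; a third forces the
// generic (megamorphic) dispatch path described above.
public class DispatchShape {

  static long counter = 0;

  static void callSite(Runnable r) {
    r.run(); // the type profile of this one site drives HotSpot's choices
  }

  public static void main(String[] args) {
    Runnable a = () -> counter += 1; // first class seen: monomorphic site
    Runnable b = () -> counter += 2; // second class: bimorphic site
    Runnable c = () -> counter += 3; // third class: megamorphic fallback
    for (int i = 0; i < 1000; i++) callSite(a);
    for (int i = 0; i < 1000; i++) callSite(b);
    for (int i = 0; i < 1000; i++) callSite(c);
    System.out.println(counter); // 1000 + 2000 + 3000 = 6000
  }
}
```

Each lambda expression compiles to its own class at runtime, so the three loops really do feed three distinct types into the same call site.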
This is not the sole factor, though. Indeed, the bench method’s code and the Runnable objects’ code blend when seen by the virtual machine. The virtual machine tries to speculate on the entire code using optimizations such as loop unrolling, method inlining, and on-stack replacement.
Calling System.currentTimeMillis() has an effect on throughput as well, and our benchmarks would need to accurately subtract the time taken by each of these calls. We could also play with different inner-loop upper-bound values and observe very different results.
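To make the timer-overhead point concrete, here is a hand-rolled sketch (the helper name is ours) of how one could estimate the average cost of a timer call so that a naive benchmark could subtract it. Absolute numbers vary per machine and per JVM; only the technique is shown.

```java
// Sketch (our own helper): estimating the average cost of a timer call.
// Absolute numbers vary per machine; only the technique is shown.
public class TimerOverhead {

  static long overheadNanosPerCall(int calls) {
    long sink = 0;
    long start = System.nanoTime();
    for (int i = 0; i < calls; i++) {
      sink += System.currentTimeMillis(); // the call being measured
    }
    long elapsed = System.nanoTime() - start;
    if (sink == 0) throw new AssertionError(); // keeps the loop live
    return elapsed / calls;
  }

  public static void main(String[] args) {
    System.out.println("~" + overheadNanosPerCall(1_000_000)
        + " ns per System.currentTimeMillis() call");
  }
}
```

Even this tiny measurement is subject to the pitfalls discussed in this article, which is part of the argument for a proper harness.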
The OpenJDK wiki Performance Techniques page provides a great overview of the various techniques being used in Java HotSpot VM. As you can see, ensuring that we measure only the code to be benchmarked is difficult.
More pitfalls. Going back to the performance evaluation of the distance method, we noted that its throughput was very similar to the throughput measured for a method that does no computation and returns a constant.
In fact, Java HotSpot VM used dead-code elimination: since the return value of distance is never used by our java.lang.Runnable under test, it practically removed the computation. This also happened because the method has no side effects and a simple, recursion-free control flow.
To convince ourselves of this, let’s modify the java.lang.Runnable lambda that we pass to our benchmark method, as shown in Listing 5.
// (...)

static double last = 0.0d;

public static void main(String... args) {
  bench("distance_use_return", RUN_MILLIS, LOOP, WARMUP, REPEAT,
      () -> last = distance(0.0, 0.0, 10.0, 10.0));
  System.out.println(last);
}
Listing 5
Instead of just calling distance, we now assign its return value to a field and eventually print it, to force the virtual machine not to ignore it. The benchmark figures are now quite different:
Running: distance_use_return (...) [ ~18865939 ops/ms ]
We now have a more meaningful result, because the constant method had a throughput of about 30229883 operations per ms on our test machine.
Although it’s not perceptible in this example, we could also highlight the effect of constant folding. Given a simple method with constant arguments and a return value that evidently depends only on those arguments, the virtual machine can speculate that it is not useful to evaluate each call and substitute the constant result instead. We could come up with an example to illustrate that, but let’s instead focus on writing benchmarks with a good harness framework.
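For the curious, a minimal sketch of the folding hazard (our own example, not from the article's benchmarks): both calls below compute the same value, but only the first has compile-time-constant inputs that the JIT can prove never change, making the entire call a folding candidate.

```java
// Sketch of the folding hazard (our own example). Both calls compute the
// same value, but only the first one has inputs the JIT can prove constant.
public class FoldingDemo {

  static double x2 = 10.0; // mutable fields: folding is not possible
  static double y2 = 10.0;

  static double distance(double x1, double y1, double x2, double y2) {
    double dx = x2 - x1;
    double dy = y2 - y1;
    return Math.sqrt((dx * dx) + (dy * dy));
  }

  public static void main(String[] args) {
    double foldable = distance(0.0, 0.0, 10.0, 10.0); // constant inputs
    double notFoldable = distance(0.0, 0.0, x2, y2);  // field-sourced inputs
    System.out.println(foldable == notFoldable); // true: same math, different optimization potential
  }
}
```

This is why the JMH examples later in the article read benchmark inputs from mutable state-class fields rather than from literals.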
Indeed, the following should be clear by now: isolating the code under measurement is hard, and virtual machine optimizations such as dead-code elimination and constant folding can quietly render naive measurements meaningless. What we need is a proper benchmarking harness, and this is exactly what JMH provides.
JMH is a Java harness library for writing benchmarks on the JVM, and it was developed as part of the OpenJDK project. JMH provides a very solid foundation for writing and running benchmarks whose results are not erroneous due to unwanted virtual machine optimizations. JMH itself does not prevent the pitfalls that we exposed earlier, but it greatly helps in mitigating them.
JMH is popular for writing microbenchmarks, that is, benchmarks that stress a very specific piece of code. JMH also excels at concurrent benchmarks. That being said, JMH is a general-purpose benchmarking harness. It is useful for larger benchmarks, too.
Creating and running a JMH project. While JMH releases are regularly published to the Maven Central repository, JMH development is very active, so it is a good idea to build JMH yourself. To do so, clone the JMH Mercurial repository and build it with Apache Maven, as shown in Listing 6. Once this is done, you can bootstrap a new Maven-based JMH project, as shown in Listing 7.
$ hg clone openjdk-jmh
(...)
$ cd openjdk-jmh
(...)
$ mvn install
(...)
Listing 6
$ mvn archetype:generate \
    -DinteractiveMode=false \
    -DarchetypeGroupId=org.openjdk.jmh \
    -DarchetypeArtifactId=jmh-java-benchmark-archetype \
    -DgroupId=com.mycompany \
    -DartifactId=benchmarks \
    -Dversion=1.0-SNAPSHOT
Listing 7
This creates a project in the benchmarks folder. A sample benchmark can be found in src/main/java/MyBenchmark.java. While we will dissect the sample benchmark in a minute, we can already build the project with Apache Maven:
$ cd benchmarks/
$ mvn package
(...)
$ java -jar target/microbenchmarks.jar
(...)
When you run the self-contained microbenchmarks.jar executable JAR file, JMH launches all the benchmarks of the project with default settings. In this case, it runs MyBenchmark with the default JDK and no specific JVM tuning. Each benchmark is run with 20 warm-up rounds of 1 second each and then with 20 measurement rounds of 1 second each. Also, JMH launches a new JVM 10 times for running each benchmark.
As we will see later, this behavior can be customized in the benchmark source code, and it can be overridden using command-line flags. Running java -jar target/microbenchmarks.jar -help lists the available flags.
Let’s instead run the benchmark with the parameters shown in Listing 8. These parameters specify the following:

- A single JVM fork (-f 1).
- Five warm-up iterations (-wi 5).
- Five measurement iterations of 3 seconds each (-i 5 -r 3s).
- Extra JVM arguments, passed through -jvmArgs.
- The benchmarks to run, selected by the .*Benchmark.* regular expression.
$ java -jar target/microbenchmarks.jar \
    -f 1 -wi 5 -i 5 -r 3s \
    -jvmArgs '-server -XX:+AggressiveOpts' \
    .*Benchmark.*
Listing 8
The execution gives a recap of the configuration, the information for each iteration and, finally, a summary of the results that includes confidence intervals, as shown in Listing 9.
# Run progress: 0.00% complete, ETA 00:00:20
# VM invoker: /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/bin/java
# VM options: -server -XX:+AggressiveOpts
# Fork: 1 of 1
# Warmup: 5 iterations, 1 s each
# Measurement: 5 iterations, 3 s each
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: com.mycompany.MyBenchmark.testMethod
# Warmup Iteration   1: 1292809.889 ops/ms
# Warmup Iteration   2: 1320406.283 ops/ms
# Warmup Iteration   3: 1313474.495 ops/ms
# Warmup Iteration   4: 1320902.931 ops/ms
# Warmup Iteration   5: 1324933.533 ops/ms
Iteration   1: 1323529.164 ops/ms
Iteration   2: 1324869.829 ops/ms
Iteration   3: 1318025.798 ops/ms
Iteration   4: 1309566.744 ops/ms
Iteration   5: 1320382.335 ops/ms

Result : 1319274.774 ±(99.9%) 23298.541 ops/ms
  Statistics: (min, avg, max) = (1309566.744, 1319274.774, 1324869.829), stdev = 6050.557
  Confidence interval (99.9%): [1295976.233, 1342573.315]

Benchmark                    Mode  Samples         Mean   Mean error  Units
c.m.MyBenchmark.testMethod  thrpt        5  1319274.774    23298.541 ops/ms
Listing 9
package com.mycompany;

import org.openjdk.jmh.annotations.GenerateMicroBenchmark;

public class MyBenchmark {

  @GenerateMicroBenchmark
  public void testMethod() {
    // place your benchmarked code here
  }
}
Listing 10
Anatomy of a JMH benchmark. The sample benchmark that was generated looks like Listing 10. A JMH benchmark is simply a class in which each @GenerateMicroBenchmark-annotated method is a benchmark. Let’s transform the benchmark to measure the cost of adding two integers (see Listing 11).
package com.mycompany;

import org.openjdk.jmh.annotations.*;

import java.util.concurrent.TimeUnit;

@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Fork(value = 3, jvmArgsAppend = {"-server", "-disablesystemassertions"})
public class MyBenchmark {

  int x = 923;
  int y = 123;

  @GenerateMicroBenchmark
  @Warmup(iterations = 10, time = 3, timeUnit = TimeUnit.SECONDS)
  public int baseline() {
    return x;
  }

  @GenerateMicroBenchmark
  @Warmup(iterations = 5, time = 5, timeUnit = TimeUnit.SECONDS)
  public int sum() {
    return x + y;
  }
}
Listing 11
We have a baseline benchmark that gives us a reference for returning an int value. JMH takes care of consuming return values so as to defeat dead-code elimination. We also return the value of field x; because that value can be changed from a large number of sources, the virtual machine is unlikely to attempt constant-folding optimizations. The code of sum is very similar.
The benchmark now has more configuration annotations. The @State annotation is useful in the context of concurrent benchmarks. In our case, we simply hint to JMH that x and y are thread-scoped.
The other annotations are self-explanatory. Note that these values can be overridden from the command line. By running the benchmark on a sample machine, we get the results shown in Listing 12.
Benchmark                  Mode  Samples        Mean  Mean error  Units
c.m.MyBenchmark.baseline  thrpt       60  527635.162     756.927 ops/ms
c.m.MyBenchmark.sum       thrpt       60  440033.766     623.455 ops/ms
Listing 12
Lifecycle and parameter injection. In simple cases, class fields can hold the benchmark state values.
In more-elaborate contexts, it is better to extract those into separate @State-annotated classes. Benchmark methods can then have parameters of the type of these state classes, and JMH arranges instance injection. A state class can also have its own lifecycle, with a setup and a tear-down method. We can also specify whether a state holds for the whole benchmark, for one trial, or for one invocation.
We can also require JMH to inject a Blackhole object. A Blackhole is used when it is not convenient to return a single object from a benchmark method. This happens when the benchmark produces several values and we want to make sure that the virtual machine will not speculate based on the observation that the benchmark code never uses them. The Blackhole class provides several consume(...) methods.
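As a mental model only (the class below is ours; JMH's real Blackhole is considerably more sophisticated and defends against additional optimizations), a sink built on volatile writes shows why consumed values cannot be optimized away: a volatile write is a side effect the JIT must preserve, so everything feeding it stays live.

```java
// Mental model only (names are ours): JMH's real Blackhole is far more
// careful. The core idea: volatile writes are side effects the JIT must
// preserve, so the code producing the consumed values stays live.
public class SinkDemo {

  static final class Sink {
    volatile double d;    // a volatile write cannot be eliminated
    volatile Object obj;

    void consume(double value) { d = value; }
    void consume(Object value) { obj = value; }
  }

  public static void main(String[] args) {
    Sink sink = new Sink();
    for (int i = 0; i < 100; i++) {
      // Without consume(), this loop body would be a dead-code candidate.
      sink.consume(Math.sqrt(i));
    }
    System.out.println(sink.d); // last consumed value, sqrt(99)
  }
}
```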
The class shown in Listings 13a and 13b is an elaborated version of the previous benchmark with a state class, a lifecycle for the state class, and a Blackhole.
package com.mycompany;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.logic.BlackHole;

import java.util.Random;
import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Fork(value = 3, jvmArgsAppend = {"-server", "-disablesystemassertions"})
public class MyBenchmark {

  @State(Scope.Thread)
  static public class AdditionState {

    int x;
    int y;

    @Setup(Level.Iteration)
    public void prepare() {
      Random random = new Random();
      x = random.nextInt();
      y = random.nextInt();
    }

    @TearDown(Level.Iteration)
    public void shutdown() {
      x = y = 0; // useless in this benchmark...
    }
  }
Listing 13a
  @GenerateMicroBenchmark
  @Warmup(iterations = 10, time = 3, timeUnit = TimeUnit.SECONDS)
  public int baseline(AdditionState state) {
    return state.x;
  }

  @GenerateMicroBenchmark
  @Warmup(iterations = 5, time = 5, timeUnit = TimeUnit.SECONDS)
  public int sum(AdditionState state) {
    return state.x + state.y;
  }

  @GenerateMicroBenchmark
  @Warmup(iterations = 10, time = 3, timeUnit = TimeUnit.SECONDS)
  public void baseline_blackhole(AdditionState state, BlackHole blackHole) {
    blackHole.consume(state.x);
  }

  @GenerateMicroBenchmark
  @Warmup(iterations = 5, time = 5, timeUnit = TimeUnit.SECONDS)
  public void sum_blackhole(AdditionState state, BlackHole blackHole) {
    blackHole.consume(state.x + state.y);
  }
}
Listing 13b
When a benchmark method returns a value, JMH takes it and consumes it into a Blackhole. Returning a value and using a Blackhole object are equivalent, as shown by the benchmark results in Listing 14.
Benchmark                            Mode  Samples        Mean  Mean error  Units
c.m.MyBenchmark.baseline            thrpt       60  527565.188    1531.198 ops/ms
c.m.MyBenchmark.baseline_blackhole  thrpt       60  528168.519     710.463 ops/ms
c.m.MyBenchmark.sum                 thrpt       60  439957.824     956.078 ops/ms
c.m.MyBenchmark.sum_blackhole       thrpt       60  439852.867    1001.242 ops/ms
Listing 14
The @TearDown annotation was illustrated for the sake of completeness, but we could clearly have omitted the shutdown() method for this simple benchmark. It is mostly useful for cleaning up resources such as files.
Our “wrong” benchmark, JMH-style. We can now use JMH to revisit the benchmark from the beginning of the article. The enclosing class looks like Listing 15.
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class GoodBench {

  public static double constant(double x1, double y1, double x2, double y2) {
    return 0.0;
  }

  public static double distance(double x1, double y1, double x2, double y2) {
    double dx = x2 - x1;
    double dy = y2 - y1;
    return sqrt((dx * dx) + (dy * dy));
  }

  @State(Scope.Thread)
  public static class Data {
    double x1 = 0.0;
    double y1 = 0.0;
    double x2 = 10.0;
    double y2 = 10.0;
  }

  // (...)
}
Listing 15
We will be measuring throughput in terms of operations per ms. Data is enclosed within an @State-annotated static inner class whose mutable fields will prevent Java HotSpot VM from applying some of the optimizations that we discussed earlier.
We are using two baselines. The first is an empty void method, and the second simply returns a constant double value, as shown in Listing 16. Benchmarking constant() and distance() is as simple as Listing 17.
@GenerateMicroBenchmark
public void baseline_return_void() {
}

@GenerateMicroBenchmark
public double baseline_return_zero() {
  return 0.0;
}
Listing 16
@GenerateMicroBenchmark
public double constant(Data data) {
  return constant(data.x1, data.y1, data.x2, data.y2);
}

@GenerateMicroBenchmark
public double distance(Data data) {
  return distance(data.x1, data.y1, data.x2, data.y2);
}
Listing 17
To put things into perspective, we also include flawed measurements subject to dead-code elimination and constant folding optimizations (see Listing 18).
@GenerateMicroBenchmark
public double distance_folding() {
  return distance(0.0, 0.0, 10.0, 10.0);
}

@GenerateMicroBenchmark
public void distance_deadcode(Data data) {
  distance(data.x1, data.y1, data.x2, data.y2);
}

@GenerateMicroBenchmark
public void distance_deadcode_and_folding() {
  distance(0.0, 0.0, 10.0, 10.0);
}
Listing 18
Finally, we can also provide a main method for this benchmark using the JMH builder API, which mimics the command-line arguments that can be given to the self-contained executable JAR. See Listing 19.
public static void main(String... args) throws RunnerException {
  Options opts = new OptionsBuilder()
      .include(".*.GoodBench.*")
      .warmupIterations(20)
      .measurementIterations(5)
      .measurementTime(TimeValue.milliseconds(3000))
      .jvmArgsPrepend("-server")
      .forks(3)
      .build();
  new Runner(opts).run();
}
Listing 19
Figure 1 shows the results as a bar chart with the mean error included for each benchmark.
Figure 1
Given the two baselines, we clearly see the effects of dead-code elimination and constant folding. The only meaningful measurement of distance() is the one where the return value is consumed by JMH and the parameters are passed through field values. All the other cases converge to either the performance of returning a constant double or that of an empty void method.

Devising Concurrent Benchmarks

JMH was designed with concurrent benchmarks in mind. These kinds of benchmarks are very difficult to measure correctly, because they involve several threads and inherently nondeterministic behaviors. Next, let’s examine concurrent benchmarking with JMH by comparing readers and writers over an incrementing long value. To do so, we use a pessimistic implementation based on a long value for which every access is protected by a synchronized block, and an optimistic implementation based on java.util.concurrent.atomic.AtomicLong. We want to compare the performance of each implementation depending on the proportion of readers and writers that we have.
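Before benchmarking them, the two implementations can be sketched side by side outside JMH (this harness-free demo is ours) just to show that they behave the same functionally; JMH is what will tell us how differently they perform under contention.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch (our own harness-free demo): the pessimistic and optimistic
// counters compared in this section, exercised with plain threads.
public class CounterDemo {

  static final class Pessimistic {
    private long value;
    private final Object lock = new Object();
    long get() { synchronized (lock) { return value; } }
    long incrementAndGet() { synchronized (lock) { return ++value; } }
  }

  public static void main(String[] args) throws InterruptedException {
    Pessimistic pessimistic = new Pessimistic();
    AtomicLong optimistic = new AtomicLong(); // lock-free compare-and-swap
    Runnable work = () -> {
      for (int i = 0; i < 100_000; i++) {
        pessimistic.incrementAndGet();
        optimistic.incrementAndGet();
      }
    };
    Thread[] threads = new Thread[8];
    for (int i = 0; i < threads.length; i++) threads[i] = new Thread(work);
    for (Thread t : threads) t.start();
    for (Thread t : threads) t.join();
    // Every increment is counted by both: 8 threads * 100_000 each.
    System.out.println(pessimistic.get() + " " + optimistic.get()); // 800000 800000
  }
}
```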
JMH has the ability to execute a group of threads with different benchmark code. We can specify how many threads will be allocated to a certain benchmark method. In our case, we will have cases with more readers than writers and, conversely, cases with more writers than readers.
Benchmarking the pessimistic implementation. We start with the following benchmark class code:
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class ConcurrentBench {
  // (...)
}
The pessimistic case is implemented using an inner class of ConcurrentBench, as shown in Listing 20.
@State(Scope.Group)
@Threads(8)
public static class Pessimistic {

  long value = 0L;
  final Object lock = new Object();

  @Setup(Level.Iteration)
  public void prepare() {
    value = 0L;
  }

  public long get() {
    synchronized (lock) {
      return value;
    }
  }

  public long incrementAndGet() {
    synchronized (lock) {
      value = value + 1L;
      return value;
    }
  }
}
Listing 20
The @State annotation specifies that there should be a shared instance per group of threads while running benchmarks. The @Threads annotation specifies that eight threads should be allocated to run the benchmarks (the default value is 4).
Benchmarking the pessimistic case is done through the methods shown in Listing 21. The @Group annotation gives a group name, while the @GroupThreads annotation specifies how many threads from the group should be allocated to a certain benchmark.
@GenerateMicroBenchmark
@Group("pessimistic_more_readers")
@GroupThreads(7)
public long pessimistic_more_readers_get(Pessimistic state) {
  return state.get();
}

@GenerateMicroBenchmark
@Group("pessimistic_more_readers")
@GroupThreads(1)
public long pessimistic_more_readers_incrementAndGet(Pessimistic state) {
  return state.incrementAndGet();
}

@GenerateMicroBenchmark
@Group("pessimistic_more_writers")
@GroupThreads(1)
public long pessimistic_more_writers_get(Pessimistic state) {
  return state.get();
}

@GenerateMicroBenchmark
@Group("pessimistic_more_writers")
@GroupThreads(7)
public long pessimistic_more_writers_incrementAndGet(Pessimistic state) {
  return state.incrementAndGet();
}
Listing 21
We hence have two groups: one with seven readers and one writer, and another with one reader and seven writers.
Benchmarking the optimistic implementation. This case is quite symmetrical, albeit with a different implementation (see Listing 22). The benchmark methods are also split in two groups, as shown in Listing 23.
@State(Scope.Group)
@Threads(8)
public static class Optimistic {

  AtomicLong atomicLong;

  @Setup(Level.Iteration)
  public void prepare() {
    atomicLong = new AtomicLong(0L);
  }

  public long get() {
    return atomicLong.get();
  }

  public long incrementAndGet() {
    return atomicLong.incrementAndGet();
  }
}
Listing 22
@GenerateMicroBenchmark
@Group("optimistic_more_readers")
@GroupThreads(7)
public long optimistic_more_readers_get(Optimistic state) {
  return state.get();
}

@GenerateMicroBenchmark
@Group("optimistic_more_readers")
@GroupThreads(1)
public long optimistic_more_readers_incrementAndGet(Optimistic state) {
  return state.incrementAndGet();
}

@GenerateMicroBenchmark
@Group("optimistic_more_writers")
@GroupThreads(1)
public long optimistic_more_writers_get(Optimistic state) {
  return state.get();
}

@GenerateMicroBenchmark
@Group("optimistic_more_writers")
@GroupThreads(7)
public long optimistic_more_writers_incrementAndGet(Optimistic state) {
  return state.incrementAndGet();
}
Listing 23
Execution and plotting. JMH offers a variety of output formats beyond plain-text console output, including JSON and CSV. The JMH configuration shown in Listing 24 writes the results to a .csv file.
public static void main(String... args) throws RunnerException {
  Options opts = new OptionsBuilder()
      .include(".*.ConcurrentBench.*")
      .warmupIterations(5)
      .measurementIterations(5)
      .measurementTime(TimeValue.milliseconds(5000))
      .forks(3)
      .result("results.csv")
      .resultFormat(ResultFormatType.CSV)
      .build();
  new Runner(opts).run();
}
Listing 24
The console output provides detailed results with metrics for each benchmarked method. In our case, we can distinguish the performance of reads and writes. There is also a consolidated performance result for the whole benchmark.
Figure 2
The resulting .csv file can be processed with a variety of tools, including spreadsheet software and plotting tools. For concurrent benchmarks, it contains only the consolidated results. Listing 25 is a processing example using the Python matplotlib library. The result is shown in Figure 2.
import numpy as np
import matplotlib.pyplot as plt

data = np.genfromtxt('results.csv', delimiter=',', names=True, dtype=None)

x = data['Mean']
y = np.arange(len(data['Benchmark']))
err = data['Mean_Error_999']

labels = []
for name in data['Benchmark']:
    labels.append(name[len('"bench.ConcurrentBench.'):-1])

plt.rcdefaults()
plt.barh(y, x, xerr=err, color='blue', ecolor='red', alpha=0.4, align='center')
plt.yticks(y, labels)
plt.xlabel("Performance (ns/op)")
plt.title("Benchmark")
plt.tight_layout()
plt.savefig('plot.png')
Listing 25
As we could expect, the pessimistic implementation is very predictable: reads and writes share a single intrinsic lock, which is consistent, albeit slow. The optimistic case takes advantage of compare-and-swap, and reads are very fast when there is low write contention. A word of warning, though: if we further increased contention with more writers, the optimistic implementation's performance could degrade below that of the pessimistic one.
This article introduced JMH, a benchmarking harness for the JVM. We started with our own benchmarking code and quickly realized that the JVM was performing optimizations that rendered the results meaningless. By contrast, JMH provides a coherent framework for writing benchmark code while avoiding common pitfalls. As usual, benchmarks should always be taken with a grain of salt. Microbenchmarks are peculiar, since stressing a small portion of code says little about how that code actually behaves as part of a larger application. Nevertheless, such benchmarks are great quality assets for performance-critical code, and JMH provides a reliable foundation for writing them correctly.