Hello geeks, and welcome! In this article, we will cover NumPy isclose(). For a better overall understanding, we will also look at its syntax and parameters. Then we will see the theory applied through a couple of examples. But first, let us get a brief overview of this function through its definition.
Suppose you have two arrays and a situation arises where you need to compare them element-wise. The function NumPy isclose() does that for you. It also gives you the option to set a tolerance, which comes in very handy when dealing with data from scientific experiments. On completion, it returns a boolean array. In the next section, we will look at the syntax associated with it.
SYNTAX
numpy.isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)
This is the general syntax for our function. Now in the next section, let us look at the various parameters associated with it.
PARAMETERS
1. a,b : array_like
This parameter represents the 2 input arrays that need to be compared.
2. rtol:float
This parameter represents the relative tolerance, i.e. the maximum allowed difference relative to the magnitude of b (the allowed error grows as rtol * |b|). It defaults to 1e-05.
3. atol:float
This parameter represents the absolute tolerance parameter, which can be understood as the minimum absolute difference allowed between the two values regardless of their magnitude. It defaults to 1e-08, and that default is not appropriate when comparing numbers much smaller than one.
4. equal_nan: bool
This parameter, if set, treats NaNs as equal. If True, NaNs in a will be considered equal to NaNs in b in the output array.
RETURN
Y: array_like
On completion, it returns a boolean array, which contains True where the values are close within the tolerance and False where they are not.
EXAMPLE
Now that we are done with the theory related to NumPy isclose(), this section will look at how the function works and how it helps us achieve our desired output. We will start with an elementary example and gradually move to more complicated ones.
1. Basic Numpy isclose() example
# input
import numpy as ppool

a = [2.67, 2.56]
b = [2.68, 2.56]
print(ppool.isclose(a, b))
Output:
[False True]
Above, we can see a fundamental example, nothing fancy, for understanding how the function works and how its syntax is applied. First, we imported the NumPy module, then defined 2 arrays. Then we used our syntax along with the print statement to get the desired output. Here we have not passed any tolerance parameter, so the defaults apply, and the function returns True only if the two values are equal or extremely close.
2. NumPy isclose() example with relative tolerance
import numpy as ppool

a = [1.0001e10, 1e-9]
b = [1e10, 1e-8]
print(ppool.isclose(a, b, rtol=.01))
Output:
[ True True]
Another example related to NumPy isclose(). Here we have followed steps similar to the first example, but we have also used the optional relative tolerance parameter and set it to ".01". We can verify the output by hand. Our first value is 1.0001e10, which is equivalent to 10001000000, and we compare it with 1e10, equivalent to 10000000000. We specified the relative tolerance to be at most .01. The relative difference in this case is 1e6 / 1e10 = .0001, which is well within the specified tolerance. Hence True; you can similarly check the 2nd case.
Why don't you try out the other parameter, atol, and see the result in that case?
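To sketch that exercise (the array values here are my own choice, not from the article):

```python
import numpy as ppool

a = [1.001, 2.5]
b = [1.002, 2.6]

# atol sets the minimum absolute difference that still counts as "close":
# the gap of 0.001 is within atol=0.01, but the gap of 0.1 is not
print(ppool.isclose(a, b, atol=0.01))
```

Output:

[ True False]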
NumPy isclose() vs allclose()
In this section, we will compare 2 different NumPy functions. We have already discussed NumPy isclose(), so here just look at the definition of NumPy allclose(): this function helps us find whether two arrays are element-wise equal within a tolerance. Now let us compare them side by side, which will help us understand better.
From the above comparison, we can observe that we have taken the same set of arrays, but the end result is different in the 2 cases. This is primarily because the isclose function returns a boolean for every single element after comparing, whereas in the second case, where we use allclose, we get a single boolean value for any number of elements after comparing.
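Since the comparison table itself did not survive the page extraction, here is a small code sketch of the same difference (arrays chosen by me for illustration):

```python
import numpy as ppool

a = [2.67, 2.56]
b = [2.68, 2.56]

# isclose: one boolean per element
print(ppool.isclose(a, b))    # [False  True]

# allclose: a single boolean, True only if EVERY element is close
print(ppool.allclose(a, b))   # False
```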
NumPy isclose() vs math isclose()
In this section, we will look at another comparison concerning the isclose() function. Math is an inbuilt library of Python. It provides a function isclose(), which is used to check whether 2 floating-point numbers are close.
Below you can look at a table showcasing the working of the 2 functions.
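The table is likewise missing here, so a quick sketch of the practical differences (values are my own):

```python
import math
import numpy as ppool

# math.isclose compares exactly two scalars,
# with keyword names rel_tol / abs_tol (default rel_tol=1e-09)
print(math.isclose(2.005, 2.125, abs_tol=0.25))               # True

# numpy.isclose works element-wise on whole arrays,
# with keyword names rtol / atol (default rtol=1e-05)
print(ppool.isclose([2.005, 1.0], [2.125, 3.0], atol=0.25))   # [ True False]
```

One subtle difference: numpy measures the relative tolerance against |b| only, so isclose(a, b) and isclose(b, a) can disagree, while math.isclose is symmetric.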
CONCLUSION
In this article, we covered the NumPy isclose(). Besides that, we have also looked at its syntax and parameters. For better understanding, we looked at a couple of examples. We varied the syntax and looked at the output for each case. We can conclude that Numpy isclose() helps us compare two different arrays within a tolerance.
I hope this article was able to clear all your doubts. In case you have any unsolved queries, feel free to write them below in the comment section. Done reading this? Why not read about the square root function next.
In this tutorial, we're going to work some more on our registration code within the __init__.py.
Before we go any further, let's install passlib with:
pip install passlib
Then, adding some imports to our __init__.py file:
from flask import Flask, render_template, flash, request, url_for, redirect, session
from wtforms import Form, BooleanField, TextField, PasswordField, validators
from passlib.hash import sha256_crypt
from MySQLdb import escape_string as thwart
import gc
Passlib will be used for password encryption, and the escape_string is used to protect against SQL injection attempts (hacking). The gc module is used for garbage collection (memory issues). We also add session to the flask imports, which is used for accessing the user-specific session / cookie information. We also import a bunch of the field stuff from wtforms.
Now we have some new register_page function code:
@app.route('/register/', methods=["GET","POST"])
def register_page():
    try:
        form = RegistrationForm(request.form)
        if request.method == "POST" and form.validate():
            username = form.username.data
            email = form.email.data
            password = sha256_crypt.encrypt((str(form.password.data)))
            c, conn = connection()

            # note the trailing comma: execute() expects a sequence of parameters
            x = c.execute("SELECT * FROM users WHERE username = (%s)",
                          (thwart(username),))

            if int(x) > 0:
                flash("That username is already taken, please choose another")
                return render_template('register.html', form=form)

            else:
                c.execute("INSERT INTO users (username, password, email, tracking) VALUES (%s, %s, %s, %s)",
                          (thwart(username), thwart(password), thwart(email),
                           thwart("/introduction-to-python-programming/")))
                conn.commit()
                flash("Thanks for registering!")
                c.close()
                conn.close()
                gc.collect()

                session['logged_in'] = True
                session['username'] = username

                return redirect(url_for('dashboard'))

        return render_template("register.html", form=form)

    except Exception as e:
        return(str(e))
There's a lot going on here. If you would like an in-depth walkthrough of this code, see the video.
Put simply, the code first checks whether the request method is a POST. Keep in mind, the user might just be loading the register page. If there is a POST request, then we want to gather the information in the form.
Once we have the information in the form, the next thing we want to do is connect to the database. Now we don't want to have two users with the same username, so we first want to see if that username already exists. If it does, then we want to tell them that username already exists, and let them try again.
If the username does not already exist, and we've made it to this point, that means we have a unique username, passwords that match, and an email, ready to insert into our database.
So we insert to the database, flash a message to the user thanking them to register, and you're done.
When you're all set with your insertions, you need to make sure you always run conn.commit(), which "saves" your changes to the database. If you forget to do this, your changes will not be saved.
Finally, we use gc.collect() to help keep memory waste down.
Notice also that we happen to log in our user after they register, using the flask session functionality. | https://pythonprogramming.net/flask-registration-tutorial/ | CC-MAIN-2019-26 | refinedweb | 517 | 67.15 |
A function is a block of code that performs a specific task.
Suppose we need to create a program to create a circle and color it. We can create two functions to solve this problem:
- a function to draw the circle
- a function to color the circle
Dividing a complex problem into smaller chunks makes our program easy to understand and reusable.
There are two types of function:
- Standard Library Functions: Predefined in C++
- User-defined Function: Created by users
In this tutorial, we will focus mostly on user-defined functions.
C++ User-defined Function
C++ allows the programmer to define their own function.
A user-defined function groups code to perform a specific task and that group of code is given a name (identifier).
When the function is invoked from any part of the program, it all executes the codes defined in the body of the function.
C++ Function Declaration
The syntax to declare a function is:
returnType functionName (parameter1, parameter2,...) {
    // function body
}
Here's an example of a function declaration.
// declaring a function
void greet() {
    cout << "Hello there!";
}
Calling a Function
In the above program, we have declared a function named greet(). To use the greet() function, we need to call it.

Here's how we can call the above greet() function.
int main() {
    // calling a function
    greet();
}
Example 1: Display a Text
#include <iostream>
using namespace std;

// declaring a function
void greet() {
    cout << "Hello there!";
}

int main() {
    // calling the function
    greet();

    return 0;
}
Output
Hello there!
Function Parameters
As mentioned above, a function can be declared with parameters. A parameter is a variable listed in the function declaration; the value passed to it when the function is called is known as an argument.
For example, let us consider the function below:
void printNum(int num) {
    cout << num;
}
Here, the int variable num is the function parameter.
We pass a value to the function parameter while calling the function.
int main() {
    int n = 7;

    // calling the function
    // n is passed to the function as argument
    printNum(n);

    return 0;
}
Example 2: Function with Parameters
// program to display numbers
#include <iostream>
using namespace std;

// display the numbers
void displayNum(int n1, double n2) {
    cout << "The int number is " << n1 << endl;
    cout << "The double number is " << n2;
}

int main() {
    int num1 = 5;
    double num2 = 5.5;

    // calling the function
    displayNum(num1, num2);

    return 0;
}
The int number is 5
The double number is 5.5
In the above program, we have used a function that has one int parameter and one double parameter.
We then pass num1 and num2 as arguments. These values are stored by the function parameters n1 and n2 respectively.
Note: The type of the arguments passed while calling the function must match with the corresponding parameters defined in the function declaration.
Return Statement
In the above programs, we have used void in the function declaration. For example,
void displayNumber() {
    // code
}
This means the function is not returning any value.
It's also possible to return a value from a function. For this, we need to specify the returnType of the function during function declaration.

Then, the return statement can be used to return a value from a function.

For example,
int add (int a, int b) {
    return (a + b);
}
Here, we have the data type int instead of void. This means that the function returns an int value.

The code return (a + b); returns the sum of the two parameters as the function value.
The return statement denotes that the function has ended. Any code after return inside the function is not executed.
Example 3: Add Two Numbers
// program to add two numbers using a function
#include <iostream>
using namespace std;

// declaring a function
int add(int a, int b) {
    return (a + b);
}

int main() {
    int sum;

    // calling the function and storing
    // the returned value in sum
    sum = add(100, 78);

    cout << "100 + 78 = " << sum << endl;

    return 0;
}
Output
100 + 78 = 178
In the above program, the add() function is used to find the sum of two numbers.

We pass two int literals 100 and 78 while calling the function.

We store the returned value of the function in the variable sum, and then we print it.

Notice that sum is a variable of int type. This is because the return value of add() is of int type.
Function Prototype
In C++, the code of function declaration should be before the function call. However, if we want to define a function after the function call, we need to use the function prototype. For example,
// function prototype
void add(int, int);

int main() {
    // calling the function before declaration.
    add(5, 3);
    return 0;
}

// function definition
void add(int a, int b) {
    cout << (a + b);
}
In the above code, the function prototype is:
void add(int, int);
This provides the compiler with information about the function name and its parameters. That's why we can use the code to call a function before the function has been defined.
The syntax of a function prototype is:
returnType functionName(dataType1, dataType2, ...);
Example 4: C++ Function Prototype
// using function definition after main() function
// function prototype is declared before main()
#include <iostream>
using namespace std;

// function prototype
int add(int, int);

int main() {
    int sum;

    // calling the function and storing
    // the returned value in sum
    sum = add(100, 78);

    cout << "100 + 78 = " << sum << endl;

    return 0;
}

// function definition
int add(int a, int b) {
    return (a + b);
}
Output
100 + 78 = 178
The above program is nearly identical to Example 3. The only difference is that here, the function is defined after the function call.
That's why we have used a function prototype in this example.
Benefits of Using User-Defined Functions
- Functions make the code reusable. We can declare them once and use them multiple times.
- Functions make the program easier as each small task is divided into a function.
- Functions increase readability.
C++ Library Functions
Library functions are the built-in functions in C++ programming.
Programmers can use library functions by invoking the functions directly; they don't need to write the functions themselves.
Some common library functions in C++ are sqrt(), abs(), isdigit(), etc.
In order to use library functions, we usually need to include the header file in which these library functions are defined.
For instance, in order to use mathematical functions such as sqrt() and abs(), we need to include the header file cmath.
Example 5: C++ Program to Find the Square Root of a Number
#include <iostream>
#include <cmath>
using namespace std;

int main() {
    double number, squareRoot;

    number = 25.0;

    // sqrt() is a library function to calculate the square root
    squareRoot = sqrt(number);

    cout << "Square root of " << number << " = " << squareRoot;

    return 0;
}
Output
Square root of 25 = 5
In this program, the sqrt() library function is used to calculate the square root of a number.

The function declaration of sqrt() is defined in the cmath header file. That's why we need to use the code #include <cmath> to use the sqrt() function.
To learn more, visit C++ Standard Library functions. | https://www.programiz.com/cpp-programming/function | CC-MAIN-2021-04 | refinedweb | 1,113 | 61.87 |
ZparkIO
Boiler plate framework to use Spark and ZIO together.
The goal of this framework is to blend Spark and ZIO in an easy to use system for data engineers.
Allowing them to use Spark is a new, faster, more reliable way, leveraging ZIO power.
Table of Contents
- What is this library for ?
- Public Presentation
- Why would you want to use ZIO and Spark together?
- How to use?
- Examples
- Authors
What is this library for ?
This library will implement all the boiler plate for you to be able to include Spark and ZIO in your ML project.
It can be tricky to use ZIO to save an instance of Spark to reuse in your code and this library solve all the boilerplate problem for you.
Public Presentation
Feel free to look at the slides on Google Drive or on SlideShare presented during the ScalaSF meetup on Thursday, March 26, 2020. You can also watch the presentation on Youtube.
ZparkIO was on version 0.7.0 at the time, so things might be out of date.
Why would you want to use ZIO and Spark together?
From my experience, using ZIO/Future in combination with Spark can speed up the performance of your job drastically. The reason is that sources (BigQuery, PostgreSQL, S3 files, etc...) can be fetched in parallel while the computations are not on hold. Obviously ZIO is much better than Future, but it is harder to set up. Not anymore!
Other nice aspects of ZIO are the error/exception handling and the built-in retry helpers, which make retrying failed tasks a breeze within Spark.
How to use?
I hope that you are now convinced that ZIO and Spark are a perfect match. Let's see how to use ZparkIO.
Include dependencies
First include the library in your project:
libraryDependencies += "com.leobenkel" %% "zparkio" % "[VERSION]"
This library depends on Spark, ZIO and Scallop.
Unit-test
You can also add
libraryDependencies += "com.leobenkel" %% "zparkio-test" % "[VERSION]"
To get access to helper function to help you write unit tests.
How to use in your code?
There is a project example you can look at. But here are the details.
Main
The first thing you have to do is extends the
ZparkioApp trait. For an example you can look at the ProjectExample: Application.
Spark
By using this architecture, you will have access to SparkSession anywhere in your ZIO code, via:
import com.leobenkel.zparkio.Services._

for {
  spark <- SparkModule()
} yield {
  ???
}
for instance you can see its use here.
Command lines
You will also have access to all your command lines, automatically parsed, generated and accessible to you via CommandLineArguments; it is recommended to write such a helper function to make the rest of your code easier to use.
Then using it, like here, is easy.
Helpers
In the implicits object, that you can include everywhere. You are getting specific helper functions to help streamline your projects.
Unit test
Using this architecture will literally allow you to run your main as a unit test.
Examples
Simple example
Take a look at the simple project example to see example of working code using this library: SimpleProject.
More complex architecture
A full-fledged production-ready project will obviously need more code than the simple example. For this purpose, and upon the suggestion of several awesome people, I added a more complex project. This is a WIP and more will be added as I go. MoreComplexProject.
NAME
setfsuid - set user identity used for file system checks
SYNOPSIS
#include <unistd.h> /* glibc uses <sys/fsuid.h> */

int setfsuid(uid_t fsuid);
DESCRIPTION
The system call setfsuid() sets the user ID that the Linux kernel uses to check for all accesses to the file system. Normally, the value of fsuid will shadow the value of the effective user ID. In fact, whenever the effective user ID is changed, fsuid will also be changed to the new value of the effective user ID. Explicit calls to setfsuid() and setfsgid() are usually used only by programs such as the Linux NFS server that need to change what user and group ID is used for file access without a corresponding change in the real or effective user and group IDs.
RETURN VALUE
On success, the previous value of fsuid is returned. On error, the current value of fsuid is returned.
CONFORMING TO
setfsuid() is Linux-specific and should not be used in programs intended to be portable. It is present since Linux 1.1.44 and in libc since libc 4.7.6.
NOTES
When glibc determines that the argument is not a valid user ID, it will return -1 and set errno to EINVAL without attempting the system call. A call to setfsuid() will succeed only if the caller is privileged (has the CAP_SETUID capability) or if fsuid matches either the caller's real user ID, effective user ID, saved set-user-ID, or the current value of fsuid.
SEE ALSO
kill(2), setfsgid(2), capabilities(7), credentials(7)
COLOPHON
This page is part of release 2.77 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/hardy/man2/setfsuid32.2.html | CC-MAIN-2013-48 | refinedweb | 194 | 59.4 |
TDD with Pylons
I’ve written about test driven development and Pylons before but there have apparently been some changes to how it all works since Pylons 1.0. I didn’t run across anything in the documentation detailing the changes, specifically to the template context global and how you access it in your tests.
From the Pylons docs on testing:
Pylons will provide several additional attributes for the paste.fixture response object that let you access various objects that were created during the web request:
- session — Session object
- req — Request object
- c — Object containing variables passed to templates
- g — Globals object
To use them, merely access the attributes of the response after you've used a get/post command:

response = app.get('/some/url')
assert response.session['var'] == 4
assert 'REQUEST_METHOD' in response.req.environ
As it turns out, that’s sort of true. I have a test that looks like this:
from nbapowerrank.tests import *

class TestGamedetailsController(TestController):
    def test_index(self):
        response = self.app.get(url(controller='admin/gamedetails', action='index'))
        self.assertEqual(len(response.c.games) > 0, True)
and a controller that looks like this:
import logging
from datetime import datetime, timedelta

from pylons import tmpl_context as c  # the "c" alias used below

from nbapowerrank.lib.base import BaseController, render
from nbapowerrank.model import meta
from nbapowerrank.model.game import Game

log = logging.getLogger(__name__)

class GamedetailsController(BaseController):
    def __before__(self):
        self.game_q = meta.Session.query(Game)

    def index(self):
        yesterday = datetime.today() - timedelta(1)
        c.games = self.game_q.filter_by(gamedate=yesterday).all()
        return render('/gamedetails.mako')
Unfortunately, contra the documentation that says the “c” alias will be available on the test response object, that test always fails with an AttributeError stating that in fact, the c attribute does not exist. It frustrated me even more because all the other attributes that are supposed to be on the response like session and g were in fact there. After doing some random digging, I came across this in the Pylons 1.0 roadmap: “Deprecate pylons.c, pylons.g, and pylons.buffet. These have been disrecommended since 0.9.7.”
Apparently, they are/have deprecated using the c alias for tmpl_context even though when you create a controller under Pylons 1.0, it still aliases the context as c. Sigh. So in order to test data that you have added to your context for templating, your test should use the explicit tmpl_context instead of the c like this:
from nbapowerrank.tests import *

class TestGamedetailsController(TestController):
    def test_index(self):
        response = self.app.get(url(controller='admin/gamedetails', action='index'))
        self.assertEqual(len(response.tmpl_context.games) > 0, True)
And then all will be well with your world. I do have to say that compared to both ASP.Net MVC and Rails, the testing support in Python/pylons seems to be a second class citizen. I’m not sure why that is because as a dynamic language, it seems to benefit greatly from a TDD approach. Maybe it’s just me getting back into a framework after a year’s worth of changes.
Anyway, that may help some people out there trying to search in the googleplex about TDD and Pylons. | http://mentalpandiculation.com/2010/11/tdd-with-pylons/ | CC-MAIN-2014-15 | refinedweb | 511 | 50.53 |
Subject: Re: [Boost-users] [boost] [Fit] formal review starts today
From: paul Fultz (pfultz2_at_[hidden])
Date: 2016-03-15 04:20:27
> On Monday, March 14, 2016 11:15 PM, Lee Clagett <forum_at_[hidden]> wrote:
> > On Mon, 14 Mar 2016 01:47:21 +0000 (UTC)
> paul Fultz <pfultz2_at_[hidden]> wrote:
>> On Sunday, March 13, 2016 5:17 PM, Lee Clagett <forum_at_[hidden]>
>> wrote:
> [snip]
>>
>> > [snip]
>
> FWIW, I am starting to like the lambda functionality in Fit.
>
>> > [snip]
>
> Most of the adaptors return a callable that is ready-for-use, whereas
> the lazy and decorator adaptors have not finished configuration. The
> partial and pipable adaptors have a third characteristic in that they
> _may_ be ready-for-use, depending on whether the function can be
> invoked.
>
> In this particular case, I was suggesting `fit::lazy(foo, _1, 2)`
> simply because it would return a callable that was ready-for-use,
> which I thought was more consistent with the other adaptors.
But adaptors take functions as their parameters. If there are other
parameters than just functions, then a decorator is used.
>
>>
>> [snip]
>
> You can use pipable operators in fit::flow? Everything returns an
> immediate value except for when a pipable-adapted function is
> partially-invoked, which returns a `pipe_closure` function. If the
> `operator|` is used at all a value is returned (which could be, but is
> unlikely to be another function). Can you give an example?
Yes, instead of writing:
auto r = x | f(y) | g(z);
The user can write:
auto r = flow(f(y), g(z))(x);
>
>>
> [snip]
>>
>> >
>> >> [snip]
>
> Is this a problem in conjunction with the conditional checks? A "SFINAE
> expression check" would instantiate a constexpr function that was
> referenced in the trailing return, causing a compile failure ...
> something like that? Rather unfortunate, makes the code is a bit harder
> to read IMO. Not terrible.
Yes, that is correct, which is unfortunate.
>
> [snip]
>> > [snip]
>>
>
> I forgot about the return type compution of Fit - constexpr does not
> appear to be the issue in this case. The problem is asking the compiler
> to compute the return type of a recursive function. The compiler cannot
> "see" the depth unless it runs constexpr simultaneously while
> computing - and even then it would have to do a dead-code optimization
> in the ternary operator or something. Head-hurting for sure.
>
> I added constexpr to repeat_integral_decorator<0>::operator(), and
> defined FIT_RECURSIVE_CONSTEXPR_DEPTH=0 then ran test/repeat.cpp with
> Clang 3.4 and Gcc 4.8 both of which compiled just fine. Only a
> two-phase approach is needed here (DEPTH=1), where the first phase has
> the full expression for SFINAE purposes and normalizing the value with
> the ternary operator, and the second phase is a fixed return type based
> on the forwarded value.
Hmm, that may work. I originally tried a two-phase approach at first, which
didn't work. However, it may work, at least for fit::repeat and
fit::repeat_while. I don't think it will work at all for fit::fix even with an
explicit return type. I will need to investigate this more.
>
> Is there a bug/issue in an older Gcc, Clang, or even MSVC that needed
> this larger loop unrolling? It is still unfortunate that a stackoverlow
> is possible with this implementation
Yes, it is. I need to look into a way to prevent or minimize this.
> - is it worth mentioning the
> limitation in the documentation?
Yes, I do mention the limitation in fit::fix, but it should be mentioned with
fit::repeat as well.
>
>> > [snip]
>
> They have trouble grokking the static_cast in certain situations?
> Because the forward function still has the same static_cast
> internally ...
Yep they do. One issue with gcc 4.6 is that it can't mangle a static_cast, and
MSVC has its own funkiness going on.
>
>> > [snip]
>
> The adapted flip function _is a_ phoenix actor due to inheritance.
And thats the problem right there. It shouldn't still be a phoenix actor after
inheritance. Using fit::lazy, it becomes a bind expression, however, after
using flip, this is no longer the case:
auto lazy_sum = lazy(_+_)(_1, _2);
auto f = flip(lazy_sum);
static_assert(std::is_bind_expression<decltype(lazy_sum)>::value, "A bind expression");
static_assert(!std::is_bind_expression<decltype(f)>::value, "Not a bind expression");
Furthermore, your example doesn't compile when using Fit's placeholders.
> All> Fit adapted functions have this anti-feature. Or at least I think its an
> anti-feature.>
> [snip]
>>
>> > [snip]
>
> I meant that invoking a function adapted by `static_` could (and likely
> will) be slightly slower due to the usage of the function local static.
> The compiler/linker has to initialize the static before first use,
> which is allowed to occur before main. So either the compiler
> determines the global order of initialization (and initialize statics
> that are optionally used), or it generates initialization in a
> thread-safe way.
>
> I just disassembled code from Gcc 4.8 -O2/-O3 and Clang 3.4 -O2/-O3 on
> Linux, and a double-check lock scheme is used even if the `this`
> variable is *not* used. So avoiding this adaptor would be preferred for
> potential performance reasons, which I think is worth a documentation
> note.
Thats something, I will note in the documention then.
>
> [snip]
>> >
>> >
>> >> * thought more work was needed before inclusion.
Of course more work will be needed for final inclusion.
> A few implementation details (mainly inheritance usage and the iffy reinterpret_cast for
> lambda), and some work on documentation/design. If the documentation
> had recommendations for some edge cases, I would not have to guess
> whether they were considered during development.
>
> For instance, should there be some documentation explaining how the
> library "tests" for `operator()` in certain adaptors, and how
> templates
> (SFINAE/hard errrors) play a role:
>
> int main() {
> namespace arg = boost::phoenix::placeholders;
> const auto negate = (!arg::_1);
> const auto sub = (arg::_1 - arg::_2);
>
> // does not compile, hard-error with one argument call
> // std::cout << fit::partial(sub)(0)(1) << std::endl;
And that hard error comes from phoenix, of course, as it does compile using the Fit
placeholders:
const auto sub = (_1 - _2);
std::cout << partial(sub)(0)(1) << std::endl;
>
> // outputs -1
> std::cout << fit::partial(sub)(0, 1) << std::endl;
>
> // outputs 1
> std::cout << fit::conditional(negate, sub)(0) << std::endl;
>
> // outputs 1
> std::cout << fit::conditional(negate, sub)(0, 1) << std::endl;
>
> // does not compile, hard-error with one argument call -
> // the ambiguous overload is irrelevant to the error
> // std::cout << fit::match(negate, sub)(0) << std::endl;
And that works with Fit placeholders.
>
> // does not compile, ambiguous (both have variadic overloads)
> // std::cout << fit::match(negate, sub)(0, 1) << std::endl;
This does not, and is a bug. I am going to look into that.
> return 0;
> }
>
> Even if Phoenix is old news with Fit, these types of errors will show
> up in user code. Can a user assume that conditional will never
> instantiate the next `operator()` if the previous succeeded?
Yes, fit::conditional is meant to be "lazy" in instantiating the functions. In
fact, the library relies on this internally. I will make sure to document this
to make that clearer.
> Or should
> user code always strive to be SFINAE friendly?
Yes, of course. If the user doesn't properly constrain their functions, how
can they expect the library to detect the callability of the function?
> When does it not> matter? Hopefully I am not being too difficult
>
Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net | https://lists.boost.org/boost-users/2016/03/85883.php | CC-MAIN-2020-24 | refinedweb | 1,252 | 56.96 |
17 February 2012 09:26 [Source: ICIS news]
NAGOYA (ICIS)--Pan Pacific Copper (PPC) restarted its copper smelter and sulphuric acid plant in Saganoseki, southern Japan.
The smelter and sulphuric acid unit, which produces close to 1.5m tonnes/year of sulphuric acid and was shut down unexpectedly on 7 January, restarted on 13 February – a day ahead of the scheduled restart date, source said.
Suppliers are concerned that the restart of the smelter would contribute to an oversupply situation in
Most sulphuric acid facilities are currently running at full rates, while demand, both on the domestic and export fronts, has weakened, sources said.
The PPC outage had helped balance the sulphuric acid supply-demand ratio in
An affiliated smelter in South Korea, LS Nikko Copper Inc, also exported approximately 35,000 tonnes on behalf of PPC | http://www.icis.com/Articles/2012/02/17/9533130/japans-pan-pacific-copper-resumes-sulphuric-acid-production.html | CC-MAIN-2014-41 | refinedweb | 138 | 53.34 |
Name | Synopsis | Description | Examples | Return Values | Errors | Attributes | See Also | Notes
cc [flag]... [file]... -lgen [library]...
#include <regexpr.h> char *compile(char *instring, char *expbuf, const char *endbuf);
int step(const char *string, const char *expbuf);
int advance(const char *string, const char *expbuf);
extern char *loc1, loc2, locs;
extern int nbra, regerrno, reglength;
extern char *braslist[], *braelist[];
These routines are used to compile regular expressions and match the compiled expressions against lines. The regular expressions compiled are in the form used by ed(1).
The parameter instring is a null-terminated string representing the regular expression.
The parameter expbuf points to the place where the compiled regular expression is to be placed. If expbuf is NULL, compile() uses malloc(3C) to allocate the space for the compiled regular expression. If an error occurs, this space is freed. It is the user's responsibility to free unneeded space after the compiled regular expression is no longer needed.
The parameter endbuf is one more than the highest address where the compiled regular expression may be placed. This argument is ignored if expbuf is NULL. If the compiled expression cannot fit in (endbuf-expbuf) bytes, compile() returns NULL and regerrno (see below) is set to 50.
The parameter string is a pointer to a string of characters to be checked for a match. This string should be null-terminated.
The parameter expbuf is the compiled regular expression obtained by a call of the function compile().
The function step() returns non-zero if the given string matches the regular expression, and zero if the expressions do not match. If there is a match, two external character pointers are set as a side effect to the call to step(). The variables set in step() are loc1 and loc2. loc1 is a pointer to the first character that matched the regular expression. The variable loc2 points to the character after the last character that matches the regular expression. Thus if the regular expression matches the entire line, loc1 points to the first character of string and loc2 points to the null at the end of string.
The purpose of step() is to step through the string argument until a match is found or until the end of string is reached. If the regular expression begins with ^, step() tries to match the regular expression at the beginning of the string only.
The advance() function is similar to step(); but, it only sets the variable loc2 and always restricts matches to the beginning of the string.
If one is looking for successive matches in the same string of characters, locs should be set equal to loc2, and step() should be called with string equal to loc2. locs is used by commands like ed and sed so that global substitutions like s/y*//g do not loop forever, and is NULL by default.
The external variable nbra is used to determine the number of subexpressions in the compiled regular expression. braslist and braelist are arrays of character pointers that point to the start and end of the nbra subexpressions in the matched string. For example, after calling step() or advance() with string sabcdefg and regular expression \(abcdef\), braslist[0] will point at a and braelist[0] will point at g. These arrays are used by commands like ed and sed for substitute replacement patterns that contain the \n notation for subexpressions.
Note that it is not necessary to use the external variables regerrno, nbra, loc1, loc2 locs, braelist, and braslist if one is only checking whether or not a string matches a regular expression.
#include<regexpr.h> . . . if(compile(*argv, (char *)0, (char *)0) == (char *)0) regerr(regerrno); . . . if (step(linebuf, expbuf)) succeed( );
If compile() succeeds, it returns a non-NULL pointer whose value depends on expbuf. If expbuf is non-NULL, compile() returns a pointer to the byte after the last byte in the compiled regular expression. The length of the compiled regular expression is stored in reglength. Otherwise, compile() returns a pointer to the space allocated by malloc(3C).
The functions step() and advance() return non-zero if the given string matches the regular expression, and zero if the expressions do not match.
If an error is detected when compiling the regular expression, a NULL pointer is returned from compile() and regerrno is set to one of the non-zero error numbers indicated below:
See attributes(5) for descriptions of the following attributes:
ed(1), grep(1), sed(1), malloc(3C), attributes(5), regexp(5)
When compiling multi-threaded applications, the _REENTRANT flag must be defined on the compile line. This flag should only be used in multi-threaded applications.
Name | Synopsis | Description | Examples | Return Values | Errors | Attributes | See Also | Notes | https://docs.oracle.com/cd/E19082-01/819-2246/regexpr-3gen/index.html | CC-MAIN-2018-13 | refinedweb | 785 | 60.45 |
Develop a simple Hello World email client application by using the EWS Managed API.
Last modified: December 05, 2013
Applies to: EWS Managed API | Exchange Server 2010
Note: This content applies to the EWS Managed API 2.0 and earlier versions. For the latest information about the EWS Managed API, see Web services in Exchange.
In this article
Set up your EWS Managed API development environment
What can you do with the EWS Managed API?
Beyond the basics: Learn more about the EWS Managed API
Additional resources
You can use the EWS Managed API to create client applications that target Exchange Web Services (EWS). The EWS Managed API provides an intuitive, easy-to-use object model for sending and receiving web service messages. You can access nearly all the information stored in an Exchange mailbox by using the EWS Managed API.
To create an EWS Managed API client application, make sure that you have the following:
A version of the EWS Managed API. This example will work with all versions of the EWS Managed API. Note that we are using the Exchange_2007SP1 schema version as that is the earliest shared service version.
A mailbox on an Exchange server that is running a version of Exchange starting with Exchange Server 2007, including Exchange Online. You must have the user name and credentials of the account. By default, direct EWS access is enabled for all Exchange Online plans except for the Kiosk plan.
A version of Visual Studio starting with Visual Studio 2005. In this example, we use Visual Studio 2010. If you don’t have Visual Studio installed, you can download Visual C# 2010 Express.
A version of the .NET Framework starting with the .NET Framework 3.5.
Familiarity with C#.
You can use the EWS Managed API to perform many tasks related to the information that is stored in an Exchange mailbox. We'll start by showing you how to use the EWS Managed API to create a simple Hello World email application.
The following table summarizes the basic tasks that you have to complete to create an EWS Managed API client application.
Step 1: Create a project for your first EWS Managed API client application
Start Visual Studio. On the File menu, choose New Project. The New Project dialog box opens.
From the Installed, right-click References and choose Add Reference from the context menu. The Add Reference dialog box opens.
Choose the Browse tab. Browse to the location where you installed the EWS Managed API dll. For this example, the path is the following: C:\Program Files\Microsoft\Exchange\Web Services\1.2\. The path will vary based on whether you download the 32 or 64 bit version of the Microsoft.Exchange.WebServices.dll. Choose Microsoft.Exchange.WebServices.dll and select OK. This adds the EWS Managed API reference to your project.
If you are using the EWS Managed API 1.2.1 or EWS Managed API 2.0, change the HelloWorld project to target the .NET Framework 4.
Now that you have your project set up and a reference to the EWS Managed API, you are ready to create your first application. To keep things simple, we will add our code to the Program.cs file. In the next step, we will develop the basic code to write most EWS Managed API client applications.
Step 3: Set up certificate validation
Versions of Exchange starting with Exchange 2007 Service Pack 1 (SP1), including Exchange Online, use self-signed X509 certificates to authenticate calls to the EWS client. You have to set up a certificate validation callback method in order for calls to your Exchange server to succeed.
Include references to the .NET Framework namespaces. Add the following references to the top of Program.cs.
Create a certificate validation callback method. Add the following code in Program.cs after the Main(string[] args) method.; } }
Add your certificate validation callback method to the ServicePointManager by adding the following code to the beginning of the Main(string[] args) method. This callback has to be available before you make any calls to the EWS end point.
You now have a client that can trust an Exchange server that has self-signed certificate.
Step 4: Set up URL redirection validation
Add the following redirection validation callback method after the Main(string[] args) method. This validates whether the redirected URL is using Transport Layer Security.
private static bool RedirectionUrlValidationCallback(string redirectionUrl) { // The default for the validation callback is to reject the URL. bool result = false; Uri redirectionUri = new Uri(redirectionUrl); // Validate the contents of the redirection URL. In this simple validation // callback, the redirection URL is considered valid if it is using HTTPS // to encrypt the authentication credentials. if (redirectionUri.Scheme == "https") { result = true; } return result; }
This validation callback will be passed to the ExchangeService object in the step 5.
Step 5: Prepare the ExchangeService object
Add a using directive reference to the EWS Managed API. Add the following code after the last using directive at the top of Program.cs.
In the Main method, after the CertificateValidationCallback is added to the ServicePointManager, instantiate the ExchangeService object with the service version you intend to target. In this example, we target the earliest version of the EWS schema.
If you are targeting an on-premises Exchange server and your client is domain joined, proceed to step 4. If you client is targeting an Exchange Online service, you have to pass explicit credentials. Add the following code after the instantiation of the ExchangeService object and set the credentials for your mailbox account.
Domain-joined clients that target an on-premise Exchange server can use the default credentials of the user who is logged on, assuming the credentials are associated with a mailbox. Add the following code after the instantiation of the ExchangeService object.
Your client is ready to make the first call to the Autodiscover service to get the service URL for calls to the EWS service.
The AutodiscoverUrl method on the ExchangeService object performs a call to the Autodiscover service to get the service URL. If this call is successful, the URL property on the ExchangeService object will be set with the service URL. Pass the user principal name and the RedirectionUrlValidationCallback to the AutodiscoverUrl method. Add the following code after the credentials have been specified in step 4 or 5.
At this point, your client is set up to make calls to EWS to access mailbox data.
Step 6: Create your first Hello World email message
After the AutodiscoverUrl method call, instantiate a new EmailMessage object and pass in the service object you created.
You now have an email message instantiated on the client on which the service binding is set. Any calls initiated on the EmailMessage object will be targeted at the service.
Now set the To: line recipient of the email message. For this example, use your your SMTP address.
Set the subject and the body of the email message.
You are now ready to send your first email message by using the EWS Managed API. The Send method will call the service and submit the email message for delivery.
You are ready to run your Hello World application. Select F5. A blank console window will open. You will not see anything in the console window while your application authenticates, follows Autodiscover redirections, and then makes its first call to create an email message. If you want to see the actual calls being made, add the following two lines of code before the AutodiscoverUrl method is called.
For information about how to capture the calls into a file, see Tracing EWS requests by using the EWS Managed API 2.0.
You now have a working EWS Managed API client application. For your convenience, the following example shows all the code that you added to create your Hello World application.
static void Main(string[] args) { ServicePointManager.ServerCertificateValidationCallback = CertificateValidationCallBack;(); }
This article shows you how to create your first EWS Managed API client application. You can now start using EWS to access your mailbox data. You can continue to expand your client application and perform specific tasks by using the EWS Managed API. To learn more, see the resources listed in the following table. | https://msdn.microsoft.com/library/office/jj220499(exchg.80) | CC-MAIN-2017-22 | refinedweb | 1,369 | 57.16 |
DEBSOURCES
Skip Quicknav
sources / cpio / 2.4.2-32 / tar
/* Extended tar format from POSIX.1.
Written by David J. MacKenzie.. */
#ifndef _TAR_H
#define _TAR_H 1
/* A tar archive consists of 512-byte blocks.
Each file in the archive has a header block followed by 0+ data blocks.
Two blocks of NUL bytes indicate the end of the archive. */
/* The fields of header blocks:
All strings are stored as ISO 646 (approximately ASCII) strings.
Fields are numeric unless otherwise noted below; numbers are ISO 646
representations of octal numbers, with leading zeros as needed.
linkname is only valid when typeflag==LNKTYPE. It doesn't use prefix;
files that are links to pathnames >100 chars long can not be stored
in a tar archive.
If typeflag=={LNKTYPE,SYMTYPE,DIRTYPE} then size must be 0.
devmajor and devminor are only valid for typeflag=={BLKTYPE,CHRTYPE}.
chksum contains the sum of all 512 bytes in the header block,
treating each byte as an 8-bit unsigned value and treating the
8 bytes of chksum as blank characters.
uname and gname are used in preference to uid and gid, if those
names exist locally.
Field Name Byte Offset Length in Bytes Field Type
name 0 100 NUL-terminated if NUL fits
mode 100 8
uid 108 8
gid 116 8
size 124 12
mtime 136 12
chksum 148 8
typeflag 156 1 see below
linkname 157 100 NUL-terminated if NUL fits
magic 257 6 must be TMAGIC (NUL term.)
version 263 2 must be TVERSION
uname 265 32 NUL-terminated
gname 297 32 NUL-terminated
devmajor 329 8
devminor 337 8
prefix 345 155 NUL-terminated if NUL fits
If the first character of prefix is '\0', the file name is name;
otherwise, it is prefix/name. Files whose pathnames don't fit in that
length can not be stored in a tar archive. */
/* The bits in mode: */
#define TSUID 04000
#define TSGID 02000
#define TSVTX 01000
#define TUREAD 00400
#define TUWRITE 00200
#define TUEXEC 00100
#define TGREAD 00040
#define TGWRITE 00020
#define TGEXEC 00010
#define TOREAD 00004
#define TOWRITE 00002
#define TOEXEC 00001
/* The values for typeflag:
Values 'A'-'Z' are reserved for custom implementations.
All other values are reserved for future POSIX.1 revisions. */
#define REGTYPE '0' /* Regular file (preferred code). */
#define AREGTYPE '\0' /* Regular file (alternate code). */
#define LNKTYPE '1' /* Hard link. */
#define SYMTYPE '2' /* Symbolic link (hard if not supported). */
#define CHRTYPE '3' /* Character special. */
#define BLKTYPE '4' /* Block special. */
#define DIRTYPE '5' /* Directory. */
#define FIFOTYPE '6' /* Named pipe. */
#define CONTTYPE '7' /* Contiguous file */
/* (regular file if not supported). */
/* Contents of magic field and its length. */
#define TMAGIC "ustar"
#define TMAGLEN 6
/* Contents of the version field and its length. */
#define TVERSION "00"
#define TVERSLEN 2
#endif /* tar.h */ | https://sources.debian.org/src/cpio/2.4.2-32/tar.h/ | CC-MAIN-2020-05 | refinedweb | 460 | 65.62 |
Contents
TurboGears is designed to get running with a minimum of trouble and very little external configuration. For many applications, the only thing that will need to change from the basic tg quickstart configuration is the location of the database.
A quickstarted application comes with five configuration files:
- dev.cfg
- sample-prod.cfg
- test.cfg
- <package_name>/config/app.cfg
- <package_name>/config/log.cfg
The dev.cfg file is the deployment configuration used while you are developing your application.
The sample-prod.cfg contains appropriate settings for a “real” deployment. We recommend that you rename this file to prod.cfg and tweak the settings before deploying an application.
The test.cfg file is used when you run the test suite of your project. See Testing Your Application for more information about test suites.
Of these three files, only one will be used at a time, depending on how you start the server, that’s why they are located in the toplevel-directory (that also holds the setup.py file) and not in your application’s package.
The app.cfg holds options that are always the same regardless of the environment your program runs in. While dev.cfg and prod.cfg are mostly concerned with database config strings, ports, and auto-reloading, app.cfg contains encoding, output, and identity settings.
The log.cfg holds logging configuration that is independent of of the environment your program runs in, e.g. different log handlers and log formatting profiles. Environment-specific logging configuration should go into dev.cfg or prod.cfg.
Running the server using start-*appname*.py will use the dev.cfg config file by default, and if that does not exist, a file prod.cfg in the current directory. You can change this by passing the path to another config file as a command line parameter:
python start-*appname*.py /some/path/prod.cfg
If you want to package your application as an egg to be installed on the production server, then the content of the toplevel-directory will not be part of the package, i.e. neither dev.cfg nor prod.cfg will be available after installation of that egg. So you will need to take care of copying prod.cfg somewhere to the production server. Another solution is to package a default config file along with the application. When no dev.cfg` or ``prod.cfg will be found, then TurboGears looks for configs/default.cfg in the application’s package. Note that this file will be not created by default, you need to copy it from dev.cfg or sample-prod.cfg, and you also need to enable packaging of that file in setup.py by removing the comment in front of the following line:
``data_files = [('config', ['default.cfg'])],``
The syntax is an extended version of the standard INI/ConfigParser format. The main extensions are an increased importance of sections and the addition og some pre-defined variables for string interpolation, to reduce the number of hard coded file paths. If want to know more about this format, you can refer to the ConfigObj documentation.
The [global] section of the config file is intended for settings that apply server-wide and at start-up time. These settings are also copied to CherryPy’s site configuration.
You can also have path-based config sections used for tools and other settings that apply only to one part of your site. These settings are also copied to CherryPy’s application configuration. The example below will set tg.allow_json only for the /admin part of the site.
[/admin] tg.allow_json = True
Like with the ConfigParser format, you can also use string interpolation, to refer to values of one configuration setting in the value of another setting within the config file. See the ConfigObj documentation on string interpolation for more information on this.
There are two pre-defined variables for string interpolation. The first is current_dir_uri, which is the absolute file system path to the current working directory. This is primarily used in the default sqlite’s dburi but can be used elsewhere:
sqlobject.dburi = "sqlite://%(current_dir_uri)s/devdata.sqlite"
The second is top_level_dir, the absolute file system path to your application’s package directory (the location of controllers.py in a quickstarted project). It is used primarily in app.cfg to get a working filepath to the static content when the project is moved to a different server.
In addition to the global and application settings, there is also the logging configuration which is done in the [logging] section.
Retrieving values is done using the turbogears.config.get('key'[,default]) function:
>>> from turbogears import config >>> config.get('sqlobject.dburi','sqlite:///:memory:') #second value is optional default value 'sqlite:///:memory:'
In your templates the turbogears.config.get function is available as tg-config, so you can do something like this:
${tg.config('myproject.mysetting', 'default')}
Alternatively, you can pass the config object as part of your template dictionary:
from turbogears import config #... def ...(self, ...) return dict(config=config)
Or, if you prefer, you can import config in your Genshi or Kid template:
<?python from turbogears import config ?>
In both cases, the syntax for retrieving configuration settings is the same as in your controller code.
TurboGears uses its own config module, but copies the global settings and the application settings to CherryPy’s configuration system. This document doesn’t cover the CherryPy settings and conventions in detail, but here is a short list of the most common options.
This setting binds CherryPy to a particular ip address. This isn’t usually necessary, as CherryPy will listen for any incoming connections by default.
The exception is when your application is running on a system which has both a IPv4 and IPv6 network stack. By default the CherryPy server will only listen on the IPv6 interfaces. To listen on all IPv4 interfaces, you should set server.socket_host = '0.0.0.0'. If this does not work, a workaround would be to set server.socket_host to a specific interface address and run your application behind a reverse proxy that listens on all interfaces and forwards requests to your application.
Specifies the root of the site for use when generating URLs. Use this if your whole TurboGears instance is not at the root of your webserver. So if your application was located at /turbogears/my_app rather than /my_app, you would set server.webpath = "/turbogears/".
Note: Despite the name, this option is a TurboGears extension to CherryPy and is used with the turbogears.url() function.
For a more complete treatment of CherryPy options, we defer you to the CherryPy site, specifically section 1.3 an overview of the configuration system and section 1.3.2 reference of some configuration options.
This sets the CatWalk session directory, i.e. where CatWalk stores its state data. You may need to set this to something different from the default, e.g. if you deploy your application with mod_wsgi and you want to use CatWalk mounted in your own controllers. Example:
catwalk.session_dir = "/var/cache/myapp/catwalk-session"
The path may be absolute or relative to the current working directory of your server. The directory will be created if it does not exist, so make sure it either exists already or the server has write access in the parent directory. New in TurboGears 1.5
The SQLAlchemy URI that identifies the database to use for this application when using the SQLAlchemy database abstraction (the default).
The general form of the URI is databasetype://username:password@hostname:port/databasename.
If you put :doc:`/notrans` in front of the URI, the automatic database transactions will be turned off.
Note that for sqlite, the format is more like that of a file:// URL: sqlite:///absolute_path_to_file (since absolute_path_to_file will start with a forward slash as well on Unix-like systems, you will end up with four slashes after sqlite:!).
Also note that on Windows systems, sqlite used | instead of : As of Turbogears 0.9a6, it is smart enough to undestand both ways. For example: sqlite:///d|databases/foo.db and sqlite:///d:databases/foo.db are both valid points to D:\databases\foo.db.
Note to MySQL users: If you use a different socket than /tmp/mysql.sock, you may specify which socket to use by adding a unix_socket argument to the dburi. i.e., mysql://user:pass@host:port/db?unix_socket=/path/to/mysql.sock.
Sets the hostname part of the URL under which your application is reachable from the network. This may be different from server.socket_name, e.g. if the TurboGears server is running behind a reverse proxy. This may include the port number (again, this is the port on which your app is available from the network and may be different from server.socket_port, separated by a colon. This setting is used by the turbogears.absolute_url function to build an absolute URL, including URL scheme (see below) and hostname.
Examples:
tg.url_domain = '' # absolute_url('/') --> ''
tg.url_domain = '' # absolute_url('/') --> ''
Sets the scheme part of the URL under which your application is reachable from the network. This may be necessary e.g. if the TurboGears server is running behind a reverse proxy and the proxy talks HTTP to the TurboGears server but HTTPS to the web client. This setting is used by the turbogears.absolute_url function to build an absolute URL, including URL scheme and hostname (see above).
Example:
tg.url_domain = '' tg.url_scheme = 'https' # absolute_url('/') --> ''
If you set this to True, the SafeMultiPartFilter request filter will be enabled, which handles request from buggy clients, which do not transfer properly formated multipart MIME messages, e.g. the Adobe Flash FileReference class. You would usually enable this only for the URLs that actually handle requests from these clients. New in 1.5
Example:
[/upload] safempfilter.on True
This option automatically includes the listed widgets in all pages on the site. This is primarily useful for javascript libraries that have been packaged as widgets, such as the tg.mochikit.
Note: This option has replaced the tg.mochikit_all option from older TurboGears releases. To achieve the same effect, you’ll want to set tg.include_widgets = ['turbogears.mochikit'].
Whether to check all base classes for their attributes when converting an an SQLObject model object to JSON.
For backward-compatibility turbojson.descent_bases is also recognized as a default for this setting, but using turbojson.descent_bases is deprecated.
All other options prefixed with json. are passed as keyword arguments to the json.JSONEncoder constructor (simplejson is used instead of json for older Python versions). See the json documentation for all supported options.
The translation function to use for message translation. Possible values are "tg_gettext", "so_gettext" and "sa_gettext". If this is not set, "tg_gettext" will be used, which refers to the turbogears.i18n.tg_gettext.tg_gettext function.
so_gettext and sa_gettext are alternative translation functions, which use specific storage backends, namely a database and the SQLObject or the SQLAlchemy driver. They are defined in turbogears.i18n.sogettext and turbogears.i18n.sagettext respectively.
Answering “yes” to the identity prompt during tg-admin quickstart will fill in your model with a number of tables and will add the following options to your app.cfg:
Domain name to specify when setting the cookie (must begin with a . (dot) according to RFC 2109). This is useful when your computer is at spam.eggs.foo.com and you want to set a cookie for your whole foo.com domain for use in other applications.
The default (None) should work for most cases and will default to the machine to which the request was made. Note: localhost is NEVER a valid value and will NOT work.
Where to look for the key of an existing visit in the request.
This should be a comma-separated list of the possible values: 'cookie', 'form'. The given methods will be tried one-after-another in the given order until a valid visit key is found or the list is exhausted. By default only use the visit key found in a session cookie. New in 1.0.6.
Comma-separated list of sources the identity should provider consider when determining the identity associated with a request. The given methods will be tried one-after-another in the given order until a valid identity is found or the list is exhausted.
Available options:
When identity.*provider.encryption_algorithm (see below under Provider) is set to "custom" you can provide a custom password encryption function through this setting. The value should be the full name of the callable (including the module (and possibly class) name) in dotted-path notation. The callable should accept the password to be encrypted as a string (utf-8 encoded) and return a unicode string with the encrypted password.
Example:
identity.custom_encryption = 'mypkg.utils.encrypt_password'
These options are only applicable when using one of the default identity provider plugins. TurboGears supplies both a SQLObject and a SQLAlchemy identity provider, they are named soprovider and saprovider respectively and take the same options.
The password encryption algorithm used when comparing passwords against what’s stored in the database. Valid values are md5 or sha1 or custom.
If you do not specify an encryption algorithm, passwords are expected to be clear text. If the value is custom you must also set the the identity.custom.encryption setting (see above under Identity) to the name of an appropriate callable in dotted-path notation.
If this is set to a valid encryption algorithm the TurboGears identity provider will encrypt passwords supplied as part of your login form. If you set the password through the password property, like my_user.password = 'secret' the password will be saved in encrypted form in the database, provided identity is up and running, or you have loaded the configuration specifying what encryption to use (the latter may be necessary in situations where identity may not yet be running, like tests). | http://www.turbogears.org/1.5/docs/Configuration.html | CC-MAIN-2014-15 | refinedweb | 2,300 | 58.58 |
This is our 10th tutorial of Learning PIC microcontrollers using MPLAB and XC8. Till now, we have covered many basic tutorials like LED blinking with PIC, Timers in PIC, interfacing LCD, interfacing 7-segment, ADC using PIC etc. If you are an absolute beginner, then please visit the complete list of PIC tutorials here and start learning.
In this tutorial, we will learn how to generate PWM signals using the PIC16F877A, reading a potentiometer through the ADC module to vary the duty cycle of the output.
What is a PWM Signal?
Pulse Width Modulation (PWM) is a digital signal which is most commonly used in control circuitry. The signal is switched between high and low states for predefined durations; the percentage of each period for which the signal stays high is called the duty cycle. In this tutorial we will set a frequency of 5 kHz.
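As a concrete illustration of these terms (the 5 kHz figure comes from this tutorial; the helper functions themselves are just arithmetic, not part of the PIC code), a 5 kHz PWM signal has a period of 200 µs, so a 50% duty cycle keeps the pin high for 100 µs of every cycle:

```c
/* Period of a PWM signal in microseconds, and the high ("on") time
   for a given duty cycle. Plain arithmetic, runnable anywhere. */
static double pwm_period_us(double pwm_freq_hz)
{
    return 1e6 / pwm_freq_hz;
}

static double pwm_on_time_us(double pwm_freq_hz, double duty_percent)
{
    return pwm_period_us(pwm_freq_hz) * duty_percent / 100.0;
}
```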
PWM using PIC16F877A:
PWM signals can be generated by our PIC microcontroller using the CCP (Capture/Compare/PWM) module. The resolution of our PWM signal is 10-bit, that is, for a value of 0 there will be a duty cycle of 0% and for a value of 1024 (2^10) there will be a duty cycle of 100%. There are two CCP modules in our PIC MCU (CCP1 and CCP2), which means we can generate two PWM signals on two different pins (pins 17 and 16) simultaneously; in this tutorial we are using CCP1 to generate PWM signals on pin 17.
The following registers are used to generate PWM signals using our PIC MCU:
- CCP1CON (CCP1 control Register)
- T2CON (Timer 2 Control Register)
- PR2 (Timer 2 modules Period Register)
- CCPR1L (CCP Register 1 Low)
Programming PIC to generate PWM signals:
In our program we will read an Analog voltage of 0-5v from a potentiometer and map it to 0-1024 using our ADC module. Then we generate a PWM signal with frequency 5000Hz and vary its duty cycle based on the input Analog voltage. That is 0-1024 will be converted to 0%-100% Duty cycle. This tutorial assumes that you have already learnt to use ADC in PIC if not, read it from here, because we will skip details about it in this tutorial.
So, once the configuration bits are set and program is written to read an Analog value, we can proceed with PWM.
The following steps should be taken when configuring the CCP module for PWM operation:
- Set the PWM period by writing to the PR2 register.
- Set the PWM duty cycle by writing to the CCPR1L register and CCP1CON<5:4> bits.
- Make the CCP1 pin an output by clearing the TRISC<2> bit.
- Set the TMR2 prescale value and enable Timer2 by writing to T2CON.
- Configure the CCP1 module for PWM operation.
There are two important functions in this program to generate PWM signals. One is the PWM_Initialize() function which will initialize the registers required to set up the PWM module and then set the frequency at which the PWM should operate; the other function is the PWM_Duty() function which will set the duty cycle of the PWM signal in the required registers.

PWM_Initialize()
{
  PR2 = (_XTAL_FREQ/(PWM_freq*4*TMR2PRESCALE)) - 1; //Setting the PR2 formulae using Datasheet // Makes the PWM work in 5KHZ
  CCP1M3 = 1; CCP1M2 = 1;               //Configure the CCP1 module
  T2CKPS0 = 1; T2CKPS1 = 0; TMR2ON = 1; //Configure the Timer module
  TRISC2 = 0;                           //Make port pin on C as output
}
The above function is the PWM initialize function. In this function, the CCP1 module is set to use PWM by making the bits CCP1M3 and CCP1M2 high. The desired frequency can be set by using the below formula:
PWM Period = [(PR2) + 1] * 4 * TOSC * (TMR2 Prescale Value)
Rearranging these formulae to get PR2 will give
PR2 = (Period / (4 * Tosc * TMR2 Prescale )) - 1
We know that Period = (1/PWM_freq) and Tosc = (1/_XTAL_FREQ). Therefore.....
PR2 = (_XTAL_FREQ/ (PWM_freq*4*TMR2PRESCALE)) – 1;
Once the frequency is set this function need not be called again unless and until we need to change the frequency again. In our tutorial I have assigned PWM_freq = 5000; so that we can get a 5 KHz operating frequency for our PWM signal.
Now let us set the duty cycle of the PWM by using the below function:

PWM_Duty(unsigned int duty)
{
  if(duty<1023)
  {
    duty = ((float)duty/1023)*(_XTAL_FREQ/(PWM_freq*TMR2PRESCALE));
    CCP1X = duty & 1;  //Store the 1st bit
    CCP1Y = duty & 2;  //Store the 0th bit
    CCPR1L = duty>>2;  //Store the remaining 8 bits
  }
}
Our PWM signal has 10-bit resolution, hence this value cannot be stored in a single register since our PIC has only 8-bit data lines. So we have to use the other two bits of CCP1CON<5:4> (CCP1X and CCP1Y) to store the two LSBs and then store the remaining 8 bits in the CCPR1L register.
The PWM duty cycle time can be calculated by using the below formulae:
PWM Duty Cycle = (CCPR1L:CCP1CON<5:4>) * Tosc * (TMR2 Prescale Value)
Rearranging these formulae to get value of CCPR1L and CCP1CON will give:
CCPR1L:CCP1CON<5:4> = PWM Duty Cycle / (Tosc * TMR2 Prescale Value)
The value of our ADC will be 0-1023, and we need that to be in 0%-100%; hence, PWM Duty Cycle = duty/1023. Further, to convert this duty cycle into a period of time we have to multiply it with the period (1/PWM_freq).
We also know that Tosc = (1/_XTAL_FREQ), hence..
Duty = ( ( (float)duty/1023) * (1/PWM_freq) ) / ( (1/_XTAL_FREQ) * TMR2PRESCALE) ;
Resolving the above equation will give us:
Duty = ( (float)duty/1023) * (_XTAL_FREQ / (PWM_freq*TMR2PRESCALE));
You can check the complete program in the Code section below along with the detailed Video.
Schematics and Testing:
As usual let us verify the output using Proteus simulation. The Circuit Diagram is shown below.
Connect a potentiometer to the 7th pin to feed in a voltage of 0-5V. The CCP1 module works with pin 17 (RC2); this is where the PWM will be generated, which can be verified using the digital oscilloscope. Further, to convert this into a variable voltage we have used an RC filter and an LED to verify the output without a scope.
What is a RC-Filter?

An RC filter is a simple circuit formed by a resistor and a capacitor that can be used to convert a PWM signal into an analog DC voltage. Based on the value of the capacitor, the capacitor will take some time to get fully charged; once charged it will block the DC current (remember, capacitors block DC but allow AC), hence the input DC voltage will appear across the output. The high frequency PWM (AC signal) will be grounded through the capacitor. Thus a pure DC is obtained across the capacitor. A resistor value of 1000 Ohm is used in this project.
Working on Hardware:
The hardware setup of the project is very simple, we are just going to reuse our PIC Perf board shown below.
We will also need a potentiometer to feed in the analog voltage, I have attached some female end wires to my pot (shown below) so that we can directly connect them to the PIC Perf board.
Finally to verify the output we need a RC circuit and a LED to see how the PWM signal works, I have simply used a small perf board and soldered the RC circuit and the LED (to control brightness) on to it as shown below.
That’s it. We have programmed the PIC to read the analog voltage from the POT and convert it into PWM signals, which in turn have been converted into a variable voltage using the RC filter, and the result is verified using our hardware. If you have some doubt or get stuck somewhere kindly use the comment section below, we will be happy to help you out. The complete working is shown in the video.
Also check our other PWM Tutorials on other microcontrollers:
// CONFIG
#pragma config BOREN = OFF // Brown-out Reset Enable bit (BOR disabled)
#pragma config LVP = ON // Low-Voltage (Single-Supply) In-Circuit Serial Programming Enable bit
#define TMR2PRESCALE 4
#include <xc.h>
long PWM_freq = 5000;

void main()
{
int adc_value;
TRISC = 0x00; //PORTC as output
TRISA = 0xFF; //PORTA as input
TRISD = 0x00;
ADC_Initialize(); //Initializes ADC Module
PWM_Initialize(); //This sets the PWM frequency of PWM1
do
{
adc_value = ADC_Read(4); //Reading Analog Channel 4
PWM_Duty(adc_value);
__delay_ms(50);
}while(1); //Infinite Loop
}
Mar 17, 2017
Hi sir,
I am electrical engineering and I need more simple practical circuits about (delay time cct) in my work (electronic change over equipment) so as to make more safity in the work
Asimple practical cct please
And I would like to thank you very much about all these informatios
thank you very much
Arkan M R .
Mar 19, 2017
Hi Arkan,
Kindly explain your requirements in detail, we will be happy to help you out!!
Thanks Arkan, your comments encourage us..
Mar 30, 2017
Hi. I am writing a PWM code for PIC18f4550 micrcocontroller. I am using MPLAB X IDE with XC8. I am referring to your code to write a simple PWM code to test it on oscilloscope and later on build on it to do more things. I don't understand what are:
CCP1X = duty & 1; //Store the 1st bit
CCP1Y = duty & 2; //Store the 0th bit
What are CCP1X and CCP1Y? What is their role?
Mar 31, 2017
Hi Ann,
I will explain the following lines from my code
As you can see the variable duty contains the value of the PWM duty cycle. This value has to be used to set the CCP1X and CCP1Y bits and also the CCPR1L register.
The following lines are from the datasheet
CCP1CON<5:4> is nothing but the CCP1X and CCP1Y bits respectively. The value of duty after this formulae(shown below) can be upto 10-bit resolution
Since the Registers in our PIC MCU can hold only upto 8-bits(PIC16F877A is a 8-bit MCU) the additional two bits (CCP1X and CCP1Y) are used to store the least two significant bits and the remaining 8 bits are stored into the CCPR1L register.
Hope this answered your question.
Jan 31, 2019
Hi
I was just wondering about the order of bit masking for the 0th and the 1st bit of the duty cycle to store in the CCP1X and CCP1Y locations. So actually, CCP1X should be storing the 1st bit of duty cycle and the CCP1Y should be containing the 0th bit right and it is commented like that also. I am pasting the code below for reference
CCP1X = duty & 1; //Store the 1st bit
CCP1Y = duty & 2; //Store the 0th bit
CCPR1L = duty>>2;// Store the remining 8 bit
But to get the 0th bit in CCP1Y, duty should be done '& (AND)' with '1' right? and for getting 1st bit in CCP1X duty should be done '&' with '2' right? instead of the other way done above.
Thanks
Mar 22, 2019
I agree with Subil that 1st bit would be duty &2 and 0th bit would be duty &1
However, CCP1X and CCP1Y are ** bits ** whereas duty & 2 will evaluate to either 0 (binary 00) or 2 (binary 10)
and so you need to right shift duty & 2 before assigning to CCP1X
like this:
CCP1X = (duty & 2) >> 1; //Store the 1st bit
CCP1Y = duty & 1; //Store the 0th bit
Jun 16, 2017
Hi,
I checked your code for my project and it worked well. now the problem is only the led is dimming but the fan or bulb doesn't dimming. Help me in this.
Jun 16, 2017
Hi Thejo,
Glad that the code worked for you...
I am not sure how you have interfaced the fan or bulb with the PIC MCU. Let me know what your driver circuit is. If you have used a relay it will not work. Hence make sure you use a power electronic device like MOSFET and you switch it properly using the PWM signals.
Jun 22, 2017
the timer0 in PIC18F is used to get 13ms delay. can u show steps to get the value of T0CON based on TMROH=0x02 and TMROL=0x12 and Fosc=20MHz.
Aug 03, 2017
hello! please from ''duty = ((float)duty/1023)*(_XTAL_FREQ/(PWM_freq*TMR2PRESCALE)); ''' what is ''float'' doing in there? is it a variable or what??
why did you hav to add float to the formulae??
Aug 14, 2017
No Float is not a variable.
Here the data type of duty is integer. So when this integer value is divided by 1023 it will also be an integer. But we need it in terms of decimal for better accuracy. So we prefix (float) to ask the compiler not to truncate the obtained result to integer but keep it in float as such.
Hope this clears your doubt.
Thanks,
Aswinth
Aug 18, 2017
please give me an example of this... am a bit confuse.. from PWM_Duty(adc_value); i understand that, voltage from 0-5v will also be converted from 0-1023
take for instance adc_value=1020 which in our duty formulae has been divided by 1023.duty=((float)duty/1023) saying (1020/1023)=0.9971 which will result everything to be zero.. so please explain to me how (float) works in that code with an example of duty result between 0-5v.. Thank you!(i want to move to the next tutorial after getting full understanding of this)
Aug 18, 2017
Yes you got it partially correct.
(1020/1023)=0.9971
But, the value 0.9971 will not be converted to 0 yet. Because we still have the remaning formulae to complete.
so,
(duty/1023) * (20000000/(5000*4))
Since you took an example of 1020 as duty
0.9971 * (20000000/(5000*4)
Which will be 0.9971*(1000) = 997
PWM_Duty(997);
Hope this clears things for you
Aug 18, 2017
wow!! Thanks ... confusion cleared!
Aug 20, 2017
Always Welcome!!!!
Aug 14, 2017
hi, can i use my keypad as an input to generate pwm to control the angle of the servo motor ?e.g opening a door or locking the door with the input on the keypad
Aug 15, 2017
Yes, you can...
Interface a keyboard with microcontroller and read the entered values. Then write that value to PWM_duty() function. This way you should be able to control the Servo motor based on your entered value.
Sep 17, 2017
iam intersted this pwm generating ckt.
first of all i thanks you lot about this tutorials .
can you help me how to modify this ckt instead of pot. make it digital through Up & Down push buttons.... is it possible?
Sep 18, 2017
Hi Kumar,
Yes, it is very much possible. Here in this code above. The variable adc_value controls the duty cycle of the output PWM signal. You simply have to connect two push buttons and use some If statements to check if they are pressed. If pressed simply increment/decrement the value of adc_value. note the range of this variable should only be from 0-1024
Sep 23, 2017
Hi,
This code is very helpful to get the basic idea for coding for a pwm output.
Can you guide me to design a pwm output for pic16f18857.
Thanks
Sep 23, 2017
hhow do i make this run for pic16f18857
Sep 24, 2017
The method is the same. Only the name of registers might change. Read the datasheet of PIC16F18857 and modify the code accordingly
Oct 21, 2017
hi sir
,
this project helped me.
nice and great......
but
i want to make this project on mickro c compiler..plz..pic16f877a
Dec 18, 2017
Hi,
I am using a PIC16F873A with a variation of your code that uses a for() loop to increase and decrease the duty cycle. I am using a LED component that effectively has RGB LED's, connected to PORTC 1, 2, and 3. Is there any way of changing which pin the pwm signal outputs on? have tried altering the TRISC2=0 to TRIS1=0 etc to no avail. is it possible to output the signal on multiple pins at the same time?
Jan 03, 2018
Hello!
Never have I used PWM signals before so it's kinda tricky to me. I am using PIC18F8722 and am supposed to build a function with the following structure : void config_PWM(int en, int ch, int freq, int period, int c_duty) . //en=enable the module, ch=PWM output channel, freq=osc. freq., period=PWM period(0-127), c_duty=cycle duty(0-1024) .
All the information here has been really helpful, I am just having difficulties in putting all this stuff together within one function. If you could please land me a hand I would be very grateful, since I won't have access to the microcontroller til next week. Thank you!
Jan 16, 2018
If I may ask, why is the signal from the potentiometer being fed to both RA0 and RA5 ?
Thanks
Jan 18, 2018
Sorry for the mistake CB, you can connect it either to RA0 or to RA5, the code here reads voltage from pin RA5. So if you are using this code then you can ignore the connection to RA0
Mar 06, 2018
Ur ckt and code working fine for led only ang it also getting dimmed. But even after adding a TRIAC the bulb/ fan not getting on. Please suggest?
Mar 21, 2018
sir, how do i control ac loadinstead of led.....
Mar 28, 2018
Aren't the values of CCP1X and CCP1Y inverted? If you and duty with 1 you get the 0th bit, not the first
Mar 30, 2018
nice job man ! i did this with PIC16F628A and works great , i just need to change TRISC2 TO TRISB9 and the pic16 used don't have ADC
Apr 03, 2018
Ur ckt and code working fine for led only ang it also getting dimmed. But even after adding a TRIAC the bulb/ fan not getting on. Please suggest?
Apr 06, 2018
You cannot drive all types of TRIAC directly with PIC, the reason is they might consume more current than LED. Look for triac driving circuit
Apr 11, 2018
Can I generate a 10MHz frequency at 50% duty cycle using the PIC16F628A
Apr 26, 2018
Hi Sir !
Can you please tell me how to generate two symmetric PWM pulses at RC1 and RC2 ports (pic16f877a).
Dead time between two pulses also makes me headache.
Thank you for your help
May 02, 2018
Sir can you tell me what will be the possible changes in the code if I use a 4Mhz crystal for generating a PWM frequency of 250hz? | https://circuitdigest.com/microcontroller-projects/pic-microcontroller-pic16f877a-pwm-tutorial | CC-MAIN-2019-39 | refinedweb | 2,909 | 69.31 |
The QAbstractItemDelegate class is used to display and edit data items from a model. More...
#include <QAbstractItemDelegate>
Inherits QObject.
Inherited by QItemDelegate.

See also the Pixelator example.
Although models and views should respond to these hints in appropriate ways, custom components may ignore any or all of them if they are not relevant.
Creates a new abstract item delegate with the given parent.
Destroys the abstract item delegate.
This signal must be emitted when the editor widget has completed editing the data, and wants to write it back into the model.

See also setModelData() and setEditorData().
Whenever an event occurs, this function is called with the event, model, option, and the index that corresponds to the item being edited.
The base implementation returns false (indicating that it has not handled the event).
This pure abstract function must be reimplemented if you want to provide custom rendering. Use the painter and style option to render the item specified by the item index.
If you reimplement this you must also reimplement sizeHint().
Sets the data for the item at the given index in the model to the contents of the given editor.
The base implementation does nothing. If you want custom editing you will need to reimplement this function.
See also setEditorData().
This pure abstract function must be reimplemented if you want to provide custom rendering. The options are specified by option and the model item by index.
If you reimplement this you must also reimplement paint().
Dear gurus :-)
I am currently working in a registered namespace (e.g. /EXAMPLE/*) for development work.
Now for some SOLMAN upgrades and in BW systems, the activation of objects is requesting that the namespace is registered also in the target transport systems. Until now we did not transport the namespace.
So... in SE03 I want to record the recipient profile with repair key into a transport request but cannot find any transport option there. It is a view so I could transport it from SM30 with keys, but I only need the repair key for the modification assistant and not the namespace sender key itself.
Transporting table contents seems unwise as there are a few tables in the area which know about the keys.
Does anyone know what the correct procedure is to record a recipient key of a namespace onto a WB transport request?
I am a bit lost on the UI side at the moment :-)
Cheers,
Julius
Hey Julius,
Any reason not to transport the namespace?...
My first reaction is that repair keys are for specific namespaces.. so I would have thought that adding the key to a transport would not have been helpful if the namespace is not registered on the target system.
Regards, Juan
BTW: I noticed that the view V_TRNSPACE does not have a delivery class in it's DDIC attributes. I suspect that for this reason the transport options in the menus are greyed out. Could be a bug.
Cheers,
Julius
This tutorial describes how we can create a Hadoop MapReduce Job with Spring Data Apache Hadoop. As an example we will analyze the data of a novel called The Adventures of Sherlock Holmes and find out how many times the last name of Sherlock’s loyal sidekick Dr. Watson is mentioned in the novel.
Note: This blog entry assumes that we have already installed and configured the used Apache Hadoop instance.
We can create a Hadoop MapReduce Job with Spring Data Apache Hadoop by following these steps:
- Get the required dependencies by using Maven.
- Create the mapper component.
- Create the reducer component.
- Configure the application context.
- Load the application context when the application starts.
These steps are explained with more details in the following Sections. We will also learn how we can run the created Hadoop job.
Getting the Required Dependencies with Maven
We can download the required dependencies with Maven by adding the dependency declarations of Spring Data Apache Hadoop and Apache Hadoop Core to our POM file.
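The relevant part of the pom.xml could look like this (the version numbers are assumptions based on the versions mentioned elsewhere in this article and its comments — Spring Data Apache Hadoop 1.0.0.M2 and Hadoop 1.0.3):

```xml
<!-- Spring Data Apache Hadoop -->
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-hadoop</artifactId>
    <version>1.0.0.M2</version>
</dependency>
<!-- Apache Hadoop Core -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.0.3</version>
</dependency>
```

Note that milestone releases such as 1.0.0.M2 are served from the Spring milestone repository, which must be declared separately in the POM file.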
Creating the Mapper Component
A mapper is a component that divides the original problem into smaller problems that are easier to solve. We can create a custom mapper component by extending the Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> class and overriding its map() method. The type parameters of the Mapper class are described in following:
- KEYIN describes the type of the key that is provided as an input to the mapper component.
- VALUEIN describes the type of the value that is provided as an input to the mapper component.
- KEYOUT describes the type of the mapper component’s output key.
- VALUEOUT describes the type of the mapper component’s output value.
Each type parameter must implement the Writable interface. Apache Hadoop provides several implementations to this interface. A list of existing implementations is available at the API documentation of Apache Hadoop.
Our mapper processes the contents of the input file one line at the time and produces key-value pairs where the key is a single word of the processed line and the value is always one. Our implementation of the map() method has following steps:
- Split the given line into words.
- Iterate through each word and remove all Unicode characters that are not either letters or numbers.
- Create an output key-value pair by calling the write() method of the Mapper.Context class and providing the required parameters.
The source code of the WordMapper class looks following:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer lineTokenizer = new StringTokenizer(line);

        while (lineTokenizer.hasMoreTokens()) {
            String cleaned = removeNonLettersOrNumbers(lineTokenizer.nextToken());
            word.set(cleaned);
            context.write(word, new IntWritable(1));
        }
    }

    /**
     * Replaces all Unicode characters that are not either letters or numbers with
     * an empty string.
     * @param original The original string.
     * @return A string that contains only letters and numbers.
     */
    private String removeNonLettersOrNumbers(String original) {
        return original.replaceAll("[^\\p{L}\\p{N}]", "");
    }
}
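The regular expression used by the removeNonLettersOrNumbers() method can be tried out in isolation; the pattern [^\p{L}\p{N}] matches any character that is neither a Unicode letter nor a digit:

```java
public class RegexDemo {

    // Same cleanup logic as the mapper uses.
    private static String removeNonLettersOrNumbers(String original) {
        return original.replaceAll("[^\\p{L}\\p{N}]", "");
    }

    public static void main(String[] args) {
        // Punctuation and quotes are stripped, letters and digits are kept.
        System.out.println(removeNonLettersOrNumbers("Watson,"));  // Watson
        System.out.println(removeNonLettersOrNumbers("\"1661\"")); // 1661
        System.out.println(removeNonLettersOrNumbers("Holmes's")); // Holmess
    }
}
```

This cleanup matters for our word count: without it, "Watson," and "Watson" would be counted as two different keys.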
Creating the Reducer Component
A reducer is a component that removes the unwanted intermediate values and passes forward only the relevant key-value pairs. We can implement our reducer by extending the Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT> class and overriding its reduce() method. The type parameters of the Reducer class are described in following:
- KEYIN describes the type of the key that is provided as an input to the reducer. The value of this type parameter must match with the KEYOUT type parameter of the used mapper.
- VALUEIN describes the type of the value that is provided as an input to the reducer component. The value of this type parameter must match with the VALUEOUT type parameter of the used mapper.
- KEYOUT describes the type of the output key of the reducer component.
- VALUEOUT describes the type of the output value of the reducer component.
Our reducer processes each key-value pair produced by our mapper and creates a key-value pair that contains the answer of our question. We can implement the reduce() method by following these steps:
- Verify that the input key contains the wanted word.
- If the key contains the wanted word, count how many times the word was found.
- Create a new output key-value pair by calling the write() method of the Reducer.Context class and providing the required parameters.
The source code of the WordReducer class is given in following:
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    protected static final String TARGET_WORD = "Watson";

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        if (containsTargetWord(key)) {
            int wordCount = 0;
            for (IntWritable value: values) {
                wordCount += value.get();
            }
            context.write(key, new IntWritable(wordCount));
        }
    }

    private boolean containsTargetWord(Text key) {
        return key.toString().equals(TARGET_WORD);
    }
}
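Between the map and reduce phases, Hadoop groups the emitted key-value pairs by key, so the reducer receives each word together with all of its counts. That grouping and counting logic can be simulated with plain Java collections (no Hadoop required; the word list is a made-up example):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReduceSimulation {

    public static void main(String[] args) {
        // Pretend these are the (word, 1) pairs emitted by the mapper.
        List<String> mappedWords = Arrays.asList(
                "Watson", "said", "Holmes", "Watson", "smiled", "Watson");

        // The shuffle phase: group identical keys and sum their values.
        Map<String, Integer> counts = new HashMap<>();
        for (String word : mappedWords) {
            counts.merge(word, 1, Integer::sum);
        }

        // The reducer keeps only the key we are interested in.
        System.out.println("Watson " + counts.get("Watson")); // Watson 3
    }
}
```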
Configuring the Application Context
Because Spring Data Apache Hadoop 1.0.0.M2 does not support Java configuration, we have to configure the application context of our application by using XML. We can configure the application context of our application by following these steps:
- Create a properties file that contains the values of configuration properties.
- Configure a property placeholder that fetches the values of configuration properties from the created property file.
- Configure Apache Hadoop.
- Configure the executed Hadoop job.
- Configure the job runner that runs the created Hadoop job.
Creating the Properties File
Our properties file contains the values of our configuration parameters. We can create this file by following these steps:
- Specify the value of the fs.default.name property. The value of this property must match with the configuration of our Apache Hadoop instance.
- Specify the value of the mapred.job.tracker property. The value of this property must match with the configuration of our Apache Hadoop instance.
- Specify the value of the input.path property.
- Add the value of the output.path property to the properties file.
The contents of the application.properties file looks following:
fs.default.name=hdfs://localhost:9000
mapred.job.tracker=localhost:9001
input.path=/input/
output.path=/output/
Configuring the Property Placeholder
We can configure the needed property placeholder by adding the following element to the applicationContext.xml file:
<context:property-placeholder
Configuring Apache Hadoop
We can use the configuration namespace element for providing configuration parameters to Apache Hadoop. In order to execute our job by using our Apache Hadoop instance, we have to configure the default file system and the JobTracker. We can configure the default file system and the JobTracker by adding the following element to the applicationContext.xml file:
<hdp:configuration> fs.default.name=${fs.default.name} mapred.job.tracker=${mapred.job.tracker} </hdp:configuration>
Configuring the Hadoop Job
We can configure our Hadoop job by following these steps:
- Configure the input path that contains the input files of the job.
- Configure the output path of the job.
- Configure the name of the main class.
- Configure the name of the mapper class.
- Configure the name of the reducer class.
Note: If the configured output path exists, the execution of the Hadoop job fails. This is a safety mechanism that ensures that the results of a MapReduce job cannot be overwritten accidentally.
We have to add the following job declaration to our application context configuration file:
<hdp:job id="wordCountJob"
         input-path="${input.path}"
         output-path="${output.path}"
         jar-by-class="Main"
         mapper="WordMapper"
         reducer="WordReducer"/>
Configuring the Job Runner
The job runner is responsible of executing the jobs after the application context has been loaded. We can configure our job runner by following these steps:
- Configure the job runner.
- Configure the executed jobs.
- Configure the job runner to run the configured jobs when it is started.
The declaration of our job runner bean is given in following:
<hdp:job-runner id="wordCountJobRunner"
                job-ref="wordCountJob"
                run-at-
Loading the Application Context When the Application Starts

We can execute the created Hadoop job by loading the application context when our application is started. We can do this by creating a new ClassPathXmlApplicationContext object and providing the name of our application context configuration file as a constructor parameter. The source code of our Main class looks following:

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] arguments) {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
    }
}

This is all we need to do in order to create a Hadoop MapReduce job with Spring Data Apache Hadoop. Our next step is to execute the created job. The first thing we have to do is to download The Adventures of Sherlock Holmes. We must download the plain text version of this novel manually since the website of Project Gutenberg is blocking download utilities such as wget.
After we have downloaded the input file, we are ready to run our MapReduce job. We can run the created job by starting our Apache Hadoop instance in a pseudo-distributed mode and following these steps:
- Upload our input file to HDFS.
- Run our MapReduce job.
Uploading the Input File to HDFS
Our next step is to upload our input file to HDFS. We can do this by running the following command at command prompt:
hadoop dfs -put pg1661.txt /input/pg1661.txt
We can check that everything went fine by running the following command at command prompt:

hadoop dfs -ls /input

Running Our MapReduce Job
We have two alternative methods for running our MapReduce job:
- We can execute the main() method of the Main class from our IDE.
We can build a binary distribution of our example project by using the assembly plugin of Maven and run the created job by using the provided startup script.

We can check the output of our MapReduce job by running the following command at command prompt:

hadoop dfs -ls /output

If everything went fine, we should see a similar directory listing:
Found 2 items
-rw-r--r--   3 xxxx supergroup          0 2012-08-05 12:31 /output/_SUCCESS
-rw-r--r--   3 xxxx supergroup         10 2012-08-05 12:31 /output/part-r-00000
Now we will finally find out the answer to our question. We can get the answer by running the following command at command prompt:
hadoop dfs -cat /output/part-r-00000
If everything went fine, we should see following output:
Watson 81
We now know that the last name of doctor Watson was mentioned 81 times in the novel The Adventures of Sherlock Holmes.
What is Next?
My next blog entry about Apache Hadoop describes how we can create a streaming MapReduce job by using Hadoop Streaming and Spring Data Apache Hadoop.
PS. A fully functional example application that was described in this blog entry is available at Github.
74 comments
My background: I know Maven and did some Projects at the university, but I’m new to Spring and Hadoop, so I needed to take a closer look at the spring configuration part.
Here are a few questions and hints which might improve your great article:
– “…Holmes and find out how many the last name of Sherlock’s…” -> “…Holmes and find out how many ”’times”’ the last name of Sherlock’s…”
– maybe create the maven project with the artifact:generate command, and add the dependencies afterwards
mvn archetype:generate \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DgroupId=com.company.division \
-DartifactId=appName \
-Dversion=1.0-SNAPSHOT \
-Dpackage=com.company.division.appName
– Creating the Mapper Component: You could add a note that we will configure the location of the classes in the application.xml afterwards + a proposal to create a package for those classes
– Path for the files applicationContext.xml and application.properties: You could add a note that the path to the applicationContext.xml will be configured in the Main class afterwards and tat the convention is that the file goes into /src/main/ressources/META-INF/applicationContext.xml
Links to basic explanations for the bare minimum of a applicationContext skeleton might be usefull, including the needed xmlns definitions.
– funny typo: the “napper class”
– for the “hadoop dfs –ls /input” command you have used a “–” instead of “-” (I don’t know the english terms) which will give copypasting readers some headache. (same for “hadoop dfs –cat”)
– A Link to Instructions for the assembly target would be usefull.
– Add the instruction to execute the command with Maven:
mvn exec:java -Dexec.mainClass=”com.company.division.appName.Main”
Thank you for the great Tutorial
Hi Konfusius,
Thank you for your comment. I appreciate that you took the time to write me a note about these errors and improvement ideas. I fixed the typos and I will also make changes to the article (and code) later.
Hi Petri,
Thanks for this very helpful tutorial. I built a sample using Spring-Hadoop based on the steps you suggested. For running the job, I am using the binary distribution mechanism and running the job using the startup script. One thing that I noted was that when running the job in this way, I don't see the Job appearing on the JobTracker user interface. Any ideas why?
Also, the logs show, “Unable to load native-hadoop library for your platform… using builtin-java classes where applicable”. Will you be able to provide any elaboration on this.
Thanks again.
Hi Saurabh,
It is nice to hear that you find tutorial helpful.
I have a few questions concerning your problem with the JobTracker UI:
The log line you mentioned is written to the log when the native Hadoop libraries are not found. Check the Native Hadoop Libraries section of the Hadoop documentation for more information.
It seems that if a JobTracker is not configured, the job will not be visible in the JobTracker user interface.
Dear Sir,
It gives me more confidence to work on Hadoop. Please send me more information regarding Hadoop.
Regards,
Unmesh
Hi Unmesh,
It is nice to know that this blog entry was useful to you. Also, thanks for pointing out that you want to read more about Apache Hadoop.
Hey Petri,
Thanks for a great article.
I have executed your project (got it from GitHub). It executed successfully. I used Gradle to build the project.
But I was surprised to see that no output directory was created. I have the input directory and data in HDFS. Can you please help me out? I tried many things, like changing the mapper and reducer. The Hadoop parameters are correct as per my cluster.
Hoping for your quick response,
Amar
Hi Amar,
I have got a few questions for you:
Unfortunately I don’t have access to a Hadoop cluster but I have got a local installation of Hadoop 1.0.3 which runs in the pseudo-distributed mode. If you could answer to my email and send me your Gradle build script, I can test it in my local environment.
Hi Petri,
Really appreciate the quick response. I am using the Hadoop 1.0.3 version only. Yes, I am using Spring Data Hadoop 1.0.0. I also tried it using Maven, but no luck :(. The surprising thing is that it gets executed successfully without any error. If I remove the Hadoop bean and a driver class, then it works properly. Am I missing any configuration stuff? My application context is:
fs.default.name=hdfs://Ubuntu05:54310
simple job runner
Configures the reference to the actual Hadoop job.
I have hard coded some properties. Also tried with a property file. But no luck at all.
Thanks,
Amar
Hi Amar,
It seems that the configuration of the job runner has changed between 1.0.0.M2 (The Spring Data Apache Hadoop version used in this blog entry) and 1.0.0.RC1. Have you checked out the instructions provided in the Running a Hadoop Job section of the Spring Data Apache Hadoop reference documentation?
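For later readers: in the RC releases the job is started with a dedicated job-runner element instead of the M2-era JobRunner bean definition. A minimal sketch of the newer form (the bean ids here are assumptions; check the reference documentation of your exact version):

```xml
<!-- Sketch: starts wordCountJob when the application context starts. -->
<hdp:job-runner id="wordCountJobRunner"
                job-ref="wordCountJob"
                run-at-startup="true"/>
```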
Hi Petri,
Changing the version to M2 worked. I heartily appreciate your help.
Thanks,
Amar
Amar,
It is good to hear that you were able to solve your problem. I will update this blog entry and the example application when I have got time to do it.
Hi,
I’m trying out spring-data-hadoop and running it as a webapplication on tomcat. I configured everything and when running the Job from my servlet I get the following exception
SEVERE: PriviledgedActionException as:tomcat7 cause:java.io.IOException: Failed to set permissions of path: /hadoop_ws/mapred/staging/tomcat71391258236/.staging to 0700
here hadoop.tmp.dir=/hadoop_ws
I know the tomcat7 user doesn't have access to the above directory, but I'm not sure how to get past this exception.
I tried the following, but no luck:
1. Started Tomcat as a user that has permission to the /hadoop_ws directory. I changed TOMCAT7_USER and TOMCAT7_GROUP.
2. hadoop dfs -chmod -R /hadoop_ws
3. changed mapreduce.jobtracker.staging.root.dir to different folder and set 777 permission.
None of the above approaches worked. All the examples I find on the internet either configure the MapReduce jobs in XML as in this post, or the application is a standalone application which runs as the logged-in user.
Any help is highly appreciated.
Thanks.
run: hadoop fs -chmod -R 777 /
Regards,
JP
Hello Petri,
I followed your nice tutorial but with a change.
I used Spring Hadoop 1.0.0.RC2.
I needed a change in the JobRunner definition in the Spring config file. I added
otherwise no output directory was created.
I’ll go to your next blog entry.
Rafa
Hi Rafa,
Thank you for your comment.
As you found out, the configuration of the job runner bean has changed between the 1.0.0.M2 and 1.0.0.RC2 versions of Spring Data Hadoop.
I have been meaning to update these blog entries, but I have been busy with other activities. Thank you for pointing out that I should get it done as soon as possible.
I updated the Spring Data Apache Hadoop version to 1.0.0.RC2.
Hi Petri,
I am doing bulk loading of data into HBase with HBase MapReduce. Here I can configure HFileOutputFormat. Is there any way to configure the same with the Spring application context?
Hoping for your quick response,
Amar
I have not personally used HBase and that is why I cannot give a definitive answer to your question. However, the following resources might be useful to you:
Hi Petri,
Great tutorial!!!
I followed the steps and it works perfectly locally with startup.sh.
When I deploy in a master-slave cluster and run $>hadoop jar mapreduce.jar,
the job starts and the tasks start on both nodes, but in the map phase I got:
Any idea?
Thanks
Hi Bill,
Thank you for your comment.
The root cause of your problem is that the mapper class is not found from the classpath. Did you use my example application or did you create your own application?
I would start by checking that the configuration of your job is correct (check that the mapper and reducer classes are found). Also, if you created your own application, you could try my example application and see if it is working. I have tested it by running Hadoop in pseudo-distributed mode, and it would be interesting to hear if it works correctly in your environment.
Hi again Petri,
Thanks for your reply.
I am using your example. I have a Linux host with two virtual machines (VMware):
one as the master node and one as a slave node. The cluster tested successfully.
My steps below:
#HOST
1) Configure application.properties
fs.default.name=hdfs://master:54310
input.path=/user/billbravo/gutenberg
output.path=/user/billbravo/gutenberg/output
2) Make the assembly (your example, of course)
$host>mvn assembly:assembly
3) Run the example locally (successful)
$host>unzip target/mapreduce-bin.zip
$host>cd mapreduce
$host>sh startup.sh
4) Copy to master node
$host>scp mapreduce-bin.zip billbravo@master:
4) Run in the cluster
$host>ssh billbravo@master
#MASTER NODE
$master>unzip mapreduce-bin.zip
$master>cd mapreduce
$master>hadoop jar mapreduce.jar
In this step the previously mentioned error occurs.
My goal is to understand how to build a Hadoop application (with Spring, of course), run it locally for debugging, and then deploy it to a remote cluster like Amazon EMR.
Thanks again for your attention
:)
2) Copy assembly to master node(The cluster has been tested previously successful)
$scp target/mapreduce-bin.zip billbravo@master:
Hi Bill,
you are welcome (about my attention). These kinds of puzzles are always nice because I usually learn a lot of things by solving them. :)
I found the problem. I updated this blog entry and made two changes to my example:
I also updated Spring Data Apache Hadoop to version 1.0.0.RELEASE.
This should solve your problem.
Hi Petri,
It works perfectly!
I found a workaround: copying mapreduce.jar to the Distributed Cache
and using the hdp:cache option in applicationContext.xml:
But your solution is cleaner.
Greetings
Hi Bill,
It is good to hear that this solved your problem. :)
I have the same problem, but I do not know how to fix it. Could you give me more code to describe it? Thank you.
Hi,
The solution to this problem is described in this comment. You can find the relevant files on GitHub:
Hi again Petri,
Thanks for your reply.
I saw the comment and added the JobTracker configuration to both application.properties and applicationContext.xml.
application.properties:
fs.default.name=hdfs://localhost:9100
mapred.job.tracker=localhost:9101
applicationContext.xml:
fs.default.name=${fs.default.name}
mapred.job.tracker=${mapred.job.tracker}
In addition, I updated Spring Data Apache Hadoop to version 1.0.1.RELEASE
# hadoop dfs -mkdir /input
# hadoop dfs -ls /
Found 3 items
drwxr-xr-x - root supergroup 0 2013-10-12 11:40 /hbase
drwxr-xr-x - root supergroup 0 2013-10-12 12:05 /input
drwxr-xr-x - root supergroup 0 2013-10-12 11:40 /tmp
# hadoop dfs -put sample.txt /input/sample.txt
# hadoop dfs -ls /input
Found 1 items
-rw-r--r-- 3 root supergroup 51384 2013-10-12 12:09 /input/sample.txt
Unfortunately, when I run the program I get:
java.lang.RuntimeException: java.lang.ClassNotFoundException: net.petrikainulainen.spring.data.apachehadoop.WordMapper
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: net.petrikainulainen.spring.data.apachehadoop.WordMapper
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:855)
… 8 more
OMG!
I am sorry. I am running this example in the Eclipse IDE. When I run it against local files on my disk it is OK.
By the way, do you have any example about HbaseTemplate?
Good to hear that you were able to solve your problem (eventually). Unfortunately I haven't written anything about HbaseTemplate. I should probably pay more attention to this stuff in the future though.
Great tutorial. Many thanks.
Could you please provide some information about configuring org.apache.avro.mapred.AvroJob?
My MapReduce job uses AvroInputFormat.
Any information about configuring AvroJob using Spring Data would be very helpful.
Thanks
Thank you for your comment. It is nice to hear that this blog post was useful to you.
Also, thank you for providing me an idea for a new blog post. I have added your idea to my to-do list and I will probably write something about it after I have finished my Spring Data Solr tutorial (four blog posts left).
Hi Petri,
I was following your tutorial step by step. I have Hadoop version 0.20.0 installed, but running the program throws a class-not-found exception for the mapper class.
Can you please help me? I am new to this technology.
The requirements of Spring Data Apache Hadoop are described in its reference manual. It contains the following paragraph:
I am not sure what is the difference between 0.20.2 (minimum supported version) and 0.20.0 (your version) but it might not be possible to use Spring Data Apache Hadoop with Hadoop 0.20.0. Did you try to run the example application of this blog post or did you create your own application?
Hi Petri,
I have Hadoop 0.20.2 (sorry for the wrong information). I am running this example in the Eclipse IDE. When I run it against local files on my disk it works fine; running the same MapReduce against HDFS throws mapper class not found.
Check the link below for the log.
This happened to me as well and I had to make some changes to my example application. These changes are described in this comment. Compare these files with the configuration files of your project:
I hope that this solves your problem.
I have the same files in my project and I am using the Spring Data Hadoop 1.0.0 release version, but I am still facing the problem :(. Are there any other files I need to change?
This commit fixed the problem in my example application. As you can see, I made changes only to the files mentioned in my previous comment (and updated the version number of Spring Data Apache Hadoop in the pom.xml).
Is there any chance that I can see your code? I cannot figure out what is wrong and seeing the code would definitely help.
Thanks for your inputs.
I have commented out mapred.job.tracker=${mapred.job.tracker} in applicationContext.xml and now it is working fine, but I don't know why this is happening. Do you know the reason?
The reason for this might be that when the job tracker is not set, Hadoop will use the default job tracker which could be the local one. In other words, your map reduce job might not be executed in the Hadoop cluster. Check out this forum thread for more details about this.
Hello, Petri.
My current configuration includes one Linux machine with the NameNode, TaskTracker and JobTracker daemons, another Linux machine with a DataNode, and a third Windows 7 machine with Eclipse/NetBeans for development.
At least with this configuration if you want to run the mapreduce job from the development machine it is mandatory to include the jar attribute in the applicationContext.xml file:
Otherwise the classes will not be available in the execution cluster. Hope it helps.
jv
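Since the XML was stripped from the comment above, here is a sketch of the kind of configuration Javier is describing; the jar path and the reducer class name are assumptions, not taken from his original snippet:

```xml
<!-- Sketch: ship an explicit job jar so the mapper and reducer classes
     are available on the remote task nodes. -->
<hdp:job id="wordCountJob"
         input-path="${input.path}"
         output-path="${output.path}"
         jar="file:/path/to/mapreduce.jar"
         mapper="net.petrikainulainen.spring.data.apachehadoop.WordMapper"
         reducer="net.petrikainulainen.spring.data.apachehadoop.WordReducer"/>
```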
Hi Javier,
Thank you for your comment.
Unfortunately WordPress decided to remove the XML which you added to your comment (I should probably figure out if I can disable that feature, since it is quite annoying).
Anyway, the configuration of my example application uses the jar-by-class attribute, which did the trick when I last ran into this problem.
However, your comment made me check whether you can explicitly specify the name of the jar file. I read the schema of the Spring for Apache Hadoop configuration and found out that you can do this by using the jar attribute.
This information is indeed useful because if the jar file of the map reduce job cannot be resolved when the jar-by-class attribute is used, you can always use the jar attribute for this purpose.
Again, thanks for pointing this out.
Hello. Interesting read. I am an undergraduate student wanting to leverage these techniques for an application as my final-year BSc IT project. I am expecting a lot of data, so I opted for MongoDB. I've been using spring-data-mongo and have been wondering if one could use spring-data-mongo and spring-data-hadoop together. MongoDB has a way to integrate with Hadoop, so how does everything play nicely together? Thank you.
It is possible to use multiple Spring Data libraries in the same project. I have not personally used spring-data-hadoop and spring-data-mongo in the same project but it should be doable.
On the other hand, it is kind of hard to say if this makes any sense because I have no idea what kind of an application you are going to create. Could you shed some light on this?
If you can describe the use cases where you would like to use spring-data-hadoop and spring-data-mongo, I can probably give you a better answer.
Hi Petri,
First of all, thank you for the tutorial. I downloaded the example Maven project from GitHub. I am able to run the application, but it gives a ClassNotFoundException for the WordMapper class. I am using Apache Hadoop version 1.2.1 and Spring 3.1.0.
I am connecting to HDFS from a remote VM. Can you give me any suggestion to solve this problem?
I have to confess that I haven’t really used Hadoop or Spring Data for Apache Hadoop after I wrote a few blog posts about them.
Did you try the things mentioned in these comments:
Hi sir, I am trying to run the example mentioned above. I am getting the error below and am unable to solve it. Please help me; I tried some other forums and blogs but was unable to solve this.
INFO - sPathXmlApplicationContext - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@46ae506e: startup date [Tue Oct 29 11:02:44]
WARN - JobClient - No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
INFO - JobClient - Running job: job_201310291053_0002
INFO - JobClient - map 0% reduce 0%
INFO - JobClient - Task Id : attempt_201310291053_0002_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: test.WordMapper
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:857)
Update: I removed the unnecessary lines so that the comment is a bit cleaner. – Petri
Hi,
Did you already try the advice given in this comment?
Hi Petri,
The same code which I tried above is working fine today, but the output of my job was empty, even though there are some word matches in the input file.
a)input file content:
hadoop shiva rama hql
java hadoop
b) my properties file
fs.default.name=hdfs://localhost:54310
mapred.job.tracker=localhost:54311
input.path=/input3/
output.path=/output7/
c) application context was same as yours..
d) I am using Apache Hadoop 1.2.1 in pseudo-distributed mode on my system and trying your example using STS (Spring Tool Suite). When running the Main.java class as a Java application, I get the output below in the console:
INFO - sPathXmlApplicationContext - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@46ae506e: startup date [Wed Oct 30 12:53:21]
INFO - JobClient - Running job: job_201310301123_0008
INFO - JobClient - map 0% reduce 0%
INFO - JobClient - map 100% reduce 0%
INFO - JobClient - map 100% reduce 33%
INFO - JobClient - map 100% reduce 100%
INFO - JobClient - Job complete: job_201310301123_0008
INFO - JobClient - Counters: 28
INFO - JobClient - Job Counters
INFO - JobClient - Launched reduce tasks=1
INFO - JobClient - SLOTS_MILLIS_MAPS=6477
INFO - JobClient - Total time spent by all reduces waiting after reserving slots (ms)=0
INFO - JobClient - Total time spent by all maps waiting after reserving slots (ms)=0
INFO - JobClient - Launched map tasks=1
INFO - JobClient - Data-local map tasks=1
INFO - JobClient - SLOTS_MILLIS_REDUCES=9332
INFO - JobClient - File Output Format Counters
INFO - JobClient - Bytes Written=0
INFO - JobClient - FileSystemCounters
INFO - JobClient - FILE_BYTES_READ=76
INFO - JobClient - HDFS_BYTES_READ=134
INFO - JobClient - FILE_BYTES_WRITTEN=109536
INFO - JobClient - File Input Format Counters
INFO - JobClient - Bytes Read=34
INFO - JobClient - Map-Reduce Framework
INFO - JobClient - Map output materialized bytes=76
INFO - JobClient - Map input records=2
INFO - JobClient - Reduce shuffle bytes=76
INFO - JobClient - Spilled Records=12
INFO - JobClient - Map output bytes=58
INFO - JobClient - CPU time spent (ms)=1010
INFO - JobClient - Total committed heap usage (bytes)=163581952
INFO - JobClient - Combine input records=0
INFO - JobClient - SPLIT_RAW_BYTES=100
INFO - JobClient - Reduce input records=6
INFO - JobClient - Reduce input groups=5
INFO - JobClient - Combine output records=0
INFO - JobClient - Physical memory (bytes) snapshot=237084672
INFO - JobClient - Reduce output records=0
INFO - JobClient - Virtual memory (bytes) snapshot=1072250880
INFO - JobClient - Map output records=6
INFO - JobRunner - Completed job [wordCountJob]
In my HDFS, I'm getting the output7 directory with _SUCCESS, logs and part-00000 files, but part-00000 is an empty file; it should contain "hadoop 2" as per my input file.
Please help me out.
2) Please also suggest how to use that "jar attribute" in my applicationContext.xml.
Hi Petri, I got the answer to my problem. Thanks!
I am making a jar of my project on the local drive and giving that path to the classpath of my project's run configuration, though I am not using any "jar" attribute in the applicationContext.xml file.
Hi,
It is good to hear that you could solve your problem!
Hi Petri,
I am facing a similar issue: class not found when running from Eclipse and submitting jobs to a pseudo-distributed Hadoop machine.
I tried the options below that you provided, but they do not work for the sample project you provided:
Add the jar-by-class attribute to the job configuration
Use the jar attribute if jar-by-class does not work
Adding the libs attribute and providing the path of the jar works, but that approach makes development hard because you have to bind the libs attribute in every job bean.
Why is the jar-by-class attribute not working?
How are big workflows developed using Spring Hadoop?
Kindly share your thoughts!
I have to admit that I haven’t been using Spring Data for Apache Hadoop after I wrote this tutorial but I remember that the version of Spring Data for Apache Hadoop which I used in this tutorial didn’t support all Hadoop versions. Which Hadoop version are you using?
Hi,
I use spring-data-hadoop version 1.0.2.RELEASE
and use hadoop-core 1.2.1 …
The problem is an empty output file named part-r-00000.
The program finishes successfully, but when I use the command:
hadoop dfs -cat /output/part-r-00000
there is nothing to list…
Please help
I have to confess that I have no idea what your problem is. I will update the example application of this blog post to use the latest stable version of Spring for Apache Hadoop and see if I run into this issue as well.
Hi Petri,
Yours was one of the easiest places I found to integrate Spring and Hadoop. I followed your instructions step by step and everything went without errors.
But I got INFO: Starting job [wordCountJob] and it never ended, which means the execution was not completed. Why is this so? Could you help me?
Can you see the job in the job tracker user interface?
Hi,
I have a problem executing the project within the Eclipse IDE. I get a class-not-found error…
Do I need to add eclipse hadoop plugin or anything else?
Do I need any extra configuration about eclipse?
Note: I can execute the project after making a jar and executing it with "hadoop jar …jar"
Hi,
I have solved the problem by adding the code below in the hdp:job tag:
libs="${LIB_DIR}/mapreduce.jar"
But I still don't know why it couldn't execute without the code above.
Can you help?
Thanks…
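For context, the libs attribute mentioned above sits on the job definition itself; a sketch (the bean id, paths, and class names are assumptions):

```xml
<hdp:job id="wordCountJob"
         input-path="${input.path}"
         output-path="${output.path}"
         mapper="net.petrikainulainen.spring.data.apachehadoop.WordMapper"
         reducer="net.petrikainulainen.spring.data.apachehadoop.WordReducer"
         libs="${LIB_DIR}/mapreduce.jar"/>
```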
Did you import the project into Eclipse by using m2e (how to import a Maven project into Eclipse)? I haven't used Eclipse in seven years, so I am not sure how the Maven integration is working at the moment.
However, it seems that for some reason Eclipse didn't add the dependencies of the project to the classpath. I assume that if the project is imported as a Maven project, the Maven integration of Eclipse should take care of this.
Yes, I imported the project into Eclipse by using m2e, but the same problem occurred in the IntelliJ IDE.
You are right that the libraries are not added to the classpath.
By the way, I want to debug the project in Eclipse. Any help?
Thanks so much
I have successfully connected multiple nodes and also uploaded files to HDFS (only the master's files).
How do I upload the files on a slave to the master's HDFS on Ubuntu?
Hi Petri,
I am trying to run the word count program using Spring Hadoop.
I am using hadoop-2.0.0-cdh4.2.0 MR1 in standalone mode.
While executing, it results in the following exception:
Server IPC version 7 cannot communicate with client version 4.
I am unable to connect to HDFS. Please help me out.
Hi,
The example application of this blog post uses Spring for Apache Hadoop 1.0.0. This version doesn’t support Apache Hadoop 2.0.0 (See this comment for more details about this).
You might want to update Spring for Apache Hadoop to version 1.0.2. It should support Apache Hadoop 2.0.0 (I haven’t tried it myself).
Hi, I followed the above steps, but I got this type of error:
WARNING: Failed to scan JAR [file:/B:/Software/STS-SpringToolSuite/springsource/vfabric-tc-server-developer-2.9.5.SR1/Spring-VM/wtpwebapps/Hadoopfitness/WEB-INF/lib/core-3.1.1.jar] from WEB-INF/lib
java.util.zip.ZipException: invalid CEN header (bad signature)
The jar file called core-3.1.1.jar is probably corrupted in some way. This StackOverflow answer provides some clues which might help you to solve this issue.
The location of the jar file suggests that you have downloaded it manually. Is there some reason why you didn’t get it by using Maven?
Hi,
I’m struggling with getting this thing working, already put it on stackoverflow – can u check and help me figure out what’s wrong :
Hi,
Have you tried following the advice given in this comment?
Hi Petri,
Yes, I have already done what was mentioned in the comment. I think the issue is that the Mapper and Reducer classes are not becoming available to the Hadoop framework, but I'm not able to figure it out!
Can you have a look at the Stack Overflow thread I mentioned (even a bounty hasn't helped me out :( )?
Hi Petri,
Just to clarify: I tried adding jar-by-class="com.hadoop.basics.Main" and creating independent mapper and reducer classes (not updated in the Stack Overflow thread where I have put the code with inner classes), but I still get the same error.
Regards,
KA
Did you add the job tracker configuration to your application context configuration file?
When I was investigating the problem of the person who originally asked the same question, I noticed that adding the jar-by-class attribute didn't solve the problem. I had to configure the job tracker as well.
You can configure the job tracker by adding the following snippet to your application context configuration file (you can replace the property placeholders with actual values as well):
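The snippet itself did not survive the comment system, so here is a sketch reconstructed from the property names used earlier in this thread:

```xml
<!-- Sketch: point the job at the real cluster instead of the local runner. -->
<hdp:configuration>
    fs.default.name=${fs.default.name}
    mapred.job.tracker=${mapred.job.tracker}
</hdp:configuration>
```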
Hi Petri,
Yeah, you are right about 'jar-by-class' not solving the problem, but I have already configured the job tracker in the applicationContext.xml as follows:
Yet, I'm getting the ClassNotFoundException for the mapper and reducer classes.
Can you spare some time and have a look at the issue detailed at:
It seems that there are a few differences between your code and my code: | http://www.petrikainulainen.net/programming/apache-hadoop/creating-hadoop-mapreduce-job-with-spring-data-apache-hadoop/ | CC-MAIN-2015-06 | refinedweb | 6,324 | 57.37 |
Simple IE-like Menu and Toolbar
Environment: VS6/.NET, Windows 98/Me/2000/XP
1 Overview
This article was inspired by the Alpha WTL Sample application that I saw recently. I decided to do something similar, using the MFC library. Standard MFC applications are very out of date with their 16-color toolbars and menus without images. There are some MFC extensions that implement Office- or Visual Studio-like control bars. They are pretty and very powerful, but just too complicated for most simple applications. Besides, they don't obey the standard Windows UI style. My idea was to create a simple interface based on Internet Explorer that implements features introduced in Windows XP and remains compatible with all OS versions since Windows 98.
The project consists of three classes:
CMenuBar: A toolbar that looks and acts exactly like a menu bar. It can be placed in a rebar control just like in Internet Explorer. It also draws icons next to menu items.
CAlphaImageList: A replacement for CImageList that supports images with an alpha channel introduced in Windows XP. It automatically generates hot and disabled images looking like those in Internet Explorer 6.
CAlphaToolBar: An extension of CToolBar that allows using alpha channel images.
Under Windows XP, this interface automatically uses either the 3D style or the new flat style (compared in the picture above). Under older OS versions it uses the traditional 3D style. Another feature introduced in Windows XP is images with alpha channel. The icons can have smoothed edges so that they look good on every background, dark or bright. This interface works correctly with images from 16 colors to 32-bit alpha channel bitmaps under all versions of Windows.
2 Using It in Your Applications
Note: The menu bar only works in SDI frame windows. It can't be used in dialog windows or in MDI frame windows.
Step 1: Add AlphaImageList.cpp, AlphaToolBar.cpp, and MenuBar.cpp with their corresponding headers to your project.
Step 2: Make sure that your application uses the Windows XP visual style. Copy the manifest.xml file to the res directory of your project and add the following line to YourApp.rc2:
CREATEPROCESS_MANIFEST_RESOURCE_ID RT_MANIFEST "res/manifest.xml"
Step 3: Ensure that Windows XP symbols will be included in your project. Add the following directive at the beginning of StdAfx.h or define it in your project settings:
#define _WIN32_WINNT 0x0501
You will need the new Platform SDK that includes Windows XP symbols.
Step 4: Put the following in your frame window header:
protected: // control bar embedded members CStatusBar m_wndStatusBar; CMenuBar m_wndMenuBar; CAlphaToolBar m_wndToolBar; CReBar m_wndReBar;
Of course you may add any number of toolbars and dialog bars to the rebar control.
Step 5: Modify your frame window's OnCreate handler like in the example:
int CMainFrame::OnCreate(LPCREATESTRUCT lpCreateStruct) { if (CFrameWnd::OnCreate(lpCreateStruct) == -1) return -1; GetMenu()->DestroyMenu(); SetMenu(NULL); if (!m_wndToolBar.Create(this, AFX_IDW_TOOLBAR) || !m_wndToolBar.LoadToolBar(IDR_MAINFRAME, AILS_NEW)) { TRACE0("Failed to create toolbar\n"); return -1; // fail to create } if (!m_wndMenuBar.Create(this) || !m_wndMenuBar.LoadMenuBar(IDR_MAINFRAME, AILS_NEW)) { TRACE0("Failed to create menubar\n"); return -1; // fail to create } m_wndMenuBar.LoadToolBar(IDR_MAINFRAME); if (!m_wndReBar.Create(this, RBS_BANDBORDERS, WS_CHILD | WS_VISIBLE | WS_CLIPCHILDREN | WS_CLIPSIBLINGS | CBRS_ALIGN_TOP) || !m_wndReBar.AddBar(&m_wndMenuBar) || !m_wndReBar.AddBar(&m_wndToolBar, NULL, (CBitmap*)NULL, RBBS_BREAK)) { TRACE0("Failed to create rebar\n"); return -1; // fail to create } if (!m_wndStatusBar.Create(this) || !m_wndStatusBar.SetIndicators(indicators, sizeof(indicators)/sizeof(UINT))) { TRACE0("Failed to create status bar\n"); return -1; // fail to create } return 0; }
Step 6: Add the following message handlers to your frame window: PreTranslateMessage, WM_MENUCHAR, WM_SYSCOMMAND, WM_ACTIVATE, and WM_SETTINGCHANGE.
BOOL CMainFrame::PreTranslateMessage(MSG* pMsg) { if (pMsg->message == WM_SYSKEYDOWN && pMsg->wParam == VK_MENU) m_wndMenuBar.SetPrefix(TRUE); else if (pMsg->message == WM_KEYUP && pMsg->wParam == VK_MENU) m_wndMenuBar.SetPrefix(FALSE); return CFrameWnd::PreTranslateMessage(pMsg); } LRESULT CMainFrame::OnMenuChar(UINT nChar, UINT nFlags, CMenu* pMenu) { if (m_wndMenuBar.OpenMenu(nChar)) return -1; return CFrameWnd::OnMenuChar(nChar, nFlags, pMenu); } void CMainFrame::OnSysCommand(UINT nID, LPARAM lParam) { if (nID == SC_KEYMENU && m_wndMenuBar.OnKeyMenu(lParam)) return; CFrameWnd::OnSysCommand(nID, lParam); } void CMainFrame::OnActivate(UINT nState, CWnd* pWndOther, BOOL bMinimized) { m_wndMenuBar.Activate(nState != WA_INACTIVE); CFrameWnd::OnActivate(nState, pWndOther, bMinimized); } void CMainFrame::OnSettingChange(UINT uFlags, LPCTSTR lpszSection) { m_wndMenuBar.UpdateSettings(); CFrameWnd::OnSettingChange(uFlags, lpszSection); }
2.1 Toolbar images
The CAlphaImageList class supports two styles of images:
AILS_OLD: The image list uses a 16-color bitmap and doesn't create the hot and disabled images. The toolbar looks like in old style applications. The default background color (RGB 192,192,192) is used as transparent. You can change it by modifying the following line in AlphaImageList.cpp:
#define AIL_TRANSPARENT RGB(192,192,192)
To avoid some gray parts of buttons (like the disk label or the printer) being treated as transparent, I modified the original bitmap so that the magenta color is replaced by a slightly lighter gray (RGB 208,208,208), which is non-transparent. You can replace the Toolbar.bmp file with Toolbar4.bmp from the resources directory of the demo project. Before that, change the toolbar button size to 16x16 pixels.
AILS_NEW: The image list uses a 32-bit bitmap if comctl32.dll version 6 is detected, or a 24-bit bitmap otherwise. You can use any number of colors in the toolbar bitmap. If the bitmap contains an alpha channel and the image list is 32-bit, the alpha channel is used as transparency information. Otherwise, the gray color (RGB 192,192,192) is transparent and all other colors are opaque.
You can create 32-bit bitmaps using Adobe Photoshop or a similar application. Even if the image has an alpha channel, you should use the gray color as the background so that the transparency is correct in previous OS versions—the only difference is that the icon edges won't be smooth in that case.
The resources directory of the demo project contains three versions of the new style toolbar bitmap: 8-bit, 24-bit, and 32-bit (with alpha channel). You can use whichever you want in your application. The bitmap was taken from the WTL sample application, with the background color changed to gray. The toolbar button size should be 16x16.
Note that Visual Studio will not let you edit the toolbar if the bitmap contains more than 256 colors. In that case, you will have to modify the resource file (YourApp.rc) by hand. The toolbar definition looks like this (16, 16 is the button size and IDR_MAINFRAME is the resource ID):
IDR_MAINFRAME TOOLBAR DISCARDABLE 16, 16 BEGIN BUTTON ID_FILE_NEW BUTTON ID_FILE_OPEN BUTTON ID_FILE_SAVE SEPARATOR BUTTON ID_EDIT_CUT ... END
2.2 Menu resource
CMenuBar contains the following functions to load data from resources:
BOOL LoadMenuBar(UINT nID, int nStyle=AILS_OLD)
Load a menu resource. The nStyle parameter specifies the image list style (see above). You may call this function multiple times to replace the previous menu. The menu bar will automatically resize inside a rebar if needed. All images will be discarded.
BOOL LoadToolBar(UINT nID)
Load images from the bitmap resource and assign them to menu items with the same command IDs as in the toolbar resource. You may add multiple toolbars to a menu. You can create a separate toolbar resource that contains images for menu items, or use the default toolbar.
The menu bar assumes that the size of all images is 16x16 pixels. You can change this by editing the following lines in MenuBar.cpp:
#define MB_CX_ICON 16 #define MB_CY_ICON 16
2.3 Popup menus
CMenuBar also provides support for displaying context menus with the correct visual style and images. Put all popup menus in one resource and call the following function to load it:
BOOL LoadPopupMenu(UINT nID)
This menu will automatically use the same images as the window menu. There are two functions to display popup menus:
void TrackPopup(int nIndex, CPoint ptPos) void TrackPopup(LPCSTR lpszName, CPoint ptPos)
The first one uses the menu index; the second one uses the name of the menu, which may be more comfortable if your application uses many context menus. The ptPos is the menu position in screen coordinates.
Note: If you don't load a separate popup menu, the window menu will be used by default.
3 Final Notes
I found a bug in MFC 6 that caused the edit view to be incorrectly placed in the window, so that it covers a part of the rebar. The CEditView::CalcWindowRect should be replaced with CView::CalcWindowRect to fix this. Also, the edit view doesn't load and save the text correctly if comctl32.dll version 6 is used.
Implementing support for MDI applications is possible, but quite difficult. Anyway, the MDI architecture is not comfortable and out of date; check out my last article (Multithreaded SDI Applications) for a different solution. There are a lot of things that could be done, such as support for chevrons, popup buttons, and so forth. However, I tried to keep this code small, simple, and yet useful for most purposes.
Visit the author's home page at.
DownloadsDownload demo project - 49 Kb
Download source - 15 Kb
AddStrings crash solvedPosted by AlexLouis on 06/17/2005 09:09am
In the LoadMenuBar and AttachMenu methods, you have this code:But You can't pass a CString to the AddStrings method, because the string must be ended by two NULL characters. Under somes circumstances, this may lead to a crash. The correct code would be like: Reply
very nice code and my suggestion for the "manifest" stuffPosted by Legacy on 11/06/2003 12:00am
Originally posted by: seazi
popup menuPosted by Legacy on 09/23/2003 12:00am
Originally posted by: Mitesh Pandey
hi!
I used the features provided by you in my MFC application and it worked fine.
In my MFC application I have several popup menus. The items in these popup menus are not associated with the windows menu items. The IDs of the popup items are unique and are contained in their respective menu resource.
How can I use the techniques provided by you to these popup menus so that these popup menus have icons.
Please Help.
IE barsPosted by Legacy on 09/12/2003 12:00am
Originally posted by: Mitesh Raj Pandey
IE barsPosted by AiType on 09/23/2004 07:14am
Hi! I had the same problems by compiling the demo code. That's because you use an old user32.lib, and so an oldReply
. You have to donwload the new platform sdk from Microsoft to get the new libraries. See on:
bye
Memory leaks? Corrupted Stack?Posted by Legacy on 09/05/2003 12:00am
Originally posted by: Kurta
I can compile and run the iebars_demo.zip project (Windows XP, Visual Studio .NET) without modifications. But I get some strange results. Here's an excerpt of the debug output:
'IEBars.exe': Loaded 'I:\My Projects\IEBars\Debug\IEBars.exe', Symbols loaded.
.
.
.
Loaded 'K:\WINDOWS\WinSxS\x86_Microsoft.Windows.Common-Controls_6595b64144ccf1df_6.0.10.0_x-ww_f7fb5805\comctl32.dll', No symbols loaded.
'IEBars.exe': Loaded 'K:\WINDOWS\system32\uxtheme.dll', No symbols loaded.
// This occurs when the
// BOOL CAlphaImageList::AddBitmap(UINT nID)
// function returns (Ln:260).
Run-Time Check Failure #2 - Stack around the variable 'bmpi' was corrupted.
Run-Time Check Failure #2 - Stack around the variable 'bmpi' was corrupted.
First-chance exception at 0x77e73887 in IEBars.exe: Microsoft C++ exception: CL_EXCEPCION_FINAL_FUNCION @ 0x0012f100.
First-chance exception at 0x77e73887 in IEBars.exe: Microsoft C++ exception: CL_EXCEPCION_FINAL_FUNCION @ 0x0012f23b.
// Closing the app.
Detected memory leaks!
Dumping objects ->
i:\my projects\iebars\alphaimagelist.cpp(142) : {106} normal block at 0x003882D0, 8192 bytes long.
Data: < > C0 C0 C0 00 C0 C0 C0 00 DA B0 A4 F9 E6 BA B0 EF
i:\my projects\iebars\alphaimagelist.cpp(142) : {91} normal block at 0x003860A0, 8192 bytes long.
Data: < > C0 C0 C0 00 C0 C0 C0 00 DA B0 A4 F9 E6 BA B0 EF
Object dump complete.
The program '[2796] IEBars.exe: Native' has exited with code 0 (0x0).
What's the problem? What should I do?
Thanks,
K.Reply
Platform SDK not neccesary...Posted by Legacy on 09/04/2003 12:00am
Originally posted by: Fagan Nicola
If do not have a Platform SDK,
create a file .h with this defines:
//PlatForm.h
#define BTNS_BUTTON TBSTYLE_BUTTON // 0x0000
#define BTNS_SEP TBSTYLE_SEP // 0x0001
#define BTNS_CHECK TBSTYLE_CHECK // 0x0002
#define BTNS_GROUP TBSTYLE_GROUP // 0x0004
#define BTNS_CHECKGROUP TBSTYLE_CHECKGROUP // (TBSTYLE_GROUP | TBSTYLE_CHECK)
#define BTNS_DROPDOWN TBSTYLE_DROPDOWN // 0x0008
#define BTNS_AUTOSIZE TBSTYLE_AUTOSIZE // 0x0010; automatically calculate the cx of the button
#define BTNS_NOPREFIX TBSTYLE_NOPREFIX // 0x0020; this button should not have accel prefix
#define BTNS_SHOWTEXT 0x0040 // ignored unless TBSTYLE_EX_MIXEDBUTTONS is set
#define BTNS_WHOLEDROPDOWN 0x0080 // draw drop-down arrow, but without split arrow section
#define SPI_GETFLATMENU 0x1022
#define SPI_GETKEYBOARDCUES 0x100A
#define TPM_VERPOSANIMATION 0x1000L
#define MIIM_FTYPE 0x00000100
#define COLOR_MENUHILIGHT 29
#define DT_HIDEPREFIX 0x00100000
..and in your stdafx.h insert:
#include "platform.h"
Recompile all.
Nice Work
ByeReply
how to display certain icons in the menu without creating a button in the toolbar?Posted by Legacy on 08/18/2003 12:00am
Originally posted by: Appstmd
Any idea on how to display certain icons in the menu without creating a button in the toolbar?
I think I could do so by just hiding the button in the toolbar, but is it the best solution?
Thnaks in advance!Reply
CFileDialog Problem!!!Posted by Legacy on 08/12/2003 12:00am
Originally posted by: Brian Farkas
The problem exists as "krkim" wrote.
The CFileDialog works if you select "Open/Save" from the File menu, but if you place a new menu entry and open up a separate CFileDialog, it will "crash" after exiting DoModal. Pretty annoying.
I've discarded all the modifications in CMainFrame (basically deleted your code from my project) but the CFileDialog still failed to work perfectly, so i suspect there is a bug in the new "Platform SDK" (thanks Microsoft ;).
Sadly the new Platform SDK is required to run IEBARS (also the new XP Style buttons/graphics), so i did a little hack. Deleted the new platform SDK from Visual C++ (Tools->Options->Directories). This way IEBARS would not compile because of some missing symbols. I've looked up the values of these required symbols in the new platform SDK (in "CommCtrl.h" & "WinUser.h") and manually typed/defined them into my project's "stdafx.h". This way IEBARS compiled and CFileDialog worked fine again using my old/default plaform SDK which came with Visual C++. Sadly the new XP style graphics/buttons disappeared :(.
I decided to copy the new CommCtrl.* (2 files) and WinUser.* (3 files) from the new platform SDK to my default (old) Visual C++ include directory. It turned out that the compiler wants a "TvOut.h" file aswell, so i copied it from the new platfrom SDK too. The XP style graphics came back and CFileDialog works, too.
By the way, the compiler still says that some symbols left undefined (they are declared in the "WinUser.h"), so i kept these in my "stdafx.h".
Hope this helps!
greets,
Brian/Audio Simulation
How to fix CFileDialog with IEBARS.
1. Do NOT use the new Platform SDK (until it is fixed;), so delete it from Visual C's Tools->Options->Directories.
2. Copy new WinUser.* (3 files), CommCtrl.* (2 files) and TvOut.h from the new platform SDK Include folder to Visual C's default include folder (usually C:\Program Files\Microsoft Visual Studio\VC98\Include). Overwrite the old files.
3. Add these defines into your "StdAfx.h" file
#define SPI_GETKEYBOARDCUES 0x100a
#define SPI_GETFLATMENU 0x1022
#define TPM_VERPOSANIMATION 0x1000L
#define MIIM_FTYPE 0x100
#define COLOR_MENUHILIGHT 29
#define DT_HIDEPREFIX 0x00100000
That's AllReply
Some mistakes, help!Posted by Legacy on 07/31/2003 12:00am
Originally posted by: Dan
e:\program files\microsoft sdk\include\shtypes.h(102) : error C2011: '_SHITEMID' : 'struct' type redefinition
e:\program files\microsoft sdk\include\shtypes.h(120) : error C2011: '_ITEMIDLIST' : 'struct' type redefinition
e:\program files\microsoft sdk\include\shtypes.h(157) : error C2059: syntax error : 'constant'
e:\program files\microsoft sdk\include\shtypes.h(160) : error C2143: syntax error : missing ';' before '}'
e:\program files\microsoft sdk\include\shtypes.h(160) : error C2501: 'STRRET_TYPE' : missing storage-class or type specifiers
e:\program files\microsoft sdk\include\shtypes.h(164) : error C2011: '_STRRET' : 'struct' type redefinition
e:\program files\microsoft sdk\include\shtypes.h(210) : error C2143: syntax error : missing ';' before '}'
e:\program files\microsoft sdk\include\shtypes.h(210) : error C2143: syntax error : missing ';' before '}'
e:\program files\microsoft sdk\include\shtypes.h(210) : error C2143: syntax error : missing ';' before '}'
e:\program files\microsoft sdk\include\shlwapi.h(61) : error C2143: syntax error : missing ';' before '{'
e:\program files\microsoft sdk\include\shlwapi.h(61) : error C2447: missing function header (old-style formal list?)
sdk was installed properly. But I find that there are some same definitons in file "ShlObj.h" and "ShTypes.h". Can you tell me how to solve this problem!Reply
Best Regards!
A little bugPosted by Legacy on 06/11/2003 12:00am
Originally posted by: Appstmd
Hi!
There is a little bug with the menu bar: if you click on a menu without releasing the left button, select an item in the menu and then release the left button, nothing happens!
The program should execute the command associated with the selected item!
Does anyone have an idea on how to fix this problem?
Thks in advance!Reply
Appstmd. | http://www.codeguru.com/cpp/controls/toolbar/article.php/c5487/Simple-IElike-Menu-and-Toolbar.htm | CC-MAIN-2013-20 | refinedweb | 2,875 | 58.38 |
Yes this is a 'tease/joke' thread. I was reading the Santa Hack thread and saw Bamccaig remark a time or two that he hated globals. Since he knows I like him and has offered advice and help when he didn't have to, I couldn't resist this gag thread. Yes I know this isn't really a competition and will have no winner, but figured Bamccaig would at least find some humor in it (or not). Matthew is free to lock it if he feels so inclined of course.
-----------------------------------------------
Introducing the POB (P*ss Off Bamccaig) Global Competition Thread
Rules are simple, don't have to have working code. Just paste code that has globals declared.
I'll start:
#include <iostream>

bool PISS_OFF_BAMCCAIG = 1;

int main()
{
    return 0;
}
-----------------------------------------------
Sorry bamccaig, couldn't resist.
bool missingParen=true;
#define IAMNOTMISSING }
int main() {
if(missingParen) {
IAMNOTMISSING
return 0;
}
------------Solo-Games.org | My Tech Blog: The Digital Helm
int x = getX();
if(somethingIsTrue(x)){
    doSomethingAboutIt(x);
}

for(int i=0; i<getLength(); i++){
    doSomethingForEach(getItem(i));
}

Object foo = getFoo();
if(somethingElseEntirelyIsTrue(foo)){
    doSomethingAboutThat(foo);
}else{
    doSomethingAboutThat(foo + 5);
    doSomethingDifferently(foo);
}
doSomething ( 5 );
[bamccaig@sephiroth ~]$ cat your_file
This was a triumph.
I'm making a note here. HUGE SUCCESS.
[bamccaig@sephiroth ~]$
People that drive too fast through intersections and have to skip across 3 lanes to make a corner in their huge-ass PoS pickup trucks, people that cut corners in the road and cut you off, and people that drive down the middle of the fucking road instead of just wearing down the snow on both sides...
int temp1=0;
float temp2=3;
double temp3_f=2;
All stolen from content he wrote. I just pirated his pet peeves.
Obviously, the language of choice for this is PHP.
<?php

for ($i = 0; $i < mt_rand(); ++$i) {
    $varname = sprintf("variable%d", $i);
    $$varname = md5(mt_rand() . "and a bottle of rum");
    define($varname, $i);
}

function add($a, $b) {
    $varname_a = 'variable' . $a;
    $varname_b = 'variable' . $b;
    global $$varname_a, $$varname_b, $$$varname_a, $$$varname_b;
    return $$$varname_a + $$$varname_b;
}
Might even actually work for some cases. And it's secure, because it uses a salt
---
Me make music: Triofobie
"We need Tobias and his awesome trombone, too." - Johan Halmén
Is there a really important reason why PHP uses all those dollar signs?
Is there a really important reason why PHP uses all those dollar signs?
It's the syntax. $x is a variable; $$x is a variable variable. Normally you don't use these, but the feature is there. What it does is evaluate the original (single-dollared) variable as a string, and use it as a variable name. So if you have this:
$a = 'foobar';
$x = 'a';
print($$x);
...it will print "foobar" - $x is 'a', so $$x resolves to $a.
Variable variables are loosely comparable to pointers in C, only that you use strings to point to variables instead of binary pointer values (and that memory management is taken care of by the language).
The C++ equivalent of the above would be:
string a = "foobar";
string* x = &a;
cout << *x;
Only that PHP uses strings to reference variables, so you can construct variable references by name, as strings.
Thanks for the explanation Tobias, I'd have to say that language is mind numbingly ugly!
Only if you make it so. PHP has enormous potential for writing hideous code, but that's no excuse for actually doing it, just like owning a knife is not an excuse for stabbing people. (Cue "guns kill people" debate)
klaxxzgamemakre ftw
-----For assistance, please register and click on this link to PM a moderator
Sometimes you may have to send 3-4 messages
just like owning a knife is not an excuse for stabbing people.
WHAT???!?!!! You mean I've been doing it wrong this entire time??
WHAT???!?!!! You mean I've been doing it wrong this entire time??
No, certainly not. You just need a better excuse than "I own a knife". For example, you might say, "He looked at me funny", or "I needed something he has, but he wouldn't give it to me", or simply "God told me to do it".
stabbity stabbity stab is its own excuse.
#define RET int
#define NAME main()
#define GAME {dosomething()
#define STARTAWESOMEGAME RET NAME GAME
#define ENDAWESOMEGAME return 0;}
#define AWESOMEGAME 0;
struct GAMESTATE {int a;bool b;} AWESOME_GAMESTATE;
class UnrequiredSingleton;
void dosomething();
STARTAWESOMEGAME;
ENDAWESOMEGAME
The following is more readable:
Amazingly so, I edited it again. Someone reading my original version would have had no idea what's going on - now with the labels it all is much clearer.
Yeah, the labels are like self documenting code. There's really no need for comments when you code properly like that.
And it removes the need for ugly indentation. Everything lines up very nicely. | https://www.allegro.cc/forums/thread/609316/944432 | CC-MAIN-2018-39 | refinedweb | 820 | 63.19 |
in reply to Re: Help with Geo::IP outputin thread Help with Geo::IP output
Thanks you for much for the example.
Currently you don't. It is a module in the App:: namespace. If you want to use the infrastructure generated by geoip, steal the code from geoip or use its JSON output in a pipe:
use JSON::MaybeXS;
open my $gh, "-|", "geoip", "--json", $url_or_hostname or die;
my $data = decode_json (do { local $/; <$gh> });
say $data->[0]{provider}; # [0] as it returns a list of matches
As with the other post today, in a piped open, close needs to be checked for errors as well. Or, let me just plug my relatively new module IPC::Run3::Shell::CLIWrapper ;-)
use JSON::PP qw/decode_json/;
use IPC::Run3::Shell::CLIWrapper;
my $geoip = IPC::Run3::Shell::CLIWrapper->new( { fail_on_stderr=>1,
stdout_filter => sub {$_=decode_json($_)} }, qw/ geoip --json
--DB=dbi:SQLite:dbname=geoipdb / );
my $data = $geoip->('209.197.123.153');
07 March 2012 07:08 [Source: ICIS news]
SINGAPORE (ICIS)--
The tender, which closes on Wednesday, did not specify the volume required, the traders added.
However, the tender said FPCC will consider a minimum volume of 25,000 tonnes, the traders said.
The seller that is successful in the tender will have to deliver the cargoes to Mailiao, the traders said, adding that the pricing formula for the tender was undisclosed.
FPCC last purchased 100,000 tonnes of open-spec naphtha for delivery in the first half of April to Mailiao in the fourth week of February, according to the traders.
The cargoes were bought at premiums of $14.00-20.00/tonne (€10.64-15.20/tonne) to | http://www.icis.com/Articles/2012/03/07/9538943/taiwans-formosa-petrochemical-seeks-naphtha-for-h2.html | CC-MAIN-2014-35 | refinedweb | 121 | 70.33 |
I just installed PyCharm 2.6.3 on RedHat Enterprise Linux 6.2 and have imported pre-existing python code into the IDE. The code runs fine and I can debug it except when I try to spawn a new thread. I am unable to stop on a breakpoint in the new thread.
Do I need to add the PyChar-2.6.3/helper/pydev to my PythonPath to get it to work?
Changed to more accurately reflect issue.
It is possible to debug new threads in PyCharm.
Could you please provide the script you are trying to debug, and we'll see what goes wrong here.
Thank you for helping.
There are several hundred files involved in a large package, so it will be close to impossible for me to ship a useful file for you to debug. Since the feature is supported, it must be a configuration issue on my side or an odd niche system configuration that PyCharm is not expecting.
When using Eclipse I used the open-source version of pydev and did an import pydevd; pydevd.settrace(). Looking at the PyCharm version of pydevd there is a settrace() method, but it does not look like I need to call it manually for PyCharm.
Looking through the forums there was a suggestion to set: PYCHARM_DEBUG=true. I set that and also enabled the "Background Tasks" window. It comes back with the message "Connecting to debugger" and keeps on scanning.
See the attached file for the debug screen.
output.txt (17KB)
There was a suggestion to set --noreload, but I am either passing it to the Python debugger or to my Python script, and the --noreload option is not being accepted.
Any debugging suggestions to get you more data would be appreciated.
Regards,
Tam
import threading

def on_start_button_clicked(self, widget):
    widget.set_sensitive(False)
    self.pause_button.set_sensitive(True)
    self.stop_button.set_sensitive(True)
    self.test_thread = threading.Thread(target=self.test_manager.execute_list,
                                        name='execution_thread',
                                        args=(self.console,))
    self.test_thread.start()
From eclipse with pydevd plugin we can insert breakpoints into python files called from Popen by inserting:
import pydevd
pydevd.settrace()
This implementation is not supported in PyCharm. PyCharm does support remote debugging, but it is not obvious how to enable that for the same system.
See attached project file to reproduce issue.
processTest.tar (40KB)
import pydevd
pydevd.settrace('localhost', port=55000)
Note the port is set in
Reference discussion below.
The attached file has a sample project that gives and example.
processTest.tgz (839KB)
Debugging of multiple processes is also supported in PyCharm (sorry, I didn't mention Popen in the question head and thought that you were asking about threads).
If the Settings | Debugger | Python | Attach to subprocesses automatically checkbox is enabled, PyCharm attaches its debugger to every spawned process.
Remote debugging can also help in that case, but it requires some cumbersome manipulations.
Checking "Attached to subprocess automatically" was one of the first things I tried but still could not step into the code for debugging.
Just tried what you suggested with the sample processTest project above and still could not successfully step into code.
I am wondering if the issue I am seeing with some incompatibility with version of Java or python.
Here is my java version
/pycharm.sh
Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode)
Warning: The encoding 'UTF-8' is not supported by the Java runtime.
[ 82724] WARN - .packaging.PyPIPackagesUpdater - Connection timed out
Here is my python version
Python 2.6.6 (r266:84292, Aug 28 2012, 10:55:56)
[GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Regards,
Tam
Regards,
Tam
Could you please set PYCHARM_DEBUG environment variable in your run configuration and provide console output after you run your script.
As you specified I enabled the PYCHARM_DEBUG environment variable.
I used the same project that I uploaded above processTest.tgz and disabled the line pydevd.settrace in externalPythonFile.
See attached for debug output from the console window.
Please feel free to contact me if you need any more characterization to help with a workaround or fix.
outputtest.txt (8.1KB) | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206595465-Debugging-in-Linux-when-Calling-Popen | CC-MAIN-2020-10 | refinedweb | 722 | 58.89 |
Table of Contents

What You Will Learn
What Is Cisco IOS EEM?
Cisco IOS EEM Processing
Sample Cisco IOS EEM Applications
Technical Overview of Cisco IOS EEM
Event Detectors
Event Detector: None
Event Detector: Syslog
Event Detector: SNMP
Event Detector: Timer
Event Detector: Counter
Event Detector: Interface
Event Detector: CLI
Event Detector: OIR
Event Detector: RF
Event Detector: IOSWDSYSMON
Event Detector: GOLD
Event Detector: APPL
Event Detector: Process
Event Detector: WDSYSMON
Event Detector: SNMP-Notification
Event Detector: RPC
Event Detector: Track
Policies: Applets and Scripts
Applets
TCL Scripts
Using TCL with Cisco IOS EEM
TCL Script Variables
Implementing TCL: Basic Steps for Registration
TCL Script Components
Event Trigger: Event Detector
Import of Cisco TCL Libraries
Information Collection
Logic and Action
Use Cases
None Event Detector
Problem Statement
Suggested Solution
Script Flow
Test Procedure
Syslog Event Detector
Problem Statement
Suggested Solution
Script Flow
Test Procedure
Raw Script
SNMP Event Detector
Problem Statement
Suggested Solution
Script Flow
Test Procedure
Raw Script
Timer Event Detector
Problem Statement
Suggested Solution
Script Flow
Test Procedure
Raw Script
Interface Event Detector
Problem Statement
Suggested Solution
Script Flow
Test Procedure
Raw Script
Raw Script (Secondary Email Script)
OIR Event Detector
Problem Statement
Suggested Solution
Script Flow
Test Procedure
Raw Script
GOLD Event Detector
Problem Statement
Suggested Solution
Script Flow
Test Procedure
Raw Script
Process Event Detector
Problem Statement
Suggested Solution
Script Flow
Test Procedure
Raw Script
WDSYSMON Event Detector
Problem Statement
Suggested Solution
Script Flow
Test Procedure
Raw Script
Track Event Detector
Problem Statement
Suggested Solution
Script Flow
Test Procedure
Raw Script
Conclusion
For More Information
Appendix
Detecting Multiple Events
Using event_reqinfo
Applying Decision Logic
If-Then Example
If-Then-Else Example
Applying Actions
Send a Syslog Message
Send an Email Alert
Send an SNMP Trap
TCL Code Snippets
Applying Regular Expressions
Annotated TCL Script Example
Applet and TCL Script Comparison Example
What You Will Learn
This document provides an introduction to Cisco IOS® Embedded Event Manager (EEM) and describes some potential use cases.
Cisco IOS EEM is a robust tool available to customers and is a significant differentiator for Cisco. Cisco IOS EEM is a value-added feature that is included at no cost (on most Cisco® routing and switching platforms) and can reduce customer operating expenses (OpEx) by automating tasks, providing real-time alerts with automated responses, and facilitating troubleshooting. It is essentially a full-time network agent that allows devices to monitor and detect events and immediately take action.
Cisco IOS EEM is embedded in Cisco IOS Software and is enabled by default. There is no requirement or dependency on a centralized management system. Cisco IOS EEM capability is therefore distributed across thenetwork, and Cisco IOS EEM can identify a problem and take action at the point closest to the problem.
This document describes Cisco IOS EEM and provides a technical overview. It also discusses event detectors, policies (using applets and scripts), and use cases.
The appendix provides additional technical details. It discusses multiple event detection, how to gather information using the event_reqinfo command, and decision logic. It also presents script action examples, Tool Command Language (TCL) code snippets, and helpful commands and URLs and steps through execution of a sample script.
Note: While this document is based on tests using Cisco Catalyst® 6500 and 4500 Series Switches, the concepts and scripts also apply to other platforms that support Cisco IOS EEM. Cisco IOS EEM Version 2.4 is used as the basis for all discussions in this document, and this version of Cisco IOS EEM is currently supported on the Cisco Catalyst 6500 and 4500 Series using Cisco IOS Software Release 12.2(33)SXI (Cisco Catalyst 6500 Series ) and 12.2(50)SG (Cisco Catalyst 4500 Series).
What Is Cisco IOS EEM?
Cisco IOS EEM is a unique subsystem within Cisco IOS Software. Cisco IOS EEM is a powerful and flexible tool that can automate tasks and customize the behavior of Cisco IOS Software and the operation of the device. Using Cisco IOS EEM, customers can create and run scripts directly on a router or switch. The scripts are referred to as Cisco IOS EEM policies and can be programmed using integrated Cisco IOS Software commands (creating an applet) or using the TCL scripting language.
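To make the two policy styles concrete, below is a minimal applet next to a roughly equivalent TCL policy. Both simply log a custom message when a matching syslog line appears. The applet name, pattern, and message here are illustrative placeholders, not taken from this paper, and exact syntax can vary slightly by Cisco IOS EEM version.

```
! Applet form: entered directly at the CLI, stored in the running configuration
event manager applet HELLO-EEM
 event syslog pattern "SYS-5-CONFIG_I"
 action 1.0 syslog msg "EEM applet saw a configuration change"
```

```tcl
# TCL form: saved as a .tcl file in flash and registered with the EEM server
::cisco::eem::event_register_syslog pattern "SYS-5-CONFIG_I"
namespace import ::cisco::eem::*
action_syslog msg "EEM TCL policy saw a configuration change"
```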
Cisco IOS EEM allows customers to:
● Use the intelligence in Cisco IOS Software to detect specific events and respond or alert network administrators to those events
● Automate routine tasks and create custom commands
● Respond to detected events with immediate action, which may include:
- Making an automated change to the device configuration
- Gathering information and sending an email
- Implementing a custom syslog or Simple Network Management Protocol (SNMP) trap
Although command-line interface (CLI) applets provide a quick and easy interface for Cisco IOS EEM, TCL scripts add a great deal more power and flexibility. This document focuses on TCL scripts and includes a few sample applet configurations for comparison.
Cisco IOS EEM Processing
Cisco IOS EEM processing involves two basic functions:
● Event detection: Configure Cisco IOS EEM to respond on the basis of one of the Cisco IOS Software embedded event detectors using applets (embedded in Cisco IOS Software) or TCL scripts (downloaded to flash memory and loaded into the system). Each event detector can trigger a script on the basis of the specific event type. Most scripts are triggered when an event occurs in Cisco IOS Software: for example, when an interface goes down or a threshold is exceeded. However, some events are triggered by time or manual execution.
● Action policy implementation: Depending on the event being triggered, some action will be performed, such as sending an email message, reconfiguring the switch, or installing an IP route.
- Before any action is taken, some additional information will probably need to be obtained. The event_reqinfo facility, an array that is created in Cisco IOS Software for each event detector, is one source of information. CLI commands embedded in the script or applet are also good sources of additional information.
- Conditional loop logic such as If-Then-Else and If-Then-While statements can also be used to further focus the decision criteria before an action is performed. For example, after an event detector is triggered, a script or applet could invoke a CLI command to get the time using show clock. The result could be parsed to determine whether the event occurred within a specific time frame: IF the event occurred within the relevanttime frame, THEN perform action A; ELSE perform action B.
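The If-Then-Else flow just described can be sketched as a TCL policy. Everything below is a hypothetical illustration: the syslog pattern, the 08:00-18:00 window, and the log messages are placeholders, and the cli_open/cli_exec calls follow the conventions of the Cisco-supplied EEM TCL library.

```tcl
::cisco::eem::event_register_syslog pattern "UPDOWN" occurs 1

namespace import ::cisco::eem::*
namespace import ::cisco::lib::*

# Open a CLI session and capture the current time
if {[catch {cli_open} result]} {
    error $result $errorInfo
}
array set cli $result
cli_exec $cli(fd) "enable"
set clock_out [cli_exec $cli(fd) "show clock"]
catch {cli_close $cli(fd) $cli(tty_id)}

# Typical "show clock" output: "*10:05:42.919 UTC Mon Mar 1 2010"
if {![regexp {(\d+):\d+:\d+} $clock_out -> hour]} {
    error "unable to parse show clock output"
}
scan $hour "%d" hour    ;# avoid octal interpretation of hours such as "08"

if {$hour >= 8 && $hour < 18} {
    action_syslog priority info msg "Event in business hours: taking action A"
} else {
    action_syslog priority info msg "Event outside business hours: taking action B"
}
```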
Sample Cisco IOS EEM Applications
Here are some examples of Cisco IOS EEM applications:
● Automatically configure a switch interface depending on the device connected to the port. For example, if an IP phone is detected, Cisco IOS EEM can automatically configure the port settings for an IP phone.
● Automatically respond to an abnormal condition detected on an interface. For example, if a high error rate is observed on an interface, Cisco IOS EEM can alert network operations or automatically shut down the port and reroute traffic.
● Detect specific syslog messages and collect information. For example, if more information is needed to troubleshoot a sporadic problem, Cisco IOS EEM can gather real-time information upon detection of a specific syslog message. This information can then be automatically emailed to network operations staff for analysis.
The applications for Cisco IOS EEM are limited only by the network administrator’s imagination. A Cisco IOS EEM scripting community has been established through which people can share scripts they have found useful. This community is especially valuable for getting some initial ideas about how others have put Cisco IOS EEM to work in their networks. Visit the Cisco Beyond — Product Extension Community at.
Technical Overview of Cisco IOS EEM
Figure 1 shows the basic Cisco IOS EEM architecture. Event detectors monitor the system for specific events. Events are typically fault conditions, counters, or other occurrences that are deemed interesting. When an event occurs, the event detector passes information to the Cisco IOS EEM server subsystem. If the event matches a registered policy (user-configured applet or script), Cisco IOS EEM performs an action (as specified in the policy).
The Cisco IOS EEM server is responsible for the following:
● Event registration: Registration is the starting point for Cisco IOS EEM; an event detector identifies something deemed interesting that happened in the device and provides an update on what happened to the Cisco IOS EEM subsystem.
● Publishing an event: If an event detector detects an event, the Cisco IOS EEM server and the policy director communicate to take action.
● Requesting more information about an event: A script can request more information about an event, which can be used in scripting logic or sent to an administrator.
● Storage and retrieval of persistent data: The Cisco IOS EEM server has facilities to store variables and counters and even to access the CLI to make system changes.
● Registering script directories: A script directory must be defined. This process is described in the section “TCL Implementation: Basic Steps for Registration” later in this document.
● Registering TCL scripts: TCL scripts must be registered with the Cisco IOS EEM server before they become active. This process is also described in the section “TCL Implementation: Basic Steps for Registration” later in this document.
● Registering applets: Applets are registered with the Cisco IOS EEM server in the first line of the applet (writtenusing CLI and stored in the running configuration). Whereas TCL scripts can be unregistered to prevent execution, applets are stored in the running configuration and must be removed from the configurationto prevent execution.
Figure 2 shows a more detailed view of a Cisco IOS EEM implementation on the Cisco Catalyst 6500 Series. The current implementation is based on Cisco IOS EEM Version 2.4, which adds event detectors and programmable actions with the TCL subsystem.
Cisco IOS EEM capabilities will continue to expand with the future addition of more event detectors, integration with additional subsystems and external systems, and expanded platform support. As of Cisco IOS Software Release 12.2(33)SXI, the Cisco Catalyst 6500 Series supports 17 different event detectors. The Cisco Catalyst 4500 Series as of Cisco IOS Software Release 12.2(50)SG supports 15 event detectors (Cisco IOS EEM Version 2.4). The two event detectors not supported by the Cisco Catalyst 4500 Series are the Cisco Generic Online Diagnostics (GOLD) and Enhanced Object Tracking (EOT) event detectors.
A Cisco IOS EEM policy is created through an applet (entered from the CLI) or written externally using a TCL script. The policy is registered with the Cisco IOS EEM server through the Cisco IOS EEM policy director using internal APIs. When an event occurs that matches a registered event specification, the event detector interacts with the Cisco IOS EEM server using internal APIs. Actions are then implemented through the Cisco IOS EEM policy director as shown in Figure 3.
Note: Any Cisco IOS software CLI command (or combination of commands) can be invoked as an action in an applet or script. This powerful capability can be used to alter the running configuration of the device in real time. A script can also collect show command output, which can then be parsed and emailed to provide more information about the event.
If CLI commands are invoked from within the script, the script opens a telnet (vty) session to access the switch’s CLI (note that there is no authentication prompt for the script; instead, the script starts in user execution mode). The script must enter commands as if they were being entered directly at the CLI. For example, enable must be specified to enter privileged execution mode, and configure terminal must be specified to enter global configuration mode. If the CLI command generates interactive prompts (which may interfere with script execution), the command file prompt quiet may help.
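In a TCL policy, this vty session is managed with the cli_open, cli_exec, and cli_close command extensions. The following sketch illustrates the pattern just described (the specific commands issued are illustrative only): open the session, enter privileged EXEC and global configuration mode explicitly, then close the session.

```tcl
# Open a vty session to the local CLI; cli_open returns a keyed list
# describing the session, including the file descriptor "fd".
if {[catch {cli_open} result]} {
    error $result
}
array set cli1 $result

# The session starts in user EXEC mode, so privileged EXEC and global
# configuration mode must be entered explicitly, as noted above.
cli_exec $cli1(fd) "enable"
cli_exec $cli1(fd) "configure terminal"
cli_exec $cli1(fd) "file prompt quiet"
cli_exec $cli1(fd) "end"

# Close the vty session when finished.
cli_close $cli1(fd) $cli1(tty_id)
```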
Event Detectors
Cisco IOS EEM event detectors are Cisco IOS Software processes that determine when a Cisco IOS EEM event occurs. Each event detector monitors a subset of the operating state of the device. When an event is detected, the event detector communicates with the Cisco IOS EEM server through internal APIs, and immediate action may be taken.
Cisco IOS EEM can monitor many types of events, and these fall broadly into two categories: user and system events. For example, a user event may be triggered when a user types a specific command. A system event may be triggered when a device generates a specific syslog message.
Some event detectors do not monitor Cisco IOS Software. For example, the Timer event detector is used to count down a timer, or run a cron time-based scheduler job periodically, or run a job at a particular date and time. Another example is the Counter event detector, which monitors a generic counter that is built manually by the user.
Each event detector is unique, and the syntax for each varies to identify what the event detector is monitoring. For example, while the Syslog event detector is configured to monitor messages, the Interface event detector is configured to monitor counters provided on interfaces. The GOLD event detector monitors the severity level of Cisco Generic Online Diagnostics (GOLD) results, while the Process event detector looks for restarts or shutdowns in Cisco IOS Software Modularity.
The variety of event detectors provides for a wide range of detectable events in Cisco IOS Software. General-purpose event detectors such as CLI and Syslog provide a means of detecting more universal events, while event detectors such as OIR and GOLD focus on particular processes. It is this diversity that makes Cisco IOS EEM such a powerful tool.
Some event detectors are specific to Cisco IOS Software Modularity code or Cisco IOS Software routing platforms. Updated information regarding Cisco IOS EEM, platform support, and the event detectors available can be found at.
For more information about Cisco IOS EEM support on router platforms (as well as case studies, data sheets, and additional white papers), see.
The remainder of this section provides more detail about the event detectors currently available on the Cisco Catalyst 6500 Series (Cisco IOS Software Release 12.2(33)SXI) and Cisco Catalyst 4500 Series (Cisco IOS Software Release 12.2(50)SG).
Event Detector: None
The None event detector publishes an event when the Cisco IOS Software event manager run CLI command executes a Cisco IOS EEM policy. Cisco IOS EEM schedules and runs policies on the basis of an event specification that is contained within the policy itself. Before the event manager run command can execute a policy, the policy must be identified and registered with the system and permitted to run manually.
Unlike all the other event detectors, the None event detector is triggered manually by the user. This event detector can be used as a macro to enter CLI commands or to test the logic of a script or applet. It can also be used to obtain information (through CLI commands, for example) and apply some logic to determine what actions need to be performed. The main point to remember is that this event detector is not triggered by any Cisco IOS Software event; instead, it is triggered manually by the user.
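As a minimal sketch (the applet name and message are illustrative), a policy registered with event none runs only when invoked by the operator:

```
event manager applet manual-test
 event none
 action 1.0 syslog msg "Applet manual-test was run manually"
```

The applet is then triggered from privileged EXEC mode with the command event manager run manual-test.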
Event Detector: Syslog
The Syslog event detector allows syslog messages to be screened for a regular-expression pattern match. The screened messages can be further qualified, requiring that a specific number of occurrences be logged within a specified time. A match on specified event criteria triggers a configured policy action.
For example, Cisco IOS EEM can be configured to screen the syslog for an interface-down message by looking for the string %LINK-3-UPDOWN in the syslog message. A regular-expression pattern match could be applied to the entire syslog message to find the interface name. Then a syslog message can be sent to the console, or an email message can be sent to the on-call engineer.
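A minimal applet matching the interface-down example above might look like the following sketch (the applet name and notification text are illustrative; the built-in _syslog_msg variable carries the matched syslog message):

```
event manager applet link-down-watch
 event syslog pattern "%LINK-3-UPDOWN"
 action 1.0 syslog msg "Link state change detected: $_syslog_msg"
```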
This event detector is fairly universal in that it tracks any event that sends a syslog message. As new features are added, this event detector provides a quick way to gain management visibility prior to availability of SNMP MIB support.
Event Detector: SNMP
The Simple Network Management Protocol (SNMP) event detector allows a standard SNMP MIB object to be monitored and an event to be generated when the object value matches user-specified values or crosses specified thresholds.
A threshold can be specified as an absolute value, average rate of change, or incremental value. If the incremental value is set to 50, for example, an event would be published when the monitored object value increases by 50. The values are obtained for comparison to thresholds after a user-configured poll interval. After an SNMP event has been published, the monitoring logic is reset using either of two methods: the monitoring is reset when a second threshold, called an exit value, is crossed, or when a set period of time has elapsed.
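As an illustrative sketch (the OID, thresholds, and poll interval are placeholders), an SNMP applet event of this kind takes the following general form:

```
event manager applet cpu-watch
 ! Poll the OID every 60 seconds; publish when the value is >= 80,
 ! and reset the monitoring logic when it falls back below 50 (exit value).
 event snmp oid 1.3.6.1.4.1.9.9.109.1.1.1.1.3.1 get-type exact entry-op ge entry-val 80 exit-op lt exit-val 50 poll-interval 60
 action 1.0 syslog msg "CPU utilization threshold crossed"
```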
Although SNMP traps and SNMP GET operations can already notify an SNMP manager when a threshold is crossed, Cisco IOS EEM extends this notification by allowing users to send email or post a syslog message to the console. In addition, a policy action can be performed, such as reconfiguration of the switch to mitigate the effects of crossing the threshold.
Event Detector: Timer
The Timer event detector publishes events for the following four types of timers:
● An absolute-time-of-day timer publishes an event when a specified absolute date and time occurs.
● A countdown timer publishes an event when a timer counts down to zero.
● A watchdog timer publishes an event when a timer counts down to zero, and then the timer automatically resets itself to its initial value and starts to count down again.
● A cron timer publishes an event using a UNIX standard cron specification to indicate when the event is to be published. A cron timer never publishes events more than once per minute.
Similar to the None event detector, the Timer event detector is not triggered by a Cisco IOS Software process such as a Cisco GOLD or OIR process, but by the user, who configures some sort of timer to trigger an event; essentially, time initiates the event. When the timer is triggered, most likely some information will be obtained, and a policy will be applied dependent on the information received.
For example, a cron timer can be used to run a particular command at the same time every day, or a watchdog timer can run the same command repeatedly after a configured amount of time.
The Timer event detector also can be configured to perform a show ip route command looking for a particular route. If the route is not in the routing table, this could indicate that packets are traversing a backup link, which may be more costly. The switch can then notify the on-call engineer that the switch is using the more costly route. Checking this route periodically, even just daily, can mitigate the cost incurred by using the backup link.
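A cron timer applet of the kind described above might be sketched as follows (the schedule, prefix, and applet name are illustrative; here the routing table is checked once a day at 02:00, and the built-in _cli_result variable holds the output of the last cli action):

```
event manager applet daily-route-check
 ! "0 2 * * *" fires at 02:00 every day (minute hour day month weekday).
 event timer cron cron-entry "0 2 * * *"
 action 1.0 cli command "enable"
 action 2.0 cli command "show ip route 192.0.2.0"
 action 3.0 syslog msg "Daily route check output: $_cli_result"
```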
Event Detector: Counter
The Counter event detector publishes an event when a named counter crosses a specified threshold. Multiple participants affect counter processing. One or more subscribers define the criteria that cause the event to be published, and the event detector can modify the counter. After a counter event has been published, the counter monitoring logic can be reset to start monitoring the counter immediately, or it can be reset when a second threshold, called an exit value, is crossed.
The counter that the Counter event detector monitors must be created and then incremented or decremented. Most likely, at least one other script will be written whose job is to create a counter for the Counter event detector to monitor.
For example, consider a use case in which the goal is to send a notification if an interface goes down five times in 3 minutes. The Syslog event detector can be used to monitor the interface and increment a counter whenever the interface goes down. This counter can then be monitored by a Counter event detector. To limit the counter to 3-minute intervals, a Timer event detector can reset the counter every 3 minutes. If the counter reaches 5 within the 3-minute period, the Counter event detector can send an email notification that the interface is flapping. As an additional action, the script can also reset the counter.
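The three cooperating policies just described can be sketched as follows (the counter name, interface, and thresholds are illustrative): one applet increments the counter on each interface-down message, one watches the counter, and one resets it every 3 minutes.

```
! Increment the counter each time the interface-down message is seen.
event manager applet count-flaps
 event syslog pattern "%LINK-3-UPDOWN.*GigabitEthernet1/1"
 action 1.0 counter name intflap value 1 op inc

! Publish an event when the counter reaches 5.
event manager applet watch-flaps
 event counter name intflap entry-val 5 entry-op ge exit-val 1 exit-op lt
 action 1.0 syslog msg "GigabitEthernet1/1 appears to be flapping"

! Reset the counter every 3 minutes (180-second watchdog timer).
event manager applet reset-flaps
 event timer watchdog time 180
 action 1.0 counter name intflap value 0 op set
```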
Event Detector: Interface
The Interface event detector publishes an event when an interface counter for a specified interface crosses a defined threshold. A threshold can be specified as an absolute value or an incremental value. If the incremental value were set to 50, for example, an event would be published when the interface counter increases by 50.
After an interface counter event has been published, the interface counter monitoring logic is reset using either of two methods: the interface counter is reset when a second threshold, called an exit value, is crossed, or when a set period of time elapses. When configuring this event detector, the user must indicate which interface and which parameter to monitor. The user must also configure the polling interval to indicate how often the reading is taken.
Consider the previous example using the Counter event detector. The same result could be obtained by using the Interface event detector. The Interface event detector could be configured to monitor interface resets for a particular interface every 3 minutes, looking for an incremental value of 5 during the 3-minute polling interval. When triggered, the Interface event detector could send the same email notification indicating a flapping interface.
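Sketched as an applet (the interface name and monitored parameter are illustrative; consult the platform documentation for the exact parameter keywords supported on a given release):

```
event manager applet iface-flap-watch
 ! Publish when the parameter increases by 5 within a 180-second poll interval.
 event interface name GigabitEthernet1/1 parameter receive_throttle entry-val 5 entry-op ge entry-val-is-increment true poll-interval 180
 action 1.0 syslog msg "GigabitEthernet1/1 appears to be flapping"
```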
Event Detector: CLI
The CLI event detector screens commands entered at the CLI for a regular-expression pattern match. When a match occurs, an event can be published in one of three ways:
● Synchronous publishing of CLI events: The CLI command is not executed until the Cisco IOS EEM policy exits, and the Cisco IOS EEM policy can control whether the command is executed. A read-and-write variable called _exit_status can be used with this detector; it allows you to set the exit status at policy exit for policies triggered by synchronous events. If, for example, the variable _exit_status is set to 0, the CLI command invoked by the user is skipped, whereas, if the variable _exit_status is set to 1, the command is run.
● Asynchronous publishing of CLI events: The CLI event is published, which invokes the script, and then the CLI command that the user entered (invoking this script) is executed.
● Asynchronous publishing of CLI events with command skipping: The CLI event is published, but the CLI command entered by the user is not executed.
The synchronous parameter cited in the preceding list enables the system to hold the command while the script or applet gathers more information to decide whether or not to allow the command to be executed. In the example presented in this document, the use case prevents debug commands (with the exception of show debugging and undebug all) from being run during business hours. This example illustrates the use of the synchronous parameter to hold the execution of the command while the script determines the type of command that has been entered (such as show debugging or undebug all).
Note that the system matches on the entire command and not on abbreviated input. For example, the system will see a sh debug entered by the user as show debugging. When building a pattern match, you must use the full command and not the abbreviated version. If you are uncertain as to what the full command is, use the tab function in Cisco IOS Software to extend each word in a command.
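A synchronous CLI applet of the kind used in the debug example might be sketched as follows (the pattern and message are illustrative; a production policy would also exempt show debugging and undebug all):

```
event manager applet block-debug
 ! sync yes holds the entered command until this policy exits.
 event cli pattern "^debug" sync yes
 action 1.0 syslog msg "debug commands are not permitted during business hours"
 ! Setting _exit_status to 0 causes the held command to be skipped.
 set 2.0 _exit_status 0
```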
Event Detector: OIR
The online insertion and removal (OIR) event detector publishes an event when one of the following hardware insertion or removal events occurs:
● A line card is removed from the chassis.
● A line card is inserted into the chassis.
Route processors, line cards, and feature cards can be monitored for OIR events. With knowledge of the insertion of a module, the module can be automatically configured according to the line-card type. When the module is inserted, the module type can be obtained through a show command. Depending on the type of card that was inserted, the applet or TCL script can configure the module with the desired configuration, such as security (port security, Dynamic Address Resolution Protocol [ARP] Inspection, Dynamic Host Configuration Protocol [DHCP] snooping, etc.) or spanning-tree host mode (PortFast enabled).
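A minimal OIR applet might be sketched as follows (the actions are illustrative; a production policy would parse the show module output to identify the card type and then apply the matching configuration):

```
event manager applet module-insert
 event oir
 action 1.0 cli command "enable"
 action 2.0 cli command "show module"
 action 3.0 syslog msg "OIR event detected; module inventory collected"
```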
Event Detector: RF
The redundancy framework (RF) event detector publishes an event when one or more RF events occur during synchronization in a dual route-processor system. The RF event detector can also detect an event when a dual route processor system continuously switches from one route processor to another (referred to as a ping-pong situation).
Event Detector: IOSWDSYSMON
The Cisco IOS watchdog system monitor (IOSWDSYSMON) event detector publishes an event when one of the following occurs:
● CPU utilization for a Cisco IOS Software task crosses a threshold.
● Memory utilization for a Cisco IOS Software task crosses a threshold.
Two events can be monitored at the same time, and the event publishing criteria can be specified to require one event or both events to cross their specified thresholds.
As with other threshold event detectors, a subevent or process to be monitored is configured in IOSWDSYSMON. You can use general operands (greater than, greater than or equal to, and so on) to compare detected values to the value desired. Other parameters specify monitoring of two events at the same time. You can specify the event publishing criteria to require one or both events to cross their specified thresholds before the event detector is triggered. In addition, you can specify a time window in which the subevents must occur for the event detector to be triggered.
While a similar result could be obtained using the SNMP event detector monitoring CPU object IDs (OIDs), this event detector is specifically for monitoring CPU and memory.
Event Detector: GOLD
Cisco Generic Online Diagnostics (GOLD) on Cisco Catalyst 6500 Series Switches checks the health of hardware components and verifies proper operation of the system data and control planes. These tests run both when the system is booting and when the system is operational.
Boot diagnostics allow the system to detect faults in the hardware components at boot time and help ensure that a failing module is not introduced into a live network. If boot diagnostics detect a failure on a Cisco Catalyst 6500 Series Switch, the failing modules are shut down. The administrator can configure the level of boot diagnostics as minimal, complete, or disabled. Though use of complete diagnostics is recommended, the default on the Cisco Catalyst 6500 Series is minimal diagnostics, allowing the system to come online faster.
Run-time diagnostics run a series of diagnostic checks to determine the condition of an online system. Diagnostic tests can be disruptive and nondisruptive. Nondisruptive tests occur in the background and do not affect the system data or control planes, but disruptive tests do affect live packet flows and should be scheduled during special maintenance windows.
The GOLD event detector is triggered when Cisco GOLD boot or run-time diagnostics on a specified card and subcard result in a configured severity level (normal, minor, or major). The GOLD event detector can also be configured to generate an event only for a new failure or also for an existing one. When the event detector is triggered, the applet or TCL script is normally configured to perform an action based on the failure detected.
Event Detector: APPL
The application-specific (APPL) event detector allows any Cisco IOS EEM policy (TCL script or applet) to publish an event. This event can be used to trigger a second script (TCL script or applet) into action. When a Cisco IOS EEM policy publishes an event, it must use a Cisco IOS EEM subsystem number as part of its event creation; specifically, the TCL script or applet must always use subsystem number 798, which is reserved by the Cisco IOS EEM subsystem for this purpose. Along with the event type, the event type number (from 1 to 999) must be specified. This unique number identifies one of several scripts that can be triggered into action. If an existing policy is registered for subsystem 798 and for the specified event type number, a second policy of the same event type will trigger the first policy to run when the specified event is published.
Event Detector: Process
Unlike Cisco IOS Software, Cisco IOS Software Modularity on the Cisco Catalyst 6500 Series can run processes independently. With this capability comes the capability to restart and shut down processes without affecting other processes. The Process event detector generates events for Cisco IOS Software Modularity process start, normal and abnormal stop, and restart events. The events generated by the system manager allow policies to change the default behavior of the process restart.
The Process event detector can monitor a single process or many processes at once. If you want to monitor only a particular process, this process can be called out individually by instance or path. If a particular process (path or instance) is not called out in a TCL script, the event detector monitors all processes. This approach differs from that of an applet, in which a regular expression is used to identify which process to monitor. In this case, a period (.) denotes any match. Monitoring can also be restricted to a certain CPU using the node option. In addition to choosing a process, the type of event can be identified: an abort event, normal termination, start of a process, user-initiated restart, or user-initiated shutdown.
Event Detector: WDSYSMON
The Cisco IOS Software Modularity watchdog system monitor (WDSYSMON) event detector detects infinite loops, deadlocks, and memory leaks in Cisco IOS Software Modularity processes (Cisco Catalyst 6500 Series only).
Use the WDSYSMON event detector to register a composite event, which is a combination of several subevents or conditions. For example, you can use this command to register the combination of conditions in which the CPU utilization of a certain process exceeds 80 percent and the memory used by the process is greater than 50 percent of its initial allocation.
As with other threshold-type event detectors, a subevent or process is configured in WDSYSMON to be monitored. You can use general operands (greater than, greater than or equal to, and so on) to compare the detected values to the value desired. Other parameters let you monitor up to four events at the same time. You can specify criteria to require any combination of events to cross their thresholds before the event detector is triggered. In addition, you can specify a time window in which the subevents need to occur for the event detector to be triggered.
Event Detector: SNMP-Notification
The SNMP-Notification event detector can be made to run a policy when an SNMP trap with the specified SNMP OID is encountered on a specific interface or address.
The SNMP-Notification event detector can intercept SNMP trap and inform messages coming into the router. An SNMP notification event is generated when an incoming SNMP trap or inform message matches specified values or crosses specified thresholds.
Both SNMP and the SNMP server manager must be configured and enabled on the device prior to the use of the SNMP-Notification event detector.
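For example, the prerequisites might be satisfied with configuration along these lines (the community string is an illustrative placeholder):

```
snmp-server community public ro
snmp-server manager
```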
Event Detector: RPC
The remote procedure call (RPC) event detector provides the capability to invoke Cisco IOS EEM policies from outside the router over an encrypted connection using Secure Shell (SSH). The RPC event detector uses Simple Object Access Protocol (SOAP) data encoding for exchanging XML-based messages. This event detector can be used to run Cisco IOS EEM policies and then receive output in a SOAP XML-formatted reply.
Event Detector: Track
The enhanced object tracking (EOT) Track event detector publishes an event when the status of a tracked object changes. Object tracking was first introduced in the Hot Standby Router Protocol (HSRP) as a simple mechanism that allowed users to track the interface line-protocol state only. If the line-protocol state of the interface was down, the HSRP priority of the router was reduced, allowing another HSRP router with a higher priority to become active.
EOT is now integrated into Cisco IOS EEM to allow Cisco IOS EEM to report on a status change of a tracked object and to allow enhanced object tracking to track Cisco IOS EEM objects. A new type of tracking object—a stub object—is created. The stub object can be manipulated using the existing CLI commands that already allow tracked objects to be manipulated.
Object tracking was enhanced to provide complete separation between the objects to be tracked and the action to be taken by a client when a tracked object changes. Thus, several clients, such as HSRP, Virtual Router Redundancy Protocol (VRRP), and Gateway Load Balancing Protocol (GLBP) can register their interest with the tracking process, track the same object, and each take different action when the object changes. A unique number identifies each tracked object. This unique number is specified in the tracking CLI. Client processes use this number to track a specific object. The tracking process periodically polls the tracked objects and notes any change of value. The changes in the tracked object are communicated to interested client processes, either immediately or after a specified delay. The object values are reported as either up or down.
Another source for object tracking is Cisco IOS IP Service Level Agreements (SLAs). EOT can track a Cisco IOS IP SLA for state, such as Internet Control Message Protocol (ICMP) Echo or TCP Connect. When connectivity is lost, EOT will show the state as down. The Track event detector will then recognize the down state and can invoke a policy action. In the use case provided in this document, the Cisco IOS IP SLA is used to verify the reachability of a server IP address. When reachability is broken, the Track event detector will recognize this state, and EOT is triggered. The action policy removes a host route to that server, indicating that it is unavailable. A message is sent to the console, and an email message is sent to the network operations on-call team.
Policies: Applets and Scripts
A Cisco IOS EEM policy is an entity that defines an event and the actions to be taken when that event occurs. There are two types of Cisco IOS EEM policies: applets and TCL scripts. An applet is a simple form of policy that is defined using the CLI. A TCL script is a form of policy that is written in TCL.
Applets
Applets are created through the CLI and become part of the Cisco IOS Software configuration file (and persist across system reboots).
In applet configuration mode, three types of configuration statements are supported.
● An event command specifies the event criteria to trigger the applet to run.
● An action command specifies an action to perform when the Cisco IOS EEM applet is triggered.
● A set command sets the value of a Cisco IOS EEM applet variable.
As of Cisco IOS Software Release 12.2(33)SXI and 12.2(50)SG, up to six event commands are allowed in an applet configuration. See the appendix for more information about multiple event detection. Multiple action configuration commands also are allowed in an applet configuration. The action configuration commands are uniquely identified using the label argument, which can be any string value. Actions are sorted in ascending alphanumeric key sequence using the label argument as the sort key, and they are run using this sequence.
Explore the following example to gain more insight into the structure of an applet:
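A representative applet of the form discussed in the following paragraphs is sketched below (the applet name, file path, and action labels are illustrative):

```
event manager applet config-backup
 event syslog pattern "%SYS-5-CONFIG"
 action 4.0 cli command "enable"
 action 5.0 cli command "configure terminal"
 action 6.0 cli command "file prompt quiet"
 action 6.1 cli command "end"
 action 6.2 cli command "copy running-config disk0:backup.cfg"
 action 7.0 cli command "write memory"
 action 8.0 cli command "configure terminal"
 action 9.0 cli command "no file prompt quiet"
 action 9.1 cli command "end"
```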
The first line registers the applet with Cisco IOS EEM and enters applet configuration mode.
The second line specifies the event criteria that cause the applet to run. In this example, a Cisco IOS EEM event is triggered when the syslog pattern %SYS-5-CONFIG is detected.
The remaining lines specify the action to be taken when the applet is triggered. The argument following the action command (label) can be any string value, but actions are sorted and run in alphanumeric key sequence. Note in the following code that the statement starting with “4.5” has been added to the applet shown previously and will be inserted between statements 4.0 and 5.0. Likewise, the following code shows a statement starting with “6.2.2” inserted between statements 6.2 and 7.0.
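As an illustrative sketch (names and paths are placeholders), an applet with such inserted statements might read as follows; label 4.5 sorts between 4.0 and 5.0, and label 6.2.2 sorts between 6.2 and 7.0:

```
event manager applet config-backup
 event syslog pattern "%SYS-5-CONFIG"
 action 4.0 cli command "enable"
 action 4.5 syslog msg "Configuration change detected; backing up"
 action 5.0 cli command "configure terminal"
 action 6.0 cli command "file prompt quiet"
 action 6.1 cli command "end"
 action 6.2 cli command "copy running-config disk0:backup.cfg"
 action 6.2.2 syslog msg "Running configuration copied to disk0"
 action 7.0 cli command "write memory"
 action 8.0 cli command "configure terminal"
 action 9.0 cli command "no file prompt quiet"
 action 9.1 cli command "end"
```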
The preceding example shows a series of CLI commands that enter privileged execution mode and then global configuration mode to add the “file prompt quiet” command to the running configuration. This command is useful for applet execution when the CLI generates an interactive prompt requesting some input from the user. This command allows the script to bypass this prompt, which would otherwise require a response to the request. Global configuration mode is then exited, and the running configuration is copied to disk0 and written to NVRAM. Global configuration mode is then again entered to remove the “file prompt quiet” command from the running configuration.
Various action types are available within the body of applets and TCL scripts, including the capability to send an email. When an email message needs to be sent within the body of a script, the action syntax for sending the email message is as follows:
action label mail server email-server-address to to-email-address from from-email-address [cc cc-address] subject email-subject body email-body-text
An example of this applet statement is shown in the following example:
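Following that syntax, such a statement might look like this (the server and addresses are illustrative placeholders):

```
action 2.0 mail server "smtp.example.com" to "oncall@example.com" from "switch01@example.com" subject "Interface flap detected" body "Interface GigabitEthernet1/1 has flapped 5 times in 3 minutes"
```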
For more information about the command structures supported in an applet, go to.
TCL Scripts
TCL is a string-based command language that is interpreted at run time. Cisco IOS EEM supports TCL Version 8.3.4 and adds support for Cisco device-specific extensions using additional TCL libraries. Scripts are written using an ASCII editor on another device, not on the networking device. The script is then copied to the networking device and registered with Cisco IOS EEM. As an enforced rule, Cisco IOS EEM policies are short-lived run-time routines that must be interpreted and executed in less than 20 seconds. If more than 20 seconds is required, the maxrun parameter can be specified in the event_register statement in the body of the TCL script to specify the amount of run time required by that script.
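For example, the maxrun keyword is supplied on the event_register line of the script (the pattern and value shown are illustrative):

```tcl
# Allow this policy up to 60 seconds of run time instead of the 20-second default.
::cisco::eem::event_register_syslog pattern "%SYS-5-CONFIG" maxrun 60
```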
Cisco IOS EEM policies use the full range of TCL’s capabilities. However, Cisco provides enhancements to TCL in the form of TCL command extensions that enhance the ability of the Cisco IOS EEM policies to invoke specific device actions. The main categories of TCL command extensions identify the detected event, the subsequent action, utility information, counter values, and system information.
Cisco IOS EEM allows you to write and implement your own policies using TCL. Writing a Cisco IOS EEM script involves:
● Selecting the event TCL command extension that establishes the criteria used to determine when the policy isrun
● Defining the event detector options associated with detection of the event
● Choosing the actions to implement the response to the detected event
Using TCL with Cisco IOS EEM
In general, TCL scripts are used with Cisco IOS EEM to accomplish the following:
● Trigger an event: Configure an event detector to trigger when a certain event occurs.
● Perform an action: Perform some action based on the event detector that was triggered.
● Decision criteria: Additional tasks may need to be performed to determine what action to take. To perform additional actions, the following additional steps may be necessary:
- Collect more information: More information can be obtained by executing CLI commands or by using the event_reqinfo command (to see an array that the system creates for each event detector type).
- Use logic: Conditional loops such as If-Then-Else and If-Then-While can be used to determine which action to perform.
As with applets, a TCL script is activated when a particular event is detected by an event detector. The TCL script can then gather additional information about the event. Gathering additional information about the event is achieved using the event_reqinfo command. This facility will populate an array with event details, and these details can then be called or referenced from within the script at a later point in the execution path.
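For a syslog-triggered script, this pattern might be sketched as follows (the msg element holds the matched syslog line; the exact element names vary by event detector):

```tcl
# Populate an array with the details of the event that triggered the policy.
array set arr_einfo [event_reqinfo]

# Reference an element of the array later in the execution path.
set msg $arr_einfo(msg)
action_syslog msg "Policy triggered by: $msg"
```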
You can also apply logic, which can use conditional loop constructs to define a condition before execution of a particular task. The action, which the TCL script can execute, can be any valid CLI command (or combination of commands) or other allowed Cisco IOS EEM action (see Figure 3 earlier in this document).
TCL scripts can use local variables and environment variables. The following section provides more information about the differences between these variable types and the use of variables in a TCL script.
TCL Script Variables
Variables can be used within a script to store information to reference later in the script or for use in another script.
Local variables are created within the script and can be referenced later in the script. These variables only ever have relevance within the body of the executing script. The set command assigns a value to a variable. For example:
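The example itself is missing from this copy. Based on the description in the next paragraph, it was presumably a simple set statement along these lines (a sketch, not the original):

```tcl
# Create a local variable named mesg and store a text string in it
set mesg "This is a test"
```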
The preceding example creates the local variable mesg and stores the text “This is a test” in it. The variable can then be used later in the script as shown in the following example:
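The referenced example is also missing here. Given the syslog output described next, it presumably used the Cisco IOS EEM action_syslog TCL extension, along these lines:

```tcl
# Prepend $ to the variable name to reference its contents
action_syslog msg "The mesg variable contains: $mesg"
```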
This command outputs a syslog message that would read “The mesg variable contains: This is a test.” A $ character must be prepended to variable names when calling them to reference their information.
Environment variables are defined outside of the body of the script and can also be used by multiple scripts that run on the switch. They are useful if you need to define global variables for use by multiple scripts. For example, if multiple scripts send email to an administrator as a result of an event, you may want to define the _email_server variable globally as follows (in device configuration mode):
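The configuration command is missing from this copy; using the server name from the surrounding text, it would be:

```
event manager environment _email_server smtp.domain.com
```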
This statement is entered through the CLI and creates a Cisco IOS EEM environment variable that can be called from multiple scripts (references to $_email_server retrieve the value of smtp.domain.com). If a script uses environment variables, typically notes will appear at the beginning of the script to identify the environment variables that are used or required.
Note: The values of environment variables are copied to a script’s local memory prior to script execution. Although environment variables can be changed in a script (through CLI commands), local references to an environment variable always contain the initial copied information. However, if a secondary script is executed within a primary script, the secondary script may copy updated environment variable information prior to execution. An example of this behavior is shown later in this document in the use case example for the Interface event detector.
Implementing TCL: Basic Steps for Registration
For a script to run on a switch, it must first be copied onto the device and then registered on the switch.
To load and register TCL scripts on a device, register the script directory (you need to perform this action just once):
1. Register a script directory.
● The directory must be in bootflash memory or on a removable compact flash drive (typically accessed through disk0: or disk1:).
● You should create a subdirectory to hold Cisco IOS EEM scripts.
● Register the directory using this command:
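The command itself was dropped from this copy. A typical registration command looks like this (the directory path is an example):

```
event manager directory user policy "disk0:/eem_scripts"
```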
After a script directory is defined, perform the following steps (for every script):
1. Copy the script to the registered directory.
● The script must be created externally using a text editor.
● The script name must end with file extension .tcl.
● Use Trivial File Transfer Protocol (TFTP) to send the script to the directory, using this command:
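The copy command is missing here; a representative form (the server address and filenames are placeholders) is:

```
copy tftp://192.0.2.10/myscript.tcl disk0:/eem_scripts/myscript.tcl
```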
2. Register the script with Cisco IOS EEM.
● Scripts must reside in the registered directory.
● Register the script using this global configuration command:
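The registration command is missing from this copy; it takes the script filename as its argument (myscript.tcl is a placeholder):

```
event manager policy myscript.tcl
```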
After the script is registered, it is loaded into memory and is active (unless you are using the None event detector). To view registered scripts on the system, use this command:
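The command referenced here is presumably:

```
show event manager policy registered
```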
Note: After a script is registered with Cisco IOS EEM, it is active and waiting for event detection. The only exception is a script that uses the None event detector. None event detector scripts must be run manually from the CLI using this command:
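The manual-run command referenced here is presumably (the script name is a placeholder):

```
event manager run myscript.tcl
```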
If a script is modified and the script file is overwritten, the active script resident in running memory is not automatically updated with the modified information. To activate the modified script, you must unregister the existing (active) script and then reregister the script. To unregister a script, you must enter global configuration mode (config t) and use this command:
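The unregistration command is the no form of the registration command (the script name is a placeholder):

```
no event manager policy myscript.tcl
```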
After the modified script is copied to the registered directory, you can reregister the script using this command:
The use case for the None event detector in the “Use Cases” section later in this document automates the process of copying a modified script using TFTP, unregistration of the existing script, and reregistration of the modified script.
TCL Script Components
In general, a TCL script includes the following elements:
● Event trigger: Event detector
● Import of Cisco TCL libraries
● Information collection (using event_reqinfo and CLI commands; optional)
● Logic (such as If-Then-Else; optional)
● Action
Event Trigger: Event Detector
An event detector is required in every TCL script and defines the event that triggers the script execution. The event detector identifies the type of detection Cisco IOS EEM will use. For example, a CLI event detector will examine events when a user types in the CLI, and the Syslog event detector will examine syslog messages. The GOLD event detector looks at Cisco GOLD diagnostics and can act when a major, minor, or normal event occurs.
The following example shows the syntax for an event detector:
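The syntax example is missing from this copy. As a sketch, a Syslog event detector registration line in a TCL script looks like this (the pattern matches the OSPF adjacency-change use case later in this document):

```tcl
::cisco::eem::event_register_syslog occurs 1 pattern "%OSPF-5-ADJCHG"
```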
Import of Cisco TCL Libraries
The namespace import command imports TCL libraries that include Cisco command extensions not available in the base TCL command set.
The following TCL syntax must be included in every TCL script that will run as a Cisco IOS EEM script:
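The import lines are missing from this copy; the standard form is:

```tcl
namespace import ::cisco::eem::*
namespace import ::cisco::lib::*
```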
Information Collection
After a script has been triggered into action, you will likely want to gather more information about the event or capture some command output. The event_reqinfo command can provide details about the event that triggered the script. The information provided depends on the event detector used. The following TCL script command will populate an array (called arr_einfo) with the event detail information:
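The command itself is missing here; it is:

```tcl
array set arr_einfo [event_reqinfo]
```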
See the appendix for more information about event_reqinfo.
For example, if the OIR event detector is used, the array is populated with a series of data fields that provide more information about the event. Specific parameters for each event are created and primed with data about that event; an example is detailed in the appendix at the end of this document. The information is shown in Table 1.
The detail information can then be referenced by event type. The following commands set variables, which contain the detail information:
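The commands are missing from this copy. For the OIR event detector, for example, they would look like this (field names from Table 1):

```tcl
set slot $arr_einfo(slot)
set oir_event $arr_einfo(event)
```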
Alternatively, the information can be referenced directly in the script using $arr_einfo(event type).
Information can also be gathered through CLI commands. For example, if OIR event detection is triggered, you may want to get a current inventory of the modules installed in the chassis, using the show module command.
To invoke a CLI command from a script, add the following code snippet to the script:
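The snippet is missing from this copy; the commonly documented idiom for opening the CLI is:

```tcl
# Open a CLI session; on success, cli_open returns session information
# that is stored in the array cli1
if [catch {cli_open} result] {
    error $result $errorInfo
} else {
    array set cli1 $result
}
```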
(Do not worry about the details of each line.)
This code instructs Cisco IOS EEM to open a telnet session to the device (in user execution mode). The script can then execute CLI commands through this telnet session by adding the following code snippet:
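The snippet is missing here; the usual form for invoking enable is:

```tcl
# Enter privileged execution mode through the open CLI session
if [catch {cli_exec $cli1(fd) "enable"} result] {
    error $result $errorInfo
}
```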
In the preceding example, the actual CLI command you want to invoke is shown in quotation marks. The command example invokes the enable command, which places the CLI in privileged execution mode. You can now add a show module command:
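The snippet is missing here; following the description in the next paragraph, it presumably looked like this:

```tcl
# Run show module; the command output is placed in the variable result
if [catch {cli_exec $cli1(fd) "show module"} result] {
    error $result $errorInfo
}
# Copy the output into cmd_output for later use (for example, in an email body)
set cmd_output $result
```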
The show output of the CLI command is placed in the variable named result, which can be seen as the statement following the command in the preceding example. A new variable named cmd_output is created and populated with the command output (it copies the contents of the variable result). The cmd_output variable can now be used later in the script, perhaps, for example, within the body of an email message.
If the CLI was opened in a script, you should always be sure to close the telnet session to free vty resources on the device (vty is a virtual terminal and represents the telnet session). The code snippet to accomplish this is shown here:
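The snippet is missing from this copy; the documented idiom is:

```tcl
# Close the CLI session to free the vty line
if [catch {cli_close $cli1(fd) $cli1(tty_id)} result] {
    error $result $errorInfo
}
```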
Logic and Action
After information has been collected, you may want to apply logic in a loop construct (If-Then-Else, While, or For). Action can then be taken, which may include the CLI (changes to the device running configuration), an email alert, a custom syslog message, or a custom trap.
Custom syslog messages are simple to create. Here is an example:
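The example is missing here; a custom syslog message in a TCL script looks like this (the message text is illustrative):

```tcl
action_syslog priority info msg "Checkpoint 1: information collection complete"
```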
These messages can be useful when testing and debugging a script. Syslog messages can be added anywhere in a script as checkpoints to verify that a particular part of a script has been reached.
For more information about logic and for examples of actions, see the appendix.
Figure 4 summarizes TCL prework, script components, post-work, and tips.
Use Cases
None Event Detector
Problem Statement
When a change is made to a TCL script (a frequent occurrence when developing or testing a script), the following steps typically are performed manually:
1. Use TFTP to send the script to the device.
2. Unregister the script (if it was previously registered).
3. Reregister the script.
Suggested Solution
Use the None event detector to automate the preceding actions in a single command using the following syntax:
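The syntax is missing from this copy. Given the alias described next, it was presumably along these lines (loadscript.tcl is the name of the None event detector policy):

```
event manager run loadscript.tcl <script-to-load> [tftp-server-ip]
```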
A custom command (for example, loadscript) can be simulated by creating an alias for the preceding command as shown in the following command:
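The alias command is missing here; based on the resulting syntax described next, it was presumably:

```
alias exec loadscript event manager run loadscript.tcl
```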
The syntax would now be loadscript <script-to-load> [tftp-server-ip].
The environment variable _tftp_ip is used, and it should be set to the TFTP server’s IP address. The tftp-server-ip argument is optional and can override the environment variable.
Note: The None event detector is also very useful for testing scripts since it can be triggered manually.
Script Flow
● Use event_reqinfo to determine the number of arguments that were passed.
- If 2 arguments were passed, the script-to-load and tftp-server-ip values were provided.
- If 1 argument was passed, the script-to-load value is assumed to have been provided.
- If 0 arguments were passed, no script-to-load value was specified, and this is an error.
● Set the variable scriptname to argument 1 (name of the script to load).
● Capture Cisco IOS EEM policy directory information through the CLI command show event manager directory user policy.
● Use TFTP to send the script and unregister and reregister the script.
● Verify script registration using the command show event manager policy reg | i $scriptname. This command filters the output to the specific script registered.
● Output a custom syslog message with script registration information.
Test Procedure
● Add the environment variable _tftp_ip:
● Manually run the script using TFTP and register an updated script:
The following syslog output shows sample results:
Syslog Event Detector
Problem Statement
A customer wants to monitor Open Shortest Path First (OSPF) adjacency changes on a core device and to be alerted when an OSPF neighbor is added or removed.
Suggested Solution
Use the Syslog event detector to detect OSPF adjacency changes. Parse the syslog message generated by the adjacency change to determine whether a neighbor was added or removed. Then email the administrator the current OSPF neighbor information.
The Syslog event detector is easy to implement and easily understood. It can also be helpful to gather troubleshooting information about a sporadic event.
Script Flow
● Register the script to look for the pattern “%OSPF-5-ADJCHG” in the syslog.
● Use the event_reqinfo command to get the full syslog message.
● Capture current neighbor information using show ip ospf neighbor.
● Parse the syslog message to determine whether a neighbor was removed (FULL is changed to DOWN) or added (LOADING is changed to FULL).
● Save the script execution time.
● Email the administrator with the timestamp and current OSPF neighbor information.
Test Procedure
● Add the environment variable _email_server:
● Register the script (configuration mode):
● Add and remove an OSPF neighbor to test the script.
Here is some sample email output:
Raw Script
SNMP Event Detector
Problem Statement
You want to monitor CPU utilization for a device (or for all devices). You want a notification to be sent if CPU utilization reaches a certain threshold, and if high CPU utilization persists, you want notifications to continue to be sent.
Suggested Solution
Use the SNMP event detector. Set it to send an email message when CPU utilization reaches a certain threshold. Email messages will continue to be sent as long as CPU utilization stays above the threshold.
Script Flow
● Poll the SNMP MIB every 60 seconds for the CPU level.
● The SNMP event detector is triggered if CPU utilization is above 75 percent.
● Use event_reqinfo to get the CPU level that caused the event to trigger.
● Obtain the hostname.
● Send a syslog message to the console.
● Generate an email message to the on-call engineer.
Test Procedure
● Set the threshold to a low value such as 5 percent.
● Enter show or debug commands to trigger the SNMP MIB.
Timer Event Detector
Problem Statement
A customer needs to execute a particular CLI command at regular intervals and wants the results sent by email.
Suggested Solution
Use the Timer event detector with a cron entry to run CLI commands periodically. Email the command output to the administrator.
This example (and other examples) can be found in the Cisco IOS EEM TCL documentation:.
This script makes extensive use of environment variables. This approach provides flexibility; the same script can be used across many devices, with customization performed at the environment-variable level.
This script also uses one of the built-in email templates (email_template_cmd.tm), which uses the input variable cmd_output and the environment variable _show_cmd. The manual email method (shown in the previous two use cases) is probably easier and provides more customization options.
Script Flow
● Register the script using a cron timer.
● Check whether the required environment variables exist.
● Execute the CLI command and save the timestamp.
● This script removes the trailing router prompt, which may be useful.
● Write to the log file.
● Send email to the administrator.
Test Procedure
● Define the environment variables used by the script:
● After the script is registered, periodic email messages should be received:
Interface Event Detector
Problem Statement
A customer wants to monitor a problematic interface. After output errors cross a specific threshold, an email message should be sent with show interface information.
Suggested Solution
Use the Interface event detector to execute when output_errors on FastEthernet4/1 reach 10. Run a show interface CLI command and email the command output to the administrator.
There are 22 interface counters that can be monitored (typically shown in a show interface command). The interface counters are described in the following documentation:.
This script also demonstrates how another script can be launched. A secondary script is launched at the end to send the email message. The email content is stored in environment variables since local variables can be referenced only from a local script. The following environment variables are set by the primary script and referenced by the secondary script:
The secondary script also uses environment variables to define the email server and to and from email addresses.
Script Flow
● Register the script to monitor a counter on a specific interface.
● Collect event details using event_reqinfo.
● Run a CLI command (use the interface name collected with event_reqinfo).
● Write the command output to _email_text.
● Write the subject line to _email_text (using the counter name from the event_reqinfo information).
● Launch the secondary script to send email.
● The secondary script sends the email message and resets the environment variables.
Test Procedure
● Define the environment variables used by the script and secondary email script:
● When error counters reach the specified value, an email message should be received.
Raw Script (Secondary Email Script)
OIR Event Detector
Problem Statement
A customer wants to know whenever a module is added to or removed from a chassis.
Suggested Solution
Use the OIR event detector to detect online insertion and removal events. If an OIR event is detected, collect information and email the current module information to the administrator.
Script Flow
● Register the script to monitor OIR events.
● Use the event_reqinfo command to get more information about the event (the slot and whether the event is an insertion or a removal).
● Save the timestamp for the email message.
● Determine whether the module was inserted or removed.
● Save the CLI output of show module or show module X.
● Email the timestamp and current module inventory information to the administrator.
Test Procedure
● Insert or remove a module to test the script.
GOLD Event Detector
Problem Statement
When a Cisco GOLD test runs, you want a notification sent if an error with a major or minor severity level needs attention.
Suggested Solution
Using the GOLD event detector, determine whether a major or minor error has occurred and send an email message to the operations alias.
Script Flow
● Use the GOLD event detector to trigger when an error with a major or minor severity level occurs (use the normal severity level for testing purposes).
● Use event_reqinfo to get the event severity level, the card, and the test type.
● Generate an email message to the operations team.
Test Procedure
● Add normal to the event detector.
● The addition of normal enables the event detector to trigger.
● Run diagnostic start module [x] test TestEobcStressPing.
● This test will return a severity level of normal.
Process Event Detector
Problem Statement
With Cisco IOS Software Modularity, processes can be restarted. When a process restarts, notification other than syslog needs to be sent to the on-call engineer.
Suggested Solution
Using the Process event detector, send an email message to the on-call engineer identifying the process that restarted.
Script Flow
● Use the Process event detector to trigger when any process in Cisco IOS Software Modularity restarts.
● Use event_reqinfo to get the process that restarted.
Test Procedure
● In execution mode, enter a show process cpu command to obtain a list of processes.
● In execution mode, enter the command process restart [process name].
WDSYSMON Event Detector
Problem Statement
A customer is concerned about occasional CPU utilization spikes and wants to know what processes may be running during the high CPU utilization condition.
Suggested Solution
Use the WDSYSMON event detector to monitor total CPU utilization and execute a script if CPU utilization exceeds 70 percent. If the script is triggered, collect CPU process information and email it to the administrator.
WDSYSMON event detector registration also allows logical operations on a combination of subevents within a given time window. Subevents can include CPU and memory utilization for specific processes. For more information, see the documentation at.
Script Flow
● Register the script to monitor total CPU utilization over 70 percent.
● Save the CLI output of show process cpu | exclude 0.0.
● Email the timestamp and current CPU process information to the administrator.
Test Procedure
● Increase the CPU utilization (for testing, the threshold was set to 5 percent, and simple show commands were run to increase the CPU utilization).
Track Event Detector
Problem Statement
When a server resource fails, the static IP route advertising its IP address needs to be removed.
Suggested Solution
Use the Track event detector to track a Cisco IOS IP SLA probe to the server. If the probe fails, remove the static IP route from the routing table. This process is similar to a route health injection.
Script Flow
● Use Cisco IOS IP SLA to probe a server. In this example, ping is used, but the probe could be of any type, such as TCP connect.
● Use the Track event detector to track the Cisco IOS IP SLA probe.
● If the server fails, a syslog message is sent to the console, and the static IP route is removed.
● When the server comes back up, a syslog message is sent to the console to report that the server is back up. Code can be written to reinstate the static IP route automatically, but in this case a message reporting that it came back up is all that is provided. Most likely, you will want to reinstate the IP route manually after you are sure that the server is healthy.
Conclusion
Cisco IOS EEM with TCL scripting provides a powerful, flexible framework for detecting events on a device and automating the response: collecting additional information, applying decision logic, and taking actions such as CLI configuration changes, syslog messages, email alerts, and SNMP traps. The use cases in this document illustrate how a small set of script components — an event detector, the Cisco TCL libraries, event_reqinfo, logic, and actions — can be combined to solve real operational problems. Start with simple scripts (the None event detector is useful for testing), use environment variables to keep scripts portable across devices, and build from the examples provided here and in the Cisco IOS EEM scripting community.
For More Information
● Cisco IOS Embedded Event Manager (launch page with links to documentation, data sheets, case studies, and Q&A):
● Cisco IOS EEM scripting community:
● Cisco IOS EEM overview (updated information regarding Cisco IOS EEM, platform support, and available event detectors):
● Writing Cisco IOS EEM applets using Cisco IOS Software:
● Writing Cisco IOS EEM policies using TCL (documentation page):
● Cisco IOS Software diagnostic tools for commercial use:
Appendix
Detecting Multiple Events
Cisco IOS EEM Version 2.4 adds multiple-event detection, which provides more control over when an applet or script runs. Up to six event statements can be used in an applet, and up to eight event statements can be used in a TCL script. The following example from the documentation shows how an applet can be configured to run only if all three events occur within 1 hour:
Similar tag, trigger, and correlate commands are available for TCL scripts:
For more information, see Cisco IOS Embedded Event Manager Version 2.4 Expanded Capabilities at.
Using event_reqinfo
When an event is detected and a policy is triggered, you can use the powerful variable event_reqinfo to obtain details about the event. Simply add the following line in your TCL script:
You can now reference the array of information that is provided about the event. Each event detector populates the array (in the example here, the array name is arr_einfo) with event-specific data. For example, the OIR event detector populates the array with the information shown in Table 1.
Table 1. OIR Event Detector Detail Information
Event Type
Description
event_id
Unique number that indicates the ID for this published event; multiple policies may be run for the same event, and each policy will have the same event ID
event_type
Type of event
event_type_string
ASCII string that represents the name of the event for this event type
event_pub_sec
event_pub_msec
Time, in seconds and milliseconds, when the event was published to Cisco IOS EEM
slot
Slot number for the affected card
event
A string, removed or online, that represents either an OIR removal event or an OIR insertion event
The values in the array can be referenced by name. For example, to show which slot caused an OIR event, you can output a syslog message:
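The example is missing from this copy; referencing the slot value by array name, it would look like this:

```tcl
action_syslog msg "OIR event detected in slot $arr_einfo(slot)"
```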
To see detail information about the event (whether a module was removed or inserted), you can enter:
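The example is missing here; per Table 1, the event field holds either removed or online:

```tcl
action_syslog msg "The module was $arr_einfo(event)"
```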
For information about event_reqinfo detail information for each event detector, see.
You can also use online help to see what array names are available using event_reqinfo. Enter the following command at the CLI prompt (privileged execution mode):
Here is some sample output for OIR:
Applying Decision Logic
After information parsing, often some logic needs to be applied to the information to determine what action to take.
You can use a conditional loop to determine whether the situation warrants action. (Many other types of decision logic are also available, such as While and For logic; If-Then-Else is used here as an example.)
If-Then Example
If-Then logic provides a means of determining whether a situation warrants action. Essentially, the if statement is tested for validity. An if statement that is valid is considered to be true. An if statement that is true leads to the action in the then statement. If an if statement is false, no action is taken.
For example, If-Then logic could be applied to a Syslog event detector looking for downed interfaces. After a syslog message containing LINK-3-UPDOWN is detected, you can use regular expressions to determine which interface went down.
Here is the logic:
IF the interface is the uplink, THEN shut the port connected to Server1 to make Server1 migrate to its backup interface.
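A minimal sketch of this logic in TCL (the interface names, the $intf variable, and the open CLI session array cli1 are assumptions for illustration):

```tcl
if {$intf == "GigabitEthernet1/1"} {
    # The uplink went down: shut the port connected to Server1 so that
    # Server1 migrates to its backup interface
    if [catch {cli_exec $cli1(fd) "configure terminal"} result] {
        error $result $errorInfo
    }
    if [catch {cli_exec $cli1(fd) "interface GigabitEthernet2/10"} result] {
        error $result $errorInfo
    }
    if [catch {cli_exec $cli1(fd) "shutdown"} result] {
        error $result $errorInfo
    }
}
```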
Without any more logic, a false if statement means “do nothing.”
Note: This example does not show how the interface was parsed.
You would probably also want to send an email message or post a syslog message, but that is beyond the scope for this example.
If-Then-Else Example
If-Then-Else logic can be written for a GOLD event detector to determine whether a major, minor, or normal event has occurred.
IF a major event occurred, THEN send an email message to the whole operations team. ELSE, IF a minor event occurred, THEN send an email message only to the on-duty engineer mailer. ELSE, IF a normal event occurred, THEN post a syslog message.
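A sketch of this logic (the $severity variable and the prebuilt email strings are hypothetical names for illustration):

```tcl
if {$severity == "major"} {
    # Major event: alert the whole operations team
    smtp_send_email $email_ops_team
} elseif {$severity == "minor"} {
    # Minor event: alert only the on-duty engineer mailer
    smtp_send_email $email_on_duty
} elseif {$severity == "normal"} {
    # Normal event: just post a syslog message
    action_syslog msg "GOLD diagnostic completed with severity normal"
}
```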
This example does not show the source of the severity level, email server environment variable, hostname variable, or source of the diagnostic test. The GOLD event detector use case earlier in this document shows how all of these are obtained.
Applying Actions
The previous section showed examples of some actions, such as creating syslog messages and sending email messages. This section looks at these actions again, as well as the action of sending an SNMP trap from within a script.
Send a Syslog Message
Syslog is a common means of sending a message alert to the console. Since the syslog message can be customized by a script, the output of a syslog message can read any way that you want.
Here is a simple example that uses the Syslog event detector to look for a syslog message:
When this syslog message is sent to the console, the event detector will be triggered:
The action could provide more information from syslog:
Send an Email Alert
One of the most useful functions that Cisco IOS EEM provides is the capability to send email alerts in response to detected events. The smtp_send_email function requires text input in a specific format. Following is an example of an email message created manually within a TCL script:
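The example is missing from this copy. The required format is a multiline string with header lines, a blank line, and then the body; as a sketch (the variable names other than _email_server are illustrative):

```tcl
set email_msg "Mailservername: $_email_server
From: eem@$routername.domain.com
To: admin@domain.com
Cc:
Subject: Cisco IOS EEM alert from $routername

$cmd_output
"
```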
The email message can now be sent using the following script:
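The sending script is missing here; the usual idiom wraps smtp_send_email in a catch:

```tcl
if [catch {smtp_send_email $email_msg} result] {
    action_syslog msg "smtp_send_email failed: $result"
}
```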
When sending email messages from a script, note the following considerations:
● Make sure the following setting is configured on the device:
● If a hostname is used for mailservername, make sure that the following settings are configured:
● Do not indent the email message text with tabs (for example, to improve readability); the tab characters may interfere with the message format.
Send an SNMP Trap
Cisco IOS EEM supports the sending of custom SNMP traps. Using this action, Cisco IOS EEM can send a custom message to any SNMP agent. The SNMP trap that is generated uses the Cisco IOS EEM MIB: CISCO-EMBEDDED-EVENT-MGR-MIB.my. See.
The following example uses TCL to send a custom SNMP trap text string:
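The example is missing from this copy. Assuming the TCL extension mirrors the applet syntax shown next (this is an assumption), it would look similar to:

```tcl
action_snmp_trap intdata1 1 intdata2 2 strdata "Custom trap: high CPU utilization detected"
```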
Applets use similar syntax:
action label snmp-trap [intdata1 integer] [intdata2 integer] [strdata string]
For more information about the MIB, use the Cisco MIB locator at.
1. For information about the structure of the MIB, go to.
TCL Code Snippets
When you are creating a new TCL script, you can copy and paste commonly used script components. Table 2 provides some examples of commonly used functions.
Table 2. Common TCL Script Functions
Function
Code
Import namespace
Set hostname
Open CLI
Invoke enable as a CLI command
Use show commands
Enter configuration mode (configure terminal)
Close CLI
Output syslog
Get event_reqinfo information
Set variables using array information:
See the following URL for array labels: nmb_eemt.html#wp1050955
Send email (manual)
Applying Regular Expressions
Regular expressions (often abbreviated as regex) provide a means for parsing information from a text string. A regular expression matches a pattern of characters in a given string (a set of characters). Regular expressions offer flexibility, with a single regular expression matching multiple patterns.
A regular expression can be simple, matching an exact pattern. For example, a regular expression comparing the character string “dog” will match “dog” in the target string “It is raining cats and dogs.” Note that regular expressions are case sensitive, so “dog” will match “dog,” but not “Dog.” This is a literal pattern match. It matches a “d” followed by an “o” followed by a “g.”
More often, a regular expression uses special characters, called metacharacters, to match a pattern. Examples of metacharacters (this is not an exhaustive list) are the asterisk (*), caret (^), dollar sign ($), and dot (.). These metacharacters have special meanings. For example, a dot matches any single character, a caret anchors the match to the beginning of the string, and a dollar sign anchors the match to the end of the string. The regular expression “^a” matches “a” at the start of the string “abc.” Similarly, the expression “c$” matches “c” at the end of the string “abc.”
A full description of regular expressions is beyond the scope of this document, but a number of excellent tutorials on regular expressions are available on the web, and these are searchable using any of the common web search engines.
Regular expressions are very useful in Cisco IOS EEM TCL scripts. Variables provided in the event_reqinfo array often need to be parsed to obtain enough information to make an informed decision about what action needs to be taken.
For example, the Syslog event detector can be triggered by a link-down pattern (such as “%LINK-3-UPDOWN”) in a system-generated syslog message. To decide what action to take, the entire syslog message needs to be parsed to determine which interface went down.
A regular expression can be applied to this entire syslog-generated message to find the interface name.
The following is an example of the entire syslog message:
The regular expression is run against the syslog message, extracting the interface from the syslog as follows:
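The regular expression itself is missing from this copy; a sketch that produces the result described next is:

```tcl
# $msg holds the entire syslog message
regexp {Interface [^ ,]+} $msg intf
# intf now contains the matched text, e.g. "Interface Ethernet2/3"
```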
In this example, the entire syslog message is held in the variable $msg. The result of the regular expression run against the entire syslog message ($msg) is “Interface Ethernet2/3.” The variable intf is set to this value, which can then be used later in the script to help determine appropriate actions to take.
Annotated TCL Script Example
Table 3 presents an example of a TCL script. Along the right side is annotation explaining the lines of the script.
Table 3. Annotated TCL Script
This line creates an event detector for the CLI, where it is triggered when a user types any command with debug in it.
Setting sync to yes causes the policy to run synchronously: the command is held while the script runs, and the script’s exit status ultimately decides whether the command is permitted.
This code is required for every Cisco IOS EEM TCL script. It imports libraries specific to Cisco IOS EEM extending the capabilities of standard TCL.
This line creates the array called cli from the event_reqinfo information.
This line sets a variable called command to the msg variable in the array cli.
This line is If-Then logic that determines whether the $command variable is either show debugging or undebug all. If so, let the command execute. If not, do nothing and move on to the next line in the script.
Note: The system sees the fully expanded command, so if a user types sh debug, the system actually sees show debugging. To see the full form of an abbreviated command, press Tab while typing it.
This line opens the CLI in execution mode.
This line types the command enable.
This line types the command show clock. The variable result is set to the output of show clock.
The show clock command sends this output to the console:
*12:20:19.187 PST Fri May 1 2009
The hour is needed to determine whether the time is between 9 a.m. and 5 p.m. local time. This line runs a regular expression against the show clock output (the entire output is $result). The result of the regular expression run against $result is used to set the variable hour.
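The script line is missing from this copy; as a sketch, a regular expression that extracts the hour from that output is:

```tcl
# $result holds the show clock output, e.g. "*12:20:19.187 PST Fri May 1 2009"
regexp {([0-9]+):[0-9]+:} $result match hour
# hour is now set to the hour field, e.g. "12"
```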
This line is an action to type the syslog message “The hour is $hour.”
Remember that $hour was the variable set from the regular expression run against $result. The variable result is often used by the system itself. Consequently, copy the contents of result into another variable (in this case, hour) before a later line in the TCL script overwrites result.
This line uses If-Then-Else logic to decide whether the hour is within the time frame in which debug commands are denied.
IF it is, THEN send a syslog message to the console notifying the user that the command is denied. Also, set exit 0, which denies the command.
ELSE set exit 1, which lets the command run as originally typed.
This line closes the CLI. Always close the CLI.
Testing the CLI TCL Script
From the CLI, type show clock to determine the time. The time is within the restricted time.
From the CLI, type debug ip address.
Because this entry is made during the restricted time, the resulting syslog message is sent to the console, and the command is not permitted.
From the CLI, type undebug all.
The command is allowed, and the system syslog message notifies the console that all debugging has been turned off.
Applet and TCL Script Comparison Example
Table 4 presents sample code for a Process event detector using both an applet and TCL script.
Table 4. Cisco IOS EEM Process Event Detector Using Both an Applet and a TCL Script
Both samples use the email.abc.com fully qualified domain name (FQDN) instead of the IP address of the mail server. Make sure that Domain Name System (DNS) is configured to resolve the FQDN, or use ip host to configure it manually on the switch.
This line in the applet creates the applet; the TCL script has no corresponding line.
The TCL script requires the importation of the Cisco libraries, Applets are embedded in Cisco IOS Software, so there is no need to import any additional libraries.
These equivalent lines create the event detector triggers. For the applet, the path is a regular expression, so “.” means any process.
Information supplied by event_reqinfo for TCL scripts is provided for the applets. With applets, the information is embedded in Cisco IOS Software, so arrays and variables do not need to be set up.
Global variables such as the router name are not in the event_reqinfo information or a standard variable for applets; consequently, for both the TCL script and applet, the router name needs to be obtained independently.
These equivalent commands invoke a syslog message.
These equivalent commands send an email message. Notice that variables are used extensively in the email message. This approach increases the portability of both applets and TCL scripts.
Note that the applet email body is slightly different because of the maximum number of characters per line in Cisco IOS Software. This maximum is a limitation in the use of applets rather than TCL.
If the domain is left out of the “From” line in the email, the system will append the domain configured on the switch. Also, note that the domain—cat6k(config)#ip domain name (name)—needs to be configured on the switch for SMTP to work and the email to be sent. | http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/white_paper_c11-567132.html | CC-MAIN-2015-22 | refinedweb | 12,563 | 53.92 |
Dependency isolation of implementation detailsScott Stark Oct 5, 2005 2:47 PM
With the desire to expand the use of hibernate as an implementation detail in the jms() and ejb timer() pms comes the problem of leaking the hibernate classes used as an implementation detail into the user space. Throw ejb3 into the mix with its potentially different version requirements and dual implementation detail and public api use of hibernate and you have a mess. The har deployer adds yet another potential point of conflict. A related JBBUILD is the following jarjar/genjar tool investigation:
Here we need to decide on how we are going to handle this. The jarjar tool should allow for isolation at the expense of bloat and possibly increased debugging/support issues. We may want different solutions for 4.0.x vs 5.0.x where we have additional opportunities to tweak the underlying architecture to better deal with this problem.
1. Re: Dependency isolation of implementation detailsScott Stark Oct 20, 2005 10:28 AM (in response to Scott Stark)
I would like to prototype a use of jarjar by removing the dom4j.jar classes from the public namespace. This seems like the simplest implementation detail dependency we have. The only problem I see is other jems components relying on the bundled dom4j.jar.
2. Re: Dependency isolation of implementation detailsRuel Loehr Oct 20, 2005 2:04 PM (in response to Scott Stark)
I have been doing some reading about jarjar and genjar and will work on setting up the prototype.
My understanding of the use case for this prototype is as follows:
A build would be created as usual but when the distribution is built instead of copying the dom4j.jar to output/jbossas5.0.alpha/lib we would instead create a new jar (using the jarjar task) called jboss-internal.jar (or something).
The jarjar tool would take the classes from dom4j, give them a new package name and place them into the new jar.
3. Re: Dependency isolation of implementation detailsScott Stark Oct 20, 2005 2:16 PM (in response to Scott Stark)
The jar name would have to remain dom4j.jar as we currently have that defined as a hard coded bootclasspath entry, but this could be externalized. It should be named the same as whatever the normal build uses or else there will be conflicts when running the normal build vs the dist build due to classpath entry mismatches.
The jarjar util will have to modify every jar that uses the dom4j.jar to modify the org.dom4j.* package names used at compile time to jarjard org.jboss.dom4j.* package name as well or else nothing will work. You will get ClassNotFoundErrors for the org.dom4j.* classes from the jboss jars unless this is done.
4. Re: Dependency isolation of implementation detailsRuel Loehr Oct 24, 2005 4:50 PM (in response to Scott Stark)
So the first part of this will basically be transforming all of the jars in build/output/...... and replacing the dom4j packaging schema with a new org.jboss.dom4.... schema as appopriate. The next step would be to run the testsuite against this version of the server. In order for the testsuite to work we will need to perform the transformation operation against the output/libs/jars of each of the modules. | https://developer.jboss.org/thread/89699 | CC-MAIN-2018-26 | refinedweb | 555 | 54.83 |
Play 2.0, A web framework for a new era
This week, I'm in Antwerp, Belgium for the annual Devoxx conference. After traveling 21 hours door-to-door yesterday, I woke up and came to the conference to attend some talks on Play and PhoneGap. I just got out of the session on Play 2.0, which was presented by Sadek Drobi and Guillaume Bort. Below are my notes from this presentation.
The Play 2.0 beta is out! You can read more about this release on the mailing list. This beta includes native support for both Scala and Java, meaning you can use both in the same project. The release also bundles Akka and SBT by default.
In other news, Play 2.0 is now part of the Typesafe Stack. Typesafe is the Scala company, started by the founder of Scala (Martin Odersky) and the founder of Akka (Jonas Bonér). Guillaume is also joining the Typesafe Advisory Board.
Sadek and Guillaume both work at zenexity, where Play is the secret weapon for the web applications they've built for the last decade. Play was born in the real world. They kept listening to the market to see what they should add to the project. At some point, they realized they couldn't keep adding to the old model and they needed to create something new.
The web has evolved from static pages to dynamic pages (ASP, PHP). From there, we moved to structured web applications with frameworks and MVC. Then the web moved to Ajax and long-polling to more real-time, live features. And this changes everything.
Now we need to adapt our tools. We need to handle tremendous flows of data. Need to improve expressiveness for concurrent code. We need to pick the appropriate datastore for the problem (not only SQL). We need to integrate with rapidly-evolving client side technologies like JavaScript, CoffeeScript, and Dart. We need to use elastic deployment that allows scaling up and scaling down.
zenexity wanted to integrated all of these modern web needs into Play 2.0. But they also wanted to keep Play approachable. They wanted to maintain fast turnaround so you can change your code and hit reload to see the changes. They wanted to keep it as a full stack framework with support for JSON, XML, Web Services, Jobs, etc. And they wanted to continue to use and conventions over configuration.
At this point, Guillaume did a Play 2.0 Beta demo, show us how it uses SBT and has a console so everything so it runs really fast. You can have both Scala and Java files in the same project. Play 2.0 templates are based on Scala, but you don't need to know Scala to use them. You might have to learn how to write a for loop in Scala, but it's just a subset of Scala for templates and views. SBT is used for the build system, but you don't have to learn or know SBT. All the old
play commands still work, they're just powered by a different system.
After the demo, Sadek took over and started discussing the key features of Play 2.0.
To handle tremendous amounts of data, you need to do chunking of data and be able to process a stream of data, not just wait until it's finished. Java's InputStream is outdated and too low level. Its read() method reads the next byte of data from the input and this method can block until input data is available.
To solve this, Play includes a reactive programming feature, which they borrowed from Haskell. It uses Iteratee/Enumerator IO and leverages inversion of control (not like dependency injection, but more like not micro-managing). The feature allows you to have control when you need it so you don't have to wait for the input stream to complete. The Enumerator is the component that sends data and the Iteratee is the component that receives data. The Iteratee does incremental processing and can tell the Enumerator when it's done. The Iteratee can also send back a continuation, where it tells the Enumerator it wants more data and how to give it. With this paradigm, you can do a lot of cool stuff without consuming resources and blocking data flow.
Akka is an actor system that is a great model for doing concurrent code. An Actor could be both an Enumerator and an Iteratee. This vastly improves the expressiveness for concurrent code. For example, here's how you'd use Akka in Play:
def search(keyword: String) = Action { AsyncResult { // do something with result } }
Play does not try to abstract data access because datastores are different now. You don't want to think of everything as objects if you're using something like MongoDB or navigating a Social Graph. Play 2.0 will provide some default modules for the different datastores, but they also expect a lot of contributed modules. Anorm will be the default SQL implementation for Play Scala and Ebean will be the default ORM implementation for Play Java. The reason they've moved away from Hibernate is because they needed something that was more stateless.
On the client side, there's so many technologies (LESS, CoffeeScript, DART, Backbone.js, jQuery, SASS), they didn't want to bundle any because they move too fast. Instead, there's plugins you can add that help you leverage these technologies. There's a lot of richness you can take advantage of on the client side and you need to have the tools for that.
Lastly, there's a new type of deployment: container-less deployment to the cloud. Akka allows you to distribute your jobs across many servers and Heroku is an implementation of elastic deployment that has built-in support for Play.
They've explained what they tried to design and the results of this new, clean architecture have been suprising. Side effects include: type-safety everywhere for rock-solid applications. There's an awesome performance boost from Scala. There's easier integration with existing projects via SBT. And it only takes 10 lines of code to develop an HTTP Server that responds to web requests..
At this point, Guillaume did another demo, showing how everything is type-safe in 2.0, including the routes file. If you mistype (or comment one out) any routes, the compiler will find it and notify you. Play 2.0 also contains a compiled assets feature. This allows you to use Google's Closure Compiler, CoffeeScript and LESS. If you put your LESS files in app/assets/stylesheets, compilation errors will show up in your browser. If you put JavaScript files in app/assets/javascripts, the Closure compiler will be used and compilation errors will show up in your browser.
Play 2.0 ships with 3 different sample applications, all implemented in both Java and Scala. HelloWorld is more than just text in a browser, it includes a form that shows how validation works. Another app is computer-database. When Guillaume started it, we saw how evolutions were used to create the database schema from the browser. The Play Team has done their best to make the development process a browser-based experience rather than having to look in your console. The computer-database is a nice example of how to do CRUD and leverages Twitter's Bootstrap for its look and feel.The last sample application is zentasks. It uses Ajax and implements security so you can see how to create a login form. It uses LESS for CSS and CoffeeScript and contains features like in-place editing. If you'd like to see any of these applications in action, you can stop by the Typesafe booth this week at Devoxx.
Unfortunately, there will be no migrating path for Play 1.x applications. The API seems very similar, but there are subtle changes that make this difficult. The biggest thing is how templating has changed from Groovy to Scala. To migrate from 1.2.x would be mostly a copy/paste, modify process. There are folks working on getting Groovy templates working in 2.0. The good news is zenexity has hundreds of 1.x applications in production, so 1.x will likely be maintained for many years.
Summary
This was a great talk on what's new in Play 2.0. I especially like the native support for LESS and CoffeeScript and the emphasis on trying to keep developers using two tools: their editor and the browser. The sample apps look great, but the documentation look sparse. I doubt I'll get a chance to migrate my Play 1.2.3 app to 2.0 this month, but I hope to try migrating sometime before the end of the year.
Posted by Marius S on November 16, 2011 at 08:35 AM MST #
@Matt Thanks for the write-up!
At this time, do you think that Play 2.0 should be a serious consideration for a team creating a new web app in Scala or Java? What are the drawbacks in Play 2.0 compared to other leading choices?
Posted by Jay Barker on November 16, 2011 at 08:57 AM MST #
Posted by Andreas on November 16, 2011 at 09:06 AM MST #
Posted by Jon K on November 16, 2011 at 09:06 AM MST #
I'm a *huge* fan of CS and Less. I don't know how we created website before these technologies were available. The 'native' support of CS/Less can be done easily with an Eclipse builder. Every time I save a file, it gets 'compiled' and I can see errors *before* I do a context switch to reload things in my browser. I don't know if I need my web app framework to do all this for me. Here is my writeup on how to set this up:
I worry about the speed and cpu load of Scala. Sure, it compiles down to the JVM byte codes, but that doesn't mean it is fast. Groovy is dog slow, so at least they've switched from that. I'm also not a huge fan of the syntax of Scala. Seems like more of a fad language to me. I'm waiting for Ceylon, mostly because I know Gavin really knows his shit and will do a great job with it.
I've been using a relatively unknown template engine for my latest project called Cambridge Templates (stupid name, I know). This is one of the most brilliantly simple engines I've ever used and it is very fast. I integrated it with the Jexl expression language (it is in trunk now) and that has been more than adequate. Most of the time, I'm working in CoffeeScript on the client side anyway. The author is very responsive to fixing bugs and adding requested features.
Posted by Jon Stevens on November 16, 2011 at 11:36 AM MST #
@Jon: No need to worry. Scala basically emits the same bytecode Java would emit for the same task. Most comparisons show that both languages have the same performance, sometimes Java is a few percents faster, sometimes Scala.
You think Scala is a fad language although it has regular releases, an active community, commercial support, a huge team of people working on the language, a standard library, dozens of books, IDE support, documentation and a huge eco-system around it? Interesting, considering that Ceylon has to this day nothing of the above.
Posted by Steve on November 16, 2011 at 12:49 PM MST #
To add to Steve's comment, Scala is designed by Martin Odersky--the guy who wrote javac and laid the groundwork for generics in Java. It's also used by several of the most popular web applications: Twitter, LinkedIn, and FourSquare among others.
But surely Gavin King can do better. He created Hibernate after all, and everyone uses that. Well, everyone except Play 2.0 apparently.
Posted by Zachary Kuhn on November 16, 2011 at 01:15 PM MST #
@Jon Go look at the Hibernate source code when Gavin was the only committer (< 3.x) and see if you are so sure he's the right guy to bring us all the next language.
I also don't like the Scala syntax that much. Mainly because its cutesy attempts at compacting the amount of written code is a nerdy step backwards: readability trumps all value-wise in a codebase, and there are tools to help the poor little programmer's fingers.
Posted by Rob on November 17, 2011 at 10:37 AM MST #
Posted by numan on November 21, 2011 at 10:45 AM MST #
For those of you 'loving' Scala... please read:
Posted by Jon Stevens on November 25, 2011 at 10:45 AM MST #
Thanks for the interesting link Jon. I think Play 2 does a good job of hiding Scala's complexity, but it can still be a very confusing language.
@numan - Play 2.0 is development ready, but not quite production ready.
Posted by Matt Raible on November 29, 2011 at 04:04 PM MST # | http://raibledesigns.com/rd/entry/play_2_0_a_web | CC-MAIN-2017-26 | refinedweb | 2,199 | 73.58 |
ursboller
12-09-2020
Having an app scren, is it possible to use the libraries aio-lib-state or aio-lib-files and access the data? eg. I want to have some "basic logs" save somewhere and display it on an app screen (component). or do I need to write an action fetching the desired data and then display the response?
duypnguyen
14-09-2020
Hi @ursboller , it is possible to use aio-lib-state and aio-lib-files in the UI components in `web-src/`, same as any nodejs code. However it does not have the built-in authentication like in the backend action which leverages the namespace credentials to obtain access tokens to State and Files. You would have to manage the authentication in your code. IMO this is not a secure way of using State and Files libs because there are too many vulnerability risks in the frontend code. The best way would still going through the action with `require-adobe-auth` enabled. | https://experienceleaguecommunities.adobe.com/t5/project-firefly-questions/use-aio-lib-state-and-aio-lib-file-in-web-src/qaq-p/378804 | CC-MAIN-2020-40 | refinedweb | 166 | 69.31 |
The objects in our programs exist only while the program is executing. When the program closed, these objects cease to exist. How can we save an object in our program and restore it when the program rerun? For example, suppose that we are playing a computer chess game. We close the game before it finished. When we restart the game, it should resume from where we had left it, instead of from the beginning. One way to accomplish this would be to save information about the game (such as the locations of various game pieces, scores, and so forth) to a file, and then read this information back from the file to restore the game to the state where we had left it when it runs next. This is the idea behind serialization. Serialization is saving an object's state as a binary stream (that is, as a sequence of bytes). When an object is serialized, it is said to be flattened or deflated. The reverse process of constructing an object from the binary data is known as deserialization. Thus, a deserialised (or inflated) object is one whose state has restored.
Java provides two classes to implement serialization: ObjectOutputStream and ObjectlnputStream. The ObjectOutputStream class is used to convert the instance fields of an object into a stream of bytes. This byte stream can then sent to a FileOutputStream, which writes it to a file. The binary data can then be read back from the file using a FilelnputStream and sent to an ObjectInputStream class to restore the object.
Below figure shows the readObject and writeObject methods of these two classes that are used to read and write objects, respectively. A class whose objects serialise must implement the java. io.Serializable interface. Otherwise, a NotSeria1izableException thrown at run time.
Although the state of objects can be saved to a file using the classes, it is much simpler to do so using these object stream classes. Additionally, these classes reduce the chances of introducing errors in your program-for example, by reading back the fields in a different order than they wrote in. If the object being saved contains references to other objects, the latter are also saved along with the former.
This next example shows how objects can serialise. Consider a class called AudioPlayer with a field called audioFile and a play method to play the contents of audioFile. Let us write a program in which we create an instance of this class called player that is playing some audio file. When we close the program and then rerun it later, we would like to have it replay the file that played last. It can be done by saving the value of the player's audioFile field in some me, and then reading it back from this file when the program is rerun. The AudioPlayer class must implement the Serializable interface:
public class AudioPlayer implements Serializable {
public File audioFile;
public void setAudioFile(File f){ audioFile = f;}
public void play() throwsException{// code to play audioFile}
}
You will see how to write the code in method play to play an audio file a littie later. Create an instance of AudioPlayer called player:
AudioPlayer player = new AudioPlayer();
player.setAudioFile(new File("audio/groovy.wav"))
Next, we discuss how to serialise this object player.
The process of reading and writing objects in a file is called object serialization. In Java, serialization occurs automatically.
An object is serialised using the following steps:
1. Create a FileOutputStream to write to a file (say, object.dat) and wrap it inside an ObjectOutputStream.
2. Invoke the object stream's writeObject method to save the object's state to object.dat.
For example, this shows how to serialise the instance player by saving its state in the file player.dat:
// wrap a FileOutputStream inside an ObjectOutputStream
FileOutputStream fileOut = new FileOutputStream("text/player.dat") ;
ObjectOutputStream objectOut = new ObjectOutputStream(fileOut);
// save the contents of field audioFile in player into file player.dat
objectOut.writeObject(player); objectOut.close();
In this example, the player contains only one field, but if there are multiple fields, all of their values saved by the writeObject method. Multiple objects (say, object1, object2, and object3) can also saved to the same file by calling the writeObject method successively for each object:
objectOut.writeObject(object1);
objectOut.writeObject(object2);
objectOut.writeObject(object3);
Writing an object to a file is called serializing the object and reading the object back from a file is called deserializing an object.
An object is deserialised using the following steps:
1. Create a FilelnputStream to read from a file (say,object.dat) and wrap it inside an ObjectInputStream.
2. Invoke the object stream's readObject method to restore the object's state from the file object.dat.
Suppose that the AudioPlayer program is rerun again, and a new instance of AudioPlayer called player created. When the player object is deserialized, the audioFile field of player will be initialized to the file played last using the data in the file player.dat:
FilelnputStream fileln = new FilelnputStream("text/player.dat");
ObjectlnputStream objectln = new ObjectlnputStream(fileln);
// restore player's audioFile field by reading it from player.dat
player = (AudioPlayer) objectln.readObject();
objectln.close();
When the object is read using method readObject, it is necessary to cast it back to its original type, as previously shown.
If the object being deserialized has several fields, all of the fields are restored when the readObject method is invoked. Multiple objects can be restored similarly via successive calls to readObject:
object1 = (type) objectln.readObject();
object2 = (type) objectln.readObject();
object3 = (type) objectln.readObject();
The instance player is serialised, as shown in Figure(a), and then deserialised, as shown in Figure (b).
String and array objects can serialise similarly. However, static fields cannot serialise. Java contains a keyword called transient that the programmer can use to specify fields that are not to serialise. Suppose that AudioPlayer has a field called isPlaying that is set to true while the player is playing audio:
transient boolean isPlaying;
This field not saved when the instance player serialised because it declared as transient. When an object deserialised, the transient fields of this object reinitialised to default values (null for objects). For example, the isPlaying field will be reinitialized to false.
The AudioMixer Class: Serialization This section shows how serialisation works using the AudioMixer class. The example used here is similar to the AudioPlayer example, except that we implement the play method in AudioMixer so that it can play audio files. Java contains many classes and interfaces to support playing and recording audio in the javax.sound.sampled package.
The main class in Java's sound API is the AudioSystemclass. The AudioSystem class contains various static methods that provide information about audio files and resources on your computer for playing sound. Some of these methods shown in Figure below. You can obtain the format of some audio file-say, groovy.way-by using the getAudioFi1eFormat method:
File audioFile = new File("audio/groovy.wav");
AudioSystem.getAudioFileFormat(audioFile);
The output is similar to the printSummary method of AudioMixer.
To read the contents of an audio file, convert it to an audio stream using the getAudioInputStream method of AudioSystem:
AudiolnputStream stream = AudioSystem.getAudiolnputStream(audioFile);
Figure shows some methods in class AudiolnputStream. The audio samples can then be read one at a time using the read() method of AudiolnputStream:
stream.read ();
Alternately, several samples can be read into an array using the read (byte[]) method:
byte[] sampleArray = new byte[100];
stream.read(b);
An audio stream can be played by using either a clip or a source data line. A clip is used when the audio data is fully available before playback starts and is not too large. An example of this is a song file stored on a CD. A source data line is used when the data is not fully available before playback starts or when the audio file is too large to be stored in the computer's memory.
An example of this is a concert being broadcast live. We explain how to use a clip.
You can obtain a clip using the getClip method in AudioSystem.The Clip interface declares several methods to open and playa clip. After the clip is obtained, it can open an audio stream and play it. To open a stream, use an AudiolnputStream as an argument to a clip's open method. The playback is starid by invoking the start method in Clip, as shown here:
Clip clip = (Clip) AudioSystem.getClip();
clip.open(stream); // open the audio stream
clip.start(); // start playing the audio
After the audio has been played fully, a special type of event called LineEvent.Type.STOP is generated. The clip can add an event handler of type LineListener to listen for this event using the addLineListener method in Clip.
The interface LineListener contains an update method, which must be implemented by the event handler:
void update(LineEvent event)
The LineEvent class contains the method getType to obtain the type of the event that has occurred (such as START or STOP,among others).
With this background, we are now ready to modify AudioMixer so that it can play audio files. Add the field audioFile and the methods play and set-AudioFile to AudioMixer. Also, declare the class to implement the Serializable interface to see how serialization works:
public class AudioMixer implements Serializable {
private File audioFile;
public void setAudioFile(File f){ audioFile = f;}
public void play() throwsException{
AudioInputStream stream = AudioSystem.getAudioInputStream(audioFile);
Clip clip =(Clip) AudioSystem.getClip();
//line listener causes program to exit after play is completed
clip.addLineListener(new LineListener(){
public void update(LineEvent evt){
if(evt.getType()== LineEvent.Type.STOP){
System.exit(0);
}
}
});//open the audio stream and start playing the clip
clip.open(stream);
clip.start();
//program waits here while the music is played
Thread.sleep(1800*1000);
}
}
This statement might appear somewhat strange because we have not yet discussed multithreading.
Thread.sleep(1800*1000);
If you comment out this line, the audio will not be played because the program will exit prematurely. This statement simply says that our program should wait for a specified time interval (1800 seconds in this case), which ensures that the audio is played by the player. If the audio ends before 1800 seconds, a STOP line event is generated, and the line listener attached to the clip handles this event by terminating the program. If you need to play audio fIles longer than 30 minutes, you must make the time interval larger.
Update the main method in AudioMixer as follows to see how serialization works. This example shows how an audio player can be made to restart playing the same audio file that it was playing before it was closed. The code in this method is almost identical to that discussed earlier in class AudioPlayer:
public static void main(String[] args) {
try {
AudioMixer player = new AudioMixer();
System.out.print("Enter name of file to play:");
Scanner s = new Scanner(System.in);
String input = s.next();
player.setAudioFile(new File(input));
FileOutputStream fileOut = new FileOutputStream("text/player.dat");
ObjectOutputStream objectOut = new ObjectOutputStream(fileOut);
objectOut.writeObject(player);
// serialize
objectOut.close();
System. out. println ("Written AudioMixer object called player to file player.dat");
System.out.println("Create new AudioMixer object called player1");
AudioMixer player1 = new AudioMixer();
System.out.println("Initialize player1 fields from file player.dat");
FilelnputStream fileIn = new FileInputStream("text/player.dat");
ObjectlnputStream objectln = new ObjectlnputStream(fileln);
player1 = (AudioMixer) objectln.readObject();
// deserialize
objectIn.close();
player1.play(); // play the audio
catch (Exception e) { e.printStackTrace(); }
}
You will also need to add these statements to AudioMixer:
import javax.sound.sampled.*;
import java.util.Scanner;
In main, an AudioMixer instance called player is created and its audioFile field is set to the filename entered by the user on the console. This player is then serialized to the file player.dat by using the writeObject method of objectOut.
After this, player is assigned to a new AudioMixer instance so that its audioFile field becomes null. Then, player1 is deserialized from the file player.dat by calling the readObject method of objectIn, and so audioFile is set to the filename that was previously entered by the user. Finally, p1ayerl plays the audio by calling its play method. When you compile and run the program, you will get the following output:
Enter name of file to play:audio/groovy.wav
Written AudioMixer object called player to file player.dat
Create new AudioMixer object called player1
Initialize player1 fields from file player.dat
The new player named player1 also starts playing the same file groovy.wav.
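One detail the walkthrough assumes: for writeObject and readObject to work on AudioMixer instances, the class must implement java.io.Serializable. A minimal sketch (the class body and names here are assumed for illustration, not the article's actual AudioMixer):

```java
import java.io.File;
import java.io.Serializable;

// Hypothetical sketch: the class must implement Serializable for
// writeObject/readObject to accept its instances.
class AudioMixer implements Serializable {
    private static final long serialVersionUID = 1L;

    // File is itself Serializable, so this field is saved and restored.
    private File audioFile;

    public void setAudioFile(File f) { this.audioFile = f; }
    public File getAudioFile() { return audioFile; }
}
```

Any non-serializable field would instead need to be marked transient, or serialization throws a NotSerializableException.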
Hello, this is my first article on CodeProject. I have been a long time
reader, and the CodeProject resource has been an endless supply of answers to
many questions. After searching CodeProject, I found that the .NET section
lacked any articles on compression, so I thought I would write this article.
First of all, this article depends on the SharpZipLib which is 100% free to
use, in any sort of projects. Details on the license and download links are
available here.
A friend asked me to teach him C#.NET, and as a project to teach him, I
decided to start writing a revision control system utilizing both server and
client, we've both had our share of pitfalls with CVS. One of the features he
wanted involved compression, so I sought out this library, but its documentation
is sketchy unless you use it purely for an API reference. Also, the
documentation only shows examples of file based compression. However, in our
project, we wanted the ability to work in memory (with custom diff-type
patches). Originally, I found this library on a forum that said this wasn't
possible, but after digging into the library documentation, I found some
Stream-oriented classes that looked promising. An hour or so of playing around,
and this simple and short code was the result. Since the code is relatively
short, I have not included any source or demo files to download. I hope someone
finds this useful!
For convenience's sake, we bring the IO,
Text, and SharpZipLib namespaces into scope:
using System;
using System.IO;
using System.Text;
using ICSharpCode.SharpZipLib.BZip2;
First of all, we'll start with compression. Since we're using
MemoryStreams, let's create a new one:
MemoryStream msCompressed = new MemoryStream();
Simple enough, right? For this example, I will use BZip2. You can use Zip, or
Tar, however, they require implementing a dummy FileEntry, which is
extra overhead that is not needed. My choice of BZip2 over GZip comes from the
experience that larger data can be compressed smaller, at the cost of a slightly
larger header (discussed below).
Next, we create a BZip2 output stream, passing in our
MemoryStream.
BZip2OutputStream zosCompressed = new BZip2OutputStream(msCompressed);
Pretty easy... Now, however, is a good time to address the header overhead I
mentioned above. In my practical tests, compressing a 1-byte string produced a
28-byte overhead from the headers alone when using GZip, plus the additional
byte that could not be compressed any further. The same test with BZip2 produced
a 36-byte overhead from the headers alone. In practice, a source
file from a test project of 12892 bytes compressed to 2563 bytes, about a
75% compression rate give or take my bad math, using BZip2. Similarly, another
test saw 730 bytes compressed to 429 bytes, and a final test saw 174 bytes
compressed to 161 bytes.
Obviously, with any compression, the more data is available, the better the
algorithm can compress patterns.
So with that little bit of theory out of the way, back to the code... From
here, we start writing data to the BZip2OutputStream:
string sBuffer = "This represents some data being compressed.";
byte[] bytesBuffer = Encoding.ASCII.GetBytes(sBuffer);
zosCompressed.Write(bytesBuffer, 0, bytesBuffer.Length);
// Close() finishes the compression and flushes everything to the inner stream.
// (Finalize() cannot be called directly in C#, and is not needed here.)
zosCompressed.Close();
Pretty easy. As with most IO and stream methods, byte arrays are used instead
of strings. So we encode our output as a byte array, then write it to the
compression stream, which in turn compresses the data and writes it to the inner
stream, which is our MemoryStream.
bytesBuffer = msCompressed.ToArray();
string sCompressed = Encoding.ASCII.GetString(bytesBuffer);
So now, the MemoryStream contains the compressed data, so we
pull it out as a byte array and convert it back to a string. Note that this
string is NOT readable, attempting to put this string into a textbox will render
strange results. If you want to view the data, the way I did it was to convert
it into a Base64 string, but this increases the size; anyone who has suggestions
about that is welcome to comment. The result of running this specific code renders
the 43 byte uncompressed data as 74 byte compressed data, and when encoded as a
base 64 string, the final result is 100 characters as follows:
QlpoOTFBWSZTWZxkIpsAAAMTgEABBAA+49wAIAAxTTIxMTEImJhNNDIbvQ
aWyYEHiwN49LdoKNqKN2C9ZUG5+LuSKcKEhOMhFNg=
Obviously, these are not desirable results. However, I believe the speed of
which the library compresses short strings of data could be extended into a
method which returns either a compressed or uncompressed string with a flag
indicating which was more efficient.
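That idea could be sketched as a helper like this. The names are invented, and the sketch uses the framework's GZipStream so it is self-contained; with SharpZipLib you would substitute BZip2OutputStream:

```csharp
using System.IO;
using System.IO.Compression;

static class SmartCompress
{
    // Hypothetical helper: compress only when compression actually shrinks
    // the data, and report which form is being returned.
    public static (byte[] Data, bool IsCompressed) CompressIfSmaller(byte[] input)
    {
        using (var ms = new MemoryStream())
        {
            using (var zs = new GZipStream(ms, CompressionMode.Compress))
            {
                zs.Write(input, 0, input.Length);
            } // disposing the stream flushes the compressed footer

            byte[] compressed = ms.ToArray();
            return compressed.Length < input.Length
                ? (compressed, true)
                : (input, false);
        }
    }
}
```

Short inputs (which pay the header tax) come back untouched with the flag set to false, while compressible inputs come back shrunk.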
Now of course, to test our code above, we need some uncompression code. I
will put all the code together, since it's pretty much the same, just using a
BZip2InputStream instead of a BZip2OutputStream, and
Read instead of Write:
// Use the raw compressed bytes here; round-tripping them through an
// ASCII string (as above) corrupts byte values over 127.
MemoryStream msUncompressed = new MemoryStream(msCompressed.ToArray());
BZip2InputStream zisUncompressed = new BZip2InputStream(msUncompressed);
MemoryStream msResult = new MemoryStream();
byte[] chunk = new byte[4096];
int read;
// The decompressed length is not known up front, so read in chunks.
while ((read = zisUncompressed.Read(chunk, 0, chunk.Length)) > 0)
    msResult.Write(chunk, 0, read);
zisUncompressed.Close();
msUncompressed.Close();
string sUncompressed = Encoding.ASCII.GetString(msResult.ToArray());
Now, a quick check on sUncompressed should reveal the original
string intact... No files involved, however, if you wanted to load a file, there
are a few ways you can do it, and I leave it to your imagination.
Special thanks to the developers at ICSharpCode.Net for providing this
awesome library free to the public which makes this article possible. I have
no affiliation with ICSharpCode.Net, so I hope I have not breached anything in
posting this article.
I hope you all find this as useful as I do!
On 11/13/2010 11:29 AM, Mark Wooding wrote: > Alas, Python is actually slightly confusing here, since the same > notation `=' sometimes means assignment and sometimes means mutation. I disagree somewhat. An object is mutated by an internal assignment. "ll[0] = 1" assigns 1 to the 0 slot of ll. "o.a = 1" assigns 1 to the 'a' attribute of o. This which might be implemented by assigning 1 to the 'a' slot of o.__dict__, just as "a=1" might be implemented by assigning 1 to the 'a' slot of a namespace dict. Assignment *always* binds an object to a target. Names are just one possible target. And, of course, assignment always mutates something -- a set of associations -- even if the 'something' is not a Python object itself. So '=' always means assignment/binding by mutation. The question is what gets bound to what in what set of associations. The rest of your post helps clarify that. -- Terry Jan Reedy | https://mail.python.org/pipermail/python-list/2010-November/592369.html | CC-MAIN-2017-04 | refinedweb | 160 | 68.87 |
.
#lang racket
(require (planet soegaard/math/math))
(for/fold ([result '()])
([x (in-range 1000)])
(let ([d (digits (* x x))])
(let-values ([(left right) (split-at d (quotient (length d) 2))])
(if (equal? x (+ (digits->number left)
(digits->number right)))
(cons x result)
result))))
This is a good problem for python.
def kaprekar(n):
l = str(n * n)
length = len(l)
return n == int(l[0:length/2]) + int(l[length/2:])
for i in range(4,10000000):
if kaprekar(i):
print i
A simple ruby version …
A haskell solution
which gives
*Main> kaprekarLessThan 1000
[1,9,45,55,99,297,703,999]
And yes, yes I did misspell Kaprekar.
Answered in Python, with some pre-computating values and working only with digits (as opposed to strings) for speed:
Oops! Here we go:
ruby-1.9.2-p0 > require "kaprekar"
=> true
ruby-1.9.2-p0 > kaprekar? 9
=> true
ruby-1.9.2-p0 > kaprekar? 297
=> true
ruby-1.9.2-p0 > kaprekar? 50
=> false
ruby-1.9.2-p0 > kaprekar_to 1000
=> [1, 9, 45, 55, 99, 297, 703, 999]
;; First time I ever use the prelude…
(define (kaprekar? n)
(let* ((n2 (ipow n 2))
(size (+ 1 (ilog 10 n2)))
(size-l (if (even? size) (/ size 2) (/ (- size 1) 2)))
(size-r (if (even? size) size-l (+ 1 size-l))))
(let* ((right (modulo n2 (ipow 10 size-r)))
(left (/ (- n2 right) (ipow 10 size-r))))
(= n (+ left right)))))
(define (kaprekar n)
(filter kaprekar? (range 0 n)))
;; err, that was the wrong version.
(define (kaprekar? n)
(let* ((n2 (ipow n 2))
(size (+ 1 (ilog 10 n2)))
(size-r (/ (if (even? size) size (+ 1 size)) 2)))
(let* ((mask (ipow 10 size-r))
(right (modulo n2 mask))
(left (/ (- n2 right) mask)))
(= n (+ left right)))))
Common Lisp solution. No strings manipulation, but arithmetic (log, expt, ceiling, floor).
Same as model solution in scheme.
A soultion in Haskell using Data.Digits
Anyone have an idea, how to implement it without reversing the list twice?
Hackish code will come back to it when I have time.
Mine in F #
C Implementation
This is in python:
[…] Below is my solution to the Programming Praxis problem from Sept. 21, 2010. The problem was to find all Kaprekar numbers less than 1000. I wrote my solution in C++ to refresh my skills a little. For more info, see this link. […]
[…] A few days ago I attempted to put my Ruby skills to use, given a programming puzzle centred on Kaprekar numbers. […] | http://programmingpraxis.com/2010/09/21/kaprekar-numbers/?like=1&source=post_flair&_wpnonce=1fd78fe65c | CC-MAIN-2015-32 | refinedweb | 411 | 83.05 |
Hi -- After lots of searching I cant seem to find an answer (well one that I understand at least..)
I want to make the variables wobstacle, waisle, and wturn accept ONLY numbers, so that a user cannot input a letter and break the program. I was thinking of something along the lines of: if wobstacle is not a number, send the user back to the beginning... but the syntax is incorrect, and it doesn't seem I can use isnan this way. Any help?
Thanks
#include <iostream.h>
#include <math.h>

int main()
{
    double wobstacle, //Width of obstacle
           waisle,    //Width of aisle
           wturn;     //Width of turn
    bool flag;        //Calculate if the turn meets US

    flag = true;
    cout << endl << "Please enter the width of the OBSTACLE, AISLE, and "
         << "TURN in inches. (With spaces inbetween): ";
    cin >> wobstacle >> waisle >> wturn;

    while ( flag == true )
    {
        //Check for not a number:
        if ( wobstacle ! nan && waisle ! nan && wturn ! nan )
        {
            flag = false;
        }
        else
            cout << endl << "Please enter a number";
            cin >> wobstacle >> waisle >> wturn;
            flag = true;
    }

    if (wobstacle >= 48 && waisle >= 36 && wturn >= 36 || wobstacle < 48 && waisle >= 42 && wturn >= 48)
    {
        cout << "The design specifications meet the UFAS requirements";
    }
    else
        cout << "The design specifications do not meet the UFAS requirements";

    system("pause");
    return 0;
}
tl ‘import *, though I do like using named exports.
CommonJS modules generally look like this when you're importing them using babel's ES Modules/CommonJS interop.

ES Modules for both types of import namespacing
There’s a simple method to support both types of import namespacing with very minimal impact on ES Module authoring. Simply export the module’s own namespace as the default export .
The consumer can then carelessly use either method of getting a namespaced import.
It’s possible that importing a module into itself isn’t valid according to the ES Modules spec, though it does currently work using babel. I don’t know why this wouldn’t be valid. » ES Modules: Using named exports as the default export
Originally, I wrote a C++ parser which was used to parse given MS Word documents and put them into some form of a structure that was more useful for data processing. After I wrote the parser, I started working with .NET and C# to re-create the parser. In the process, I also wrote my first article for Code Project, Automating MS Word Using Visual Studio .NET. Several people have requested to see the C++ version of the application, hence, I finally got some time to put something together. I have written this article with the intention of making it easier for someone who is looking for quick answers. I hope that people can benefit from the information provided and help them get started faster.
No special background is necessary. Just have some hands on experience with C++.
I think the best way to present the code would be to first give you the critical sections which you need to get an instance of MS Word, and then give you snapshots of code that perform specific functions. I believe this way will help you get started faster in developing your own programs.
The following block is the header portion of the CPP file.
Note: The most important include files are <utilcls.h> and <comobj.hpp>. These are used for COM and OLE.
// Vahe Karamian - 04-20-2004 - For Code Project
//---------------------------------------------------------------------------
#include <vcl.h>
#pragma hdrstop
// We need this for the OLE object
#include <utilcls.h>
#include <comobj.hpp>
#include "Unit1.h"
#include <except.h>
//---------------------------------------------------------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TForm1 *Form1;
The following block creates the MS Word COM Object. This is the object that will be used to access MS Word application functions. To see what functions are available, you can explore within MS Word; also refer to the first article, Automating MS Word Using Visual Studio .NET.
As before, you can either make a Windows Forms Application or a Command Line application, the process is the same. The code below is based on a Windows Forms application, that has a button to start the process. When the user clicks the button, the Button1Click(TObject *Sender) event will be called and the code executed.
Button1Click(TObject *Sender)
Note: To better understand the code, ignore everything except the highlighted portions.

my_word.OlePropertySet( "Visible", (Variant) true ); // make Word visible; pass false to hide it
// get document object
my_docs = my_word.OlePropertyGet( "documents" );
Variant wordActiveDocument = my_docs.OleFunction( "open", fileName );
.
.
.
A brief explanation: we define an OleVariant variable called fileName and assign a file path to it. In the code above, this is done using an OpenDialog object. Of course, you can just assign a literal path for testing if you like, i.e., c:\\test.doc.
Next, we define two Variant data types called my_word, and my_docs. my_word will be used to create a word.application object and my_docs will be used to create a documents object.
Next, we define another Variant called wordActiveDocument. Using this referenced object, we can now do whatever we want! In this case, we are going to open the given MS Word document.
Notice that most of the variables are of type Variant.
At this point, we have a Word document that we can start performing functions on. At first, it might take a while for you to see how it works, but once you get a hang of it, anything in MS Word domain is possible.
Let's take a look at the following code, it is going to be dealing with tables within a MS Word document.
.
.
Variant wordTables = wordActiveDocument.OlePropertyGet( "Tables" );
long table_count = wordTables.OlePropertyGet( "count" );
.
.
As I mentioned before, all your data types are going to be of Variant. So we declare a Variant data type called wordTables to represent Tables object in our Document object.
Variant wordTables = wordActiveDocument.OlePropertyGet( "Tables" );
The line above will return all Table objects that are within our active Document object. Since Tables is a property of a Document object, we have to use the OlePropertyGet( "Tables" ); to get the value.
Table
OlePropertyGet( "Tables" );
long table_count = wordTables.OlePropertyGet( "count" );
The line above will return the number of tables in out Tables object. This is done by calling the OlePropertyGet( "count" ); to return us the value.
OlePropertyGet( "count" );
You might be wondering where do I get this information from? The answer to that question is in the first article: Automating MS Word Using Visual Studio .NET.
The next block of code will demonstrate how to extract content from the Tables object.
.
.
.
int t, r, c;
try
{
for( t=1; t<=table_count; t++ )
{
Variant wordTable1 = wordTables.OleFunction( "Item", (Variant) t );
Variant tableRows = wordTable1.OlePropertyGet( "Rows" );
Variant tableCols = wordTable1.OlePropertyGet( "Columns" );
long row_count, col_count;
row_count = tableRows.OlePropertyGet( "count" );
col_count = tableCols.OlePropertyGet( "count" );
// LET'S GET THE CONTENT FROM THE TABLES
// THIS IS GOING TO BE FUN!!!
for( r=1; r<=row_count; r++ )
{
Variant tableRow = tableRows.OleFunction( "Item", (Variant) r );
tableRow.OleProcedure( "Select" );
Variant rowSelection = my_word.OlePropertyGet( "Selection" );
Variant rowColumns = rowSelection.OlePropertyGet( "Columns" );
Variant selectionRows = rowSelection.OlePropertyGet( "Rows" );
long rowColumn = rowColumns.OlePropertyGet( "count" );
for( c=1; c<=rowColumn; c++ ) //col_count; c++ )
{
Variant rowCells = tableRow.OlePropertyGet( "cells" );
Variant wordCell = wordTable1.OleFunction( "Cell",
(Variant) r, (Variant) c );
Variant cellRange = wordCell.OlePropertyGet( "Range" );
Variant rangeWords = cellRange.OlePropertyGet( "Words" );
long words_count = rangeWords.OlePropertyGet( "count" );
AnsiString test = '"';
for( int v=1; v<=words_count; v++ )
{
test = test + rangeWords.OleFunction( "Item",
(Variant) v ) + " ";
}
test = test + '"';
}
}
}
my_word.OleFunction( "Quit" );
}
catch( Exception &e )
{
ShowMessage( e.Message + "\nType: " + __ThrowExceptionName() +
"\nFile: "+ __ThrowFileName() +
"\nLine: " + AnsiString(__ThrowLineNumber()) );
}
.
.
.
Okay, so above we have the code that actually will go through all of the tables in the Document object and extract the content from them. So we have tables, and tables have rows and columns. To go through all of the Tables object in a document, we do a count and get the number of tables within a document.
So we have three nested for loops. The first one is used for the actual Table object, and the 2nd and 3rd are used for the rows and columns of the current Table object. We create three new Variant data types called wordTable1, tableRows, and tableCols.
Note: Notice that wordTable1 comes from the wordTables object. We get our table by calling wordTables.OleFunction( "Item", (Variant) t );, which returns a unique Table object from the Tables collection.
Next, we get the Rows and Columns object of the given Table object. And this is done by calling OlePropertyGet( "Rows" ); and OlePropertyGet( "Columns" ); of the wordTable1 object!
Next, we get a count of rows and columns in the given Rows and Columns objects which belong to the wordTable1 object. We are ready to step through them and get the content.
Now, we will have to define four new Variant data types called tableRow, rowSelection, rowColumns, and selectionRows. Now we can go from column to column in the selected row and get the content.
In the most inner for loop, the final one, we again define four new Variant data types called rowCells, wordCell, cellRange, and rangeWords. Yes, it is tedious, but we have to do it.
Let's summarize what we have done so far:
Note: Yes, some steps are repeated, but the reason behind it is because not all tables in a given document are uniform! I.e., it does not necessarily mean that if row 1 has 3 columns, then row 2 must have 3 columns as well. More than likely, it will have different number of columns. You can thank the document authors/owners.
So then the final step will just step through the cells and get the content and concatenate it for a single string output.
And finally, we want to quit Word and close all documents.
...
my_word.OleFunction( "Quit" );
...
That is pretty much it. The code does sometimes get pretty tedious and messy. The best way to approach automating/using Word is by first knowing what it is that you exactly want to do. Once you know what you want to achieve, then you will need to find out what objects or properties you need to use to perform what you want. That's the tricky part, you will have to read the documentation: Automating MS Word Using Visual Studio .NET.
In the next code block, I will show you how to open an existing document, create a new document, select content from the existing document and paste it in the new document using Paste Special function, then do clean up, i.e., Find and Replace function.
Before you look at the block of code, the following list will identify which variable is used to identify what object and the function that can be applied to them.
vk_filename - the path of the original Word file
vk_converted_filename - the path for the converted copy (the original name plus "_c.doc")
vk_this_doc - the opened original Document object
vk_converted_document - the newly added Document object
vk_this_doc_select - the Select call on the original document
vk_this_doc_selection - the Selection object of the original document
vk_converted_document_select - the Select call on the new document
vk_converted_document_selection - the Selection object of the new document
wordSelectionFind - the Find object of the new document's selection
// Get the filename from the list of files in the OpenDialog
vk_filename = openDialog->Files->Strings[i];
vk_converted_filename = openDialog->Files->Strings[i] + "_c.doc";
// Open the given Word file
vk_this_doc = vk_word_doc.OleFunction( "Open", vk_filename );
statusBar->Panels->Items[2]->Text = "READING";
// -------------------------------------------------------------------
// Vahe Karamian - 10-10-2003
// This portion of the code will convert the word document into
// unformatted text, and do extensive clean up
statusBar->Panels->Items[0]->Text = "Converting";
// Paste selected text into the new document
Variant vk_converted_document_select =
vk_converted_document.OleFunction( "Select" );
Variant vk_converted_document_selection =
vk_word_app.OlePropertyGet( "Selection" );
vk_converted_document_selection.OleFunction( "PasteSpecial",
0, false, 0, false, 2 );
// Re-Select the text in the new document
vk_converted_document_select =
vk_converted_document.OleFunction( "Select" );
vk_converted_document_selection =
vk_word_app.OlePropertyGet( "Selection" );
// Close the original document
vk_this_doc.OleProcedure( "Close" );
// Let's do out clean-up here ...
Variant wordSelectionFind =
vk_converted_document_selection.OlePropertyGet( "Find" );
statusBar->Panels->Items[0]->Text = "Find & Replace...";
vk_timerTimer( Sender );
wordSelectionFind.OleFunction( "Execute", "^l",
false, false, false, false, false, true, 1, false,
" ", 2, false, false, false, false );
wordSelectionFind.OleFunction( "Execute", "^p", false,
false, false, false, false, true, 1, false,
" ", 2, false, false, false, false );
// Save the new document
vk_converted_document.OleFunction( "SaveAs", vk_converted_filename );
// Close the new document
vk_converted_document.OleProcedure( "Close" );
// -------------------------------------------------------------------
In the code above, we open an existing document with vk_this_doc = vk_word_doc.OleFunction( "Open", vk_filename );, then add a new document with Variant vk_converted_document = vk_word_doc.OleFunction( "Add" );.

Next we select the content of the existing document and paste it into the new document. We get a select object with Variant vk_this_doc_select = vk_this_doc.OleFunction( "Select" ); and a reference to the actual selection with Variant vk_this_doc_selection = vk_word_app.OlePropertyGet( "Selection" );, then copy the selection using vk_this_doc_selection.OleFunction( "Copy" );.

We perform the same steps on the new document with Variant vk_converted_document_select = vk_converted_document.OleFunction( "Select" ); and Variant vk_converted_document_selection = vk_word_app.OlePropertyGet( "Selection" );. With selection objects for both documents in hand, we paste in a special format using vk_converted_document_selection.OleFunction( "PasteSpecial", 0, false, 0, false, 2 );.

Now the original content has been pasted, in a special format, into the newly created document. We have to make a fresh Select call in the new document before doing our find and replace; then we get the Find object and run the replacement with wordSelectionFind.OleFunction( "Execute", "^l", false, false, false, false, false, true, 1, false, " ", 2, false, false, false, false );.
That's all there is to it!
Putting structure into a Word document is a challenging task, given that people author documents in many different ways. Nevertheless, it would help organizations to start modeling their documents. This would allow them to apply an XML schema to their documents and make extracting content from them much easier. It is a challenging task for most companies; usually they lack either the expertise or the resources, and such projects are huge in scale because they affect more than one functional business area. In the long run, though, having your documents driven by structured data rather than by formatting and loose conventions adds a lot of value to your organization.
1 What is Flask?
Flask started as an April Fools' Day joke by its author Armin Ronacher on April 1, 2010, but it became so popular that it grew into a serious web framework written in Python.
Flask is a web micro-framework written in Python that lets us quickly build a website or web service in Python. Before introducing Flask, let's compare it with Django. Django is a large, all-inclusive web framework with many built-in modules; Flask is a small, focused, lightweight framework that ships with only a basic core. Django's one-stop approach means developers don't have to spend a lot of time choosing application infrastructure before development: it has built-in templates, forms, routing, basic database management, and so on. Flask, on the contrary, is only a kernel, and by default it depends on two external libraries: the Jinja2 template engine and the WSGI toolkit Werkzeug. A distinctive feature of Flask is that almost everything beyond the core is pulled in as an extension via import; Flask itself keeps only the core functionality of web development.
WSGI (Web Server Gateway Interface) is a Python standard that specifies how web servers communicate with Python web applications. A WSGI server is essentially a socket server. The Werkzeug module is a concrete implementation of the WSGI toolset.
Key words: a Python micro web framework; one core and two libraries (the Jinja2 template engine and the WSGI toolkit Werkzeug)
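To make the WSGI contract concrete, here is a minimal bare-WSGI application: a sketch of the interface that Werkzeug implements and Flask builds on (the commented-out serving lines and the host/port are just illustrative):

```python
# A minimal WSGI application: a callable that takes the request environment
# and a start_response function, and returns an iterable of body bytes.
def application(environ, start_response):
    body = b"Hello from bare WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Werkzeug could serve it during development, e.g.:
# from werkzeug.serving import run_simple
# run_simple("127.0.0.1", 5000, application)
```

A Flask application object is itself such a callable, which is why any WSGI server can host it.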
2 Why Flask?
Flask's performance is good enough for typical web development, it beats many other web frameworks in flexibility and extensibility, and it works well with a wide range of databases.
Key words: 1. performance that meets typical needs; 2. flexible and extensible; 3. fits a wide range of databases;
4. in real production environments, small projects develop quickly and large projects stay flexible to design
3 Pre-class preparation: virtual environments
3.1 What is a virtual environment?
A virtual environment is an isolated Python interpreter environment. By creating one, you get an independent Python interpreter environment, effectively a private copy of the global one. The benefit is that you can create an independent interpreter environment for each project, because different projects often depend on different versions of libraries or even different Python versions. Using virtual environments keeps the global Python environment clean, avoids package and version confusion, and makes it easy to distinguish and record each project's dependencies. Since those dependencies can be recorded in a file, they can be copied across platforms, which makes it easy to reproduce the dependency environment on a new machine and so improves portability.
For example:
Example 1: if you work on many projects at once, say a crawler project, a Flask project, and a Django project, keeping them all in one environment inevitably makes managing their third-party libraries messy.
Example 2: if you have two Flask projects that require different Flask versions, keeping them in one environment causes a version conflict.
Keywords: 1. a private copy of the Python interpreter; 2. solves messy package management and version conflicts, and improves portability
3.2 how to use virtual environment?
3.2.1 Building a virtual environment
On Windows, we use virtualenv to manage virtual development environments. First, install the helper package:
pip install virtualenvwrapper-win
Using the installed module, we create a virtual environment:
## Note: 'first_01_env' is just the name we give the virtual environment. Record the installation path shown in figure (1) below; we will need it later.
mkvirtualenv first_01_env
Other commands related to virtual environment:
01. Switch to the specified virtual environment. Note that we normally need the workon command to enter a virtual environment, but mkvirtualenv automatically activates the environment right after creating it.
workon first_01_env
02. Exit the virtual environment
deactivate
03. Delete the specified virtual environment
rmvirtualenv first_01_env
04. List all virtual environments:
lsvirtualenv
05. Enter the directory where the virtual environment is located:
cdvirtualenv
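A quick way to confirm which interpreter an activated environment is using, and that it really is the "private copy" described in 3.1, is to ask Python itself:

```python
import sys

# Inside an activated virtual environment, sys.executable points at the
# environment's own interpreter and sys.prefix at the environment's root
# directory, rather than at the global installation.
print(sys.executable)
print(sys.prefix)
```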
3.2.2 Installing the Flask module in the virtual environment
pip install flask
Collecting flask
...
Successfully installed Jinja2-2.10 MarkupSafe-1.1.0 Werkzeug-0.14.1 click-7.0 flask-1.0.2 itsdangerous-1.1.0
As the "Successfully installed" output shows, five dependency packages are installed along with Flask itself. Their roles are summarized in table (1-1).
Table (1-1)
Jinja2 - the default template engine
MarkupSafe - safe HTML/XML string escaping, used by Jinja2
Werkzeug - the WSGI toolkit implementing requests, responses, and the development server
click - the toolkit behind Flask's command-line interface
itsdangerous - cryptographic signing of data, used for example to protect session cookies
For now, just get a general impression of these five libraries; we will use them in concrete applications later.
✔ Tip: these libraries were developed by the flash team
4 Start our first Flask program
Here, we use the PyCharm editor to learn Flask. We won't repeat the installation of PyCharm.
4.1 Create a Flask program
The specific operation is shown in figure (a) - figure (d)
First step
Figure (a)
Step 2
Figure (b)
Step 3
! Note: if the virtual environment path cannot be found, you can refer to the other virtual environment commands:
lsvirtualenv # List all virtual environments
workon first_01_env # Switch to the specified virtual environment
cdvirtualenv # Switch to the virtual environment's directory, which is the path we want
Figure (c)
Step 4
Figure (d)
4.2 Interpretation of the Flask program
4.2.1 detailed description of project directory:
The "static" folder is used to store various static files: css, js, pictures, etc.
The "templates" folder is used to store html template files.
"app.py" is our main file; it must be run to start the project.
Note that the app.py file can be named freely, except for the name flask.py, which conflicts with the flask library.
Main file app.py file code
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run()
4.2.2 the code is divided into three parts:
Part I
from flask import Flask

app = Flask(__name__)
Import the installed Flask package, and from it import the Flask class, which is the core of Flask. Instantiate the Flask class into the object app.
__name__: Python assigns this special variable for every module. For our program (app.py), the value is 'app' when the module is imported by name, and '__main__' when the file is executed directly.
Part II

@app.route('/')
def hello_world():
    return 'Hello World!'
If you know other web frameworks, you have probably seen something similar. This @app.route('/') is used to match the URL; in Flask it is implemented as a decorator, and the decorator is provided by the object instantiated from the core class above.
So what is the function below the route? It is our view function. When the route is matched, the view function is triggered and returns the specific data to the front end or mobile client.
It doesn't matter if this isn't fully clear yet; just form an impression for now. We will explain routing and view functions in detail in the next chapter.
Part III
if __name__ == '__main__':
    app.run()
Setting the logical judgment aside for a moment, look at app.run() and its source code. Reading on, it is not hard to find that the default ip + port is defined internally as 127.0.0.1:5000, and that werkzeug.serving is called to create a development server for us (provided by the dependency package Werkzeug). For those with some understanding of sockets: internally it runs a listening loop to handle interaction.
Key words: app.run() makes the Flask program run in the development environment; the default ip and port are 127.0.0.1:5000.
The third part also contains an if judgment. What is it for? Readers with a Python background are probably familiar with this idiom: the body of the if is executed only when this file is the one being executed. Why design it this way? In the development environment we use app.py as the execution file, but in a real production environment this file is imported as a module, and app.run() is not used for listening and dispatching there, because its performance is too low.
Key words: it ensures that app.run() is only used in the development environment and does not affect the real production environment.
Putting the three parts together
Import the Flask core class and instantiate the object app; use app as a decorator to bind matching URLs to the view functions below; then executing the file triggers app.run(), which runs the whole project.
4.2.2.1 introduction to Werkzeug
Werkzeug is a WSGI toolkit that can be used as the underlying library of a web framework. It is neither a web server nor a web framework itself, but a toolkit: it encapsulates many of the things a web framework needs, such as Request and Response objects.
Code example:
from werkzeug.wrappers import Request, Response

@Request.application
def hello(request):
    return Response('Hello World!')

if __name__ == '__main__':
    from werkzeug.serving import run_simple
    run_simple('localhost', 4000, hello)
Understanding
Notice how similar this Werkzeug code is to our Flask code. Indeed, Flask depends on the werkzeug module, which implements the socket-server functionality. In the example above, hello must be callable before the code in hello can execute. In Flask, app.run() calls run_simple(host, port, self, **options), replacing the hello of the example with self, that is, app. Calling app() triggers the Flask class's __call__ method.
So the entrance to the Flask program is the __call__ method, and __call__ returns self.wsgi_app(environ, start_response); the whole program's execution therefore happens inside self.wsgi_app(environ, start_response).
Summary:
1. app.run() calls werkzeug.serving's run_simple(host, port, self, **options).
2. self() is equivalent to app(); app() invokes the Flask class's __call__ method.
3. The Flask class's __call__ method returns self.wsgi_app(environ, start_response).
4. The Flask program's execution flow therefore happens inside self.wsgi_app(environ, start_response).
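The pattern described here — the app object itself acting as the WSGI callable, with `__call__` delegating to a `wsgi_app()` method — can be reproduced in a minimal, illustrative Python sketch (this is not Flask's actual code; `MiniApp` and `fake_start_response` are made-up names for the demonstration):

```python
# Minimal sketch of the Flask pattern: the app object is the WSGI
# callable, and __call__ just delegates to wsgi_app().
class MiniApp:
    def __call__(self, environ, start_response):
        # The WSGI server calls the app object directly...
        return self.wsgi_app(environ, start_response)

    def wsgi_app(self, environ, start_response):
        # ...and the real request handling lives here, so middleware
        # can wrap wsgi_app while the app object keeps its identity.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello World!"]

app = MiniApp()

# Drive it by hand, the way a WSGI server such as run_simple() would:
status_seen = {}
def fake_start_response(status, headers):
    status_seen["status"] = status

body = app({}, fake_start_response)
print(status_seen["status"], body)
```

Because `wsgi_app` is a separate method, middleware can wrap it (`app.wsgi_app = SomeMiddleware(app.wsgi_app)`) without replacing the app object itself.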
Specific code:
...
def __call__(self, environ, start_response):
    """The WSGI server calls the Flask application object as the
    WSGI application. This calls :meth:`wsgi_app` which can be
    wrapped to applying middleware."""
    return self.wsgi_app(environ, start_response)
...
def wsgi_app(self, environ, start_response):
    ctx = self.request_context(environ)
    error = None
    try:
        try:
            ctx.push()
            response = self.full_dispatch_request()
        except Exception as e:
            error = e
            response = self.handle_exception(e)
        except:
            error = sys.exc_info()[1]
            raise
        return response(environ, start_response)
    finally:
        if self.should_ignore_error(error):
            error = None
        ctx.auto_pop(error)
...
key word:
- Werkzeug is a WSGI toolkit; at its base it is a socket server.
- Flask is based on Werkzeug; Flask keeps only the core functionality needed for web development.
- A Flask request is executed inside def wsgi_app(self, environ, start_response):
4.2.3 Running the project
Run our Flask project as shown in figure (2), or right-click app.py and choose Run to start the project.
Figure (2)
Then visit the address in a browser; see figure (3).
Figure (3)
! It must be emphasized that in the future we should not create a Flask project with the Flask shortcut provided by PyCharm. The shortcut was used above because it is easy to explain and understand; in a real production environment it is more advisable to create an empty Python project directly.
4.2.4 detailed explanation of DEBUG mode
4.2.4.1 DEBUG mode solves two problems.
If an exception occurs in the Flask code, the browser will not normally show the specific error message. After debug mode is turned on, the specific error message is sent to the browser.
If the Flask code is modified, the project must normally be restarted for the changes to take effect. With debug mode enabled, pressing ctrl+s after modifying the code makes the Flask project reload automatically, without manually restarting the whole website.
Example 1:
In this example there is an obvious index-out-of-range problem:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    a = [1, 2, 3, 4]
    print(a[4])
    return "hello"

if __name__ == '__main__':
    app.run()
Access is shown in figure (4)
Figure (4)
As shown in figure (4), it only reports an internal server error, without the specific cause.
OK, let's add a parameter to app.run() and rewrite it as app.run(debug=True):
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    a = [1, 2, 3, 4]
    print(a[4])
    return "hello"

if __name__ == '__main__':
    app.run(debug=True)
Visit again as shown in figure (5)
Figure (5)
We see the specific error message IndexError: list index out of range
Pressing ctrl+s after every code change will automatically reload the Flask project code; this is not demonstrated here.
! It bears emphasizing: do not create the project with PyCharm's quick Flask template. Create it like an ordinary Python project, or open an empty folder, otherwise debug=True may not take effect.
4.2.4.2 Four ways to enable DEBUG
First
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    a = [1, 2, 3, 4]
    print(a[4])
    return "hello"

if __name__ == '__main__':
    app.run(debug=True)  # set up
Second
from flask import Flask

app = Flask(__name__)
app.debug = True  # set up

@app.route('/')
def hello():
    a = [1, 2, 3, 4]
    print(a[4])
    return "hello"

if __name__ == '__main__':
    app.run()
Third
from flask import Flask

app = Flask(__name__)
app.config.update(DEBUG=True)  # set up

@app.route('/')
def hello():
    a = [1, 2, 3, 4]
    print(a[4])
    return "hello"

if __name__ == '__main__':
    app.run()
Fourth
You need to create another file, config.py, in the same directory as app.py. As we go on, we will use this configuration file more and more to configure our Flask project. Note that configuration keys are generally written in capitals.
config.py
DEBUG = True
app.py
from flask import Flask
import config  # Import

app = Flask(__name__)
app.config.from_object(config)  # set up

@app.route('/')
def hello():
    a = [1, 2, 3, 4]
    print(a[4])
    return "hello"

if __name__ == '__main__':
    app.run()
app.config essentially inherits from dict: it is an instance of a dictionary subclass, as shown in figure (6).
Figure (6)
4.2.4.3 The debug PIN code can be used for debugging code in the browser (not recommended; just be aware of it).
* Debugger PIN: 648-906-962
Figure (7)
It can support debugging on the web side
Hash table
Ideally, the hash function should map each possible key to a unique slot index, but this ideal is rarely achievable in practice (unless the hash keys are fixed; i.e. new entries are never added to the table after it is created). Instead, most hash table designs assume that hash collisions—different keys that map to the same hash value—will occur and must be accommodated in some way.[2] In a well-dimensioned hash table, the average cost for each lookup is independent of the number of stored entries, and many designs also allow insertions and deletions at constant (amortized) cost per operation.[3][4]
In many situations, hash tables turn out to be more efficient than search trees or any other table lookup structure. For this reason, they are widely used in many kinds of computer software, particularly for associative arrays, database indexing, caches, and sets.
Hash function
Main article: Hash function
The index into the array of buckets is computed from the key by a hash function f:
index = f(key, arrayLength)
The hash function calculates an index within the array from the data key. arrayLength is the size of the array. For assembly language or other low-level programs, a trivial hash function can often create an index with just one or two inline machine instructions.
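As a rough illustrative sketch (Python's built-in `hash()` standing in for f here — a simplification for demonstration, not a recommended table hash function):

```python
# Compute a bucket index from a key, as described above:
# index = f(key, array_length), here realized as hash(key) mod array_length.
def bucket_index(key, array_length):
    return hash(key) % array_length

idx = bucket_index("alice", 8)
print(idx)  # always a valid index in range(8)
```

The modulo keeps the result inside the array no matter how large the raw hash value is; the same key always maps to the same slot within one process.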
Choosing a good hash function
A good hash function and implementation algorithm are essential for good hash table performance, but may be difficult to achieve. Poor hashing usually degrades hash table performance by a constant factor, but hashing is often only a small part of the overall computation. A basic requirement is that the function should provide a uniform distribution of hash values; a non-uniform distribution increases the number of collisions and the cost of resolving them.[5][6]
The distribution needs to be uniform only for the table sizes s that actually occur in the application; in particular, if one uses dynamic resizing with exact doubling and halving of s, the hash function needs to be uniform only when s is a power of two.[7] Some published hash functions do not have this property.
Perfect hash function
If all keys are known ahead of time, a perfect hash function can be used to build a hash table with no collisions at all, giving constant worst-case lookup time.
Collision resolution
Hash collisions are practically unavoidable when hashing a random subset of a large set of possible keys. For example, if 2,500 keys are hashed into a million buckets, even with a perfectly uniform random distribution, according to the birthday problem there is approximately a 95% chance of at least two of the keys being hashed to the same slot.
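The birthday-problem estimate for this example can be checked directly with the standard approximation for the probability that all keys land in distinct buckets:

```python
import math

n, s = 2500, 1_000_000  # keys, buckets
# Probability that all n keys land in distinct buckets is
# prod_{i=0}^{n-1} (1 - i/s), well approximated by exp(-n*(n-1)/(2*s)).
p_no_collision = math.exp(-n * (n - 1) / (2 * s))
p_collision = 1 - p_no_collision
print(round(p_collision, 3))  # about 0.956, i.e. ~95%
```

So even at a load factor of only 0.25%, at least one collision is almost certain.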
Load factor
The performance of most collision resolution methods does not depend directly on the number n of stored entries, but depends strongly on the table's load factor, the ratio n/s between n and the size s of its array of buckets. Sometimes this is referred to as the fill factor, as it represents the portion of the s buckets in the structure that are filled with one of the n stored entries. With a good hash function, the average lookup cost is nearly constant as the load factor increases from 0 up to 0.7 (about 2/3 full) or so. Beyond that point, the probability of collisions and the cost of handling them increase.
On the other hand, as the load factor approaches zero, the proportion of unused areas in the hash table increases, but there is not necessarily any improvement in the search cost, resulting in wasted memory.

Separate chaining

In the method known as separate chaining, each bucket is independent and holds a list of the entries that hash to the same index. Insertion requires adding a new entry record to either end of the list belonging to the hashed slot; deletion requires searching the list and removing the element. (The technique is also called open hashing or closed addressing, which should not be confused with 'open addressing' or 'closed hashing'.) The cost of the pointer in each entry record can be significant, and traversing a linked list has poor cache performance, making the processor cache ineffective.
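A chained hash table of this kind can be sketched in a few lines (illustrative only — real implementations add resizing and tombstone-free deletion):

```python
# Sketch of separate chaining: each bucket holds a list of (key, value)
# pairs, so colliding keys simply share a bucket.
class ChainedHashTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key: append to the chain

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

t = ChainedHashTable()
t.put("a", 1)
t.put("b", 2)
t.put("a", 3)                            # overwrites the first "a"
print(t.get("a"), t.get("b"))
```

Lookup cost is proportional to the chain length, which stays short as long as the load factor is moderate.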
Separate chaining with list heads
Some chaining implementations store the first record of each chain in the slot array itself.[4] The purpose is to increase cache efficiency of hash table access. To save memory space, such hash tables often have about as many slots as stored entries, meaning that many slots have two or more entries.
Separate chaining with other structures
Instead of a list, one can use any other data structure that supports the required operations. This is worthwhile if one expects many entries hashed to the same slot (e.g. if one expects extremely non-uniform or even malicious key distributions).
The variant called array hash table uses a dynamic array to store all the entries that hash to the same slot.[8][9][10]
Open addressing
In another strategy, called open addressing, all entry records are stored in the bucket array itself. When a new entry has to be inserted, the buckets are examined, starting with the hashed-to slot and proceeding in some probe sequence, until an unoccupied slot is found. Well-known probe sequences include linear probing, quadratic probing, and double hashing.[12]
A drawback of these schemes is their sensitivity to clustering of occupied slots, which lengthens probe sequences; even experienced programmers may find such clustering hard to avoid.
Open addressing only saves memory if the entries are small (less than about four times the size of a pointer) and the load factor is not too small; outside that range the savings are marginal, and other considerations typically come into play.
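Open addressing with linear probing can be sketched as follows (illustrative: no resizing or deletion, and it assumes the table never fills completely, otherwise the probe loop would not terminate):

```python
# Sketch of open addressing with linear probing: on a collision, scan
# forward (wrapping around) until an empty slot or the key is found.
EMPTY = object()  # sentinel marking an unoccupied slot

class LinearProbingTable:
    def __init__(self, size=8):
        self.slots = [EMPTY] * size

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not EMPTY and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)   # next slot, wrapping around
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key):
        slot = self.slots[self._probe(key)]
        if slot is EMPTY:
            raise KeyError(key)
        return slot[1]

t = LinearProbingTable()
t.put("x", 10)
t.put("y", 20)
print(t.get("x"), t.get("y"))
```

Because colliding entries sit in adjacent slots, linear probing has good cache locality, at the price of the clustering effect mentioned above.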
Coalesced hashing
A hybrid of chaining and open addressing, coalesced hashing links together chains of nodes within the table itself.
Robin Hood hashing
One interesting variation on double-hashing collision resolution is Robin Hood hashing.[13] The idea is that a new key may displace a key already inserted, if its probe count is larger than that of the key at the current position. The net effect of this is that it reduces worst-case search times in the table. This is similar to Knuth's ordered hash tables.[14] External Robin Hood hashing is an extension of this algorithm where the table is stored in an external file and each table position corresponds to a fixed-sized page or bucket with B records.[15]
Cuckoo hashing
Another alternative open-addressing solution is cuckoo hashing, which ensures constant lookup time in the worst case, and constant amortized time for insertions and deletions. It uses two or more hash functions, so each key has two or more candidate locations; with a careful choice of functions, high table utilisation can be achieved.
Hopscotch hashing
Another alternative open-addressing solution is hopscotch hashing,[16] which combines the approaches of cuckoo hashing and linear probing: each entry is kept within a small, fixed-size neighbourhood of its home bucket.
Dynamic resizing
To keep the load factor under a certain limit, e.g. under 3/4, many table implementations expand the table when items are inserted. For example, in Java's HashMap class the default load factor threshold for table expansion is 0.75.
To avoid a pause while the whole table is rehashed, resizing can also be done incrementally:
- Allocate the new hash table, but keep the old table and check both during lookups.
- On each insertion, also move r elements from the old table to the new table.
- When all elements are removed from the old table, deallocate it.
To ensure that the old table is completely copied over before the new table itself needs to be enlarged, it is necessary to increase the size of the table by a factor of at least (r + 1)/r during resizing.
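The more common all-at-once variant — grow and rehash everything as soon as the load factor passes the threshold — can be sketched like this (illustrative; chaining used for simplicity, and 0.75 mirrors the Java HashMap default mentioned above):

```python
# Sketch: grow-and-rehash when the load factor passes a threshold.
class ResizingTable:
    LOAD_LIMIT = 0.75                      # expansion threshold

    def __init__(self):
        self.buckets = [[] for _ in range(4)]
        self.count = 0

    def put(self, key, value):
        if (self.count + 1) / len(self.buckets) > self.LOAD_LIMIT:
            self._grow()
        self.buckets[hash(key) % len(self.buckets)].append((key, value))
        self.count += 1

    def _grow(self):
        old = self.buckets
        self.buckets = [[] for _ in range(2 * len(old))]  # double the size
        for bucket in old:                 # re-hash every stored entry,
            for key, value in bucket:      # since indices depend on size
                self.buckets[hash(key) % len(self.buckets)].append((key, value))

table = ResizingTable()
for i in range(10):
    table.put(i, i)
print(len(table.buckets))  # the table grew past its initial 4 buckets
```

Doubling the size keeps the amortized insertion cost constant even though individual `_grow()` calls are linear in the number of entries.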
Monotonic keys
If it is known that key values will always increase (or decrease) monotonically, then a variation of consistent hashing can be achieved by keeping a list of the single most recent key value at each hash table resize operation.
Other solutions
Linear hashing[17] is a hash table algorithm that permits incremental table expansion, useful when full rehashing is prohibitively costly.
Performance analysis
In the simplest model, the hash function is completely unspecified and the table does not resize. With the best possible choice of hash function, a table with open addressing has no collisions; with the worst choice of hash function, every insertion causes a collision, and hash tables degenerate to linear search, with Ω(n) amortized comparisons per insertion and up to n comparisons for a successful lookup. With a uniformly random hash function and n entries in a table of size s, chaining needs Θ(1 + n/s) comparisons on average for an unsuccessful lookup, and hashing with open addressing requires Θ(1/(1 − n/s)).[18] Both these bounds are constant if we maintain n/s < c using table resizing, where c is a fixed constant less than 1.
Features
Advantages
The main advantage of hash tables over other table data structures is speed, especially when the number of entries is large.
Drawbacks
Hash tables have poor worst-case behavior: a pathological (or adversarial) set of keys can drive every operation toward linear time. In critical applications, either universal hashing can be used or a data structure with better worst-case guarantees may be preferable.[19]
Uses
Associative arrays
Hash tables are commonly used to implement many types of in-memory tables. They are used to implement associative arrays (arrays whose indices are arbitrary strings or other complicated objects), especially in interpreted programming languages like AWK, Perl, and PHP.
Database indexing
Hash tables may also be used as disk-based data structures and database indices (such as in dbm) although B-trees are more popular in these applications.
Caches
Hash tables can be used to implement caches: auxiliary data tables that speed up access to data that is primarily stored in slower media.
Sets
Besides recovering the entry that has a given key, many hash table implementations can also tell whether such an entry exists, and so can implement sets, which merely record whether a given key belongs to a specified set of keys.
Object representation
Several dynamic languages, such as Perl, Python, JavaScript, and Ruby, use hash tables to implement objects. In this representation, the keys are the names of the members and methods of the object, and the values are pointers to the corresponding member or method.
Unique data representation
Hash tables can be used by some programs to avoid creating multiple character strings with the same contents. For that purpose, all strings in use by the program are stored in a single string pool implemented as a hash table, which is checked whenever a new string has to be created; duplicate strings are then shared rather than copied.
String interning
Main article: String interning
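Python exposes exactly this mechanism through `sys.intern`, which can serve as a small demonstration of the string-pool idea:

```python
import sys

# Two independently-built, equal strings...
a = sys.intern("".join(["hash", "_", "table"]))
b = sys.intern("".join(["hash", "_", "table"]))

# ...come back as the very same object after interning:
print(a is b)  # True: both names point at one pooled copy
```

After interning, equality checks on these strings reduce to a pointer comparison, which is why interpreters intern identifiers.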
Implementations
In programming languages
Many programming languages provide hash table functionality, either as built-in associative arrays or as standard library modules. In C++11, for example, the unordered_map class provides hash tables for keys and values of arbitrary type.
Python's built-in hash table implementation, the dict type, as well as Perl's hash type (%), are highly optimized as they are used internally to implement namespaces.
In the .NET Framework, support for hash tables is provided via the non-generic Hashtable and generic Dictionary classes, which store key-value pairs, and the generic HashSet class, which stores only values.
Independent packages
- Google Sparse Hash The Google SparseHash project contains several C++ hash-map implementations in use at Google, with different performance characteristics, including an implementation that optimizes for memory use and one that optimizes for speed. The memory-optimized one is extremely memory-efficient with only 2 bits/entry of overhead.
History
The idea of hashing arose independently in different places. In January 1953, H. P. Luhn wrote an internal IBM memorandum that used hashing with chaining.[20]
See also
- Rabin–Karp string search algorithm
- Stable hashing
- Consistent hashing
- Extendible hashing
- Lazy deletion
- Pearson hashing
Related data structures
There are several data structures that use hash functions but cannot be considered special cases of hash tables:
- Bloom filter, a structure that implements an enclosing approximation of a set, allowing insertions but not deletions.
- Distributed hash table (DHT), a resilient dynamic table spread over several nodes of a network.
- Hash array mapped trie, a trie structure, similar to the array mapped trie, but where each key is hashed first.
References
- ^ Thomas H. Corman [et al.] (2009). 'Introduction to Algorithms' (3rd ed.). Massachusetts Institute of Technology. pp. 253–280. ISBN 978-0-262-03384-8.
- ^ Charles E. Leiserson, Amortized Algorithms, Table Doubling, Potential Method Lecture 13, course MIT 6.046J/18.410J Introduction to Algorithms—Fall 2005
- ^. 221–252. ISBN 978-0-262-53196-2.
- ^ Karl Pearson (1900). "On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling". Philosophical Magazine, Series 5 50 (302): pp. 157–175.
- ^ Robin Plackett (1983). "Karl Pearson and the Chi-Squared Test". International Statistical Review (International Statistical Institute (ISI)) 51 (1): pp. 59–72.
- ^ a b Thomas Wang (1997), Prime Double Hash Table. Accessed April 11, 2009
- ^ Askitis, Nikolas; Zobel, Justin (2005). Cache-conscious Collision Resolution in String Hash Tables. LNCS 3772. pp. 91–102. doi:10.1007/11575832_11.
- ^ Askitis, Nikolas; Sinha, Ranjan (2010). Engineering scalable, cache and space efficient tries for strings. doi:10.1007/s00778-010-0183-9. ISSN 1066-8888 (Print), 0949-877X (Online).
- ^ (October 2005). "Exact distribution of individual displacements in linear probing hashing". Transactions on Algorithms (TALG) (ACM) 1 (2): 214–242. doi:10.1145/1103963.1103965.
- ^ Celis, Pedro (March 1988). External Robin Hood Hashing (Technical report). Computer Science Department, Indiana University. TR246.
- ^ Herlihy, Maurice and Shavit, Nir and Tzafrir, Moran (2008). "Hopscotch Hashing". DISC '08: Proceedings of the 22nd international symposium on Distributed Computing. Arcachon, France: Springer-Verlag. pp. 350–364.
- ^ Litwin, Witold (1980). "Linear hashing: A new tool for file and table addressing". Proc. 6th Conference on Very Large Databases. pp. 212–223.
- ^ Doug Dunham. CS 4521 Lecture Notes. University of Minnesota Duluth. Theorems 11.2, 11.6. Last modified 21 April 2009.
- ^ Crosby and Wallach's Denial of Service via Algorithmic Complexity Attacks.
- ^ a b Mehta, Dinesh P.; Sahni, Sartaj. Handbook of Datastructures and Applications. pp. 9–15. ISBN 1584884355.
Further reading
- "9: Maps and Dictionaries". Data Structures and Algorithms in Java (4th ed.). Wiley. pp. 369–418. ISBN 0-471-73884-0.
External links
- A Hash Function for Hash Table Lookup by Bob Jenkins.
- Hash Tables by SparkNotes—explanation using C
- Hash functions by Paul Hsieh
- Design of Compact and Efficient Hash Tables for Java
- Libhashish hash library
- NIST entry on hash tables
- Open addressing hash table removal algorithm from ICI programming language, ici_set_unassign in set.c (and other occurrences, with permission).
- A basic explanation of how the hash table works by Reliable Software
- Lecture on Hash Tables
- Hash-tables in C—two simple and clear examples of hash tables implementation in C with linear probing and chaining
- MIT's Introduction to Algorithms: Hashing 1 MIT OCW lecture Video
- MIT's Introduction to Algorithms: Hashing 2 MIT OCW lecture Video
- How to sort a HashMap (Java) and keep the duplicate entries
- Concurrent Data Structures (libcds) - a C++ template library of various lock-free hash table containers
Wikimedia Foundation. 2010. | http://en.academic.ru/dic.nsf/enwiki/8250 | CC-MAIN-2017-39 | refinedweb | 1,998 | 55.24 |
Simply save a variable
How would you save a variable such that it is remembered after the app is closed or deleted from the multitasking menu?
# how do you "save" a?
a = 10
This thread has a solution for that:
There might be others.
Better to use a file with a name starting with . (dot) and (de)serialize JSON or whatever there. The keychain is not a good idea for this purpose: it contains passwords, certs, tokens, ..., and is synced across devices via the cloud. Unless you do want to store that kind of data, I would choose another solution.
If all you want to store is a few values or simple data structures, a JSON file is probably the easiest option. JSON's data types are basically the same as Python's None, bool, int/float, str, list and dict, and JSON can be read or created from/to such objects using the json module. This is especially useful if the data you want to store is already a list or dict containing simple values.
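A minimal sketch of that approach (the file name "settings.json" is arbitrary; the file is cleaned up at the end only so the demo leaves no trace):

```python
import json
import os

PATH = "settings.json"  # arbitrary file name for this sketch

def save(data):
    # Serialize the dict to a JSON file in the current directory.
    with open(PATH, "w") as f:
        json.dump(data, f)

def load(default):
    # Fall back to a default if nothing has been saved yet.
    if not os.path.exists(PATH):
        return default
    with open(PATH) as f:
        return json.load(f)

save({"a": 10})
restored = load({"a": 0})
print(restored)  # the saved value survives a restart of the script
os.remove(PATH)  # clean up after the demo
```

Because the value round-trips through a file, it survives the app being closed or removed from the multitasking menu, which is exactly what the original question asked for.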
To store more complicated Python objects, the pickle module may be a better option. Many built-in and custom objects can be pickled, though if you've written complex custom classes you may need to add some pickling support yourself so that the objects are properly saved and restored.
If you're working with large sets of data and know a thing or two about databases and SQL, you could also use a database and the sqlite3 module. Building a database is not as easy and straightforward as the other two options, though.
I have also used the Python ConfigParser module for very simple settings. Like .ini files on Windows.
If a new file is created using f.write() for example, how does iOS handle it? If a user restores a phone from iCloud backup, will that file be restored with the app?
Or do I have to do something to configure this?
The file is stored in the iOS file system (yes, iOS does have a file system). Now, where it is saved depends on what the current directory is. When you first run your script, the current directory is the folder your script is in. Your script can change that, though, by calling os.chdir("some directory here") (you'll have to import the os module using import os). Remember, Pythonista only has access to its own app container/folder. This means that your script will get a permission-denied error if you try to change directories into other apps' folders or iOS system folders. Your script can only access files and folders that are part of Pythonista, including Pythonista's own system files that make the app run. You (probably) won't have access to any of omz's code, but still, do not muck around with files and folders that are not in your script library unless you know what you're doing.
To answer your other question about iCloud backups, assuming the user has enabled the backup of app data to iCloud, your files should be restored, but don't count on anything. Always be prepared to regenerate the files if necessary.
The above code does not close the file which
- leaks memory,
- relies on the interpreter to properly flush the writes, etc. to file.
See:
Properly closing files is even more important in Pythonista than with a regular CPython runtime. Because the Pythonista app runs a single Python process until it is unloaded, the file object might not get garbage-collected even if you run a new script. | https://forum.omz-software.com/topic/1772/simlply-save-a-variable | CC-MAIN-2017-43 | refinedweb | 587 | 72.36 |
Create an intelligent, web-enabled power meter with Arduino
Electricity costs are rising and, as if this were not enough, if you exaggerate with consumption your house circuit often trips at the very moment of your greatest need. This project allows you to keep an eye on your instantaneous electricity consumption, even remotely.
The acquisition of current consumption rates takes place by means of small clamps placed on top of the phase conductor of the power line section you want to control. The acquisition system is based on a commercial product available in our store, the FR491. The kit includes a meter clamp to measure current plus a data transmission unit. The transmission occurs at 433.92 MHz and can cover about thirty meters. The central unit gathers samples sent via radio by the peripheral devices and stores them and shows a series of information about consumption, usage bursts and relative costs, by means of a nice web page.
The central unit is also equipped with an RJ45 output that allows you to connect it to a PC via a TTL-USB converter. Our goal is to connect the central unit to an Arduino Uno (instead of a PC) and implement a web-enabled monitoring system. To do this, we bypass the TTL-USB converter and connect the output of the controller directly to the D6 (RX) and D7 (TX) pins on the Arduino. The Arduino board mounts two shields. The Ethernet shield allows you to do both: send data to the emoncms remote service for collection, storage and presentation, and respond directly to the web calls used to monitor the status of sensors and alarms and to manage the threshold above which the alarm is triggered.
The shield must be connected via the Ethernet connector to a network that allows access to the internet. The second shield is the local monitoring and alarm-management unit, and its wiring diagram is visible in the section. The shield has a relay output and a buzzer, triggered if the consumption exceeds the threshold set via the web page. It also sports two buttons (besides the reset button) that can be pressed to pause the alarm function. The software sketch is made so that the alarm is reactivated as soon as the consumption goes back below the threshold that caused the trip. A LED indicates the general operational state of the system.
From the embedded web page you can activate and deactivate the alarm, pause it and set a new alarm threshold; upon alarm activation, the buzzer plays a melody.
Regarding emoncms: obviously you cannot install the server on the Arduino, so we will use the functionality available from the emoncms website itself, integrating it with the page served by the Arduino web server.
Diagram of the shield
The shield is powered from Arduino's 12 V rail, which is used to power the relay coil and the buzzer.
Now let's analyze the serial line coming from the FR491 central controller. The output signal taken from line 8 of the RJ45 connector is connected to pin D6 (RX) of the Arduino, while the input signal on line 7 of the RJ45 connector is connected to pin D7 (TX) of the Arduino. To use Arduino's D6 and D7 pins for serial communication you need, in the sketch, to use the SoftwareSerial library.
The alarm outputs activated on a threshold breach are pin D4 for the relay, driven by the switching transistor T1, and pin D5 for the buzzer, driven by the switching transistor T2. Pin D3 drives LD1.
The FT1046 shield also has two additional buttons: P1, connected to pin D9, to activate and deactivate the alarm locally, and P2, connected to pin D8, to pause the alarm function.
The sketch for Arduino
#include <EtherCard.h>
#include <SoftwareSerial.h>
#include <EEPROM.h>

#define DEBUG 1 // set to 1 to display debug info via serial link

int ledalarm = 3;  // pin the LED is on (you can't use the onboard one as the ethernet card uses it)
int pulsalarm = 9;
int pulspause = 8;
int alarmout = 4;
int speakerPin = 5;
char TextBox1[10];  // Data in text box
int length = 15;    // the number of notes
char notes[] = "ccggaagffeeddc ";  // a space represents a rest
int beats[] = { 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 4 };
int tempo = 300;
boolean alarm = 0;

#define APIKEY "**************"   // your emoncms API key
#define emonlink ""               // URL of the emoncms graph page (elided in the original)
// NOTE: the mymac[], myip[], gwip[] and dnsip[] address arrays used below
// were lost in extraction; declare them here for your own network.
char website[] PROGMEM = "emoncms.org";
static const byte hisip[] = { 213,138,101,177 };

unsigned long previousMillis_power = 0;
int power_postrate = 10000; // post every 10 s
int watts = 0;
int maxwatts = 3000;
byte inData1[10]; // Allocate some space for the string
byte inData2[10];
//byte inData3[10];
boolean pause = 0;
char str[20];

SoftwareSerial mySerial(6, 7); // RX, TX

// web page buffer
byte Ethernet::buffer[800];
BufferFiller bfill;

// store html header in flash to save memory
char htmlHeader[] PROGMEM =
  "<html><head><title>Power meter by Boris</title></head><body>"
  "<h2 style='text-align: center;'><em><span style='color: rgb(153, 0, 0);'>Power meter by Boris</span></em></h2>";

// ----------------------------------------------
// HTML page to display
static word homePage() {
  checkmcee();
  bfill = ether.tcpOffset();
  maxwatts = EEPROM_readint(0);
  Serial.print("maxwatts in mem : ");
  Serial.println(maxwatts);
  Serial.print("alarm in mem : ");
  Serial.println(EEPROM_readint(5));
  // read status of the alarm
  char* alarmstat;
  if (EEPROM_readint(5) == 1) {
    alarmstat = "On";
  } else {
    alarmstat = "Off";
  }
  bfill.emit_p(
    PSTR(
      "$F<p>" // $F = htmlHeader in flash memory
      "<center>"
      "<iframe src='$S' width='1000' height='500' frameborder='0'></iframe><br/>"
      "<em>Alarm: $S <br/>"
      "Alarm threshold: $D <br/>"
      "</em></p><p>"
      "<A HREF='?cmd=on'>Alarm on</A><br/><br/>"
      "<A HREF='?cmd=off'>Alarm off</A><br/><br/><br/>"
      "<A HREF='?cmd=pause'>Alarm pause</A><br/><br/>"
      "<FORM>Insert alarm threshold <input type=text name=maxwatts size=4 value=$D> <input type=submit value=Enter> </form> <br/><br/>"
      "<A HREF=''>More info</A>"
      "</p></center></body></html>"
    ),
    htmlHeader, emonlink, alarmstat, maxwatts, maxwatts);
  return bfill.position();
}

// ----------------------------------------------
void setup() {
  pinMode(ledalarm, OUTPUT);
  pinMode(alarmout, OUTPUT);
  pinMode(speakerPin, OUTPUT);
  pinMode(pulsalarm, INPUT);
  digitalWrite(pulsalarm, HIGH);   // enable internal pull-up
  pinMode(pulspause, INPUT);
  digitalWrite(pulspause, HIGH);
  Serial.begin(9600);
  Serial.println("\n[webClient]");
  if ((EEPROM.read(10) != 10)) {
    Serial.println("First power on");
    EEPROM.write(10, 10);
    EEPROM_writeint(0, maxwatts);
    alarmoff();
  } else {
    Serial.println("Data on eeprom already stored");
  }
  maxwatts = EEPROM_readint(0);
  Serial.print("maxwatts in mem : ");
  Serial.println(maxwatts);
  Serial.print("alarm in mem : ");
  Serial.println(EEPROM_readint(5));
  if (EEPROM_readint(5) == 1) {
    digitalWrite(ledalarm, HIGH);
  }
  if (!ether.begin(sizeof Ethernet::buffer, mymac, 10))
    Serial.println("Failed to access Ethernet controller");
  else
    Serial.println("Ethernet controller initialized");
  ether.staticSetup(myip, gwip, dnsip);
  ether.copyIp(ether.hisip, hisip);
  ether.printIp("IP Address:\t", ether.myip);
  ether.printIp("Netmask:\t", ether.mymask);
  ether.printIp("Gateway:\t", ether.gwip);
  ether.printIp("DNS:\t", ether.dnsip);
  ether.printIp("SRV: ", ether.hisip);
  mySerial.begin(9600);
}

// ----------------------------------------------
void loop() {
  word len = ether.packetReceive();
  word pos = ether.packetLoop(len);
  if (pos) {
    Serial.print("pos ");
    Serial.println(pos);
    printpage(pos);
  }
  if ((millis() % 1000) == 0) {
    Serial.println(millis());
    delay(50);
    if (pause == 1) {
      digitalWrite(ledalarm, HIGH);  // blink while paused
      delay(200);
      digitalWrite(ledalarm, LOW);
      delay(200);
    } else {
      if (alarm == 1) {  // the published sketch had "alarm=1" (assignment); "==" is intended
        digitalWrite(ledalarm, HIGH);
      } else {
        digitalWrite(ledalarm, LOW);
      }
    }
  }
  if ((millis() - previousMillis_power) > (power_postrate)) {
    previousMillis_power = millis();
    checkmcee();
    Serial.print("watts ");
    Serial.println(watts, DEC);
    pubblica();
    checkalarm();
  }
  if (digitalRead(pulsalarm) == 0) { // toggle alarm
    if (EEPROM_readint(5) == 1) {
      alarmoff();
      delay(500);
    } else {
      alarmon();
      delay(500);
    }
  }
  if (digitalRead(pulspause) == 0) { // pause alarm
    pause = 1;
    Serial.println("pause on");
  }
}

void alarmon() {
  digitalWrite(ledalarm, HIGH); // set the LED on
  EEPROM_writeint(5, 1);        // alarm on
  Serial.println("alarm on");
  alarm = 1;
}

void alarmoff() {
  digitalWrite(ledalarm, LOW);  // set the LED off
  EEPROM_writeint(5, 0);        // alarm off
  Serial.println("alarm off");
  alarm = 0;
}

void printpage(word pos) {
  char* bufferdata = (char *) Ethernet::buffer + pos;
  Serial.println("----------------");
  Serial.println("data received:");
  Serial.println(bufferdata);
  Serial.println("----------------");
  // "on" command received
  if (strncmp("GET /?cmd=on", bufferdata, 12) == 0) {
    alarmon();
  }
  // "off" command received
  if (strncmp("GET /?cmd=off", bufferdata, 13) == 0) {
    alarmoff();
  }
  // "pause" command received
  if (strncmp("GET /?cmd=pause", bufferdata, 13) == 0) {
    Serial.println("pause received");
    pause = 1;
  }
  // read data from text box
  if (strncmp("GET /?maxwatts=", bufferdata, 11) == 0) {
    Serial.print("text box input received - ");
    if (ether.findKeyVal(bufferdata + 6, TextBox1, sizeof TextBox1, "maxwatts") > 0) {
      Serial.print("input = ");
      Serial.println(TextBox1);
      EEPROM_writeint(0, atoi(TextBox1));
    }
  }
  ether.httpServerReply(homePage()); // send web page data
}

void checkmcee() {
  Serial.println("checkmcee ");
  int data = 0;
  int index = 0;
  mySerial.write((byte)0xAA);   // poll the FR491 central unit
  mySerial.write((byte)0x02);
  mySerial.write((byte)0x00);
  mySerial.write((byte)0xAD);
  delay(500);
  while (mySerial.available() > 0) {
    byte inByte = mySerial.read();
    //Serial.write(inByte);
    if (inByte == 'S') {
      index = 0;
      data++;
    }
    switch (data) {
      case 1:
        inData1[index] = inByte; // Store it
        index++;
        break;
      case 2:
        inData2[index] = inByte; // Store it
        index++;
        break;
    }
  }
  watts = 0;
  watts = (inData2[6] << 8) + inData2[5];
}

void pubblica() {
  str[0] = '\0';
  srtJSON(str);                  // Start JSON (helper defined elsewhere in the full sketch)
  addJSON(str, "realP", watts);  // JSON line 1 - add more lines as needed
  endJSON(str);
  Serial.println(str);
  Serial.println(strlen(str));
  ether.browseUrl(PSTR("/api/post?apikey="APIKEY"&json="), str, website, my_result_cb);
}

void checkalarm() {
  if (alarm == 1) {              // "==" intended; the published sketch had "alarm=1"
    if (watts > maxwatts) {
      if (pause == 0) {
        if (EEPROM_readint(5) == 1) { // if alarm on
          digitalWrite(alarmout, HIGH);
          play();
        }
      }
    } else {
      pause = 0;
      digitalWrite(alarmout, LOW);
    }
  }
}

// called when the client request is complete
static void my_result_cb(byte status, word off, word len) {
  Serial.print("<<< reply ");
  //Serial.print(millis() - timer);
  //Serial.println(" ms");
  Serial.println((const char*) Ethernet::buffer + off);
}

// write word to EEPROM
void EEPROM_writeint(int address, int value) {
  EEPROM.write(address, highByte(value));
  EEPROM.write(address + 1, lowByte(value));
}

unsigned int EEPROM_readint(int address) {
  unsigned int word = word(EEPROM.read(address), EEPROM.read(address + 1));
  return word;
}

void play() {
  for (int i = 0; i < length; i++) {
    if (notes[i] == ' ') {
      delay(beats[i] * tempo); // rest
    } else {
      playNote(notes[i], beats[i] * tempo);
    }
    // pause between notes
    delay(tempo / 2);
    // stop the melody if either button is pressed
    if ((digitalRead(pulsalarm) == 0) || (digitalRead(pulspause) == 0)) {
      break;
    }
  }
}
// Note: playNote() (the helper from the classic Arduino "Melody" tutorial)
// and the srtJSON/addJSON/endJSON helpers used in pubblica() are assumed to
// be defined elsewhere in the full sketch.
At the beginning of the sketch we find all the variable definitions needed to talk with the emoncms server, along with those related to the Ethernet interface configuration. In this case you have to set the MAC address of your Ethernet shield (usually printed on a label affixed to the shield itself) and a static IP address compatible with the addressing scheme of your home network.
The setup block configures the Arduino pins for alarm management and initializes the debug serial port. On the first system start, the EEPROM is initialized with 3000 as the threshold value for triggering an alarm. Furthermore, setup configures the Ethernet port and initializes the serial port for communication with the FR491 control unit.
The loop block runs continuously. Its first action is to listen for external HTTP requests, intercepting and handling them according to the "actions" received.
The statements checking for external http requests are:
word len = ether.packetReceive();
word pos = ether.packetLoop(len);
Calling the Arduino board's IP address from an external browser returns a basic emoncms display. On the page you'll find the links and the text field to set the alarm threshold and to enable, disable, and pause the alarm.
If there are no requests, at each interval set by the power_postrate variable (ten seconds), the measurements stored in the FR491 control unit are read using the checkmcee() function.
To query the FR491 you need to send the following commands:
mySerial.write((byte)0xAA);
mySerial.write((byte)0x02);
mySerial.write((byte)0x00);
mySerial.write((byte)0xAD);
Doing this, the FR491 will respond with the total consumption.
The FR491 control unit responds by sending a string as per the following example:
S01\0xff\0xff\0x00\0x002
S02\0xb73\0x00\0x002
……
S16\0xff\0xff\0x00\0x002\0xf3
The sample regarding the consumption of the sensor we're interested in is in positions 5 and 6 (after the "S") of the S02 string, with the least significant byte (5) first and the most significant (6) following. You can convert it with:
watts=(inData2[6]<<8) + inData2[5];
Once the data is acquired, it is then published on the emoncms server by the pubblica() method, using an HTTP call carrying a JSON payload.
Finally checkalarm() checks if the value exceeds the alarm threshold or not. If so the relay connected to Arduino’s D4 pin is activated. Depending on the setup (alarm is activated, pause is activated) the buzzer on D5 gets activated as well.
The loop() also checks the P1 and P2 button states to see whether activation or deactivation of the alarm (P1), or pause (P2), was requested. If the P1 button is pressed, it triggers the same functions activated via the web page commands: alarmon() and alarmoff(). If you press P2, the pause variable is set to "1" to control the operation of checkalarm() (LD1 blinks if the alarm is activated).
The web page also gives access to a text field to set the threshold, in watts, for alarm activation, plus the Enter button (handled in printpage()).
The body of the web page is composed and sent by the function homepage() invoked by the statement:
ether.httpServerReply(homePage());
While composing the page an “iframe” contains the graphics to display the consumption data produced by the emoncms server.
Before using the whole system you need to configure the emoncms server. To configure the server follow the steps below.
Emoncms Server Configuration
The first thing to do is to create an account on emoncms.
After registering go to the Input panel and click on the top right link, “Input API Help”. This will open a page where there are two long data strings: the API keys for reading and writing. We will use the writing Apikey as an authentication key inside the JSON message sent by the program that runs on ARDUINO.
First thing to do now, is to add to the program sketch the following define:
#define APIKEY "**************" // your API key
Copy your write APIKEY into the sketch, replacing the placeholder.
Compile the code and upload it to the Arduino. With everything connected, let it run for a few minutes.
Go back and open the Input panel, which should no longer be empty. Expand it by clicking on "+" and you should see the name of your input, "realP", on the page.
Emoncms Inputs only create a slot for the incoming values: a kind of port assigned to each channel. To create the containers where data is stored you need to create Feeds. We do this from the Input page: double click on the blue name of the "realP" input to open the Feed creation page, give the feed a name, for example the same name "realP", and press ADD.
Go back to the feed page and click the “+” to expand.
Now we can move on to composing the page graphics by selecting Dashboard, which brings up a page where the graphic pages are listed (you'll see "No dashboards created").
To create our dashboard, click on the small round button at the top right with the "+". You'll see a line indicating a reference to a still-empty dashboard and, beneath it, a series of small buttons labelled Action. Mouse over them to see their meanings. Click on the "Published" and "Public" icons to make the page available externally, then click on "Draw" and you'll get a blank panel where we can begin to compose the graphics.
To insert the elements in the page you can choose from the combo box with Widgets, Text, Containers and Visualizations.
To compose the graphics choose a component from the combo box in the menu, move the cursor where you want to place it and click with the left mouse button to fix it.
Then associate the feed to the component. You can do it immediately or later by selecting the component and pressing the Option button.
The components can be placed wherever you want: hold down the left mouse button and drag. Don't forget to save.
Let’s start from a gauge to display the power in Watts.
Choose Dial from the Widgets combo box and just place it somewhere. Now choose Option and configure power and appearance.
Select the "realP" feed, set the Max value (for example 40), set the scale factor to 1 and the units field to "W".
With “type” you can then choose the appearance you want to give to the Dial.
Remember to save, and try to re-run the program that reads the control unit of the power consumption.
You should see the value in watts at the center of the indicator, and the pointer should move to a position that graphically represents the value.
Finally, we can add a chart of consumption over time: from the “Visualizations” combo box select “realtime”, put the graph where you want and customize the options. Always choose “realP” feed: (with fill = 1 the chart area below the line is filled) and units = W. Save the chart options and save the dashboard.
Set a name for the dashboard by clicking on the icon shaped like a small wrench. Enter a name on the configuration page, such as "power", and save. Once the entire dashboard is saved, select the "Dashboard" panel and then click on the "list" icon (nine dots in the form of a square). From here click on the eyeball ("view").
Now you must compose the link to be included in the Arduino sketch to feed the iframe. Copy the dashboard URL and compose the link to be included in the "#define emonlink" according to: <user> + &id= + <URL id number>,
in our case:
#define emonlink “”
Now recompile and reload the sketch onto the Arduino. We now can use our system in normal mode.
From a browser window, insert the address we assigned to the Arduino. You will then see the screen shown in Fig 12 where you can see the status of the alarm and the threshold level set. Below are the links to activate, deactivate and “pause” the alarm plus the field where you can enter a new threshold level and the button to send the configuration.
Let's look at the behavior of the LD1 LED, the relay and the buzzer.
When “Alarm Off” the LED is off: if consumption exceeds the set threshold, nothing happens.
With "Alarm On" (and "Alarm Pause" off) the LED is lit to indicate the system is armed; if the alarm threshold is exceeded, the relay is activated and the buzzer plays the melody set in the sketch. With "Alarm On" and "Alarm Pause" activated, the LED flashes if the threshold is exceeded; the relay is still activated but the buzzer does not sound. When consumption drops back below the threshold, "Alarm Pause" turns off and the system returns to "Alarm On" with the LED lit.
Dynamic Template Columns in the ASP.NET 2.0 GridView Control
One of the nice things about ASP.NET is its depth: the sheer number of tools and techniques built into this Web application framework can be quite staggering. Recently I was involved in a project where we needed to present the results of a database query as part of an ASP.NET application, but we needed extensive control over the on-screen formatting, down to the level of controls used to present individual columns of data, CSS classes used, and more. To make matters even trickier, we didn't know until runtime what the query would be. After some discussion and experimentation among the design team, though, we decided that there was no need to buy a third-party control to handle these demands. The built-in GridView could handle all of our requirements. The key lay in understanding and using the little-known ability to add columns to the GridView dynamically using templates at runtime.
A GridView template is a class that implements the ITemplate interface. It defines the controls that will be displayed on the GridView in a column, how they will bind to data, and can have special-case code to handle headers and footers. In this article I'll show you a simplified example of building a GridView up from scratch using a template to respond to a dynamic query; the technique can be extended to cover much more complex situations.
The Template Class
Let's start with the Template class itself. This is the class that holds the code that will do the actual heavy lifting of putting controls in the DataGrid, as well as formatting them and binding them to data. It starts off with some private member variables and a constructor to set them:
// dynamically added label column
public class GridViewLabelTemplate : ITemplate
{
    private DataControlRowType templateType;
    private string columnName;
    private string dataType;

    public GridViewLabelTemplate(DataControlRowType type, string colname, string DataType)
    {
        templateType = type;
        columnName = colname;
        dataType = DataType;
    }
The next block of code gets called whenever an instance of this template is instantiated. If you think of a template as corresponding to a column in the GridView, this happens every time a header, cell, or footer of the GridView is created for that column. You can inspect the templateType member to figure out which of these is the case.
Here, you want to create whatever control or controls you need to display the data. You're not limited to a single control, though for this article I'm only using one label for display. You can also do whatever you need to format the control to your liking. I'm going to grab the container for the control (which ends up being the wrapping table cell) and set its CSS style so that I can right-justify numeric columns. This method also sets up for data-binding by registering an event handler.
public void InstantiateIn(System.Web.UI.Control container)
{
    DataControlFieldCell hc = null;
    switch (templateType)
    {
        case DataControlRowType.Header:
            // build the header for this column
            Literal lc = new Literal();
            lc.Text = "<b>" + BreakCamelCase(columnName) + "</b>";
            container.Controls.Add(lc);
            break;
        case DataControlRowType.DataRow:
            // build one row in this column
            Label l = new Label();
            switch (dataType)
            {
                case "DateTime":
                    l.CssClass = "ReportNoWrap";
                    break;
                case "Double":
                    hc = (DataControlFieldCell)container;
                    hc.CssClass = l.CssClass = "ReportNoWrapRightJustify";
                    break;
                case "Int16":
                case "Int32":
                    hc = (DataControlFieldCell)container;
                    hc.CssClass = l.CssClass = "ReportNoWrapRightJustify";
                    break;
                case "String":
                    l.CssClass = "ReportNoWrap";
                    break;
            }
            // register an event handler to perform the data binding
            l.DataBinding += new EventHandler(this.l_DataBinding);
            container.Controls.Add(l);
            break;
        default:
            break;
    }
}
As you'd expect, the event handler you set up for databinding gets called when data is bound to the GridView. In this case, I'm going to use this event handler to do some formatting of the bound data:
private void l_DataBinding(Object sender, EventArgs e)
{
    // get the control that raised this event
    Label l = (Label)sender;
    // get the containing row
    GridViewRow row = (GridViewRow)l.NamingContainer;
    // get the raw data value and make it pretty
    string RawValue = DataBinder.Eval(row.DataItem, columnName).ToString();
    switch (dataType)
    {
        case "DateTime":
            l.Text = String.Format("{0:d}", DateTime.Parse(RawValue));
            break;
        case "Double":
            l.Text = String.Format("{0:###,###,##0.00}", Double.Parse(RawValue));
            break;
        case "Int16":
        case "Int32":
            l.Text = RawValue;
            break;
        case "String":
            l.Text = RawValue;
            break;
    }
}
The last thing in my template class is a little helper method that's used in displaying column headers. Here I'm making an assumption about naming conventions in my database - that column names are all CamelCase, and that I'd prefer to display these on the GridView interface as individual words broken at the obvious points.
// helper method to convert CamelCaseString to Camel Case String
// by inserting spaces
private string BreakCamelCase(string CamelString)
{
    string output = string.Empty;
    bool SpaceAdded = true;
    for (int i = 0; i < CamelString.Length; i++)
    {
        if (CamelString.Substring(i, 1) == CamelString.Substring(i, 1).ToLower())
        {
            output += CamelString.Substring(i, 1);
            SpaceAdded = false;
        }
        else
        {
            if (!SpaceAdded)
            {
                output += " ";
                output += CamelString.Substring(i, 1);
                SpaceAdded = true;
            }
            else
                output += CamelString.Substring(i, 1);
        }
    }
    return output;
}
Last Updated on December 11, 2019
Many machine learning algorithms make assumptions about your data.
It is often a very good idea to prepare your data in such way to best expose the structure of the problem to the machine learning algorithms that you intend to use.
In this post you will discover how to prepare your data for machine learning in Python using scikit-learn.
How To Prepare Your Data For Machine Learning in Python with Scikit-Learn
Photo by Vinoth Chandar, some rights reserved.
Need For Data Preprocessing
You almost always need to preprocess your data. It is a required step.
A difficulty is that different algorithms make different assumptions about your data and may require different transforms. Further, when you follow all of the rules and prepare your data, sometimes algorithms can deliver better results without the preprocessing.
Generally, I would recommend creating many different views and transforms of your data, then exercise a handful of algorithms on each view of your dataset. This will help you to flush out which data transforms might be better at exposing the structure of your problem in general.
Preprocessing Machine Learning Recipes
This section lists 4 different data preprocessing recipes for machine learning.
All of the recipes were designed to be complete and standalone.
You can copy and paste them directly into your project and start working.
The Pima Indian diabetes dataset is used in each recipe. This is a binary classification problem where all of the attributes are numeric and have different scales. It is a great example of a dataset that can benefit from preprocessing.
Each recipe follows the same structure:
- Load the dataset from a URL.
- Split the dataset into the input and output variables for machine learning.
- Apply a preprocessing transform to the input variables.
- Summarize the data to show the change.
The transforms are calculated in such a way that they can be applied to your training data and any samples of data you may have in the future.
The scikit-learn documentation has information on how to use the various preprocessing methods, and you can review the preprocessing API there.
1. Rescale Data
When your data is comprised of attributes with varying scales, many machine learning algorithms can benefit from rescaling the attributes to all have the same scale.
Often this is referred to as normalization, and attributes are often rescaled into the range between 0 and 1. This is useful for optimization algorithms used in the core of machine learning algorithms, like gradient descent. It is also useful for algorithms that weight inputs, like regression and neural networks, and algorithms that use distance measures, like K-Nearest Neighbors.
You can rescale your data using scikit-learn using the MinMaxScaler class.
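The code listing for this recipe can be sketched as follows; a small inline array is used here in place of the Pima Indians data, so the numbers are purely illustrative:

```python
from numpy import array
from sklearn.preprocessing import MinMaxScaler

# stand-in for the input variables (X) of the Pima dataset
X = array([[1.0, 200.0],
           [2.0, 300.0],
           [3.0, 400.0]])

# rescale each column into the range [0, 1]
scaler = MinMaxScaler(feature_range=(0, 1))
rescaledX = scaler.fit_transform(X)
print(rescaledX)
```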
After rescaling you can see that all of the values are in the range between 0 and 1.
2. Standardize Data
Standardization is a useful technique to transform attributes with a Gaussian distribution and differing means and standard deviations to a standard Gaussian distribution with a mean of 0 and a standard deviation of 1.
It is most suitable for techniques that assume a Gaussian distribution in the input variables and work better with rescaled data, such as linear regression, logistic regression and linear discriminant analysis.
You can standardize data using scikit-learn with the StandardScaler class.
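A minimal sketch of this recipe, again with a small illustrative array in place of the Pima data:

```python
from numpy import array
from sklearn.preprocessing import StandardScaler

X = array([[1.0, 200.0],
           [2.0, 300.0],
           [3.0, 400.0]])

# transform each column to zero mean and unit standard deviation
scaler = StandardScaler().fit(X)
standardizedX = scaler.transform(X)
print(standardizedX)
```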
The values for each attribute now have a mean value of 0 and a standard deviation of 1.
3. Normalize Data
Normalizing in scikit-learn refers to rescaling each observation (row) to have a length of 1 (called a unit norm in linear algebra).
This preprocessing can be useful for sparse datasets (lots of zeros) with attributes of varying scales when using algorithms that weight input values such as neural networks and algorithms that use distance measures such as K-Nearest Neighbors.
You can normalize data in Python with scikit-learn using the Normalizer class.
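A minimal sketch of this recipe, with an illustrative array in place of the Pima data:

```python
from numpy import array
from sklearn.preprocessing import Normalizer

X = array([[4.0, 3.0],
           [1.0, 0.0],
           [0.0, 5.0]])

# rescale each row (observation) to unit length (L2 norm of 1)
scaler = Normalizer().fit(X)
normalizedX = scaler.transform(X)
print(normalizedX)
```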
The rows are normalized to length 1.
4. Binarize Data (Make Binary)
You can transform your data using a binary threshold. All values above the threshold are marked 1 and all equal to or below are marked as 0.
This is called binarizing or thresholding your data. It can be useful when you have probabilities that you want to turn into crisp values. It is also useful in feature engineering, when you want to add new features that indicate something meaningful.
You can create new binary attributes in Python using scikit-learn with the Binarizer class.
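A minimal sketch of this recipe, with an illustrative array in place of the Pima data:

```python
from numpy import array
from sklearn.preprocessing import Binarizer

X = array([[0.1, -2.0],
           [0.0, 3.5]])

# values above the threshold become 1, the rest become 0
binarizer = Binarizer(threshold=0.0).fit(X)
binaryX = binarizer.transform(X)
print(binaryX)
```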
You can see that all values equal to or less than 0 are marked 0 and all of those above 0 are marked 1.
Summary
In this post you discovered how you can prepare your data for machine learning in Python using scikit-learn.
You now have recipes to:
- Rescale data.
- Standardize data.
- Normalize data.
- Binarize data.
Your action step for this post is to type or copy-and-paste each recipe and get familiar with data preprocessing in scikit-learn.
Do you have any questions about data preprocessing in Python or this post? Ask in the comments and I will do my best to answer.
Hey Jason,
On Normalizing, do you need to do this if you are planning on using euclidean, or cosine distance measures to find similar items in a dataframe?
e.g. you have a vector where each column has some attributes about the product, and you want to find other products that have similar attributes.
Keen to hear your thoughts
Thanks
SM
Excellent!
Thanks Ernest.
Hi Jason,
Thanks for the post and the website overall. It really explains a lot.
I have a question regarding preparing the data ,if I am to normalize my Input data, does the precision of the values have an effect ? Will it make the weight matrix more sparse while training with higher precision if the training data is not very high?
In that case should I be limiting the precision depending on the amount of training data?
I am interested in sequence classification for EEG, In my case I intend to try out RNN . I was planning on normalizing the data since I wish the scaling to be performed on each individual input sequence.
Hoping to hear from you,thanks !
Great question Akshay.
I don’t have a clear answer for you. It may. I have not seen it have an effect, but I would not rule it out.
If you’re worried, I would recommend testing with samples of your data at different precisions and different transforms and evaluate the effect.
I expect the configuration of your model will be a much larger leaver on performance.
Hi Jason,Thank you for the reply.
I intend to build an RNN from scratch for the application similar to sentiment analysis (Many to one). I am a bit confused about the final stage. while training, when I feed a single sequence(belong to one of the class) to the training set , do I apply softmax to the last output of the network alone and compute the loss and leave the rest unattended?
Where exactly is the many to “ONE” represented?
Sorry Akshay, I don’t have example of implementing an RNN from scratch.
My advice would be to peek into the source code for standard deep learning library like Keras.
Should one normalize the test and train datasets separately? or does he have to normalize the whole dataset, before splitting it?
Yes. Normalize the train dataset and use the min/max from train to normalize the test set.
In this case, min/max of test set might be smaller or bigger than min/max of the training set. If they are, would it cause a problem to the validation?
You should estimate them using domain knowledge if possible, otherwise, estimate from train and clip test data if they exceed the known bounds.
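A small sketch of that idea (the arrays here are illustrative): the scaler is fit on the training data only, and test values that exceed the known bounds are clipped.

```python
from numpy import array, clip
from sklearn.preprocessing import MinMaxScaler

X_train = array([[1.0], [2.0], [3.0]])
X_test = array([[0.5], [2.5], [4.0]])  # partly outside the training range

scaler = MinMaxScaler().fit(X_train)       # learn min/max from train only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)   # may fall outside [0, 1]

# clip test values to the known bounds
X_test_clipped = clip(X_test_scaled, 0.0, 1.0)
print(X_test_scaled.ravel(), X_test_clipped.ravel())
```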
Hi Jason, I often read about people normalize on the input features, but not on output, why?
Should we normalize on the output features as well if the output have a wide range of scale too? from 1e-3 to 1e3
BTW, it is for a regression problem.
You can normalize the output variable in regression too, but you will need to reverse the scaling of predictions in order to make use of them or quote error scores in a meaningful way (e.g. meaningful to your problem).
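A small sketch of that round trip, using MinMaxScaler on the target (the numbers are illustrative):

```python
from numpy import array
from sklearn.preprocessing import MinMaxScaler

y = array([[0.001], [10.0], [1000.0]])  # wide-ranging regression target

scaler = MinMaxScaler().fit(y)
y_scaled = scaler.transform(y)          # train the model on this

# pretend these are model predictions on the scaled scale
predictions_scaled = array([[0.5]])

# reverse the scaling to get predictions back in the original units
predictions = scaler.inverse_transform(predictions_scaled)
print(predictions)
```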
The MSE loss is very high (1e8) when I didn’t applied normalization on the output variable, and small MSE loss (0.0xxx) when I applied normalization.
Is there something wrong in my implementation? Should I run large epochs(maybe 50000?) when the output variable isn’t normalized? (currently running 500 epochs with normalization).
Perhaps. Continue to try different things and see if you can learn more about your system/problem.
@Roy,
– if you don’t normalize and the features are not of similar scale, then the gradient descent would take a very long time to converge [1]
– if Root MSE is much much smaller than the mean/median value of the predicted vector, I think your model is good enough
[1] Stanford.edu:
Normalizing input variables is an intent to make the underlying relationship between input and output variables easier to model.
Your tutorials are awesome. 🙂
I have converted rescaledX to a dataframe and plotted histogram for rescaling, standardization and normalization. They all seem to be scaling down the magnitude of an attribute to a small range — 0 to 1 in case of rescaling and normalization.
– are they doing similar transformation i.e. scaling down attributes so they become comparable?
– do you only apply one method in any given situation?
– which would be appropriate in which situation?
Thanking in advance.
Good questions.
This post explains how they work and when to use them:
Hi Jason, I really like your posts. I was looking for some explanation on using power transformations on data for scaling. Like using logarithms and exponents and stuff like that. I would really like to understand what it does to the data and how we as data scientist can be power users of such scaling techniques
The best advice is to try a suite of transforms and see what results in the more predictive model.
I have a few posts on power transforms like log and boxcox, try the search feature.
Hi Jason,thanks for your all posts , I have question related to Multilayer Perceptron classification algorithm
if we want to apply this algorithm on mixed data set (numeric and nominal).
EX (23,125,75,black,green) this data presents the age ,length,weight ,Hair color, Eye color Respectively.
For numeric attributes we will normalize the data to be in the same range.
what about nominal attributes?
Do we need to transform nominal attributes to binary attributes?
I would recommend either using an integer encoding or a one hot encoding.
It is common to use a one hot encoding.
I have many posts on the topic.
Hello Jason, great post
However,
I have a question (maybe is almost the same that Dimos).
What is the most common approach to preprocessing (I mean, which of the 4 explained should be used)?
Which values do you normalize?
all features (X)
fit_transform train features(X_train_std=model.fit_trainsform(X_train)) and from them transform X_test (X_test_std=model.transform(X_test))
and then:
If we have to predict on new data obtained today (for example: 0,95,80,45,92,36.5,0.330,26,0 in the diabetes model),
do we have to preprocess that sample, or can we predict on it without preprocessing?
Thank you for help
Any process used to prepare data for training the model must be performed when making predictions on new data with the final model.
This means coefficients used in scaling (e.g. min/max) are really part of the model and must be chosen mindfully.
Thank you for your answer.
Hi Jason
I am applying normalization to network attack data. I used min/max normalization, but in the real data some features have large values. If I want to apply standard deviation normalization, should I apply only one normalization type? Or can I apply min/max to all the data and then apply standard deviation to all the data? What is the sequence, and is it wrong if I apply standard deviation normalization only on the large-value features?
I would recommend trying both approaches and see what works best for your data and models.
I don't understand the two commands.
X = dataset[:,0:8]
Y = dataset[:,8]
This is called array slicing, I will have a post on this topic on the blog tomorrow.
Here, we are selecting the columns 0-7 for input and 8 for output.
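A tiny illustration of that slicing on a made-up array (two input columns and one output column here, rather than eight and one):

```python
from numpy import array

dataset = array([[1, 2, 3],
                 [4, 5, 6]])

X = dataset[:, 0:2]  # all rows, columns 0-1 (inputs)
Y = dataset[:, 2]    # all rows, column 2 (output)
print(X.shape, Y.shape)
```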
Dear Dr. Jason Brownlee, I have prepared my own dataset of handwriting from different people, with the images prepared at 28x28 pixels. The problem is how I should prepare the training and testing datasets, so that I can then write the code to recognize the data.
Sounds great.
my idea is can you help me how i have to do that? and how i have to read my images data set and training data set using tensorflow?
Perhaps this example would help:
That is a great link that shows how to use the existing CIFAR-10, thank you for that, but as i tried to mention it above, i have handwritten images prepared in 28×28 pixels, so how i have to prepare the training set (how to label my dataset)? it can be .csv or .txt file, i need the way how i have to prepare training set and access in tensorflow like MNIST?
The images will be standard image formats like jpg or png, the labels will be in a csv file, perhaps associated with each filename.
Hi Jason. First of all, great work with the tutorials.
Here’s something I don’t understand though. What’s the difference between rescaling data and normalizing data? It seems like they’re both making sure that all values are between 0 and 1 ?
So what’s the difference?
Thanks.
Please email me the answers as well since i do not check this blog often
Normalizing is a type of rescaling.
Hello Sir!! I am planning a research project on music genre classification. My work includes preparing the dataset for the type of music I want to use, as there are no public datasets for that music. My problem is I don't know how to prepare a music dataset. I have read a lot about spectrograms. But what are the best mechanisms to prepare a music dataset? Is it only spectrograms I have to use, or do I have alternate choices?
Sorry, I cannot help you with music datasets. I hope to cover the topic in the future.
Hi, if I would like to scale a image with dim=3x64x64. How to use StandardScaler() to do that? Thank you
Sorry, the standard scalar is not for scaling images.
so, to improve the performance of training images, which scale method we should use? Or, just divide train set to a value, for example, train_x/255, …?
Try a suite of transforms and see what works best for your data.
Hi Jason, thanks for your posts.
I have a question about data preprocessing. Can we have multiple inputs with different shape? for example two different files, one including bit vectors, one including matrixes?
If so, how can we use them for ML algorithms?
Basically, I want to add additional information to data, so classifier can use for better prediction.
For most algorithms the input data must be reshaped and padded to be homogeneous.
Thanks for your response. Yes, I understand that. This extra information is like a metadata that gives information about the structure that generates the data. Therefore, it is a separate type that gives mores information about the system. Is there any way to apply it to ML algorithms?
Sure, perhaps you could use a multiple input model or an ensemble of models.
Do you have any link/reference suggestion that I can read more about it? I could not find a good resource yet. Thanks in advance.
Hi Jason,
What is Y used for? I realize the comment and description say it’s the output column, but after slicing the ‘class’ column to it, I’m not seeing Y used for anything in the four examples. Commenting it out does not seem to have any effect. Is it just a placeholder for later? If so, why did we assign ‘class’ data to it instead of creating an empty array?
Thanks,
John
We are not preparing the class variable in these examples. It is needed during modeling.
Thanks for great article. I would like to ask a question regarding using simple nearest neighbors algorithm from scikit learn library with standard settings. I have a list of data columns from salesforce leads table giving few metrics for total time spent on page, total emails opened, as well as alphabetical values such as – source of the lead with values signup, contact us etc., as well as country of origin information.
So far I have transformed all non-numerical data to numerical form in the simple way 0, 1, 2, 3, 4 for each unique value. With this approach scoring accuracy seams to reach 70% at its best. Now I want to go one step further and either normalize or standardize the data set, but can’t really decide which route to take. So far I have decided to go with safest advice and standardize all data. But then I have worries about some scenarios, for example certain fields will have long ranges of data, i.e. those representing each country, or those that show number of emails sent. On another hands other fields like source, will have numerical values 0, 1, 2, 3 and no more, but the field itself does have very high correlation to the outcome of winning lead or loosing lead.
I would be very grateful if you could point me to the right direction and perhaps without too much diving into small details, what would be the common sense approach.
Also, is it possible to use both methods for data set, i.e. standardize data first, and then normalize.
Thanks,
Donatas
Good question.
The data preparation methods must scale with the data. Perhaps for counts you can estimate the largest possible/reasonable count that you can use to normalize the count by, or perhaps invert the count, e.g. 1/n.
Hi @jason can you please tell why normalizer result and rescaling (0-1) results are different. isn’t there a standard way of doing so which should give the same result irrespective of the class used (i.e MinMaxScaler or normalizer).
I don’t follow, sorry. Can you give more context?
Hi Sír. I have a housing datasets whose target variable is a positively skewed distribution. So far that’s the only variable I have seen to be skewed although I think there will be more. Now I have read that there is need to make this distribution approximately a normal distribution using log transformation. But the challenge I’m facing right now is how to perform log transformation on the price feature in the housing dataset. I’d like to if there is a scikit-learn library for this and if not how should I go about it? More so I plan on using linear regression to predict housing prices for this dataset.
You can use a boxcox transform to fix the skew:
Hi Jason
I am using MinMaxScaler preprocessing technique to normalize my data. I have data of 200 patients, where each patient data for single electrode is 20 seconds i.e. 10240 sample.Then, the dimension of my data is 200*10240. I want to rescale my data row-wise but MinMaxScaler scale the data column wise which may not be correct for my data as i want to rescale my data accordingly 1*10240.
What changes are required in order to operate row wise independently of other electrode?
In general, each column of data represents a separate feature that may have a different scale or units from other columns.
For this reason, it is good practice to scale data column-wise.
Does that help?
HEllo sir,
I have colleted 1000 tweets on demonetization. Then i am extracting different features like pos based, lexiocn based fetaures, morphological features, ngram features.So different feature vectors are created for each type and then they are stacked column wise. I have divided dataset of 1000 tweets into 80% as training and 20% as testing. I have trained svm classifier but accuracy is not more than 60%.
How should i improve accuracy or which feature selection should i need to use?
Thanks
Here are some ideas:
Big Question is for me. Why should we use random values for weight and Bias value?
Good question.
Neural nets use random initial values for the weights. This is by design. It allows the learning algorithn (batch/mini-batch/stochastic gradient descent) to explore the weight space from a different starting point each time the model is evaluated, removing bias in the training process.
It is why it is a good idea to evaluate a neural net by training multiple instances of the model:
Does that help?
X = array[:,0:8]
Y = array[:,8] I have doubt here X is only 0 to right feature and Y is target 8th column right .
Confirm it by inspecting the data, it is correct.
Learn more about array slicing and ranges in python here:
Thank you so much for all of your help – I have learned a ton from all of your posts!
I have a project where I have 54 input variables, and 8 output variables. I have decent results from what I have learned from you. However, I have standardized all my input variables, and I think I could achieve better performance if I only standardize some of them. Meaning, 5 of the input columns are the same variable type as the outputs, I think it would be better not to scaler this. Additionally, I one of the inputs in the month of the year – I do not think that that needs to be standardized either.
Does my thought process to do selective preprocessing make any sense? Is it possible to do this?
Thank you
You’re welcome Sam.
Perhaps. I would recommend designing and running careful experiments to test your idea. Let the results guide you rather than “best practice”.
Hi, Jason,
Really helpful article. I tried to access the Pima Indian diabetes dataset and it’s no longer available to download at the link provided due to permission restrictions.
Note that I provided an alternate link in the post.
Here it is again:
Somehow I missed that. Thanks! 🙂
No probs.
I have n-dimensional binary data. Suggest some good classifier for binary data-set.
Try a suite of methods and see what works best for your specific dataset.
Hello Jason, I follow your posts very closely as I am studying machine learning on my own. With respect to scaling/normalizing data, I always have a dilemma. When do I use what? Is there any way to know beforehand which regression/classification models will benefit from scaling or normalizing data? For which models its not required to scale or normalize data?
Good question, I answer it here:
Hi Jason,
Thank you for this post. It was very helpful. I have a question on the normalization/standardization approach,when the dataset contains both numeric and categorical features. I am converting the categorical features into dummy values (contains 0 or 1). Should the numeric features be standardized along with the dummy variables?
or, 2) Should the numeric features be only be standardized?
Please provide your thoughts.
No need to scale dummy variables.
Hello Jason,
Thank you for sharing your expertise! I am a complete newbie to Python but have programmed before in stats software like EViews. Are the datasets in sklearn.database formatted differently? I tried to run the following code:
# Load the data
from sklearn.datasets import load_iris
iris = load_iris()
from matplotlib import pyplot as plt
# The indices of the features that we are plotting
x_index = 0
y_index = 1
# this formatter will label the colorbar with the correct target names
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])
plt.figure(figsize=(5, 4))
plt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target)
plt.colorbar(ticks=[0, 1, 2], format=formatter)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index])
plt.tight_layout()
plt.show()
It’s fine with the dataset from sklearn. But once I use pandas.read_csv to load the iris dataset from a url and then run the code, it just gives me tons of angry text. If I were to use pandas.read_csv, how should I format and store the data such that the aforementioned scipy code would work? Thank you so much!
Perhaps try this tutorial to get a handle on loading CVS files:
Hi! Jason , thanks for the great tutorial. i have a question, would like to hear from you.
from the knowledge statistics to normalize the data we use the formulae
[ X – min(X) ]/( [max(X) – min(X) ] ).
by using this formulae when i am trying to compute the normalised data the answer i am getting is different from that obtained by the Normalizer class. Why is it so.
Normalizer is normalizing the vector length I believe.
Use the MinMax scaler instead.
How to build a single module which can Rescale data,Standardize data,Normalizedata,Binarize data. Please explain
You can create a Pipeline:
Thank you so much
I’m happy it helped.
I am a new learner with Python ,your comments are of great help Jason .
May you guide how i can comment on class distribution whether ot is balanced or unbalanced and can print the total number of 1’s and 0’s in the training and testing data.
I have advice on working with imbalanced data here:
Hi Jason,
your articles are awesome, thank you very much I am subscribed for forever.
I have a question after scaling the input for my regression model and created the model I need to scale again my input data when I use this ckpt file, how can I pass this scaling to the place where I will use the model? Via the TF session?
Maybe more clearly what I want to do
when I train I do (my data is multiple samples in a csv file):
scaler = StandardScaler().fit(X_train)
X_standard = scaler.transform(X_train)
when I validate I do (my data is multiple samples in a csv file):
scaler = StandardScaler().fit(X_validate)
X_standard = scaler.transform(X_validate)
But here it comes the problem using the saved the model I want to restore the model with a single sample as an input:
X_scaler = preprocessing.StandardScaler()
X_test_test = X_scaler.fit_transform(X_test)
But it gives me output [[0,0,0,0,0,]] and because of that I cannot predict anything with the model object. Can you tell me where I am making mistake?
I’m not sure I follow the question, sorry. What are you trying to do?
I am making a model using TensorFlow and before batching the data for training I am scaling it using StandardScaler.
After my model is created I want to restore the checkpoint and make a prediction, but the data that I am inputting is not scaled.
So my question is how to scale the same way the data when restoring a model?
Because when I am training the data I am scaling among all the data, the whole data.csv file but later when restoring the model my input is a single sample.
I hope is more clear now.
You must save the scaler as well, or the coefficients used by the scaler.
Can you show me an example? I have done this serializing in using pickle and then using it again but does not look like the most pretty option.
You can use pickle.
I think I have some examples of saving scalers and tokenizers with some of the NLP tutorials.
Hey Jason,
I am preprocessing CIFAR-10 data by the sklearn StandardScaler(copy = False, with_mean = True, with_std= True). Then I am doing dimensionality reduction by pca followed by lda on the principal components. The problem is that if I do dimensionality reduction without preprocessing everything works fine, however if I do so after preprocessing I am getting “Memory Error”. I am using svd solver for pca and and eigen solver with auto shrinkage for lda(linear discriminants). Do you have any idea about what might be the cause of this problem?
** I tried min max scaling without calling the library function. Even then I am getting the same error.
Thank You
No idea, sorry. Perhaps post your code and error to stackoverflow?
Hi Jason. I have recently derived the exact Hessian and gradient for the LM algorithm to train feedforward neural networks via direct differentiation. I have also applied data preprocessing to the input data via two of the techniques you have suggested – scaling and normalization. In some datasets, I have observed faster convergence than the neural network toolobox in MATLAB. You can also view the published paper at:
Is it possible I can send you my currently ongoing papers for review once it is finished.
Thanks and best regards,
Najam
Nice work.
Sorry, I don’t have the capacity to review your paper.
Great article! Do you have insights can multiple scalers/normalizers be used consequently? I mean applying them after each other to the original dataset.
I have some advice for their order here:
Is there any ways to report ML result set via mails?
What do you mean exactly?
You can write code to send emails, but that is an engineering question, not machine learning.
Dear Dr. Brownlee esq.
You are kindly requested to reply to my, rather simple question:
In ANN, the input data is scaled and the output of the ANN is between (0) and (1). How to convert the output back to real numbers, like power, impedance, voltages, currents etc.
Your reply is very highly appreciated,
Kind Regards
Ahmad Allawi
Consultant Electrical Engineer
PTC Community Member-Mathcad Div.,
In scikit-learn you can achieve this with: encoder.inverse_transform()
Hey jason thanks for the turorial.
I tried all the given preprocessing technique and then trained the data using SVC
i found that MinMaxScale is giving the higest acuracy
so my question is wether we should always go for preprocessing technique which gives higest
accuracy ?
Yes, exactly.
Use pre-processing techniques that give the best model performance.
hi jason
can i do this in prediction problem?
this in python
data_set = data_set_ / 100000
before prediction i did this
after i did predict*100000
is that right
Yes, if the largest value ever to be seen is 100K.
It Means it’s OK if 1st divide by 100000 than multiply at the end with prediction result? . 1st scale the data and than resclae at the end. I didn’t get meaning about to see data 100K
Can you explain or share some link. Thanks
Without scale and resclae error is big. I was thinking if I don’t sclae and rescale result should be same but not the error is big.
Yes, that sounds fine.
What is the difference between rescaling and normalising data?
Normalizing is a type of rescaling.
Hi, Jason,
one quick question. Should I standardscaler both input and labels? or only input should work fine. Any difference between them? which one do you recommend? Thanks.
Sang-Ho
Oops! the similar question was already answered above. please ignore this question. Thanks
No problem.
For regression, yes this can be a good idea.
I recommend testing with and without scaling inputs and outputs to confirm it lifts model skill.
Most of the reference articles I have followed used some kind of numeric data in their data set. How do I work with the non numeric data for making a recommendation system. I look forward very eagerly for your help. Thank you in advance.
If the data is categorical, you can encode it as integer, one hot encoding or use an embedding.
Does that help?
Can you share an example for this which includes the creation of recommendation system also?
Thanks for the suggestion, I hope to cover the topic in the future.
Is there any chance/situation, we got the higher value of rating prediction than actual rating? Let’s say i use the data set contain of 1-5 rating, and once i apply the collaborative filtering techniques, i got the prediction rating for some items is 5.349++ ?
Perhaps, it depends on your choice of model.
If your model does this, e.g. a neural net, you might want write some code to help interpret the prediction, e.g. round it down.
I thought at first it could be wrong to have the result of higher prediction value in recommender system. Thank you so much for the explanation.
maxVal = max(data_set[0:num, 0]).astype(‘float32’)
predict, len(test)+1-i) – maxVal
hi jason, can i ask you is that right above 2 lines of code if i do?beause if i dont minus maxval true and predicted have big difference,,one curve near to maximum value other is near to low values
Perhaps post to stackoverflow?
I think so yes. Because the maximum value in data set is 55000 and without minus of maxVal function maximum predicted value goes to 10000 and shows predicted near to 10000 and true near to 55000. And Kindly suggest some link how to use one week or one year predicted values to predict 24 hours ahead or one week ahead in CNN?
Is there a way I can detect and ignore outliers within the process of re-sampling a data-set using
df1= df.resample(‘3T’).agg(dict(Tme=’first’, Vtc=’mean’, Stc=’mean’))
This line groups the data by “Tme” and computes the mean of the “Vtc” and “Stc” using all the values of the “Vtc” that fell at a particular “Tme”. But some of these data points are outliers. Is there anything I cann do within the .agg() so that I can ignore the outliers whn evaluating the mean
see my problem on stackoverflow here
Perhaps use a statistical method after gathering the sample?
While Preparing data for machine learning, we have 3 options: we should either Rescale data,Standardize data or Normalize data. In case of Pima Indian diabetes dataset, which option should i select ? because the data visualization shows that some attributes have nearly Gaussian distribution, some have exponential distribution. So when attributes have mixed distribution, if i opt for standardization it will be good for those attributes which have Gaussian distribution and not for exponential distribution attributes as these attributes we should go for log or cube root . So how do we handle this??
It depends on the models used.
I recommend testing different data preparation methods with different models to rapidly find a combination that works well.
This is neat, but it’s all applied to the training data. What if we create this model and are happy with it, but we then get a new batch of data every night that we want to score? Is there a way to save ALL of these pre-processing treatments in such a fashion that they can be applied directly to a new set of data, hopefully in just a couple lines of code. (e.g. the vtreat package in R can do some data treatments and you can save the “treatment plan” then apply it at scoring time of any new data).
Is there an easy recipe for doing that in Python?
Yes, you can use a pipeline:
You could re-train a model each night and compare it to the old model to see what works best.
Hi Mr.Jason,
I am working on dataset containing ‘javascript code samples’.I have to preprocess the dataset so that it will be available for further model.For this type of dataset what type of encoding should i use.
Perhaps compare a bag of words and a word embedding?
Thanks for your reply.I think, since dataset is not for document analysis so i am in doubt whether BOW will work or not.May you please clearify a bit.
Perhaps try it and see?
ValueError: Input contains NaN, infinity or a value too large for dtype(‘float64’).
I need your help to resolve this error I am beginner.
Perhaps try removing the NaN values from your data prior to modeling?
you can also include RobustScaler() which scales features using statistics that are robust to outliers.
Great suggestion!
Thank you so much, sir. get a lot of knowledge from your post. but my question will be what is the benefit of data reprocessing for machine learning to make prediction model
Thanks.
To better expose the structure of the problem to the learning algorithms.
i have a data 400001×16 different types of sensors data how to choose input shape and classify the one class neural network??
training and testing different data …..
these are different types of sensors data how to put the input shape??
how to classify in neural network??
If these are time series, then this will show you how to reshape the data:
these are unlabels data so how to possable classification??
If you want to model your data as classification, you must have labels to learn from.
How to create a data set a for a particular object automatically without manual entry
I don’t know.
Which method should I use if I have too many 0’s and 1’s in my dataset.
I recommend testing a suite of data preparation and modeling algorithms in order to discover what works best for your specific dataset.
Hi
i like ur post to much. i have a question. Pls help me
i have a dataset. i want clustering customers by Kmeans. So i have preprocess my data
Revency Frequency Monetary
302 312 5288.63
31 196 3079.1
2 251 7187.34
95 28 948.25
Data Rescaling, Data Normalization,Data Standardization. of the three methods above. Which method should I choose to preprocess for my dataset
Perhaps start with normalization.
Thank you so much so much so much. i read to many ur post . really very very great
Thanks!
Hi
pls help me
i have a dataset. i want clustering customers by Kmeans. So i have preprocess my data. i used ” Normalization method” then Kmeans on data just preprocessing. but result differents to much with a other method clustering. so i dont sure . my code is wrong or right when i make preprocess by ” Normalization method”.So can u see my code and tell for me. it right or wrong (i only start study in Python and sorry for my english is not good)
Revency Frequency Monetary
302 312 5288.63
31 196 3079.1
2 251 7187.34
95 28 948.25
#PRE-PROCESSING ———————————————–
col_names = [‘R’,’F’, ‘M’]
#Step 3: Normalize Data
from sklearn.preprocessing import Normalizer
normalizer = Normalizer()
tx_user[col_names] = normalizer.fit_transform(tx_user[col_names])
Yes, results will be dependent upon the scale of the input data.
Thanks. have a beautifull day to u
You’re welcome.
Greate Job…
Thanks!
Can u explain the difference between Normalizing data
and Re-scaling data ?
My doubt arises when we rescaled between 0 to 1. Why arent the 2 same?
Also
“Normalizing in scikit-learn refers to rescaling EACH observation (row) to have a length of 1 (called a unit norm in linear algebra).”
“The rows are normalized to length 1.”
I didnt got these lines.
please explain.
Normalization is a specific type of rescaling. Rescaling is a broader term and might include standardization and other techniques.
Normalization as a term is confusing. In linear algebra it means making the magnitude of the vector 1, this is where sklearn get’s the name and refers to scaling each column as “minmaxscaler”.
In statistics, we refer to normalizing a feature (column) as normalziation. This is the common name that I use for the minmaxscaler.
How to invert values after using “NORMALIZER”, I mean how to get original values back?
Use:
ok. Thank you.
How can I visualize data that is words. For example, I have a csv file with words, but I cant visualize it.
I don’t know about visualizing words, sorry. | https://machinelearningmastery.com/prepare-data-machine-learning-python-scikit-learn/ | CC-MAIN-2021-31 | refinedweb | 6,827 | 66.03 |
Theme-based views in Laravel using vendor namespaces
| 2 min read.
<?php namespace App\Http\Controllers; use App\Client; class HomeController { public function __invoke(Client $client) { return view("themes.{$client->theme->name}.home", [ 'client' => $client, ]); } }
Lets say we’re dealing with a theme named
spatie. Here’s what our view looks like:
@extends('themes.spatie.layouts.app') @section('main') <p>Welcome to {{ $client->name }}'s site!</p> @include('themes.spatie.partials.introduction') @endsection
A few things I don’t like here. First off, passing
$client->theme->name to every view name starts to get tedious very fast. In the views we can hard code the theme name since we’re already in the theme, but it’s too easy to introduce silent errors by requiring a different theme’s view when copy-pasting across themes (which will happen). Finally, all of this could become annoying to refactor if we decide to change our strategy regarding themes.
There aren’t any huge issues here, but alltogether it feels like we should be able to do better. There are a few strategies to clean this up, but I just want to talk about vendor namespaces today.
Laravel allows you register a view vendor namespace which points to a specific directory containing Blade files. This feature is intended for package development, but it’s a perfect solution to our problem.
By registering a namespace with the current theme’s location, we can drop all the dynamic parts of our view names when we’re calling them.
<?php class HomeController { public function __invoke(Client $client) { return view('theme::home', [ 'client' => $client, ]); } }
@extends('theme::layouts.app') @section('main') <p>Welcome to {{ $client->name }}'s site!</p> @include('theme::partials.introduction') @endsection
Registering a vendor namespace is pretty straightforward. Create a service provider, and call the
loadViewsFrom in the
boot method. We’ll need to pass a directory containing the views, and a name for our “vendor”.
<?php namespace App\Providers; use App\Client; class ThemeServiceProvider extends ServiceProvider { /** * Bootstrap the application services. * * @return void */ public function boot(Client $client) { $views = resource_path("views/themes/{$client->theme->name}"); $this->loadViewsFrom($views, 'theme'); } }
Additionally, you could register a fallback path for the namespace, if you have a default theme for clients.
<?php $views = [ resource_path("views/themes/{$client->theme->name}"), resource_path("views/themes/default"), ]; $this->loadViewsFrom($views, 'theme');
That’s all we need to use our little
theme:: shortcut, with the added benefit that our the views don’t need to worry about any implementation details of our theme setup! | https://sebastiandedeyne.com/theme-based-views-in-laravel-using-vendor-namespaces/ | CC-MAIN-2019-47 | refinedweb | 420 | 55.34 |
Details
- Type:
Improvement
- Status: Closed
- Priority:
Minor
- Resolution: Fixed
- Affects Version/s: 1.1.0
-
- Component/s: deployment
- Labels:None
Description
Issue Links
- depends upon
BIGTOP-2554 expose bind-host options in hieradata
- Closed
- links to
-
Activity
- All
- Work Log
- History
- Activity
- Transitions
Charms using this patch are available in the ~bigdata-dev namespace. I've tested on lxd and azure using:
Github user ktsakalozos commented on the issue:
LGTM +1 Tested on lxd and exposed services are indeed listening on the correct interface.
As you can see we still have some services on 127.0.1.1 but we can handle any problems we may have in future PRs
`
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:35482 0.0.0.0:* LISTEN
tcp 0 0 127.0.1.1:10020 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8649 0.0.0.0:* LISTEN
tcp 0 0 127.0.1.1:19888 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10033 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8020 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::8088 :::* LISTEN
tcp6 0 0 127.0.1.1:20888 :::* LISTEN
tcp6 0 0 :::8025 :::* LISTEN
tcp6 0 0 :::8030 :::* LISTEN
tcp6 0 0 :::8032 :::* LISTEN
tcp6 0 0 :::8033 :::* LISTEN
tcp6 0 0 fe80::1:13128 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
udp 0 0 0.0.0.0:68 0.0.0.0:*
udp 0 0 0.0.0.0:8649 0.0.0.0:*
udp 0 0 0.0.0.0:37509 0.0.0.0:*
`
Github user johnsca commented on the issue:
I tested on aws and it works fine, but on lxd I see, like Kostantinos, two services still listening on 127.0.1.1. Furthermore, terasort fails with the following the the RM log:
Github user kwmonroe commented on the issue:
Thanks for the eyeballs @ktsakalozos and @johnsca! I missed a mapred.jobhistory binding. You will no longer see any 127.0.x.y bindings.
Also, thanks @johnsca for pointing out the terasort failure. This was due to a dns / hostname issue in lxd environments and was fixed with:
Github user kwmonroe commented on the issue:
Charms in ~bigdata-dev namespace have been refreshed and verified that terasort works on lxd again:
```
results:
meta:
composite:
direction: asc
units: secs
value: "385"
start: 2016-10-25T22:19:00Z
stop: 2016-10-25T22:25:25Z
results:
raw: '
'
status: completed
timing:
completed: 2016-10-25 22:25:27 +0000 UTC
enqueued: 2016-10-25 22:18:47 +0000 UTC
started: 2016-10-25 22:18:47 +0000 UTC
```
Github user asfgit closed the pull request at:
GitHub user kwmonroe opened a pull request:
BIGTOP-2555: hadoop charms should use bind-host overrides BIGTOP-2554. This fixes a problem where lxd would bind the apps to
`facter fqdn` which may not be resolvable to other containers in the lxd env.
the NN overrides so the RM puppet apply doesn't lose hdfs-site.xml config
from the NN puppet apply.
You can merge this pull request into a Git repository by running:
$ git pull bug/
BIGTOP-2555/bind-host-overrides
Alternatively you can review and apply these changes as the patch at:
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #153 | https://issues.apache.org/jira/browse/BIGTOP-2555 | CC-MAIN-2017-43 | refinedweb | 611 | 61.36 |
Multi-tier load-balancing with Linux
Vincent Bernat
A common solution to provide a highly-available and scalable service is to insert a load-balancing layer to spread requests from users to backend servers.1 We usually have several expectations for such a layer:
- scalability
- It allows a service to scale by pushing traffic to newly provisioned backend servers. It should also be able to scale itself when it becomes the bottleneck.
- availability
- It provides high availability to the service. If one server becomes unavailable, the traffic should be quickly steered to another server. The load-balancing layer itself should also be highly available.
- flexibility
- It handles both short and long connections. It is flexible enough to offer all the features backends generally expect from a load-balancer like TLS or HTTP routing.
- operability
- With some cooperation, any expected change should be seamless: rolling out a new software on the backends, adding or removing backends, or scaling up or down the load-balancing layer itself.
The problem and its solutions are well known. From recently published articles on the topic, “Introduction to modern network load-balancing and proxying” provides an overview of the state of the art. Google released “Maglev: A Fast and Reliable Software Network Load Balancer” describing their in-house solution in detail.2 However, the associated software is not available. Building a load-balancing solution with commodity servers consists of assembling three components:
- ECMP routing
- stateless L4 load-balancing
- stateful L7 load-balancing
In this article, I describe and support a multi-tier solution using Linux and only open-source components. It should offer you the basis to build a production-ready load-balancing layer.
Update (2018-05)
Facebook just released Katran, an L4 load-balancer implemented with XDP and eBPF and using consistent hashing. It could be inserted in the configuration described below.
Update (2018-08)
GitHub just released GLB Director, an L4 load-balancer using a rendez-vous hashing to select a pair of L7 load-balancers. Using a custom Netfilter module, the first member redirects the flow to the second if it is unable to find a match in its connection table.
Last tier: L7 load-balancing
Let’s start with the last tier. Its role is to provide high availability, by forwarding requests to only healthy backends, and scalability, by spreading requests fairly between them. Working in the highest layers of the OSI model, it can also offer additional services, like TLS-termination, HTTP routing, header rewriting, rate-limiting of unauthenticated users, and so on. Being stateful, it can leverage complex load-balancing algorithms. Being the first point of contact with backend servers, it should ease maintenance and minimize impact during daily changes.
It also terminates client TCP connections. This introduces some loose coupling between the load-balancing components and the backend servers with the following benefits:
- connections to servers can be kept open for lower resource use and latency,
- requests can be retried transparently in case of failure,
- clients can use a different IP protocol than servers, and
- servers do not have to care about path MTU discovery, TCP congestion control algorithms, avoidance of the TIME-WAIT state and various other low-level details.
Many pieces of software would fit in this layer and an ample literature exists on how to configure them. You could look at HAProxy, Envoy or Traefik. Here is a configuration example for HAProxy:
```
# L7 load-balancer endpoint
frontend l7lb
  # Listen on both IPv4 and IPv6
  bind :80 v4v6
  # Redirect everything to a default backend
  default_backend servers
  # Healthchecking
  acl dead disabled nbsrv(enabler) lt 1
  acl dead nbsrv(servers) lt 1
  acl disabled nbsrv(enabler) lt 1
  monitor-uri /healthcheck
  monitor fail if dead || disabled

# IPv6-only servers with HTTP healthchecking and remote agent checks
backend servers
  balance roundrobin
  option httpchk
  server web1 [2001:db8:1:0:2::1]:80 send-proxy check agent-check agent-port 5555
  server web2 [2001:db8:1:0:2::2]:80 send-proxy check agent-check agent-port 5555
  server web3 [2001:db8:1:0:2::3]:80 send-proxy check agent-check agent-port 5555
  server web4 [2001:db8:1:0:2::4]:80 send-proxy check agent-check agent-port 5555

# Fake backend: if the local agent check fails, we assume we are dead
backend enabler
  server enabler [::1]:0 agent-check agent-port 5555
```
This configuration is the most incomplete piece of this guide. However, it illustrates two key concepts for operability:
1. Healthchecking of the web servers is done both at HTTP-level (with `check` and `option httpchk`) and using an auxiliary agent check (with `agent-check`). The latter makes it easy to put a server into maintenance or to orchestrate a progressive rollout. On each backend, you need a process listening on port 5555 and reporting the status of the service (`UP`, `DOWN`, `MAINT`). A simple `socat` process can do the trick:3
```shell
socat -ly \
  TCP6-LISTEN:5555,ipv6only=0,reuseaddr,fork \
  OPEN:/etc/lb/agent-check,rdonly
```
Put `UP` in `/etc/lb/agent-check` when the service is in nominal mode. If the regular healthcheck is also positive, HAProxy will send requests to this node. When you need to put it in maintenance, write `MAINT` and wait for the existing connections to terminate. Use `READY` to cancel this mode.
2. The load-balancer itself should provide a healthcheck endpoint (`/healthcheck`) for the upper tier. It will return a 503 error if either there are no backend servers available or if the `enabler` backend has been put down through the agent check. The same mechanism as for regular backends can be used to signal the unavailability of this load-balancer.
Additionally, the `send-proxy` directive enables the proxy protocol to transmit the real clients’ IP addresses. This protocol also works for non-HTTP connections and is supported by a variety of servers, including nginx:
```
http {
  server {
    listen [::]:80 default ipv6only=off proxy_protocol;
    root /var/www;
    set_real_ip_from ::/0;
    real_ip_header proxy_protocol;
  }
}
```
As is, this solution is not complete. We have just moved the availability and scalability problem somewhere else. How do we load-balance the requests between the load-balancers?
First tier: ECMP routing
On most modern routed IP networks, redundant paths exist between clients and servers. For each packet, routers have to choose a path. When the cost associated to each path is equal, incoming flows4 are load-balanced among the available destinations. This characteristic can be used to balance connections among available load-balancers:
There is little control over the load-balancing but ECMP routing brings the ability to scale horizontally both tiers. A common way to implement such a solution is to use BGP, a routing protocol to exchange routes between network equipment. Each load-balancer announces to its connected routers the IP addresses it is serving.
If we assume you already have BGP-enabled routers available, ExaBGP is a flexible solution to let the load-balancers advertise their availability. Here is a configuration for one of the load-balancers:
```
# Healthcheck for IPv6
process service-v6 {
  run python -m exabgp healthcheck -s --interval 10 --increase 0 --cmd "test -f /etc/lb/v6-ready -a ! -f /etc/lb/disable";
  encoder text;
}

template {
  # Template for IPv6 neighbors
  neighbor v6 {
    router-id 192.0.2.132;
    local-address 2001:db8::192.0.2.132;
    local-as 65000;
    peer-as 65000;
    hold-time 6;
    family {
      ipv6 unicast;
    }
    api services-v6 {
      processes [ service-v6 ];
    }
  }
}

# First router
neighbor 2001:db8::192.0.2.254 {
  inherit v6;
}

# Second router
neighbor 2001:db8::192.0.2.253 {
  inherit v6;
}
```
If `/etc/lb/v6-ready` is present and `/etc/lb/disable` is absent, all the IP addresses configured on the `lo` interface will be announced to both routers. If the other load-balancers use a similar configuration, the routers will distribute incoming flows between them. Some external process should manage the existence of the `/etc/lb/v6-ready` file by checking the health of the load-balancer (using the `/healthcheck` endpoint for example). An operator can remove a load-balancer from the rotation by creating the `/etc/lb/disable` file.
To get more details on this part, have a look at “High availability with ExaBGP.” If you are in the cloud, this tier is usually implemented by your cloud provider, either using an anycast IP address or a basic L4 load-balancer.
Unfortunately, this solution is not resilient when an expected or unexpected change happens. Notably, when adding or removing a load-balancer, the number of available routes for a destination changes. The hashing algorithm used by routers is not consistent and flows are reshuffled among the available load-balancers, breaking existing connections:
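To make the reshuffling concrete, here is a small sketch. It is illustrative only: routers hash flows in hardware, and the flow keys and hash function below are made up for the demo. It shows how hash-mod-N path selection, as plain ECMP does, remaps most flows when the number of next-hops changes.

```typescript
// Toy demonstration of why non-consistent (mod-N) hashing breaks flows.
// The hash and the flow keys are invented for illustration purposes.

function hashFlow(key: string): number {
  // Tiny FNV-1a-style hash; real routers hash the 5-tuple in hardware.
  let h = 2166136261;
  for (const c of key) {
    h ^= c.charCodeAt(0);
    h = Math.imul(h, 16777619) >>> 0;
  }
  return h;
}

// Assign each flow to one of n next-hops with hash-mod-N.
function pick(flows: string[], n: number): Map<string, number> {
  return new Map(flows.map((f) => [f, hashFlow(f) % n]));
}

const flows = Array.from({ length: 1000 }, (_, i) => `client-${i}:443`);
const before = pick(flows, 4); // four load-balancers
const after = pick(flows, 3);  // one removed
const moved = flows.filter((f) => before.get(f) !== after.get(f)).length;
console.log(`${(100 * moved) / flows.length}% of flows changed destination`);
// With mod-N, roughly (N-1)/N of flows move to a new next-hop: here ~75%.
```

Only about a quarter of the flows keep their path; everything else is rerouted and the corresponding TCP connections break.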
Moreover, each router may choose its own routes. When a router becomes unavailable, the second one may route the same flows differently:
If you think this is not an acceptable outcome, notably if you need to handle long connections like file downloads, video streaming or websocket connections, you need an additional tier. Keep reading!
Update (2021-03)
Some hardware vendors support resilient hashing to help circumvent this limitation. See for example the documentation from Juniper and Cumulus. Thanks to Jessy Vetter for the tip! This feature has also been added in Linux 5.12.
Second tier: L4 load-balancing
The second tier is the glue between the stateless world of IP routers and the stateful land of L7 load-balancing. It is implemented with L4 load-balancing. The terminology can be a bit confusing here: this tier routes IP datagrams (no TCP termination) but the scheduler uses both destination IP and port to choose an available L7 load-balancer. The purpose of this tier is to ensure all members take the same scheduling decision for an incoming packet.
There are two options:
- stateful L4 load-balancing with state synchronization across the members, or
- stateless L4 load-balancing with consistent hashing.
The first option increases complexity and limits scalability. We won’t use it.5 The second option is less resilient during some changes but can be enhanced with a hybrid approach using a local state.
We use IPVS, a performant L4 load-balancer running inside the Linux kernel, with Keepalived, a frontend to IPVS with a set of healthcheckers to kick out an unhealthy component. IPVS is configured to use the Maglev scheduler, a consistent hashing algorithm from Google. Among its family, this is a great algorithm because it spreads connections fairly, minimizes disruptions during changes and is quite fast at building its lookup table. Finally, to improve performance, we let the last tier—the L7 load-balancers—send back answers directly to the clients without involving the second tier—the L4 load-balancers. This is referred to as direct server return (DSR) or direct routing (DR).
With such a setup, we expect packets from a flow to be able to move freely between the components of the first two tiers while sticking to the same L7 load-balancer.
Configuration
Assuming ExaBGP has already been configured like described in the previous section, let’s start with the configuration of Keepalived:
```
virtual_server_group VS_GROUP_MH_IPv6 {
  2001:db8::198.51.100.1 80
}
virtual_server group VS_GROUP_MH_IPv6 {
  lvs_method TUN  # Tunnel mode for DSR
  lvs_sched mh    # Scheduler: Maglev
  sh-port         # Use port information for scheduling
  protocol TCP
  delay_loop 5
  alpha           # All servers are down on start
  omega           # Execute quorum_down on shutdown
  quorum_up "/bin/touch /etc/lb/v6-ready"
  quorum_down "/bin/rm -f /etc/lb/v6-ready"
  # First L7 load-balancer
  real_server 2001:db8::192.0.2.132 80 {
    weight 1
    HTTP_GET {
      url {
        path /healthcheck
        status_code 200
      }
      connect_timeout 2
    }
  }
  # Many others...
}
```
The `quorum_up` and `quorum_down` statements define the commands to be executed when the service becomes available and unavailable respectively. The `/etc/lb/v6-ready` file is used as a signal to ExaBGP to advertise the service IP address to the neighbor routers.
Additionally, IPVS needs to be configured to continue routing packets from a flow moved from another L4 load-balancer. It should also continue routing packets to unavailable destinations to ensure we can properly drain an L7 load-balancer.
```shell
# Schedule non-SYN packets
sysctl -qw net.ipv4.vs.sloppy_tcp=1
# Do NOT reschedule a connection when destination
# doesn't exist anymore
sysctl -qw net.ipv4.vs.expire_nodest_conn=0
sysctl -qw net.ipv4.vs.expire_quiescent_template=0
```
The Maglev scheduling algorithm will be available with Linux 4.18, thanks to Inju Song. For older kernels, I have prepared a backport.6 Use of source hashing as a scheduling algorithm will hurt the resilience of the setup.
DSR is implemented using the tunnel mode. This method is compatible with routed datacenters and cloud environments. Requests are tunneled to the scheduled peer using IPIP encapsulation. It adds a small overhead and may lead to MTU issues. If possible, ensure you are using a larger MTU for communication between the second and the third tier.7 Otherwise, it is better to explicitly allow fragmentation of IP packets:
```shell
sysctl -qw net.ipv4.vs.pmtu_disc=0
```
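The MTU requirement follows from simple header arithmetic: the tunnel prepends an outer IP header to each packet, so the inter-tier links must be larger than the 1500-byte MTU clients usually see. The 20- and 40-byte header sizes below are the standard IPv4 and IPv6 header lengths.

```typescript
// Back-of-the-envelope check of the inter-tier MTU needed for DSR
// with IPIP/ip6ip6 encapsulation (sketch, not a measurement).
const clientMtu = 1500;  // MTU on the client-facing path
const ipv4Header = 20;   // outer header added by IPIP tunnel mode
const ipv6Header = 40;   // outer header added by ip6ip6 tunnel mode

const mtuV4 = clientMtu + ipv4Header;
const mtuV6 = clientMtu + ipv6Header;
console.log(`IPv4 inter-tier MTU: at least ${mtuV4}`); // 1520
console.log(`IPv6 inter-tier MTU: at least ${mtuV6}`); // 1540
```

These are the same 1520/1540 figures given in the footnote about MTU sizing.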
You also need to configure the L7 load-balancers to handle encapsulated traffic:8
```shell
# Setup IPIP tunnel to accept packets from any source
ip tunnel add tunlv6 mode ip6ip6 local 2001:db8::192.0.2.132
ip link set up dev tunlv6
ip addr add 2001:db8::198.51.100.1/128 dev tunlv6
```
Evaluation of the resilience
As configured, the second tier increases the resilience of this setup for two reasons:
1. The scheduling algorithm uses a consistent hash to choose its destination. Such an algorithm reduces the negative impact of expected or unexpected changes by minimizing the number of flows moving to a new destination. “Consistent Hashing: Algorithmic Tradeoffs” offers more details on this subject.
2. IPVS keeps a local connection table for known flows. When a change impacts only the third tier, existing flows will be correctly directed according to the connection table.
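For contrast with the mod-N behavior of plain ECMP, here is a toy consistent-hash ring. This is an illustration only: IPVS uses the Maglev lookup-table algorithm rather than a ring, and the node names and hash function are made up. The point it demonstrates is the same: when a destination disappears, only the flows that pointed at it have to move.

```typescript
// Toy consistent-hash ring (illustration only; IPVS uses Maglev).
// Each L7 load-balancer gets many virtual points on a ring; a flow goes
// to the first point at or after its own hash, so removing one balancer
// only moves the flows that were mapped to it.

function fnv(key: string): number {
  let h = 2166136261;
  for (const c of key) {
    h ^= c.charCodeAt(0);
    h = Math.imul(h, 16777619) >>> 0;
  }
  return h;
}

class Ring {
  private points: Array<[number, string]> = [];
  constructor(nodes: string[], replicas = 100) {
    for (const n of nodes)
      for (let i = 0; i < replicas; i++) this.points.push([fnv(`${n}#${i}`), n]);
    this.points.sort((a, b) => a[0] - b[0]);
  }
  lookup(key: string): string {
    const h = fnv(key);
    // First point at or after the key's hash, wrapping around the ring.
    const p = this.points.find(([ph]) => ph >= h) ?? this.points[0];
    return p[1];
  }
}

const flows = Array.from({ length: 1000 }, (_, i) => `client-${i}:443`);
const full = new Ring(["lb1", "lb2", "lb3", "lb4"]);
const reduced = new Ring(["lb1", "lb2", "lb3"]); // lb4 removed
const moved = flows.filter((f) => full.lookup(f) !== reduced.lookup(f)).length;
console.log(`${moved} of ${flows.length} flows moved`);
// Only flows previously mapped to lb4 move: about a quarter, never more.
```

Compare this with the mod-N sketch earlier: there, most flows were remapped; here, only the share held by the removed balancer is.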
If we add or remove an L4 load-balancer, existing flows are not impacted because each load-balancer takes the same decision, as long as they see the same set of L7 load-balancers:
If we add an L7 load-balancer, existing flows are not impacted either because only new connections will be scheduled to it. For existing connections, IPVS will look at its local connection table and continue to forward packets to the original destination. Similarly, if we remove an L7 load-balancer, only existing flows terminating at this load-balancer are impacted. Other existing connections will be forwarded correctly:
We need to have simultaneous changes on both levels to get a noticeable impact. For example, when adding both an L4 load-balancer and an L7 load-balancer, only connections moved to an L4 load-balancer without state and scheduled to the new load-balancer will be broken. Thanks to the consistent hashing algorithm, other connections will stay bound to the right L7 load-balancer. During a planned change, this disruption can be minimized by adding the new L4 load-balancers first, waiting a few minutes, then adding the new L7 load-balancers.
Additionally, IPVS correctly routes ICMP messages to the same L7 load-balancers as the associated connections.9 This ensures path MTU discovery works and there is no need for smart workarounds.
Tier 0: DNS load-balancing
Optionally, you can add DNS load-balancing to the mix. This is useful either if your setup is spanned across multiple datacenters, or multiple cloud regions, or if you want to break a large load-balancing cluster into smaller ones. It is not intended to replace the first tier as it doesn’t share the same characteristics: load-balancing is unfair (it is not flow-based) and recovery from a failure is slow.
gdnsd is an authoritative-only DNS server with integrated healthchecking. It can serve zones from master files using the RFC 1035 zone format:
```
@ SOA ns1 ns1.example.org. 1 7200 1800 259200 900
@ NS ns1.example.com.
@ NS ns1.example.net.
@ MX 10 smtp
@ 60 DYNA multifo!web
www 60 DYNA multifo!web
smtp A 198.51.100.99
```
The special RR type `DYNA` will return `A` and `AAAA` records after querying the specified plugin. Here, the `multifo` plugin implements an all-active failover of monitored addresses:
```
service_types => {
  web => {
    plugin => http_status
    url_path => /healthcheck
    down_thresh => 5
    interval => 5
  }
  ext => {
    plugin => extfile
    file => /etc/lb/ext
    def_down => false
  }
}
plugins => {
  multifo => {
    web => {
      service_types => [ ext, web ]
      addrs_v4 => [ 198.51.100.1, 198.51.100.2 ]
      addrs_v6 => [ 2001:db8::198.51.100.1, 2001:db8::198.51.100.2 ]
    }
  }
}
```
In nominal state, an `A` request will be answered with both `198.51.100.1` and `198.51.100.2`. A healthcheck failure will update the returned set accordingly. It is also possible to administratively remove an entry by modifying the `/etc/lb/ext` file. For example, with the following content, `198.51.100.2` will not be advertised anymore:
```
198.51.100.1 => UP
198.51.100.2 => DOWN
2001:db8::c633:6401 => UP
2001:db8::c633:6402 => UP
```
You can find all the configuration files and the setup of each tier in the Git repository. If you want to replicate this setup at a smaller scale, it is possible to collapse the second and the third tiers by using either localnode or network namespaces. Even if you don’t need its fancy load-balancing services, you should keep the last tier: while backend servers come and go, the L7 load-balancers bring stability, which translates to resiliency.
1. In this article, “backend servers” are the servers behind the load-balancing layer. To avoid confusion, we will not use the term “frontend.” ↩︎
2. A good summary of the paper is available from Adrian Colyer. From the same author, you may also have a look at the summary for “Stateless datacenter load-balancing with Beamer.” ↩︎
3. If you feel this solution is fragile, feel free to develop your own agent. It could coordinate with a key-value store to determine the wanted state of the server. It is possible to centralize the agent in a single location, but you may get a chicken-and-egg problem to ensure its availability. ↩︎
4. A flow is usually determined by the source and destination IP and the L4 protocol. Alternatively, the source and destination port can also be used. The router hashes this information to choose the destination. For Linux, you may find more information on this topic in “Celebrating ECMP in Linux.” ↩︎
5. On Linux, it can be implemented by using Netfilter for load-balancing and conntrackd to synchronize state. IPVS only provides active/backup synchronization. ↩︎
6. The backport is not strictly equivalent to its original version. Be sure to check the `README` file to understand the differences. Briefly, in Keepalived configuration, you should:
   - not use `inhibit_on_failure`
   - use `sh-port`
   - not use `sh-fallback` ↩︎
7. At least 1520 for IPv4 and 1540 for IPv6. ↩︎
8. As is, this configuration is insecure. You need to ensure only the L4 load-balancers will be able to send IPIP traffic. ↩︎
9. This feature was added in Linux 4.4. See for example commit 1471f35efa86. ↩︎
Tracing Calls to Downstream HTTP Web Services Using the X-Ray SDK for Python
When your application makes calls to microservices or public HTTP APIs, you can use the X-Ray SDK for Python to instrument those calls and add the API to the service graph as a downstream service.
To instrument HTTP clients, patch the library that you use to make outgoing calls. If you use `requests` or Python's built-in HTTP client, that's all you need to do. For `aiohttp`, also configure the recorder with an async context. If you use `aiohttp` 3's client API, you also need to configure the `ClientSession` with an instance of the tracing configuration provided by the SDK.
Example: `aiohttp` 3 Client API
```python
from aws_xray_sdk.ext.aiohttp.client import aws_xray_trace_config

async def foo():
    trace_config = aws_xray_trace_config()
    async with ClientSession(loop=loop, trace_configs=[trace_config]) as session:
        async with session.get(url) as resp:
            await resp.read()
```
When you instrument a call to a downstream web API, the X-Ray SDK for Python records a subsegment that contains information about the HTTP request and response. X-Ray uses the subsegment to generate an inferred segment for the remote API.
Example Subsegment for a Downstream HTTP Call
{ "id": "004f72be19cddc2a", "start_time": 1484786387.131, "end_time": 1484786387.501, "name": "names.example.com", "namespace": "remote", "http": { "request": { "method": "GET", "url": "" }, "response": { "content_length": -1, "status": 200 } } }
Example Inferred Segment for a Downstream HTTP Call
{ "id": "168416dc2ea97781", "name": "names.example.com", "trace_id": "1-5880168b-fd5153bb58284b67678aa78c", "start_time": 1484786387.131, "end_time": 1484786387.501, "parent_id": "004f72be19cddc2a", "http": { "request": { "method": "GET", "url": "" }, "response": { "content_length": -1, "status": 200 } }, "inferred": true } | https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-python-httpclients.html | CC-MAIN-2018-43 | refinedweb | 266 | 65.62 |
Batch has a system of macros and a templating engine, allowing you to make dynamic messages for your campaigns.
Using the templating engine, you can reference and use user data inside your message. You can also add conditions to a message so that the content changes if some condition is fulfilled.
We will see multiple examples later.
Before diving deep let's review the macros.
All attributes usable in a query are available in a macro.
They are:
- `attribute` or `c.attribute` will refer to an installation attribute.
- `u.attribute` will refer to a user attribute.
This is the syntax to use them in a message:
You're only on level {{ c.current_level }}, come back and play !
If the user's `c.current_level` attribute is `6` this evaluates to:
You're only on level 6, come back and play !
Note that with the previous macro if the user doesn't have the attribute the resulting message will be:
You're only on level, come back and play !
This is probably not what you want. Enter default values. A default value is used when the attribute doesn't exist.
Here is how to use them:
Special offer: Get {{ u.special_offer|default('-5%') }} by subscribing today !
If the user's `u.special_offer` attribute is `-15%` then it evaluates to:
Special offer: Get -15% by subscribing today !
Otherwise it evaluates to:
Special offer: Get -5% by subscribing today !
All tag collections usable in a query are available in a macro.
They are:
- `t.tag` will refer to an installation tag collection.
- `ut.tag` will refer to a user tag collection.
However, there's a catch: since a tag is a collection of values and we can only output a single value using a macro, we have to introduce the concept of filters. A filter is a function you can apply to a value to transform it.
In the case of a tag collection you can use the filter `join` to concatenate all values into a single string.
For example:
You've already beaten those levels: {{ c.levels_done|join(',') }}
We will see more examples later, but you can check out the reference on filters to learn all about them.
Note that using a filter on a tag collection that doesn't exist always produces an empty string. Using the previous example, if the tag collection `c.levels_done` doesn't exist the macro evaluates to:
You've already beaten those levels:
Using raw data like this is great but you might want to format the attributes yourself.
Formatting is done by using the following filters:

- `formatNumber`
- `formatDate`

The two filters are explained in detail in the reference but here is a small example:
You have accumulated {{ u.points|formatNumber(decimals=2) }} points, make sure to use them before {{ u.points_expiration_date|formatDate('yyyy-MM-dd') }}
This evaluates to:
You have accumulated 20.68 points, make sure to use them before 2017-02-30
Be sure to check out the reference documentation to learn about all options !
Macros are already really powerful but they're not always enough.
For example, what if you want to display a completely different message based on which level a user is on, or how many fidelity points he has?
This is a job for templates.
Templates allow you to do conditional statements, define variables and even do some light arithmetic.
Suppose you want to congratulate a user based on how far he is into your game.
You could write something like this:
```
{% if current_level == 6 %}
Amazing ! You've beat the game ! Go take a look at the amazing perks you unlocked !
{% else if current_level == 5 %}
Almost there ! One more level and you beat the game !
{% else if current_level > 3 %}
Good job on beating level 3 ! You're now halfway through the game, keep pushing !
{% endif %}
```
Now depending on the value of the user's `current_level` attribute, the message will be one of the 3 possibilities.
Sometimes it might be handy to compute some value and keep a reference to it so you can reuse it multiple times inside your template.
It is mainly a quality of life improvement but still useful.
For example, suppose you want to compute an expiration date based on multiple user attributes and remind the user when their subscription will expire.
Given the following rules:

- premium subscribers get 90 days,
- newsletter subscribers get 75 days,
- users under 25 get 60 days,
- everyone else gets 50 days.

You could write something like this:
```
{% if premium %}
{% set $expiration_date = c.subscription_date + 90d %}
{% else if c.has_newsletter_subscription %}
{% set $expiration_date = c.subscription_date + 75d %}
{% else if c.age < 25 %}
{% set $expiration_date = c.subscription_date + 60d %}
{% else %}
{% set $expiration_date = c.subscription_date + 50d %}
{% endif %}
Hey {{ c.first_name ~ ' ' ~ c.last_name }}, friendly reminder that your subscription will expire on {{ $expiration_date|formatDate('yyyy-MM-dd') }}
```
This evaluates to:
Hey John Smith, friendly reminder that your subscription will expire on 2018-01-03
Obviously the date will change based on what kind of subscription the user has.
There are a couple of new features here:
- a variable assignment: `set $variableName = <expression>`. A valid expression has to return a single value of any type. More about it here.
- string concatenation with the `~` operator.
Macros and templating also allow you to reference custom application data to use in your messages.
These are tables of key/value pairs that you can upload using our dashboard.
For example, given a table with the name `population_by_city` with the following content:
```
2988507,2244000
2996944,484344
2995469,850726
2973783,271782
3031582,239157
```
These are the population of Paris, Lyon, Marseille, Strasbourg and Bordeaux respectively.
You can now use them like this.
You live in a city of {{ lookup('population_by_city', b.city_code) }}
For someone in Paris, this evaluates to:
You live in a city of 2244000
You can use any attribute or literal value as the lookup key. This works too:
You live in a city of {{ lookup('population_by_city', 2988507) }}
However by doing this you lose the benefit of the custom app data table. This also works:
You live in a city of {{ lookup('population_by_city', c.my_custom_city_code) }}
After looking into how it works, let's look at some use cases for macros and templates that would otherwise be hard or even impossible to do.
Suppose you want to report on electoral results for every city in France. There are currently 35416 cities in France and you want each user to have the result from their city when receiving the notification.
Without macros you could only do this by creating one campaign per city with a query matching the city. In that case you'd need more than 35k campaigns, this is obviously not good.
Instead you can do the following:
- create a custom app data table, named `electoral_results_201710` for example.
- fill it with key/value pairs of the form `$city_code => $result`.
Example:
```
2988507,Yes 20.3% - No 79.7%
2996944,Yes 48.5% - No 51.5%
2995469,Yes 74% - No 26%
2973783,Yes 38% - No 62%
3031582,Yes 93.2 - No 6.8%
```
These are the results of Paris, Lyon, Marseille, Strasbourg and Bordeaux respectively.
Example:
Election day result: {{ lookup('electoral_results_201710', b.city_code) }}
For someone in Paris this will evaluate to:
Election day result: Yes 20.3% - No 79.7%
Each user will have a customised message based on where he is.
Suppose you have a loyalty program for your application where the user can gain points and at some threshold you gain a loyalty level. Here is an example of a points scale:

- Regular: 100 points
- Silver: 500 points
- Gold: 1000 points
- Platinum: 5000 points
Suppose also that you attach special one time discounts every time the user gains a level.
Finally, suppose you want to remind a user that they're about to reach the next level with the following message:
Gain 55 more points to reach Gold and get a one time discount of 15%!
Let's break down what we need:

- the number of points remaining before the next level,
- the name of the next level,
- the discount associated with that level.
For this to work you need to feed us the data we'll be working with; you can do so using custom attributes or the Custom Data API.
We imagine the following user attributes:
- `u.loyalty_points`: an `integer` value representing the current number of points a user has
We also need two custom app data tables:
- `loyalty_thresholds`, containing this:
```
regular,100
silver,500
gold,1000
platinum,5000
```
Before diving into the template let's look at what query we should use:
{ "$or": [ "$and": [ "u.loyalty_points": { "$gte": 85 }, "u.loyalty_points": { "$lt": 100 } ], "$and": [ "u.loyalty_points": { "$gte": 475 }, "u.loyalty_points": { "$lt": 500 } ], "$and": [ "u.loyalty_points": { "$gte": 850 }, "u.loyalty_points": { "$lt": 1000 } ], "$and": [ "u.loyalty_points": { "$gte": 4950 }, "u.loyalty_points": { "$lt": 5000 } ] ] }
This will match any user that is just about to reach the next level.
Here is what the template could look like:
```
{% if u.loyalty_points >= 85 and u.loyalty_points < 100 %}
{% set $nextLevel = 'regular' %}
{% set $discount = '5%' %}
{% else if u.loyalty_points >= 475 and u.loyalty_points < 500 %}
{% set $nextLevel = 'silver' %}
{% set $discount = '10%' %}
{% else if u.loyalty_points >= 850 and u.loyalty_points < 1000 %}
{% set $nextLevel = 'gold' %}
{% set $discount = '15%' %}
{% else if u.loyalty_points >= 4950 and u.loyalty_points < 5000 %}
{% set $nextLevel = 'platinum' %}
{% set $discount = '35%' %}
{% endif %}
{% set $remaining = lookup('loyalty_thresholds', $nextLevel)|int - u.loyalty_points %}
Gain {{ $remaining }} more points to reach {{ $nextLevel|upper }} and get a one time discount of {{ $discount }}
```
Suppose you want to give a user a special discount if they didn't buy anything after a month, and at the same time you want to remind them what they bought.
Suppose also that the discount changes based on how much time has passed since the purchase:

- between 30 and 60 days: a 20% discount,
- more than 60 days: a 40% discount.
For example you could have the following message:
Did you like your Nike Air Max ? Get a 20% discount on all purchase today!
Let's break down what we need:

- the name of the last purchased product,
- the time elapsed since the last purchase,
- the discount matching that delay.
Like before, for this to work you need to feed us the data we'll be working with; you can do so using custom attributes or the Custom Data API.
In this use case we'll use custom events so you need to have that working too.
We imagine the following user attributes:

- `u.last_purchase_product_name`: a `string` containing the product name of the last purchase. It should be a properly formatted string.

We also need one custom event:

- `e.purchase`, which tracks every time a user has purchased something.
The query needs to filter users that haven't purchased anything in at least a month:

```json
{
  "age(e.purchase)": {
    "$gte": "30d"
  }
}
```
Here is what the template could look like:
```
{% set $timeSinceLastPurchase = age(e.purchase) %}
{% if $timeSinceLastPurchase > 30d and $timeSinceLastPurchase < 60d %}
{% set $discount = '20%' %}
{% else %}
{% set $discount = '40%' %}
{% endif %}
Did you like your {{ u.last_purchase_product_name }} ? Get a {{ $discount }} discount on all purchase today!
```
Header image by Irina Iriser on Unsplash.
This article is part 2 of an ongoing series of "Migrating to TypeScript".
In part 1, we explored how to initialise a project with the TypeScript compiler and the new TypeScript Babel preset. In this part, we'll go through a quick primer of TypeScript's features and what they're for. We'll also learn how to migrate your existing JavaScript project gradually to TypeScript, using an actual code snippet from an existing project. This will get you to learn how to trust the compiler along the way.
Table of contents
- Part 1: Introduction and getting started
- Part 2: Trust the compiler! (you are here)
Thinking in TypeScript
The idea of static typing and type safety in TypeScript might feel overwhelming coming from a dynamic typing background, but it doesn't have to be that way.
The main thing people often tell you about TypeScript is that it's "just JavaScript with types". Since JavaScript is dynamically typed, features like type coercion are often abused to make use of the dynamic nature of the language. So the idea of type-safety might never occur to your average JS developer. This makes the idea of static typing and type safety feel overwhelming, but it doesn't have to be that way.
The trick is to rewire our thinking as we go along. And to do that we need to have a mindset. The primary mindset, as defined in Basarat's book, is Your JavaScript is already TypeScript.
But why is TypeScript important?
A more appropriate question to ask would be "why is static typing in JavaScript important?" Sooner or later, you're going to start writing medium to large-scale apps with JavaScript. When your codebase gets larger, detecting bugs will become a more tedious task. Especially when it's one of those pesky `Cannot read property 'x' of undefined` errors. JavaScript is a dynamically-typed language by nature and it has a lot of quirks, like the `null` and `undefined` types, type coercion, and the like. Sooner or later, these tiny quirks will work against you down the road.
Static typing ensures the correctness of your code in order to help detect bugs early. Static type checkers like TypeScript and Flow help reduce the amount of bugs in your code by detecting type errors during compile time. In general, using static typing in your JavaScript code can help prevent about 15% of the bugs that end up in committed code.
TypeScript also provides various productivity enhancements like the ones listed below. You can see these features on editors with first-class TypeScript support like Visual Studio Code.
- Advanced statement completion through IntelliSense
- Smarter code refactoring
- Ability to infer types from usage
- Ability to type-check JavaScript files (and infer types from JSDoc annotations)
Strict mode
TypeScript's "strict mode" is where the meat of the whole TypeScript ecosystem lies. The
--strict compiler flag, introduced in TypeScript 2.3, activates TypeScript's strict mode. This will set all strict typechecking options to true by default, which includes:
--noImplicitAny- Raise error on expressions and declarations with an implied 'any' type.
--noImplicitThis- Raise error on 'this' expressions with an implied 'any' type.
--alwaysStrict- Parse in strict mode and emit "use strict" for each source file.
--strictBindCallApply- Enable strict 'bind', 'call', and 'apply' methods on functions.
--strictNullChecks- Enable strict null checks.
--strictFunctionTypes- Enable strict checking of function types.
--strictPropertyInitialization- Enable strict checking of property initialization in classes.
When
strict is set to
true in your
tsconfig.json, all of the options above are set to
true. If some of these options give you problems, you can opt out of individual checks by overriding them one by one. For example:
{
  "compilerOptions": {
    "strict": true,
    "strictFunctionTypes": false,
    "strictPropertyInitialization": false
  }
}
This will enable all strict type-checking options except
--strictFunctionTypes and
--strictPropertyInitialization. Fiddle around with these options when they give you trouble. Once you get more comfortable with them, slowly re-enable them one by one.
Linting
Linting and static analysis are essential tools for any language. There are currently two popular linting solutions for TypeScript projects.
- TSLint used to be the de-facto tool for linting TypeScript code. It has served the TS community well throughout the years, but it has fallen out of favour as of late, and development seems to have stagnated, with the authors even announcing its deprecation recently in favour of ESLint. Even Microsoft themselves have noticed architectural and performance issues in TSLint, and recommended against it, which brings me to the next option.
- ESLint - yeah, I know. But hear me out for a second. Despite being a tool solely for linting JavaScript for quite some time, ESLint has been adding more and more features to better support TS. It has announced plans to better support TS through the new typescript-eslint project. It contains a TypeScript parser for ESLint, and even a plugin which ports many TSLint rules into ESLint.
Therefore, ESLint might be the better choice going forward. To learn more about using ESLint for TypeScript, read through the docs of the typescript-eslint project.
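For reference, a minimal ESLint configuration for a TypeScript project might look like the sketch below. This assumes you've installed the `@typescript-eslint/parser` and `@typescript-eslint/eslint-plugin` packages; check the typescript-eslint docs for the currently recommended setup.

```json
{
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint"],
  "extends": ["eslint:recommended", "plugin:@typescript-eslint/recommended"]
}
```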
A quick primer to TypeScript types
The following section contains some quick references on how TypeScript type system works. For a more detailed guide, read this 2ality blog post on TypeScript's type system.
Applying types
Once you've renamed your
.js files to
.ts (or
.tsx), you can enter type annotations. Type annotations are written using the
: TypeName syntax.
let assignedNumber: number | undefined = undefined
assignedNumber = 0
function greetPerson(name: string) {
  return `Hello, ${name}!`
}
You can also define return types for a function.
function isFinishedGreeting(name: string): boolean {
  return getPerson(name).isGreeted()
}
Primitive & unit types
TypeScript has a few supported primitive types. These are the most basic data types available within the JavaScript language, and to an extent TypeScript as well.
// Boolean
let isDone: boolean = false

// Number
let decimal: number = 6
let hex: number = 0xf00d
let binary: number = 0b1010
let octal: number = 0o744

// String
let standardString: string = 'Hello, world!'
let templateString: string = `Your number is ${decimal}`
These primitive types can also be turned into unit types, where values can be their own types.
// This variable can only have one possible value: 42.
let fortyTwo: 42 = 42

// A unit type can also be combined with other types.
// The `|` turns this into a union type. We'll go through it in the next section.
let maybeFalsey: 0 | false | null | undefined
Intersection & union types
You can combine two or more types together using intersection and union types.
Union types can be used for types/variables that can have one of several types. This tells TypeScript that "variable/type X can be of either type A or type B."
function formatCommandline(command: string[] | string) {
  var line = ''
  if (typeof command === 'string') {
    line = command.trim()
  } else {
    line = command.join(' ').trim()
  }
  return line
}
Intersection types can be used to combine multiple types into one. This tells TypeScript that "variable/type X contains type A and B."
type A = { a: string }
type B = { b: string }

type Combined = A & B // { a: string, b: string }
// Example usage of intersection types.
// Here we take two objects, then combine them into one whilst using
// intersection types to combine the types of both objects into one.
function extend<T, U>(first: T, second: U): T & U {
  // Use TypeScript type casting to create an object with the combined type.
  let result = {} as T & U

  // Combine the objects.
  for (let id in first) {
    result[id] = first[id]
  }
  for (let id in second) {
    if (!result.hasOwnProperty(id)) {
      result[id] = second[id]
    }
  }

  return result
}

const x = extend({ a: 'hello' }, { b: 42 })

// `x` now has both the `a` and `b` properties.
console.log(x.a)
console.log(x.b)
types and interfaces
For defining types of objects with a complex structure, you can use either the
type or the
interface syntax. Both work essentially the same, with
interface being well-suited for object-oriented patterns with classes.
// Types
type ComponentProps = {
  title?: string
}

function ReactComponent(props: ComponentProps) {
  return <div>{props.title}</div>
}
// Interfaces
interface TaskImpl {
  start(): void
  end(): void
}

class CreepTask implements TaskImpl {
  state: number = 0

  start() {
    this.state = 1
  }

  end() {
    this.state = 0
  }
}
Generics
Generics provide meaningful type constraints between members.
In the example below, we define an Action type where the
type property can be anything that we pass into the generic.
interface Action<T = any> {
  type: T
}
The type that we defined inside the generic will be passed down to the
type property. In the example below,
type will have a unit type of
'FETCH_USERS'.
// You can also use `Action<string>` for any string value.
interface FetchUsersAction extends Action<'FETCH_USERS'> {
  payload: UserInfo[]
}

type AddUserAction = Action<'ADD_USER'>

const action: AddUserAction = {
  type: 'ADD_USER'
}
Declaration files
You can let TypeScript know that you're trying to describe some code that exists somewhere in your library (a module, global variables/interfaces, or runtime environments like Node). To do this, we use the
declare keyword.
Declaration files always have a
.d.ts file extension.
// For example, to annotate Node's `require()` call
declare const require: (module: string) => any

// Now you can use `require()` everywhere in your code!
require('whatwg-fetch')
You can include this anywhere in your code, but normally they're included in a declaration file. Declaration files have a
.d.ts extension, and are used to declare the types of your own code, or code from other libraries. Normally, projects will include their declaration files in something like a
declarations.d.ts file and will not be emitted in your compiled code.
You can also constrain declarations to a certain module in the
declare module syntax. For example, here's a module that has a default export called
doSomething().
declare module 'module-name' {
  // You can also export types inside modules so library consumers can use them.
  export type ExportedType = { a: string; b: string }

  const doSomething: (param: ExportedType) => any

  export default doSomething
}
Let's migrate!
Alright, enough with the lectures, let's get down and dirty! We're going to take a look at a real-life project, take a few modules, and convert them to TypeScript.
To do this, I've enlisted the help of my Thai friend named Thai (yeah, I know). He has a massive, web-based rhythm game project named Bemuse, and he's been planning to migrate it to TypeScript. So let's look at some parts of the code and try migrating them to TS where we can.
From .js to .ts
Consider the following module:
Here we have your typical JavaScript module. A simple module with a function type-annotated with JSDoc, and two other non-annotated functions. And we're going to turn this bad boy into TypeScript.
To make a file in your project a TypeScript file, we just need to rename it from
.js to
.ts. Easy, right?
Oh no! We're starting to see some red! What did we do wrong?
This is fine, actually! We've just enabled our TypeScript type-checking by doing this, so what's left for us is to add types as we see fit.
The first thing to do is to add parameter types to these functions. As a quick way to get started, TypeScript allows us to infer types from usage and include them in our code. If you use Visual Studio Code, click on the lightbulb that appears when your cursor is in the function name, and click on "Infer parameter types from usage".
If your functions/variables are documented using JSDoc, this gets much easier as TS can also infer parameter types from JSDoc annotations.
Note that TypeScript generated a partial object schema for the function at the bottom of this file based on usage. We can use it as a starting point to improve its definition using
interfaces and
types. For example, let's take a look at this line.
/**
 * Returns the accuracy number for a play record.
 */
export function formattedAccuracyForRecord(record: { count: any; total: any }) {
  return formatAccuracy(calculateAccuracy(record.count, record.total))
}
We already know that we have properties
count and
total in this parameter. To make this code cleaner, we can put this declaration into a separate
type/
interface. You can include this within the same file, or separately on a file reserved for common types/interfaces, e.g.
types.ts
export type RecordItem = {
  count: any
  total: any
  [key: string]: any
}
import { RecordItem } from 'path/to/types'

/**
 * Returns the accuracy number for a play record.
 */
export function formattedAccuracyForRecord(record: RecordItem) {
  return formatAccuracy(calculateAccuracy(record.count, record.total))
}
Dealing with external modules
With that out of the way, now we're going to look at how to migrate files with external modules. For a quick example, we have the following module:
We've just renamed this raw JS file into
.ts and we're seeing a few errors. Let's take a look at them.
On the first line, we can see that TypeScript doesn't understand how to deal with the
lodash module we imported. If we hovered over the red squiggly line, we can see the following:
Could not find a declaration file for module 'lodash-es'. '/Users/resir014/etc/repos/bemusic/bemuse/node_modules/lodash/lodash.js' implicitly has an 'any' type. Try `npm install @types/lodash` if it exists or add a new declaration (.d.ts) file containing `declare module 'lodash';`
As the error message says, all we need to do to fix this error is to install the type declaration for
lodash.
$ npm install --save-dev @types/lodash
This declaration file comes from DefinitelyTyped, an extensive repository of community-maintained declaration files for the Node runtime, as well as many popular libraries. All of them are autogenerated and published in the
@types/ scope on npm.
Some libraries include their own declaration files. If a project is compiled from TypeScript, the declarations will be automatically generated. You can also create declaration files manually for your own library, even when your project is not built using TypeScript. When generating declaration files inside a module, be sure to include them inside a
types or
typings key in the
package.json. This will make sure the TypeScript compiler knows where to look for the declaration file for said module.
{
  "main": "./lib/index.js",
  "types": "./types/index.d.ts"
}
OK, so now we have the type declarations installed, how does our TS file look like?
Whoa, what's this? I thought only one of those errors would be gone? What's happening here?
Another power of TypeScript is that it's able to infer types based on how data flows throughout your module. This is called control-flow based type analysis. This means that TypeScript will know that the
chart inside the
.orderBy() call comes from what was passed from the previous calls. So the only type error that we have to fix now would be the function parameter.
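To see this narrowing on a smaller, self-contained scale, here's an illustrative sketch (the `Chart` type here is made up for this example, not Bemuse's actual model):

```typescript
type Chart = { name: string; level?: number }

function chartLevel(chart: Chart): number {
  // At this point `chart.level` is `number | undefined`.
  if (chart.level === undefined) {
    return 0
  }
  // Control-flow analysis has narrowed `chart.level` to `number` here,
  // so arithmetic is allowed without any casts or extra annotations.
  return chart.level * 10
}

console.log(chartLevel({ name: 'intro' }))           // 0
console.log(chartLevel({ name: 'boss', level: 12 })) // 120
```

No annotations are needed inside either branch; the compiler tracks the type through the `if` check for us.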
But what about libraries without type declarations? In the first part of my post, I came across this comment.
Some packages include their own typings within the project, so oftentimes they will get picked up by the TypeScript compiler. But in case we have neither built-in typings nor an
@types package for the library, we can create a shim for these libraries using ambient declarations (
*.d.ts files).
First, create a folder in your source directory to hold ambient declarations. Call it
types/ or something so we can easily find them. Next, create a file to hold our own custom declarations for said library. Usually we use the library name, e.g.
evergreen-ui.d.ts.
Now inside the
.d.ts file we just created, put the following:
declare module 'evergreen-ui'
This will shim the
evergreen-ui module so we can import it safely without the "Cannot find module" errors.
Note that this doesn't give you autocompletion support, so you will have to declare the API for said library manually. This is optional of course, but very useful if you want better autocompletion.
For example, if we were to use Evergreen UI's Button component:
// Import React's base types for us to use.
import * as React from 'react'

declare module 'evergreen-ui' {
  export interface ButtonProps
    extends DimensionProps, SpacingProps, PositionProps, LayoutProps {
    // The extended props above are examples of extending common props, and
    // their definitions are not included in this example for brevity.
    intent: 'none' | 'success' | 'warning' | 'danger'
    appearance: 'default' | 'minimal' | 'primary'
    isLoading?: boolean

    // Again, skipping the rest of the props for brevity, but you get the idea.
  }

  export class Button extends React.PureComponent<ButtonProps> {}
}
And that's it for part 2! The full guide concludes here, but if there are any more questions after this post was published, I'll try to answer some of them in part 3.
As a reminder, the
#typescript channel on the Reactiflux Discord server has a bunch of lovely people who know TypeScript inside and out. Feel free to hop in and ask any question about TypeScript! | https://resir014.xyz/posts/2019/02/25/migrating-to-typescript-part-2 | CC-MAIN-2022-21 | refinedweb | 2,830 | 65.22 |
Hello! I am beginner trying to learn c++. Yesterday I wrote a simple calculator program, but when I run it I get strange results. Here is the code:
Code:
#include <iostream>

using namespace std;

int main()
{
    int br1;
    int br2;

    cout << "Enter first number: ";
    cin >> br1;
    cout << "Enter second number: ";
    cin >> br2;

    cout << endl << "Results: " << endl << endl;

    cout << "First + second number = " << br1 + br2 << endl;
    cout << "First - second number = " << br1 - br2 << endl;
    cout << "First * second number = " << br1 * br2 << endl;
    cout << "First / second number = " << br1 / br2 << endl;

    cin.ignore();
    cin.get();
}
When I compile and run the program, I get this (red color is what I enter):
Enter first number: 34593874
Enter second number: 34532224535
Results:
First + second number = -2112889775
First - second number = -2112889773
First * second number = -34593874
First / second number = 0
Process returned 0 (0x0) execution time : 8.596 s
Press any key to continue.
How could I get a negative number from adding or multiplying two positive numbers?
Should it not be that 34593874 + 34532224535 = 34566818409 and not -2112889775 ?
It is probably a silly question, but I don't get it.
PS. I used Code Blocks compiler. | http://cboard.cprogramming.com/cplusplus-programming/154769-beginner-calculator-program-problem.html | CC-MAIN-2014-15 | refinedweb | 202 | 66.37 |
The current code is extremely short. If I understand the "in" operator correctly, shouldn't the for loop only return True if both elements of [1,3] are in [1,4,5]? Right now I am getting True for all of my tests. I feel like there is an easy fix to this, I just don't know what it is.
I tried putting an if statement between the for and return lines, but that still only returned True.
def innerOuter(arr1, arr2):
    for arr1 in arr2:
        return True
    return False
You have to use
if one_element in array
def innerOuter(arr1, arr2):
    for x in arr1:
        if x not in arr2:
            return False
    return True

innerOuter([1,3], [1,4,5]) # False
innerOuter([1,4], [1,4,5]) # True
Or you can use
set() to check it
def innerOuter(arr1, arr2):
    return set(arr1).issubset(set(arr2))

innerOuter([1,3], [1,4,5]) # False
innerOuter([1,4], [1,4,5]) # True
The same:
def innerOuter(arr1, arr2):
    return set(arr1) <= set(arr2)
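One caveat worth noting about the set-based versions: sets collapse duplicates, so they answer "are all distinct values of arr1 in arr2", which can differ from a count-sensitive containment check.

```python
def inner_outer_set(arr1, arr2):
    # Set-based subset check: ignores order and duplicate counts.
    return set(arr1) <= set(arr2)

print(inner_outer_set([1, 1, 4], [1, 4, 5]))  # True: the duplicate 1 collapses in the set
print(inner_outer_set([1, 3], [1, 4, 5]))     # False: 3 is missing
```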
Today, machine learning underpins big innovations and promises to continue enabling companies to make the best decisions through accurate predictions. But what happens when these algorithms are highly error-prone and unaccountable?
That is when Ensemble Learning saves the day!
AdaBoost is an ensemble learning method (also known as “meta-learning”) which was initially created to increase the efficiency of binary classifiers. AdaBoost uses an iterative approach to learn from the mistakes of weak classifiers, and turn them into strong ones.
In this article we'll learn about the following modules:
- What is Ensemble Learning?
- Types of Ensemble Methods
- Boosting in Ensemble Methods
- Types of Boosting Algorithms
- Unraveling AdaBoost
- Pseudocode of AdaBoost
- Implementation of AdaBoost Using Python
- Advantages and Disadvantages of AdaBoost
- Summary and Conclusion
You can run the code for this tutorial for free on the ML Showcase.
What Is Ensemble Learning?
Ensemble learning combines several base algorithms to form one optimized predictive algorithm. For example, a typical Decision Tree for classification takes several factors, turns them into rule questions, and given each factor, either makes a decision or considers another factor. The result of the decision tree can become ambiguous if there are multiple decision rules, e.g. if the threshold to make a decision is unclear or we introduce new sub-factors for consideration. This is where Ensemble Methods come to the rescue. Instead of relying on one Decision Tree to make the right call, Ensemble Methods take several different trees and aggregate them into one final, strong predictor.
Types Of Ensemble Methods
Ensemble Methods can be used for various reasons, mainly to:
- Decrease Variance (Bagging)
- Decrease Bias (Boosting)
- Improve Predictions (Stacking)
Ensemble Methods can also be divided into two groups:
- Sequential Learners, where different models are generated sequentially and the mistakes of previous models are learned by their successors. This aims at exploiting the dependency between models by giving the mislabeled examples higher weights (e.g. AdaBoost).
- Parallel Learners, where base models are generated in parallel. This exploits the independence between models by averaging out the mistakes (e.g. Random Forest).
Boosting in Ensemble Methods
Just as humans learn from their mistakes and try not to repeat them further in life, the Boosting algorithm tries to build a strong learner (predictive model) from the mistakes of several weaker models. You start by creating a model from the training data. Then, you create a second model from the previous one by trying to reduce the errors from the previous model. Models are added sequentially, each correcting its predecessor, until the training data is predicted perfectly or the maximum number of models have been added.
Boosting basically tries to reduce the bias error which arises when models are not able to identify relevant trends in the data. This happens by evaluating the difference between the predicted value and the actual value.
Types of Boosting Algorithms
- AdaBoost (Adaptive Boosting)
- Gradient Tree Boosting
- XGBoost
In this article, we will be focusing on the details of AdaBoost, which is perhaps the most popular boosting method.
Unraveling AdaBoost
AdaBoost (Adaptive Boosting) is a very popular boosting technique that aims at combining multiple weak classifiers to build one strong classifier. The original AdaBoost paper was authored by Yoav Freund and Robert Schapire.
A single classifier may not be able to accurately predict the class of an object, but when we group multiple weak classifiers with each one progressively learning from the others' wrongly classified objects, we can build one such strong model. The classifier mentioned here could be any of your basic classifiers, from Decision Trees (often the default) to Logistic Regression, etc.
Now we may ask, what is a "weak" classifier? A weak classifier is one that performs better than random guessing, but still performs poorly at designating classes to objects. For example, a weak classifier may predict that everyone above the age of 40 could not run a marathon but people falling below that age could. Now, you might get above 60% accuracy, but you would still be misclassifying a lot of data points!
Rather than being a model in itself, AdaBoost can be applied on top of any classifier to learn from its shortcomings and propose a more accurate model. It is usually called the “best out-of-the-box classifier” for this reason.
Let's try to understand.
An Example of How AdaBoost Works
Step 1: A weak classifier (e.g. a decision stump) is made on top of the training data based on the weighted samples. Here, the weights of each sample indicate how important it is to be correctly classified. Initially, for the first stump, we give all the samples equal weights.
Step 2: We create a decision stump for each variable and see how well each stump classifies samples to their target classes. For example, in the diagram below we check for Age, Eating Junk Food, and Exercise. We'd look at how many samples are correctly or incorrectly classified as Fit or Unfit for each individual stump.
Step 3: More weight is assigned to the incorrectly classified samples so that they're classified correctly in the next decision stump. Weight is also assigned to each classifier based on the accuracy of the classifier, which means high accuracy = high weight!
Step 4: Reiterate from Step 2 until all the data points have been correctly classified, or the maximum iteration level has been reached.
Note: Some stumps get more say in the classification than other stumps.
The Mathematics Behind AdaBoost
Here comes the hair-tugging part. Let's break AdaBoost down, step-by-step and equation-by-equation so that it's easier to comprehend.
Let's start by considering a dataset with N points, or rows, in our dataset.
In this case,
- n is the dimension of real numbers, or the number of attributes in our dataset
- x is the set of data points
- y is the target variable which is either -1 or 1 as it is a binary classification problem, denoting the first or the second class (e.g. Fit vs Not Fit)
We calculate the weighted samples for each data point. AdaBoost assigns weight to each training example to determine its significance in the training dataset. When the assigned weights are high, that set of training data points are likely to have a larger say in the training set. Similarly, when the assigned weights are low, they have a minimal influence in the training dataset.
Initially, all the data points will have the same weighted sample w:
where N is the total number of data points.
The weighted samples always sum to 1, so the value of each individual weight will always lie between 0 and 1. After this, we calculate the actual influence for this classifier in classifying the data points using the formula:
Alpha is how much influence this stump will have in the final classification. Total Error is nothing but the total number of misclassifications for that training set divided by the training set size. We can plot a graph for Alpha by plugging in various values of Total Error ranging from 0 to 1.
Notice that when a Decision Stump does well, or has no misclassifications (a perfect stump!) this results in an error rate of 0 and a relatively large, positive alpha value.
If the stump just classifies half correctly and half incorrectly (an error rate of 0.5, no better than random guessing!) then the alpha value will be 0. Finally, when the stump ceaselessly gives misclassified results (just do the opposite of what the stump says!) then the alpha would be a large negative value.
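To build intuition for that graph, here's a small standalone sketch (separate from the scikit-learn example later in this article) that computes alpha using the standard AdaBoost formula, alpha = 0.5 * ln((1 - error) / error):

```python
import math

def stump_alpha(total_error, eps=1e-10):
    """Amount of say of a stump: 0.5 * ln((1 - error) / error).

    eps keeps the logarithm finite for perfect (error = 0)
    or hopeless (error = 1) stumps."""
    total_error = min(max(total_error, eps), 1 - eps)
    return 0.5 * math.log((1 - total_error) / total_error)

print(stump_alpha(0.05))  # near-perfect stump: large positive influence
print(stump_alpha(0.5))   # coin-flip stump: zero influence
print(stump_alpha(0.95))  # mostly-wrong stump: large negative influence
```

Notice the symmetry: a stump that is wrong 95% of the time gets exactly the opposite influence of one that is right 95% of the time.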
After plugging in the actual values of Total Error for each stump, it's time for us to update the sample weights which we had initially taken as 1/N for every data point. We'll do this using the following formula:
In other words, the new sample weight will be equal to the old sample weight multiplied by Euler's number, raised to plus or minus alpha (which we just calculated in the previous step).
The two cases for alpha (positive or negative) indicate:
- Alpha is positive when the predicted and the actual output agree (the sample was classified correctly). In this case we decrease the sample weight from what it was before, since we're already performing well.
- Alpha is negative when the predicted output does not agree with the actual class (i.e. the sample is misclassified). In this case we need to increase the sample weight so that the same misclassification does not repeat in the next stump. This is how the stumps are dependent on their predecessors.
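Putting the two cases together, a sketch of one round of weight updates might look like this. The renormalization step (dividing by the sum) is my addition here; it is what keeps the weights summing to 1, as described earlier.

```python
import math

def update_weights(weights, alpha, correctly_classified):
    """One AdaBoost reweighting round.

    w_new = w_old * e^(-alpha) for correctly classified samples (shrink),
    w_new = w_old * e^(+alpha) for misclassified samples (grow),
    then renormalize so the weights sum to 1 again."""
    updated = [
        w * math.exp(-alpha if ok else alpha)
        for w, ok in zip(weights, correctly_classified)
    ]
    total = sum(updated)
    return [w / total for w in updated]

# Four samples with equal weight; only the last one was misclassified.
new_w = update_weights([0.25, 0.25, 0.25, 0.25], alpha=0.5,
                       correctly_classified=[True, True, True, False])
print(new_w)  # the misclassified sample now carries more of the weight
```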
Pseudocode of AdaBoost
Initially set uniform example weights.

for Each base learner do:
    Train base learner with a weighted sample.
    Test base learner on all data.
    Set learner weight with a weighted error.
    Set example weights based on ensemble predictions.
end for
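The loop above can be sketched end-to-end in a few lines of NumPy. This toy implementation is my own illustration (not from the original paper): it uses simple threshold stumps as the weak learners on a 1-D dataset that no single stump can classify perfectly.

```python
import numpy as np

# Toy 1-D dataset: no single threshold stump classifies it perfectly.
X = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([1, 1, 1, -1, -1, -1, 1, 1])  # labels in {-1, +1}

def best_stump(X, y, w):
    """Weak learner: pick the threshold/sign pair with the lowest weighted error."""
    best = None
    for thr in X:
        for sign in (1, -1):
            pred = np.where(X < thr, sign, -sign)
            err = w[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

w = np.full(len(X), 1 / len(X))  # uniform example weights
ensemble = []
for _ in range(10):
    err, thr, sign = best_stump(X, y, w)
    err = np.clip(err, 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)  # learner weight from weighted error
    pred = np.where(X < thr, sign, -sign)
    w = w * np.exp(-alpha * y * pred)       # grow wrong samples, shrink right ones
    w = w / w.sum()                         # renormalize so weights sum to 1
    ensemble.append((alpha, thr, sign))

# Final strong classifier: sign of the alpha-weighted vote of all stumps.
F = sum(a * np.where(X < t, s, -s) for a, t, s in ensemble)
print((np.sign(F) == y).mean())  # training accuracy of the ensemble
```

The best single stump here misclassifies two points, but the weighted vote of several stumps recovers the "low, high, low" label pattern.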
Implementation of AdaBoost Using Python
Step 1: Importing the Modules
As always, the first step in building our model is to import the necessary packages and modules.
In Python we have the
AdaBoostClassifier and
AdaBoostRegressor classes from the scikit-learn library. For our case we would import
AdaBoostClassifier (since our example is a classification task). The
train_test_split method is used to split our dataset into training and test sets. We also import
datasets, from which we will use the the Iris Dataset.
from sklearn.ensemble import AdaBoostClassifier
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn import metrics
Step 2: Exploring the data
You can use any classification dataset, but here we'll use traditional Iris dataset for a multi-class classification problem. This dataset contains four features about different types of Iris flowers (sepal length, sepal width, petal length, petal width). The target is to predict the type of flower from three possibilities: Setosa, Versicolour, and Virginica. The dataset is available in the scikit-learn library, or you can also download it from the UCI Machine Learning Library.
Next, we make our data ready by loading it from the datasets package using the load_iris() method. We assign the data to the iris variable.
Further, we split our dataset into input variable X, which contains the features sepal length, sepal width, petal length, and petal width.
y is our target variable, or the class that we have to predict: either Iris Setosa, Iris Versicolour, or Iris Virginica. Below is an example of what our data looks like.
iris = datasets.load_iris()
X = iris.data
y = iris.target

print(X)
print(y)

Output:
[[5.1 3.5 1.4 0.2]
 [4.9 3.  1.4 0.2]
 [4.7 3.2 1.3 0.2]
 [4.6 3.1 1.5 0.2]
 [5.8 4.  1.2 0.2]
 [5.7 4.4 1]
Step 3: Splitting the data
Splitting the dataset into training and testing datasets is a good idea to see if our model is classifying the data points correctly on unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
Here we split our dataset into 70% training and 30% test which is a common scenario.
Step 4: Fitting the Model
Now we build the AdaBoost model. AdaBoost uses a Decision Tree as its base learner by default. We make an AdaBoostClassifier object and name it abc. A few important parameters of AdaBoost are:
- base_estimator: It is a weak learner used to train the model.
- n_estimators: The maximum number of weak learners to train, i.e. the number of boosting iterations.
- learning_rate: Shrinks the contribution of each weak learner. It uses 1 as a default value.
abc = AdaBoostClassifier(n_estimators=50, learning_rate=1)
We then go ahead and fit our object abc to our training dataset. We call it a model.
model = abc.fit(X_train, y_train)
Step 5: Making the Predictions
Our next step is to see how well our model predicts the target values.
y_pred = model.predict(X_test)
In this step, we make predictions on the unseen test data using the predict() method, which returns the predicted class for each observation.
Step 6: Evaluating the model
The Model accuracy will tell us how many times our model predicts the correct classes.
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))

Output:
Accuracy: 0.8666666666666667
You get an accuracy of 86.66%, which is not bad. You can experiment with various other base learners, like Support Vector Machines or Logistic Regression, which might give you higher accuracy.
Advantages and Disadvantages of AdaBoost
AdaBoost has a lot of advantages; mainly, it is easier to use, with less need for parameter tweaking than algorithms like SVM. As a bonus, you can also use AdaBoost with SVM. Theoretically, AdaBoost is not prone to overfitting, though there is no concrete proof for this. It could be because parameters are not jointly optimized: stage-wise estimation slows down the learning process. To understand the math behind it in depth, you can follow this link.
AdaBoost can be used to improve the accuracy of your weak classifiers hence making it flexible. It has now being extended beyond binary classification and has found use cases in text and image classification as well.
A few disadvantages of AdaBoost are:

Since the boosting technique learns progressively, it is important to ensure that you have quality data. AdaBoost is also extremely sensitive to noisy data and outliers, so if you do plan to use AdaBoost, it is highly recommended to eliminate them first.
AdaBoost has also been proven to be slower than XGBoost.
Summary and Conclusion
In this article, we have discussed the various ways to understand the AdaBoost Algorithm. We started by introducing you to Ensemble Learning and it's various types to make sure that you understand where AdaBoost falls exactly. We discussed the pros and cons of the algorithm and gave you a quick demo on its implementation using Python.
AdaBoost is like a boon to improve the accuracy of our classification algorithms if used accurately. It is the first successful algorithm to boost binary classification. AdaBoost is increasingly being used in the industry and has found its place in Facial Recognition systems to detect if there is a face on the screen or not.
Hope this article was able to pique your curiosity enough for you to research more in-depth about AdaBoost and various other boosting algorithms.
Hello all, I am back with another little project which may or may not help out other WPF devs. Coming off of Hanasu, I have become disgusted by the kind of code I was writing. With Hanasu 2, I wanted to do MVVM, which seemed to be the new thing these days. It meant I could keep all my logical code away from the view (and its code). Over the years, I've been reading the works of Sacha Barber (and his MVVM framework, Cinch), Josh Smith and a few others. Reading their stuff, I began to get an idea of what MVVM was.
I don't know about you but, when I need to learn something, I learn it once and write something so I won't have to remember it. It got me through pre-calc, it applies here. So I set off on implementing the MVVM side of Crystal. The first thing that was implemented, obviously, is the view model. In Crystal, there are two classes for view models: BaseViewModel and DependencyBaseViewModel. Both implement INotifyPropertyChanged, which is necessary for real-time data binding. Another neat thing about them is that they handle firing the PropertyChanged event and storing variables for you, like so:
using Crystal.Core;

public class SampleViewModel : BaseViewModel
{
    public object SomeProperty
    {
        get { return (object)this.GetProperty("SomeProperty"); }
        set { this.SetProperty("SomeProperty", value); }
    }
}
Imports Crystal.Core

Public Class SampleViewModel
    Inherits BaseViewModel

    Public Property SomeProperty As Object
        Get
            Return Me.GetProperty("SomeProperty")
        End Get
        Set(ByVal value As Object)
            Me.SetProperty("SomeProperty", value)
        End Set
    End Property
End Class
Of course, the BaseViewModel has some more advanced features which will be revealed later once I get a chance to fully test them. Anyway, the next part is the Models (the first M in MVVM). Models are your structs or classes that contain data; for example, an email could be a Model. The email server could act like the ViewModel while the View itself would be your email client/website. Models that inherit from BaseModel get some of the same fancy features as BaseViewModel (Get/SetProperty), along with more, such as IDataErrorInfo (which is incomplete at the time of writing) for displaying conflicts with the model on your GUI. There is also a subclass of BaseModel called EditableBaseModel which implements IEditableObject so that changes to the model can be reverted if needed.
Commands
Now that I had the base framework down, I needed to be able to execute code without adding it to the view's (Window or control's) code-behind. WPF and Silverlight have an interface called ICommand. Yeah, you guessed it: it executes code when activated. Most controls that inherit from ButtonBase have a Command property. Just define a property that returns an ICommand and bind to it. Bam, it executes said command when the button is clicked. I was unaware at the time that RelayCommand and DelegateCommand existed, so I wrote my own class called CrystalCommand which gets the job done. The nice thing about these commands is that they bind to your ViewModel/Model's PropertyChanged event and will activate/deactivate based on the function you define. For example, you can define a CrystalCommand that can only execute when 'SomeProperty' is not null (Nothing in VB.NET).
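CrystalCommand's source isn't shown in this post, but the idea it describes (re-querying CanExecute whenever the owning ViewModel raises PropertyChanged) can be sketched roughly like this. The class and member names below are my own illustration, not Crystal's actual code:

```csharp
using System;
using System.ComponentModel;
using System.Windows.Input;

// Hypothetical sketch of the idea behind CrystalCommand, not its actual source.
public class SketchCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public SketchCommand(Action<object> execute, Predicate<object> canExecute,
                         INotifyPropertyChanged owner)
    {
        _execute = execute;
        _canExecute = canExecute;
        // Re-query CanExecute whenever the owning ViewModel raises PropertyChanged,
        // so bound buttons enable/disable themselves automatically.
        owner.PropertyChanged += (s, e) =>
        {
            var handler = CanExecuteChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        };
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }
}
```

A button bound to such a command would, for instance, stay disabled while 'SomeProperty' is still null.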
For the most part, only ButtonBase-derived controls and MenuItems can execute commands. What if I wanted to have a command execute when something was double-clicked on, say, a ListView? Well, Microsoft has released a library called System.Windows.Interactivity (formerly, Microsoft.Expression.Interactivity?) and it was sort of hard to come by. I did not want to bog down the main library with non-GAC references, so whenever I reference a non-GAC assembly, I add it to Crystal.Extras. The Extras lib has a special trigger (from the Interactivity lib, not the standard Triggers) called EventToCommand. It fires a command when an event is triggered, obviously. So that takes care of executing commands. But there's more!
Messaging
Sacha's Cinch has something called a Mediator which allows multiple ViewModels to communicate without having a direct reference to each other. It sounded like a good idea and I've toyed around with something similar before. The notifications in Hanasu used some of my earlier code. My latest IRC library uses a slightly improved version (even though it was written before Hanasu. YAY for code reuse!)
I grabbed my notifications code and rewrote it. It is called Messenger. It's fast and easy to use. BaseViewModel implements an interface called IMessageReceiver which contains one function: ReceiveMessage. The Messenger (class) fires this method whenever a registered ViewModel (all view models automatically register) gets a message. BaseViewModel implements that method and makes it 'virtual' (Overridable in VB.NET) so that users can get down to the dirt with it. I was happy with this method at first but I wanted a simpler, cleaner way to do it. That was when I came up with the idea (or did I take it from Cinch as well?) to add an attribute to a method. The method would fire when a message with the correct message string is received:
[MessageHandler("ClientAdded")]
public void HandleClientAdded(object data)
{
    Clients.Add(new Client() { Fullname = data.ToString() });
}
Whenever the message "ClientAdded" is broadcast, this method fires along with the message's payload. That is a very asynchronous way to do it, but what about doing it synchronously? Well, the Messenger class has a method called WaitForMessage which blocks until a message matching the message string is received. It can also time out after a certain time limit. Once you're ready to receive messages, your other VMs (ViewModels) can push them back. Using Messenger.PushMessage, you can broadcast a message and its data to all view models.
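Based on the description above, usage might look something like this. The exact method signatures are my assumption (check the Messenger class for the real ones):

```csharp
// Broadcast: any registered ViewModel with a matching
// [MessageHandler("ClientAdded")] method receives the payload.
Messenger.PushMessage(this, "ClientAdded", "Jane Doe");

// Synchronous: block until a "ClientAdded" message arrives, or a timeout elapses.
object payload = Messenger.WaitForMessage("ClientAdded", TimeSpan.FromSeconds(5));
```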
Localization
I don't know about you but when I write applications, I would like to have them translatable from the start. Especially when I started using MVVM. The standard .resx method was not for me. I can go through the story of how I did what I did but I'll save that for another time. Long story short, I use .INI files for localizing.
[locale]
; Provide locale information to the Crystal localization manager.
Name = English
EnglishName = English
IetfLanguageTag = en-US

[data]
; Define key/value pairs that represent text used in an application.
FirstNameHeader = First Name
LastNameHeader = Last Name
IDHeader = ID
CustomerTabHeader = Customers
AddCustomerToolTip = Add a new customer to the customer list.
CancelAddCustomerToolTip = Cancel this operation and close this dialog.
AcceptAddCustomerToolTip = Accept the entered information and close this dialog.
Crystal has a class called the LocalizationManager which is in charge of localizing your applications. Use its ProbeDirectory function to read your .ini files. If you do it early enough, your application will have the correct (localized) text without 'changing' in front of the user. You can also call its SwitchLocale method as well. The manager has a function called GetLocalizedValue which, as you guessed, gets the correct value for the current Culture from your .ini files.
Another neat thing is the CrystalLocalizedValueMarkupExtension which calls GetLocalizedValue for you so you do not have to go into code and hard-code your text. As an added bonus, if the locale is switched (using the SwitchLocale function), all text using the markup extension will automatically update to the new localized value.
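In XAML, using the markup extension might look something like this. The xmlns prefix and the exact usage syntax are my assumptions for illustration; 'FirstNameHeader' is one of the [data] keys from the .ini file shown above:

```xml
<!-- 'loc' would map to the Crystal namespace in your XAML root element. -->
<Label Content="{loc:CrystalLocalizedValue FirstNameHeader}" />
```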
Services
Another neat feature, which I give Sacha credit for, is services. They allow you to define things such as message boxes, custom file dialogs, and error loggers separate from your ViewModels. Again, you do not need a direct reference to them. You simply call one like so:
//Taken from Hanasu 2
ServiceManager.Resolve<IMessageBoxService>().ShowMessage("Connection Error",
    "Unable to stream from station:" + Environment.NewLine + ex.Message);
The above code finds the implementation of IMessageBoxService in Hanasu via MEF (manual reflection coming later) and uses it to show a message. My MessageBox service is defined like this:
namespace Hanasu.Services
{
    [Export(typeof(IMessageBoxService))]
    public class HanasuMessageBoxService : IMessageBoxService
    {
        public void ShowMessage(string title = "Title", string message = "Message")
        {
            MessageBoxWindow mbw = new MessageBoxWindow(title, message,
                System.Windows.MessageBoxButton.OK);
            mbw.Owner = Application.Current.MainWindow;
            mbw.ShowDialog();
            mbw.Close();
        }

        public bool? ShowOkayCancelMessage(string title, string message)
        {
            throw new NotImplementedException();
        }
    }
}
Misc
Early on, I wanted to implement something similar to Rx and ReactiveUI. Crystal came pretty close to the general idea. I defined something called Tasks which are not finished, but they work similarly to Rx and are pretty close to my 'Carbon' idea from a while ago. I'll speak more on this later.
Crystal has some other useful features. For example, how do you close a window from a view model? Easy. Use the ViewModelOperations class. It has a handy method which finds the current window by reference using the current view model. You can optionally define a dialog result as well.
Another thing is the IsMainWindow attached property which Crystal defines. Basically, it automatically sets the said window as the Application.Current.MainWindow. This is useful for making dialog boxes (as shown above).
Lastly, I have ported a decent chunk of Crystal to WP7 (relatively easy to get it to Desktop Silverlight) for use in a possible future project.
TL;DR
I have created a library that serves as a framework for my future applications and I wanted to share it with everyone. It has features for validation, localization, MVVM and more!
You can find the source here, with binaries coming soon. Also, here is a sneak peek of Hanasu 2, which uses Crystal heavily.
This post has been edited by Amrykid: 30 August 2012 - 10:11 PM | http://www.dreamincode.net/forums/topic/290536-project-crystal-wpfwp7-mvvmreactiveui-likemisc-framework/ | CC-MAIN-2017-22 | refinedweb | 1,603 | 57.47 |
User:Aethix/HowTo:Write a C++ program
From Uncyclopedia, the content-free encyclopedia
“In C++, STDs are standard.”
So, you want to write a C++ program. You want to use your computer for something other than surfing porn sites, do you? You should know, it won't be easy. You must be an 1337 |-|@X0r to succeed. Are you an 1337 |-|@X0r? ARE YOU? HUH? Can you even understand what I'm saying? Oh we've got so much work to do. You have much to learn, young C student. Nevertheless, I am willing to teach you. Now, come with me, as we explore the exciting world that is C++.
Download an IDE
Let's face it, you can't program in MS Word, so you need to find an IDE and a compiler on the internet. So just open up your browser, close out all the porn sites, and find a free, open-source IDE. Now then, download the binary. You know, the download would go a lot faster if you weren't also downloading porn. Are we done yet? Excellent. Now then, just install it. Good. Hey, this isn't so hard, huh? Now just open it up...
Alright! You've got green on the screen already! Wait, that isn't right. WTF is going on here?
Consult the user guide
Okay, don't panic, just go back to the website you downloaded the IDE from. There should be some sort of instruction manual or something. What are you looking at me for? I'm just here to teach you how to program, not fix your little problems. Okay, lets see...
My IDE help - Interwebz Explorer
Okay, those are nice, we might use those later, I have no idea what you're talking about. How about the actual IDE itself? No? Nuts...
Ask some random geek on the internet
Okay, we probably aren't the only people who have experienced this problem, the internet is full of smart people who can help us. No, I don't think anybody on those porn sites can help us, but hey, nice try. Okay, I've gone to an online help forum, and I'm asking for help. And I'm done--what do you think? Well, I may have made you look kind of n00bish, but you've got to learn somehow. Oh wait, I just figured out what we're doing wrong, we need to add a source file. You'd better delete that. You can't? Wow, that's kind of embarrassing, huh? And those are some rather scathing replies, there. Sorry about that.
Your first program
Okay, you've wasted a couple of hours of your life and become an internet laughing stock, but you're ready to start programming!
We should start with a simple "Hello world" program, as all little n00bs should. Er, sorry. First, get off of the porn site... off... Now open up your IDE... Create a frickin' source file like I told you to... Just close out all those error messages, they aren't important. Now we are ready! Are you ready to write your first C++ program. Don't just sit there shaking your head, say "YES!" You are ready! YES! YOU! ARE! W00000000T!
I'm... I'm so proud of you. Sniff...
Let's get started. Hello, world. A new pwnz0r is born. Okay, now just type what I show you in the little box there...
My Shiny New IDE
#include <iostream>
using namespace std
int main
{
cout << "Hello world"
return 0
}
Alright, now just build and run and it should say "Hello world". Alrighty let's see here...
My Shiny New IDE
Expecting semicolons.
You forgot the semicolons.
You seriously forgot the semicolons?
LOL! U forgot the semicolons!
What a n00b! Go back to looking at pr0nz, n00b!
You seem to have acquired a particularly mean IDE. That's okay, you just need to put in some semicolons at the end of every line. Why? I don't know, somebody important decided we had to. I don't know, just put in the semicolons. Why didn't I tell you that sooner? Oh, what? This is my fault now? I suppose it's my fault everybody is LOLing at your n00bishness on the interwebz too. That's right--N00BISHNESS! N00B-ISH-NESS! Okay, okay, calm down, breathe, man, breathe. It's all cool. Let's just that try again...
My Shitty New IDE
STFUn00b! SYNTAX ERROR! SYNTAX ERROR! 68 errors detected.
Do you have any fucking idea what a semicolon is!?!?!
SEMICOLON!!1!
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
See? SEE?
;'
Hell, this program is so poorly written, it might be dangerous.
LET THIS BE A LESSON TO YOU!
Running killerapp.exe...
lol ur computer is fucked.
Oh dear, that's not good.
Buy a new computer
Yeah, it's too bad about your computer there. Hey, don't blame me, you should have been good enough to overcome my shortcomings. It was... you know... a test, you see. And you failed, so you must pay the penalty. LOL j/k, man, I'll buy you a new computer. I've got lots of money, you know. Mad programming skillz helps you to get rich, like Bill Gates. He didn't get rich by sitting on his ass and browsing porn. He was an 1337 |-|@X0r. You still don't know what I'm talking about? N00b... Sorry.
Alright, here you go. Try not to wreck this one.
You got a new computer. | http://uncyclopedia.wikia.com/wiki/User:Aethix/HowTo:Write_a_C%2B%2B_program | CC-MAIN-2013-48 | refinedweb | 923 | 87.21 |
.NET Framework 4.5.1 is an in-place update to .NET Framework 4.0 and .NET Framework 4.5. This version can run side-by-side with .NET 3.5 and earlier versions. Several new features were introduced in .NET Framework 4.5.1, along with improvements in the areas of application performance and debugging. Some of them are listed here:
This article is published from the DotNetCurry .NET Magazine – A Free High Quality Digital Magazine for .NET professionals published once every two months. Subscribe to this eMagazine for Free and get access to hundreds of free .NET tutorials from experts
64-Bit Edit and Continue support in Visual Studio 2013
The debugging feature Edit and Continue, as VB6 developers will recall, was a popular feature which allowed developers to make changes to the code while debugging it. This feature allows you to fix bugs quickly without restarting the debugging session, which in turn speeds up testing of various scenarios.
Versions prior to Visual Studio 2013 provided support for Edit and Continue for 32-bit targeted environment. If you tried Edit and Continue for a 64-bit targeted environment in a version prior to VS 2013, you would receive the following error
In Visual Studio 2013, we now have support for Edit and Continue in 64-bit environments. This feature is by default ON for Windows Application as well as Web Application under Visual Studio 2013. You can cross check the same in the debugging section by going to Tools > Options Menu.
When you use the Edit and Continue with 64-bit targeted environments, you can start editing the code while debugging it, as shown here -
Inspecting Method Return value While Debugging
The next feature improvement we will talk about is how to inspect a method's return value. In Visual Studio 2013, while debugging, we can now check the return values of a function or of nested functions. The first way of checking return values is to make use of the Autos window. Let's see an example of the same -
class Program
{
    static void Main(string[] args)
    {
        double result = SalesNetProfit(TotalCostOfGoodsAndServices(),
                                       TotalExpenseCost(), TotalSales());
    }

    private static double SalesNetProfit(double COGS, double Expense, double ActualSales)
    {
        return ActualSales - (COGS + Expense);
    }

    private static double TotalCostOfGoodsAndServices()
    {
        return 10000;
    }

    private static double TotalExpenseCost()
    {
        return 12000;
    }

    private static double TotalSales()
    {
        return 120000;
    }
}
The above console application calls the SalesNetProfit() in the Main function. The SalesNetProfit function calls other functions to fetch parameter values. When you start debugging the application and execute the SalesNetProfit() function, open the Autos window and see the output. You will see that the return values of functions/nested functions are listed. This is useful when you are not storing the result of a method call in a variable, e.g: when a method is used as a parameter or return value of another method.
Another way of finding the return values of a function is using a new pseudo variable "$ReturnValue". Just type $ReturnValue in the Immediate window (Ctrl + Alt + I) or a Watch window after you have stepped out of the method call. Shown here is the variable in action:
Entity Framework and ADO.NET Connection Resiliency
The Entity Framework and ADO.NET Connection Resiliency feature introduced in .NET Framework 4.5.1 recreates broken connections and automatically tries to re-establish the connection with the database.
Any data-centric application is crucial to a business and should be capable of fetching and displaying data from remote servers whenever required. When we design data-centric applications, there are a large number of possibilities that need to be thought through for when the connection to the database goes idle or breaks.
In such situations, you previously had to build your own custom connection-resiliency feature to retry establishing broken connections. In .NET Framework 4.5.1, Microsoft provides the EF/ADO.NET Connection Resiliency feature out of the box. All you have to do is target your application at Framework 4.5.1 and this feature will be available to you. No configuration or APIs are needed to work with this feature. It automatically recreates broken connections and also retries transactions. Pretty cool!
Async Debugging Enhancements
Another feature improvement I want to highlight is the Async Debugging Enhancements in Visual Studio 2013. In the .NET world, async programming gained momentum when the Task Parallel Library (TPL) and keywords like async and await were introduced. For a developer, debugging is an important step during development to check how the code executes and whether it produces the desired results. We can make use of the Call Stack window to see these details.
Let's write a program to test the Async Debugging Enhancements feature using Visual Studio 2013. Here’s a sample screenshot of our program -
The button click event code looks like the following -
private async void button1_Click(object sender, EventArgs e)
{
    var result = await WriteReadFile();
    MessageBox.Show(result);
}

private async Task<string> WriteReadFile()
{
    StreamWriter SW = new StreamWriter(@"E:\test.txt", true);
    await SW.WriteLineAsync("WelCome To India");
    SW.Close();

    StreamReader SR = new StreamReader(@"E:\test.txt");
    var result = await SR.ReadToEndAsync();
    SR.Close();
    return result;
}
If you debug the application in Visual Studio 2012, you will see a similar output:
The call stack window shows a lot of code which you don't want see as it does not contain any logical starting point. However if you debug the same code in Visual Studio 2013, you will observe that the Call Stack is much more cleaner as shown here -
The Call Stack window in Visual Studio 2013 has hidden External Code. Now we have a neat and a clean view of Call Stack window.
Also take a look at the Task window. It shows you all the active tasks, completed tasks as well as scheduled tasks as shown below -
ASP.NET App Suspend
In .NET Framework 4.5.1, a new feature called ASP.NET App Suspend has been introduced for hosting an ASP.NET Web site. This feature is introduced in IIS 8.5 and Windows Server 2012 R2.
The basic concept is that all sites are in an inactive mode. When a site is requested, it is loaded into memory, the mode becomes active, and the site responds to page requests. When a site becomes idle (depending on the timeout setting), the site is put in a suspended state and the CPU resources and the memory it was using is made available for requests to other sites. As soon as a request for the site is made, it can be resumed very quickly and respond to traffic again.
This new feature makes effective utilization of your hardware resources and increases the start-up time. To use this feature, you don't have to learn any new API. This feature is a simple configuration of Application Pool. Right click the Application Pool and click on Advance Settings as shown below -
In the Advanced Settings window, go to the Process Model section and expand the "Idle Time-out Action" property. You will see two values. The default value is set to "Terminate". The other value available is "Suspend", as shown here -
By setting the Idle Time-out Action to suspend, when the Idle Time-out occurs, the ASP.NET web site gets suspended from CPU activities. It also gets paged to the disk in a "Ready to Go" state. Hence the start-up time is dramatically increased.
On-Demand Large-Object heap compaction
Another very exciting feature introduced in .NET 4.5.1 is on-demand Large Object Heap compaction. Before the release of .NET Framework 4.5.1, compaction of the Large Object Heap was not available. Because of this, fragmentation occurred over a period of time. On 32-bit systems, processes may throw an OutOfMemory exception if your app is using a large amount of memory, like large arrays. On-demand Large Object Heap compaction is now supported in .NET Framework 4.5.1 to help avoid the OutOfMemory exception. It is available as an on-demand feature because it is expensive: the garbage-collection pause time is longer, so using this feature needs to be decided based on the situation, for example if you are experiencing OutOfMemory exceptions.
You can set this feature using GCSettings, which lives in the System.Runtime namespace. GCSettings provides a property, LargeObjectHeapCompactionMode, that has two values: Default and CompactOnce. The default value, Default, does not compact the Large Object Heap during garbage collection. If you assign the property the value CompactOnce, the Large Object Heap is compacted during the next full blocking garbage collection, and the property value is then reset to GCLargeObjectHeapCompactionMode.Default.
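In code, requesting a one-time compaction looks like this:

```csharp
using System;
using System.Runtime;

class LohCompactionDemo
{
    static void Main()
    {
        // Ask for the LOH to be compacted on the next full blocking GC;
        // afterwards the property automatically resets itself to Default.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect(); // the full blocking collection performs the compaction
    }
}
```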
MultiCore JIT Improvements
When you work with large applications, a common concern for developers is how to improve an application's start-up time. Many developers make use of NGen.exe [Native Image Generator], which compiles your code to native images at installation time instead of JIT-compiling it at start-up. This gives us the flexibility to reduce the start-up time of our applications. But what about cases where there is no installation step? In such situations, you will not be able to take advantage of the NGen.exe tool.
Multi-core JIT is a performance feature that can be used to speed up an application's launch time by 40-50%. The benefits can be realized in desktop as well as ASP.NET applications, and in server as well as client apps. Multi-core JIT is automatically enabled for ASP.NET apps, and in .NET 4.5.1 it also works with assemblies loaded via Assembly.LoadFrom and AppDomain.AssemblyResolve. You can read more about it here.
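For desktop apps that cannot use NGen, the multi-core JIT profile can be wired up explicitly at start-up via the ProfileOptimization class in System.Runtime. The folder and profile name below are just examples:

```csharp
using System.Runtime;

static class Startup
{
    static void Main()
    {
        // Example paths; pick a writable folder your app owns.
        ProfileOptimization.SetProfileRoot(@"C:\MyApp\ProfileCache");
        ProfileOptimization.StartProfile("Startup.profile");

        // First run: the JIT records which methods start-up actually uses.
        // Later runs: those methods are compiled in the background on spare cores.
    }
}
```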
For additional details on .NET 4.5.1 check the official MSDN documentation.
So these were some new features and improvements in the latest release of the .NET Framework. I hope you will make use of them in your .NET applications.
How to change language at run-time in WPF with loadable Resource Dictionaries and DynamicResource Binding
Update: I created a project for this. It also has a NuGet package.
See.
Dynamically changing the language of a WPF application at run-time or on-the-fly is possible and quite simple.
With this solution you do not have Resources.resx files, you do not have to have x:Uid on all your XAML tags. See the expectations this solution is following here: My WPF Localization and Language Expectations. Most of my expectations are met with this solution.
Here is the sample project code: How to load a DictionaryStyle.xaml file at run time?
Example 1 – Dynamic Localization in WPF with Code-behind
Step 1 – Create a new WPF Application project in Visual Studio
- In Visual Studio, go to File | New | Project.
- Select WPF Application.
- Provide a name for the project and make sure the path is correct.
- Click OK.
Step 2 – Configure App.xaml.cs to support dynamic ResourceDictionary loading
The App.xaml.cs is empty by default. We are going to add a few variables, a constructor, and add a few simple functions. This is a fairly small amount of code.
- Open App.xaml.cs.
- Add a static member variable for App called Instance, so it can be accesses from anywhere.
- Add a static member variable for App called Directory, so it can be accesses from anywhere.
- Add a LanguageChangedEvent.
- Add a private GetLocXAMLFilePath(string inFiveCharLang) method.
- Add a public SwitchLanguage(string inFiveCharLanguage) method.
- Add a private SetLanguageResourceDictionary(string inFile) method.
- Add code to the constructor to initialize these variables and to set the default language. Note: The code is well commented.
using System;
using System.Globalization;
using System.IO;
using System.Threading;
using System.Windows;

namespace WpfRuntimeLocalizationExample
{
    /// <summary>
    /// Interaction logic for App.xaml
    /// </summary>
    public partial class App : Application
    {
        #region Member variables
        public static App Instance;
        public static String Directory;
        public event EventHandler LanguageChangedEvent;
        #endregion

        #region Constructor
        public App()
        {
            // Initialize static variables
            Instance = this;
            Directory = System.IO.Path.GetDirectoryName(
                System.Reflection.Assembly.GetExecutingAssembly().Location);

            // Load the Localization ResourceDictionary based on OS language
            SetLanguageResourceDictionary(GetLocXAMLFilePath(CultureInfo.CurrentCulture.Name));
        }
        #endregion

        #region Functions
        /// <summary>
        /// Dynamically load a Localization ResourceDictionary from a file
        /// </summary>
        public void SwitchLanguage(string inFiveCharLang)
        {
            if (CultureInfo.CurrentCulture.Name.Equals(inFiveCharLang))
                return;

            var ci = new CultureInfo(inFiveCharLang);
            Thread.CurrentThread.CurrentCulture = ci;
            Thread.CurrentThread.CurrentUICulture = ci;
            SetLanguageResourceDictionary(GetLocXAMLFilePath(inFiveCharLang));

            if (null != LanguageChangedEvent)
            {
                LanguageChangedEvent(this, new EventArgs());
            }
        }

        /// <summary>
        /// Returns the path to the ResourceDictionary file based on the language character string.
        /// </summary>
        /// <param name="inFiveCharLang"></param>
        /// <returns></returns>
        private string GetLocXAMLFilePath(string inFiveCharLang)
        {
            // Assumes the folder scheme from Step 4:
            // <exe dir>\en-US\LocalizationDictionary.en-US.xaml
            string locXamlFile = "LocalizationDictionary." + inFiveCharLang + ".xaml";
            return Path.Combine(Directory, inFiveCharLang, locXamlFile);
        }

        /// <summary>
        /// Sets or replaces the current Localization ResourceDictionary
        /// by loading it from the file path passed in.
        /// </summary>
        /// <param name="inFile"></param>
        private void SetLanguageResourceDictionary(String inFile)
        {
            if (File.Exists(inFile))
            {
                // Read in ResourceDictionary File
                var languageDictionary = new ResourceDictionary();
                languageDictionary.Source = new Uri(inFile);

                // Remove any previous Localization dictionaries loaded
                int langDictId = -1;
                for (int i = 0; i < Resources.MergedDictionaries.Count; i++)
                {
                    var md = Resources.MergedDictionaries[i];
                    // Make sure your Localization ResourceDictionarys have the
                    // ResourceDictionaryName key and that it is set to a value
                    // starting with "Loc-".
                    if (md.Contains("ResourceDictionaryName"))
                    {
                        if (md["ResourceDictionaryName"].ToString().StartsWith("Loc-"))
                        {
                            langDictId = i;
                            break;
                        }
                    }
                }
                if (langDictId == -1)
                {
                    // Add in newly loaded Resource Dictionary
                    Resources.MergedDictionaries.Add(languageDictionary);
                }
                else
                {
                    // Replace the current language dictionary with the new one
                    Resources.MergedDictionaries[langDictId] = languageDictionary;
                }
            }
        }
        #endregion
    }
}
Step 3 – Create a basic WPF User Interface
We need a little sample project to demonstrate the localization, so let's quickly make one.
- Create a basic interface.
Note 1: You can make one yourself, or you can use the basic interface I used. Just copy and paste from below.
Note 2: I have already added a menu for selecting the Language for you.
<Window x: <Grid> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="30" /> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="25" /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="15" /> <ColumnDefinition Width="*" MinWidth="100"/> <ColumnDefinition Width="*" MinWidth="200"/> <ColumnDefinition Width="15" /> </Grid.ColumnDefinitions> <DockPanel Grid. <Menu DockPanel. <MenuItem Header="_File"> <MenuItem Header="E_xit" Click="MenuItem_Exit_Click" /> </MenuItem> <MenuItem Name="menuItemLanguages" Header="_Languages"> <MenuItem Tag="en-US" Header="_English" Click="MenuItem_Style_Click" /> <MenuItem Tag="es-ES" Header="_Spanish" Click="MenuItem_Style_Click" /> <MenuItem Tag="he-IL" Header="_Hebrew" Click="MenuItem_Style_Click" /> </MenuItem> </Menu> </DockPanel> <Label Content="First Name" Name="labelFirstName" Grid. <Label Content="Last Name" Name="labelLastName" Grid. <Label Content="Age" Name="labelAge" Grid. <TextBox Name="textBox1" Grid. <TextBox Name="textBox2" Grid. <TextBox Name="textBox3" Grid. <Button Content="Clear" Grid. </Grid> </Window>
- Also populate the MainWindow.xaml.cs file with the code-behind needed for the menu click events.
using System;
using System.Globalization;
using System.Windows;
using System.Windows.Controls;

namespace WpfRuntimeLocalizationExample
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
            foreach (MenuItem item in menuItemLanguages.Items)
            {
                if (item.Tag.ToString().Equals(CultureInfo.CurrentUICulture.Name))
                    item.IsChecked = true;
            }
        }

        private void MenuItem_Exit_Click(object sender, RoutedEventArgs e)
        {
            Environment.Exit(0);
        }

        private void MenuItem_Style_Click(object sender, RoutedEventArgs e)
        {
            // Uncheck each item
            foreach (MenuItem item in menuItemLanguages.Items)
            {
                item.IsChecked = false;
            }
            MenuItem mi = sender as MenuItem;
            mi.IsChecked = true;
            App.Instance.SwitchLanguage(mi.Tag.ToString());
        }

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            labelFirstName.Content = labelLastName.Content = labelAge.Content = string.Empty;
        }
    }
}
Though localization is not yet working, this should compile and run.
Step 4 – Create Resource Dictionaries for Localization
By recommendation of my expert localization team, we are going to create a folder for each language, using the five character string (en-US, es-ES, he-IL), and put a ResourceDictionary in each folder. The resource dictionary will also have the five character language string.
- Create the folder and the file.
- Right-click on the project in Visual Studio and choose Add | New Folder.
- Name the folder en-US
- Right-click on the en-US folder and choose Properties.
- Set the ‘Namespace Provider’ value to false.
- Right-click on the en-US folder and choose Add | Resource Dictionary.
- Provide a file name and make sure that Resource Dictionary (WPF) is selected.
- Note: I named my first resource dictionary LocalizationDictionary.en-US.xaml.
- Click Add.
- Right-click on the LocalizationDictionary.en-US.xaml file and choose Properties.
- Set ‘Build Action’ to ‘Content’.
- Set ‘Copy to Output Directory’ to be ‘Copy if newer’.
- Set ‘Custom Tool’ to blank.
- Name the ResourceDictionary.
- Open the resource LocalizationDictionary.en-US.xaml
- Add a reference to the System namespace from the mscorlib assembly.
- Add a string with the x:Key set to ResourceDictionaryName.
- Set the string value to “Loc-en-US”.
Important! Because our code is specifically looking for Loc-, you need to use that scheme or change the code.
- Add to the string a Localization.Comment and set the value to $Content(DoNotLocalize).
- Save the resource dictionary.
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                    xmlns:sys="clr-namespace:System;assembly=mscorlib">
    <!-- The name of this ResourceDictionary. Should not be localized. -->
    <sys:String x:Key="ResourceDictionaryName" Localization.Comments="$Content(DoNotLocalize)">Loc-en-US</sys:String>
</ResourceDictionary>
- Repeat all the steps in steps 1 and 2 for each language.
Note: We will do the following three languages in this example:
- en-US
- es-ES
- he-IL
Step 5 – Localize the strings
- Open the MainWindow.xaml.
- The first string is the Title of the Window.
Note: My window title is “WPF Run-time Localization Example”
- Replace the value with a DynamicResource to MainWindow_Title.
Here is a snippet…
<Window x:Class="WpfRuntimeLocalizationExample.MainWindow"
        Title="{DynamicResource MainWindow_Title}">
    <Grid>
- Open the LocalizationDictionary.en-US.xaml.
- Add a string as a resource for the MainWindow’s Title, making sure the x:Key value matches the ResourceKey in the MainWindow.xaml: MainWindow_Title.
<!-- Localized strings -->
<sys:String x:Key="MainWindow_Title">WPF Run-time Localization Example</sys:String>
- Repeat these steps for each of the remaining strings that need to be localized.
The following strings need to be localized in my sample UI:
- _File
- E_xit
- _Language
- _Spanish
- _Hebrew
- Age
- Clear
Step 6 – Configure the FlowDirection
The main motivation for adding Hebrew to this example is so that we can show how to dynamically switch FlowDirection for languages that flow from right to left.
Note: This step is optional and only necessary if your application will support a language that flows right to left.
- Add two FlowDirection resources to the LocalizationDictionary.en-US.xaml files.
<!-- Localization specific styles -->
<FlowDirection x:Key="FlowDirection_Default">LeftToRight</FlowDirection>
<FlowDirection x:Key="FlowDirection_Reverse">RightToLeft</FlowDirection>
- For Spanish, copy these same lines to the LocalizationDictionary.es-ES.xaml file. Spanish and English have the same FlowDirection.
- For Hebrew, (you have probably already guessed this), add these same lines to the LocalizationDictionary.he-IL.xaml file but reverse the values.
<!-- Localization specific styles -->
<FlowDirection x:Key="FlowDirection_Default">RightToLeft</FlowDirection>
<FlowDirection x:Key="FlowDirection_Reverse">LeftToRight</FlowDirection>
- Open the MainWindow.xaml file.
- Add a FlowDirection tag to the MainWindow.xaml file.
- Set the FlowDirection value using a DynamicResource binding to FlowDirection_Default. Here is the XAML snippet.
<Window x:Class="WpfRuntimeLocalizationExample.MainWindow"
        Title="{DynamicResource MainWindow_Title}"
        FlowDirection="{DynamicResource FlowDirection_Default}">
    <Grid>
Build, compile, and test. Both the language and the FlowDirection should now change when switching to Hebrew.
See the Framework for this here:
How to change language at run-time in WPF with loadable Resource Dictionaries and DynamicResource Binding (Example 1) | WPF
Please submit a bug here:
Please provide a github or other online project that reproduced the issue.
Hi,
I create multi lang on datagrid in wpf, but not resolved.
Pls help me.
P/s: Multi Lang on windows in okie (ex, label…)
Thanks.
I have created wpf localization as said above.
It works..fixed my issue..!
Great post thanks for it..!
But i am facing one problem, now i have to localiza the containt of text box too.
I have wpf forms lets say customer master.
Now in customer name text box how can i localize it.
the contains also be loclized and save in database & fetched with it.
In short if i select hindi language the lables are changed no doubt but the containt
of data in text should change.
How can i do it ..?
Plzz help
Please tell me how can i apply localization to labels.
As in my project it has labels oneach form.
when i try this :
DataContext=”{DynamicResource ResourceKey=lblAge}”
also i tried
Content =”{DynamicResource ResourceKey=lblAge}”
couldn’t work..
plz help me
Have you checked out the WPFSharp.Globalizer nuget packager and GitHub project? It has way more info. It has a sample project. The GitHub is newer than the NuGet package. I would pull the source from GitHub if I were you.
This is a great solution – very well explained and easy to implement. I just have one question…
Is there a way to configure a design-time resource dictionary, to the WPF designer will show some text for the labels, buttons, etc. It is hard to get a feeling for the look of the dialog when there isn’t any text displayed.
Thank you again.
I have downloaded and experimented with the solution. I can see that the binding is not quite finished for the MVVM example. I found that it is referencing the wrong names – I changed them to be the same as the non MMVM version (with an underscore separator). However, I found that I had to change the Content to reference the GlobalizedResource rather than the LocalizationBinding approach i.e
This works:-
this does not work:-
The trouble with using the globalizer:GlobalizedResource approach however, is that I do not see DESIGN time fallback values. Is there a simple fix to resolve this?
Kind regards
Alan.
Mmm – some details stripped out there.
Essentially globalizer:LocalizationBinding binds correctly, but not at design time.
Al.
There are design time values.
Did you look at the GitHub link?
My example on the GitHub page has design time values.
Thanks Rhyous. I did get the data from Github – it is the same code that I was testing with. The design time binding does NOT work for the MVVMExample. If I use LocalizationBinding for the Name and Age values they appear at design time – but are never replaced and looks not to be binding at all. If I use GlobalizedResource (the values are missing at DesignTime) but appear at runtime.
Likewise the values do NOT appear at design time for the non Mvvm example but do appear at runtime.
Any help is much appreciated.
Kind regards
Alan.
what is DoNotLocalize I got error reading Localization.Comments=”$Content(DoNotLocalize)” and don’t know it refers to what?
The problem is that the XML schema for ‘sys:String’ or ‘FlowDirection’ does not include the ‘Localization.Comments’ attribute. I found that the error was harmless (ignored) but if you are having problems with it, you should be able to remove it. It is used for translation tools, to indicate that the text value should not be translated (leave the value alone). In my case, we have a policy that there cannot be any errors, so I removed them and simply put an XML comment at the beginning stating that the value should not be translated:
Loc-ENU
It looks like my example got swallowed up… Let me try again with some markup.
<!– DO NOT LOCALIZE –%gt;<sys:String x:Key=”ResourceDictionaryName”>Loc-ENU</sys:String>
(if that doesn’t work, you should at least get the idea)
Ha Ha – one more time, without the typo…
<– DO NOT LOCALIZE –> <sys:String x:Key=”ResourceDictionaryName”>Loc-ENU</sys:String>?
I don’t have the MVVM example built out yet.
I sent you an email. I am not sure. I assume the localization files weren’t loaded for some reason. I’d love to debug it with you.
Hey there. I was about to take your Globalizer project code for a spin but the project file has oodles of dead source file links (ie, “EnhancedResourceDictionary.cs”, “AvailableLanguages.cs”).
Were the files intentionally removed?
Cheers,
Berryl
It is working now. please try again.
The MVVM example is not actual an MVVM example, yet.
Yeah it does. It looks like some of the files didn’t come along with the check out. I will investigate.
Sorry. What happened was I got hired at a new company to be a WPF engineer and they pulled the carpet out from under me and decided not to use WPF.
I am turning this into a API. Stay tuned…
Thanks for sharing your great solution. My ideas went in similar directions, but I was still struggling with cognitively putting some pieces together. Luckily I found your post, YMMD.
Nevertheless, I am still very interested in reading about your ideas regarding this topic and MVVM – as you indicated at the end of your post. Any plans when you will be publishing it? Looking forward… Have a nice weekend.
Chris Lalancette <clalance redhat com> wrote:
> Jim Meyering wrote:
>>> diff -r c6a3e36cdf54 ext/libvirt/_libvirt.c
>>> --- a/ext/libvirt/_libvirt.c Thu Jul 17 15:24:26 2008 -0700
>>> +++ b/ext/libvirt/_libvirt.c Fri Aug 08 06:04:56 2008 -0400
>>> @@ -637,16 +637,51 @@ VALUE libvirt_conn_num_of_defined_storag
>>> }
>>> #endif
>>>
>>> +static char *get_string_or_nil(VALUE arg)
>>> +{
>>> + if (TYPE(arg) == T_NIL)
>>> + return NULL;
>>> + else if (TYPE(arg) == T_STRING)
>>> + return STR2CSTR(arg);
>>
>> STR2CSTR is marked as obsolete in ruby.h, where it says
>> to use StringValuePtr instead:
>>
>> /* obsolete API - use StringValuePtr() */
>> #define STR2CSTR(x) rb_str2cstr((VALUE)(x),0)
>
> Yeah, you are right. I looked through the ruby source code, actually, and I
> ended up using StringValueCStr (which is used elsewhere in the ruby-libvirt

That does sound better, at least when (as here) you know there should be
no empty string and no embedded NUL byte.

> bindings). It's basically the same as StringValuePtr, but does an additional
> check to make sure the string is not of 0 length and that there aren't
> additional embedded \0 in the string.
>
> I also updated the patch with the const pointers as you suggested. Attached.
> Thanks for the review!

Looks fine. ACK.
The IR receiver module is
screwed into the Microbric Viper, in my case, into the addressable P14 port on the side of the main body. The Viper includes a main chip called the Basic Atom that is flashed using BASIC programs written in an included BASIC IDE. In my BASIC file a constant
is used to address the IR "Data Pin," in this case, P14.
The Basic Atom includes a number of BASIC helper methods that the microprocessor uses to talk to the external world. The "pulsin" method is used to talk to the IR Data Pin and detect a pulse. Infrared receivers use high-speed pulses at a certain frequency (how fast it flashes) and duty cycle (a percentage of how long the LED is on versus off). These pulses of light continue for a set amount of time to indicate ones, zeros or the header.
I'm trying to pulse the IR LED to emulate the Sony Remote Control, and started by looking for some information on how the Sony Infrared Protocol (SIRC) works. I found two excellent writeups.
IR Protocols are fairly sensitive to timing. The pulses need to happen at specific intervals with the correct frequency. Because the timing is so fine-grained, I'm using the new System.Diagnostic.Stopwatch class in .NET (a wrapper for the Win32 QueryPerformanceCounter API) that uses Ticks for its unit of measurement. The Frequency is 2992540000 ticks per second, so I figure that'd be enough resolution.
The Sony remote I'm trying to emulate uses a 40kHz frequency, so it wants to flash the LED one cycle, once, every 1/40000 of a second. That means every 74814 ticks or every 25µs (microseconds are 1/1000000 of a second.)
I'm trying to send a header pulse of 2.4ms in length and I need to cycle the LED once every 25µs. I turn it on for 8µs and turn it off for 17µs. That means it will cycle 96 times (2400µs) for the header, 48 times (1200µs) for a one, and 24 times (600µs) for a space or zero. An image from San Bergmans illustrates:
The Iguanaworks IR serial port board uses DTR (Data Terminal Ready) to turn on the IR LED. When DTR is high, the LED lights up, when it's off, the LED turns off. Using the Stopwatch and some really tight loops I figure I can flash (pulse) the LED fast enough.
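The timing arithmetic above is easy to get wrong, so here is a quick sanity check in Python (the Stopwatch frequency and the 40kHz carrier are the figures quoted in this article; the helper names are mine):

```python
STOPWATCH_FREQ = 2_992_540_000       # Stopwatch ticks per second, as quoted above
CARRIER_HZ = 40_000                  # SIRC carrier frequency
PERIOD_US = 1_000_000 / CARRIER_HZ   # one carrier cycle = 25 microseconds

def cycles(duration_us):
    """Full carrier cycles that fit in a pulse of the given length."""
    return int(duration_us / PERIOD_US)

print(cycles(2400), cycles(1200), cycles(600))  # 96 48 24
print(round(STOPWATCH_FREQ / CARRIER_HZ))       # 74814 ticks per cycle
```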
Now at this point, you may, dear reader, have already had the thought that perhaps trying to flash the LED via software, rather than hardware, is an astonishingly bad idea. Well, I could have used your insight earlier, my friend. But, on with my tale, as it is the telling that is so enjoyable, right?
We'll take a look at more details of the internal implementation of the Sony Protocol in a moment.
INTERESTING REMINDER: Remember, you can't see IR (it's infrared, therefore not in our visible spectrum) but you can see it if you point it at a Webcam or digital camera, which is what I've been doing to sanity check the hardware. The picture at left is an image of the LED pointed at my Logitech Webcam.
Of
course, I've started with managed code, because I'm a managed kind of a guy. I started using System.IO.Ports to address the Iguanaworks IR transmitter that is connected to a serial port on my PC. They make both a Serial Port and a USB version, but there are
not yet drivers for the USB version so I got a three-foot long serial extension cord and went to work.
The Iguanaworks IR is a uniquely high-power transmitter that charges up a capacitor in order to provide a range of up to 10-meters. Your mileage may vary. However, it requires a few minutes to charge up the capacitor. Once it is charged up, however, even if you are using it constantly, it'll find a comfortable middle place where the output matches the input. If you use it intermittently, which is more typical, you'll likely get very good range and bright output. In my initial testing, though, while I had no trouble getting output from it using Winlirc (the only officially supported Open Source software package for this transmitter) but when I used my application, the transmitter would peter out and eventually go dim. What the heck was going on?
I fought with it for a while, then decided to RTFS (where "S" is "Schematic"). The board layout is here. Notice that the RTS (Serial Port Ready-To-Send) Pin 7 goes straight to VoltageIn. Duh! <slaps forehead>. They are sipping power off the Ready To Send pin and I'm not setting that pin hot via RtsEnable.
port = new SerialPort(portString);
port.RtsEnable = true; //needed for power!
port.BaudRate = 115200;
port.StopBits = StopBits.One;
port.Parity = Parity.None;
port.DataBits = 7;
port.Handshake = Handshake.None;
Back to the narrative. I suspected I might be writing more than one serial port class, or possibly using different pieces of hardware, so I defined an IIRSerialPort interface with some basics like On, Open, Off, etc. Since the Iguanaworks IR uses DTR to turn on the LED, it was as easy as setting the port's DtrEnable property to true to get a solid LED output.
Here's part of the first Managed IR Serial Port class.
public class ManagedIRSerialPort : IIRSerialPort
{
    SerialPort port = null;

    public ManagedIRSerialPort(string portString)
    {
        port = new SerialPort(portString);
        port.RtsEnable = true; //needed for power!
        port.BaudRate = 115200;
        port.StopBits = StopBits.One;
        port.Parity = Parity.None;
        port.DataBits = 7;
        port.Handshake = Handshake.None;
    }

    public void Open()
    {
        port.Open();
    }

    public void On()
    {
        port.DtrEnable = true;
    }

    public void Off()
    {
        port.DtrEnable = false;
    }

    public void Close()
    {
        port.Close();
    }
}
This class is just the beginning, as it only turns the LED on and off. Remember I need to "pulse" the LED, turning it on and off 96 times all in the space of 2400µs. I wrote a SendPulse that spun very tightly and used the StopWatch class to manage timing.
public unsafe void SendPulse(long microSecs)
{
    long end = LastTimeInTicks + (microSecs * (STOPWATCHFREQ / MILLION));
    int i = 0;
    timer.Reset();
    timer.Start();
    do
    {
        i++;
        port.On();
        MicroWait(pulseWidth);
        port.Off();
        MicroWait(spaceWidth);
    }
    while (LastTimeInTicks < end);
    timer.Stop();
    System.Diagnostics.Debug.WriteLine(i.ToString() + " pulses in " + timer.ElapsedTicks / (double)req * MILLION + "us");
}
Even with lots of optimization, I just couldn't get it to cycle fast enough. Remember, I need the header to take 2400µs total. In this screenshot, you can see it's taking an average of 30000µs! That sucks - it's 10 times slower than I need it to be.
So I futzed with this for a while, and then Reflector'd around. I noticed the implementation of set_dtrEnable inside of System.IO.Ports.SerialStream was WAY more complicated than it needed to be for my purposes.
//Reflector'd Microsoft code from inside System.IO.Ports.Port
internal bool DtrEnable
{
    get
    {
        int num1 = this.GetDcbFlag(4);
        return (num1 == 1);
    }
    set
    {
        int num1 = this.GetDcbFlag(4);
        this.SetDcbFlag(4, value ? 1 : 0);
        if (!UnsafeNativeMethods.SetCommState(this._handle, ref this.dcb))
        {
            this.SetDcbFlag(4, num1);
            InternalResources.WinIOError();
        }
        if (!UnsafeNativeMethods.EscapeCommFunction(this._handle, value ? 5 : 6))
        {
            InternalResources.WinIOError();
        }
    }
}
All I figured I needed to do was call the Win32 API EscapeCommFunction to set the DTR pin high. One thing I learned quickly was that calling EscapeCommFunction was 4 times faster than calling SetCommState for the purposes of raising DTR, so in this code sample those lines are commented out.
I then wrote another implementation of IIRSerialPort, still in managed code, but this one talking to the COM Port using the underlying Win32 APIs.
public class UnmanagedIRSerialPort : IIRSerialPort
{
    IntPtr portHandle;
    DCB dcb = new DCB();
    string port = String.Empty;

    public UnmanagedIRSerialPort(string portString)
    {
        port = portString;
    }

    public void Open()
    {
        portHandle = CreateFile("COM1",
            EFileAccess.GenericRead | EFileAccess.GenericWrite,
            EFileShare.None,
            IntPtr.Zero,
            ECreationDisposition.OpenExisting,
            EFileAttributes.Overlapped, IntPtr.Zero);
        GetCommState(portHandle, ref dcb);
        dcb.RtsControl = RtsControl.Enable;
        dcb.DtrControl = DtrControl.Disable;
        dcb.BaudRate = 115200;
        SetCommState(portHandle, ref dcb);
    }

    public void On()
    {
        EscapeCommFunction(portHandle, SETDTR);
        //dcb.DtrControl = DtrControl.Enable;
        //SetCommState(portHandle, ref dcb);
    }

    public void Off()
    {
        EscapeCommFunction(portHandle, CLRDTR);
        //dcb.DtrControl = DtrControl.Disable;
        //SetCommState(portHandle, ref dcb);
    }

    public void Close()
    {
        CloseHandle(portHandle);
    }

    #region Interop Serial Port Stuff
    [DllImport("kernel32.dll")]
    static extern bool GetCommState(IntPtr hFile, ref DCB lpDCB);

    [DllImport("kernel32.dll")]
    static extern bool SetCommState(IntPtr hFile, [In] ref DCB lpDCB);

    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern bool CloseHandle(IntPtr handle);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool EscapeCommFunction(IntPtr hFile, int dwFunc);
    //Snipped so you don't go blind...
    #endregion
}
Here's the NOT COMPLETE (remember, it just does DTR) Unmanaged Serial Port class. Thanks to PInvoke.NET for the structures that kick-started this!
File Attachment: UnmanagedIRSerialPort.cs (10 KB)
As you can see I've got it abstracted away with a common interface so I can switch between managed serial and unmanaged serial quickly. I ran the same tests again, this time with MY serial port stuff:
Sweet, almost 10x faster and darn near where I need it to be. However, it's not consistent enough. I need numbers like 2400, 600, 1200. I'm having to boost the process and thread priority just to get here...
previousThreadPriority = System.Threading.Thread.CurrentThread.Priority;
System.Threading.Thread.CurrentThread.Priority = System.Threading.ThreadPriority.Highest;
System.Diagnostics.Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.RealTime;
...and undo it with...
System.Threading.Thread.CurrentThread.Priority = previousThreadPriority;
System.Diagnostics.Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.Normal;
...and that's just naughty and not getting the job done.
At this point, it's close, but I'm wondering if it's even possible to flash this thing fast enough. I'm at the limit of my understanding of serial ports (Is DTR affected by Baud Rate? Is 115200 the fastest? Would this be faster in C++ (probably not), or is there a faster way to PInvoke?)
The Sony IR Protocol uses a header that consists of a pulse 2400µs long, followed by 12 ones or zeros (twelve bits) where a "1" is a pulse 1.2ms long followed by an off state for 600µs long and a "0" is a pulse 600µs long followed by an off state for 600µs long. The whole packet is finalized with an off state 600µs long.
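To make the framing concrete, here is a rough Python sketch (my own, not the article's code) that flattens a 12-bit command into the on/off durations described above, least significant bit first:

```python
HEADER_US, ONE_US, ZERO_US, GAP_US = 2400, 1200, 600, 600

def sirc_timings(code, bits=12):
    """Return (on_us, off_us) pairs for one SIRC packet, LSB sent first."""
    pairs = [(HEADER_US, GAP_US)]      # header pulse, then an off state
    for i in range(bits):
        bit = (code >> i) & 1          # least significant bit first
        pairs.append((ONE_US if bit else ZERO_US, GAP_US))
    return pairs

packet = sirc_timings(0b000010010000)  # 144, the "forward" command
print(len(packet))                     # 13: header + 12 bits
```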
According to the SIRC spec, these are the "1s and 0s" we will be using to control the robot. They are the buttons used for the cursor on Sony remotes. This next code snippet is from the BASIC program that controls the robot, so this is what the receiver on the robot needs to see to take action.
TestIRData:
  High LED_Pin ;Set LED_Pin high
  if irdata = %0000100100000000 then forward
  if irdata = %1000100100000000 then backward
  if irdata = %0100100100000000 then twistright
  if irdata = %1100100100000000 then twistleft
  if irdata = %0010100100000000 then still
  return
The protocol indicates that it wants the Most Significant Bit on the RIGHT, which is the opposite of how it's done on Intel chips. Note that this doesn't refer to "Endianness," which is byte-by-byte. We're talking about reversing or flipping the entire bit stream.
Here's the constants in C# that correspond (in reverse) to the values the BASIC file. These values are straight from the spec.
const int FORWARD = 144;
const int BACKWARD = 2192;
const int RIGHT = 1168;
const int LEFT = 3216;
const int STILL = 656;
I could have stored them already reversed, but in the interest of making the code more reusable for other IR-related projects, I added a reverse function with help from Nicolas Allen and Wesner Moise that takes a long and the number of bits to reverse.
public long Reverse(long i, int bits)
{
    i = ((i >> 1) & 0x5555555555555555) | ((i & 0x5555555555555555) << 1);
    i = ((i >> 2) & 0x3333333333333333) | ((i & 0x3333333333333333) << 2);
    i = ((i >> 4) & 0x0F0F0F0F0F0F0F0F) | ((i & 0x0F0F0F0F0F0F0F0F) << 4);
    i = ((i >> 8) & 0x00FF00FF00FF00FF) | ((i & 0x00FF00FF00FF00FF) << 8);
    i = ((i >> 16) & 0x0000FFFF0000FFFF) | ((i & 0x0000FFFF0000FFFF) << 16);
    i = (i >> 32) | (i << 32);
    return i >> (64 - bits);
}
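A quick way to sanity-check a bit-twiddling reversal like this is to compare it against a naive Python version (my own check, not part of the article):

```python
def reverse_bits(value, bits):
    # Format as a fixed-width binary string, flip it, and parse it back.
    return int(format(value, '0{}b'.format(bits))[::-1], 2)

# 144 (%000010010000) happens to be a palindrome over 12 bits
print(reverse_bits(144, 12))    # 144
print(reverse_bits(2192, 12))   # 145
```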
This makes the SendData() method nice and clean and allows me to send codes straight from the SIRC spec:
public unsafe void SendData(long data, int bits)
{
    int i;

    data = Reverse(data, bits);
    for (i = 0; i < bits; i++)
    {
        if ((data & 1) == 1)
        {
            SendPulse(ONEPULSE);
            SendSpace(ZEROPULSE);
        }
        else
        {
            SendPulse(ZEROPULSE);
            SendSpace(ZEROPULSE);
        }
        data = data >> 1;
    }
}
This method takes each bit at a time and flashes the IR port. But, we've still got the problem of speed. It's just not happening fast enough. The basic design is there, but at this point I'm starting to suspect I need different hardware.
At this point, I went back to the folks at Iguanaworks and explained the problem. They agreed it was time to put the "carrier" onto the hardware. We'd change the original serial port board that turned the LED on and off using the Serial DTR signal so that now it would oscillate when DTR was set hot.
The graphic above shows the original serial port adapter on the right, the prototype in the middle with the "daughter board" attached. We tried a number of resistor values before we got it right. And the completed first-run prototype on the far left. There's ways to make it smaller, but this way gives the user more choices. The frequency can be swapped to emulate different remote controls by changing resistors yourself in the final design, or turning the carrier off completely.
My IR controlling code wouldn't change significantly, it would actually shrink, as it was silly for me to try and do the work that was better handled by hardware. Here's the updated SendPulse() method along with SendSpace().
public unsafe void SendPulse(long microSecs)
{
    port.On();
    MicroWait(microSecs);
}

public unsafe void SendSpace(long length)
{
    port.Off();
    MicroWait(length);
}
It's worth noting also that the guys from Iguanaworks and the guys from Microbric worked very hard, collaborating between Colorado and Australia, along with me chiming in occasionally, to get this to work. To the right you can see an image of the IR pulse on an oscilloscope. You can see the header on the far right, of 2400 microseconds, followed by 0,0,0,0,1,0,0,1,0,0,0,0.
The next step is hooking it all up to a console app to try moving the robot around with the keyboard. At this point, the hard work is done and we can set up a simple key-checking loop that runs until you press Ctrl-C.
class Program
{
    const int FORWARD = 144;
    const int BACKWARD = 2192;
    const int RIGHT = 1168;
    const int LEFT = 3216;
    const int STILL = 656;
    const int REPEAT = 4;

    static void Main(string[] args)
    {
        using (IR ir = new IR("COM1"))
        {
            while (true)
            {
                ConsoleKeyInfo key = Console.ReadKey(true);
                switch (key.Key)
                {
                    case ConsoleKey.UpArrow:
                        ir.Send(FORWARD, REPEAT);
                        break;
                    case ConsoleKey.DownArrow:
                        ir.Send(BACKWARD, REPEAT);
                        break;
                    case ConsoleKey.RightArrow:
                        ir.Send(RIGHT, REPEAT);
                        break;
                    case ConsoleKey.LeftArrow:
                        ir.Send(LEFT, REPEAT);
                        break;
                }
            }
        }
    }
}
In the next article, we'll attach the Viper and IR classes to PowerShell and script-enable them like the LOGO Turtle of years past!
I recently was in Australia and picked up a bunch of VIPERS (I am a discerning enthusiast) simply because I thought that this was one of the best robotic platform ideas I have ran across ina long time!
If anyone is interested, I will part with two sets (complete with all known expansion kits) just to try and stimulate interest in a great product. I will let them go to REAL enthusiasts (not just some profiteer) in hopes to try and get a small grass roots MicroBric club going. Please let me know if you are interested.
My email is mrpolygon@hotmail.com and I have all sorts of other obsure hobbies and interests like Hydrogen Fuel cell technology, PIC, PICAXE, PARALAX etc...
Thanks
SO SWEET!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
YOUR CODES ARE WONDERFUL BUT CAN YOU PLEASE SEND TO ME CODES IN VB 2005 OR IN VB 6.0
THANKS IN ADVANCE
This can help translate the code for you.
Try this, his updated article with source.
Hello,
I am attempting to write a program for an older Pocket PC 2003, to try and read and write raw IR data. The idea is to communicate with TVs and TV remotes (like a lot of people try). I am using VB.Net, which I know is probably not the smartest choice, but until I figure out C# / C++ it's all I've got.
So far, your article and code seems like the closest thing to what I am trying to accomplish.
I cannot seem to find the IIRSerialPort type anywhere in the NETCF. Was it something you defined yourself? If so, would you be willing to share it, or explain it? | https://channel9.msdn.com/coding4fun/articles/Controlling-a-Microbric-Viper-Robot-with-a-custom-IR-Serial-Port-using-NET | CC-MAIN-2019-30 | refinedweb | 3,086 | 65.01 |
18 October 2012 08:23 [Source: ICIS news]
SINGAPORE (ICIS)--AkzoNobel said on Thursday it swung to a net loss of €2.38bn ($3.09bn) in the third quarter from a €149m net profit in the 2011 third quarter, weighed by a €2.5bn impairment charge on the firm’s decorative paints business.
The company’s revenue rose by 6% year on year to €4.28bn in July-September this year, while its earnings before interest, tax and depreciation (EBITDA) rose by 7% to €540m, the Dutch coatings and specialty chemicals maker said in a statement.
Its sales were up by 6% year on year at €12.7bn, while EBITDA was up by 4% at €1.56bn.
“During the year, the economic slowdown, particularly in
“The company is confident with regard to the long-term growth of its business, but remains cautious over the shorter-term development of its markets,” | http://www.icis.com/Articles/2012/10/18/9604941/dutch-akzonobel-swings-to-q3-net-loss-of-2.38bn.html | CC-MAIN-2015-22 | refinedweb | 152 | 64.41 |
Autotest is an open source project designed for testing the linux kernel. Before starting this codelab you might benefit from scrolling through some upstream documentation on autotest client tests. Autotest is responsible for managing the state of multiple client devices as a distributed system, by integrating a web interface, database, servers and the clients themselves. Since this codelab is about client tests, what follows is a short description of how autotest runs a specific test, on one client.
Autotest looks through all directories in client/tests and client/site_tests for simple python files that begin with ‘control.’. These files contain a list of variables, and a call to job.run_test. The control variables tell autotest when to schedule the test, and the call to run_test tells autotest how. Each test instance is part of a job. Autotest creates this job object and forks a child process to execute its control file.
Note the exec mentioned above is the python keyword, not os.exec
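As an illustration, a minimal control file might look like this (the test name and metadata here are hypothetical, not a real autotest test):

```python
# client/site_tests/example_Sanity/control  (hypothetical)
AUTHOR = "chromium-os-dev"
NAME = "example_Sanity"
TIME = "SHORT"
TEST_CATEGORY = "Functional"
TEST_CLASS = "example"
TEST_TYPE = "client"
DOC = """
Checks that the example_Sanity test module runs on the DUT.
"""

# 'job' is provided by autotest when it executes this file
job.run_test('example_Sanity')
```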
Tests reside in a couple of key locations in your checkout, and map to similar locations on the DUT (Device Under Test). Understanding the layout of these directories might give you some perspective:
<cros_checkout>/chromium/src/chrome/test/functional
<cros_checkout>/src/third_party/autotest/client/site_tests
<cros_checkout>/src/platform/factory
You will need a chroot build environment, the Autotest source, and basic Python knowledge.
In this codelab, we will:
Run a test, edit and rerun a test on the client device
Write a new test from scratch
View and collect results of the test
Get an overview of how autotest on the client works
Get an overview of the classes and libraries available to help you write tests.
<cros_checkout>/src/scripts/bin/cros_start_vm --image_path=path/to/chromiumos_qemu_image.bin
If the cros_start_vm script fails, you may need to enable virtualization on your workstation. Check for /dev/kvm or run 'sudo kvm-ok' (you might have to 'sudo apt-get install cpu-checker' first). It will either say that /dev/kvm exists and KVM acceleration can be used, or that /dev/kvm doesn't exist and KVM acceleration can NOT be used. In the latter case, hit Esc on boot, go to 'System Security', and turn on virtualization. More information about running tests on a VM can be found here.
Once you have autotest, there are two ways to run tests: using your machine as a server, or directly from the client DUT. Running tests directly on the device is faster, but requires invoking them from your server at least once.
1. Enter the chroot:

cros_checkout_directory$ cros_sdk

2. Invoke test_that to run login_LoginSuccess on a VM with local autotest bits:

test_that localhost:9222 login_LoginSuccess
The basic usage of test_that:
test_that -b board dut_ip[:port] TEST
TEST can be the name of the test, or suite:suite_name for a suite. For example, to run the smoke suite on a device with board x86-mario
test_that -b x86-mario 192.168.x.x suite:smoke
Please see the test_that page for more details.
You have to use test_that at least once so it copies over the tests/dependencies before attempting this; if you haven't, /usr/local/autotest may not exist on the device.
ssh root@<ip_address_of_dut>
(password: test0000)
Once you're on the client device:
cd /usr/local/autotest
./bin/autotest_client tests/login_LoginSuccess

The quickest way to iterate is to edit the test files directly on the device and re-run the test by invoking autotest_client, as described in the section on Running Tests Directly on the client. Note that a print statement won't show up when the test is run via test_that.
The more formal way of editing a test is to change the source and emerge it. The steps to doing this are very similar to those described in the section on emerging tests. You might want to perform a full emerge if you’ve modified several files, or would like to run your test in an environment similar to the automated build/test pipeline.
A word of caution: copy-pasting from Google Docs has been known to convert consecutive whitespace characters into unicode characters, which will break your control file. Using CTRL-C + CTRL-V is safer than using middle-click pasting on Linux.
Our aim is to create a test which measures disk and cache read performance with hdparm, e.g.:

hdparm -T <disk>
1. Create a directory in client/site_tests named kernel_HdParmBasic.
2. Create a control file kernel_HdParmBasic/control, a bare minimum control file for the hdparm test:
AUTHOR = "Chrome OS Team"
NAME = "kernel_HdParmBasic"
TIME = "SHORT"
TEST_TYPE = "client"
DOC = """
This test uses hdparm to measure disk performance.
"""
job.run_test('kernel_HdParmBasic', named_arg='kernel test')
To this you can add the necessary control variables as described in the autotest best practices. job.run_test can take any named arguments, and the appropriate ones will be cherry-picked and passed on to the test.
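As a toy illustration of that cherry-picking (this is not autotest's actual implementation, just a sketch of the idea), you can filter keyword arguments against a method's declared signature:

```python
import inspect

def run_test(method, **kwargs):
    # Keep only the named arguments that the method actually declares --
    # a simplified sketch of how run_test's arguments reach run_once.
    accepted = inspect.signature(method).parameters
    return method(**{k: v for k, v in kwargs.items() if k in accepted})

def run_once(named_arg=''):
    return named_arg

# 'named_arg' is passed through; 'unrelated' is silently dropped.
print(run_test(run_once, named_arg='kernel test', unrelated=1))  # kernel test
```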
3. Create a test file:
At a bare minimum the test needs a run_once method, which should contain the implementation of the test; it also needs to inherit from test.test. Most tests also need initialize and cleanup methods. Create a test file client/site_tests/kernel_HdParmBasic/kernel_HdParmBasic.py:

import logging

from autotest_lib.client.bin import test

class kernel_HdParmBasic(test.test):
    version = 1

    def initialize(self):
        logging.debug('initialize')

    def run_once(self, named_arg=''):
        logging.debug(named_arg)

    def cleanup(self):
        logging.debug('cleanup')
Notice how only run_once takes the argument named_arg, which was passed in by the job.run_test method. You can pass arguments to initialize and cleanup this way too. You can find examples of initialize and cleanup methods in helper classes, like cros_ui.
The results folder contains many logs. To analyze client test logging messages you need to find kernel_HdParmBasic.(DEBUG|INFO|ERROR), depending on which logging call you used. Note: logging message priorities escalate, and debug < info < warning < error. If you want to see all logging messages, just look in the DEBUG logs.
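The escalation order can be checked in plain Python, since the standard logging module encodes each priority as a number:

```python
import logging

# Each level is a number; higher numbers mean higher priority.
print(logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR)  # 10 20 30 40

logging.basicConfig(level=logging.INFO)
logging.debug("filtered out, since DEBUG < INFO")   # not shown
logging.warning("shown, since WARNING >= INFO")
```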
Client test logs should be in: /tmp/test_that.<RESULTS_DIR_HASH>/kernel_HdParmBasic/kernel_HdParmBasic/debug
A symlink to the most recent results directory is kept at /tmp/test_that_latest.
In the DEBUG logs you should see messages like:
01/18 12:22:46.716 DEBUG| kernel_HdParmBasic:0025| Your logging message
Note that print messages will not show up in these logs since we redirect stdout. If you've already performed a test_that run once, you can directly invoke your test on a client, as described in the previous section. Two things to note when using this approach:
a. print messages do show up
b. logging messages are also available under autotest/results/default/
You can import any autotest client helper module with the line
from autotest_lib.client.<dir> import <module>
You might also benefit from reading how the framework makes autotest_lib available for you.
kernel_HdParmBasic needs test.test, so it needs to import test from client/bin.
Looking back at our initial test plan, it also needs to pick the right disk, run hdparm, parse the throughput out of its output, and report it:

import logging, re

from autotest_lib.client.bin import test, utils

class kernel_HdParmBasic(test.test):
    """
    Measure disk performance: both disk (-t) and cache (-T).
    """
    version = 1

    def run_once(self):
        if (utils.get_cpu_arch() == "arm"):
            disk = '/dev/mmcblk0'
        else:
            disk = '/dev/sda'
        logging.debug("Using device %s", disk)
        result = utils.system_output('hdparm -T %s' % disk)
        match = re.search('(\d+\.\d+) MB\/sec', result)
        self.write_perf_keyval({'cache_throughput': match.groups()[0]})
        result = utils.system_output('hdparm -t %s' % disk)
        match = re.search('(\d+\.\d+) MB\/sec', result)
        self.write_perf_keyval({'disk_throughput': match.groups()[0]})
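The regular expression used here can be exercised on its own. The hdparm output line below is made up for illustration, but it follows hdparm's usual shape:

```python
import re

# A fabricated hdparm result line, in the usual output format:
sample = " Timing cached reads:   11998 MB in  2.00 seconds = 6003.41 MB/sec"

# Same pattern as in the test: capture the number right before " MB/sec".
# Note that "2.00 seconds" does not match, because the pattern requires
# the " MB/sec" suffix immediately after the number.
match = re.search(r'(\d+\.\d+) MB\/sec', sample)
print(match.group(1))  # 6003.41
```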
Note the use of performance keyvals instead of plain logging statements. The keyvals are written to /usr/local/autotest/results/default/kernel_HdParmBasic/results/keyval on the client and will be reported on your console when run through run_remote_tests:
/usr/local/autotest/results/default/kernel_HdParmBasic/results/keyval

http://www.chromium.org/chromium-os/testing/test-code-labs/autotest-client-tests
Before I came across this post I tried another approach that works… Set the DisplayColumn attribute to a function that returns what you want displayed. For example:
[DisplayColumn("LastFirst")]
[MetadataType(typeof(CustomerMetaData))]
public partial class Customer {
    public string LastFirst() {
        return LastName.ToString() + ", " + FirstName.ToString();
    }
}

public class CustomerMetaData {
}
Great solution!
But what if I would want to display 2 columns (Last and First name separately) instead of 1?
Thank you
"ToString" works. But the problem is, how do you control the sorting then?
Thanks
I have tried WesSmith's technique, and the technique of adding a public property to a partial class (as opposed to a public method in WesSmith's example) and neither work. Only the very limited ToString() override seems to work.
Ideas?
Hi Rick_Anderson
Can you please tell me, in which file should I make the above changes? Please help me out.
Thanks
No answer to odinhaus' question? I am having the same problem.
Hi
I have a question in Dynamic Data. How can I display fields (label and textbox) instead of a foreign-key dropdown in Dynamic Data, so that I can insert a new record into this foreign key as well? I have been searching about this but I didn't find an answer yet.

https://blogs.msdn.microsoft.com/rickandy/2008/11/21/improving-the-fk-field-display-showing-two-fields-in-foreign-key-columns/
Bart De Smet's on-line blog (0x2B | ~0x2B, that's the question)
Introduction
You might have heard about the concept of a tail call already. It's been quite some time since I found myself browsing through the entire ECMA 335 spec of the CLI (to be precise, since mid-July this summer, for some Rotor 2.0 experiments). A few days ago however, I was playing around with F# (Wikipedia) again, triggered by two Channel 9 videos of Mike Hall interviewing Don Syme (blog) (Don has been involved in .NET generics in the FX 2.0 timeframe). Basically, F# is a modern variant of ML languages running on the CLR, created by the Microsoft Research folks. Watch the Channel 9 videos here and there. Back to the original subject: tail calls.
Tail calls
Tail calls are - how can you guess? - calls at the tail of a method. Let's start by taking a look at the concept of a call. Normally a call instruction (call, calli, callvirt) grows the stack. That is, the current method's state is kept and a new activation frame is pushed on the call stack which is used by the called method. Just try this one:
class So
{
    public static void Main()
    {
        Main();
    }
}
No wonder it will crash with a "Process is terminated due to StackOverflowException" message (on my machine after approx. 80,000 calls). (Quiz: Is there any way to catch this exception? Why? Why not? Tip: The answer is very close.)
This code emits a simple call instruction:
call void So::Main()
A tail call on the other hand releases the current stack frame before making the call, therefore keeping the stack low. This is a common 'requirement' in dynamic languages (which C# isn't but F# has chacteristics of this). This means that after a tail call nothing else than the ret instruction should appear in the current method ("Call Terminates Current Method", ECMA 335 - Partition III - 2.1).
Ildasm the executable compiled from the previous piece of code and patch it as follows (see the comments for the line to add and the line to drop):

.class private auto ansi beforefieldinit So
       extends [mscorlib]System.Object
{
  .method public hidebysig static void Main() cil managed
  {
    .entrypoint
    .maxstack  8
    IL_0000:  nop
    IL_0001:  tail.                     // add this prefix
              call       void So::Main()
    IL_0006:  nop                       // drop this line: a tail. call must be followed immediately by ret
    IL_0007:  ret
  } // end of method So::Main
}
Then run ilasm again. Now the app will run till the end of time.
I agree that this sample isn't particularly useful (not to say it's worth just nothing :-)) but I thought it might be useful to some of you to know a) what a tail call is; b) that it is supported by the CLI/CLR; c) how cool dynamic languages should be...
Tip: for a more useful (or better: less useless) example, take a look at Partition V of ECMA 335 which contains an example of tail calls. It's basically the equivalent of this piece of C# (but which doesn't run out of stack space):
class EvenOdd
{
    static bool IsEven(int n)
    {
        if (n == 0)
            return true;
        else
            return IsOdd(n - 1);
    }

    static bool IsOdd(int n)
    {
        if (n == 0)
            return false;
        else
            return IsEven(n - 1);
    }

    static void Main()
    {
        Console.WriteLine(IsEven(1000000));
    }
}
It might be a good exercise for anyone who has time left to try to patch the IL of this program to use tail calls (== not straightforward).
C ya in the F# universe soon?
I really liked that post. Thank you for informing about Tail in CLR. Dynamic languages are just great.
By the way how to call CLR tail function (or whatever in CLR) from C#? Is it possible to use it by using Reflection.Emit namespace.
Thank you.
See
http://community.bartdesmet.net/blogs/bart/archive/2006/09/22/4463.aspx
prad <prad at towardsfreedom.com> wrote:

> On Sun, 15 Aug 2010 21:11:52 +0200
> Ertugrul Soeylemez <es at ertes.de> wrote:
>
> > Just remember this: You have an IO computation? You want to refer to
> > its result? Fine, just use '<-'. That's it.
>
> that's what i've been doing the past two days and it works ... except
> in one case.
>
> for example, i have:
>
> ========
> flData <- readFile key
> let (sec:tle:kwd:dsc:url:typ:dat:txt) = lines flData
> ========
>
> so flData is the computation which reads the contents of a file and i
> can take that and do a lines on it. no problem.

I have the impression that you have understood the problem, but you have
yet to realize that you have. Your 'subs' function needs the contents of
a file, so either you pass those contents explicitly or it needs to
become an IO computation, so it can read them itself.

Anyway, honestly I don't understand what your 'subs' function is about.
It seems to interpret stuff after "```", but you can write this a whole
lot simpler and probably more correct, although still very fragile:

  subs :: [String] -> [String]
  subs []                  = []
  subs ("```" : code : ts) = gt code : subs ts
  subs (t:ts)              = t : subs ts

Why is this fragile? Well, try the following:

  subs ["```"]

Also do yourself and others a favor and write type annotations at least
for all top level definitions. Yes, Haskell has type inference, but for
important parts of your code you should really write explicit type
annotations. Reason: First of all, the types of your functions are the
specification of your program. To write a function, the very first
thing you should do is to write its type signature. Never start with
the function definition. Sometimes you write a valid function, which
doesn't match your intended specification. Also given the type
signature you can reason much better about whether your code really
does what it should do, without even compiling it. Secondly type
annotations make your program far more readable.

For some really simple one-liner functions they aren't strictly
necessary for readability, but personally I write them even for the
simplest functions/values.

Now what's the type of your function, when it should read a file? Think
about it for a while. One very easy way is to give it the contents of
the file as a parameter:

  subs :: String -> [String] -> [String]

Another reasonable way is to let the function read the file itself:

  subs :: FilePath -> [String] -> IO [String]

But beware: The way you were going to do it is very bad. It would read
the file once for each occurrence of "```". Better read the file at the
beginning only, at which point you can just as well split this into two
functions:

  subs :: String -> [String] -> [String]
  subsUsingFile :: FilePath -> [String] -> IO [String]

This particular way to make nonmonadic functions monadic is called
lifting, and because it is such a common thing to do, there are loads
of combinators to do it, most notably liftM, fmap and (<$>), which are
all the same (though liftM can be used in monads only):

  liftM :: Monad m   => (a -> b) -> (m a -> m b)
  fmap  :: Functor f => (a -> b) -> (f a -> f b)
  (<$>) :: Functor f => (a -> b) -> (f a -> f b)

The subsUsingFile function can be implemented in one of the following
ways:

  -- Raw:
  subsUsingFile fn ts = do
    content <- readFile fn
    return (subs content ts)

  -- Using lifting:
  subsUsingFile fn ts = liftM (`subs` ts) $ readFile fn
  subsUsingFile fn ts = fmap (`subs` ts) $ readFile fn
  subsUsingFile fn ts = (`subs` ts) <$> readFile fn

Further note that you are overusing do-notation, even for nonmonadic
code. Read section 6 of my tutorial [1] or a HaskellWiki article about
this [2]. Whenever you see this pattern:

  do x <- c
     return (f x)

you should consider using lifting:

  f <$> c

and don't use do-notation if all you want to do is to make an equation:

  let x = y in ...

especially when your code is not monadic. Your use of the do-notation
in your 'subs' function works only incidentally, because it is indeed a
monadic function, but not in the IO monad, but in the list monad.

[1]
[2]

Greets,
Ertugrul

--
nightmare = unsafePerformIO (getWrongWife >>= sex)

http://www.haskell.org/pipermail/beginners/2010-August/005061.html
Remake the Mosquito Killer(Arduino)
Introduction: Remake the Mosquito Killer(Arduino)
Hi, everyone, i would like to introduce how i remake my mosquito killer here. With this remaking, my mosquito killer become "Smart", and really facilitate my life.
I.
Step 1: Connect the Temperature Sensor to the Screw Shield
Ok, let's go! The first step is connecting the temperature sensor to the screw shield. Just as the picture shows, a resistor is needed here. Connect a 4.7k resistor between the red and white pins of the temperature sensor (thanks to the carefulness of Elecrow, this resistor was packaged with the temperature sensor). Then, connect the wires to the screw shield as in the picture.
The three wires of the temperature sensor are defined as:
VCC<->RED;
GND<->BLACK;
SIGNAL<->WHITE;
Step 2: Connect the RTC Module to Screw Shield
The second step is to connect the RTC module; we need this module to supply the time information. We don't need to worry about a driver for this module, as there is already a library for operating it. In addition, we only use this module to supply time information, so we just need four pins: VCC, GND, SDA, SCL.
Follow the picture to connect the RTC module to the screw shield. And, don't forget to install a battery to the RTC module.
As for the time setting, please refer to the comments in the code to learn how to set the current time.
Step 3: Connect the Shields
After the last two steps, the temperature sensor and the RTC module have been connected to the screw shield. Now we just need to plug the screw shield into the crowduino and plug the relay shield onto the screw shield.
Step 4: Refitting the Electronic Liquid Vaporizer
The liquid vaporizer is very nice and solid. It cost me about half an hour trying to take it apart, but I finally failed; it fully shows that products 'made in China' have high quality...

In the end, I had to use violence to take it apart. Sorry, my liquid vaporizer... Just like the picture, lead out the power wire and cut it off, then use two jumper wires to connect to the power wire, and wrap insulating tape around the joint. Finally, use insulating tape to seal the wound of the liquid vaporizer. I think with this tape, the wound will heal soon, and it looks nice too...
Step 5: Connect the Liquid Vaporizer to the Relay Shield
This step is to connect the liquid vaporizer to the relay shield. The relay shield acts as a low-voltage-controlled switch for high voltage. Please note that the high voltage may be dangerous.

Plug the two wires of the liquid vaporizer into the terminals COM3 and NO3 of the relay shield, and use a screwdriver to tighten them. In order to avoid touching the terminals accidentally, I wrapped some insulating tape around the terminals of the relay shield. After this, I don't have to worry about waking up in the middle of the night and touching the relay shield accidentally.
Step 6: Programming
Now all the hardware work has been finished, and we need to program the system. The program simply combines the existing libraries for the DS18B20 and the RTC module with some changes. If you want to DIY this project for your own purposes, you only need to change a few places.
See picture one: you can define the working time, the current time and the trigger temperature in the first several lines.
For the RTC module, you need to initialize it by setting the current time. Please see picture 1: in setup(), run setDateDs1307(); // set the current time on the RTC module. This function only needs to run one time: after you have set the current time, comment that line out and re-upload the program to your crowduino. You can see the RTC time in the monitor window.
Here is the code of this project, you can download the attachment in the below, or just simply cope the code in the below to your Arduino IDE, and upload to the crowduino.
#include <Wire.h>
#include <OneWire.h>
#define DS1307_I2C_ADDRESS 0x68 // the I2C address of Tiny RTC
//define the start time, for example, I want to monitoring start at 22:00:00
#define STA_HOUR 22
#define STA_MINUTE 00
#define STA_SECOND 0
//define the end time, stop monitoring at 6:30:00
#define END_HOUR 6
#define END_MINUTE 30
#define END_SECOND 0
//define the current time, you can configure the current time in here
#define CURRENT_SECOND 0
#define CURRENT_MINUTE 0
#define CURRENT_HOUR 12
#define CURRENT_DAYOFWEEK 3
#define CURRENT_DAYOFMONTH 16
#define CURRENT_MONTH 3
#define CURRENT_YEAR 2013
//define the trigger temperature; within the specified time window, the relay will only trigger when the temperature is higher than 22
#define TRIGGER_TEMPERATURE 22
OneWire ds(14); // on pin 14 for temperature
boolean flag; //To record the state of the temperature sensor
byte second, minute, hour, dayOfWeek, dayOfMonth, month, year;
long staTotalSecond, endTotalSecond, currentTotalSecond;
// Convert normal decimal numbers to binary coded decimal
byte decToBcd(byte val)
{
return ( (val/10*16) + (val%10) );
}
// Convert binary coded decimal to normal decimal numbers
byte bcdToDec(byte val)
{
return ( (val/16*10) + (val%16) );
}
void setup() {
Wire.begin();
Serial.begin(19200);
flag = true;
staTotalSecond = long(STA_HOUR) * 3600 + long(STA_MINUTE) * 60 + long(STA_SECOND);//to calculate the total seconds
//Serial.println(staTotalSecond);
endTotalSecond = long(END_HOUR) * 3600 + long(END_MINUTE) * 60 + long(END_SECOND);//to calculate the total seconds
//Serial.println(endTotalSecond);
//define the relay pins, the relay shield have 4 relays
pinMode(4,OUTPUT);
pinMode(5,OUTPUT);
pinMode(6,OUTPUT);
pinMode(7,OUTPUT);
digitalWrite(4,LOW);
digitalWrite(5,LOW);
digitalWrite(6,LOW);
digitalWrite(7,LOW);
setDateDs1307(); //Set the current time on the RTC module;
//this code only needs to run one time; after setting the current time successfully, please comment this line out.
}
void loop()
{
float temperature;
getDateDs1307();//get the time data from tiny RTC
currentTotalSecond = long(hour) * 3600 + long(minute) * 60 + long(second);
// Serial.println(currentTotalSecond);
if(currentTotalSecond > endTotalSecond && currentTotalSecond < staTotalSecond)// true when the current time is outside the monitoring window (which crosses midnight)
{
digitalWrite(5,LOW);//relay off
}
else
{
temperature = getTemperature('c');//to get the temperature
if (flag)
{
Serial.println(temperature);
if(temperature > TRIGGER_TEMPERATURE)//if temperature higher than setting temperature, relay on
{
digitalWrite(5,HIGH);//relay on
}
else
{
digitalWrite(5,LOW);//relay off
}
}
}
delay(60000);//check the time and the temperature every 60 seconds
}
// Function to set the current time; adjust the CURRENT_* defines at the top of the sketch
void setDateDs1307()
{
second = CURRENT_SECOND;
minute = CURRENT_MINUTE;
hour = CURRENT_HOUR;
dayOfWeek = CURRENT_DAYOFWEEK;
dayOfMonth = CURRENT_DAYOFMONTH;
month = CURRENT_MONTH;
year = CURRENT_YEAR % 100; // the DS1307 stores a two-digit year
Wire.beginTransmission(DS1307_I2C_ADDRESS);
Wire.write(decToBcd(0));
Wire.write(decToBcd(second)); // 0 to bit 7 starts the clock
Wire.write(decToBcd(minute));
Wire.write(decToBcd(hour)); // If you want 12 hour am/pm you need to set
// bit 6 (also need to change readDateDs1307)
Wire.write(decToBcd(dayOfWeek));
Wire.write(decToBcd(dayOfMonth));
Wire.write(decToBcd(month));
Wire.write(decToBcd(year));
Wire.endTransmission();
}
// Function to gets the date and time from the ds1307 and prints result
void getDateDs1307()
{
// Reset the register pointer
Wire.beginTransmission(DS1307_I2C_ADDRESS);
Wire.write(decToBcd(0));
Wire.endTransmission();
Wire.requestFrom(DS1307_I2C_ADDRESS, 7);

// A few of these need masks because certain bits are control bits
second     = bcdToDec(Wire.read() & 0x7f);
minute     = bcdToDec(Wire.read());
hour       = bcdToDec(Wire.read() & 0x3f); // need to change this if using 12 hour am/pm
dayOfWeek  = bcdToDec(Wire.read());
dayOfMonth = bcdToDec(Wire.read());
month      = bcdToDec(Wire.read());
year       = bcdToDec(Wire.read());
Serial.print(hour, DEC);
Serial.print(":");
Serial.print(minute, DEC);
Serial.print(":");
Serial.print(second, DEC);
Serial.print(" ");
Serial.print(month, DEC);
Serial.print("/");
Serial.print(dayOfMonth, DEC);
Serial.print("/");
Serial.print(year,DEC);
Serial.print(" ");
Serial.println();
//Serial.print("Day of week:");
}
//get the temperature; the parameter is a char: if it equals 'f', return Fahrenheit, else return Celsius
float getTemperature(char unit)
{
byte i;
byte present = 0;
byte type_s = 0;
byte data[12];
byte addr[8];
float celsius, fahrenheit;
if ( !ds.search(addr)) {
// Serial.println("No more addresses.");
// Serial.println();
ds.reset_search();
delay(250);
flag = false;
return 0;
}
else
flag = true;
if (OneWire::crc8(addr, 7) != addr[7]) {
// Serial.println("CRC is not valid!");
return 2;
}
// Serial.println();
// the first ROM byte indicates which chip
switch (addr[0]) {
case 0x10:
type_s = 1;
break;
case 0x28:
type_s = 0;
break;
case 0x22:
type_s = 0;
break;
default:
return 3;
}

ds.reset();
ds.select(addr);
ds.write(0x44, 1);      // start conversion, with parasite power on at the end

delay(1000);            // 750 ms is enough for a 12-bit conversion; 1 s is safe

present = ds.reset();
ds.select(addr);
ds.write(0xBE);         // Read Scratchpad

for ( i = 0; i < 9; i++) { // we need 9 bytes
data[i] = ds.read();
}
// Convert the data to an actual temperature
int16_t raw = (data[1] << 8) | data[0];
if (type_s) {
  raw = raw << 3; // 9 bit resolution default
  if (data[7] == 0x10) {
    // "count remain" gives full 12 bit resolution
    raw = (raw & 0xFFF0) + 12 - data[6];
  }
} else {
  byte cfg = (data[4] & 0x60);
  // at lower resolutions, the low bits are undefined, so zero them
  if (cfg == 0x00) raw = raw & ~7;       // 9 bit resolution
  else if (cfg == 0x20) raw = raw & ~3;  // 10 bit resolution
  else if (cfg == 0x40) raw = raw & ~1;  // 11 bit resolution
  // default is 12 bit resolution
}
celsius = (float)raw / 16.0;
fahrenheit = celsius * 1.8 + 32.0;
if ('f'== unit)
return fahrenheit;
else
return celsius;
}
Step 7: Install and Test the System
Now we have connected the hardware and uploaded the program; the last thing is testing the system. I used a 9V adapter to power the system, and put it in my bedroom to help me defeat the mosquitoes. After three days of testing, the system works very well; from now on I can have a good dream.
I used the same RTC for a binary clock. It was relatively easy to program the time, but after two weeks it is already inaccurate by three minutes. Thinking this might become a problem if it is intended for longtime use. Darn those cheap Chinese products! The prices are so consumable, but their reliability can always be foreseen as nonexistent. I will never stop buying them though :)
Have you thought about producing high frequency using arduino to repel the mosquitoes? I think it would be healthier.
It may be a little difficult; do you have any ideas?
Hey, nice project! I might be putting together one for the summer.
Just a quick note: RTC=real-time clock.
Thanks for your remind! I have modify it.
What liquid to you put in the vaporizer?
Hi Ringai,
I put the mosquito incense liquor to the vaporizer, it is matched to the vaporizer, both them are sell together in the market.
Thanks, Jack!
are you sure that this system kills mosquitoes ?
Hi Andreyeurope,
Yes, of course. The main thing I want to show is the automatic control, you can use it in much places not only for the liquid vaporizer.
Then it is very nice.
It's very useful! I live in Shenzhen too! Can you share this instructable on a site for Chinese DIYers?
No problem, I glad to share it, this instruction has GPL license, you can share it too.
Awesome app, and works great on my tablet. There's lots of inspirational designs.
http://www.instructables.com/id/Smart-kill-mosquitoes-systemArduino/
The Quicksort algorithm is one of the most used sorting algorithms, especially for sorting large lists/arrays. Quicksort is a divide-and-conquer algorithm: the original array is partitioned into two sub-arrays around a pivot, each is sorted recursively, and because partitioning already places every element on the correct side of the pivot, no merge step is needed to produce the sorted array. On average it has O(n log n) complexity, making quicksort suitable for sorting big data volumes.
In more formal terms, the quicksort algorithm repeatedly divides an unsorted section into a lower-order sub-section and a higher-order sub-section by comparing elements to a pivot element. At the end of the recursion, we get a sorted array. Please note that quicksort can be implemented to sort "in-place". This means that the sorting takes place within the array and that no additional array needs to be created.
Quicksort algorithm
The basic idea of Quicksort algorithm can be described as these steps:
If the array contains only one element or zero elements, then the array is sorted. If the array contains more than one element, then:

- Select an element as the pivot element, generally from the middle, though this is not required.
- Partition the data elements into two parts: one with elements lower in order than the pivot element, one with elements higher in order than the pivot element.
- Sort both parts separately by repeating steps 1 and 2.
Quicksort Java Example
Below is a sample quicksort Java implementation.
import java.util.Arrays;

public class QuickSortExample {

    public static void main(String[] args) {
        // This is the unsorted array
        Integer[] array = new Integer[] { 12, 13, 24, 10, 3, 6, 90, 70 };

        // Let's sort using quick sort
        quickSort(array, 0, array.length - 1);

        // Verify the sorted array
        System.out.println(Arrays.toString(array));
    }

    public static void quickSort(Integer[] arr, int low, int high) {
        // Check for an empty or null array
        if (arr == null || arr.length == 0) {
            return;
        }

        if (low >= high) {
            return;
        }

        // Get the pivot element from the middle of the list
        int middle = low + (high - low) / 2;
        int pivot = arr[middle];

        // Make left < pivot and right > pivot
        int i = low, j = high;
        while (i <= j) {
            // Check until all values on left side array are lower than pivot
            while (arr[i] < pivot) {
                i++;
            }

            // Check until all values on right side array are greater than pivot
            while (arr[j] > pivot) {
                j--;
            }

            // Now compare values from both sides of the list to see if they need swapping.
            // After swapping, move the iterator on both lists.
            if (i <= j) {
                swap(arr, i, j);
                i++;
                j--;
            }
        }

        // Do the same operation as above recursively to sort the two sub-arrays
        if (low < j) {
            quickSort(arr, low, j);
        }
        if (high > i) {
            quickSort(arr, i, high);
        }
    }

    public static void swap(Integer[] array, int x, int y) {
        int temp = array[x];
        array[x] = array[y];
        array[y] = temp;
    }
}

Output: [3, 6, 10, 12, 13, 24, 70, 90]
In Java, the Arrays.sort() method sorts arrays of primitives using a dual-pivot quicksort. The dual pivot makes the algorithm even faster in practice. Check that out.
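For comparison, sorting the same sample data with the built-in method (as primitives, so the dual-pivot quicksort path is used):

```java
import java.util.Arrays;

public class BuiltInSortDemo {
    public static void main(String[] args) {
        // Same sample values as above, but in an int[] so that
        // Arrays.sort() dispatches to its primitive (dual-pivot) sort.
        int[] array = { 12, 13, 24, 10, 3, 6, 90, 70 };
        Arrays.sort(array);
        System.out.println(Arrays.toString(array)); // [3, 6, 10, 12, 13, 24, 70, 90]
    }
}
```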
Happy Learning !!
Feedback, Discussion and Comments
Ivan A Aranda
Jeliot 3.7.2 appears not to be doing autoboxing and unboxing between int and Integer; I had to use intValue() for the array items and, in the swap method, switch the temp variable to Integer rather than int. Eclipse Build id: 20200313-1211 works fine.
Val
Quicksort is a very good topic for discussion. I propose describing a bit more how the worst cases take O(n^2) time, and why the Java SDK chose this implementation.
Lokesh Gupta
You're welcome.
Val
Little mistake in comment of code:
instead of “//Check until all values on left side array are greater than pivot”,
it should be “//Check until all values on right side array are greater than pivot”.

https://howtodoinjava.com/algorithm/quicksort-java-example/
Lesson 5 - Birthday Reminder in Java Swing - Logic Layer

In the previous lesson, Birthday Reminder in Java Swing - Form design, we finished designing the forms for our application. In today's lesson, we're going to focus on the design of the logic layer, i.e. the classes that contain the application logic.
Date
Since working with dates and times is quite uncomfortable in Java, let's add a small utility class with static methods to the project to make our work easier:
import java.text.ParseException;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class Date {

    private static DateTimeFormatter dateFormat = DateTimeFormatter.ofPattern("d'.'M'.'y");

    public static String format(LocalDateTime date) {
        return date.format(dateFormat);
    }

    public static LocalDate parse(String dateText) throws ParseException {
        return LocalDate.parse(dateText, dateFormat);
    }
}
The class contains a date format definition. Moreover, it has two methods to convert between a date and its text representation. The date can be extracted from a string (as entered by the user) or e.g. written as text.
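The pattern used above can be tried out in a small standalone program; the sample date string below is made up for illustration:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DateFormatDemo {
    public static void main(String[] args) {
        // Same pattern as the Date utility class: day.month.year with literal dots.
        DateTimeFormatter dateFormat = DateTimeFormatter.ofPattern("d'.'M'.'y");

        LocalDate parsed = LocalDate.parse("16.3.2013", dateFormat);
        System.out.println(parsed);                    // 2013-03-16
        System.out.println(parsed.format(dateFormat)); // 16.3.2013
    }
}
```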
Person
We'll certainly have persons in our application, so let's create a class for them.
Properties
A person will have 2 properties: name and birthday. The name will be a String; the birthday will be of the LocalDate type. We'll implement both properties as private fields and generate getters for them:
import java.time.LocalDate;

public class Person {

    private String name;
    private LocalDate birthday;

    public String getName() {
        return name;
    }

    public LocalDate getBirthday() {
        return birthday;
    }
}
Methods
A person will have several methods, but now we'll focus just on its constructor to make our application executable as soon as possible. We'll complete it later. So we'll add a parametric constructor to the class.
Constructor
Besides setting the instance properties, the constructor will also be responsible for validating them. Let's see its code:
public Person(String name, LocalDate birthday) throws IllegalArgumentException {
    if (name.length() < 3) {
        throw new IllegalArgumentException("The name is too short");
    }
    if (birthday.isAfter(LocalDate.now())) {
        throw new IllegalArgumentException("The birth date cannot be in the future!");
    }
    this.name = name;
    this.birthday = birthday;
}
First, we verify that the name isn't too short and that the entered birthday isn't in the future. If one of these situations occurs, an IllegalArgumentException is thrown and we pass a text message for the user into its constructor.
If you haven't worked with exceptions yet, it doesn't matter. All you need to know is that this is how object-oriented applications handle errors, especially those caused by entering an invalid value by the user or occurred while working with files. Throwing an exception terminates the method immediately. We'll show how to respond to the exception further in the course. We'll always throw exceptions only in logic classes.
toString()
Since we want to print our persons, we'll override the toString() method to return the person's name:
@Override
public String toString() {
    return name;
}
This method will be later used by the JList to print its items.
Person manager
The last logic component of the application will be a person manager. The class will manage the persons. It'll be able to add, remove, and save their list to a file and load it back from it. Finally, it'll be able to search for a person who has the nearest birthday.
So add a new PersonManager class to the project.
Fields
The only class field is a list of persons. The list is of the DefaultListModel type. We haven't encountered this collection in this course yet. It's a special collection type that can be set as the data source for form components. ListModel can trigger change events when its contents change. All components on the form that have this ListModel set as the data source are automatically refreshed thanks to this mechanism. You can imagine that refreshing dozens of components on a form manually when data changes could be very difficult. When we add a new person in our app, it'll be immediately visible in the person list without the need to refresh it anyhow. It'll refresh automatically. We'll generate a getter for DefaultListModel and rename it to getModel():
So far, the class looks as follows:
public class PersonManager {
    private DefaultListModel<Person> persons = new DefaultListModel<>();

    public ListModel getModel() {
        return persons;
    }
}
Methods
Again, let's put only the most important methods in the class for now.
add() and remove()
The methods for adding and removing a person are absolutely trivial:
public void add(Person person) {
    persons.addElement(person);
}

public void remove(Person person) {
    persons.removeElement(person);
}
getPersons()
Since DefaultListModel lacks a lot of important methods, we'll add a getPersons() method to the manager which returns the persons as an ordinary List. So we can get both a model and a list. We'll convert the model to the List using the static Collections class:
public List<Person> getPersons() {
    return Collections.list(persons.elements());
}
We've finished the core of the logic layer. Next time, Birthday Reminder in Java Swing - Wiring the layers, we'll show you how to wire the logic to the form and get the whole application up and running.
| https://www.ict.social/java/forms/form-applications-in-java-swing/birthday-reminder-in-java-swing-logic-layer | CC-MAIN-2019-26 | refinedweb | 850 | 55.74
I guarantee only that the remaining kids will be alive in the morning. The condition of the house? Weeellllll....
TimA:dcole13:
Yooooooo, can I come!?
OK, When?
#include <std_disclaimer>
Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.
TimA:dcole13:TimA:Andib: Was a good night, Great to finally put faces to usernames.
Only thing I could suggest for next time is Name Tags, Would make knowing who's who much easier.
Also thanks Microsoft for the BoomPods and Bigpipe for the bartab!
Edit: and @nate for hosting the night
Great to meet you Andrew!
Met a few people which is good!
I WON SOMETHING, I ACTUALLY WON SOMETHING! 2 passes to the zoo!
Yooooooo, can I come!?
OK, When?
Home ADSL: School:
hio77:TimA:dcole13:
Yooooooo, can I come!?
OK, When?
be sure to take photos you two ;)
Great night everyone.
To those who I didnt get to met, next time maybe :)
Had a great time, and Was nice to come out of my shell a little and meet you all
Definitely name badges would be a good move, although guess who you are was a fun game - Couldn't help it Lorenceo, was a great laugh to see how long it would take you!
Steam: Coil (Same photos as profile here)
Origin: Scranax
Currently playing on PC: Rust, Subnautica, CS:GO, AOE2 HD, BeamNG Drive, BF1.
shermanp: Thanks to Nate and the staff at the Tuihana Cafe. You were great hosts.
And thanks to Big Pipe providing the bar tab.
It was nice to meet some new people, and get out of my shell a bit.
BigHammer:shermanp: Thanks to Nate and the staff at the Tuihana Cafe. You were great hosts.
And thanks to Big Pipe providing the bar tab.
It was nice to meet some new people, and get out of my shell a bit.
If short term memory serves (and it often doesn't) and you're the chap with the fully functioning 2008 Sony e-reader. My not much more than a year old kindle is still broken. Had a quick look when I got home tonight. :-( | https://www.geekzone.co.nz/forums.asp?forumid=48&topicid=151474&page_no=12 | CC-MAIN-2018-09 | refinedweb | 365 | 83.66 |
* Berin Loritsch (bloritsch@apache.org) wrote :
> Ok, but what we the end product, an Aggregation system?
> I think there is the beginnings of one with the
> FragmentExtractorGenerator that will make recursive
Hmm. That's not quite what it does -- it takes SAX events from a
document, and stashes them for later retrival from another pipeline.
Common use being to extract inline SVG documents from xhtml streams.
(roll on proper SVG and namespace support in browsers).
I'm going to try and post a proposal for content aggregation for voting
sometime over this weekend. I would have done it earlier, but my Copious
Free Time has been taken up with other things of late.
P.
--
Paul Russell Email: paul@luminas.co.uk
Technical Director Tel: +44 (0)20 8553 6622
Luminas Internet Applications Fax: +44 (0)870 28 47489
This is not an official statement or order. Web: | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200102.mbox/%3C20010216142808.A8072@hydrogen.internal.luminas.co.uk%3E | CC-MAIN-2014-15 | refinedweb | 147 | 65.93 |
Early life
Professional wrestling career
World Wrestling Federation / Entertainment
Debut and Team Xtreme (1999–2000)
[…] on one occasion unsuccessfully challenging him for the WWF Light Heavyweight Championship.
The Invasion (2001–2002)
In late 2001, the Hardy Boyz began a storyline in which they were feuding with one another.[18][20] Lita refereed a match between them at Vengeance on December 9.[2][20] On the December 17 episode of Raw, both Jeff and Lita were sidelined with storyline injuries following a title match between Jeff and WWF Hardcore Champion The Undertaker.[1]
Injury and return (2002–2003)
She returned to the ring after an absence of seventeen months on the September 15 episode of Raw, saving Trish Stratus from a beating at the hands of Molly Holly and Gail Kim.[1].]
Rivalry with Trish; storyline with Kane
Relationship with Edge and retirement (2006–2007)
After Edge defeated John Cena to win the WWE Championship on January 8 at New Year's Revolution, he announced that he and Lita were going to celebrate by having sex in the middle of the ring the next night on Raw.[2]..
Music career
Media
Personal life
In wrestling
- Signature moves
- Double team moves
- Entrance music
- "Electron" (WWF)[76]
- "Loaded" by Zack Tempest (with The Hardy Boyz) (WWF)
- "It Just Feels Right" by Jim Johnston (WWF)[76]
- "Lovefurypassionenergy"(remix) by Boy Hits Car (WWE)[76][77]
Championships and accomplishments
- Pro Wrestling Illustrated
- Feud of the Year[79] with Edge vs. Matt Hardy
- Woman of the Year (2001)[80]
- Pro Wrestling Report
- Diva of the Year (2006)
- World Wrestling Federation / World Wrestling Entertainment
Notes
- ^ a b c d e f g h i j k l m "Lita's Alumni Profile". WWE.. Retrieved on 2007-06 "Lita's Bio". SLAM! Wrestling. 2005-03-03.. Retrieved on 2007-10-18.
- ^ a b c d e f g h i j k l m Stephen Laroche (2001-02-14). "Lita riding wave of popularity". SLAM! Wrestling.. Retrieved on 2007-10-18.
- ^ a b c d e f g h i j k Thomas Chamberlin (April 2001). "Lita's More Than Lovely". Wrestling Digest.. Retrieved on 2007-06-06.
- ^ a b c d e f Williams, Scott E. (2007). Hardcore History: The Extremely Unauthorized Story of ECW. Sports Publishing. p. 171. ISBN 1596702257.
- ^ a b c d e f g h i j k l m n o p q r s t u v w x y Jeff Clark (2007-09-07). "The Luchagors Drop a Powerbomb". Stomp and Stammer.. Retrieved on 2007-10-02.
- ^ a b c d Craig Tello (2006-11-27). "Lita says goodbye".. Retrieved on 2007-02-25.
- ^ a b Rennie, Steve (2005-04-21). "Matt Hardy pulls no punches on Between the Ropes". SLAM! Wrestling.. Retrieved on 2008-01-25.
- ^ a b Amy Dumas. Lita: A Less Traveled R.O.A.D – The Reality of Amy Dumas, 41.
- ^ a b c d Ramezanpour, Pejman (2001-06-26). "Lita vid a revealing look at a WWF Diva". SLAM! Wrestling.. Retrieved on 2007-10-18.
- ^ Amy Dumas. Lita: A Less Traveled R.O.A.D – The Reality of Amy Dumas, 73.
- ^ Amy Dumas. Lita: A Less Traveled R.O.A.D – The Reality of Amy Dumas, 69–70.
- ^ Copeland, Adam (2004). Adam Copeland On Edge. Simon and Schuster. p. 225. ISBN 1416505237.
- ^ Amy Dumas. Lita: A Less Traveled R.O.A.D – The Reality of Amy Dumas, 202.
- ^ a b c "Lita's First Reign". WWE.. Retrieved on 2007-04-05.
- ^ Pat McNeill. The Tables All Were Broken, 36.
- ^ "Ivory's Third Reign". WWE.. Retrieved on 2007-04-05.
- ^ a b Steve Anderson (December 2001). "Broken hearts and broken limbs". Wrestling Digest.. Retrieved on 2007-10-23.
- ^ "Survivor Series 2001: Results". WWE.. Retrieved on 2007-11-03.
- ^ a b Pat McNeill. The Tables All Were Broken, 339.
- ^ "RAW: December 10, 2001 Results". Online World of Wrestling.. Retrieved on 2007-11-04.
- ^.. Retrieved on 2007-06-06.
- ^ Scott Keith. Wrestling's One Ring Circus: The Death of the World Wrestling Federation, 40.
- ^ Tim Baines (2003-11-09). "Lita makes up for lost time". Ottawa Sun.. Retrieved on 2007-02-25.
- ^ a b c d "November 17, 2003 Raw Results". Online World of Wrestling.. Retrieved on 2007-11-04.
- ^ "November 24, 2003 RAW Results". Online World of Wrestling.. Retrieved on 2008-04-12.
- ^ Mike McAvennie (2007-02-14). "Can't get no Stratus-faction". WWE.. Retrieved on 2007-11-04.
- ^ Ian Hamilton. Wrestling's Sinking Ship: What Happens to an Industry Without Competition, 88.
- ^.. Retrieved on 2007-11-04.
- ^ a b Jason Clevett (March 2, 2005). "Lita on road to recovery". SLAM! Wrestling.. Retrieved on 2007-10-18.
- ^ Ian Hamilton. Wrestling's Sinking Ship: What Happens to an Industry Without Competition, 133.
- ^ Ian Hamilton. Wrestling's Sinking Ship: What Happens to an Industry Without Competition, 134.
- ^ "WrestleMania 21: Results". WWE.. Retrieved on 2007-11-04.
- ^ "Backlash 2005: Results". WWE.. Retrieved on 2007-11-04.
- ^ "A Barbaric Batista". WWE.com. 2005-05-30.. Retrieved on 2007-11-03.
- ^ "Nobody Gets Up From The Pedigree". WWE. 2005-06-20. Archived from the original on 2005-07-20.. Retrieved on 2007-11-03.
- ^ a b Lilsboys (February 2006). "Matt: I still will not die". The Sun.. Retrieved on 2007-04-12.
- ^ Ian Hamilton. Wrestling's Sinking Ship: What Happens to an Industry Without Competition, 213.
- ^ "RAW ratings rise". WWE. 2006-01-10.. Retrieved on 2007-03-21.
- ^ McAvennie, Mike (2006-12-24). "Raw's Sex Edge-ucation". Archived from the original on 2007-02-05.. Retrieved on 2007-03-21.
- ^ a b "The Road to WrestleMania". 2006-02-06.. Retrieved on 2007-02-25.
- ^ Brian Elliot (2006-06-12). "ECW resurrected at PPV". SLAM! Sports.. Retrieved on 2007-09-23.
- ^ Zack Zeigler (2006-08-14). "Cena goes off". WWE.. Retrieved on 2007-11-13.
- ^ Zack Zeigler (2006-08-28). "DX death sentence?". WWE.. Retrieved on 2007-11-13.
- ^ Steven Schiff (2006-09-17). "Trish bows out on top". WWE.. Retrieved on 2007-11-03.
- ^ Brett Hoffman (2006-11-05). "Champion again". WWE.. Retrieved on 2007-11-03.
- ^ Zack Zeigler (2006-11-20). "Breaking Down in Baltimore". WWE.. Retrieved on 2007-11-13.
- ^ "Rock-n-Shock at The Masquerade". 2006-09-14.. Retrieved on 2007-04-07.
- ^ "PunkRockalypse and Merch". 2007-04-16.. Retrieved on 2007-04-17.
- ^ a b "World Wrestling Federation Superstar Lita Holds Signing At WWF NY For New Home Video". Business Wire. 2001-07-16.. Retrieved on 2007-10-18.
- ^ a b c Bob Kapur (October 24, 2003). "Lita's book an interesting R.E.A.D.". SLAM! Wrestling.. Retrieved on 2007-10-18.
- ^ "Fear Factor Rewind: Episode 215". NBC.com. 2002-02-25.. Retrieved on 2007-11-03.
- ^ Eric Benner (2001-11-16). "WWF shows strength on The Weakest Link". SLAM! Sports.. Retrieved on 2007-11-03.
- ^ George Appiah (2004-03-12). "Let's Get Ready to...Wrestle". TheHillTopOnline.com.. Retrieved on 2007-11-06.
- ^ Nicholas Sammond (2005). Steel Chair To The Head: The Pleasure And Pain Of Professional Wrestling. Duke University Press. p. 174. ISBN 0822334380.
- ^
- ^ Amy Dumas. Lita: A Less Traveled R.O.A.D – The Reality of Amy Dumas, 38.
- ^ Ian Hamilton. Wrestling's Sinking Ship: What Happens to an Industry Without Competition, 150.
- ^ Ian Hamilton. Wrestling's Sinking Ship: What Happens to an Industry Without Competition, 152.
- ^ Zeigler, Zack (2006-08-18). "Winning the War".. Retrieved on 2007-04-09.
- ^ a b c d e f "Lita Bio". Online World of Wrestling.. Retrieved on 2008-08-05.
- ^ Dumas, Amy. Lita: A Less Traveled Road: The Reality of Amy Dumas, p. 149.
- ^ a b Tylwalk, Nick (December 7, 2004). "Raw: Lita finally prevails". SLAM! Wrestling.. Retrieved on 2009-06-22.
- ^ a b Hubbard, Nathan. "WWE Heat TV report for December 16". Wrestling Observer.. Retrieved on 2009-06-22.
- ^ "Off the mat, on to music". Topeka Capital-Journal. 2008-03-13.. Retrieved on 2008-01-28.
- ^ Powell, John (July 24, 2000). "A bloody good PPV: WWF stars bleed for the company at Fully Loaded". SLAM! Wrestling.. Retrieved on 2009-06-22.
- ^ Dumas, Amy. Lita: A Less Traveled Road: The Reality of Amy Dumas, p. 250.
- ^ a b c "WWF/E Wrestling Theme Count and Title Names". Wrestling Information Archive.. Retrieved on 2007-11-03.
- ^ "WWF Forceable Entry Debuts At no.3 On Billboard Top 200". Business Wire. 2002-04-04.. Retrieved on 2007-10-18.
- ^ Eric Schomburg (2006-12-14). "WWE and TNA: Year in Review 2006". American Chronicle.. Retrieved on 2007-11-04.
- ^ "Pro Wrestling Illustrated Award Winners Feud of the Year". Wrestling Information Archive.. Retrieved on 2008-07-09.
- ^ "Pro Wrestling Illustrated Award Winners Woman of the Year". Wrestling Information Archive.. Retrieved on 2008-07-09.
- ^ "WWE Women's Championship official title history". WWE.. Retrieved on 2008-07-09.
References
- Amy Dumas (2004). Lita: A Less Traveled R.O.A.D – The Reality of Amy Dumas. WWE Books. ISBN 074347399X.
- Ian Hamilton (2006). Wrestling's Sinking Ship: What Happens to an Industry Without Competition. Lulu.com. ISBN 1411612108.
- Pat McNeill (2002). The Tables All Were Broken: McNeill's Take on the End of Professional Wrestling. iUniverse. ISBN 0595224040.
- Scott Keith (2004). Wrestling's One Ring Circus: The Death of the World Wrestling Federation. Citadel Press. ISBN 080652619X.
External links
- Amy Dumas at the Internet Movie Database
- Lita's WWE Alumni Profile
- AdoreYourPets.org (charity website)
- The Luchagors Official Website
- WrestlingEpicenter.com August 2008 Interactive Interview with Amy Dumas
- Amy Dumas at MySpace
This entry is from Wikipedia, the leading user-contributed encyclopedia. It may not have been reviewed by professional editors (see full disclaimer) | http://www.answers.com/topic/amy-dumas | crawl-002 | refinedweb | 1,617 | 81.29 |
R vs Python: head to head data analysis
The epic battle between R vs Python goes on. Here we are comparing both of them in terms of generic tasks of data scientist’s like reading CSV, finding data summary, PCA, model building, plotting, and many more.
By Vik Paruchuri, Dataquest.io.
Read in a csv file
Dataframes are available in both R and Python, and are two-dimensional arrays (matrices) where each column can be of a different datatype. At the end of this step, the csv file has been loaded by both languages into a dataframe.
Find the number of players
R
dim(nba)
[1] 481 31
Python
nba.shape
(481, 31)
This prints out the number of players and the number of columns in each. We have 481 rows, or players, and 31 columns containing data on the players.
Look
sapply(nba, mean, na.rm=TRUE) applies mean to each column; taking the mean of string values will just result in NA – not available.
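For comparison, the same caveat can be handled explicitly in plain-stdlib Python: skip non-numeric columns instead of letting them turn into an NA placeholder. The tiny dataset below is illustrative, not the article's nba data, and the column names are invented:

```python
import statistics

# toy rows standing in for the article's nba dataframe (column names invented)
rows = [
    {'player': 'A', 'ast': 4, 'fg': 10},
    {'player': 'B', 'ast': 6, 'fg': 8},
]

means = {}
for col in rows[0]:
    values = [r[col] for r in rows]
    # only average genuinely numeric columns; string columns are
    # skipped outright instead of producing an NA placeholder
    if all(isinstance(v, (int, float)) for v in values):
        means[col] = statistics.mean(values)

print(means)
```

The pandas equivalent is `nba.mean(numeric_only=True)`, which makes the same choice: non-numeric columns are dropped from the result rather than averaged.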
ggpairs(nba[,c("ast", "fg", "trb")])
Python
import seaborn as sns
import matplotlib.pyplot as plt
sns.pairplot(nba[["ast", "fg", "trb"]])
plt.show()
Matplotlib plot format, x**2 as x^2
So when I plot, I noticed matplot doesn't like x^2 and other similar formats. Instead it wants Python based math expressions such as x**2.
How do I plot using x^2 notation instead of x**2? Another example is it only wants exp(x) instead of e^(x).
x = arange(-self.x_window, self.x_window, self.res) try: y = eval(self.tf.text.lower()) plt.plot(x, y, linewidth=3) except: console.hud_alert('syntax error','error',1.00) return
In my code, should I just detect for ^ or e and replace it with ** or exp, respectively?
By the way, the end goal here is to allow a user to enter "familiar" math notation into a text field and allow it to be processed. x**4 for example is not familiar notation.
I guess you'd have to use regular expressions to preprocess the user input before you plot. You have to be careful how you do it though, because e^ should become exp and not e**.
A quick and dirty solution would be to print a line to tell the user to use ** instead of ^ etc.
formula = formula.replace('e^(', 'exp(').replace('^', '**')
Too smooth.
Ok Mr. smarty pants lol. How about this one....
What if a user puts in 5x ( as in 5 times x) or any NUMBER next to x? Matplot only wants 5*x. How can I use string.replace to squeeze that * between them?
- MartinPacker
I would bid for permitting both notations - in your final code. After all users are quite likely to come across both in their lives. (I certainly have.) :-)
regexis probably better here but...
def x_times(formula):
    if 'x ' in formula:
        for i in xrange(10):
            formula = formula.replace('{}x '.format(i), '{} * x '.format(i))
    return formula
@Martin
Yes, agreed. Thank you.
The exp( ) example above inherently allows for both. Also, what ever the solution is for the 5x example should allow for both 5x and 5*x as well.
Also, think about e^x or e^-x or e^4 (without the brackets). For that you definitely need regular expressions.
import re
formula = re.sub(r'e\^(-?[x0-9.]+)', r'exp(\1)', formula)
Something like that, haven't tested it, but should give you a start if you want to do something with re. | https://forum.omz-software.com/topic/1940/matplotlib-plot-format-x-2-as-x-2 | CC-MAIN-2021-04 | refinedweb | 389 | 68.77 |
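Pulling the thread's suggestions together, here is one hedged sketch of a preprocessor covering all three notations (the function name and regexes are mine, and it is deliberately naive; the ordering matters, since e^ must be rewritten before the generic ^ is turned into **):

```python
import re

def to_python_expr(formula):
    # e^(...) and e^x become exp(...): do this before '^' is rewritten
    formula = formula.replace('e^(', 'exp(')
    formula = re.sub(r'e\^(-?[\w.]+)', r'exp(\1)', formula)
    # any remaining '^' is exponentiation
    formula = formula.replace('^', '**')
    # a digit directly followed by a letter or '(' means multiplication: 5x -> 5*x
    formula = re.sub(r'(\d)\s*([A-Za-z(])', r'\1*\2', formula)
    return formula
```

This covers the familiar notations discussed above, but it will misread pathological input (for example a trailing 'e' inside a longer variable name), so it is a starting point rather than a full parser.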
Hi, I need help with a simple algorithm.
I have a variable that receives a value every 5 seconds (like 10, 20, 34, 56, 75) over a websocket, and I need to store it in an array like value[n]
only with n and n-1, meaning the current value received and the last value received. For example, from the values above, store only 34 and 56 and get their difference (22) into a result variable.
I was thinking about push and pop in arrays
Any help is welcomeRegards
SOLVED
var data = [0];
function diference(value){
    data.push(value);
    var x = data.shift() - data[0];
    return Math.abs(x);
}
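The same idea, keeping only the previous sample instead of a growing array, could be sketched in Python like this (the class and method names are illustrative, not part of the original thread):

```python
class RollingDiff:
    """Keep only the previous sample and report |current - previous|."""

    def __init__(self):
        self.prev = None

    def update(self, value):
        if self.prev is None:          # first sample: nothing to compare yet
            self.prev = value
            return 0
        diff = abs(value - self.prev)  # e.g. |56 - 34| = 22
        self.prev = value
        return diff
```

Storing a single previous value avoids the push/shift bookkeeping entirely and uses constant memory no matter how long the socket stays open.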
Odoo Help
Button not clickable in list view
Hello Community,
I try to add a button in a list; the view appears, the button appears too but it is not clickable. It's like in a ghost state and I am unable to have the focus on it.
Here the code :
<record id="pkg-cfg_lines_tree_tree" model="ir.ui.view"> <field name="name">pkg-cfg.tree</field> <field name="model">pkg-cfg_lines</field> <field name="view_mode">form</field> <field name="view_type">form</field> <field name="arch" type="xml"> <tree string="Tree" version="7.0" > <field name="pname" readonly="1" /> <field name="chse" readonly="1"/> <button name="add_line" string="Add" type="object" icon="gtk-go-forward" /> </tree> </field> </record>
and the python code for the function :
def add_line(self, cr, uid, ids, context): print "ENTER IN pkg-cfg_lines.add_line" return True
When I try to click on the button, the action is never called....
I also tried using a button with action type for the button :
< button
with the following action :
<record id="add_line" model="ir.actions.server"> <field name="type">ir.actions.server</field> <field name="name">Testing</field> <field name="condition">True</field> <field name="state">code</field> <field name="model_id" ref="model_sale_configure_container"/> <field eval="5" name="sequence"/> <field name="code">action=obj.add_line(context=context)</field> </record>
Any ideas ?
Thanks in advance, Marc
You need to be in the reading mode, not the edit mode of your entry. We have the same issue.
EDIT
Hi sven,
So how can you launch the view in reading mode?
Did you manage to solve your issue ? Marc Cassuto (SFL)
Yes, we have made some dirty hack : In place of the button, we have put a checkbox (fields.boolean) with an onchange on it.
Hello Sven,
yes, we also thought about this solution... but this is a dirty one, I agree and there is some GUI glitch with it....
You mentioned view_type and view_mode while displaying the tree view. Is there a need to mention them? Please try without those 2 lines. The < button ... > form is generally used to call a wizard from the onclick of a button.
Hello, Yes I also tried without these 2 lines => no success.
I know about my second try (with "%(add_line)d"), it is generally used to call a wizard but it is also a method to call an action ;-) | https://www.odoo.com/forum/help-1/question/button-not-clickable-in-list-view-27679 | CC-MAIN-2018-05 | refinedweb | 483 | 66.64 |
25 January 2011 16:58 [Source: ICIS news]
TORONTO (ICIS)--DuPont has raised its 2011 earnings guidance due to a lower base tax rate and reduced pension obligations as its previous sales outlook remains unchanged, the company's chief financial officer said on Tuesday.
DuPont raised the guidance for 2011 to $3.45-$3.75 per share, up from $3.30-$3.60 per share. The new guidance excluded DuPont’s planned $6.3bn deal for
DuPont’s previous sales forecast for 2011 of around $33bn (€24bn) to $34bn remained unchanged, he added.
While the economic recovery would continue this year, growth would proceed “at a more moderate pace” compared with 2010, he said.
DuPont expected raw materials prices to increase 4-5%, largely driven by metals, as well as price increases in ethane, adipic acid, chlorine, solvent and pigments, he said.
DuPont’s ability to pass through those higher costs to customers would vary from market to market, Fanandakis said.
In its titanium dioxide (TiO2) business, for example, DuPont was well-positioned, as that market was “extremely tight”, with capacity utilisation above 90% and no near-term major capacity expansion, he said.
“Sales to capacity continues to be tight for most performance chemical markets, and as a result we will continue to focus on debottlenecking incremental capacity to drive both earnings and productivity,” he added.
CEO Ellen Kullman said one of the challenges DuPont would face in 2011 was slower growth in the
In
However, the Asia-Pacific automotive, photovoltaic, electronics, TiO2 and other markets were expected to continue “very, very strong”.
In
DuPont earlier on Tuesday reported that 2010 fourth-quarter net profit fell by 14.7% year on year to $376m. However, before special charges and gains, its net profit rose 15.2% year on year to $463m.
New York-based equity research firm Alembic Global Advisors said DuPont’s fourth quarter exceeded estimates, largely because of lower taxes.
“That said, the quarter logged an impressive 12% volume growth with increases in all regions,” Alembic's head of research, Hassan Ahmed, said in a note to clients.
“We were particularly encouraged by 9% volume growth seen in the US and 8% volume growth in [Europe, Middle East and Africa],” he added.
($1= €0.73)
If you’re a Pythonist who’s been longing for some Sinatra action, then look no further. Denied is the next generation Python micro-web-framework, wrapped in a single portable library.
Let’s run a simple routed app on port 8080:
from deny import *

@route('/')
def hello():
    return 'Hello World!'

if __name__ == '__main__':
    run()
It’s a beautiful thing.
[Source on GitHub] [Project Page]
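Under the hood, a micro-framework like this is little more than a decorator that files handlers into a routing table. A minimal, hypothetical sketch of that mechanism (not Denied's actual source; the names are invented):

```python
_routes = {}

def route(path):
    # decorator factory: file the handler under its path
    def register(handler):
        _routes[path] = handler
        return handler
    return register

@route('/')
def hello():
    return 'Hello World!'

def dispatch(path):
    # look up and call the registered handler, if any
    handler = _routes.get(path)
    return handler() if handler else '404 Not Found'
```

A real framework wraps `dispatch` in a WSGI callable and parses the request path, but the routing table itself is usually about this simple.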
Update
Turns out, the developer wrote this as an April Fool’s joke. Though functional, this micro-framework will not be maintained. If you like what you saw, and want to use a similar, rock-solid, maintained Python framework, have a look at Bottle or Itty.
More of just a fragment of things to do, rather than a full post on anything I've done, because the blog thing is just increasingly hard to find the time or motivation for now; work sucks time and energy and makes everything just whatever…

Anyway – years ago a post on data textualisation. Nothing's changed, hype around robot journalism has waned and the offerings there are still pretty much just your plain old report/document generation.

Not a lot more than homebrew stuff like this, still:
One thing I have noticed is more official Excel spreadsheets starting to include dynamic reporting in them. For example, the NHS Digital Hospital Accident and Emergency Activity, 2016-17 data has an output sheet that will dynamically create a report from the other data sheets based on a user selection.
The report uses formulas to generate output text:
and tables and charts are also generated as part of the report:
A couple of times, Leigh Dodds has linked out to tracery.io in the data2txt / text generation context, but I’ve yet to play with it (tutorial). There’s a python port – pytracery – at aparrish/pytracery which is probably the one I’ll use.
There’s a notebook that looks like it could provide a handy crib to using
pytracery with CSV data. I’m not sure how well it copes with turning numbers into words, but it might be interesting to try to weave in support from something like inflect.py if it doesn’t.
I had a play, when I should have been in the garden… :-( Fork here with modifiers from
inflect.py and a demo of using it to generate sentences from rows in a
pandas dataframe.
One of the things I’ve started trying to do is package up simple tools to grab structured data from webpublished CSVs and Excel docs in simple SQLite3 databases that can be published using datasette (ouseful-datasupply).
Also on the to do list in this regard is to look at some simple demos for creating datasette templates (perhaps even cribbed from report generating formulas in spreadsheets as a shortcut) that render individual rows (or joined rows) as text reports, such as paragraphs of text reporting on winter sitrep stats for a specified hospital. From one template, we’d auto generate reports for every hospital: database reporting, literally.
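Even without tracery, that "one template, one report per row" idea is just string templating over query results. A minimal stdlib sketch (the table, hospital names and column names below are invented for illustration; real winter sitrep columns will differ):

```python
import sqlite3

# hypothetical table and template; real sitrep column names will differ
template = ("{hospital} reported {beds_open} open beds, "
            "of which {beds_occupied} were occupied.")

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE sitrep (hospital TEXT, beds_open INT, beds_occupied INT)')
conn.executemany('INSERT INTO sitrep VALUES (?, ?, ?)',
                 [('St Elsewhere', 120, 113), ('Holby City', 90, 88)])

# one template, one report per row: database reporting, literally
reports = [template.format(hospital=h, beds_open=o, beds_occupied=c)
           for h, o, c in conn.execute('SELECT * FROM sitrep')]
```

A datasette row template would do essentially the same thing, with the templating moved into the page that renders each row.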
Or maybe database reporting, 2.0…
(Hmm… datasette reporting…? The practice of generating templated news wire reports using datasette?)
Why 2.0? Because writing database reports has been going on for ever, but getting folk to think about it as a basic journalistic reporting skill that represents a practical example of the sort of thing of everyday task that might become more widespread as a result of “everyone should learn to programme” initiatives.
PS code example using my inflect.py enhanced version of pytracery:
# assumes the author's inflect-enhanced fork of pytracery is installed
import tracery
from tracery.modifiers import base_english
import pandas as pd

df = pd.DataFrame({'name': ['Jo', 'Sam'], 'pos': [1, 2]})
df
#   name  pos
# 0   Jo    1
# 1  Sam    2

rules = {'origin': "#name# was placed #posord.number_to_words#.",
         'posord': '#pos.ordinal#'}

def row_mapper(row, rules):
    row = row.to_dict()
    rules = rules.copy()
    for k in row:
        rules[k] = str(row[k])
    grammar = tracery.Grammar(rules)
    grammar.add_modifiers(base_english)
    return grammar.flatten("#origin#")

df['report'] = df.apply(lambda row: row_mapper(row, rules), axis=1)
df
#   name  pos                  report
# 0   Jo    1    Jo was placed first.
# 1  Sam    2   Sam was placed second
PPS another example of local data report generation in the wild: Local area SEND report .
Summary: Applications may choose to store encrypted data such as connection strings and account credentials in the Windows registry. This How To shows you how to store and retrieve encrypted strings in the registry. (7 printed pages)
HKEY_LOCAL_MACHINE\Software\TestApplication
This How To includes the following steps:
This procedure creates a Windows application that will be used to encrypt a sample database string and store it in the registry.
To store the encrypted data in the registry
To create this assembly, you must perform the steps described in How To: Create an Encryption Library in .NET 1.1 in the Reference section of this guide.
using Encryption;
using System.Text;
using Microsoft.Win32;
Table 1. EncryptionTestApp controls
Figure 1. Encryption Test Harness dialog box
"Server=local; database=pubs; uid=Bob; pwd=Password"
.
"Encryption Test Harness"");
The encrypted connection string is displayed in the Encrypted String field.
The original string is displayed in the Decrypted String field.
HKLM\Software\TestApplication
Confirm that encoded values are present for the connectionString, initVector and key named
Table 2. WebForm1.aspx controls
The encrypted and decrypted connection strings are displayed on the Web form.
For more information, see How To: Create an Encryption Library in the Reference section of this guide. | http://msdn.microsoft.com/en-us/library/aa302406.aspx | crawl-002 | refinedweb | 218 | 50.73 |
In order to be as general as possible, the library uses a class to compute all the necessary functions rounded upward or downward. This class is the first parameter of policies, it is also the type named rounding in the policy definition of interval.
By default, it is
interval_lib::rounded_math<T>. The
class
interval_lib::rounded_math is already specialized for
the standard floating types (
float ,
double and
long double). So if the base type of your intervals is not one
of these, a good solution would probably be to provide a specialization of
this class. But if the default specialization of
rounded_math<T> for
float,
double, or
long double is not what you seek, or
you do not want to specialize
interval_lib::rounded_math<T> (say because you prefer to
work in your own namespace) you can also define your own rounding policy
and pass it directly to
interval_lib::policies.
Here comes what the class is supposed to provide. The domains are written next to their respective functions (as you can see, the functions do not have to worry about invalid values, but they have to handle infinite arguments).
/* Rounding requirements */ struct rounding { // defaut constructor, destructor rounding(); ~rounding(); // mathematical operations T add_down(T, T); // [-∞;+∞][-∞;+∞] T add_up (T, T); // [-∞;+∞][-∞;+∞] T sub_down(T, T); // [-∞;+∞][-∞;+∞] T sub_up (T, T); // [-∞;+∞][-∞;+∞] T mul_down(T, T); // [-∞;+∞][-∞;+∞] T mul_up (T, T); // [-∞;+∞][-∞;+∞] T div_down(T, T); // [-∞;+∞]([-∞;+∞]-{0}) T div_up (T, T); // [-∞;+∞]([-∞;+∞]-{0}) T sqrt_down(T); // ]0;+∞] T sqrt_up (T); // ]0;+∞] T exp_down(T); // [-∞;+∞] T exp_up (T); // [-∞;+∞] T log_down(T); // ]0;+∞] T log_up (T); // ]0;+∞] T cos_down(T); // [0;2π] T cos_up (T); // [0;2π] T tan_down(T); // ]-π/2;π/2[ T tan_up (T); // ]-π/2;π/2[ T asin_down(T); // [-1;1] T asin_up (T); // [-1;1] T acos_down(T); // [-1;1] T acos_up (T); // [-1;1] T atan_down(T); // [-∞;+∞] T atan_up (T); // [-∞;+∞] T sinh_down(T); // [-∞;+∞] T sinh_up (T); // [-∞;+∞] T cosh_down(T); // [-∞;+∞] T cosh_up (T); // [-∞;+∞] T tanh_down(T); // [-∞;+∞] T tanh_up (T); // [-∞;+∞] T asinh_down(T); // [-∞;+∞] T asinh_up (T); // [-∞;+∞] T acosh_down(T); // [1;+∞] T acosh_up (T); // [1;+∞] T atanh_down(T); // [-1;1] T atanh_up (T); // [-1;1] T median(T, T); // [-∞;+∞][-∞;+∞] T int_down(T); // [-∞;+∞] T int_up (T); // [-∞;+∞] // conversion functions T conv_down(U); T conv_up (U); // unprotected rounding class typedef ... unprotected_rounding; };
The constructor and destructor of the rounding class have a very
important semantic requirement: they are responsible for setting and
resetting the rounding modes of the computation on T. For instance, if T is
a standard floating point type and floating point computation is performed
according to the Standard IEEE 754, the constructor can save the current
rounding state, each
_up (resp.
_down) function
will round up (resp. down), and the destructor will restore the saved
rounding state. Indeed this is the behavior of the default rounding
policy.
The meaning of all the mathematical functions up until
atanh_up is clear: each function returns number representable
in the type
T which is a lower bound (for
_down)
or upper bound (for
_up) on the true mathematical result of
the corresponding function. The function
median computes the
average of its two arguments rounded to its nearest representable number.
The functions
int_down and
int_up compute the
nearest integer smaller or bigger than their argument. Finally,
conv_down and
conv_up are responsible of the
conversions of values of other types to the base number type: the first one
must round down the value and the second one must round it up.
The type
unprotected_rounding allows to remove all
controls. For reasons why one might to do this, see the protection paragraph below.
A lot of classes are provided. The classes are organized by level. At
the bottom is the class
rounding_control. At the next level
come
rounded_arith_exact,
rounded_arith_std and
rounded_arith_opp. Then there are
rounded_transc_dummy,
rounded_transc_exact,
rounded_transc_std and
rounded_transc_opp. And
finally are
save_state and
save_state_nothing.
Each of these classes provide a set of members that are required by the
classes of the next level. For example, a
rounded_transc_...
class needs the members of a
rounded_arith_... class.
When they exist in two versions
_std and
_opp,
the first one does switch the rounding mode each time, and the second one
tries to keep it oriented toward plus infinity. The main purpose of the
_opp version is to speed up the computations through the use
of the "opposite trick" (see the performance notes).
This version requires the rounding mode to be upward before entering any
computation functions of the class. It guarantees that the rounding mode
will still be upward at the exit of the functions.
Please note that it is really a very bad idea to mix the
_opp version with the
_std since they do not have
compatible properties.
There is a third version named
_exact which computes the
functions without changing the rounding mode. It is an "exact" version
because it is intended for a base type that produces exact results.
The last version is the
_dummy version. It does not do any
computations but still produces compatible results.
Please note that it is possible to use the "exact" version for an
inexact base type, e.g.
float or
double. In that
case, the inclusion property is no longer guaranteed, but this can be
useful to speed up the computation when the inclusion property is not
desired strictly. For instance, in computer graphics, a small error due to
floating-point roundoff is acceptable as long as an approximate version of
the inclusion property holds.
Here comes what each class defines. Later, when they will be described more thoroughly, these members will not be repeated. Please come back here in order to see them. Inheritance is also used to avoid repetitions.
template <class T> struct rounding_control { typedef ... rounding_mode; void set_rounding_mode(rounding_mode); void get_rounding_mode(rounding_mode&); void downward (); void upward (); void to_nearest(); T to_int(T); T force_rounding(T); }; template <class T, class Rounding> struct rounded_arith_... : Rounding { void init(); T add_down(T, T); T add_up (T, T); T sub_down(T, T); T sub_up (T, T); T mul_down(T, T); T mul_up (T, T); T div_down(T, T); T div_up (T, T); T sqrt_down(T); T sqrt_up (T); T median(T, T); T int_down(T); T int_up (T); }; template <class T, class Rounding> struct rounded_transc_... : Rounding { T exp_down(T); T exp_up (T); T log_down(T); T log_up (T); T cos_down(T); T cos_up (T); T tan_down(T); T tan_up (T); T asin_down(T); T asin_up (T); T acos_down(T); T acos_up (T); T atan_down(T); T atan_up (T); T sinh_down(T); T sinh_up (T); T cosh_down(T); T cosh_up (T); T tanh_down(T); T tanh_up (T); T asinh_down(T); T asinh_up (T); T acosh_down(T); T acosh_up (T); T atanh_down(T); T atanh_up (T); }; template <class Rounding> struct save_state_... : Rounding { save_state_...(); ~save_state_...(); typedef ... unprotected_rounding; };
namespace boost { namespace numeric { namespace interval_lib { /* basic rounding control */ template <class T> struct rounding_control; /* arithmetic functions rounding */ template <class T, class Rounding = rounding_control<T> > struct rounded_arith_exact; template <class T, class Rounding = rounding_control<T> > struct rounded_arith_std; template <class T, class Rounding = rounding_control<T> > struct rounded_arith_opp; /* transcendental functions rounding */ template <class T, class Rounding> struct rounded_transc_dummy; template <class T, class Rounding = rounded_arith_exact<T> > struct rounded_transc_exact; template <class T, class Rounding = rounded_arith_std<T> > struct rounded_transc_std; template <class T, class Rounding = rounded_arith_opp<T> > struct rounded_transc_opp; /* rounding-state-saving classes */ template <class Rounding> struct save_state; template <class Rounding> struct save_state_nothing; /* default policy for type T */ template <class T> struct rounded_math; template <> struct rounded_math<float>; template <> struct rounded_math<double>; /* some metaprogramming to convert a protected to unprotected rounding */ template <class I> struct unprotect; } // namespace interval_lib } // namespace numeric } // namespace boost
We now describe each class in the order they appear in the definition of a rounding policy (this outermost-to-innermost order is the reverse order from the synopsis).
Protection refers to the fact that the interval operations will be
surrounded by rounding mode controls. Unprotecting a class means to remove
all the rounding controls. Each rounding policy provides a type
unprotected_rounding. The required type
unprotected_rounding gives another rounding class that enables
to work when nested inside rounding. For example, the first three lines
below should all produce the same result (because the first operation is
the rounding constructor, and the last is its destructor, which take care
of setting the rounding modes); and the last line is allowed to have an
undefined behavior (since no rounding constructor or destructor is ever
called).
T c; { rounding rnd; c = rnd.add_down(a, b); } T c; { rounding rnd1; { rounding rnd2; c = rnd2.add_down(a, b); } } T c; { rounding rnd1; { rounding::unprotected_rounding rnd2; c = rnd2.add_down(a, b); } } T d; { rounding::unprotected_rounding rnd; d = rnd.add_down(a, b); }
Naturally
rounding::unprotected_rounding may simply be
rounding itself. But it can improve performance if it is a
simplified version with empty constructor and destructor. In order to avoid
undefined behaviors, in the library, an object of type
rounding::unprotected_rounding is guaranteed to be created
only when an object of type
rounding is already alive. See the
performance notes for some additional details.
The support library defines a metaprogramming class template
unprotect which takes an interval type
I and
returns an interval type
unprotect<I>::type where the
rounding policy has been unprotected. Some information about the types:
interval<T, interval_lib::policies<Rounding, _>
>::traits_type::rounding is the same type as
Rounding, and
unprotect<interval<T,
interval_lib::policies<Rounding, _> > >::type is
the same type as
interval<T,
interval_lib::policies<Rounding::unprotected, _> >.
First comes
save_state. This class is responsible for
saving the current rounding mode and calling init in its constructor, and
for restoring the saved rounding mode in its destructor. This class also
defines the
unprotected_rounding type.
If the rounding mode does not require any state-saving or
initialization,
save_state_nothing can be used instead of
save_state.
The classes
rounded_transc_exact,
rounded_transc_std and
rounded_transc_opp expect
the std namespace to provide the functions exp log cos tan acos asin atan
cosh sinh tanh acosh asinh atanh. For the
_std and
_opp versions, all these functions should respect the current
rounding mode fixed by a call to downward or upward.
Please note: Unfortunately, the latter is rarely the
case. It is the reason why a class
rounded_transc_dummy is
provided which does not depend on the functions from the std namespace.
There is no magic, however. The functions of
rounded_transc_dummy do not compute anything. They only return
valid values. For example,
cos_down always returns -1. In this
way, we do verify the inclusion property for the default implementation,
even if this has strictly no value for the user. In order to have useful
values, another policy should be used explicitely, which will most likely
lead to a violation of the inclusion property. In this way, we ensure that
the violation is clearly pointed out to the user who then knows what he
stands against. This class could have been used as the default
transcendental rounding class, but it was decided it would be better for
the compilation to fail due to missing declarations rather than succeed
thanks to valid but unusable functions.
The classes
rounded_arith_std and
rounded_arith_opp expect the operators + - * / and the
function
std::sqrt to respect the current rounding mode.
The class
rounded_arith_exact requires
std::floor and
std::ceil to be defined since it
can not rely on
to_int.
The functions defined by each of the previous classes did not need any
explanation. For example, the behavior of
add_down is to
compute the sum of two numbers rounded downward. For
rounding_control, the situation is a bit more complex.
The basic function is
force_rounding which returns its
argument correctly rounded accordingly to the current rounding mode if it
was not already the case. This function is necessary to handle delayed
rounding. Indeed, depending on the way the computations are done, the
intermediate results may be internaly stored in a more precise format and
it can lead to a wrong rounding. So the function enforces the rounding.
Here is an example of what happens when the rounding
is not enforced.
The function
get_rounding_mode returns the current rounding
mode,
set_rounding_mode sets the rounding mode back to a
previous value returned by
get_rounding_mode.
downward,
upward and
to_nearest sets
the rounding mode in one of the three directions. This rounding mode should
be global to all the functions that use the type
T. For
example, after a call to
downward,
force_rounding(x+y) is expected to return the sum rounded
toward -∞.
The function
to_int computes the nearest integer
accordingly to the current rounding mode.
The non-specialized version of
rounding_control does not do
anything. The functions for the rounding mode are empty, and
to_int and
force_rounding are identity functions.
The
pi_ constant functions return suitable integers (for
example,
pi_up returns
T(4)).
The class template
rounding_control is specialized for
float,
double and
long double in
order to best use the floating point unit of the computer.
The default policy (aka
rounded_math<T>) is simply
defined as:
template <class T> struct rounded_math<T> : save_state_nothing<rounded_arith_exact<T> > {};
and the specializations for
float,
double and
long double use
rounded_arith_opp, as in:
template <> struct rounded_math<float> : save_state<rounded_arith_opp<float> > {}; template <> struct rounded_math<double> : save_state<rounded_arith_opp<double> > {}; template <> struct rounded_math<long double> : save_state<rounded_arith_opp<long double> > {};
This paragraph deals mostly with the performance of the library with
intervals using the floating-point unit (FPU) of the computer. Let's
consider the sum of [a,b] and [c,d] as an
example. The result is [
down(a+c),
up(b+d)], where
down and
up indicate the rounding mode needed.
If the FPU is able to use a different rounding mode for each operation, there is no problem. For example, it's the case for the Alpha processor: each floating-point instruction can specify a different rounding mode. However, the IEEE-754 Standard does not require such a behavior. So most of the FPUs only provide some instructions to set the rounding mode for all subsequent operations. And generally, these instructions need to flush the pipeline of the FPU.
In this situation, the time needed to sum [a,b] and [c,d] is far worse than the time needed to calculate a+b and c+d since the two additions cannot be parallelized. Consequently, the objective is to diminish the number of rounding mode switches.
If this library is not used to provide exact computations, but only for pair arithmetic, the solution is quite simple: do not use rounding. In that case, doing the sum [a,b] and [c,d] will be as fast as computing a+b and c+d. Everything is perfect.
However, if exact computations are required, such a solution is totally unthinkable. So, are we penniless? No, there is still a trick available. Indeed, down(a+c) = -up(-a-c) if the unary minus is an exact operation. It is now possible to calculate the whole sum with the same rounding mode. Generally, the cost of the mode switching is worse than the cost of the sign changes.
The interval addition is not the only operation; most of the interval operations can be computed by setting the rounding direction of the FPU only once. So the operations of the floating point rounding policy assume that the direction is correctly set. This assumption is usually not true in a program (the user and the standard library expect the rounding direction to be to nearest), so these operations have to be enclosed in a shell that sets the floating point environment. This protection is done by the constructor and destructor of the rounding policy.
Les us now consider the case of two consecutive interval additions: [a,b] + [c,d] + [e,f]. The generated code should look like:
init_rounding_mode(); // rounding object construction during the first addition t1 = -(-a - c); t2 = b + d; restore_rounding_mode(); // rounding object destruction init_rounding_mode(); // rounding object construction during the second addition x = -(-t1 - e); y = t2 + f; restore_rounding_mode(); // rounding object destruction // the result is the interval [x,y]
Between the two operations, the rounding direction is restored, and then initialized again. Ideally, compilers should be able to optimize this useless code away. But unfortunately they are not, and this slows the code down by an order of magnitude. In order to avoid this bottleneck, the user can tell to the interval operations that they do not need to be protected anymore. It will then be up to the user to protect the interval computations. The compiler will then be able to generate such a code:
init_rounding_mode(); // done by the user x = -(-a - c - e); y = b + d + f; restore_rounding_mode(); // done by the user
The user will have to create a rounding object. And as long as this
object is alive, unprotected versions of the interval operations can be
used. They are selected by using an interval type with a specific rounding
policy. If the initial interval type is
I, then
I::traits_type::rounding is the type of the rounding object,
and
interval_lib::unprotect<I>::type is the type of the
unprotected interval type.
Because the rounding mode of the FPU is changed during the life of the rounding object, any arithmetic floating point operation that does not involve the interval library can lead to unexpected results. And reciprocally, using unprotected interval operation when no rounding object is alive will produce intervals that are not guaranteed anymore to contain the real result.
Here is an example of Horner's scheme to compute the value of a polynom. The rounding mode switches are disabled for the whole computation.
// I is an interval class, the polynom is a simple array template<class I> I horner(const I& x, const I p[], int n) { // save and initialize the rounding mode typename I::traits_type::rounding rnd; // define the unprotected version of the interval type typedef typename boost::numeric::interval_lib::unprotect<I>::type R; const R& a = x; R y = p[n - 1]; for(int i = n - 2; i >= 0; i--) { y = y * a + (const R&)(p[i]); } return y; // restore the rounding mode with the destruction of rnd }
Please note that a rounding object is specially created in order to
protect all the interval computations. Each interval of type I is converted
in an interval of type R before any operations. If this conversion is not
done, the result is still correct, but the interest of this whole
optimization has disappeared. Whenever possible, it is good to convert to
const R& instead of
R: indeed, the function
could already be called inside an unprotection block so the types
R and
I would be the same interval, no need for a
conversion.
It was said at the beginning that the Alpha processors can use a specific rounding mode for each operation. However, due to the instruction format, the rounding toward plus infinity is not available. Only the rounding toward minus infinity can be used. So the trick using the change of sign becomes essential, but there is no need to save and restore the rounding mode on both sides of an operation.
There is another problem besides the cost of the rounding mode switch. Some FPUs use extended registers (for example, float computations will be done with double registers, or double computations with long double registers). Consequently, many problems can arise.
The first one is due to to the extended precision of the mantissa. The rounding is also done on this extended precision. And consequently, we still have down(a+b) = -up(-a-b) in the extended registers. But back to the standard precision, we now have down(a+b) < -up(-a-b) instead of an equality. A solution could be not to use this method. But there still are other problems, with the comparisons between numbers for example.
Naturally, there is also a problem with the extended precision of the exponent. To illustrate this problem, let m be the biggest number before +inf. If we calculate 2*[m,m], the answer should be [m,inf]. But due to the extended registers, the FPU will first store [2m,2m] and then convert it to [inf,inf] at the end of the calculus (when the rounding mode is toward +inf). So the answer is no more accurate.
There is only one solution: to force the FPU to convert the extended values back to standard precision after each operation. Some FPUs provide an instruction able to do this conversion (for example the PowerPC processors). But for the FPUs that do not provide it (the x86 processors), the only solution is to write the values to memory and read them back. Such an operation is obviously very expensive.
Here come several cases:
floator
doubletypes, use the default
rounded_math<T>;
save_state_nothing<rounded_transc_exact<T> >;
save_state_nothing<rounded_transc_dummy<T, rounded_arith_exact<T> > >or directly
save_state_nothing<rounded_arith_exact<T> >;
Revised 2006-12-24
Brönnimann, Polytechnic University
Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at) | http://www.boost.org/doc/libs/1_45_0/libs/numeric/interval/doc/rounding.htm | CC-MAIN-2016-44 | refinedweb | 3,471 | 51.68 |
/* * . * * @(#)lockf.h 8.1 (Berkeley) 6/11/93 * $FreeBSD: src/sys/sys/lockf.h,v 1.16 2004/04/07 04:19:49 imp Exp $ */ #ifndef _SYS_LOCKF_H_ #define _SYS_LOCKF_H_ #include <sys/queue.h> #include <sys/cdefs.h> struct vnop_advlock_args; struct vnode; #ifdef MALLOC_DECLARE MALLOC_DECLARE(M_LOCKF); #endif /* * The lockf structure is a kernel structure which contains the information * associated with a byte range lock. The lockf structures are linked into * the vnode structure. Locks are sorted by the starting byte of the lock for * efficiency after they have been committed; uncommitted locks are on the list * head so they may quickly be accessed, and are both short lived and transient. */ TAILQ_HEAD(locklist, lockf); #pragma pack(4) struct lockf { short lf_flags; /* Semantics: F_POSIX, F_FLOCK, F_WAIT */ short lf_type; /* Lock type: F_RDLCK, F_WRLCK */ off_t lf_start; /* Byte # of the start of the lock */ off_t lf_end; /* Byte # of the end of the lock (-1=EOF) */ caddr_t lf_id; /* Id of the resource holding the lock */ struct lockf **lf_head; /* Back pointer to the head of the locf list */ struct vnode *lf_vnode; /* Back pointer to the inode */ struct lockf *lf_next; /* Pointer to the next lock on this inode */ struct locklist lf_blkhd; /* List of requests blocked on this lock */ TAILQ_ENTRY(lockf) lf_block;/* A request waiting for a lock */ }; #pragma pack() /* Maximum length of sleep chains to traverse to try and detect deadlock. */ #define MAXDEPTH 50 __BEGIN_DECLS #ifdef KERNEL_PRIVATE int lf_advlock(struct vnop_advlock_args *); int lf_assert(struct vnop_advlock_args *, void **); void lf_commit(void *, int); #ifdef LOCKF_DEBUG void lf_print(char *, struct lockf *); void lf_printlist(char *, struct lockf *); #endif #endif /* KERNEL_PRIVATE */ __END_DECLS #endif /* !_SYS_LOCKF_H_ */ | http://opensource.apple.com//source/xnu/xnu-1456.1.26/bsd/sys/lockf.h | CC-MAIN-2016-40 | refinedweb | 259 | 50.6 |
Arun Singh Total Post:309 Points:1545 Posted by Arun Singh August-29-2011 0:42 AM SharePoint SharePoint 1 Answer(s) 1666 View(s) Ratings: Rate this:
Post:309Points:1545
What are safe controls, and what type of information, is placed in that element in a SharePoint web.config file?
When you deploy a Web Part to SharePoint, you must first make it as a safe control to use within SharePoint in the web.config file. Entries made in the safe controls element of SharePoint are encountered by the Share Point Handler object and will be loaded in the SharePoint environment properly, those not will not be loaded and will throw an error.
In the generic safe control entry (this is general, there could be more), there is generally the Assembly name, the namespace, the public key token numeric, the type name, and the safe declaration (whether it is safe or not). There are other optional elements. | https://www.mindstick.com/interview/1181/what-are-safe-controls-and-what-type-of-information-is-placed-in-that-element-in-a-sharepoint-web-config-file | CC-MAIN-2018-22 | refinedweb | 158 | 50.09 |
Project Overview
In this article, I will show you a very fast way to use Python with cloud-based APIs from the Renesas IoT Sandbox and the IBM Watson service, so you can present a concept to your boss and get approval to build a prototype or pilot deployment of a highly scalable IoT project.
For simplicity and speed of development, I’ll show you how to build a mobile application entirely in Python, which is very popular for data analysis. Python is also the language used to build Renesas IoT Sandbox Data Intelligence workflows. Let’s polish up those Python skills for the data-rich IoT world we now live in.
Although Python is not as popular as JavaScript with Cordova for mobile app prototyping, I chose Python for this project because it is commonly used for IoT data analytics on the server side and fits in nicely with the overall skillset for the Renesas IoT Sandbox ecosystem. If you want to use JavaScript on the front end, there are a gazillion tutorials out there. If you aren’t already experienced with JavaScript and are coming to IoT from the device side, Python may be a faster route for you.
My previous article, Using Speech in Your IoT Projects, showed the use of bash and Python scripts from the command line to connect to the Renesas IoT Sandbox Data Monitoring API and then send the data to the IBM Watson API.
This article focuses on GUI creation with Python. A future article will show how you can finish the mobile application and replace the API access scripts using curl with Python using the Requests module.
Although I’m using a non-standard Python graphics framework called Pygame that is primarily used for game creation on the desktop, it is fast and easy to use and suitable for single-screen applications. By spending less time on your GUI, you can spend more time on the core functionality of your IoT project.
The underlying cloud APIs from Renesas and IBM can scale your dream project to millions of devices, billions of rows of data, and provide advanced analytics and AI. Let’s get started unleashing your imagination.
Prerequisites
Learning
- Using Speech in Your IoT Projects
- Renesas IoT Sandbox Getting Started Guide from the S5D9 hacking guide
Software
- Python 2.7.x
- Pygame 1.9.3
- Install with pip on most OSs, including Windows 10
Code Structure
main.py uses two small modules, gui and event. I’ve put my GUI code in a file called lib/gui.py. My event handling code is in lib/event.py.
To use the modules, put a blank file called __init__.py in the lib directory.
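If you want to scaffold that layout from the command line, the commands below are a convenience sketch (assuming a Unix-like shell; on Windows you can create the files in Explorer instead):

```shell
# Create the watsoniot project skeleton described above.
mkdir -p watsoniot/lib
touch watsoniot/main.py
# The blank __init__.py makes lib importable as a package.
touch watsoniot/lib/__init__.py
touch watsoniot/lib/gui.py
touch watsoniot/lib/event.py
```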
Main Starting Point For Your Code
We’re going to run our application on the desktop first before modifying the application to run on an Android phone. Don’t worry, the modifications are minor and you’ll be able to use all of the code you create for the desktop application.
Save the following code snippet into a new folder called watsoniot. Call the file main.py.
import pygame
from lib import gui
from lib import event


def main():
    pygame.init()
    screen = pygame.display.set_mode((720, 1280))
    handler = event.Handler()
    while handler.appOn:
        handler.process(pygame.event.get())
        screen.blit(gui.screen, (0, 0))
        pygame.display.update()


if __name__ == "__main__":
    main()
GUI
To use the example GUI code, simply create a file called gui.py inside of the lib directory. The lib directory will hold both gui.py and event.py.
I built the GUI using gui.py to create a surface that I call screen. Using from lib import gui in main.py, I can access the surface from main.py with gui.screen.
Fonts
In gui.py, I am using fonts that I downloaded from the Internet. If you search for free fonts on the Internet, you’ll see a wide catalog of cool fonts. Two sites that I use are:
Colors
Pygame uses RGB (Red, Green, Blue) color codes. You can get the codes using a tool like this:
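Many color tools give you hex codes rather than tuples. A small convenience helper (my own function, not part of Pygame) converts a hex code to the (R, G, B) tuple that Pygame expects:

```python
def hex_to_rgb(hex_code):
    """Convert a hex color string like '#FFFACA' to an (R, G, B) tuple."""
    hex_code = hex_code.lstrip('#')
    return tuple(int(hex_code[i:i + 2], 16) for i in (0, 2, 4))

# The light yellow used in gui.py below, written as hex:
print(hex_to_rgb('#FFFACA'))  # -> (255, 250, 202)
```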
Icons
If you don’t want to build your own icons, you can grab the icons from my GitHub repository in the img folder. Note that the actual code in my GitHub repository may be different than the code in this tutorial as I am still extending the functionality.
Contents of lib/gui.py
import pygame

pygame.init()

screen = pygame.Surface((720, 1280))

LIGHT_YELLOW = (255, 250, 202)

titleFont = pygame.font.Font("fnt/iron.ttf", 100)
iconFont = pygame.font.Font("fnt/prime.ttf", 48)

line1 = "Renesas IoT Sandbox"
titleSurface = titleFont.render(line1, True, LIGHT_YELLOW)
titleRect = titleSurface.get_rect(y=20, centerx=360)

line2 = "IBM Watson API"
titleSurface2 = titleFont.render(line2, True, LIGHT_YELLOW)
titleRect2 = titleSurface2.get_rect(y=120, centerx=360)

micIcon = pygame.image.load("img/microphone.png")
micRect = micIcon.get_rect(x=300, y=700)

tempIcon = pygame.image.load("img/temp.png")

watsonIcon = pygame.image.load("img/watson_icon.png")
watsonRect = watsonIcon.get_rect(x=500, y=300)

getIoT = pygame.image.load("img/getiot.png")
getIoTRect = getIoT.get_rect(x=50, y=300)

convertData = pygame.image.load("img/convert.png")
convertDataRect = convertData.get_rect(x=280, y=300)

screen.blit(titleSurface, titleRect)
screen.blit(titleSurface2, titleRect2)
screen.blit(micIcon, micRect)
screen.blit(getIoT, getIoTRect)
screen.blit(convertData, convertDataRect)
screen.blit(watsonIcon, watsonRect)
Event Handler
In the code in main.py, I am using an infinite while loop to handle screen updates and process events. Pygame creates a list of events using pygame.event.get(). An event could be a mouse button press, closing your application, or moving your mouse.
For the first stage, we’re just going to create the buttons and then print a debug message to the console to test that each button functions. In a future article, we’ll convert the bash scripts from part 1 into Python modules and use the buttons to grab the IoT data, process it, then push the processed data up to the Watson API.
For now, we’ll just create a skeleton event handler.
Inside of the lib folder in your watsoniot project folder, create a new file called event.py. Copy the contents of the code snippet below into the new file.
lib/event.py
import pygame
import gui


class Handler():
    def __init__(self):
        self.appOn = True

    def process(self, events):
        for event in events:
            if event.type == pygame.QUIT:
                self.appOn = False
            mouse_pos = pygame.mouse.get_pos()
            if event.type == pygame.MOUSEBUTTONDOWN:
                if gui.getIoTRect.collidepoint(mouse_pos):
                    print("Received IoT JSON data")
                if gui.convertDataRect.collidepoint(mouse_pos):
                    print("Converted data to human text")
                if gui.watsonRect.collidepoint(mouse_pos):
                    print("Sent data to Watson")
                if gui.micRect.collidepoint(mouse_pos):
                    print("play sound")
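A chain of if statements is fine for four buttons, but collidepoint is just an axis-aligned bounds check, and the same dispatch can be written as a table of hit boxes mapped to handler functions, which scales better as you add icons. The sketch below is plain Python with no Pygame required; the rects are hypothetical (x, y, width, height) tuples roughly matching the icon positions in gui.py, not the real icon sizes:

```python
def collide(rect, pos):
    """Axis-aligned hit test; rect is (x, y, width, height), pos is (x, y)."""
    x, y, w, h = rect
    px, py = pos
    return x <= px < x + w and y <= py < y + h

# Hypothetical hit boxes mapped to handler callbacks.
buttons = {
    (50, 300, 150, 150):  lambda: "Received IoT JSON data",
    (280, 300, 150, 150): lambda: "Converted data to human text",
    (500, 300, 150, 150): lambda: "Sent data to Watson",
    (300, 700, 150, 150): lambda: "play sound",
}

def dispatch(pos):
    """Run the first handler whose hit box contains pos, if any."""
    for rect, action in buttons.items():
        if collide(rect, pos):
            return action()
    return None

print(dispatch((60, 310)))  # -> Received IoT JSON data
print(dispatch((10, 10)))   # -> None
```

In event.py you would swap collide for pygame.Rect.collidepoint and the lambdas for the real workflow functions.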
Test It
$ python main.py
You should see a window open that contains a title and four icons. Press each icon to test the event handler. At this stage, you will not hear sound.
Congratulations, you’ve just completed a GUI app framework that can be used as the basis for your IoT project demos. Your first million IoT devices are just a few steps away.
Preview of Upcoming Article
To keep this project short and manageable, I focused only on the GUI design. However, if you’re eager to finish your application, you can try out the Python requests module with the IBM Watson API as well as move your application to an Android mobile phone.
Using Python Requests with the IBM Watson API
In the next article, I’ll attach the Python scripts to the buttons. However, you can test this script below to see how easy it is to use the Watson API with Python. If you don’t have requests installed, you can install it with:
$ pip install requests
This is an example showing Watson API access from Python. Save the snippet below to watson.py.
import requests

url = ''
headers = {'accept': 'audio/wav'}
r = requests.get(
    url,
    params={"text": "Internet of Things and Watson API is Awesome"},
    auth=('xxxxx-f3c0-46e6-b731-xxxxxx', '7r4RMxxxxx'),
    headers=headers)

with open('sound.wav', 'wb') as fd:
    for chunk in r.iter_content(1024):
        fd.write(chunk)
Example of the code above in use with sample audio.
Using Python to Get IoT Data from Renesas IoT Sandbox Data Monitoring
Renesas IoT Sandbox Data Monitoring uses the concept of Dweets for small IoT devices that send out information through the cloud. Dweepy is a Python client to talk to Dweets. Although Renesas IoT Sandbox Data Monitoring is based on dweet.io, it uses a different base URL. I’ve modified dweepy by paddcarey to use the Renesas IoT Sandbox Data Monitoring base URL. Get modified dweepy on GitHub.
Clone dweepy and then install it.
$ git clone
$ cd dweepy
$ python setup.py install
With my S5D9 IoT Fast Prototyping Kit plugged into Ethernet and power, I can access the sensor data with this Python script.
"""
Access the Renesas IoT Sandbox Data Monitoring API with Python.
You must install the modified dweepy below or modify the base URL
of the original dweepy to point to renesas.dweet.io
"""
import dweepy
import pprint

response = dweepy.get_latest_dweet_for('S5D9-63de')
pprint.pprint(response)
The response.
$ python renesasiot.py
[{u'content': {u'globals': {u'application': u'DweetS5D9Client',
                            u'dweet-count': 1065},
               u'sensors': {u'AccelX': 0.02,
                            u'AccelY': 0.31,
                            u'AccelZ': 0.97,
                            u'Humidity': 38,
                            u'LED': 0,
                            u'MagX': -77,
                            u'MagY': 34,
                            u'MagZ': -179,
                            u'Pressure': 1016,
                            u'SoundLevel': 2,
                            u'TemperatureC': 29,
                            u'TemperatureF': 84}},
  u'created': u'2017-10-18T20:18:00.162Z',
  u'thing': u'S5D9-63de'}]
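Once dweepy hands that structure back, picking out individual readings is plain dictionary navigation. A sketch against a trimmed-down copy of the response above (keys taken from that sample output; your device may report a different set):

```python
# Trimmed copy of the dweet response shown above.
response = [{
    "content": {
        "globals": {"application": "DweetS5D9Client", "dweet-count": 1065},
        "sensors": {"Humidity": 38, "Pressure": 1016,
                    "SoundLevel": 2, "TemperatureC": 29},
    },
    "created": "2017-10-18T20:18:00.162Z",
    "thing": "S5D9-63de",
}]

# The sensor readings live two levels down in the first list element.
sensors = response[0]["content"]["sensors"]
summary = "{}: {} C, {} %RH".format(
    response[0]["thing"], sensors["TemperatureC"], sensors["Humidity"])
print(summary)  # -> S5D9-63de: 29 C, 38 %RH
```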
Preview of Mobile Packaging
If you want to get your desktop Python application on a mobile phone right now, you can check out my daughter’s example template game using rapt-pygame. Following the instructions, rapt-pygame will also install the Android SDK, Java, and Gradle. I’ll cover these steps in detail in a future article.
Next Step: Completing Your IoT Mobile App
Requirements for Finishing Your Application
HTTP Libraries
IoT Hardware and Software
- S5D9 IoT Fast Prototyping Kit ($35, but $20 off coupon is available)
- Renesas IoT Sandbox Data Monitoring (free)
- IBM Watson API (free account needed for testing)
- Android phone with Android 4.0 or higher
Summary
To transform your creative ideas for an IoT project into reality, you first need to build a prototype of your idea. This process often starts with a proof of concept that is used to get other people excited about your project. Using cloud-based APIs like the IBM Watson API and the Renesas IoT Sandbox API is a great way to quickly show that your idea works. Once your idea gets support, you can use the same cloud APIs to scale your project to millions of IoT devices. Although many people build their prototype with JavaScript, Java, or Swift, as this article shows, Python is also a viable option to test your ideas.
Additional Info
IBM Watson Python SDK (not used in this basic tutorial to keep dependencies simple) | https://learn.iotcommunity.io/t/how-to-ibm-watson-api-with-renesas-iot-sandbox-data-monitoring/1536 | CC-MAIN-2019-09 | refinedweb | 1,832 | 67.45 |
EVP_PKEY_CTX_set_hkdf_md.3ssl man page
EVP_PKEY_CTX_set_hkdf_md, EVP_PKEY_CTX_set1_hkdf_salt, EVP_PKEY_CTX_set1_hkdf_key, EVP_PKEY_CTX_add1_hkdf_info — HMAC-based Extract-and-Expand key derivation algorithm
Synopsis
#include <openssl/kdf.h>

int EVP_PKEY_CTX_set_hkdf_md(EVP_PKEY_CTX *pctx, const EVP_MD *md);

int EVP_PKEY_CTX_set1_hkdf_salt(EVP_PKEY_CTX *pctx, unsigned char *salt,
                                int saltlen);

int EVP_PKEY_CTX_set1_hkdf_key(EVP_PKEY_CTX *pctx, unsigned char *key,
                               int keylen);

int EVP_PKEY_CTX_add1_hkdf_info(EVP_PKEY_CTX *pctx, unsigned char *info,
                                int infolen);

HKDF key derivation contexts are created with:

EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);
The digest, key, salt and info values must be set before a key is derived or an error occurs.
The total length of the info buffer cannot exceed 1024 bytes; this should be more than enough for any normal use of HKDF.
The output length of the KDF is specified via the length parameter to the EVP_PKEY_derive(3) function. Since the HKDF output length is variable, passing a NULL buffer as a means to obtain the requisite length is not meaningful with HKDF. Instead, the caller must allocate a buffer of the desired length, and pass that buffer to EVP_PKEY_derive(3) along with (a pointer initialized to) the desired length.
Optimised versions of HKDF can be implemented in an ENGINE.
Return Values
All these functions return 1 for success and 0 or a negative value for failure. In particular a return value of -2 indicates the operation is not supported by the public key algorithm.
Example

This example derives 10 bytes using SHA-256, with the secret key "secret", salt value "salt" and info value "label":

EVP_PKEY_CTX *pctx;
unsigned char out[10];
size_t outlen = sizeof(out);

pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);

if (EVP_PKEY_derive_init(pctx) <= 0)
    /* Error */
if (EVP_PKEY_CTX_set_hkdf_md(pctx, EVP_sha256()) <= 0)
    /* Error */
if (EVP_PKEY_CTX_set1_hkdf_salt(pctx, "salt", 4) <= 0)
    /* Error */
if (EVP_PKEY_CTX_set1_hkdf_key(pctx, "secret", 6) <= 0)
    /* Error */
if (EVP_PKEY_CTX_add1_hkdf_info(pctx, "label", 5) <= 0)
    /* Error */
if (EVP_PKEY_derive(pctx, out, &outlen) <= 0)
    /* Error */
Jasmine is one of the popular JavaScript unit testing frameworks which is capable of testing synchronous and asynchronous JavaScript code. It is used in BDD (behavior-driven development) programming which focuses more on the business value than on the technical details. In this Jasmine tutorial, we will learn Jasmine framework in detail from setup instructions to understanding output of testcases.
Table of Contents

1. Jasmine Setup Configuration
2. Writing Suite and Specs
3. Setup and Teardown
4. Jasmine Describe Blocks
5. Jasmine Matchers
6. Disable Suites and Specs
7. Working with Jasmine Spies
8. Final Thoughts
1. Jasmine Setup Configuration
First download the Jasmine framework and extract it inside your project folder. I suggest creating a separate folder /jasmine under the /js or /javascript folder which may already be present in your application.
You will get the below four folders/files in the distribution bundle:
/src: contains the JavaScript source files that you want to test
/lib: contains the framework files
/spec: contains the JavaScript testing files
SpecRunner.html: is the test case runner HTML file
You may delete the /src folder and reference the source files from their current location inside the SpecRunner.html file. The default file looks like below, and you will need to change the files included from the /src and /spec folders.
<html>
<head>
  <!-- include source files here... -->
  <script src="src/Player.js"></script>
  <script src="src/Song.js"></script>

  <!-- include spec files here... -->
  <script src="spec/SpecHelper.js"></script>
  <script src="spec/PlayerSpec.js"></script>
</head>
<body></body>
</html>
In this demo, I have removed the /src folder and will refer to files from their current locations. The current folder structure is below:
To concentrate on what Jasmine is capable of, I am creating a simple JS file MathUtils.js with some basic operations, and we will unit-test these functions.
MathUtils = function() {};

MathUtils.prototype.sum = function(number1, number2) {
    return number1 + number2;
}

MathUtils.prototype.substract = function(number1, number2) {
    return number1 - number2;
}

MathUtils.prototype.multiply = function(number1, number2) {
    return number1 * number2;
}

MathUtils.prototype.divide = function(number1, number2) {
    return number1 / number2;
}

MathUtils.prototype.average = function(number1, number2) {
    return (number1 + number2) / 2;
}

MathUtils.prototype.factorial = function(number) {
    if (number < 0) {
        throw new Error("There is no factorial for negative numbers");
    } else if (number == 1 || number == 0) {
        return 1;
    } else {
        return number * this.factorial(number - 1);
    }
}
And after adding the file references in SpecRunner.html, the file content will be:

<html>
<head>
  <!-- include source files here... -->
  <script src="../MathUtils.js"></script>

  <!-- include spec files here... -->
  <script src="spec/MathUtils.js"></script>
</head>
<body></body>
</html>
2. Jasmine Suite and Specs
In Jasmine, there are two important terms – suite and spec.
2.1. Suite
A Jasmine suite is a group of test cases that can be used to test a specific behavior of the JavaScript code (a JavaScript object or function). This begins with a call to the Jasmine global function describe with two parameters – the first parameter represents the title of the test suite and the second parameter represents a function that implements the test suite.
//This is test suite
describe("Test Suite", function() {
    //.....
});
2.2. Spec
A Jasmine spec represents a test case inside the test suite. This begins with a call to the Jasmine global function it with two parameters – the first parameter represents the title of the spec and the second parameter represents a function that implements the test case.
In practice, a spec contains one or more expectations. Each expectation represents an assertion that can be either true or false. In order to pass the spec, all of the expectations inside the spec have to be true. If one or more expectations inside a spec is false, the spec fails.
//This is test suite
describe("Test Suite", function() {
    it("test spec", function() {
        expect( expression ).toEqual(true);
    });
});
Let’s start writing unit tests for MathUtils.js to better understand suites and specs. We will write these specs in spec/MathUtils.js.
describe("MathUtils", function() {
    var calc;

    //This will be called before running each spec
    beforeEach(function() {
        calc = new MathUtils();
    });

    describe("when calc is used to perform basic math operations", function() {

        //Spec for sum operation
        it("should be able to calculate sum of 3 and 5", function() {
            expect(calc.sum(3, 5)).toEqual(8);
        });

        //Spec for multiply operation
        it("should be able to multiply 10 and 40", function() {
            expect(calc.multiply(10, 40)).toEqual(400);
        });

        //Spec for factorial operation for positive number
        it("should be able to calculate factorial of 9", function() {
            expect(calc.factorial(9)).toEqual(362880);
        });

        //Spec for factorial operation for negative number
        it("should be able to throw error in factorial operation when the number is negative", function() {
            expect(function() {
                calc.factorial(-7)
            }).toThrowError(Error);
        });
    });
});
On opening the SpecRunner.html file in a browser, the specs are run and the result is rendered in the browser as shown below:
3. Setup and Teardown
For setup and teardown purposes, Jasmine provides two global functions at the suite level, i.e. beforeEach() and afterEach().
3.1. beforeEach()
The beforeEach function is called once before each spec in the describe() in which it is called.
3.2. afterEach()
The afterEach function is called once after each spec.
In practice, spec variables (if any) are defined at the top-level scope — the describe block — and initialization code is moved into a beforeEach function. The afterEach function resets the variable before continuing. This helps developers avoid repeating setup and finalization code for each spec.
4. Jasmine Describe Blocks
In Jasmine, the describe function is for grouping related specs. The string parameter is for naming the collection of specs, and will be concatenated with specs to make a spec’s full name. This helps in finding specs in a large suite.
A good thing is, you can have nested describe blocks as well. In the case of nested describes, before executing a spec, Jasmine walks down executing each beforeEach function in order, then executes the spec, and lastly walks up executing each afterEach function.
Let’s understand it with an example. Replace the content in MathUtilSpecs.js with the following code:
describe("Nested Describe Demo", function() {

    beforeEach(function() {
        console.log("beforeEach level 1");
    });

    describe("MyTest level2", function() {

        beforeEach(function() {
            console.log("beforeEach level 2");
        });

        describe("MyTest level3", function() {

            beforeEach(function() {
                console.log("beforeEach level 3");
            });

            it("is a simple spec in level3", function() {
                console.log("A simple spec in level 3");
                expect(true).toBe(true);
            });

            afterEach(function() {
                console.log("afterEach level 3");
            });
        });

        afterEach(function() {
            console.log("afterEach level 2");
        });
    });

    afterEach(function() {
        console.log("afterEach level 1");
    });
});
Now execute this file by opening SpecRunner.html in a browser. Observe the console output; it is written as:
beforeEach level 1
beforeEach level 2
beforeEach level 3
A simple spec in level 3
afterEach level 3
afterEach level 2
afterEach level 1
I suggest you place more specs in the above code and check out the execution flow for a better understanding.
5. Jasmine Matchers
In the first example, we saw the usage of the toEqual and toThrow functions. These are matchers, used to compare the actual and expected outputs of any Jasmine test. They are just like Java assertions, if that comparison helps. Let's list all such Jasmine matchers which can help you write more robust and meaningful test specs.
The Jasmine not keyword can be used with every matcher's criteria for inverting the result, e.g.

expect(actual).not.toBe(expected);
expect(actual).not.toBeDefined();
6. Disable Suites and Specs
Many times, for various reasons, you may want to disable suites for some time. In this case, you need not remove the code – rather, just add the character x to the start of describe to make it xdescribe.
These suites and any specs inside them are skipped when run and thus their results will not appear in the results.
xdescribe("MathUtils", function() {
    //code
});
In case you do not want to disable the whole suite and rather want to disable only a certain spec test, then put the x before that spec itself, and this time only this spec will be skipped.
describe("MathUtils", function() {
    //Spec for sum operation
    xit("should be able to calculate the sum of two numbers", function() {
        expect(10).toBeSumOf(7, 3);
    });
});
7. Working with Jasmine Spies
Jasmine has test double functions called spies. A spy can stub any function and tracks calls to it and all arguments. A spy only exists in the describe or it block in which it is defined, and will be removed after each spec. To create a spy on any method, use the spyOn(object, 'methodName') call.
There are two matchers, toHaveBeenCalled and toHaveBeenCalledWith, which should be used with spies. The toHaveBeenCalled matcher will return true if the spy was called; and the toHaveBeenCalledWith matcher will return true if the argument list matches any of the recorded calls to the spy.
describe("MathUtils", function() {
    var calc;

    beforeEach(function() {
        calc = new MathUtils();
        spyOn(calc, 'sum');
    });

    describe("when calc is used to perform basic math operations", function() {

        //Test for sum operation
        it("should be able to calculate sum of 3 and 5", function() {
            //call any method
            calc.sum(3, 5);

            //verify it got executed
            expect(calc.sum).toHaveBeenCalled();
            expect(calc.sum).toHaveBeenCalledWith(3, 5);
        });
    });
});
The above example is very basic in nature; you can use spies to verify calls to internal methods as well. E.g. if you call the method calculateInterest() on any object, then you may want to check that getPrincipal(), getROI() and getTime() have been called inside that object. A spy will help you verify these kinds of assumptions.
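Under the hood, a spy is just a wrapper function that records each call before delegating to the original method. The following hand-rolled sketch (plain JavaScript, not Jasmine's actual implementation; the interest-calculation object is the hypothetical example from above) shows the idea:

```javascript
function spyOnMethod(obj, name) {
  // Swap obj[name] for a wrapper that records every argument list.
  var original = obj[name];
  function wrapper() {
    wrapper.calls.push(Array.prototype.slice.call(arguments));
    return original.apply(obj, arguments);
  }
  wrapper.calls = [];
  obj[name] = wrapper;
  return wrapper;
}

var account = {
  getPrincipal: function () { return 1000; },
  getROI: function () { return 0.05; },
  getTime: function () { return 2; },
  calculateInterest: function () {
    return this.getPrincipal() * this.getROI() * this.getTime();
  }
};

var roiSpy = spyOnMethod(account, 'getROI');
var interest = account.calculateInterest();

console.log(interest);            // 100
console.log(roiSpy.calls.length); // 1 - getROI() was called once internally
```

Jasmine's spyOn additionally remembers the original function so it can restore it after each spec; this sketch skips that part.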
When there is not a function to spy on, jasmine.createSpy can create a bare spy. This spy acts as any other spy – tracking calls, arguments, etc. – but there is no implementation behind it. Spies are JavaScript objects and can be used as such. Mostly, these spies are used as callback functions to other functions where needed.
var callback = jasmine.createSpy('callback');

//Use it for testing
expect(callback).toHaveBeenCalled();
If you need to define multiple such methods, then you can use the shortcut method jasmine.createSpyObj, e.g.
tape = jasmine.createSpyObj('tape', ['play', 'pause', 'stop', 'rewind']);
tape.play();

//Use it for testing
expect(tape.play.calls.any()).toEqual(true);
Every call to a spy is tracked and exposed on the calls property. Let's see how we can use these properties to track the spy.
8. Jasmine tutorial – final thoughts
Jasmine is a very capable framework for testing JavaScript functions, but the learning curve is a little difficult. It requires a great amount of discipline in writing the actual JavaScript code before it can be tested with Jasmine effectively.
And remember that Jasmine is intended to be used for writing tests in BDD (Behavior-driven development) style. Do not misuse it by testing irrelevant things.
Let me know your thoughts on this jasmine tutorial for beginners.
Happy Learning !!
6 thoughts on “Jasmine unit testing tutorial with examples”
Excellent work! I wish that the people who develop these frameworks would take the effort to give meaningful examples like you have done to explain the ways their libraries can be used (not to mention keeping them up to date). Seeing examples where they just write a trivial block of javascript code inside an it() function without demonstrating how best to structure a test around an actual Javascript function/class you want tested isn't very encouraging. Nice work outlining how the Jasmine test features can be set up around the stuff I'd expect to test.
nice
Thanks for this, it's nice to see a demo in pure javascript.
I'm curious how you would run the tests in the command line without the help of the html script tags. Is it still possible to run the jasmine command in the command line? How does it import the javascript files without the html?
Hi, if I want to stop execution of the suite when any one test case fails, or if the first case fails then I want to stop execution of the spec (meaning I do not want the remaining tests to run), what is the way to achieve that?
throwFailures=true

Add that to the runner page's URL. (See the jasmine issues, #577.)
This is extremely helpful!
I currently have an issue where I’m trying to test whether internal methods of a void method have been called. You briefly mention that you can use spies to verify the calls for internal methods, but can you give an example of how to do this? My spy for the external method passes, but when I try to call the internal methods I get an error:
“Error: Expected a spy, but got undefined.” | https://howtodoinjava.com/javascript/jasmine-unit-testing-tutorial/ | CC-MAIN-2022-27 | refinedweb | 2,070 | 57.06 |
The rules of English grammar were laid down, written in stone, and encoded in the DNA of grammar school teachers long before computers were invented. Unfortunately, this means that sometimes I have to decide between syntactically correct code and syntactically correct English. When I’m forced to do so, English normally loses. This means that sometimes a punctuation mark appears outside a quotation mark when you’d normally expect it to appear inside, a sentence begins with a lower case letter, or something similarly unsettling occurs. For the most part, I’ve tried to use various typefaces to make the offending phrase less jarring. In particular,
Italicized text is used for emphasis, the titles of books and other cited works, words in languages other than English, words used to refer to the words themselves (for example, Booboisie is a very funny word.), the first occurrence of an important term, Java system properties, host names, resolvable URLs.
Monospaced text is used for XML and Java source code, namespace URLs, system prompts, and program output
Italicized monospaced text is used for pieces of XML and Java source code that should be replaced by some other text
Bold text is used for emphasis
Bold monospaced text is normally used for literal text the user types at a command line, as well as for emphasis in code.
It’s not just English grammar that gets a little squeezed either. The necessities of fitting code onto a printed page rather than a computer screen have occasionally caused me to deviate from the ideal Java coding conventions. The worst problem is line length. I can only fit 65 characters across the page in a line of code. To try and make maximum use of this space, I indent each block by two spaces and indent line continuations by one space, rather than the customary four spaces and two spaces respectively. Even so, I still have to break lines where I’d otherwise prefer not to. For example, I originally wrote this line of code for Chapter 4:
result.append(" <Amount>" + amount + "</Amount>\r\n");
However, to fit it on the page I had to split it into two pieces like this:
result.append(" <Amount>"); result.append(amount + "</Amount>\r\n");
This case isn’t too bad, but sometimes even this isn’t enough and I have to remove indents from the front of the line that would otherwise be present. For example, this occasionally forces the indentation not to line up as prettily as it otherwise might, as in this example from Chapter 3
wout.write( "xmlns=''\r\n" );
The silver lining to this cloud is that sometimes the extra attention code gets when I’m trying to cut down its size results in better code. For example, in Chapter 4 I found I needed to remove a few characters from this line:. | http://www.cafeconleche.org/books/xmljava/chapters/pr01s04.html | crawl-001 | refinedweb | 477 | 57.81 |
Summary
My colleague Matthew Wilson made a post recently suggesting that overloading operator&() should be avoided. While I agree with much of what he says, I would like to suggest a valid use for overloading operator&() which enables writing safer code.
In response to Matthew Wilson's recent post at the C++ Source about overloading operator&(), I would like to submit a valid use case. Consider the following code:
SomeObject o;
SomeObject* p;
p = &o;
delete p; // dumb but legal

I know you are thinking that the code example is simply asinine, but it is intended as a purely artificial example of what can and does happen in real-world scenarios. This is due to what I perceive as a flaw in the C++ type system, which has only one kind of pointer. If you think about it, it does seem silly for a strongly typed language to allow you to so easily delete stack-allocated objects. This can be easily fixed by writing some trivial pointer classes which I call unsmart pointers.
You can use an undeletable unsmart pointer to represent the return type of a call to operator&(). An unsmart pointer is a pointer class which does not automatically destroy the resource, and undeletable means that the pointer does not provide any method to delete the resource it references.
Consider the following implementation of SomeObject which overloads operator&().
#include <ootl/ootl_ptr.hpp>

struct SomeObject
{
    typedef ootl::ptr<SomeObject, !ootl::deletable> undeletable_ptr;

    undeletable_ptr operator&() {
        return this;
    }
};

int main()
{
    SomeObject o;
    SomeObject* p;
    p = &o; // compiler error; undeletable pointers can not be assigned to raw pointers.
    delete p;
    return 0;
}

Today I just sent in my final revision of the next installment of Agile C++ for the C++ Users Journal, Unsmart Pointers Part 1, which not surprisingly covers undeletable pointers and their complement, deletable pointers, in more depth.
You can download the unsmart pointer source code as part of the most recent ootl release at or view the source online at.
I would like to point out that, even if I disagree on this small point, I own and heartily recommend Matthew's book Imperfect C++.
I'm trying to recreate a dataset produced by the SciGRID energy mapping project detailed here. I used osm2pgsql to export German transmission line data to a postgreSQL database and I'm trying to run a Python script written by the SciGRID people that abstracts this data so that it can be turned into a .csv. Osm2pgsql creates a few tables with the prefix "planet_osm_".
The script requires the "planet_osm_nodes" table to have a "tags" column but it does not. I'm told this table used to have such a column, but it isn't meant for front-end use, so the column was dropped from osm2pgsql version 0.88 onwards. I cannot use "planet_osm_point" instead because it lacks all the "planet_osm_nodes" columns that I need (i.e. "id", "lat", "lon"). Is there a way to bring the "tags" column back (perhaps with the style file or flex output)? Otherwise, are there any pre-built binaries of older versions of osm2pgsql, pre-0.88? I have no experience building from source.
asked
yesterday
kev_7
Your approach is deeply flawed, not your fault, likely the programmers of whatever software you are using did not know what they were doing. planet_osm_point has osm_id which is what id is in planet_osm_nodes, and lat and lon are contained in the way column. If you have used the -l flag (ell) on import, you can extract IDs and lat and lon like this:
SELECT
  osm_id AS id,
  st_x(way) AS lon,
  st_y(way) AS lat
FROM
  planet_osm_point;
osm2pgsql.style
If you want to export all nodes, then the process involving osm2pgsql is unnecessary; you can use the command-line tool osmium to convert an .osm.pbf file to a text-based representation called the "opl" format, and that can be converted to CSV with a few trivial transformations.
osmium
Do not try to revive decades-old osm2pgsql versions, it's not worth the hassle.
answered
yesterday
Frederik Ramm ♦
edited
yesterday
Jochen Topf
Thank you! And is there a way to reveal the tags column for planet_osm_point? Currently this table only has "osm_id", "power", "cables", "voltage", "wires" and "way".
Yes and no. If you create extension hstore in your postgres database and then use the --hstore flag on osm2pgsql, you will get a tags column. But it will be different from the tags column that used to be in planet_osm_nodes; the latter was a string array with 2 entries for every tag - first the key, then the value, then the next key, etc., whereas the the tags column in planet_osm_point will be "hstore" type column that represents a proper key-value map.
Recent posts by B Barnett
Need help installing JSDK with Visual Cafe
Greetings:
If anyone is currently using Visual Cafe Standard Edition, please let me know how to install the JSDK to work with it. I have searched the Sun site and the support pages for Visual Cafe and have found no help.
Your help is greatly appreciated,
B Barnett
SCJP
20 years ago
Servlets
We Did It!!!!!!
Greetings fellow JavaRanchers:
Yes, I have joined the ranks of the Sun Certified Java Programmers and am very happy. I shouted in the testing center when I got the results and danced all through the parking lot.
Okay, here is what everyone taking the test wants to know:
I do not have an IT background but wanted to change careers, so I attended a technical school for 10 months. We studied Java for a month and it just made sense to me, so I decided that I would pursue that.
The books I used to prepare were Bill Brogden's Exam Cram 2 (thanx bill for a great job! i got my money's worth) & Java How To Program 3rd Edition by Harvey Deitel. I studied for about two months, approximately 5-6 hours a day after work.
The one thing you can't prepare for is the actual test experience. Taking the mock exams is less stressful because you know it doesn't count, but when you are doing it for real and the money you paid is on the line, it can be unsettling. So, first of all, I would say STAY CALM and don't second-guess yourself.
Here we go with the format of the exam without giving away any actual questions:
- I got hit with 5-6 questions about java I/O off the top. You need to know what the legal constructors are for File Input Stream, Random Access File, etc. Also, you need to be aware that Input Stream and Output Stream are abstract class and what the implications are because of that.
- Be aware that a String object is always initialized to null. There were about 5 questions that were testing you on this. I was surprised that it came up so much.
- Be aware (don't assume) what the substring(0,3) method of the String class does. Don't assume, check the API.
- Know what the rules are for using the String.equals() method.
- Know pre-increment and post-increment inside and out. For example, x++, ++x, x--, --x.
- Know GridBag and Grid Layouts and what will happen to components when the window is resized.
- Did I mention, KNOW JAVA IO!!! thoroughly.
- Garbage Collection: You cannot force garbage collection. Get that straight before you take the exam. You can "suggest" the JVM do gc but you cannot force it. Also, know the implications of passing a variable to an array and then setting the variable to null. Is it eligible for gc?
- Speaking of arrays, know how to create 2-dimensional arrays and what happens if you try to assign a reference to a one-dimensional array to a 2-dimensional array.
- Know under what conditions you can assign a subclass reference to a superclass object and when you have to explicitly cast.
- You should have overriding/overloading rules down 100%. I ran into about 5-6 questions on that.
- Collections: know which allow for duplicate values and which do not. Know which ones will allow you to sort objects. Study ArrayList, Vector, Map.
- Constructors. You need to know that if you define constructors that take arguments, the compliler will not create a default constructor. Also, know that a constructor for a subclass will call the default constructor for the superclass and what will happen if you did not create a default constructor for the superclass.
- Exceptions: Know when you have use try/catch and that the finally block is ALWAYS executed.
- Threads: these questions will kill you if you are weak on threads. I would urge you to practice live code using threads that attempt to modify private member variables. Know what the valid constructors are for a thread. Keep in mind, you can use this as an argument to Thread constructor. I had about 5 questions on this and these were the ones with the longest code lines.
- There were also some simple questions on language fundamentals. Yes, you need to know how to shift bits: >>, >>>, <<.
- There was one trick question I remember. Know that when you instantiate a superclass object and assign it a reference to a subclass, the compiler knows what type of object it really is. If you call a method, you are calling the method of what type of object is really is, not what you instantiated. I can not say anymore without giving away the question.
- Know what the valid access modifiers are for inner classes and how they can be created.
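A couple of these points (the default null for String fields, post-increment, and substring's exclusive end index) are easy to check for yourself with a throwaway class. This is my own snippet, obviously not from the exam:

```java
public class ExamGotchas {
    static String s; // fields of reference type default to null

    public static void main(String[] args) {
        System.out.println(s);                        // null

        int x = 5;
        int y = x++;                                  // y gets the OLD value of x
        System.out.println(x + " " + y);              // 6 5

        System.out.println("abcdef".substring(0, 3)); // abc - end index is exclusive
    }
}
```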
I took about 10 mock exams I must give Marcus Green the credit. His Mock exam #1 and #2 were the closest to the real thing except that some of his questions are much harder than what Sun gave me.
Thank you to everyone posting their questions. At least three of the questions on the exam had to do with questions people posted within the last week. I can not say which ones exactly without giving away the question. It is crucial that you post a question, no matter how silly you think it is because you might give someone a better perspective on something.
I will still visit this forum to help with questions where I can.
Thank you all
B Barnett
show more
20 years ago
Programmer Certification (OCPJP)
Need rules for overloading please
Greetings:
I am scheduled to take the exam on Sunday, October 15 at 1:00pm EST and wanted to clarify something. A lot of the people who have just taken the test have mentioned that there are several questions regarding overloading/overriding methods.
I have researched the JLS and the tutorial at Sun's website and also am using the Exam Cram book by Bill Brogden but I have not been able to locate a list of rules about overloading.
Can someone tell me, as concisely as possible, what conditions must exist for a method to be overloaded without causing a compiler error.
I understand that the argument list must be different. If the order of the arguments in the method is different, will it be considered overloading?
Example: public void method(String s, int i){}
public void method(int i, String s){}
//Is this considered overloading.
Please enlighten me as to what the rules are for overloading a method, whether in a class or subclass.
Thank you
- B Barnett
show more
20 years ago
Programmer Certification (OCPJP)
Question about Constructors on Marcus Green's Mock Exam #3
Ashok,
Thanks, I get it now.
- B Barnett
show more
20 years ago
Programmer Certification (OCPJP)
Question about Constructors on Marcus Green's Mock Exam #3
On Marcus Green's Mock Exam #3 Question 11 is as follows:
What will happen when you attempt to compile and run the following code
class Base{
public void Base(){
System.out.println("Base");
}
}
public class In extends Base{
public static void main(String argv[]){
In i=new In();
}
}
1) Compile time error Base is a keyword
2) Compilation and no output at runtime
3) Output of Base
4) Runtime error Base has no valid constructor
// End of Sample Code
I chose 3 because I was under the impression that even though the subclass "In" has no constructor the JVM would automatically make one.
My first question is will the JVM make a default contructor?
Second,
I was also under the impression that when an object of a derived class is instantiated, the superclass constructor is called first. In this case the constructor for class "Base" would be called thereby printing out the string Base.
Is this not correct also?
Thanks in advance,
B Barnett
show more
20 years ago
Programmer Certification (OCPJP)
Question about Marcus Green Mock Exam #3 regarding Interfaces
I was doing Marcus Green's Mock Exam #3 and Question 6) is as follows:
May answer was #1 and #2. The reason I chose #1 was because I was under the assumption that an interface could not be instantiated due to the abstract implementation. Therefore, there could not be a instance of the interface and subsequently there could not be a reference to a an interface. If there can not be a reference to an interface then option #1 makes sense: the instanceof operator can not be used to determine if a reference is an instance of a class.
What part am I not getting?
Thanks in advance,
B Barnett
show more
20 years ago
Programmer Certification (OCPJP)
Marcus Green exam3, question 52, OOD issues
Greetings group:
I just wanted to add my opinion. I feel like another way to design this would be to first have an abstract EMPLOYEE class with with a subclass of CHEF that has sublasses of HEAD CHEF and all the other types of chefs. Every chef is going to be an employee (theoretically) so why not make this the base class.
Just sharing my opinion
B Barnett
show more
20 years ago
Programmer Certification (OCPJP)
char primitive initialization values
Greetings:
I was doing Marcus Green's Mock Exam # 2 and question #28 is as follows:
public class As{
int i = 10;
int j;
char z= 1; //**This line is what I have a question about**
boolean b;
public static void main(String argv[]){
As a = new As();
a.amethod();
}
public void amethod(){
System.out.println(j);
System.out.println(b);
}
}
I assumed there would be a compiler error becaucse I was under the impression that char primitives had to be initilized using single quotes ( '' ). Unfortunately, I have not been able to find much documention on this, only the Character class. Please let me know what are valid initializers for the char primitive.
Thank you
B Barnett
show more
20 years ago
Programmer Certification (OCPJP)
equals method
Upenda:
The equals method checks for object equality, not the value.
Because one reference is to an Integer object and the other one is to Long object, this will always return false.
show more
20 years ago
Programmer Certification (OCPJP) | https://www.coderanch.com/u/3611/B-Barnett | CC-MAIN-2021-31 | refinedweb | 1,805 | 69.01 |
This manual documents nxml-mode, an Emacs major mode for editing XML with RELAX NG support.
Copyright © manual is not yet complete.
Apart from real-time validation, the most important feature that nxml-mode provides for assisting in document creation is "completion". Completion assists the user in inserting characters at point, based on knowledge of the schema and on the contents of the buffer before point.-<RET>, what happens depends on what the set of possible completions are.
<html xmlns=""> <-!-
C-<RET> will yield
<html xmlns=""> <head-!-
<html x-!-
The symbol to be completed is ‘x’. The possible completions are ‘xmlns’ and ‘xml:lang’. These share a common prefix of ‘xml’. Thus, C-<RET> will yield:
<html xml-!-
Typically, you would do C-<RET> again, which would have the result described in the next item.
<html xml-!-
Emacs will prompt you in the minibuffer with
Attribute: xml-!-
and the buffer showing possible completions will contain
Possible completions are: xml:lang xmlns
If you input xmlns, the result will be:
<html xmlns="-!-
(If you do C-<RET> again, the namespace URI will be inserted. Should that happen automatically?)
The main redundancy in XML syntax is end-tags. nxml-mode provides several ways to make it easier to enter end-tags. You can use all of these without a schema.
You can use C-<RET>:
<p>This is a paragraph with an <emph>emphasized</emph> phrase.
the ‘<emph>’ start-tag would not be considered as starting a paragraph, because its corresponding end-tag is not at the end of the line.
.
nXML mode allows you to display all or part of a buffer as an outline, in a similar way to Emacs'
nxml-section-element-name-regexp;
nxml-heading-element-name-regexp; the first such element is treated as the section's heading.
You can customize these variables using M-x customize-variable.
There are three possible outline states for a section: contaiing simulataneously.
nXML mode has some limitations: | http://www.gnu.org/software/emacs/manual/html_mono/nxml-mode.html#Top | crawl-003 | refinedweb | 327 | 67.35 |
Details
- Type:
Improvement
- Status: Closed
- Priority:
Minor
- Resolution: Fixed
- Affects Version/s: Trunk
- Fix Version/s: 16.11.01, Upcoming Release
-
- Labels:None
Description
I am trying to enable auto-completion when coding compound widget.
My plan as follows:
1. The following xsd will be modified to use namespace
site-conf.xsd
widget-form.xsd
widget-screen.xsd
widget-menu.xsd
simple-methods.xsd
For example, in site-conf.xsd, we add the following document level attribute
xmlns="" targetNamespace=""
2. Import the above schema into compound-widgets.xsd so that compound widgets use only one consolidated schema.
3. Update ExampleCompoundWidgets.xml to use the new compound-widgets.xsd. For example
<compound-widgets xmlns: <site-conf> <sc:request-map <sc:security <sc:event <sc:response </sc:request-map> <sc:request-map<sc:security<sc:response</sc:request-map> <sc:view-map <sc:view-map </site-conf> ...... the rest
4. Change java code to support reading xml with namespace (i.e. xml for compound widgets)
5. Update the attributes at document level for rest of the controllers, menus, forms, simple methods and screens. Current setting will not work for schema with a namespace. For example, in controller.xml, we will change
xsi:noNamespaceSchemaLocation="”
to
xmlns="” xsi:schemaLocation="”>
Issue Links
- breaks
OFBIZ-7526 View Quote screen is broken
- Closed
Activity
- All
- Work Log
- History
- Activity
- Transitions
I was not familiar with targetNamespace attribute, so I read and
Since targetNamespaces values are just names why not use (eg)
targetNamespace=""
instead of
targetNamespace=""
? This would be clearer. Is there something blocking this possibility (recursivity or/and xmlns value)? We could keep the shorcut for the xmlns prefixes.
This said, it does not change your question about step 5, and I'm more to have it optional (ie separate copies of the schemas at step 1). Then of course we would use (eg)
targetNamespace=""
for the current files, and
targetNamespace=""
for the modified copy
I have though still to see how this would affect autocompletion, notably using catalogs, but that seems not to be a problem.
Hi Jacques Le Roux,
I agree with your points regarding the naming of the targetNamespace. Will also leave out step 5.
Note that for the given patch, the schema location at compound-widgets.xsd is relative to the project for testing purposes. Also the same for ExampleCompoundWidgets.xml
Hi James,
Here is your patch w/o tabs in Java files, looks good and works well
Do you think it would be possible to constrain in autocompletion to the relevant elements? I mean for instance when in form you are proposed elements from controller, etc.
Thanks for the feedback, Jacques Le Roux]. Here is an updated patch to constrain autocompletion.
Thanks James,
Your patch is in trunk at r1746302
Good work (but the tabs
), I simply replaced tabs by spaces in Java files and removed this commented out line
<compound-widgets xmlns:
in ExampleCompoundWidgets.xml
I mentionned that we decided to duplicate the simple-methods.xsd rather than updating all concerned files. Else we would have to update the attributes at document level for rest of the controllers, menus, forms, simple methods and screens. For example, in controller.xml, we would have to change
xsi:noNamespaceSchemaLocation=""
to
But especially we would have to use the same syntax with namespaces prefixes than in ExampleCompoundWidgets.xml (see example above). This can be discussed because with auto-completion it's not a big deal, but could be overwhelming for existing custom projects...
I finally decided to not ask. Having to put namespaces prefixes everywhere is overwhelming. If someone thinks otherwise please chime in...
I am fine if updating the root attributes is overwhelming. Whether we update the root attributes or not, there is no need to use prefixes in existing (non-compound) xml files.
Some clarifications:
1. For controller.xml files, we do something like the following:
Change
xsi:noNamespaceSchemaLocation=""
to
2. This file location can be changed from relative path to a url if it exists
../../../../../framework/webapp/dtd/site-conf-ns.xsd
3. Existing custom projects are using the noNamespaceSchemaLocation and can choose not to use schema with namespace.
To make the change at step 5 easier, we can make the namespace schemas be accessible from a public URL.
Examples:
Assuming the above namespace schemas are made available, we can change=""
I have actually the files ready for a while now. Just forgot to commit them after your comment from 01/Jun/16 17:52. I will check that tomorrow and close this issue, thanks for the reminder James!
The files are committed in at revision: 1749415
Thanks Jacques Le Roux!
OK, I thought about something while working on this and especially widget-catalog.xml. Since we will get rid of the "old" xsd files in favour of the "-ns.xsd", we can rename the "-ns.xsd" using the old names. I'll do that in 2 steps in order to no miss anything.
Eventually done at revision: 1749489. I hope I did not break anything (should not) quite a large (but straighforward) set of changes
Thanks James for your continued work on this!
Website up to date at revision: 1749490.
I wondered about removing simple-methods-v2.xsd and regions.xsd which are no longer used, and for a moment, but could be that old custom projects still need them. I will ask about it on dev ML and about all XSD files actually.
In ExampleCompoundWidgets.xml, 'compound-widget.xsd' should be 'compound-widgets.xsd'
Hi Jacques Le Roux,
There are problems in my previous code change proposal; I didn't test it
For example in menu,
Originally
xsi:noNamespaceSchemaLocation="" to xmlns="" targetNamespace=""
But should be
xsi:noNamespaceSchemaLocation="" to xmlns="" xsi:schemaLocation=""
Note the change from 'targetNamespace' from 'xsi:schemaLocation'.
The rest (e.g screen, form) should be the same, but I can only test it later.
Regards,
James
'targetNamespace' should have been 'xsi:schemaLocation'.
Sorry for not testing the previous code change.
Thanks James,
I did not spot it as well, we should indeed not have used targetNamespace (should only be used in schema) instead of xsi:schemaLocation. BTW all files were concerned, not only menu.
Fixed at r1749634.
Getting following or similar errors on console while going through any application.
Error message: cvc-complex-type.3.2.2: Attribute 'field' is not allowed to appear in element 'set'.
I am not able to reproduce the error message. Can you provide a url that I can try on?
Regards,
James
I confirm I reproduce locally with trunk HEAD after doing an "ant clean build start" and getting to a webapp (same catalog link than on demos)
Reopening, BTW it's also in Eclipse but only in minilang files it seems.
I got the error too after updating from SVN.
Can you try replacing in the minilangs
xsi:schemaLocation=""
with
xsi:schemaLocation=""
?
Thanks, it works. But why do we need something different than in form and other XSDs?
Ha, maybe because of the prefix we had to put there for widget-common.xsd?
<xs:include
Hence the attributes are qualified?
And then we also lose the autocompletion. OK I must go, will see later...
hmm...i am not sure how autocompletion is affected; the feature is still working on my side.
Hi Jacques Le Roux,
Was also looking at this issue. The "field" attribute is actually a tag directly under the schema tag in the xsd. Such kind of attribute needs to be prefixed in the xml. For example, from ExampleCompoundWidgets.xml, we have on line 117
<sm:log
The 'level' attribute is also a direct tag under the schema tag in the xsd.
So the proposed change from
xsi:schemaLocation=""
to
xsi:schemaLocation=""
circumvents the prefix requirement, but introduce another problem
I found that the proposed change will result in the default value not being recognised. For example, in
<simple-method...
the validate-method tag has a default value for the 'class' attribute.
If we go to Party module and create a customer. The email field will fail after form submission because the default value for 'class' attribute is not found.
I see there are now 2 possible solutions:
1) Add the prefix to the affected attributes in the simple methods. But the change is great.
2) Revert the simple method to use the original schema without namespace
A 3rd possible solution is to change simple-methods.xsd to avoid using top-level attributes. I am not sure if attribute groups are also affected.
I don't like 1, 2 seems simple and 3 more elegant and brings consistency, but not that much because we have still a bunch of other XSDs which are not using namespaces.
For the sake of simplicity and remembering, here are the necessary changes for option 2
XSD
Index: framework/minilang/dtd/simple-methods.xsd =================================================================== --- framework/minilang/dtd/simple-methods.xsd (revision 1749654) +++ framework/minilang/dtd/simple-methods.xsd (working copy) @@ -17,7 +17,7 @@ specific language governing permissions and limitations under the License. --> -<xs:schema xmlns: +<xs:schema xmlns: <!-- ================================================== ========== The Simple Methods Section ==========
An example for XML
Index: applications/party/minilang/communication/CommunicationEventServices.xml =================================================================== --- applications/party/minilang/communication/CommunicationEventServices.xml (revision 1749654) +++ applications/party/minilang/communication/CommunicationEventServices.xml (working copy) @@ -19,9 +19,8 @@ --> <simple-methods xmlns: + xsi:
But then we still need to "duplicate" simple-methods.xsd in simple-methods-ns.xsd as we did, to get the simple-method namespace for compound-widgets
So I'd try 3 first and if it's really too hard, then 2, at least temporary, other opinions?
I am in favor of the 3rd one too, but can only look into it this weekend.
Thanks James,
I have anyway decided to go with 2nd in the meantime, I'll certainly do it today...
Done at revision: 1749938
For myself: when we will do the 3rd more elegant option (no schema duplication), have been changed also:
minilang-catalog.xml
compound-widgets.xsd
Dear Jacques Le Roux,
Preferences screen under in 'My Portal' component is broken (), it is because of missing
xmlns:xsi=""
in framework/common/widget/PortalPageForms.xml
Might be it is missed under the commit at rev #1749634
Thanks!
Thanks Jacques Le Roux!
Fixed at revision: 1750050 .It was the only case like that (checked in Eclipse using validate, there are tons of other minor issues BTW)
I have uploaded a patch for the simple methods schema.
In general, the top level attributes are converted to simpleType, and attributeGroups to complexType.
Changed the following attribute name
1. ‘field’ to ‘fieldType’
2. ‘level’ to ‘levelType'
Denormalize the following attributeGroups
1. attlist.compare
2. attlist.compare-field
Left attributeGroup ‘typeDefaultString’ unchanged, cos it doesn’t matter. Not in use, it seem.
Hi James,
After reverting r1749938, and applying your patch, the tests pass and though when I manually run, for instance, the getCountryList or getAssociatedStateList services they work well, we have small issues when validating. They mostly concern the "<call-..." elements. The validation says
cvc-type.3.1.1: Element 'field' is a simple type, so it cannot have attributes, excepting those whose namespace name is identical to '' and whose [local name] is one of 'type', 'nil', 'schemaLocation' or 'noNamespaceSchemaLocation'. However, the attribute, 'field' was found.
same for field attribute
So we are almost ready, I hope with this fixed we can commit...
Hi Jacques Le Roux,
In the simple-method.xsd, can you find 2 occurence of
<xs:element name="field" type="fieldType”/>
and change back to
<xs:element ref="field”/>
I mistaken them for attributes.
Thanks!
Thanks James,
We are finally done \o/
Your last slightly modified patch is committed in trunk at revision: 1750647
I found a small issue (not minilang related) in ExampleCompoundWidgets.xml
- <sm:log + <sm:log
It should be noted that changing the XML headers (aka namespaces declarations), as we did here, has an impact on custom projects if they use copies of OOTB XML files or use XML files generated by the create-component Ant target before the changes.
Thanks Jacques Le Roux for the followup. Glad this jira issue is completed
It is annoying that because of the disordered way in which this task has been managed we have lost the history of changes to most of the OFBiz widget xsd files: thru a series of commits files like widget-screen.xsd have been copied and renamed (without using the svn copy command) then the originals have been deleted, then their copies have been renamed back to the original names.
Are you looking for something specific between the start and the end of this task? If you don't we could revert to start and put end contents in.
Yes, and put the current files content then and commit again. If you don't need the between changes it should be OK.
ok, I am fine with it.
My only concern is that we may miss any changes done to the files after this ticket.
If you are going to implement what you have proposed please consider the above.
I'm not quite sure of what you mean by
changes done to the files after this ticket.
Maybe you speak about the intermediate changes but I guess you know they will still be there in the intermediate commits for those possibly interested.
So what is it? If you mean changes done after the last commit (the one after reverting the 9 ones, to restart and to put the current content in, in place) I can't see what we could miss.
To summarize: we will revert to the situation which prevailed before changes done for this Jira, in order to create an history beginning before changes done for this Jira. The history will continue with all the functional changes done in the 9 commits but done once and directly on the initial files. Then I can't see what the problem could be.
Agreed?
I just wanted to mention that you should also take care of including all the changes (if any) to the modified files that happened after the 9 commits (as part of other unrelated modifications).
Ah I see what you mean, a bit harder than what I thought. I just wanted to copy the current state under the reverted version. I'll check what we could miss then, if any and if important value (for instance I'll not muck around with trivial changes)
Are concerned
site-conf.xsd
widget-form.xsd
widget-screen.xsd
widget-menu.xsd
simple-methods.xsd
Hi Jacopo Cappellato when you look at the dtd folders which contain the XSDs above you can see the history before and after the commits done for this issues. Is that not sufficient? I understand in the case of the widgets it mixes several files, but I think it's still usable, isn't ?
Hi Jacopo,
Please if you agree with my comment above close the issue, else explain why it's not sufficient for you, thanks!
Without answer from Jacopo for a month, I suppose it's OK and close this issue.
Step 5 can be optional. If step 5 is not implemented, we will create separate copies of the schemas at step 1. So existing schemas will not be modified.
What do you think? | https://issues.apache.org/jira/browse/OFBIZ-7061 | CC-MAIN-2017-26 | refinedweb | 2,540 | 65.93 |
Hi,
In the slide show code below, there's a function to press "previous" and "next" links. The "next" one works fine, and if you keep pressing it, it cycles through all the slides.
The "previous" one is a bit messed up, for some reason - it will go back a slide or two but then it will just go blank!
Could you please help?
Thank you!
<script type="text/javascript">
start_slideshow(1, 3, 3000);
var currentSlide = 1;
function start_slideshow(start_frame, end_frame, delay) {
id = setTimeout(switch_slides(start_frame,start_frame,end_frame, delay), delay);
}
function switch_slides(frame, start_frame, end_frame, delay) {
return (function() {
Effect.Fade('slide' + frame, { duration: 1.0 });
if (frame == end_frame) {
frame = start_frame;
currentSlide = frame;
} else {
frame = frame + 1;
currentSlide = frame;
}
Effect.Appear('slide' + frame, { duration: 1.0 });
if (delay == 1000) {
delay = 3000;
}
id = setTimeout(switch_slides(frame, start_frame, end_frame, delay), delay);
})
}
function stop_slideshow() {
clearTimeout(id);
}
function next_slide() {
clearTimeout(id);
Effect.Fade('slide' + currentSlide, { duration: 1.0 });
if (currentSlide == 4) {
currentSlide = 0;
}
currentSlide = currentSlide + 1;
Effect.Appear('slide' + currentSlide, { duration: 1.0 });
id = setTimeout(switch_slides(currentSlide, currentSlide, currentSlide, delay), delay);
}
function previous_slide() {
clearTimeout(id);
if (currentSlide == 0) {
currentSlide = 1;
} else {
Effect.Fade('slide' + currentSlide, { duration: 1.0 });
currentSlide = currentSlide - 1;
Effect.Appear('slide' + currentSlide, { duration: 1.0 });
id = setTimeout(switch_slides(currentSlide, currentSlide, currentSlide, delay), delay);
}
}
</script>
The previous/next links are like this:
<a href="#" onclick="next_slide()">Next</a>
<a href="#" onclick="previous_slide()">Previous</a>
Try in previous_slide:
if (currentSlide == 0) {
currentSlide = [COLOR="Blue"]4[/COLOR];
Fang;1064411 wrote:Try in previous_slide: if (currentSlide == 0) {
currentSlide = [COLOR="Blue"]4[/COLOR];
Hi Fang - many thanks for your reply. I tried that, it did not make any difference, unfortunately
Thanks!
Do you have a 'working' example?
Hi Fang,
Many thanks for your reply.
Here's an example:
If you click the "next" button a couple of times, it's fine. But if you click the "previous" button and go before the first slide, it's blank...
It also seems if you click the "next" button too fast, it will go blank too??
delay hasn't been defined
Hi - how do I do that? | https://www.webdeveloper.com/forum/d/223722-please-help-with-this-javascript-slideshow-bug | CC-MAIN-2018-17 | refinedweb | 347 | 59.19 |
OleDB provides fast read access to Excel data, but it didn't meet my specific needs, which included accessing only certain columns and data validation. While this article will not get into these specifics, it does explain the concepts used to read Excel data in a fairly quick manner. My first pass accessed each cell one by one, which is slow and used a lot of CPU. So in looking for a better way to accomplish my goals with reasonable CPU and speed, I experimented with the Office 2003 Interop Assemblies. I found, in my opinion, a decent way to accomplish the needed speed. CPU usage can still be high, but at an acceptable trade off for my needs. My attempts at finding an article to address this situation came up short, therefore, I am writing one.
We first need to setup a project with references to the Interop Assemblies. Lars-Inge Tnnessen has written a good article about this here: An introduction on how to control Excel 2003 with J#.NET. It's for J# but should translate to C# without too much effort.
Once referenced, you can add the following using statement:
using
using Microsoft.Office.Interop.Excel;
I've created a console application and just kept all the code within the main method. I've done this to make it a bit easier to follow. Next, we need to setup the objects that we'll be working with.
main true which will help us see what's going on with the document while debugging.
true]; method of the Range object and the XlDirection enumeration to specify which direction to find the end. We'll go to the right first and down second. The get_End stops at the first empty cell. And works on the first row or column in the range. So based on our initial range selection of A1, it will look for the first empty cell in row 1 moving to the right from column A.
get_End
Range
XlDirection object. The XlReferenceStyle specifies the format of the address returned. We want xlA1 because the get_Range method expects that format. The following returns a string containing E20:
get_Address
XlReferenceStyle
get_Range
string downAddress = range.get_Address(
false, false, XlReferenceStyle.xlA1,
Type.Missing, Type.Missing);
We'll use the get_Range method to get a range from A1 to E20.
range = sheet.get_Range("A1", downAddress);
We now have a reference to the data.
Range objects will return their data in a two dimensional array of objects with the Value2 property. Dimension one represents the rows, while dimension two represents the columns. This is much faster than reading the data cell by cell.
Value2
object[,] values = (object[,])range.Value2;
Console.WriteLine("Row Count: " + values.GetLength(0).ToString());
Console.WriteLine("Col Count: " + values.GetLength(1).ToString());
With the values object array, all we need to do is loop through to get the data. We'll start by writing out the column numbers.
values();
}
In order for the GC to collect the objects, which can have a large memory footprint, we want to set the references to null, close the workbook, and quit the Excel application.
null
range = null;
sheet = null;
if (book != null)
book.Close(false, Missing.Value, Missing.Value);
book = null;
if (app != null)
app.Quit();
app = null;
Included in the sample code is a timed access class which gets the range and then reads it two different ways (see screen shot at top). First it reads it using the method described above. Secondly, it loops through the rows and columns, reading each value from a Range object. Running this will illustrate the difference in time between the methods.
The interop assemblies provide a lot of options for working with Excel data, both reading and writing. Some experimenting with the object model can yield decent performance and expose some very useful options which are not obvious from reading the documentation. Writing to Excel can be done fairly quickly using the same technique. See An introduction on how to control Excel 2003 with J#.NET for more detail in writing to Excel using this method.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
range = range.get_End(XlDirection.xlToRight);
range = range.get_End(XlDirection.xlDown);
range = range.SpecialCells(XlCellType.xlCellTypeLastCell);
var ef = new ExcelFile();
ef.LoadXls("Excel file.xls");
// DataSet schema has to be defined before this.
for(int i = 0; i < ef.Worksheets.Count; ++i)
{
var ws = ef.Worksheets[i];
ws.ExtractToDataTable(dataSet.Tables[i], ws.Rows.Count, ExtractDataOptions.StopAtFirstEmptyRow, ws.Rows[0], ws.Columns[0]);
}
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/9992/Faster-MS-Excel-Reading-using-Office-Interop-Assem?msg=3570780 | CC-MAIN-2015-27 | refinedweb | 820 | 66.64 |
We plan to use python to control android phone. In this tutorial, we will tell you how to capture android phone screenshot using python and adb.
Capture android phone screenshot using adb
We can use adb command to capture screenshot.
Here is the command:
adb shell screencap -p
However, if you want to use python get run adb command and save screenshot to an image file. How to do?
Capture android phone screenshot using adb and python
Here we write an example to show you how to do.
import subprocess import os import sys def get_screen(filename): cmd = "adb shell screencap -p" process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE) binary_screenshot = process.stdout.read() binary_screenshot = binary_screenshot.replace(b'\r\r\n', b'\n') with open(filename,'wb') as f: f.write(binary_screenshot) get_screen(filename='phone.png')
In this example, we will use subprocess.Popen() to run adb command and get the data of android phone screenshot.
Then we will save image data to png file.
You should notice: binary_screenshot cotains the screenshot data, the type of it is byte.
Run this python code, this code will save phone screenshot to phone.png file. | https://www.tutorialexample.com/python-capture-android-phone-screenshot-using-adb-adb-tutorial/ | CC-MAIN-2020-45 | refinedweb | 194 | 69.07 |
This section of the chapter deals with how the Common Language Runtime manages memory, and how you can make some adjustments in your code to be more accommodating of the Common Language Runtime's memory manager. You will be introduced to the concepts of boxing and unboxing and how those concepts apply when dealing with collections and arrays. In addition, this section discusses string management, how the Common Language Runtime deals with strings, and what you can do to increase string performance.
Boxing and unboxing refer to the ability to convert between value types and reference types. A value type is a simple type such as an integer or a decimal or a float. A value type can also be a struct, which is a simple value version of a class. A reference type is a type whose value is not contained in the variable; rather, the variable contains a reference that points into a location on the managed heap for the actual data. Such types are class instances and strings.
Boxing is the process by which a value type is treated as an object. Many people think that when a value type is boxed, a dynamic reference is created. For example, assume that you box an integer variable that contains the value 10,000. Then you change the original integer value to 452. Interestingly, the boxed object will not recognize the change. When you box a value type, a copy of it is placed on the managed heap (as opposed to the stack, where normal value types reside) and a reference to that value type is placed in the object variable. After the boxing operation, there is no relation between the original value and the boxed value. Listing 14.2 is a demonstration of boxing a value, and how the boxed value and original value are not linked in any way.
using System; namespace Boxing { /// <summary> /// Summary description for Class1. /// </summary> class Class1 { /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main(string[] args) { int x = 10000; object ob = x; Console.WriteLine("Value of X = {0}", x ); Console.WriteLine("Value of Ob = {0}", ob ); x = 452; Console.WriteLine("Value of X after change = {0}", x ); Console.WriteLine("Value of Ob after change = {0}", ob ); Console.ReadLine(); } } }
Here is the output of the code in Listing 14.2:
Value of X = 10000 Value of Ob = 10000 Value of X after change = 452 Value of Ob after change = 10000
Remember that excessive numbers of allocations are among the things that slow down the GC. Every time a value type is boxed, it is a new allocation. The difficult part is that you never see the allocation. Boxing itself also incurs a performance penalty. You should be aware of when your code is boxing, and avoid it if you can.
Unboxing is the opposite of boxing. When a reference type is unboxed, it is converted from a reference type to a value type; its value is copied from the managed heap onto the stack.
When a value is unboxed, the object instance is checked to make sure that it is indeed a boxed value of the right type. If this check succeeds, the value is copied from the heap onto the stack and assigned to the appropriate value type variable. As with boxing, unboxing a variable incurs some performance overhead. Whereas boxing creates a new allocation on the managed heap to store the new reference value, unboxing creates a new allocation on the stack to store the unboxed value. The following few lines of code, taken from Listing 14.1, show an unboxing operation:
int x = 10000; object ob = x; Console.WriteLine("Value of X = {0}", x ); Console.WriteLine("Value of Ob = {0}", ob ); x = 452; Console.WriteLine("Value of X after change = {0}", x ); Console.WriteLine("Value of Ob after change = {0}", ob ); Console.WriteLine("Value of Ob unboxed to int = {0}", (int)ob);
The output of the preceding code is as follows:
Value of X = 10000 Value of Ob = 10000 Value of X after change = 452 Value of Ob after change = 10000 Value of Ob unboxed to int = 10000
Collections (and other weakly typed classes such as DataSets) through their nature and use perform a large amount of boxing and unboxing. For example, assume that you are using an ArrayList to store integers, as in the following code:
ArrayList al = new ArrayList(); // load arraylist from some source foreach (int x in al) { // do something with integer }
There are a few issues with the loop in the preceding code. The first issue is that each iteration through the loop causes an unboxing operation to occur. This could become very slow and very costly, depending on the size of the ArrayList. Another issue is that the use of foreach causes some generalization code to occur that might be slower than using a number-based for loop. Eventually foreach will be optimized to work just as fast as a regular for loop. Although the foreach loop is easier to read, it might not always be the fastest solution.
The bottom line is that the performance penalties for boxing and unboxing are multiplied by the size of a collection whenever you perform a boxing or unboxing operation within an iteration through a collection. The next time you find yourself writing a for loop, double-check the contents of the loop to see whether you might be doing something expensive during each iteration.
One thing that seems to take people a while to grasp fully is that the .NET Framework treats strings as immutable. In other unmanaged languages, you typically allocate a contiguous block of memory in which to store a string. You can continue along in your code, making changes to the string at will as long as you don't exceed its allocated space.
Consider the following few lines of code:
string sample = "This is a sample string"; sample = "This is another sample string"; sample = sample.Replace("sample", "cool"); Console.WriteLine(sample);
If this were an unmanaged language, the preceding code would have allocated enough memory to store the phrase "This is a sample string". Then, on the second line, it would have modified the same piece of memory and extended the allocation. The third line would have modified the same area of memory yet again.
.NET, however, treats strings as immutable. When a string has been defined, it cannot be changed or modified. This might make you think that an operation such as Replace would be impossible to perform on strings. When you modify strings in C#, you are actually creating additional strings that represent the changed values. For example, when you execute the preceding code, the following strings are allocated and stored on the heap:
This is a sample string This is another sample string This is another cool string
In the preceding code, each concatenation of a single variable was actually creating a new string in memory. Consider the following for loop:
string myString = "Hello, "; for (int i=0; i < 500; i++) { myString += i.ToString(); }
The preceding for loop contains a few mistakes that might not be immediately obvious. The first mistake is that the i variable is intentionally boxed during each iteration, which can cause performance problems. The second mistake is that a string is concatenated with the += operator. As you now know, you cannot modify existing strings in C#; you can only create new strings on the heap. When you iterate through a loop 500 times, concatenating strings to an existing string, you end up with 501 allocated strings on the heap, only one of which is live (that is, only one has a valid reference pointing to it). That means 500 collections must take place on unused strings during the next Garbage Collection process.
There is a way around this performance problem. Whenever you construct a string through concatenation or modify an already allocated string, you can use the StringBuilder class instead of simple concatenation. Because of the way the StringBuilder class manages its internal data, you can perform all the concatenations you like using StringBuilder and you will not have the performance problems that come with standard concatenation. The following code shows you a more efficient way to perform repeated concatenations:
StringBuilder sb = new StringBuilder(); sb.Append("Hello "); for (int x=0; x < 500; x++) { sb.AppendFormat("{0}", x); }
The preceding code has a boxing issue, but at least you don't have 500 unused strings sitting on the heap after the loop. | https://flylib.com/books/en/1.238.1.105/1/ | CC-MAIN-2019-13 | refinedweb | 1,421 | 61.77 |
Daniel Fagerstrom wrote:
> If we compare the situation with concepts from Java, my view was:
>
> Java: You download a class with unique combination of name and namespace.
> Blocks: You download a block with a unique URI.
>
> Java: You call the constuctor of the class possibly with parameters
> and get an object with an unique object id.
> Blocks: You deploy the block and get a block instance with a unique
> (in your Cocoon) block instance id. During deployment you give it
> parameter values and connect it to other block instances.
This depends on whether the Java class is a singleton, in which case the
constructor is called only when the class is instantiated the first
time. The same could be true for blocks as well, if that is desirable.
It probably would be more manaageable if the first release required that
blocks be singletons and then expand that later if needed. Isn't that
basically the way servlets work?
>
>
> --- o0o ---
>
> I guess that in your view there is no istantiation, you subclass and
> have everything "static" instead.
Being a singleton doesn't mean that you can't have some initialization.
>
> Both views will solve the same problem but in different ways. With
> your view we might want to have tool support for automatic subclassing ;)
>
> /Daniel
> | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200504.mbox/%3C4253E8B5.5060100@dslextreme.com%3E | CC-MAIN-2015-22 | refinedweb | 214 | 71.95 |
Smart pointer that allows safe sharing of data between multiple threads. More...
#include <rtt/extras/ReadOnlyPointer.hpp>
Smart pointer that allows safe sharing of data between multiple threads.
This smart pointer registers a memory area as being read-only. It can therefore be shared safely between threads without adding means of synchronization between the threads. Indeed: the value will *not* be changed between the threads.
If a thread wants to modify the value in-place (i.e. not do any copying), it can get ownership of the object referenced by the ReadOnlyPointer by calling ReadOnlyPointer::write_access. If this smart pointer if the only owner of the memory zone, then no copy will be done. Otherwise, one copy is done to satisfy the caller.
Definition at line 98 of file ReadOnlyPointer.hpp.
Modifies the value referenced by this smart pointer.
After this call,
ptr is owned by the smart pointer, and the caller should not modify the value referenced by
ptr (i.e. it should be treated as read-only by the caller).
This does not change the other instances of ReadOnlyPointer that were referencing the same memory zone as
this. I.e. after the following snippet, ptr2 refers to value2 and ptr1 to value1.
T* value1 = new T; T* value2 = new T; ReadOnlyPointer<T> ptr1(value1); ReadOnlyPointer<T> ptr2(ptr1); ptr2->reset(value2);
If
this is the only owner of the object it refers to, this object will be deleted.
Definition at line 141 of file ReadOnlyPointer.hpp.
Gets write access to the pointed-to object if it does not require any copying.
This method is like write_access, except that it will return NULL if a copy is needed
If non-NULL, it is the responsibility of the caller to delete the returned value.
Definition at line 178 of file ReadOnlyPointer.hpp.
Gets write access to the pointed-to object.
If
this is the only owner of that object, then no copy will be done *and* the pointer will be invalidated. Otherwise, the method returns a copy of the pointed-to object.
If the copy might be a problem, one can use try_write_access to get the object only when a copy is not needed.
It is the responsibility of the caller to delete the returned value.
Definition at line 210 of file ReadOnlyPointer.hpp. | http://www.orocos.org/stable/documentation/rtt/v2.x/api/html/classRTT_1_1extras_1_1ReadOnlyPointer.html | CC-MAIN-2014-35 | refinedweb | 384 | 57.47 |
The problem “Maximum number of segments of lengths a, b and c” states that you are given a positive integer N, and you need to find the maximum number of segments of lengths a,b, and c that can be formed using N.
Example
N = 7 a = 5, b = 2, c = 3
3
Explanation
Here if we go with the greedy approach, trying to cut all the segments with the smallest segment(=2). We will not be able to create the last segment of size 1. Thus we divide the length 4 with two 2 length segments and one 4 length segment.
Approach
The problem provides us with a positive integer N, and some other integers a, b, c. Here we want to divide the number in lengths of a, b, and c. We need to divide N such that the number of segments is maximum. So to solve the problem, let’s first try a greedy approach. A greedy approach to solve the problem is choosing the smallest of a, b, and c. Now we divide N into segments with the minimum length. But there is a catch to it, what if the smallest segment does not divide N? Then we will be left with a remainder segment which is not possible to make. So, this confirms that we cannot find the answer using a greedy approach in some cases.
So, instead of doing this using a greedy approach. We solve the problem using recursion. We make a function that gives us the answer for N, then this function calls itself with values N-a, N-b, and N-c. Thus the original problem has been divided into smaller subproblems. We can also reduce the exponential time complexity of this recursive function using dynamic programming. The program with DP will run in linear time. Because we will create a DP array that will store the answer for smaller subproblems. And whenever their result is required we simply use them instead of recomputing them as we did in recursive solution.
Code
C++ code to find the maximum number of segments of lengths a, b and c
#include <bits/stdc++.h> using namespace std; int main() { int n = 7, a = 5, b = 2, c = 3; int dp[n + 1]; memset(dp, -1, sizeof(dp)); // base case dp[0] = 0; for (int i = 0; i < n; i++) { if (dp[i] != -1) { if(i + a <= n )dp[i + a] = max(dp[i] + 1, dp[i + a]); if(i + b <= n )dp[i + b] = max(dp[i] + 1, dp[i + b]); if(i + c <= n )dp[i + c] = max(dp[i] + 1, dp[i + c]); } } cout<<dp[n]; }
3
Java code to find the maximum number of segments of lengths a, b and c
import java.util.*; class Main{ public static void main(String[] args) { int n = 7, a = 5, b = 2, c = 3; int dp[] = new int[n+1]; for(int i=0;i<=n;i++)dp[i] = -1; // base case dp[0] = 0; for (int i = 0; i < n; i++) { if (dp[i] != -1) { if(i + a <= n )dp[i + a] = Math.max(dp[i] + 1, dp[i + a]); if(i + b <= n )dp[i + b] = Math.max(dp[i] + 1, dp[i + b]); if(i + c <= n )dp[i + c] = Math.max(dp[i] + 1, dp[i + c]); } } System.out.println(dp[n]); } }
3
Complexity Analysis
Time Complexity
O(N), because we simply ran a loop until the given integer. Thus the time complexity is linear.
Space Complexity
O(N), because we had to create a 1D DP array for storing the intermediate results to avoid recomputation. Thus the space complexity is also linear. | https://www.tutorialcup.com/interview/dynamic-programming/maximum-number-of-segments-of-lengths-a-b-and-c.htm | CC-MAIN-2021-49 | refinedweb | 613 | 71.65 |
Using a Raspberry Pi with MiniBee
(also applies to RelayBee and Mini-RelayBee )
The following details show how to control a
MiniBee using a
program written in Python running under the Raspbian operating system on
a Raspberry Pi model B single board Computer. Since the
RelayBee and
Mini-RelayBee are simply MiniBee's with some of their outputs replaced
by relays, the following also applies to them.
( If you are not familiar with basic Minibee functionality, more details can be found here )
The Raspberry Pi has two standard USB sockets. One of them is usually dedicated to the keyboard (or keyboard and mouse where a small USB hub has been used). It is assumed the MiniBee is connected via a standard USB lead to one of these ports or to a free USB port on a hub if one is connected.
The following simple example of Python Code is all that is required to turn on output 1 on the MiniBee. Have a look at this code and then see the explanation that follows for detailed description of each of the lines.
import usb.core
dev = usb.core.find(idVendor=0x04d8, idProduct=0x0003)
if dev is None:
raise ValueError('Device not found')
else:
try:
dev.detach_kernel_driver(0)
except:
pass
dev.set_configuration()
data=[4, 0b00000010, 0b00000000]
dev.write(2,data)
import usb.core
This line of code imports the very popular PyUSB library of functions for communicating with USB devices. It avoids having to write any of your own code to access USB devices and provides a good range of functions to meet many requirements. In our case we only need to use four of its functions. It is freely available to download and can be installed on your Raspberry Pi very easily. There is no point in duplicating install instructions here so I will just give you a link to follow.... (opens in new window)
dev = usb.core.find(idVendor=0x04d8,
idProduct=0x0003)
This uses one of the PyUSB functions to "find" the attached MiniBee. The way it finds the MiniBee is by checking all available USB ports to find the one with a device that has the unique identifier associated with the MiniBee. This identifier is made up to two parts: the Vendor ID and the Product ID. For the MiniBee this is the two hexadecimal numbers 0x04d8 and 0x0003 respectively. If the device is found, the "dev" object is created for the MiniBee and can then be used for susequent operations on the MiniBee. It is only necessary to use this statement once in your program, but obviously, it needs to be before any other use of the "dev" functions.
if
dev
is
None:
raise ValueError('Device not found')
It is always good programming practice to check if the MiniBee has actually been found before continuing. This "if" statement will "raise" an exception and halt the program if it is not.
Assuming it is found, execution continues with the statements after the else:
try:
dev.detach_kernel_driver(0)
except:
pass
The function dev.detach_kernel_driver(0) gets round a possible problem with the operating system restricting the use of the MiniBee. When the MiniBee is first plugged into the USB port the operating system tries to be helpful and associates one of its standard drivers to deal with all operations to the board. We don't want this, but, with it "attached" to the MiniBee it won't then allow any other direct operations. This line "detaches" this driver from the MiniBee allowing us to access it directly. Note that this only needs to be done the first time the program is run after connecting the MiniBee (or after reboot). If you try and detach the driver when it is already detached the program will raise an exception and halt the program. I have found the best way to deal with this is simply to put the detach function at the head of the program inside a "try... except" clause. In this way it will detach the driver if necessary and ignore it if already detached.
dev.set_configuration()
USB devices have different "configurations" that they can be set to depending on the task they are required to perform. In the case of the MiniBee, it simply has one configuration which is it's default. However, this still needs to be "set" as the active one using the statement shown. This only needs to done once in your program and before any other code that communicates with the MiniBee.
data=[4, 0b00000010,
0b00000000]
The MiniBee obeys the usual rules of a standard USB device which requires data to be sent to it in the form of message blocks (in USB terms these are called Endpoints). In Python we specifiy this message block by using a number sequence using the square brackets as shown. A message block for the MiniBee consists of a message type number and two 8-bit numbers that correspond to the on/off pattern of the outputs. The message type number is simply '4'. This indicates to the Min MiniBee output is shown below...
A logic '1' corresponds to the associated output being ON and logic '0' is OFF.
dev.write(2,data)
Once the data sequence has been specified it is simply sent to the MiniBee using the "write" function within the "dev" object as shown. The number '2' used, is just the name of the internal buffer to use within the MinBee, or , as previously mentioned, the USB "endpoint" number. For the MiniBee this is always set to 2. The MinBee outputs will now be set accordingly.
In the above description its worth noting that the first few lines may look slightly complicated but, once your program has "found" the MiniBee, detached the kernel driver (if necessary) and set the MiniBee configuration, the control of the minibee outputs is simply a case of using the "dev.write" function (with the appropriate bits set in the data) as often as you like.
All code and descriptions on this page describing the control of the MiniBee outputs also apply to the RelayBee and Mini-RelayBee boards. The relay bee is simply a minibee with its first seven outputs replaced by relays. The mini relay bee just uses the first two outputs, both of which are relays.
Raspbian Note: Always remember that running code in Raspbian that accesses USB devices requires you to be in "superuser" mode to have the necessary permissions. This may mean running your Python program with the usual "sudo ...." prefix or, if using the graphical environment with IDLE as your programming environment, you may need to run it from a terminal window (i.e. typing "sudo idle" in the terminal window) for the programs created to work properly.
Some examples:
# set output3 on and all others off
outputs1= 0b00001000
outputs2=0b00000000
data=[4,outputs1,outputs2]
dev.write(2, data)
# set outputs 1,5 and 14 on and all others
off
outputs1= 0b00100010
outputs2=0b10000000
data=[4,outputs1,outputs2]
dev.write(2, data)
# set all outputs on
data=[4, 0b11111110, 0b01111111]
dev.write(2, data)
# set output1 on and all others off (using
heaxadecimal)
outputs1= 0x02
outputs2=0x00
data=[4,outputs1,outputs2]
dev.write(2, data)
# set outputs 3,4 and 5 on and all others off
(using decimal)
outputs1= 56
outputs2=0
data=[4,outputs1,outputs2]
dev.write(2, data)
Disclaimer: Please note that the above code and descriptions are designed to help our customers use our boards with the Raspberry Pi. We offer no warranty or guarantees to the suitability of the code for your application. We are also unable to enter into discussions about any errors your code may have and will not, under any curcumstances, attempt to look for programming errors in code sent to us. This applies equally to Python, PyUSB, LibUSB. Linux (all distros) and any Raspberry Pi specifics.
13602 | https://pc-control.co.uk/control/raspi/raspi-minibee.php | CC-MAIN-2018-43 | refinedweb | 1,310 | 61.67 |
Learning Cocoa with Objective-C/Cocoa Overview and Foundation/Cocoa Development Tools
Installing the Developer Tools
You can quickly check to see if you have the Developer Tools installed. If you have a /Developer/Applications folder on your hard drive, as shown in Figure 2-1, you are ready to go. If not, you'll need to install the tools from either the Developer Tools CD that came with your copy of Mac OS X or from a disk image you can download from the Apple Developer Connection (ADC) site.
Installing from the Developer Tools CD (Figure 2-2)
Installing from the ADC Site
If you can't find your Developer Tools CD, or if you received a Mac OS X upgrade package that didn't include it, you will need to go to the ADC member web site and download a disk image.
Upgrading Your Tools
Note
If you don't have a high-speed connection, you can get Apple to send you the latest copy of the Developer Tools CD at a nominal charge. Log in to the ADC member web site, and go to the Purchase section.
Project Builder
Project Builder is the hub application of Apple's Developer Tools. It manages software-development projects and orchestrates and streamlines the development process. Project Builder's key features include the following:
- A project browser that manages all the resources of a project, allowing you to view, edit, and organize your source files.
- The ability to invoke the build system to build your projects and run the resulting program.
- A graphical source-level debugger that allows you to walk through the code, set breakpoints, and examine call stacks.
- A code editor that supports language-aware keyword highlighting, delimiter checking, and automatic indentation for many languages, including C, Objective-C, C++, Java, and AppleScript.
- Project search capabilities that allow you to find strings anywhere in a project.
- Source control management integration using the Concurrent Version System (CVS). CVS enables development teams (local or distributed) to work together easily on the same source code base.
Project Builder's main window is shown in Figure 2-3.
Say "Hello, World"
To introduce you to Project Builder and to tip our hat to almost every introductory tutorial ever written on programming, we are going to build a very simple working program that prints "Hello, World!" Building and running this program will also verify that you have a working development environment.
Open Project Builder
Before you can start building applications with Project Builder, you will need to launch the application.
- Find Project Builder in /Developer/Applications.
- Double-click the icon.
If this is the first time that you have started Project Builder, you will be presented with an Assistant to set up your application preferences.
- Click Next on the Assistant's welcome page.
- Choose where the components of your programs will be placed when they are built. We recommend that you go with the default, although you can change this at any time via Project Builder's preferences. Click Next to move on.
- Next, you'll be presented with a Window Environment configuration option, as shown in Figure 2-4. This configuration sets how Project Builder's interface is presented to you. Choose between having everything in one window and having everything occupy its own window.
Creating a new project
To create the "Hello, World" project, select File → New Project. Project Builder then displays the New Project Assistant, shown in Figure 2-5, which takes you through a few simple steps to create a new project.:
- Application
- Starting points for creating Cocoa applications (Objective-C- and Java-based), as well as Carbon- and AppleScript-based applications
- Bundle
- Starting points for creating bundles that link against the Cocoa, Carbon, or Core Foundation frameworks
- Framework
- Starting points for creating frameworks that link against either Cocoa or Carbon
- Java
- Starting points for developing Java applets or applications using either the AWT or Swing APIs
- Kernel Extension
- Starting points for developing both generic kernel extensions and IOKit drivers
- Standard Apple Plug-ins
- Starting points for developing palettes for Interface Builder, preference panes for the System Preferences application, and screen savers
- Tool
- Starting points for creating command-line applications that link against the Core Foundation, Cocoa Foundation, or Core Services frameworks
Throughout this book, we focus almost exclusively on two categories of applications: simple tools with no GUI (called Foundation Tools) and applications with GUI windows. For this example, we will build a simple tool that doesn't have a graphical interface. Proceed as follows:
- Scroll down to the list of Tool choices, and select Foundation Tool from the list, as shown in Figure 2-5, and click Next.
- The Assistant gives you an opportunity to name your new project and choose a location in the filesystem in which to save it. Type hello in the Project Name field, as shown in Figure 2-6.
- Click Finish.
When you finish creating the project, the main project window opens, as shown in Figure 2-7.
Notice that Project Builder uses hierarchical groups to organize the various parts of a project. In this project, these groups are the following:
- Source
- This group contains main.m, the file that contains the main function that is the entry point for your application.
- Documentation
- This group contains a prototype Unix manpage for the program.[1]
- External frameworks and libraries
- This group contains references to the frameworks that the application imports to gain access to system services.
- Products
- This group contains the results of project builds and is automatically populated with references to the products created by each target in the project.
To see the source code for the application's entry point, as shown in Figure 2-7:
- In the Groups & Files list of Project Builder's main window, click the disclosure triangle to the left of the Source group.
- Click on the icon for the main.m file. You will see the contents of the file in the code editor.
The main.m file contains the entry point for the application. The Foundation Tool project template provides a standard main function that prints "Hello, World!", so we don't even need to add any code.
#import <Foundation/Foundation.h>                                // 1

int main (int argc, const char * argv[]) {                       // 2
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; // 3

    // insert code here...
    NSLog(@"Hello, World!");                                     // 4
    [pool release];                                              // 5
    return 0;                                                    // 6
}
Now let's walk through the code, line-by-line, so you can get a feeling for what's going on here:
- Imports the Foundation framework. This directive is similar to #include, except that it won't include the same file more than once.
- Declares the standard C main function for a program. This function is where execution starts when the program is started.
- The NSAutoreleasePool is one of Cocoa's memory-management tools. We'll cover more about how memory management works in Chapter 4.
- The NSLog function works very much like printf in the C language. The difference is that NSLog takes an NSString object instead of a C string. The @"..." construct is a compiler directive that creates an NSString object using the characters between the quotation marks.
- This line contains another part of Cocoa's memory housekeeping that we will explain in depth later.
- A return from the main function indicating a normal program exit.
Tip
Why does NSLog have the NS prefix? Simple: NS stands for NeXTSTEP. All of the classes and functions in the Cocoa frameworks start with NS to help protect their names from collisions with identically named functions and classes defined elsewhere. The continued use of NS is a vestige that shows Cocoa's heritage.
Building the project

To build the project, click the Build button, shown in Figure 2-8.
Solving build problems

To see what happens when a build fails, remove the semicolon at the end of the NSLog(@"Hello, World!") statement; then try building the project. You'll see a build failure notice, as shown in Figure 2-9.
Add the missing semicolon back in after the NSLog function, and recompile by clicking the Build button to get a working program again.
Running the application
Congratulations! You've just created your first Cocoa application and didn't even have to type in any code. All that is left to do is click the Build and Run button, as shown in Figure 2-10. When the application launches, the Run pane of Project Builder's main window will enlarge to display the output of the NSLog function.
Tip
Since building and running is a straightforward process, in future chapters we don't tell you how to "build and run" your application—we just say "build and run your application."
In addition to the string, the NSLog function prints the current date and time, the program name, and the process ID number (PID) of the program. Since this is a tool application with no GUI, you might want to see the behavior of this program on the command line.
- Open up a Terminal window, found in the /Applications/Utilities folder.
Tip
As with Project Builder, you may want to add the Terminal application to your Dock for easy access, if you haven't already done so.
- The hello executable is built into a subdirectory of your project. To run it, enter the following into the Terminal window:
[localhost:~] duncan% LearningCocoa/hello/build/hello
When the program is run, you should see something similar to the following output:
2002-06-08 23:23:29.919 hello[490] Hello, World!
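Each field in that output line corresponds to something NSLog adds around your string: the date and time, the program name, the process ID in square brackets, and then the message itself. As a rough sketch of the line format only (this is emphatically not how NSLog works internally), the same shape can be reproduced from the shell:

```shell
# Mimic the shape of an NSLog output line: timestamp, program name,
# [PID], then the message. This only illustrates the line format.
msg="Hello, World!"
printf '%s %s[%d] %s\n' "$(date '+%Y-%m-%d %H:%M:%S.000')" "hello" "$$" "$msg"
# prints something like: 2002-06-08 23:23:29.000 hello[490] Hello, World!
```

Seeing the format spelled out this way makes it easier to pick your own messages out of a crowded log.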
The timestamp and process ID information come in handy when you are looking for the output from a program that was launched from the Finder, but not from inside of Project Builder or from the command line. In those cases, the output from NSLog will show up in the system's message log. You can easily view these messages using the Console application, also found in the /Applications/Utilities folder.
Warning
If you saved your project into a folder whose name contains a space—for example, Learning Cocoa rather than LearningCocoa—you must escape the space when typing the path:
[localhost:~] duncan% Learning\ Cocoa/hello/build/hello
The backslash in front of the space tells the shell that the space is part of the path of the program.
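The same quoting rule is easy to see in a few lines of Python (a toy helper for illustration, not part of the book's tooling): each space in the path must be preceded by a backslash before the shell will treat the whole string as one path.

```python
def escape_spaces(path):
    """Escape spaces so a Unix shell treats them as part of a single path."""
    return path.replace(" ", "\\ ")

print(escape_spaces("Learning Cocoa/hello/build/hello"))
# → Learning\ Cocoa/hello/build/hello
```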
Using the Debugger
Project Builder provides an easy-to-use interface to the system's debugger that lets you step through code line-by-line, set breakpoints (places to pause the execution of the program), and view variables and threads. The following steps allow you to explore the debugger.
- Set a breakpoint in your code by clicking into the left margin of the code editor near the main method declaration, as shown in Figure 2-11. Notice that where you click, a marker appears.
- Now click the Build and Debug button. This will start the debugger and then load the hello program into it. Execution will stop at the first statement after the breakpoint. In this case, it will stop at the first line of our main method, as shown in Figure 2-12.
- Click once on the Step over Function button (called out in Figure 2-12). Note that the current execution highlighter moves to the next valid line of code. Also notice that the value of the pool variable is highlighted in red. This means that the pool variable was just set. The value is actually a pointer to the contents of the object in memory.
- Click the Step over Function button once again. The NSLog function was called. To see the output, click the Console tab above the variable viewer, as shown in Figure 2-13. Also notice that the value of the pool variable is no longer red. This highlighting lasts only one step after the contents of a variable change.
- Click the Continue execution button to let the program execute as normal. You can click the Restart button (called out in Figure 2-12) to restart the program at the beginning of execution.
- Click the Stop button in the toolbar to exit the debugger.
Now that we have explored how to say "Hello, World!" to the console, let's take a look at building a GUI application that says hello in a much different way.
Interface Builder
Interface Builder generates nib[2] files; the objects in a nib file are created and manipulated using Interface Builder's graphical tools.
Interface Builder's standard palettes hold an assortment of AppKit components. Other palettes can include Cocoa objects from other frameworks, third-party objects, and custom-compiled objects.
Graphical "Hello, World"
To say hello with a GUI, create a new project in Project Builder, this time choosing the Cocoa Application project type, as shown in Figure 2-14.
This will create a project that is set up as a simple Cocoa application. Go ahead and create the project, giving it the name Hello World and saving it into your ~/LearningCocoa folder. When you have created the project, you will see a window similar to that shown in Figure 2-15.
The Cocoa Application project type uses a different set of groups to organize projects than those used in the Foundation Tool example. The groups in this project type are as follows:
- Classes
- This group is empty at first, but is used to hold the implementation (.m) and header (.h) files for your project's classes.
- Other Sources
- This group contains main.m, the file containing the main function that loads the initial set of resources and runs the application. In Cocoa applications with graphical interfaces, you typically don't have to modify this file.
- Resources
- This group contains the nib files and other resources that specify the application's GUI.
- Frameworks
- This group contains references to the Frameworks (Foundation and AppKit) that the application imports to gain access to system services.
- Products
- This group contains the results of project builds.
To see what Project Builder provides for you by default, go ahead and build and run the project. A blank window should appear once Project Builder is done compiling everything. Play with this window a little bit, and you'll notice that you can resize, minimize, and maximize it.
Now, to finish our application, we should make it say "Hello, World" to us!
Open the main nib file
To begin constructing a user interface using Interface Builder, the first step is to open the application's main nib file. Double-click MainMenu.nib in the Resources group of the Groups & Files list of Project Builder's main window. This will launch Interface Builder (if it is not already running) and open the nib file, as shown in Figure 2-16. A lot of windows will appear. You might want to hide your other running applications so that you can concentrate on just the windows that belong to Interface Builder. You can do this by using the Interface Builder → Hide Others menu.
These are the various parts of Interface Builder (called out in Figure 2-16):
- Nib file window
- This window is where the various objects that are part of your nib file are defined and manipulated. We'll explain more about the various parts of this window in Chapter 5.
- Control palette window
- This window contains all the controls that you can add to an interface. We'll explore many of these controls throughout the book.
- Menu bar
- The menu-bar window contains the menu bar that will be active when your application is running.
- Empty interface window
- This is the window that will be displayed when your application is run. Notice that it is the same size and in the same location on screen as when you built and ran the application.
Interface Builder stores all kinds of information about user-interface objects in an application's nib files. For example, you can set both the size and initial location of an application's main window by simply resizing and moving the window in Interface Builder.
- Move the window near the upper-left corner of the screen by dragging the titlebar.
- Make the window smaller by using the resize control at the bottom-right corner of the window.
To create our application, we need to add a text label to the window.
- Select the Cocoa-Views palette by clicking the second button from the left at the top of the Cocoa objects palette window, as shown in Figure 2-17. If you don't see the Cocoa palette window for some reason, select Palettes from the Tools menu to bring it forward.
- Drag a System Font Text label from the palette onto the window.
- Double-click on the new label and change the text to "Hello World".
- Resize the interface window to a smaller size, and move the text label to the center of the window. You should have something that looks similar to Figure 2-18.
To see the application in action:
- Return to Project Builder.
- Click the Build and Run button.
Other Tools
In addition to Project Builder and Interface Builder, there are other applications that you can use in the Cocoa development process. Development tools that feature a GUI are listed in Table 2-1. Except where noted, these applications are installed in the /Developer/Applications folder.
Table 2-1. Other development tools
Command-Line Tools
There are several command-line tools for compilation, debugging, performance analysis, and so on, installed as part of the Developer Tools package. Many of these tools are ports of standard Unix applications with which you may have prior experience. These tools, listed in Table 2-2, can be found in the /usr/bin directory.
Table 2-2. Command-line development tools.
Exercises
- Locate the Project Builder and Interface Builder applications, and put them into the Dock.
- Locate the developer documentation, and place a shortcut to it in your Dock or in your browser.
- Watch the "Accessing API Documentation in Project Builder" movie at.
Notes
- ↑ Manpages are the standard form of Unix documentation for command line utilities and are written as plain text files with nroff macros. See for more information.
- ↑ The name "nib" is an acronym for "NeXT Interface Builder," yet another vestige of Mac OS X's heritage. | http://commons.oreilly.com/wiki/index.php?title=Learning_Cocoa_with_Objective-C/Cocoa_Overview_and_Foundation/Cocoa_Development_Tools&oldid=6261 | CC-MAIN-2017-04 | refinedweb | 2,949 | 63.39 |
Swift version: 5.4
iOS has a built-in speech transcription system, which allows you to convert any audio recording into a text stream. It takes a few steps to configure, so let’s walk through them.
First, add import Speech to the top of your Swift file, to bring in the Speech framework.
Second, request permission to transcribe audio:
func requestTranscribePermissions() {
    SFSpeechRecognizer.requestAuthorization { [unowned self] authStatus in
        DispatchQueue.main.async {
            if authStatus == .authorized {
                print("Good to go!")
            } else {
                print("Transcription permission was declined.")
            }
        }
    }
}
Third, add a key to your Info.plist called NSSpeechRecognitionUsageDescription, then give it a string describing what you intend to do with the transcriptions.
Finally, write a method to perform transcription on an audio URL. This URL should be a recording you’ve already made, that is stored locally on the device:
func transcribeAudio(url: URL) {
    // create a new recognizer and point it at our audio
    let recognizer = SFSpeechRecognizer()
    let request = SFSpeechURLRecognitionRequest(url: url)

    // start recognition!
    recognizer?.recognitionTask(with: request) { [unowned self] (result, error) in
        // abort if we didn't get any transcription back
        guard let result = result else {
            print("There was an error: \(error!)")
            return
        }

        // if we got the final transcription back, print it
        if result.isFinal {
            // pull out the best transcription...
            print(result.bestTranscription.formattedString)
        }
    }
}
Note: the isFinal property is there because you may get an initial transcription back containing some or all of the text, but it's only considered final – i.e. as good as it gets – when the isFinal flag is set to true. Learn more in my book Advanced iOS: Volume One.
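The callback pattern can be modeled in a few lines of Python (a toy stand-in, not the real Speech API — the function and tuple shape here are invented for illustration): partial results arrive repeatedly, and only the one flagged as final should be kept.

```python
def best_transcription(partials):
    """partials: list of (text, is_final) callbacks in arrival order.
    Return the text of the final result, or None if none arrived."""
    for text, is_final in partials:
        if is_final:
            return text
    return None

result = best_transcription([("Hel", False),
                             ("Hello wor", False),
                             ("Hello world", True)])
# result == "Hello world"
```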
This is part of the Swift Knowledge Base, a free, searchable collection of solutions for common iOS questions.
' Determine whether hardware vertex processing is supported
Dim adapterOrdinal As Integer = Manager.Adapters.Default.Adapter
Dim caps As Caps = Manager.GetDeviceCaps(adapterOrdinal, DeviceType.Hardware)
Dim createFlags As CreateFlags
If caps.DeviceCaps.SupportsHardwareTransformAndLight Then
    createFlags = CreateFlags.HardwareVertexProcessing
Else
    createFlags = CreateFlags.SoftwareVertexProcessing
End If

' Set up the PresentParameters which determine how the device behaves
Dim presentParams As New PresentParameters()
presentParams.SwapEffect = SwapEffect.Discard

' Make sure we are in windowed mode when we are debugging
#If DEBUG Then
    presentParams.Windowed = True
#End If

' Now create the device
device = New Device(adapterOrdinal, DeviceType.Hardware, Me, _
    createFlags, presentParams)
private Device device;
device.Clear(ClearFlags.Target, Color.DarkBlue, 1.0f, 0);
device.Present();
Public Shared Function CalculateFrameRate() As Integer
    If System.Environment.TickCount - lastTick >= 1000 Then
        lastFrameRate = frameRate
        frameRate = 0
        lastTick = System.Environment.TickCount
    End If
    frameRate += 1
    Return lastFrameRate
End Function 'CalculateFrameRate

Private Shared lastTick As Integer
Private Shared lastFrameRate As Integer
Private Shared frameRate As Integer
this.Text = string.Format("The framerate is {0}", FrameRate.CalculateFrameRate());
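The counter's logic ports to Python almost line for line. In this sketch (illustrative only; the class name and the injectable clock are mine, not the article's), passing the clock in as a function lets the once-per-second rollover be exercised without waiting a real second:

```python
class FrameRate:
    """Toy Python port of the article's frame-rate counter."""

    def __init__(self, clock_ms):
        self.clock_ms = clock_ms    # function returning milliseconds, like TickCount
        self.last_tick = 0
        self.frame_rate = 0         # frames counted during the current second
        self.last_frame_rate = 0    # total from the previous full second

    def calculate_frame_rate(self):
        now = self.clock_ms()
        if now - self.last_tick >= 1000:
            self.last_frame_rate = self.frame_rate
            self.frame_rate = 0
            self.last_tick = now
        self.frame_rate += 1
        return self.last_frame_rate

# Simulate 60 frames during the first second, then one frame in the next.
t = [0]
fps = FrameRate(lambda: t[0])
for _ in range(60):
    fps.calculate_frame_rate()
t[0] = 1000
print(fps.calculate_frame_rate())  # → 60
```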
Just thought I'd suggest linking to the previous/next sections of this tutorial. I've also have troubles finding each section of this article via your web page. After I did section 1 I couldn't find section 2 without typing 'Beginning Game Development: Part II' into google. Perhaps I'm blind and just missed a very visible link to the articles but right now I'm not seeing it. Thanks,
Adam
After Putting in Private Device device .......... It said It couldn't find a Namespace Device
Sean, you must right-click it and choose "using Microsoft.DirectX" to link it to that class ;)
make sure you put in -
using Microsoft.DirectX.Direct3D;
- on around line 13 in the GameEngine.cs
"After Putting in Private Device device .......... It said It couldn't find a Namespace Device"
u just need to put
using Microsoft.DirectX;
in GameEngine class
I have had the same issue as Sean (April 7, 2007)
I added the code: using Microsoft.DirectX.Direct3D;
and the error dissapeared. I hope that this would be sufficient to use for the rest of the game.
Anton
I still get the error about namespace samples is not part of class microsoft... I've done everything as you said... Pleas help...
Hi!
Sean, you must put the following line:
See ya!
re: Namespace Device error, make sure to add the "using Microsoft.DirectX;" and "using Microsoft.DirectX.Direct3D;" directives to the top of your GameEngine.cs file
I get the same error as Sean...
Error 1 The type or namespace name 'Device' could not be found (are you missing a using directive or an assembly reference?)
Never mind, fixed the problem.
You must add this at the top:
I plugged in the Private Device device, and is said that it coudn't find a namespace device.
Same problem,Can't find the Namespace Device
if it couldn't find a Namespace...add to code
yea it says to be exact.. The type or namespace name 'Device' could not be found(are you missing a using directive or an assembly reference.
those words pop up in my compiler. it is saying to me as i think i may be wrong but your trying to create a type of something we havent got down.. #include<somefile> anyone?
also any chance you can post the actual code. this will enable us to compare it. copy paste the gameengine class would be appreciated
I am interested in learning how to use models in this content, I am a new person to this your help would be apreciated.
Visual C# didn't like the int's in declarations in the FrameRate class
yeah!....i'm having the same problem....it said it couldn't find a namespace Device....
Device is located under Microsoft.DirectX.Direct3D
Sean: you must add
"using Microsoft.DirectX.Direct3D;"
to the GameEngine-class.
># Sean said on April 7, 2007 2:40 AM:
>After Putting in Private Device device .......... It said It couldn't find a Namespace Device
You should add 'using Microsoft.DirectX.Direct3D;' to the others, than it'll work.
Silicon Brain
private Device device; gave an error, adding Microsoft.DirectX and Microsoft.DirectX.Direct3D to the references did not work for me (c# express). I had to add them in the using section:
The type of namespace name 'Device' could not be found.
It occured after putting in the: private Device device;
I'm a total noob in this and am stuck here, can anybody help me?
Use
in the using region
if you get the namespace error for
To Sean
Just float your mouse on the device word in your code page.Click on the arrow that appears you'll get a small box next to the word device.Click on Direct 3D Device.
You just need to include directX libraries
i had the same problem as sean
Sean,
Add the following using directive to the top of your GameEngine form:
Nevin
Type this line:
and the problem is solved
I've just made the program upto the paragraph "3d Graphics Terminology" and my program just crashes. After commenting some code out is seems that the line with "new Device(etc.)" causes the problem, any suggestions?
you need to make sure you have using Microsoft.DirectX.Direct3D; at the top when u have ur input of "private device device" to work and not bring up the namespace not found message
For whatever reason, I get an exception thrown from the constructor of Device if I don't set presentParams.Windowed = true. I don't know why, but when this is not set on my system, the constructor throws an unhelpful (Message = "Error in Application") exception...
Any ideas?
Dast, in order to you Windowed mode you must setup BackBuffers in the PresentParameters.
When I try code for VB from part I or part II, I get the same error message: 'Samples' is not a member of 'Microsoft'. Do I have to install C# for support of DirectXSampleFramework?
When i get to the point of commenting out the stuff in dxmutmisc.cs and try to build i get the following errors:
This one is from the GameEngine.cs file, with reference to the OnPaint method parameters:
Error 1 The type or namespace name 'PaintEventArgs' could not be found (are you missing a using directive or an assembly reference?)
And i get a few of these every time i have something to do with the Drawing object.
Error 7 The type or namespace name 'Drawing' does not exist in the namespace 'System' (are you missing an assembly reference?)
I have followed everything as explained and i am using the DirectX SDK (April 2007)
I get an error when trying to compile and the debugger stops at the line of code: Application.Run(new GameEngine() );
The error is:
BadImageFormatException was unhandled
"is not a valid Win32 application. (Exception from HRESULT: 0x800700C1)"
Anyone else get this or know how to resolve it? I assume it may have something to do with the 64-bit OS.
Caps caps = Manager.GetDeviceCaps(adapterOrdinal, DeviceType.Hardware); throws an exception
An unhandled exception of type 'Microsoft.DirectX.Direct3D.NotAvailableException' occurred in Microsoft.DirectX.Direct3D.dll
Any Ideas
I made the program up to "3d Graphics Terminology", and I'm using Microsoft.DirectX.Direct3D but the program will only run in Debug mode. Otherwise it crashes.
I get this exception when I try and debug or run the program, this is with downloaded battletank files so no mistyping from me. Can anyone help me (email:luke321321{at]gmail DoT com, exception:
System.BadImageFormatException was unhandled
Message=" is not a valid Win32 application. (Exception from HRESULT: 0x800700C1)"
Source="BattleTank2005"
StackTrace:
at BattleTank2005.GameEngine..ctor()
at BattleTank2005.Program.Main() in C:\Users\Luke\Documents\Visual Studio 2005\Projects\BattleTank2005\BattleTank2005\Program.cs:line 19()
I cant build at the step right before "My GPU is Bigger than Yours". I get this error:
Error 1 'BattleTank2005.GameEngine.Dispose(bool)': no suitable method found to override C:\Users\Angel\AppData\Local\Temporary Projects\BattleTank2005\Form1.Designer.cs 14 33 BattleTank2005
and it points to the line:
protected override void Dispose(bool disposing)
Ok, I fixed the problem (I didnt add .cs when I renamed form1 to GameEngine). But now, after adding "Microsoft.Samples.DirectX.UtilityToolkit;" at the top of the code, I get an error that says:
Error 1 A namespace does not directly contain members such as fields or methods C:\Users\Angel\AppData\Local\Temporary Projects\BattleTank2005\GameEngine.cs 1 1 BattleTank2005
P.S. Is it the placement? I put it at the very top
When I put in the code:
this.Text = string.Format("The framerate is {0}",
FrameRate.CalculateFrameRate());
C# comes up with a bunch of errors.
I put the code right after the code that says:
private double deltaTime;
Am I putting it in the wrong place or what. Please help
Anyone solved Dast's problem? I am having the same issue.
Hello,
I get an error after I press f6: "Error 1 The type or namespace name 'Samples' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?) "
I tried to add: using Microsoft.DirectX.Direct3D;
The problem remains.
my email:hagitha25@walla.co.il
Do you know what to do in this case?
thank u...
I am getting Microsoft.DirectX.Direct3D.NotAvailableException at Caps caps = Manager.GetDeviceCaps(adapterOrdinal, DeviceType.Hardware);
Any suggessions?
Thanks
I've got exception on the Device Constructor as Dast said it. Any useful ideas from those of who has worked it out after removing(commenting out) the
#if DEBUG
presentParams.Windowed=true;
#endif
section.
hey there,
at first i wan't to thank you for this very good article(s). I'm a programmer since two years now but i've never tried to to desing a came yet. YOU got me realy into that now ;)
Now my question:
is there any specific reason why you create global variables such as "deltaTime" or "device" at the very END of a class or is it just some sort of style?
I learned to create global variables always at the BEGINNING of a class.
It doesn't matter anyhow where i create them, does it?
Seems like i just learned a diffrent style... please correct me if i'm wrong.
chris from germany
When I run it in debug mode I get the blue window... but if I just run it (full-screen) it crashes.
I suspect Vista and DX10 are my problems... and fixes?
I am getting a similar exception at
Caps caps = Manager.GetDeviceCaps(adapterOrdinal, DeviceType.Hardware)
The error string :"D3DERR_NOTAVAILABLE" and errror code -2005530518.
I am up to the part where it says to complile the program and you get a blue screen. I have done all the suggestions made including DirectX libraries and remcoing a few "unsafe" words from the dxmutmisc.cs file but I still get an error message about not finding the namespace "Framework" in that very file. I am using Visual C# Express Edition.
Hey just FYI, if you are running on 64 bit, you have to change the build tab on the project properties to build for x86, otherwise you get an error every time you try to run.
I get an error that says cannot process code for
if (caps.DeviceCaps.SupportsHardwareTransformAndLight) {
createFlags = CreateFlags.HardwareVertexProcessing;
}
else
{
createFlags = CreateFlags.SoftwareVertexProcessing;
}
it says you shouldn't modify the code in the Initialize Component method.
I get the same exception if I don't set presentParams.Windowed = true
I got the same error Sean did... I added using Microsoft.DirectX.Direct3D; but then it threw a warning:
Warning 1 Field 'BattleTank2005.GameEngine.device' is never assigned to, and will always have its default value null C:\Users\Sean\Documents\Visual Studio 2005\Projects\BattleTank2005\BattleTank2005\GameEngine.cs 32 24 BattleTank2005
Any one else get this warning also??
I really like your writing style and I feel like I am learning a lot. At time it seems like you are trying to cram a lot of information into a little space but I understand where you are coming from. This is a tutorial and not a book. With that said I would like to suggest that you write a book and try to get one of those big publishers to pick you up. I'm sure it's easier said then done but after reading two parts of your tutorial I am hooked and would purchase your book and recommend it to others in a heart beat. I have wanted to write games for a long time now and this is the first time any tutorials have made since to me.
You stated you removed Application.EnablRTLMirroring() from the Program.cs class. My program.cs class did not have this method call in it. I assume this is because I am running a newer update of VS2005
My only question is: Can you elaborate more on why you added a using block in the program.cs class. Or better yet can you explain why or how it works. I understand the purpose but not why it does that.
Just so i understand, what do you mean by "wraping the creation of the GameEngine class into a using statement"? That was mentionned in the Code Housekeeping section.
Thank-you!
Same problem as Dast - without adding
presentParams.Windowed = true;
to the GameEngine() class, the program crashes.
OS I'm using is Windows Vista.
Problem that I am having is in the dxmutmisc.cs file provided with the 2007 SDK. FramWork is not identiefied:
Error 1 The type or namespace name 'Framework' could not be found (are you missing a using directive or an assembly reference?) C:\Projects C#\BattleTank2005\BattleTank2005\DirectXSupport\dxmutmisc.cs 2189 61 BattleTank2005
Anyone figure out why it Crashes?
I've added all the includes and what not... runs but crashes..
Would anyone please tell me why the window stays grey when I start it? I told the computer to change the color to dark blue in the color part of the device's setup near the framework timer start function but it just won't do it! Any suggestions?
PingBack from | http://blogs.msdn.com/coding4fun/archive/2006/11/03/940223.aspx | crawl-002 | refinedweb | 2,355 | 66.74 |
Together, in my RxSwift primer series, we’ve:
- Learned what Rx is all about
- Begun to convert a simple sample app to use Rx
- Eliminated stored state
- Learned how to use a RxCocoa unit
Today, we’re going to tackle something we probably should have been doing all along: unit testing.
A Quick Digression on Unit Testing
Unit testing is, for some reason, a bit controversial. To me, I wouldn’t ship code without decent unit test coverage any sooner than I’d drive without a seatbelt on. While neither can guarantee your safety, both are reasonably low cost ways to improve your chances.
Many iOS developers I know—particularly indies—don’t seem to have the time for unit testing. I’m not in their shoes, so I can’t really argue. That being said, if you have any spare time in your day, I can’t speak highly enough about how helpful I’ve found unit testing to be.
TDD is 🍌 though. No one does that, right?
Architecture Changes
We left things here, with our
ViewController looking like this:

class ViewController: UIViewController {
    @IBOutlet weak var label: UILabel!
    @IBOutlet weak var button: UIButton!

    private let disposeBag = DisposeBag()

    override func viewDidLoad() {
        self.button.rx.tap.asObservable()
            .scan(0) { (previousValue, _) in
                return previousValue + 1
            }
            .map { currentCount in
                return "You have tapped that button \(currentCount) times."
            }
            .asDriver(onErrorJustReturn: "")
            .drive(self.label.rx.text)
            .addDisposableTo(disposeBag)
    }
}
As written, this code works great. Truth be told, there’s a good argument to be made that it isn’t even worth unit testing. However, as with everything in this series, this is just barely enough to allow us to see how we could unit test it.
The first thing we need to do is separate the pieces in that
Observable chain.
As written, there’s no easy way to test what’s going on in the
ViewController.
A whole discussion could be had about architecture here. I may approach that at a later time. For now, suffice it to say, we’re going to introduce two new types.
Event Provider
The
EventProvider is a
struct that carries any
Observables that are being
emitted from
ViewController. These
Observables are anything that drive
business logic. In our case, our business logic is the counter, and the
Observable that drives that is the button tap. Thus, here is our entire
EventProvider:
struct EventProvider {
    let buttonTapped: Observable<Void>
}
Presenter
Taking a cue from VIPER, the
Presenter is where business logic
happens. For us, that’s as simple as incrementing the count, or really,
the
scan. Here’s the entire
Presenter:
struct Presenter {
    let count: Observable<Int>

    init(eventProvider: EventProvider) {
        self.count = eventProvider.buttonTapped.scan(0) { (previousValue, _) in
            return previousValue + 1
        }
    }
}
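If scan is unfamiliar, its behavior is easy to model outside Rx. A minimal Python generator (a toy, not RxSwift) folds each incoming event into an accumulator and emits every intermediate value, which is exactly how three button taps become the running counts 1, 2, 3:

```python
def scan(source, seed, fn):
    """Emit the running fold of fn over source, starting from seed."""
    acc = seed
    for value in source:
        acc = fn(acc, value)
        yield acc

taps = [(), (), ()]  # three button taps, each carrying no payload
counts = list(scan(taps, 0, lambda prev, _: prev + 1))
print(counts)  # → [1, 2, 3]
```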
The general path of communication is as such:
The
ViewController exposes its
Observables to the
Presenter by way of the
EventProvider. The
ViewController enrolls in
Observables that are Properties
on the
Presenter itself.
Aside: Alternatively, you could choose to have the
Presenter emit a
ViewModel that encapsulates the entire state of the view. For simplicity,
I’m just emitting the count by way of an
Observable<Int> exposed on the
Presenter.
Here is our revised
ViewController that takes advantage of the new
Presenter
by using an
EventProvider:
class ViewController: UIViewController {
    // MARK: Outlets
    @IBOutlet weak var label: UILabel!
    @IBOutlet weak var button: UIButton!

    // MARK: ivars
    private let disposeBag = DisposeBag()
    private lazy var presenter: Presenter = {
        let eventProvider = EventProvider(buttonTapped: self.button.rx.tap.asObservable())
        return Presenter(eventProvider: eventProvider)
    }()

    override func viewDidLoad() {
        self.presenter.count
            .asDriver(onErrorJustReturn: 0)
            .map { currentCount in
                return "You have tapped that button \(currentCount) times."
            }
            .drive(self.label.rx.text)
            .addDisposableTo(disposeBag)
    }
}
The real differences are the addition of
lazy var presenter and the
implementation in
viewDidLoad(). We’re storing the
presenter as a property
so it never falls out of scope until our entire
ViewController does. We’re
using a
lazy property so that we don’t have to make it optional, but can
still create it after
init time.
The chain in
viewDidLoad() is mostly the same as we had seen before, except
that we are using the
presenter's
count property to drive everything. A way
to diagram this out is:
ViewController.button.rx.tap drives
EventProvider.buttonTapped, which drives
Presenter.count, which drives
our
map and
Driver, which drives
ViewController.label.rx.text
Everything is wired up as we expect, if slightly less linearly. Since I’ve been using an architecture similar to this at work for months, this reads very clearly to me now. If you’re scratching your head, that’s not unreasonable at this stage in the game. Nonetheless, by using an architecture like this, we now have separated our concerns:
- The view controller is simply in charge of maintaining the user interface
- The presenter is in charge of business logic
- The event provider is what will need to be faked
Now we know what we need to unit test: the
Presenter.
Unit Testing Observables
Remember what I said about
Observables way back in part 2:
At the end of the day, just remember that an
Observableis simply a representation of a stream of events over time.
It’s the end that makes things a little bit dodgy:
stream of events over time
How do we represent that in a unit test, that’s supposed to run and return
immediately? Clearly, we need a way to fake signals on input
Observables
(like our
EventProvider) and a way to capture the results on output
Observables (like our
Presenter).
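The shape of the problem can be sketched with a tiny push-based stream in Python (a toy model, not RxSwift or RxTest; all the names here are invented): we fake the input by pushing values ourselves, and capture the output by subscribing a list's append method.

```python
class Subject:
    """Minimal push-based stream: every observer is called on each event."""
    def __init__(self):
        self.observers = []

    def subscribe(self, fn):
        self.observers.append(fn)

    def on_next(self, value):
        for fn in self.observers:
            fn(value)

def scan(source, seed, fn):
    """Return a new stream that emits the running fold of source."""
    out = Subject()
    acc = [seed]
    def forward(value):
        acc[0] = fn(acc[0], value)
        out.on_next(acc[0])
    source.subscribe(forward)
    return out

taps = Subject()   # the faked input stream (our "button")
results = []       # the captured output
scan(taps, 0, lambda a, _: a + 1).subscribe(results.append)
taps.on_next(())
taps.on_next(())
print(results)  # → [1, 2]
```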
Preparing for Unit Testing
Thankfully, RxSwift has a peer that we can take as a dependency only for the purposes of testing: the appropriately named RxTest.
Let’s amend our
Podfile; I’m showing only the relevant portion:
# Pods for RxSwiftDemo
pod 'RxSwift'
pod 'RxCocoa'

target 'RxSwiftDemoTests' do
    inherit! :search_paths
    # Pods for testing
    pod 'RxTest', '~> 3.0'
end
Once we do a
pod install, we have some new features available to us. Most
notably,
TestScheduler.
Creating our Unit Test
A
TestScheduler allows you to fake one or more
Observables by defining
at what time they should signal, and what those signals should be. The unit of
measure for “time” is largely irrelevant; the tests will run as fast as the host
machine allows.
In order to unit test our
Presenter, we will create a fake
Observable that
we will feed into our
EventProvider. This will, in turn, get fed into our
Presenter. Since we know exactly how this fake
Observable will signal, we
can know exactly how the resulting
count from the
Presenter should signal.
We’ll create a new unit test class, and we’re going to store two instance
variables within it: a
DisposeBag and this new
TestScheduler. We will also
reset them between each test in the class, to ensure each test starts from a
clean slate. So our test class looks like this, with
imports included for
reference:
import XCTest
@testable import RxSwiftDemo
import RxSwift
import RxTest

class RxSwiftDemoTests: XCTestCase {
    var disposeBag = DisposeBag()
    var scheduler: TestScheduler!

    override func setUp() {
        super.setUp()
        self.scheduler = TestScheduler(initialClock: 0)
        self.disposeBag = DisposeBag()
    }
}
Now we need to leverage the scheduler. Let’s create a test case.
In the test case, we will have to follow these steps:
- Create a hard-coded list of events to drive the faked
buttonTappedstream
- Create an
Observerto observe the results of the
countstream
- Wire up our
EventProviderand
Presenter
- Wire up the
Observer
- Run the scheduler
- Compare the results to what we expect
Let’s take a look at each step:
Create a Fake Stream & Observer
To create the fake stream, we’ll use our
TestScheduler's ability to create an
Observable. We have to choose between a hot and cold observable, which is a
whole other topic[1], but just rest assured that hot will generally be a fine
choice, especially for UI-sourced streams. We’ll fake it by specifying what
events happen at what times:
let buttonTaps = self.scheduler.createHotObservable([
    next(100, ()),
    next(200, ()),
    next(300, ())
])
This can be approximated using this marble diagram:
---[@100]---[@200]---[@300]--->
Basically, at time 100, time 200, and time 300, we’re simulating a button tap.
You can tell because we’re doing a
next event (as opposed to
error or
complete) at each of those times.
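Virtual time is the key trick: nothing actually waits. A sketch in Python (a toy, not RxTest; the function name is mine) shows the idea — scheduled events are replayed in time order at machine speed, and each result is recorded alongside its virtual timestamp:

```python
def run_virtual(hot_events, fold, seed):
    """hot_events: list of (virtual_time, value) pairs. Replays them in
    time order, folding each value, and records (time, result) pairs."""
    recorded, acc = [], seed
    for time, value in sorted(hot_events):
        acc = fold(acc, value)
        recorded.append((time, acc))
    return recorded

events = [(100, ()), (200, ()), (300, ())]
print(run_virtual(events, lambda a, _: a + 1, 0))
# → [(100, 1), (200, 2), (300, 3)]
```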
Now we need something to observe the result stream. We don’t need the actual stream we’re observing yet; we simply need to know what type it is:
let results = scheduler.createObserver(Int.self)
Later, we’ll use that
results observer to interrogate what values were signaled
on the
Presenter's
count: Observable<Int>.
Wiring Everything Up
This portion is standard unit testing: pass your fakes into your objects under
test. For us, that means passing our
buttonTaps observable into a new
EventProvider, and then passing that into a
Presenter:
let eventProvider = EventProvider(buttonTapped: buttonTaps.asObservable()) let presenter = Presenter(eventProvider: eventProvider)
Running the Scheduler
Now we need to actually run the scheduler, which will cause the
buttonTap
stream to start emitting events. To do so we need to do two things. First,
we ensure that we’re capturing what’s emitted by the
Presenter in our
Observer:
self.scheduler.scheduleAt(0) { presenter.count.subscribe(results).addDisposableTo(self.disposeBag) }
Note that we’re scheduling this enrollment at time
0. Given the way we’ve set
up
buttonTaps, we can do this any time before time
100. If we do it after
time
100, we’ll miss the first event.
Now, we actually tell the scheduler to run:
scheduler.start()
Testing the Results
By this point, the scheduler will have run, but we still haven’t tested the
results. We can do so by comparing what’s in our
Observer to a known expected
state. Note that the expected state happens at the same times as our faked
buttonTaps, but the values are the results of the
scan operator:
let expected = [ next(100, 1), next(200, 2), next(300, 3) ]
Now, thanks to an overload provided by RxTest, we’ll do a normal
XCAssertEqual
to confirm the results match what we expected:
XCTAssertEqual(results.events, expected)
Let’s look at the whole thing all together:
func testPresenterCount() { let buttonTaps = self.scheduler.createHotObservable([ next(100, ()), next(200, ()), next(300, ()) ]) let results = scheduler.createObserver(Int.self) let eventProvider = EventProvider(buttonTapped: buttonTaps.asObservable()) let presenter = Presenter(eventProvider: eventProvider) self.scheduler.scheduleAt(0) { presenter.count.subscribe(results).addDisposableTo(self.disposeBag) } scheduler.start() let expected = [ next(100, 1), next(200, 2), next(300, 3) ] XCTAssertEqual(results.events, expected) }
A quick ⌘U to run the test, and we see what we hoped for:
You can see the final version of this code here.
Now, feel free to modify
buttonTaps,
expected, or the time we used in
scheduleAt() to see how tests fail. Also pay attention to the Console output,
as it does a good job of showing the difference between expected and actual.
Wrapping Up
My RxSwift Primer is, for now, complete. We’ve learned:
- What makes Rx appealing
- How to convert imperative code to reactive
- How to eliminate stored state
- How to leverage RxCocoa units
- How to unit test
You now have all the tools you need to start writing your own code using RxSwift. For more help with Rx, I recommend:
Rx has made my code better in almost every measure. I’m really glad to have been introduced to it, and I can’t really imagine writing code any other way. Even though it’s a steep learning curve, and it requires rewiring your brain to think about problems differently, the juice is well worth the squeeze.
Good luck!
Acknowledgements
My thanks to Daniel “Jelly” Farrelly for pushing me to write this series, and for doing first-pass edits. You can hear Jelly and I discuss RxSwift on his now-complete podcast, Mobile Couch, on episode #93.
My thanks to Jamie Pinkham for introducing me to RxSwift, and for doing the technical edits on each of these posts.
Observablescan be either hot or cold. Cold
Observables do not emit events until they are subscribed to. This is the default behavior for most
Observables. Hot
Observables will emit even if there are no subscribers. UI elements are examples of hot
Observables: just because no one is listening for a button tap doesn’t mean it didn’t happen. You can find more details in the RxSwift documentation. ↩ | https://www.caseyliss.com/2016/12/21/rxswift-primer-part-5?utm_source=Swift_Developments&utm_medium=email&utm_campaign=Swift_Developments_Issue_69 | CC-MAIN-2020-45 | refinedweb | 2,007 | 62.17 |
According to the VST specs the setParameter getParameter functions work on the float range of 0.0f to 1.0f for the value of the parameter. My components have negative values and i know i need to map the to that range of 0.0f to 1.0f i was wondering if anyone has a piece of code to handle such translations (performance is an issue here too). How to map the negative range: -64 - 63/128 values to 0.0f - 1.0f and the other way around. I know it’s simple math but i’m not sure i’m doing it the right way.
If you just need a linear mapping, the calculations should be as simple as this:
float zplControllableLib::LinearAutomationNormalizer::normalizeValue ( const double& value, const double& minValue, const double& maxValue ) { jassert(maxValue > minValue); float normalized = (float)((value - minValue) / (maxValue - minValue)); jassert(normalized >= 0.0f && normalized <= 1.0f); return normalized; } double zplControllableLib::LinearAutomationNormalizer::denormalizeValue ( const float& normalized, const double& minValue, const double& maxValue ) { jassert(normalized >= 0.0f && normalized <= 1.0f); jassert(maxValue > minValue); return minValue + normalized * (maxValue - minValue); }
This should work in all cases as long as maxValue is > minValue, of course you’ll need to add the rounding for your usecase.
These methods are part of a fairly complex set of classes that deal with parameter automation, serialization, validation. I have been thinking about sharing these as a 3rd party module (see (), but there didn’t seem to be so much demand, probably since most people have come up with their own solution.
sorry i posted something stupid, nevermind
Hi steffen,
<but there didn’t seem to be so much demand<
I don’t know why you think so. I think it would be great if you share your parameter classes.
All the new people need it and the one who made their own can compare if their missing somthing.
There have been viewtopics with wishes for parameter classes in juce.
I’ll try to find some time on the weekend then to give an example of how to work with those classes, so you and others can decide if they seem useful.
Since it would be quite some work to clean them up before publishing, I’d like to make sure they won’t become useless or obsolete with the changes and improvements others have been thinking about or have already implemented. (My classes don’t replace the way parameters are handled by the wrappers, but rather build up on that)
Here’s some functions for manipulating parameters. Use the inverse functions in ‘getParameter’ and ‘setParameterNotifyingHost’ and the non-inverse in ‘setParamer’. The warp functions, borrowed from JUCE’s slider class allow you to set a value you wish to be represented by 0.5 - useful for frequency control for example. The intStep functions allow integers to be represented and the Bool is self explanatory. Hope these help.
float warp(float x, float max, float min, float mid) { float sf = warpCoefficients(max, min, mid); float y = exp (log (x) / sf); return min + (max - min) * y;; } float inverseWarp(float x, float max, float min, float mid) { float sf = warpCoefficients(max, min, mid); float n = (x - min) / (max - min); return pow (n, sf); } float warpCoefficients(float max, float min, float mid) { float skewFactor = log (0.5f) / log ((mid - min) / (max - min)); return skewFactor; } float linear(float x, float max, float min) { return min + (max - min) * x; } float inverseLinear(float x, float max, float min) { return (x - min) / (max - min); } int intStep(float x, int max, int min) { return min + (int)((float)(max - min) * x + 0.5f); } float inverseIntStep(int x, int max, int min) { return (float)(x - min) / (float)(max - min); } bool boolStep(float x) { if (x < 0.5) return false; else return true; } float inverseBoolStep(bool x) { if (x) return 1.0; else return 0.0; } | https://forum.juce.com/t/vst-parameter-values-and-negatives/8363 | CC-MAIN-2020-40 | refinedweb | 640 | 61.97 |
hello everybody, and forgive me if some of my writings are a bit obscure (you know what sleep deprivation can do on any brain, so imagine with mine) >De: Russell Coker >A: whygee > >On Wed, 9 Oct 2002 00:36, whygee. i don't know everything but it looks good anyway. I am also in contact with the GIPI Linux project and the french Linux and Hurd community (among other things). But i'm happy to speak to other people as well :-) >> -...) > >Debian currently runs on i386, SPARC, Alpha, > M68k, PPC, ARM, MIPS, HPPA, IA64, and S/390. i already know about this (more or less). my point is : the F-CPU ISA (Instruction Set Architecture) can behave more or less like a MIPS or 680x0 (to some extent) but, because it is then underused, it would be an overkill. I know that one part of the issue is with GCC (and it will probably be very long before F-CPU is correctly supported) but the other part is in the algorithms. The network example is a good one : F-CPU can do dual-endian data reads and writes, and it's probably going to confuse both the compiler and the code because the machine endian can be changed for each instruction. Ok, F-CPU is relatively fast but it's a big waste anyway... > I think that for the majority of software (and all the really >important stuff) the endianness issues and architecture dependencies have >been sorted out. It will probably depend on GCC, too, and how it can understand the ISA. Now the problem is that neither C or GCC are made for handling dual-endian data. maybe a new data characteriser must be created, such as in unsigned long big_endian IP_address; ? >> - verify that F-CPU idioms don't collide with existing ones >> (for example, the "FCPU_*" name space should be unused >> in all preprocessing directives and C/++ libraries) > >Why does it need this? a namespace for f-cpu is useful for some libraries that could be written specifically for this architecture. 
But there can be a lot of upcoming reasons that can't be planned, so i prefer to make sure we won't bumpt into a problem at any time. >Why not __fcpu__? well, it is not determined for good, now, so we can still discuss about this. If a proposition is logical and realistic, it's a good candidate :-) >> -... i have already started investigating that issue. as a lot of others did, but in a PC-centric way... F-CPU is going to be used for parallel computing, mainly because the first silicons (if ever) will be too slow for what people expect (a first figure is to reach 200MHz and a theoretic maximum of 400MHz for FC0) when compared to current CPUs (xxx GHz). A "BIOS" like this of a PC is definitely *NO* because it drags a lot of compatibility and scalability issues. So i started doing a romfs-like utility that can help anybody boot the image one wants. The FLASH contains the f-romfs functions, a short bootstrap code that tests and setups the SDRAM, and then uses f-romfs to load an image in memory and execute it there. This image can then load other "files" with the f-romfs functions. >From there, i don't care much what kernel boots (even though i have some personal ideas). This is important if people want to "play" with F-CPU, develop and debug small funny or embedded sofware. A simple board with F-CPU, a FLASH and SDRAM is enough (i'm currently investigating the use of the JTAG interface, usually used for PCB testing purpose, for communicating with a small virtual console). This can even be completely simulated, in C or VHDL (at the gate level) in a single PC. Then new interfaces can be developped (serial lines, keyboard/mouse/usb, network....) but the JTAG "console" will still remain useful when a driver hangs... >Personally I'd rather see LinuxBIOS support and > have the kernel and initrd in the flash memory. yup, that's the logical choice. 
However, you can still develop a small program (probably in asm but that's not too complex) that can do "multi-boot" if you want to select an image. Using fromfs, this is possible. >> - ] > >Initially we'll do without that support. that's what i guess. > Most programs don't need such things, there's a lot > of work to get it basically booting, we can go back and >add F-CPU specific things later. yup. However, one of the challenges is that F-CPU must be fast from the start. People would be disapointed if it doesn't seem competitive enough. One way to be fast is to superpipeline the core (and it's done). However, if the SW doesn't exploit the features, it will be rather disapointing, and this can lead some people to think that F-CPU is not efficient... >> [though pointers should not have a size, but be defined >> just as : "up to as wide as a register"] > >This sounds difficult. As long as the kernel and the compiler are correctly setup and in sync, there should be no problem with correclty written code (ie that does not expect pointers to be 4-bytes long). The policy of clearing the unused MSB in integer computations should make pointers safe across many F-CPU implementations. However, pointer arithmetics can become nasty as wider registers appear : unaligned accesses are not allowed and if this is a problem for porting code from 32 to 64-bit data, it can become a nightmare when the architecture says that loaded and stored words can be as wide as any power of two (> 8 bits) but must be aligned. This can pop up in mallocs, in stacks, as well as any other random location... the price of performance... >> ... > >Actually most applications don't do anything that > requires more than 32bit math. 32-bit math, yes. 64-bit ints are often discussed but are slowly accepted as file sizes and memory widths increase. However the whole point of having even wider registers is not to compute 256-bit ints, but more ints at a time. 
F-CPU evaluation by other teams members showed that it is both more efficient and easier to increase the register width than to execute more instructions per cycle. And when it is well programmed, it can even need no recompilation :-) Currently, the first F-CPU core ("FC0") is very simple and reaches its best performance in numerical code (graphics, crypto, communications, DSP...). Performance scales "almost" linearly with the register width (depending on the compiler's ability to vector the code and it's still subject to Amdahl's law, of course). So one way to keep the impression of "speed" is to avoid "bloatware" as much as possible... :-) > None of the programs that I am currently working on do it. > >64bit and SIMD should be good for SSL, GPG, bzip2, RAID-5, ray tracing, and >DVD/AVI playing. i'm not sure for ray-tracing and bzip2 (there is no scatter/gather instruction yet) but other applications will LOVE it. just think about this as a "super-SSE2" :-) we can add : playing chess or network routing. A cool "router" could even be built with a network of F-CPUs (around a crossbar) and with dedicated Ethernet hardware on each chip. To add more ports and power, add more F-CPUs. Same goes for other applications. > While these are among the greatest CPU hogs they are not >the most common tasks to use a computer for. unless your computer is setup as a firewall and/or multimedia streaming server :-) and if it is sitting idle, it can achive nice SETI@home or RC5 scores. >>...). > >What do you mean here by "a simple line in a configuration file"? Are you >referring to configuration for a CPU emulator? (ooops sorry). > If so where can I find a copy >of the FCPU emulator? Can I package it for Debian? F-CPU "snapshots" are becoming rare these days (i had to rest this summer after LSM@Bordeaux) but they are stored there : (my latest one is but i haven't integrated Jaap's latest simulator updates). Making a Debian package is not a problem in the principle. 
But in practice it is highly "beta", it is 25 or 30% complete, few people are actively working on this, and the C code is tightly bound to the VHDL code. So either the Debian community can find several "supermen", or you'll have to wait a bit... (which is probably best). >>. > >Debian developers include people who work on upstream code development for >almost type of Linux software. While the Debian project is not focussed on >upstream development, to fulfil our aim of developing the best possible >packages for Debian we often need to write code for contribution upstream. > >>). > >Apart from the unlikely issue of programs using the macro name FCPU_ name >space (which is easy for an autobuilder to find and for us to fix once we get >things going properly) great ! > I don't think that there's anything that needs to be >done. Anything that could be done by static analysis has already been done >(and more) through the other ports. i guess so. but sticking to "compatibility mode" is a bit sad because it doesn't force people to think more about the platform, the coding practices .... think of it a while : x86 never succeeded implanting MMX and most other extensions. One reason is that they are hard to use and the newer extensions used different opcodes. But in the end, C and ia32 remains. OTOH if F-CPU tries to force people to rewrite all the code, it will never succeed. It is friendly with most data types for this reason (among others). But if people don't know that there are orthogonal features that can help their code behave better, they will not use them and some guys will want to remove these features "because it's not used"... And the performance will get back to what you'd expect from a classical MIPS. >>). > >Do you have anything that's ready to be packaged for Debian? i wouldn't call our code "packageable". the structure might change at any time (though much efforts are done to avoid this but this is what makes this eventuality more likely to happen anyway). 
Most tool are not complete. And only a few units are more or less ready (and only "execution units", not "control units"). However, much effort is done on testability, scalability, easing customization, scripting for regression tests, etc... >It doesn't have to be fully operational, it is not "operational" at all :-) we're mostly trying to setup the ground level with all the scripts and the coding practice (it gets intricate because a lot of skills are required). > just to a stage where > it can demonstrate the concept and > encourage people to try it out. Currently, Jaap Stolk has made a part of the "execution pipeline" with a few units, register set and some scheduling stuffs. However i won't encourage people to "use" it because of several problems : - it's not "scalable". registers are fixed-size 64-bits because Jaap uses C's long long ints. With VHDL, a bit vector of any width is possible (and used). - each C unit must remain in sync with the corresponding VHDL version. This must keep the whole thing from breaking and forking. Each unit should (ideally) be written in C AND VHDL and tested with the same test patterns. However, some units are not yet recoded in C, and some have been written by Jaap in C... - and to make it clearer : the F-CPU sources depend a lot on the availability of a VHDL compiler/simulator. 2 are currently working and available "for free" on the Net. i have been provided with 2 other proprietary tools by companies in order to ensure portability. If someone wants to use the F-CPU source, he'll have to understand and install one of these tools, and (believe me) it's not always straight-forward. I have written an extensive HOWTO on this matter, but you know that nobody reads the fucking manual... >If I could package a CPU simulator that can run "hello world" in the FCPU >instruction set then it would excite many people, some of whom would be >likely to have the skill and time to contribute code. 
all the F-CPU "lurkers" are waiting for that but none have done the necessary steps forward... and when they speak, we drown under some trolls of some sort or another... However, the few people who actively contribute (i'm fortunately not alone) have solved some highly important problems that go beyong writing "hi". - what format to use - what testing methodologies to use - what tools to use - how the damns stuff must be structured but when all the "fun" of "discussing" these problems had gone, nobody was there to implement what was agreed/decided so painfully... now the whole dirty work remains. - finishing ALL the execution units - defining the memory architecture - defining the protection and VM supports - implementing them - testing them you get the picture. However, when it will work, it will be close to flawless. >Getting some FCPU related programs in Debian will raise the visibility of your >project a lot! sure. But F-CPU is only at 30% after 4 years of trol^H^H^H^Hdiscussions. The "core" will work roughly when implemented at 60 or 70% for simple programs. >> ? > . \o/ yeah ! \o/ >> You are free to copy this message wherever needed. >> You can also post technical questions to the >> english f-cpu mailing list. > >Thanks, I have forwarded it to the debian-devel list (which is CC'd on this >reply). cool. i have forwarded the last message to the F-CPU list and this triggered a lot of (mostly off-topic) posts... but you can follow them here : :-) Thanks for the informations, YG | https://lists.debian.org/debian-devel/2002/10/msg00492.html | CC-MAIN-2018-09 | refinedweb | 2,354 | 70.43 |
:
- Personal..
Latest Release ¶
The latest version of the module is v1.2.0 released on 25-Oct-2014. Refer the CHANGE LOG for details.'].
Demo ¶
You can see detailed documentation or view a complete demo on usage of the extension..
Usage ¶
Module ¶ ], ],
DynaGrid ¶.
- Report any issues on the project page
- Use the forum page for any discussions on this extension
License ¶
yii2-dynagrid is released under the BSD 3-Clause License. See the bundled
LICENSE.md for details.
unable to install
please help
getting not found error
C:\bin>php composer.phar require kartik-v/yii2-dynagrid "*"
./composer.json has been updated
Updating dependencies (including require-dev)
Your requirements could not be resolved to an installable set of packages.
Problem 1
sion, there may be a typo in the package name.
worked by changing "minimum-stability": "dev" in composer.json
worked by changing "minimum-stability": "dev" in composer.json, thank you
namespace is wrong
The namespace in the usage code would be
use kartik\dynagrid\DynaGrid;
Minimum Stability
@sltech the instructions have been updated in the installation section. Before installation, you must remember to check if the
minimum-stabilityis set to
devin your composer.json file in the application root.
Namespace
@sltech the example is updated to include the correct namespace
well done! thx.
well done! thx.
Re: well done! thx.
thanks for the feedback scott - the latest release v1.2.0 is now released which supports saving of grid filters and sorts in addition to pagesize, columns order, and theme.
search is not working
after setting dynagrid in index.php, the search stopped working. Can you please help
Thank you
If you have any questions, please ask in the forum instead. | https://www.yiiframework.com/extension/yii2-dynagrid | CC-MAIN-2022-33 | refinedweb | 282 | 52.15 |
CI Status:
Welcome to Exiv2Welcome to Exiv2
Exiv2 is a C++ library and a command-line utility to read, write, delete and modify Exif, IPTC, XMP and ICC image metadata.
The file ReadMe.txt in a build bundle describes how to install the library on the platform. ReadMe.txt also documents how to compile and link code on the platform.
TABLE OF CONTENTSTABLE
- Thread Safety
- Library Initialisation and Cleanup
- Cross Platform Build and Test on Linux for MinGW
- Building with C++11 and other compilers
- Static and Shared Libraries
- Support for bmff files (CR3, HEIF, HEIC, and AVIF)
- License and Support
- Test Suite
- Platform Notes
2 Building, Installing, Using and Uninstalling Exiv22 Building, Installing, Using and Uninstalling Exiv2
You need CMake to configure the Exiv2 project and the GCC or Clang compiler and associated tool chain.
2.1 Build, Install, Use Exiv2 on a UNIX-like system2.1 Build, Install, Use Exiv2 on a UNIX-like system
$ cd ~/gnu/github/exiv2 # location of the project code $ mkdir build && cd build $ cmake .. -DCMAKE_BUILD_TYPE=Release $ cmake --build . $ make tests $ sudo make install
This will install the library into the "standard locations". The library will be installed in
/usr/local/lib, executables (including the exiv2 command-line program) in
/usr/local/bin/ and header files in
/usr/local/include/exiv2
cmake generates files in the build directory. cmake generates the project/solution/makefiles required to build the exiv2 library and sample applications. cmake also creates the files exv_conf.h and exiv2lib_export which contain compiler directives about the build options you have chosen and the availability of libraries on your machine.
Using the exiv2 command-line programUsing the exiv2 command-line program
To execute the exiv2 command line program, you should update your path to search /usr/local/bin/
$ export PATH="/usr/local/bin:$PATH"
You will also need to locate libexiv2 at run time:
$ export LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH" # Linux, Cygwin, MinGW/msys2 $ export DYLD_LIBRARY_PATH="/usr/local/lib:$DYLD_LIBRARY_PATH" # macOS
UninstallUninstall
I don't know why anybody would uninstall Exiv2.
$ cd ~/gnu/github/exiv2 # location of the project code $ cd build $ sudo make uninstall
These commands will remove the exiv2 executables, library, header files and man page from the standard locations.
2.2 Build and Install Exiv2 with Visual Studio2.2 Build and Install Exiv2 with Visual Studio
We recommend that you use conan to download the Exiv2 external dependencies on Windows. On other platforms (maxOS, Ubuntu and others), you should use the platform package manger. These are discussed: Platform Notes The options to configure and compile the project using Visual Studio are similar to UNIX like systems. See README-CONAN for more information about Conan.
When you build, you may install with the following command.
> cmake --build . --target install
This will create and copy the exiv2 build artefacts to C:\Program Files (x86)\exiv2. You should modify your path to include C:\Program Files (x86)\exiv2\bin.
2.3 Build options2.3 Build options
There are two groups of CMake options. There are many options defined by CMake. Here are some particularly useful options:
Options defined by /CMakeLists.txt include:
576 rmills@rmillsmm:~/gnu/github/exiv2/exiv2 $ grep ^option CMakeLists.txt option( BUILD_SHARED_LIBS "Build exiv2lib as a shared library" ON ) option( EXIV2_ENABLE_XMP "Build with XMP metadata support" ON ) option( EXIV2_ENABLE_EXTERNAL_XMP "Use external version of XMP" OFF ) option( EXIV2_ENABLE_PNG "Build with png support (requires libz)" ON ) ... option( EXIV2_ENABLE_BMFF "Build with BMFF support" ON ) 577 rmills@rmillsmm:~/gnu/github/exiv2/exiv2 $
Options are defined on the CMake command-line:
$ cmake -DBUILD_SHARED_LIBS=On -DEXIV2_ENABLE_NLS=Off
2.4 Dependencies2.4 Dependencies
The following Exiv2 features require external libraries:
On UNIX systems, you may install the dependencies using the distribution's package management system. Install the
development package of a dependency to install the header files and libraries required to build Exiv2. The script
ci/install_dependencies.sh is used to setup CI images on which we build and test Exiv2 on many platforms when we modify code. You may find that helpful in setting up your platform dependencies.
Natural language system is discussed in more detail here: Localisation
Notes about different platforms are included here: Platform Notes
You may choose to install dependences with conan. This is supported on all platforms and is especially useful for users of Visual Studio. See README-CONAN for more information.
LibiconvLibiconv
The library libiconv is used to perform character set encoding in the tags Exif.Photo.UserComment, Exif.GPSInfo.GPSProcessingMethod and Exif.GPSInfo.GPSAreaInformation. This is documented in the exiv2 man page.
CMake will detect libiconv of all UNIX like systems including Linux, macOS, UNIX, Cygwin64 and MinGW/msys2. If you have installed libiconv on your machine, Exiv2 will link and use it.
The library libiconv is a GNU library and we do not recommend using libiconv with Exiv2 when building with Visual Studio.
Exiv2 includes the file cmake/FindIconv.cmake which contains a guard to prevent CMake from finding libiconv when you build with Visual Studio. This was added because of issues reported when Visual Studio attempted to link libconv libraries installed by Cygwin, or MinGW or gnuwin32. Exiv2#1250
There are build instructions about Visual Studio in libiconv-1.16/INSTALL.window require you to install Cygwin. There is an article here about building libiconv with Visual Studio..
If you wish to use libiconv with Visual Studio you will have to build libiconv and remove the "guard" in cmake/FindIconv.cmake. Team Exiv2 will not provide support concerning libiconv and Visual Studio.
2.5 Building and linking your code with Exiv22.5 Building and linking your code with Exiv2
There are detailed platform notes about compiling and linking in
releasenotes/{platform}/ReadMe.txt
where
platform: { CYGWIN | Darwin | Linux | MinGW | msvc | Unix }
In general you need to do the following:
- Application code should be written in C++98 and include exiv2 headers:
#include <exiv2/exiv2.hpp>
Compile your C++ code with the directive:
-I/usr/local/include
Link your code with libexiv2 using the linker options:
-lexiv2and
-L/usr/local/lib
The following is a typical command to build and link with libexiv2:
$ g++ -std=c++98 myprog.cpp -o myprog -I/usr/local/include -L/usr/local/lib -lexiv2
2.6 Consuming Exiv2 with CMake2.6 Consuming Exiv2 with CMake
When exiv2 is installed, the files required to consume Exiv2 are installed in
${CMAKE_INSTALL_PREFIX}/lib/cmake/exiv2
You can build samples/exifprint.cpp as follows:
$ cd <exiv2dir> $ mkdir exifprint $ cd exifprint $ cat - > CMakeLists.txt <<EOF cmake_minimum_required(VERSION 3.8) project(exifprint VERSION 0.0.1 LANGUAGES CXX) set(CMAKE_CXX_STANDARD 98) set(CMAKE_CXX_EXTENSIONS OFF) find_package(exiv2 REQUIRED CONFIG NAMES exiv2) # search ${CMAKE_INSTALL_PREFIX}/lib/cmake/exiv2/ add_executable(exifprint ../samples/exifprint.cpp) # compile this target_link_libraries(exifprint exiv2lib) # link exiv2lib EOF $ cmake . # generate the makefile $ make # build the code $ ./exifprint # test your executable Usage: bin/exifprint [ path | --version | --version-test ] $
2.7 Using pkg-config to compile and link your code with Exiv22:
$ export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH"
To compile and link using exiv2.pc, you usually add the following to your Makefile.
PKGCONFIG=pkg-config CPPFLAGS := `pkg-config exiv2 --cflags` LDFLAGS := `pkg-config exiv2 --libs`
If you are not using make, you can use pkg-config as follows:
g++ -std=c++98 myprogram.cpp -o myprogram $(pkg-config exiv2 --libs --cflags)
2.8 Localisation2.8 Localisation
Localisation is supported on a UNIX-like platform: Linux, macOS, Cygwin and MinGW/msys2. Localisation is not supported for Visual Studio builds.
Crowdin have provided Exiv2 with a free open-source license to use their services. The Exiv2 localisation project is located at. You will also need to register to have a free user account on Crowdin. The Crowdin setup is discussed here: Exiv2#1510. It is recommended that you coordinate with Leonardo before contributing localisation changes on Crowdin. You can contact Leonardo by via GitHub..
- Running exiv2 in another language
$ env LANG=fr_FR exiv2 # env LANGUAGE=fr_FR exiv2 on Linux! exiv2: Une action doit être spécifié exiv2: Au moins un fichier est nécessaire Utilisation : exiv2 [ option [ arg ] ]+ [ action ] fichier ... Image metadata manipulation tool. $
- Adding additional languages to exiv2
To support a new language which we'll designate 'xy' for this discussion:
2.1) Generate a po file from the po template:
$ cd <exiv2dir> $ mkdir -p po/xy $ msginit --input=po/exiv2.pot --locale=xy --output=po/xy.po
2.2) Edit/Translate the strings in po/xy.po
I edited the following:
#: src/exiv2.cpp:237
msgid "Image metadata manipulation tool.\n"
msgstr ""
to:
#: src/exiv2.cpp:237
msgid "Image metadata manipulation tool.\n"
msgstr "Manipulate image metadata.\n"
2.3) Generate the messages file:
$ mkdir -p po/xy/LC_MESSAGES
$ msgfmt --output-file=po/xy/LC_MESSAGES/exiv2.mo po/xy.po
2.4) Install and test your messages:
You have to install your messages to test them. It's not possible to test a messages file by executing build/bin/exiv2.
$ sudo mkdir -p /usr/local/share/locale/xy/LC_MESSAGES
$ sudo cp -R po/xy/LC_MESSAGES/exiv2.mo /usr/local/share/locale/xy/LC_MESSAGES
$ env LANG=xy exiv2   # env LANGUAGE=xy on Linux!
exiv2: An action must be specified
exiv2: At least one file is required
Usage: exiv2 [ option [ arg ] ]+ [ action ] file ...
Manipulate image metadata.   <--------- Edited message!
$
$ zip xy.po.zip po/xy.po
  adding: po/xy.po (deflated 78%)
$ ls -l xy.po.zip
-rw-r--r--+ 1 rmills staff 130417 25 Jun 10:15 xy.po.zip
$
2.9 Building Exiv2 Documentation
$ cmake ..options.. -DEXIV2_BUILD_DOC=On $ make doc
To build the documentation, you must install the following products:
2.10 Building Exiv2 Packages
- Platform Package (header files, binary library and samples. Some documentation and release notes)
Create and build exiv2 for your platform.
$ git clone https://github.com/exiv2/exiv2
$ mkdir -p exiv2/build
$ cd exiv2/build
$ cmake .. -G "Unix Makefiles" -DEXIV2_TEAM_PACKAGING=On
...
-- Build files have been written to: .../build
$ cmake --build . --config Release
...
[100%] Built target addmoddel
$ make package
...
CPack: - package: /path/to/exiv2/build/exiv2-0.27.1-Linux.tar.gz generated.
- Source Package
$ make package_source
Run CPack packaging tool for source...
...
CPack: - package: /path/to/exiv2/build/exiv2-0.27.1-Source.tar.gz generated.
You may prefer to run
$ cmake --build . --config Release --target package_source
2.11 Debugging Exiv2
- Generating and installing a debug library
To generate a debug library, use the cmake option -DCMAKE_BUILD_TYPE=Debug and build in the usual way.
$ cd <exiv2dir>
$ mkdir build
$ cd build
$ cmake .. -G "Unix Makefiles" "-DCMAKE_BUILD_TYPE=Debug"
$ make
You must install the library to ensure that your code is linked to the debug library.
You can check that you have generated a debug build with the command:
$ exiv2 -vVg debug
exiv2 0.27.1
debug=1
$
- About the preprocessor symbols NDEBUG and EXIV2_DEBUG_MESSAGES
Exiv2 respects the symbol NDEBUG, which is set only for Release builds. There are sequences of code which are defined within:
#ifdef EXIV2_DEBUG_MESSAGES .... #endif
Those blocks of code are not compiled unless you define EXIV2_DEBUG_MESSAGES. They are provided for additional debugging information. For example, if you are interested in additional output from webpimage.cpp, you can update your build as follows:
$ cd <exiv2dir>
$ touch src/webpimage.cpp
$ make CXX_FLAGS=-DEXIV2_DEBUG_MESSAGES
$ bin/exiv2 ...
-- or --
$ sudo make install
$ exiv2 ...
If you are debugging library code, it is recommended that you use the exiv2 command-line program as your test harness as Team Exiv2 is very familiar with this tool and able to give support.
- Starting the debugger
This is platform specific. On Linux:
$ gdb exiv2
- I work on macOS and use Xcode to develop Exiv2. For a couple of years, Team Exiv2 had free open-source licences from JetBrains for CLion. I really liked CLion as it is cross-platform and runs on Windows, Mac and Linux. It has excellent integration with CMake and will automatically add -DCMAKE_BUILD_TYPE=Debug to the cmake command. It keeps build types in separate directories such as <exiv2dir>/cmake-build-debug.
- cmake --build . options --config Release|Debug and --target install
Visual Studio and Xcode can build debug or release builds without using the option -DCMAKE_BUILD_TYPE because the generated project files can build multiple types. The option --config Debug can be specified on the cmake command-line to specify the build type. Alternatively, if you prefer to build in the IDE, the UI provides options to select the configuration and target.
With the Unix Makefile generator, the targets can be listed:
$ make help
The following are some of the valid targets for this Makefile:
... all (the default if no target is provided)
... clean
... depend
... install/local
.........
2.12 Building Exiv2 with clang and other build chains
- On Linux
$ cd <exiv2dir>
$ rm -rf build ; mkdir build ; cd build
$ cmake .. -DCMAKE_C_COMPILER=$(which clang) -DCMAKE_CXX_COMPILER=$(which clang++)
$ cmake --build .
OR
$ export CC=$(which clang)
$ export CXX=$(which clang++)
$ cd <exiv2dir>
$ rm -rf build ; mkdir build ; cd build
$ cmake ..
$ cmake --build .
- On macOS
Apple provides clang with Xcode. GCC has not been supported by Apple since 2013. The "normal unix build" uses clang.
- On Cygwin, MinGW/msys2, Windows (using clang-cl) and Visual Studio.
I have been unable to get clang to work on any of those platforms.
2.13 Building Exiv2 with ccache
$ sudo apt install --yes ccache
To build with ccache, use the cmake option -DBUILD_WITH_CCACHE=On
$ cd <exiv2dir>
$ mkdir build ; cd build
$ cmake .. -G "Unix Makefiles" -DBUILD_WITH_CCACHE=On
$ make
# Build again to appreciate the performance gain
$ make clean
$ make

This is discussed in Exiv2#361.
2.14 Thread Safety
Exiv2 heavily relies on standard C++ containers. Static or global variables are used read-only, with the exception of the XMP namespace registration function (see below). Thus Exiv2 is thread safe in the same sense as C++ containers: Different instances of the same class can safely be used concurrently in multiple threads.
In order to use the same instance of a class concurrently in multiple threads the application must serialize all write access to the object.
The level of thread safety within Exiv2 varies depending on the type of metadata: The Exif and IPTC code is reentrant. The XMP code uses the Adobe XMP toolkit (XMP SDK), which according to its documentation is thread-safe. It actually uses mutexes to serialize critical sections. However, the XMP SDK initialisation function is not mutex protected, thus Exiv2::XmpParser::initialize is not thread-safe. In addition, Exiv2::XmpProperties::registerNs writes to a static class variable, and is also not thread-safe.
Therefore, multi-threaded applications need to ensure that these two XMP functions are serialized, e.g., by calling them from an initialization section which is run before any threads are started. All exiv2 sample applications begin with:
#include <exiv2/exiv2.hpp>

int main(int argc, const char* argv[])
{
    Exiv2::XmpParser::initialize();
    ::atexit(Exiv2::XmpParser::terminate);
#ifdef EXV_ENABLE_BMFF
    Exiv2::enableBMFF(true);
#endif
    ...
}
The use of the thread-unsafe function Exiv2::enableBMFF(true) is discussed in 2.19 Support for bmff files.
2.15 Library Initialisation and Cleanup
As discussed in the section on Thread Safety, Exiv2 classes for Exif and IPTC metadata are fully reentrant and require no initialisation or cleanup.
Adobe's XMPsdk is generally thread-safe; however, it must be initialized before any threads that access XMP metadata are started, and terminated after they finish. The Exiv2 library will initialize the XMPsdk if necessary, but it does not terminate it.
The exiv2 command-line program and sample applications call the following at the outset:
Exiv2::XmpParser::initialize();
::atexit(Exiv2::XmpParser::terminate);
#ifdef EXV_ENABLE_BMFF
Exiv2::enableBMFF(true);
#endif
2.16 Cross Platform Build and Test on Linux for MinGW
You can cross compile Exiv2 on Linux for MinGW. We have used the following method on Fedora and believe this is also possible on Ubuntu and other distros. Detailed instructions are provided here for Fedora.
Cross Build and Test On Fedora
1 Install the cross platform build tools
$ sudo dnf install mingw64-gcc-c++ mingw64-filesystem mingw64-expat mingw64-zlib cmake make
2 Install Dependencies
You will need to install x86_64 libraries to support the options you wish to use. By default, you will need libz and expat; the dnf command above has installed them for you. If you wish to use features such as webready, you should install openssl and libcurl as follows:
[rmills@rmillsmm-fedora 0.27-maintenance]$ sudo yum install libcurl.x86_64 openssl.x86_64
Last metadata expiration check: 0:00:18 ago on Fri 10 Apr 2020 10:50:30 AM BST.
Dependencies resolved.
=========================
 Package        Architecture   Version   Repository   Size
=========================
Installing:
...
3 Get the code and build
$ git clone https://github.com/exiv2/exiv2 --branch 0.27-maintenance exiv2
$ cd exiv2
$ mkdir build_mingw_fedora
$ cd build_mingw_fedora
$ mingw64-cmake ..
$ make
Note, you may wish to choose to build with optional features and/or build static libraries. To do this, request appropriately on the mingw64-cmake command:
$ mingw64-cmake .. -DEXIV2_TEAM_EXTRA_WARNINGS=On \
                   -DEXIV2_ENABLE_WEBREADY=On     \
                   -DEXIV2_ENABLE_WIN_UNICODE=On  \
                   -DBUILD_SHARED_LIBS=Off
The options available for cross-compiling are the same as provided for all builds. See: Build Options
4 Copy "system dlls" in the bin directory4 Copy "system dlls" in the bin directory
These DLLs are required to execute the cross-platform build in the bin directory from Windows:
$ for i in libexpat-1.dll libgcc_s_seh-1.dll libstdc++-6.dll libwinpthread-1.dll zlib1.dll ; do
    cp -v /usr/x86_64-w64-mingw32/sys-root/mingw/bin/$i bin
  done
5 Executing exiv2 in wine
You may wish to use wine to execute exiv2 from the command prompt. To do this:
[rmills@rmillsmm-fedora build_mingw_fedora]$ wine cmd
Microsoft Windows 6.1.7601

Z:\Home\gnu\github\exiv2\main\build_mingw_fedora>bin\exiv2
exiv2: An action must be specified
exiv2: At least one file is required
Usage: exiv2 [ option [ arg ] ]+ [ action ] file ...

Image metadata manipulation tool.
If you have not installed wine, Fedora will offer to install it for you.
6 Running the test suite
On a default wine installation, you are in the MSDOS/cmd.exe prompt. You cannot execute the exiv2 test suite in this environment as you require python3 and MSYS/bash to run the suite.
You should mount your Fedora exiv2/ directory on a Windows machine on which you have installed MinGW/msys2. You will need python3 and make.
My build machine is a MacMini with VMs for Windows, Fedora and other platforms. On Fedora, I build in a Mac directory which is shared to all VMs.
[rmills@rmillsmm-fedora 0.27-maintenance]$ pwd
/media/psf/Home/gnu/github/exiv2/0.27-maintenance
[rmills@rmillsmm-fedora 0.27-maintenance]$ ls -l build_mingw_fedora/bin/exiv2.exe
-rwxrwxr-x. 1 rmills rmills 754944 Apr 10 07:44 build_mingw_fedora/bin/exiv2.exe
[rmills@rmillsmm-fedora 0.27-maintenance]$
On MinGW/msys2, I can directly access the share:
$ cd //Mac/Home/gnu/github/exiv2/0.27-maintenance/build_mingw_fedora
$ export EXIV2_BINDIR=$PWD/bin
$ cd ../test
$ make tests
You will find that 3 tests fail at the end of the test suite. It is safe to ignore those minor exceptions.
2.17 Building with C++11 and other compilers
Exiv2 uses the default compiler for your system. Exiv2 v0.27 was written to the C++ 1998 standard and uses auto_ptr. C++11 and C++14 compilers will issue deprecation warnings about auto_ptr. As auto_ptr support has been removed from C++17, you cannot build Exiv2 v0.27 with C++17 or later compilers. Exiv2 v1.0 and later do not use auto_ptr and will require a compiler compliant with the C++11 standard.
To build Exiv2 v0.27.X with C++11:
cd <exiv2dir>
mkdir build ; cd build
cmake .. -DCMAKE_CXX_STANDARD=11 -DCMAKE_CXX_FLAGS=-Wno-deprecated
make
The option -DCMAKE_CXX_STANDARD=11 specifies the C++ Language Standard. Possible values are 98, 11, 14, 17 or 20.
The option -DCMAKE_CXX_FLAGS=-Wno-deprecated suppresses warnings from C++11 concerning auto_ptr. The compiler will issue deprecation warnings about the video, eps and ssh code in Exiv2 v0.27. This is intentional: these features of Exiv2 will not be available in Exiv2 v1.0.
Caution: Visual Studio users should not use -DCMAKE_CXX_FLAGS=-Wno-deprecated.
2.18 Static and Shared Libraries
You can build either static or shared libraries. Both can be linked with either static or shared run-time libraries. You specify shared/static with the option -DBUILD_SHARED_LIBS=On|Off and the run-time with the option -DEXIV2_ENABLE_DYNAMIC_RUNTIME=On|Off. Both options default to On, so by default you build shared libraries and use the shared run-time. The shared libraries are .dll on Windows (msvc, Cygwin and MinGW/msys), .dylib on macOS and .so on Linux and UNIX.
CMake creates your build artefacts in the directories bin and lib. The bin directory contains your executables and .dlls; the lib directory contains your static libraries. When you install exiv2, the build artefacts are copied to your system's prefix directory, which by default is /usr/local/. If you wish to test and use your build without installing, you will have to set your PATH appropriately. Linux/UNIX users should also set LD_LIBRARY_PATH and macOS users should set DYLD_LIBRARY_PATH.
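For example, on Linux the environment can be set up as follows (a sketch; /path/to/exiv2 is a placeholder for your source directory, and the layout assumes an uninstalled build in <exiv2dir>/build):

```shell
# Point PATH at the uninstalled executables and the loader at the libraries.
EXIV2_BUILD=/path/to/exiv2/build
export PATH="$EXIV2_BUILD/bin:$PATH"
export LD_LIBRARY_PATH="$EXIV2_BUILD/lib:$LD_LIBRARY_PATH"       # Linux/UNIX
# export DYLD_LIBRARY_PATH="$EXIV2_BUILD/lib:$DYLD_LIBRARY_PATH" # macOS
```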
The default build is SHARED/DYNAMIC and this arrangement treats all executables and shared libraries in a uniform manner.
Caution: The following discussion only applies if you are linking to a static version of the exiv2 library. You may get the following error from CMake:
CMake Error at src/CMakeLists.txt:30 (add_library):
  Target "my-app-or-library" links to target "Iconv::Iconv" but the target
  was not found. Perhaps a find_package() call is missing for an IMPORTED
  target, or an ALIAS target is missing?
Be aware that the warning concerning src/CMakeLists.txt:30 (add_library) refers to your file src/CMakeLists.txt. Although exiv2 has statically linked Iconv(), your code also needs to link it. You achieve that in your src/CMakeLists.txt with the code:
find_package(Iconv)
if( ICONV_FOUND )
    target_link_libraries( my-app-or-library PRIVATE Iconv::Iconv )
endif()
This is discussed in Exiv2#1230.
2.19 Support for bmff files (CR3, HEIF, HEIC, and AVIF)
Attention is drawn to the possibility that bmff support may be the subject of patent rights. Exiv2 shall not be held responsible for identifying any or all such patent rights. Exiv2 shall not be held responsible for the legal consequences of the use of this code.
Access to the bmff code is guarded in two ways. Firstly, you have to build the library with the cmake option -DEXIV2_ENABLE_BMFF=On. Secondly, the application must enable bmff support at run-time by calling the following function:
EXIV2API bool enableBMFF(bool enable);
The return value from enableBMFF() is true if the library has been built with bmff support (cmake option -DEXIV2_ENABLE_BMFF=On).
Applications may wish to provide a preference setting to enable bmff support and thereby place the responsibility for the use of this code with the user of the application.
3 License and Support
All project resources are accessible from the project website.
3.1 License
Copyright (C) 2004-2021 Exiv2 authors. You should have received a copy of the license file COPYING with this source code.
3.2 Support
For new bug reports, feature requests and support: Please open an issue in Github.
4 Running the test suite
Different kinds of tests:
The term bash scripts is historical. The implementation of the tests in this collection originally required bash. These scripts have been rewritten in python. Visual Studio Users will appreciate the python implementation as it avoids the installation of mingw/cygwin and special PATH settings.
Environment Variables used by the test suite:
If you build the code in the directory <exiv2dir>/build, tests will run using the default values of the environment variables.
The variable EXIV2_PORT or EXIV2_HTTP can be set to None to skip the http tests. The http server is started with the command python3 -m http.server $port. On Windows, you will need to run this manually once to authorise the firewall to permit python to use the port.
4.1 Running tests on a UNIX-like system
You can run tests directly from the build:
$ cmake .. -G "Unix Makefiles" -DEXIV2_BUILD_UNIT_TESTS=On $ make ... lots of output ... $ make tests ... lots of output ... $
You can run individual tests in the test directory. Caution: if you build in a directory other than <exiv2dir>/build, you must set EXIV2_BINDIR to run tests from the test directory.
$ cd <exiv2dir>/build
$ make bash_tests
addmoddel_test (testcases.TestCases) ... ok
....
Ran 176 tests in 9.526s
OK (skipped=6)
$ make python_tests
... lots of output ...
test_run (tiff_test.test_tiff_test_program.TestTiffTestProg) ... ok
----------------------------------------------------------------------
Ran 176 tests in 9.526s
OK (skipped=6)
$
4.2 Running tests on Visual Studio builds from cmd.exe
Caution: The python3 interpreter must be on the PATH, built for DOS, and called python3.exe. I copied the python.exe program:
> copy c:\Python37\python.exe c:\Python37\python3.exe
> set "PATH=c:\Python37;%PATH%"
You can execute the test suite as described for UNIX-like systems. The main difference is that you must use cmake to initiate the test as make is not a system utility on Windows.
> cd <exiv2dir>/build
> cmake --build . --target tests
> cmake --build . --target python_tests
Running tests from cmd.exe
You can build with Visual Studio using Conan. This is described in detail in README-CONAN.md.
As a summary, the procedure is:
c:\...\exiv2>mkdir build
c:\...\exiv2>cd build
c:\...\exiv2\build>conan install .. --build missing --profile msvc2019Release
c:\...\exiv2\build>cmake .. -DEXIV2_BUILD_UNIT_TESTS=On -G "Visual Studio 16 2019"
c:\...\exiv2\build>cmake --build . --config Release
... lots of output from compiler and linker ...
c:\...\exiv2\build>
If you wish to set an environment variable, use set:
set VERBOSE=1
cmake --build . --config Release --target tests
set VERBOSE=
4.3 Unit tests
The code for the unit tests is in <exiv2dir>/unitTests. To include unit tests in the build, use the cmake option -DEXIV2_BUILD_UNIT_TESTS=On.
There is a discussion on the web about installing GTest: Exiv2#575
$ pushd /tmp
$ curl -LO https://github.com/google/googletest/archive/release-1.8.0.tar.gz
$ tar xzf release-1.8.0.tar.gz
$ mkdir -p googletest-release-1.8.0/build
$ pushd googletest-release-1.8.0/build
$ cmake .. ; make ; make install
$ popd
$ popd
4.4 Python tests
You can run the python tests from the build directory:
$ cd <exiv2dir>/build
$ make python_tests
If you wish to run in verbose mode:
$ cd <exiv2dir>/build
$ make python_tests VERBOSE=1
The python tests are stored in the directory tests and you can run them all with the command:
$ cd <exiv2dir>/tests
$ export LD_LIBRARY_PATH="$PWD/../build/lib:$LD_LIBRARY_PATH"
$ python3 runner.py
You can run them individually with the commands such as:
$ cd <exiv2dir>/tests
$ python3 runner.py --verbose bugfixes/redmine/test_issue_841.py   # or $(find . -name "*841*.py")
You may wish to get a brief summary of failures with commands such as:
$ cd <exiv2dir>/build
$ make python_tests 2>&1 | grep FAIL
4.5 Test Summary
The name bash_tests is historical. They are implemented in python.
4.6 Fuzzing
The code for the fuzzers is in <exiv2dir>/fuzz. To build the fuzzers, use the cmake options -DEXIV2_BUILD_FUZZ_TESTS=ON and -DEXIV2_TEAM_USE_SANITIZERS=ON.
Note that this only works with the clang compiler, as libFuzzer is integrated with clang (>= 6.0).
To build the fuzzers:
$ cd <exiv2dir>
$ rm -rf build-fuzz ; mkdir build-fuzz ; cd build-fuzz
$ cmake .. -DCMAKE_CXX_COMPILER=$(which clang++) -DEXIV2_BUILD_FUZZ_TESTS=ON -DEXIV2_TEAM_USE_SANITIZERS=ON
$ cmake --build .
To execute a fuzzer:
$ cd <exiv2dir>/build-fuzz
$ mkdir corpus
$ ./bin/fuzz-read-print-write corpus ../test/data/ -jobs=$(nproc) -workers=$(nproc) -max_len=4096
For more information about fuzzing, see fuzz/README.md.
4.6.1 OSS-Fuzz
Exiv2 is enrolled in OSS-Fuzz, which is a fuzzing service for open-source projects, run by Google.
The build script used by OSS-Fuzz to build Exiv2 can be found here. It uses the same fuzz target (fuzz-read-print-write) as mentioned above, but with a slightly different build configuration to integrate with OSS-Fuzz. In particular, it uses the CMake option -DEXIV2_TEAM_OSS_FUZZ=ON, which builds the fuzz target without adding the -fsanitize=fuzzer flag, so that OSS-Fuzz can control the sanitizer flags itself.
5 Platform Notes
There are many ways to set up and configure your platform. The following notes are provided as a guide.
5.1 Linux
Update your system and install the build tools and dependencies (zlib, expat, gtest and others)
$ sudo apt --yes update
$ sudo apt install --yes build-essential git clang ccache python3 libxml2-utils cmake libexpat1-dev libz-dev zlib1g-dev libssh-dev libcurl4-openssl-dev libgtest-dev google-mock
For users of other platforms, the script /ci/install_dependencies.sh has code used to configure many platforms. The code in that file is a useful guide to configuring your platform.
Get the code from GitHub and build
$ mkdir -p ~/gnu/github/exiv2 $ cd ~/gnu/github/exiv2 $ git clone $ cd exiv2 $ mkdir build ; cd build ; $ cmake .. -G "Unix Makefiles" $ make
5.2 macOS
You will need to install Xcode and the Xcode command-line tools to build on macOS.
You should build and install libexpat and zlib. You may use brew, macports, build from source, or use conan.
I recommend that you build and install CMake from source.
5.3 MinGW/msys2
Please note that the platform MinGW/msys2 32 is obsolete and superseded by MinGW/msys2 64.
MinGW/msys2 64 bit
Install:
The file appveyor_mingw_cygwin.yml has instructions to configure the AppVeyor CI to build Exiv2 on MinGW/msys2 and Cygwin/64.
I use the following batch file to start the MinGW/msys2 64 bit bash shell from the DOS Command Prompt (cmd.exe):
@echo off setlocal set "PATH=c:\msys64\mingw64\bin;c:\msys64\usr\bin;c:\msys64\usr\local\bin;" set "PS1=\! MSYS \u@\h:\w \$ " set "HOME=c:\msys64\home\rmills" if NOT EXIST %HOME% mkdir %HOME% cd %HOME% color 1f c:\msys64\usr\bin\bash.exe -norc endlocal
Install MinGW Dependencies
Install tools and dependencies:
for i in base-devel git coreutils dos2unix tar diffutils make \
         mingw-w64-x86_64-toolchain mingw-w64-x86_64-gcc mingw-w64-x86_64-gdb \
         mingw-w64-x86_64-cmake mingw-w64-x86_64-gettext mingw-w64-x86_64-python3 \
         mingw-w64-x86_64-libexpat mingw-w64-x86_64-libiconv mingw-w64-x86_64-zlib \
         mingw-w64-x86_64-gtest
do (echo y | pacman -S $i) ; done
Download exiv2 from github and build
$ mkdir -p ~/gnu/github/exiv2 $ cd ~/gnu/github/exiv2 $ git clone $ cd exiv2 $ mkdir build ; cd build ; $ cmake .. -G "Unix Makefiles" # or "MSYS Makefiles" $ make
5.4 Cygwin/64
Please note that the platform Cygwin/32 is obsolete and superseded by Cygwin/64.
Download and run setup-x86_64.exe. I install into c:\cygwin64.
You need: make, cmake, curl, gcc, gettext-devel, pkg-config, dos2unix, tar, zlib-devel, libexpat1-devel, git, libxml2-devel, python3-interpreter, libiconv, libxml2-utils, libncurses, libxslt-devel, python38, python38-pip, python38-libxml2.
The file appveyor_mingw_cygwin.yml has instructions to configure the AppVeyor CI to build Exiv2 on MinGW/msys2 and Cygwin/64.
To build unit tests, you should install googletest-release-1.8.0 as discussed in 4.3 Unit tests.
I use the following batch file "cygwin64.bat" to start the Cygwin/64 bash shell from the Dos Command Prompt (cmd.exe).
@echo off setlocal set "PATH=c:\cygwin64\usr\local\bin;c:\cygwin64\bin;c:\cygwin64\usr\bin;c:\cygwin64\usr\sbin;" if NOT EXIST %HOME% mkdir %HOME% set "HOME=c:\cygwin64\home\rmills" cd %HOME% set "PS1=\! CYGWIN64:\u@\h:\w \$ " bash.exe -norc endlocal
5.5 Visual Studio
We recommend that you use Conan to build Exiv2 using Visual Studio. Exiv2 v0.27 can be built with Visual Studio versions 2008 and later. We actively support and build with Visual Studio 2015, 2017 and 2019.
As well as Visual Studio, you will need to install CMake, Python3, and Conan.
- Binary installers for CMake on Windows are available from cmake.org.
- Binary installers for Python3 are available from python.org
- Conan can be installed using python/pip. Details in README-CONAN.md
..>copy c:\Python37\python.exe c:\Python37\python3.exe
The python3 interpreter must be on your PATH.
5.6 Unix
Exiv2 can be built on many Unix and Linux distros. With v0.27.2, we are starting to actively support the Unix Distributions NetBSD and FreeBSD. For v0.27.3, I have added support for Solaris 11.4
We do not have CI support for these platforms on GitHub. However, I regularly build and test them on my MacMini Buildserver. The device is private and not on the internet.
I have provided notes here based on my experience with these platforms. Feedback is welcome. I am willing to support Exiv2 on other commercial Unix distributions such as AIX, HP-UX and OSF/1 if you provide me with an ssh account for your platform. I will require super-user privileges to install software.
For all platforms you will need the following components to build:
- gcc or clang
- cmake
- bash
- sudo
- gettext
To run the test suite, you need:
- python3
- chksum
- dos2unix
- xmllint
NetBSD
You can build exiv2 from source using the methods described for linux. I built and installed exiv2 using "Pure CMake" and didn't require conan.
You will want to use the package manager pkgsrc to build/install the build and test components listed above.
I entered links into the file system:

# ln -s /usr/pkg/bin/python37 /usr/local/bin/python3

FreeBSD
Clang is pre-installed as /usr/bin/{cc|c++}, as well as libz and expat. FreeBSD uses pkg as the package manager, which I used to install cmake and git.
$ su root
Password:
# pkg install cmake
# pkg install git
# pkg install bash
# pkg install python
Caution: The package manager pkg is no longer working on FreeBSD 12.0. I will move to 12.1 for future work. Others have reported this issue on 12.1. Broken package manager is very bad news. There are other package managers (such as ports), however installing and getting it to work is formidable.
634 rmills@rmillsmm-freebsd:~/gnu/github/exiv2/0.27-maintenance/build $ sudo pkg install libxml2
...                                     340.2kB/s    00:19
pkg: repository meta /var/db/pkg/FreeBSD.meta has wrong version 2
pkg: Repository FreeBSD load error: meta cannot be loaded No error: 0
Unable to open created repository FreeBSD
Unable to update repository FreeBSD
Error updating repositories!
635 rmills@rmillsmm-freebsd:~/gnu/github/exiv2/0.27-maintenance/build $
Solaris
Solaris uses the package manager pkg. To get a list of packages:
$ pkg list
To install a package:
$ sudo pkg install developer/gcc-7
Written by Robin Mills
robin@clanmills.com
Updated: 2021-09-21