Since 2018, the creator of Node.js has been working on a new JavaScript runtime called Deno that addresses some of the pain points he had identified. Deno’s features include:
- An improved security model
- Decentralized package management
- A standard library
- Built in tooling
Check out this blog post for a full overview of Deno’s features and how it compares to Node.js.
Deno 1.0 was just released. What better time to dive in? In this post we’ll get started with Deno by building a small project.
When I’m learning a new framework or language, I try to find a Goldilocks project. That is, not too hard, not too soft, somewhere between “hello world” and production-ready code.
Let’s write a Deno script that sends you an SMS affirmation. In order to do that we'll use Deno to:
- Read a file
- Read environment variables
- Run a script
- Import and use modules
- Make an API call
To follow along, you’ll need:
- A computer where you can run code
- A Twilio account - sign up for a free one here and get an extra $10 when you upgrade
- A Twilio phone number with SMS capabilities
We’ll be writing TypeScript since Deno has fantastic TypeScript support out of the box. If you are unfamiliar with JavaScript or haven’t used a typed programming language, you may want to read up on TypeScript basics.
Setting up your Deno dev environment (Denovironment?)
First we need to install Deno. Instructions vary depending on your operating system.
Mac and Linux folks: if you’re a Homebrew user, you can
brew install deno. If not, try running
curl -fsSL https://deno.land/x/install/install.sh | sh.
Windows users can run
iwr https://deno.land/x/install/install.ps1 -useb | iex or
choco install deno if you use Chocolatey.
To verify that the installation was successful, try running
deno --help from the command line. If you have trouble with installation check out the official Deno docs.
Optional, but highly recommended: if you’re using Visual Studio Code install the Deno extension. Otherwise you’ll get annoying Intellisense errors that don’t apply to Deno.
Hello, Deno: running your first script
As is traditional, we’ll write a Hello World script to ensure we can execute code. Add a new file in the top level directory of your project. Call it “send-affirmation.ts”. Copy the following code into it:
import { bgBlue, red, bold } from "https://deno.land/std/fmt/colors.ts";

console.log(bgBlue(red(bold("Hello world!"))));
Run the file with
deno run send-affirmation.ts on the command line. It should print the “Hello world!” text with some bold colors.
Reading a file with Deno
I know, I know, it’s a little contrived to put the affirmations in a file just so we can use the file-reading API. Since the goal is learning it’s okay to add a bit of unnecessary complexity.
Add a file called
affirmations.txt at the top level of your project directory. I copied the first ten from affirmations.dev but feel free to create your own.
You got this.
You'll figure it out.
You're a smart cookie.
I believe in you.
Sucking at something is the first step towards being good at something.
Struggling is part of learning.
Everything has cracks - that's how the light gets in.
Mistakes don't make you less capable.
We are all works in progress.
Replace the code in
send-affirmation.ts with the following:
const getAffirmations = async (fileName: string): Promise<Array<string>> => {
  const decoder = new TextDecoder("utf-8");
  const text: string = decoder.decode(await Deno.readFile(fileName));
  return text.split("\n");
};

const affirmations: Array<string> = await getAffirmations("affirmations.txt");
const affirmation: string =
  affirmations[Math.floor(Math.random() * affirmations.length)];

console.log(affirmation);
A lot of this isn’t too different from plain ol’ TypeScript. At a high level, we are:
- Using TextDecoder and Deno’s built-in readFile method to read the file
- Splitting the file into lines and throwing the individual values into an array
- Picking a random affirmation with Math.random
If we try to run the script with the previous command, we get an error response of
error: Uncaught PermissionDenied: read access to "/Users/tthurium/github/deno-getting-started/affirmations.txt", run again with the --allow-read flag
Deno’s security model requires you to specify exactly which permissions your module needs. You can run
deno run --help to see an explanation of the permissions model and all available flags.
This is a huge improvement over Node.js, which allows reading your hard drive, making requests, and all kinds of other potentially sketchy activities by default. Also, it’s great that Deno’s error message tells you exactly how to fix it.
Execute your script by adding the
--allow-read flag, like so:
deno run --allow-read send-affirmation.ts
Which should get you the following output:
Compile …/send-affirmation.ts
Everything has cracks - that's how the light gets in.
Deno compiles your TypeScript files for you, without you having to do anything. Which is why the
Compile file… line is only printed after you run a file that has changed.
Deno and third-party libraries
The Deno standard library is slick but standard libraries are never gonna give you everything you need. How do helper libraries work?
Unlike Node.js, there is no centralized Deno package manager. Decentralization is bound to be one of Deno’s most controversial design decisions.
Does decentralization make Deno less secure than Node.js? It depends. If you’re using npm 6.0 or above, you can run
npm audit to get a list of known vulnerabilities in your dependencies. Deno has no such functionality.
npm audit only helps to a point, because unknown vulnerabilities are unknown. At the end of the day, neither npm nor Deno is the App Store -- with both you’re running code that is fundamentally untrusted. IMO Deno is more secure, because at least Deno requires explicit permission for potentially sketchy activities like reading your hard drive.
Deno modules can be imported from any URL, and they’re cached on your hard drive on the first run. You can read the list of “official” 3rd-party Deno packages here. Or, you can search on Pika for Deno-compatible Node modules.
If we search Pika for the Twilio Node.js SDK, we get
Package found! However, no web-optimized "module" entrypoint was found in its package.json manifest.
Well, fine. The Twilio Node.js SDK does make it easier to make Twilio calls, but we’ll learn more from making raw API calls.
The Twilio API uses basic auth, which requires base 64 encoding. We’ll need to use a third-party Deno module for that. To import the base64 library, add the following code at the top of
send-affirmation.ts:
import * as base64 from "https://deno.land/x/base64/mod.ts";
Reading environment variables in Deno
To do Twilio things, you need your account SID and auth token which are found on the Twilio console. You don’t want to commit these values in code, because if you push them to a publicly accessible repository an attacker could use these credentials to do bad things with your account. Set these values as environment variables instead.
To double check that your environment variables are stored correctly, add the following code to the bottom of
send-affirmation.ts:
const accountSid: string | undefined = Deno.env.get("TWILIO_ACCOUNT_SID");
const authToken: string | undefined = Deno.env.get("TWILIO_AUTH_TOKEN");

console.log(`accountSid: ${accountSid}, authToken: ${authToken}`);
To run this code we'll need to add another flag to enable reading environment variables. From the command line, run
deno run --allow-read --allow-env send-affirmation.ts to validate that the variables you just added have been set correctly. Make sure to delete this
console.log statement afterwards to not expose your credentials in your logs accidentally.
Making a POST request in Deno
In addition to the account SID and auth token, the Twilio API requires a phone number to send to and from if you're sending a text message. Copy this code into
send-affirmation.ts and replace the phone numbers with the ones you want to use.
const accountSid: string | undefined = Deno.env.get("TWILIO_ACCOUNT_SID");
const authToken: string | undefined = Deno.env.get("TWILIO_AUTH_TOKEN");
const fromNumber: string = "Replace with your Twilio number";
const toNumber: string = "Replace with your cell number";
Next we’ll add a new function to make the request to the Twilio API.
Deno uses the same API as the
fetch method that’s built in to the browser. Having to rely on third-party packages for basic functionality is one of JavaScript’s biggest frustrations. Thanks Deno developers for having a sensible standard library!
const sendTextMessage = async (
  messageBody: string,
  accountSid: string | undefined,
  authToken: string | undefined,
  fromNumber: string,
  toNumber: string,
): Promise<any> => {
Yeah I know, this is a lot of params and some of these could be defined inside the function. At the same time, I prefer to pass in args because it makes it easier to write unit tests. Not that I'll be writing unit tests for this code, but it's the principle of the thing. 😆
If the account credentials aren't set throw a friendly error message:
  if (!accountSid || !authToken) {
    console.log(
      "Your Twilio account credentials are missing. Please add them.",
    );
    return;
  }
Inside the
sendTextMessage function body add the following code to encode Twilio credentials.
  const url: string =
    `https://api.twilio.com/2010-04-01/Accounts/${accountSid}/Messages.json`;
  const encodedCredentials: string = base64.fromUint8Array(
    new TextEncoder().encode(`${accountSid}:${authToken}`),
  );
We’ll finish out our function by actually making the API call.
x-www-form-urlencoded requires URL-encoded params in the request body. To the
URLSearchParams mobile, Batman!
  const body: URLSearchParams = new URLSearchParams({
    To: toNumber,
    From: fromNumber,
    Body: messageBody,
  });

  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      "Authorization": `Basic ${encodedCredentials}`,
    },
    body,
  });

  return response.json();
};
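As an aside (not part of the original tutorial), you can see exactly what URLSearchParams produces with a standalone snippet - it runs in Node.js or a browser console too, not just Deno. The phone numbers here are placeholders:

```javascript
// URLSearchParams percent-encodes each key/value pair and joins them with "&",
// which is exactly the application/x-www-form-urlencoded format the request needs.
const body = new URLSearchParams({
  To: "+15558675310",   // placeholder number
  From: "+15017122661", // placeholder number
  Body: "You got this.",
});

// "+" becomes %2B and spaces become "+":
console.log(body.toString());
// To=%2B15558675310&From=%2B15017122661&Body=You+got+this.
```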
Of course, we actually need to call the function.
const response = await sendTextMessage(
  affirmation,
  accountSid,
  authToken,
  fromNumber,
  toNumber,
);

console.log(response);
deno run --allow-read --allow-env --allow-net send-affirmation.ts
You should receive a text message containing one of the affirmations. Well done! 🦕🎉
Conclusion: building your first Deno app
Today you have learned a few things about Deno, like how to:
- Make POST requests
- Read from a file
- Deal with environment variables
- Understand Deno’s permissions model
- Import modules in Deno
- Use some of Deno’s standard library functionality
Are you building something cool with Deno? Tell me about it! Hit me up on Twitter or over email (tthurium {at} twilio {dot} com).
Invertible fixed-point complex rotationGeraint Luff
A look at how to perform lossless rotations on integer / fixed-point co-ordinates or complex values.
This is a slightly longer one - sorry about that! The first half lays out some cool ideas, and then we have to deal with some of the things we glossed over.
The problem
Let's say you have a integer (or fixed-point) complex value, and you want to perform a complex rotation:
The problem is that the result doesn't lie exactly on one of the fixed-point values. There are a couple of obvious ways to handle this:
- Use a larger fixed-point data format for the result, treating the extra bits as fractional.
- Round the result to the nearest fixed-point value
The problem with a larger data-type is that... well, now you're computing and storing a larger data-type. But in fact, both of these solutions run into trouble with invertibility.
Invertibility
If you add two fixed-point complex numbers together, you can subtract them again, to (perfectly) get your original answer back. Even integer overflows don't actually screw up this property.
It would be great to have the same property for our rotation: if we take our rotated result and perform the opposite rotation, we should get our original point back.
Simple rounding (our second approach above) fails on this because two different input values might round to the same result when rotated. If that happens, how can the inverse rotation know which one to return?
What do we need?
If we want our rotation to be perfectly invertible, and not need a larger data-type, then every possible value on our grid has to map to a distinct point on the same grid. In other words, the rotation has to shuffle the grid points around: a permutation of the grid.
Let's take a little detour into a related problem:
Detour: losslessly rotating an image
Let's say we have a 100x100 pixel image:
Let's say we want to rotate this image, but with no interpolation, or dropping/duplicating pixels. The result should be the same size, but we don't mind a bit of wrapping-around at the edges. This means that all we're doing is shuffling pixels around inside the image.
This image rotation is equivalent to our complex-rotation problem. Each fixed-point complex number can be considered as a pixel co-ordinate within the image, which is then mapped to a new (rotated) position.
Skewing the image
One operation we can definitely do losslessly is skewing. Here's a horizontal skew, where each row in the image is shifted horizontally by 15% of its vertical position:
This corresponds to the matrix:

\begin{pmatrix} 1 & 0.15 \\ 0 & 1 \end{pmatrix}
Since all we've done is shift rows around, it's entirely reversible - if you applied the opposite skew to that result above, you would restore the original image.
Here's a -25% vertical skew, corresponding to the matrix:

\begin{pmatrix} 1 & 0 \\ -0.25 & 1 \end{pmatrix}
Approximating a rotation
What we actually want is this rotation matrix:

\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
If we can express this matrix as the product of skew matrices (corresponding to a sequence of skew operations), then each pixel will end up approximately in its rotated position. It's approximate because we have to round the co-ordinates after every step.
It turns out we need three steps to make this work. If we define t = \tan(\theta/2) and s = \sin\theta, then:

\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} 1 & -t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ s & 1 \end{pmatrix} \begin{pmatrix} 1 & -t \\ 0 & 1 \end{pmatrix}
This corresponds to a horizontal skew, a vertical skew, and another horizontal skew:
The result is a rotated version of the image. Awesome! 😃
OK, so it's not exactly perfect - the vertical arrow is a bit wobbly, and the edge of the circle looks a bit gnarly:
But come on - it's a lossless image rotation, that's exciting! I'm excited.
Invertible fixed-point complex rotation
Now let's apply that same approach to rotating a fixed-point complex number. If we represent our complex number z = a + ib as the vector (a, b),
then a rotation by the complex value e^{i\theta} is exactly a multiplication by the rotation matrix above.
We know how to split that up into skew operations:
For simplicity (and stability), the pseudo-code below takes \sin\theta and \tan(\theta/2) in as pre-computed fixed-point coefficients:
rotate_forward(a, b, sinTheta, tanHalfTheta):
    a = a - round(b*tanHalfTheta)
    b = b + round(a*sinTheta)
    a = a - round(b*tanHalfTheta)
    return (a, b)

rotate_reverse(a, b, sinTheta, tanHalfTheta):
    a = a + round(b*tanHalfTheta)
    b = b - round(a*sinTheta)
    a = a + round(b*tanHalfTheta)
    return (a, b)
sinTheta and tanHalfTheta are fixed-point values. So we need a suitable fixed-point multiplication where the result has extra fractional bits, and a round() which converts back to our original bit-depth.
Asymmetrical rounding
We defined two functions above, because (depending how we implement it) round(-x) is not always equal to -round(x) - which means calling rotate() with an inverted angle might not exactly reverse things.
An obvious example is truncation (where everything rounds towards zero). Some round() implementations treat 0.5 and -0.5 symmetrically; some don't (e.g. Python rounds towards even numbers). If you ensure this symmetry, you only need the one rotate() function.
C++ example code
Here's some example code for the 16-bit case:
Complications
The principle above is very neat, and if you're just here for the cool ideas, you can stop here and skip to the end.
But there are a couple of little tweaks required for practical use, which I wanted to at least mention:
Larger rotations
Let's return to our image-rotation analogy, and try the same process again for a 150° rotation:
The first skew is so extreme that most of our image has wrapped around at least once, and that throws everything else off. Let's take a look at our skew coefficients for different angles:
Angle units are shown in radians because... because.
As you can see, our vertical skew stays somewhat manageable, but as we get close to a 180° rotation (\theta = \pm\pi), the horizontal skew coefficient \tan(\theta/2) blows up towards infinity.
Limiting to ±90° or ±45°
There's a straightforward fix, though. A 180° rotation is just flipping the signs of both axes - so let's try again, but this time we flip by 180° first, and then rotate back by -30°:
Using an optional pre-rotation of 180°, we only need rotations of ±90° to cover every possible angle. This makes things a bit more fiddly, but it's totally worth it.
If we optionally pre-rotate by ±90° as well (by swapping axes), we can reduce our maximum rotation to ±45°.
Overflow
When rotating an image like this, some of the output pixels can't be correct because their ideal source pixel/co-ordinate is outside the bounds of the image. In general, this can happen (for at least some angle) for anything outside this circle:
But not everything inside this circle is correct in our result either. Here are the operations for a 60° rotation, but this time whenever a pixel wraps around, we shade it in:
The problem is really the third step, which produces overflows where we might expect meaningful results:
If we allow pre-rotating by 180°, our maximum rotation is ±90° (or ±45° if we allow ±90° pre-rotation). Using the horizontal skew factor \tan(\theta/2), a ±90° rotation means horizontal shifts of up to the full vertical co-ordinate, so only points with |a| + |b| within range are safe from wrapping.
So, if we use 180° pre-rotation, our safe zone is the central diamond. If we include pre-rotations of ±90° as well, we get a bit more, but it's a slightly weird shape (being the intersection of the shallower skew-lines and the circle above). 🤷
Animated examples
Here's an example using the optional 180° pre-rotation, for various angles:
Each frame in the above animation is a single rotation of the original image, to illustrate the effect of overflow.
If you instead repeatedly apply a 10° rotation, the cumulative errors are much worse (particularly near the centre):
Ranges for coefficients
In our C++ example code above, we used a 16-bit value
int16_t for
sinTheta and
tanHalfTheta. If we scale it in the obvious way (one sign bit, and 15 fractional bits), this can represent
-1 but not
+1. We therefore might run into trouble if we try to rotate by something like 89.9°, where \sin\theta is so close to 1 that it rounds to a value our format can't represent.
One option is to have more non-fractional bits, by storing the rotation coefficients as
int32_t (particularly as we cast to
int32_t for the multiplication anyway). Or we could add some extra checking for these edge-case coefficients.
As long as we do the same thing for
rotate() and
inv_rotate(), we won't break invertibility - but if our coefficients wrap around (in some edge-cases) we could get results which don't reflect the rotation we want.
FFT butterfly
We can use this rotation approach to create an entirely reversible fixed-point FFT (or related transforms like MDCT). I'm not going to dive into all the details here - but let's look at just a single radix-2 FFT butterfly:
This is equivalent to multiplying the vector (x_0, x_1) by the matrix:

\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
If we swap the columns around, and apply an extra scaling factor, you get the matrix for a -45° rotation:

\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} \cos(-45°) & -\sin(-45°) \\ \sin(-45°) & \cos(-45°) \end{pmatrix}
So, the radix-2 FFT butterfly can be represented as a rotation, plus a bit of extra fiddling with columns/signs. (There's no complex multiplication here, so we'd just rotate the real/imaginary parts independently.)
Between this, and using this rotation approach for the twiddle-factors, you can hopefully see how we could construct a lossless fixed-point (I)FFT.
Comparison with floating-point FFT
In a floating-point FFT, we avoid the \frac{1}{\sqrt{2}} scaling factor inside each butterfly, typically folding any overall scaling into a single pass.
We can't avoid this factor in fixed-point FFTs (because it's required to be invertible), which means our butterflies take more computation. In addition, the knock-on effects of overflow in one butterfly (which we don't get with floating-point) are hard to intuitively predict. If our signal is close to full-scale, we could end up with results which don't intuitively match the signal.
Conclusion
We've seen how you can rotate a fixed-point complex number in a perfectly reversible way, and used image rotation as an analogy, including the effect of overflow. We briefly sketched out how this enables a fully-invertible fixed-point FFT.
There are some fiddly details if you want to maximise the correctness of the result, but I think the underlying technique (splitting rotation into reversible skew operations) is just really cool.
Hello,
I just started learning this wonderful language! But I'm facing some difficulties, which I would like to share, and hope for some help from you guys.
Yesterday while learning I created a program which gets the screen resolution of the desktop and displays the form at the top-center of the screen.
I have written the code, but then I get some weird error, and I don't understand it!
"An object reference is required for the nonstatic field, method, or property 'member'"
Please check my code and tell me whats wrong
- Thankyou!
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace MouseC
{
    public partial class form1 : Form
    {
        int deskWidth = Screen.PrimaryScreen.Bounds.Width/2;

        public form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            form1.Location = new Point(deskWidth, 0);
        }

        private void button3_Click(object sender, EventArgs e)
        {
        }
    }
}
Introduction to Random Number Generator in C
In order to generate the expected output, a program needs the proper input. Usually the inputs are provided by the user, but sometimes the program has to pick an input by itself. For instance, to get the current timestamp an application uses an inbuilt function to fetch it from the system. In the same way, sometimes we need the application to generate a random number which can be processed further to produce the intended output. Though it looks random to the user, the programming language offers us a mechanism to define the range of the random number. In this article, we will see the program implementation of random number generation using the C programming language, focusing on the inbuilt function that C provides for generating a random number.
What are Random Number Generator Functions in C?
There are several approaches to generating random numbers in any programming language. One can write their own function to estimate or generate a random number, but most languages also provide inbuilt functions for it. In the C programming language, we have a function called rand, which helps in generating a random number. This function comes predefined in C and can be used in a program by including the stdlib.h header file. The developer needs to mention the stdlib.h header file at the beginning of the program in order to leverage the rand function. Every time this function is called, it generates a new random number. Based on the requirement, one can generate a number of integer, float, or double data type. It can be used in a program simply by calling the rand() function.
Though the rand function is supposed to generate random values, it will produce the same sequence every time the program is executed if the seed value stays constant. If the requirement is to have a new random number generated every time the program executes, we have to make sure that the seed changes whenever the program runs. Time is something that keeps on changing, so it can be used as an ever-changing seed; to use time in the program we have to include the time.h header file.
Generating Integers
The rand() function is used to generate a random number. Every time it is called, it gives a random number. If developers add some logic to it, they can generate a random number within a defined range; if the range is not defined explicitly, it will return a totally random integer value. The rand() function uses a seed value to generate the random number. If the seed value keeps changing, the number generated will be new every time the program runs; otherwise it will return the same value that was generated when the program was first executed. Below is the program to generate a random integer.
Program
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main()
{
    int rand_num;
    srand(time(0));
    printf("The randomly generated number is ");
    rand_num = rand();
    printf("%d\n", rand_num);
    return 0;
}
Output:
The randomly generated number is 1804289383.
In this program, we have used time.h header file which is used to leverage the system time in generating the random number. As the time changes every time, the value of seed will change every time the program will be executed, helping us to generate a random number every time the program is executed. Rand_num is the variable that is used to store a randomly generated number. The function rand() generates a random number which is assigned to the variable rand_num. As we didn’t define the return value of the random number explicitly, it will give us an integer number.
Generating Floating-Point Numbers
The approach to generating a random float value is similar to the approach for generating an integer. The only difference is that we need to explicitly define that the value we expect from the rand function should be a float. A float value usually consumes more storage space than a short int. The program we wrote above for random integer generation will be almost the same here; the only difference is the explicit data type definition. Similar to the last program, we have used the time.h header file here as well to let it contribute to random float generation. If this header file is not included in the program, it will give the same value every time the program is executed. Below is the program for random float value generation.
Program
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main()
{
    float rand_num;
    srand(time(0));
    printf("The randomly generated float number is ");
    rand_num = (float) rand();
    printf("%f\n", rand_num);
    return 0;
}
Output:
In this program, we have used an explicit (float) cast to define that the value returned from the rand function should be treated as a float. As the rand_num variable is also declared with the float data type, it can hold the float number, which C prints with six digits after the decimal point by default. While printing the float value, we use the %f format specifier, which is required for printing floats.
Conclusion
To enhance the randomness of the number, one can leverage mathematical expressions. Using some logic, one can also define the range within which the numbers should be generated randomly. The feature to generate random numbers is provided by virtually all programming languages and is used in real applications based on the requirement. In order to ensure strong randomness, we have to make sure that the seed the rand function uses to generate the random value is itself new and unpredictable every time the program runs.
Recommended Articles
This is a guide to Random Number Generator in C. Here we discuss the function, generating integers, and generating floating-point numbers in C. You can also go through our other suggested articles to learn more.
Problem with Qt 4.7, WebKit and Symbian^3
I have installed the Qt 4.7 libs to my Nokia N8.
When I try to launch qml application with webkit, I get error:
@module "QtWebKit" is not installed
import QtWebKit 1.0 @
What is the problem?
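For context, the failing import typically comes from a QML file along these lines (a sketch for illustration, not the poster's actual code):

@import QtQuick 1.0
import QtWebKit 1.0 // this line fails if the qmlwebkitplugin is missing

WebView {
    width: 360; height: 640
    url: "http://www.nokia.com"
}@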
It looks like the QtWebKit.sis file does not contain the "qmlwebkitplugin.dll" which is needed for exposing WebKit bindings to QML :-( I'll mention this to the appropriate parties.
- epenttinen
I have filed a bug - QTBUG-14571.
Will it be fixed only in 4.7.2?
[quote author="Alexander Kuchumov" date="1287568584"]Will it be fixed only in 4.7.2?[/quote]
Looks like a sis packaging issue, so probably can be solved earlier.
I saw in gira: Fix Version/s: 4.7.2
[quote author="Alexander Kuchumov" date="1287569605"]I saw in gira: Fix Version/s: 4.7.2[/quote]
Why doesn't JIRA have a Qt 4.7.1 in "fix versions"? 0o
- epenttinen
We will be able to provide a resolution before actual Qt 4.7.2 (we plan to fix that in next development version hopefully in 2 weeks time).
Good news in QTBUG-14571 :-)
Any news on this? Are we getting packages before 4.7.2 that have the plugin?
For those who are not following the bug on JIRA:
[quote]
Eero Penttinen commented on QTBUG-14571:
We will create a new package end of next week so likely early w 45 - we want to have some additional fixes in as well.
[/quote]
Good news: QTBUG-14571 is done :-)
[quote]
Eero Penttinen added a comment - 17/Nov/10 5:29 AM
Has been fixed in next developer version - look for new package end of week 46
[/quote]
It's good news :) We are waiting for the development package of Qt 4.7.x for Symbian^3 devices.
I am digging through freedesktop.org.xml and I am seeing mimetype entries for things like "Quattro Pro spreadsheet"
Have to wikipedia that one.
Quattro Pro is a spreadsheet program developed by Borland and now sold by Corel, most often as part of Corel's WordPerfect Office.
And I am also seeing the same mimetype icon, x-office-generic, used for like 30 different entries.
and stuff like this for filetype patterns
<glob pattern="*.wb1"/>
<glob pattern="*.wb2"/>
<glob pattern="*.wb3"/>
I understand why they did things like the above, but honestly I would assume that a streamlined approach would be better than a kitchen-sink approach.
For instance I wouldn't have any non-native mimetypes.
Every mimetype for the most part would have its own icon
And instead of <glob pattern="*.wb1"/>... I would simply use <pattern value="wb1, wb2, wb3"/>
and do a string split on the result. And you don't need wildcards for extensions. Just about every programming language
or framework provides a function to get a string's suffix.
You might be thinking who cares if there are a bunch of obscure mimetypes in the freedesktop.org.xml file.
Well, it makes a big difference, and it really depends on whether you're using SAX or DOM to parse the file. One is slower at parsing but uses less memory,
and the other is faster at parsing but uses more memory.
Those little things add up. And when your freedesktop.org.xml is 1 MB or larger,
that's going to make a difference in how responsive any application that uses that file is - your desktop, file manager,
open/save file and directory dialogs, your media player... etc.
The situation only gets worse when you have multiple applications opening at once that all use that same freedesktop.org.xml file.
Example: Session support when you restart your system and your desktop loads and starts all the previous applications you were working with in the last session.
Last edited by zester (2011-09-13 20:58:28)
Well we have a new library in the workings called QMimeType
It's MIT Licensed
In order to use the QMimeType library you need to ....
Build and Install it.
Add #include "qmimetype.h" in your source code
and
LIBS += -L/usr/lib -lQMimeType to .pro
Example:
#include <QtXml>
#include <QDebug>
#include "qmimetype.h"

int main(int argc, char *argv[])
{
    QMimeType *mimeData = new QMimeType("shared-mime-info.xml");
    mimeData->setByName("example.pdf");
    qDebug() << mimeData->type();
    qDebug() << mimeData->summery();
    qDebug() << mimeData->description();
    qDebug() << mimeData->icon();
    qDebug() << mimeData->pattern();
    qDebug() << mimeData->application();
    return 0;
}
Output:
-----------------------------------------------
"application/pdf"
"PDF Document"
"Portable Document Format"
"application-pdf"
"pdf"
"okular"
The basic features of this library are working, so now I can replace my current code in QDesktop that deals with mimetypes,
freeing QDesktop of any more GPL code. And hopefully get QDesktop usable for everyday use.
Last edited by zester (2011-09-14 03:23:26)
Offline
qDebug() << mimeData->summery();
I'd say that would be
summary()
... or is that
isWeatherSummery()
?
Offline
zester wrote:
qDebug() << mimeData->summery();
I'd say that would be
summary()
... or is that
isWeatherSummery()
?
Crap lol I do that with get and git also ..... get add --all WTF why isn't it working!!!!!! F$%^ you linus, ohhhh whoops its git lol
Offline
stealthy wrote:
Hey zester I'm gonna work on mate-desktop-environment, sorry.
What an odd comment to make, I wasn't aware that you were even working on this project.
Anyways good luck to you.
Nah its not odd, I was gonna work on this, but then I saw MDE, nothing personal.
clipodder-git A small simple cron-friendly podcast downloader, with support for arbitrary user defined media types (pdf, html, etc...)
Offline
There is now a Composite Manager called QCompositeManager in the repo and
a QML based Desklets example, seen below.
Desklets are like bell bottoms, they never go out of style... on second thought, lol
Anyways I will be adding a few different types of desklets including C++ and Html/Css/JavaScript Based.
And eventually a Desklet Manager like SuperKaramba.
In order to use the example that you see below you will need to either have your own Composite Manager
or use QCompositeManager.
Last edited by zester (2011-09-15 11:16:39)
Offline
Did some work on the Desktop Menu, particularly in setting desktop
icon size.
Offline
A composite manager? Sweet! Man, I really need to install the whole Quantum desktop and start playing around with it. But this weekend is for brewing beer. Hopefully next weekend.
Still, thanks for the hard work and glad to see you've picked up a few other developers. My free time should be increasing a bit in about a month so hopefully I can start poking around and learning by creating some of the simpler apps you guys are leaving for us inexperienced folk. I'm thinking of starting very simply with a QT leafpad clone but we'll see what happens in the next month.
I'm married to an author. This is shameless self-promotion.
Offline
Showing and Hiding Desktop Icons is now complete. When you hide your
desktop icons they're not actually hidden, they're removed from the view and
the memory that they used is freed. And your settings are saved to the
desktop's config file.
Desklets also have the capability to be rotated or resized like with kde plasma.
Offline
Added a new feature to QDesktop: you can set your Desktop Icons or Menus to be Right or Left Handed, and
I started working on QDesktopSettings so you can change your wallpaper and settings visually.
I'll update the repo later tonight.
Offline
Added thumbnails to the QDesktopSettings combobox. Here is what the Desktop currently looks like, all from scratch in less than
30 days. There is still a lot more work to do before I move on to the next Desktop, but in my opinion not bad, not bad at all.
Before using QDesktopSettings make sure you have a Pictures directory in your Home directory that
contains the wallpapers you would like to use. That's where QDesktopSettings will look for your wallpapers.
And read QDesktop's readme.
Offline
Beware I am about to RANT!!!!!!
Ok, say you want to, I don't know, install a package using libalpm. Well, naturally you need root privileges to do this, and
AFAIK there is no C/C++ API for sudo, and setuid() and getuid() are disabled in just about every modern distro.
The answer everyone seems to be giving to this problem is to use sudo via QProcess. Any time you rely on the shell
as a programmable function, that's just a dirty hack!
The next option is PolicyKit (polkit). First off, it brings in GTK dependencies; second, there is nothing straightforward about
using it. It has a horrible API, the documents are entirely GObject based, and GObject is a huge pile of steaming dog SH#$.
Not to mention this is what polkit has to say about mounting removable filesystems..
Mounting removable filesystems, CDs, USB devices, and the like, is a classic example of a root-only task that some non-privileged users might be allowed to perform.
Ok, sure, in a server environment, but not for a normal user!!!! And even in the case of a server there wouldn't even be (shouldn't be) those types of devices attached in the first
place unless you're doing some form of maintenance.
For god's sakes, this is what Permissions and ACLs are for.
And even if a hacker was able to gain control of your user account, well, you're totally screwed anyways. There is nothing polkit is going to be able to help you with.
Last edited by zester (2011-09-18 06:02:58)
Offline
Nice to see the progress here. As soon as there is a usable version of your project I will try it out.
Is it possible to install KDE and Quantum side by side?
You know what, I haven't tested it on KDE with the Plasma desktop. There shouldn't be a problem unless the KDE Plasma desktop forces itself
to the top layer, and it might. But if it does you can go file a bug report with KDE, because it's not supposed to do that;
it's considered bad form and plain out rude. I say that because I think someone mentioned that to me.
But on GNOME and Xfce and every window manager including KWin that I have tested, it works just fine.
You don't have to install it, just build it with qmake and make and run it from its dir, but be sure to read the README
file because you have to.....
Create $HOME/.config/chipara/desktop.conf
export QDESKTOP_CONFIG=$HOME/.config/chipara/desktop.conf
And Add
[window]
wallpaper=/home/steven/Picture/default.jpg
iconTheme=oxygen
iconSize=2
showIcons=1
layoutDirection=1
desktopSettings=/home/steven/Desktop/Quantum/QDesktopSettings/QDesktopSettings
the wallpaper= is the wallpaper you want to use to start with
and desktopSettings= is the path to the QDesktopSettings executable.
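A sketch of those setup steps as shell commands (the [window] values are copied from the example above; adjust the paths for your own system):

```shell
# Create the config directory and file QDesktop expects.
mkdir -p "$HOME/.config/chipara"
cat > "$HOME/.config/chipara/desktop.conf" <<'EOF'
[window]
wallpaper=/home/steven/Picture/default.jpg
iconTheme=oxygen
iconSize=2
showIcons=1
layoutDirection=1
desktopSettings=/home/steven/Desktop/Quantum/QDesktopSettings/QDesktopSettings
EOF

# Point QDesktop at it for this session.
export QDESKTOP_CONFIG="$HOME/.config/chipara/desktop.conf"
```

You would want the export in your shell profile as well so it survives a new login.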
It's almost completely usable. There are a few things that need to be done, but all .desktop files are executable
and almost all of the desktop menu is done. I need to work on things like opening dirs and files with their
respective programs, and I need to work on the desktop icon menu, but QDesktop is getting real close.
If you give it a try and have any questions feel free to ask
Last edited by zester (2011-09-18 08:24:37)
Offline
Hey. This looks pretty neat.
You should do a 10-minute demo video of what you have so far and put it up on YouTube. That way new people will be able to see why they should consider using it.
Offline
And perhaps you could provide a PKGBUILD? That would make more people try it, I guess. Are there any other plans other than providing the desktop?
If you can't sit by a cozy fire with your code in hand enjoying its simplicity and clarity, it needs more work. --Carlos Torres
Offline
A second vote for the PKGBUILD. I'm tempted to try and grab the source and compile it myself, but this would make life easier when it comes to updates.
"You can just deny pain… until infection leads to amputation. Then it’s really gone."
--Craig Benzine
Offline
A second vote for the PKGBUILD. I'm tempted to try and grab the source and compile it myself, but this would make life easier when it comes to updates.
I will work on it. The reason I haven't provided PKGBUILD files is that it's not really ready for that. Some applications are seeing up to 10 updates a day. And I am still working out how I am going to do different things. And some applications need to be redesigned for performance reasons.
I was providing screenshots so you could see the progress.
Offline
It's looking like a great project so far though. Good job!
"You can just deny pain… until infection leads to amputation. Then it’s really gone."
--Craig Benzine
Offline
It has been pretty quiet around here.. How's the progress zester?
If you can't sit by a cozy fire with your code in hand enjoying its simplicity and clarity, it needs more work. --Carlos Torres
Offline
It has been pretty quiet around here.. How's the progress zester?
I am still here, just taking a short break to keep from getting burned out.
I will post some new stuff in a day or so.
Offline
I certainly don't want you getting burned out on this. This is one of the projects that I'm really excited about.
"You can just deny pain… until infection leads to amputation. Then it’s really gone."
--Craig Benzine
Offline
This is one of the projects that I'm really excited about.
Agreed - glad to know you're still around and working on this!
I'm married to an author. This is shameless self-promotion.
Offline
I will post some new stuff in a day or so.
zester, any update?
Offline | https://bbs.archlinux.org/viewtopic.php?pid=1004676 | CC-MAIN-2016-26 | refinedweb | 2,035 | 73.68 |
The Button component provides
one of the most frequently used objects in graphical applications. When
the user selects a button, it signals the program that something needs
to be done by sending an action event. The program responds in its handleEvent()
method (for Java 1.0) or its actionPerformed()
method (defined by Java 1.1's ActionListener
interface). Next to Label,
which does nothing, Button
is the simplest component to understand. Because it is so simple, we will
use a lot of buttons in our examples for the next few chapters.
This constructor creates an empty Button.
You can set the label later with setLabel().
This constructor creates a Button
whose initial text is label.
The getLabel() method retrieves
the current text of the label on the Button
and returns it as a String.
The setLabel() method changes
the text of the label on the Button
to label. If the new text is
a different size from the old, it is necessary to revalidate the screen
to ensure that the button size is correct.
With Java 1.1, every button can have two names. One is what the user sees
(the button's label); the other is what the programmer sees and
is called the button's action command.
Distinguishing between the label and the action command is a major help to
internationalization. The label can be localized for the user's environment.
However, this means that labels can vary at run-time and are therefore
useless for comparisons within the program. For example, you can't
test whether the user pushed the Yes button if that button
might read Oui or Ja, depending on some run-time
environment setting. To give the programmer something reliable for comparisons,
Java 1.1 introduces the action command. The action command for our button
might be Yes, regardless of the button's actual label.
By default, the action command is equivalent to the button's label.
Java 1.0 code, which only relies on the label, will continue to work. Furthermore,
you can continue to write in the Java 1.0 style as long as you're
sure that your program will never have to account for other languages.
These days, that's a bad bet. Even if you aren't implementing
multiple locales now, get in the habit of testing a button's action
command rather than its label; you will have less work to do when internationalization
does become an issue.
The getActionCommand() method
returns the button's current action command. If no action command
was explicitly set, this method returns the label.
The setActionCommand() method
changes the button's action command to command.
The addNotify() method creates
the Button peer. If
you override this method, first call super.addNotify(),
then add your customizations. Then you can do everything you need with
the information about the newly created peer.
The paramString() method overrides
the component's paramString()
method. It is a protected method that calls the overridden paramString()
to build a String from the
different parameters of the Component.
When the method paramString() is called for a Button, the
button's label is added. Thus, for the Button
created by the constructor new Button ("ZapfDingbats"),
the results displayed from a call to toString()
could be:
java.awt.Button[77,5,91x21,label=ZapfDingbats]
With the 1.0 event model, Button
components generate an ACTION_EVENT
when the user selects the button.
With the version 1.1 event model, you register an ActionListener
with the method addActionListener(). When the user selects the Button,
the method ActionListener.actionPerformed()
is called through the protected Button.processActionEvent()
method. Key, mouse, and focus listeners are registered through the Component
methods of addKeyListener(),
addMouseListener(), or addMouseMotionListener(),
and addFocusListener(), respectively.
Action
The action() method for a Button
is called when the user presses and releases the button. e
is the Event instance for the
specific event, while o is
the button's label. The default implementation of action()
does nothing and returns false,
passing the event to the button's container for processing. For a
button to do something useful, you should override either this method or the container's action()
method. Example 5.1 is a simple applet called ButtonTest
that demonstrates the first approach; it creates a Button
subclass called TheButton,
which overrides action(). This
simple subclass doesn't do much; it just labels the button and prints
a message when the button is pressed. Figure 5.3
shows what ButtonTest looks
like.
import java.awt.*;
import java.applet.*;

class TheButton extends Button {
    TheButton (String s) {
        super (s);
    }

    public boolean action (Event e, Object o) {
        if ("One".equals(o)) {
            System.out.println ("Do something for One");
        } else if ("Two".equals(o)) {
            System.out.println ("Ignore Two");
        } else if ("Three".equals(o)) {
            System.out.println ("Reverse Three");
        } else if ("Four".equals(o)) {
            System.out.println ("Four is the one");
        } else {
            return false;
        }
        return true;
    }
}

public class ButtonTest extends Applet {
    public void init () {
        add (new TheButton ("One"));
        add (new TheButton ("Two"));
        add (new TheButton ("Three"));
        add (new TheButton ("Four"));
    }
}
Buttons are able to capture keyboard-related events once the button has
the input focus. In order to give a Button
the input focus without triggering the action event, call requestFocus().
The button also gets the focus if the user selects it and drags the mouse
off of it without releasing the mouse.
The keyDown() method is called
whenever the user presses a key while the Button
has the input focus. e is the
Event instance for the specific
event, while key
is the integer representation of the character pressed. The identifier
for the event (e.id) could
be either Event.KEY_PRESS for
a regular key or Event.KEY_ACTION
for an action-oriented key (i.e., an arrow or a function key). There is
no visible indication that the user has pressed a key over the button.
The keyUp() method is called
whenever the user releases a key while the Button
has the input focus. e is the
Event instance for the specific
event, while key
is the integer representation of the character pressed. The identifier
for the event (e.id) could
be either Event.KEY_RELEASE
for a regular key or Event.KEY_ACTION_RELEASE
for an action-oriented key (i.e., an arrow or a function key). keyUp()
may be used to determine how long key
has been pressed.
With the 1.1 event model, you register listeners, which are told when the
event happens.
The addActionListener() method
registers listener as an object
interested in receiving notifications when an ActionEvent
passes through the EventQueue
with this Button as its target.
The listener.actionPerformed()
method is called when these events occur. Multiple listeners can be registered.
The following code demonstrates how to use an ActionListener
to handle the events that occur when the user selects a button. This applet
has the same display as the previous one, shown in Figure 5.3.
// Java 1.1 only
import java.awt.*;
import java.applet.*;
import java.awt.event.*;

public class ButtonTest11 extends Applet implements ActionListener {
    Button b;

    public void init () {
        add (b = new Button ("One"));
        b.addActionListener (this);
        add (b = new Button ("Two"));
        b.addActionListener (this);
        add (b = new Button ("Three"));
        b.addActionListener (this);
        add (b = new Button ("Four"));
        b.addActionListener (this);
    }

    public void actionPerformed (ActionEvent e) {
        String s = e.getActionCommand();
        if ("One".equals(s)) {
            System.out.println ("Do something for One");
        } else if ("Two".equals(s)) {
            System.out.println ("Ignore Two");
        } else if ("Three".equals(s)) {
            System.out.println ("Reverse Three");
        } else if ("Four".equals(s)) {
            System.out.println ("Four is the one");
        }
    }
}
The removeActionListener()
method removes listener as
an interested listener. If listener
is not registered, nothing happens.
The processEvent() method receives
AWTEvent with this Button
as its target. processEvent()
then passes them along to any listeners for processing. When you subclass
Button, overriding processEvent() allows you to process all events yourself,
before sending them to any listeners. In a way, overriding processEvent()
is like overriding handleEvent() using the 1.0 event model.
The processActionEvent() method
receives ActionEvent with
this Button as its target.
processActionEvent() then passes
them along to any listeners for processing. When you subclass Button,
overriding processActionEvent()
allows you to process all action events yourself, before sending them to
any listeners. In a way, overriding processActionEvent()
is like overriding action() using
the 1.0 event model.
If you override the processActionEvent() method,
you must remember to call super.processActionEvent(e)
last to ensure that regular event processing can occur. If you want to
process your own events, it's a good idea to call enableEvents()
(inherited from Component)
to ensure that events are delivered even in the absence of registered listeners. | https://docstore.mik.ua/orelly/java/awt/ch05_03.htm | CC-MAIN-2019-18 | refinedweb | 1,411 | 58.99 |
Ever wanted to run your Selenium script while also saving the logs of every execution of your test cases? There is a very convenient way of doing this task in Selenium using Python. This article is all about that.
Main code
First of all, you need to create a Python package; let's say its name is log. Inside of the package, create a Python file (let's say it's logCapture.py). Then in this file, import a Python package called logging.
import logging (it should be included with Python when you install it).
Then you can create a class and define a method inside of it. It's better if you declare the method as a static method so that you can access it from anywhere in your project without creating any objects of its class. Then you can add the piece of code given below:
import logging

class LogGen:
    @staticmethod
    def loggen():
        logger = logging.getLogger("Test Login")
        fileHandler = logging.FileHandler('.\\YourDesiredFolderName')
        formatter = logging.Formatter("%(asctime)s :%(levelname)s : %(name)s :%(message)s")
        fileHandler.setFormatter(formatter)
        logger.addHandler(fileHandler)
        logger.setLevel(logging.INFO)
        return logger
You can pass the name of your test case as a parameter inside the getLogger() method. The FileHandler class will handle your desired location for the logs. The Formatter will make sure your logs are being stored following a proper format. The method returns the configured logger object at the end.
Driver code
Now it's time to use that loggen() function. Create another python file to add the driver code. Inside of this file, import the loggen function by simply using this code
from log.logCapture import LogGen. The code is basically for accessing the LogGen class which we created earlier. Then you can use the logging by just calling the logger variable.
Let's say we want to test a login page. So you can add logs by using
logger.info("Your desired message") right before the execution of the test case, or wherever you want. After executing the test cases, you will see a log file has been created in your desired folder. The log should look something like this
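To make the whole flow concrete, here is a self-contained sketch that inlines the LogGen class together with driver-style usage. The file name automation.log and the log messages are placeholders of mine, not the article's:

```python
import logging

# Same idea as the LogGen class above, inlined so the sketch runs on
# its own; "automation.log" is an assumed file name.
class LogGen:
    @staticmethod
    def loggen():
        logger = logging.getLogger("Test Login")
        file_handler = logging.FileHandler("automation.log")
        file_handler.setFormatter(logging.Formatter(
            "%(asctime)s :%(levelname)s : %(name)s :%(message)s"))
        logger.addHandler(file_handler)
        logger.setLevel(logging.INFO)
        return logger

# Driver-style usage: log before and after the test-case body.
logger = LogGen.loggen()
logger.info("Test_001_Login started")
# ... the Selenium steps would go here ...
logger.info("Test_001_Login passed")

# Show what ended up in the log file.
with open("automation.log") as f:
    print(f.read())
```

Each run appends timestamped INFO lines to the file, which is exactly the per-execution record the article is after.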
This is the output of my logs after executing the test cases, and this is what my code looks like behind it:
You should be able to use logging if you follow this procedure :)
Thanks for reading. Pardon me for any mistakes I may have made, as I am new to automation testing and still exploring every day. I would be glad if this helps anyone. :)
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/ekanto/adding-logs-into-your-selenium-script-using-python-ig3 | CC-MAIN-2022-33 | refinedweb | 430 | 66.74 |
Red Hat Bugzilla – Bug 205822
kernel-devel: bttvp.h requires non-existent header (btcx-risc.h)
Last modified: 2015-01-04 17:28:38 EST
Description of problem:
The drivers/media/video/bt8xx/bttvp.h header contains the statement:
#include "btcx-risc.h"
btcx-risc.h does not exist in the same directory, but in the parent, so kernel
modules that require bttvp.h fail to compile.
Version-Release number of selected component (if applicable):
2.6.17-1.2174_FC5, and generally in 2.6.17 kernels where the bt8xx files were
re-organized.
How reproducible:
Example: I could not build the LIRC module lirc_gpio, even after correcting the
paths to bttv.h and bttvp.h in its source. The module could be compiled after I
copied btcx-risc.h into the drivers/media/video/bt8xx/ directory.
from that header file..
bttv's *private* header file -- nobody other than bttv itself
should ever include this file.
lirc shouldn't be using this. If this was meant to be used by out of tree
modules, it would be in include/
So, lirc_gpio source is the exclusive cause of the problem. I thought otherwise
because the lirc module could be compiled after I copied the "btcx-risc.h"
header into the new bt8xx directory in the kernel tree and I thought that the
problem was in the bttvp.h header.
Thanks for clearing this up, because I was only trying to guess, taking into
account my very little knowledge of C.
Also, I made a mistake in the bug title. This problem actually *is not* related
to kernel-devel. The bttv related headers are completely missing from this
package. I used the kernel SRPM. | https://bugzilla.redhat.com/show_bug.cgi?id=205822 | CC-MAIN-2016-50 | refinedweb | 294 | 69.99 |
The real problem with using Fragments is making them work together with Activities. This comes down to working out how to allow the Activity to communicate with the Fragment and vice versa. It is a tricky problem and the solution is often just accepted without a deep understanding of the how and why. It is time to take a close look at the Fragment template.
If you are interested in creating custom templates also see:
Custom Projects In Android Studio
Suppose you want to add another Fragment to a project.
The steps are nearly always the same - create an XML layout and add a class that extends Fragment. In fact this is so regular you might expect some help from Android Studio to do this.
You would be correct and there is a Fragment template that makes it easy
Start a new Blank Activity project, or any project type for that matter. All we are going to do is add a Fragment to the project using the Fragment template.
To do this first navigate to the java folder and the MainActivity file, right click on it and select New, Android (Other).
From the choices of component you can insert into the project select Fragment (Blank):
At the next screen you can customize the Fragment. In many cases you will want to leave uncheck some of the options but in the spirit of finding out what they are all about let the default selection stand.
When you click the Finish button a complete Fragment is generated for you, including the XML layout (unless you deselected Create layout XML) and the Java file.
We need to look at each of these in turn.
The XML layout is straightforward. You get a FrameLayout with a TextView saying Hello blank Fragment.
The Java code is much more complex than you might have expected and it includes some features of using Fragments that we haven't so far looked at. Although these might be considered "best practices" you don't have to use them if you would prefer simplicity.
As these features are general, you can make use of them in your own Fragments even if they are not generated by the template it makes sense to explain them in isolation.
In other words, let's use the generated Fragment to learn some more advanced ways of working with Fragments.
Before we start on examining the code generated by Android Studio it is worth spending a few moments considering what the real difficulty the Activity has in working with the Fragment.
In most cases the Activity can create an instance of an object and customize it by providing initializing data in the constructor or by setting properties soon after creating the object. It could also wire up event handlers directly to the object in the usual way so that it could signal that something needed attention.
None of this works with Fragments for the simple reason that a Fragment can be destroyed by the system at any time and recreated without initialization and without event handlers etc.
For this reason the Fragment has to save and restore its state in the same way that an Activity has to.
However there are additional problems with a Fragment. In general other objects don't link into an Activity in any deep way, because an Activity is supposed to do something in its own right. A Fragment on the other hand is supposed to work closely with an Activity. It is the Activity's UI component and as such there have to be links from the Activity into the Fragment and, more importantly, the Fragment has to be able to connect with the Activity.
This is difficult because, say, the Activity hooks up an event handler to some Fragment-generated event or even to a widget in the Fragment-generated UI; then when the Fragment is recreated the event handler will also have to be reconnected. But in general the Activity doesn't know when a Fragment has been recreated and so can't remake the connection. There are other difficulties which will become apparent.
So we have three problems to solve -
- the Activity has to be able to initialize and customize the Fragment
- the Activity has to be able to communicate with the Fragment
- the Fragment has to be able to signal the Activity, event style
All of these have to work after the Fragment has been destroyed and recreated.
The hardest part is implementing something like an event mechanism. To do it properly requires the use of some moderately sophisticated Java and object-oriented ideas.
A standard way to customize an object is to pass arguments into its constructor. You might think that you could do the same with a Fragment by providing a constructor with parameters and you can do this.
However a Fragment also has to have a parameterless constructor which is used by the system to recreate the Fragment. This is provided by the template:
public BlankFragment() {
    // Required empty public constructor
}
The parameterless constructor is needed because the system cannot know how to call a constructor that accepts parameters.
So if you want to pass a parameter to the newly created Fragment you might use:
public BlankFragment(int Param1) {
    // code to deal with parameters
}
If you do define a constructor with parameters then you need to remember to make sure that the parameterless constructor is explicitly included - as it is in the code that the template creates.
That is, you only get a default constructor if you haven't explicitly defined one.
Clearly if we are going to use a constructor with parameters to set things up we are going to have to find a way to persist the parameter values and restore them using something other than the constructor.
We could now continue to the details of persisting the constructor-supplied parameters, but the template does something slightly different.
Instead of using a constructor to initialise the new instance the preferred method is to use a static factory method. There doesn't seem to be a clear cut reason to prefer this to an overloaded constructor, despite it being described as "best practice". As this is what the template generates, let's take a look at how this all works - but to be clear you could put the same code into an overloaded constructor as demonstrated later.
A factory method is a common idiom in most object-oriented languages and all that happens is that it creates an instance of the object that it is a factory for. Of course for this to be possible it has to be a class or static method. If you are a bit doubtful about static methods read the quick introduction below - otherwise skip forward.
A static method is created within the class definition by use of the modifier static. It is different from normal or instance methods in that it is called using the class name. For example to create a static method you would write:
public class myClass {
    public static void myStaticMethod() {
        // whatever the method does
    }
}
and you could call the static method using
myClass.myStaticMethod();
Notice that you call the method treating the class as if it was an object. | http://www.i-programmer.info/programming/android/6996-fragment-and-activity-working-together.html | CC-MAIN-2015-48 | refinedweb | 1,170 | 57.4 |
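As a quick plain-Java illustration of the factory-method idiom itself (nothing Android-specific here; the Message class and method names are invented for the example):

```java
// A factory method is just a static method that creates and initializes
// an instance. Sketch only - the class is made up for illustration.
class Message {
    private String text;

    private Message() { }                 // callers can't construct directly

    static Message newInstance(String text) {
        Message m = new Message();        // the factory creates the instance...
        m.text = text;                    // ...then initializes it
        return m;
    }

    String getText() { return text; }
}

public class FactoryDemo {
    public static void main(String[] args) {
        Message m = Message.newInstance("hello");
        System.out.println(m.getText()); // prints: hello
    }
}
```

The template's Fragment factory follows the same shape: a private-ish construction path plus a static newInstance() that hands back a fully set-up object.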
Forum Posts
0 Answers Where is the bug in the code below? C# programming lang
Debug the code below
- ans October 29, 2018
class ItemsChecker
{
    private List<string> _badItems;

    public ItemsChecker(List<String> badItems)
    {
        _badItems = badItems;
    }

    public List<String> GetGoodItems(List<String> items)
    {
        List<string> goodItems = new List<string>();
        foreach (string item in items)
        {
            if (!_badItems.contains(item))
            {
                goodItems.add(item);
            }
        }
        return goodItems;
    }
}
0 Answers hourglass
Given a 6*6 2D Array, :- bhawana121998 October 24, 2018
1 1 1 0 0 0
0 1 0 0 0 0
1 1 1 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
We define an hourglass in A to be a subset of values with indices falling in this pattern in arr's graphical representation:
a b c
  d
e f g
There are 16 hourglasses in arr, and an hourglass sum is the sum of an hourglass' values. Calculate the hourglass sum for every hourglass in arr, then print the maximum hourglass sum.
For example, given the 2D array:
-9 -9 -9 1 1 1
0 -9 0 4 3 2
-9 -9 -9 1 2 3
0 0 8 6 6 0
0 0 0 -2 0 0
0 0 1 2 4 0
We calculate the following hourglass values:
-63, -34, -9, 12,
-10, 0, 28, 23,
-27, -11, -2, 10,
9, 17, 25, 18
Our highest hourglass value is 28 from the hourglass:
0 4 3
  1
8 6 6
Note: If you have already solved the Java domain's Java 2D Array challenge, you may wish to skip this challenge.
Function Description
Complete the function hourglassSum in the editor below. It should return an integer, the maximum hourglass sum in the array.
hourglassSum has the following parameter(s):
arr: an array of integers
Input Format
Each of the 6 lines of inputs arr[i] contains 6 space-separated integers arr[i][j].
Constraints
Output Format
Print the largest (maximum) hourglass sum found in .
Sample Input
1 1 1 0 0 0
0 1 0 0 0 0
1 1 1 0 0 0
0 0 2 4 4 0
0 0 0 2 0 0
0 0 1 2 4 0
Sample Output
19
Explanation
The array contains the following hourglasses (figure omitted).
The hourglass with the maximum sum (19) is:
2 4 4
  2
1 2 4
int hourglassSum(vector<vector<int> > arr) {
    int i, j, k, sum, max, count = 0;
    for (i = 0; i < 4; i++)
    {
        for (j = 0; j < 4; j++)
        {
            sum = 0;
            for (k = j; k < j + 3; k++)
            {
                sum += arr[i][k];
                sum += arr[i+2][k];
            }
            sum += arr[i+1][j+1];
            if (count == 0)
            {
                max = sum;
                count++;
            }
            else
            {
                if (max < sum)
                    max = sum;
            }
        }
    }
    return max;
}
please tell me what's wrong with my code?
0 Answers minimize digitsum
- ceaser October 19, 2018
You are given positive integers N and D. You may perform operations of the following two types:
add D to N, i.e. change N to N+D
change N to digitsum(N)
Here, digitsum(x) is the sum of decimal digits of x. For example, digitsum(123)=1+2+3=6, digitsum(100)=1+0+0=1, digitsum(365)=3+6+5=14.
You may perform any number of operations (including zero) in any order. Please find the minimum obtainable value of N and the minimum number of operations required to obtain this value.
this is my code I am getting the wrong answer in some test cases please let me know where I am wrong.
#include<iostream>
#include<queue>
using namespace std;
void check(long long int a, long long int &b, long long int &c, long long int d)
{
if(a<b)
{
b = a;
c = d;
}
}
int digitSum(long long int a)
{
int num = 0;
while(a>0)
{
num += a%10;
a = a/10;
}
return num;
}
void func(int N, int D, long long int &minElement, long long int &minTrial)
{
queue<long long int> q;
q.push(N);
if(N==1)
{
minElement = 1;
minTrial = 0;
}
long long int countNodes = q.size();
long long int level = 1;
int cnt = 1;
while(!q.empty())
{
countNodes = q.size();
while(countNodes>0)
{
long long int temp = q.front();
q.pop();
long long int first = temp+D;
check(first,minElement,minTrial,level);
if(minElement == 1)
{
return;
}
q.push(first);
long long second = digitSum(temp);
check(second,minElement,minTrial,level);
if(minElement == 1)
{
return;
}
q.push(second);
countNodes--;
cnt++;
if(cnt>1000000)
{
break;
}
}
if(cnt>1000000)
{
break;
}
level++;
}
}
int main()
{
long long int t = 1;
cin>>t;
while(t--)
{
long long int N, D;
cin>>N>>D;
long long int minElement = 1e18, minTrial = 1e18;
func(N,D,minElement, minTrial);
cout<<minElement<<" "<<minTrial<<endl;
}
return 0;
}
0 Answers Register dump
Using tools like memtool one can read memory addresses and dump them on screen. But mapping those bits to their meaning as per the SOC's register definition is a tedious and error-prone task.
- psk1801, May 23, 2018
So, I am working on developing a tool that can take the memtool output and then map it to the SOC register bits.
I have SOC register header file and memtool as inputs. The challenging part is parsing through the SOC register header file some way and may be create an intermediate output which helps in mapping the data.
Please suggest what would be best approach to follow here, when it comes to parsing through the SOC register header file.
0 Answers Will HR check my exact diploma title and fake internships in my resume?
I started a bachelor of computer science, but it was too challenging; my GPA went very low and I was failing to meet the minimum GPA requirements. So I switched to Math and graduated with a "3 y general degree in Math". Though I started as a Computer Science student and completed some CS courses, my last year was pure math courses.
- erjcan, April 08, 2018
I have been working as front-end web dev for 1.5y.
I want to work in a tech giant company(FB, google etc)
Also, I did not do any internships during my bachelor's. Can I add in my resume that I had some 1-2 internships in startups?
Will they run some check?
1)check exactly diploma title
2)check my internships?
0 Answers GROSSLY UNPROFESSIONAL company GOOGLE
It has been a long overdue review – but better to expose such things late than never.
- Interviewer, February 07, 2018
It would be shocking for any reader to know the reality of Google’s (Hyderabad India office) fraud interview process, and the gross unprofessionalism prevailing there.
It gives a telltale picture of what sort of people Google has started recruiting and promoting to senior roles nowadays.
A bit about myself – 11+ years in the software industry, 7 years as a technical interviewer, 8 years in software QA automation, 3 years as a Technical Lead/Architect with full hands on coding work.
Here are the details – read till the end with the dates to get a clear idea of Google’s unprofessionalism.
7th July 2017 – I got a message on LinkedIn from a person S****bh G***a, who introduced himself as a recruiter working at Google, and claimed that they are looking for an experienced Test Engineer to own and manage the Testing process of an upcoming product at Hyderabad location.
I asked him to email the details to my email id from his google.com email id. He obliged, and then we had a long telephonic conversation regarding the details of the role.
Right at the onset – I made it very clear that I would consider any further discussion if and only if you have a role in Hyderabad – as relocation is simply out of question for me.
He immediately made it clear that we are discussing about Hyderabad location only.
He further went on to disclose that the internal codename of this product is “Next Billion Users”. (Later, I came to know that this product was the TEZ payments app).
He scheduled a Google Hangouts call, with a clear instruction that it will be of exactly 45 minutes.
18th July 2017 – The call was scheduled for 2230-2315 (late night) my local time.
Here comes the first glimpse of unprofessionalism.
The interviewer dude turns up for the interview 10 minutes late.
He initially asked me a few things from my resume, and then asked me to give a test-plan for Google Maps.
I did goof up a bit but managed to list down a reasonable set of scenarios.
Now, with just 12 minutes remaining (i.e. at 2303), he gave me a coding problem.
[No, I was not asked to sign any NDA or any other document – hence I am disclosing the problem statements].
Given two strings - S1 and S2.
Arrange the characters of S1 in same alphabetical order as the characters of S2.
If a character of S1 is not present in S2 - such characters should come at the end of the result string, but make sure to retain the order of such characters
Case sensitivity is irrelevant
e.g. S1 = "Google", S2 = "dog"
Output = "ooggle"
e.g. S1 = "abcdedadf", S2 = "cae"
Output = "caaebdddf"
By the time he explained the problem and I made sense of the problem, there were barely 7-8 minutes remaining.
I somehow completed the code, detected a bug in my code – and was about to fix it – then the clock struck 2315, and the Mr Latecomer simply got up from his chair and ran away saying that he has a meeting – thus offering me just 35 minutes in total because His Highness came 10 minutes late.
When the interviewer turns up 10 minutes late, isn’t he/she supposed to give 10 extra minutes for the interview? This is a basic minimal courtesy – which I know because I myself have been an interviewer since past 7+ years.
Next day – the recruiter S****bh G***a called me and gave me a long lecture saying that “your code was not efficient, Google pays lots of importance to efficiency and scalability etc etc”.
I bluntly told him that the interviewer was 10 minutes late, and I got barely 8 minutes to code – is it fair? Can the interviewer himself solve such problems in 8 minutes without any bugs, that too on a Google Doc?
S****bh G***a fell silent for a while, and then offered me 2 more back-to-back Google Hangouts interviews with a different set of interviewers. I asked for some time for more preparation to which he agreed.
31st July 2017 – Again, the call was scheduled for 2230-2315 (late night) my local time.
Yet another unprofessionalism awaited me.
I kept waiting till 2310, and nobody turned up for the call. I dropped an email to the recruiters and went to sleep, and received a sheepish Sorry as the answer.
1st August 2017 – Again, the call was scheduled for 2230-2315 (late night) my local time.
For a change, this lady (based in London) turned up bang on time.
She asked me lot of questions about my past testing experiences.
She was in-particular interested when I talked about how I had applied Pairwise Testing technique in a past testing assignment in a previous company.
She gave me a coding question – with 15 minutes of time remaining.
Given any uppercase string. Report the starting index at which any valid permutation of ABCDEF starts. If not found, then report -1.
Possible permutations of ABCDEF are ABCDFE, BCDAFE, FEDCAB etc (a total of 6! = 720 permutations)
e.g. S = "ACDBFE", Output = "0"
S = "ACXBFE", Output = "-1"
S = "ACXBFEDABCFE", Output = "4"
I managed to code it reasonably – she seemed to be satisfied with my solution, and asked me all testcases for the problem. The interview apparently went well.
2nd August 2017 – Again, the call was scheduled for 2230-2315 (late night) my local time.
And, yet another sample of unprofessionalism from Google.
Again a delay – at 2245 – I received a call from an International unknown number on my mobile phone – it was the Interviewer – he said that he has not been provided the Google Hangouts link!!!!
Fortunately he had the link of the shared Google Doc, through which I sent him the link to the Google Hangouts call, and then the interview started.
Fortunately – this gentleman (based in US) gave me full 45 minutes for my interview.
His coding problems were very easy.
Convert a natural number to base-3 equivalent
He was happy when I asked clarifying question as to how big the number can be – he said stick to integers, and I managed to code it within 5 minutes, as it was an easy problem.
He asked for testcases – seemed happy with my test data – and moved to next question.
Find the sum of all nodes stored in a binary tree.
I gave him a choice – recursive code or iterative?
He himself asked me to ignore integer overflows etc.
He asked for recursive solution – which I coded within minutes, and again – he asked for testcases. He seemed satisfied with my solution, and then he repeated an earlier problem.
The next question was – Design a test plan for Google Maps – yes the same problem – which was asked in my first round of telephonic interview.
This time – I was well prepared for such problems for all commonly used Google products – hence I answered this time in a systematic way – classifying the scenarios, prioritizing them, discussed the test automation strategy etc.
He seemed to be happy and the interview ended.
10+ days passed by – during this period – I followed up twice on phone, only to hear from S****bh G***a that – “we are still waiting for feedback from one of the 2 interviewers, and the other one has given positive feedback” (10+ days to write an interview feedback – this is what we call as Scalability and Efficiency at Google – right?)
Finally, one fine day – the recruiter S****bh G***a informed me that they are inviting me for Onsite Interview at Google’s Hyderabad office.
He informed that it would be a 2-step process – first day will be 3 rounds of technical interviews, and if I do well there – then they would invite me once more for an interaction with someone from the Developer Team.
The first onsite was scheduled for 21st August 2017.
11:00 AM – Round 1 – The interviewer was a Software Test Engineer, and focused only on Testing. He asked a generic question – when do you declare testing as complete.
He asked me to take the example of Gmail, and then we had a discussion about various scenarios and prioritization for 20+ minutes, using whiteboard.
He asked me to list out all testcases for a function
performOperation(input1, operator, input2), where operator is any binary mathematical operator, and input1 and input 2 are any numeric values.
We had a detailed discussion on whiteboard for 30+ minutes for this problem as I put up a lot of clarifying questions before listing out my testcases – and he seemed very happy with my testcases – and acknowledged that he has seen very few people who can think upto this level of testcases.
The interview ended, and after a breather for 10 minutes, I was ushered into a conference room – for a Google Hangouts video call with a guy located in Bangalore.
12:00 PM – Round 2 – This person was the previous interviewer’s Boss. He was yawning and seemed sleepy, tired and disinterested outright – definitely not a professional behavior by any standards – that too for a person at a technical manager level.
He too started with the same problem statement – when do you declare testing as complete.
I had to tell him that I have answered this question just now in previous interview.
Clear proof of lack of coordination between interviewers – I seriously wonder haven’t these morons been trained that feedback from one interview is supposed to be circulated to remaining interviewers!!!
He gave me a coding problem then.
Input is an integer array A
Return an array B such that B[i] = product of all elements of A except A[i]
I coded it, started to describe testcases – then immediately I discovered a bug in my code.
I fixed it – all this while the interviewer was yawning – then I did a dry run on Google Doc, and explained him that my code is correct.
Trust me – this Mr Sleepy had a hard time understanding that my code had a time complexity of O(N).
After this – the recruiter took me for lunch at their cafeteria at around 1300.
14:00 – Round 3 – This person was interested purely in my design and coding – and gave a good problem.
Data structure for Task Dependency
A task can start only after all its pre-requisites are done
He asked me to design the data structure properly, and then code the methods addNewTask and getExecutionSequence for this.
It was a reasonably lengthy one – the discussion and analysis took some 25+ minutes, and then I was asked to code it – which I did.
I figured out a bug in my code again – to which the interviewer said that it's ok – it's hard to code on Google Doc – and he seemed satisfied with my code.
Again, 3+ weeks of silence – and on following up with S****bh G***a, I was told each time that there are more candidates – we will let you know.
I kept waiting patiently.
Finally, on 5 September 2017 – I got a call from him saying that they want me to come to their office once more for one more round of Google Hangouts call with one of their Software Development Engineers.
7th September 2017 – I reached their office again, and had a call with a guy based in Singapore, who was probably working on the TEZ app.
It was not exactly an interview – more of a discussion around people management, resource prioritization, test planning, release management etc.
NOW – DEAFENINGLY DEAD SILENCE FOR 2 MONTHS – yes, 2 MONTHS.
Whenever I would follow up with S****bh G***a, his answer would be – 1 week, 2 week etc – with no definite answer.
7th November 2017 – Finally, my patience gave up, and I sent a nasty message to S****bh G***a over Google Chat, blasting out Google for this unprofessional behavior of keeping dead silence and not even being able to say Yes/No after 7 rounds of interviews.
I explicitly mentioned – is this the Scalability and Efficiency which is followed at Google ?
He called me – said Sorry etc, and then I clearly told him – if you have selected someone else – then be a man and speak up, and finish off the drama – to which he replied that – You are the only candidate for this position as of now, the team was BUSY with TEZ app release – hence the delay.
By the way, during these 2 months – the TEZ app was released. I started using it – and emailed with full details and screenshots to S****bh G***a about BUG in their app – that it was being flagged as a VIRUS by 360 Security Antivirus running on my Vivo V5 (Android) phone.
Needless to say – NO ACKNOWLEDGEMENT.
Finally, MORE DRAMA.
S****bh G***a scheduled a Lunch Meeting for 17th November 2017 at Google’s Hyderabad office, and also asked me to provide a list of REFERENCES.
I immediately provided him the list of references, and later came to know on contacting the persons referred that NONE of them received any questionnaire from Google asking about me.
The meeting was with their Engineering Director J** K**a, and one more person.
Over lunch, J** K**a asked me about my past experiences, most challenging testing assignments etc. I did make a point to mention about that bug as well – he seemed to be shocked that their team had missed such a basic test scenario.
Then, again a 1 hour chat with another person – similar pattern of talk, and there as well I mentioned about this bug – and he too thanked me for bringing it out, and said that he would get this fixed.
Both the lunch interviewers explicitly said that I am a great fit for multiple roles in TEZ team, as well as many other teams in Hyderabad – and not just as a Software Test Engineer, but also as a Developer.
NOW, the REAL MELODRAMA.
Some 10 days later – I received a call from S****bh G***a saying that they have got approval from their Hiring Committee – and made me 3 offers – Sweden, US and Bangalore.
When I reminded him in stern words, that I had attended all this interview cycle only for Hyderabad – then he said that – The Hyderabad is still 60:40 about you, but we are giving you offers for any of these 3 locations.
So after 7 rounds of interviews – you are still not decided whether a candidate is fit for your team or not – GREAT – Scalability and Efficiency – right?
I sternly told him that I have a well settled family and an expecting wife – hence relocation is ruled out – either Hyderabad or nothing.
7 days later - S****bh G***a again called me and this time the LIAR said – there is one guy based in US, who is very keen to hire you, and he would work out an opportunity based in Hyderabad for you.
5th December 2017 – another Google Hangouts Video call with another Engineering Manager Ja****h S***h B***a. He too offered me a position in US, and clearly said that Hyderabad is NOT AT ALL a strategically important location for Google right now – and hence NO HIRING is going on there.
Read again – Hyderabad is NOT AT ALL a strategically important location for Google right now – and hence NO HIRING is going on there.
I clearly told him that I am not going to relocate – hence its over.
Finally, I sent an email to S****bh G***a clearly blasting him out that I am not interested in joining an UNPROFESSIONAL company which does not know whether it has a vacancy or not, and spends 5 months and still stays undecided whether to hire or not.
Thinking of joining Google – think again !!
I got the leiningen build system setup in ST2 (from someone on Github) but am struggling with the "file_regex"
I have:
{
"cmd": ["lein", "compile", ":all"],
"file_regex": "^[^()]*\\((...*?):([0-9]*)\\).*",
"selector": "source.clj",
"working_dir": "$project_path",
}
Now when I click on the error line such as:
Exception in thread "main" java.lang.RuntimeException: EOF while reading, starting at line 99, compiling:(domain.clj:99)
at clojure.lang.Compiler.compile(Compiler.java:7190)
then ST creates a new domain.clj in the project root directory.
Is there some clever way I can get it to open the correct file?
The standard lein project layout is that files are folders per namespace under $project_path/src/
There is nothing in the error output that gives you the directory of the file, just the name domain.clj
As a starting point I could live with the assumption that src files are in the src/ directory itself | http://www.sublimetext.com/forum/viewtopic.php?f=3&t=10755&start=0 | CC-MAIN-2014-15 | refinedweb | 149 | 64.91 |
20 January 2011 06:38 [Source: ICIS news]
“We see very limited downside potential because crude and monomer prices are so high now, particularly in Europe and the
Negotiations for February shipment of some Middle East and Asian PE and PP grades had begun this week, with deals cited at least $10-20/tonne (€7.4-14.8/tonne) higher than previous transactions, market sources said. Price spikes of the same magnitude were expected after the holidays, they said.
Strong feedstock costs will keep Asian polyolefins prices firm through next week even with the slowing down of trades ahead of the Lunar New Year festivities in China on 2-8 February.
Spot ethylene prices in Asia had risen to $1,230-1,250/tonne CFR (cost and freight) northeast Asia on Wednesday, $50-60/tonne higher from four weeks ago, while propylene prices were up $60-70/tonne over the same period at $1,330-1,350/tonne CFR China, on the back of firm crude values, according to ICIS data.
Crude was hovering near $91/bbl at midday on Thursday.
The prevailing retail prices of many PE and PP grades in China’s domestic market were much higher than the import costs, but the retail prices are expected to catch up with the import prices in the coming weeks because Chinese stockists would not sell at a loss, the Shanghai-based distributor said.
Imported LLDPE was, on average, selling at around yuan (CNY) 11,100/tonne ($1,687/tonne) ex-warehouse in east China, but the cost of booking fresh LLDPE cargoes for February shipment was above CNY11,500/tonne ex-warehouse, after taking duty and tax into account, local distributors said.
Meanwhile, tighter PP supply arising from heavy plant turnaround schedule in the Middle East and
Honam Petrochemical, Polymirae, Samsung Total Petrochemicals in
But producers may not be able to convince buyers to accept steep price hikes for February shipments even as the positive post-holiday outlook was stoking buying interest, industry sources said.
“We revised our February offers for linear low density PE (LLDPE) to $1,450/tonne CFR China because our initial offers at $1,470/tonne CFR China were rejected by our key customers,” said a Korean producer.
LLDPE was assessed at $1,380-1,430/tonne CFR China for the week ended 14 January, according to ICIS.
($1 = €0.74 / $1 = CNY6.58)
- NAME
- SYNOPSIS
- DESCRIPTION
- METHODS
- POD commands specifically for reStructuredText
- EXAMPLES
- TODO
- DEPENDENCIES
- SEE ALSO
- AUTHORS
- highlighting().
namespace
If a namespace is declared, then links to that namespace are converted to cross references and an anchor is added for each head tag.

Verbatim sections are highlighted according to the syntax of the programming/markup/config language lang, and are assumed to be Perl code by default. Sphinx uses Pygments to do syntax highlighting in these sections, so you can use any value for lang that Pygments supports, e.g., Python, C, C++, Javascript, SQL, etc.
EXAMPLES
Need to document:
TODO
- code highlighting
Currently, a verbatim block (indented paragraph) gets output as a Perl code block by default in reStructuredText. There should be an option (e.g., in the constructor) to change the language for highlighting purposes (for all verbatim blocks), or disable syntax highlighting.
SEE ALSO
pod2rst (distributed with Pod::POM::View::HTML)
reStructuredText:
Sphinx (uses reStructuredText):
Pygments (used by Sphinx for syntax highlighting):
AUTHORS
Don Owens <don@regexguy.com>
Jeff Fearn <Jeff.Fearn@gmail.com>
Alex Muntada <alexm@cpan.org>
This software is copyright (c) 2010 by Don Owens <don@regexguy.com>, 2016 by Jeff Fearn <Jeff.Fearn@gmail.com>, and 2016-2018 by Alex Muntada <alexm@cpan.org>.
This software is available under the same terms as the perl 5 programming language system itself.
VERSION
1.000002 | https://metacpan.org/pod/Pod::POM::View::Restructured | CC-MAIN-2019-22 | refinedweb | 226 | 50.12 |
15 May 2011 - 05:32 PM
I read a really good post about up and downcasting on this forum:
But my question is, why do you want to cast?
(referring to the link I posted up there)
if all cats can perform animal methods, why would you want to upcast a cat to an animal in the first place?
p.s: wasn't sure where to post...
#2
Posted 15 May 2011 - 08:56 PM
Found a really good tutorial on this. I don't know this so I'm currently reading it.
#3
Posted 16 May 2011 - 03:24 AM
You want to cast because you will have to write less code. Take the example of Cat and Dog who extend Animal. Now you have another class Cage (how cruel ^^). Cage can hold both dog and cat. Without casting your Cage class will look like:
Also with generics, like an ArrayList, you can't put a Dog AND a cat in an ArrayList.
public class Cage {
    private Cat cat;
    private Dog dog;

    public void addCat(Cat cat) { this.cat = cat; }
    public void addDog(Dog dog) { this.dog = dog; }
}

While with casting you can just do:

public class Cage {
    private Animal animal;

    public void addAnimal(Animal animal) { this.animal = animal; }
}

Also with generics, like an ArrayList, you can't put a Dog AND a Cat in an ArrayList:

ArrayList<Dog, Cat> myPets = new ArrayList<>(); // Won't work

That's not possible. You can however decide to put Animal in the ArrayList, and then both Dog and Cat can be added:

ArrayList<Animal> myPets = new ArrayList<>();
myPets.add(new Cat());
myPets.add(new Dog());
...
Cat cat = (Cat) myPets.get(0);
Dog dog = (Dog) myPets.get(1);
#4
Posted 11 January 2013 - 09:59 AM
Genius! Here is a masterpiece that perfectly complements the Cat and Dog example: a practical example. Great job. I always say, the simpler the better! I spent hours and days looking for someone who could explain the reason for casting. If there is polymorphism, why casting? Thanks, thanks, thank you very much; finally my nightmares have come to an end. From today on I won't have to spend any more time trying to find a reason for casting. I'm a free maaaaaaan!
Java Java Doooooo!!!
#5
Posted 23 January 2013 - 02:24 PM
I figured it out!! Since ClassD extends ClassC and ClassC extends ClassB, ClassD was being captured by the ClassB evaluation code. So when I placed the conditions upside down (evaluating D, C, B [backwards]), it works perfectly! Thanks anyway. It was a good exercise on casting!
Edited by Petros, 24 January 2013 - 05:41 AM.
thanks so much ilan :3
I'll try it out asap :D
I'm currently working on a program like this also :D
Try something like this with a while loop
So I'm writing a code to ask the user how many weeks they go till they get paid, how many hours they worked each day, how much their pay rate is, and then compile it all. Only problem is that I can't...
Thank you! I'll be sure to google everything in the world!! haha
Is there any chance that I could get pointed in the right direction? I'm brand new at this.
Thank you for the help though :)
package exercise2;
import java.util.Scanner;
public class Exercise2 {
    public static void main(String[] args) {
        Scanner user_input = new Scanner(System.in);
        // read weeks until payday, hours worked each day, and pay rate here
    }
}
Struggling to wrap your head around Redux? Don’t worry, you’re not alone.
I’ve heard from many, many people that Redux is the biggest barrier to writing the React apps they want to.
By the end of this post you’ll understand what Redux is for, and how to know when it’s time to add it to your own app.
Why?
The best question to start with is, Why should we use Redux at all?
And the answer isn’t “because everyone else on the internet is using it.” (I don’t doubt that’s why a lot of people are using it, but let’s go deeper.)
The reason Redux is useful is that it solves a problem.
And no, the problem it solves is not “state management.” That’s super vague. Heck, React already does state management. Redux does help manage state, but that’s not the problem it solves.
It’s About Data Flow
If you’ve used React for more than a few minutes, you probably know about props and one-way data flow. Data is passed down the component tree via props. Given a component like this:
The count, stored in App's state, would be passed down as a prop:
For data to come back up the tree, it needs to flow through a callback function, so that callback function must be passed down to any components that want to pass data up.
You can think of the data like electricity, connected by colored wires to the components that care about it. Data flows down and up through these wires, but the wires can’t be run through thin air – they have to be connected between each component in the tree.
This is all review, hopefully. (If not, you should stop here, go learn React, build a couple small apps, and come back in a few days. Seriously. Redux is gonna make no sense until you understand how React works.)
Layers and Layers of Data Flow
Sooner or later you run into a situation where a top-level container has some data, and a child 4 levels down needs that data. Here’s a screenshot of Twitter, with all the avatars highlighted:
Let’s say the user’s avatar is stored as part of their profile data, and the top-level
App component holds the user. In order to deliver the
user data to the all 3
Avatar components, the
user needs to be woven through a bunch of intermediate components that don’t need the data.
Getting the data down there is like threading a needle through a mining expedition. Wait that doesn’t make any sense. Anyway, it’s a pain in the ass.
More than that, it’s not very good software design. Intermediate components in the chain must accept and pass along props that they don’t care about. This means refactoring and reusing components from that chain will be harder than it needs to be.
Wouldn’t it be nice if the components that didn’t need the data didn’t have to see it at all?
Plug Any Data Into Any Component
This is the problem that Redux solves. It gives components direct access to the data they need.
Using the connect function that comes with Redux, you can plug any component into Redux's data store, and the component can pull out the data it requires.
This is Redux’s raison d’etre.
Yeah, it also does some other cool stuff too, like make debugging easier (Redux DevTools let you inspect every single state change), time-travel debugging (you can roll back state changes and see how your app looked in the past), and it can make your code more maintainable in the long run. It’ll teach you more about functional programming too.
But this thing here, “plug any data into any component,” is the main event. If you don’t need that, you probably don’t need Redux.
The Avatar Component
To tie all this back to code, here's an example of the Avatar component from above:
import React from 'react'; import { connect } from 'react-redux'; const Avatar = ({ user }) => ( <img src={user.avatar}/> ); const mapStateToProps = state => ({ user: state.user }); export { Avatar }; export default connect(mapStateToProps)(Avatar);
The component itself doesn't know about Redux – it just accepts a user prop and renders the avatar image. The mapStateToProps function extracts the user from Redux's store and maps it to the user prop. Finally, the connect function is what actually feeds the data from Redux through mapStateToProps and into Avatar.
You’ll notice there are two
exports at the end – a named one, and a default. This isn’t strictly necessary, but it can be useful to have access to the raw component and the Redux-wrapped version of it.
The raw component is useful to have when writing unit tests, and can also increase reusability. For example, part of the app might want to render an Avatar for a user other than the signed-in user. In that case, you could even go a step further and export the Redux-connected version as CurrentUserAvatar to make the code clearer.
When To Add Redux
If you have a component structure like the one above – where props are being forwarded down through many layers – consider using Redux.
If you need to cache data between views – for instance, loading data when the user clicks on a detail page, and remembering the data so the next access is fast – consider storing that data in Redux.
If your app will be large, maintaining vast data, related and not – consider using Redux. But also consider starting without it, and adding it when you run into a situation where it will help.
Up Next
Read Part 2 of this series where we’ll dive into the details of Redux: how to set it up, and how the important pieces fit together (actions and reducers and stores oh my!).
Translations
You can read this in Russian thanks to translation by howtorecover | https://daveceddia.com/what-does-redux-do/ | CC-MAIN-2019-35 | refinedweb | 1,012 | 70.33 |
OK, I've started. I finished the install (not sure what happened there) and started a tutorial. Of course, immediate error. I've made a list and checked it twice to make sure I did all the steps correctly. In a DOS command prompt I ran the application creation command and then created the controller MyTest.
In the controllers directory it created My_test_controller.rb. When I went to test it I got an error that says "We're sorry, but something went wrong. We have been notified about this issue and will look into it shortly." In jEdit I have the file open and it has
1. How do I know things are working as far as the interpreter and the code?

Code:
class MyTestController < ApplicationController
def index
render_text "Not another Hello world?!?"
end
end
2. Is the "Something went wrong..." message equivalent to a 404 error?
3. Will I always have to start ruby script/server in a DOS prompt?
4. Does the DOS window always have to remain open while playing with rails?
Windows XP
Firefox 3.5.3
Rails 2.3.4
Tutorial
Thanks for the patient replies. | http://www.codingforums.com/printthread.php?t=177874 | CC-MAIN-2016-07 | refinedweb | 205 | 78.55 |
Right now it's __closure1, __closure2, __closure3...
I think a better way to name them would be, "__ClassClosureName" for members, or "__ClassMethodClosureName" for closures declared inside of a method... etc.
The way they're currently declared, its practically impossible to use profiling applications like NProf or ANTS with Boo. :/
What do you, the viewer, think? One way or another, BOO-503 will come to pass, so do not ph33r the inevitable.
I was thinking about what kind of attributes could be used to uniquely name a closure while still giving enough context to someone trying to debug a problem, and thought of this:
"__ParentClass.Closure_LINE_COLUMN"
class t:
    def method():
        z = { }
This would be: __t.Closure_3_15
This way is undeniably specific; you know exactly where this particular closure was declared. It's not very friendly to the eyes, though.
Another way is,
__ParentClass.LocalNameClosure/Callable_LINE_COLUMN
So now it would look like __t.zClosure_3_15.
Now it's a little more distinct, and if the variable is uniquely named enough I can just go to that particular chunk of the source code and search for it.
What happens to this, though?
a = { print 'wtfpwned' }
a = { print 'another closure entirely' }
What should the second closure be named? If we follow the second set of suggested naming rules, then we've got two closures named very similarly.
They are distinct, of course, __aClosure_1_1 and __aClosure_2_2.
With the first form, you don't have that sort of potential visual confusion, but you lack the potential ease of use of identifying them by parameter name.
With the second form, it's vice versa.
Hm.
Edit the patch to suit your tastes.
c1 = {"Name:FirstMethod"; print("c1")}
c1()
c2 = def(x as int):
"""Name: AnotherMethod
Some other documentation..."""
print "c2", x
c2(3)
The patch looks really nice!
public event SomeEvent as callable(sender, e as SomeEventArgs)
That would generate the following in the type where it is defined.
public callable SomeEventHandler(sender as object, e as SomeEventArgs) as void
Right now, this is supported:
public event SomeEvent as callable(object, SomeEventArgs), which generates some ugly delegates with arg0, arg1, etc. as parameters. | http://jira.codehaus.org/browse/BOO-503 | crawl-001 | refinedweb | 350 | 65.22 |
Introduction
In one of my previous posts – Pandas tricks to split one row of data into multiple rows – we discussed a solution to split summary data from one row into multiple rows in order to standardize the data for further analysis. Similarly, there are many scenarios where we have aggregated data, like an Excel pivot table, and we need to unpivot it from wide to long format for better analysis. In this article, I will be sharing with you a few tips to convert columns to rows with pandas DataFrame.
Prerequisites
To run the later code examples, you will need to have pandas installed in your working environment. Below is the pip command to install pandas:
pip install pandas
And we will be using the data from this file for the later demonstration, so you may download and examine how the data looks like with below code:
import pandas as pd
import os

data_dir = "c:\\your_download_dir"
df = pd.read_excel(os.path.join(data_dir, "Sample-Data.xlsx"))
You shall see the sample sales data as per below:
The sales amount has been summarized by each product in the last 4 columns. With this wide data format, it would be difficult for us to do some analysis, for instance the top salesman by month and product, or the best-selling products by month, etc.
A better data format would transform the product columns into rows, so that each single row represents only 1 product and its sales amount. Now let's start to explore the different ways to convert columns to rows with pandas.
Using Pandas Stack Method
The most immediate solution you may think of would be using the stack method as it allows you to stack the columns vertically onto each other and make it into multiple rows. For our case, we will need to specify the DataFrame index as “Salesman” and “Order Date“, so that the product columns will stack based on this index. For instance:
df.set_index(["Salesman", "Order Date"]).stack()
If you check the result now, you shall see the below output:
This is a MultiIndex Series with index names ['Salesman', 'Order Date', None], so you can reset the index and rename the Series to "Amount", meanwhile giving the "None" index the name "Product Desc" to make it more meaningful. E.g.:
df.set_index(["Salesman", "Order Date"])\
  .stack()\
  .reset_index(name='Amount')\
  .rename(columns={'level_2':'Product Desc'})
With the above code, you will see output similar to the below:
If you do not want to have the 0 sales amount records, you can easily apply a filter to the DataFrame to have cleaner data.
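For instance, assuming the long-format result has an Amount column like the output above (a tiny inline sample stands in for the article's Excel data here), a boolean filter drops the zero rows:

```python
import pandas as pd

# Tiny inline sample standing in for the stacked result shown above
long_df = pd.DataFrame({
    "Product Desc": ["Beer", "Whisky", "Red Wine"],
    "Amount": [100, 0, 80],
})

# Keep only the rows with a non-zero sales amount
non_zero = long_df[long_df["Amount"] > 0]
print(non_zero)
```

The same filter works unchanged on the real stacked DataFrame, since only the Amount column is involved.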
Using Pandas Melt method
The melt method is a very powerful function to unpivot data from wide to long format. It is like the opposite operation to the pivot_table function, so if you are familiar with pivot_table function or the Excel pivot table, you shall be able to understand the parameters easily.
To achieve the same result as per the stack function, we can use the below code with melt method:
df.melt(
    id_vars=['Salesman', 'Order Date'],
    value_vars=['Beer', 'Red Wine', 'Whisky', 'White Wine'],
    var_name="Product Desc",
    value_name='Amount')
The id_vars parameter specifies the columns for grouping rows. The value_vars and var_name parameters specify the columns to unpivot and the new column name, and value_name indicates the name of the value column. To help you better understand these parameters, you can imagine how the data would be generated via a pivot table in Excel; now it's the reverse process.
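To make the parameters concrete, here is a self-contained sketch with a tiny inline DataFrame in place of the article's Excel data (column names follow the sample file; the values are made up):

```python
import pandas as pd

# Miniature wide-format sales table
df = pd.DataFrame({
    "Salesman": ["Amy", "Bob"],
    "Order Date": ["2021-01", "2021-02"],
    "Beer": [100, 0],
    "Whisky": [0, 250],
})

# Unpivot the two product columns into (Product Desc, Amount) pairs
long_df = df.melt(
    id_vars=["Salesman", "Order Date"],
    value_vars=["Beer", "Whisky"],
    var_name="Product Desc",
    value_name="Amount",
)
print(long_df)
```

Each of the 2 input rows becomes 2 output rows (one per product column), so the result has 4 rows and 4 columns.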
Using Pandas wide_to_long Method
The wide_to_long method is quite self-explanatory by its name. The method uses pandas.melt under the hood, and it is designed to solve some particular problems. For instance, if your columns names follows certain patterns such as including a year or number or date, you can specify the pattern and extract the info when converting those columns to rows.
Below is the code that generates the same output as our previous examples:
pd.wide_to_long(
    df,
    stubnames="Amount",
    i=["Salesman", "Order Date"],
    j="Product Desc",
    suffix=r"|Red Wine|White Wine|Whisky|Beer").reset_index()
The stubnames parameter specifies the columns for the values converted from the wide format. And i specifies the columns for grouping the rows, and j is the new column name those stacked columns. Since our product column names does not follow any pattern, in the suffix parameter, we just list out all the product names.
As wide_to_long returns a MultiIndex DataFrame, we need to reset the index to get a flat data structure.
You may not see the power of this function from the above example, but if you look at the example below from its official documentation, you will understand how wonderful this function is when solving this type of problem.
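Since the referenced example comes from the official pandas documentation, here is a condensed sketch in that same style (column names adapted from the pandas docs): the columns ht1 and ht2 share the stub "ht", and the numeric suffix is peeled off into a new "age" column.

```python
import pandas as pd

df = pd.DataFrame({
    "famid": [1, 1, 2],
    "birth": [1, 2, 1],
    "ht1": [2.8, 2.9, 2.2],   # height measured at age 1
    "ht2": [3.4, 3.8, 2.9],   # height measured at age 2
})

# Columns ht1/ht2 match stub "ht" + numeric suffix; the suffix becomes "age"
long_df = pd.wide_to_long(
    df, stubnames="ht", i=["famid", "birth"], j="age"
).reset_index()
print(long_df)
```

This pattern-based column matching is exactly what the plain melt call cannot express directly.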
Performance Consideration
When testing the code performance of the above 3 methods, the wide_to_long method takes significantly longer than the other two, and melt seems to be the fastest. But the results may vary for large data sets, so you will need to evaluate again based on your own data set.
#timeit for stack method 4.52 ms ± 329 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) #timeit for melt method 3.5 ms ± 238 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) #timeit for wide_to_long method 17.8 ms ± 709 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Conclusion
In this article, we have reviewed 3 pandas methods to convert columns to rows when you need to unpivot your data or transform it from wide to long format for further analysis. A simple test shows that the melt method performs best and wide_to_long takes the longest time, but bear in mind that wide_to_long has specific use cases which the other functions may not be able to achieve.
iPcPortal Struct Reference
This is a property class holding the representation of a portal. More...
#include <propclass/portal.h>
Inheritance diagram for iPcPortal:
Detailed Description
This property class supports the following properties:
- mesh (string, read/write): the name of the portal mesh.
- portal (string, read/write): the name of the portal.
- closed (bool, read/write): if the portal is closed or not.
Definition at line 40 of file portal.h.
Member Function Documentation
Close portal.
Get the portal.
Is the portal closed?
Open portal.
Set the portal to use.
The documentation for this struct was generated from the following file:
Generated for CEL: Crystal Entity Layer 2.0 by doxygen 1.6.1 | http://crystalspace3d.org/cel/docs/online/api-2.0/structiPcPortal.html | CC-MAIN-2016-07 | refinedweb | 108 | 53.88 |
Creating self-hosted WCF Data Services
This question came up the other night at the June NEBytes meeting after a presentation by Iain Angus (Black Marble) on WCF Data Services – can you host a WCF Data Service outside of IIS, inside a console application for example?
Since WCF Data Services are based upon the normal WCF architecture, you can self-host them just like any other WCF Service using a DataServiceHost. As a quick example, try this:
1. Create an entity
public class Person
{
    public int ID { get; set; }
    public string Name { get; set; }
}
2. Create a custom data context to return some data
public class ExampleDataContext
{
    public IQueryable<Person> People
    {
        get
        {
            return new List<Person>()
            {
                new Person() { ID = 1, Name = "Steve" },
                new Person() { ID = 2, Name = "Dave" }
            }.AsQueryable();
        }
    }
}
3. Create a class which inherits from DataService<ExampleDataContext>
public class PersonDataService : DataService<ExampleDataContext>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
    }
}
4. Finally, in your startup code create an instance of a DataServiceHost, specifying the type of your DataService and the Uri to host on. Open the host and wait for a key-press to close it again:
DataServiceHost host = new DataServiceHost(typeof(PersonDataService),
    new Uri[] { new Uri("") });
try
{
    host.Open();
    Console.WriteLine("Host is running; press a key to stop");
    Console.ReadKey();
    host.Close();
}
catch (Exception)
{
    host.Abort();
    throw;
}
Using this example, you can now browse to in your browser and retrieve your data. You can also create a client proxy against this service at the same Uri just as normal.
You can read more information on hosting WCF Data Services in IIS or otherwise on MSDN.
Download the example code | https://stevescodingblog.co.uk/creating-self-hosted-wcf-data-services/ | CC-MAIN-2017-17 | refinedweb | 269 | 53.1 |
Elm Friday: Functions (Part V)
11/20/15
Elm is a functional language, so naturally, functions and function calls are pretty important. We have already seen some functions in the previous episodes. This episode goes into more detail regarding function definition and function application.
About This Series
This is the fifth post in this series.
Functions
This is a function definition in Elm:
multiply a b = a * b
The function definition starts with the name of the function, followed by the parameter list. As opposed to C-style languages like Java, JavaScript and the like, there are no parentheses or commas involved; the parameters are only separated by spaces.
The equals character = separates the function name and parameter list from the function body. The function body can be any Elm expression that produces a value. We can reference the parameters in the function body.
Function definitions can use other functions in their body. This is what that looks like:
square a = multiply a a
Here, we define a square function by using the multiply function we defined earlier. Function calls also don't need any parentheses; just list the parameters you want to pass into the function, separated by whitespace.
You do need parentheses when you have nested expressions:
productOfSquares a b = multiply (square a) (square b)
You can also declare anonymous functions (also known as lambda expressions) on the fly:
incrementAll list = List.map (\ n -> n + 1) list
This thing wrapped in (\ and ) is an anonymous function that takes one parameter and returns the parameter, incremented by one. This anonymous function is then applied to all elements in a list by using List.map.
Actually, we could have written this shorter. The following is equivalent:
incrementAll2 = List.map (\ c -> c + 1)
Why is that the same? Because Elm supports something called currying. incrementAll defines a function that takes a list and produces another list. incrementAll2 also defines a function, but it is a function that takes no arguments and returns another function. So when we write incrementAll2 [1, 2, 3], Elm first evaluates incrementAll2, gets a function, and then proceeds to put the remaining arguments ([1, 2, 3] in this case) into this function. The result is the same.
If you find it hard to wrap your head around currying, don't worry about it too much for now. You can always resort to writing a more verbose version of your function without currying and come back to this concept later. As a rule of thumb, if the last element in the parameter list in the function declaration is simply repeated at the end of the function body (like list in this case), you can probably omit both.
Let’s wrap this up. Here is a complete, working Elm program that uses the functions we defined above:
import Html

multiply a b = a * b

square a = multiply a a

productOfSquares a b = multiply (square a) (square b)

incrementAll list = List.map (\ c -> c + 1) list

incrementAll2 = List.map (\ c -> c + 1)

main =
  Html.div
    []
    [ Html.div [] [ Html.text ("3 × 5 = " ++ (toString (multiply 3 5))) ]
    , Html.div [] [ Html.text ("4² = " ++ (toString (square 4))) ]
    , Html.div [] [ Html.text ("(3 × 4)² = " ++ (toString (productOfSquares 3 4))) ]
    , Html.div [] [ Html.text ("incrementAll [1, 2, 3] = " ++ (toString (incrementAll [1, 2, 3]))) ]
    ]
What happens in these lengthy expressions in the main function? Well, the functions we defined return mostly numbers (or lists, in the case of incrementAll). So we need to convert their results into strings via the toString function (which comes from the Basics package and is imported by default). We then use ++ to append the resulting string to a string literal ("3 × 5 = ", for example) and use Html.text to convert the string into an HTML text node.
Fancy Function Application
Whoa, did you see what we did there to bring the result of one of our functions to the screen? Let's take a look at Html.text ("3 × 5 = " ++ (toString (multiply 3 5))) for a moment. That's a lot of parentheses right there. Elm has two operators, |> and <|, to write expressions like that in a more elegant fashion.
|>: Take the expression to the left of the operator and put it into the function on the right hand side.
<|: Take the expression to the right of the operator and put it into the function on the left hand side.
Here is the main function of the previous program, rewritten with the new operators:

main =
  Html.div
    []
    [ Html.div [] [ Html.text <| "3 × 5 = " ++ (toString <| multiply 3 5) ]
    , Html.div [] [ Html.text <| "4² = " ++ (toString <| square 4) ]
    , Html.div [] [ Html.text <| "(3 × 4)² = " ++ (toString <| productOfSquares 3 4) ]
    , Html.div [] [ [1, 2, 3] |> incrementAll |> toString |> (++) "incrementAll [1, 2, 3] = " |> Html.text ]
    ]
If you like to go a bit crazy with this, you can even rewrite
Html.text <| "3 × 5 = " ++ (toString <| multiply 3 5)
as
Html.text <| (++) "3 × 5 = " <| toString <| multiply 3 5
or
square 4 |> toString |> (++) "4² = " |> Html.text
Here we used the infix operator ++ as a non-infix function to be able to apply it with <| and |>. We also used a bit of currying again: (++) actually takes two arguments (the two strings that are to be concatenated). The expression (++) "3 × 5 = " is a partial function application, that is, we provide the first of the two arguments to yield a new function that takes only one argument, prepending "3 × 5 = " to everything that is passed to it.
To read code like this with ease, just imagine the line as the ASCII art representation of a data pipeline. In |> style pipelines, data flows from left to right; in <| style pipelines, data flows from right to left. So, for example, to decipher a line like Html.text <| (++) "3 × 5 = " <| toString <| multiply 3 5, you start at the end (multiply 3 5), push this into the toString function to convert the number into a string, the resulting string then goes into the append function ((++)) together with the string literal, and finally the concatenated string goes into the Html.text function.
This concludes the fifth episode of this blog post series on Elm. Make sure to check out the next episode, where we will take a look at type annotations.
Sofu for .Net
This library requires at least .Net Framework 2.0 (unless you compile it yourself).
The interface is exactly like the one from SofuD, except for two things:
- The call to read a file is called SofuReader.LoadFile(filename), because you can't have namespace-level static functions.
- All method names start with a capital letter (so IsMap() instead of isMap()). The only exceptions are map(), list() and value(), which would clash with their constructors, so they still start with a lowercase letter.
How to get Sofu.Net
Download the latest Sofu.dll with documentation from Sourceforge
Get the latest sources
How to use it
There is no quick and easy method for distributing .Net libraries that I know of, so:
Simply copy the Sofu.dll somewhere, right-click on References in your Project Explorer, and add the Sofu.dll in the dialog (under Browse) that comes up when you click "Add References".
Alternatively, you can add the sources to your project and select those in the same dialog.
Sofu.Net Examples
Reading a .sofu file and printing the top-level keys and objects to the console
using Sofu.Sofu;
using System.Collections.Generic;

Sofu.Map file = SofuReader.LoadFile("test.sofu");
foreach (KeyValuePair<string, Sofu.SofuObject> entry in file)
{
    Console.Out.Write("Key : ");
    Console.Out.Write(entry.Key);
    Console.Out.Write(", Object : ");
    Console.Out.WriteLine(entry.Value.TypeString());
}
Sofu.Net News
0.2.1 is out; it now supports binary Sofu and SofuML. The documentation has been updated as well.
Sofu.Net does not support binary sofu as of now, but the next version will
The SofuBrowser is written using the Sofu.Net library and might serve as a reference for using it | http://sofu.sourceforge.net/sofunet.html | CC-MAIN-2017-30 | refinedweb | 274 | 53.07 |
Hi!
I am new to both Rust and the forum, so bear with me if something’s not right.
I am a physicist doing my PhD and 90% of my day is coding C++ for GPU calculations, more specifically experimenting with advanced Template Metaprogramming that does some pretty neat things in both host-side and device-side code. TMP propagating all the way through to kernels is just pure awesome. For this we’re using portable tools, C++AMP (with Clamp under Linux) and we are SYCL beta testers. (No CUDA) C++AMP uses DirectCompute on Windows, while Clamp uses SPIR and HSAIL as a back-end. SYCL is capable of generating SPIR and flat OpenCL C kernels.
A few of us in our group have come across Rust and it simply blew our minds. Rust is the better C++ we need. Rust's type system is far more powerful (and human readable) than what I can express with TMP. Concepts are a long way from being a widely adopted C++ feature. (Not before C++17, plus implementation time.) I saw that there is a Rust port of the OpenCL API (most likely making extensive use of C compatibility), and I even found the RustGPU project, from where I contacted Eric Holk, a member of the group implementing that proof of concept.
Due to the infancy of SPIR at the time, RustGPU was implemented using NVPTX as the kernel intermediate, and OpenCL for the host-side stuff. Read more in this blog post. OpenCL 2.0 with its matching SPIR 2.0, however, has received numerous refinements since the first provisional specification and is close to being finalized, with 2 implementations already at hand (Intel, AMD). SPIR 2.0 has support for function pointers (called 'blocks' in OpenCL C) and multiple levels of Shared Virtual Memory, just to name the most important stuff.
Following up on the brief mailing with Eric about this RustGPU pilot project, I have the feeling that all the underlying headaches have been cleared out. LLVM already supports OpenCL memory namespaces, as I was following the LLVM list a while back too.
I understand that getting Rust 1.0 is top priority now, and that Rust will have subsequent backward-compatible updates in the future. My questions are:
- How open is the community for bringing single-source GPU programming to Rust?
- If yes, is it something of an ‘explicit goal’ or more like ‘we’re not against it’?
- Are there people with the necessary skills and time available to pull something like this off?
While I would be very excited to work on a project like this, I fear my agenda for the next 1-2 years (while my PhD is running, besides a full-time job and my newborn child (my first)) will not allow me to take on such an endeavor. I do know, however, that I would be one of the most enthusiastic users of the feature, and I'd even be willing to beta test it with all its headaches, similar to what we're doing with SYCL now.
Thoughts? Ideas? Comments?
Cheers, Máté
ps.: second but similar question: substitute HSA and HSAIL for OpenCL and SPIR. | https://internals.rust-lang.org/t/single-source-gpu-support/898 | CC-MAIN-2019-18 | refinedweb | 538 | 69.72 |
urilib
A Python library for handling URIs. Based on my experience with Perl's URI: having a class to wrap URIs allows for quick editing of pieces of the URI without the need to write code to decompose/recompose every time.
Synopsis
import urilib

uri = urilib.URI('')
assert uri.scheme == 'http'
assert uri.authority == ''
assert uri.path == '/'
assert uri.query == 'q=value'
assert uri.fragment == ''
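For comparison, Python's standard library offers a similar read-only decomposition via urllib.parse (a hypothetical URL is used here, since the one in the synopsis above was lost in formatting); urilib's selling point is the quick editing it layers on top of this kind of decomposition:

```python
from urllib.parse import urlparse

# Hypothetical example URL, not taken from the synopsis above
parts = urlparse('http://www.example.com/?q=value')
assert parts.scheme == 'http'
assert parts.netloc == 'www.example.com'
assert parts.path == '/'
assert parts.query == 'q=value'
assert parts.fragment == ''
print('all components parsed')
```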
Requirements
- Python 2.3+
Bugs
No known bugs, but this hasn't been extensively tested, and there are probably unaccounted-for edge cases out there in the wild. If you encounter any, or can think of any that were missed, please contact the maintainer:
Brandon Sandrowicz brandon@sandrowicz.org
Credits
2012 (c) Brandon Sandrowicz brandon@sandrowicz.org
License
See LICENSE | https://libraries.io/pypi/urilib | CC-MAIN-2022-40 | refinedweb | 126 | 54.39 |
Patch for found issues
I tried to compile Common C++ 1.1.1 on my XP machine today with BCB 6, but I ran into several issues.
Please see the attached patch, which I comment on below.
1. The sequence of the standard headers and the namespacing in fifo.cpp causes compile-time errors in <sys/types.h> and other system headers. Simple changes in fifo.cpp make compilation clean.
2. A minor issue in numbers.cpp in the implementation of the += and -= operators: I think the 'value' variable on the left side must be 'long' instead of 'const long'.
3. Serial.cpp includes an undefined constant, INVALID_FILE_HANDLE (this shows up when compiling with MSVC 6.0). I read MSDN and used the INVALID_HANDLE_VALUE return code for CreateFile() instead.
4. thread.h has a non-standard definition check in preprocessor code like
#if _MSC_VER < 1300
where _MSC_VER may be undefined. In this case _MSC_VER evaluates to 0 and the check returns true.
I propose changing this to the following, which is correct:
#if defined(_MSC_VER) && _MSC_VER < 1300
5. missing.h has compile-time errors under BCB due to missing types and time-related functions. I have added the following to the header:
I just have added following into header
#ifdef __BORLANDC__
#include <time.h>
#endif
6. install.bat in the w32 directory does not work on my XP. I cannot define a directory like "%XXX%/MY" if I have defined it like set XXX="YYY" previously. Because of this, the main library files are not copied to the MSVC system directory.
7. Also, I have added a correct Makefile.bcc for BCB 6 (as well as for BCB 5).
I need to apologize for my mistake.
The attached patch for Makefile.bcc is not correct (I think some compile switches must be added). All the others seem to be OK.
I have rebuilt the library with a C++Builder project group and project set.
However, I will try to build the new (1.1.2) version and then attach the *.bpr and *.bpg files here. (For now, I have tested the project set with CCXX-1.1.1.)
- NAME
- Synopsis
- Description
- Variable Depth
- The Past
- Comparison with typeglob constants
- Functions
- Cloning
- Examples
- Exports
- Internals
- Requirements
- Bug Reports
- Acknowledgements
- Author
- License and Legal
NAME
Readonly - Facility for creating read-only scalars, arrays, hashes
Synopsis
use Readonly;

# Deep Read-only scalar
Readonly::Scalar $sca => $initial_value;
Readonly::Scalar my $sca => $initial_value;

# Deep Read-only array
Readonly::Array @arr => @values;
Readonly::Array my @arr => @values;

# Deep Read-only hash
Readonly::Hash %h => (key => value, key => value, ...);
Readonly::Hash my %h => (key => value, key => value, ...);
Description
This is a facility for creating non-modifiable variables. This is useful for configuration files, headers, etc. It can also be useful as a development and debugging tool for catching updates to variables that should not be changed.
Variable Depth
Readonly has the ability to create both deep and shallow readonly variables.
If you pass a $ref, an @array or a %hash to the corresponding functions ::Scalar(), ::Array() and ::Hash(), then those functions recurse over the data structure, marking everything as readonly. The entire structure is then non-modifiable. This is normally what you want.

If you want only the top level to be readonly, use the alternate (and poorly named) ::Scalar1(), ::Array1(), and ::Hash1() functions.
Plain Readonly() creates what the original author calls a "shallow" readonly variable, which is great if you don't plan to use it on anything but one-dimensional scalar values. Readonly::Scalar() makes the variable 'deeply' readonly, so the following snippet keels over as you would expect:
use Readonly;
Readonly::Scalar my $ref => { 1 => 'a' };
$ref->{1} = 'b';
$ref->{2} = 'b';
While the following snippet does not make your structure 'deeply' readonly:
use Readonly;
Readonly my $ref => { 1 => 'a' };
$ref->{1} = 'b';
$ref->{2} = 'b';
The Past
The following sections are updated versions of the previous author's documentation.
Comparison with "use constant"
Perl provides a facility for creating constant values, via the constant pragma. There are several problems with this pragma.
The constants created have no leading sigils.
These constants cannot be interpolated into strings.
Syntax can get dicey sometimes. For example:
use constant CARRAY => (2, 3, 5, 7, 11, 13);
$a_prime = CARRAY[2];    # wrong!
$a_prime = (CARRAY)[2];  # right -- MUST use parentheses
You have to be very careful in places where barewords are allowed.
These constants are global to the package in which they're declared; cannot be lexically scoped.
Works only at compile time.
Can be overridden:
use constant PI => 3.14159;
...
use constant PI => 2.71828;
(this does generate a warning, however, if you have warnings enabled).
It is very difficult to make and use deep structures (complex data structures) with use constant.
Comparison with typeglob constants
Pros;
Cons.
Functions
- Readonly::Scalar $var => $value;
Creates a nonmodifiable scalar, $var, and assigns a value of $value to it. Thereafter, its value may not be changed. Any attempt to modify the value will cause your program to die.
A value must be supplied. If you want the variable to have undef as its value, you must specify undef.
If $value is a reference, the structure it points to is recursively marked as Readonly as well (see "Variable Depth" above). If you want only $value itself marked as Readonly, use Scalar1.
If $var is already a Readonly variable, the program will die with an error about reassigning Readonly variables.
- Readonly::Array @arr => (value, value, ...);
Creates a nonmodifiable array, @arr, and assigns the specified values to it. Any references among the values are recursively marked as Readonly as well. If you want only @arr itself marked as Readonly, use Array1.
If @arr is already a Readonly variable, the program will die with an error about reassigning Readonly variables.
- Readonly::Hash %h => (key => value, key => value, ...);
-
- Readonly::Hash %h => {key => value, key => value, ...};
Creates a nonmodifiable hash, %h, and assigns the specified keys and values to it. Any references among the values are recursively marked as Readonly as well. If you want only %h itself marked as Readonly, use Hash1.
If %h is already a Readonly variable, the program will die with an error about reassigning Readonly variables.
- Readonly $var => $value;
-
- Readonly @arr => (value, value, ...);
-
- Readonly %h => (key => value, ...);
-
- Readonly %h => {key => value, ...};
-
- Readonly $var;
The Readonly function is an alternate to the Scalar, Array, and Hash functions. Note that you can create implicit undefined variables with this function (Readonly my $var;) while a verbose undefined value must be passed to the standard Scalar, Array, and Hash functions.
- Readonly::Scalar1 $var => $value;
-
- Readonly::Array1 @arr => (value, value, ...);
-
- Readonly::Hash1 %h => (key => value, key => value, ...);
-
- Readonly::Hash1 %h => {key => value, key => value, ...};
Cloning
When cloning using Storable or Clone you will notice that the value stays readonly, which is correct. If you want to clone the value without copying the readonly flag, use the Clone function:
Readonly::Scalar my $scalar => {qw[this that]};
# $scalar->{'eh'} = 'foo'; # Modification of a read-only value attempted
my $scalar_clone = Readonly::Clone $scalar;
$scalar_clone->{'eh'} = 'foo';
# $scalar_clone is now {this => 'that', eh => 'foo'}
The new variable ($scalar_clone) is a mutable clone of the original $scalar.
Examples
These are a few very simple examples:
Scalars
A plain old read-only value
Readonly::Scalar $a => "A string value";
The value need not be a compile-time constant:
Readonly::Scalar $a => $computed_value;
Arrays/Lists

A read-only array:

Readonly::Array @arr => (2, 3, 5, 7);

Hashes

Readonly::Hash %h => (key1 => 'value1', key2 => 'value2');

Readonly::Hash %h => (key1 => 'value1', 'key2');
# This dies with "May not store an odd number of values in a hash"
Exports
Historically, this module exports the Readonly symbol into the calling program's namespace by default. The following symbols are also available for import into your program, if you like: Scalar, Scalar1, Array, Array1, Hash, and Hash1.
Internals
You do not need to install Readonly::XS.
You should stop listing Readonly::XS as a dependency or expect it to be installed.
Stop testing the $Readonly::XS okay variable!
Requirements

See "Internals" in the section above on Readonly's new internals.
There are no non-core requirements.
Bug Reports
If email is better for you, my address is mentioned below but I would rather have bugs sent through the issue tracker found at.
Acknowledgements
Author
Sanko Robinson <sanko@cpan.org> -
CPAN ID: SANKO
Original author: Eric J. Roode, roode@cpan.org
License and Legal
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://web-stage.metacpan.org/pod/Readonly | CC-MAIN-2020-29 | refinedweb | 926 | 56.66 |
Provided by: manpages-dev_5.10-1ubuntu1_all
NAME
bdflush - start, flush, or tune buffer-dirty-flush daemon
SYNOPSIS
#include <sys/kdaemon.h>

int bdflush(int func, long *address);
int bdflush(int func, long data);
DESCRIPTION
Note: Since Linux 2.6, this system call is deprecated and does nothing. It is likely to disappear altogether in a future kernel release. Nowadays, the task performed by bdflush() is handled by the kernel pdflush thread.
ERRORS
EINVAL An attempt was made to enter an invalid func number, or to write an invalid value to a parameter.
EPERM Caller does not have the CAP_SYS_ADMIN capability.
COLOPHON
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Dear NXP users,
For a few days I have been having a problem that I haven't been able to solve, and to which I can't find any solution.
I downloaded, after registering, the following example code from the Micrium uC-OS website: FRDM-K64F_OS3-TCPIP-HTTPs-DHCPc-KSDK-LIB | Micrium
The problem is that the project is already present only for IAR and Keil, but I need to work with KDS only.
I tested the project on a 30-day trial version of IAR, and it works perfectly.
Then, I tried adding the files I needed to an existing project in KDS, copying them in the correct folders and linking them to the KDS project.
The problem is the following: everything compiles... until I call some function that references the library! In that case I keep getting "undefined reference to..." errors for functions that are defined in .h/.c files that are included!
After calling, for instance, AppTCPIP_Init(), which is a function provided by the uC-TCP/IP library, I cannot compile anymore.
For example, inside "app_dhcp-c.c" there's a function call:
dhcp_status = DHCPc_ChkStatus(if_nbr, &err_dhcp);
The compiler tells me:
"Undefined reference to 'DHCPc_ChkStatus'"
DHCPc_ChkStatus() is, however, defined in "dhcp-c.h", which is included through a chain of header files. Including it directly by adding:
#include "dhcp-c.h"
does not change the situation at all.
Please help, I really need to be able to launch this project in KDS soon.
What am I missing?
Thanks a lot in advance!
Hello Francesco Raviglione ,
How about adding the lib name and path in KDS?
You can refer to the below thread to do it:
Creating and using Libraries with ARM gcc and Eclipse | MCU on Eclipse
Hope it helps,
Alice | https://community.nxp.com/thread/465738 | CC-MAIN-2018-22 | refinedweb | 293 | 74.59 |
31 January 2011 16:48 [Source: ICIS news]
LONDON (ICIS)--Brent crude on the ICE futures exchange broke above $100/bbl on Monday for the first time since October 2008 as a result of concerns that the political unrest in Egypt could lead to oil supply disruptions.
Although shipping operations through the
By 16:25 GMT, March Brent crude had hit a high of $100.05/bbl, a gain of $0.63/bbl from the previous close of $99.42/bbl, before easing back to around $99.75/bbl.
At the same time, March NYMEX light, sweet crude futures were trading around $90.10/bbl, having hit a high of $90.87 over the weekend, a gain of $1.54 | http://www.icis.com/Articles/2011/01/31/9430941/egypt-unrest-pushes-brent-crude-above-100bbl.html | CC-MAIN-2014-42 | refinedweb | 122 | 76.52 |
I have developed a docflow application that includes a process module providing document flow along different routes, depending on the user's answer. For example: action 1; if "Yes", then action 2; if "No", action 3; and so on.
The process is an oriented graph where nodes are stages and edges are connections. The simplest way to build routes is to use a graphical editor (where the user can build them with mouse moves and clicks). I was looking for a similar module in C#, but found only the Piccolo.NET framework, and it was too "hard" for my task. Therefore I decided to develop a simple graph editor module myself.
Graph Editor builds a simple oriented graph using modified Button and Panel controls (the GraphNode and GraphPanel classes). The user can add, remove, and drag nodes, add and remove connections, and mark a node as the first element. The GraphNode and GraphPanel classes allow the user to handle any event (mouse click, drag, move, etc.).
The main task was a building an oriented graph:
At first I had to choose a container (or background) for drawing my graph. It was the Panel control. It allows new controls to be added and provides autoscroll for controls positioned out of bounds. GraphPanel is a child class of the Panel control. It includes methods for node building and management. Mouse events for the GraphNode class are defined in it.
Second, I had to choose the base class for the node. I chose the Button control, but you can use any control (ListView, PictureBox, Label, etc.). I called it the GraphNode class. This class has its own fields and methods. GraphNode includes two arrays of points (a start point and an end point for every outgoing edge) and links to connected nodes. It's very easy to modify.
I use a very simple way to solve my task:
I move the Node control using an offset between the previous and current location:
//Set node offset
//Params: offset
public void SetOffset(Point offset)
{
    //Get current location
    Point p = this.Location;
    //Apply offset
    p.Offset(offset);
    //Set new location
    this.Location = p;
}
For edge drawing I use the standard Graphics methods DrawLine and DrawCurve, which draw on my GraphPanel. At first I used absolute coordinates (in the panel) for saving edges, but absolute values have a problem: when you scroll the panel, all the coordinates shift. The better way is to use coordinates relative to the current and connected nodes. For example, EdgeYes[0] is a point on the current node and EdgeYes[1] is a point on the NodeYes control. Every time the panel invalidates, all edges are redrawn using the relative coordinates.
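The relative-coordinate scheme boils down to adding each stored offset to its node's current location at paint time. A quick sketch of the idea (Python, with made-up names; this is not the article's C# code):

```python
def absolute_edge(node_pos, connected_pos, edge_rel):
    """Convert an edge stored as node-relative points into absolute
    panel coordinates. edge_rel[0] is relative to the current node,
    edge_rel[1] is relative to the connected node, so the result is
    unaffected by panel scrolling."""
    (rx0, ry0), (rx1, ry1) = edge_rel
    x0, y0 = node_pos
    x1, y1 = connected_pos
    return (x0 + rx0, y0 + ry0), (x1 + rx1, y1 + ry1)
```

Because the nodes' own locations are panel-managed, recomputing the absolute endpoints on every repaint keeps the edges attached wherever the nodes end up.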
The other problem was checking mouse events over an Edge, because an Edge is just a line drawn on the GraphPanel. I wrote a function which solves the line equation (x-x1)/(x2-x1) = (y-y1)/(y2-y1): if the left side equals the right side, then the point (x,y) lies on the line through (x1,y1) and (x2,y2).
private bool isPointIn(Point p1, Point p2, Point px)
{
    //Check the segment bounds on the X axis
    if (((px.X > p1.X) && (px.X > p2.X)) || ((px.X < p1.X) && (px.X < p2.X)))
        return false;
    //Vertical segment: the ratios below would divide by zero, so check
    //the Y bounds directly (the X check already ensured px.X == p1.X)
    if (p1.X == p2.X)
        return !(((px.Y > p1.Y) && (px.Y > p2.Y)) || ((px.Y < p1.Y) && (px.Y < p2.Y)));
    //Horizontal segment: px must share the segment's Y coordinate
    if (p1.Y == p2.Y)
        return px.Y == p1.Y;
    double r1 = (double)(px.X - p1.X) / (p2.X - p1.X);
    double r2 = (double)(px.Y - p1.Y) / (p2.Y - p1.Y);
    //if r1 == r2 then px lies on the line p1;p2
    //(rounding softens integer-pixel truncation)
    return Math.Round(r1, 1) == Math.Round(r2, 1);
}
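To sanity-check the collinearity test, here is a rough Python port of the same idea (my own translation, not the article's code), with an explicit tolerance in place of Math.Round:

```python
def is_point_on_segment(p1, p2, px, tol=0.05):
    """Return True if px lies (approximately) on the segment p1-p2.

    Solves (x-x1)/(x2-x1) == (y-y1)/(y2-y1), as in the article, with
    explicit handling of vertical and horizontal segments.
    """
    (x1, y1), (x2, y2), (x, y) = p1, p2, px
    # Bounding-box check on both axes
    if not (min(x1, x2) <= x <= max(x1, x2) and min(y1, y2) <= y <= max(y1, y2)):
        return False
    if x1 == x2 or y1 == y2:
        return True  # vertical/horizontal: the box check is sufficient
    r1 = (x - x1) / (x2 - x1)
    r2 = (y - y1) / (y2 - y1)
    return abs(r1 - r2) <= tol
```

The tolerance plays the same role as rounding to one decimal in the C# version: with integer pixel coordinates, a cursor a pixel or two off the mathematical line should still count as a hit.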
If you want to use these classes in your own application, simply include the GraphNode and GraphPanel classes, change the namespace, and you can use them.
I defined some fields in GraphPanel which help the user set up the graph:
//Edge width
public int LineWidth = 2;
//Edge color
public Color LineColor = Color.Black;
LineWidth is the edge line width, LineColor is the edge line color.
You can change the default GraphNode properties (color, size, control, and etc.) as you wish. If you want to add a popup menu on mouse right click on the node, you can define it:
public Form1()
{
InitializeComponent();
pnGraph.NodeMenu = mnuNode;
}
mnuNode.Tag contains a link to the current GraphNode object.
Mouse right click on the panel and click on the "Add" menu item.
Mouse left click and move it until mouse up.
Set mouse cursor over the edge (it becomes red) and then mouse right click.
I use serialization for saving and opening a graph, therefore in the GraphNode class, I defined the method:
public virtual void GetObjectData(SerializationInfo info, StreamingContext context) {
info.AddValue("Name",this.Name);
info.AddValue("Location", this.Location);
info.AddValue("Width", this.Width);
info.AddValue("Text", this.Text);
info.AddValue("isFirst", this.isFirst,typeof(Boolean));
info.AddValue("NodeYes", this.NodeYes,typeof(GraphNode));
info.AddValue("NodeNo", this.NodeNo, typeof(GraphNode));
info.AddValue("EdgeYes", this.EdgeYes, typeof(Point[]));
info.AddValue("EdgeNo", this.EdgeNo, typeof(Point[]));
}
Only the fields enumerated in this method are saved. This approach allows a graph to be saved to a file, an array, or a database, and loaded back from that source.
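Python has a direct analogue of this pattern: __getstate__ lets a class enumerate exactly which fields get serialized, just as GetObjectData does above. A small sketch (hypothetical class, not the article's code):

```python
import pickle

class GraphNode:
    def __init__(self, name, location):
        self.name = name
        self.location = location
        self.ui_handle = object()  # transient runtime state, not worth persisting

    def __getstate__(self):
        # Persist only the enumerated fields, like GetObjectData above
        return {"name": self.name, "location": self.location}

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.ui_handle = object()  # recreate transient state on load
```

Anything omitted from the returned dict is simply dropped on save and must be rebuilt after load, which is exactly how one keeps UI handles out of the serialized graph.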
Honestly, the source code of these classes is very simple to understand and use.
This was my first article here, so please don't judge me harshly. I hope it will be useful for somebody and help them in their projects. As for me, I have successfully used these classes in my.
Icon Constructor (Type, String)
Namespace: System.Drawing
Assembly: System.Drawing (in System.Drawing.dll)
Parameters
- type
- Type: System.Type
A Type that specifies the assembly in which to look for the resource.
- resource
- Type: System.String
The resource name to load.
This constructor creates an Icon from a resource with the name specified by the resource parameter in the assembly that contains the type specified by the type parameter.
This constructor combines the namespace of the given type together with the string name of the resource and looks for a match in the assembly manifest. For example you can pass in the Control type and Error.ico to this constructor, and it looks for a resource that is named System.Windows.Forms.Error.ico.
The following code example demonstrates how to use the Icon constructor. To run this example, paste the code into a Windows Form and handle the form's Paint event. Call the ConstructAnIconFromAType method from the Paint event handler, passing e as PaintEventArgs.
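The namespace-plus-name lookup rule described above can be expressed as a one-liner. A small sketch (Python, purely illustrative; the function name is made up):

```python
def manifest_resource_name(type_namespace, resource):
    """Mimic how Icon(Type, string) locates a resource: the namespace of
    the given type is combined with the resource name and matched against
    the assembly manifest."""
    return f"{type_namespace}.{resource}"
```

So passing the Control type (namespace System.Windows.Forms) and "Error.ico" yields the manifest name "System.Windows.Forms.Error.ico", as the documentation states.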
Implementing OAuth with GWT
I've heard about OAuth for quite some time, but never had an opportunity to implement it on a project. For a good explanation of what OAuth is, see its Introduction.
The reason I needed OAuth was to interact with the Google Contacts API. I've always hated how sites make you import all your contacts from Gmail. I wanted to develop a system that'd let you simply read your contacts from Google in real-time.
Since the application I'm working on uses GWT, I chose to implement an OAuth client in GWT. After googling for "gwt oauth", I found two examples. Unfortunately, neither worked out-of-the-box.
The best project for OAuth libraries seems to be oauth on Google Code. However, you'll notice that there is no JavaScript implementation listed on the homepage. I did look at the Java implementation, but quickly realized it wouldn't be usable in GWT. Therefore, I opted for the JavaScript implementation.
OAuth consists of several steps. The following diagram explains the authentication flow nicely.
In a nutshell, you have to complete the following steps:
- Get a token from the service provider.
- Redirect user to service provider to grant access and redirect back to application.
- Request access token to access protected resources.
- Access protected resources and pull/push data.
To access a service provider's OAuth service, you'll likely need to start by registering your application. For Google, OAuth Authentication for Web Applications is an excellent resource. Google's OAuth Playground is a great way to experiment with the Google Data APIs after you've registered.
Now that you know how OAuth works, let's look at how I implemented it with GWT. I started by adding the necessary JavaScript references to my *.gwt.xml file.
<script src="//oauth.googlecode.com/svn/code/javascript/oauth.js"/> <script src="//oauth.googlecode.com/svn/code/javascript/sha1.js"/>
Next, I needed a way to sign the request. I tried to use Sergi Mansilla's OAuth.java for this, but discovered issues with how the parameters were being written with GWT 1.6. I opted for Paul Donnelly's makeSignedRequest function instead. By adding this to my application's HTML page, I'm able to call it using the following JSNI method:
public native static String signRequest(String key, String secret, String tokenSecret, String url) /*-{ return $wnd.makeSignedRequest(key, secret, tokenSecret, url); }-*/;
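For intuition about what a signing function like this produces, here is a rough stand-alone sketch of OAuth 1.0a HMAC-SHA1 signing (Python standard library only; my own illustration, not the JavaScript implementation the post uses, and it omits details such as nonce and timestamp generation):

```python
import base64
import hashlib
import hmac
import urllib.parse

def percent_encode(s):
    # RFC 5849 percent-encoding: everything except unreserved characters
    return urllib.parse.quote(str(s), safe="-._~")

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Build the signature base string and return the HMAC-SHA1 oauth_signature."""
    pairs = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base_string = "&".join([method.upper(), percent_encode(url), percent_encode(param_str)])
    # The signing key is the consumer secret plus the token secret, "&"-joined;
    # the token secret is empty for the initial request-token call
    key = percent_encode(consumer_secret) + "&" + percent_encode(token_secret)
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Note that the token secret is part of the signing key even when it is empty, which is why the later access-token step must include the "oauth_token_secret" value when signing.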
After the URL is signed, it needs to be sent to the provider to get a request token. To do this, I used GWT's RequestBuilder and created a send() method:
protected void send(RequestCallback cb, String URL) {
    RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL);
    builder.setTimeoutMillis(10000);
    builder.setCallback(cb);
    Request req = null;
    try {
        req = builder.send();
    } catch (RequestException e) {
        cb.onError(req, e);
    }
}
If you try this with Google's Request Token URL in GWT's hosted mode, nothing will happen. Compile/browse to Safari and you'll still see nothing. Try it in Firefox and you'll see the following.
To workaround browsers' Same Origin Policy, I added a proxy servlet to send the requests. I started with Jason Edwards's ProxyServlet and modified it to fit my needs. I then registered it in both *.gwt.xml and web.xml.
<servlet path="/google/" class="org.appfuse.gwt.servlet.AlternateHostProxyServlet"/>
Now, before calling the send() method, I replace the start of the URL so the request would be routed through the servlet.
public void getToken(RequestCallback cb) {
    String url = signRequest(provider.getConsumerKey(), provider.getConsumerSecret(),
                             "", provider.getRequestTokenURL());
    // route the request through the proxy servlet
    url = url.replace("", "/google/");
    send(cb, url);
}
When the request returns, I create two cookies by calling a createOAuthCookies() method with the payload returned.
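The payload is an ordinary form-encoded string such as oauth_token=...&oauth_token_secret=..., so the parsing step amounts to something like this (a Python sketch of the idea; the real createOAuthCookies() also writes the values as browser cookies):

```python
from urllib.parse import parse_qs

def create_oauth_cookies(payload):
    """Split a token response like 'oauth_token=abc&oauth_token_secret=xyz'
    into name/value pairs, one per cookie."""
    return {name: values[0] for name, values in parse_qs(payload).items()}
```

Storing both values matters: the oauth_token goes into subsequent request URLs, while the oauth_token_secret feeds into the signing key.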
The next step is to authorize the token. This is where things got tricky with my proxy servlet and I had to add some special logic for GWT. Google was sending back a 302 with a Location header, but it wasn't hitting the onResponseReceived() method in my callback. For this reason, I had to change it to a 200 status code and add the redirect location to the body. I also discovered that sometimes they'd return an HTML page with a
<meta http-equiv="refresh" ...> tag. When using Twitter, I discovered the full HTML for the allow/deny page was returned.
Below is the callback I'm using. WindowUtils is a class I got from Robert Hanson and the gwt-widget project.
The 3rd step is to get an access token. The most important thing to remember when you do this is to include the "oauth_token_secret" value when signing the request.
signRequest(provider.getConsumerKey(), provider.getConsumerSecret(), getAuthTokenSecret(), url);
The class below extends DefaultRequest, which contains the send() method as well as utility methods to get the cookie values of the oauth tokens.
public class ContactsRequest extends DefaultRequest {
    private static final String GOOGLE_CONTACTS_URL = "";
    private OAuthProvider provider;

    public ContactsRequest(OAuthProvider provider) {
        this.provider = provider;
    }

    public void getContacts(RequestCallback cb) {
        String url = GOOGLE_CONTACTS_URL.replace("$1", getAuthToken());
        url = signRequest(provider.getConsumerKey(), provider.getConsumerSecret(),
                          getAuthTokenSecret(), url);
        String proxiedURLPrefix = "/contacts/";
        // allow for deploying at /gwt-oauth context
        if (WindowUtils.getLocation().getPath().contains("gwt-oauth")) {
            proxiedURLPrefix = "/gwt-oauth" + proxiedURLPrefix;
        }
        url = url.replace("", proxiedURLPrefix);
        send(cb, url);
    }
}
If all goes well, the response contains the data you requested and it's used to populate a textarea (at least in this demo application). Of course, additional processing needs to occur to parse/format this data into something useful.
This all sounds pretty useful for GWT applications, right? I believe it does - but only if it works consistently. I sent a message to the OAuth Google Group explaining the issues I've had.
I received a response with a cleaner makeSignedRequest().
To make it easier to create a robust example of GWT and OAuth, I created a gwt-oauth project you can download or view online.
Oracle on Sun Cluster
By mkb on Oct 16, 2006
Oracle is by far and away the most popular service running on Sun Cluster 3.x . Sun Cluster supports highly available (HA) Oracle, Oracle Parallel Server (OPS) and Oracle Real Application Cluster ( RAC ) giving users a very wide choice. Here it is the breadth of release, operating system and platform coverage that drives its appeal.
The HA Oracle agent on SPARC supports a long list of Oracle releases from 8.1.6.x on Solaris 8 to 10.2.0.x on Solaris 10 and numerous options in between. Additionally, the Sun Cluster 3.1u4 for (64 bit) x86 HA Oracle agent supports Oracle 10g R1 (32 bit) and 10g R2 (64 bit).
The parallel database coverage is similarly extensive with the SPARC platform supporting a broad set of volume manager (Solaris Volume Manager and Veritas Volume Manager) and Oracle releases from 8.1.7 up to 10.2.0.x. In addition Oracle 10g R2 (10.2.0.x) is also supported on the 64 bit x86 platform.
There are also a wide set of Oracle data storage options: raw disk, highly available local file systems and global file systems for HA Oracle; raw disk or network attached storage for Oracle OPS and raw disk, network attached storage or shared QFS file system for Oracle RAC.
But why even mention that Sun support these releases, why don't Sun support all releases in every hardware and software combination? The answer is that high availability is Sun Cluster's number one goal and achieving this doesn't happen by accident. It demands careful design and implementation of the software using extensive peer review of all code changes, followed by extremely thorough testing.
Having only joined the engineering group in the last year or so, I was staggered by the sheer volume of testing that is actually performed. It was also encouraging to see how close the engineering relationship was with Oracle too. For the recent release of Oracle 10g R2 on 64 bit x86 Solaris, the team I work with performed numerous Oracle designed tests on the product. These checked the installation process, its 'flexing' capability, i.e. adding or removing nodes, and its co-existence with previous releases, each for the various types of storage option. These tests numbered in the 100s and often required re-tests if bugs were found and these were just the Oracle mandated tests. In addition the Sun Cluster QA performed extensive load and fault injection tests.
It's these latter two items that set Sun Cluster apart in the robustness stakes. What makes an insurance policy worth the investment is the degree of confidence the user has that it will 'do the right thing' when a failure occurs. When system is sick or under load, user land processes often don't respond or may only respond after a long delay. It may also be difficult to determine whether other cluster nodes are alive or dead. Here, Sun Cluster comes into its own; the kernel based membership monitor very quickly determines whether cluster nodes are alive or not and takes action, i.e. failure fencing, to ensure that failed or failing nodes do not corrupt critical customer data.
By using automated test harnesses, Sun Cluster's Quality Assurance (QA) team are able to simulate a wide variety of fault conditions, e.g. killing critical processes or aborting nodes. These can be performed repeatably at any point during the test cycle. Faults are also injected even while the cluster is recovering from previous faults. In addition, the QA team perform a comprehensive set of manual, physical fault injections, such as disconnecting network cables and storage connections. All of this helps ensure that the cluster survives and continues to provide service, even in the event of cascading failures, and under extreme load.
This level of "certification", rather than simple functional regression testing, means that Sun Cluster has the capability to achieve levels of service availability that competing products may struggle to match.
Tim Read
Staff Engineer
Posted by Gustavo on April 11, 2007 at 03:01 AM PDT #
Posted by Tim Read on April 23, 2007 at 06:49 PM PDT #
Is there a nice easy way to determine what versions of Oracle are supported with which versions of Sun Cluster and Solaris ?
Thanks
Mick
Posted by Mick Scott on November 06, 2007 at 10:25 AM PST #
There are two sides to this question: there is what Oracle support and there is what Sun support. The two don't always entirely coincide.
Support for Oracle's products is primarily Oracle's concern. Their support matrix is given on MetaLink site (). Sun work with Oracle to certify these products on Solaris Cluster, but once we (Sun) have certified a combination we don't usually withdraw support for it. Consequently, Sun's matrix is usually a superset of Oracle's.
I'm not sure if that qualifies as an easy way to find out whether your combination is supported or not!
Tim
---
Posted by Tim Read on November 06, 2007 at 05:35 PM PST #
I recently read the documenation, "Installation Guide for Solaris Cluster 3.2 SW and Oracle 10g Rel2 RAC. My project is in the process of setting up a RAC environment and we're using both Sun cluster and Oracle clusterware. I would like to know which cluster controls the VIPs, Sun cluster or Oracle Clusterware?
Posted by Dawkins on February 04, 2008 at 10:27 PM PST #
Oracle clusterware controls the Oracle VIP resources.
Posted by Tim Read on February 04, 2008 at 11:09 PM PST #
Tim, thanks for you response. That means that we will need to unregister the vips from Sun cluster and only place the vips in the /etc/hosts and register them in the DNS. When we get to the point during the Oracle Clusterware install we will enter the Vips and Oracle will configure/activate them at that time?
Posted by Dawkins on February 04, 2008 at 11:15 PM PST #
Correct. As you are using Oracle 10g RAC, Oracle Clusterware itself controls all its own Oracle related resources: VIPs, listeners, database instances, services, ASM, etc, etc. Solaris Cluster works with Oracle Clusterware to provide the necessary infrastructure support: DIDs, membership and fencing, clprivnet, knowledge of storage status, etc, etc.
So yes, you are correct. Put the VIPs in /etc/hosts and register in DNS (if required). Then supply them when installing Oracle RAC. Make sure that when you come to the private networks that you \*only\* choose clprivnet. All others should be unused or public.
Hope that clarifies things.
Tim
---
Posted by Tim Read on February 04, 2008 at 11:58 PM PST #
Tim,
Have you seen environments using the combination of sun cluster and oracle clusterware hosting multiple databases with their own VIPs? If so, how were the additional vips registered with the oracle clusterware?
Posted by Dawkins on February 05, 2008 at 10:16 PM PST #
So you want a VIP per database instance? If so, it's not something I've tried. I don't know if it is done by customers either, though I can't see why it shouldn't be possible. I would expect you would just register the additional VIPs using crs_register. Furthermore, I would expect that to be documented in the Oracle manuals.
I think this is really a question of the capabilities of Oracle Clusterware rather than Solaris Cluster. We certainly don't restrict what Clusterware can do.
Just curious - why are you going for separate VIPs and not using separate ports on the VIP to control access to the databases? I would have thought you could set up various listeners on different ports and have suitable TNS name entries to map to them.
Tim
---
Posted by Tim Read on February 05, 2008 at 10:37 PM PST #
I am currently waiting on Oracle Support to respond to me. The databases are controlled by different contractors and will run on different ports/listeners, this is why we want to use separate VIPs. Scenario, If the (3) DB instances are using the same VIP and Instance2 goes down, what happens to Instance1 & 3 when the Clusterware failover the VIP to node2? That's our concern. If we're using diff VIPs and Instance2 goes down, then the clusterware will only failover the vip associated with Instance2 to the other node leaving instance1 & 3 alone.
Posted by Dawkins on February 05, 2008 at 11:50 PM PST #
Why should the instance failing cause the VIP to migrate? Normally there is no dependency of the VIP on the instance!
Certainly the listener resource depends on the VIP and without the listener the database is inaccessible.
If you use "crs_stat -p <resource>" you can see its properties.
Tim
---
Posted by Tim Read on February 06, 2008 at 12:28 AM PST #
The Oracle documentation states that the clusterware will move the VIP over to the available node.
Posted by Dawkins on February 07, 2008 at 02:51 AM PST #
Sorry, I'm a bit dim. I can't find that in the documentation. Could you send me a point to the relevant section of the docs?
The only thing I could find was that the VIP would fail-over if a node failed and that was to allow a rapid "connection refused" (see)
Tim
---
Posted by Tim Read on February 07, 2008 at 05:26 PM PST #
What is the maximum number of nodes that are supported for an Oracle 10g RAC cluster? I am finding that this number depends on your storage array, whereas VCS just flatly says 32 nodes regardless of what infrastructure you run it on.
Posted by John Franklin on May 23, 2008 at 04:27 AM PDT #
Sun Cluster supports up to 16 nodes for SPARC Solaris RAC 10g and up to 8 nodes for Solaris x64 RAC 10g. See the Oracle certification page for the certified storage management and associated node counts:
Posted by Gia-Khanh on May 23, 2008 at 05:17 AM PDT #
That is why I was confused, because looking at the Sun Cluster Open Storage certification numbers, the numbers are actually quite a bit lower, i.e., only 4 nodes is the max if you are running on Hitachi storage for the T2000s that I have.
Posted by john franklin on May 23, 2008 at 05:46 AM PDT #
The certified configurations information given earlier is more from a SW compatibility point of view. For a given choice of HW components the supported node count could be lower. Quoting another SC OSP config, 16 nodes (including T2000s) are supported for certain EMC storage products:
Look down the end of the table.
Posted by Gia-Khanh on May 23, 2008 at 12:28 PM PDT #
Ok, then it looks like the number of nodes supported is dependent on the storage array being used. My understanding is this is due to persistent group reservation support across the arrays. Veritas supports SCSI-3 reservations, so I guess that is why, as long as you run on their list of supported arrays, you can do the max of 32 nodes, whereas Sun ranges all the way from 2 nodes to 64 nodes depending on the PGR support in the array.
Posted by John Franklin on May 27, 2008 at 02:21 AM PDT #
Hi John,
To address your question, "the number of supported nodes is dependent on the storage arrays being used": this is driven by the business requirements of each vendor. Some vendors believe that their customer base will use no more than (4) node connectivity, and will therefore only test up to that maximum number. Others may opt for (6) or (8) and decide this is what is best for their business. Technically the capability is there to go much higher, but we let the business dictate how much resourcing will be applied to a particular certification. I hope this addresses your question.
Roger Autrand
Senior Manager
Solaris Cluster Open Storage Program
Posted by Roger Autrand on June 03, 2008 at 10:44 PM PDT #
Thanks for the response Roger. This helps. So what you are saying is that it could be possible to go higher than what is stated for max nodes in the Open Storage docs for a particular storage array. It is just that the max node count was the highest that vendor had tested with based on thier customer's assumptions.
Posted by John Franklin on June 04, 2008 at 12:42 AM PDT #
John,
Yes, you are correct !
Roger
Posted by roger Autrand on June 04, 2008 at 12:54 AM PDT #
Hi,
I have had trouble designing an Oracle RAC setup that avoids issues in the RAC interconnect. In the case of two nodes, it's obvious that Solaris 10 link aggregation is the solution, but with more than two nodes, IPMP is only half a solution, so I'm considering reviewing the design. I found that the technology used for the private interconnect in Sun Cluster 3.x is interesting for more than two nodes, since it gives us a virtual interface name, which would solve my issues with the Oracle interconnect.
But more than a year ago I decided to avoid the use of Sun Cluster, because it's not a free product and it adds more configuration/administration, which for the customer means COMPLEXITY.
I'm wondering if the package used for the cluster private interconnect could be installed alone, without installing the other packages from Sun Cluster?
This may help me; on the net I didn't find anyone who has tried this solution!
Any comments?
Regards.
Posted by Mourad on November 06, 2008 at 01:38 AM PST #
You cannot install just the Solaris Cluster private interconnect functionality only. So unfortunately, that's not an option.
You could argue that Oracle RAC adds complexity compared with HA Oracle or standalone Oracle, but because it has functionality that you need, you use it. I would suggest that the same is true of Solaris Cluster. It may add a small amount of complexity, but the benefits are substantial. It's not just the private interconnects, it's the consistent (automatically maintained) global device namespace, the support for volume managers and shared QFS file system, etc.
For more details on the benefits, see our whitepaper on the subject ()
As for cost, well I think you'll find that the Solaris Cluster and RAC agent licenses are pretty reasonably priced, but I'll agree, they are not zero.
Tim
---
Posted by Tim Read on November 06, 2008 at 01:56 AM PST #
Hi Tim,
First thank you for your quick reply.
Yes I know about the benifits, I read last year the document you supplied me, and during a flight.
But I can't decide on the cost, I just want to solve the issue on the interconnect, till know every thing is free on the Solaris 10 (aggregation is not free on the Solaris 8/9).
Is there another option which can do what the clprvnet ?
Which means a virtual interface for two or more real interfaces connected to more than one ethernet switch.
PS : even installing the clprvnet alone is not free ?? kidding.
Regards.
Posted by Mourad on November 06, 2008 at 02:10 AM PST #
There is no other option (that I know of) that gives you exactly the same functionality as clprivnet - that's why we added it to Solaris Cluster.
Tim
---
Posted by Tim Read on November 06, 2008 at 07:10 PM PST #
Hi people,
I recently set up an Oracle RAC 10g on a SUN Cluster 3.2 for a Certification Authority following the "Installation Guide for Solaris(TM) Cluster 3.2 Software and Oracle(R) 10g Release 2 Real Application Clusters" by Fernando Castano (and succeeded).
My question is: Is the resource group and resources setup pointed out in the document sufficient? I am storing my db files on an SVM managed multi-owner metaset, i.e. I do have the resource group "rac-framework-rg" and the three resources "rac-framework-rs", "rac-udlm-rs", and "rac-svm-rs".
To be more concrete: Do I need an additional HA StoragePlus resource, which ensures that the SVM mount point is really "there" before the RAC "monster" is started?
Cheers, Ingo.
Posted by Ingo Kubbilun on February 27, 2009 at 02:23 AM PST #
Ingo,
I'm not quite sure what your phrase:
'the SVM mount point is really "there" ...'
means. An SVM disk set doesn't have a mount point, only a file system has a mount point. If you mean how do you ensure that the SVM disk set is imported, then that function is performed by the rac-svm-rs.
I'm assuming that:
'storing my db files on an SVM managed multi-owner metaset...'
means that you have a shared QFS file system. If so, there should be a couple of other Sun Cluster resources configured including a QFS metadata server resource and a scalable mount point resource. These can be created using the RAC configuration wizard in clsetup or via the Solaris Cluster Manager GUI.
If you need clarification of any of this, please post further questions or email me directly.
Tim Read
Solaris Availability Engineering
Posted by Tim Read on March 01, 2009 at 05:07 PM PST #
Dear Tim,
sorry for the confusing email; it was already a little bit tired.
No, I do not deploy QFS. I concatenated two LUNs of my SUN StorEdge to one entity using SVM. It can be mounted on /global/oraracdata (fs is UFS).
Maybe I misunderstood the HAStoragePlus resource type: I thought that another resource of type HAStoragePlus with the "FileSystemMountPoints=/global/oraracdata" is needed to ensure that it is mounted before the RAC group may become operational?
Am I wrong?
Thanks in adavnce and kind regards, Ingo.
Posted by Ingo Kubbilun on March 01, 2009 at 05:26 PM PST #
Ingo,
You cannot put UFS or VxFS on a multi-owner diskset. Furthermore, you cannot install Oracle RAC data files on UFS or VxFS file systems mounted globally. Only shared QFS is supported for Oracle RAC data.
Your options for storing various Oracle RAC structures are given in table 1-2 (page 22) of the "Sun Cluster Data Service for Oracle RAC Guide for Solaris OS"
I hope that helps. If not, please post again.
Tim Read
Solaris Availability Engineering
Posted by Tim Read on March 01, 2009 at 05:47 PM PST # | https://blogs.oracle.com/SC/entry/oracle_on_sun_cluster | CC-MAIN-2015-22 | refinedweb | 3,116 | 60.14 |
Share.
Before start with the application, I just wanted to let you know about the different type of integration model that are available Silverlight and SharePoint Integration. We can display a Silverlight application inside SharePoint application either as Html, Iframe, Host as web parts or use Object Model or services with Silverlight Application etc. But there is no such hard and fast classification types on the same . I found this great classification from the Designing Enterprise Corporate Web Sites using SharePoint 2010 Presentation by Paul Stubbs. These classifications are
1. No Touch : By no touch means, there is no direct integration with SharePoint and Silverlight. You may have some different Silverlight web application and you are showing then inside Share Point using Iframe. You can use this kind of scenarios when you have some existing web sites which is based on silver light and you want to show it inside your Sharepoint application.
2. Low Touch : Low Touch does a bit more interaction with SharePoint. This is nothing but hosting a Silverlight Application with in SharePoint site using SharePoint Out-of-the-Box Silverlight Web Parts. That application is an independent application which may call some other services apart from SharePoint API.
In one of my previous blog post I have explained about Bing Maps Silverlight Control Integration with SharePoint 2010 – Integration of Silverlight 4 with SharePoint 2010 .
Which is an example of Low Touch Integration of Silverlight and SharePoint 2010
High Touch : High touch integration is nothing but using the power of using SharePoint Object model. Where we can use either point Client Object model or Web services to read and write information from SharePoint Server. We can use any kind of client application like Silverlight, ASP.NET or Win forms, WPF even JavaScript. Below diagram showing Silverlight as an Client Application.
Now, I am going to show a complete demonstration on High Touch Integration of Silverlight and SharePoint application using Silverlight Client Object Model. I will be describe below three scenarios in this application using Client Object Model
1. In Browser Silverlight Application
2. Out of Browser (OOB) Silverlight Application
3. Host XAP File as Silverlight Web Part inside SharePoint using OOB (Out-Of-The-Box) Silverlight Web parts
So, let’s start by Creating a Silverlight Application , Open Visual Studio > New > Project . Select Silverlight Project Template and Select .NET Framework 4.0 . Give the Project Name as “SilverlightSPIntegration” .
Once you click on “OK” button, you will get another popup window automatically which will ask permission to create a web application which will host your silverlight application in a ASP.NET web application.”
Once you have done with adding reference, you will get the below Solution structure for your Silverlight Application
Now, before going forward do design the XAML for Silverlight application, let’s finished the work at SharePoint site. As I have already said, In this application we are going to read some data from SharePoint Inbuilt “Task” List. So, Open the SharePoint Site and navigate to Task List from Quick Launch.
This is exact default task list for SharePoint. You can use any of the list, even your custom created list. But based on you have to design UI and code for the same. Now, Enter some dummy data in to task list.
Below is the list of dummy data that I have added inside Task List .
As of now we are done with the SharePoint part. Now we have to create a custom UI using Silverlight to show these data in a Silverlight application . Again, the demo is kinda simple, but objective is to see how the integration things works with Object Model.
Go back to your visual studio Silverlight solution and design a simple screen with some line of XAML code.
To give a quick style, I did some color customization using Expression Blend.
Save the project from Expression Blend, and Open the Silverlight application from Visual Studio. It will ask for Reload the application as it has been modified out side of environment.
Below is the XAML code block for Silverlight UI
<!-- <Setter Property="MinWidth" Value="150" /> <Setter Property="MaxWidth" Value="150" /> <Setter Property="HorizontalAlignment" Value="Left" /> --> <!-- <Setter Property="HorizontalAlignment" Value="Right" /> <Setter Property="Foreground" Value="Orange" /> -->
Now you are almost done with the Designing with the application. At this point you can press “F5” to run the application to check how your UI looks like.
Above screen is your designed Silverlight application which is hosted inside ASP.NET Web application. Now, it’s time to use SharePoint Client Object Model to read the data from SharePoint Task List.
As we are going to read the information of List of Class, so first create a Task type class which will be the place holder for Task. Below is code snippet for Task Class.
public class Tasks { public string Title { get; set; } public string DueDate { get; set; } public string Status { get; set; } public string Priority { get; set; } public double PercentComplete { get; set; } }
Now, Open MainPage.XAML.CS file and first add the below name spaces
using Microsoft.SharePoint.Client;
When you are done the adding namespace you are ready to use OM API’s.
Below is the sample code block to get the lists Instance for an given SharePoint site.
/// /// Handles the Loaded event of the MainPage control. /// /// The source of the event. /// The instance containing the event data. void MainPage_Loaded(object sender, RoutedEventArgs e) { using (ClientContext SharePointContext = new ClientContext(this.SPWebSiteURL)) { this.query = new CamlQuery(); SharePointContext.Load(SharePointContext.Web); this.task = SharePointContext.Web.Lists.GetByTitle("Tasks"); this.strQuery = @" "; query.ViewXml = this.strQuery; this.taskLists = this.task.GetItems(this.query); SharePointContext.Load(this.taskLists); SharePointContext.ExecuteQueryAsync(this.OnSiteLoadSuccess, this.OnSiteLoadFailure); } }
SharePointContext.ExecuteQueryAsync method execution is asynchronous. This method used two different call back event handler for success and failure of the context operation.
As per my application If the asynchronous execution successed, it will invoked OnSiteLoadSuccess otherwise OnSiteLoadFailure. Below is the sample code snippet for those two methods.
/// /// Called when [site load success]. /// /// The sender. /// The instance containing the event data. private void OnSiteLoadSuccess(object sender, ClientRequestSucceededEventArgs e) { foreach (ListItem item in this.taskLists) { Tasks objTask = new Tasks(); objTask.Title = item["Title"].ToString(); objTask.DueDate = item["DueDate"].ToString(); objTask.Status = item["Status"].ToString(); objTask.Priority = item["Priority"].ToString(); double fraction = Convert.ToDouble(item["PercentComplete"]); objTask.PercentComplete = fraction * 100; this.SharePointTasks.Add(objTask); } AddTaskList( this.SharePointTasks); } /// /// Called when [site load failure]. /// /// The sender. /// The instance containing the event data. private void OnSiteLoadFailure(object sender, ClientRequestFailedEventArgs e) { MessageBox.Show(e.Message + e.StackTrace); }
Now we have to create a delegate to update the UI using Dispatcher.BeginInvoke to avoid the cross thread exception.
/// /// Adds the task list. /// private void AddTaskList() { this.Dispatcher.BeginInvoke(new UpdateSilverLightUI(this.AddItemsToLists), this.SharePointTasks); } /// /// Updates the UI. /// /// The tasks. private void AddItemsToLists(List tasks) { foreach (Tasks t in tasks) { TaskList.Items.Add(t.Title); } }
Now all tasks has been added to Silverlight Task List , after that we have to handled the Listbox Selection changed Event and showing the proper value to different text boxes.
/// /// Handles the SelectionChanged event of the listBox1 control. /// /// The source of the event. /// The instance containing the event data. private void TaskList_SelectionChanged(object sender, SelectionChangedEventArgs e) { var v = this.SharePointTasks.FirstOrDefault(item => item.Title.Equals(TaskList.SelectedItem.ToString())); if (v != null) { txtTitle.Text = v.Title; txtDueDate.Text = v.DueDate; txtPriority.Text = v.Priority; txtStatus.Text = v.Status; txtPercentage.Value = v.PercentComplete; } }.
Press F5 and run your application. Yes, Below screen will appear with exact same set of sharepoint task list data that we have entered earlier in SharePoint Lists. You can click an of the Task from the list and get the details.
This is all about the first level of integration of Silverlight and SharePoint where you can host the application inside a ASP.NET Web sites separately.
Now, let’s make it more interesting by creating this application as Out Of Browser (OOB) Silverlight Application. This is just kind of small configuration setting to convert this as OOB.
Right Click on Silverlight Application > Properties and Select the “Enable running application out of the Browser” check box as shown in below image.
To do some more configuration, Click on “Out-of-Browser Settings…” . You can change the height as width and display names and click on OK.
After the OOB settings, run the application, you will find your application is ruining as a windows application as shown below.
Till now I have discussed about the In Browser Silverlight and OOB Silverlight application integration with SharePoint.
Now I am going to describe how you can host this Silverlight Application Inside SharePoint as a SharePoint Web Parts. Hosting Silverlight XAP file as SharePoint site is similar as I have already discussed in my Bing Map Control article
First create one sample SharePoint 2010 Project under same solution. Select “Empty Share Point Project“
Give the Project name and click on “OK“. The next screen will appear as below.
This will ask you for the site location and trust level type. You can go with the default selection. Click on “OK”. You are done with the SharePoint Project Creation.
Now you have to add one SharePoint Module with this project. diagram.
Keep in mind, you need to select the Deployment Type as “ElementFile” and Project name should be the name of your “Silverlight Solution”. Click on the OK. Now go to “Element.XML” File under the SharePoint Module. Verify the path for the Silverlight “.xap” File. You need to provide the same url while you will add the Silverlight Web part inside SharePoint Site.
Now, you are done with SharePoint Project. Build and Deploy it. Just right click on the SharePoint Project and select Deploy. You are done with the deployment.
Host Silverlight Web Part to SharePoint Site
Hosting of Silverlight Web Part is quite similar that we generally used for our normal web parts. First of all you need to open the SharePoint Page in Edit Mode, Go To Insert and Select Web Part. Under the category section select “Media and Content” category and select SharePoint Out-Of-The-Box “Silverlight Web Part” from Web Part . Click on Add.
You will see one AJAX Popup will apear and you need to input the url for the Silverlight Application Package (xap) file. Provide the url which you have already specified in Element. XML file inside the SharePoint Module.
Click on OK. You will get the below screen after successfully adding Silverlight Web Parts.
You are done. Save the page and Browse. :) Sometimes you may get this below error screen after providing the XAP file path.
For that, you have to make sure you have given the valid XAP deployed path. If the path is valid, you have to check the Corresponding features is activated or not. You can check that from Site Actions > Site Settings
If you check the below screen where I have shown both the SharePoint Task List and Silverlight Task Control in a single Page.
If you add any new task in you task list that will also be reflected in your Silverlight Task Control.
If you want to avoid this long process of hosting just upload the XAP File in a document library provide the XAP file path in Silverlight Web Part.
You can also create custom web part with this Silverlight Control in SharePoint 2010 Project and then deploy the web part.
Summary : In this blog post I have explained what are the different type of integration can be done with Silverlight and SharePoint 2010. As a example I have shown how we can create a custom Silverlight Task Control using SharePoint 2010 Client Object Model. I have also described the different way to hosting the Silverlight XAP file inside SharePoint.
Finally, Thanks to Paul for his great presentation on Designing Enterprise Corporate Web Sites using SharePoint 2010 which helped a to learned about the different classification integration mode of Silverlight and share Point and I would like to thanks to Chakkaradeep for his excelent intro article on SharePoint 2010: Introducing the Client Object Model . Thanks Guys !
I hope this will help you !
October 9, 2010 at 10:56 pm
Awesome post Abhijit. You are now getting expertise in Silverlight too. Good to see that.
Keep it up man. You are rocking… :)
Regards,
Kunal
October 10, 2010 at 8:25 pm
Yes Kunal !!
October 9, 2010 at 11:09 pm
Wow, seems I should try some HiTouch apps in Silverlight. I would consider Sharepoint to be the most efficient data services from my silverlight client I guess. :)
Good work.
Keep it up.
October 10, 2010 at 8:25 pm
Thanks Abhishek !
October 11, 2010 at 12:40 pm
Excellent One..!
October 12, 2010 at 3:04 pm
Thanks Sashidhar
October 12, 2010 at 2:00 pm
Very cool, thanks!
October 12, 2010 at 3:03 pm
Thanks !
October 14, 2010 at 10:44 am
amazing!
December 3, 2010 at 6:45 am
I am very impress your this post, this is too much interesting. Thanks a lot…
July 14, 2012 at 12:03 am
awesome work. | http://abhijitjana.net/2010/10/09/silverlight-task-control-for-sharepoint-2010-example-of-high-touch-integration/ | CC-MAIN-2015-40 | refinedweb | 2,181 | 58.08 |
unavailable for new accesses, immediately disconnect the filesystem and all filesystems mounted below it from each other and from the mount table, and actually perform the unmount when the mount ceases to be busy.
- MNT_EXPIRE (since Linux 2.6.8)
- Mark the mount as expired. If a mount is not currently in use, then an initial call to umount2() with this flag fails with the error EAGAIN, but marks the mount as expired. The mount remains expired as long as it isn't accessed by any process. A second umount2() call specifying MNT_EXPIRE unmounts an expired mount. to indicate the error.
- target is locked; see mount_namespaces(7).
-MNT_DETACH and MNT_EXPIRE are available in glibc since version 2.11.
CONFORMING TOThese functions are Linux-specific and should not be used in programs intended to be portable.
NOTES
umount() and shared mountsShared mounts cause any mount activity on a mount, including umount() operations, to be forwarded to every shared mount may be remounted using a mount(2) call with a mount_flags argument that includes both MS_REC and MS_PRIVATE prior to umount() being called. | https://man.archlinux.org/man/umount2.2.en | CC-MAIN-2022-27 | refinedweb | 181 | 63.9 |
Today, Aaron L. shares the tale of an innocent little network mapping program that killed itself with its own thoroughness:
I was hired to take over development on a network topology mapper that came from an acquisition. The product did not work except in small test environments. Every customer demo was a failure.
The code below was used to determine if two ports on two different switches are connected. This process was repeated for every switch in the network. As the number of switches, ports, and MAC addresses increased the run time of the product went up exponentially and typically crashed with an array index out of bounds exception. The code below is neatly presented, the actual code took me over a day of repeatedly saying "WTF?" before I realized the original programmer had no idea what a Map or Set or List was. But after eliminating the arrays the flawed matching algorithm was still there and so shortly all of the acquired code was thrown away and the mapper was re-written from scratch with more efficient ways of connecting switches.
public class Switch { Array[] allMACs = new Array[numMACs]; Array[] portIndexes = new Array[numPorts]; Array[] ports = new Array[numPorts]; public void load() { // load allMACs by reading switch via SNMP // psuedo code to avoid lots of irrelevant SNMP code int portCounter = 0; int macCounter = 0; for each port { ports[portCounter] = port; portIndexes[portCounter] = macCounter; for each MAC on port { allMACs[macCounter++] = MAC; } } } public Array[] getMACsForPort(int port) { int startIndex; int endIndex; for (int ictr = 0; ictr < ports.length; ictr++) { if (ports[ictr] == port) { startIndex = portIndexes[ictr]; endIndex = portIndexes[ictr + 1]; } } Array[] portMACS = new Array[endIndex - startIndex]; int pctr = 0; for (int ictr = startIndex; ictr < endIndex - 1; ictr++) { portMACS[pctr++] = allMACs[ictr]; } return(portMACS); } } ... for every switch in the network { for every other switch in the network { for every port on switch { Array[] switchPortMACs = Switch.getMACsForPort(port); for every port on other switch { Array[] otherSwitchPortMACs = OtherSwitch.getMACsForPort(other port); if (intersect switchPortMACs with otherSwitchPortMACs == true) { connect switch.port with otherSwitch.port; } } } } }
| https://thedailywtf.com/articles/mapping-every-possibility | CC-MAIN-2018-09 | refinedweb | 340 | 51.28 |
Add() class Tally(object): def __init__(self): pass def add_tally(self,sender=None): self.ty = ui.Label() self.ty.frame = (0, 0, w, h*0.25) self.ty.bg_color = 'yellow' self.sv = ui.load_view()['scrollview1'] self.sv.add_subview(self.ty) t = Tally() t.add_tally() tc = ui.load_view('Tally Counter') tc.present(hide_title_bar=True)
I'm confused. you are presenting
tc, not
t.sv!
t.svhas never been added to any view that you are presenting..... in order for a view to show up, it must be a subview of a presented view.
I tried to change your code as little as possible. But still would not write it like this. So, the coding is not correct, only in line with what you asked about.
import ui w,h = ui.get_screen_size() class Tally(ui.View): def __init__(self): pass def add_tally(self): self.ty = ui.Label() self.ty.frame = (0, 0, w, h*0.25) self.ty.bg_color = 'yellow' #self.sv = ui.load_view()['scrollview1'] self.sv = ui.ScrollView(frame = (0,0,w,h)) self.sv.add_subview(self.ty) self.add_subview(self.sv) t = Tally() t.add_tally() t.present() #tc = ui.load_view('Tally Counter') #tc.present(hide_title_bar=True)
I guess I get confused when adding objects to UI editor objects. Your example does it all programmatically which is fine.
@donnieh, I also had a lot of problems at first also. I was relying on the ui designer thinking it was easier. But a short time doing it programmatically, you won't see the point of doing it in the ui designer. Well that's my opinion anyway. The ui designer is great to get going with and look at the properties etc. I just found its actually a lot more work now. But again, it really help me in the start to get my head around things. I still have a long way to go :)
@donnieh. By getting involved here on the forum and trying to help as I have been helped so many times, I learnt something very cool from you today. Maybe a lot of people just know it, but I didn't. It was your method (self, sender = None). I thought that was really crazy. But today while I was coding, I had a small problem about reusing an already defined method in my custom class from a callback function. The defined method didn't take additional args other than self. But if I call that method from a callback it will fail, silently. For the callback to work it requires (self, sender). I guess I could have just passed None to sender when I called it as a direct method (I hadn't thought of that also), but that would be messy and confusing. Best option was to definfe as (self, sender =None). Nice and clean. Yes, I am sure for a lot of people easy. But I was happy to get it.
import ui class test(ui.View): def __init__(self): # create a simple btn in the view # sets it callback function self.btn = ui.Button(title = 'Ok') self.btn.action = self.do_something_stupid self.add_subview(self.btn) self.style() def style(self): # the view style self.background_color = 'white' # the button style btn = self.btn btn.border_width = 1 btn.background_color = 'red' btn.tint_color = 'white' btn.font = ('<system>', 10) def layout(self): btn = self.btn btn.x = btn.y = 100 btn.width = 200 btn.height = 128 ''' i didnt understand why you were making sender = None on your previous example. but today when i was coding i wanted to call a function in my class from a callback. i was about to write a wrapper when i remembered your syntax. so nice, i would not have thought of it otherwise. if only one function you could say so what, but as the projects get larger is so important. ''' def do_something_stupid(self, sender= None ): ''' the stupid action is to increase the size of the font by 5, each time the button is pressed or is called as a method but because sender = None, a callback can use it or can be called as a normal method of the class. of course, not smart to reference sender in this case(unless you test for a sender). just good for calling code with logic that does not care about the sender ''' self.btn.font = (self.btn.font[0], self.btn.font[1] + 5) if __name__ == '__main__': x = test() x.present('sheet') for i in range(10): x.do_something_stupid() print x
Yes, every time I start a project in the UI editor I end up re-doing the entire UI in my code eventually. I think I just like the editor because it is cool.
As for the sender=None, I probably use it more than I should. Ha | https://forum.omz-software.com/topic/1861/add-label-to-scroll-view-in-ui-editor | CC-MAIN-2018-39 | refinedweb | 795 | 78.35 |
This chapter describes the schema objects that you use in the Oracle Database Java environment and the utilities that manage them.
Unlike a conventional Java virtual machine (JVM), which compiles and loads Java files, the Oracle JVM compiles and loads schema objects. To make a class file runnable by the Oracle JVM, you must use the
loadjava tool to create a Java class schema object from the class file or the source file and load it into a schema. To make a resource file accessible to the Oracle JVM, you must use
loadjava to create and load a Java resource schema object from the resource file.
The
dropjava tool deletes schema objects that correspond to Java files. You should always use
dropjava to delete a Java schema object that was created with
loadjava. Dropping schema objects using SQL data definition language (DDL) commands will not update auxiliary data maintained by
loadjava and
dropjava.
You must load resource files using
loadjava. If you create
.class files outside the database with a conventional compiler, then you must load them with
loadjava. The alternative to loading class files is to load source files and let Oracle Database compile and manage the resulting class schema objects. In Oracle Database 10g, the most productive approach is to compile and debug most of your code outside the database, and then load the
.class files. For a particular Java class, you can load either its
.class file or the corresponding
.java file, but not both.
The
loadjava tool accepts Java Archive (JAR) files that contain either source and resource files or class and resource files. When you pass a JAR or ZIP file to
loadjava, it opens the archive and loads its members individually. There are no JAR or ZIP schema objects. A file whose content has not changed since the last time it was loaded is not reloaded. As a result, there is little performance penalty for loading JAR files. Loading JAR files is a simple, fool-proof way to use
loadjava.
It is illegal for two schema objects in the same schema to define the same class. For example, assume that
a.java defines class
x and you want to move the definition of
x to
b.java. If
a.java has already been loaded, then
loadjava will reject an attempt to load
b.java. Instead, do either of the following:
Drop
a.java, load
b.java, and then load the new
a.java, which does not define
x.
Load the new
a.java, which does not define
x, and then load
b.java.
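As a sketch, the two orderings look like this in shell form. The loadjava and dropjava invocations are stubbed out with echoing functions (the scott/tiger credentials and file names are placeholders), because the point here is only the order of operations:

```shell
#!/bin/sh
# Two legal orderings for moving class x from a.java to b.java.
# Stub the tools so the ordering can be shown without a database.
loadjava() { echo "loadjava $*"; }
dropjava() { echo "dropjava $*"; }

# Ordering 1: drop the old a.java, load b.java, then load the new a.java.
ordering1=$(
  dropjava -u scott/tiger a.java
  loadjava -u scott/tiger b.java
  loadjava -u scott/tiger a.java   # the new a.java no longer defines x
)

# Ordering 2: load the new a.java (x removed) first, then b.java.
ordering2=$(
  loadjava -u scott/tiger a.java
  loadjava -u scott/tiger b.java
)

printf '%s\n\n%s\n' "$ordering1" "$ordering2"
```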
All Java classes contain references to other classes. A conventional JVM searches for classes in the directories, ZIP files, and JAR files named in the
CLASSPATH. In contrast, the Oracle JVM searches schemas for class schema objects. Each class in the database has a resolver specification, which is
the Oracle JVM counterpart of the CLASSPATH.
Note that loadjava resolves references to classes, but not to resources. Ensure that you correctly load the resource files that your classes need.
If you can, defer resolution until all classes have been loaded. This technique avoids a situation in which the resolver marks a class invalid because a class it uses has not yet been loaded. To defer resolution, call loadjava for collections of files without the -resolve option, and then resolve the classes in a separate, final invocation.
When you load a file,
loadjava computes a digest of the content of the file and then looks up the file name in the digest table. If the digest table contains an entry for the file name that has an identical digest, then
loadjava does not load the file, because a corresponding schema object exists and is up to date. If you want to reload a file whose content has not changed, use the
loadjava -force option to bypass the digest table lookup or delete all rows from the table
JAVA$CLASS$MD5$TABLE.
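The digest check can be pictured with ordinary checksums. The following self-contained shell sketch imitates the decision loadjava makes; md5sum and a per-file sidecar file stand in for the tool's internal digest and the JAVA$CLASS$MD5$TABLE table, and no database is involved:

```shell
#!/bin/sh
# Reload a file only when its content digest has changed.
workdir=$(mktemp -d)
printf 'public class Alpha {}\n' > "$workdir/Alpha.java"

decide() {
  file=$1
  sum=$(md5sum "$file" | cut -d' ' -f1)
  if [ -f "$file.md5" ]; then
    stored=$(cat "$file.md5")
  else
    stored=""
  fi
  if [ "$stored" = "$sum" ]; then
    echo skip            # digest unchanged: schema object is up to date
  else
    echo "$sum" > "$file.md5"
    echo load            # new or changed content: (re)load it
  fi
}

first=$(decide "$workdir/Alpha.java")    # new file
second=$(decide "$workdir/Alpha.java")   # unchanged content
printf 'public class Alpha { int x; }\n' > "$workdir/Alpha.java"
third=$(decide "$workdir/Alpha.java")    # changed content
echo "$first $second $third"
```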
Loading a source file creates or updates a Java source schema object and invalidates the class schema objects previously derived from the source. If the class schema objects do not exist, then
loadjava creates them when you run loadjava -resolve.
The compiler writes error messages to the predefined
USER_ERRORS view. The
loadjava tool retrieves and displays the messages produced by its compiler invocations.
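If you want to inspect those messages yourself, you can query the view directly. The following SQL*Plus fragment is a hypothetical sketch; the TYPE filter and the column list should be checked against the USER_ERRORS definition in your release:

```
SQL> SELECT name, line, position, text
  2    FROM user_errors
  3   WHERE type LIKE 'JAVA%'
  4   ORDER BY name, sequence;
```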
The compiler recognizes some options. There are two ways to specify options to the compiler. If you run
loadjava with the
-resolve option, then you can specify compiler options on the command line. You can additionally specify persistent compiler options in a per-schema database table,
JAVA$OPTIONS. You can use the
JAVA$OPTIONS table for default compiler options, which you can override selectively using a
loadjava command-line option.
The loadjava tool creates schema objects from files and loads them into a schema. Schema objects can be created from Java source, class, and data files.
Some errors cause loadjava to terminate prematurely. These errors are printed with the following syntax:
exiting: error_reason
This section covers the following:
The syntax of the
loadjava command is as follows (a simplified form; the complete option list appears in Table 11-1):

loadjava {-user | -u} user/password[@database] [options]
  {file.java | file.class | file.jar | file.zip | file.sqlj | resourcefile} ...

Table 11-1 summarizes the
loadjava arguments. If you run
loadjava multiple times specifying the same files and different options, then the options specified in the most recent invocation hold. However, there are two exceptions to this, as follows:
If
loadjava does not load a file because it matches a digest table entry, then most options on the command line have no effect on the schema object. The exceptions are
-grant and
-resolve, which always take effect. You must use the
-force option to direct
loadjava to skip the digest table lookup.
The
-grant option is cumulative. Every user specified in every
loadjava invocation for a given class in a given schema has the
EXECUTE privilege.
This section describes the details of some of the
loadjava arguments whose behavior is more complex than the summary descriptions contained in Table 11-1.
You can specify as many
.class,
.java,
.sqlj,
.jar,
.zip, and resource files as you want and in any order. If you specify a JAR or ZIP file, then
loadjava processes the files in the JAR or ZIP. There is no JAR or ZIP schema object. If a JAR or ZIP contains another JAR or ZIP,
loadjava does not process the nested JAR or ZIP.
If you load resource files in a JAR together with the classes that use them, the class loader will find them, and loadjava will also work, without your having to learn anything about resource schema object naming.
Schema object names are different from file names, and
loadjava names different types of schema objects differently. Because class files are self-identifying, the mapping of class file names to schema object names is done transparently by
loadjava. Because classes use resource schema objects and the correct specification of resources is not always intuitive, it is important that you specify resource file names correctly on the command line.
The perfect way to load individual resource files correctly is to run loadjava from the top of the package tree and to specify the resource file names relative to that directory. (The top of the package tree is the directory that you would name in a CLASSPATH list.) For example, depending on whether you enter a relative or an absolute file name, loadjava generates the resource schema object name
alpha/beta/x.properties or
ROOT/home/scott/javastuff/alpha/beta/x.properties. The name of the resource schema object is generated from the file name as entered.
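The rule can be checked with nothing but string handling. This shell sketch simulates the two spellings; the temporary directory stands in for /home/scott/javastuff, the ROOT prefix that precedes absolute names is written out literally, and no database is involved:

```shell
#!/bin/sh
# The resource schema object name is the file name exactly as entered.
top=$(mktemp -d)                 # stands in for /home/scott/javastuff
mkdir -p "$top/alpha/beta"
touch "$top/alpha/beta/x.properties"

# Entered relative to the top of the package tree:
cd "$top"
relative_name="alpha/beta/x.properties"

# Entered as an absolute path: the generated name keeps the whole path.
absolute_name="ROOT$top/alpha/beta/x.properties"

echo "$relative_name"
echo "$absolute_name"
```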
Classes can refer to resource files relatively or absolutely. To ensure that
loadjava and the class loader use the same name for a schema object, enter the name on the command line exactly as the class loader will look it up: change to the top directory of the package tree and enter a relative path name, or collect the resource files into a JAR, such as alpharesources.jar, and load the JAR.
To simplify the process further, place both the class and resource files in a JAR, which makes the following invocations equivalent:
% loadjava options alpha.jar
% loadjava options /home/scott/javastuff/alpha.jar
The preceding
loadjava commands imply that you can use any path name to load the contents of a JAR file. Even if you run the redundant commands,
loadjava would realize from the digest table that it need not load the files twice. This implies that reloading JAR files is not as time-consuming as it might seem, even when few files have changed between
loadjava invocations.
[-noverify]
This option causes classes to be loaded without bytecode verification during the loadjava process. Some Oracle Database-specific optimizations for interpreted performance are put in place during the verification process. Therefore, the interpreted performance of your application may be adversely affected by using this option.
[-optionfile <file>]
This option enables you to specify a file with
loadjava options. This file is read and processed by
loadjava before any other
loadjava options are processed. The file can contain entries that associate a pattern with a list of options. The name of each schema object to be loaded is checked against the patterns, and the options for matching patterns are applied. Patterns can end in a wildcard (*) to indicate an arbitrary sequence of characters.
You can use Java comments in this file. A line comment begins with a
#. Empty lines are ignored. The quote character is a double quote (
"). That is, options containing spaces should be surrounded by double quotes. Certain options, such as
-user and
-verbose, affect the overall processing of
loadjava and not the actions performed for individual Java schema objects. Such options are ignored if they appear in an option file.
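For illustration only, a small options file might look like the sketch below. The entry shape shown here — a pattern followed by a parenthesized option list — and the package names are assumptions; check the option-file grammar documented for your loadjava release:

```
# applies -definer to every class under com/example/util
com/example/util/* (-definer)

# a single class; the option is quoted because it contains spaces
com/example/Alpha.class ("-resolver ((* SCOTT) (* PUBLIC))")
```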
To help package applications,
loadjava looks for the
META-INF/loadjava-options entry in each JAR it processes. If it finds such an entry, then it treats it as an options file that is applied to all other entries in the JAR. However,
loadjava does some processing on entries in the order in which they occur in the JAR.
If
loadjava has partially processed entities before it processes
META-INF/loadjava-options, then
loadjava will attempt to patch up the schema object to conform to the applicable options. For example,
loadjava alters classes that were created with invoker rights when they should have been created with definer rights. The fix for
-noaction will be to drop the created schema object. This will yield the correct effect, except that if a schema object existed before
loadjava started, then:
{-resolve | -r}
loadjava to compile and resolve a class that has previously been loaded. It is not necessary to specify
-force, because resolution is performed after, and independent of, loading.
{-resolver | -R} resolver_specification
This option associates an explicit resolver specification with the class schema objects that
loadjava creates or replaces. If no database is specified, then
loadjava uses the user's default database. If specified, database can be a TNS name or an Oracle Net Services name-value list. The following examples demonstrate typical
loadjava commands:
Connect to the default database with the default OCI driver, load the files in a JAR into the
TEST schema, and then resolve them:
loadjava -u joe/shmoe -resolve -schema TEST ServerObjects.jar
Connect with the JDBC Thin driver, load a class and a resource file, and resolve each class:
loadjava -thin -u SCOTT/TIGER@dbhost:5521:orcl \
    -resolve alpha.class beta.props
Add Betty and Bob to the users who can run
alpha.class:
loadjava -thin -schema test -u SCOTT/TIGER@localhost:5521:orcl \
    -grant BETTY,BOB alpha.class
The
dropjava tool is the converse of
loadjava. It transforms command-line file names and the contents of JAR or ZIP files into schema object names, drops the schema objects, and deletes their corresponding rows in the digest table.
If you run loadjava on a source file, then run
dropjava on the same source file. If you translate on a client and load classes and resources directly, then run
dropjava on the same classes and resources.
You can run
dropjava either from the command line or by using the
dropjava method in the
DBMS_JAVA class. To run
dropjava from SQL, call the dbms_java.dropjava method; its options are the same as those of
loadjava. The output is directed to
stderr. Set
serveroutput on and call
dbms_java.set_output, as appropriate.
This section covers the following topics:
The syntax of the
dropjava command is:
Table 11-2 summarizes the
dropjava arguments.
This section describes a few of the
dropjava arguments, which are complex.
dropjava interprets most file names in the same way as
loadjava does. Otherwise,
dropjava interprets the file name as a schema object name and drops all source, class, and resource objects that match the name.
If no database is specified, then
dropjava uses the user's default database. If specified, then
database can be a TNS name or an Oracle Net Services name-value list.
-thin: @database, where database is specified as host:lport:SID.
The following examples demonstrate the use of
dropjava.
Drop all schema objects in the
TEST schema in the default database that were loaded from
ServerObjects.jar:
dropjava -u SCOTT/TIGER -schema TEST ServerObjects.jar
Connect with the JDBC Thin driver, then drop a class and a resource file from the user's schema:
dropjava -thin -u SCOTT/TIGER@dbhost:5521:orcl alpha.class beta.props
Use
dropjava to remove the resources.
The
ojvmjava tool is an interactive interface to the session namespace of a database instance.
This section covers the following topics:
The syntax of the
ojvmjava command is:
ojvmjava {-user | -u} user[/password@database] [options]
  [@filename] [-batch] [-c | -command command args] [-debug]
  [-d | -database conn_string] [-fileout filename]
  [-o | -oci | -oci8] [-oschema schema] [-t | -thin] [-version | -v]
Table 11-3 summarizes the
ojvmjava arguments.
Open a shell on the session namespace of the database
orcl on listener port
2481 on the host
dbserver, as follows.
ojvmjava -thin -user SCOTT/TIGER@dbserver:2481:orcl
The
ojvmjava commands span several different types of functionality, which are grouped as follows:
This section describes the options for the
ojvmjava command-line tool ojvmjava Commands in the @filename Option
The
@filename option designates a script file that contains one or more
ojvmjava commands. If you want
ojvmjava to run another script file, then this file must exist in
$ORACLE_HOME on the server.
Enter the
ojvmjava command followed by any options and any expected input arguments.
The script file contains the
ojvmjava command followed by options and input parameters. The input parameters can be passed to
ojvmjava on the command line.
ojvmjava processes all known options and passes on any other options and arguments to the script file.
To access arguments within the commands in the script file, use
&1...&
n to denote the arguments. If all input parameters are passed to a single command, then you can type
&*.
Note: You can also supply arguments to the
-command option in the same manner. The following shows an example:
ojvmjava ... -command "cd &1" contexts
After processing all other options,
ojvmjava passes
contexts as argument to the
cd command.
To run this file, do the following:
ojvmjava -user SCOTT -password TIGER -thin -database dbserver:2481:orcl \
    @execShell alpha beta
ojvmjava testhello alpha beta
You can add comments in your script file using the hash (#) character; such lines are ignored by
ojvmjava. For example:
#this whole line is ignored by ojvmjava
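Putting this together, a hypothetical script file (say, the execShell file run above) could look like:

```
# execShell: hypothetical ojvmjava script; &1 and &2 are replaced by
# arguments from the ojvmjava command line
cd &1
java testhello &2
```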
Running sess_sh Within Applications
You can run
sess_sh commands from within a Java or PL/SQL application using the following commands:
This section describes the commands used for manipulating and viewing contexts and objects in the namespace.
The following shell commands function similarly to their UNIX counterparts:
Each of these shell commands has some options in common, which are summarized in Table 11-4:
This command runs the static main() method of the specified class. The class must be loaded with
loadjava. The syntax of this command is:
java [-schema schema] class [arg1 ... argn]
Table 11-5 summarizes the arguments of this command.
Consider the following Java file,
World.java.
The whoami command prints the user name of the user who logged in to the current session. The syntax of the command is:
whoami
EmberCommandEntry Struct Reference
Command entry for a command table. More...
#include <command-interpreter2.h>
Command entry for a command table.
Definition at line
135 of file
command-interpreter2.h.
Field Documentation
A reference to a function in the application that implements the command. If this entry refers to a nested command, the action field has to be set to NULL.
Definition at line
154 of file
command-interpreter2.h.
For normal, non-nested, commands, argumentTypes is a string that specifies the number and types of arguments the command accepts. The argument specifiers are:
- u: one-byte unsigned integer.
- v: two-byte unsigned integer
- w: four-byte unsigned integer
- s: one-byte signed integer
- b: string. The argument can be entered in ASCII by using quotes, for example: "foo". It can also be entered in hexadecimal values by using curly braces, for example: { 08 A1 f2 }. The number of hexadecimal digits must be even and spaces are ignored.
- *: zero or more of the previous type. If used, this must be the last specifier.
- ?: unknown number of arguments. If used, this must be the only character, which means that the command interpreter will not perform any validation of arguments and will call the action directly, trusting that it will handle whatever arguments are passed in.
Integer arguments can be either decimal or hexadecimal. A 0x prefix indicates a hexadecimal integer, for example: 0x3ed.
For a nested command (action is NULL), argumentTypes will contain a pointer to the nested commands.
Definition at line
183 of file
command-interpreter2.h.
A description of the command.
Definition at line
188 of file
command-interpreter2.h.
Use letters, digits, and underscores, '_', for the command name. Command names are case-sensitive.
Definition at line
142 of file
command-interpreter2.h.
The documentation for this struct was generated from the following file:
command-interpreter2.h | https://docs.silabs.com/connect-stack/2.4/structEmberCommandEntry | CC-MAIN-2018-51 | refinedweb | 308 | 51.75 |
[Solved] QGraphicsIte setPos , moveBy - how do they work?
I looked at the "colliding mice" example that ships with Qt.
It took me several days to figure out
@setPos(mapToParent(0,1));@
moves a QGraphicsItem downwards every time 'advance' (called from QGraphicsScene::advance()) is invoked. I don't understand how this works.
(This is not part of the code, but I modified the 'setPos' function slightly to see how it works)
I had to rummage through the code to figure out what was moving the QGraphicsItem. I expected to see something like
@setPos(mapToParent(0,ypos));@
to move something downwards, with ypos incremented in 'advance' member function in QGraphicsItem. This seems more intuitive to me. However, it seems this causes the item to move with a constant acceleration.
I also saw another function somewhere else that works exactly the same way (Correct me if I am wrong)
@moveBy(0,1);@
Now this seems intuitive to me. Every time advance is called, the item is 'moved by' (0,1)
Could somebody explain how setPos works?
Thanks!
- sraboisson
The confusion does not come from your interpretation of "setPos" (which acts as you expect), but from the "mapToParent" function
THIS is the important point:
The "mapToParent" function converts a coordinate in item coordinates to coordinate in parent's coordinate system (or scene coordinates if no parent)
For example, if you have a mouse "myMouse" at (10,10) in your scene:
- The "myMouse" local coordinates are (0,0), i.e origin of the mouse item
- The "myMouse" coordinate in scene coordinates are (10,10).
A call myMouse.mapToParent(0,0) will return (10,10);
A call myMouse.mapToParent(0,1) will return (10,11);
and this is the point:
@myMouse.setPos(10,10); /* Position the mouse in its parent coordinate system: (10, 10) */
newPos = myMouse.mapToParent(0,1); /* newPos = (10,10) + (0,1) = (10, 11) */
myMouse.setPos(newPos); /* Position the mouse in its parent coordinate system: (10, 11) */
newPos = myMouse.mapToParent(0,1); /* newPos = (10,11) + (0,1) = (10, 12) */
myMouse.setPos(newPos); /* Position the mouse in its parent coordinate system: (10, 12) */@
and so on...
Note: In the "colliding mice" sample, a mouse's coordinate system has its y-axis running along the nose-to-tail axis. Combined with the mouse item's rotation, a mouse does not simply move down, but in the right direction.
Perhaps, "The Graphics View Coordinate System": can help...
Thanks sraboisson! Wonderful explanation. Ah, so the 'increment' part comes from mapToParent, as you showed in the code. I think the reason it did not occur to me was that I didn't think the origin changed every time setPos(mapToParent(..)) was called.
SharePoint 2013 Server Object Model
Introduction
What is Server Object Model ?
A set of classes and namespaces packaged into .NET libraries.
What is the use of these classes?
Allows to develop server-side programs interacting with the SharePoint engine
What is server-side programs/solutions?
Set of instruction running on Server
How to use Server Object Model
Add the library/assembly as a reference in your Visual Studio project.
When is it best to use Server Object Model?
Only when a solution requires talking to the SharePoint farm/engine. Such a solution is called a farm or sandboxed solution, based on how it is deployed.
Introduction
What is the initial best development approach?
Develop solution using Console application in initial stage.
Why ?
Its speeds up the development process
It does not require deployment for testing
Any cautions ?
Make sure the project targets the .NET Framework 4.5 and the x64 platform
Which is the main library you need to reference?
Microsoft.SharePoint.dll
Where Microsoft.SharePoint.dll and other assemblies are located?
Under 15Hive\ISAPI (15Hive = C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\)
Practical
Objects Types
Located or defined in the namespace starts with
Microsoft.SharePoint.*
Microsoft.Office.*
Type or Class names starts with SP
Examples : SPFarm, SPWebApplication, SPSite, SPWeb etc…
Object Hierarchy
Practical
Creating first LIST Programmatically.
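A minimal sketch of what this slide describes, assuming a console application that references Microsoft.SharePoint.dll; the site URL and list names below are placeholders, and since this code must run on a SharePoint server it is shown untested:

```csharp
using System;
using Microsoft.SharePoint;

class Program
{
    static void Main()
    {
        // Dispose SPSite/SPWeb promptly via using blocks.
        using (SPSite site = new SPSite("http://server/sites/demo"))
        using (SPWeb web = site.OpenWeb())
        {
            // Create a custom list and fetch it back by its Guid.
            Guid listId = web.Lists.Add("Demo Tasks", "A demo list",
                                        SPListTemplateType.GenericList);
            SPList list = web.Lists[listId];

            // Add a single item to the new list.
            SPListItem item = list.Items.Add();
            item["Title"] = "First item";
            item.Update();

            Console.WriteLine("Created list '{0}' with {1} item(s).",
                              list.Title, list.ItemCount);
        }
    }
}
```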
Practical
Display All sub sites under a site collection
Practical
Display All Sites
Practical
Practical
Console Application
Practice 1 : How to display the names of the Lists available in a site?
Practice 2 : How to display the list of sites under a site collection?
Practice 3 : How to display the List Items of a List?
Practice 4 : How to create a List?
Practice 5 : How to create a List Item?
Practice 6 : How to create a Site?
Practice 7 : How to add a Field to a List?
Practice 8 : How to include a Field in a Default View?
Hi Folks,
I am a newbie in Flex and I used an example on Flex.IDLE event from. … nt-page-1/
to logout a user after some specific idle interval.
What is happening with me is
1. When browser window is minimized, this event doesn’t trigger.
2. When some flash content is running in the same or a different browser, this event sometimes fires and sometimes doesn't.
Can anyone help me out in figuring out why this is happening and how I can fix it. Please help me.
Thanks in Advance.
Regards,
the IDLE event is "Dispatched every 100 milliseconds when there has been no keyboard or mouse activity for 1 second"
Many many thanks for your quick response. This is mentioned in the example I have followed. Please look at my scenarios 1 and 2. I am working while the Flex application window is minimized. After some time, I come back to my minimized window and restore it, and I see that the logout method registered with the FlexEvent.IDLE event is never called. When I leave my machine (the whole system) idle, this event fires fine.
when did you add the linstener of IDLE event?
I've done a experiment :if put the code "systemManger.addEventLinster(FlexEvent.IDLE,handler)" after the addToStage event has been dispatched in application,that works well
I register this event when a user successfully logged into the application. You mean to say that if I add this event after the "addToStage", then it will work fine even if browser window is minimized? Let me try and I'll let you know my output.
Thanks a lot for your time and quick responses.
Regards.
Thanks alot for your help. Now it is working but again there's something else problem
for setting one minute timeout interval I using following formaula
var TIME_OUT_LIMIT:int = (1000*60*1)/100;
below is my logout code
public function logoutUser(e:FlexEvent):void{
use namespace mx_internal;
if(e.currentTarget.mx_internal::idleCounter >= TIME_OUT_LIMIT){
Alert.show(new Date().toTimeString());
this.systemManager.removeEventListener(FlexEvent.IDLE, removeFlexIdelEvent);
}
}
when the browser window is minimized, the logoutUser method is called after an interval 5 times greater than TIME_OUT_LIMIT. Can you tell me what I am doing wrong?
Regards. | https://forums.adobe.com/thread/885062 | CC-MAIN-2017-51 | refinedweb | 368 | 66.84 |
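Summing up the thread, a sketch of the approach that worked is below. The handler and member names are illustrative, logoutUser() is a hypothetical application method, and the snippet is untested:

```actionscript
import flash.events.Event;
import mx.core.mx_internal;
import mx.events.FlexEvent;

use namespace mx_internal;

// One minute expressed in 100 ms IDLE ticks.
private static const TIME_OUT_LIMIT:int = (1000 * 60 * 1) / 100;

// Register the IDLE listener only after the application is on the
// stage; registering earlier was what kept it from firing.
private function init():void {
    addEventListener(Event.ADDED_TO_STAGE, onAddedToStage);
}

private function onAddedToStage(e:Event):void {
    systemManager.addEventListener(FlexEvent.IDLE, onIdle);
}

private function onIdle(e:FlexEvent):void {
    if (e.currentTarget.mx_internal::idleCounter >= TIME_OUT_LIMIT) {
        systemManager.removeEventListener(FlexEvent.IDLE, onIdle);
        logoutUser(); // hypothetical application method
    }
}
```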
#include <mpi.h> int XMPI_Coloron(int red, int green, int blue)
The LAM implementation of MPI is integrated with the XMPI run/debug viewer. It can generate tracefiles and on-the-fly trace streams suitable for display in XMPI.
New functionality in XMPI is the ability to enable and disable select colors in the trace stream. LAM/MPI exposes this functionality through the XMPI_Coloron and XMPI_Coloroff functions.
XMPI_Coloron is called with red , green , and blue parameters. Each value may be from 0 to 255. The resulting RGB composite will become activated for that rank at that point in time. Enabling and disabling colors is a local action; the calls will return immediately. The color will be activated or deactivated on the timeline corresponding to the rank that invoked XMPI_Coloron / XMPI_Coloroff in the XMPI trace window.
Only one color is active at a time. However, XMPI_Coloron may be invoked multiple times to build a stack of colors. XMPI_Coloroff will pop the last color off the stack and make the previous color active.
If this function is invoked and tracing is not active, the color is ignored.
There is no profiling version of this function.
This is a LAM/MPI-specific function and is intended mainly for use with XMPI. If this function is used, it should be used in conjunction with the LAM_MPI C preprocessor macro
#if LAM_MPI
XMPI_Coloron(255, 255, 0);
#endif
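As a fuller sketch, the calls might bracket phases of a computation as below; the colors and program structure are arbitrary, and building it requires a LAM/MPI installation, so it is shown untested:

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

#if LAM_MPI
    XMPI_Coloron(255, 0, 0);    /* red becomes the active color   */
    /* ... phase one, traced in red ... */
    XMPI_Coloron(0, 0, 255);    /* push blue onto the stack       */
    /* ... phase two, traced in blue ... */
    XMPI_Coloroff();            /* pop blue; red is active again  */
    XMPI_Coloroff();            /* pop red; back to the default   */
#endif

    MPI_Finalize();
    return 0;
}
```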
Search: Search took 0.02 seconds.
Did listeners on panel depends on the panel content?
Production built and singleton requirementStarted by Tchinkatchuk, 1 Oct 2014 8:24 AM
- Last Post By:
- Last Post: 2 Oct 2014 12:04 AM
- by dongryphon
Bug? disableCaching: false in Loader doesn't seem to workStarted by andreas-spindler, 21 Aug 2014 12:32 AM
[OPEN] why extjs 5 Ext.ElementLoader HtmlRenderer use setHtml(can not execute scripts)
- Last Post By:
- Last Post: 10 Dec 2014 8:49 PM
- by b.batja@gmail.com
[INFOREQ] Google API being loaded when no Google dependencies exist in the project
The right folder to place custom classesStarted by jerome.kael@gmail.com, 8 Jul 2014 3:01 AM
- Last Post By:
- Last Post: 9 Jul 2014 6:02 PM
- by jerome.kael@gmail.com
Ext.Loader: requires doesn't resolve namespace
CSS link missing (loader problems in IE)
How to support remote loading of Menu content as is done for ComboBox, Grid, etc.
Execute a function in component configuration file
Calling multiple MVC apps from an MVC app (how to?)
Including External JS LibrariesStarted by studio4development, 21 Jun 2013 7:26 PM
- Last Post By:
- Last Post: 25 Jun 2013 2:14 PM
- by Diego Garcia
class loader problems
loader in utility class function not displayedStarted by maheshparvekar, 6 May 2013 8:38 PM
Deep Linking/Routing with Ext.Loader disabled (cross dependency problem)Started by sergei@gmx.net, 2 May 2013 3:04 AM
- Last Post By:
- Last Post: 4 May 2013 12:56 PM
- by sergei@gmx.net
MVC, Ext.Loader and deployment
Ext.Loader vs app.jsonStarted by jeffrey.courter, 16 Apr 2013 7:01 PM
- Last Post By:
- Last Post: 16 Apr 2013 7:01 PM
- by jeffrey.courter
[INFOREQ] [4.1.1] ExtJS is complaining about missing files that aren't missingStarted by BillHubbard, 13 Feb 2013 3:46 PM
- Last Post By:
- Last Post: 17 Apr 2013 5:11 PM
- by BillHubbard
Setup JSTestDriver with ExtJS 4 & jasmine
- Last Post By:
- Last Post: 29 Dec 2013 2:27 PM
- by jamie.priest
Custom Loader for TreeGrid (TreePagingLoader)
class loading suggestions written to console
- Last Post By:
- Last Post: 17 Jan 2013 7:23 AM
- by AndreaCammarata
[FIXED] method ensureHandler() from Loader should be protected
- Last Post By:
- Last Post: 30 Oct 2013 9:59 AM
- by Colin Alworth
Select first row from Tree Grid when loadedStarted by mario.amaya, 3 Dec 2012 10:37 AM
- Last Post By:
- Last Post: 5 Dec 2012 11:33 AM
- by Colin Alworth
SenchaCmd 3.0.0.250 issues with Ext.Loader
How to load additional file that depend on Ext object?
Results 1 to 25 of 73 | http://www.sencha.com/forum/tags.php?tag=loader | CC-MAIN-2014-52 | refinedweb | 452 | 63.19 |
compile a regular expression, for use with regexec()
#include <regex.h>
int regcomp( regex_t *preg, const char *pattern, int cflags );
The regcomp() function prepares the regular expression, preg, for use by the function regexec(), from the specification pattern and cflags. The member re_nsub of preg is set to the number of subexpressions in pattern. The argument cflags is the bitwise inclusive OR of zero or more of the following flags:
REG_EXTENDED — use Extended Regular Expression syntax when interpreting pattern; if not set, Basic Regular Expression syntax is used.
REG_ICASE — ignore case in the match.
REG_NOSUB — report only overall success or failure from regexec().
REG_NEWLINE — treat a newline in the string as dividing it into multiple lines.
The functions that deal with regular expressions (regcomp(), regerror(), regexec(), and regfree()) support two classes of regular expressions, the Basic and Extended Regular Expressions. These classes are rigorously defined in IEEE P1003.2, Regular Expression Notation.
The Basic Regular Expressions are composed of these terms:
The Extended Regular Expressions also include:
Zero if successful, nonzero if the function fails for any reason.
/* The following example prints out all lines from FILE "f"
   that match "pattern". */
#include <stdio.h>
#include <limits.h>   /* for MAX_INPUT */
#include <regex.h>

void grep( char *pattern, FILE *f )
{
    int t;
    regex_t re;
    char buffer[MAX_INPUT];

    if( (t = regcomp( &re, pattern, REG_NOSUB )) != 0 ) {
        regerror( t, &re, buffer, sizeof buffer );
        fprintf( stderr, "grep: %s (%s)\n", buffer, pattern );
        return;
    }
    while( fgets( buffer, MAX_INPUT, f ) != NULL ) {
        if( regexec( &re, buffer, 0, NULL, 0 ) == 0 ) {
            fputs( buffer, stdout );
        }
    }
    regfree( &re );
}
POSIX 1003.2
regerror(), regexec(), regfree() | https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/qnx/regcomp.html | CC-MAIN-2022-33 | refinedweb | 211 | 56.05 |
How Hacker News ranking algorithm works
In this post I’ll try to explain how the Hacker News ranking algorithm works and how you can reuse it in your own applications. It’s a very simple ranking algorithm and works surprising well when you want to highlight hot or new stuff.
Digging into news.arc code
Hacker News is implemented in Arc, a Lisp dialect coded by Paul Graham. Hacker News is open source and the code can be found at arclanguage.org. Digging through the news.arc code you can find the ranking algorithm which looks like this:
; Votes divided by the age in hours to the gravityth power.
; Would be interesting to scale gravity in a slider.
(= gravity* 1.8 timebase* 120 front-threshold* 1
   nourl-factor* .4 lightweight-factor* .3)

(def frontpage-rank (s (o scorefn realscore) (o gravity gravity*))
  (* (/ (let base (- (scorefn s) 1)
          (if (> base 0) (expt base .8) base))
        (expt (/ (+ (item-age s) timebase*) 60) gravity))
     (if (no (in s!type 'story 'poll))  .5
         (blank s!url)  nourl-factor*
         (lightweight s)  (min lightweight-factor*
                               (contro-factor s))
         (contro-factor s))))
In essence the ranking performed by Hacker News looks like this:
Score = (P-1) / (T+2)^G
where,
P = points of an item (and -1 is to negate submitters vote)
T = time since submission (in hours)
G = Gravity, defaults to 1.8 in news.arc
As you see the algorithm is rather trivial to implement. In the upcoming section we’ll see how the algorithm behaves.
To see this visually we can plot the algorithm to Wolfram Alpha.
How the score behaves over time
As you can see the score decreases a lot as time goes by, for example a 24 hour old item will have a very low score regardless of how many votes it got.
plot(
(30 - 1) / (t + 2)^1.8,
(60 - 1) / (t + 2)^1.8,
(200 - 1) / (t + 2)^1.8
) where t=0..24
How gravity parameter behaves
As you can see from the graph, the score decreases much faster the larger the gravity is.
plot(
(p - 1) / (t + 2)^1.8,
(p - 1) / (t + 2)^0.5,
(p - 1) / (t + 2)^2.0
) where t=0..24, p=10
Python implementation
As already stated, it's rather simple to implement the score function. Here's an implementation in Python:
def calculate_score(votes, item_hour_age, gravity=1.8):
return (votes - 1) / pow((item_hour_age+2), gravity)
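For instance, with made-up vote counts and ages (repeating the function so the snippet stands alone):

```python
def calculate_score(votes, item_hour_age, gravity=1.8):
    return (votes - 1) / pow((item_hour_age + 2), gravity)

# A fresh item with modest votes outranks a day-old item with many more.
new_item = calculate_score(votes=30, item_hour_age=1)    # ~4.01
old_item = calculate_score(votes=200, item_hour_age=24)  # ~0.56
assert new_item > old_item

# Raising gravity makes scores decay faster.
assert calculate_score(60, 12, gravity=2.0) < calculate_score(60, 12, gravity=1.8)
```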
The most crucial aspect is understanding how the algorithm behaves and how you can customize it for your application and I hope I have contributed that knowledge :-)
You can view comments to this post and a lot more thoughts on HN’s ranking here:
Paul Graham has shared the updated HN ranking algorithm:
(= gravity* 1.8 timebase* 120 front-threshold* 1
   nourl-factor* .4 lightweight-factor* .17 gag-factor* .1)

(def frontpage-rank (s (o scorefn realscore) (o gravity gravity*))
  (* (/ (let base (- (scorefn s) 1)
          (if (> base 0) (expt base .8) base))
        (expt (/ (+ (item-age s) timebase*) 60) gravity))
     (if (no (in s!type 'story 'poll))  .8
         (blank s!url)  nourl-factor*
         (mem 'bury s!keys)  .001
         (* (contro-factor s)
            (if (mem 'gag s!keys)
                 gag-factor*
                (lightweight s)
                 lightweight-factor*
                1)))))
beardscratchers.com - Journal A music-focused web experiment and creative-arts journal from London, England tag:beardscratchers.com,2005:4357bcf9c70b1807791cd39e76b16baa/journal Textpattern 2009-06-22T20:54:47Z Nick Nick 2009-03-22T15:03:55Z 2009-03-22T15:03:55Z Highlighting Google Search Result Pages with Stylish and CSS3 tag:beardscratchers.com,2009-03-22:4357bcf9c70b1807791cd39e76b16baa/6ee358aa6707266c843ecd443af9feb3 <p class="first">Like most websites, a significant portion of the beardscratchers.com audience arrive via a Google search query—you might well be one of them. And like most website owners, I have a minor obsession with viewing the latest stats to keep track of who’s coming in from where, why and what for. </p> <p>With Google frequently updating <acronym title="Search Engine Results Page"><span class="caps">SERP</span></acronym>s and different result sets showing dependant on the viewer’s geographical location, a small frustration I’ve had is being able to quickly pick out exactly where my site appears on a Google <span class="caps">SERP</span>. Wouldn’t it be nice if any time your site appears in the results of a Google search, it would be immediately visible to you? </p> <p><strong>From this…</strong></p> <p><img src="" width="608" height="310" alt="" class="nobg ic" /></p> <p><strong>To this…</strong></p> <p><img src="" width="591" height="301" alt="" class="nobg ic" /></p> <p>With just a few lines of <span class="caps">CSS</span> and a modern CSS3-capable browser like <a href="">Firefox</a> or <a href="">Opera</a>, this is <em>incredibly easy</em> to do! 
Here’s the code we use:</p> <div class="code-sample"><table summary="This table lists the contents of the file google-highlighter"><colgroup><col class="line-no" /><col class="line" /></colgroup> <thead><tr><th>#</th><th>Code</th></tr></thead> <tbody><tr class="odd"><td>0001</td><td class="tab0">@-moz-document url-prefix() {</td></tr> <tr><td>0002</td><td class="tab1">#res h3.r > a[href*="beardscratchers.com"] {</td></tr> <tr class="odd"><td>0003</td><td class="tab2">background-color: #FFFFAA !important;</td></tr> <tr><td>0004</td><td class="tab1">}</td></tr> <tr class="odd"><td>0005</td><td class="tab0">}</td></tr> <tr><td>0006</td><td class="tab0"> </td></tr> </tbody></table> <p><a href="">download this code</a></p> </div> <p>First up, <code>@-moz-document url-prefix()</code> specifies the rule should run only on <span class="caps">URL</span>s that have the prefix “”. This should cover all the Google bases—UK, US, Canada, Japan, Germany and so on— and avoid having the rule run on any other site. </p> <p>Next up, we have a <span class="caps">CSS</span> selector that makes use of some of the new CSS3 features of <a href="">attribute selectors</a> that allow us to perform substring matching on element attributes. <code>a[href*="beardscratchers.com"]</code> pulls out any anchor on the page with an <code>href</code> attribute that contains the string “beardscratchers.com” somewhere within it. <code>^=</code> and <code>$=</code> allow you to match attributes that start and end with a string, respectively. The difference with CSS2 is that pattern matching can only work with <em>exact</em> matches.</p> <p>Note that substring matches with the attribute selector doesn’t permit the use of multiple strings to match. 
So in this example, we would need to duplicate the selector to add multiple sites: <code>a[href*="bbc.co.uk"], a[href*="wikipedia.org"]</code>.</p> <p>Finally <code>background-color: #FFFFAA !important;</code> provides our link with a subtle highlight.</p> <h2>Installing the code</h2> <p>For simplicity, I recommend the <a href="">Stylish Extension</a> for Firefox. This extension is really just a nice <span class="caps">GUI</span> wrapper for a special file in Firefox called <code>userContent.css</code>. This file exists in the <a href="">user profile folder</a> and allows users to add their own custom <span class="caps">CSS</span> rules to any site they view. For other modern browsers like Opera, there are some <a href="">useful details</a> on userstyles.org for alternative solutions.</p> <p>Once installed, simply create a new Blank rule:</p> <p><img src="" width="386" height="159" alt="" class="nobg ic" /></p> <p>Paste the code in, change the site <span class="caps">URL</span> you wish to highlight, give it a title and click Save:</p> <p><img src="" width="578" height="277" alt="" class="nobg ic" /></p> <p>And that’s it. As a quick test, search for the site’s domain in Google and you should find most of the results have been highlighted. </p> <p>There’s lots of potential here to do interesting things with <span class="caps">CSS</span> on Google <span class="caps">SERP</span>s; let me know if you come up with anything useful.</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Nick 2009-03-09T23:58:57Z 2009-03-09T23:59:55Z Best API Update. Ever. 
tag:beardscratchers.com,2009-03-09:4357bcf9c70b1807791cd39e76b16baa/a9820d8d3f1ae8731cf823e196e0cebc <p class="first">One thing I <strong>love</strong> about all the <span class="caps">API</span>s and web-services out there—especially those from commercial entities—is that they’re driven, designed and built <em>by</em> developers <em>for</em> developers. </p> <p>They bypass all the detritus in ‘economic leveraging’, ‘strategic incubation’, ‘synergistic e-business’ and the rest of the bullshit and produce something simply for the challenge and love of experimentation. Last week <a href="">flickr</a> exemplified this beautifully, and really renewed my faith in the development and web community at-large having a good sense of humour. Yet at the same time pushing forward at what’s possible with data and technology. I present:</p> <p><img src="" width="500" height="341" alt="" class="nobg ic" /></p> <p class="c"><a href="">The Rainbow Vomiting Panda of Awesomeness</a></p> <p>If the image wasn’t enough to warrant the highest of praise—I foresee pandas as the new meme to beat unicorns— flickr have just launched a couple of very curious and <em>fun</em> <span class="caps">API</span> calls that may have limited usefulness but really exalt what today’s Web should be about. </p> <p>We’re no longer a passive Web of static, inaccessible content. 
Today’s Web is a huge, living, breathing pool of data and it’s all about seeing what we can do with it!</p> <p><strong>Long may the renaissance continue!</strong></p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Nick 2009-03-09T00:16:59Z 2009-03-10T00:01:02Z Doing the Semantic-Web Tango with Wikipedia and Freebase tag:beardscratchers.com,2009-01-18:4357bcf9c70b1807791cd39e76b16baa/e5c5722c52dcd5c6a23026fb174a3979 <p class="first">Behind the scenes, I’m working hard at writing a much improved <a href="">Beardscratchers Compendium</a> while still trying to trickle out new features in the current version. Recent browsing will have revealed the automatic inclusion of abstracts for both artists <em>and</em> releases direct from Wikipedia. Implementing this feature seemed, at first glance, to be very simple.</p> <p>In practice it ended up requiring the use of a completely separate <span class="caps">API</span>, lots of <span class="caps">RTFM</span>ing, and plenty of blind hacking. Let’s start with Wikipedia.</p> <h2>The Wikipedia Problem</h2> <p><img src="" width="171" height="210" alt="" class="nobg ir" /> Wikipedia is built on top of the MediaWiki software; its content is fully accessible <a href="">via an <span class="caps">API</span></a> without it needing to build one itself. Great, so what’s the problem?! Go check out the documentation in that <span class="caps">API</span> link. Actually accessing Wikipedia content directly involves a <em>lot of hard work</em>. I had imagined it would be a simple case of “query artist name” -> “display article text”. I should try and remember that nothing is ever that simple. </p> <p>Let’s say we want to use the MediaWiki <span class="caps">API</span> to retrieve the content for <a href="">Sting’s Wikipedia entry</a>, to display it on <a href="">Sting’s Compendium entry</a>. 
A read of the docs tells us the <span class="caps">URL</span> we’re after looks like <code></code>. The important bit here being the need of a <code>pageid</code> value to retrieve the content. </p> <p>The only useful query value we do have is the artist’s name “Sting”. A further read of the documentation tells us we can use <em>another</em> <span class="caps">API</span> call to search the Wikipedia database with a free-text query. However, take note that Sting’s page on Wikipedia is actually <code>Sting_(musician)</code>. Like many articles on Wikipedia, it’s a disambiguated title, used to distinguish identically named articles. There are <a href="">many articles</a> entitled “Sting”. So how do we know which one to actually retrieve once we’ve managed to get a list of article pageids with the title “Sting”? </p> <p><em>The short answer is we can’t</em>. Not without lots of string parsing, munging and making assumptions in code. It’s not really possible to do this without producing a lot of false positives.</p> <p>To make matters worse, have a look at the data returned from the <a href=""><span class="caps">XML</span> response</a>. All the content is still in wiki format. Even if we were able to pull the exact articles we needed, the content would have to be pushed through a MediaWiki parser before being pushed out into a usable format. I’ve as yet been unable to find a decent standalone wiki parser written in <span class="caps">PHP</span>. Please add a link in the comments if you do know of one (Pear isn’t standalone!).</p> <p>At this point I pretty much gave up… until I happened across the mighty Freebase. </p> <h2>The Freebase Solution</h2> <p><img src="" width="220" height="42" alt="" class="nobg ir" /> What is <a href="">Freebase</a>? The official blurb says it’s “a massive, collaboratively-edited database of cross-linked data.”. In essence, it’s an encyclopaedia like Wikipedia but favours facts, relationships and explicit data over written content. 
</p> <p>It’s Wikipedia for machines and is a seriously fantastic idea. I’m not sure how I’ve previously managed to miss it. Freebase connects up many external data resources as well as its own data and gives them meaning, structure and relationships. The [open] community pitches in and helps maintain and expand the databases. <a href="">Metaweb</a> then provides a hugely-featured open <span class="caps">API</span> to access this data, with its own comprehensive query language— <a href=""><span class="caps">MQL</span></a>. While I’m a huge advocate of genuine <span class="caps">REST</span> <span class="caps">API</span>s with real <span class="caps">REST</span>ful endpoints, the flexibility and potential of the Freebase approach for an open webservice has got me very excited. It’s <span class="caps">SOAP</span>, but without all the rubbish it introduces.</p> <p>So how does it help with this problem of accessing Wikipedia content? As mentioned, Freebase connects many existing data-sets together in a structured manner. Two of these data-sets are Wikipedia and a beardscratcher’s favourite, Musicbrainz. Suddenly one of the world’s largest music databases is unambiguously connected with one of the world’s largest encyclopaedias, providing a <em>huge</em> mine of accurately related and structured information.</p> <h2>Connecting Freebase and Wikipedia</h2> <p>Covering the ins-and-outs of working with the Freebase <span class="caps">API</span> is well beyond the scope of this entry, and is expertly covered in the <a href="">Make section</a> on freebase.com. </p> <p>In summary, the <span class="caps">API</span> has two core calls – database read and database write. Both simply take a single <span class="caps">MQL</span> query and return a response. 
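<p>To make the shape of a read call concrete, here is a small sketch that wraps an example MQL query in a JSON envelope and builds a request URL. Note the endpoint path and envelope shape here are assumptions for illustration only; the article’s own URLs have been stripped from this copy, so don’t treat this as the definitive request format.</p>

```javascript
// Hypothetical sketch of a Freebase mqlread call. The endpoint path and the
// envelope shape ({ "query": ... }) are illustrative assumptions, not taken
// from the article above.
function buildMqlReadUrl(query, endpoint) {
  endpoint = endpoint || "http://api.freebase.com/api/service/mqlread";
  var envelope = { query: query };
  // The whole envelope travels as a single URL-encoded "query" parameter
  return endpoint + "?query=" + encodeURIComponent(JSON.stringify(envelope));
}

// An example artist-key query, as a plain object
var artistKeyQuery = {
  name: "My Brightest Diamond",
  type: "/music/artist",
  limit: 1,
  key: [{ namespace: null, value: null }]
};

var readUrl = buildMqlReadUrl(artistKeyQuery);
```

<p>A GET to a URL built this way would then return the query result as JSON, with the <code>null</code> placeholders filled in.</p>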
You can experiment with <span class="caps">MQL</span> in their handy <a href="">query editor tool</a>.</p> <p>As I’m not much of a Sting fan, I’m going to continue this entry with a more interesting artist, <a href="">My Brightest Diamond</a>. Taking a look at the Freebase front-end, there’s <a href="">lots to discover</a> about the artist in the database. Here, we’re only interested in a few small specific pieces of data; namely <em>what is the entry on Wikipedia for the artist “My Brightest Diamond”?</em></p> <p>I’m repeating myself now, but I’ll reiterate that Freebase entries are interlinked datasets, and this relationship is formulated (in part) by identifying ‘keys’. Such keys are Freebase object types (like a ‘music artist’ or ‘animal’) or keys from external datasets, like the Wikipedia article ID we’re after and the Musicbrainz <span class="caps">MBID</span> that uniquely identifies an artist in the Musicbrainz database. The <span class="caps">MQL</span> we need to use to query Freebase for an artist’s keys looks like the following:</p> <pre><code>{ "query" : { "name":"My Brightest Diamond", "type":"/music/artist", "limit":1, "key" : [{ "namespace" : null, "value" : null }] } } </code></pre> <p>This <span class="caps">MQL</span> should be fairly self-explanatory. We’re asking for a “/music/artist” with the name “My Brightest Diamond” and want just one result. The <code>null</code> values indicate the properties of the query that we want returned. It’s like saying “Hey Freebase, I’m stuck on a few things. Can you fill in the rest plz?”.</p> <p>Freebase responds with a number of keys in its response, a subset of which look like:</p> <pre><code>{ "namespace": "/authority/musicbrainz", "value": "15f835dc-ee52-4b74-b889-113678f54119" }, { "namespace": "/wikipedia/en_id", "value": "7490642" } </code></pre> <p>Perfect. It appears we have the exact page id to use in a query to Wikipedia for the artist’s entry. 
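<p>Handling that response can be sketched like this: pull the Wikipedia page id out of the returned keys, and cross-check the Musicbrainz <span class="caps">MBID</span> when we already hold one. The function and variable names are hypothetical; only the namespace strings and sample values come from the response above.</p>

```javascript
// Illustrative sketch: given the "key" array from a Freebase response, find
// the Wikipedia page id, refusing the match if a known Musicbrainz MBID
// disagrees with the one Freebase returned. The names here are hypothetical,
// not from the Compendium's actual (PHP) implementation.
function resolveWikipediaId(keys, knownMbid) {
  var wikipediaId = null;
  var mbid = null;
  for (var i = 0; i < keys.length; i++) {
    if (keys[i].namespace === "/wikipedia/en_id") wikipediaId = keys[i].value;
    if (keys[i].namespace === "/authority/musicbrainz") mbid = keys[i].value;
  }
  // An MBID mismatch means we matched an identically named but different artist
  if (knownMbid && mbid && mbid !== knownMbid) return null;
  return wikipediaId;
}

// Sample keys from the response shown above
var keys = [
  { namespace: "/authority/musicbrainz", value: "15f835dc-ee52-4b74-b889-113678f54119" },
  { namespace: "/wikipedia/en_id", value: "7490642" }
];
```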
What’s also fantastic is that we can actually verify the match by checking the <span class="caps">MBID</span> it’s linked to, if we have it available (The Compendium always has the <span class="caps">MBID</span> available). There are a surprising number of artists with identical names!</p> <h2>Finally retrieving Wikipedia content</h2> <p>It doesn’t end there. Recall that I mentioned that the output of the MediaWiki <span class="caps">API</span> is wiki-encoded content. </p> <p><code></code></p> <p>Screw this approach. Let’s do things old-school, and find an actual wikipedia.org page that uses the page id value and returns something approaching <span class="caps">HTML</span> or just plain text. Bingo, <a href="">printable version</a>. Well, at least <em>someone</em> will be making use of a printable version link (I certainly can’t recall the last time I actually <em>needed</em> to use one).</p> <p>And that’s it! One accurately matched artist bio. With a little bit of strip_tags() and preg_match() voodoo, and a touch of substr(), extracts of Wikipedia articles now appear on both <a href="">artist entries</a> and <a href="">release entries</a>.</p> Nick 2009-03-08T18:14:19Z 2009-03-25T13:35:48Z How to inline Opensocial Message Bundles for improved performance tag:beardscratchers.com,2009-03-08:4357bcf9c70b1807791cd39e76b16baa/77b589fb0222185b914044dd3a0f5772 <p class="first">In my work at <a href="">team collaboration</a> startup, Huddle.net, we’ve encountered a number of issues with externally referenced <a href="">message bundles</a> in our Opensocial application, Huddle Workspaces.</p> <p><img src="" width="288" height="70" alt="" class="nobg ir" /> Due to the size and scale of the application—it’s quite big and complex in comparison to many Opensocial apps—our message bundle <span class="caps">XML</span> files contain a 
lot of content and, as a consequence, consume quite a lot of bandwidth over the wire. </p> <p>So, in lieu of consistent caching across Opensocial containers, this issue can be sidestepped by <em>inlining</em> the message bundle content inside your application gadget specification. This is a new feature introduced in Opensocial 0.8.1, and so you should ensure the container you’re working with is operating on a 0.8.1 codebase.</p> <h2>How to Inline a Message Bundle</h2> <p>It’s actually <em>very</em> simple, but it’s also something that isn’t officially documented anywhere. Opensocial development often involves shooting in the dark… but fortunately it’s well enough spec’ed that a little bit of guesswork goes a long way.</p> <p>Message bundles are always referenced from the <code><ModulePrefs /></code> block in a gadget spec, and look something like this:</p> <pre><code><ModulePrefs ...>
  <Locale lang="fr" country="fr" messages=""/>
  <Locale lang="es" messages=""/>
</ModulePrefs>
</code></pre> <p>By simply removing the <code>messages</code> attribute from each <code><Locale /></code> element, and placing the relevant message bundle content (excluding the <span class="caps">XML</span> prolog) inside it, message bundles are no longer referenced externally and load along with your gadget spec:</p> <pre><code><ModulePrefs ...>
  <Locale>
    <messagebundle>
      <msg name="hello">Hello!</msg>
    </messagebundle>
  </Locale>
  <Locale lang="fr" country="fr">
    <messagebundle>
      <msg name="hello">Bonjour!</msg>
    </messagebundle>
  </Locale>
  <Locale lang="es">
    <messagebundle>
      <msg name="hello">Hola!</msg>
    </messagebundle>
  </Locale>
</ModulePrefs>
</code></pre> <p>We’ve found that it’s much more common for gadget specs to be cached hard. Thus the additional weight of inlining your message bundle content inside your gadget <span class="caps">XML</span> is negated, and the application should—container-dependent—get some improvements to stability and performance.</p> Nick 2009-01-18T12:39:58Z 2009-01-18T12:44:47Z Linux Tip: Connecting to a Remote Shell with RSync on a Port other than :22 tag:beardscratchers.com,2009-01-18:4357bcf9c70b1807791cd39e76b16baa/e44bea30cd263232475e6acd141007f0 <p class="first">It’s fairly common that server admins will change the port that the <span class="caps">SSH</span> daemon listens on, to something other than 22. 
It’s security by obscurity; adding just a little extra hard work for those nefarious people trying to access public-facing machines they don’t own. </p> <p>Unfortunately this makes it a bit more tricky to use a number of ssh-enabled commands, since it’s not always obvious how to pass the alternative port through. With rsync, it’s pretty easy once you know how.</p> <p>A normal rsync command to sync files <em>from</em> a remote machine to a local machine—over port 22—would look something like this:</p> <pre><code>rsync -rav nick@myhostname.com:/home/nick/somedirectory . </code></pre> <p>So, if instead we wanted to use port 12345, we simply add the <code>--rsh=</code> switch (or <code>-e</code>, which is an alias) to our command. This allows us to specify a <em>r</em>emote <em>sh</em>ell to connect to; in other words, the parameters to pass to the ssh command. Here’s how it looks:</p> <pre><code>rsync -rav --rsh='ssh -p12345' nick@myhostname.com:/home/nick/somedirectory . </code></pre> <p>or using the <code>-e</code> alias:</p> <pre><code>rsync -rav -e 'ssh -p12345' nick@myhostname.com:/home/nick/somedirectory . </code></pre> <p>Hopefully that saves a few headaches…</p> Nick 2008-12-30T10:59:24Z 2008-12-30T11:20:16Z 5 Hot Icon Sets for your Next Web Application Design tag:beardscratchers.com,2008-12-14:4357bcf9c70b1807791cd39e76b16baa/ebaacfc3ec0af065dc4fe7aaace17207 <p class="first">In this age of web-development, where static brochure sites are truly old-hat and every client demands interactivity, having a consistent theme for user interaction has become one of the most important steps in realising a successful design. 
</p> <p>One of the first lessons of <acronym title="Human Computer Interaction"><span class="caps">HCI</span></acronym> is the concept of <a href="">interface metaphors</a> – singular design elements that quickly infer an action or result of the user based on familiarity and association. The Recycle Bin and the Folder being two of the most well-known. </p> <p>On the web, icon design contributes to this hugely, and the success of a site’s design is largely thanks to a consistent icon set. Unfortunately most of us don’t have the time, patience (or talent!) to create an all-encompassing set of icons for a project, and so we must rely on the excellent work of designers around the World to help us on our way. </p> <p>Here are 5 free icon sets, available for commercial use, that I believe fit the bill over the plethora of icon sets available:</p> <h2>FamFamFam Silk</h2> <p>How could I not start with the ubiquitous Silk set. It’s a set that is seen absolutely everywhere, for a very good reason. It’s a huge set of 700 beautifully crafted icons that are free of a strong design-style; able to fit into any design and for any purpose.</p> <p>» <a href="">Download FamFamFam Silk</a></p> <h2>FamFamFam Mint</h2> <p>I’m particularly fond of this small set of minty-hued icons (apparently inspired by Shaun Inman’s <a href="">Mint</a>). While they might not find their place in every design, their minimalism and space-saving dimensions deserve a look.</p> <p class="clear"><img src="" width="11" height="11" alt="" class="nobg" /> <img src="" width="11" height="11" alt="" class="nobg" /> <img src="" width="11" height="11" alt="" class="nobg" /> <img src="" width="11" height="11" alt="" class="nobg" /></p> <p>» <a href="">Download FamFamFam Mint</a></p> <h2>Sweetie</h2> <p>Compact, but useful icon set with a glossy and colourful style. 
Comes with 4 icon sizes, from 8×8 up to 24×24, for a number of the included icons.</p> <p>» <a href="">Download Sweetie</a></p> <h2>n-design Mini Pixel-Icons</h2> <p>A smart and well-featured set of 320 14×14 icons; particularly good for e-commerce sites, as a good number of credit-card, shopping-cart and basket icons are included. Comes bundled with multiple shades to fit into and brighten up a plain app design.</p> <p class="clear"><img src="" width="14" height="14" alt="" class="nobg" /> <img src="" width="14" height="14" alt="" class="nobg" /> <img src="" width="16" height="16" alt="" class="nobg" /> <img src="" width="14" height="14" alt="" class="nobg" /> <img src="" width="14" height="14" alt="" class="nobg" /> <img src="" width="14" height="14" alt="" class="nobg" /></p> <p>» <a href="">Download n-design Mini Pixel-Icons</a></p> <h2>Pinvoke Fugue</h2> <p>A huge, stylish icon set that I’ve not seen as widespread in the wild as Silk. Certainly one of the best large icon sets available on the web.</p> <p>» <a href="">Download Pinvoke Fugue</a></p> <h2>Closing Note</h2> <p>Most icon sets come prepared with transparent backgrounds, using <span class="caps">PNG</span> alpha-transparency. This type of transparency is unsupported in IE6 and produces undesirable results.</p> <p>The simplest solution to this problem is to create alternative versions for IE6 in <span class="caps">PNG</span>-8 format (which supports 1-bit transparency). A little bit of pixel-pushing will be needed to make them look perfect, but I’ve never had any problems using this workaround. 
The Pinvoke and Sweetie sets already come bundled with alternative IE6-friendly versions.</p> Nick 2008-11-30T20:41:08Z 2008-12-06T17:03:59Z Compendium Videos now Triple Filtered for your Pleasure tag:beardscratchers.com,2008-11-30:4357bcf9c70b1807791cd39e76b16baa/ed5d8fd45908f2958165f21a9a3acf1f <p class="first">Since introducing <span class="caps">MTV</span> networks into the Compendium, I happened across <a href="">Yahoo’s excellent Music <span class="caps">API</span></a>, which offers up the Yahoo music video catalogue as part of the service. Good news, these are now available through the Compendium. This brings the total number of officially released music videos to over 60,000.</p> <p><img src="" width="150" height="65" alt="" class="nobg ir" />Well, almost… In combining the two content networks, I’ve had to be quite careful about what appears where, to whom and how. As I mentioned in a previous post about <span class="caps">MTV</span> Networks, video content comes from the most restricted datafeeds used on the Compendium, particularly regarding geographical location. Depending on your location and licensing, some videos may not be available.</p> <p>When videos are loaded in an artist profile on the Compendium, an <span class="caps">AJAX</span> request invokes two simultaneous server-side requests to go off and retrieve videos from Y!Music and <span class="caps">MTVN</span>—this is cached. 
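<p>The retrieve-and-cache flow can be sketched as follows. This is a minimal illustrative sketch in JavaScript with hypothetical stand-in fetchers; the real lookups happen server-side in <span class="caps">PHP</span>.</p>

```javascript
// Hypothetical sketch of the cached dual-lookup: return the cached combined
// list when present, otherwise query both catalogues, combine, and cache.
// fetchYMusic and fetchMtvn are synchronous stand-ins for the real requests.
function getVideos(artist, cache, fetchYMusic, fetchMtvn) {
  if (cache[artist]) return cache[artist];  // cache hit: no requests fire
  var combined = fetchYMusic(artist).concat(fetchMtvn(artist));
  cache[artist] = combined;                 // cache for subsequent loads
  return combined;
}

// Stub fetchers that count how often they actually run
var calls = 0;
var cache = {};
var stubYMusic = function () { calls++; return [{ title: "Inside a Boy" }]; };
var stubMtvn = function () { calls++; return [{ title: "Dragonfly" }]; };

var first = getVideos("My Brightest Diamond", cache, stubYMusic, stubMtvn);
var second = getVideos("My Brightest Diamond", cache, stubYMusic, stubMtvn);
```

<p>The second call returns the cached list without firing either lookup again, which is the whole point of caching the combined result rather than the raw feeds.</p>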
Once these are successfully retrieved, they are combined and then passed off for pre-filtering.</p> <p>Filtering the video feeds involves three steps:</p> <ol> <li>Remove duplicate videos, based on the song title</li> <li>Remove videos that are not specifically by the artist or under the artist’s name, but may have been returned as part of the webservice search query</li> <li>Remove videos that are restricted either by geographical location or are not available for external embedding.</li> </ol> <p>Geographical filtering is done with the help of the <a href="">Net_GeoIP</a> Pear package. This is an OO version of the regular Pear GeoIP package that queries the flat Maxmind GeoIP databases. It returns geolocation information based on IP. It is still in beta, but appears completely stable. It can be installed from the command-line using:</p> <pre><code>$ pear config-set preferred_state beta $ pear install Net_GeoIP $ pear config-set preferred_state stable </code></pre> <p>Conveniently it uses a Singleton pattern to stop the flat database files being initialised multiple times within a script. However it is still by far the slowest data lookup in the Compendium. I wrapped my own lookup methods into another Singleton, and used <code>$_SESSION</code> values to store country and city lookups. This prevents multiple database lookups from happening unless the values change, since geolocation is used not only in video feeds but also in events data lookups. Further improvements can be made by pushing the flat database files into memory or into a <span class="caps">DBMS</span> like MySQL.</p> <p>With this geolocation data at hand, <span class="caps">MTVN</span> videos are filtered based on attributes in the returned dataset. Y!Music works slightly differently, in that each geographical location has its own base <span class="caps">API</span> address, returning videos only available in that region. 
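<p>Putting the three filtering steps together, a minimal sketch looks like this. The video object shape (title, artist, regions, embeddable) is a hypothetical stand-in for the real Y!Music/<span class="caps">MTVN</span> feed attributes, and the sample data is made up.</p>

```javascript
// Illustrative sketch of the three-step filter described above. The video
// object shape here is an assumption, not the actual feed schema.
function filterVideos(videos, artistName, userCountry) {
  var seen = {};
  var kept = [];
  for (var i = 0; i < videos.length; i++) {
    var v = videos[i];
    var titleKey = v.title.toLowerCase();
    if (seen[titleKey]) continue;                                      // 1. duplicate song title
    if (v.artist.toLowerCase() !== artistName.toLowerCase()) continue; // 2. not under the artist's name
    if (!v.embeddable) continue;                                       // 3a. not embeddable externally
    if (v.regions && v.regions.indexOf(userCountry) === -1) continue;  // 3b. geo-restricted
    seen[titleKey] = true;
    kept.push(v);
  }
  return kept;
}

// Made-up sample feed combining both catalogues
var sample = [
  { title: "Inside a Boy", artist: "My Brightest Diamond", regions: ["US"], embeddable: true },
  { title: "Inside a Boy", artist: "My Brightest Diamond", regions: ["US"], embeddable: true },
  { title: "Cover Version", artist: "Someone Else", regions: ["US"], embeddable: true },
  { title: "From the Top of the World", artist: "My Brightest Diamond", regions: ["GB"], embeddable: true }
];

var forUs = filterVideos(sample, "My Brightest Diamond", "US");
```

<p>A US visitor keeps only the first “Inside a Boy” entry: its duplicate, the wrong-artist match and the GB-only video are all dropped.</p>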
So Y!Music is filtered at the source, rather than at the point of retrieval.</p> <p>Once this filtering takes place, the resulting array is checked to see if it still contains any valid videos. If there are no videos available, an empty response is returned to the page. This return value causes the Compendium javascript to invoke another <span class="caps">AJAX</span> call to retrieve fallback content from Youtube. Youtube nearly always returns results from its catalogue of x million videos; but these still need a little bit of filtering and nudging to produce accurate results.</p> <p>All these lookups are cached, and the <span class="caps">AJAX</span> lookups are bypassed when valid caches exist, meaning that videos are injected directly into the page when it renders.</p> <p>If you’re in the US, the UK, much of Europe or a few other places, you should see official music videos appearing for selected artists. If not, you’ll get Youtube results appearing.</p> <p>Some examples:</p> <ul> <li><a href="">Britney Spears Videos</a></li> <li><a href="">Led Zeppelin Videos</a></li> <li><a href="">My Brightest Diamond Videos</a></li> </ul> <p>The last, however, is US only. So to end this post on a high-note, here’s the excellent video by Rafa Toro of <em>My Brightest Diamond</em> – <strong>Inside a Boy</strong>. 
In hi-def, courtesy of <a href="">Vimeo</a> (coming soon to the Compendium… perhaps):</p> <object width="667" height="500"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="" /><embed src="" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="667" height="500"></embed></object><p><a href="">Inside A Boy</a> from <a href="">Rafa Toro</a> on <a href="">Vimeo</a>.</p> <p>If you find any bugs with the video feeds, please let me know.</p> Nick 2008-11-30T18:38:23Z 2009-05-05T08:51:01Z Fixing the Enter Keypress Event in ASP.NET with jQuery tag:beardscratchers.com,2008-11-11:4357bcf9c70b1807791cd39e76b16baa/340985627edc8cbf7513ad6de20e4955 <p class="first">One of the most frustrating things about working with .NET from a front-end developer’s viewpoint is the Single Form Model. Enclosing an entire website or web-application in one single <code><form></code> element poses a number of accessibility and usability problems surrounding form input and usage. One of these is ensuring the correct default actions are assigned to sets of input fields when the enter key is used.</p> <p><img src="" width="291" height="90" alt="" class="nobg ir" />Traditionally, the default action for a <code><form></code> is to fire the <em>first submit button</em> found within the current <code><form></code> element. Every form has one default action. </p> <p>Striking the enter key within a text input field should submit the current set of—logically grouped—fields; <strong>this is the expected behaviour</strong>. For pages with multiple forms and actions, this is easily separated by having multiple <code><form></code> elements, each with their own submit buttons and actions. 
Each form operates independently, has its own default action, and doesn’t interfere with other forms.</p> <p>In the Single Form Model, the presence of just one <code><form></code> element, means that different default actions cannot be easily separated. Every input field on the page is automatically tied to just <em>one</em> default action – the first submit button on the page. </p> <p>Take a blog site as an example – like this very page! It has a search form at the top, with associated submit button, a comment form further down the page and perhaps another form for signing up to a newsletter. Implemented with the Single Form Model, only the search form will produce the correct behaviour as it introduces the first submit button on the page. All subsequent fields will be tied to this same button as their default action – submitting a comment by pressing enter would cause the search form to submit, as would signing up to a newsletter. Not particularly useful.</p> <h2>Can we cure this problem?</h2> <p>The answer is yes… partially. The problem we have concerns UI behaviour. Any solution needs to manipulate and override behaviour, and this is the domain of Javascript. We can’t manipulate the markup in our favour as we’re under the control of the Single Form Model (not without a complete switch to .NET <span class="caps">MVC</span> anyway). But there’s a conflict here—javascript is not a fully accessible technology. There are many environments where users do not have Javascript available.</p> <p>However, in this case, using Javascript is an acceptable answer—this <em>is</em> progressive enhancement of sorts. We’re not adding a dependency on Javascript, merely enhancing the usability of the form inputs for those users with it enabled. Javascript-disabled users will still be able to use the form, except they will experience the ‘broken’ behaviour. 
Form accessibility in .NET is pretty horrendous anyway, so anything we can do to make improvements is better than nothing at all.</p> <h2>Doesn’t .NET provide a solution already?</h2> <p>Yes, it does in version 2.0 and above. This is the <a href="">defaultbutton attribute</a>, which can be used in <code><asp:panel /></code> and <code><form></code>. Fine, so why not use this? There’s no real reason not to, <em>if you’re starting from scratch</em> and don’t mind crufty .aspx pages. </p> <p>But this wasn’t suitable for me. In an existing codebase, this requires a lot of extra change, and the addition of specific panels to group related sets of form fields is <em>completely</em> unnecessary. An <span class="caps">HTML</span> element already exists for that: <code><fieldset></code>. Yes, the mighty fieldset – who would have thought?! In any sanely coded page, fieldsets group fields, and so we already have part of the solution without needing to make any changes. </p> <p><div class="code-sample"><table summary="This table lists the contents of the file keylistener-markup"><colgroup><col class="line-no" /><col class="line" /></colgroup> <thead><tr><th>#</th><th>Code</th></tr></thead> <tbody><tr class="odd"><td>0001</td><td class="tab0"><fieldset></td></tr> <tr><td>0002</td><td class="tab1"><legend>Search</legend></td></tr> <tr class="odd"><td>0003</td><td class="tab1"><input type="text" id="query" name="query" value="" /></td></tr> <tr><td>0004</td><td class="tab1"><button type="submit" name="search" id="search">Search</button></td></tr> <tr class="odd"><td>0005</td><td class="tab0"></fieldset></td></tr> </tbody></table> <p><a href="">get this code</a></p> </div></p> <p>The other reason for rolling our own solution is that <code>defaultbutton</code> doesn’t offer the flexibility of applying the default action to any type of button—be it an <code><input type="submit" /></code>, a <code><button /></code>, an <code><a href="javascript:..."></code> anchor and so on. 
Nor provide any custom event handling when default actions are fired.</p> <h2>A Proposed Solution</h2> <h3>1. Connect fieldsets with their default buttons using minimal markup</h3> <p>Although fields are grouped, we still need to provide some relationship between a button and a fieldset. We should provide enough flexibility that our default button could be anywhere on the page, not necessarily within the <code><fieldset></code>.</p> <p>A clean and unobtrusive way to provide the relationship is via a common attribute, the classname. We could use the id attribute, but this should be avoided – you have to accept that when you’re working with .NET, it owns every ID attribute.</p> <p>Using the classname, we create a unique identifier for the fieldset. If we then add the same identifier to the default button for that fieldset, a useful relationship between the two is created. For this to work best, all identifiers should begin with the same string, so that we may write a generic function to pick them out of the markup, and the identifier for each button-to-fieldset relationship must be unique. 
Here I’ve used “<em>submit-…</em>“:</p> <p><div class="code-sample"><table summary="This table lists the contents of the file keylistener-markup-connected"><colgroup><col class="line-no" /><col class="line" /></colgroup> <thead><tr><th>#</th><th>Code</th></tr></thead> <tbody><tr class="odd hi1"><td>0001</td><td class="tab0"><fieldset class="submit-search"></td></tr> <tr><td>0002</td><td class="tab1"><legend>Search</legend></td></tr> <tr class="odd"><td>0003</td><td class="tab1"><input type="text" id="query" name="query" value="" /></td></tr> <tr class="hi1"><td>0004</td><td class="tab1"><button type="submit" name="search" id="search" class="submit-search">Search</button></td></tr> <tr class="odd"><td>0005</td><td class="tab0"></fieldset></td></tr> <tr><td>0006</td><td class="tab0"> </td></tr> <tr class="odd"><td>0007</td><td class="tab0">...</td></tr> <tr><td>0008</td><td class="tab0"> </td></tr> <tr class="odd hi1"><td>0009</td><td class="tab0"><fieldset class="submit-comment"></td></tr> <tr><td>0010</td><td class="tab1"><legend>Add a comment</legend></td></tr> <tr class="odd"><td>0011</td><td class="tab1"><input type="text" id="name" name="name" value="" /></td></tr> <tr><td>0012</td><td class="tab0"> <textarea name="comment" id="comment"></textarea></td></tr> <tr class="odd hi1"><td>0013</td><td class="tab1"><input type="submit" name="search" id="search" class="submit-comment" value="Submit Comment" /></td></tr> <tr><td>0014</td><td class="tab0"></fieldset></td></tr> </tbody></table> <p><a href="">get this code</a></p> </div></p> <p>Now we have connected buttons and fieldsets, we need to handle events on the page fired when the enter key is hit.</p> <h3>2. Handle enter key events with Javascript/jQuery</h3> <p>There are a couple of ways we can do this:</p> <ol> <li>Be proactive. 
Scan the entire <span class="caps">DOM</span> on page load, find any fieldsets with a “submit-…” class and bind an onkeypress event to every input inside to connect it with the right button.</li> <li>Be passive. Use event delegation and <em>listen</em> for keypress events on the page and decide whether we <em>need</em> to handle a default action event for it.</li> </ol> <p>Option 1 is quick to implement, but very inefficient. On <em>every</em> page load, we must scan the entire <span class="caps">DOM</span> for <code><fieldset></code>s, and then do all the work of binding events to the input elements, even to elements that may never be used. This is computationally expensive and will be a real drag on page startup times.</p> <p>With option 2, we need only find <strong>one element</strong> in the <span class="caps">DOM</span>, bind <strong>one event</strong> to it and then <strong>wait</strong> for the enter key to be pressed:</p> <ol> <li>Using a fast ID selector, <code>$('#content')</code>, we find one containing element for the whole page. It could be <code><body></code>, but in this code I’ve gone for an imaginary <code><div id="content"></div></code> element that might always surround our site’s content. This element will handle the keypresses for all fields on the page.</li> <li>To this element we bind an <code>onkeypress</code> event listener function to listen out for any keypresses that happen within it— <code>$('#content').bind('keypress', function(e) {});</code>.</li> <li>We start the function by checking for two attributes of the keypress event. These are both contained within the event object that is passed into the function— <code>e</code>: <ol> <li>The key that was pressed</li> <li>The target <span class="caps">HTML</span> element that the keypress has bubbled up from.</li> </ol></li> <li>For valid keypresses— an enter keypress from an input target— we find the input element’s parent fieldset, and extract the classname beginning with “submit-”. 
We use the <span class="caps">CSS</span> attribute selector and find any value that begins with (^=) the submit string— <code>('[class^="submit-"]')</code>.</li> <li>We can now find the button in the page that corresponds to the identifier, and trigger a click event on it. We need to handle different types of buttons, submits and anchors slightly differently.</li> </ol> <p><div class="code-sample"><table summary="This table lists the contents of the file keylistener"><colgroup><col class="line-no" /><col class="line" /></colgroup> <thead><tr><th>#</th><th>Code</th></tr></thead> <tbody><tr class="odd"><td>0001</td><td class="tab0">KeyListener = {</td></tr> <tr><td>0002</td><td class="tab0"> </td></tr> <tr class="odd"><td>0003</td><td class="tab1">init : function() {</td></tr> <tr><td>0004</td><td class="tab2">$('#content').bind('keypress', function(e) {</td></tr> <tr class="odd"><td>0005</td><td class="tab3">var key = e.charCode ? e.charCode : e.keyCode ? e.keyCode : 0;</td></tr> <tr><td>0006</td><td class="tab3">var target = e.target.tagName.toLowerCase();</td></tr> <tr class="odd"><td>0007</td><td class="tab3">if (key === 13 && target === 'input') {</td></tr> <tr><td>0008</td><td class="tab4">e.preventDefault();</td></tr> <tr class="odd"><td>0009</td><td class="tab0"> </td></tr> <tr><td>0010</td><td class="tab4">var parentFieldset = $(e.target).parents('fieldset');</td></tr> <tr class="odd"><td>0011</td><td class="tab4">parentFieldset = parentFieldset.filter('[class^="submit-"]').eq(0);</td></tr> <tr><td>0012</td><td class="tab0"> </td></tr> <tr class="odd"><td>0013</td><td class="tab4">if (parentFieldset.length > 0) {</td></tr> <tr><td>0014</td><td class="tab5">var classnames = parentFieldset.attr('class').split(' ');</td></tr> <tr class="odd"><td>0015</td><td class="tab0"> </td></tr> <tr><td>0016</td><td class="tab5">for (var i = 0; i < classnames.length; i++) {</td></tr> <tr class="odd"><td>0017</td><td class="tab6">if (classnames[i].substring(0, 7) == 'submit-') 
{</td></tr> <tr><td>0018</td><td class="tab7">var button = $('a.' + classnames[i] + ', button.' + classnames[i], $(this)).eq(0);</td></tr> <tr class="odd"><td>0019</td><td class="tab7">if (button.length > 0) {</td></tr> <tr><td>0020</td><td class="tab8">if (typeof(button.get(0).onclick) == 'function') {</td></tr> <tr class="odd"><td>0021</td><td class="tab9">button.trigger('click');</td></tr> <tr><td>0022</td><td class="tab8">} else if (button.attr('href')) {</td></tr> <tr class="odd"><td>0023</td><td class="tab9">window.location = button.attr('href');</td></tr> <tr><td>0024</td><td class="tab8">} else {</td></tr> <tr class="odd"><td>0025</td><td class="tab9">button.trigger('click');</td></tr> <tr><td>0026</td><td class="tab8">}</td></tr> <tr class="odd"><td>0027</td><td class="tab7">}</td></tr> <tr><td>0028</td><td class="tab7">break;</td></tr> <tr class="odd"><td>0029</td><td class="tab6">}</td></tr> <tr><td>0030</td><td class="tab5">}</td></tr> <tr class="odd"><td>0031</td><td class="tab4">}</td></tr> <tr><td>0032</td><td class="tab3">}</td></tr> <tr class="odd"><td>0033</td><td class="tab2">});</td></tr> <tr><td>0034</td><td class="tab1">}</td></tr> <tr class="odd"><td>0035</td><td class="tab0">};</td></tr> <tr><td>0036</td><td class="tab0"> </td></tr> <tr class="odd"><td>0037</td><td class="tab0">$(document).ready(function() {</td></tr> <tr><td>0038</td><td class="tab1">KeyListener.init()</td></tr> <tr class="odd"><td>0039</td><td class="tab0">});</td></tr> <tr><td>0040</td><td class="tab0"> </td></tr> </tbody></table> <p><a href="">get this code</a></p> </div></p> <p>If no button is found, we leave the function silently. We could re-trigger the default action, but this is what we’re trying to fix. 
I think it’s better that the enter key stops working completely, rather than producing unexpected or potentially dangerous results if the wrong action is triggered.</p> Nick 2008-11-19T02:52:14Z 2008-11-23T13:18:52Z Official Music Videos Now Available Across the Compendium tag:beardscratchers.com,2008-11-19:4357bcf9c70b1807791cd39e76b16baa/3443787196676bcb9637ad58a5b83a8a <p class="first">In a bid to improve the accuracy of artist profile pages, I’ve been spending a fair amount of time on cleaning up the problem of false positives occurring when media is retrieved and matched to artists. </p> <p>One major step in this process is bringing good quality, official music videos into the fold. From today, music videos are available for many artists listed in the Compendium, courtesy of <a href=""><span class="caps">MTV</span> Networks</a>. <span class="caps">MTV</span> Networks encompasses not only <span class="caps">MTV</span> itself, but VH1, <span class="caps">CMT</span> and Logo; meaning that a wide variety of artist videos are served.</p> <p>From worldwide artists such as <a href="">Led Zeppelin</a> and <a href="">Britney Spears</a>, to some of the excellent, but perhaps not so far-reaching, artists like <a href="">Fleet Foxes</a> and <a href="">TV On the Radio</a>.</p> <p>In the event that there aren’t any official videos available, the Compendium will still fall back on Youtube as a source of video goodness and attempt to sift through the chaff of potential matches for the artist being viewed. 
I’m keen to keep this an option at all times, as user-submitted videos are often more revealing about an artist than carefully planned and meticulously orchestrated official videos ever can be – peculiar fan films, shaky concert and festival footage from the pit, interviews from late-night TV you’d otherwise miss and much more.</p> <p>Since this is brand new, you may experience a few teething issues. I’m still investigating a few details surrounding the broadcast of music videos, and the availability of these videos across the World. Music is global, but copyright and content licenses are sadly xenophobic areas. And currently just a randomised selection of official videos is being shown for each page-view – paging through videos akin to the way Youtube videos are displayed is due shortly.</p> <p>Oh, and if you happen to find the Compendium a useful tool, please take a moment to vote for it on <a href="">Programmable Web</a> (it was a proud “Mashup of the Day” at the end of last month) and feel free to add any suggestions or comments to this journal entry.</p> Nick 2008-10-31T01:17:02Z 2008-10-31T13:57:58Z Linkedin launch new Opensocial platform, featuring Huddle.net tag:beardscratchers.com,2008-10-30:4357bcf9c70b1807791cd39e76b16baa/c02a1911cf8ca81d0b6d6e5df066804b <p class="first">In case you missed <a href="">the news</a>, one of the largest business networking sites <a href="">Linkedin</a> have just launched their new platform for <a href="">Opensocial applications</a>.</p> <p>In this initial phase of the platform rollout, Linkedin sensibly took the decision not to open the floodgates to developers and launch with just eight selected partners who could provide application features that are relevant and provide value for the core Linkedin userbase. 
Business deals aren’t made by virtual high-fiving the <span class="caps">CEO</span> of a multinational, and dream jobs aren’t won by poking your prospective employer—well at least not without the risk of legal action.</p> <p>But enough about Linkedin’s launch from me; there’s lots in the <a href="">blogosphere and the tech press out there</a>. I want to focus on the <em>most important</em> and <em>exciting</em> aspect of this launch. That being the existence of online collaboration startup <a href="">Huddle.net</a> as one of the eight partners to be featured in the launch. They are the <em>only</em> UK-based partner—in fact the only one outside the US—and a company that I hold in very high regard… it is the company that I work for. It’s a very exciting time to see Huddle sitting up there against big heavyweights Google, Wordpress and Amazon. And, from the development team’s viewpoint, a somewhat frightening prospect too… scalability in code and hardware is a whole new ballgame.</p> <p>So, without further ado, I present to you the result of a fair amount of blood, sweat and tears [not forgetting the coffee, cigarettes, beer and pizza] amongst my colleagues—it’s been an incredibly intense couple of months— the <strong><a href="">Huddle Workspaces application</a> </strong> on Linkedin. </p> <blockquote class="full"> <p class="full">“Huddle Workspaces offers private online workspaces for secure team collaboration, document sharing and discussions within the LinkedIn network. Users receive 1Gb of free storage and can collaborate with unlimited connection.”</p> </blockquote> <p>I’m unashamedly biased, but I do highly recommend you to go and <a href="">get it installed</a> and simply have a play. There’s a lot more going on in the app than anything else on Linkedin’s platform at the moment. 
It’s a shame that it’s the only properly interactive and genuinely collaborative tool available. All the others seem, at first glance, to suffer from that rather ugly social network trait of passive “look at me! look at me!” signposting and ignore the point of networking in the first place – to actively connect, engage and collaborate with likeminded people. So, in case you didn’t catch that… <a href="">install Huddle Workspaces on Linkedin here</a>.</p> <p>Ok. I’ll stop with the shameless pimping. I do have sleep to catch up on. In future entries I want to write about the more interesting stuff of actually working with the <a href="">Opensocial <span class="caps">API</span></a>. It’s currently a technology lacking general blog chatter. There are lots of questions out there, but usually with few answers; it was peculiar to discover Myspace as a technical resource. And the documentation is currently too dry/technical and lacking in ‘human’ explanation to be a sufficient one-stop Opensocial resource. If in fact there is something out there I missed, please let me know in the comments.</p>
An applet can output a message to the status window of the browser or applet viewer on which it is running. To do this, it makes a call to showStatus() with the string that we want displayed. The status window is a good place to give the user feedback about what is occurring in the applet, suggest options, or report some types of errors. The status window also makes an excellent debugging aid, because it gives an easy way to output information about the applet. The applet below shows the use of showStatus():
import java.awt.*;
import java.applet.*;
/*
<applet code="StatusWindow" width=300 height=50> </applet>
*/
public class StatusWindow extends Applet
{
    public void init()
    {
        setBackground(Color.pink);
    }

    public void paint(Graphics g)
    {
        // Drawn inside the applet's own window
        g.drawString("You are in main applet window.", 10, 20);
        // Shown in the status bar of the browser or applet viewer
        showStatus("This is the status window.");
    }
}
The output of the above applet code is as follows: the string "You are in main applet window." is drawn inside the applet's window, while "This is the status window." appears in the status bar of the browser or applet viewer.
Opened 3 years ago
Closed 3 years ago
#25838 closed defect (fixed)
sagemath does not translate function('F') to F := operator "F" for fricas interface
Description
When using algorithm="fricas" in integrate, I defined F as a function in SageMath. But when the call is made to integrate using FriCAS, it did not seem to be translated correctly to FriCAS syntax, which is
F := operator "F"
So fricas was not happy.
>sage
SageMath version 8.3.rc0, Release Date: 2018-07-08
sage: var('x')
x
sage: F=function('F')
sage: integrate(F(x),x,algorithm="fricas")
.....
TypeError: An error occurred when FriCAS evaluated 'F(x)':
The FriCAS syntax, using FriCAS directly, is
>fricas
Checking for foreign routines
AXIOM="/usr/lib/fricas/target/x86_64-linux-gnu"
spad-lib="/usr/lib/fricas/target/x86_64-linux-gnu/lib/libspad.so"
FriCAS Computer Algebra System
Version: FriCAS 1.3.4
Timestamp: Thu Jun 28 08:31:47 CDT 2018
-------------------------------------------------------------------
(1) -> F := operator "F"

   (1)  F
                                                 Type: BasicOperator
(2) -> integrate(F(x),x)

          x
         ++
   (2)   |  F(%A)d%A
        ++
                                 Type: Union(Expression(Integer),...)
So many such integrals that use a generic function like the above now fail in SageMath when using the FriCAS algorithm, because the function is not translated correctly.
Thank you
--Nasser
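For reference, the translation this ticket asks for can be sketched as a pair of small string-building helpers that emit the required operator declaration before the integral command. These helper names are illustrative only, not Sage's actual interface code:

```python
def fricas_declare_operator(name):
    # FriCAS requires an undefined symbolic function to be declared
    # as an operator before it can appear in an expression.
    return '{0} := operator "{0}"'.format(name)

def fricas_integrate_call(func_name, var):
    # Build the FriCAS command for the indefinite integral of func_name(var).
    return "integrate({0}({1}),{1})".format(func_name, var)

# For Sage's integrate(F(x), x, algorithm="fricas"), the interface
# would have to send both statements, in this order:
print(fricas_declare_operator("F"))     # F := operator "F"
print(fricas_integrate_call("F", "x"))  # integrate(F(x),x)
```

Sending the declaration first is what makes FriCAS accept `F(x)` where it previously raised the TypeError shown above.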
Change History (19)
comment:1 Changed 3 years ago by
- Type changed from PLEASE CHANGE to defect
comment:2 Changed 3 years ago by
comment:3 Changed 3 years ago by
- Branch set to u/mantepse/sagemath_does_not_translate_function__f___to_f____operator__f__for_fricas_interface
comment:4 Changed 3 years ago by
- Cc rws added
- Commit set to 78027704e00f0a6516261cf559011fa58a1d4dec
- Status changed from new to needs_review
New commits:
comment:5 Changed 3 years ago by
- Keywords sagedays@icerm added
- Milestone changed from sage-8.3 to sage-8.4
- Reviewers set to Travis Scrimshaw
One (trivial, looks like a bad copy/paste) doctest failure here:
def derivative(self, ex, operator):
    """
    Convert the derivative of ``self`` in FriCAS.

    INPUT:

    - ``ex`` -- a symbolic expression

    - ``operator`` -- operator

    EXAMPLES::

        sage: var('x,y,z')
        (x, y)
Should be
(x, y, z).
Otherwise LGTM.
comment:6 Changed 3 years ago by
- Commit changed from 78027704e00f0a6516261cf559011fa58a1d4dec to 0665ead3c565bc6a7963c5a08837226f60b7df9c
Branch pushed to git repo; I updated commit sha1. New commits:
comment:7 follow-up: ↓ 8 Changed 3 years ago by
Edge case for naming functions vs. variables:
sage: F=function('f')
sage: f=SR.var('f')
sage: F(f).diff(f).integrate(f)
f(f)
amazingly this actually works with the maxima interface because:
sage: maxima_calculus(F(f))
'f(_SAGE_VAR_f)
comment:8 in reply to: ↑ 7 ; follow-up: ↓ 10 Changed 3 years ago by
I'm not sure what I'm supposed to do. This "works" with FriCAS, too:
sage: F=function('f')
sage: f=SR.var('f')
sage: fricas(F(f)).D(f).integrate(f)
f(f)
Am I missing something? Oh yes, a missing import, fixed.
comment:9 Changed 3 years ago by
- Commit changed from 0665ead3c565bc6a7963c5a08837226f60b7df9c to 9a76eb0fe7582151c290c001633b9f1720fe0bc3
Branch pushed to git repo; I updated commit sha1. New commits:
comment:10 in reply to: ↑ 8 ; follow-up: ↓ 11 Changed 3 years ago by
I'm not sure what I'm supposed to do. This "works" with FriCAS, too: Am I missing something? Oh yes, a missing import, fixed.
OK, then it's fine. I don't have FriCAS installed, so I couldn't check. When I saw
F := operator F
I figured some care should be taken to avoid name clashes, but apparently this is taken care of already.
comment:11 in reply to: ↑ 10 Changed 3 years ago by
review?
comment:12 Changed 3 years ago by
- Status changed from needs_review to positive_review
LGTM now, thanks.
comment:13 Changed 3 years ago by
comment:14 Changed 3 years ago by
- Status changed from positive_review to needs_work
Merge conflict
comment:15 Changed 3 years ago by
- Commit changed from 9a76eb0fe7582151c290c001633b9f1720fe0bc3 to aa044c5a0e8edb13825000b2dd76acf5ff232ff3
Branch pushed to git repo; I updated commit sha1. New commits:
comment:16 Changed 3 years ago by
trivial rebase
comment:17 Changed 3 years ago by
- Status changed from needs_work to needs_review
comment:18 Changed 3 years ago by
- Status changed from needs_review to positive_review
comment:19 Changed 3 years ago by
- Branch changed from u/mantepse/sagemath_does_not_translate_function__f___to_f____operator__f__for_fricas_interface to aa044c5a0e8edb13825000b2dd76acf5ff232ff3
- Resolution set to fixed
- Status changed from positive_review to closed
I will work on this once #25602 is reviewed. It is not hard. | https://trac.sagemath.org/ticket/25838 | CC-MAIN-2021-17 | refinedweb | 720 | 52.8 |
IpernityNET Crack+ Free Download
IpernityNET Download With Full Crack is a framework wrapper: it wraps the Ipernity web API for the .NET Framework 3.5.
It’s easy to use. Just unpack it and start working!
@girdhar,
Yeah, I would love to use the .NET Framework with Silverlight, but it seems it will not support creating connections to the remote URL.
I see that the one which you sent is using the .NET Framework as well.
Agree. BTW there is MVC3 version too, with more than 100 pages of documentation. Check that out (
Hello,
I have already checked the MVC3 doc, but in order to do the same in Silverlight, I have to write the whole service logic in Silverlight, right?
Well yes. The architecture is identical, so what you’re after is a good presentation. There is Silverlight MVC sample which is the best guide to the world of async/await in Silverlight.
1. I have already created the MVC3 version, but only the mode which is used in the articles like the one that is given by Girdhar Patil. I saw in his blog, he is using the synchronous way of invoking the rest interface with the help of asynctask.
2. When I started writing the
IpernityNET Activation Code Free Download X64 [March-2022]
The IpernityNET package allows easy integration with the Ipernity web API in your .NET application. As the Ipernity API wrapper is implemented with the ASP.NET framework, this package can be used in any .NET application that can use the ASP.NET Framework 3.5.
Features:
Enables easy integration of the Ipernity REST API in your.NET application.
Full .NET Framework 3.5 support.
Offers full compatibility with the Ipernity web API.
Usage:
To use the IpernityNET package, simply add a reference to it (it’s included in the NuGet package).
var ipernity = new Ipernity.IpernityNet(“key”, “secret”);
In your project, you must use the namespace IpernityNET.
To receive the IpernityNET user name and password, you must call the signUp method like this:
var user = await ipernity.signUp(PasswordTextBox.Text, UserNameTextBox.Text);
For more information, see the documentation for IpernityNET at
Features
Offers full compatibility with the Ipernity web API
Enables easy integration of the Ipernity REST API in your.NET application.
License
The following licenses apply to the ipernitynet project.
License summary:
The source code for the ipernitynet project is licensed under the BSD License.
License link:
The project is in constant development, and we welcome any contributions!
On top of that, if you want to contribute, feel free to ask me!
On top of that, if you want to contribute, feel free to ask me!
I hope you will enjoy using this package.
A special thanks to:
Bruno Dreher – Ipernity, Inc. – Switzerland –
Ipernity, Inc. is a Swiss company, based in Geneva, Switzerland. We are a company developing software for micro-level and macro-level transactions of gift cards
2f7fe94e24
IpernityNET
What is IpernityNET? IpernityNET is a framework wrapping Ipernity, providing a .NET wrapper for Ipernity.
It helps you make the right use of .NET to develop your apps more simply and rapidly than ever before, but here’s more that I can say about IpernityNET:
What is Ipernity? Ipernity is a mature, open-source image-based search engine for .NET.
Ipernity provides a very good implementation of an image-based crawler. These tools are easy to use and they can save a lot of time (especially when you have a lot of images to crawl).
You can use Ipernity to process images of any type, not only JPGs. You can also control the timeout, the priority of parsing, etc…
It provides a very complete and simple API. Here is a list of the most important classes of the API (you can find detailed documentation on Ipernity’s website):
Batch
CrawlerManager
Crawler
Crawler.SearchOptions
DocumentBatch
Job
Jobs
QueryResult
QueryResults
ImageInfo
QueryResultsInfo
RetrievalException
SearchOptionsInfo
How to Install IpernityNET?
Before you can use IpernityNET, you’ll need to install Ipernity in your.NET project.
IpernityNET is available from NuGet.
IpernityNET is released under the MIT License.
Documentation:
Documentation can be found on Ipernity’s official website.
Questions? Send me a note on e-mail: alex@xpernity.com
What is the history of Ipernity?
Ipernity was started in early 2003 by Miguel Cunha, a Microsoft Alumnus and has been growing ever since.
In 2009, Miguel started working on the C# version.
At this moment, Ipernity runs on several different platforms: Windows, Mac OS X and Linux.
Ipernity has an incredibly wide coverage of the most important uses: Social Network, File System, Blog, Web Site, Blog Search, etc.
Ipernity isn’t just a code base. It’s a project filled with people. See the stories.
What is the future of Ipernity?
Besides providing support for
What’s New in the IpernityNET?
IpernityNET is a framework wrapper: it wraps the Ipernity web API for the .NET Framework 3.5.
It’s easy to use. Just unpack it and start working!

Madison Rising
Madison Rising is a youth-led community development organization aimed at improving life and opportunity in Milwaukee, Wisconsin. The organization has a health, education, and culture focus. Madison Rising provides social service programs and works as a community organizing and training hub.
History
Madison Rising was founded in 2016 by 12 residents of the Third Ward neighborhood. Madison Rising is led by Anna Zielinski, the former head of New Leaders Council and a statewide advocate for youth-led community organizations.
In May 2019, Madison Rising created a fundraiser to support the legal representation of the 11-year-old girl who had been in a vegetative state following a traumatic brain injury. In July 2019, they installed a community youth center in the Third Ward neighborhood where 15 youth-led organizations work out of.
Programs
Madison Rising’s programs are broken into three categories: the Madison Rising Office, Healthy Living, and our learning lab.
the Madison Rising Office is focused on providing youth in our community with access to a foundation for community building. It assists with grant writing, event planning, coalition building, planning for policy change, and more.
healthy living focuses on ensuring that the people of Milwaukee have access to the resources they need to live healthy lives, including programming that focuses on biking, walking, and nutrition.
learning lab brings together four different learning centers, each focused on addressing a specific need in Milwaukee.
Partner organizations
Madison Rising is a member of the Alliance for Community Health Improvement, a network of health-focused community organizations.
The Madison Rising Office is a member of the Wisconsin Association of Community Health Centers.
The Healthy Living program is a member of the Wisconsin branch of Organizing for Action, a grassroots advocacy organization.
The Learning Lab is a member of the Walker Ideas Lab, a network of educational organizations.
References
External links
Category:Organizations based in Milwaukee
Category:Community centers in Wisconsin
Category:Non-profit organizations based in Wisconsin
Category:Youth empowerment organizations
Category:2016 establishments in Wisconsin
Category:Youth organizations based in WisconsinNetworks typically include switches, routers, bridges, servers and other network devices coupled to network links, such as optical fiber links. Network devices can transmit and receive data over network links
System Requirements For IpernityNET:
Minimum specifications:
OS: Microsoft Windows 7 ( 32 or 64-bit, all editions)
CPU: Intel(R) Core(TM)2 Duo CPU P8600 @ 2.66GHz with SSE3
Memory: 2 GB
Video: nVidia GeForce 9600 GT 256 MB
Audio: 2 channel analog sound (stereo)
DirectX: version 9.0c
Keyboard: Microsoft Natural Ergonomic Keyboard 4000 (with multimedia function)
Screendock: Microsoft Natural Ergonomic Keyboard | https://setewindowblinds.com/ipernitynet-crack/ | CC-MAIN-2022-40 | refinedweb | 1,410 | 58.58 |
05/19/11:
- 20:41 Ticket #29511 (ImageMagick @6.6.9-9+q32: unterminated argument list invoking macro ...) created by
- After trying to update, I tried to install manually, but it didn't work …
- 17:51 Ticket #29510 (apache2: Cannot load /usr/libexec/apache2/mod_authz_owner.so into server) closed by
- invalid: Great! That was easy. :) I also just updated apache2 to 2.2.18, so if you …
- 17:42 Changeset [78731] by
- apache2: update to 2.2.18, take openmaintainership
- 16:22 Ticket #29510 (apache2: Cannot load /usr/libexec/apache2/mod_authz_owner.so into server) created by
- Hello, Getting this "Cannot load" message when trying to start the …
- 13:10 Ticket #29509 (ctags: Patch php language specification) created by
- The php language specification in the ctags port doesn't deal with …
- 11:41 Ticket #29360 (GeoLiteCity update) closed by
- fixed: Fixed in r78730.
- 11:40 Changeset [78730] by
- databases/GeoLiteCity: Upgrade version to 20110501. Closes #29360
- 09:48 Changeset [78729] by
- python/python-musicbrainz2 upgraded version from 0.7.2 to 0.7.3
- 09:47 Changeset [78728] by
- audio/libmodplug upgraded version from 0.8.8.2 to 0.8.8.3
- 09:47 Changeset [78727] by
- sysutils/grep upgraded version from 2.7 to 2.8
- 09:38 Changeset [78726] by
- sysutils/file upgraded version from 5.05 to 5.07
- 09:18 Ticket #29508 (ghostscript fails, 9.02 vs 9.00?) created by
- "port install ghostscript" fails for me. […] Note, I have …
- 09:00 Ticket #29507 (additional dependencies for gdk-pixbuf2: TIFF, jasper) created by
- gdk-pixbuf2 seems to require additional dependencies that are not known to …
- 08:47 Ticket #26509 (gst-plugins-bad fails to build vpx plugin) closed by
- fixed: Ian, Because gst-plugins-bad did not explicitly depend upon libvpx, …
- 08:46 Ticket #29502 (Distfile for GeoLiteCity-20110501) closed by
- fixed: Mirrored at …
- 08:36 Ticket #29408 (gst-plugins-bad 0.10.21 build failure) closed by
- duplicate: Duplicate of #26509
- 08:33 Changeset [78725] by
- gnome/gst-plugins-bad upgraded version from 0.10.21 to 0.10.22, added …
- 08:33 Changeset [78724] by
- multimedia/libvpx enabled position independent code to resolve tickets …
- 07:13 Ticket #29506 (sandbox violation fltk-devel) created by
- fltk-devel was recently installed as dependency (for py26-fipy). …
- 07:08 Ticket #29505 (Can't deactivate port with deactivated dependent) closed by
- fixed: r78086
- 07:02 Ticket #24643 (Many ports reporting "configure: error: GTK not installed") closed by
- worksforme: No response; closing.
- 06:58 Ticket #24641 (fbg: Installation of port fbg fails) closed by
- fixed: r78723
- 06:58 Changeset [78723] by
- fbg: fix build (#24641)
- 06:40 Ticket #29487 (freetype: ftimage.h:1292:2: error: #endif without #if) closed by
- invalid
- 05:46 Ticket #24629 (gst-ffmpeg-0.10.10 problem building on Leopard and Snow Leopard) closed by
- fixed: Seems to have been fixed by r70838.
- 05:41 Ticket #29505 (Can't deactivate port with deactivated dependent) created by
- I want to deactivate the ruby port. The swig-ruby port …
- 04:15 Ticket #29504 (py26-virtualenvwrapper @2.7.1_0 virtualenvwrapper_sh.diff missing) created by
- It seems that virtualenvwrapper_bashrc.diff was removed from svn but …
- 01:35 Ticket #24626 (ffmpeg 0.5.1 fails to configure for build arch i386 on snow leopard) closed by
- worksforme: Can't reproduce with current versions.
- 01:23 Changeset [78722] by
- p5-event: update to 1.17
- 00:00 Ticket #29503 (skrooge: making doc fails) created by
- This can be seen in the make log: […] leading to this […] No clue …
05/18/11:
- 20:17 Changeset [78721] by
- Update download site
- 20:17 Ticket #26202 (ntop needs configure.args --without-ssl or depends_lib port:openssl) closed by
- fixed: Fixed in r78720
- 20:15 Changeset [78720] by
- net/ntop: Add --without-ssl; see #26202
- 19:48 Changeset [78719] by
- Update download site
- 19:47 Changeset [78718] by
- Update download site
- 19:39 Ticket #24474 (5 dovecot-antispam plugins) closed by
- wontfix
- 19:38 Ticket #24474 (5 dovecot-antispam plugins) reopened by
-
- 19:28 Changeset [78717] by
- uhd : version upgrade to 003.001.000 .
- 18:43 Ticket #29502 (Distfile for GeoLiteCity-20110501) created by
-
- 17:38 Changeset [78716] by
- Update download site
- 16:08 Ticket #29493 (gnupg mirror not available via HTTP) closed by
- fixed: Fixed in r78715.
- 16:07 Changeset [78715] by
- gnupg mirrors: The official website doesn't list this mirror as accessible …
- 15:40 Changeset [78714] by
- kmymoney4-devel: update to svn revision 1232629
- 15:22 Ticket #29491 (iulib: libiulib.dylib has wrong install_name) closed by
- fixed: r78713
- 15:22 Changeset [78713] by
- iulib: fix libiulib.dylib install_name; see #29491
- 15:16 Ticket #26202 (ntop needs configure.args --without-ssl or depends_lib port:openssl) reopened by
- I'd really rather not close genuine bug reports just because they're old.
- 15:15 Changeset [78712] by
- pdftk: make default compiler gcc42 on all systems prior to Snow Leopard …
- 14:31 Ticket #29501 (gimp @2.6.11_0 failed to run with a message referring to libpng12.0.dylib) created by
- Error message: […] Error persists after upgrading gimp.
- 14:24 Ticket #27241 (update clamav-server to 0.96.4) closed by
- fixed: Fixed r75979
- 14:20 Ticket #27225 (Update dovecot to 1.2.16) closed by
- fixed: Fixed r78550 and r78711
- 14:17 Changeset [78711] by
- mail/dovecot: Conflict with port dovecot2
- 13:57 Ticket #29500 (Percona Server 5.5.11-rel20.2 Percona database port, drop in replacement ...) created by
- Percona Server is an enhanced drop-in replacement for MySQL. With Percona …
- 13:55 Ticket #29499 (Percona 5.5.11-rel20.2 Percona database port, drop in replacement for ...) created by
- Percona Server is an enhanced drop-in replacement for MySQL. With Percona …
- 13:39 Ticket #26202 (ntop needs configure.args --without-ssl or depends_lib port:openssl) closed by
- wontfix: Closing due to inactivity
- 13:36 Ticket #26023 (devel/crm114 version 20100106) closed by
- wontfix: Closing due to inactivity
- 13:34 Ticket #24731 (db50: new port of Berkeley DB version 5.0.21) closed by
- wontfix: Closing due to inactivity
- 13:33 Ticket #24474 (5 dovecot-antispam plugins) closed by
- fixed: Closing due to inactivity.
- 13:32 Ticket #23044 (adodb livecheck incorrect) closed by
- wontfix: Closing due to inactivity.
- 12:59 Changeset [78710] by
- skrooge: update to 0.8.95
- 12:58 Changeset [78709] by
- scribus: new version 1.4.0.rc3 (I realize this is a "devel" version, but …
- 12:55 Ticket #22993 (add sieve variant to dovecot) closed by
- wontfix: This ticket is old and dovecot2 is out now.
- 12:45 Ticket #25864 (freetype-2.4.1: No such file or directory) closed by
- worksforme
- 12:39 Changeset [78708] by
- libofx: version update 0.9.4
- 12:33 Ticket #27240 (dovecot2-sieve update to 0.2.1) closed by
- invalid: Current port version is higher at 0.2.3.
- 12:28 Ticket #26665 (mail/dovecot2-sieve) closed by
- fixed: New port added r72031
- 11:46 Ticket #24529 (libao 1.0.0_0 errors in cmus 2.3.1_1) closed by
- worksforme
- 11:44 Ticket #29498 (tortoisehg is explicitly dependent on python26) created by
- The port tortoisehg should be compatible with various python versions. …
- 10:58 Ticket #28983 (clamav @0.97_0 +universal Database load killed by signal 11) closed by
- worksforme: Replying to steve@…: > /opt/local/share/clamav/bytecode.cld …
- 10:55 Changeset [78707] by
- math/octave: revbump to fix linking to updated port hdf5-18; see #29460
- 10:48 Ticket #29487 (freetype: ftimage.h:1292:2: error: #endif without #if) reopened by
- I've added the log. Sorry about that.
- 10:40 Ticket #29497 (ocropus: new port) created by
- Attached is a Portfile and patches for the Google ocropus project …
- 09:49 Ticket #29485 (gcc44 failed to complete when doing "sudo port upgrade outdated") closed by
- worksforme
- 08:13 Changeset [78706] by
- libmilter: verison bump to 8.14.5
- 07:49 Changeset [78705] by
- milter-greylist: version bump to 4.2.7
- 06:48 Changeset [78704] by
- Quicksilver: replace system call to chmod with native Tcl command. Thanks, …
- 06:20 Changeset [78703] by
- gnome/gst-plugins-ugly upgraded version from 0.10.17 to 0.10.18
- 06:13 Ticket #19049 (port activation fails with "Error: port activate failed: Not a directory") closed by
- wontfix: Closing as per comment:3. The inconsistency issues should be fixed in the …
- 06:08 Ticket #19908 (dry-run option (-y) not honored everywhere) closed by
- fixed: No further issues reported; closing.
- 06:02 Changeset [78702] by
- gnome/gst-plugins-good upgraded version from 0.10.28 to 0.10.29
- 05:53 Ticket #29495 (DBus doesn't launch, Digikam and other apps hang) closed by
- worksforme: The correct commands to load dbus are shown by port notes dbus.
- 05:53 Changeset [78701] by
- php5-apc: whitespace changes only
- 05:48 Ticket #29496 (Segmentation fault when building python27) created by
- When I run "sudo port install python27" on my 10.6.7 iMac here's what I …
- 05:31 Changeset [78700] by
- gnome/gst-plugins-base upgraded version from 0.10.32 to 0.10.34
- 05:22 Ticket #28770 (Create py27-nltk) closed by
- fixed: Added py27-nltk in r78699, based on py26-nltk. stevenbird1, I've left you …
- 05:21 Changeset [78699] by
- py27-nltk: new port, version 2.0.1rc1, based on py26-nltk; see #28770
- 05:18 Ticket #29494 (xmlsec: update to 1.2.18) closed by
- fixed: r78698
- 05:18 Changeset [78698] by
- xmlsec: update to 1.2.18; see #29494
- 05:18 Changeset [78697] by
- Quicksilver: follow-up to r78696: bump port revision
- 05:18 Changeset [78696] by
- Quicksilver: moved built plugins inside the application bundle, set …
- 05:17 Ticket #24625 (flac-1.2.1 - fails to build for i386 arch on Snow Leopard) closed by
- worksforme: Can't reproduce. r66830 fixed this AFAICT.
- 05:10 Changeset [78695] by
- py27-opengl: add py27-tkinter dep that was missed in r74691/r74692
- 05:10 Ticket #29493 (gnupg mirror not available via HTTP) reopened by
- Replying to cal@…: > Instead of deleting the file manually, …
- 05:08 Changeset [78694] by
- xmlsec: fix livecheck
- 04:57 Ticket #29085 (py*-nltk: upgrade to 2.0.1rc1) closed by
- fixed: Updated in r78693. I also added the line "supported_archs noarch" since …
- 04:55 Ticket #24595 (Move python25's tkinter to py25-tkinter) closed by
- fixed: r78692
- 04:55 Changeset [78693] by
- py*-nltk: 2.0.1rc1 and indicate this is noarch; see #29085
- 04:55 Changeset [78692] by
- move python25's tkinter module back into py25-tkinter (#24595) and adjust …
- 04:54 Ticket #29495 (DBus doesn't launch, Digikam and other apps hang) created by
- During installation, I noticed two messages: # launchctl load …
- 04:20 Ticket #29493 (gnupg mirror not available via HTTP) closed by
- worksforme: Please delete …
- 04:07 Ticket #29494 (xmlsec: update to 1.2.18) created by
- Hi The latest ports version of xmlsec is 1.2.16. It has this bug: …
- 04:03 Ticket #29493 (gnupg mirror not available via HTTP) created by
- […]
- 04:03 Ticket #20810 (postgresql_select port request) closed by
- fixed: r78691
- 04:02 Changeset [78691] by
- add postgresql_select port (#20810), add select files to postgresql84 and …
- 02:50 Ticket #17150 (add postgresql83 to the system path) closed by
- duplicate: Duping to #28133 since it has a patch.
- 02:49 Changeset [78690] by
- version 2.5
- 01:49 Ticket #29492 (libgeoip: update to 1.4.7) created by
- Looks like libgeoip should be updated to version 1.4.7.
- 01:47 Ticket #29491 (iulib: libiulib.dylib has wrong install_name) created by
- […] The library's install_name should be …
- 01:45 Ticket #29489 (iulib Portfile and patches) closed by
- fixed: Committed in r78689 with these changes: * removed trailing whitespace …
- 01:45 Changeset [78689] by
- iulib: new port, version 0.4; see #29489
- 00:57 Ticket #29490 (libgeoip doesn't install the header files) created by
- I've installed it with port install libgeoip and I can't find the header …
after the deluge: how to tidy an overflow
By John.Rose-Oracle on Aug 04, 2010.
Use Cases
Benford’s law suggests that random-looking inputs are paradoxically rare. Bignum addition is not likely, in practice, to create a bignum longer than its inputs, so the overflow condition will be rare.
API Design Cases
Following the various subsequent comments on Joe’s blog, I will make a link for each type of API.
Throw to Slow Path
For the record, here is a fragment from Joe’s example which shows the previously mentioned technique of reporting overflow with an exception:
static int addExact(int a, int b) throws ArithmeticException { ... }
...
int z;
try {
    z = addExact(a, b);
} catch (ArithmeticException ignore) {
    ...
}
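The elided body can be filled in with the usual widening trick; this is one plausible implementation of the fragment above (my sketch, not necessarily Joe's exact code):

```java
public class ExactMath {
    // Add two ints, throwing instead of silently wrapping on overflow.
    static int addExact(int a, int b) throws ArithmeticException {
        long wide = (long) a + (long) b;   // the 64-bit sum cannot overflow
        if (wide != (int) wide)            // does it still fit in 32 bits?
            throw new ArithmeticException("integer overflow");
        return (int) wide;
    }

    public static void main(String[] args) {
        System.out.println(addExact(1, 2));        // prints 3
        try {
            addExact(Integer.MAX_VALUE, 1);        // takes the slow path
        } catch (ArithmeticException ignore) {
            System.out.println("overflow");        // prints overflow
        }
    }
}
```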
Null to Slow Path
Besides the half-measure of a slow path reached by an exception, there is another ingenious trick that Joe considers and rejects: report the exceptional condition by returning null instead of a boxed result. The effect of this is to extend the dynamic range of the operand word (32 or 64 bits) by one more code point, a sort of NaN value.
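A sketch of that rejected shape (my reconstruction; the post's own code for it is not shown in this copy) — the boxed return type gains null as the extra code point:

```java
public class NullableMath {
    // Return null instead of throwing when the sum does not fit in an int.
    static Integer addExactOrNull(int a, int b) {
        long wide = (long) a + (long) b;
        if (wide != (int) wide)
            return null;      // overflow: the extra "NaN" code point
        return (int) wide;    // autoboxed on the fast path
    }

    public static void main(String[] args) {
        System.out.println(addExactOrNull(1, 2));                  // 3
        System.out.println(addExactOrNull(Integer.MAX_VALUE, 1));  // null
    }
}
```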
Longs as Int Pairs
Another option would be to work only with 32-bit ints and forget about longs. Then the API could stuff two ints into a long as needed. The supporting code for addExact becomes:
static long intPair(int a, int b) { return ((long)a << 32) + (b & (-1L >>> 32)); }
static int lowInt(long ab)  { return (int)(ab >> 0); }
static int highInt(long ab) { return (int)(ab >> 32); }
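The helpers round-trip both words, including negative values, since the low int is masked before packing and sign-extended on extraction. A quick check using the definitions above:

```java
public class IntPairDemo {
    static long intPair(int a, int b) { return ((long) a << 32) + (b & (-1L >>> 32)); }
    static int lowInt(long ab)  { return (int) (ab >> 0); }
    static int highInt(long ab) { return (int) (ab >> 32); }

    public static void main(String[] args) {
        long ab = intPair(7, -3);          // pack two ints into one long
        System.out.println(highInt(ab));   // prints 7
        System.out.println(lowInt(ab));    // prints -3
    }
}
```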
Multiple Return Values
You may have guessed already that my favorite (future-only) realization of this would be multiple return values. The intrinsics would look something like this:
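The code sample that followed was lost from this copy of the post; a hypothetical multiple-value-return intrinsic (pseudocode, since no such Java syntax exists) might read:

```
// NOT valid Java: hypothetical multiple-return-value syntax
(int z, boolean overflow) = addExact(a, b);
if (overflow) { ... }
```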
Arrays as Value Pairs
A simple expedient is to use a two-element array to hold the two returned values. Escape analysis (EA) is present in modern optimizing compilers. (See this example with JRockit.) One could hope that the arrays would “just evaporate”. But EA patterns are fragile; a slight source code variation can unintentionally destroy them. Most ameliorations by the programmer, such as attempting to cache the array for reuse, will break the real optimization.
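A minimal rendering of the array idiom described above (my sketch of the pattern, with the second element acting as the overflow flag):

```java
public class ArrayPairMath {
    // Returns {wrappedSum, overflowFlag}; the array stands in for a pair.
    static long[] addWithFlag(int a, int b) {
        long wide = (long) a + (long) b;
        long overflow = (wide == (int) wide) ? 0 : 1;
        return new long[] { (int) wide, overflow };
    }

    public static void main(String[] args) {
        long[] r = addWithFlag(Integer.MAX_VALUE, 1);
        System.out.println(r[0] + " overflow=" + r[1]);  // wrapped sum, flag 1
    }
}
```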
Wrappers as Value Pairs
Value-like Pairs
If we add value types to the JVM, with relaxed identity semantics, a value-like LongPair would play better with EA, since pair values could be destroyed and reconstituted at will by the optimizer. Alas, that also is in the future.
Return by Reference
Return by Thread Local
Static Single Assignment
The idea of thread-locals is interesting, since they are almost structured enough to optimize as if they were registers. (This is counter-intuitive if you know that they are implemented, at least in the interpreter, via hash table lookups.) Perhaps there is a subclass of ThreadLocal that needs to be defined, with a restricted use pattern that can be optimized into simple register moves. (Such a standard restricted use pattern is static single-assignment, used in HotSpot.) If, after intrinsics are expanded, the second return value looks like a register (and nothing more) then the optimizer has full freedom to boil everything down to a few instructions.
Return to an Engine Field
So far we have supposed that the arithmetic methods are all static, and this is (IMO) their natural form. But it could be argued that arithmetic should be done by an explicit ArithmeticEngine object which somehow encapsulates all that goodness of those natural numbers. If you can swallow that, there is an unexpected side benefit: The engine is an acceptable (I cannot bring myself to say natural) place to store the second return value from intrinsics that must return one.
With some luck in EA, the engine field (highWord above) might get scalarized to a register, and then participate in SSA-type optimizations.
Continue in a Lambda
With closures (coming to a JDK7 near you...) you could also drop to continuation-passing style (CPS) to receive the two result words from the intrinsic:
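The sample that followed was dropped from this copy; with today's syntax the CPS shape might look like this (my illustration — the post predates finalized lambda syntax):

```java
public class CpsMath {
    interface IntPairSink { void accept(int low, int high); }

    // Hand the two result words to a continuation instead of returning them.
    static void addExact(int a, int b, IntPairSink k) {
        long wide = (long) a + (long) b;
        k.accept((int) wide, (int) (wide >> 32));  // low word, high word
    }

    public static void main(String[] args) {
        addExact(Integer.MAX_VALUE, 1,
                 (low, high) -> System.out.println(low + " / high " + high));
    }
}
```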
Optimizing this down to an instruction or two requires mature closure optimization, something we’ll surely get as closures mature. On the other hand, we will surely lack such optimizations at first.
Those are a lot of choices for API design! I encourage you to add your thoughts to Joe’s blog.
(I'm writing this here, since it's not directly relevant to the JVM.)
Pure, a dynamically typed eager functional language based on generalized term rewriting, is the successor to Q, also a dynamically typed etc. In Q, which was interpreted directly, numbers could only be integers (implemented as GMP mpz_t objects) or doubles.
Pure, however, is JIT-compiled using LLVM, and it has three user-visible numeric datatypes: ints, bigints (still mpz_t objects), and doubles. Ints are known directly to LLVM, and have signed 32-bit wrapping semantics; bigints aren't and don't. The two types are essentially independent, except that ints are widened to bigints in int/bigint mixed-mode arithmetic, just as they are widened to doubles in int/double mixed mode. The three numeric types have separate literal forms (suffixed L for a bigint), though a too-big literal is quietly interpreted as a bigint literal.
All this puts the onus on the user of controlling speed vs. accuracy, but in a way which is fairly easy to understand: there are only two types of integers, and dynamic typing means that you don't have to distinguish them most of the time. When you do want to, you can attach a static type declaration to an argument, providing overloading.
"What about 64-bit machine ints?", I asked at the time. Answer: they make sense on (rare) ILP64 machines, or where you have fixnums, but when using LLVM on LP64 and LLP64 machines the boxed 32-bit int still rules. The type "long" is known only to the C FFI, and represents 32 or 64 bits according to the machine ABI; the result therefore becomes an int or a bigint under the covers. The same is done with the other C types that aren't native to Pure.
Posted by John Cowan on August 04, 2010 at 08:59 AM PDT #
The computational engine would seem the best solution to me:
1. You can easily inject different computational engines with different performance characteristics (I am assuming ComputationalEngine is an interface).
2. The ComputationEngine instance could be a ThreadLocal thus making the code thread safe.
Posted by Howard Lovatt on August 04, 2010 at 09:13 AM PDT # | https://blogs.oracle.com/jrose/entry/after_the_deluge | CC-MAIN-2016-50 | refinedweb | 1,115 | 54.02 |
// ATMega8 LCD Driver
//
// (C) 2009 Radu Motisan , [email protected]
// All rights reserved.
//
// test.c: sample test for the HD44780 LCD functions
// For more details visit the website.

#include "lcd.h"

int main()
{
    int i=0;
    lcd_init();
    while(1)
    {
        i = (i+1)%10;
        //lcd_clrscr();
        lcd_home();
        lcd_string2("Hello World! %d\npocketmagic.net",i);
        for (int i=0;i<10;i++) //some delay
            _auxDelay(1000000);
    }
    return 0;
}
#include "at8LCD.h"

int main()
{
    LCD_init();
    LCD_write('A');
}
Well, it shouldn't say "LCD.c:5: undefined reference to `LCD_init'" - that would mean you have a file LCD.c with the same function. The files I sent were at8LCD.c and .h.
#include "at8LCD.h"

int main()
{
    LCD_init();
    LCD_write('A');
}
#include "at8LCD.h"

int main()
{
    while(1){
        LCD_init();
        LCD_write('A');
    }
    return 0;
}
dax_dtrace - DTrace probes for libdax
The libdax library defines the following DTrace SDT probes. The provider name is dax.
This probe is called when any of the dax_post_xxxx() functions post a request to DAX. If a function posts multiple DAX requests, this probe is called for each request.
The arguments for this probe are as follows:
Context passed to the post function
Queue passed to the post function
udata passed to the post function
This probe is called when a non-post DAX function completes.
The arguments for this probe are as follows:
A filter word that contains a small amount of data describing the command. You can use it in a filter expression, without incurring the cost of a copyin. The filter word has the following format:
Bits:   63-32      31-24   23-16   15-0
Field:  reserved   major   minor   cmd
cmd is the DAX command that executed, with values from dax_cmd_t. You can access this sub-field by using the DAX_DFILTER_CMD(filter) convenience macro.
major is the major version of the API used by the ctx that generated this probe. You can access this sub-field by using the DAX_DFILTER_MAJOR(filter) convenience macro.
minor is the minor version of the API used by the ctx that generated this probe. You can access this sub-field by using the DAX_DFILTER_MINOR(filter) convenience macro.
Structure that describes the operation
Structure that describes the result of the operation
Structure that describes the performance events that occurred during this operation
This probe is called when a DAX post function completes asynchronously.
The argument types and definitions are identical to the dax-execute probe.
This structure contains the arguments passed to the libdax function that called the probe. The description of the arguments are:
The version of the API used by the ctx that generated this probe
DAX context passed to the libdax function that called the probe
The flags argument passed to the function
The udata argument passed to the post function, or NULL for a non-post function. If udata is unique per request, you can use it in the script as an index into an associative array to track information related to the request, such as a starting timestamp.
The queue argument passed to the post function, or NULL for a non-post function
Indicates the type of the command that executed. The following table shows which fields are valid for each value of cmd.
This structure contains the performance events. The description of the performance events is as follows:
Number of physical pages that were crossed during the execution of the function. Each crossing causes an additional command to be submitted to DAX.
You can reduce crossings by mapping buffers with larger pages. For more information, see the MC_HAT_ADVISE option of memcntl(2).
Number of times command was re-submitted because of transient resource contention. This is usually 0, but may increase because of high system load.
Has the value 1 if some or all of the command failed to run on DAX and was emulated in software, else 0. Emulation of commands occurs in conditions such as transient resource contention, to guarantee completion.
Number of times this command was re-submitted because a buffer address was not translatable, else 0. This is usually 0, but may increase because of high system load.
Has the value 1 if the command uses an intermediate source or destination buffer. This can occur in page crossing commands and commands that are long because of DAX alignment restrictions.
Has the value 1 if the command unzipped the src into a temporary buffer. This can occur in page crossing commands and commands that are long because of DAX alignment restrictions.
Number of times a long command was split at the maximum size supported by DAX and submitted as multiple hardware commands.
Splitting of a command happens because its src vector or its dst vector is too long. The max_src_len and max_dst_len values in the dax_get_props() function define the length of the src and dst vectors.
Splitting of a command does not affect its functionality, but sometimes reduces its efficiency when compared to explicitly passing multiple shorter vectors. If the copy or unzip event is non-zero, you can improve the performance by passing shorter vectors.
Number of cycles DAX spent executing the command, excluding queueing delay and command fetching time.
DAX frequency, in MHz.
The following example shows how to run the dax.d D script that displays various events for each DAX command listed in the script.
The dax.d D script is as follows.
#include <dax.h>

#pragma D option quiet

#define PRINT_COMMAND(i) \
    printf("%10s %7d %9d %9d %5d %5d %5d %5d %5d %5d %3d\n",\
        Cmd[i], Count[i], Elements[i], Events[i].cycles, \
        Events[i].page, Events[i].split, Events[i].unzip, \
        Events[i].copy, Events[i].retry, Events[i].nomap, \
        Events[i].emulate)

#define END_COMMAND(cmd) \
    dtrace:::END \
    / Count[cmd] != 0 / \
    { \
        PRINT_COMMAND(cmd); \
    }

dax_perf_event_t Events[10];
long Elements[10];
long Count[10];

dtrace:::BEGIN
{
    Cmd[DAX_CMD_SCAN_VALUE] = "scan_value";
    Cmd[DAX_CMD_SCAN_RANGE] = "scan_range";
    Cmd[DAX_CMD_TRANSLATE] = "translate";
    Cmd[DAX_CMD_SELECT] = "select";
    Cmd[DAX_CMD_EXTRACT] = "extract";
    Cmd[DAX_CMD_COPY] = "copy";
    Cmd[DAX_CMD_FILL] = "fill";
    Cmd[DAX_CMD_AND] = "and";
    Cmd[DAX_CMD_OR] = "or";
    Cmd[DAX_CMD_XOR] = "xor";
}

dax$target:::dax-execute,
dax$target:::dax-poll
{
    cmd = DAX_DFILTER_CMD(arg0);
    req = (dax_request_t *) copyin(arg1, sizeof(dax_request_t));
    res = (dax_result_t *) copyin(arg2, sizeof(dax_result_t));
    ev = (dax_perf_event_t *) copyin(arg3, sizeof(dax_perf_event_t));
    Count[cmd]++;
    elements = (cmd == DAX_CMD_COPY ? req->arg.copy.count :
               (cmd == DAX_CMD_FILL ? req->arg.fill.count :
                req->src.elements));
    Elements[cmd] += elements;
    Events[cmd].frequency = ev->frequency;
    Events[cmd].cycles += ev->cycles;
    Events[cmd].page += ev->page;
    Events[cmd].emulate += ev->emulate;
    Events[cmd].nomap += ev->nomap;
    Events[cmd].copy += ev->copy;
    Events[cmd].retry += ev->retry;
    Events[cmd].split += ev->split;
    Events[cmd].unzip += ev->unzip;
}

dtrace:::END
{
    printf("%10s %7s %9s %9s %5s %5s %5s %5s %5s %5s %3s\n",
        "command", "count", "elems", "cycles", "cross", "split",
        "unzip", "copy", "retry", "nomap", "em");
}

END_COMMAND(DAX_CMD_SCAN_VALUE)
END_COMMAND(DAX_CMD_SCAN_RANGE)
END_COMMAND(DAX_CMD_TRANSLATE)
END_COMMAND(DAX_CMD_SELECT)
END_COMMAND(DAX_CMD_EXTRACT)
END_COMMAND(DAX_CMD_COPY)
END_COMMAND(DAX_CMD_FILL)
END_COMMAND(DAX_CMD_AND)
END_COMMAND(DAX_CMD_OR)
END_COMMAND(DAX_CMD_XOR)
To run the dax.d D script, use the following commands:
# DAX_DEBUG_OPTIONS=perf; export DAX_DEBUG_OPTIONS
# dtrace -Cs dax.d -I$inc -c a.out
See attributes(5) for descriptions of the following attributes:
libdax(3LIB), dax_get_props(3DAX) | https://docs.oracle.com/cd/E86824_01/html/E84973/dax-dtrace-3dax.html | CC-MAIN-2021-39 | refinedweb | 1,044 | 56.55 |
Red Hat Bugzilla – Bug 18039
g++ breaks glibc
Last modified: 2008-05-01 11:37:59 EDT
Compiling
extern "C" {
void exit (int);
};
#include <stdlib.h>
with g++ yields:
In file included from foo.cpp:6:
/usr/include/stdlib.h:578: declaration of `void exit (int) throw ()'
throws different exceptions
foo.cpp:2: than previous declaration `void exit (int)'
This breaks autoconf scripts which include stdlib.h in an AC_TRY_RUN when the language is set to c++.
Seems to be the same as described at
...tells a lot about what kind of testing RedHat does before launching a new
product. This is VERY frustrating.
This is not a bug. void exit(int) is a redefinition because the exit() prototype in stdlib.h carries an exception specification.
As bero mentioned, this really is not a bug, and it is good that current g++ is stricter about user bugs than it used to be. Write correct C++ code and you should get rid of this warning. If current GNU autoconf still generates this code it should be fixed; I will check it out.
using UnityEngine;

public class PlayerMovement : MonoBehaviour
{
    private Vector3 movementVector;
    private CharacterController characterController;

    private float movementSpeed = 8;
    private float jumpPower = 15;
    private float gravity = 40;

    void Start()
    {
        characterController = GetComponent<CharacterController>();
    }
}
Welcome to Microsoft BizSpark simply select the right program for your business and get started today!
A full-featured IDE - free. Start coding the app of your dreams for Windows, Android, and iOS.
Download Everything you need, all in one place for FREE!
Community includes:
- Tools: Designers, editors, debuggers, profilers - all packaged up in a single environment.
- Languages: Code in C++, Python, HTML5, JavaScript, and of course C#, VB, and F#.
- Web: Extensive web support for ASP.NET, Node.js, and JavaScript.
- Devices: Tools for Apache Cordova and Unity to reach even more platforms.
You can also read his blog with the complete walkthrough documented at
There are more training resources for game developers at Microsoft Virtual Academy
Guest blog by Tim Stoddard, Staffordshire University @Gamepopper
Standing out above the crowd
My discovery into games development came from an opportunity at A-Levels to make a project on anything, so I decided to make a game using GameMaker 8. I decided to pursue making computer games in University, when I transferred from Computer Science to Computer Games Programming. After making a few games for the Windows Store, I decided I wanted to make one fairly large indie game, and being a student is the best time to do it.
So for the duration of a year, I was working on a game part time, during my degree and placement, and mostly on my own.
So what is the Game?
The game was Secret of Escape, a top-down stealth action game developed in Construct2. The game has since been released on Desura, Itch.io and IndieGameStand, as well as being shown at several game events such as Launch Conference, Norwich Games Expo, Insomnia52 and London Gaming & Anime Con. But out of all the moments I’ve had from making a game, the one that surprised me the most was Secret of Escape being nominated for the TIGA Games Industry Award for Best Student Game.
I submitted my game to the awards because I figured it might be a good chance, and the process of entering was straightforward. All you needed to do was enter your details, choose which category you wanted to submitted to, and upload both a copy of your game and a video and give a reason why you should win the award. Then in mid-October I got an email which showed my game as shortlisted for Best Student Game, needless to say I was excited.
What is your advice?
If there is any word of advice I would give here, it would be this. If you are ever nominated for an award, go to the award ceremony. Not only is it a moment to be proud of an achievement in your career, and is common courtesy to accept an award in person if you win, but an awards ceremony is a big opportunity to network and speak to professionals from all parts of the industry.
I had the opportunity of speaking with people from indie developers such as Futurlab (Velocity 2X), Evil Twin Artworks (Victory At Sea), Sumo Digital, all the way up to large studios such as Rebellion (Sniper Elite III) and Bullfrog Productions (Fable Anniversary). Not to mention people who work in Recruitment (Aardvark Swift and Amiqus), Accounting, and Quality Assurance.
Of course there was the ceremony, with the nominees of each award being displayed on large monitors, some with footage of the games being shown. I got to see my game being shown during the Best Student Game, and cheer at excitement for seeing it there. I also cheered when Staffordshire University was shown during the Best Education Initiative award. I was also sitting with the developers at Bullfrog, so I gave my support for them too.
Sadly I didn’t win, but being nominated and going to the award ceremony is definitely one of the highlights as a game developer. Hopefully one day I’ll return and take a trophy home with me, so I guess it’s time to start working now.
Tim, excellent stuff, and great to see you making the right steps!
So if you're a student and interested in standing out above the crowd, go and build a game for the players and not for yourself!!
Enter the gaming competition or individual competition like and
One of the great things about Windows Azure is that it is platform agnostic.
So if you're building games for Android, check out the following videos by Dave Douglas, a fellow Microsoft Technical Evangelist, on building an Azure backend for your game.
Part 1
Create android app with cloud backend (Part 1: Azure Mobile Service)
Part 2
Create android app with cloud backend (Part 2: Table Permissions and OAuth)
Part 3
If you're interested in trying out Azure, apply online at or register for our special game dev offer
Introduction
Implementing authentication (i.e. user login) in a system sounds simple. Just compare username and password with what is stored in the database, right?
Actually, it’s not nearly as simple as it looks to the user logging in. Storing and checking user credentials is a process with different levels of security (and I’m no expert, so there may be more):
- Store passwords in plain text. Very bad idea.
- Hash the passwords. They won’t be easily readable, but can be cracked by rainbow tables.
- Salt and hash the passwords. Protects against rainbow tables as well, but still vulnerable to high-performance brute force attacks.
- Use a slow hash algorithm to limit the effectiveness of brute force attacks.
My article “Securing Passwords by Salting and Hashing” covers the first three items. This article will deal with the fourth item, and introduce BCrypt, which gives you all four.
The source code for this article is available at the Gigi Labs BitBucket repository.
Authentication with BCrypt
Before we discuss the merits of using BCrypt, let’s see how to use it.
You first need to include the BCrypt library in your project. You can do this via NuGet:
The functionality we need to use is all in a class called BCryptHelper, which you get access to by including the following namespace:
using DevOne.Security.Cryptography.BCrypt;
With that, it is very easy to generate a salt, hash a password with it, and validate the password against its salted hashed version:
static void Main(string[] args)
{
    Console.Title = "BCryptTest";

    string password = "My$ecureP@$sW0Rd";
    string salt = BCryptHelper.GenerateSalt();
    string hashedPassword = BCryptHelper.HashPassword(password, salt);
    bool valid = BCryptHelper.CheckPassword(password, hashedPassword);

    Console.WriteLine("Salt: {0}", salt);
    Console.WriteLine("Hashed Password: {0}", hashedPassword);
    Console.WriteLine("Valid: {0}", valid);

    Console.ReadLine();
}
Here’s the output of this little program:
The BCrypt hashed password is typically a 60-byte string. As you can see, the salt is actually embedded within the hashed password (this StackOverflow answer explains more about how this works). This means you don’t need to store the salt separately.
Why BCrypt?
The functionality we have seen in the previous section doesn’t really give us anything more than hashing and salting with any other reasonably strong hash function. So why use BCrypt?
In many programming situations, writing code that executes fast is a good thing. Authentication is not one of those. If the algorithms you use to authenticate your users are fast, that means that brute force attacks may attempt large numbers of combinations per second, more so with modern hardware and GPUs.
Algorithms such as PBKDF2 and BCrypt differ from traditional hash algorithms such as MD5 or SHA256 in that they take a work factor as an input. That is, you can decide how fast or slow the algorithm runs. So if, for instance, you set up your algorithm to take 1 second to validate a password, that greatly limits the possibilities of brute force attacks when compared to algorithms that can run several hundreds or thousands of times per second. Read more about why BCrypt is badass at this Security StackExchange answer.
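As an aside (not part of the original C# walkthrough), Python's standard library exposes PBKDF2, where the iteration count plays the same role as a work factor: raising it makes every guess proportionally more expensive.

```python
import hashlib
import time

def slow_hash(password: bytes, salt: bytes, iterations: int) -> bytes:
    # PBKDF2 repeats an HMAC `iterations` times; more iterations = slower.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

if __name__ == "__main__":
    password, salt = b"My$ecureP@$sW0Rd", b"a-16-byte-salt!!"
    for iterations in (1_000, 100_000):
        start = time.perf_counter()
        digest = slow_hash(password, salt, iterations)
        elapsed = time.perf_counter() - start
        print(f"{iterations:>7} iterations: {elapsed:.4f}s, {digest.hex()[:16]}...")
```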
In BCrypt, the GenerateSalt() method takes an optional logRounds parameter that affects the performance of subsequent hash operations. It has a default value of 10 and can be set to a number between 4 and 31. The algorithm will run 2 to the power of logRounds times, making it run exponentially slower. To get an idea of this, I wrote some simple benchmarking code with the help of my trusted ScopedTimer class (from “Scope Bound Resource Management in C#“):
static void GenerateSaltBenchmarks(string password)
{
    for (int i = 10; i < 16; i++)
    {
        using (var scopedTimer = new ScopedTimer($"GenerateSalt({i})"))
        {
            string salt = BCryptHelper.GenerateSalt(i);
            string hashedPassword = BCryptHelper.HashPassword(password, salt);
        }
    }
}
Here are the results:
Summary
Use BCrypt to securely store and validate your passwords. It’s easy to use, easy to store, and hard to break. Also importantly, you can make it as slow as you like. | https://gigi.nullneuron.net/gigilabs/2016/05/08/ | CC-MAIN-2021-31 | refinedweb | 669 | 65.73 |
It just took me about 30 mins to figure it out, so here’s how to install python plugins in KiCad 5.0 on a Mac.
- Make sure your build of KiCad has scripting enabled. It looks like fresh downloads have it by default, but it doesn’t hurt to check. Go KiCad → About KiCad → Show Version Info and make sure that all of the KICAD_SCRIPTING_ flags are set to ON.
- Find pcbnew’s plugin search path list. Open pcbnew, and open Tools → Scripting Console. Run import pcbnew; print pcbnew.PLUGIN_DIRECTORIES_SEARCH and you’ll see a list of folders which pcbnew will search for plugins
- Move your plugin files/folders to one of these locations
- In pcbnew, Tools → External Plugins… → Refresh Plugins. Your Tools → External Plugins menu should fill up with plugins. | https://waterpigs.co.uk/notes/4z2NJB/ | CC-MAIN-2021-31 | refinedweb | 130 | 75.91 |
#include <math/vector2d.h>
#include <geometry/direction45.h>
#include <geometry/seg.h>
#include <geometry/shape.h>
#include <geometry/shape_line_chain.h>
#include "pns_item.h"
#include "pns_via.h"
Go to the source code of this file.
Class LINE.
Represents a track on a PCB, connecting two non-trivial joints (that is, vias, pads, junctions between multiple traces, or two traces of different widths, and combinations of these). PNS_LINEs are NOT stored in the model (NODE). Instead, they are assembled on-the-fly, based on a via/pad/segment that belongs to/starts/ends them.
PNS_LINEs can be either loose (consisting of segments that do not belong to any NODE) or owned (with segments taken from a NODE) - these are returned by NODE::AssembleLine and friends.
A LINE may have a VIA attached at its end (i.e. the last point) - this is used by via dragging/force propagation stuff.
Definition at line 58 of file pns_line.h. | http://docs.kicad-pcb.org/doxygen/pns__line_8h.html | CC-MAIN-2019-26 | refinedweb | 154 | 61.93 |
Batch API for Blob Storage
Updated: October 23, 2019
The public preview of Batch API for Azure Blob Storage is now available to simplify development of applications that make several concurrent API requests to Blob storage. Batch API reduces the number of connections a client has to open and manage in order to distribute the requests, and helps improve application performance. The Blob Batch REST API allows multiple API calls to be embedded into a single HTTP request. Each individual subrequest in a batch request will be counted as one transaction, and the batch REST request itself will also be counted as one transaction; so a batch of 100 requests yields a total of 101 transactions. The response returned by the server for a batch request contains the results for each sub-request in the batch. The batch request and response leverage the OData batch processing specification.
Currently the API supports two types of sub-requests: SetBlobTier for block blobs and DeleteBlob, with more request types to follow. This API is available starting with version 2018-11-09. Additionally, the Batch REST API is complemented with support in the .NET, Java and Python client SDKs, with more SDKs to follow. Batch API is available on standard General Purpose v2 storage accounts for block blobs; it is not yet available for ADLS Gen2, however (storage accounts with hierarchical namespace enabled). Batch API works just like any other Storage REST API and supports all existing Storage authentication and authorization schemes. Batch delete will trigger any Event Grid subscriptions registered for delete.
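For a rough idea of the wire format (an illustrative sketch only; the boundary name, version and headers below are placeholders, so consult the Blob Batch REST reference for the exact shape), a batch is a multipart/mixed body in which each part wraps one ordinary sub-request:

```
POST https://myaccount.blob.core.windows.net/?comp=batch HTTP/1.1
Content-Type: multipart/mixed; boundary=batch_example

--batch_example
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 0

DELETE /mycontainer/blob1 HTTP/1.1
Authorization: SharedKey myaccount:<signature>

--batch_example
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 1

DELETE /mycontainer/blob2 HTTP/1.1
Authorization: SharedKey myaccount:<signature>

--batch_example--
```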
We look forward to your feedback and are excited to hear about your experiences. Please send any feedback to azurestoragefeedback@microsoft.com.
08 September 2011 14:18 [Source: ICIS news]
HOUSTON (ICIS)--Gevo has agreed a strategic marketing and offtake alliance for its isobutanol (IBA) with Mansfield Oil, the US-based renewable chemical firm said on Thursday.
Gevo said under a five-year contract,
Mansfield Oil, which is based in Gainesville, Georgia, markets and distributes fuel to commercial customers across all 50
“They will help us manage the supply chain and logistics required to efficiently move our product to market,” he added.
Financial or volume details were not disclosed.
Earlier this month, Gevo entered into an offtake agreement with Sasol. Under that deal, Sasol will sell Gevo’s IBA as a chemical intermediate to customers in the paint, ink and coating markets, among others.
Here is my assignment. I can compile it and run it, but the numbers don't seem to be correct. Could someone take a look and see if my function is wrong? Thanks,
Bryan
Assignment:
If a principal amount P, for which the interest is compounded Q times per year, is placed in a savings account, then the amount of money in the account (the balance) after N years is given by the following formula, where I is the annual interest rate as a floating-point number:
balance = P * (1 + I/Q)^(N*Q)
Write a C++ program that inputs the values for P, I, Q, and N and outputs the balance for each year up through year N. Use a value-returning function to compute the balance. Your program should prompt the user appropriately, label the output values, and have a good style.
Code:
#include <iostream>
#include <cmath>

void Balance(float, float, int, int, float&);

using namespace std;

int main()
{
    float P, I, B;
    int Q, i, N;

    cout << "Enter the principal: ";
    cin >> P;
    cout << "Enter the annual interest rate: ";
    cin >> I;
    cout << "Enter the number of times money compounded per year: ";
    cin >> Q;
    cout << "Enter the number of years: ";
    cin >> N;

    for (i = 1; i <= N; i++)
    {
        Balance(P, I, Q, i, B);
        cout << "The balance after " << i << " year(s), is: " << B << "." << endl;
    }
    return 0;
}

void Balance(float P, float I, int Q, int N, float& B)
{
    B = P * pow((1.0 + I / Q), N * Q);
}
hi there,
i have been doing java for a few months now, and i have been trying to solve small and simple programs first, so that i understand java right from the beginning.
im currently solving a very simple program which calculates the wall cover estimate of a room. i would appreciate it a lot if someone could help me with this simple program, as i have been struggling with the errors in the calculations and the whole structure of the program.
this is the work which i have done so far.
thanks alot
Code :
// calculating area of wall covering of a room

import javax.swing.JOptionPane;

public class Room
{
    // First method: finds the area of the room to be covered
    public static void areaOfRoom(double length, double height, double width)
    {
        double area = 2 * (length * height) + 2 * (width * height);

        if (length < 0)
            JOptionPane.showMessageDialog(null, "Length can not be negative");
        else if (height < 0)
            JOptionPane.showMessageDialog(null, "Height can not be negative");
        else if (width < 0)
            JOptionPane.showMessageDialog(null, "Width can not be negative");
        else
            JOptionPane.showMessageDialog(null, "The area of the room is " + area);
    }

    // Main method to input numbers from the user
    public static void main(String[] args)
    {
        String input = JOptionPane.showInputDialog("Please type in 1 to find the area of the room");
        int number = Integer.parseInt(input);
        double l, h, w;

        // This section runs the areaOfRoom method
        if (number == 1)
        {
            input = JOptionPane.showInputDialog("Please type in the length of the room");
            l = Double.parseDouble(input);
            input = JOptionPane.showInputDialog("Please type in the height of the room");
            h = Double.parseDouble(input);
            input = JOptionPane.showInputDialog("Please type in the width of the room");
            w = Double.parseDouble(input);

            areaOfRoom(l, h, w);
        }
    }
}
With methods, we don't have to write the same code over and over again. Methods also allow easy code modification and improve readability, since we can simply add or remove chunks of code. A method is executed only when we call or invoke it. The main() method is the most significant method in Java.
Assume you need to make a program to draw a circle and color it. You can split the task into two methods:
- a method for drawing a circle
- a method for coloring the circle
Values or arguments can be inserted inside methods, and they will only be executed when the method is called. Functions are another name for them. The following are the most common usage of methods in Java:
- It allows for code reuse (define once and use multiple times)
- An extensive program can be broken down into smaller code parts.
- It improves the readability of code.
Methods in Java
By breaking down a complex problem into smaller pieces, you can create a program that is easier to comprehend and reuse. There are two sorts of methods in Java: user-defined methods and standard library methods.
User-defined Methods: We can develop our method based on our needs.
Standard Library Methods: These are Java’s built-in methods that can be used.
Declaration of the Method
Method properties such as visibility, return type, name, and parameters are all stated in the method declaration. As seen in the following diagram, it consists of six components known as method headers.
(Access Specifier) (Return Type) (Method Name) (Parameter List) --> Method Header { // Method Body }
For example:
public int sumValues(int x, int y){ // method body }
Where sumValues(int x, int y) is the Method signature
Method Signature: A method signature is a string that identifies a method. It’s included in the method declaration. It contains the method name as well as a list of parameters.
Access Specifier: The method’s access specifier, also known as a modifier, determines the method’s access type. It specifies the method’s visibility. There are four different types of access specifiers in Java:
- Public: When we utilize the public specifier in our application, all classes can access the method.
- Private: The method is only accessible in the classes declared when using a private access specifier.
- Protected: The method is accessible within the same package or subclasses in a different package when using the protected access specifier.
- Default: When no access specifier is specified in the method declaration, Java uses the default access specifier. It can only be seen from the same package.
Return Type: The return type of a method is the data type it returns. It could be a primitive data type, an object, a collection, or void. The void keyword is used when a method does not return anything.
Method Name: A unique name that identifies the method.
It should describe the method's functionality. If we're making a method for subtracting two numbers, the method's name should be subtraction(). The name of a method is used to call it.
Parameter List: The parameter list is a collection of parameters separated by a comma and wrapped in parentheses. It specifies the data type as well as the name of the variable. Leave the parenthesis blank if the method has no parameters.
Method Body: The method declaration includes a section called the method body. It contains all of the actions that must be completed. Further, it is protected by a pair of curly braces.
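As a small illustration of how the access specifiers above play out (the class and method names here are our own, not from any library), consider two classes in the same package: public and default members of one class are reachable from the other, while a private member is not:

```java
// AccessDemo.java -- both classes live in the same (default) package.
class Counter {
    private int count = 0;          // private: visible only inside Counter

    public void increment() {       // public: visible everywhere
        count++;
    }

    int value() {                   // default access: visible within the package
        return count;
    }
}

public class AccessDemo {
    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        // c.count would not compile here: count is private to Counter.
        System.out.println("count = " + c.value());  // prints: count = 2
    }
}
```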
Choosing a Method Name
When naming a method, keep in mind that it must be a verb and begin with a lowercase letter. If there are more than two words in the method name, the first must be a verb, followed by an adjective or noun. Except for the first word, the initial letter of each word in the multi-word method name must be in uppercase. Consider the following scenario:
- sum(), area() are two single-word methods
- areaOfCircle(), stringComparision() are two multi-word methods
It’s also conceivable for a method to have the same name as another method in the same class; this is called method overloading.
User-defined methods
Let’s start by looking at user-defined methods. To declare a method, use the following syntax:
returnType methodName() { // method body }
As an example,
int sumValues() { // code }
The method above is named sumValues(), and its return type is int. This is the minimal syntax for declaring a method; the complete syntax is
modifier static returnType nameOfMethod (parameter1, parameter2, ...) { // method body }
Here,
modifier – It specifies the method’s access kinds, such as public, private, etc. Visit Java Access Specifier for further information.
static – if we use the static keyword, the method can be called without creating an object.
The sqrt() method in the standard Math class, for example, is static. As a result, we may call Math.sqrt() without first creating a Math instance. The values parameter1, parameter2, and so on are the parameters passed to the method. A method can take any number of parameters.
Method call in Java
We’ve declared a method called sumValues() in the previous example. To use the method, we must first call it. The sumValues() method can be called in the following way.
// calls the method
sumValues();

Example: Using Methods in Java

class Codeunderscored {

  // create a method
  public int sumValues(int num_1, int num_2) {
    int sumVal = num_1 + num_2;
    // return the result
    return sumVal;
  }

  public static void main(String[] args) {
    int num1 = 67;
    int num2 = 33;

    // create an object of Codeunderscored
    Codeunderscored code = new Codeunderscored();

    // calling the method
    int resultVal = code.sumValues(num1, num2);
    System.out.println("The resultant sum value is: " + resultVal);
  }
}
We defined a method called sumValues() in the previous example. The num_1 and num_2 parameters are used in the method. Take note of the line,
int resultVal = code.sumValues (num1, num2);
The method was invoked by passing two arguments, num1 and num2. Because the method returns a value, we've stored it in the resultVal variable. It's worth noting that the method isn't static; as a result, we're using an object of the class to invoke it.
The keyword void
We can use the void keyword to create methods that don’t return a value. In the following example, we’ll look at a void method called demoVoid. It is a void method, which means it returns nothing. A statement must be used to call a void method, such as demoVoid(98);. As illustrated in the following example, it is a Java statement that concludes with a semicolon.
public class Codeunderscored {

  public static void main(String[] args) {
    demoVoid(98);
  }

  public static void demoVoid(double points) {
    if (points >= 100) {
      System.out.println("Grade:A");
    } else if (points >= 80) {
      System.out.println("Grade:B");
    } else {
      System.out.println("Grade:C");
    }
  }
}
Using Values to Pass Parameters
When calling a method, you must pass arguments in the same order as the corresponding parameters in the method specification. In general, parameters can be passed in two ways: by value or by reference.
Passing parameters by value means the value of each argument is copied into the corresponding parameter. The program below demonstrates passing parameters by value: even after the method runs, the values of the caller's arguments stay unchanged.
public class Codeunderscored {

  public static void main(String[] args) {
    int x = 20;
    int y = 62;
    System.out.println("Items initial order, x = " + x + " and y = " + y);

    // Invoking the swap method
    swapValues(x, y);

    System.out.println("\n**Order of items, before and after swapping values**:");
    System.out.println("Items after swapping, x = " + x + " and y = " + y);
  }

  public static void swapValues(int x, int y) {
    System.out.println("Items prior to swapping (inside), x = " + x + " y = " + y);

    // Swap x with y
    int temp = x;
    x = y;
    y = temp;

    System.out.println("Items post swapping (inside), x = " + x + " y = " + y);
  }
}
Overloading of Methods
Method overloading occurs when a class contains two or more methods with the same name but distinct parameters. It’s not the same as overriding. When a method is overridden, it has the same name, type, number of parameters, etc.
Consider the example of finding the smallest of two numbers. Say we need this both for int values and for double values. To handle both, we create two or more methods with the same name but different parameters; this is the idea behind overloading.
The following example clarifies the situation:
public class Codeunderscored {

  public static void main(String[] args) {
    int x = 23;
    int y = 38;
    double numOne = 17.3;
    double numTwo = 29.4;

    // invoking the same method name with different parameters
    int resultOne = smallestValue(x, y);
    double resultTwo = smallestValue(numOne, numTwo);

    System.out.println("The Minimum number is: = " + resultOne);
    System.out.println("The Minimum number is: = " + resultTwo);
  }

  // for integers
  public static int smallestValue(int numOne, int numTwo) {
    int smallestVal;
    if (numOne > numTwo)
      smallestVal = numTwo;
    else
      smallestVal = numOne;
    return smallestVal;
  }

  // for doubles
  public static double smallestValue(double numOne, double numTwo) {
    double smallestVal;
    if (numOne > numTwo)
      smallestVal = numTwo;
    else
      smallestVal = numOne;
    return smallestVal;
  }
}
Overloading methods improve the readability of a program. Two methods with the same name but different parameters are presented here. The result is the lowest number from the integer and double kinds.
Using Arguments on the Command Line
When you execute a program, you may want to feed some information into it. It is performed by invoking main() with command-line arguments.
When a program is run, a command-line argument is information that appears after the program’s name on the command line. It’s simple to retrieve command-line parameters from within a Java program. They’re saved in the String array supplied to main() as strings. The following program displays all of the command-line arguments that it is invoked.
public class Codeunderscored {

  public static void main(String args[]) {
    for (int i = 0; i < args.length; i++) {
      System.out.println("args[" + i + "]: " + args[i]);
    }
  }
}
‘This’ keyword
this is a Java keyword used to refer to the current object inside an instance method or constructor. You can use this to refer to class members like constructors, variables, and methods. It's worth noting that the keyword this is only used within instance methods and constructors.
In general, the term this refers to:
- Within a constructor or a method, distinguish instance variables from local variables if their names are the same.
class Employee {
  int age;

  Employee(int age) {
    this.age = age;
  }
}
- In a class, call one sort of constructor (parametrized constructor or default constructor) from another. Explicit constructor invocation is what it’s called.
class Employee {
  int age;

  Employee() {
    this(20);
  }

  Employee(int age) {
    this.age = age;
  }
}
The this keyword is used to access the class members in the following example. Copy and paste the program below into a file called Codeunderscored.java.
public class Codeunderscored {

  // Instance variable num
  int num = 10;

  Codeunderscored() {
    System.out.println("This is a program that uses the keyword this as an example.");
  }

  Codeunderscored(int num) {
    // Invoking the default constructor
    this();

    // Assigning the local variable num to the instance variable num
    this.num = num;
  }

  public void greet() {
    System.out.println("Hello and welcome to Codeunderscored.com.");
  }

  public void print() {
    // Declaring the local variable num
    int num = 20;

    // Printing the local variable
    System.out.println("Value of the local variable num: " + num);

    // Printing the instance variable
    System.out.println("Value of the instance variable num: " + this.num);

    // Invoking the class's greet method
    this.greet();
  }

  public static void main(String[] args) {
    // Creating an instance of the class
    Codeunderscored code = new Codeunderscored();

    // Calling the print method
    code.print();

    // Passing a new value to num through the parameterized constructor
    Codeunderscored codeU = new Codeunderscored(30);

    // Calling the print method again
    codeU.print();
  }
}
Arguments with Variables (var-args)
Starting with JDK 1.5, you can pass a variable number of arguments of the same type to a method. The method's parameter is declared as follows:
typeName... parameterName
In the method definition, you specify the type followed by an ellipsis (…). Only one variable-length parameter can be specified in a method, and it must be the last parameter; any regular parameters must precede it.
public class VarargsCode {

  public static void main(String args[]) {
    // Calling the method with variable args
    showMax(54, 23, 23, 22, 76.5);
    showMax(new double[]{21, 22, 23});
  }

  public static void showMax(double... numbers) {
    if (numbers.length == 0) {
      System.out.println("No argument passed");
      return;
    }

    double result = numbers[0];

    for (int i = 1; i < numbers.length; i++)
      if (numbers[i] > result)
        result = numbers[i];

    System.out.println("The max value is " + result);
  }
}
Return Type of a Java Method
The function call may or may not get a value from a Java method. The return statement is used to return any value. As an example,
int sumValues() { ... return sumVal; }
The variable sumVal is returned in this case. Because the function’s return type is int, the type of the sumVal variable should be int. Otherwise, an error will be generated.
// Example: Return Type of a Method
class Codeunderscored {

  // creation of a static method
  public static int squareValues(int numVal) {
    // return statement
    return numVal * numVal;
  }

  public static void main(String[] args) {
    // call the method and store the returned value in resultVal
    int resultVal = squareValues(13);
    System.out.println("The Squared value of 13 is: " + resultVal);
  }
}
In the preceding program, we constructed a squareValues() method. The method accepts an integer as an input and returns the number's square. The method's return type has been specified as int here.
As a result, the method must always return an int value. Note that we use the void keyword as the method's return type if the method returns no value.
As an example,
public void squareValues(int i) {
  int resultVal = i * i;
  System.out.println("The Square of the given number is: " + resultVal);
}
Java method parameters
A method parameter is a value that the method accepts. A method, as previously stated, can have any number of parameters. As an example,
// method with two parameters
int sumValues(int x, int y) {
  // code
}

// method with no parameter
int sumValues() {
  // code
}
When calling a parameter method, we must provide the values for those parameters. As an example,
// call to a method with two parameters
sumValues(29, 21);

// call to a method with no parameters
sumValues();
Example : Method Parameters
class Codeunderscored {

  // method with no parameter
  public void methodWithNoParameters() {
    System.out.println("Method without parameter");
  }

  // method with a single parameter
  public void methodWithParameters(int a) {
    System.out.println("Method with a single parameter: " + a);
  }

  public static void main(String[] args) {
    // create an object of Codeunderscored
    Codeunderscored code = new Codeunderscored();

    // call to the method with no parameter
    code.methodWithNoParameters();

    // call to the method with a single parameter
    code.methodWithParameters(21);
  }
}
The method’s parameter is int in this case. As a result, the compiler will throw an error if we pass any other data type than int. Because Java is a tightly typed language, this is the case. The actual parameter is the 32nd argument supplied to the methodWithParameters() method during the method call.
A formal argument is the parameter num that the method specification accepts. The kind of formal arguments must be specified. Furthermore, the types of actual and formal arguments should always be the same.
Static Method
A static method is declared with the static keyword. In other words, a static method belongs to the class rather than to an instance of that class; we create one by prefixing the method name with the keyword static.
The fundamental benefit of a static method is that it can be called without creating an object. It can access and change the values of static data members, but it cannot directly access instance members. It is called using the class name. The main() method is the best-known example of a static method.
public class Codeunderscored {

  public static void main(String[] args) {
    displayStatically();
  }

  static void displayStatically() {
    System.out.println("Codeunderscored example of static method.");
  }
}
Instance Method in Java
An instance method is a non-static method defined in a class. It is essential to construct an object of the class before calling or invoking an instance method. Let's look at an instance method in action.
public class CodeunderscoredInstanceMethod {

  public static void main(String[] args) {
    // Creating an object of the class
    CodeunderscoredInstanceMethod code = new CodeunderscoredInstanceMethod();

    // Invoking the instance method
    System.out.println("The numbers' sum is: " + code.sumValues(39, 51));
  }

  // instance method, because we have not used the static keyword
  public int sumValues(int x, int y) {
    int resultVal = x + y;

    // returning the sum
    return resultVal;
  }
}
Instance methods are divided into two categories:
- Mutator Method
- Accessor Method
Accessor Method
The accessor methods are the methods that read instance variables. We can recognize them because the method name is prefixed with the word get; they are also known as getters. An accessor returns the value of a private field and is used to read it.
public int getAge() { return age; }
Mutator Method
The mutator methods are the methods that modify instance variables. We can recognize them because the method name is prefixed with the word set; they are also known as setters or modifiers. A mutator returns nothing, but accepts a parameter of the same data type as the field. It is used to set the value of a private field.
public void setAge(int age) { this.age = age; }
Example: Instance methods – Accessor & Mutator
public class Employee {

  private int empID;
  private String name;

  public int getEmpID() {  // accessor method
    return empID;
  }

  public void setEmpID(int empID) {  // mutator method
    this.empID = empID;
  }

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }

  public void display() {
    System.out.println("Your Employee No. is: " + empID);
    System.out.println("Employee name: " + name);
  }
}
Methods for a Standard Library
The standard library methods are Java built-in methods that can be used immediately. They ship with the JVM and JRE as part of the Java Class Library (JCL), packaged in Java archive (*.jar) files.
Examples include,
- print() is a method of java.io.PrintStream. The print("…") method displays the string enclosed in the quotation marks.
- sqrt() is a method of the Math class. It returns the square root of a number.
Here’s an example that works:
// Example: Method from the Java Standard Library
public class Codeunderscored {

  public static void main(String[] args) {
    // the sqrt() method in action
    System.out.print("The Square root of 9 is: " + Math.sqrt(9));
  }
}
Abstract Method
An abstract method does not have a method body; in other words, it has no implementation. An abstract method is always declared in an abstract class: if a class has an abstract method, the class itself must be abstract. The keyword abstract is used to declare an abstract method.
The syntax is as follows:
abstract void method_name();
// abstract class
abstract class CodeTest {
  // abstract method declaration
  abstract void display();
}

public class MyCode extends CodeTest {

  // method implementation
  void display() {
    System.out.println("Abstract method?");
  }

  public static void main(String args[]) {
    // creating an object of the abstract class type
    CodeTest code = new MyCode();

    // invoking the abstract method
    code.display();
  }
}
Factory method
It’s a method that returns an object to the class where it was created. Factory methods are all static methods. A case sample is as follows:
NumberFormat obj = NumberFormat.getNumberInstance().
The finalize( ) Method
It is possible to define a method that will be called just before an object is destroyed by the garbage collector. This method is called finalize(), and it can be used to ensure that an object terminates cleanly. For example, finalize() can be used to ensure that an open file held by that object is closed.
To add a finalizer to a class, simply define the finalize() method. The Java runtime calls that method whenever it is about to recycle an object of that class. Inside the finalize() method, you specify the actions that must be performed before an object is destroyed.
This is the general form of the finalize() method:
protected void finalize( ) { // finalization code here }
The keyword protected is a specifier that prevents code declared outside the class's package, other than subclasses, from accessing finalize(). Keep in mind that you have no way of knowing when, or even whether, finalize() will be called. For example, if your application ends before garbage collection occurs, finalize() will not be called.
What are the benefits of employing methods?
The most significant benefit is that the code may be reused. A method can be written once and then used several times. We don’t have to recreate the code from scratch every time. Think of it this way: “write once, reuse many times.”
Example: Java Method for Code Reusability
public class Codeunderscored {

  // definition of the method
  private static int calculateSquare(int x) {
    return x * x;
  }

  public static void main(String[] args) {
    for (int i = 5; i <= 10; i++) {
      // calling the method
      int resultVal = calculateSquare(i);
      System.out.println("The Square of " + i + " is: " + resultVal);
    }
  }
}
We developed the calculateSquare() method in the previous program to calculate the square of a number. Here, the method is used to find the squares of the numbers from 5 to 10; as a result, the same method is used repeatedly.
- Methods make the code more readable and debuggable.
The code to compute the square in a block is kept in the calculateSquare() method. As a result, it’s easier to read.
Example: Calling a Method several times
public class Codeunderscored {

  static void showCode() {
    System.out.println("I am excited about CodeUnderscored!");
  }

  public static void main(String[] args) {
    showCode();
    showCode();
    showCode();
    showCode();
  }
}

// I am excited about CodeUnderscored!
// I am excited about CodeUnderscored!
// I am excited about CodeUnderscored!
// I am excited about CodeUnderscored!
Example: User-Defined Method
import java.util.Scanner;

public class Codeunderscored {

  public static void main(String args[]) {
    // creating a Scanner class object
    Scanner scan = new Scanner(System.in);

    System.out.print("Enter the number: ");

    // reading a value from the user
    int num = scan.nextInt();

    // method calling
    findEvenOdd(num);
  }

  // user-defined method
  public static void findEvenOdd(int num) {
    // method body
    if (num % 2 == 0)
      System.out.println(num + " is even");
    else
      System.out.println(num + " is odd");
  }
}
Conclusion
In general, a method is a manner of accomplishing a goal. In Java, a method is a collection of instructions that accomplishes a specified goal. It ensures that code can be reused. In addition, Methods can also be used to alter code quickly.
A method is a section of code that runs only when invoked. Parameters are data that can be passed into a method. Methods, often known as functions, carry out specific tasks. Some of the benefits of using methods include code reuse: create it once and use it multiple times.
Within a class, a method must be declared. It is defined by the method’s name, preceded by parenthesis(). Although Java has several pre-defined ways, such as System.out.println(), you can also write your own to handle specific tasks. | https://www.codeunderscored.com/methods-in-java-with-examples/ | CC-MAIN-2022-21 | refinedweb | 3,859 | 57.57 |
Learn how to dynamically route Intercom notifications to Slack channels using a Serverless Webhook
Connecting Intercom and Slack with a Serverless Webhook
In this post I am going to show you how you can connect Intercom via Webhooks to dynamically route messages to Slack by standing up a Serverless Webhook endpoint.
One tool I work with a lot in my day job is Intercom. It is a great product, which we use to send Welcome auto messages to customers when they sign up for our product. What's really nice about Intercom, as opposed to just email, is we can respond right from the Intercom UI, where the whole thread is visible to the team. That's why when we decided to launch a second product it was a no-brainer for us to use it again.
Challenges integrating Intercom with Slack
As an organization we live in Slack and as such, we integrate whatever we can to make our lives easier. Fortunately, Intercom has an integration with Slack, which we already use. That integration sends all notifications from Intercom to a single Slack channel. For our Extend product, I wanted the Intercom notifications to go to a different channel in order to allow us to have better visibility and management across the products. It became clear the existing Slack integration wouldn't suit my needs.
Webhooks, local tools, and standing up servers
Fortunately, I found Intercom supports Webhooks. Using the
Create webhook feature you can create a subscription which will send notifications to an endpoint for further handling. You can see the creation screen below.
Notice the dialog requires you to provide
WEBHOOK URL. This means you need to stand up an endpoint in order to get that URL. Looking in the documentation Intercom recommends developing locally using Sinatra, and exposing the endpoint using Ngrok. This means first you have to install a bunch of tools to develop and run locally. You'll then install deployment tools in order to deploy to a cloud provider like Heroku. Along the way you have plenty of reading to do. Feels very ancient.
Serverless Webhooks to the rescue
Thankfully we have a really easy way today to spin up a Webhook endpoint, and we don't need to run anything locally for development or deployment. The answer is Serverless. If you are shaking your head at the name, YES, of course there are servers, the main difference is there are not servers you have to directly worry about or manage. Let's move on.
Using a Serverless platform, you can easily write the code for a Webhook endpoint and have it spin up on-demand in the cloud. There are a number of different options you can choose from, and Serverless vendors differ in terms of the experience of creating a Webhook endpoint. Some require you to install tools and do development locally, and some require you to jump through extra setup hoops. Not Webtask; it shines in this respect. Every Webtask you create IS a Webhook endpoint. Further, you can develop, test and deploy it completely in the browser in JavaScript, with great support for NPM modules.
For me these qualities of Webtask made it an obvious choice to use for implementing my Webhook.
Designing the Webtask
The diagram below shows at high level what I was trying to accomplish.
Whenever Webtask receives a notification, it will look at the contents, if it is a response related to Extend, then it will compose a Slack message and send it to our new
#mktg_intercom_extend channel.
The devil is of course in the details on each of the above. Next, I'll show you the process I to went through to implement it.
Quickly setting up a Webhook to inspect the payload.
Rather than read through documentation, when I work with APIs, I often start with exploring the actual payload. For APIs that I am calling,
curl is often an easy way to do this. In the case of Webhooks, you need an endpoint in order to receive the Webhook. As I said, I didn't want to have to install anything locally. Thankfully with Webtask you don't have to. You can create a Webtask very quickly and plug it in as a Webhook to start seeing the data that is being sent, in this case from Intercom.
To create the webtask, open the browser to. Once you do you'll get a sign in prompt, where you can quickly log in with a variety of credentials including Github. Then you'll be taken to a screen to choose what kind of Webtask you want to create.
Choose "Webtask" and then then put
intercom-slack-handler. Once your task is created, you'll see some basic starter code.
If you look at the bottom of the screen, you'll see a url for your Webtask. Press the copy button to put this on the clipboard as you will use this shortly.
Now let's make a slight modification to the code to have the task write out the payload to the console.
Change the code to the following
var util = require('util'); module.exports = function(ctx, cb) { console.log(util.inspect(ctx.body, {depth: null})); cb(null, null); }
This will grab the JSON body that Intercom sends and use the
inspect function to dump it's contents to a string. Inspect is useful because if we just write the object directly with console.log, then nested objects just show as
[object]. Click the "Save" button and you'll be ready to test.
First click on the
Logs icon to bring up the log viewer which you'll use to see the output. Clicking on the area above the log viewer search text box, allows you to resize the viewer window. First you need to add the webhook to Intercom. To do this you can head to the developer's management page. From go to
Settings ->
App settings ->
Developer Tools ->
Webhooks. Click on
Create webhook. Paste your Webtask URL from the clipboard that you copied earlier into the
WEBHOOK URL textbox. Next you need to set what notifications to watch. In this case we want to watch whenever there is a reply, so check the
Reply from a user and
Reply from a teammate boxes.
It should look similar to the following:
Now click create Webhook. Doing this will send a ping request to the Webhook, and you'll immediately see output.
The current output is not that helpful yet, however the important thing is you're now setup to iteratively develop the real functionality, and you haven't installed anything locally.
Inspecting a real notification
First clear the log viewer in the Webtask editor by clicking on the trash button. Now send a test message in order to fire the webhook and see what the real notification looks like. In the Intercom UI, go to your
Manual Messages and create a new message. Set the audience to be an an account for an email that you own, if you don't have an account, create one. Make sure the audience shows that you are only sending to your account, otherwise you might inadvertently upset a lot of people ;-). Below you can see that I am sending a message to myself.
Check the email that you received and reply. The response should trigger the Webhook. Now when you look in the log viewer you can see what the real payload looks like.
You can see a gist of the full contents of the log viewer here.
Looking at the payload there are some elements that pop out that you'll need in order to create notifications similar to those created by the Intercom Slack plugin.
- The app id (app_id). This will be needed for generating URLs.
- The subject of the original message
data.item.conversation_message.subject. You'll need this in order to route to the correct channel.
- The response message
data.item.conversation_parts.conversation_parts[0].bodywhich is in HTML format.
- The topic
topic. Currently it is 'conversation.user.replied'. Looking in the docs I can also see that this can be 'conversation.admin.replied' if the reply is from an Intercom admin.
- The
userand
assigneeinformation.
Implementing the Webtask
To implement the Webtask you'll first need to setup an Incoming Webhook to get a Slack URL that you can send messages to. You can create this if you are a Slack admin, or talk to your admin. Save the URL for later.
As part of the implementation, you'll need some npm modules to help you extract the raw text from the HTML message body, and for sending to Slack. Webtask has the
html-to-text and
slack_notify modules built which will help here.
Here is the code for the completed Webtask, which you can replace your existing task with.
// // requires SLACK_URL and SUBJECT secrets to be defined // module.exports = function(context, cb) { var slack = require("slack-notify")(context.secrets.SLACK_URL); var subject = context.secrets.SUBJECT; var htmlToText = require("html-to-text"); var body = context.body; var util = require('util'); var item, parts, text, conversationUrl; function isAuth0ExtendMessage(item) { return (item.conversation_message !== null && item.conversation_message.subject.indexOf(subject) > -1); } function createMessage(color, msgText, channel) { var message = { channel: channel, icon_url: "", text: "*Intercom*", attachments: [{ color: color, text: msgText }] } return message; } function composeMessage(text) { var from = parts.author.name; var to; var color; if (parts.author.type === 'user') { to = item.assignee.name; var fromUrl = `{body.app_id}/users/${parts.author.id}`; color = "#4277f4"; msgText = `<${fromUrl}|${from}> replied to <${conversationUrl}|a conversation> with ${to}\n\n${text}`; } else { to = item.user.name; color = "#f4b541"; msgText = `${from} replied to <${conversationUrl}|a conversation> with ${to}\n\n${text}`; } var channel; if (isAuth0ExtendMessage(item)) channel = "mktg_intercom_extend" else channel = "mktg_intercom"; var message = createMessage(color, msgText, channel); return message; } if (body !== null && body.data !== null && body.data.item !== null) { item = body.data.item; parts = item.conversation_parts.conversation_parts[0]; text = htmlToText.fromString(parts.body, { wordwrap: 130 }); conversationUrl = item.links.conversation_web; if (text.length > 215) { text = text.substring(0, 214) + `... <${conversationUrl}|More>` } var message = composeMessage(text); slack.send(message); } cb(null, null); };
You'll notice throughout the code the code references parameters of the
context.secrets object. Secrets provide a way to provide secure data like connection strings, API keys, etc from outside the code. They can also be used for storing configuration information. You should use the secrets panel in the Webtask Editor to define the following secrets.
- SLACK_URL - The Slack incoming webhook url you saved earlier.
- SUBJECT - The subject of the email message to match on. The match is whether or not the mail subject contains SUBJECT. In my case the messages for Webtask and Extend have different subjects, so I matched on "Extend". This worked for us, but it might not work for your use case, in which case you should change the logic.
Here is what the code is doing at a high level.
- Checks to see if there is a
body, and if it has a
dataand
data.itemfield.
- Extracts the text from the HTML response message. Trims the text if it is too large.
- Determines the channel by checking if the subject matches
- Composes the message to Send to Slack. This part of the code does a few gymnastics to build up the a message that is almost identical in format to the existing Intercom Slack integration messages.
- Sends the message to Slack.
Note For simplicity this code only handles replies, but it can easily be changed to also support internal notes, which we did in our internal version. I'll leave that as a challenge for you.
Save the task. With everything wired up, send another test message as you did earlier. You should see a notification in Slack similar to the following:
Success!
Aside: Going beyond Webhooks with Auth0 Extend
Using Webhooks today for Intercom's extensibility places a burden. You have to sign up for a separate hosting provider, possibly install local tools, create your Webhook implementation, deploy it, and then configure the URL in Intercom. You are not done there though, you also have to now manage this extension for the long haul. Serverless platforms like Webtask make that easier, but that still doesn't remove the maintenance and monitoring burden. You still have to research the different options, stand up the endpoint, and maintain it.
What if that textbox could go away? What if you could just edit the code right in Intercom in an embedded editor? There'd be no seperate accounts to worry about, no seperate endpoints to stand up and manage, no switching contexts. You could stay completely focused on writing the code for the extension.
Auth0 Extend makes this possible with an embedded code editor for creating extensions and a Serverless runtime for executing them. The demo below shows how Extend could enable you to create your extension right within Intercom!.
Recap
In this post you've seen how to customize Intercom via Webhooks to have custom routing logic for Slack messages. You've learned how to use the Webtask Editor to create a Serverless Webhook endpoint and how you can use the editor to explore the Webhook payload.
Tell us what your experiences have been creating Webhooks. Have you used Serverless platforms to do it? We look forward to hearing from you. | https://auth0.com/blog/amp/connecting-intercom-and-slack-with-a-serverless-webhook/ | CC-MAIN-2019-18 | refinedweb | 2,224 | 65.83 |
Turn from: python standard output sys. Stdout.
This article environment: python 2.7
Use print obj instead of print ( obj )Some backgrounds.
When we print an object in python, it's actually called the sys. Stdout. Write ( obj + 'n ').
Print prints the contents you need to the console and then appends a line break.
Print calls the sys. Stdout write method.
The following two lines are actually equivalent:
sys.stdout.write('hello'+'n') print 'hello'
As we use raw_input ( 'input promption:: '), we actually output the message, then capture the input
The following two groups are in fact equivalent:
hi=raw_input('hello? ') print 'hello? ', #comma to stay in the same line hi=sys.stdin.readline()[:-1] # -1 to discard the 'n' in input stream
Redirect from the co ole to a file
The original sys. Stdout point to the co ole.
If the reference to the object 's object is assigned to sys. Stdout, then print calls the file object 's write method.
f_handler=open('out.log', 'w') sys.stdout=f_handler print 'hello' # this hello can't be viewed on concole # this hello is in file out.log
Remember, if you also want to print something on the console, it's better to save the original console object reference to the file and then restore the sys. Stdout.
Redirect to co ole and fileRedirect to co ole and file
__console__=sys.stdout # redirection start #.. . # redirection end sys.stdout=__console__
What if we want to print the contents to the console on one side, on the other hand, the output to the file?
Leave the printed content in memory instead of a print to release the buffer, so how to put it in a string region.
a='' sys.stdout=a print 'hello'
Ok, the above code isn't functioning properly
Traceback (most recent call last): File".hello.py", line xx, in <module> print 'hello' AttributeError: 'str' object has no attribute 'write'
The error is clearly highlighted above, and it's noted that there's no write method when trying to call sys. Stdout. Write ( ).
In addition, the reason for attribute error I & tead of the function isn't found, I guess that python treats the object class pointer record as an attribute of the object class, except that the entry address of the function is retained.
Now that you've this, we must implement a write method for the redirected object:
import sys class __redirection__: def __init__(self): self.buff='' self.__console__=sys.stdout def write(self, output_stream): self.buff+=output_stream def to_console(self): sys.stdout=self.__console__ print self.buff def to_file(self, file_path): f=open(file_path,'w') sys.stdout=f print self.buff f.close() def flush(self): self.buff='' def reset(self): sys.stdout=self.__console__ if __name__=="__main__": # redirection r_obj=__redirection__() sys.stdout=r_obj # get output stream print 'hello' print 'there' # redirect to console r_obj.to_console() # redirect to file r_obj.to_file('out.log') # flush buffer r_obj.flush() # reset r_obj.reset() | https://www.dowemo.com/article/47452/python-standard-output-sys.-stdout-redirection | CC-MAIN-2018-30 | refinedweb | 492 | 60.51 |
note jhourcle <p>But the first one doesn't work okay all the time.</p> <readmore> <p>Sure, the month example is always going to fail. (and you can't override a builtin so it works in that context, that I know of) But someone could modify something later, and add a function whose name conflicts with one of your bare words:</p> <p><code>perl -e 'sub sat{}; @days = (sun,mon,tue,wed,thu,fri,sat); print "@days\n";'</code></p> <p>Or, you <code>use</code> some module that polutes your namespace when you weren't expecting it.</p> :</p> <p><code>perl -MData::Dumper -e '$ref->{time()} = time; print Dumper $ref;' perl -MData::Dumper -e '%hash=( time => time ); print Dumper \%hash;'</code></p> </readmore> 450080 450080 | http://www.perlmonks.org/?displaytype=xml;node_id=450243 | CC-MAIN-2016-30 | refinedweb | 129 | 71.48 |
I; }
}
View Complete")
Hello.!
Hi,?
Hi, I search through the web and I couldn't find a way to update with T4 POCO.
Here is my code works fine with Entity Framework:
public void UpdateProduct(Product product)
{
var res = (from r in _context.Products
where r.ProductID == product.ProductID
select r).FirstOrDefault();
_context.ApplyPropertyChanges(res.EntityKey.EntitySetName, product);
}
How do I change this to T4 POCO?
We have an architecture where we use Entity Framework with POCO, and the code-first approach (no edmx file). In order to explain the problem I have to explain a little bit about our architecture:
We have one layer which contains Routing Services (for load balancing and security), and a layer of Data Access Services. All services are hosted in IIS.
What we would like is to have caching of our (Entity Framework) Context, so we made the context static, and thought it would be shared between the different services. Well, it's not...
The scenario (where we noticed the problem) was that we did some changes to an transaction through the Transaction services. We then did a search through the Search services, and found that the data wasn't updated. Well, actually the data are updated, and
saved to the DB, it's just that the "Search Context" is out of sync. When retrieving the data from the Transaction service, we got the correct data, as expected.
So, after hours of googling I still haven't found a solution to the problem, and I'm hoping to get some response here.
Hi all,
I am building a WPF 4 application with Prism and MVVM, I have some POCO to bind with different views in different modules. I would like to implement validation of user input.
After I have done tons of search I found all of the implmenetations are based on property changed and validation logic kicking off right away. And typical way is to leverage attributes under System.ComponentModel.DataAnnotations namespace and IDataErrorInfo
interface.
My requirements are:
Anybody has some good suggestion, articles to describe my requirements?
Hardy
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/964-northwind-poco.aspx | CC-MAIN-2018-47 | refinedweb | 361 | 64.81 |
On 30 March, the Minister of Foreign
Affairs of the Federal Republic of Germany, the current Chairman of the Organization
for Security and Co-operation in Europe, Frank-Walter Steinmeier, will arrive in
Uzbekistan on an official visit, according to the press service of the Foreign
Ministry of Uzbekistan.
The visit program includes a meeting with the President of the Republic
of Uzbekistan I.A. Karimov and negotiations in the Government, during which
the state and prospects of Uzbek-German bilateral cooperation, as well as
pressing regional and international issues, will be discussed.
(Source: UzReport.uz)
Among a number of essential factors
behind a rapid pace of economic development in the Republic of Uzbekistan, many
experts point to diversification and increased competitiveness of the national
economy as a result of modernization and large-scale introduction of modern
technologies and equipment. A given strategy was worked out and launched
immediately after the proclamation of the country’s independence.
And localization of production and expansion of inter-sector industrial
co-operation were singled out as the highest priorities on this front.
At a recent press conference held at the Ministry of Economy of the
Republic of Uzbekistan, it was emphasized that localization of production
constitutes one of the most important directions of exploiting domestic
reserves and potentialities in order to further strengthen the national
industrial potential. This implies a deeper processing of local raw materials
and resources and an increase, on a given basis, in both the volumes and the
range of goods with high value added made in Uzbekistan. At the same time, it
should be observed that thanks to intensification of localization processes,
one more task is addressed, one that has become especially topical in several
countries of late. The point is, such things as the maximum utilization of a
nation’s domestic resources and potentialities for the balanced development of
the real sector of its economy and the introduction of import-substituting and
export-oriented goods reduce its dependence on external factors, including, in
particular, global crises and an unfavorable situation in foreign markets.
And the wide experience gained by Uzbekistan in the course of its
independent development once again confirms this phenomenon, because its
economy tends to grow at a stable rapid clip even amid the ongoing world
economic and financial crisis.
More than 15 years ago, the Uzbek leaders have initiated the National
Localization Program, thanks to which the production volumes of
import-substituting goods have leapfrogged 220-fold, compared with the 2000
level.
Judging by the results for the year 2015, the implementation of 696
projects has generated 4 trillion Soum (Central Bank of the Republic of
Uzbekistan exchange rate as of 29.03.2016: US $1 = 2,876.72 Soum) worth of localized
produce, with its volume swelling 1.3-fold and the rated effect of import
substitution exceeding US $1.5 billion.
Last year, 820-plus new categories of industrial goods were mastered and
put into production. These include complex petro-chemical and mine machinery,
transformer sub-stations, spare parts and units for compressor equipment,
liquid for hydraulic equipment, light-diode lamps, composite materials, sports
kit and simulators, and many other types of produce, which is currently in
popular demand both domestically and abroad.
At the same time, great significance is attached to the price of
localized goods made by indigenous manufacturers. Their price should become far
more reasonable than that for their foreign analogues. Only in this case,
localized produce will be able to stimulate domestic demand and to satiate the
home market with quality goods of home production. As practices show, today
home-made localized commodities have appreciably displaced or completely
supplanted many categories of imports.
The official figures presented at the press conference at the Ministry
of Economy of the Republic of Uzbekistan demonstrate that over the past two
years, the importation of 97 product groups has been ceased completely owing to
localization projects. Among them are mine cars, vacuum pumps, forged pieces,
lifting cranes, welding electrodes, several types of refractory and
acid-resisting materials, wall materials, sandwich panels, carpet flooring,
synthetic lawns, audio systems for automobiles, glass jars and bottles, starch,
baking yeast etc. The import volumes of another 306 product groups have been
more than halved, including TV sets, air-conditioners, refrigerators and
freezers, dry-cleaners, consumer lamps, automobile filters and radiators,
all-metal cylinders, spiral-seam and
straight-seam steel pipes, copper pipes, ceramic tiles, linoleum, several types
of synthetic fabrics, medical ampoules, printing paints, children’s toys,
sports kit and many others.
Let it also be mentioned that all projects implemented under the
National Localization Program imply a consistent intensification of
localization processes. In 2014, to cite an example, the production of
liquid-crystal TV sets took the form of large-block assembly, with the level of
localization approaching 35 per cent. The domestic production of integral
printed circuit boards, frames and completing units, wires, remote control
panels and other components launched a year ago has made it possible to move up
a given indicator to 40 per cent. This process is going on, and in 2016, it is
expected to reach 50 per cent thanks to the production of liquid-crystal panels
at local enterprises.
The fact that the leading industries of the national economy tend to
swell the percentage of their local purchases from year to year is viewed as an
important outcome of localization of production and development of industrial
co-operation. In 2015 alone, the localization projects have reduced imports
volumes by 9.5 per cent, or US $444.6 million.
The attraction of foreign investors and creation of joint ventures with
the world’s leading technological companies play a sizable role in the
successful realization of the National Localization Program. The most promising
joint ventures operating in Uzbekistan to date are as follows: an enterprise
set up together with MAN to produce large-capacity vehicles, which is based in
Samarkand province; enterprises established together with CLAAS and Lemken,
which specialize in the production of agricultural equipment; enterprises
producing consumer electronics, set up together with Toshiba, Candy, LG,
Samsung and ZTE.
Another factor, appreciably stimulating both home demand and development
of intra- and inter-sector industrial co-operation is the introduction of a
preferential regime in the contract system, when it comes to the placement of
orders to purchase indigenous produce within the framework of the International
Industrial Fair and Co-operative Exchange held annually on the initiative of
the Uzbek leader. In the course of the IIFCE-2015, more than 13,300 contracts
have been concluded for the supply in 2016 of 11.9 trillion Soum worth of
goods, 38 per cent above the previous event’s figure. Over 2.1 trillion Soum of
the indicated amount was accounted for by contracts to purchase goods once
imported to the Republic, which, according to calculations, ensures a further
reduction in 2016 of imports by manufacturing enterprises for their main
production operations by US $637.5 million.
Additionally, those participating in the press conference paid attention
to the fact that, along with the satiation of the home market with quality
domestic goods, lately there has emerged a tendency for expanding both the
volumes and assortment of localized commodity exports. In particular, the year
2015 has seen the exportation of 52 new product groups to the tune of US $25.1
million. These include two-chamber washing machines, industrial energy-saving
light-diode lamps, tubes made of medical glass, shaped profiles made of
vulcanized rubber, cosmetics etc.
Speaking about such processes as localization of production and
development of industrial co-operation, the press conference’s organizers
informed those in attendance that the Ministry of Economy of the Republic of
Uzbekistan, joining hands with other interested ministries and departments, is
now busy elaborating a list of goods, which are in stable popular demand within
the Republic and abroad. Indigenous enterprises, wishing to invest in
production projects, are advised to study this list in the first place,
whereupon they are encouraged to launch the production of the most sought-after
import-substituting commodities. At the same time, it was stressed that
enterprises mastering the production of localized goods are granted exemptions
from customs payments, uniform tax, income (profit) tax and property tax.
As far as the medium- and long-term prospects are concerned, it should
be said that the National Program of Localization of Production and Expansion
of Inter-Sector Co-operation will remain the most essential tool for realizing
the country’s industrial policy designed to exploit the available economic
potentialities in the most efficient way, in order to organize the processing
of raw materials and subsequent production of goods that are in great demand
in the world marketplace, in keeping with the following three-to-four-stage scheme:
basic raw material – primary processing (semi-finished goods) – finished
materials for industrial production – finished goods for final use.
(Source: «Business partner.uz» newspaper)
Uzbekistan took 17th place among 20 countries in the Global High-Speed
Train Ranking published by GoEuro.
The ranking is based on such indicators as record speed, operating speed,
line coverage and population coverage, as well as the cost to passengers per
kilometer of rail travel.
According to the ranking, the record speed of high-speed trains in Uzbekistan
is 255 km/h (17th place for this indicator), and the operating speed is
250 km/h (13th place).
The share of high-speed lines in Uzbekistan’s total railway infrastructure
is 8.21% (6th place). According to GoEuro, about 90.1% of the Uzbek population
has access to high-speed trains (16th place).
GoEuro noted that passengers pay 0.18 euros per kilometer of high-speed
travel (11th place).
Japan, South Korea and China hold the top three places in the ranking.
The United States is in 19th place and Finland in 20th.
The high-speed train Afrosiyob began regular service between Tashkent and
Samarkand on 8 October 2011. In 2015, it also started running on the
Tashkent-Samarkand-Qarshi route.
(Source: UzDaily.com)
International experts draw a fairly
bleak picture of the global food market’s future. The World Wide Fund for
Nature warns that humanity may face an acute lack of food by 2050. Only
countries that are building a complete industrial chain, from garden beds to
processing, are capable of escaping this fate. Uzbekistan has been
moving along this path for more than 20 years and has achieved certain
progress.
Despite experts’ predictions, Uzbekistan is most likely to avoid serious
problems with food provision. Over the past few years, the country has ranked
among regional leaders in the production of fruit and vegetables, grains, meat,
dairy and food products. There are still imports in certain positions, but
import substitution has been in obvious progress.
Paradoxically, the global crisis has become a positive factor for the
domestic food industry. Volatility in currency markets and differences in
exchange rates have made Uzbek products more competitive both domestically and
internationally. Now, it is much cheaper for local wholesale companies to buy domestic
products instead of importing same products for foreign currency. Meanwhile,
the domestic food industry has grown significantly in terms of quality over the
past decade, outstripping many CIS countries.
UT has repeatedly reported about the well-built and efficiently
operating system ‘State-tax preferences-farmer-processor-exporter’, which
allowed creating a reliable shield for food security in Uzbekistan. However,
time does not stand still. In order to maintain the leadership, the country
needs to implement new initiatives, innovative projects, attract strategic
investors. In this context, Uzbekistan has something to be proud of.
In the years since independence, Uzbekistan has carried out large-scale
reforms in the food industry. In 1990, the republic imported more than 82% of
the total consumption of grain, 50% of meat and meat products, about 60% of
dairy products, 100% of sugar, powdered milk and baby food. Today, Uzbekistan
fully provides the population with all major types of products by means of
domestic production. Moreover, during the years of independence, meat
consumption per capita has increased 1.3 times, milk and dairy products - 1.6
times, and processed fruit and vegetables – almost 4 times.
At the same time, Uzbekistan has always sought to rely on long-term
development. The food industry is no exception: there are plans afoot to
implement an unprecedented number of business initiatives in the next five
years.
This February, the President of Uzbekistan signed a landmark document to
define the development of Uzbekistan's food industry for the years ahead. It is
envisaged to establish a new holding company, O’zbekoziqovqatholding, which
would incorporate 176 food, oil and fat, meat and dairy enterprises. The
establishment of a specialized Fund for Development, Reconstruction, and
Modernization of Food Industry is another noteworthy fact. The financial issue
is in the spotlight, as always: the document clearly defines how the fund will
be financed. Firstly, the state will allocate 50% of the dividends received
from the state shares of enterprises under the holding. In addition, the Fund
was entitled to get 0.5% of net profit of enterprises and companies of
O’zbekoziqovqatholding.
The establishment of the new structure is largely aimed at strengthening
the industry's export orientation. As a result of reforms on
agricultural diversification and provision of the population with food
products, Uzbekistan annually exports foods, fruits and vegetables worth more
than $5 billion. The volume of exports of agricultural products has tripled in
the past three years. Our country delivers more than 180 kinds of fresh and
processed fruit and vegetable products to 80 countries.
Uzbekistan ranks among the ten leading suppliers in exports of apricots,
plums, grapes, nuts, cabbage and other fruits and vegetables. In 2015,
Uzbekistan was among 14 countries recognized by member states of the Food and
Agriculture Organization for achieving the Millennium Development Goals on
food security.
There are more than 10,000 economic entities in the republic, which are
engaged in the production of food products. The system is generally completed,
so the main goal now is to help it grow through access to foreign markets. The
O’zulgurjisavdoinvest Association is intending to implement one of the
initiatives by means of a new exposition and business platform for leading
foreign procurers.
The Uzbek side is currently working on the country's first exhibition
and business forum with the working title ‘Fruits and Vegetables of
Uzbekistan’, which will bring together the leading international traders,
representatives of major transport companies, processing enterprises, the
agricultural sector and manufacturers of profile equipment. The main purpose of
the event is to create all the conditions for foreigners to buy fruits,
vegetables and other ‘Made in Uzbekistan’ branded food products.
This practice has proved its value in other directions. For example,
Uzbekistan has been holding the International Cotton Fair for over a decade,
contracting the bulk of cotton produced in the country. The forum is scheduled
for late May - early June 2016. It will be convenient for everyone. The guests
will come to Tashkent and negotiate on the whole range of issues on product
delivery in just a couple of days, meeting with farmers, transporters, tax
authorities, customs officials and representatives of other state institutions
on the same site. It promises to become a good promotion of Uzbek fruit and
vegetables in the world.
The event will be partnered by the country's largest Uzbek-British
exhibition company ITE Uzbekistan. It will be organized in the halls of the
republican center of exhibition and fair trade of consumer goods
O’zko’rgazmasavdo.
Uzvinprom-Holding is planning to increase external supplies of fresh
products, particularly grapes, worth more than $100 million, through new
export-promotion measures. Forty-two agricultural companies engaged
in the production and marketing of fresh vegetables and grapes were established
under the Holding in various regions of the country. Today, they are working
closely with roughly 12,000 farms.
Agricultural firms are mainly focused on the rehabilitation,
reconstruction and establishment of vineyards. The policy is promising for
higher yields and, therefore, increased exports. 19,600 hectares of young
vineyards have been laid this year. The farmers were supported in procuring the
seedlings and supply of special equipment. The total area of plantations has
expanded to 99,600 hectares.
It is planned to plant other fruits and vegetables between the rows of
vineyards on the area of 32,000 hectares this year, which should increase the
volume of fruit and vegetable production. To date, such works have been carried
out on 22,000 hectares.
Storage infrastructure ranks among the most important elements in
ensuring a unified and uninterrupted supply chain of fruit and vegetable
products to foreign markets. There is a system-based mechanism of its further
development, supply of up-to-date equipment for reception, processing, sorting,
grading, packaging and labeling of products for further exports.
Traditionally, a weighty share of fruit and vegetable products is
consumed canned, processed or dried, in which form products lose their consumer
appeal, taste and useful properties from a medical point of view. In recent years,
Uzbekistan has been implementing a program on modernization of existing and
construction of new cold storage facilities. Storage facilities of a total
capacity of 200,000 tons have been built in the country in the last few years
alone.
The emphasis is placed on the construction of modern cold storages for
fruits and vegetables with a total capacity of 325,000 tons. The current
country’s capacity makes up 832,000 tons. The construction of facilities will
be financed through loans of Uzbek banks and capital funds of the companies
that are engaged in exports. It is planned to allocate a total of 125 billion
soums (exchange rate of the Central Bank of Uzbekistan as of 29.03.2016:
$1 = 2,876.72 soums) for these purposes.
Works will be carried out in stages in specific regions of the country
to ensure a complete coverage of the CIS
countries and the European Union with exports. It is envisaged to commission
cooling storage capacities of 60-65,000 tons each year. Such volumes are
specifically tied to the projected demand and logistical capabilities of Uzbek
transport companies.
A new logistics system for exports of fruits and vegetables will be
created concurrently with the construction of refrigerators. 17 specialized
centers of processing, storage and transportation of fruits and vegetables will
be commissioned in key export-oriented regions within five years, or 3-4
objects annually. Their construction will be funded by 119 billion soums.
Large industrial projects are not being left out. The Coca-Cola Company is
planning to invest over $35 million in the development of its three plants in
Uzbekistan in the coming years; they produce six kinds of drinks.
In particular, Coca-Cola intends to upgrade plants in Urgench and
Namangan, and establish a new line at the head plant in Tashkent. The new
line is expected to produce up to 48,000 bottles per hour, twice the
current capacity. The equipment will be provided by one of the segment’s
leaders, the German company Krones. There is no similar line in
Central Asia.
The decision to increase production capacity was driven by the
growing demand in the domestic market and good export prospects. Products
totaling over $2.5 million were exported to the neighboring countries last year
alone. The company is working towards the increase in production volumes, and
development of new kinds of products. For example, experts are currently
estimating the feasibility of producing juices and soft fizzy drinks in aluminum
cans.
Another presidential initiative aims to support beef and sausage
production: farms and other agricultural enterprises engaged in
livestock breeding are exempted from all taxes and mandatory contributions to
the state specialized funds until January 1, 2021. Meanwhile, the development of
the sector in the next five years will not be limited to the provision of tax
breaks for its entities. The head of state has also approved several
initiatives with specific sources of funding, which are aimed at the
intensification of livestock breeding in the country.
For example, starting in 2016, Uzbekistan will purchase 50 breeding bulls
annually from the best foreign farms and institutions for five years. The
project cost amounts to $1.8 million. At the same time, it is planned to triple
the number of breeding farms: today there are 470 of them across the country,
and their number is expected to increase to 1,530 by the end of 2021. These
measures are expected to ensure the supply of more than 80,000 head of
breeding heifers to farms over the next five years.
As estimated by the Ministry of Agriculture and Water Resources, the
number of cattle in the country will increase by more than 3.1 million animals
- from 11.6 million in 2015 to 14.8 million in 2021 - and meat production from
1.9 million tons to 2.5 million, respectively.
(Source: «Uzbekistan Today» newspaper)
The fifth plenary session of the Senate of the Republic of Uzbekistan
will open at 10 am on 31 March 2016 in the meetings hall of the Senate of the
Republic of Uzbekistan in Tashkent.
(Source: UzA)
A motion picture by the popular film director, the Honored Artist of
Uzbekistan, Zulfikar Musakov, has premiered in Tashkent. “Khazonrezgi” (Details
of the Autumn) is a film depicting the destinies of millions of people. The
script is drawn directly from our own realities.
“I shoot films the way I feel about
life. The soul is the one who tells me to create such works. Perhaps because I
come from the people as the thousands and millions do,” Zulfikar Musakov has
commented. “I am always keen on learning about fates of common people, as well
as their aspirations, fantasies, plans for the future.
“It seems that I have had to listen
to this story many times,” Alisher Hamdamov says. “My father is a participant
of war and he used to tell us a lot about the lives of those who were in the
war and about how they ended up in the aftermath of the service.”
“This is a depiction of the fates of
grannies and grandpas, of those who endured tough times,” shares the student
Aziza Tursunova. “In particular, my family went through a very similar story.”
The motion picture is about those
who were 18-20 in the 1980s. It is a memory picture, where the main character
develops thoughts – during a travel on the metro and tram – and ‘scrolls’
around in the memory the episodes of his life.
Watching this movie is a rendezvous
with our favorite actors Shahzoda Matchanova and Bobur Yuldashev. But the film
industry stars do not play major roles. The director chose others for the
protagonists: the character named Kurban is played by the lawyer and
businessman Bakhritdin Abdullaev, while Lola is played by the young actress
Husnora Khojmuratova, previously a supporting-role artist.
“The first meeting with the director
Zulfikar Musakov took place practically in the street, and he invited me to
audition. As a result, I was approved for the role of Kurban,” says Bakhritdin
Abdullaev. “In my view, that was a very interesting story.”
A central plot in the film is the
fate of the two main characters – Kurban and Lola. He is a bus driver, has seen
a lot in the war, and lost a friend. She is a common girl, and because of
family troubles she leaves the parental home.
The characters get to know each
other after the last shift, when Kurban sees a sleeping girl in an empty bus.
After listening to her story, he offers his help – shelter in the house of
his parents. That is the beginning of the life they will live together,
relying on each other and creating a happy family...
(Source: «Uzbekistan Today» newspaper)
NSH script/job blcli_setoption roleName not working
michael huttner, Jun 30, 2015 3:22 PM
We have a script/job to synchronize our AD/LDAP users/groups with BladeLogic RBAC users/roles. The job is set up to execute with BLAdmins role equivalence, but needs to switch roles to handle RBACUser functions (eg, createUser, addRole, removeRole). The NSH script was not working, using:
blcli_init
blcli_setoption authType BLSSO
...
blcli_setoption roleName BLAdmins
blcli_connect || exit 2
...
blcli_execute RBACUser removeRole ${USERNAME} ${ROLENAME}
We finally inserted this code right after the blcli_connect to get it working:
blcli_execute Utility assumeRole RBACAdmins
But we are concerned that BMC may change java classes/namespaces in future releases, and this code may only be a temporary workaround. Can someone advise or suggest a better way to handle this?
Thanks!
~Michael
1. Re: NSH script/job blcli_setoption roleName not working
Bill Robinson, Jun 30, 2015 3:21 PM (in response to michael huttner)
blcli_setoption can't change things once the connection to the appserver is created. So the Utility.assumeRole is the way to switch roles after you have connected to the appserver. Also, if this is running as an NSH script job, you don't need any of these:
blcli_init
blcli_setoption authType BLSSO
blcli_setoption serviceProfilesFile /bladelogic/depot/shared/AD_SYNC/profile.xml
blcli_setoption serviceProfileName SRP_Profile
blcli_setoption roleName SA_BL_USER # this is a BLAdmins role equivalent
blcli_connect || exit 2
as they are ignored.
You can create an 'idea' to have the 'Utility.assumeRole' made into a 'released' blcli command.
2. Re: NSH script/job blcli_setoption roleName not working
michael huttner, Jun 30, 2015 3:25 PM (in response to Bill Robinson)
It was designed to be a dual-purpose POSIX/CLI executable as well as NSH batch/script job capable, FYI. Do you think the Utility class method will be future-compatible?
3. Re: NSH script/job blcli_setoption roleName not working
Bill Robinson, Jun 30, 2015 3:29 PM (in response to michael huttner)
That command has existed for at least 8 years... I don't see that changing anytime soon.
Xcode 3.2.6: compiler returns exception
Hi all.
I have a problem when I run a simple C program: it generates a BAD ACCESS exception when I try to access the image data.
My code is:
#include <stdio.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(int argc, char *argv[])
{
    IplImage *image_in = 0;
    int w, ws, h;
    image_in = cvLoadImage("./clown.bmp", 0);
    w = image_in->width;
}
and compiler returns:
Program received signal: “EXC_BAD_ACCESS”.
for the instruction
w = image_in->width;
I'm new to the OpenCV world, and the code seems correct, so I don't know
what this exception means. Please help me!
N.B.: I'm using OpenCV 2.4.3
P.S.: sorry for my imperfect English...
Following on from the previous post, I wanted to move into Unity and create the bare bones of a shared hologram test scene using the messaging library from that post.
My aim for that scene is to build out a re-usable API and ‘infrastructure’ that allows for creating/deleting a hologram such that it will be shared with other devices over the network. I can then package those pieces (with the messaging library) and maybe use some/all of them again in further projects.
The Sketch
That sounds relatively unambitious but I think the steps involved are something along the lines of…
- Offer some “CreateHologram” method which can;
- Capture the ‘type’ and transform of the hologram being created – e.g. Cube/Sphere/Dragon – and create it.
- The object should have a stable ID that can represent it on all the devices that come to know about it.
- Determine whether there is already a suitable object in the scene which can provide a WorldAnchor to parent this new GameObject and, if suitable, re-use it to try to avoid creating ‘too many’ world anchors.
- For these posts, I will do this by trying to find an existing, world-anchored GameObject within 3m of the new GameObject.
- If a WorldAnchor does not exist, create one at the location of the GameObject and give it a stable ID that can represent it on all devices that come to know about it.
- Parent the hologram inside of the selected WorldAnchor, offsetting it as necessary.
- Wait for that WorldAnchor component to signify (via its isLocated flag) that it is located in space.
- Export that WorldAnchor to get the blob (byte[]) that represents it.
- Send that blob to some store that can persist it for access by other devices
- For these posts, I’m going to send these world anchors to Azure Blob Storage but it would not be difficult to abstract this and plug in other mechanisms
- Send a network message to other devices in the scene to inform them of;
- The new hologram ID
- The new hologram type
- The parent anchor ID
- The transform relative to the parent anchor object
- Offer some “DeleteHologram” method which can;
- Locate a hologram in the scene by its ID
- Remove it
- Send a network message to other devices in the scene to inform them of;
- The deleted hologram ID
That feels like more than enough to be getting on with for one blog post
Additionally, there’s a need to have some code which responds to these messages such that;
- When a “CreateHologram” message arrives;
- The hologram of the right type is created (e.g. Cube/Dragon/etc) with the same ID as the originating hologram.
- The world-anchored parent for the hologram is determined by the ‘parent anchor ID’
- If this parent object is not already in the scene then there’s a need to
- Download the blob representing the anchor from Azure blob storage
- Create a new blank GameObject to represent the world anchor
- Import the anchor to that GameObject
- The transform of the new hologram is set correctly relative to the world-anchored GameObject which parents it.
- When a “DeleteHologram” message arrives;
- Remove the hologram with the corresponding ID from the scene.
Ok, that’s definitely enough for one post
The Implementation
Ok, so how does this look in reality?
Firstly, I should say that for this experiment I am building with Unity 2017.3. I’m not at all sure that this is (yet) the recommended version of Unity for Mixed Reality development but I had a specific problem with the Unity 2017.2* versions in that I could not debug any code and I moved forward to 2017.3 for this set of blog posts and everything I needed seems to have worked to date. You may get different results.
I set up a blank Unity project, configured it to build for Windows UWP and Mixed Reality (without adding the Mixed Reality Toolkit) and added in my messaging library using the “placeholder” approach that I talked about here and you can see the 2 libraries in the Plugins folder of my solution below;
I then built out 3 sets of ideally re-usable scripts that you can see in this screenshot below;
The Messages Script Folder
The Messages folder contains 4 classes that ultimately are about presenting two different types of message-derived classes for use with the messaging library from my previous post.
and so there’s 2 base classes in here with the 2 real classes being CreatedObjectMessage and DeletedObjectMessage which line up with what I sketched out in that they carry the right pieces of data for those create/delete pieces of functionality.
The Azure Blobs Script Folder
The Azure Blobs folder contains a number of scripts intended to make the uploading/download of a blob to Azure storage relatively simple from Unity.
These scripts really surface one API exposed by the class AzureBlobStorageHelper which has public methods to Upload/Download blobs from Azure.
The rest of the code is just “infrastructure” and it leans very heavily on code that I ‘borrowed’ from my colleague Dave who has a repo of this type of code over on github. I hope that he doesn’t mind
and I hope that I commented the code appropriately to say where it (mostly) comes from.
In order to make use of Azure blob storage, there’s a need to have some endpoint/connection details and so there’s a type in this folder named AzureStorageDetails which stores these details and I’ll come back to its intended use.
The General Script Folder
The General scripts folder contains, again, mostly infrastructure with only perhaps 2 classes in here intended for actual use – namely, the SharedHologramsController class and the SharedCreator class.
The idea is that the SharedHologramsController is a MonoBehaviour intended to be dropped once into a project and it provides access to two key properties as seen below;
There’s the AzureStorageDetails which are intended to be configured in the editor to provide details of an Azure storage account name, key and container name as in this screenshot;
and I can easily copy those details from the Azure Storage Explorer or from the portal etc.
That SharedHologramsController instance also provides access to an instance of the other significant type here which is called SharedCreator and it is this type which has methods to Create/Delete shared holograms and perform the logic that was sketched out earlier in the post.
At the time of writing, the SharedCreator takes a string to identify the type of hologram that you want to create (Cube/Dragon/etc) and it only knows how to create primitives right now (Cube, Sphere, etc) but it would be far-from-rocket-science to adapt it so as to interpret that string in other ways – e.g. loading resources or asset bundles or similar in Unity. It’s just not something that I’ve added yet and I daresay some “IResolveHolograms” interface could easily be cooked up to do such a thing.
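That hypothetical "IResolveHolograms" idea boils down to a registry mapping the type string to a factory. A language-agnostic sketch in Python follows (the names are mine, not from the post's code):

```python
class HologramResolver:
    """Maps a hologram 'type' string (e.g. "Cube") to a factory function."""
    def __init__(self):
        self._factories = {}

    def register(self, type_name, factory):
        self._factories[type_name] = factory

    def create(self, type_name, *args, **kwargs):
        try:
            factory = self._factories[type_name]
        except KeyError:
            raise ValueError("No factory registered for type: " + type_name)
        return factory(*args, **kwargs)

# A primitive factory could be registered alongside e.g. asset-bundle loaders.
resolver = HologramResolver()
resolver.register("Cube", lambda size=1.0: {"type": "Cube", "size": size})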
The Unity Package
I made a Unity Package out of the scripts and added it to the repo – it’s just an export of the scripts including the plugins.
The Package Downgrade Issue
This might be one big red herring but I think I noticed that when I build my solution from Unity 2017.3 then the generated projects look to be referencing V5.0.0 of the Microsoft.NETCore.UniversalWindowsPlatform package as shown below;
and I noticed that my messaging library project seems to be referencing V6.0.1 as below;
Confused? Yes, I am
This seems to manifest itself as a build warning NU1605 when I come to build the Unity solution inside of Visual Studio;
which I read as something like;
You have a project using Nuget package X which makes use of a library which has been built against Nuget package >X.
Now, of course, I tried to get around this by simply ignoring it but I then got bitten by a runtime error;
and I essentially pinned this down to the fact that my messaging library built against UWP package 6.0.1 was expecting to load System.Net.Sockets.dll V4.1.0.0 whereas the build process had emitted System.Net.Sockets.dll V4.0.6.0 and that didn’t match.
So, it wasn’t so easy to ignore.
I don’t know whether this was caused by some mistake I made inside of my Unity project setup or whether it would be reproducible if I were to make another Unity project.
For the moment, I have worked around this by manually changing the Nuget package of the Unity projects to be 6.0.1 as shown below;
Whether this is the ‘right’ thing to do, I’m unsure but it gets me around the build time warning and the runtime error for now but I’m grateful to whoever added that Nuget package warning because I spent some time trying to figure out what was going on here and it would have been a lot longer without that warning
An extra note here – I found that if by chance I had deployed the application containing this mismatch of UWP packages to a device then I had to make sure that I uninstalled that application before attempting to fix things – i.e. just switching the version numbers of the UWP packages in Visual Studio and asking to build/deploy didn’t seem to be enough but, rather, I had to make sure the application was wiped from the device.
The Usage
In terms of usage, I created a blank test project in Unity and set it up for the basics of UWP/HoloLens development, specifically;
- Moving the camera to the origin.
- Changing the camera’s clear flags to a solid black colour and its near clipping plane to 0.8.
- Changing the build platform to UWP, the device to HoloLens, the version of the SDK to 14393 and selecting the “C# projects” option.
- I changed the backend scripting engine to be .NET.
- I made sure that Windows Mixed Reality was set up within the XR settings.
- I made sure that my UWP capabilities included Internet Client/Server and Private Networks although I’m not 100% sure yet that I need both of those so this is possibly overkill. I also made sure that the capabilities included Spatial Perception.
I didn’t go to town on this – I just went with what I thought was the minimum. I then imported my Unity package that I made earlier in the blog post and which is also in the repo’s top level folder.
With that all imported, I added an empty GameObject to my scene and added the Shared Holograms Controller script to that GameObject as below;
and I filled in the details of my Azure storage account.
I then added a script named TestScript to my empty GameObject to see if I could write the following logic;
- A tap on an empty space will create a green cube as a shared hologram.
- Looking at a cube will turn it red, looking away will revert to green (locally, these colour changes are not intended to synchronise across devices).
- A tap on a focused cube will delete the shared hologram.
There’s no UX around the various delays involved in creating the shared holograms which there would definitely need to be in a real-world app but this is just for testing.
The TestScript ended up looking as below;
using System;
using SharedHolograms;
using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class TestScript : MonoBehaviour
{
    void Start()
    {
        this.recognizer = new GestureRecognizer();
        this.recognizer.SetRecognizableGestures(GestureSettings.Tap);
        this.recognizer.Tapped += OnTapped;
        this.recognizer.StartCapturingGestures();
    }
    void OnTapped(TappedEventArgs obj)
    {
        // If we are staring at a cube, delete it. Otherwise, make a new one.
        if (this.lastHitCube == null)
        {
            this.CreateSharedCube();
        }
        else
        {
            this.DeleteSharedCube();
        }
    }
    void DeleteSharedCube()
    {
        SharedHologramsController.Instance.Creator.Delete(this.lastHitCube);
        this.lastHitCube = null;
    }
    void CreateSharedCube()
    {
        var forward = Camera.main.transform.forward;
        forward.Normalize();
        var position = Camera.main.transform.position + forward * 2.0f;

        // Note - there's potentially quite a long time here when the object has
        // been created but we're still doing network stuff so we'd need to really
        // make a UX that dealt with that which I haven't done here.
        SharedHologramsController.Instance.Creator.Create(
            "Cube",
            position,
            forward,
            new Vector3(0.1f, 0.1f, 0.1f),
            cube =>
            {
                ChangeMaterial(cube, this.GreenMaterial);
                cube.AddComponent<BoxCollider>();
            }
        );
    }
    void Update()
    {
        RaycastHit rayHitInfo;

        // Are we looking at a cube?
        if (Physics.Raycast(
            Camera.main.transform.position,
            Camera.main.transform.forward,
            out rayHitInfo,
            15.0f))
        {
            this.lastHitCube = rayHitInfo.collider.gameObject;
            ChangeMaterial(this.lastHitCube, this.RedMaterial);
        }
        else if (this.lastHitCube != null)
        {
            ChangeMaterial(this.lastHitCube, this.GreenMaterial);
            this.lastHitCube = null;
        }
    }
    static void ChangeMaterial(GameObject gameObject, Material material)
    {
        gameObject.GetComponent<Renderer>().material = material;
    }
    Material GreenMaterial
    {
        get
        {
            if (this.greenMaterial == null)
            {
                this.greenMaterial = new Material(Shader.Find("Legacy Shaders/Diffuse"));
                this.greenMaterial.color = Color.green;
            }
            return (this.greenMaterial);
        }
    }
    Material RedMaterial
    {
        get
        {
            if (this.redMaterial == null)
            {
                this.redMaterial = new Material(Shader.Find("Legacy Shaders/Diffuse"));
                this.redMaterial.color = Color.red;
            }
            return (this.redMaterial);
        }
    }
    Material greenMaterial;
    Material redMaterial;
    GestureRecognizer recognizer;
    GameObject lastHitCube;
}
and so there’s not much code and most of it has nothing to do with shared holograms – there’s just two calls in there to SharedHologramsInstance.Create and Delete and that’s pretty much it. The rest is just Unity work to change colours and so on.
Testing – A Challenge with One Device
At the time of writing, this is an experiment mostly done ‘for fun’ in the down time between Xmas and New Year and I have one HoloLens device which I can use to try things out.
Because of that, I had to write some extra code in order to use the one HoloLens as both a sender/receiver for these messages and so I added another project to the test apps folder of the messaging library project that I described in the previous blog post;
and this acts as a ‘recorder’ for the CreateObject/DeleteObject messages with a limited ability to play those messages back over the network.
This means that I can use my one HoloLens to position a number of cubes around a space and to delete some of them as well, use this console app to record that flow of messages and then restart the app on the HoloLens and play back those messages so as to check whether the holograms get re-created in the right places and deleted at the right time.
That seems to work reasonably well but, naturally, it’d be nice to also try this out on multiple devices.
Testing – The Editor
While I did try and make the messaging library and the other pieces so that as much of it as possible might run in the editor, I haven’t paid much attention to this yet as there’s a limited amount that I think that you can do with spatial anchors but the essence is there but is largely untested so far.
Wrapping Up
I (hopefully) removed my Azure storage connection details from the Unity project and I checked it, the Unity package and the underlying messaging library into github.
Feel very free to take it, play around with it, etc – once again, this is mainly written ‘for fun’ and for me to perhaps get some re-use of in the future so don’t expect super high quality from it – apply a pinch of salt to what you see.
What’s Next?
At the end of this post, I think I’ve got the basics to create/delete holograms and have them show up in a ‘shared manner’ across multiple devices albeit with a very limited user experience and the trade-offs that come with using the UDP multicast mechanism and Azure blob storage.
The mechanism is meant to support automatically creating world anchors as they are needed and the API is reduced down to a couple of calls to Create/Delete.
There’s one (small) Unity package to import into a solution and just one object to drop into the Unity scene.
So, there’s some basic pieces there but it would be nice to;
- Create objects other than primitives
- Transform objects after they are created and have those transformations mirrored to other devices.
- Have some ‘memory’ of messages that a client has missed such that not all clients have to join a scene at the same time in order to view the shared content.
I’m not sure whether I’ll have time to get through all of that but if I do then you’ll see some more posts in this series looking at some of those areas.
Pingback: Experiments with Shared Holograms and Azure Blob Storage/UDP Multicasting (Part 3) – Mike Taulty
Pingback: Experiments with Shared Holograms and Azure Blob Storage/UDP Multicasting (Part 5) – Mike Taulty
Pingback: Experiments with Shared Holograms and Azure Blob Storage/UDP Multicasting (Part 6) – Mike Taulty | https://mtaulty.com/2017/12/29/experiments-with-shared-holograms-and-azure-blob-storage-udp-multicasting-part-2/ | CC-MAIN-2022-05 | refinedweb | 2,838 | 55.78 |
#include <OBD.h>

COBD obd;

void setup()
{
    // start serial communication at the adapter defined baudrate
    Serial.begin(OBD_SERIAL_BAUDRATE); // 38400 bauds
    // initiate OBD-II connection until success
    while (!obd.Init());
}

void loop()
{
    int value;
    if (obd.ReadSensor(PID_RPM, value)) {
        Serial.println(value);
    }
}
#include <NewSoftSerial.h>

//Create an instance of the new soft serial library to control the serial LCD
//Note, digital pin 3 of the Arduino should be connected to Rx of the serial LCD.
NewSoftSerial lcd(2,3);

//This is a character buffer that will store the data from the serial port
char rxData[20];
char rxIndex=0;

//Variables to hold the speed and RPM data.
int vehicleSpeed=0;
int vehicleRPM=0;

void setup(){
  //Both the Serial LCD and the OBD-II-UART use 9600 bps.
  lcd.begin(9600);
  Serial.begin(9600);
  //Clear the old data from the LCD.
  lcd.print(254, BYTE);
  lcd.print(1, BYTE);
  //Put the speed header on the first row.
  lcd.print("Speed: ");
  lcd.print(254, BYTE);
  //Put the RPM header on the second row.
  lcd.print(128+64, BYTE);();
}

void loop(){
  //Delete any data that may be in the serial port before we begin.
  Serial.flush();
  //Set the cursor in the position where we want the speed data.
  lcd.print(254, BYTE);
  lcd.print(128+8, BYTE);
  //Clear out the old speed data, and reset the cursor position.
  lcd.print(" ");
  lcd.print(254, BYTE);
  lcd.print(128+8, BYTE);
  /.print(254, BYTE);
  lcd.print(128 + 69, BYTE);
  //Clear the old RPM data, and then move the cursor position back.
  lcd.print(" ");
  lcd.print(254, BYTE);
  lcd.print(128+69, BYTE);
  /; } } }}
Create a Python function¶
In this basic example we are going to create a Function object (ie usable in the OpenTURNS context) from a pure Python function.
The pure Python function to wrap must accept a sequence of floats and return a sequence of float.
In [1]:
from __future__ import print_function import openturns as ot import math as m
In [2]:
# define a pure Python function from R^3 to R^2 def regularFunc(X): x0, x1, x2 = X y0 = x0 + x1 + x2 y1 = (x1 - 1.0) * m.exp(x0) * x2 return [y0, y1]
In [3]:
# create a Function object from a regular Python function function = ot.PythonFunction(3, 2, regularFunc)
In [4]:
# evaluate the function on a Point x = [1.0, 2.0, 3.0] print('x=', x, 'f(x)=', function(x))
x= [1.0, 2.0, 3.0] f(x)= [6,8.15485]
In [5]:
# evaluate the function on a Sample xs = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]] print('xs=', xs, '\nf(xs)=', function(xs))
xs= [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]] f(xs)= [ y0 y1 ] 0 : [ 6 8.15485 ] 1 : [ 15 1310.36 ]
In [6]:
# now we can use the Function object services such as the gradient function.gradient(x)
Out[6]:
[[ 1 8.15485 ]
[ 1 8.15485 ]
[ 1 2.71828 ]]
Performance issues¶
When this function is used internally to evaluate a Sample, it loops over its points. This requires many memory allocations; moreover this loop is done in Python, it may thus be slow if Sample is large. We can define a function to operate on a Sample, and return a Sample.
For maximum performance, the argument is in fact not a Sample, but a wrapper object which contains a pointer to the data. When using NumPy arrays without copies and loops, performance is similar to C code, but the Python definition is somewhat convoluted; please refer to the NumPy documentation to learn how to efficiently define such functions.
In [7]:
# define the same function on a Sample
import numpy as np

def regularFuncSample(X):
    # Create a numpy array with the contents of X without copy
    xarray = np.array(X, copy=False)
    # Get columns as vectors, there is also no copy
    x0, x1, x2 = xarray.T
    # Allocate a numpy array to store result
    y = np.zeros((len(X), 2))
    y[:,0] = x0 + x1 + x2
    y[:,1] = (x1 - 1.0) * np.exp(x0) * x2
    return y
In [8]:
# create a Function object from a regular Python function
functionSample = ot.PythonFunction(3, 2, func_sample=regularFuncSample)
In [9]:
# evaluate the function on a Sample
print('xs=', xs, '\nf(xs)=', functionSample(xs))
xs= [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
f(xs)= [ y0      y1      ]
0 : [    6       8.15485 ]
1 : [   15    1310.36    ]
In [10]:
# evaluate the function on a Point
print('x=', x, 'f(x)=', functionSample(x))
x= [1.0, 2.0, 3.0] f(x)= [6,8.15485]
The most efficient solution is to provide evaluations both on Point and Sample. This requires two Python function definitions, but if your code takes a lot of time, you should consider this option.
In [11]:
functionFast = ot.PythonFunction(3, 2, func=regularFunc, func_sample=regularFuncSample) | http://openturns.github.io/openturns/master/examples/functional_modeling/python_function.html | CC-MAIN-2018-17 | refinedweb | 536 | 66.44 |
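As a sanity check, the point and Sample implementations should agree on every input. The following self-contained sketch verifies that with plain NumPy (no OpenTURNS required; it also uses np.asarray, since np.array(X, copy=False) raises on list input in NumPy 2):

```python
import math

import numpy as np


def regularFunc(X):
    # Point version: R^3 -> R^2
    x0, x1, x2 = X
    return [x0 + x1 + x2, (x1 - 1.0) * math.exp(x0) * x2]


def regularFuncSample(X):
    # Sample version: one vectorized pass over all points
    xarray = np.asarray(X)      # no copy when X is already an ndarray
    x0, x1, x2 = xarray.T
    y = np.zeros((len(X), 2))
    y[:, 0] = x0 + x1 + x2
    y[:, 1] = (x1 - 1.0) * np.exp(x0) * x2
    return y


xs = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
looped = np.array([regularFunc(x) for x in xs])   # one call per point
vectorized = regularFuncSample(xs)                # one call in total
assert np.allclose(looped, vectorized)
```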
I haven't used C++ for that long and I have a few questions. I checked the FAQ which didn't have the answers i was looking for.
-When do I know when to put the int main() into my program? I was looking at the tutorial and it's usually right at the beginning, right after "using namespace std;", but in the basic structures tutorial it's introduced later.
-Are the spaces necessary after brackets and such?
-What exactly is an argument? It's been mentioned before but it never really went into detail (unless it did and I forgot) Sorry if this is a stupid question.
-I have no clue what the use of a pointer is...
-When i tried to write an extremely basic file that would show a certain text, only the first word came up. This is the code
Code:
#include <fstream>
#include <iostream>
using namespace std;

int main()
{
    char str[10];
    ofstream a_file ( "example.txt" );
    a_file<<"This text will now be inside example.txt";
    a_file.close();
    ifstream b_file ("example.txt" );
    b_file>> str;
    cout<< str <<"\n";
    cin.get();
}
The only word it shows is "This". If you spot an error, please tell me.
-What's the point of structures and classes? they dont seem to do much.
-How can I make a loop that will make something happen at least twice? When I try to make a do...while loop it either only does it once or it keeps doing it into infinity. How would I make it repeat only 5 times?
I know that these are all extremely basic things that I should probably know, but from the tutorials I just couldn't figure out what the heck some of this stuff does.
Any help would be appreciated. | http://cboard.cprogramming.com/cplusplus-programming/110768-lot-questions.html | CC-MAIN-2016-30 | refinedweb | 309 | 83.96 |
Starting with version 6.0, the Datadog Agent is able to ingest metrics via a Unix Domain Socket (UDS), as an alternative to UDP, when running on Linux systems.
While UDP works great on localhost, it can be a challenge to set up in containerized environments. Unix Domain Sockets allow you to easily establish the connection via the socket file.
Edit your datadog.yaml file to set the dogstatsd_socket option to the path where DogStatsD should create its listening socket:

dogstatsd_socket: /var/run/datadog/dsd.socket
Then restart your Agent. You can also set the socket path via the DD_DOGSTATSD_SOCKET environment variable.
The following DogStatsD client libraries natively support UDS traffic:
Refer to the library’s documentation on how to enable UDS traffic.
Note: As with UDP, enabling client-side buffering is highly recommended to improve performance on heavy traffic. Refer to your client library’s documentation for instructions.
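Whatever client library is used, the bytes on the wire are the same plain-text DogStatsD datagrams, simply written to a Unix datagram socket instead of a UDP port. A minimal self-contained sketch with Python's standard library (the socket path and metric name below are illustrative, not Datadog defaults):

```python
import os
import socket
import tempfile

# Stand-in for the Agent: bind a datagram socket at the configured path.
sock_path = os.path.join(tempfile.mkdtemp(), "dsd.socket")
agent = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
agent.bind(sock_path)

# Stand-in for a client: send one counter metric in DogStatsD text format.
client = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
client.connect(sock_path)
client.send(b"example.metric:1|c|#env:dev")

payload = agent.recv(1024)
print(payload.decode())  # → example.metric:1|c|#env:dev
client.close()
agent.close()
```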
With origin detection, traffic received over UDS is tagged with the same container tags as Autodiscovery metrics. Note: the container_id, container_name, and pod_name tags are not added, to avoid creating too many custom metric contexts.
To use origin detection, enable the dogstatsd_origin_detection option in your datadog.yaml, or set the environment variable DD_DOGSTATSD_ORIGIN_DETECTION=true, and restart your Agent.
When running inside a container, DogStatsD needs to run in the host PID namespace for origin detection to work reliably. You can enable this via the docker --pid=host flag.
Note: This is supported by ECS with the parameter "pidMode": "host" in the task definition of the container.
This option is not supported in Fargate. For more information, see the AWS documentation.
Adding UDS support to existing libraries can be easily achieved as the protocol is very close to UDP. Implementation guidelines and a testing checklist are available in the datadog-agent wiki.
Task 1 Advanced Debug Features
The TI Code Composer Studio (CCS) IDE and other IDEs provide many debugging tools to help with software development.
- The MSP432 MCUs introduce a number of advanced debugging capabilities.
- CoreSight components are the ARM debugging tools. The available debugging hardware modules vary depending on the processor selected. Please refer to the slides for a detailed discussion of the ARM CoreSight components.
- Three particular components can be used for debugging purposes: Embedded Trace Macrocell (ETM); Instrumentation Trace Macrocell (ITM); Data Watchpoint and Trace Unit (DWT)
- The MSP432 features the ITM and DWT (ETM is not present), both of which are configured through the Trace Port Interface Unit (TPIU) and output through the Serial Wire Output (SWO) pin. The CCS Hardware Trace Analyzer tools use a Serial Wire Viewer (SWV) to collect the trace data on the SWO, and thus the feature is referred to as SWO Trace.
- This lab focuses on how to leverage the SWO Trace debugging tools with the specific ARM Cortex-M4F implementation on MSP432 MCUs.
Different IDEs offer individual implementations of the ITM and DWT functions.
- TI CCS IDE supports the following advanced debug features: Statistical Function Profiling; Data Variable Tracing; Interrupt Profiling; Custom Core Trace
- These four use cases together are present in the Hardware Trace Analyzer menu in CCS.
- To enable these features, TI XDS110 or TI XDS200 debugger hardware is required. The TI XDS110 is already integrated inside the TI LaunchPad including the MSP432 LaunchPad.
Task 1.1 Configure the device
Expand the project (you can use your previous CCS projects) in Project Explorer, and open MSP432P401R.ccxml in the
Keep the “MSP432P401R” selected, then click the “Target Configuration” in the right part. Click the “TI XDS110 USB Debug Probe”, you will get the following screen
You can select the JTAG or SWD mode in the last part of the Connection Properties. After you change the debug mode (e.g., JTAG or SWD), you can click "Test Connection" to verify that the connection is successful.
- JTAG and SWD should both work without any modifications to the hardware
- You should change to the third debug mode, "SWD Mode – Aux COM port is target TDO pin" (which enables the SWO trace), for the following lab procedures.
- Press the Save button under the column of buttons, including Import…, New…, and so on
- Build the Project (click the hammer icon)
- Enter a debug session.
Right-click the project name, select Debug As → Code Composer Studio Debug Session. Alternatively, click the bug icon.
- In debug mode, go to Tools → Hardware Trace Analyzer, and choose the use case to use as shown in the following figure. Note that only one SWO Trace use case can be open at a time.

Task 1.2 Statistical Function Profiling
Under Tools → Hardware Trace Analyzer, click Statistical Function Profiling.
- There will be a popup prompt “Statistical Function Profiling Configuration”
- You can choose how often the ITM samples the PC. The fastest rate that the ITM can sample is one sample per 64 clock cycles, which is a hardware limitation.
- In this lab, we leave the default setup and select Start.
A new window appears in the bottom right corner of the debug view. The tab on the left traces the SWO trace output. The right tab displays the functions profiled.
- You can now click Run → Resume to start running the program. Alternatively, press F8.
- Either press Suspend (Alt + F8) when you want to pause the program and see the results, or set a breakpoint where you want it to stop.
- You can now check the data in the “Statistical Function Profiling”. You should be able to see the following figure.
Task 1.3 Data Variable Tracing
Data Variable Tracing helps you track the continuous value changes of a particular variable or memory address without halting the CPU. Under Tools → Hardware Trace Analyzer, click Data Variable Tracing.
- If a different use case is open, a popup prompts you to close the current use case. (You can only open one)
- The pop-up window lets you configure the use case. For the demo, the only thing you must do is choose a location to trace.
- For example, if we want to trace the value of a local variable "i", we can put "&i" in the address part of the following pop-up window.
- For the variable address, you can select an exact memory address in hex, or use a pointer to a variable. Use a global variable, or step past the initialization of the variable so the debugger knows the address of the variable (you can only set up the local variable "i" when the variable is visible, i.e., inside the scope of "i").
- In this lab, we do not change other settings. You can click "Start" to open the Data Variable Tracing view.
A new window appears in the bottom right corner of the debug view; the tab on the right is a graph of the value of the specified variable. You are now ready to run the use case.
- Click Run → Resume to start running the program
- Either press Suspend (Alt + F8) when you want to pause the program and see the results, or set a breakpoint where you want it to stop.
- Zoom out as necessary to see the full window as shown in the following figure. You are able to see the variable changes over time.
Task 1.4 Interrupt Profiling
Open one of your previous projects that contains a GPIO interrupt (e.g., clicking the button toggles the LED). We will use Interrupt Profiling to examine the interrupt.
- Build the Project (click the hammer icon)
- Enter a debug session.
Right-click the project name, select Debug As → Code Composer Studio Debug Session. Alternatively, click the bug icon.
- Under Tools → Hardware Trace Analyzer, click Interrupt Profiling.
- Press Start to continue in the popup window.
- A new window appears in the bottom right corner of the debug view.
- The tab on the left traces the SWO trace output. The middle tab holds an interrupt graph; the right tab contains a detailed summary about the interrupts.
- Click Run → Resume to start running the program.
- Click the switch button multiple times to trigger the interrupt
- Either press Suspend (Alt + F8) when you want to pause the program and see the results, or set a breakpoint where you want it to stop.
- You should be able to see the following interrupt summary view
You can switch to the following graph view (the middle tab) and zoom out as necessary to see all the interrupts.
- You can see the PORT1 interrupts (your button click) vs the time, i.e., the green dot.
- Other interrupts are used for the RTOS, e.g., TA0 as the clock source of the RTOS.
Task 1.5 Custom Core Trace
Custom Core Trace is unique in that the ITM does not use hardware packets to trace. Rather, for the most basic version, users must send software messages to an ITM port manually. You can use the ITM trace to replace debug message output over the UART pin (saving the UART for other sensors).
- In this lab, we use the same sample project imported in Task 1.4.
- Software messages are application-initiated messages issued through the ITM (Instrumentation Trace Macrocell). The ITM in Cortex-M has 32 stimulus ports.
- CCS reserves port 0 as a character port which means any data written to port 0 is interpreted by CCS as characters and not binary values. Use port 0 for printing strings and use ports 1 to 31 for printing binary values.
Before we utilize the ITM trace, we need to create one header file (.h) for all the ITM functions.
Right-click the project and select New->Header File. Create a header file named "ITM.h" based on the following code:
const unsigned ITM_BASE_ADDRESS = 0xE0000000;
const unsigned ITM_NUM_PORTS = 32;
const unsigned NUM_TRIALS = 2;

typedef volatile unsigned* ITM_port_t;

ITM_port_t getportnum(unsigned portnum)
{
    unsigned port_num, port_address;
    ITM_port_t port;

    // Get this port address
    port_address = ITM_BASE_ADDRESS + (4*portnum);
    port = (ITM_port_t)port_address;
    return port;
}

void delay(unsigned num_loops)
{
    unsigned i;
    for (i=0; i<num_loops; i++)
    {
        asm ("NOP");
    }
}

void port_wait(ITM_port_t port)
{
    delay(10);
    /* Wait while fifo ready */
    while (*port == 0);
}

/* Send a nul terminated string to the port */
void ITM_put_string(ITM_port_t port, const char* data)
{
    unsigned datapos = 0;
    unsigned portpos = 0;
    unsigned portdata = 0;

    while('\0' != data[datapos])
    {
        port_wait(port);
        portdata = 0;
        /* Get the next 4 bytes of data */
        for (portpos=0; portpos<4; ++portpos)
        {
            portdata |= data[datapos] << (8*portpos);
            if ('\0' != data[datapos])
            {
                ++datapos;
            }
        }
        /* Write the next 4 bytes of data */
        *port = portdata;
    }
}

/* Send a 32 bit value to the port */
void ITM_put_32(ITM_port_t port, unsigned data)
{
    port_wait(port);
    *port = data;
}

/* Send a 16 bit value to the port */
void ITM_put_16(ITM_port_t port, unsigned short data)
{
    /* Cast port for 16-bit data */
    volatile unsigned short* myport = (volatile unsigned short*)port;
    port_wait(port);
    *myport = data;
}

/* Send a 8 bit value to the port */
void ITM_put_08(ITM_port_t port, unsigned char data)
{
    /* Cast port for 8-bit data */
    volatile unsigned char* myport = (volatile unsigned char*)port;
    port_wait(port);
    *myport = data;
}
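One detail worth noting in ITM_put_string above: it packs up to four characters of the string into each 32-bit stimulus-port write, low byte first. That packing can be sanity-checked off-target, for example in Python:

```python
def pack4(chunk):
    """Mirror the packing loop in ITM_put_string: byte i goes to bits 8*i."""
    portdata = 0
    for portpos, ch in enumerate(chunk[:4]):
        portdata |= ord(ch) << (8 * portpos)
    return portdata

# 'a'=0x61, 'b'=0x62, 'c'=0x63, 'd'=0x64 -> one little-endian 32-bit word
assert pack4("abcd") == 0x64636261
print(hex(pack4("abcd")))  # → 0x64636261
```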
In the project file (.c), include the ITM.h file
#include "ITM.h"
If you want to output one debug message (similar to the UART terminal debug message), we can add the following code at any place you like
ITM_put_string((ITM_port_t)ITM_BASE_ADDRESS, "Helloworld-start\n");
We use the ITM_BASE_ADDRESS to select the port 0, and output strings. CCS reserves port 0 as a character port which means any data written to port 0 is interpreted by CCS as characters.
Other ports 1 to 31 can be used to print binary values, for example, you can add the following code in the application
ITM_put_32(getportnum(1), 33);
delay(100);
ITM_put_32(getportnum(1), 0x12345678);
delay(100);
ITM_put_16(getportnum(2), 33);
delay(100);
ITM_put_16(getportnum(2), 0x9abc);
delay(100);
ITM_put_string((ITM_port_t)ITM_BASE_ADDRESS, "end\n");
delay(10000);
ITM_put_32 is one of the functions we defined; it sends a 32-bit value to the given stimulus port.
Note: you can select the code section,
To use software messages, select Custom Core Trace from the Hardware Trace Analyzer menu as shown in the following figure
One configuration window will pop up.
After you continue, you can adjust the settings in the hardware trace configuration popup window (shown after you select Custom Core Trace from the Hardware Trace Analyzer menu).
For example, you can click the Advanced Settings, and select the “ITM SW Messages” to change the character or binary output of each channel as shown in the following figure.
You also can click the button in the upper-left corner to create a new trigger. You can configure various trigger types as shown in the following figure. Please check TI’s document () for more advanced features.
| https://kaikailiu.cmpe.sjsu.edu/uncategorized/msp432-lab5-advanced-debug/ | CC-MAIN-2022-05 | refinedweb | 1,773 | 61.36 |
I have recently come across this article by Ivan Shcherbakov called 10+ powerful debugging tricks with Visual Studio. Though the article presents some rather basic tips for debugging with Visual Studio, there are others at least as helpful as those. Therefore I put together a list of ten more debugging tips for native development that work with at least Visual Studio 2008. (If you work with managed code, the debugger has even more features and there are several articles on CodeProject that present them.) Here is my list of additional tips:
For more debugging tips check the second article in the series, 10 Even More Visual Studio Debugging Tips for Native Development.
It is possible to instruct the debugger to break when an exception occurs, before a handler is invoked. That allows you to debug your application immediately after the exception occurs. Navigating the Call Stack should allow you to figure out the root cause of the exception.
Visual Studio allows you to specify which category or particular exception you want to break on. A dialog is available from the Debug > Exceptions menu. You can specify native (or managed) exceptions, and aside from the default exceptions known to the debugger, you can add your own custom exceptions.
Here is an example with the debugger breaking when a std::exception is thrown.

It sometimes happens that the variable you want to watch goes out of scope. When that happens, the variable in the Watch window is disabled and cannot be inspected any more (nor updated), even if the object is still alive and well.
It is possible to continue to watch it in full capability if you know the address of the object. You can then cast the address to a pointer of the object type and put that in the Watch window.
In the example below, _foo is no longer accessible in the Watch window after stepping out of do_foo(). However, by taking its address and casting it to foo*, we can still watch the object.
If you work with large arrays (let's say at least a few hundred elements, but maybe even fewer), expanding the array in the Watch window and looking for some particular range of elements is cumbersome, because you have to scroll a lot.
And if the array is allocated on the heap you can't even expand its elements in the Watch window.
There is a solution for that. You can use the syntax (array + <offset>), <count> to watch a particular range of <count> elements starting at the <offset> position (of course, array here is your actual object).
If you want to watch the entire array, you can simply say array, <count>.
If your array is on the heap, then you can expand it in the Watch window, but to watch a particular range you'd have to use a slightly different syntax: ((T*)array + <offset>), <count> (notice this syntax also works with arrays on the heap). In this case T is the type of the array's elements.
If you work with MFC and use the "array" containers from it, like CArray, CDWordArray, CStringArray, etc., you can of course apply the same filtering, except that you must watch the m_pData member of the array, which is the actual buffer holding the data.
Many times when you debug the code you probably step into functions you would like to step over, whether it's constructors, assignment operators or others. One of those that used to bother me the most was the CString constructor.
Here is an example where stepping into the take_a_string() function first steps into CString's constructor.
void take_a_string(CString const &text)
{
}
void test_string()
{
take_a_string(_T("sample"));
}
Luckily it is possible to tell the debugger to step over some methods, classes or entire namespaces.
The way this was implemented has changed. Back in the days of VS 6 this used to be specified through the autoexp.dat file.
Since Visual Studio 2002 this was changed to Registry settings. To enable stepping over functions you need to add some values in Registry (you can find all the details here):
To skip stepping into any CString method I have added the following rule:
Having this enabled, even when you press to step into take_a_string() in the above example the debugger skips the CString's constructor.
Occasionally you might need to attach the debugger to a program, but you cannot do it with the Attach window (maybe because the break would occur too fast to catch by attaching), nor can you start the program in the debugger in the first place. You can break the program and give the debugger a chance to attach by calling the __debugbreak() intrinsic.
void break_for_debugging()
{
__debugbreak();
}
There are actually other ways to do this, such as triggering interrupt 3 with __asm int 3;, but this only works with x86 platforms (inline assembly is no longer supported for x64 in C++).
There is also a DebugBreak() function, but this is not portable, so the intrinsic is the recommended method.
When your program executes the intrinsic it stops, and you get a chance to attach a debugger to the process.
Additional readings:
It is possible to show a particular text in the debugger's output window by calling OutputDebugString. If there is no debugger attached, the function does nothing.
Memory leaks are an important problem in native development, and finding them can be a serious challenge, especially in large projects. Visual Studio provides reports about detected memory leaks, and there are other applications (free or commercial) to help you with that. In some situations though, it is possible to use the debugger to break when an allocation that eventually leaks is made. To do this, however, you must find a reproducible allocation number (which might not be that easy). If you are able to do that, then the debugger can break the moment that allocation is performed.
Let's consider this code that allocates 8 bytes, but never releases the allocated memory. Visual Studio displays a report of the leaked objects, and running this several times I could see it's always the same allocation number (341).
void leak_some_memory()
{
char* buffer = new char[8];
}
Dumping objects ->
d:\marius\vc++\debuggingdemos\debuggingdemos.cpp(103) : {341} normal block at 0x00F71F38, 8 bytes long.
Data: < > CD CD CD CD CD CD CD CD
Object dump complete.
The steps for breaking on a particular (reproducible) allocation are:
Following these steps for my example with allocation number 341 I was able to identify the source of the leak:
Debug and Release builds are meant for different purposes. While a Debug configuration is used for development, a Release configuration, as the name implies, should be used for the final version of a program. Since the application is supposed to meet the required quality for publishing, such a configuration contains optimizations and settings that break the debugging experience of a Debug build. Still, sometimes you'd like to be able to debug the Release build the same way you debug the Debug build. To do that, you need to make some changes to the configuration.
However, in this case one could argue you no longer debug the Release build, but rather a mixture of the Debug and the Release builds.
There are several things you should do; the mandatory ones are:
Another important debugging experience is remote debugging. This is a larger topic, covered many times, so I just want to summarize a bit.
Remote Debugging Monitor downloads:
The debugging tips presented in this article and the original article that inspired this one should provide the necessary tips for most of the debugging experiences and problems. To get more information about these tips I suggest following the additional readings.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
if(error_condition_met)
{
std::stringstream buffer;
buffer << __FILE__ << "(" << __LINE__ << "): Something went wrong\n";
::OutputDebugString(buffer.str().c_str());
}
C:\bigdavedev\my_test_project\main.cpp(56): Something went wrong
boost\:\:shared_ptr\<.*=NoStepInto
A builder for WordPress plugins created with the Lava Framework.
Lofty (named after the character from Bob The Builder) is a build script for WordPress plugins created using the Lava framework.
$ npm install -g lofty
Lofty uses a configuration file (lofty.yaml)
test_server: C:\some location\
If you have a local WordPress installation, you can set Lofty to automatically copy the build files to the test server by specifying its path here.
Plugin definition file (plugin.yaml)
name: Blank Plugin
version: 1.0
description: Blank Plugin - update configuration in lava.yaml
url:
author: Daniel Chatfield
author_url:
license: GPLv2
class_namespace: Volcanic_Pixels_Blank_Plugin
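A plugin builder typically turns a definition like this into the WordPress plugin header comment. Lofty's actual implementation is not shown here, so the following Python sketch only illustrates the idea (field names taken from the plugin.yaml above):

```python
def wp_header(meta):
    """Render a WordPress-style plugin header from a plugin.yaml-like dict."""
    lines = [
        "/*",
        "Plugin Name: " + meta["name"],
        "Version: " + str(meta["version"]),
        "Description: " + meta["description"],
        "Author: " + meta["author"],
        "License: " + meta["license"],
        "*/",
    ]
    return "\n".join(lines)

meta = {
    "name": "Blank Plugin",
    "version": 1.0,
    "description": "Blank Plugin - update configuration in lava.yaml",
    "author": "Daniel Chatfield",
    "license": "GPLv2",
}
print(wp_header(meta))
```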
This creates a development build (no minifying) and puts it in the build directory.
$ lofty
When you are ready to distribute, this will create a copy in the dist folder.
$ lofty -d
$ lofty -v | https://www.npmjs.com/package/lofty | CC-MAIN-2015-48 | refinedweb | 134 | 58.38 |
We are going to learn how to make an application to control arduino.
We will see how to:
- Easily make a Windows application with Python.
- Convert it as an executable to use it anywhere.
- Manage Serial ports (detection/reconnection) automatically with 1 line of code
We can even reuse the code to manage as many Arduinos as we want.
But before I tell you more about it, let's try utest first!
We will use it to control the internal led on an Arduino nano.
As always, all the documentation/code is available on GitHub in English/French.
Step 1: Upload Arduino Code
First we need to upload the code to the arduino.
- Download the source code:
- Download drivers for arduino nano :
On the Arduino software ( )
- Copy utest folder into your sketch folder
- Upload utest.ino
(Tools: Arduino Nano / Processor : Atmega328)
You will need an Arduino nano clone (ch340g), as the application will only detect it.
You can use the serial monitor to test your arduino:
No Line Ending / 115200
UTest : return OK
ON : Turn on internal led (pin13)
OFF: Turn off internal led (pin13)
Step 2: Control the Arduino With Utest
utest is a portable application; you don't need to install anything to make it work.
- Download the application :
- Click on utest.exe
utest will automatically find the arduino
You can try to unplug it / plug it into another USB port, and it will reconnect.
utest might not work on Windows 7 due to missing .dll
This shouldn't happen if your computer is up to date
Source:...
Step 3: Create Your Own Application
Let's see how to reuse this application, to make your own application.
First we need to install python 3 to modify it.
- Download python 3 ()
- During the installation, tick Add Python 3.5 to PATH
Then we need to install pySerial, to communicate with our arduino.
- Open a command prompt (Windows key + cmd)
- Type:
pip install pyserial
Finally, test the application; it is available in the source code ( ) at apps/utest/
- Open a command prompt
- Go to the source code folder (apps/utest/)
- Type:
python utest.py
Step 4: Create the Interface
We have everything we need to modify our application.
Let's see
- how to manage our arduino
- how to build a graphical interface with tkinter
In order to manage the arduino the easiest way possible, everything is handled by the module lib/usb.py
USB
For now, this module has only two commands:
usb = USB.Device(...)
- Connect to every serial port which has CH340 in its name
- Send UTest to the serial port
- If it receives "OK", it will connect to it
from lib import USB

device_name = "CH340"            #Device name in Windows
device_type = "UTest"            #Device type (in the arduino sketch)
device_return_string = "OK"      #Answer when a response is correct
device_baudrate = 115200         #Baudrate

usb = USB.Device(device_name, device_type, device_return_string, device_baudrate, status)
Everything is inside a separate thread to avoid blocking the application.
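The probe-and-handshake logic described above can be unit-tested without any hardware by substituting a fake serial port. The sketch below is illustrative only (FakeSerial and probe are made-up names, not the actual lib/USB.py API):

```python
class FakeSerial:
    """Stands in for serial.Serial: answers "OK" once "UTest" is written."""
    def __init__(self, answers):
        self.answers = answers
        self.last_write = None

    def write(self, data):
        self.last_write = data

    def readline(self):
        return self.answers.get(self.last_write, b"")


def probe(port, device_type=b"UTest", expected=b"OK"):
    # Send the identification string and check the device's answer.
    port.write(device_type)
    return port.readline().strip() == expected


good = FakeSerial({b"UTest": b"OK\r\n"})
bad = FakeSerial({})
assert probe(good) is True
assert probe(bad) is False
print("probe(good) =", probe(good))  # → probe(good) = True
```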
usb.write(string)
- Send a string to the arduino
- If the serial port is not available, it will try to reconnect
GUI (TKinter)
utest uses tkinter to manage the GUI (graphical interface)
You can find more information on tkinter here :
- To create a window:
from tkinter import *

root = Tk()
- To create a button ON
Button(text="on",command=on).pack()
- Create an action for the button ON
def on(): print("on")
- Create a label
status = Label(text="Searching...")
status.pack()
If you want to modify a widget, you need to save it into a variable and call .pack() on a separate line
We send the label to the USB Module to display the current state of the connection
usb = USB.Device(...,status)
Finally, we start the GUI loop.
root.mainloop()
Step 5: Add Commands to the Arduino
Our interface is ready,
but we need to teach our arduino to understand the commands we will send.
Serial Functions
We use two functions to manage the serial communication.
serialManager()
Check the serial port and convert any messages to a string (inside readString)
serialCheck()
If UTest is received, answer OK
Device name
You can change the name of the usb device in the first line
const String usb_name = "UTest";
Add commands
We manage our commands inside void loop()
void loop() {
  serialManager();
  //If string received
  if (readString.length() > 0) {
    serialCheck();
    if (readString == "ON"){
      digitalWrite(13,1);
    }
    if (readString == "OFF"){
      digitalWrite(13,0);
    }
  }
  //We clean the serial buffer
  readString = "";
}
For each command, create a condition; for example, to turn on the internal led when "ON" is sent:
if (readString == "ON"){
  digitalWrite(13,1);
}
Step 6: Convert Your App Into a Portable Executable
The Arduino code and the application are ready to be bundled into an .exe
- Install pyinstaller; we will use it to convert our application into a single executable file
pip install pyinstaller
- You can compile your application using the script compile.bat
pyinstaller --onefile --noconsole utest.py
- If you want to display debug messages, use this command instead:
pyinstaller --onefile utest.py
You should now have a /dist/utest.exe file
You will probably get warnings about api-ms...dll files.
This shouldn't be a problem as these DLL (Universal C Runtime) are preinstalled on Windows 10 and previous windows should also have them if they are up to date.
Step 7: ... to Be Continued
I hope this was useful, and that it will inspire you to create Arduino applications!
With some modification, this application should work on macOS / Linux.
Follow me on Instructables/YouTube if you are interested in this topic.
Next time, we will learn how to improve our application and use it to control an LED strip!
Hello all,
I need help with this assignment I am doing for my data structures class. I need to make a Depth-First Search Algorithm.
In order to do this, I have thought of a clever (well, maybe not that clever, but hey, I am still a student so gimme a break) way to take care of this. I have decided to use a 2-dimensional array and make an adjacency matrix. So, suppose we have the nodes {0,1,2,3,4,5} and we have the following connections:
{(0,1)(0,2)(0,5)(1,3)(2,3)(2,5)(3,4)}, which means we have an adjacency matrix like so:
012345
0 |011001|
1 |000100|
2 |000101|
3 |000010|
4 |000000|
5 |000000|
According to the adjacency matrix, and assuming that our start is 0 and our goal is 4, one possible path might be [0, 1, 3, 4]
Now, here's the problem: I am having trouble populating this 2-dimensional array. Here's the code I have so far. The explanation of the code is after the code itself
The input is supposed to the program is as follows:The input is supposed to the program is as follows:Code:
#include <iostream.h>
void printArray( int a[5][5] );
int main(){
int num1, num2;
const int delim = -1;
int array_test[5][5];
while (num1 != delim){
cin >> num1;
cin >> num2;
for ( int i = 0; i <= 5; i++ ){
array_test[num1][num2] = 1;
}
printArray(array_test);
return 0;
}
}
void printArray( int a[ 5 ][ 5 ] )
{
for ( int i = 0; i <= 5; i++ ){
for ( int j = 0; j <= 5; j++ )
cout << a[ i ][ j ] << ' ';
cout << endl;
}
}
//the first two numbers are the start and the goal
//i.e - start at node 0 and end at node 4
04
//the rest are connections from node - to - node
01
02
05
13
23
25
34
//when user wants to end edge input type in -1 for both nodes
-1-1
First of all, I have no idea how to give the user this kind of flexibility in input(isn't it easier to put into a regular text file and then read from the text file?). Second of all, if anyone would run my code, they would see that my array does not get printed up right. I will be eternally grateful to whomever can help me. Thank you. | http://cboard.cprogramming.com/cplusplus-programming/27667-depth-first-search-using-matrices-printable-thread.html | CC-MAIN-2014-52 | refinedweb | 396 | 66.3 |
Those was, and hopefully still is amazing. It takes in a file, guesses the format. If it’s a closed proprietary format and it had the right xena plugin it would convert it to an open standard and optionally turned it into a .xena file ready to be ingested into the digital repository for long term storage.
We did this knowing that proprietary formats change so quickly and if you want to store a file format long term (20, 40, 100 years) you won’t be able to open it. An open format on the other hand, even if there is no software that can read it any more is open, so you can get your data back.
Once a file had passed through Xena, we’d use DPR to ingest it into the archive. Once in the archive, we had other opensource daemons we wrote which ensured we didn’t lose things to bitrot, we’d keep things duplicated and separated. It was a lot of work, and the size of the space required kept growing.
Anyway, now I’m an OpenStack Swift core developer, and wow, I wish Swift was around back then, because it’s exactly what is required for the DPR side. It duplicates, infinitely scales, it checks checksums, quarantines and corrects. Keeps everything replicated and separated and does it all automatically. Swift is also highly customise-able. You can create your own middleware and insert it in the proxy pipeline or in any of the storage node’s pipelines, and do what ever you need it to do. Add metadata, do something to the object on ingest, or whenever the object is read, updating some other system.. really you can do what ever you want. Maybe even wrap Xena into some middleware.
Going one step further, IBM have been working on a thing called storlets which uses swift and docker to do some work on objects and is now in the OpenStack namespace. Currently storlets are written in Java, and so is Xena.. so this might also be a perfect fit.
Anyway, I got talking with Chris Smart, a mate who also used to work in the same team at NAA, so it got my mind thinking about all this and so I thought I’d place my rambling thoughts somewhere in case other archives or libraries are interested in digital preservation and needs some ideas.. best part, the software is open source and also free!
Happy preserving. | https://oliver.net.au/?p=277 | CC-MAIN-2018-39 | refinedweb | 413 | 69.82 |
The first lecture note given during java class is “In java file name and class name should be the same”. When the above law is violated a compiler error message will appear as below
Output:
javac Trial.java Trial.java:9: error: class Geeks is public, should be declared in a file named Geeks.java public class Geeks ^ 1 error
But the myth can be violated in such a way to compile the above file.
Step 1: javac Trial.java
Step1 will create a Geeks.class (byte code) without any error message since the class is not public.
Step 2: java Geeks
Now the output will be Hello worldbr>
The myth about the file name and class name should be same only when the class is declared in
public.
The above program works as follows :
Now this .class file can be executed. By the above features some more miracles can be done. It is possible to have many classes in a java file. For debugging purposes this approach can be used. Each class can be executed separately to test their functionalities(only on one condition: Inheritance concept should not be used).
But in general it is good to follow the myth.
For example:
When the above file is compiled as javac Trial.java will create two .class files as ForGeeks.class and GeeksTest.class .
Since each class has separate main() stub they can be tested individually.
When java ForGeeks is executed the output is For Geeks class.
When java GeeksTest is executed the output is Geeks Test class.
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above | https://tutorialspoint.dev/language/java/myth-file-name-class-name-java | CC-MAIN-2022-05 | refinedweb | 278 | 68.06 |
.
my $ns = new Net::DNS::Nameserver( LocalAddr => "10.1.2.3", LocalPort => "5353", ReplyHandler => \&reply_handler, Verbose => 1 ); my $ns = new Net::DNS::Nameserver( LocalAddr => ['::1' , '127.0.0.1' ], LocalPort => "5353", ReplyHandler => \&reply_handler, Verbose => 1, Truncate => 0 );
Creates a nameserver object. Attributes are:
LocalAddr IP address on which to listen. Defaults to INADDR_ANY. LocalPort Port on which to listen. Defaults to 53. ReplyHandler Reference to reply-handling subroutine Required. NotifyHandler Reference to reply-handling subroutine for queries with opcode NOTIFY (RFC1996) Verbose Print info about received queries. Defaults to 0 (off). Truncate Truncates UDP packets that are too big for the reply Defaults to 1 (on) IdleTimeout TCP clients are disconnected if they are idle longer than this duration. Defaults to 120 (secs)
The LocalAddr attribute may alternatively be specified as a list of IP addresses to listen the peerhost, the incoming query, and the name of the incoming socket (sockethost). It must either return the response code and references to the answer, authority, and additional sections of the response, or undef to leave the query unanswered. Common response codes are:
NOERROR No error FORMERR Format error SERVFAIL Server failure NXDOMAIN Non-existent domain (name doesn't exist) NOTIMP Not implemented REFUSED Query refused
For advanced usage it may also contain a headermask containing an hashref with the settings for the
aa,
ra, and
ad header bits. The argument is of the form
{.
Packet Truncation is new functionality for $Net::DNS::Nameserver::VERSION>830 and uses the Net::DNS::Packet::truncate method with a size determinde by the advertised EDNS0 size in the query, or 512 if EDNS0 is not advertised in the query. Only UDP replies are truncated. If you want to do packet runcation yourself you should set Truncate to 0 and use the truncate method on the reply packet in the code you use for the ReplyHandler.
Returns a Net::DNS::Nameserver object, or undef if the object couldn't be created. check if there is data to be read from the socket. If not it will return and you will have to call loop_once() again to check if there is any data waiting on the socket to be processed. In most cases you will have to count on calling "loop_once" twice.
A code fragment like:
$ns->loop_once(10); while( $ns->get_open_tcp() ){ $ns->loop_once(0); }
Would wait for 10 seconds for the initial connection and would then process all TCP sockets until none is left.
In scalar context returns the number of TCP connections for which state is maintained. In array context it returns IO::Socket objects, these could be useful for troubleshooting but be careful using them.
The following example will listen on port 5353 and respond to all queries for A records with the IP address 10.1.2.3. All other queries will be answered with NXDOMAIN. Authority and additional sections are left empty. The $peerhost variable catches the IP address of the peer host, so that additional filtering on its basis may be applied.
#!/usr/bin/perl use strict; use warnings; use Net::DNS::Nameserver; sub reply_handler { my ($qname, $qclass, $qtype, $peerhost,$query,$conn) = @_; my ($rcode, @ans, @auth, @add); print "Received query from $peerhost to ". $conn->{sockhost}. "\n"; $query->print; if ($qtype eq "A" && $qname eq "foo.example.com" ) { my ($ttl, $rdata) = (3600, "10.1.2.3"); my $rr = new Net::DNS::RR("$qname $ttl $qclass $qtype $rdata"); push @ans, $rr; $rcode = "NOERROR"; }elsif( $qname eq "foo.example.com" ) { $rcode = "NOERROR"; }else{ $rcode = "NXDOMAIN"; } # mark the answer as authoritive (by setting the 'aa' flag return ($rcode, \@ans, \@auth, \@add, { aa => 1 }); } my $ns = new Net::DNS::Nameserver( LocalPort => 5353, ReplyHandler => \&reply_handler, Verbose => 1 ) || die "couldn't create nameserver object\n"; $ns->main_loop;. Thus a UDP socket created listening to INADDR_ANY (all available IP-addresses) will reply not necessarily with the source address being the one to which the request was sent, but rather with the address that the operating system chooses. This is also often called "the closest address". This should really only be a problem on a server which has more than one IP-address (besides localhost - any experience with IPv6 complications here, would be nice). If this is a problem for you, a work-around would be to not listen to INADDR_ANY but to specify each address that you want this module to listen on. A separate set of sockets will then be created for each IP-address. | http://search.cpan.org/~nlnetlabs/Net-DNS-0.73/lib/Net/DNS/Nameserver.pm | CC-MAIN-2015-27 | refinedweb | 738 | 53.81 |
A reader wrote to me with an interesting Flash CS3 problem that had me stumped at first (mostly because I use FlexBuilder instead of Flash). I thought I’d post the answer here so we can all benefit from it.
The Problem
Ben H. writes…
I’ve been trying to get familiar with a “best practice” on [looping through the display list]. The trouble is, when I enter the following on the first frame of a blank .fla file:
trace("Number of Children in a blank SWF:"+stage.numChildren);
trace("Child 1:" + stage.getChildAt(0));
I get this:
Number of Children in a blank SWF:1
Child 1:[object MainTimeline]
Now that may seem grand, but I’ve drawn several shapes and have a text field and named movieclip instances on stage as well – so why [does it only count one child?]
Solution after the jump.
The Solution
The first thing I noticed here is that the trace statement seems to be tracing out an instance of the MainTimeline class. I tried seeing if the objects appear inside of the MainTimeline with this code (notice i had to cast the timeline to a DisplayObjectContainer to check its children):
[ftf]
import flash.display.*;
var timeline:DisplayObjectContainer = DisplayObjectContainer(stage.getChildAt(0));
trace(”Number of Children in a blank SWF:” + timeline.numChildren);
trace(”Child 1:” + timeline.getChildAt(0));
[/ftf]
Which displayed:
Number of Children in a blank SWF:3
Child 1:[object Shape]
OK. So the shapes are there, it’s just that they are inside the MainTimeline’s display list instead of in the Stage’s. Since I’m a little rusty on the Flash authoring tool, it took me a minute to realize that the MainTimeline is the class that Flash automatically creates when a Document Class is not specified by the developer. (By the way, I just want to say that I officially do not condone putting actions on frames when you’re using AS3. I really think you should always use a Document Class (which is a way to specify a class to use as your main timeline (I know, nested parentheses!)). You can use any class as long as it extends Sprite or MovieClip. In case you’re not familiar with this, Roger made a nifty screencast that shows how to set up your .FLA to use classes.) I was reminded that when you do define a Document Class such as this one:
[ftf]
package {
import flash.display.*;
public class Demo extends Sprite {
public function Demo() {
trace(”Number of Children in this SWF:” + numChildren);
trace(”Child 1:” + getChildAt(0));
}
}
}
[/ftf]
You simply access the children with
getChildren() or
this.getChildren() instead of referencing the Stage or the MainTimeline (because this is the main timeline). Try this on the first frame of the FLA (no don’t!) and you’ll get the same result.
So what’s the difference between the Stage and the Document Class? The Stage is a special type of Display Object that is ‘owned and operated’ by the Flash Player. It also contains some special properties, such as
frameRate, which affect all the other display lists. The Document Class, on the other hand, is defined by the developer and can be customized to fit your needs.
Good question Ben. It really is confusing that you’re drawing objects on the ’stage’ but calling
stage.getChildren() doesn’t return those objects.
You can also use root.
root == this == stage.getChildAt(0) for the main class
Hi,
Senocular wrote a great post about this topic:
Cheers,
Almog Kurtser | http://dispatchevent.org/mims/no-children-on-the-stage-a-confusing-flash-cs3-display-list-issue/ | crawl-002 | refinedweb | 591 | 73.37 |
Working With Zip Files In Python
Table Of Contents
Prerequisites To Work With Zip Files
- You must know the file handling of Python to understand Zip file handling. If you don't know the file handling, head over to the W3Schools File Handling section to learn.
- OOPS concept in Python
- Python concepts like conditionals, loops, functions, classes, etc.,
- If you don't know Python take DataCamp's free Intro to Python for Data Science course to learn Python language or read Pythons official documentation.
Open this link to download all of the Zip folders which I have used in the upcoming sections.
What is Zip File?
Zip is an archive file format which supports the lossless data compression. The Zip file is a single file containing one or more compressed files.
Uses for Zip File?
- Zip files help you to put all related files in one place.
- Zip files help to reduce the data size.
- Zip files transfer faster than the individual file over many connections.
zipfile Module
Explore all the methods and classes of the zipfile module using dir() method. See the code to get all the classes and methods of the zipfile module.
import zipfile # importing the 'zipfile' module print(dir(zipfile))
['BZIP2_VERSION', 'BadZipFile', 'BadZipfile', 'DEFAULT_VERSION', 'LZMACompressor', 'LZMADecompressor', 'LZMA_VERSION', 'LargeZipFile', 'MAX_EXTRACT_VERSION', 'PyZipFile', 'ZIP64_LIMIT', 'ZIP64_VERSION', 'ZIP_BZIP2', 'ZIP_DEFLATED', 'ZIP_FILECOUNT_LIMIT', 'ZIP_LZMA', 'ZIP_MAX_COMMENT', 'ZIP_STORED', 'ZipExtFile', 'ZipFile', 'ZipInfo', '_CD64_CREATE_VERSION', '_CD64_DIRECTORY_RECSIZE', '_CD64_DIRECTORY_SIZE', '_CD64_DISK_NUMBER', '_CD64_DISK_NUMBER_START', '_CD64_EXTRACT_VERSION', '_CD64_NUMBER_ENTRIES_THIS_DISK', '_CD64_NUMBER_ENTRIES_TOTAL', '_CD64_OFFSET_START_CENTDIR', '_CD64_SIGNATURE', '_CD_COMMENT_LENGTH', '_CD_COMPRESSED_SIZE', '_CD_COMPRESS_TYPE', '_CD_CRC', '_CD_CREATE_SYSTEM', '_CD_CREATE_VERSION', '_CD_DATE', '_CD_DISK_NUMBER_START', '_CD_EXTERNAL_FILE_ATTRIBUTES', '_CD_EXTRACT_SYSTEM', '_CD_EXTRACT_VERSION', '_CD_EXTRA_FIELD_LENGTH', '_CD_FILENAME_LENGTH', '_CD_FLAG_BITS', '_CD_INTERNAL_FILE_ATTRIBUTES', '_CD_LOCAL_HEADER_OFFSET', '_CD_SIGNATURE', '_CD_TIME', '_CD_UNCOMPRESSED_SIZE', '_ECD_COMMENT', '_ECD_COMMENT_SIZE', '_ECD_DISK_NUMBER', '_ECD_DISK_START', '_ECD_ENTRIES_THIS_DISK', '_ECD_ENTRIES_TOTAL', '_ECD_LOCATION', '_ECD_OFFSET', '_ECD_SIGNATURE', '_ECD_SIZE', '_EndRecData', '_EndRecData64', '_FH_COMPRESSED_SIZE', '_FH_COMPRESSION_METHOD', '_FH_CRC', '_FH_EXTRACT_SYSTEM', '_FH_EXTRACT_VERSION', '_FH_EXTRA_FIELD_LENGTH', '_FH_FILENAME_LENGTH', '_FH_GENERAL_PURPOSE_FLAG_BITS', '_FH_LAST_MOD_DATE', '_FH_LAST_MOD_TIME', '_FH_SIGNATURE', '_FH_UNCOMPRESSED_SIZE', '_SharedFile', '_Tellable', '_ZipDecrypter', '_ZipWriteFile', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_check_compression', '_check_zipfile', '_get_compressor', '_get_decompressor', 'binascii', 'bz2', 'compressor_names', 'crc32', 'error', 'importlib', 'io', 'is_zipfile', 'lzma', 'main', 'os', 're', 'shutil', 'sizeCentralDir', 'sizeEndCentDir', 'sizeEndCentDir64', 'sizeEndCentDir64Locator', 'sizeFileHeader', 'stat', 
'stringCentralDir', 'stringEndArchive', 'stringEndArchive64', 'stringEndArchive64Locator', 'stringFileHeader', 'struct', 'structCentralDir', 'structEndArchive', 'structEndArchive64', 'structEndArchive64Locator', 'structFileHeader', 'sys', 'threading', 'time', 'zlib']
You have seen a bunch of classes and methods right. But, you are not going to learn all of them. You will learn only some classes and methods to work with the Zip files.
Let's see some useful Exceptions, Classes, and Methods with brief explanations.
Exceptions
Exception is a message which is used to display the exact error as you like. In Python, you use try, except, finally keywords for the error handling.
If you are not familiar with the error handling, go to Pythons Error Handling documentation to learn the error handling.
Let's see all exceptions in zipfile module.
zipfile.BadZipFile
zipfile.BadZipFile is an exception in the zipfile module. This error will raise for Bad Zip files. See the example below.
## zipfile.BadZipFile import zipfile def main(): try: with zipfile.ZipFile('sample_file.zip') as file: # opening the zip file using 'zipfile.ZipFile' class print("Ok") except zipfile.BadZipFile: # if the zip file has any errors then it prints the error message which you wrote under the 'except' block print('Error: Zip file is corrupted') if __name__ == '__main__': main() ## I used a badfile for the test
Ok
zipfile.LargeZipFile
Suppose if you want to work with a large Zip file, you need to enable the ZIP64 functionality while opening the Zip. If you don't enable it, LargeZipFile will raise. See the example.
## zipfile.LargeZipFile ## Without enabling 'Zip64' import zipfile def main(): try: with zipfile.ZipFile('sample_file.zip') as file: print('File size is compatible') except zipfile.LargeZipFile: # it raises an 'LargeZipFile' error because you didn't enable the 'Zip64' print('Error: File size if too large') if __name__ == '__main__': main()
File size is compatible
## zipfile.LargeZipFile ## With enabling 'ZIP64' import zipfile def main(): try: with zipfile.ZipFile('sample_file.zip', mode = 'r', allowZip64 = True) as file: # here enabling the 'Zip64' print('File size is compatible') except zipfile.LargeZipFile: print('Error: File size if too large') # if the file size is too large to open it prints the error you have written if __name__ == '__main__': main()
File size is compatible
Choose a Zip file which best suits for the Exception handling and then tries to run the program. You will get a clear Idea.
Classes
In simple words, class is a set of methods and attributes. You use the class methods and attributes wherever you want by creating the instances of class.
Let's see some classes of the zipfile module.
zipfile.ZipFile
The most common class which is used to work with Zip Files is ZipFile class.
zipfile.ZipFile is used to write and read the Zip files. It has some methods which are used to handle the Zip files.
Now, explore the methods of the ZipFile class using the dir() objects. See the code.
import zipfile print(dir(zipfile.ZipFile)) # accessing the 'ZipFile' class
['_RealGetContents', '__class__', '__del__', '__delattr__', '__dict__', '__dir__', '__doc__', '__enter__', '__eq__', '__exit__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_extract_member', '_fpclose', '_open_to_write', '_sanitize_windows_name', '_windows_illegal_name_trans_table', '_write_end_record', '_writecheck', 'close', 'comment', 'extract', 'extractall', 'fp', 'getinfo', 'infolist', 'namelist', 'open', 'printdir', 'read', 'setpassword', 'testzip', 'write', 'writestr']
You've already used the zipfile.ZipFile class to read Zip files in previous examples.
zipfile.ZipFile contains many methods like extract, open getinfo, setpassword, etc., to work the Zip files.
Let's see some methods of the ZipFile class.
## zipfile.ZipFile import zipfile def main(): with zipfile.ZipFile('sample_file.zip') as file: # ZipFile.infolist() returns a list containing all the members of an archive file print(file.infolist()) # ZipFile.namelist() returns a list containing all the members with names of an archive file print(file.namelist()) # ZipFile.getinfo(path = filepath) returns the information about a member of Zip file. # It raises a KeyError if it doesn't contain the mentioned file print(file.getinfo(file.namelist()[-1])) # ZipFile.open(path = filepath, mode = mode_type, pwd = password) opens the members of an archive file # 'pwd' is optional -> if it has password mention otherwise leave it text_file = file.open(name = file.namelist()[-1], mode = 'r') # 'read()' method of the file prints all the content of the file. You see this method in file handling. print(text_file.read()) # You must close the file if you don't open a file using 'with' keyword # 'close()' method is used to close the file text_file.close() # ZipFile.extractall(path = filepath, pwd = password) extracts all the files to current directory file.extractall() # after executing check the directory to see extracted files if __name__ == '__main__': main()
[<ZipInfo filename='extra_file.txt' filemode='-rw-rw-rw-' file_size=59>, <ZipInfo filename='READ ME.txt' filemode='-rw-rw-rw-' file_size=59>, <ZipInfo filename='even_odd.py' filemode='-rw-rw-rw-' file_size=129>] ['extra_file.txt', 'READ ME.txt', 'even_odd.py'] <ZipInfo filename='even_odd.py' filemode='-rw-rw-rw-' file_size=129> b"num = int(input('Enter a Number:- '))\r\nif num % 2 == 0:\r\n\tprint('{} is Even'.fromat(num))\r\nelse:\r\n\tprint('{} is Odd'.fromat(num))"
If you want to learn all of the methods of ZipFile class use the help() function on the method you want to learn.
Or go to the official documentation of Python to learn.
zipfile.ZipInfo
zipfile.ZipInfo class used to represent the member of a Zip folder.
First, explore all the objects of zipfile.ZipInfo class using dir() method. See the code below.
## zipfile.ZipInfo import zipfile print(dir(zipfile.ZipInfo))
['CRC', 'FileHeader', '__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__slots__', '__str__', '__subclasshook__', '_decodeExtra', '_encodeFilenameFlags', '_raw_time', 'comment', 'compress_size', 'compress_type', 'create_system', 'create_version', 'date_time', 'external_attr', 'extra', 'extract_version', 'file_size', 'filename', 'flag_bits', 'from_file', 'header_offset', 'internal_attr', 'is_dir', 'orig_filename', 'reserved', 'volume']
Now, you are going to see some methods of the class zipfile.ZipInfo
## zipfile.ZipInfo import zipfile def main(): with zipfile.ZipFile('sample_file.zip') as file: # 'infolist()' is the object of 'ZipFile' class # 'infolist()' returns a list containing all the folders and files of the zip -> 'ZipInfo' objects # assigning last element of the list to a variable to test all the methods of 'ZipInfo' archive = file.infolist() read_me_file = archive[-1] # 'ZipInfo' methods # ZipInfo_object.filename returns the name of the file print("Name of the file:- {}".format(read_me_file.filename)) # ZipInfo_object.file_size returns the size of the file print("Size of the file:- {}".format(read_me_file.file_size)) # ZipInfo_object.is_dir() returns True if it's directory otherwise False print("Is directory:- {}".format(read_me_file.is_dir())) # ZipInfo_object.date_time() returns the created date & time of file print("File created data & time:- {}".format(read_me_file.date_time)) if __name__ == '__main__': main()
Name of the file:- sample_file/READ ME.txt Size of the file:- 59 Is directory:- False File created data & time:- (2018, 10, 4, 11, 32, 22)
Go to ZipInfo, if you want to learn more about the objects of ZipInfo.
Methods
Methods are a block of code for a specific functionality in the program. For example, if you want to find the absolute value of a number, you can use the Pythons methods called abs.
You can use it anywhere you want. Let's see some methods of the zipfile module.
zipfile.is_zipfile()
is_zipfile(filename) method of zipfile module returns True if the file is a valid Zip otherwise it returns False.
Let's see an example.
## zipfile.is_zip(filename) import zipfile def main(): print(zipfile.is_zipfile('sample_file.zip')) # it returns True if __name__ == '__main__': main()
True
Handling Zip Files
In this section, you will learn how to handle Zip files like Opening, Extracting, Writing, etc..,.
Extracting A Zip File
Extracting the files of a Zip file using the extractall method to the current directory.
## extracting zip file import zipfile def main(): # assigning filename to a variable file_name = 'sample_file.zip' # opening Zip using 'with' keyword in read mode with zipfile.ZipFile(file_name, 'r') as file: # printing all the information of archive file contents using 'printdir' method print(file.printdir()) # extracting the files using 'extracall' method print('Extracting all files...') file.extractall() print('Done!') # check your directory of zip file to see the extracted files if __name__ == '__main__': main()
File Name Modified Size sample_file/ 2018-10-04 11:33:22 0 sample_file/even_odd.py 2018-06-29 23:35:54 129 sample_file/READ ME.txt 2018-10-04 11:32:22 59 None Extracting all files... Done!
Extracting A Zip With Password
To extract a Zip with a password, you need to pass a value to pwd positional argument of extract(pwd = password) or extractall(pwd = password) methods.
You must pass a password which is in bytes. To convert a str to bytes use the Pythons built-in method bytes with utf-8 encode format.
Let's see an example.
## extracting zip with password import zipfile def main(): file_name = 'pswd_file.zip' pswd = 'datacamp' with zipfile.ZipFile(file_name) as file: # password you pass must be in the bytes you converted 'str' into 'bytes' file.extractall(pwd = bytes(pswd, 'utf-8')) if __name__ == '__main__': main()
You can also extract files by using the setpassword(pwd = password) method of ZipFile class. See the example below.
## extracting zip with password import zipfile def main(): file_name = 'pswd_file.zip' pswd = 'datacamp' with zipfile.ZipFile(file_name) as file: # 'setpassword' method is used to give a password to the 'Zip' file.setpassword(pwd = bytes(pswd, 'utf-8')) file.extractall() if __name__ == '__main__': main()
Creating Zip Files
To create a Zip file, you don't need any extra methods. Just pass the name to the ZipFile class, and it will create an archive in the current directory.
See the below example.
## Creating Zip file import zipfile def main(): archive_name = 'example_file.zip' # below one line of code will create a 'Zip' in the current working directory with zipfile.ZipFile(archive_name, 'w') as file: print("{} is created.".format(archive_name)) if __name__ == '__main__': main()
example_file.zip is created.
Writing To Zip Files
You have to open Zip files in write mode to write files to the archive file. It overrides all the existing files in the Zip.
Let's an example.
## Writing files to zip import zipfile def main(): file_name = 'sample_file.zip' # Opening the 'Zip' in writing mode with zipfile.ZipFile(file_name, 'w') as file: # write mode overrides all the existing files in the 'Zip.' # you have to create the file which you have to write to the 'Zip.' file.write('extra_file.txt') print('File overrides the existing files') # opening the 'Zip' in reading mode to check with zipfile.ZipFile(file_name, 'r') as file: print(file.namelist()) if __name__ == '__main__': main()
File overrides the existing files ['extra_file.txt']
Appending Files To Zip
You have to open Zip in append(a) mode in order to append any files to the Zip. It doesn't override the existing files.
Let's see an example.
## Appending files to zip import zipfile def main(): file_name = 'sample_file.zip' # opening the 'Zip' in writing mode with zipfile.ZipFile(file_name, 'a') as file: # append mode adds files to the 'Zip' # you have to create the files which you have to add to the 'Zip' file.write('READ ME.txt') file.write('even_odd.py') print('Files added to the Zip') # opening the 'Zip' in reading mode to check with zipfile.ZipFile(file_name, 'r') as file: print(file.namelist()) if __name__ == '__main__': main()
Files added to the Zip ['extra_file.txt', 'READ ME.txt', 'even_odd.py']
Till now, you have learned how to handle Zip files. Now, you will able to open, write, append, extract, create, etc.., Zip files. Now, you're going to write a simple program.
Let's see what it is?
Extracting Multiple Sub Zips With Password Using Loops And zipfile
- You have a Zip which contains some sub Zip files in depth. And each of the Zip files has a password which is their name. Our challenge is to unzip all the Zip until you reach the end.
Steps To Solve The Problem
- Extract the Parent File file using its name as the password.
- Get the first child name using the namelist() method. Store it in a variable.
Run the loop for Infinite Times.
Check whether the file is Zip or not using is_zipfile() method. If yes do the following.
Open the zip with the name variable.
Get the password of the Zip from the name variable.
Extract the Zip.
Get and store the next Zip name in the name variable.
else
- Break the loop using break.
I have created the above Procedure. If you want you can change it according to your files arrangement.
## Solution import zipfile def main(): # storing the parent name parent_file_name = '000.zip' with zipfile.ZipFile(parent_file_name, 'r') as parent_file: # extracting the parent file pswd = bytes(parent_file_name.split('.')[0], 'utf-8') parent_file.extractall(pwd = pswd) # getting the first child next_zip_name = parent_file.namelist()[0] # looping through the sub zips infinite times until you don't encouter a 'Zip' file while True: if zipfile.is_zipfile(next_zip_name): # opening the zip with zipfile.ZipFile(next_zip_name, 'r') as child_file: # getting password from the zip name pswd = bytes(next_zip_name.split('.')[0], 'utf-8') # extracting the zip child_file.extractall(pwd = pswd) # getting the child zip name next_zip_name = child_file.namelist()[0] else: break if __name__ == '__main__': main()
After executing the above program, you'll see all the sub zips are extracted to the current directory.
EndNote
Congratulations on completing the tutorial!
Hope you enjoyed the tutorial. This article helps you a lot when you are working with Zip files. Now, you are able to work with the Zip files.
If you have any doubts regarding the article, ask me in the comment section. I will reply as soon as possible.
Again, if you are new to Python take DataCamp's free Intro to Python for Data Science course to learn Python language or read Pythons official documentation.
Happy Coding! | https://www.datacamp.com/community/tutorials/zip-file | CC-MAIN-2019-26 | refinedweb | 2,600 | 58.69 |
Comparing Objects in Java
When you start working with objects in Java, you find that you can use == and != to compare objects with one another. For instance, a button that you see on the computer screen is an object. You can ask whether the thing that was just mouse-clicked is a particular button on your screen. You do this with Java’s equality operator.
if (e.getSource() == bCopy) {
    clipboard.setText(which.getText());
}
The big gotcha with Java’s comparison scheme comes when you compare two strings. When you compare two strings with one another, you don’t want to use the double equal sign. Using the double equal sign would ask, “Is this string stored in exactly the same place in memory as that other string?” Usually, that’s not what you want to ask.
Instead, you usually want to ask, “Does this string have the same characters in it as that other string?” To ask the second question (the more appropriate question) Java’s String type has a method named equals. (Like everything else in the known universe, this equals method is defined in the Java API, short for Application Programming Interface.)
The equals method compares two strings to see whether they have the same characters in them. For an example using Java’s equals method, see this code listing. (The figure shows a run of the program in the listing.)
import static java.lang.System.*;
import java.util.Scanner;

public class CheckPassword {
    public static void main(String args[]) {
        out.print("What's the password? ");
        Scanner keyboard = new Scanner(in);
        String password = keyboard.next();
        out.println("You typed >>" + password + "<<");
        out.println();
        if (password == "swordfish") {
            out.println("The word you typed is stored");
            out.println("in the same place as the real");
            out.println("password. You must be a");
            out.println("hacker.");
        } else {
            out.println("The word you typed is not");
            out.println("stored in the same place as");
            out.println("the real password, but that's");
            out.println("no big deal.");
        }
        out.println();
        if (password.equals("swordfish")) {
            out.println("The word you typed has the");
            out.println("same characters as the real");
            out.println("password. You can use our");
            out.println("precious system.");
        } else {
            out.println("The word you typed doesn't");
            out.println("have the same characters as");
            out.println("the real password. You can't");
            out.println("use our precious system.");
        }
        keyboard.close();
    }
}
In the listing, the call keyboard.next() grabs whatever word the user types on the computer keyboard. The code shoves this word into the variable named password. Then the program’s if statements use two different techniques to compare password with “swordfish”.
The examples in the printed book are mostly text-based, but you can find fancier versions of most examples on Dummies website. These fancier versions have windows, buttons, text fields, and other elements of a typical graphical user interface (GUI).
The more appropriate of the two techniques uses Java’s equals method. The equals method looks funny because when you call it, you put a dot after one string and put the other string in parentheses. But that’s the way you have to do it.
In calling Java’s equals method, it doesn’t matter which string gets the dot and which gets the parentheses. For instance, in the listing, you could have written
if ("swordfish".equals(password))
The method would work just as well.
A call to Java’s equals method looks imbalanced, but it’s not. There’s a reason behind the apparent imbalance between the dot and the parentheses. The idea is that you have two objects: the password object and the “swordfish” object.
Each of these two objects is of type String. (However, password is a variable of type String, and “swordfish” is a String literal.) When you write password.equals(“swordfish”), you’re calling an equals method that belongs to the password object. When you call that method, you’re feeding “swordfish” to the method as the method’s parameter (pun intended).
When comparing strings with one another, use the equals method — not the double equal sign. | https://www.dummies.com/programming/java/comparing-objects-in-java/ | CC-MAIN-2019-26 | refinedweb | 682 | 69.58 |
Using CMSIS-DSP with MCUXpresso SDK and IDE
Follow these steps to link the CMSIS-DSP library to a MCUXpresso SDK 2.x project using the MCUXpresso IDE. The steps described in the document were done using the MKL25Z MCU like the one in the FRDM-KL25Z board, but the same principles are applicable to any Kinetis MCU.
Please make sure you have already created and installed the corresponding MCUXpresso SDK package to the MCUXpresso IDE, you can use following links as reference:
Getting Started with MCUXpresso and FRDM-K64F
Generating a downloadable MCUXpresso SDK v.2 package
Creating a MCUXpresso SDK 2.x Project:
1) Click on the Import SDK example option from the Quickstart menu:
2) For this demonstration the Hello World example is used:
3) The new project should now appear on your workspace:
Linking CMSIS-DSP Library:
1) The first step is to create a build variable that will be used to specify the path of the DSP library. Go to Project > Properties and under C/C++ Build select Build Variables and click on Add:
2) A new window will open; specify the name of the build variable, its type, and its value. The value is the location of your CMSIS folder:
Variable name: SDK_2.2_KL25Z_CMSIS_PATH
Value: C:\nxp\SDK_2.2_FRDM-KL25Z_MCUX\CMSIS
NOTE: The SDK_2.2_FRDM-KL25Z package was previously unzipped to the C:\nxp folder
3) The new variable should be listed as in the image below, click on Apply:
4) Go to C/C++ Build > Settings > MCU Linker > Libraries and specify the precompiled library to be used and its path:
Library name: arm_cortexM0l_math. The 'M0' names the ARM Cortex core, while the trailing 'l' means little endian.
Path: ${SDK_2.2_KL25Z_CMSIS_PATH}\Lib\GCC
5) Now go to C/C++ Build > Settings > MCU C Compiler > Preprocessor and specify the following macro:
ARM_MATH_CM0PLUS: Tells the CMSIS library which ARM Cortex core I’m using.
Importing DSP example source files:
1) For this project the “FIR Lowpass Filter” example will be used; it can be found at the following path:
${SDK_2.2_KL25Z_CMSIS_PATH}\DSP_Lib\Examples\arm_fir_example\ARM
2) The first step is to copy the source files of the example to the project, the files that need to be copied are:
arm_fir_example_f32.c
arm_fir_data.c
math_helper.c
math_helper.h
3) The next step is to copy the SDK include files and initialization functions from the hello_world.c file to the FIR example file:
#include "fsl_device_registers.h"
#include "fsl_debug_console.h"
#include "board.h"
#include "pin_mux.h"
BOARD_InitPins();
BOARD_BootClockRUN();
BOARD_InitDebugConsole();
4) Finally the hello_world.c file can be deleted from the project:
5) Now you should be able to compile and debug the project.
Related links:
Generating a downloadable MCUXpresso SDK v.2 package
Getting Started with MCUXpresso and FRDM-K64F
Adding CMSIS-DSP Library to a KSDK 2.x project in Kinetis Design Studio
Hey! Thank you for the guide!
I have a question about using CMSIS-DSP,
I'm running Kinetis Design Studio 3.2, MQX 4.2 and an SDK 1.x project. There is an SDK 2.x, but it doesn't seem to support the K70F120M board.
Are your instructions similar, so that I can just link it like you did?
Sorry if this is the wrong place to ask. | https://community.nxp.com/docs/DOC-335465 | CC-MAIN-2018-30 | refinedweb | 538 | 56.05 |
Call each value by a separate name and hence call the respective trackbar.. sweet! But how do I see the script's output on my screen?
I have made a traffic light detection algorithm with OpenCV and Python. My script has some trackbars which I need to adjust and see the output to set a desired stage.
My script runs on my Raspberry Pi, is there any way I can send the trackbar values from another device on the same network on the RPi? For example, maybe make a phone app (I use Windows Phone BTW) which sends the trackbar data to the script and shows the output on my phone screen.
@jmbapps Didn't get a word of that... can you please elaborate? I think that C.
Need more than one value to unpack.
Can anybody help me?
Actually it's quite simple: just threshold the image and use cv2.HoughCircles to find a circle, then add a conditional so that as soon as the pixel intensity at the detected circle goes below a minimum threshold, you activate the robot.
@berak ohh the minimum and max ranges of the y and x axis respectively right?
@berak it works! Thanks a ton! But I don't understand in what order you've written the coordinates
@Tetragramm, I've been through the documentary for whole nights, and it helped me a lot, but it fails to answer several specific questions...
I am working on a script that reads the non-zero pixels in a binary image. I am using np.count_nonzero() to count non-zero pixels, which works fine until I specify a range of coordinates.
ele = np.count_nonzero(img)
Works fine
But, when I specify a range of pixels, for example I am working with a 640*480 res image, Now I want that only the pixels falling in the range of (320,0) and (640,480), that is, the right half, be checked for non zero values, so I modify my code to -
ele = np.count_nonzero(((320,0)<gray) & (gray <(640,480)))
But this line gives an error -
ValueError: operands could not be broadcast together with shapes (480,640) (2,)
To me it sounds like the image layers (RGB or similar) might be causing this problem. Can anyone tell me how to fix it?
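(For reference, a minimal sketch of the slicing fix hinted at in the comments: a NumPy image is indexed as gray[rows, columns], i.e. gray[y0:y1, x0:x1], so comparing the whole array against an (x, y) tuple is what triggers the broadcast error.)

```python
import numpy as np

# Stand-in for a 480x640 binary image (rows = y, columns = x).
gray = np.zeros((480, 640), dtype=np.uint8)
gray[100:200, 400:500] = 255  # a white patch in the right half

# Right half only: all rows, columns 320..639.
right_half = gray[0:480, 320:640]
ele = np.count_nonzero(right_half)
print(ele)  # 10000 for the 100x100 patch above
```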
@Tetragramm Actually, I don't know how do I split my binary image into 2 parts so that I can use np.count_nonzero to compare non zero pixels in each part
@berak I was in the process of doing that :P
My code is just the regular one used for thresholding: create trackbars, convert the image to HSV, and then threshold. It was nothing more than an indentation error in the for loop. :P
I tried that. It didn't work. But I have solved my problem. I made an empty function which returns nothing, and the I called that. @Tetragramm
Hi there!
I'm trying to threshold an image. I have used the cv2.createTrackbar function as-
cv2.createTrackbar('High H','image',0,179, None).
Now the last part is what I'm having trouble with. Further in my code, I use highH = cv2.getTrackbarPos('High H','image') to get my trackbar value and use it in the cv2.inRange function. So it becomes pretty obvious that I do not need to call a function as the last argument of the function. Now the problem is I can't seem to type in the function. I tried removing the last part, I got an error-
cv2.createTrackbar only works with 5 arguments. Only 4 given.
Hmm, okay seems like I can't skip a part.
Next I tried callback and nothing. I got this error:-
When used nothing:- NameError: name 'nothing' is not defined
When used callback:- NameError: name 'callback' is not defined
Okay after a while I tried using None. Got this error:-
TypeError: on_change must be callable
So how do I use the cv2.createTrackbar function without calling a function?
Thanks!
Hi there! I'm getting this unexpected error in this code. I found some solutions after googling a while; I've tried all of them but none work. Here is the code-
import cv2
import numpy as np
img = cv2.imread('circleTest.jpg', 0)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
cv2.imshow('detected circles',cimg)
cv2.waitKey(0)
cv2.destroyAllWindows()
The code is borrowed from a tutorial about HoughCircles from here-...
Whenever I try to run the code I get an error saying Unsupported or Unrecognized array type (you must have seen the entire error). I tried copying DLLs, putting the image in the working directory, and giving the full path to the image, but no good. I can run other programs, like object detection, fine in Python and C/C++.
Any help would be greatly appreciated.
Thanks!
@berak , I did, that was the first thing I did. But that's only for RGB and I couldn't possibly understand how to implement it when I have 6 parameters (High H,S,V and Low H,S,V) I maybe can type in the cv2.inRange function correctly but I can't really understand how to use my trackbars to change the required values
Hi there!
I'm kinda new to Python, have been struggling with this language along with the OpenCV libraries for a while now.
I've come across various code examples for first converting an image (in my case camera feed from my laptop's webcam) from BGR to HSV and then using the cv2.inRange function to threshold the image to obtain a binary image with the desired colors filtered out. I can't understand the implementation of the cv2.createTrackbar function.
So my obvious question is-
How do I make trackbars to control the Min and Max H,S,V values to threshold my image. I cant understand how the trackbars affect the individual variables.
(Too many questions in very little time)
Hi there, I'm new to OpenCV (downloaded 24 hours ago, I've learnt a lot, but still), I am working on a project that requires image processing on a Raspberry Pi. The Pi has to detect a traffic light(when the robot is standing on the start line), and as soon as the light goes green, signal the Arduino to run the motors. Now to detect the traffic lights, based on my research, I have thought of 3 different approaches...
Approach 1. With the help of a great tutorial (Link-...) which told me how to detect a red coloured object, I've written code which detects red lights (most of it is the same as the tutorial); it shows the object as white and the rest as black. Now what I think of doing is: adjust the HSV values for the threshold image so that it sees my traffic light (which appears as a white spot on the threshold image), fix them there, and then monitor the region of interest until it goes black (meaning that the red light has gone off), and send a signal to the Arduino. In this approach, I need to know-
1. How to fix the HSV values once i set them to the desired values?
2. How to mark the Region of Interest(ROI)?
3. How to track this ROI and when it goes black(meaning the light has gone off), send the commands to the Arduino.
(Please note that currently I'm using OpenCV on my Windows PC in Visual Studio, and my code is written in C++, so if you can tell me how to show output on the computer screen rather than the Arduino part, that would be great. I can later convert the code to python and make necessary modifications, but first I need to get it working on my PC.)
Approach 2- Convert the RGB frames received from the camera feed into greyscale. Once they have been converted find the brightest spot(which should be traffic light), once the spot is found mark it as a ROI, monitor it and as soon as it goes below the threshold value(say 200, meaning the light has gone off), send appropriate commands to the Arduino. In this method what I need to know is same as in the first approach.
(I'm not very sure about the reliability of this approach as there will be direct sunlight where I have to detect the light, so will it be able to find the light and not just point to any well lit area? Please tell me if I'm wrong because this method seems the easiest.)
Approach 3- Take a sample image of a traffic light(when it is red) and then compare it with the frames from my camera feed. As soon as there is a mismatch( meaning the light has gone off), send appropriate command to the Arduino. I'm thinking of ... (more)
Thanks there! The Cascade Classifier method looks nice; I read about the template matching function earlier this morning and I'm thinking it's worth a try... I'm gonna try both methods and see which works best, first on my PC and then on the RPi. The homography estimation tutorial is only a hell of a lot of code and no explanation, so that's gonna take me some time to interpret (I'm new to C/C++, been coding in Java and C# till now).
One more question that I came across was that I am using OpenCV in a Windows environment on VS thereby coding in C/C++ and I found that OpenCV code on the RPi is written in Python(atleast in all the tutorials I came across). So do I need to convert my code from C to Python or I can use it as I'm using it ... (more)
Hi there! I am working on a project which requires the detection of road signs. I was wondering that if I could take sample images of the road signs, save them onto my PC(final version to run on Raspberry Pi), and then as the camera runs, it keeps on checking the presence of the sign stored in the memory and when it is found takes appropriate action(For example if I implement it in a robot, whenever the robot sees a stop sign, it should stop for, say 10 seconds and then start moving again.)
Can anyone help me out on this?
Hi there. I am a complete beginner to OpenCV. I downloaded it today and tried installing it on my pc using the documentation on this site. After a while I figured out that the documentation was for version 2.14 and was very, very complex. They were using VS 2010 and I have 2012. So I headed to YouTube and found these tutorials-
Installing OpenCV -...
Configuring it with Visual Studio-...
However, I had already completed the first part of the documentation(which does something with a command prompt and makes environment variables. But I ignored that and followed both the tutorials and when I tried to run the sample code. I got errors on two lines. Here's the code-
#include <iostream>
#include <opencv2/opencv.hpp> // VS says it cannot open source file.

void main()
{
    std::cout << "OpenCV Version: " << CV_VERSION << std::endl; // VS says identifier CV_VERSION is undefined.
}
This program is supposed to show the OpenCV version I have on my pc, but I cannot get it to run. Is my OpenCV installation valid? If not, then how can I repair it? I have already checked the paths a lot of times. I just can't seem to figure out what to do.
Any and all help will be greatly appreciated.
Thank you
-YaddyVirus
Thanks a lot @berak
Hey there! I am new to OpenCV, and while I originally intend to use it on a Raspberry Pi, I couldn't wait for my board to ship and thought I'd start coding around on my PC itself. I just wanted to know: is the Visual Studio 2012 Express edition enough to run OpenCV, or will I have to download the full edition?
Moreover the set up tutorial is very complex(I'm a geek and code in Java and C# but still it went over my head). Can anyone tell me a simplified procedure? | https://answers.opencv.org/users/39991/yaddyvirus/?sort=recent | CC-MAIN-2020-16 | refinedweb | 2,098 | 72.16 |
Abstract
Contents
While distutils / setuptools have taken us a long way, they suffer from three serious problems: (a) they're missing important features like usable build-time dependency declaration, autoconfiguration, and even basic ergonomic niceties like DRY-compliant version number management, and (b) extending them is difficult, so while there do exist various solutions to the above problems, they're often quirky, fragile, and expensive to maintain, and yet (c) it's very difficult to use anything else, because distutils/setuptools provide the standard interface for installing packages expected by both users and installation tools like pip.
Previous efforts (e.g. distutils2 or setuptools itself) have attempted to solve problems (a) and/or (b). This proposal aims to solve (c).
The goal of this PEP is get distutils-sig out of the business of being a gatekeeper for Python build systems. If you want to use distutils, great; if you want to use something else, then that should be easy to do using standardized methods. The difficulty of interfacing with distutils means that there aren't many such systems right now, but to give a sense of what we're thinking about see flit or bento. Fortunately, wheels have now solved many of the hard problems here -- e.g. it's no longer necessary that a build system also know about every possible installation configuration -- so pretty much all we really need from a build system is that it have some way to spit out standard-compliant wheels and sdists.
We therefore propose a new, relatively minimal interface for installation tools like pip to interact with package source trees and source distributions.
Terminology and goals
A source tree is something like a VCS checkout. We need a standard interface for installing from this format, to support usages like pip install some-directory/.
A source distribution is a static snapshot representing a particular release of some source code, like lxml-3.4.4.zip. Source distributions serve many purposes: they form an archival record of releases, they provide a stupid-simple de facto standard for tools that want to ingest and process large corpora of code, possibly written in many languages (e.g. code search), they act as the input to downstream packaging systems like Debian/Fedora/Conda/..., and so forth. In the Python ecosystem they additionally have a particularly important role to play, because packaging tools like pip are able to use source distributions to fulfill binary dependencies, e.g. if there is a distribution foo.whl which declares a dependency on bar, then we need to support the case where pip install bar or pip install foo automatically locates the sdist for bar, downloads it, builds it, and installs the resulting package.
Source distributions are also known as sdists for short.
A build frontend is a tool that users might run that takes arbitrary source trees or source distributions and builds wheels from them. The actual building is done by each source tree's build backend. In a command like pip wheel some-directory/, pip is acting as a build frontend.
An integration frontend is a tool that users might run that takes a set of package requirements (e.g. a requirements.txt file) and attempts to update a working environment to satisfy those requirements. This may require locating, building, and installing a combination of wheels and sdists. In a command like pip install lxml==2.4.0, pip is acting as an integration frontend.
Source trees
There is an existing, legacy source tree format involving setup.py. We don't try to specify it further; its de facto specification is encoded in the source code and documentation of distutils, setuptools, pip, and other tools. We'll refer to it as the setup.py-style.
Here we define a new style of source tree based around the pyproject.toml file defined in PEP 518, extending the [build-system] table in that file with one additional key, build-backend. Here's an example of how it would look:
[build-system]
# Defined by PEP 518:
requires = ["flit"]
# Defined by this PEP:
build-backend = "flit.api:main"
build-backend is a string naming a Python object that will be used to perform the build (see below for details). This is formatted following the same module:object syntax as a setuptools entry point. For instance, if the string is "flit.api:main" as in the example above, this object would be looked up by executing the equivalent of:
import flit.api
backend = flit.api.main
It's also legal to leave out the :object part, e.g.
build-backend = "flit.api"
which acts like:
import flit.api
backend = flit.api
Formally, the string should satisfy this grammar:
identifier  = (letter | '_') (letter | '_' | digit)*
module_path = identifier ('.' identifier)*
object_path = identifier ('.' identifier)*
entry_point = module_path (':' object_path)?
And we import module_path and then lookup module_path.object_path (or just module_path if object_path is missing).
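That lookup can be sketched in a few lines of Python (non-normative; error handling and validation against the grammar are omitted):

```python
import importlib

def load_build_backend(entry_point):
    # "flit.api:main" -> import flit.api, then walk the ".main" path;
    # "flit.api" alone -> the imported module object itself.
    module_path, _, object_path = entry_point.partition(':')
    backend = importlib.import_module(module_path)
    for attr in filter(None, object_path.split('.')):
        backend = getattr(backend, attr)
    return backend
```

For example, load_build_backend("os.path:join") returns the os.path.join function, following the same module:object convention as setuptools entry points.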
If the pyproject.toml file is absent, or the build-backend key is missing, the source tree is not using this specification, and tools should fall back to running setup.py.
Where the build-backend key exists, it takes precedence over setup.py, and source trees need not include setup.py at all. Projects may still wish to include a setup.py for compatibility with tools that do not use this spec.
Build backend interface
The build backend object is expected to have attributes which provide some or all of the following hooks. The common config_settings argument is described after the individual hooks:
def get_build_requires(config_settings): ...
This hook MUST return an additional list of strings containing PEP 508 dependency specifications, above and beyond those specified in the pyproject.toml file. Example:
def get_build_requires(config_settings):
    return ["wheel >= 0.25", "setuptools"]
Optional. If not defined, the default implementation is equivalent to return [].
def get_wheel_metadata(metadata_directory, config_settings): ...
Must create a .dist-info directory containing wheel metadata inside the specified metadata_directory (i.e., it creates a directory like {metadata_directory}/{package}-{version}.dist-info/). This directory MUST be a valid .dist-info directory as defined in the wheel specification, except that it need not contain RECORD or signatures. The hook MAY also create other files inside this directory, and a build frontend MUST ignore such files; the intention here is that in cases where the metadata depends on build-time decisions, the build backend may need to record these decisions in some convenient format for re-use by the actual wheel-building step.
Return value is ignored.
Optional. If a build frontend needs this information and the method is not defined, it should call build_wheel and look at the resulting metadata directly.
def build_wheel(wheel_directory, config_settings, metadata_directory=None): ...
Must build a .whl file, and place it in the specified wheel_directory.
If the build frontend has previously called get_wheel_metadata and depends on the wheel resulting from this call to have metadata matching this earlier call, then it should provide the path to the previous metadata_directory as an argument. If this argument is provided, then build_wheel MUST produce a wheel with identical metadata. The directory passed in by the build frontend MUST be identical to the directory created by get_wheel_metadata, including any unrecognized files it created.
Mandatory.
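Putting the hooks together, a deliberately trivial backend module might look like the sketch below. The package name and placeholder wheel are invented for illustration; a real backend must of course emit a spec-compliant wheel:

```python
# Hypothetical contents of mybackend/api.py, referenced from
# pyproject.toml as: build-backend = "mybackend.api"
import os

def get_build_requires(config_settings):
    # Extra build requirements discovered at build time, if any.
    return ["wheel >= 0.25"]

def build_wheel(wheel_directory, config_settings, metadata_directory=None):
    wheel_name = 'mypkg-1.0-py3-none-any.whl'
    # A real implementation would compile sources and assemble the
    # archive; this placeholder only demonstrates the hook's shape.
    with open(os.path.join(wheel_directory, wheel_name), 'wb'):
        pass
    return wheel_name
```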
Note
Editable installs
This PEP originally specified a fourth hook, install_editable, to do an editable install (as with pip install -e). It was removed due to the complexity of the topic, but may be specified in a later PEP.
Briefly, the questions to be answered include: what reasonable ways exist of implementing an 'editable install'? Should the backend or the frontend pick how to make an editable install? And if the frontend does, what does it need from the backend to do so?
config_settings
This argument, which is passed to all hooks, is an arbitrary dictionary provided as an "escape hatch" for users to pass ad-hoc configuration into individual package builds. Build backends MAY assign any semantics they like to this dictionary. Build frontends SHOULD provide some mechanism for users to specify arbitrary string-key/string-value pairs to be placed in this dictionary. For example, they might support some syntax like --package-config CC=gcc. Build frontends MAY also provide arbitrary other mechanisms for users to place entries in this dictionary. For example, pip might choose to map a mix of modern and legacy command line arguments like:
pip install \
    --package-config CC=gcc \
    --global-option="--some-global-option" \
    --build-option="--build-option1" \
    --build-option="--build-option2"
into a config_settings dictionary like:
{ "CC": "gcc", "--global-option": ["--some-global-option"], "--build-option": ["--build-option1", "--build-option2"], }
Of course, it's up to users to make sure that they pass options which make sense for the particular build backend and package that they are building.
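One plausible (non-normative) way for a frontend to assemble that dictionary from repeated KEY=VALUE command-line flags:

```python
def parse_config_settings(pairs):
    # Turn ["CC=gcc", "--build-option=--build-option1", ...] into the
    # config_settings dict handed to every hook; values for repeated
    # keys collect into a list, mirroring the pip example above.
    settings = {}
    for pair in pairs:
        key, _, value = pair.partition('=')
        if key not in settings:
            settings[key] = value
        elif isinstance(settings[key], list):
            settings[key].append(value)
        else:
            settings[key] = [settings[key], value]
    return settings
```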
All hooks are run with working directory set to the root of the source tree, and MAY print arbitrary informational text on stdout and stderr. They MUST NOT read from stdin, and the build frontend MAY close stdin before invoking the hooks.
If a hook raises an exception, or causes the process to terminate, then this indicates an error.
Build environment
One of the responsibilities of a build frontend is to set up the Python environment in which the build backend will run.
We do not require that any particular "virtual environment" mechanism be used; a build frontend might use virtualenv, or venv, or no special mechanism at all. But whatever mechanism is used MUST meet the following criteria:
All requirements specified by the project's build-requirements must be available for import from Python. In particular:
- The get_build_requires hook is executed in an environment which contains the bootstrap requirements specified in the pyproject.toml file.
- All other hooks are executed in an environment which contains both the bootstrap requirements specified in the pyproject.toml hook and those specified by the get_build_requires hook.
This must remain true even for new Python subprocesses spawned by the build environment, e.g. code like:
import sys, subprocess
subprocess.check_call([sys.executable, ...])
must spawn a Python process which has access to all the project's build-requirements. This is necessary e.g. for build backends that want to run legacy setup.py scripts in a subprocess.
All command-line scripts provided by the build-required packages must be present in the build environment's PATH. For example, if a project declares a build-requirement on flit, then the following must work as a mechanism for running the flit command-line tool:
import subprocess
subprocess.check_call(["flit", ...])
A build backend MUST be prepared to function in any environment which meets the above criteria. In particular, it MUST NOT assume that it has access to any packages except those that are present in the stdlib, or that are explicitly declared as build-requirements.
Recommendations for build frontends (non-normative)
A build frontend MAY use any mechanism for setting up a build environment that meets the above criteria. For example, simply installing all build-requirements into the global environment would be sufficient to build any compliant package -- but this would be sub-optimal for a number of reasons. This section contains non-normative advice to frontend implementors.
A build frontend SHOULD, by default, create an isolated environment for each build, containing only the standard library and any explicitly requested build-dependencies. This has two benefits:
- It allows for a single installation run to build multiple packages that have contradictory build-requirements. E.g. if package1 build-requires pbr==1.8.1, and package2 build-requires pbr==1.7.2, then these cannot both be installed simultaneously into the global environment -- which is a problem when the user requests pip install package1 package2. Or if the user already has pbr==1.8.1 installed in their global environment, and a package build-requires pbr==1.7.2, then downgrading the user's version would be rather rude.
- It acts as a kind of public health measure to maximize the number of packages that actually do declare accurate build-dependencies. We can write all the strongly worded admonitions to package authors we want, but if build frontends don't enforce isolation by default, then we'll inevitably end up with lots of packages on PyPI that build fine on the original author's machine and nowhere else, which is a headache that no-one needs.
However, there will also be situations where build-requirements are problematic in various ways. For example, a package author might accidentally leave off some crucial requirement despite our best efforts; or, a package might declare a build-requirement on foo >= 1.0 which worked great when 1.0 was the latest version, but now 1.1 is out and it has a showstopper bug; or, the user might decide to build a package against numpy==1.7 -- overriding the package's preferred numpy==1.8 -- to guarantee that the resulting build will be compatible at the C ABI level with an older version of numpy (even if this means the resulting build is unsupported upstream). Therefore, build frontends SHOULD provide some mechanism for users to override the above defaults. For example, a build frontend could have a --build-with-system-site-packages option that causes the --system-site-packages option to be passed to virtualenv-or-equivalent when creating build environments, or a --build-requirements-override=my-requirements.txt option that overrides the project's normal build-requirements.
The general principle here is that we want to enforce hygiene on package authors, while still allowing end-users to open up the hood and apply duct tape when necessary.
Source distributions
For now, we continue with the legacy sdist format which is mostly undefined, but basically comes down to: a file named {NAME}-{VERSION}.{EXT}, which unpacks into a buildable source tree called {NAME}-{VERSION}/. Traditionally these have always contained setup.py-style source trees; we now allow them to also contain pyproject.toml-style source trees.
Integration frontends require that an sdist named {NAME}-{VERSION}.{EXT} will generate a wheel named {NAME}-{VERSION}-{COMPAT-INFO}.whl.
Comparison to competing proposals
The primary difference between this and competing proposals (in particular, PEP 516) is that our build backend is defined via a Python hook-based interface rather than a command-line based interface.
We do not expect that this will, by itself, intrinsically reduce the complexity calling into the backend, because build frontends will in any case want to run hooks inside a child -- this is important to isolate the build frontend itself from the backend code and to better control the build backends execution environment. So under both proposals, there will need to be some code in pip to spawn a subprocess and talk to some kind of command-line/IPC interface, and there will need to be some code in the subprocess that knows how to parse these command line arguments and call the actual build backend implementation. So this diagram applies to all proposals equally:
+-----------+ +---------------+ +----------------+ | frontend | -spawn-> | child cmdline | -Python-> | backend | | (pip) | | interface | | implementation | +-----------+ +---------------+ +----------------+
The key difference between the two approaches is how these interface boundaries map onto project structure:
.-= This PEP =-. +-----------+ +---------------+ | +----------------+ | frontend | -spawn-> | child cmdline | -Python-> | backend | | (pip) | | interface | | | implementation | +-----------+ +---------------+ | +----------------+ | |______________________________________| | Owned by pip, updated in lockstep | | | PEP-defined interface boundary Changes here require distutils-sig .-= Alternative =-. +-----------+ | +---------------+ +----------------+ | frontend | -spawn-> | child cmdline | -Python-> | backend | | (pip) | | | interface | | implementation | +-----------+ | +---------------+ +----------------+ | | |____________________________________________| | Owned by build backend, updated in lockstep | PEP-defined interface boundary Changes here require distutils-sig
By moving the PEP-defined interface boundary into Python code, we gain three key advantages.
First, because there will likely be only a small number of build frontends (pip, and... maybe a few others?), while there will likely be a long tail of custom build backends (since these are chosen separately by each package to match their particular build requirements), the actual diagrams probably look more like:
.-= This PEP =-. +-----------+ +---------------+ +----------------+ | frontend | -spawn-> | child cmdline | -Python+> | backend | | (pip) | | interface | | | implementation | +-----------+ +---------------+ | +----------------+ | | +----------------+ +> | backend | | | implementation | | +----------------+ : : .-= Alternative =-. +-----------+ +---------------+ +----------------+ | frontend | -spawn+> | child cmdline | -Python-> | backend | | (pip) | | | interface | | implementation | +-----------+ | +---------------+ +----------------+ | | +---------------+ +----------------+ +> | child cmdline | -Python-> | backend | | | interface | | implementation | | +---------------+ +----------------+ : :
That is, this PEP leads to less total code in the overall ecosystem. And in particular, it reduces the barrier to entry of making a new build system. For example, this is a complete, working build backend:
# mypackage_custom_build_backend.py import os.path def get_build_requires(config_settings, config_directory): return ["wheel"] def build_wheel(wheel_directory, config_settings, config_directory=None): from wheel.archive import archive_wheelfile path = os.path.join(wheel_directory, "mypackage-0.1-py2.py3-none-any") archive_wheelfile(path, "src/")
Of course, this is a terrible build backend: it requires the user to have manually set up the wheel metadata in src/mypackage-0.1.dist-info/; when the version number changes it must be manually updated in multiple places; it doesn't implement the metadata or develop hooks, ... but it works, and these features can be added incrementally. Much experience suggests that large successful projects often originate as quick hacks (e.g., Linux -- "just a hobby, won't be big and professional"; IPython/Jupyter -- a grad student's ``$PYTHONSTARTUP` file <>`_), so if our goal is to encourage the growth of a vibrant ecosystem of good build tools, it's important to minimize the barrier to entry.
Second, because Python provides a simpler yet richer structure for describing interfaces, we remove unnecessary complexity from the specification -- and specifications are the worst place for complexity, because changing specifications requires painful consensus-building across many stakeholders. In the command-line interface approach, we have to come up with ad hoc ways to map multiple different kinds of inputs into a single linear command line (e.g. how do we avoid collisions between user-specified configuration arguments and PEP-defined arguments? how do we specify optional arguments? when working with a Python interface these questions have simple, obvious answers). When spawning and managing subprocesses, there are many fiddly details that must be gotten right, subtle cross-platform differences, and some of the most obvious approaches -- e.g., using stdout to return data for the build_requires operation -- can create unexpected pitfalls (e.g., what happens when computing the build requirements requires spawning some child processes, and these children occasionally print an error message to stdout? obviously a careful build backend author can avoid this problem, but the most obvious way of defining a Python interface removes this possibility entirely, because the hook return value is clearly demarcated).
In general, the need to isolate build backends into their own process means that we can't remove IPC complexity entirely -- but by placing both sides of the IPC channel under the control of a single project, we make it much cheaper to fix bugs in the IPC interface than if fixing bugs requires coordinated agreement and coordinated changes across the ecosystem.
Third, and most crucially, the Python hook approach gives us much more powerful options for evolving this specification in the future.
For concreteness, imagine that next year we add a new get_wheel_metadata2 hook, which replaces the current get_wheel_metadata hook with something that produces more data, or a different metadata format. In order to manage the transition, we want it to be possible for build frontends to transparently use get_wheel_metadata2 when available and fall back onto get_wheel_metadata otherwise; and we want it to be possible for build backends to define both methods, for compatibility with both old and new build frontends.
Furthermore, our mechanism should also fulfill two more goals: (a) If new versions of e.g. pip and flit are both updated to support the new interface, then this should be sufficient for it to be used; in particular, it should not be necessary for every project that uses flit to update its individual pyproject.toml file. (b) We do not want to have to spawn extra processes just to perform this negotiation, because process spawns can easily become a bottleneck when deploying large multi-package stacks on some platforms (Windows).
In the interface described here, all of these goals are easy to achieve. Because pip controls the code that runs inside the child process, it can easily write it to do something like:
command, backend, args = parse_command_line_args(...) if command == "get_wheel_metadata": if hasattr(backend, "get_wheel_metadata2"): backend.get_wheel_metadata2(...) elif hasattr(backend, "get_wheel_metadata"): backend.get_wheel_metadata(...) else: # error handling
In the alternative where the public interface boundary is placed at the subprocess call, this is not possible -- either we need to spawn an extra process just to query what interfaces are supported (as was included in an earlier draft of PEP 516, an alternative to this), or else we give up on autonegotiation entirely (as in the current version of that PEP), meaning that any changes in the interface will require N individual packages to update their pyproject.toml files before any change can go live, and that any changes will necessarily be restricted to new releases.
One specific consequence of this is that in this PEP, we're able to make the get_wheel_metadata command optional. In our design, this can easily be worked around by a tool like pip, which can put code in its subprocess runner like:
def get_wheel_metadata(output_dir, config_settings): if hasattr(backend, "get_wheel_metadata"): backend.get_wheel_metadata(output_dir, config_settings) else: backend.build_wheel(output_dir, config_settings) touch(output_dir / "PIP_ALREADY_BUILT_WHEELS") unzip_metadata(output_dir/*.whl) def build_wheel(output_dir, config_settings, metadata_dir): if os.path.exists(metadata_dir / "PIP_ALREADY_BUILT_WHEELS"): copy(metadata_dir / *.whl, output_dir) else: backend.build_wheel(output_dir, config_settings, metadata_dir)
and thus expose a totally uniform interface to the rest of pip, with no extra subprocess calls, no duplicated builds, etc. But obviously this is the kind of code that you only want to write as part of a private, within-project interface.
(And, of course, making the metadata command optional is one piece of lowering the barrier to entry, as discussed above.)
Other differences
Besides the key command line versus Python hook difference described above, there are a few other differences in this proposal:
- Metadata command is optional (as described above).
- We return metadata as a directory, rather than a single METADATA file. This aligns better with the way that in practice wheel metadata is distributed across multiple files (e.g. entry points), and gives us more options in the future. (For example, instead of following the PEP 426 proposal of switching the format of METADATA to JSON, we might decide to keep the existing METADATA the way it is for backcompat, while adding new extensions as JSON "sidecar" files inside the same directory. Or maybe not; the point is it keeps our options more open.)
- We provide a mechanism for passing information between the metadata step and the wheel building step. I guess everyone probably will agree this is a good idea?
- We provide more detailed recommendations about the build environment, but these aren't normative anyway.
Evolutionary notes
A goal here is to make it as simple as possible to convert old-style sdists to new-style sdists. (E.g., this is one motivation for supporting dynamic build requirements.) The ideal would be that there would be a single static pyproject.toml that could be dropped into any "version 0" VCS checkout to convert it to the new shiny. This is probably not 100% possible, but we can get close, and it's important to keep track of how close we are... hence this section.
A rough plan would be: Create a build system package (setuptools_pypackage or whatever) that knows how to speak whatever hook language we come up with, and convert them into calls to setup.py. This will probably require some sort of hooking or monkeypatching to setuptools to provide a way to extract the setup_requires= argument when needed, and to provide a new version of the sdist command that generates the new-style format. This all seems doable and sufficient for a large proportion of packages (though obviously we'll want to prototype such a system before we finalize anything here). (Alternatively, these changes could be made to setuptools itself rather than going into a separate package.)
But there remain two obstacles that mean we probably won't be able to automatically upgrade packages to the new format:
- There currently exist packages which insist on particular packages being available in their environment before setup.py is executed. This means that if we decide to execute build scripts in an isolated virtualenv-like environment, then projects will need to check whether they do this, and if so then when upgrading to the new system they will have to start explicitly declaring these dependencies (either via setup_requires= or via static declaration in pyproject.toml).
- There currently exist packages which do not declare consistent metadata (e.g. egg_info and bdist_wheel might get different install_requires=). When upgrading to the new system, projects will have to evaluate whether this applies to them, and if so they will need to stop doing that. | http://docs.activestate.com/activepython/3.5/peps/pep-0517.html | CC-MAIN-2019-13 | refinedweb | 4,110 | 53.31 |
Re: [deal.II] install problems with clang@12.0.0 and Xcode12.0 from spack
Dear Alex, what happens after loading dealii with `spack load dealii`, if you try to build (from scratch) `step-40`? Can you send us the output of cmake and make? Luca. On Mon, Sep 28, 2020 at 4:46 PM 'Alexander Greiner' via deal.II User Group < dealii@googlegroups.com> wrote: > Hi Luca, > >
Re: [deal.II] step-1 Error
Are you running the terminal, or the deal.II application? When you run the deal.II application, you are dropped into a terminal (with instructions) to run deal.II examples. Including how to set up your bashrc (or zshrc) to point to the deal.II Installation/the module command. Only after you
Re: [deal.II] step-1 Error
The 9.0.0 image contains a spack installation. Can you run "module load dealii" before trying to run? Luca. Il giorno mer 26 ago 2020 alle 19:51 Scott Ziegler < scott...@rams.colostate.edu> ha scritto: > Hello, > > I tried running the first tutorial on my machine and I am getting an error >
Re: [deal.II] Problem implementing Neumann boundary condition
the error. Luca > Il giorno 21 ago 2020, alle ore 10:07, Umair Hussain > ha scritto: > > > Thanx Luca. I got it. But I still don’t understand what is this “gradient” > used for? > >> On Fri, 21 Aug 2020 at 12:22 PM, Luca Heltai wrote: >> Did yo
Re: [deal.II] Problem implementing Neumann boundary condition
Did you actually read the error message? :-) It tells you exactly what you have to do. In particolar, you are using bc.gradient to evaluate the Neumann boundary value, but you did not implement the gradient in your Neumann.h file. This explained in detail in the error message you got. The
Re: [deal.II] Reading a Tensor from parameter file
the >> Patterns::Convert::to_value() function would work in this case. >> >> Is it must to use prm.add_parameter() to be able to do so? I usually use >> prm.declare_entry() and prm.get(). >> >> Best regards, >> Paras >> >>> On Wedn
Re: [deal.II] Installation didn't give any errors but when I tried make test, it failed all tests
Take a look at your parameter file. It is probably trying to write output file to a directory that does not exist. Luca > Il giorno 30 lug 2020, alle ore 10:58, kaleem iqbal > ha scritto: > > > Dear Prof. Wolfgang; > During running step-70. I found the following error > Exception on
Re: [deal.II] Particles and field interpolation error
Franco, The interpolated field says that the field value is zero (on the line above your arrow). This is how it is documented: if a particle is removed, its interpolated value is left unchanged in the target vector. Zero in your case. Luca > Il giorno 15 lug 2020, alle ore 18:18, Franco
Re: [deal.II] parallel::distributed::SolutionTransfer for FE_FaceQ element
Dear Yu, It is unclear what it means to transfer a solution for FE_FaceQ. On refined cells, the central part of the skeleton cannot be transferred from the outer skeleton (I.e., a refined grid is not a subspace Of the coarse grid), so technically we cannot transfer solutions for FE spaces
Re: [deal.II] Setting up dealii through Docker
m > > > Best, > Bhavesh > >> On Saturday, 27 June 2020 01:58:07 UTC-5, Luca Heltai wrote: >> Are you using an example from deal.II master? If this is the case, the >> example looks for 9.3pre but the image we provide only has 9.1 installed. >> Try
Re: [deal.II] Setting up dealii through Docker
Are you using an example from deal.II master? If this is the case, the example looks for 9.3pre but the image we provide only has 9.1 installed. Try inspecting the CMakeList.txt to see which version is being looked for, and try changing to 9.1. Luca > Il giorno 27 giu 2020, alle ore 06:58,
Re: [deal.II] Reading a Tensor from parameter file
Currently, this is also the simplest way: Tensor tens; prm.add_parameter("Tensor", tens); Take a look at the documentation of the add parameter method. Patterns::Tools::to_string(tens); And Patterns::Tools::to_value Are also available to simplify what you want to achieve. Alternatively:
Re: [deal.II] Deal.II on Docker
The examples you are using are likely looking for deal.II 9.2.pre. Try changing the makefile, and see if that works. :-) Luca > Il giorno 13 mar 2020, alle ore 02:47, Robert Kopp > ha scritto: > > > After some false starts, I was able to use deal.II on Ubuntu 18.04 by > installing it from
[deal.II] Intensive course on Finite Element Methods using deal.II @ SISSA
: Wolfgang Bangerth and Luca Heltai More information on the course schedule is available here: A limited number of seats is available for external PhD students, postdocs, and researcher. No fee is required, but registration is mandatory. To reserve a seat, please complete
Re: [deal.II] Particle contact detection
Take a look at the Particles namespace, and at the rtree boost documentation (we wrap rtree from boost into the RTree alias, which is compatible with Point, BoundingBox and Segment). The tests/boost directory contains some examples that may be useful. Luca > Il giorno 14 ott 2019, alle ore
Re: [deal.II] Reading mesh from vtk file
Dear Andreas, You are trying to read a vtk file that was generated by DataOut. Try to write the file with GridOut::write_vtk. DataOut produces files that are split cell wise (to allow for some flexibility in discontinuos fields. Luca > Il giorno 5 ago 2019, alle ore 03:05, Andreas Rupp
[deal.II] Old Mac Users wanted.
Dear All, I just uploaded a version of deal.II 9.1.0pre (master version of yesterday) here: <> This was compiled with clang 6.0.0, downloaded from the officia
[deal.II] Re: Mac testers wanted
Sorry. The address is the following: Luca > Il giorno 06 mag 2018, alle ore 11:30, luca.heltai
ha > scritto: > > Dear all, > > I have just uploaded a new package for deal.II-9.0.0-rc2 here: > >
Re: [deal.II] New Mac OSX brew package
I'm using clang+gfortran. I have not tried using gcc for everything, but I could give it a shot if you think it would be worth it. Luca > On 9 Nov 2017, at 17:59, Timo Heister
wrote: > > thanks, Luca! > > Are you using the system clang with the fortran compiler from
[deal.II] New Mac OSX brew package
Dear All, I just finished uploading a new brew based package for deal.II with a 9.0pre.1 version: It was compiled on a Mac OS X High Sierra: 10.13 (17A405), with Xcode 9.0.1 (9A1004). The application contains a full `spack
Re: [deal.II] Re: Deal for Mac OS 10.13
for the suggestion. I am new to both Mac OS and Deal. Can you possibly > advise how may I install 10.12 SDK. Would it mean reverting to an older > version of the OS for iMac as well? > > Regards > Deepak > >> On Wednesday, 11 October 2017 15:21:15 UTC+8, Luca Heltai
Re: [deal.II] Re: Deal for Mac OS 10.13
Alberto, In the options of xcode, you should be able to install also the 10.12 sdk. Can you try that? Luca > On 11 Oct 2017, at 08:30, Alberto Salvadori
> wrote: > > Hi Daniel. > > After installation of High Sierra, Xcode9, upgrading cmake and using >
Re: [deal.II] does the latest dealii prepackaged image file for Mac OSX come with 'other software packages' such as p4est, PETSc, Trilinos, etc?
Yes it does. Luca > On 11 ago 2016, at 20:36, thomas stephens
wrote: > > It's not clear to me from the github dealii wiki for MacOSX whether or not > the prepackaged image file for OSX comes with all of the optional third party > libraries that are listed as optional
Re: [deal.II] opencascade shape id
Hi Chang, If you use salome to open your iges files, then it should tell you what is what. Best, Luca > On 05 ago 2016, at 09:19, Chang-Pao Chang
wrote: > > Hello, > > I am trying to follow step-54 to register shapes in occ and cells in tria > through | https://www.mail-archive.com/search?l=dealii%40googlegroups.com&q=from:%22Luca+Heltai%22&o=newest | CC-MAIN-2020-45 | refinedweb | 1,378 | 76.82 |
...
Where can I see API changes?
API changes between OLPC releases can be seen here: API changes
Getting Started
How do I structure my files so that they are a valid sugar activity?
Information on activity bundle structure can be found here: Activity bundles
How do I make an icon for my activity?
Information on what you need to do can be found here: Making
Audio & Video
Mouse
How do I change the mouse cursor in my activity to the wait cursor?
In your activity subclass:
self.window.set_cursor( gtk.gdk.Cursor(gtk.gdk.WATCH) )
and to switch it back to the default:
self.window.set_cursor( None );
How do I track the position of the mouse?
There are many different reasons you might want to track the position of the mouse in your activity, ranging from the entertaining ([[1]]) to the functional (hiding certain windows when the mouse hasn't moved for a couple of seconds and making those ui elements re-appear when the mouse has moved again). Here is one way you can implement this functionality:
... self.hideWidgetsTime = time.time() self.mx = -1 self.my = -1 self.HIDE_WIDGET_TIMEOUT_ID = gobject.timeout_add( 500, self.mouseMightHaveMovedCb ) def _mouseMightHaveMovedCb( self ): x, y = self.get_pointer() passedTime = 0 if x != self.mx or y != self.my: self.hideWidgetsTime = time.time() if self.hiddenWidgets: self.showWidgets() self.hiddenWidgets = False else: passedTime = time.time() - self.hideWidgetsTime if passedTime >= 3: if not self.hiddenWidgets: self.hideWidgets() self.hiddenWidgets = True self.mx = x self.my = y return True
Miscellaneous
The tasks below are random useful techniques that have come up as I write code and documentation for this reference. They have yet to be categorized, but will be as a sufficient set of related entries are written.
How do I know when my activity is "active" or not?
You can set an event using the VISIBILITY_NOTIFY_MASK constant in order to know when your activity changes visibility. Then in the callback for this event, you simply compare the event's state to gtk-defined variables for activity visibility. See the GDK Visibility State Constants section of gtk.gdk.Constants for more information.
# Notify when the visibility state changes by calling self.__visibility_notify_cb # (PUT THIS IN YOUR ACTIVITY CODE - EG. THE __init__() METHOD) self.add_events(gtk.gdk.VISIBILITY_NOTIFY_MASK) self.connect("visibility-notify-event", self.__visibility_notify_cb) ... # Callback method for when the activity's visibility changes def __visibility_notify_cb(self, window, event): if event.state == gtk.gdk.VISIBILITY_FULLY_OBSCURED: print "I am not visible" elif event.state in [gtk.gdk.VISIBILITY_UNOBSCURED, gtk.gdk.VISIBILITY_PARTIAL]: print "I am visible"
How do I get the amount of free space available on disk under the /home directory tree?
The following code demonstrates how to get the total amount of free space under /home.
#### Method: getFreespaceKb, returns the available freespace in kilobytes. def getFreespaceKb(self): stat = os.statvfs("/home") freebytes = stat.f_bsize * stat.f_bavail freekb = freebytes / 1024 return freekb
Note, however, that assuming anything about "/home" is a bad idea, better use os.environ['HOME'] instead. Rainbow will put your actual files elsewhere, some on ramdisks, some on flash. Be clear about which filesystem's free space you actually care about.
How do I know whether my activity is running on a physical XO?
Sugar runs on ordinary computers as well as on XO's. While your activity is typically going to be run on a real XO, some people will indeed run it elsewhere. Normally you shouldn't write your activity to care whether it's on an XO or not. If for some odd reason, you need to care, the easiest way to tell if you are on a physical XO is to check whether /sys/power/olpc-pm, an essential power management file for the XO, exists. [1] [2]
import os ... #Print out a boolean value that tells us whether we are on an XO or not. print os.path.exists('/sys/power/olpc-pm')
How do I know the current language setting on my XO?
The system variable 'LANG' tells you which language is currently active on the XO. The following code shows how to look at the value of this variable.
import os ... _logger.debug(os.environ['LANG'])
How do I repeatedly call a specific method after N number of seconds?
The gobject.timeout_add() function allows you to invoke a callback method after a certain amount of time. If you want to repeatedly call a method, simply keep invoking the gobject.timeout_add function in your callback itself. The code below is a simple example, where the callback function is named repeatedly_call. Note that the timing of the callbacks are approximate. To get the process going, you should make an initial call to repeatedly_call() somewhere in your code.
You can see a more substantive example of this pattern in use when we regularly update the time displayed on a pango layout object.
#This method calls itself ROUGHLY every 1 second def repeatedly_call(self): now = datetime.datetime.now() gobject.timeout_add(self.repeat_period_msec, self.repeatedly_update_time)
How do I update the current build version of code that is running on my XO?
There are several pages that give you instructions on how to install/update your current build.
- If you already have a working build installed and an internet connection, first try olpc-update.
- If that doesn't work, you can look at instructions for an Activated upgrade that can be done via USB] boot.
As the instructions on the pages linked above note, make sure to install your activities separately after you have upgraded to a specific base build.
I am developing on an XO laptop, but my keyboard and language settings are not ideal. How can I change them?
Internationalized laptops will often have settings that might slow you down while developing. To change around the language settings so you can better understand environment messages, use the Sugar Control Panel
Keyboard settings on internationalized laptops[3] can also be suboptimal, especially as characters like "-" and "/" are in unfamiliar positions. You can use the setxkbmap command in the Terminal Activity to reset the type of keyboard input used and then attach a standard U.S. keyboard that will allow you to type normally. The command below sets the keyboard to the US mapping (it will reset to the default internationalized mapping upon restart).
setxkbmap us
My Python activity wants to use threads; how do I do that?
A question that has been answered with limited success is which threading patterns are most appropriate for use in Sugar. The following pattern of code to work fine in basic instances:
#### Method: __init__, initialize this AnnotateActivity instance def __init__(self, handle): ... self.sample_thread = Thread(target=self.announce_thread, args=()) self.sample_thread.setDaemon(0) self.sample_thread.start() ... def announce_thread(self): while self.Running: time.sleep(1) print "thread running" self._update_chat_text("Thread", "In here")
This is the basic series of steps that most online documentation on python suggests to use when trying to work with threads in python. The problem is that it is unclear how this pattern relates to code that worked in the SimCity activity:
import gobject gobject.threads_init() #import dbus.mainloop.glib #dbus.mainloop.glib.threads_init()
It should be noted that in the SimCity activity the pygame sound player would not produce sound reliably unless this setup was done.
Should the two patterns always be used in tandem? It seems that the latter code is mainly to initiate gobject and other libraries to work with threading, but it is unclear what restrictions there are with using threading with these libraries. Does one take precedence over the other? It is not clear if there is any problem with using the standard python threading code on the sugar technology stack.
In fact, experiments with threading on sugar leads to several different problems. For one thing, thread termination was tricky - using the can_close() method for sugar activities to terminate an activity only killed threads in some circumstances. It did not properly handle terminating threads in the case of CTRL-C or terminal interrupts. You can try to catch signals (SIGINT, SIGTERM or SIGHUP), but you will still be running in to errors in terminating child threads using these as well.
Another set of errors with threading comes up when trying to combine with stream tubes. The bottom line is that it is unclear what the scope of threading in a Sugar activity should be - should it simply work if you do the standard python threading pattern, is the use of the glib.threads_init and gobject.threads_init calls necessary, are there other interactions with threads and dbus that need to be accounted for? With more clarity from sugar developers on how the platform envisions threading to work in an activity, we can be more comfortable writing entries in the Almanac to help developers write error-free code.
How do I customize the title that is displayed for each instance of my activity?
By default, activity titles are just the generic activity names that you specify in your activity.info file. In some applications, you may want the activity title to be more dynamic.
For example, it makes sense to set the title for different browser sessions to the active web page being visited. That way, when you look back in the journal at the different browser sessions you have run in the previous few days, you can identify unique sessions based on the website you happened to be visiting at the time.
The code below shows how you can set the metadata for your activity to reflect a dynamic title based on whatever session criteria you feel is important. This example is adapted from the Browse activity, which sets activity instance titles based on the title of the current web page being visited.
if self.metadata['mime_type'] == 'text/plain': if self._jobject.metadata['title_set_by_user'] != '1': if self._browser.props.title: # Set the title of this activity to be the current # title of the page being visited by the browser. self.metadata['title'] = self._browser.props.title
What packages are available on sugar to support game development?
If your activity will require tools that are typically needed to develop robust and clean video games, then you should utilize the pygame package. It can be readily imported into any activity:
import pygame ...
How do I detect when one of the game buttons on the laptop have been pressed?
The laptop game buttons (the circle, square, x, and check buttons next to the LCD) are encoded as page up, home, page down and end respectively. So, you can detect their press by listening for these specific events. For example, the code below listens for button presses and then just writes to an output widget which button was pressed.
... ####_Page_Up': self._chat += "\nCircle Pressed!" self._chat_buffer.set_text(self._chat) elif keyname == 'KP_Page_Down': self._chat += "\nX Pressed!" self._chat_buffer.set_text(self._chat) elif keyname == 'KP_Home': self._chat += "\nSquare Pressed!" self._chat_buffer.set_text(self._chat) elif keyname == 'KP_End': self._chat += "\nCheck Pressed!" self._chat_buffer.set_text(self._chat) return False;
How do I detect if one of the joystick buttons has been pressed?
This is the same process as detecting game buttons, except with different names for the keys. Again, you listen for "key-press-event" signals and then in your callback you check to see if the pressed button was one of the joystick keys.
if keyname == 'KP_Up':
    self._chat += "\nUp Pressed!"
    self._chat_buffer.set_text(self._chat)
elif keyname == 'KP_Down':
    self._chat += "\nDown Pressed!"
    self._chat_buffer.set_text(self._chat)
elif keyname == 'KP_Left':
    self._chat += "\nLeft Pressed!"
    self._chat_buffer.set_text(self._chat)
elif keyname == 'KP_Right':
    self._chat += "\nRight Pressed!"
    self._chat_buffer.set_text(self._chat)
return False
Article information
Article relates to: RadControls for Silverlight
Created by: Kiril Stanoev
Last modified: August 28, 2008
Last modified by: August 28, 2008
The target result is:
1. Create a new Silverlight Web Application Project
NOTE: Make sure you choose "Web Application Project"
After the project loads you can see that besides the regular Silverlight application, Visual Studio adds a Web application that will host the .xap file.
Before writing any LINQ or creating any WCF service, we need a database to target.
2. Right-click on RadTreeViewWithWCFWeb project and add a new item - "SQL Server Database". You can give the database any name you wish. I personally called mine TVSeries since this is going to be a TV related tutorial :).
Visual Studio will ask you whether you want to place the database in the App_Data folder. Click Yes to confirm.
Examine the RadTreeViewWithWCFWeb's App_Data folder and you will find your database there.
3. Double-clicking the TVSeries.mdf file will automatically send the database to the "Server Explorer" window.
It is now time to populate the database with some data. To keep the tutorial short, I will add only one table and fill it with data. I have called my table FoxTVSeries. You can download the database here.
Now that the database is populated, it is time to use LINQ on it.
4. Right-click on RadTreeViewWithWCFWeb project and add a new item - "LINQ to SQL Classes".
5. Open the "Server Explorer" window and drag the FoxTVSeries table onto the "Object Relational Designer". ("Object Relational Designer" opens automatically when you open the DataClasses1.dbml file)
The "Object Relational Designer" will automatically show the columns that the table has.
6. By default, the LINQ class is not serializable. In order to use the table in a web service, we need to make the DataClasses1.dbml file serializable. Right-click on the design surface and choose Properties from the drop-down. In the properties window change the "Serialization Mode" to Unidirectional.
7. The LINQ part is done; it is time now to create the web service. Again, right-click on the RadTreeViewWithWCFWeb project and add a new item - "WCF Service".
Visual Studio adds 3 files that hold the service contract for the WCF service - IService1.cs, Service1.svc and a code-behind to it - Service1.svc.cs.
8. Open the first file - IService1.cs. This file contains the operation contract. Change the name and the signature of the DoWork() method - change its name to GetTVSerie and change its return value to be List<FoxTVSery>.
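The edited contract would look roughly like this (a sketch; the ServiceContract and OperationContract attributes follow the standard WCF pattern and are assumed rather than copied from the original listing):

```csharp
[ServiceContract]
public interface IService1
{
    // DoWork() renamed and retyped as described above.
    [OperationContract]
    List<FoxTVSery> GetTVSerie();
}
```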
9. Go to the Service1.svc.cs file and implement the IService1 interface.
10. Keeping the example as simple as possible, we will select all the items in the table, without any grouping, ordering etc.
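A minimal implementation along those lines (a sketch: DataClasses1DataContext is the name LINQ to SQL generates by default from DataClasses1.dbml):

```csharp
public class Service1 : IService1
{
    public List<FoxTVSery> GetTVSerie()
    {
        // Select every row from the FoxTVSeries table, no grouping or ordering.
        DataClasses1DataContext db = new DataClasses1DataContext();
        return db.FoxTVSeries.ToList();
    }
}
```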
11. One thing that is important is to make sure that your web service uses a basicHttpBinding, not the default wsHttpBinding. Go to the Web.config file and scroll down until you find the system.serviceModel tag. Change the binding="wsHttpBinding" to binding="basicHttpBinding".
The reason to change the binding is that Silverlight supports only basic binding (SOAP 1.1 etc.).
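After the edit, the relevant endpoint line in Web.config would look roughly like this (surrounding service tags omitted; the contract attribute value is an assumption based on the interface name):

```xml
<endpoint address="" binding="basicHttpBinding" contract="IService1" />
```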
Congrats, the web service is all set! The next step is to use the web service in the Silverlight application.
12. Go to the RadTreeViewWithWCF project, right-click on References and add a "Service Reference".
A popup window appears; hit the Discover button to find the web service and then hit OK to add it.
13. Once you have the WCF Service added, it is time to work a little on the XAML and actually add the TreeView. First of all, add references to Telerik.Windows.Controls.dll and Telerik.Windows.Controls.Navigation.dll.
14. Open the Page.xaml file and reference the previously added dlls. Also you need to reference the RadTreeViewWithWCF project.
15. In the UserControl.Resources of Page.xaml we have to create a hierarchical data template that will be used as an item template for the TreeView. We also create the data source for the TreeView there.
<UserControl.Resources>

    <local:HierarchicalDataSource x:Key="Source" />

    <core:HierarchicalDataTemplate x:Key="NodeTemplate">
        <core:HierarchicalDataTemplate.HeaderTemplate>
            <DataTemplate>
                <TextBlock Text="{Binding NodeText}" TextWrapping="Wrap" Width="400"/>
            </DataTemplate>
        </core:HierarchicalDataTemplate.HeaderTemplate>
    </core:HierarchicalDataTemplate>

</UserControl.Resources>
16. Next step is to create the TreeView and apply the item template and the data source.
<telerik:RadTreeView
    HorizontalAlignment="Left"
    VerticalAlignment="Top"
    ItemsSource="{Binding Source={StaticResource Source}}"
    ItemTemplate="{StaticResource NodeTemplate}"
    />
17. If you decide to build, Visual Studio will encounter an error at the line declaring the local:HierarchicalDataSource resource, since that class does not exist yet.
18. In the RadTreeViewWithWCF project add a new class called HierarchicalDataSource.cs. Make this class inherit from ObservableCollection<TableItem>.
public class HierarchicalDataSource : ObservableCollection<TableItem>
Try to build and you will get an error telling you that you are missing the TableItem class. Therefore we need to create the TableItem class. This class is going to represent a single entry from the FoxTVSeries table. This means that this class is going to have properties like NodeID, ParentID, NodeText and one additional property called Children. The Children property will be used to turn the flat data into a hierarchical structure.
public class TableItem
{
    private string nodeText;
    private int nodeID;
    private System.Nullable<int> parentID;
    private List<TableItem> children;

    public TableItem(string nodeText, int nodeID, System.Nullable<int> parentID)
    {
        this.nodeText = nodeText;
        this.nodeID = nodeID;
        this.parentID = parentID;

        this.children = new List<TableItem>();
    }

    public string NodeText
    {
        get { return this.nodeText; }
    }

    public System.Nullable<int> ParentID
    {
        get { return this.parentID; }
    }

    public int NodeID
    {
        get { return this.nodeID; }
    }

    public List<TableItem> Children
    {
        get { return this.children; }
    }
}
Once the TableItem class is done, we are ready to proceed with the HierarchicalDataSource class.
public class HierarchicalDataSource : ObservableCollection<TableItem>
{
    // This list holds all the items that come from the web service result
    private List<TableItem> unsortedList = new List<TableItem>();

    public HierarchicalDataSource()
    {
        // Create a new instance of the web service and get the data from the table
        Service1Client webService = new Service1Client();
        webService.GetTVSerieCompleted += new EventHandler<GetTVSerieCompletedEventArgs>(WebService_GetTableCompleted);
        webService.GetTVSerieAsync();
    }

    private void WebService_GetTableCompleted(object sender, GetTVSerieCompletedEventArgs e)
    {
        // transfer all the items from the result to the unsorted list
        foreach (FoxTVSery item in e.Result)
        {
            TableItem genericItem = new TableItem(item.NodeText, item.NodeID, item.ParentID);
            this.unsortedList.Add(genericItem);
        }

        // Get all the first level nodes. In our case it is only one - House M.D.
        var rootNodes = this.unsortedList.Where(x => x.ParentID == x.NodeID);

        // For each root node, get all its children and add the node to the HierarchicalDataSource.
        // See below how the FindChildren method works.
        foreach (TableItem node in rootNodes)
        {
            this.FindChildren(node);
            this.Add(node);
        }
    }

    private void FindChildren(TableItem item)
    {
        // find all the children of the item
        var children = unsortedList.Where(x => x.ParentID == item.NodeID && x.NodeID != item.NodeID);

        // add the child to the item's children collection and call FindChildren
        // recursively, in case the child has children of its own
        foreach (TableItem child in children)
        {
            item.Children.Add(child);
            FindChildren(child);
        }
    }
}
It is a pretty straightforward class. It has a list that will contain the flat data coming from the web service result. Then this list is traversed and the hierarchical data is built.
If you need any more explanations or you have any suggestions, do not hesitate to drop me a comment.
Resources:
RadTreeViewWithWCF.zip
Telerik.Windows.Controls.dll
Telerik.Windows.Controls.Navigation.dll
FoxTVSeries.mdf
The syntax for declaring a struct is almost identical to that for a class:
[attributes ] [access-modifiers ] struct identifier [:interface-list ] { struct-members }
Example 7-1 illustrates the definition of a struct. Location represents a point on a two-dimensional surface. Notice that the struct Location is declared exactly as a class would be, except for the use of the keyword struct. Also notice that the Location constructor takes two integers and assigns their value to the instance members, x and y. The x and y coordinates of Location are declared as properties.
using System;

public struct Location
{
    private int xVal;
    private int yVal;

    public Location(int xCoordinate, int yCoordinate)
    {
        xVal = xCoordinate;
        yVal = yCoordinate;
    }

    public int x
    {
        get { return xVal; }
        set { xVal = value; }
    }

    public int y
    {
        get { return yVal; }
        set { yVal = value; }
    }

    public override string ToString()
    {
        return (String.Format("{0}, {1}", xVal, yVal));
    }
}

public class Tester
{
    public void myFunc(Location loc)
    {
        loc.x = 50;
        loc.y = 100;
        Console.WriteLine("In MyFunc loc: {0}", loc);
    }

    static void Main()
    {
        Location loc1 = new Location(200, 300);
        Console.WriteLine("Loc1 location: {0}", loc1);
        Tester t = new Tester();
        t.myFunc(loc1);
        Console.WriteLine("Loc1 location: {0}", loc1);
    }
}

Output:
Loc1 location: 200, 300
In MyFunc loc: 50, 100
Loc1 location: 200, 300
Unlike classes, structs do not support inheritance. They implicitly derive from object (as do all types in C#, including the built-in types) but cannot inherit from any other class or struct. Structs are also implicitly sealed (that is, no class or struct can derive from a struct). Like classes, however, structs can implement multiple interfaces. Additional differences include the following:
Structs cannot have destructors, nor can they have a custom parameterless (default) constructor. If you do not supply a constructor, your struct will in effect be provided with a default constructor that will zero all the data members or set them to default values appropriate to their type (see Table 4-2). If you supply any constructor, you must initialize all the fields in the struct.
You cannot initialize an instance field in a struct. Thus it is illegal to write:
private int xVal = 50;
private int yVal = 100;
though that would have been fine had this been a class.
Structs are designed to be simple and lightweight. While private member data promotes data hiding and encapsulation, some programmers feel it is overkill for structs. They make the member data public, thus simplifying the implementation of the struct. Other programmers feel that properties provide a clean and simple interface, and that good programming practice demands data-hiding even with simple lightweight objects. Whichever you choose is a matter of design philosophy; the language supports either approach. | http://etutorials.org/Programming/Programming+C.Sharp/Part+I+The+C+Language/Chapter+7.+Structs/7.1+Defining+Structs/ | crawl-001 | refinedweb | 399 | 55.44 |
Using External actionscript files.
This tutorial will teach you how to use an external ActionScript file with ActionScript 2.0. External ActionScript files are used to separate the code from the actual Flash interface. This is useful as it helps to unclutter the work area and allows you to work with multiple scripts at the same time, if necessary.
To get an external Actionscript file to work, you need to first link it to the flash document. This only requires one simple line of code which is:
#include "youractionscriptfile.as"
This line of code includes your external actionscript file into your flash document.
External actionscript files.
Step 1
Open a new Actionscript file and save the file name as test.as. You could use whatever name you wish.
Step 2
Now open up a new Flash document.
On the timeline insert a new layer called "actions". Select the first frame and then right click and select actions.
Step 3
And add the following line of code:
#include "test.as"
You should now be able to use external actionscript files.
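Because #include pastes the file's contents into the frame script at compile time, test.as can contain any ordinary frame code. A hypothetical example:

```actionscript
// test.as - included verbatim into frame 1 by the #include directive
var greeting:String = "Hello from an external ActionScript file";
trace(greeting);
```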
(It took me a while to come up with a new CDBS packaging series post, not because I stopped using CDBS, but because I kept procrastinating while keeping myself busy.)
This is the second post in the CDBS packaging series. In this post I'm going to talk about package relationship management.
The best example of where this feature is useful is packages whose build-time and run-time dependencies overlap. Most Perl modules with test suites have an intersection between build dependencies and run-time dependencies. So let me take the example of a Perl module.
First let's see the control file of a Perl package which is not using CDBS, and then let me explain how CDBS can help you improve the situation. I chose libxml-libxml-perl; let's see the part of the control file which includes Build-Depends, Depends, Suggests and Recommends.
Source: libxml-libxml-perl
Maintainer: Debian Perl Group <pkg-perl-maintainers@lists.alioth.debian.org>
Uploaders: Jonathan Yu <jawnsy@cpan.org>,
 gregor herrmann <gregoa@debian.org>,
 Chris Butler <chrisb@debian.org>
Section: perl
Priority: optional
Build-Depends: perl (>= 5.12),
 debhelper (>= 9.20120312),
 libtest-pod-perl,
 libxml2-dev,
 libxml-namespacesupport-perl,
 libxml-sax-perl,
 zlib1g-dev
Standards-Version: 3.9.4
Vcs-Browser:
Vcs-Git: git://anonscm.debian.org/pkg-perl/packages/libxml-libxml-perl.git
Homepage:

Package: libxml-libxml-perl
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, ${perl:Depends},
 libxml-namespacesupport-perl,
 libxml-sax-perl
Breaks: libxml-libxml-common-perl
Replaces: libxml-libxml-common-perl
Description: Perl interface to the libxml2 library
So two packages, libxml-namespacesupport-perl and libxml-sax-perl, appear in both the Build-Depends and Depends fields.
So in this situation there is a possibility that we miss adding one or both of these packages to the Depends field. I'm not saying we will surely miss them, but we might; after all, we are all human beings.
So how can we improve the situation using CDBS? Let me go through step by step on what we need to do.
--- debian/control 2013-04-28 23:08:11.930082600 +0530 +++ debian/control.in 2013-05-04 20:51:18.849680419 +0530 @@ -5,13 +5,7 @@ Chris Butler <chrisb@debian.org> Section: perl Priority: optional -Build-Depends: perl (>= 5.12), - debhelper (>= 9.20120312), - libtest-pod-perl, - libxml2-dev, - libxml-namespacesupport-perl, - libxml-sax-perl, - zlib1g-dev +Build-Depends: @cdbs@ Standards-Version: 3.9.4 Vcs-Browser: Vcs-Git: git://anonscm.debian.org/pkg-perl/packages/libxml-libxml-perl.git @@ -20,8 +14,7 @@ Package: libxml-libxml-perl Architecture: any Depends: ${shlibs:Depends}, ${misc:Depends}, ${perl:Depends}, - libxml-namespacesupport-perl, - libxml-sax-perl + ${cdbs:Depends} Breaks: libxml-libxml-common-perl Replaces: libxml-libxml-common-perl Description: Perl interface to the libxml2 library @@ -30,4 +23,3 @@ programmers to make use of the highly capable validating XML parser and the high performance Document Object Model (DOM) implementation. Additionally, it supports using the XML Path Language (XPath) to find and extract information. -
#!/usr/bin/make -f

include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/rules/utils.mk
include /usr/share/cdbs/1/rules/upstream-tarball.mk
include /usr/share/cdbs/1/class/perl-makemaker.mk

pkg = $(DEB_SOURCE_PACKAGE)
deps = libxml-namespacesupport-perl, libxml-sax-perl
deps-test = libtest-pod-perl

CDBS_BUILD_DEPENDS +=, $(deps), $(deps-test)
CDBS_BUILD_DEPENDS +=, zlib1g-dev, libxml2-dev, perl (>= 5.12)
CDBS_DEPENDS_$(pkg) = , $(deps)
So basically we moved all the Build-Depends and Depends entries to the rules file. The common ones are placed in the deps variable and assigned to both Build-Depends and Depends. CDBS provides a set of variables for package relationship management, such as CDBS_BUILD_DEPENDS and the per-package CDBS_DEPENDS_pkgname used above.
Other than CDBS_BUILD_DEPENDS, all the other variables work using substvars, i.e. CDBS puts the respective substitutions in the pkgname.substvars file, which is used during deb creation to replace things in the control file.
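For illustration (the exact file contents here are an assumption), after the build you would find a line like this in debian/libxml-libxml-perl.substvars, which then gets substituted for ${cdbs:Depends} in the control file:

```
cdbs:Depends=libxml-namespacesupport-perl, libxml-sax-perl
```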
So to make CDBS generate the new control file, run the command below:
DEB_MAINTAINER_MODE=1 fakeroot debian/rules debian/control
Basically this command needs to be executed before starting the build process; if you miss it, your changes will not be reflected in debian/control. Additionally, this feature is a maintainer-mode helper tool, because Debian policy prohibits changing debian/control during a normal package build.
So what is the benefit of using this feature of CDBS? I've listed some of the benefits which I felt are obvious.
One last thing I want to point out: if you are NMUing a CDBS package,
NMUs need not (but are encouraged to) make special use of these tools. In particular, the debian/control.in file can be completely ignored.
Before closing down the post, If you find some mistake in the post please let me know either through comments or through the email.
Soon I will be back with new CDBS recipes till then cya.
Today I'm writing this blog with saddened heart. My mentor and a best friend Dr.Ashokkumar is no more. He died yesterday after fighing with Lymph Node cancer.
Ashokkumar, or Ashok sir as we students used to address him, was a Professor of Information Science Engineering at NMAM Institute of Technology, recently transferred to the Computer Science Engineering department. My last meeting with him was last year in December, during which he looked every bit okay, other than the knee pain because of which he couldn't walk freely. But I never imagined that it would be my last meeting with him.
Ashok sir was also behind the FLOSS events that took place at NMAM Institute of Technology, including MiniDebconf 2011, which saw 2 foreign DDs, Christian Perrier and Jonas Smedegaard.
It was because of the first FLOSS event he organized, called Linux Habba, where I volunteered, that I entered the FLOSS world. This means it is because of him that I started my FLOSS journey and reached my current level. It was also because of his motivation that I started writing this blog, which I continue to this day.
I wholeheartedly thank Ashok sir for teaching me, guiding me and motivating me during my difficult times. You will always be remembered throughout my life. May your soul Rest In Peace.
Here are the 2 pics of Ashok sir taken during Minidebconf (Credits: Christian Perrier and Kartik Mistry)
Good bye Sir :-(
I'm no symbols expert, and this is the first time I've dealt with symbols files since I started packaging. What I did here is based on some suggestions I got in #debian-mentors.
If you think what I did was wrong please enlighten me :-).
Recently 2 of my library packages, pugixml and ctpp2, got accepted into the Debian archive, and when buildd tried to build them on the remaining architectures (other than amd64, the one for which I uploaded), the builds failed. This was expected, as the symbols file I generated was for amd64. As usual I got 2 serious bug reports, #704718 and #705135.
First I was not sure how to handle this. I read the article on symbols file handling by rra [1] and tried to use the pkgkde-symbolshelper tool, only to quickly figure out that I needed to use the pkgkde_symbolshelper addon for the dh sequencer. But this was not possible for me as I was using CDBS for packaging.
I did a quick chat on #debian-mentors and someone suggested that I tag symbols which vary across architectures with the (c++) tag. First I was not sure, but after reading the dpkg-gensymbols man page I understood that I needed to replace the entire mangled symbol lines with their de-mangled versions, tagged with (c++) at the beginning.
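As an illustration (the symbol here is hypothetical, but the tag syntax is the one documented for dpkg-gensymbols), replacing a mangled line with its demangled, tagged form looks like this:

```
# before (mangled, architecture-specific):
 _ZN5ctpp27CTPP2VMC1Em@Base 2.6.0
# after (demangled, tagged):
 (c++)"ctpp2::CTPP2VM::CTPP2VM(unsigned long)@Base" 2.6.0
```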
But this was a hectic job, searching for each deleted symbol and replacing it. So I thought of writing a script to do the job, and after struggling for 3 days (yeah, I was a bit dumb that I didn't read the manual first) I got it working. I updated the pugixml and ctpp2 packages using it, which are now waiting for Jonas to upload them.
Here is the script
#!/bin/bash
set +x

if [ $# -lt 3 ]; then
    echo "Usage: $0 failed_buildlogs_directory symbols_file package_version"
    exit 2
fi

BUILD_LOG_DIRECTORY=$1
SYMBOLS_FILE=$2
PACKAGE_VERSION=$3
VERSION_TO_REPLACE=" $PACKAGE_VERSION\""

for LOGFILE in $(ls $BUILD_LOG_DIRECTORY/*.build); do
    for i in $(grep '^-\s_Z' $LOGFILE | perl -pe 's/-//g;'); do
        if [ $i = $PACKAGE_VERSION ]; then
            continue
        fi
        demangled_version="\""$(echo $i" "$PACKAGE_VERSION | c++filt)"\""
        tagged_version="(c++)"${demangled_version%$VERSION_TO_REPLACE}"\" "$PACKAGE_VERSION
        escaped_tagged_version=$(echo $tagged_version | sed 's/\&/\\\&/')
        sed -i "s#$i $PACKAGE_VERSION#$escaped_tagged_version#" $SYMBOLS_FILE
    done
done
So basically, to make this work we need all the build logs to be downloaded from the buildds. Again this was easy, thanks to rra for developing pkgkde-getbuildlogs :-).
Once you have the build logs directory, run the above script as follows:
cppsymbol_replace.sh path_to_buildlogs path_to_symbol_file upstream_version
After replacing the symbols I tried to build the package in an i386 chroot, and the build passed successfully, but lintian told me that there were symbols which had the Debian version appended to them, and this might lead to trouble later. So it was back to mentors :-).
It is this time I really understood the concept of mangled names generated by the compiler and why they vary across architectures ;-).
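A concrete (hypothetical) illustration of why mangled names differ per architecture: the mangled name encodes the underlying builtin type, and std::size_t is unsigned long on amd64 but unsigned int on i386:

```
void foo(std::size_t n);   // one C++ declaration ...

_Z3foom                    // ... mangles like this on amd64 (size_t = unsigned long, code 'm')
_Z3fooj                    // ... but like this on i386      (size_t = unsigned int,  code 'j')
```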
This time someone by the nick pochu suggested that I pass the option -v with the package version to dpkg-gensymbols, to make it generate symbols with the package version and not the Debian version.
The following probably needs to be done if the package uses the dh sequencer, but I'm not sure as I've not tested it. If it's wrong, please correct me.
override_dh_makeshlibs:
	dh_makeshlibs -- -v$(PACKAGEVERSION) # package version needs to be either extracted using dpkg-parsechangelog or manually fed
If you are using CDBS this is pretty simple. Just add the following to the rules file:
DEB_DH_MAKESHLIBS_ARGS_$(pkgname) += -- -v$(DEB_UPSTREAM_VERSION)
I noticed that when I provide a (c++) tagged de-mangled name, dpkg-gensymbols simply replaces it with the proper mangled name, but the deletion doesn't trigger an error in dpkg-gensymbols.
This script allowed me to replace 128 symbols, which were very tricky and long, with de-mangled and tagged versions in ctpp2, so I hope it should work across different packages without any problem. The only silly mistake I made was with the occurrence of the & symbol in a function name, which made sed go mad and took me one full day to debug :facepalm:.
So that's it folks, if you see something wrong in what I did please let me know through the comments.
[1]
I've provided a new patch, #701061, for lintian to warn about font packages that are not marked as Multi-Arch foreign or allowed. It has already been included in lintian by Niels Thykier and will be part of version 2.5.12. The following tag has been implemented:
font-package-not-multi-arch-foreign
A Bit of History for this implementation is as follows:
We got a bug report that one of the fonts maintained by the pkg-fonts team was not being installed for i386 on an amd64 multi-arch system (#694864). We were at first confused, but Daniel Kahn Gillmor pointed out that we indeed need to mark all font packages as Multi-Arch: foreign. He proposed that we should write a lintian check for this, which I volunteered to do and then forgot! Recently I was checking my QA page and landed on Ubuntu's page for one of my packages, where I saw they were patching the imported font package and marking it as Multi-Arch: foreign, and I suddenly remembered my promise! This patch was the result of that enlightenment :-).
Since there is a huge number of font packages maintained by pkg-fonts-devel, we targeted this for the Jessie release.
I hereby request all font package maintainers to consider marking their packages as Multi-Arch: foreign. I also request people to join us on pkg-fonts-devel and help us do this for all font packages maintained by the team; we really lack people in the team.
I'm trying to stream tweets from Twitter using Tweepy for a particular hashtag. The problem I'm facing is that fetching 500 tweets takes around 10-15 minutes. I don't think it is supposed to be that slow? Am I missing anything? Has it got to do with any API rate limits? My tweepy listener looks like this:
class MyListener(StreamListener):
    """Custom StreamListener for streaming data."""

    def __init__(self, lim):
        self.count = 0
        self.limit = lim

    def on_data(self, data):
        global tweets
        if self.count < self.limit:
            try:
                self.count += 1
                tweets.append(data)
                return True
            except BaseException, e:
                print 'failed ondata,', str(e)
                time.sleep(5)
                pass
        else:
            return False

    def on_error(self, status):
        print(status)
        return True
You are trying to fetch live tweets. It means the rate at which you collect tweets is the rate at which people post tweets with that hashtag. You can try your code with a popular or trending hashtag and you will get output faster.
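As an aside, the limit logic in your listener can be sanity-checked offline with a minimal stand-in (no Twitter connection; the payloads below are made up):

```python
class CountingListener(object):
    """Stand-in mirroring the question's limit logic, minus the network."""
    def __init__(self, lim):
        self.count = 0
        self.limit = lim
        self.tweets = []

    def on_data(self, data):
        if self.count < self.limit:
            self.count += 1
            self.tweets.append(data)
            return True   # keep the stream open
        return False      # tweepy disconnects when on_data returns False

listener = CountingListener(3)
results = [listener.on_data('tweet-%d' % i) for i in range(5)]
print(results)           # -> [True, True, True, False, False]
print(listener.tweets)   # -> ['tweet-0', 'tweet-1', 'tweet-2']
```

This confirms the listener stops accepting data after the limit; the slow part is purely how fast matching tweets arrive.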
Simple library for helping share and manage state in react applications. It provides a clear separation between business logic and views.
Installation
npm install --save rex-react
And then import it:
// ES6 modules
import { Provider, Listener } from 'rex-react';

// commonjs
const Provider = require('rex-react').Provider;
const Listener = require('rex-react').Listener;
API
The library exposes two React components: Provider and Listener.
<Provider />
props.entities
This is an array of plain objects that represents the state of your app. It is mandatory that you pass at least one element.
<Provider entities={[Person, Ship]}>
  <div>...</div>
</Provider>
<Listener />
props.children
A render function that is called with the array of objects/entities.
<Listener>
  {(Person, Ship) => (
    /* And you can access and do whatever with the entities you provided before */
    <h1>{Person.getName()}</h1>
  )}
</Listener>
Guide
First, wrap your main component with Provider and pass an array of objects or entities by prop.
const App = (props) => {
  return (
    <Provider entities={[Counter]}>
      <Counter />
      <Display />
    </Provider>
  );
};
Entities are plain JS objects that represent the business logic of the application. You're free to model the logic of the program as you wish, and every change on those objects will fire a setState; so make sure to use immutable data types, or a re-render won't be fired.
In this case:
const Counter = {
  counter: 1,
  increment() {
    this.counter++;
  },
  decrement() {
    this.counter--;
  },
  getCounter() {
    return this.counter;
  }
};
Finally, every component that needs to be aware of the Counter object can do it this way:
const Counter = props => {
  return (
    <Listener>
      {counter => (
        <div>
          <button onClick={() => counter.increment()}>Increment</button>
          <button onClick={() => counter.decrement()}>Decrement</button>
        </div>
      )}
    </Listener>
  );
};

const Display = props => {
  return (
    <Listener>
      {counter => (<span> {counter.getCounter()} </span>)}
    </Listener>
  );
};
Questions or suggestions?
Feel free to contact me on Twitter or open an issue.