I'm brushing up on some of my notes for a midterm tomorrow.
Questions I have from my reading:
What exactly does a parameter do?
I know it passes a value to a function, but is it always like that, or is it more complicated than that? Is it kind of like passing a power in math?
For the functions, should the declarations go at the very top, or in their respective functions?
like:
#include <iostream>
using namespace std;

double funct1(); // function prototype - is double ok to use? (can't use void, since I need a return)

int main() {
    funct1(); // call the function (writing "double funct1();" here would just re-declare it, not call it)
    return 0;
}

// function definition
double funct1() {
    int x = 1, y = 2; // declare here or at top in main? note: no GLOBAL const allowed!
    cout << "Answer: " << x + y << endl;
    return x + y; // <-- didn't know what to return; returning the computed value works
}
What does it mean to pass by value or pass by reference? What's the difference?
What are the basics of an array?
What about a 2D array?
And what is a structure?
Guys, I'm reading my book but I feel like I'm reading something foreign! :(
It doesn't make sense to me! Help me in simple English please!!
Your help will be GREATLY appreciated!!
(OK, back to studying :D)
java.lang.Object
  org.netlib.lapack.STGEX2
public class STGEX2
STGEX2 is a simplified interface to the JLAPACK routine stgex2. The parameter documentation:

A       (input/output) REAL arrays, dimensions (LDA,N)
        On entry, the matrix A in the pair (A, B).
        On exit, the updated matrix A.

LDA     (input) INTEGER
        The leading dimension of the array A. LDA >= max(1,N).

B       (input/output) REAL arrays, dimensions (LDB,N)
        On entry, the matrix B in the pair (A, B).
        On exit, the updated matrix B.

LDB     (input) INTEGER
        The leading dimension of the array B. LDB >= max(1,N).

Q       (input/output) REAL array, dimension (LDZ,N)
        On entry, if WANTQ = .TRUE., the orthogonal matrix Q.
        On exit, the updated matrix Q.
        Not referenced if WANTQ = .FALSE..

LDQ     (input) INTEGER
        The leading dimension of the array Q. LDQ >= 1.
        If WANTQ = .TRUE., LDQ >= N.

Z       (input/output) REAL array, dimension (LDZ,N)
        On entry, if WANTZ = .TRUE., the orthogonal matrix Z.
        On exit, the updated matrix Z.
        Not referenced if WANTZ = .FALSE..

LDZ     (input) INTEGER
        The leading dimension of the array Z. LDZ >= 1.
        If WANTZ = .TRUE., LDZ >= N.

J1      (input) INTEGER
        The index to the first block (A11, B11). 1 <= J1 <= N.

N1      (input) INTEGER
        The order of the first block (A11, B11). N1 = 0, 1 or 2.

N2      (input) INTEGER
        The order of the second block (A22, B22). N2 = 0, 1 or 2.

WORK    (workspace) REAL array, dimension (LWORK).

LWORK   (input) INTEGER
        The dimension of the array WORK.
        LWORK >= MAX( N*(N2+N1), (N2+N1)*(N2+N1)*2 )

INFO    (output) INTEGER
        =0: Successful exit
        >0: If INFO = 1, the transformed matrix (A, B) would be
            too far from generalized Schur form; the blocks are
            not swapped and (A, B) and (Q, Z) are unchanged.
            The problem of swapping is too ill-conditioned.
        <0: If INFO = -16: LWORK is too small. Appropriate value
            for LWORK is returned in WORK(1).

Further Details
===============

Based on contributions by Bo Kagstrom and Peter Poromaa, Department of
Computing Science, Umea University, S-901 87 Umea, Sweden.

In the current code both weak and strong stability tests are performed.
The user can omit the strong stability test by changing the internal
logical parameter WANDS to .FALSE.. See ref. [2] for details.
public STGEX2()
public static void STGEX2(boolean wantq, boolean wantz, int n, float[][] a, float[][] b, float[][] q, float[][] z, int j1, int n1, int n2, float[] work, int lwork, intW info) | http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/STGEX2.html | CC-MAIN-2015-32 | refinedweb | 407 | 67.96 |
15 October 2010 12:45 [Source: ICIS news]
LONDON (ICIS)--New registrations for passenger vehicles in the 27-member EU fell for the sixth consecutive month in September, down by 9.6% compared with the same month last year as demand for new cars continued to deteriorate throughout 2010, an industry body said on Friday.
According to data from the European Automobile Manufacturers’ Association (ACEA), a total of 1,227,645 new passenger vehicles were registered in September, following a 12.9% year-on-year decrease in August.
Over the first three quarters of 2010, registrations were 4.3% lower compared to the same period of 2009, totalling 10,251,140 vehicles, the ACEA added.
New car registrations began to slow down in April, when there was a 7.4% decline compared with April 2009. This was then followed by a 9.3% drop in May, a 6.9% fall in June, and an 18.6% slump in July, as various government incentive schemes to support fleet renewal ended.
The ACEA said that in September all the major markets contracted.
A wide variety of chemicals markets depend on demand from the automotive industry, which has suffered badly from the effects of the global economic crisis.
Description

On Everything is a string, LES said on April 27, 2004:
- [..] the more I use Tcl, the more I am convinced that everything is a list. And that's one of the best features in Tcl.
- A Tcl script is a string, that is firstly split into one or more commands, which are evaluated from first to last, unless one of them alters this execution order.
- Each command is split into a sequence of words which are subject to substitutions.
- The first word of the resulting sequence of words is used to look up the command implementation, which, when found, is applied to the rest of the words. Or in a different terminology: the rest of the words are given as arguments to the first, which is a command.
- KJN 2004-11-04: The endekalogue does not define lists - it leaves it to commands to decide whether to interpret an argument as a list (and, implicitly, to define what is meant by a list).
- RHS 2004-11-05: [..]
- NEM 2005-07-25: [..] what is at the heart of this debate is not [..], but rather a recognition that the notion of a "type" of a value is extrinsic to the value itself.
% set x {a  b c}
a  b c
% puts $x
a  b c

[puts] receives the value of x, which it interprets as a string. Consider the following:

% lappend x d
a b c d
% puts $x
a b c d

[lappend] converts $x into a list, and then appends d to it. As a result of the conversion, the double space between the first two elements has been lost. puts expects a string, so it converts x back to a string. x has now been used as both a list and a string. Internally, Tcl may be caching the various representations for improved performance, but at the script level, what matters is that x can be used as both types.
An Indecent Proposal by LEG

How to get rid of the {*} expansion prefix, or: how to benefit most from the notion that Everything is a list (of strings).

Up to Tcl 8.4, [eval] was the only mechanism available to interpolate lists into a command. In Tcl 8.5 the syntax of the Tcl language has been changed, introducing the expansion prefix {*}, which can be put in front of any word to provide this {expand}ing.

LEG suggests the following language change:
- every command returns a list of words, which may be empty.
- command substitution expands the invoking commandline with the list of words returned by the command
- return returns the list of words in its commandlines to the invoking stack frame
# proc valueOf x {return [list $x]}
# set x {a b c}
a b c
# valueOf $x
{a b c}
# valueOf {*}[set x]
wrong # args: should be "valueOf x"

The problem is that any command can now return a number of values, and valueOf accepts only one. We rewrite valueOf as a variadic function, using the tautology [list {*}$args] = $args.
# proc valueOf args {return $args}
# valueOf {*}[set x]
a b c
# valueOf $x   ;# remember!
{a b c}

Features:
- list representation from [set x], string representation from $x
- The idiom: use set x instead of return $x, will break if x contains a list: the former returns a list, the latter returns a list with one element (which is a list).
namespace import ::tcl::mathop::*
proc map {prefix args} {
    set r {}
    foreach e $args {lappend r [uplevel $prefix $e]}
    set r
}
if {& [map {file exists} [map {file join / etc} passwd shadow group]]} {
    puts "We seem to be on a unix box with shadow passwords"
}

Which seems fairly more natural than:
if {& {*}[map {file exists} {*}[map {file join / etc} passwd shadow group]]} {
    ...
}

Hint for people not used to Functional Programming: try to read the idiom from right to left: a list of names gets converted into a list of filesystem paths, which gets converted into a list of flags, which are all 'and'ed together.

A typical "old style" implementation for comparison:
set files {}
foreach file {passwd shadow group} {
    lappend files [file join / etc $file]
}
set flag 1
foreach file $files {
    set flag [expr {$flag & [file exists $file]}]
}
if {$flag} {
    puts "We seem to be on a unix box with shadow passwords"
}

NEM: Why not just not use a variadic function when you don't want one?
proc map {f xs} {
    set ys [list]
    foreach x $xs {
        lappend ys [{*}$f $x]
    }
    return $ys
}
if {[& [map {file exists} [map {file join / etc} {passwd shadow group}]]]} {
    ...
}

LEG 2008-12-13: Just made a slight correction to the above to make it work (missing braces).
Note: While playing with this, I found some gotchas in the use of {*}:
# proc valueOf args {return $args}
# set x {a b c}
# set y "{*}[valueOf $x]"
{*}{a b c}
# set y "{*}$x"
{*}a b c

I would have expected the result to be a b c and a b c respectively, since e.g. first {*}$x should be substituted by a b c, and then the '"' quotes stripped off. When reading the Dodekalogue I realize that the {*} expansion occurs only at the start of a word, which explains the shown behaviour.

Are you saying that one gotcha is that {*} behaves as documented?

LEG 2008-12-13: yes, who reads documentation? :) What is cool with Tcl is that normally I'm able to write down a script by intuition and it 'just works' (tm).

Another thing I just found out:
# set x {*}{}
==> can't read "x": no such variable
# set x {*}[list]
==> can't read "x": no such variable

The empty list gets expanded into nothing and the resulting commandline is: set x. If you set a variable via the expansion prefix and your command eventually returns an empty list, you might get the above error message. Workaround: always initialize variables before setting them via the {*} prefix.

Note that the proposed semantic change to [bracket] expansion would have to deal with a sensible interpretation of substituting a list of words returned by a [bracket] expression into a string. I guess the most simple one would be to use the string representation of the list.
The extensive use of variadic functions, as well as introducing (return) values for loops, would further the Simplification of the Tcl language even more and make scripts more understandable. See Simplification of the Tcl language for the respective discussion.
aricb 2008-12-10: I'm intrigued by the prospect of [return] being able to return more than one value, but it seems to me that your proposal to eliminate {*} is fundamentally flawed. You say that command substitution would expand arguments, so that $var and [set var] would behave differently. Consider this:

What if I want to return a list? In Tcl 8.5, I can do:
return [list $value1 $value2 $value3]but under your proposal, if I read it correctly, this would be equivalent to
return $value1 $value2 $value3[list], [dict create], [lreplace], etc. would no longer return lists but a bunch of scalar values. To make matters worse, what if I wanted to return two lists? The following hypothetical command:
return [list $value1 $value2] [list $value3 $value4]would be equivalent to
return $value1 $value2 $value3 $value4

which is not at all correct.

You cannot allow square brackets to do argument expansion without seriously botching up the way Tcl handles lists. If you are going to allow [return] to return multiple values, you have to do it in such a way that the integrity of the values is preserved. In other words, when command substitution takes place, the command must be replaced with a number of words equal to the number of arguments to the [return] statement that terminated the execution of the command.
proc returnOneValue {} {
    return [list value1A value1B]
}
proc returnTwoValues {} {
    return value1 value2
}
proc acceptOneArg {arg} {
    puts "arg: $arg"
}
proc acceptTwoArgs {arg1 arg2} {
    puts "arg1: $arg1"
    puts "arg2: $arg2"
}
% acceptOneArg a
arg: a
% acceptOneArg [returnOneValue]
arg: value1A value1B
% acceptTwoArgs a b
arg1: a
arg2: b
% acceptTwoArgs [returnTwoValues]
arg1: value1
arg2: value2
% acceptOneArg [returnTwoValues]
wrong # args: should be "acceptOneArg arg"
% acceptTwoArgs [returnOneValue]
wrong # args: should be "acceptTwoArgs arg1 arg2"

In a Tcl interpreter where commands could return more than one value, you could do away with {*}$list, but not by replacing it with [set $list]; you would need a new command which takes one argument (a list) and returns each member as a separate value: [expand $list].

It should be noted that allowing a variable number of return values would introduce a serious incompatibility regarding procs that call [return] with no arguments. In Tcl 8.5, this returns an empty string. If return were variadic, [return] with no arguments would truly return nothing. Any script that relies on an argumentless [return] returning an empty string would break.

In the end, as useful as multiple return values might be, I think we are better off with the syntax and semantics that are currently in place.

LEG 2008-12-11: Good point. I see that more would have to be done than I thought: [list $value1 $value2] would have to return just one value after expansion.

aricb 2008-12-12: The point was not that [list] should have a special behavior (it should not) but that square brackets must not trigger argument expansion. The fact that you frame the discussion in terms of expansion indicates that you are not really returning multiple values; you are returning a single value (a list) which must be expanded. So you are not proposing any new behavior for [return]. The new behavior comes because square brackets will now trigger argument expansion.
In your proposal, to return multiple values, you return a list, which gets expanded on the other end.

But as you noted in your discussion, Tcl does not specify when something should or should not be treated as a list, and values which were intended to be scalar can be misinterpreted as lists if the programmer is not careful. To prevent misinterpretations, then, the programmer will be forced to always package return values as a list. So your proposal doesn't simplify anything at all; on the contrary, it complicates the task of programming by forcing programmers to type return [list $result] every time rather than return $result. [list ...] is seven characters. {*} is three. So:
- arguably you are not saving anybody any effort
- while you might remove one wart from the language ({*}), it is a wart that occurs relatively rarely in most people's code, and you are replacing it with a wart that would be orders of magnitude more frequent (return [list $result]) | http://wiki.tcl.tk/10390 | CC-MAIN-2017-30 | refinedweb | 1,773 | 56.08 |
- 17 Nov 2015 10:27 AM
Replies: 13 | Views: 6,253
It looks like promises were added in ExtJS 6.
- 13 Oct 2015 1:03 PM
Mark,
Maybe I should start another post, but it looks like the plugin doesn't catch our "common" code package. I assume this is because we would have to use Sencha CMD "packages" instead of using the...
- 13 Oct 2015 12:41 PM
Ah ok, looks like this was the issue. I didn't realize the app.json was required. When I copied it over to our "src" directory, it recognized all the views correctly. I assumed it would grab the app...
- 13 Oct 2015 12:39 PM
Yeah, we have a weird process where when we want to do a build, we copy over all the src files into the actual ext workspaces for sencha command and then run the sencha app build commands. This is so...
- 13 Oct 2015 9:38 AM
I'm using ExtJS 5.1.1 and Plugin version 6.0.3.430. Also I'm using Webstorm 10.0.4. I added the ExtJS library under "Languages & Frameworks -> Javascript -> Libraries". I can see that most of the...
- 12 Oct 2015 9:16 AM
Maybe I have something configured incorrectly, but should the plugin be able to resolve the "models/stores/views" array config with the correct class? I'm getting the following warning: "Unknown...
- 8 Oct 2015 6:28 AM
Oh, right. That makes sense. Thanks!
- 7 Oct 2015 5:42 AM
Thanks for the example and explanation. I wasn't sure if this was expected behaviour or not. Now I know. =)
- 28 Sep 2015 5:43 AM
Let's say the original store has records: 'A', 'B', 'C', 'D', 'E'. Then I filter the store so that the only apparent records are: 'A' and 'B'. If I create a Chained store off of that parent store,...
- 23 Sep 2015 8:52 AM
I'm using ExtJS 5.1.1. Here is my scenario. I have a global store that is filtered. When I create a Chained Store of it, the filtered records are completely lost. I know that chained stores can be...
- 23 Jul 2015 9:29 AM
Replies: 1 | Views: 194
As the title says, we are using CS6 and Illustrator seems to complain and not work at all when trying to open any of the following files:
extstencil_crisp.ai
extstencil_neptune.ai...
- 13 Apr 2015 11:06 PM
Replies: 5 | Views: 951
I'm on Windows 8.1, Webstorm 10.0.1, trial version and it seems to work fine. :-?
Do you maybe have a reference to the ExtJS library in the Webstorm -> Preferences -> Libraries?
- 1 Apr 2015 8:06 AM
Replies: 7 | Views: 2,597
This bug just hit me as well. :((
- 11 Mar 2015 8:36 AM
Replies: 5 | Views: 720
I don't suppose there is an override that you could provide to fix this for now? I haven't figured out a good way to DIFF multiple files. (I.E. comparing 5.1.0 with the nightly builds).
Thanks!
- 29 May 2014 6:40 AM
Replies: 8 | Views: 2,727
They added tablet support to ExtJS; they didn't combine the two frameworks.
See
- 12 May 2014 6:13 AM
Replies: 7 | Views: 1,506
Thanks! I didn't know that.
- 10 May 2014 6:12 AM
Replies: 7 | Views: 1,506
Unless your listeners are on a controller and suspendEvents is ignored. :((
- 9 Apr 2014 10:54 AM
Replies: 3 | Views: 899
I'm not sure if this is a generic bug but there doesn't seem to be any way to find methods/properties that are new for ExtJS 5. I remember in the 4.2.0 docs, there used to be a yellow star that would...
- 8 Apr 2014 1:35 PM
Thread: Cannot subclass a Model properly by Dev@QLP
Replies: 12 | Views: 4,013
So, I'm confused. I'm running into this error now as well. I figured this would have been fine since they are in seperate namespaces. They don't even subclass each other.
...
- 3 Apr 2014 12:58 PM
Replies: 13 | Views: 6,253
We use a mix of AJAX/JSONP and even postMessage to communicate to different endpoints often as a result of a single event. It would be nice to just have an official Promises API we could use instead...
- 2 Apr 2014 6:29 AM
On the contrary, I feel like they don't reveal dates anymore because they used to but got burned when they missed one and the community went insane demanding for it to be released. :-?
- 7 Mar 2014 7:03 AM
ViewControllers + Data Binding! :D:D:D Any news on Promises? They are somewhat already in Touch (Ext.Promise).
- 4 Mar 2014 6:51 AM
This is exactly why Sencha doesn't like to give estimates. :((
- 17 Jan 2014 9:10 AM
+1 for fixing model associations! They only seem to work in very limited real world scenarios. :((
- 20 Nov 2013 3:16 PM
Replies: 1 | Views: 919
I recently got a chance to use Phaser 3 and TypeScript to build Root Maker with my Ludum Dare team. It worked out great. Phaser is one of the best frameworks around for HTML5 game development, and it’s definitely worth checking out if you haven’t.
I wanted to write this tutorial to help others get their Phaser 3 and TypeScript projects up and running as quickly as possible. If you’d rather just see the code, you can find a link to the repository at the end of the post.
1. Setting Up your Environment
IDE
Phaser is compatible with any editor or IDE. Visual Studio Code tends to be one of the better options because of its built-in TypeScript support, but feel free to pick your preferred environment.
Node
For this tutorial, we’ll be using Node.js with Yarn as a package manager, but this method will also work with npm. We’ll need to use Node since we’ll be developing in TypeScript, and using Yarn will help us manage and install all of the dependencies we’ll need for our game.
Setup
First, you’ll want to create a new directory for your game. Once you have a new folder, type yarn init to go through the process of creating a package.json file. This will contain all of the information about your project and its dependencies. You can answer the init questions however you want; you can edit all of your answers later.
To get all of these, run these two commands:
yarn add --dev copy-webpack-plugin ts-loader typescript webpack webpack-cli webpack-dev-server
yarn add phaser
The first command will install all of the modules we need to run and test our code locally. It will also bundle them into files we can eventually host online for people to play. All of those modules are added to our development dependencies. One important note is that we’re using webpack-dev-server to run our local server.
The second command installs Phaser as a dependency, and it’s one that we’ll need outside of just a development environment. The Phaser repository, as of version 3.17, now includes the type definitions by default. Because of that, installing Phaser via Yarn means we’ll automatically have types for Phaser.
Configuring webpack
We’ll need to tell webpack how it should compile our game. We added a few dependencies to help do this already, like the ts-loader plugin and the copy-webpack-plugin. However, webpack also requires a webpack.config.js file to understand how to compile our project.
webpack config files are almost a separate language unto themselves, and they can be pretty confusing for someone who hasn’t worked with them before. Rather than go through each line of what we need, copy the contents of this file from the starter kit I wrote into a file called webpack.config.js in your root directory.
As a brief overview, the first part of the file (from the entry to the output property) is just telling webpack where our project starts, where the compiled version should be output, and how to resolve TypeScript files. The latter part, under plugins, defines some of the webpack additions it needs in order to work with Phaser.
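If you want a sense of what you are copying before you paste it, the file has roughly this shape. This is a sketch rather than the starter kit's exact contents; plugin options in particular vary between webpack and copy-webpack-plugin versions:

```javascript
const path = require('path');
const CopyWebpackPlugin = require('copy-webpack-plugin');

module.exports = {
  // Where the project starts (the "app" entry name feeds the output filename)
  entry: { app: './src/main.ts' },

  // How TypeScript files are resolved and compiled
  resolve: { extensions: ['.ts', '.js'] },
  module: {
    rules: [{ test: /\.ts$/, loader: 'ts-loader', exclude: /node_modules/ }],
  },

  // Where the compiled version should be output: dist/app.bundle.js
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].bundle.js',
  },

  // Extras Phaser needs, e.g. copying index.html next to the bundles
  // (the "patterns" shape is the copy-webpack-plugin v6+ API)
  plugins: [
    new CopyWebpackPlugin({ patterns: [{ from: 'index.html' }] }),
  ],
};
```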
Configuring TypeScript
Similar to webpack, TypeScript also needs a tsconfig.json file to help the TypeScript compiler understand how to handle your project. Create a new file in your root directory with that file name, and type this:
{
  "compilerOptions": {
    "target": "es5",
    "sourceMap": true
  },
  "include": [
    "**/*.ts"
  ]
}
This tells TypeScript which files it should try to compile and which version of JavaScript to shoot for.
Adding a Script to Run our Game
At this point, our project infrastructure should be pretty much set up. All we need is a way to actually run our game. Primarily, we need to tell webpack when to compile our code and host it via a local server.
To do this, add this code to your package.json file before the dependencies section:
"scripts": {
  "build": "webpack",
  "dev": "webpack-dev-server"
},
Now all we have to do is type yarn dev, and webpack will automatically compile our code and host a server for us. You can even keep this running while you work; it will automatically rebuild whenever it detects a code change.
2. Creating Your Game
Now that our project infrastructure is all good to go, we can start actually coding. It’s definitely worth it to spend some time combing through Phaser’s documentation and examples, but I’ll give a brief overview of how Phaser works here.
At a high level, every game in Phaser has a single Game object that contains information about the game and how it should run. A Game object has a list of Scene objects that make up a game.
You can think of a scene as the thing on your screen at any given time, whether that be the main menu, a certain level, or a “Game Over” message. However, scenes are versatile. Not only can small things be their own scene, but you can have multiple scenes visible at the same time (we won’t cover that in this tutorial, though).
Creating an HTML Container

Let’s start by creating an HTML file where our game can live. Create a new file called index.html, and add this code to it:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Sample Project</title>
  <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
  <meta name="HandheldFriendly" content="True">
  <meta name="MobileOptimized" content="320">
  <style>
    html, body, canvas {
      margin: 0 !important;
      padding: 0 !important;
      overflow: hidden !important;
    }
  </style>
</head>
<body>
  <div id="content"></div>
  <script src="vendors.app.bundle.js"></script>
  <script src="app.bundle.js"></script>
</body>
</html>
This code creates a container div where our game can live and also adds some styling information to help format our game.
You’ll notice there are two script tags. If you go back to our webpack.config.js file, you’ll see that there is an optimization section at the bottom where we tell webpack to build our node_modules directory separate from the code that we actually write.
We do this for performance reasons. Whenever we change our code, webpack will recognize that it doesn’t have to re-bundle all of our dependencies. This becomes especially useful as you add more dependencies to a project later on.
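That optimization section is webpack's splitChunks option. The starter kit's exact settings may differ, but the idea looks something like this: everything imported from node_modules is routed into the separate vendors bundle that index.html loads first:

```javascript
// Fragment of webpack.config.js (a sketch, not the starter kit verbatim).
// With output.filename set to '[name].bundle.js', the cache group below
// produces vendors.app.bundle.js alongside the regular app.bundle.js.
module.exports = {
  // ...entry, module, output, plugins as before...
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/, // anything pulled in from node_modules...
          name: 'vendors.app',            // ...lands in the vendors.app chunk
          chunks: 'all',
        },
      },
    },
  },
};
```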
Creating our Game Object
Now it’s time to actually write some code. Create a new directory in your project called src, and create a file in that directory called main.ts. This is where we’ll define our Game object.
Add this code to the file:
import * as Phaser from 'phaser';

const gameConfig: Phaser.Types.Core.GameConfig = {
  title: 'Sample',
  type: Phaser.AUTO,
  scale: {
    width: window.innerWidth,
    height: window.innerHeight,
  },
  physics: {
    default: 'arcade',
    arcade: {
      debug: true,
    },
  },
  parent: 'game',
  backgroundColor: '#000000',
};

export const game = new Phaser.Game(gameConfig);
In this code, we’re creating a configuration object, and then passing that in to the constructor for Phaser.Game (the aforementioned Game object). This creates a game that’s the same size as the window where you run it. If you run your game now by typing yarn dev into your terminal and then going to localhost:8080 in your browser, you should see a black screen. While it isn’t very appealing, it does mean that your game is running!
3. Your First Scene
A Game object may be the thing that keeps track of everything going on with your game, but individual Scene objects are what make your game actually feel like a game.

So, we need to create a scene. In your main.ts file, add this code above your gameConfig (but below where you import Phaser):
const sceneConfig: Phaser.Types.Scenes.SettingsConfig = {
  active: false,
  visible: false,
  key: 'Game',
};

export class GameScene extends Phaser.Scene {
  private square: Phaser.GameObjects.Rectangle & { body: Phaser.Physics.Arcade.Body };

  constructor() {
    super(sceneConfig);
  }

  public create() {
    this.square = this.add.rectangle(400, 400, 100, 100, 0xFFFFFF) as any;
    this.physics.add.existing(this.square);
  }

  public update() {
    // TODO
  }
}
Once we’ve defined a new scene, we need to tell our Game object about it. A Game object has a property called scene that can take in either a single scene or a list of scenes. For now, we only have one scene, so add this line inside of your gameConfig object:
// ...inside gameConfig:
scene: GameScene,
// ...
All of a sudden, our game is a bit more exciting. If you run this now, you should see a white square appear on your screen. In the code above, we’re defining a scene, then telling it to create a white square at 400, 400 with a width and height of 100 pixels. The create method is called automatically by scenes whenever they are started, and it is the best place to create any game objects the scene needs.
Another important method is the update method, which gets called each tick and should be used to update the state of the scene or its game objects. We’ll get to that method in the next section.
You’ll also notice in the code above that we create a field for the square on our GameScene. We can reference this later when we add input controls. In Phaser, you can add physics-enabled game objects by calling this.physics.add rather than this.add. Unfortunately, at the time of writing this tutorial, there is no built-in physics rectangle factory, so we first have to create the rectangle and then add a physics body after by calling this.physics.add.existing on the square.
4. Adding Movement
We have a game, a scene, and a square. All that we need now is a way for the player to provide input to the game. To do this, we’re going to change the update method of our GameScene class. Replace the // TODO with the following code:
const cursorKeys = this.input.keyboard.createCursorKeys();

if (cursorKeys.up.isDown) {
  this.square.body.setVelocityY(-500);
} else if (cursorKeys.down.isDown) {
  this.square.body.setVelocityY(500);
} else {
  this.square.body.setVelocityY(0);
}

if (cursorKeys.right.isDown) {
  this.square.body.setVelocityX(500);
} else if (cursorKeys.left.isDown) {
  this.square.body.setVelocityX(-500);
} else {
  this.square.body.setVelocityX(0);
}
If you run your game now, the square should move in different directions as you press down on the arrow keys. This code uses Phaser’s built-in Arcade physics system to set the velocity of our square. We’re missing some things, like checking to make sure our square stays within our screen boundaries or adding other objects to collide with, but this simple code should get you started on implementing the actual mechanics of your game.
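As a first pass at the boundary checking mentioned above, you could clamp the square's position at the end of update. The helper below is plain arithmetic rather than a Phaser API (the name clampToBounds is made up; Arcade bodies also offer a built-in setCollideWorldBounds(true), which is usually the better option):

```typescript
// Keep the center of a square of the given size fully inside [0, max].
// Plain math helper -- not part of Phaser's API.
export function clampToBounds(pos: number, size: number, max: number): number {
  const half = size / 2;
  return Math.min(Math.max(pos, half), max - half);
}

// Possible use at the end of GameScene.update(), assuming the 100px square
// and the window-sized scale config from earlier:
//   this.square.x = clampToBounds(this.square.x, 100, window.innerWidth);
//   this.square.y = clampToBounds(this.square.y, 100, window.innerHeight);
```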
Conclusion
This tutorial was meant to be a brief overview of how to start developing in Phaser 3 with TypeScript. You can go to the GitHub repository for the starter kit I wrote to download the source code for this tutorial. It contains everything we covered, as well as some other useful tips about switching between scenes and menu buttons. Additionally, you can use it as a guide for how you might want to organize your project as you start adding more files.
I hope this tutorial was helpful and that you’ll feel like a Phaser pro in no time. Be on the lookout for more tutorials coming soon about more advanced topics with Phaser!
1 Comment
I just wanted to say that this helped me a lot. I’ve been wanting to set up Phaser with TypeScript for a long time, but not found a reliable way of doing it that I felt comfortable with. This is great! Thank you! | https://spin.atomicobject.com/2019/07/13/phaser-3-typescript-tutorial/ | CC-MAIN-2019-51 | refinedweb | 2,068 | 64.71 |
March 2, 2010
This article was contributed by Nathan Willis.
Color management, broadly speaking, is the automatic transformation of image colors so as to provide a uniformly accurate representation across devices. This includes output-only devices such as televisions and printers, as well as the CRT and LCD displays on which both editing and final output are viewed. The first problem is that every device is capable of generating a different spectrum of colors — different hues, different ranges of white-to-black values, and different degrees of saturation. Collectively, the color capabilities of a device are its gamut, which can be represented by a three-dimensional volume in one of several mathematical color models (or "color spaces").
The second problem is that digital files store the color of each pixel as a numeric triple that may or may not represent coordinates in some specified color space. If the color space to which the file's values refer is known, mapping each triple from its stored value into the gamut of the output device is a simple transformation, and the user can visually examine the full range of pixel data. Without that transformation, multiple colors outside the display device's gamut get mapped to its boundaries, causing artifacts and loss of detail, and the entire image can be rendered too dark or too light, misrepresenting the scene.
Although it is clear that graphics professionals need color managed displays and printers, Cruz said, the explosion of user-generated digital content in recent years makes it a problem for everyone.
Home users want to be able to edit video and share it online, knowing
that what appears appropriately bright on-screen will not look washed-out
or too dark on DVD or YouTube. They also want to drop off family photos at
the corner drugstore kiosk and not be disappointed by a red or green cast
to the skin-tones. Photo kiosks may be inexpensive per-print, he said, but
online vendors like Apple and Google's Picasa are increasingly offering
more elaborate services, such as hardbound books, with correspondingly
higher prices. Consumers might shrug off paying a few cents for a bad-looking
4x6 print, but getting burned on an expensive book is considerably more
aggravating.
Just as importantly, Cruz added, business users need to care about the professionalism of their presentations, both for aesthetic reasons, and because a mis-colored partner logo could accidentally sour the opinion of the executive at the table who recently spent months determining the "perfect shade of puce" to represent the company image. Finally, he said, anyone who sells products online should know that the number one reason for returned consumer purchases is mismatched colors — if the product shots on the web site make the red shirts look orange, the seller is financially at risk for the cost of returns.
In addition to these use cases, Cruz explained that users need color management support in their desktop applications to cope with the variety of different display devices they use over the course of a day. Multiple computers are commonplace, from desktops to laptops to netbooks to hand-held devices, and each have different display characteristics. Laptop screens have noticeably smaller gamuts than desktop LCDs, which are in turn smaller than CRTs, and different also from the displays of consumer HDTVs. Mobile devices, based on different graphics hardware, may not even support full 8-bit-per-channel color. Presenting a consistent display across these platforms cannot be left to chance.
Fortunately for Linux users, Cruz continued, color management support in
Linux is in good shape, although more still needs to be done. Most creative graphics applications support color management already, thanks in large part to the collaborative efforts of the Create project at Freedesktop.org. These include Gimp, Krita, Inkscape, Scribus, Digikam, F-Spot, and Rawstudio, as well as several image viewing utilities.
Enabling users to acquire good ICC profiles (tables
measuring the device's attributes against points in a known color space,
thus allowing for interpolation of color data) or to build their own is
one of the key areas of current color work. Projects like Argyll and Oyranos handle tasks such as precisely
measuring monitor color output through hardware colorimeters, creating
profiles for printers, scanners, and cameras through color targets, and
linking profiles for advanced usage.
A simpler solution aimed at the home user is GNOME Color Manager (GCM); unlike the previous two examples GCM does not attempt to be a complete ICC profile management tool, but focuses on easily enabling users to correctly assign a profile to their monitor. Default profiles are usually available from the manufacturer, either through the web or on the "driver" CDs in the box, and for normal usage they are an excellent first step. Developers from these and several related projects collaborate on common goals in the OpenICC project.
Developers interested in adding color management to their applications should start with LittleCMS, Cruz advised, noting that he personally added Inkscape's color management support in less than one week's time with LittleCMS. LittleCMS is a library that handles the mathematical transformations between color spaces automatically, quickly, and with very little overhead.
Currently, however, one drawback of the Linux color management scene is
that most color-aware applications work in isolation from one another,
requiring the user to choose display, output, and working ICC profiles in
each program — whether through LittleCMS or with in-house routines.
Ongoing work to bring color management to a wider range of programs
includes adding support to the Cairo vector graphics rendering
library, attaching display profiles to
X displays, and building color management into GTK+ itself. The
latter, in particular, would enable
"dumb" applications to automatically be rendered in color-corrected form on
the monitor, while still allowing "smart" applications to manage their own
color. This is important because graphics and video editing applications
need to be able to switch between different profiles for tasks like
soft-proofing (simulating a printer's output on-screen by rendering with a
different ICC profile) or testing for out-of-gamut color.
Finally, Cruz showed several example workflows for print and web graphics, first illustrating potential problem points when working in a non-color-managed environment, then explaining how using a color-aware setup would trap and eliminate the problem.
For web graphics, the example scenario was a simple photo color-correction. Over-correcting the color balance on an improperly-managed monitor easily leads to site visitors seeing a wildly distorted image. In addition, Windows and Macs use different system gamuts, which leads to photos looking either too bright on Macs or too dark on Windows. With a managed workflow, users should target the sRGB color space, previewing the results with Windows, Mac OS X 10.4 and Mac OS X 10.5 profiles (due to changes introduced by Apple in 10.5), as well as mobile devices under different conditions. Because most web site audiences do not have color-corrected displays, he said, not everything is under the designer's control — but if the end user's monitor is broken and the artwork is broken, the problems multiply.
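The too-bright/too-dark drift can be sketched numerically. As a rough illustration (the 1.8 and 2.2 figures are the commonly cited historical system gamma defaults for pre-10.5 Mac OS X and Windows respectively, not values taken from Cruz's talk), the same untransformed mid-gray pixel decodes to two noticeably different display intensities:

```shell
# The same 8-bit pixel value (128) decoded at two assumed system gammas.
# 1.8 approximates pre-10.5 Mac OS X; 2.2 approximates Windows/sRGB.
awk 'BEGIN {
  v = 128 / 255
  printf "gamma 1.8 -> %.3f\n", v ^ 1.8
  printf "gamma 2.2 -> %.3f\n", v ^ 2.2
}'
```

The gamma-2.2 display renders the same pixel value roughly a quarter darker, which is why unmanaged images drift between the two platforms.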
For print graphics, the workflow is more complicated, starting with the fact that — despite the popularity of the term — there is no single, standard "CMYK" color space. All process-color spaces are device-dependent, including common four-ink CMYK printers, CcMmYK photo printers, Hexachrome, and others; there is not even an analogous color space to the "Web safe" sRGB standard. Process color's small gamut makes it very easy to produce poor output when not using color management to edit and proof.
Fortunately, Inkscape and other SVG-capable editing tools can take advantage of the fact that SVG allows different color profiles to be attached to different objects in a drawing. A CMYK profile for the target printer can be used for most of the drawing, with a separate spot-color profile attached to specific objects that need careful attention, and corrective profiles for embedded RGB elements like raster graphics. A test run is always the best idea, Cruz said, but having proofing profiles available on the system saves both money and time.
Color management on Linux has come a long way in the last four years. The application support in the basic graphics suite is good, and for professionals tools like Argyll and Oyranos open the door to complete solutions; as Cruz observed in his talk, the colorimeter hardware that used to cost thousands of dollars and lack support on free operating systems is now cheap and well-supported.
Still, the average desktop Linux distribution does not install in a color-managed state, which is unfortunate. Proper support for transforming pixels from one color space to another is straightforward math that, much like window translucency, smooth widget animation, and audio mixing, should happen without requiring the user to stop and think about it. It is promising that headway is being made on that front as well, with GCM and GTK+; perhaps in a few release cycles Linux will have full color management out-of-the-box.
Page editor: Jonathan Corbet
Security
Fedora already has a number of variations—called "spins"—to
support different use cases: alternative desktops (KDE, LXDE, XFCE),
gaming, hardware design, education, etc. Starting with Fedora 13, those
will be joined by the Fedora Security Lab (FSL),
which is meant to be a "safe test-environment for working on
security-auditing, forensics and penetration-testing, coupled with all the
Fedora-Security features and tools". The target audience is much
the same as that of the BackTrack
security distribution—security professionals along with those who
want to learn about various security techniques.
FSL is based on the LXDE desktop environment because of its small resource
footprint, which will leave more memory available for running various
security and forensic tools. The LXDE menu has been customized to present
a categorized list of tools and applications available to a user. The
distribution comes with a fairly extensive list
of packages, as well as a wish list of
additional packages that would be added to FSL once they are packaged for
Fedora.
The release itself will be an ISO image that can be used as a Live CD,
which can then be installed on the hard disk. A more likely scenario is
creating a bootable system on a USB stick using Fedora's liveusb-creator. That
will allow the user to reserve some extra space on the USB stick for
persistent storage. That storage can be used for installing additional
packages or storing the output or configuration of various utilities so
that they are
available after each boot.
Fedora's Joerg Simon is leading the FSL effort, which got final
approval from the Fedora advisory board in mid-February. FSL provides a
number of advantages for Fedora and its users—many of which are
listed on the FSL page—but there is one item in particular that Simon
seems to be excited about: using it as a platform to teach about security.
Simon has slides
[PDF] from a presentation he gave that proposed FSL as the basis for
teaching classes based on the Open
Source Security Testing Methodology Manual (OSSTMM). Simon is involved
in both projects and sees benefits to both from a collaboration. FSL would
provide a stable platform that teachers and students could rely upon and
Fedora would benefit from the wider exposure those classes would bring.
In addition to the various utilities and tools that are packaged with
the spin, FSL also showcases the security
features that are part of all Fedora spins. Things like SELinux,
default firewall rules, PolicyKit, and various protections like stack
smashing protection, buffer overflow protection, and so forth, are all
available for students and others to examine and play with.
Having a larger parent organization like Fedora—and to some extent
Red Hat—may help FSL achieve a higher-profile than BackTrack or other
security distributions have in the past. One can imagine that FSL will be
the tool of choice for recovery of
broken systems in the Fedora and RHEL worlds, as users will already be
familiar with the underlying distribution. Working with other
organizations that are targeting security education is another thing that
may very well help foster FSL as a tool of choice for security
professionals.
While FSL is somewhat late to this particular party, and still has a number
of important tools (Metasploit, OpenVAS, SiLK, etc.) on its wish list,
it does have the infrastructure and user community of Fedora behind it.
There is ample room for collaboration with BackTrack and other
security-focused distributions—one hopes that can come about. By
sharing information, configuration, tools, and techniques, in much the same
way that free software development is done, better security distributions
will result. That can only help bring about increased security for all
free software.
Brief items
New vulnerabilities
Multiple security flaws, which might lead to bypass of intended
security restrictions and denial of service, have been reported
and corrected in the latest v2.5.12 version of ModSecurity.
A vulnerability has been fixed in the kernel which could be exploited by
malicious people to crash the kernel due to a divide-by-zero in
azx_position_ok().
Using mp3blaster-3.2.5 (latest version) to play MP3 audio, the reporter
was able to crash the kernel by stopping and restarting playback using
the "5" key repeatedly. This happens as a normal user, not only as root.
puppet may create several predictable files in /tmp, e.g.
/tmp/daemonout
/tmp/puppetdoc.txt
/tmp/puppetdoc.tex
Jeff Layton discovered that missing input sanitising in mount.cifs
allows denial of service by corrupting /etc/mtab.
Page editor: Jake Edge
Kernel development
There have been no stable updates released over the last week, and
none are currently in the review process.
The message seems clear: new features aimed at the mainline should not be
configured in by default.
To that end, Eric Biederman has proposed the creation of a pair
of new system calls. The first is the rather tersely named
nsfd():
int nsfd(pid_t pid, unsigned long nstype);
This system call will find the namespace of the given nstype which
is in effect for the process identified by pid; the return value
will be a file descriptor which identifies - and holds a reference to -
that namespace. The calling process must be able to use ptrace()
on pid for the call to succeed; in the current patch, only network
namespaces are supported.
Simply holding the file descriptor open will cause the target namespace to
continue to exist, even if all processes within it exit. The namespace can
be made more visible by creating a bind mount on top of it with a command
like:
mount --bind /proc/self/fd/N /somewhere
The other piece of the puzzle is setns():
int setns(unsigned long nstype, int fd);
This system call will make the namespace indicated by fd into the
current namespace for the calling process. This solves the problem of
being able to enter another container's namespace without the somewhat
strange semantics of the once-proposed hijack() system call.
These new system calls are in an early, proof-of-concept stage, so they are
likely to evolve considerably between now and the targeted 2.6.35 merge.
Like most products, Android-based handsets go through a series of code
names before they end up in the stores. Daniel Walker cited an example: an HTC handset which
was named "Passion" by the manufacturer. When it got to Google for the
Android work, they concluded that "Mahimahi" would be a good name for it.
It's only when this device got to the final stages that it gained the
"Nexus One" name. Apple's "dirty infringer" label came even later than that.
Daniel asked: which name should be used when bringing this code into the
mainline kernel? The Google developers who wrote the code used the
"mahimahi" name, so the source tree is full of files with names like
board-mahimahi-audio.c; they sit alongside files named after
trout, halibut, and swordfish. Daniel feels these names might be
confusing; for this reason, board-trout.c became
board-dream.c when it moved into the mainline. After all, very
few G1/ADP1 users think that they are carrying trout in their pockets.
The problem, of course, is that this kind of renaming only makes life
harder for people who are trying to move code between the mainline and
Google's trees. Given the amount of impedance which already exists on this
path, it doesn't seem like making things harder is called for. ARM
maintainer Russell King came to that
conclusion, decreeing:
Let's keep the current naming and arrange for informative comments
in files about the other names, and use the common name in the
Kconfig - that way it's obvious from the kernel configuration point
of view what is needed to be selected for a given platform, and it
avoids the problem of having effectively two code bases.
That would appear to close the discussion; the board-level Android code can
keep its fishy names. Of course, that doesn't help if the code doesn't
head toward the mainline anyway. The good news is that people have not
given up, and work is being done to help make that happen. With luck,
installing a mainline kernel on a swordfish will eventually be a
straightforward task for anybody.
Kernel development news
Changes visible to kernel developers include:
void list_rotate_left(struct list_head *head);
The merge window is normally open for two weeks, but Linus has suggested that it might be a
little shorter this time around. So, by the time next week's edition comes
out, chances are that the window will be closed and the feature set for
2.6.34 will be complete. Tune in then for a summary of the second half of
this merge window.
March 3, 2010
This article was contributed by Mel Gorman
In this chapter, the setup and the administration of huge pages within the
system are addressed.
Part 2 discussed the different interfaces between user and kernel space
such as hugetlbfs and shared memory. For an application to use these
interfaces, though, the system must first be properly configured.
Use of hugetlbfs requires only that the filesystem must be mounted;
shared memory needs additional
tuning and huge pages must also be allocated. Huge pages can be statically
allocated as part of a pool early in the lifetime of the system or the pool
can be allowed to grow dynamically as required. Libhugetlbfs provides a
hugeadm utility that removes much of the tedium involved in these tasks.
Since kernel 2.6.27, Linux has supported more than one huge page
size if the underlying hardware does. There will be one directory per page
size supported in /sys/kernel/mm/hugepages and the "default" huge
page size will be stored in the Hugepagesize field in
/proc/meminfo.
The default huge page size can be important. While hugetlbfs can specify the
page size at mount time, the same option is not available for shared memory or
MAP_HUGETLB. This can be important when using 1G pages on AMD or 16G pages on
Power 5+ and later. The default huge page size can be set either with the last
hugepagesz= option on the kernel command line (see below) or
explicitly with default_hugepagesz=.
Libhugetlbfs provides two means of identifying the huge
page sizes. The first is using the pagesize utility with
the -H switch printing the available huge page sizes and
-a showing all page sizes. The programming equivalents are the
gethugepagesizes() and getpagesizes() calls.
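For example, on an x86 system like the one used later in this article (4 MiB huge pages), a session might look roughly like the following; the exact values, and whether more than one huge page size appears, depend entirely on the hardware:

```
$ pagesize -a
4096
4194304
$ pagesize -H
4194304
```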
Due to the inability to swap huge pages, none are allocated by default,
so a pool must be configured with either a static or a dynamic size. The
static size is the number of huge pages that are pre-allocated and guaranteed
to be available for use by applications. Where it is known
in advance how many huge pages are required, the static size should be set.
The size of the static pool may be set in a number of ways. First, it may be
set at boot-time using the hugepages= kernel boot parameter. If
there are multiple huge page sizes, the hugepagesz= parameter
must be used and interleaved with hugepages= as described in
Documentation/kernel-parameters. For example, Power 5+ and later
support multiple page sizes including 64K and 16M; both could be configured
with:
hugepagesz=64k hugepages=128 hugepagesz=16M hugepages=4
Second, the default huge page pool size can be set with the
vm.nr_hugepages sysctl, which, again, tunes the default huge page
pool. Third, it may be set via sysfs by finding the appropriate
nr_hugepages virtual file below /sys/kernel/mm/hugepages.
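A hypothetical session setting the default pool to 128 pages through the last two interfaces (both require root; the hugepages-2048kB directory name assumes a 2 MiB default huge page size and will differ on other hardware):

```
# sysctl -w vm.nr_hugepages=128
# echo 128 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```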
Knowing the exact huge page requirements in advance may not be possible.
For example, the huge page requirements may be expected to vary
throughout the lifetime of the system. In this case, the maximum number
of additional huge pages that should be allocated is specified with the
vm.nr_overcommit_hugepages. When a huge page pool does not have
sufficient pages to satisfy a request for huge pages, an attempt to allocate up to
nr_overcommit_hugepages is made. If that allocation fails, mmap()
will fail as well, avoiding the page fault failures described in
Huge Page Fault Behaviour in part 1.
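Continuing the sketch above, allowing the pool to grow on demand is a single sysctl; the value of 64 additional pages here is an assumption, and root is required:

```
# sysctl -w vm.nr_overcommit_hugepages=64
```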
It is easiest to tune the pools with hugeadm. The
--pool-pages-min argument specifies the minimum number of huge
pages that are guaranteed to be available. The --pool-pages-max
argument specifies the maximum number of huge pages that will exist in the
system, whether statically or dynamically allocated. The page size can be
specified or it can be simply DEFAULT. The amount to allocate
can be specified as either a number of huge pages or a size requirement.
In the following example, run on an x86 machine, the 4M huge page pool is being
tuned. As 4M also happens to be the default huge page size, it could also
have been specified as DEFAULT:32M and DEFAULT:64M
respectively.
$ hugeadm --pool-pages-min 4M:32M
$ hugeadm --pool-pages-max 4M:64M
$ hugeadm --pool-list
Size Minimum Current Maximum Default
4194304 8 8 16 *
To confirm the huge page pools are tuned to the satisfaction of requirements,
hugeadm --pool-list will report on the minimum, maximum and
current usage of huge pages of each size supported by the system.
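The Minimum and Maximum columns above follow directly from the size arguments passed to hugeadm: with 4 MiB pages, 32M and 64M translate to 8 and 16 pages. A quick shell check of that arithmetic:

```shell
# Reproduce the pool-list numbers from the hugeadm example above:
# 4 MiB huge pages, a 32 MiB minimum pool, and a 64 MiB maximum pool.
hugepage_bytes=$((4 * 1024 * 1024))              # 4194304, the Size column
min_pages=$(( 32 * 1024 * 1024 / hugepage_bytes ))
max_pages=$(( 64 * 1024 * 1024 / hugepage_bytes ))
echo "min=$min_pages max=$max_pages"             # min=8 max=16
```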
To access the special filesystem described in HugeTLBFS in part 2, it
must first be mounted. What may be less obvious is that this is required to
benefit from the use of the allocation API, or to automatically back
segments with huge pages (as also described in part 2). The default huge page
size is used for the mount if the pagesize= is not used. The
following mounts two filesystem instances with different page sizes as supported
on Power 5+.
$ mount -t hugetlbfs none /mnt/hugetlbfs-default
$ mount -t hugetlbfs none /mnt/hugetlbfs-64k -o pagesize=64K
Ordinarily it would be the responsibility of the administrator to set the
permissions on this filesystem appropriately. hugeadm provides
a range of different options for creating mount points with different permissions.
These options are largely self-explanatory.
It is up to the discretion of the administrator whether to call
hugeadm from a system initialization script or to create
appropriate fstab entries. If it is unclear what mount points
already exist, use --list-all-mounts to list all current
hugetlbfs mounts and the options used.
A little-used feature of hugetlbfs is quota support which
limits the number of huge pages that a filesystem instance can use even if
more huge pages are available in the system. The expected use case would
be to limit the number of huge pages available to a user or group. While
it is not currently supported by hugeadm, the quota can be set
with the size= option at mount time.
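A hypothetical mount limiting one filesystem instance to 64 MiB worth of huge pages (the mount point name is an assumption):

```
$ mount -t hugetlbfs none /mnt/hugetlbfs-quota -o size=64M
```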
There are two tunables that are relevant to the use of huge pages with shared
memory. The first is the sysctl kernel.shmmax kernel parameter
configured permanently in /etc/sysctl.conf or temporarily in
/proc/sys/kernel/shmmax. The second is the sysctl
vm.hugetlb_shm_group which stores which group ID (GID)
is allowed to create shared memory segments. For example, if a JVM were to
use shared memory with huge pages and ran as the user JVM with UID 1500 and GID
3000, then the value of this tunable should be 3000.
Again, hugeadm is able to tune both of these parameters
with the switches --set-recommended-shmmax and
--set-shm-group. As the recommended value is calculated
based on the size of the static and dynamic huge page pools, this should
be called after the pools have been configured.
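The recommended shmmax is essentially the byte size of the largest possible pool, so that a single shared memory segment can span it. The exact value hugeadm computes may differ; this sketch simply sizes shmmax to cover an assumed maximum pool of 512 pages at a 2 MiB default huge page size:

```shell
# Assumed values: 2 MiB default huge pages, 512-page maximum pool.
hugepage_kb=2048
max_pool_pages=512
shmmax=$((max_pool_pages * hugepage_kb * 1024))
echo "kernel.shmmax = $shmmax"    # 1073741824 bytes, i.e. 1 GiB
# Applying it (root required):
#   sysctl -w kernel.shmmax=$shmmax
```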
If the huge page pool is statically allocated at boot-time, then this
section will not be relevant as the huge pages are guaranteed to exist. In
the event the system needs to dynamically allocate huge pages throughout
its lifetime, then external fragmentation may be a problem.
"External fragmentation" in this context refers to the inability of the
system to allocate a huge page even if enough memory is free overall because the
free memory is not physically contiguous. There
are two means by which external fragmentation can be controlled, greatly
increasing the success rate of huge page allocations.
The first means is by tuning vm.min_free_kbytes to a
higher value which helps the kernel's fragmentation-avoidance mechanism.
The exact value depends on the type of system, the number of NUMA nodes
and the huge page size, but hugeadm can calculate and set it
with the --set-recommended-min_free_kbytes switch. If
necessary, the effectiveness of this step can be measured by using the
trace_mm_page_alloc_extfrag tracepoint and ftrace,
although how to do so is beyond the scope of this article.
While the static huge page pool is guaranteed to be available as it has
already been allocated, tuning min_free_kbytes improves the
success rate when dynamically growing the huge page pool beyond its minimum
size. The static pool sets the lower bound but there is no guaranteed upper
bound on the number of huge pages that are available. For
example, an administrator might request a minimum pool of 1G and a maximum
pool of 8G, but fragmentation may mean that the real upper bound is 4G.
If a guaranteed upper bound is required, a memory partition can be created
using either the kernelcore= or movablecore= switch
at boot time. These switches create a Movable zone that can be seen in
/proc/zoneinfo or /proc/buddyinfo. Only pages that
the kernel can migrate or reclaim exist in this zone. By default, huge pages
are not allocated from this zone but it can be enabled by setting either
vm.hugepages_treat_as_movable or using the hugeadm
--enable-zone-movable switch.
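A hypothetical configuration reserving 1 GiB for the kernel's unmovable allocations and letting huge pages be taken from the Movable zone (the size is an assumption; the sysctl requires root):

```
kernelcore=1G                              (on the kernel command line)
# sysctl -w vm.hugepages_treat_as_movable=1
```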
In this chapter, four sets of system tunables were described. These relate
to the allocation of huge pages, use of hugetlbfs filesystems, the use of
shared memory, and simplifying the allocation of huge pages when dynamic pool
sizing is in use. Once the administrator has made a choice, it should be
implemented as part of a system initialization script. In the next chapter,
it will be shown how some common benchmarks can be easily converted to use
huge pages..
Graner manages a "pretty big" group at Canonical, of 25 people
split into two sub-groups, one focused on the kernel itself and the other on
drivers. For each release, the kernel team chooses a "kernel release lead"
(KRL) who is responsible for ensuring that the kernel is ready for the
release and its users. The KRL
rotates among team members with Andy Whitcroft handling Lucid Lynx (10.04)
and Leann Ogasawara slated as KRL for the following ("M" or 10.10) release.
The six-month development cycle is "very challenging", Graner
said. The team needs to be very careful about which drivers—in-tree,
out-of-tree, and staging—are enabled. The team regularly takes some
drivers from the staging tree, and fixes them up a bit, before enabling
them in the Ubuntu tree so that users "get better hardware
coverage".
Once the kernel for a release has been frozen, a new branch is created for
the next release. For example, the Lucid kernel will be frozen in a few
weeks, at which point a branch will be made for the 10.10 release. That
branch will get the latest "bleeding edge" kernel from Linus Torvalds's tree
(presumably 2.6.34-rc1), and the team will start putting the additional
patches onto that branch.
The patches that are rolled into the tree include things from linux-next
(e.g. suspend/resume fixes), any patches that Debian has added to its
kernel, then the Ubuntu-specific patchset. Any of those that have been
merged into the mainline can be dropped from the list, but it is a
"very time-consuming effort" to go through the git tree to
figure all of that out. With each new tag from Torvalds's tree, they do a
git rebase on their tree—as it is not a shared development
tree—and "see what conflicts, and deal with those".
The focus and direction for the Ubuntu kernel, like all Ubuntu
features, comes out of the Ubuntu Developer Summit (UDS), which is held
shortly after each release to set goals and make plans for the following
release. Before UDS, the kernel team selects some broad topics and creates
blueprints on the wiki to describe those topics. In the past, they have
focused on things like suspend/resume, WiFi networking, and audio; "a
big one going forward is power management", he said.
The specifications for these features are "broad-brush
high-level" descriptions (e.g. John has a laptop and wants to get 10
hours of battery life). The descriptions are fleshed out into various use
cases, which results in a plan of action. All of the discussion,
decisions, plans, and so on are captured on the UDS wiki
One of the longer kernel sessions at UDS looks at each kernel configuration
option (i.e. the kernel .config file) to determine which should be
enabled. New options are looked at closely to decide whether that feature
is needed, but the existing choices are scrutinized as well.
In addition, Graner said that the team looks at the patches and drivers
that were added to the last kernel to see which of those should be
continued in the next release. He pointed to Aufs as a problematic feature
because it always breaks with each new kernel release and can take up to
three weeks to get it working. They have talked about dropping it, because
Torvalds won't merge it into the mainline, but the live CDs need it.
The kernel team has to balance the Ubuntu community needs as well as
Canonical's business needs, for things like Ubuntu One for example, and
come up with a set of kernel features that will satisfy both. The
discussions about what will get in at UDS can get intense at times; Graner said,
"Lucid was pretty tame, but Karmic was kind of heated".
Lucid will ship with
the 2.6.32 kernel which makes sense for a long-term support (LTS) release.
2.6.32 will be supported as a stable tree release for the next several
years and will be shipped with the next RHEL and SLES. That means it will
get better testing coverage which will lead to a "very stable kernel
for Lucid".
Each stable tree update will be pulled into the Ubuntu kernel tree, but LTS
updates to the kernel will only be pushed out quarterly unless there is a
"high" or "medium" security fix. For new kernel feature development, new
mainline kernel releases and release
candidates are pulled in by the team as well. Graner gave two examples of new
development that is going on in the Ubuntu kernel trees: adding devicetree
support for the ARM architecture, which will reduce the complexity of
supporting multiple ARM kernels, and the AppArmor security module that is
being targeted for the 2.6.34 kernel.
Once the kernel version has been frozen for a release, the management
of that kernel is much more strictly controlled. The only patches that get
applied are those that have a bug associated with them. Stable kernel
patches are "cherry-picked" based on user or security problems. There is a
full-time kernel bug triager that tries to determine if a bug reporter
has included enough information to have any hope of finding the
problem—otherwise it gets dropped. One way to ensure a bug gets
fixed, though, is to "show the
upstream patch that fixes the problem"; if that happens, it will get
pulled into the kernel, Graner said.
There are general freezes for each alpha, beta, and the final release, but
the kernel must already be in the archive by the time of those freezes. Each
time the kernel itself freezes, it "takes almost a full week to build
all of the architectures" that are supported by Ubuntu. There are
more architectures supported by Ubuntu than any other distribution
"that I am aware of", he said. Each build is done in a
virtualized environment with a specific toolchain that can be recreated
whenever an update needs to be built. All of that means the kernel
needs to freeze well in advance of the general release freeze, typically
about a month before.
Once the kernel is ready, it is tested in Montreal in a lab with 500 or
600 machines. The QA team runs the kernels against all that hardware,
which is also a time-consuming process. Previously, the kernels would be
tossed over the wall for users to test, but "now Canonical is trying
to do better" by dedicating more resources to testing and QA.
Managing kernel releases for a distribution is big task, and the details of
that process are not generally very well-known. Graner's talk helped to
change that, which should allow others to become more involved in the
process. Understanding how it all works will help those outside of the
team do a better job of working with the team, which should result in
better kernels for Ubuntu users.
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Architecture-specific
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
Recently, Amar.
New Releases
Full Story (comments: none)
Distribution News
Fedora
Mandriva Linux
Ubuntu family
Other distributions
Distribution Newsletters
Interviews
Page editor: Rebecca Sobol
Development
At recent events, your editor asked many readers what part of the LWN
Weekly Edition would be missed least if it went away. The answers were
surprisingly consistent; it seems that relatively few people plow through
the long lists of software releases which have long appeared on this page.
So that's what is going to go; this week inaugurates a new, thinner
Development Page.
The most important aspects of this page, we hope, will remain. It will
still be led by our original content. We will still watch the stream of
software release announcements as we did before; the difference is that
only a small subset of them will be selected for mention on this page.
Announcements will show up here if they are a major release of an important
package, or if they highlight an application that we think our readers
would be interested in, or if somebody just thinks it's worth posting.
The value of LWN, we believe, has always been in selective judgment and
conciseness, rather than in scooping up and posting everything. We hope
that a more focused Development Page will increase that value. As this
page evolves, we will certainly welcome any comments you may have, either
posted as comments or sent directly to lwn@lwn.net.
This article was contributed by Joe 'Zonker' Brockmeier.
Lots of people have complained that XSane is too complicated for many
users, but little progress has been made towards creating a user-friendly
and stable replacement for the SANE GUI. Until now. Simple Scan is a GTK-based
front-end for SANE primarily
developed by Robert Ancell
and intended to replace XSane. Simple Scan will be landing on desktops in
the upcoming
Ubuntu Lucid (10.04) release, so now's a good time to take a look at the new kid on the scanning block.
Packages for Ubuntu are available via Ancell's PPA,
the most recent version as of this writing was 0.9.5. Source is
available for users on other distributions, and should build on most
current distributions. To test Simple Scan, I scanned in several color photos, a
handful of old black and white photos, line art, and a printed text document. The test system consisted of a dual Xeon 3.20GHz with 8GB of RAM, running Ubuntu 9.10 and using an Epson Perfection 1260 scanner. The scanner is a bit long in the tooth, and certainly not the fastest available, but has served well over the years and works well with Linux.
Simple Scan lives up to its name. The interface is uncluttered and offers only a few options. If no changes are made, Simple Scan will scan in photos at 300 DPI, or text documents at 150DPI. Photos and text are the only presets available. The DPI can be changed via the Preferences dialog. In fact, that's nearly all that can be changed, along with the scan source if more than one scanner is attached to the system. Once preferences are saved, you can choose to scan in a single page, or all pages if you happen to have a scanner with a document feeder. Unfortunately, the Epson is a flatbed scanner and I wasn't able to test the feeder feature.
Users familiar with other scanning applications will probably be used to doing a preview scan, followed by cropping a section of the document to get a full scan. Simple Scan does a one-shot process and simply scans in the entire area. After this, the user can crop the picture if desired. This is much easier if one wants to scan in something that takes up the entire tray, but can cause a scan to take much longer in practice if you're working at a high DPI and only wish to capture a small portion of it. If you're scanning in, say, several old family photos it makes more sense to just scan an entire tray and do the cropping in The GIMP or another application.
Simple Scan's performance leaves a bit to be desired when working at larger resolutions. Scanning a color photo in at 1200DPI nearly brought Simple Scan to its knees. It didn't crash, but the interface became laggy and slow to respond. Resizing the Simple Scan window would take 10 to 20 seconds. Even scanning in some black and white photos at 150DPI caused Simple Scan to become slow to respond.
Simple Scan makes it easy to scan in a document and send it as an email. Once a document is scanned in, just select Email from the File menu and Simple Scan will open a new email with the scan as an attachment. At least that's what will happen if you're using Evolution as the default mailer on GNOME. If you're using Thunderbird or another mailer, this doesn't work so well. Simple Scan will initiate a new email, but without the attachment. When selecting email, Simple Scan will always default to PDF. At the moment there appears to be no way to change this. That might be desirable for forms, but not so much for pictures.
Editing within Simple Scan is limited to cropping and rotation. When saving scans, users are limited to JPEG, PNG, and PDFs. Simple Scan is really a no-frills tool that just does the most basic scanning operations.
Some might wonder why a new application was developed from scratch,
rather than improving GNOME Scan. According to the comments on Ancell's blog following the introduction of Simple Scan, GNOME Scan suffered stability issues and did not work well as a stand-alone scanning application. For those unfamiliar with GNOME Scan, the project has been in the works for some time, and is not only meant to be a standalone scanning application, but also is meant to allow other GNOME applications to acquire images from a scanner.
All of the features for 1.0 are present in the 0.9.5 release of Simple Scan, and what remains are bugfixes and so on. According to the 0.9.0 announcement Ancell is interested in working on color management, OCR, integration with GNOME Scan and integration with photo management applications like F-Spot after the 1.0 release.
Naturally, Simple Scan doesn't hold a candle to XSane's bag of tricks, nor is it meant to. If a user wishes to do color correction, optical character recognition (OCR), scan in slide negatives, or any number of other more complex operations, then XSane is still a better choice. But, if all you need is a fast scan of a form or quick and dirty scan of a color document or photo, then Simple Scan is shaping up to be a good choice.
Newsletters and articles
Announcements
Non-Commercial announcements
Full Story (comments: 8)
Commercial announcements
See also: Elliott's press release about the offer. ." That suggests some rather significant changes should this deal be accepted.
Legal Announcements
Articles of interest
Resources
Calls for Presentations
Upcoming Events
Full Story (comments: 7)
If your event does not appear here, please
tell us about it.
Web sites
Audio and Video programs
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/376059/bigpage | CC-MAIN-2013-20 | refinedweb | 7,001 | 58.42 |
This is the fourteenth tutorial in the series Online C Tutorials for beginners and is about working with strings in C. These tutorials are single page bite size tutorials which can usually be run on either codepad.org or ideone.com.
Strings in C
If you were picking a programming language to work with strings, I wouldn’t start with C. Programming languages like Java, C# and Python and C++ have much more powerful string-handling capabilities. You can do stuff with text using c-Strings; it takes more effort though and can lead to bugs if you aren’t careful.
A c-string is a block of RAM that terminates with a 0 value, denoted as ‘\0’ in C. So if a string has four characters in it, it actually occupies five bytes. Forgetting this can lead to bugs.
char * name="Dave"; // name occupies five bytes Dave +
char * name="Dave"; // name occupies five bytes Dave + \0
So what can be do with strings?
Here’s a list of things. I’ll give examples for each item in the list.
- Assign a string. (Example above)
- Copy one string onto another.
- Append one string on to the end of another.
- Convert numbers to strings.
Copy one string onto another
So we have our string name which has the value Dave in it. Now lets copy it into another string.
Remember, a string is a block of RAM. It can be allocated statically at compile-time, or dynamically at runtime.
Here’s the static version. I’ve allocated a buffer of 20 chars.
#include <stdio.h> #include <string.h> char buffer[20]; char* name = "Dave"; int main(){ strcpy_s(buffer,sizeof(buffer), name); // Win 32 //strncpy(buffer, name, sizeof(buffer)); // Linux printf("Copied string - %s\n", buffer); }
This copies all five characters of name into buffer. Note I’ve used the safe versions (strcpy_s on Windows, strncpy on Linux) of strcpy. You might see older code using strcpy but if the string being copied doesn’t have a terminating 0 then strcpy will keep on copying until it finds a \0. If there isn’t a \0 at the end of the string strcpy could copy several hundred or thousand bytes and will clobber data and possibly code.
Here’s the dynamic version.
#include <stdio.h> #include <string.h> #include <stdlib.h> char* destination; char* name = "Dave"; int len = 50; int main(){ destination = (char*)malloc(len); if (!destination) { printf("Unable to allocate %d bytes",len); return 1; } strcpy_s(destination, len, name); // Win 32 //strncpy(destination, name, len); // Linux printf("Copied string - %s\n", destination); free(destination); }
This is a bit longer because I’ve checked the malloc() call to ensure it did allocate ram. It’s good practice to always do this and bail out if it fails. Otherwise it’s more or less identical. Most of the string functions that copy, append etc use this destination and source pointer parameters. These are always pointers but can also be the name of an array or the address of an array element.
For instance in the static example, if I wanted to copy name into buffer starting at the fifth byte, it would look like this.
strcpy_s(&buffer[4],sizeof(buffer), name);
You could put replace buffer with &buffer[0] but why would you. Simpler is best!
Windows vs Linux
It was found pretty quickly that the strcpy() and similar functions were not very safe. So safe versions of these were devised. Microsoft went for the _s versions. Linux went for the strn versions. Both added an extra parameter which specifies the maximum length of the destination. Confusingly Windows has the extra parameter as the 3rd while Linux makes it the second.
Append one string on to the end of another
This is very similar to the safe strcpy() examples but instead uses safe versions of strcat(). So first copy one string with strcpy_s then use strcat_s to append to it.
#include <stdio.h> #include <string.h> #include <stdlib.h> char* destination; char* name = "Dave "; char* append = "was here"; int len = 50; int main(){ destination = (char*)malloc(len); if (!destination) { printf("Unable to allocate %d bytes",len); return 1; } strcpy_s(destination, len,name ); strcat_s(destination, len, append); printf("Appended string - %s\n", destination); free(destination); }
I’ve not put the array version in.
Convert numbers to strings
You’ll occasionally see reference to a function itoa. This exists in Visual C++ as _ltoa_s but does not in Linux. Here’s what it looks like in Windows.
#include <stdio.h> #include <stdlib.h> int len = 50; char buffer[50]; int main(){ _ltoa_s(1250,buffer,len,10); printf("value = %s\n", buffer); }
This is the Linux version.
#include <stdio.h> #include <stdlib.h> int len = 50; char buffer[50]; int main(){ int value = 1250; snprintf(buffer,len,"%d", value); printf("Value=%s\n",buffer); }
The snprintf (safe version of sprintf) prints the number to the string into buffer. You can then printf buffer out in the last line. I’ve used an int value but float or double with the appropriate format string works as well.
Conclusion
String handling in C is certainly doable but its quite fiddly and it’s very easy to introduce bugs. | https://learncgames.com/tutorials/tutorial-14-working-with-strings-in-c/ | CC-MAIN-2021-43 | refinedweb | 870 | 66.84 |
Niclas Hedhman wrote:
>
> "Timm, Sean" wrote:
>
> > It seems like it would be useful to have pre- and post-transformers that are
> > applied prior to and after all other transformers on a per-pipeline basis.
> > Obviously this isn't necessary, since you could list the transformers in
> > every single match you had, but it would ease the administration of common
> > transformers (especially when you had to change something). For instance,
> > maybe you want XInclude to run every time for a certain pipeline prior to
> > any of your other transformers (or after your other transformers...or both).
> > Does anyone else see this as useful? Maybe something like the following:
>
> -1;
> IMHO, it breaks cleanliness of the Sitemap, introduces more complexity for the
> sitemap processor (must make sure there are only transformers in the
> pre-pipeline for instance.)
>
> I would instead suggest an "include" of commonly re-occurring segments.
Hmmmm, good point, but I'd leave this for later after we have more
feedback.
> Niclas
>
> >
> >
> > <map:pipeline>
> > ...
> > <map:pre-transform>
> > <map:tranform
> > </map:pre-transform>
> >
> > <map:post-transform>
> > <map:choose>
> > <map:when
> > <map:transform
> > </map:when>
> > <map:otherwise>
> > <map:transform
> > </map:otherwise>
> > </map:choose>
> > </map:post-transform>
> >
> > </map:pipeline>
No -1 as well.
A couple of reasons:
1) XInclude should not be a filter but something that it's built-in the
cocoon processing chain, just like xbase or schema validation or
namespaces.
2) you risk to make "pipelines" behave as "pipes". Anyway, you'll do
pre-post processing with mounting
<match>
..do something pre..
<mount>
..do something post..
</match>
but should not influence how the pipe works.
3) your stuff doesn't work at all where is the serializer? sure, you
could add your serializer in the post process, but then isn't this just
like placing it into the pipe?
I understand you do this only to reduce verbosity and I agree that
verbosity reduction is almost always a Good Thing(TM), but sometimes it
removes readability....
BTW, if your sitemap is too verbose, probably your URI space sucks :)
I picture sitemap verbosity as a "measure" of your URI space
complexity... so you have a _clear_ feedback of what's going on and you
have a coherent view of the whole addressing scheme.
But again, these are just speculations, we need more real-life feedback.
--!
------------------------- --------------------- | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200007.mbox/%3C3965CAFC.D0CF8D80@apache.org%3E | CC-MAIN-2016-22 | refinedweb | 384 | 62.17 |
I have a GUI button that can detect when it is pressed and a piece of code that I want to have create labels when the button is pressed. I'm currently converting an entry to an int (This is working. Tested by displaying the int in an entry.) and trying to use a for loop. My code is in the source code of my main window. Here is my code:
using System; using Gtk; public partial class MainWindow: Gtk.Window { public MainWindow () : base (Gtk.WindowType.Toplevel) { Build (); } protected void OnDeleteEvent (object sender, DeleteEventArgs a) { Application.Quit (); a.RetVal = true; } protected void generatePlates (object sender, EventArgs e) { int n; short number; bool validNumnber; if (Int16.TryParse (entry1.Text, out number)) { validNumnber = true; } else { validNumnber = false; } if (validNumnber == true) { n = Int16.Parse(entry1.Text); } else { n = 0; } for (int i = 0; i < n; i++) { var lbl = new Label(); lbl.Name = "lbl"+i; lbl.Text = "Plate "+i+":"; lbl.Allocation = new Gdk.Rectangle (110*i+110,110*i+110,100,100); this.Add (lbl); entry3.Text = this.ToString(); } }
Any advice is appreciated and feel free to ask for more details. If my code has other flaws in it, please let me know as I am a complete novice. Also
entry3 is just an entry I'm using for testing this.
Edit: This code is for a windows desktop app.
In general, you need to add the new control to the Form/view/whatever's collection of controls. If you are running this in a view than simply use.
View.Add (lbl);
If you are using c#.net winforms (which I don't think you are) then use
this.Controls.Add(lbl); | http://www.devsplanet.com/question/35271499 | CC-MAIN-2017-09 | refinedweb | 278 | 69.68 |
Query statement:
I?
Answer:
One line, probably pretty fast:
num_lines = sum(1 for line in open('myfile.txt'))
How to get the line count of a large file cheaply in Python?, the best you can do is make sure you don’t use unnecessary memory, but it looks like you have that covered.
Answer #2:
I believe that a memory mapped file will be the fastest solution. I tried four functions: the function posted by the OP (
opcount); a simple iteration over the lines in the file (
simplecount); readline with a memory-mapped filed (mmap) (
mapcount); and the buffer read solution offered by Mykola Kharechko (
bufcount).
I ran each function five times, and calculated the average run-time for a 1.2 million-line text file.
Windows XP, Python 2.5, 2GB RAM, 2 GHz AMD processor
Here are my results:
mapcount : 0.465599966049 simplecount : 0.756399965286 bufcount : 0.546800041199 opcount : 0.718600034714
Edit: numbers for Python 2.6:
mapcount : 0.471799945831 simplecount : 0.634400033951 bufcount : 0.468800067902 opcount : 0.602999973297
So the buffer read strategy seems to be the fastest for Windows/Python 2.6
Here is the code:
from __future__ import with_statement import time import mmap import random from collections import defaultdict def mapcount(filename): f = open(filename, "r+") buf = mmap.mmap(f.fileno(), 0) lines = 0 readline = buf.readline while readline(): lines += 1 return lines def simplecount(filename): lines = 0 for line in open(filename): lines += 1 return lines def bufcount(filename): f = open(filename) lines = 0 buf_size = 1024 * 1024 read_f = f.read # loop optimization buf = read_f(buf_size) while buf: lines += buf.count('\n') buf = read_f(buf_size) return lines def opcount(fname): with open(fname) as f: for i, l in enumerate(f): pass return i + 1 counts = defaultdict(list) for i in range(5): for func in [mapcount, simplecount, bufcount, opcount]: start_time = time.time() assert func("big_file.txt") == 1209138 counts[func].append(time.time() - start_time) for key, vals in counts.items(): print key.__name__, ":", sum(vals) / float(len(vals))
Answer #3:
All of these solutions ignore one way to make this run considerably faster, namely by using the unbuffered (raw) interface, using bytearrays, and doing your own buffering. (This only applies in Python 3. In Python 2, the raw interface may or may not be used by default, but in Python 3, you’ll default into Unicode.)
Using a modified version of the timing tool, I believe the following code is faster (and marginally more pythonic) than any of the solutions offered:
def rawcount(filename): f = open(filename, 'rb') lines = 0 buf_size = 1024 * 1024 read_f = f.raw.read buf = read_f(buf_size) while buf: lines += buf.count(b'\n') buf = read_f(buf_size) return lines
Using a separate generator function, this runs a smidge faster:
def _make_gen(reader): b = reader(1024 * 1024) while b: yield b b = reader(1024*1024) def rawgencount(filename): f = open(filename, 'rb') f_gen = _make_gen(f.raw.read) return sum( buf.count(b'\n') for buf in f_gen )
This can be done completely with generators expressions in-line using itertools, but it gets pretty weird looking:
from itertools import (takewhile,repeat) def rawincount(filename): f = open(filename, 'rb') bufgen = takewhile(lambda x: x, (f.raw.read(1024*1024) for _ in repeat(None))) return sum( buf.count(b'\n') for buf in bufgen )
Here are my timings:
function average, s min, s ratio rawincount 0.0043 0.0041 1.00 rawgencount 0.0044 0.0042 1.01 rawcount 0.0048 0.0045 1.09 bufcount 0.008 0.0068 1.64 wccount 0.01 0.0097 2.35 itercount 0.014 0.014 3.41 opcount 0.02 0.02 4.83 kylecount 0.021 0.021 5.05 simplecount 0.022 0.022 5.25 mapcount 0.037 0.031 7.46
Answer #4:
Here is a python program to use the multiprocessing library to distribute the line counting across machines/cores. My test improves counting a 20million line file from 26 seconds to 7 seconds using an 8 core windows 64 server. Note: not using memory mapping makes things much slower.
import multiprocessing, sys, time, os, mmap import logging, logging.handlers def init_logger(pid): console_format = 'P{0} %(levelname)s %(message)s'.format(pid) logger = logging.getLogger() # New logger at root level logger.setLevel( logging.INFO ) logger.handlers.append( logging.StreamHandler() ) logger.handlers[0].setFormatter( logging.Formatter( console_format, '%d/%m/%y %H:%M:%S' ) ) def getFileLineCount( queues, pid, processes, file1 ): init_logger(pid) logging.info( 'start' ) physical_file = open(file1, "r") # mmap.mmap(fileno, length[, tagname[, access[, offset]]] m1 = mmap.mmap( physical_file.fileno(), 0, access=mmap.ACCESS_READ ) #work out file size to divide up line counting fSize = os.stat(file1).st_size chunk = (fSize / processes) + 1 lines = 0 #get where I start and stop _seedStart = chunk * (pid) _seekEnd = chunk * (pid+1) seekStart = int(_seedStart) seekEnd = int(_seekEnd) if seekEnd < int(_seekEnd + 1): seekEnd += 1 if _seedStart < int(seekStart + 1): seekStart += 1 if seekEnd > fSize: seekEnd = fSize #find where to start if pid > 0: m1.seek( seekStart ) #read next line l1 = m1.readline() # need to use readline with memory mapped files seekStart = m1.tell() #tell previous rank my seek start to make their seek end if pid > 0: queues[pid-1].put( seekStart ) if pid < processes-1: seekEnd = queues[pid].get() m1.seek( seekStart ) l1 = m1.readline() while len(l1) > 0: lines += 1 l1 = m1.readline() if m1.tell() > seekEnd or len(l1) == 0: break logging.info( 'done' ) # add up the results if pid == 0: for p in range(1,processes): lines += queues[0].get() queues[0].put(lines) # the total lines counted else: queues[0].put(lines) m1.close() physical_file.close() if __name__ == '__main__': init_logger( 'main' ) if len(sys.argv) > 1: file_name = sys.argv[1] else: logging.fatal( 'parameters required: file-name [processes]' ) exit() t = time.time() processes = multiprocessing.cpu_count() if len(sys.argv) > 2: processes = int(sys.argv[2]) queues=[] # a queue for 
each process for pid in range(processes): queues.append( multiprocessing.Queue() ) jobs=[] prev_pipe = 0 for pid in range(processes): p = multiprocessing.Process( target = getFileLineCount, args=(queues, pid, processes, file_name,) ) p.start() jobs.append(p) jobs[0].join() #wait for counting to finish lines = queues[0].get() logging.info( 'finished {} Lines:{}'.format( time.time() - t, lines ) )
How to get the line count of a large file in Python?
You could execute a subprocess and run
wc -l filename
import subprocess def file_len(fname): p = subprocess.Popen(['wc', '-l', fname], stdout=subprocess.PIPE, stderr=subprocess.PIPE) result, err = p.communicate() if p.returncode != 0: raise IOError(err) return int(result.strip().split()[0]) | https://programming-articles.com/how-to-get-the-line-count-of-a-large-file-cheaply-in-python/ | CC-MAIN-2022-40 | refinedweb | 1,089 | 61.53 |
public class Solution { public int lengthOfLIS(int[] nums) { int N = 0, idx, x; for(int i = 0; i < nums.length; i++) { x = nums[i]; if (N < 1 || x > nums[N-1]) { nums[N++] = x; } else if ((idx = Arrays.binarySearch(nums, 0, N, x)) < 0) { nums[-(idx + 1)] = x; } } return N; } }
Re-use the array given as input for storing the tails.
As this statement is executed only if idx variable is negative (meaning there is no such tail in our DP part of the array at the moment), we use -(idx + 1) to convert it to the right position.
If we already have this number in the array Arrays.binarySearch() will return positive index (or 0) - we don't need to update anything. But if it is not in the array then we update it.
To avoid any confusion, look at the documentation of Arrays.binarySearch() (return part) here:(int[], int, int, int)
the code won't work with an input of 10,9,2,5,3,7,101,18,1,5,6,7,8,9,10,11
the output should be 8 but this will output a 9 instead.
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/30334/o-1-space-o-n-log-n-time-short-solution-without-additional-memory-java | CC-MAIN-2017-51 | refinedweb | 206 | 70.63 |
Updated: July 2008
Use the Application page of the Project Designer to specify the application.
Specifies the name of the output file that will hold the assembly manifest. Changing this property will also change the Output Name property. You can also make this change from the command line by using /out (Set Output File Name) (C# Compiler Options). To access this property programmatically, see AssemblyName.
Specifies the base namespace for files added to the project.
It is also possible to clear the root namespace property, which enables you to manually specify the namespace structure of your project. See namespace (C# Reference) for more information about creating namespaces in your code.
To access this property programmatically, see RootNamespace.
Specifies the .NET Framework version that the application targets. This option can have the following values:
.NET Framework 2.0
.NET Framework 3.0
.NET Framework 3.5
The default setting is .NET Framework 3.5. Specific .NET Framework and .NET Framework Multi-Targeting Overview.
Specifies that the application targets the .NET Framework Client Profile, which provides a redistribution package that installs the minimum set of client assemblies on target computers, without requiring that the full .NET Framework be present. For more information, see .NET Framework Client Profile.
Specifies the type of application to build. The options are as follows:
Windows Application
Console Application
Class Library
In a Web Application project, this property can be set only to Class Library. See /target (Specify Output File Format) (C# Compiler Options) for more information.
In a WPF Browser Application project, this option is disabled.
To access this property programmatically, see OutputType.
Clicking this button displays the Assembly Information Dialog Box. (Specify Location of Main Method) (C# Compiler Options) for more information. To access this property programmatically, see StartupObject.
By default, this radio button is selected and the Icon and Manifest options are enabled. This enables you to select your own icon, or to select different manifest generation options. Leave this radio button selected unless you are providing a resource file for the project.
Sets the .ico file that you want to use as your program icon. Click the ellipsis button to browse for an existing graphic, or type the name of the file that you want. See /win32icon (Import an .ico File) (C# Compiler Options) for more information. To access this property programmatically, see ApplicationIcon. be.
Select this radio button when you are providing a resource file for the project. Selecting this option disables the Icon and Manifest options.
Enter a path name or use the Browse button (...) to add a Win32 resource file to the project.
Date
History
Reason
July 2008
Updated C#-specific content throughout.
Content bug fix.
Added information about the Client-only Framework subset option.
SP1 feature change. | http://msdn.microsoft.com/en-us/library/ms247046.aspx | crawl-002 | refinedweb | 457 | 53.27 |
In my previous article (An Introduction to LINQ), I described LINQ as a language-agnostic technology. LINQ can work in Visual Basic just as well as it can work in C#. However, both languages introduced new features to facilitate and enhance the LINQ experience. In this article, we want to investigate the new features of C# 3.0. While the C# designers put together many of these features with LINQ in mind, the features can also be used outside of query expressions to enhance other areas of code.
Extension methods allow us to create the illusion of new methods on an existing type - even if the existing type lives in an assembly outside of our control. For instance, the System.String class is compiled into the .NET framework as a sealed class. We can't inherit from the String class, and we can't change the String class and recompile the framework. In the past, we might package an assortment of commonly used string utility methods into a static class like the following.
With C# 3.0, we can make our ToDouble method appear as a member of the String class. When the first parameter of a method includes a this modifier, the method is an extension method. We must define extension methods inside a non-generic, static class, like the StringExtensions class shown below.
StringExtensions defines two extension methods. We can now invoke either method using instance method syntax. It appears as if the methods are members of System.String, and we don't need to use the name of the static class.
Despite appearances, extension methods are not members of the type they are extending. Technically, the ToDouble method we are using is still just a static method on another type, but the C# compiler can hide this fact. It is important to realize that extension methods have no special privileges with respect to member access of the target object. Our extension methods cannot access protected and private members of the target object.
An extension method is available to call on any object instance that is convertible to the type of the method's first parameter. For instance, the following extension method is available for any object that is convertible to IList.
We can use the extension method with an array of strings, as an array implements the IList interface. Notice we need to check for null, as it is legal and possible for a program to invoke an extension method on a null object reference.
An extension method cannot override or hide any instance methods. For instance, if we defined a ToString extension method, the method would never be invoked because every object already has a ToString method. The compiler will only bind a method call to an extension method when it cannot find an instance method with a usable signature.
An extension method is only available when you import the namespace containing the static class that defines the extension method (with a using statement). Although you can always invoke the extension method via the namespace qualified class name (Foo.StringExtensions.ToDouble()), there is no mechanism available to reach a single extension method using an instance syntax unless you import the method's namespace. Once you import the namespace, all extension methods defined inside will be in scope. For this reason you should be careful when designing namespaces to properly scope your extension methods (see the LINQ Framework Design Guidelines).
LINQ defines a large number of extension methods in the System.Linq namespace. These extension methods extend IEnumerable<T> and IQueryable<T> to implement the standard query operators, like Where, Select, OrderBy, and many more. For instance, when we include the System.Linq namespace, we can use the Where extension method to filter an enumerable collection of items.
The definition for Where would look like the following:
As you can see, the second parameter to Where is of type Func<TSource, bool>. This parameter is a predicate. A predicate is a function that the Where method can invoke to test each element. If the predicate returns true, the element is included in the result, otherwise the element is excluded. Just remember - different LINQ providers may implement the filtering using different strategies. For example, when using the LINQ to SQL provider with an IQueryable<T> data source, the provider analyzes the predicate to build a WHERE clause for the SQL command it sends to the database – but more on this later.
We built the predicate by constructing a delegate using an anonymous method. It turns out that little snippets of in-line code like this are quite useful when working with LINQ. So useful, in fact, that the C# designers gave us a new feature for constructing anonymous methods with an even more succinct syntax - lambda expressions.
In the days of C# 1.0, we did not have anonymous methods. Even the simplest delegate required construction with a named method. Creating a named method seemed heavy-handed when you only needed a one-line event handler. Fortunately, C# 2.0 added anonymous methods. Anonymous methods were perfect for simple event handlers and expressions – like the predicate we constructed earlier. Anonymous methods also had the advantage of being nested inside the scope of another method, which allowed us to pack related logic into a single section of code and create lexical closures.
C# 3.0 has added the lambda expression. Lambda expressions are an even more compact syntax for defining anonymous functions (anonymous methods and lambda expressions are collectively referred to as anonymous functions). The following section of code filters a list of cities as we did before, but this time using a lambda expression.
The distinguishing feature in this code is the "goes to" operator (=>). The left hand side of the => operator defines the function signature, while the right hand side of the => operator defines the function body.
You'll notice that the lambda expression doesn't require a delegate keyword. The C# compiler automatically converts lambda expressions into a compatible delegate type or expression tree (more on expression trees later in the article). You'll also notice the function signature doesn't include any type information for the parameter named s. We are using an implicitly typed signature and letting the C# compiler figure out the type of the parameter based on the context. In this case, the C# compiler can deduce that s is of type string. We could also choose to type our parameter explicitly.
Parentheses are required around the function signature when we use explicit typing, or when there are zero or more than one parameter. In other words, the parentheses are optional when we are using a single, implicitly typed parameter. An empty set of parentheses is required when there are no parameters for the expression.
The right hand side of a lambda may contain an expression or a statement block inside { and } delimiters. Our city filter above uses an expression. Here is the same filter using a statement block.
Notice the statement block form allows us to use multiple statements and provide local variable declarations. We also need to include a return statement. Statement blocks are useful in more complex lambda expressions, but the general guideline is to keep lambda expressions short.
Once we assign a lambda to a delegate type, we can put the lambda to use. The framework class library includes two sets of generic delegates that can fit most scenarios. Consider the following code:
The Func generic delegate comes in 5 versions. Func<TResult> encapsulates a method that takes no parameters and returns a value of type TResult. Our square delegate is of type Func<int, int> - a method that takes 1 parameter of type int and returns an int, while mult is of type Func<int, int, int> - a method that takes 2 parameters of type int and returns an int. The type of the return value is always specified as the last generic parameter. Func variations are defined that can encapsulate methods with up to 4 parameters (Func<T1, T2, T3, T4, TResult>).
Action delegates represent methods that do not return a value. Action<T> takes a single parameter of type T. We use an instance of Action<int> in the above code. There is also an Action (no parameters). Action<T1, T2> (2 parameters), Action<T1, T2, T3> (3 parameters), and Action<T1, T2, T3, T4> (4 parameters). We can put all these delegates to work with lambda expressions instead of defining our own custom delegate types.
Remember the definition of the Where extension method for IEnumerable<T>? We created a lambda expression for the parameter of type Func<TSource, bool>. The implementation of the Where clause for IEnumerable<T> executes the lambda expression via a delegate to filter the collection of in-memory objects.
What happens if we aren't operating on an in-memory collection? How could a lambda expression influence a SQL query sent to a database engine for execution? This is where IQueryable<T> and expression trees come into play.
Let's add a twist to our last example.
Instead of assigning our lambdas to an Action or Func delegate type, we've assigned them to an Expression<TDelegate> type. This code prints out the following:
x => (x * x)
(x, y) => (x * y)
x => WriteLine(x)
It would appear that the output is echoing our code. This is because the C# compiler treats Expression<TDelegate> as a special case. Instead of compiling the lambda expression into intermediate language for execution, the C# compiler compiles the lambda expression into an expression tree.
Our code is now data that we can inspect and act on at runtime. If we look at what the C# compiler produces for the squareExpression with a tool like Reflector, we'd see something similar to the following.
If we wanted to invoke the expression, we'd first have to compile the expression:
However, the real power of an expression tree is not in dynamic compilation but in runtime inspection. Again, our code is now data – a tree of Expression nodes that we can inspect. In fact, the C# samples for Visual Studio 2008 include an expression tree visualize – a plugin for the Visual Studio debugger to view expression trees. The following is a screen shot of the visualizer for our squareExpression.
Imagine writing some code that walks through the tree and inspects the types, names, and parameters. In fact, this is exactly what technologies like LINQ to SQL do. LINQ to SQL inspects the expression tree and converts the expression tree into a SQL command.
LINQ to SQL classes implement an IQueryable<T> interface, and the System.Linq namespace adds all the standard query operators as extension methods for IQueryable<T>. Instead of taking delegates as parameters like the IEnumerable<T> Where method does, IQueryable<T> will take Expression<TDelegate> parameters. We are not passing code into IQueryable<T>, we are passing expression trees (code as data) for the LINQ provider to analyze.
With an understanding of lambda expressions, expression trees, and extension methods, we are finally able to tackle one of the beauties of LINQ in C# - the query expression.
Query expressions provide the "language integrated" experience of LINQ. Query expressions use syntax similar to SQL to describe a query.
A query expression begins with a from clause and ends with a select or group clause. Other valid clauses for the middle of the expression include from, let, where, join, and orderby. We'll delve into these clauses in a future article.
The C# compiler translates a query expression into method invocations. The where clause will translate into a call to a Where method, the orderby clause will translate into a call to an OrderBy method, and so on. These methods must be extension methods or instance methods on the type being queried, and each has a particular signature and return type. It is the implementation of the methods, not the compiler, that will determine how to execute the query at runtime. Our query expression above would transform into the following:
Since we are using an IEnumerable<string>, these method calls are the extension methods for IEnumerable<T>. From our earlier discussion we know that the compiler assigns each lambda expression to a delegate for the extension methods to invoke on the in-memory collection. If we were working with an IQueryable<T> data source, compiler would be creating expressions trees for a LINQ provider to parse.
Most of the lambda expressions in this query are simple, like c => c, meaning in the OrderBy case that given a string parameter c, sort the result by the value of c (alphabetical). We could have also said orderby c.Length, which translates into the lambda expression c => c.Length, meaning given a parameter c, sort the items by the value of c's Length property.
It's important to reinforce the fact that the C# compiler is performing a translation of the query expression and looking for matching methods to invoke. This means the compiler will use the IEnumerable<T> or IQueryable<T> extension methods, when available. However, in our own classes we could add methods like Where or OrderBy to override the standard LINQ implementations (remember instance methods will always take preference over extension methods). We could also leave out the System.Linq namespace and write our own extension methods to replace the standard LINQ implementations completely. This extensibility in LINQ is a powerful feature.
We've now covered all the C# features that make LINQ work. The compiler translates query expressions into method calls. These method calls are generally extension methods on the queried type. The extension method approach doesn't force the queried type to use a specific base class or implement a particular interface, and allows us to customize behavior for specific types. Sorting, filtering, and grouping logic in the query expression is passed into the method calls using lambda expressions. The compiler can convert lambda expressions into delegates, for the method to invoke, or expression trees, for the method to parse and analyze. Expression trees allow LINQ providers to transform the query into SQL, XPath, or some other native query language that works best for the data source.
There are other features in C# that aren't required for all this magic to work, but they do offer everyday conveniences. These features include type inference, anonymous types, object and collection initializes, and partial methods.
C# 3.0 introduced the var keyword. We can use the new keyword to define implicitly typed local variables. Unlike the var keyword in JavaScript, the C# version does not introduce a weak, loose, or dynamically typed variable. The compiler will infer the type of the variable at compile time. The following code provides an example.
Each variable has a type that the compiler deduced using the variable's initializer. For example, the compiler sees the code assign the name variable an initial value of "Scott", so the compiler deduces that name is a string. We are not able to try change this type later. The following code is full of compiler errors – things you can't do with var.
Implicitly type variables must have an initializer, and the initializer must allow the compiler to infer the type of the variable. Thus, we can't assign null to an implicitly typed variable – the compiler can't determine the correct type – any reference type can accept the value null! Lambda expressions themselves are not associated with a specific type, we have to let the compiler know a delegate or expression type for lambdas. Finally, we can see that once the compiler has determined the type, we can't change or morph the type like we can in some dynamic languages. Thus the number variable declared in the above code will always and forever be strongly typed as a string.
For practical purposes, the var keyword is important in two scenarios. The first scenario is when we can use var to remove redundancy from our code. For instance, when constructing generic types, all the type information and angled brackets can clutter the code (Dictionary<int, string> d = new Dictionary<int, string>). Do we need to see the type information twice to understand the variable is a dictionarty of int and string?
While the first scenario is entirely a matter of style and personal preference, the second scenario requires the use of var. The scenario is when you don't know the name of the type you are consuming – in other words, you are using an anonymous type.
Anonymous types are nameless class types. We can create an anonymous type using an object initializer, as shown in several examples below.
The first example in the above code constructs an anonymous type to represent an employee. Between the curly braces, the object initializer lists a sequence of properties and their initial values. The compiler will use type inference to determine the types of these properties (both are strings in this case), then create an anonymous type derived from System.Object with the read-only properties Name and Department.
Notice we can use the anonymous type in a strongly typed fashion. In fact, once we've type employee. into the Visual Studio editor, an Intellisense window will show us the Name and Department properties of the object.
We cannot use this object in a strongly typed fashion outside of the local scope. In other words, if we needed to return the employee object from the current method, the method would have to specify a return type of object. If we wanted to pass the variable to another method, we'd have to pass the variable as an object reference. We can only use the var keyword for local variable declaration – var is not allowed as a return type, parameter type, or field type. Outside of the scope of the current method, we'd have to use reflection to retrieve the property values on the anonymously typed object (which is how anonymously typed objects work with data-binding features in .NET platforms).
Anonymous types are also useful in query expressions. In the second half of the code above, we create a report of running processes on the machine. Instead of the query expression returning a Process object, which has dozens of properties, we've projected a new anonymous type with two properties: the process name and the process thread count. These types of projections are useful when creating data transfer objects or restricting the amount of data coming back in a LINQ to SQL query.
The initializers we've used are not only or anonymous types. In fact, C# has added a number of shortcuts for constructing new objects.
The following two classes use auto-implemented properties to describe themselves.
Given those class definitions, we can use the object initializer syntax to construct new instances of the classes like so:
The initializer syntax allows us to set accessible properties and fields on an object during construction of the object. We can even nest an initializer inside an initializer, as we have done for the employee's Address property (notice the new keyword is optional, too). Closely related to the object initializer is the collection initializer.
Our coverage of C# features for LINQ wouldn't be complete without talking about partial methods. Partial methods were introduced into the C# language at the same time as LINQ. Partial methods, like partial classes, are an extensibility method to work with designer generated code. The LINQ to SQL class designer uses partial methods like the following.
The Account class is a partial class, meaning we can augment the class with additional methods, properties, and fields with our own partial definition. Partial classes have been around since 2.0.
The partial method OnCreated is defined in the above code. Partial methods are always private members of a class, and their return type must always be void. We have the option of providing an implementation for OnCreated in our own partial class definition. The implementation is optional, however. If we do not provide an implementation of OnCreated for the class, the compiler will remove the method declaration and all method calls to the partial method.
Providing an implementation is just defining a partial class with a partial method that carries an implementation for the method.
Partial classes are an optimization for designer generated code that wants to provide some extensibility hooks. We can "catch" the OnCreated method if we need it for this class. If we do not provide an implementation, the compiler will remove all OnCreated method calls from the code and we won't pay a penalty for extensibility that we do not use.
In this article we've covered most of the new features introduced into C# for LINQ. Although lambda expressions, extension methods, anonymous types and the like were primarily introduced to facilitate LINQ, I hope you'll find some uses for these powerful features outside of query expressions. | http://www.odetocode.com/articles/738.aspx | crawl-002 | refinedweb | 3,503 | 54.73 |
culprit is in LabelField.java line 122
Type: Posts; User: tbenbrahim
culprit is in LabelField.java line 122
I would suggest you educate your users, rather than change anything.
If someone modified a record at 1000 in Paris, it was 0900 in London.
Since the user in Paris sees 1000 and the one in London...
it works in 3.0.1
**
* @author tony.benbrahim
*
*/
public class AjaxComboBox<T extends ModelData> extends ComboBox<T> {
public interface GetDataCallback<T> {
void getData(String query, int maxResults,...
Actually yes, it is in 2.0 :-)
change the order of the loops :)
I think I did the same thing you did (got reduced visibility error, etc...)
Make absolutely sure everything matches, and everything will be fine
gxt resources, jars in WEB-INF/lib, jars in...
Code written in 1.6/1.7 will compile without problem in GWT 2.0
Code written to take advantage of new 2.0 features (RunAsync, ResourceBundle, etc...) will obviously not compile in 1.X
Tony
I tried it today. My application compiles with a couple of warnings (no big deal).
However there were many UI problems (wrong sized buttons, disappearing tab strips when switching tabs, etc...),...
Still does not look like GPL to me. I can take MySql, Apache (not GPL, I know) or Linux, make modifcation or extensions, and ship the whole thing on a CD or post it on the net. I have the right to...
From,
By the same analogy, the corporate user who uses a computer at work doesn't get possession of it, so the user also does...
I was surprised to hear this mentioned (negatively) on the Java Posse podcast (by far the most popular Java podcast), so this has been a PR disaster so far.
Java Posse podcast #184, about 20...
it does fire a load event, however this does not update the contents of the combobox attached to the SimpleStore.
The first line it hits in the handler is if (!focused) return;, and that's the end...
There is a bug in SimpleStore when adding a single record using loadData with append=true
for example:
supplierStore.loadData([[data.id,data.name]],true);
I was expecting my combo box tied... | https://www.sencha.com/forum/search.php?s=dbca376d016a5a9fb4eba893f4732012&searchid=22819212 | CC-MAIN-2019-43 | refinedweb | 367 | 68.47 |
Workspace Question
Workspace Question
Hello,?
Last edited by chrisfarrell; 20 Sep 2012 at 12:24 PM. Reason: corrected touch sdk version number-->
DeftJS
DeftJS
If I want to use DeftJS in a Cmd generated app, what do I need to consider?
Should I avoid using Cmd to generate classes (models, controllers, etc), or should I go ahead and generate them and then edit manually to implement DeftJS? Is there another option?
Are there any Cmd commands I need to avoid or be aware of?-->
Wrong Touch SDK!
Wrong Touch SDK!
I was using the wrong Touch SDK. Found the correct one here:
Touch app generates now...on to the next issue : )
-->
If those fail, please start a new thread with the script attached and/or other details!"-->
You should not have to remove the old tools, but the new PATH entry will take over and "hide" the older version.
The way this is accomplished varies by platform, so if you have issues please start a new thread and provide OS info and steps taken.Don Griffin
Ext JS Development Team Lead
Check the docs. Learn how to (properly) report a framework issue and a Sencha Cmd issue
"Use the source, Luke!"-->
- -->
Ok, I get errors all over tha place as well. Just my first hour with ExtJS, I have two days to evaluate the framework and decide if we will give it a try, but I can't even generate sample app...
Code:
C:\path\to\extjs>sencha -sdk C:\path\to\extjs generate app AppName C:\path\to\app\WebExt Sencha Cmd v3.0.0.122 [INFO ] init-properties: [INFO ] init-antcontrib: [INFO ] init-sencha-command: [INFO ] init: [INFO ] -before-generate-workspace: [INFO ] generate-workspace-impl: [INFO ] -before-copy-framework-to-workspace: [INFO ] copy-framework-to-workspace-impl: [ERROR] UNHANDLED EXCEPTION : This is the default implementation from Sencha CMD and must be overriden by the framework. C:\inetpub\wwwroot\extjs>
Can I fix that somehow?-->
I've some troubles when installing Sencha Cmd V3 on my OSX Lion 10.7.4 :
The .profile file was well updated with the PATH, but nothing worked in my terminal window (even after a restart - say restart of the mac)
So, copying the new lines written in .profil into .bash_profile made thing work fine
Hope this help-->
I'd installing the v3 with the link of the blog post announcing version 3.
So, it was installed at ~/bin/Sencha/*
PHP Code:
Sencha Cmd v3.0.0.141
PHP Code:
/Users/knalli/bin/Sencha/Cmd/3.0.0.141/sencha compile -classpath=app.js,ext-4.1.1/src,ext-4.1.1/examples/ux,plugins,myappdir exclude -namespace=Ext.chart and concat -out all-classes-dev.js and -debug=false concat -out all-classes-debug.js and -closure concat -out=all-classes-prod.js
PHP Code:
Sencha Cmd v3.0.0.141
[INFO ] Processing classPath entry : <dir>/sencha-compile-temp-dir
[INFO ] Processing classPath entry : app.js
[INFO ] Processing classPath entry : ext-4.1.1/src
[INFO ] Processing classPath entry : ext-4.1.1/examples/ux
[INFO ] Processing classPath entry : plugins
[INFO ] Processing classPath entry : myappdir
[INFO ] Processing class inheritance graph
[INFO ] Processing source dependencies
[INFO ] Concatenating output to file all-classes-dev.js
[INFO ] Concatenating output to file all-classes-debug.js
[ERROR] No such property : closure
--> | http://www.sencha.com/forum/showthread.php?243153-Sencha-Cmd-V3-Beta&p=890010&viewfull=1 | CC-MAIN-2014-42 | refinedweb | 555 | 59.19 |
I have a GUI in PyQt with a function
addImage(image_path)
threading.Thread
watchdog
addImage
QPixmap
addImage
I believe the best approach is using the signal/slot mechanism. Here is an example.
from PyQt4 import QtGui from PyQt4 import QtCore # Create the class 'Communicate'. The instance # from this class shall be used later on for the # signal/slot mechanism. class Communicate(QtCore.QObject): myGUI_signal = QtCore.pyqtSignal(str) ''' End class ''' # Define the function 'myThread'. This function is the so-called # 'target function' when you create and start your new Thread. # In other words, this is the function that will run in your new thread. # 'myThread' expects one argument: the callback function name. That should # be a function inside your GUI. def myThread(callbackFunc): # Setup the signal-slot mechanism. mySrc = Communicate() mySrc.myGUI_signal.connect(callbackFunc) # Endless loop. You typically want the thread # to run forever. while(True): # Do something useful here. msgForGui = 'This is a message to send to the GUI' mySrc.myGUI_signal.emit(msgForGui) # So now the 'callbackFunc' is called, and is fed with 'msgForGui' # as parameter. That is what you want. You just sent a message to # your GUI application! - Note: I suppose here that 'callbackFunc' # is one of the functions in your GUI. # This procedure is thread safe. ''' End while ''' ''' End myThread '''
In your GUI application code, you should create the new Thread, give it the right callback function, and make it run.
from PyQt4 import QtGui from PyQt4 import QtCore import sys import os # This is the main window from my GUI class CustomMainWindow(QtGui.QMainWindow): def __init__(self): super(CustomMainWindow, self).__init__() self.setGeometry(300, 300, 2500, 1500) self.setWindowTitle("my first window") # ... self.startTheThread() '''''' def theCallbackFunc(self, msg): print('the thread has sent this message to the GUI:') print(msg) print('---------') '''''' def startTheThread(self): # Create the new thread. The target function is 'myThread'. The # function we created in the beginning. t = threading.Thread(name = 'myThread', target = myThread, args = (self.theCallbackFunc)) t.start() '''''' ''' End CustomMainWindow ''' # This is the startup code. if __name__== '__main__': app = QtGui.QApplication(sys.argv) QtGui.QApplication.setStyle(QtGui.QStyleFactory.create('Plastique')) myGUI = CustomMainWindow() sys.exit(app.exec_()) ''' End Main '''
I really hope this was helpful. Please let me know if it worked for you. | https://codedump.io/share/Dwm7CAbR0mXR/1/simplest-way-for-pyqt-threading | CC-MAIN-2017-09 | refinedweb | 371 | 53.88 |
From Windows to Xbox, the power of C# is visible everywhere. In this article I will discuss the classes I use most frequently when building my own reversing tools, along with a few pointers here and there.
Most of the tools I make for my day to day work involve the following things from C#:
- Multithreading (System.Threading.Thread/BackgroundWorker component)
- UI and resource hog algorithms decoupled i.e. responsive applications
- Extensive use of events and delegates for communication between the various forms and controls
- File system classes (File, Path, DirectoryInfo, FileInfo, FileSystemInfo) that encapsulate the Directory and File objects
- FileStream/MemoryStream classes to work with dynamically read or generated data
- BinaryReader and BinaryWriter classes
- Extensive use of collections and generics
- Structs – readonly fields /Enums
- Strings manipulation classes and methods
- Properties –get and set accessors
- Regex
- Process class for starting applications and reading commandline output
- GDI+ Graphics/3D
This whole set can be brought in by importing a few namespaces.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Drawing.Drawing2D;
using System.IO;
using System.Threading;
using System.Diagnostics;
using System.Text.RegularExpressions;
The above set sums up much of my namespace laundry list.
Let’s delve into the classes themselves.
Most reversing primers include format parsing of some sort, whether it is a PE file, a Dex file, a PDF, a JPEG, and so on. For this I use a byte array: it is very convenient to have a direct representation of the actual bytes as integer values from 0-255 for every byte of the format. Once the file is loaded, parsing mainly involves walking through the array and extracting values of specific lengths at specific offsets. These offsets and lengths come from the headers of the respective formats.
Of course, if editing the file is required, an array can be expensive for large files, so a List&lt;byte&gt; can be used instead. While using BinaryReader to fill an instantiated array works fine, performance is far better with the File.ReadAllBytes() method, which takes the path of the target file (any binary file will do) and returns its contents as a byte[]. For reading in a series of files for later in-memory manipulation, I use a List&lt;byte[]&gt;, or nest it inside another accumulator list such as List&lt;List&lt;byte[]&gt;&gt;. This keeps things simple when enumerating long lists for graphical display, and makes it easy to search for a particular value as well as address a specific item within a collection.
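As a minimal sketch of the byte-array approach, the snippet below loads a Windows executable and walks to the PE header using the well-known pointer stored at offset 0x3C of the DOS header. The notepad.exe path is just an assumption for illustration; any binary file works with File.ReadAllBytes().

```csharp
using System;
using System.IO;

class HeaderParser
{
    static void Main()
    {
        // Hypothetical target file; any PE binary on disk will do.
        byte[] data = File.ReadAllBytes(@"C:\Windows\notepad.exe");

        // The DOS header stores the 4-byte offset of the PE header at 0x3C.
        int peOffset = BitConverter.ToInt32(data, 0x3C);

        // Walk to that offset and check the "PE" signature bytes.
        bool isPe = data[peOffset] == 'P' && data[peOffset + 1] == 'E';
        Console.WriteLine("PE header at 0x{0:X}, valid: {1}", peOffset, isPe);
    }
}
```

The same walk-and-extract pattern applies to any format: read the whole file once, then index into the array at offsets dictated by the header.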
I use structs extensively for any sort of custom data-structure template, with value retrieval and setting implemented up front. This saves a lot of time in later processing, and because structs are value types they are easily passed around inside a List&lt;struct type&gt; for convenient manipulation, e.g. List&lt;dexFileStruct&gt;. Sometimes references do not behave the way you want with structs, so resetting a struct field anywhere other than the constructor is a bad idea. The solution is to make the fields readonly, instantiate a new struct, work with the new set of fields, and point to the new instance where needed (in lists or stacks). Do not try to reset an existing struct field member.
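Here is a small sketch of the readonly-field pattern described above; dexFileStruct and its fields are illustrative names, not taken from any real format.

```csharp
using System;
using System.Collections.Generic;

// A minimal header record template; field names are hypothetical.
struct DexFileStruct
{
    public readonly string FileName;
    public readonly int HeaderSize;

    public DexFileStruct(string fileName, int headerSize)
    {
        FileName = fileName;
        HeaderSize = headerSize;
    }
}

class Demo
{
    static void Main()
    {
        var files = new List<DexFileStruct>();
        files.Add(new DexFileStruct("classes.dex", 0x70));

        // To "change" a field, build a new struct and replace the old one
        // in the list, rather than mutating the existing instance.
        files[0] = new DexFileStruct(files[0].FileName, 0x78);
        Console.WriteLine(files[0].HeaderSize); // prints 120
    }
}
```

Replacing the whole element sidesteps the value-copy surprises that come from trying to mutate a struct stored inside a collection.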
Properties make life a lot easier for both classes and structs. Exposing fields through get and set accessors also adds a measure of safety, since you control the entry and exit points of a particular field's value.
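A short example of the get/set guarding mentioned above; ParsedFile and Offset are hypothetical names used only for illustration.

```csharp
using System;

class ParsedFile
{
    private int offset;

    // The set accessor guards the entry point of the value,
    // so a negative offset can never reach the backing field.
    public int Offset
    {
        get { return offset; }
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException("value");
            offset = value;
        }
    }
}

class Usage
{
    static void Main()
    {
        var p = new ParsedFile();
        p.Offset = 0x3C;          // fine
        Console.WriteLine(p.Offset);
        // p.Offset = -1;         // would throw ArgumentOutOfRangeException
    }
}
```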
Moving on to filesystem access and enumeration, the best way to get a list of directories is to instantiate the DirectoryInfo class and get the FileInfo array for every directory within the root. This process can be repeated for every directory level. Using FileSystemInfo class also works but unless you need extensive drilling capabilities. I suggest using recursion with the above mentioned classes. Also FileSystemInfo is a bit buggy sometimes and crashes for no apparent reason.
DirectoryInfo d = new DirectoryInfo(targetPath);
DirectoryInfo[] di = d.GetDirectories();
foreach (DirectoryInfo i in di) {
    FileInfo[] f = i.GetFiles("*.dex");
    foreach (FileInfo j in f) {
        Console.WriteLine(i.Name);
        ProcessStartInfo ps = new ProcessStartInfo();
        ps.FileName = "cmd.exe";
        ps.CreateNoWindow = true;
        ps.UseShellExecute = false;
        ps.Arguments = "/c " + Environment.CurrentDirectory + "\\dexdump\\dexdump.exe"
                       + " -d " + "\"" + j.FullName + "\"";
        ps.RedirectStandardOutput = true;
        using (Process pi = new Process()) {
            pi.StartInfo = ps;
            pi.Start();
            string temp = pi.StandardOutput.ReadToEnd();
            using (FileStream fs = new FileStream(Environment.CurrentDirectory
                    + "\\dexdumpOutput\\" + i.Name + ".txt", FileMode.Create)) {
                using (StreamWriter sw = new StreamWriter(fs)) {
                    sw.WriteLine(temp);
                }
            }
        }
    }
}
The above snippet illustrates the use of the command line through cmd.exe, collecting the output of another command-line tool into memory and then writing it to a file. The ProcessStartInfo and Process classes are used for this; note the various fields on each that must be set to get the intended output. Keeping the two classes separate is a good design decision: when the parameters to a class are numerous and complex, its arguments can be encapsulated in a separate class that the consuming class references. Very robust.
Next, threading can be really simplified using the backgroundWorker component. A C# component exposes functionality but does not provide a UI. So a backgroundWorker exposes three events DoWork, ProgressChanged and RunWorkerCompleted. It also exposes a Boolean property ReportsProgress. I normally don’t use cancellation a lot and build the UI around it, taking things up in chunks and then processing them. To initiate the background process, you need to call the RunWorkerAsync(<Object argument>) and pass any input parameter, that needs to be typecasted inside the DoWork eventhandler code, this could typically be a file/folder path or a list of user data types sent for processing . The DoWork eventHandler is where you write your most resource hogging codes. If any updates are required during the operation, you can send a percentage completed integer value along with an object instance that can contain the userState, this could be any data type, which has to be typecasted later on, in the ProgressChanged eventHandler. After completion of the task the RunWorkerCompleted event is triggered and the handler can be used to write code that completes any task post completion. You can use as many backgroundWorker components as needed giving maximum flexibility. Couple that with Timer class instances and you get a very good threading model implemented.
Next up, strings are immutable in C#. There are classes provided that make efficient use of memory and processing power to manipulate strings. Simple concatenation for intensive strings is a lot more expensive than using any one of the classes dedicated to working with strings. I many a times need to get a character array so that each string literal can be thoroughly analysed. The <string type>.ToCharArray() method does just that. I find the Trim() method very useful for removing specific characters from a string. Split() method takes a char [] and splits the string at those pivot points. The StringBuilder class is very useful when building long lists of strings after extensive parsing. Very simple, just instantiate and use the Append()/AppendLine() method.
The Regex class’s IsMatch() method returns a bool value for a string pattern against an input string. Quick and dirty. “[aA][dD][1-9]” searches for any string containing upper or lower case ‘a/A, d/D’ and any digit from 1-9, in that order and sequence. It’s a very powerful method for fast extraction of certain strings from long logs or dumps.
So much of the real world code that I write uses all and any of these in various ways dictated by results required.
For my current software pursuits I am using all of what C# can offer, but I tend to keep things streamlined and once I find a good set of classes I begin implementing the logic myself using the code constructs provided by the language. Thus, you have I/O, multithreading, regular constructs for programming, extensive graphics support, streams for fast byte level processing, excellent debugging utilities, provisions for unsafe code, and use of the Win32 API if needed, networking code among others. I think the power of rapid application development is very evident and the benefits far outweigh any cons of using C#. In fact you can use just about any language that’s in vogue and collaborate with other developers on a common platform. In the end I think that’s the biggest advantage. Think about it, are you DotNet wise, and if not what are you missing out on? | http://resources.infosecinstitute.com/thoughts-from-my-three-night-coding-excursion-part-3-leveraging-c-for-your-daily-reverse-engineering/ | CC-MAIN-2017-17 | refinedweb | 1,437 | 56.35 |
This HTML version of Think Java is provided for convenience, but it is not the best format of the book.
In particular, some of the symbols are not rendered correctly.
You might prefer to read the PDF version, or you can buy a hardcopy at
Amazon.
The Java library includes a simple package for drawing 2D graphics, called java.awt.
AWT stands for “Abstract Window Toolkit”.
We are only going to scratch the surface of graphics programming; you can read more about it in the Java tutorials at.
java.awt
There are several ways to create graphics in Java; the simplest way is to use java.awt.Canvas and java.awt.Graphics.
A Canvas is a blank rectangular area of the screen onto which the application can draw.
The Graphics class provides basic drawing methods such as drawLine, drawRect, and drawString.
java.awt.Canvas
java.awt.Graphics
Canvas
Graphics
drawLine
drawRect
drawString
Here is an example program that draws a circle using the fillOval method:
fillOval
The Drawing class extends Canvas, so it has all the methods provided by Canvas, including setSize.
You can read about the other methods in the documentation, which you can find by doing a web search for “Java Canvas”.
Drawing
setSize
In the main method, we:
main
JFrame
Once the frame is visible, the paint method is called whenever the canvas needs to be drawn; for example, when the window is moved or resized.
The application doesn’t end after the main method returns; instead, it waits for the JFrame to close.
If you run this code, you should see a black circle on a gray background.
paint
You are probably used to Cartesian coordinates, where x and y values can be positive or negative.
In contrast, Java uses a coordinate system where the origin is in the upper-left corner.
That way, x and y are always positive integers.
Figure B.1 shows these coordinate systems.
Graphical coordinates are measured in pixels; each pixel corresponds to a dot on the screen.
Figure B.1: Diagram of the difference between Cartesian coordinates and Java graphical coordinates.
To draw on the canvas, you invoke methods on a Graphics object.
You don’t have to create the Graphics object; it gets created when you create the Canvas, and it gets passed as an argument to paint.
The previous example used fillOval, which has the following signature:
The four parameters specify a bounding box, which is the rectangle in which the oval is drawn.
x and y specify the the location of the upper-left corner of the bounding box.
The bounding box itself is not drawn (see Figure B.2).
x
y
Figure B.2: Diagram of an oval inside its bounding box.
To choose the color of a shape, invoke setColor on the Graphics object:
setColor
The setColor method determines the color of everything that gets drawn afterward.
Color.red is a constant provided by the Color class; to use it you have to import java.awt.Color.
Other colors include:
Color.red
Color
import java.awt.Color
You can create your own colors by specifying the red, green, and blue (RGB) components.
For example:
Each value is an integer in the range 0 (darkest) to 255 (lightest).
The color (0, 0, 0) is black, and (255, 255, 255) is white.
(0, 0, 0)
(255, 255, 255)
You can set the background color of the Canvas by invoking setBackground:
setBackground.
Rectangle
Here’s a method that takes a Rectangle and invokes fillOval:
And here’s a method that draws Mickey Mouse:
The first line draws the face.
The next three lines create a smaller rectangle for the ears.
We translate the rectangle up and left for the first ear, then to the right for the second ear.
The result is shown in Figure B.3.
translate
Figure B.3: A “Hidden Mickey” drawn using Java graphics.
You can read more about Rectangle and translate in Chapter 10.
See the exercises at the end of this appendix for more example drawings.
The code for this chapter is in the ap02 directory of ThinkJavaCode.
See page ?? for instructions on how to download the repository.
Before you start the exercises, we recommend that you compile and run the examples.
The result should look like “Mickey Moose”, shown in Figure B.4.
Hint: You should only have to add or modify a few lines of code.
Figure B.4: A recursive shape we call “Mickey Moose”.
Figure B.5: Graphical patterns that can exhibit Moiré interference.
radial
Think DSP
Think Java
Think Bayes
Think Python 2e
Think Stats 2e
Think Complexity | https://greenteapress.com/thinkjava6/html/thinkjava6017.html | CC-MAIN-2021-39 | refinedweb | 771 | 65.52 |
I am trying to make motorcycle game and I want to have suspension of the wheels. In 3D there is wheels collider, but how to get wheels collider on RigidBody2D. I have tried combining joints to achieve this but with no result. I will be glad if someone give me some suggestions how to make suspension with the 2D rigidbody and tools.
I manage to get some kind of suspension with spring joint, but now the wheels are jiggering all the time and by adding hinge joint to empty in the center of the wheel I manage to make it move
The script for moving: using UnityEngine; using System.Collections;
public class Controller : $$anonymous$$onoBehaviour {
// Use this for initialization
void Start () {
}
// Update is called once per frame
void FixedUpdate () {
float move = Input.GetAxis("Horizontal");
Debug.Log(move);
GameObject wheel1 = GameObject.Find ("wheel1");
GameObject wheel2 = GameObject.Find ("wheel2");
HingeJoint2D joint1 = wheel1.GetComponent<HingeJoint2D>();
HingeJoint2D joint2 = wheel2.GetComponent<HingeJoint2D>();
Joint$$anonymous$$otor2D motor1 = joint1.motor;
Joint$$anonymous$$otor2D motor2 = joint2.motor;
motor1.motorSpeed = move * 5000;
motor2.motorSpeed = move * 5000;
joint1.motor = motor1;
joint2.motor = motor2;
}
}
But again is so unrealistic any suggestions
Answer by RightHandedMonkey
·
Jun 06, 2014 at 08:15 PM
Try the new Wheel Joint 2D for objects in Unity 4.5.
If you know of a good tutorial on this as well I would appreciate information on it since I am trying to understand this topic myself.
Answer by ZANTcr
·
Aug 05, 2014 at 01:30 AM
Hi, like the first answer posted I recomend use the new WheelJoint2D of Unity 4.5. I did a 2D Vehicle Control for the asset store, in my script the WheelJoint2D are created automatic, but some tips for setup your vehicle with this new component are:
Make your wheels child of the carbody.
The carbody and the wheels need a rigidbody2d.
In your carbody add 2 WheelJoint2D.
In the WheelJoint2D drag and drop you wheel to connected rigidbody option.
Adjust the anchor option of the WheelJoint2D.
Your wheel object dont need WheelJoint2D, just your carbody.
Hey! thanks, u simply explained how to connect wheels with car body and joints etc...
But can u plz tell me, If I want to use motor of RearWheel and FrontWheel, then how would I Script it to get both wheels and apply motor force, suspension etc... to feel and see the car movement realistic.
For instance, you plz see the HillClimb Racing. HillClimb Racing.
Connecting SpringJoint2D to Rotating Rigidbody2D With Correct Anchor
0
Answers
Joints - is bounciness working when the spring value is set?
0
Answers
Snake / Sping behaviour improvements
0
Answers
reading forces applied to a rigidbody
1
Answer
Is spring joint possible with configurable joint?
0
Answers
EnterpriseSocial Q&A | https://answers.unity.com/questions/620487/how-to-get-suspension-or-wheels-collider-with-the.html | CC-MAIN-2022-05 | refinedweb | 456 | 65.93 |
Hi,
What's the 'supported' state of the Subversion Perl bindings? I sent the
message below to the list a couple of weeks ago, and heard nothing back. A
related message a few months ago went unanswered as well.
I know -- open source project, you get what you pay for, we're all
volunteers, etc -- and that's absolutely fine. I just want to make sure
that my expectations are correct.
And if no one's doing much with it at the moment I might try and dig in a
bit myself...
N
> Hi all,
>
> I'm experimenting with the SVN::Client API with Perl.
>
> One of the things I need to do is, given a repository path and a
> revision, find the youngest revision in which the path was modified that
> is less than or equal to the given revision.
>
> For example, given /trunk/foo, changed in revs 1, 4, 7, 12, the
> following would all hold true:
>
> irev('/trunk/foo', 3) == 1;
> irev('/trunk/foo', 4) == 4;
> irev('/trunk/foo', 11) == 7;
>
> I know how to do this using SVN::Repos:
>
> # Get the revision root, step back one hop in the node history
> # (which might be the same rev if it's interesting), and return
> # the revision at that hop.
> #
> # $repo = SVN::Repos object
> sub irev {
> my($path, $rev) = @_;
>
> my $fs = $repo->fs();
> my $root = $fs->revision_root($rev);
> my $hist = $root->node_history($path);
> $hist = $hist->prev(0);
>
> return ($hist->location())[1];
> }
>
> and using SVN::Ra:
>
> # Get the logs for this path, starting at $rev. Set limit to
> # 1 so that the callback is only called once. In the callback
> # assign the log's revision number to the variable we're going
> # to return.
> #
> # $ra = SVN::Ra object
> sub irev {
> my($path, $rev) = @_
>
> $ra->get_log([$path], $rev, 1, 1, 0, 1,
> sub { $rev = $_[1] });
>
> return $rev;
> }
>
> I can't see how it would work with SVN::Client. I can't use
> SVN::Client::get_logs(), as that doesn't have a $limit parameter.
>
> I've tried using SVN::Client::ls() and calling the created_rev() method
> of the dirent that's returned. However, this has the same problem that
> I discussed in:
>
>
>
> Specifically, the created_rev() for a node that's been copied is
> normally the revision in which the source of the copy was last changed,
> not the revision in which the copy happened.
>
> Is this possible?
>
> Jul 12 22:01:28 2006
This is an archived mail posted to the Subversion Dev
mailing list. | https://svn.haxx.se/dev/archive-2006-07/0458.shtml | CC-MAIN-2019-13 | refinedweb | 417 | 67.49 |
Type one word in google “Hadoop” and you will get numerous result. It is very difficult to decide what to read first, how to go with it. Hadoop has 3 main paths to go.
- – Data Scientists
- – Developers/Analysts
- – Administration
To go any of these path you have to know few basic things like what Hadoop is, why Hadoop is and how is Hadoop. Here I have tried to capture very basic few things which will escort you to start with Hadoop.
What is Hadoop?
Apache Hadoop is an open source software framework written in JAVA for distributed storage and distributed processing of very large data set on computer cluster build on commodity hardware.
Why Hadoop?
Challenge: Data is too big store in one computer
Hadoop Solution: Data is stored in multiple computer.
Challenge: Very high end machines are expensive
Hadoop solution: Run on commodity hardware
Challenge: commodity hardware will fail.
Hadoop Solution: Software is intelligent enough to deal with hardware failure.
Challenge: hardware failure may lead to data loss
Hadoop Solution: replicate (duplicate) data
Challenge: how will the distributed nodes co-ordinate among themselves
Hadoop solution: There is a master node that co-ordinates all the worker nodes
3 V attributes that are used to describe the big data problem.
— Volume: Volume reflects the large amount of data that needs to be processed. As the various data sets are stacked together the amount of data increases.
— Variety: Varity reflects different sources of data. It can vary from webserver logs to structured data from databases to unstructured data from social media.
— Velocity: Velocity reflects the amount of data which keeps on accumulating with time.
What is the difference between Hadoop and RDBMS?
Hadoop Ecosystem:
Hadoop.
Pig:
Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. At the present time, Pig’s infrastructure layer consists of a compiler that produces sequences of Map-Reduce programs.
Pig’s language layer currently consists of a textual language called Pig Latin, which is easy to use, optimized, and extensible. Pig was originally [3] developed at Yahoo Research around 2006.
Hive:
Hive is a data warehouse system for Hadoop that facilitates easy data summarization, ad-hoc queries, and the analysis of large datasets stored in Hadoop-compatible file systems. It provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL.
Both Pig and hive are on map reduce layer. The code written on Pig and hive gets converted into map reduce job and then run on hdfs.
HBase:
HBase (Hadoop DataBase) is a distributed, column oriented database. HBase uses HDFS for the underlying storage. It supports both batch style computations using MapReduce and point queries (random reads).
The main components of HBase are as below:
– HBase Master is responsible for negotiating load balancing across all Region Servers and maintain the state of the cluster. It is not part of the actual data storage or retrieval path.
– RegionServer is deployed on each machine and hosts data and processes I/O requests.
Apache Zookeeper:
ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services which are very useful for a variety of distributed systems. HBase is not operational without ZooKeeper.
Apache Oozie:
Apache Oozie is a workflow/coordination system to manage Hadoop jobs.
Apache Sqoop:
Apache Sqoop is a tool designed for efficiently transferring bulk data between Hadoop and structured datastores such as relational databases.
Flume:
Apache Flume is a service for collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows.
Flume is used to inject the data into hadoop system.
It has three entity. Source, channel and sink.
Source is the entity through which data enters into Flume.
Sink is the entity that delivers the data to the destination.
Sources ingest events into the channel and the sinks drain the channel.
In order to facilitate the data movement into or out of Hadoop sqoop/flume is used.
Hue:
Hue provides a Web application interface for Apache Hadoop. It supports a file browser, Hive, Pig, Oozie, HBase, and more.
In order to facilitate the data movement into or out of Hadoop sqoop/flume is used.
HDFS – Hadoop distributed file system:
Master / Worker Design:
In HDFS design there are one master node and many worker node.
Master node named as name node (NN) and worker node named as data node (DN).
Master node keeps all the Meta data information about the HDFS and information about the data node. Master node is in charge of file system (like creating files, user permission etc). Without it cluster node will be inoperable.
Data node is the slave / worker node holds the user data in the form of data blocks. There can be any number of data node in the Hadoop cluster.
DATA block:
A data block can be considered as the standard unit of data or files stored in HDFS.
Each incoming files are broken into 64 MB by default.(Currently the size has changed to 128 MB)
Any larger than 64MB is broken down in to 64 MB blocks.
All the blocks which make up a particular file are of the same size (64MB) except for the last block which might be lesser that 64 MB depending upon the size of the file.
Runs on commodity hardware:
As we saw Hadoop doesn’t need fancy high end hardware. Hadoop runs on commodity hardware. Hadoop stack is built to deal with hardware failure. So if one node fails still file system will continue to run. Hadoop accomplish this by duplicating the data in across the nodes.
Data is replicated:
So how does Hadoop keep the data safe or resilient? Simple, it keeps the multiple copies of the data across the nodes.
Example: Data segment #2 is replicated 3 times, on a data node A, B and D. Let say if data node A fails still data will be accessible from data node B and D.
Data:
Data is replicated across different nodes in the cluster to ensure reliability/fault tolerance. Replication of data blocks is done based on the location of the data node, so as to ensure high degree of fault tolerance.
For instance, one or two copies of the data block stored on the same rack and one copy is stored on a different rack in same data center and another copy is stored on a rack in different data center and so on.
HDFS is better suited for large data:
Generic file system like Linux EXT file system will store files of varying sizes, from few bytes to few giga bytes. HDFS is however designed to large files. Large as in a few hundred megabytes to a few gigabytes. HDFS was built to work with mechanical disk drives, whose capacity has gone up in recent years. However, seek times haven’t improved all that much. So Hadoop tries to minimize disk seeks.
Files are write once only (Not updatable):
HDFS supports writing files once (they cannot be updated). This is a stark difference between HDFS and a generic file system (like a Linux file system). Generic file systems allows files to be modified. However appending to a file is supported.
HDFS Architecture:
- Name Node servers as the master and each Data Node servers as a worker/slave.
- Name Node and each Data Node have built-in web servers.
- Name Node is the heart of HDFS and is responsible for various tasks including – it holds the file system namespace, controls access to file system by the clients, keeps track of the Data Nodes, keeps track of replication factor and ensures that it is always maintained.
- User data is stored on the local file system of Data Nodes. Data Node is not aware of the files to which the blocks stored on it belong to. As a result of this, if the Name Node goes down then the data in HDFS is non-usable as only the Name Node knows which blocks belong to which file, where each block located etc.
- Name Node can talk to all the live Data Nodes and the live Data Nodes can talk to each other.
- There is also a Secondary Name Node which comes in handy when the Primary Name Node goes down. Secondary Name Node can be brought up to bring the cluster online. This process of switching of nodes needs to be done manually and there is no automatic failover mechanism in place.
- Name Node receives heartbeat signals and a Block Report periodically from each of the Data Nodes.
- Heartbeat signals from a Data Node indicates that the corresponding Data Node is alive and is working fine. If a heartbeat signal is not received from a Data Node then that Data Node is marked as dead and no further I/O requests are sent to that Data Node.
- Block Report from each Data Node contains a list of all the blocks that are stored on that Data Node.
- Heartbeat signals and Block Reports received from each Data Node help the Name Node to identify any potential loss of Blocks/Data and to replicate the Blocks to other active Data Nodes in the event when one or more Data Nodes holding the data blocks go down.
Here are few general highlights about HDFS:
- HDFS is implemented in Java and any computer which can run Java can host a Name Node/Data NodeS:
There Name Node and is held in memory.
- HDFS is not suitable for scenarios requiring multiple/simultaneous writes to the same file.
So here are the basics of hadoop which will help you to start thinking in hadoop arena. If you want to know more about how sap is going with hadoop. You can follow the below links.
SAP Embraces Hadoop in The Enterprise
Fitting Hadoop in an SAP Software Landscape – Part 3 ASUG Webcast
References:
Its Very insightful and informative , good work Soma Mukherjee 🙂
Good Read!! (Y)
Good one... ℹ
Hi Soma,
Good one, any link for installation if i want to start from my side 🙂 .
Thanks & Regards
Mukesh Otwani
Hi Mukesh,
Thank you.
You can install the sandbox system and start working on hadoop.
Please find the links below,
Download Virtual box:
Download the sandbox for virtual box:
Cheers,
Soma
thnx 😎
Hi Soma,
Good Wirte-up .... Informative.
Regards
Nithya
Good Overview... well done...do keep blogging.
Nice post! Thank you for sharing this information.
BR
Venkat...
Again a very nice & a explanatory blog to help understand the growing need for Big Data & Hadoop. Thanks Soma. one Question though how should i start working on it.
Hi Priyesh,
Thank you.
You can install the sandbox system and start working on hadoop.
Please find the links below,
Download Virtual box:
Download the sandbox for virtual box:
Cheers,
Soma | https://blogs.sap.com/2015/08/06/hadoop-overview/ | CC-MAIN-2021-17 | refinedweb | 1,824 | 64.1 |
Contents
WebKit in Qt
The QtWebKit module provides a web browser engine as well as classes to render and interact with web content. More...
Classes
Detailed Description. For detailed documentation see The QtWebkit Bridge. has been enhanced to become more attractive on the mobile front as well. For more information see QtWebKit Goes Mobile.
QtWebKit is based on the Open Source WebKit engine. More information about WebKit itself can be found on the WebKit Open Source Project Web site.
Including In Your Project
To include the definitions of the module's classes, use the following directive:
#include <QtWebKit>
To link against the module, add this line to your qmake .pro file:
QT += webkit
Notes/Source.
Architecture:
License Information
This is a snapshot of the Qt port of WebKit. The exact version information can be found in.
Votes: 2
Coverage: Qt library 4.8, 4.7
Ant Farmer
35 notes
QTWEBKIT_PLUGIN_PATH is used on all platforms
According to the sources [qt.gitorious.org], the environment variable QTWEBKIT_PLUGIN_PATH is evaluated on every platform to extend the search paths for the plugins. The docs are incorrect in this regard, stating that this variable is only used on Linux/X11 platforms.
[Revisions] | http://qt-project.org/doc/qt-4.8/QtWebKit.html | CC-MAIN-2015-11 | refinedweb | 198 | 58.58 |
There. Decision-making statements available in pl/SQL are:
- if then statement
- if then else statements
- nested if-then statements
- if-then-elsif-then-else ladder
- if then statement
if then statement is the most simple decision-making statement. It is used to decide whether a certain statement or block of statements will be executed or not i.e if a certain condition is true then a block of statement is executed otherwise not.
Syntax:
if condition then -- do something end if;
Here, condition after evaluation will be either true or false. if statement accepts boolean values – if the value is true then it will execute the block of statements below it otherwise not.
if and endif consider as a block here.
Example:-
As the condition present in the if statement is false. So, the block below the if statement is not executed.
Output:
I am Not in if
- if – then-.
Syntex:-
if (condition) then -- Executes this block if -- condition is true else -- Executes this block if -- condition is false
Example:-
Output:-
i'm in if Block i'm not in if and not in else Block
The block of code following the else statement is executed as the condition present in the if statement is false after calling the statement which is not in block(without spaces).
- nested-if-then:
A nested if-then is an if statement that is the target of another if statement. Nested if-then statements mean an if statement inside another if statement. Yes, pl/sql allows us to nest if statements within if-then statements. i.e, we can place an if then statement inside another if then statement.
Syntex:-
if (condition1) then -- Executes when condition1 is true if (condition2) then -- Executes when condition2 is true end if; end if;
Output:-
num1 small num2 num1 small num3 also after end if
- if-then-elsif-then-else ladder
Here, a user can decide among multiple options. The if then.
Syntex:-
if (condition) then --statement elsif (condition) then --statement . . else --statement endif
Flow Chart:-
Example:-
Output:-
num1 small after end if | https://tutorialspoint.dev/language/sql/decision-making-plsql-else-nested-elsif-else | CC-MAIN-2022-05 | refinedweb | 346 | 61.77 |
On Wed, Jun 07, 2006 at 06:05:23AM -0400, Geoffrey Alan Washburn wrote:
> Anselm R. Garbe wrote:
> >Ah, now we come closer to the problem. wmii treats applications
> >which windows request the size of the display as fullscreen
> >apps, which are always floating. I think we will check, if a
> >window which is going to be managed by wmii is larger than the
> >screen dimension, and resizing it if necessary (this only
> >happens for floating windows).
> >
> Okay, but that doesn't tell me what I should do about it. I've
> tried writing to /client/n/geom but that doesn't seem to have any effect.
You can't write to a geom file in a global namespace, because it
is not clear which tag is affected. A client can be visible in
different views, thus a client has different geometries at a
time. In th /client/ namespace it is indecidable which client is
effected, that's why you need to write to
/view/m/n/geom or /tag/m/n/geom (wmii-3)
/tag/<tag>/n/p/geom (wmii-4)
instead.
Regards,
-- Anselm R. Garbe ><>< ><>< GPG key: 0D73F361Received on Wed Jun 07 2006 - 14:43:49 UTC
This archive was generated by hypermail 2.2.0 : Sun Jul 13 2008 - 16:08:24 UTC | http://lists.suckless.org/wmii/0606/2128.html | CC-MAIN-2019-26 | refinedweb | 216 | 72.36 |
by Michael S. Kaplan, published on 2010/04/07 07:01 -04:00, original URI:
Do you play the odds?
If you are a developer, the odds are that all things being equal you are not nearly as smart now before you read this blog as you will be once you have read it....
Back in the middle of February I mentioned in The real problem(s) with all of these console "fallback" discussions that, of the many people talking about the console these days, most of them are wrong.
Solving problems that don't exist, incorrectly impacting problems that do exist, and just generally making the situation worse overall....
But I didn't really finish the work there; the blog was merely armchair criticisms of bugs, design flaws, mistaken assumptions spoken as fact, documentation problems, etc.
100% accurate, but not described in a way that can help you move to the next step (getting it done right, in either native or managed code).
Today's blog is going to change all that. :-)
All of this and much more will be covered in the upcoming training on the World-Ready Console, if you are on the Windows team....
After showing how the console could be 100% Unicode, which I did in March of 2008 after STL showed me, as I talked about in Conventional wisdom is retarded, aka What the @#%&* is _O_U16TEXT?, there is one piece of the puzzle still missing.
I mean it is all well and good to show it in just a few lines of native code using the CRT.
But the truth is that this problem exists in managed code too (some of which actually uses the CRT, and Win32), and also in native code that has no heavy CRT dependency or doesn't take one on.
Behind the scenes, the CRT is doing all the right work in those circumstances to e.g. call WriteConsoleW or WriteFile (depending on whether the console's output streams are redirected or not).
So anyone trying to do the same thing in native Win32 would have to do that same work.
And although the CRT and .NET are both being developed in the same division of Microsoft, and .Net has its own internal CRT dependencies (it depends on .Net's version even when it ships with the OS), the managed Console class is not using this CRT functionality. And they are not doing it the hard way themselves, either.
Now calling the CRT from VB.Net or C# (or other non-C++ languages) has some interesting challenges that I am not going to get into here (if someone wants to go that way they can). I thought instead I'd just give you the code really quick so you can do it in whatever language, without the version or CRT dependencies.
Now this is C# code, this WriteLineRight sample function.
But it is pretty much Win32 code written in C#. So Win32 developers should have no trouble grokking it or what it is doing:
using System;
using System.Runtime.InteropServices;

public class Test {
    public static void Main() {
        string st = "\u0627\u0628\u0629 \u043a\u043e\u0448\u043a\u0430 \u65e5\u672c\u56fd\n\n";
        WriteLineRight(st);
    }

    internal static bool IsConsoleFontTrueType(IntPtr std) {
        CONSOLE_FONT_INFO_EX cfie = new CONSOLE_FONT_INFO_EX();
        cfie.cbSize = (uint)Marshal.SizeOf(cfie);
        if(GetCurrentConsoleFontEx(std, false, ref cfie)) {
            return(((cfie.FontFamily & TMPF_TRUETYPE) == TMPF_TRUETYPE));
        }
        return false;
    }

    public static void WriteLineRight(string st) {
        IntPtr stdout = GetStdHandle(STD_OUTPUT_HANDLE);
        if(stdout != INVALID_HANDLE_VALUE) {
            uint filetype = GetFileType(stdout);
            if(! ((filetype == FILE_TYPE_UNKNOWN) && (Marshal.GetLastWin32Error() != ERROR_SUCCESS))) {
                bool fConsole;
                uint mode;
                uint written;

                filetype &= ~(FILE_TYPE_REMOTE);

                if (filetype == FILE_TYPE_CHAR) {
                    bool retval = GetConsoleMode(stdout, out mode);
                    if ((retval == false) && (Marshal.GetLastWin32Error() == ERROR_INVALID_HANDLE)) {
                        fConsole = false;
                    } else {
                        fConsole = true;
                    }
                } else {
                    fConsole = false;
                }

                if (fConsole) {
                    if (IsConsoleFontTrueType(stdout)) {
                        WriteConsoleW(stdout, st, st.Length, out written, IntPtr.Zero);
                    } else {
                        //
                        // Not a TrueType font, so the output may have trouble here
                        // Need to check the codepage settings
                        //
                        // TODO: Add the old style GetConsoleFallbackUICulture code here!!!
                    }
                } else {
                    //
                    // Write out a Unicode BOM to ensure proper processing by text readers
                    //
                    WriteFile(stdout, BOM, 2, out written, IntPtr.Zero);
                    WriteFile(stdout, st, st.Length * 2, out written, IntPtr.Zero);
                }
            }
        }
    }

    [DllImport("kernel32.dll", CharSet=CharSet.Unicode, ExactSpelling=true)]
    internal static extern bool WriteConsoleW(IntPtr hConsoleOutput,
                                              string lpBuffer,
                                              int nNumberOfCharsToWrite,
                                              out uint lpNumberOfCharsWritten,
                                              IntPtr lpReserved);

    [DllImport("kernel32.dll", CharSet=CharSet.Unicode, ExactSpelling=true)]
    internal static extern bool WriteFile(IntPtr hFile,
                                          string lpBuffer,
                                          int nNumberOfBytesToWrite,
                                          out uint lpNumberOfBytesWritten,
                                          IntPtr lpOverlapped);

    [DllImport("kernel32.dll", ExactSpelling=true, SetLastError=true)]
    internal static extern bool GetConsoleMode(IntPtr hConsoleHandle, out uint lpMode);

    [DllImport("kernel32.dll", ExactSpelling=true)]
    internal static extern bool GetCurrentConsoleFontEx(IntPtr hConsoleOutput,
                                                        bool bMaximumWindow,
                                                        ref CONSOLE_FONT_INFO_EX lpConsoleCurrentFontEx);

    [DllImport("Kernel32.DLL", ExactSpelling=true, SetLastError=true)]
    internal static extern uint GetFileType(IntPtr hFile);

    [DllImport("Kernel32.DLL", ExactSpelling=true)]
    internal static extern IntPtr GetStdHandle(int nStdHandle);

    internal struct COORD {
        internal short X;
        internal short Y;

        internal COORD(short x, short y) {
            X = x;
            Y = y;
        }
    }

    [StructLayout(LayoutKind.Sequential)]
    internal unsafe struct CONSOLE_FONT_INFO_EX {
        internal uint cbSize;
        internal uint nFont;
        internal COORD dwFontSize;
        internal int FontFamily;
        internal int FontWeight;
        fixed char FaceName[LF_FACESIZE];
    }

    internal const int TMPF_TRUETYPE = 0x4;
    internal const int LF_FACESIZE = 32;
    internal const string BOM = "\uFEFF";
    internal const int STD_OUTPUT_HANDLE = -11;    // Handle to the standard output device.
    internal const int ERROR_INVALID_HANDLE = 6;
    internal const int ERROR_SUCCESS = 0;
    internal const uint FILE_TYPE_UNKNOWN = 0x0000;
    internal const uint FILE_TYPE_DISK = 0x0001;
    internal const uint FILE_TYPE_CHAR = 0x0002;
    internal const uint FILE_TYPE_PIPE = 0x0003;
    internal const uint FILE_TYPE_REMOTE = 0x8000;
    internal static IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);
}
And there you go!
A few things to note here:
I know the last point because I used to say that, when I was not as smart as I am now.
In fact, as I said way back in the beginning, the odds are in favor of the fact that you yourself were not nearly as smart before you read this blog as you are now that you have read it! :-)
And now if you will excuse me, I have to start conversations with the gazillion console applications in Windows that routinely punt bugs in console apps talking about their lack of Unicode support....
Craig Peterson on 7 Apr 2010 1:14 PM:
Is the 'W' at the end of ReadFileW and WriteFileW a typo? I can't find anything about it online or in the SDK headers.
In any case, thanks for the post. I feel smarter already!
Michael S. Kaplan on 7 Apr 2010 1:17 PM:
Those are the Unicode versions of the functions I link to -- but you want to call the Unicode ones, whether by compiling with UNICODE or by calling the "W" versions explicitly (the sample does the latter).
Random832 on 7 Apr 2010 2:52 PM:
If you were to select a font which has the Arabic or CJK characters in it, will it appear correctly? I already notice that the [double-width] CJK characters take up only a single column each. So much for no complex scripts or CJK in the console, indeed.
Brendan Elliott on 7 Apr 2010 3:01 PM:
Thank you so much for the console font trick! I now have a way to read Japanese console text on a Japanese machine with the system locale set to English (for compapiblity reasons). All my years of studying Japanese hadn't increased my ability to read a series of ASCII question marks, so a true type font plus copy & paste is a very useful workaround to know.
Michael S. Kaplan on 7 Apr 2010 6:15 PM:
@Random832 - if such a font is available (generally they aren't unless your system locale matches). But the redirect case works fine and the copy/paste works as well.....
@Brendan Elliott: Great! Glad to assist. :-)
Mike Dimmick on 8 Apr 2010 6:57 AM:
Craig, Mike: there's no ReadFileW or WriteFileW because the functions operate on binary data - therefore, not safe to convert anything. The documentation does not include the "Unicode and ANSI names" section for that reason. There is only ReadFile and WriteFile.
Mostly functions that have A and W versions have string parameters, or structure parameters (or pointer-to-structure) where the structure contains one or more string parameters.
ReadConsole and WriteConsole have A/W variants as they deal with string parameters even though the parameters are declared as VOID*. I'm not actually sure why this is, perhaps because the strings are not required to be null-terminated.
Michael S. Kaplan on 8 Apr 2010 8:00 AM:
And the weird thing is that I knew that (note the WriteFile p/invoke above!). :-)
Igor Tandetnik on 8 Apr 2010 10:14 AM:
This line
if(! (filetype == FILE_TYPE_UNKNOWN) && (Marshal.GetLastWin32Error() != ERROR_SUCCESS)) {
doesn't look right. Perhaps the closing paren after UNKNOWN and an opening one before Marshal shouldn't be there. Personally, I'd write
if(filetype != FILE_TYPE_UNKNOWN ||
Marshal.GetLastWin32Error() == ERROR_SUCCESS) {
Michael S. Kaplan on 8 Apr 2010 12:15 PM:
Actually, the check is kind of right, believe it or not -- it is attempting to catch the case where it is unknown yet succeeded. Weird code behavior trying to key off weird function results....
Pavanaja U B on 8 Apr 2010 9:24 PM:
You (I mean MS) are still putting me down by not allowing complex scripts (opentype fonts) in console.
-Pavanaja
Michael S. Kaplan on 8 Apr 2010 11:38 PM:
Well, putting down is a relative term....
Seth on 9 Apr 2010 11:02 AM:
Am I missing something? In the line
if(! (filetype == FILE_TYPE_UNKNOWN) && (Marshal.GetLastWin32Error() != ERROR_SUCCESS)) {
say filetype is FILE_TYPE_CHAR, then "filetype == FILE_TYPE_UNKNOWN" evaluates to false, and so "! (filetype == FILE_TYPE_UNKNOWN)" evaluates to true. Since there wasn't an error GetLastError() returns ERROR_SUCCESS and GetLastError() != ERROR_SUCCESS evaluates to false, and the whole expression evalutes to false and the function exits without writing anything?
It seems like the condition you want to return on is if the type is unknown because there was an error. So if it's unknown but there is no error then you still continue on with the write. I think that's the same as just making sure there's no error, so should that check be replaced with the following?
if(GetLastError() != ERROR_SUCCESS) {
return;
}
Also, why do we need to check that filetype is FILE_TYPE_CHAR? Isn't it enough to just check that out is a console (are all consoles FILE_TYPE_CHAR?), and does the following do that?
bool fConsole = (GetConsoleMode(out,&mode) || (GetLastError() != ERROR_INVALID_HANDLE));
So could it be right to do the following?
void WriteLineRight(std::string const &s) {
//...
HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
if(out == INVALID_HANDLE_VALUE) {
return;
}
// we don't directly check the filetype of output handle, we only check if it's a console
DWORD mode;
bool fConsole = (GetConsoleMode(out,&mode) || (GetLastError() != ERROR_INVALID_HANDLE));
if(fConsole) {
//... don't care about non-true-type consoles
//... convert to wchar here
WriteConsoleW(...)
} else {
WriteFile(out,&s[0],s.size(),&written,NULL);
}
}
...
Michael S. Kaplan on 9 Apr 2010 1:22 PM:
In my opinion, you are missing something, yes. :-)
If you look at the docs for GetFileType, it is clear that:.
This sample code is distinguishing the two cases.
Seth on 9 Apr 2010 2:11 PM:
Okay, was something wrong with my analysis of the expression? When I ran the sample code it seemed to confirm my analysis by skipping over printing when my own version does do the printing.
My understanding of the requirements is that there are three possible cases:
1. file type is not unknown. therefore we know the call succeeded
2. file type is unknown, but the call succeeded
3. file type is unknown and the call failed
In case one we want to continue on with printing. In case two we also want to continue on with printing. In case three there was an error, and we cannot continue with printing and must return. This reduces down to just checking for success of the call, and checking if the type is unknown or not is unneeded.
However the sample code seems to only cause printing in a fourth, impossible case: when filetype is not unknown, but the call failed.
I think the expression contains a typo. ! has higher precedence than && right? so ! applies only to the left side, not the whole expression. If the ! were instead applied to the entire expression then it looks like it would be correct to me.
Seth on 9 Apr 2010 2:16 PM:
Also I'm still curious about checking for FILE_TYPE_CHAR specifically. Can consoles be anything else? Isn't a successful call to GetConsoleMode enough to distinguish between when we need to do special things for console output vs. when we need to use WriteFile to write to the file that output's being redirected to?
referenced by
2010/09/23 A confluence of circumstances leaves a stone unturned...
2010/06/27 Bugs hidden in plain sight, and commented that way too ANSWERS
2010/06/18 Bugs hidden in plain sight, and commented that way too
2010/05/07 Cunningly conquering communicated console caveats. Comprende, mon Capitán?
go to newer or older post, or back to index or month or day | http://archives.miloush.net/michkap/archive/2010/04/07/9989346.html | CC-MAIN-2017-17 | refinedweb | 2,181 | 55.44 |
IOS UIWebView in QML
I suspect I am trying to do something that is not possible. However I am hopeful I am wrong.
I know that the QML WebView is not supported on iOS so I have created a QML plugin using the native UIWebView control.
This part is easy. It just replaces the main View Controller with a new one that has the UIWebView applied.
The issue is that, because it replaces the main view controller, it takes over the entire app.
I was hoping that I could either send it size and positioning properties, then overlay it over the top of the main view. However it seems that I end up with a black background over the top of the QML elements, then on top of the black area I have the resized UIWebView.
I suspect this is because the entire root web view has been swapped over, so there is really actually nothing 'behind it'.
Ideally, I would like to draw the view to the area to the defined on the QML object, but I don't think this is possible.
I have included all the code below, but I think the issue is with this section in how I apply it to the view.
@];
@
Does anyone know if what I am after is possible, or any alternatives?
@
#include <UIKit/UIKit.h>
#include <QtGui/5.2.0/QtGui/qpa/qplatformnativeinterface.h>
#include <QtGui>
#include <QtQuick>
#include "ViewController.h"
@interface WebViewController : UIViewController {
}
@end
@implementation WebViewController
- (void)viewDidLoad
{
[super viewDidLoad];
//self.view.frame = CGRectMake(0,0,100,100);
UIWebView *webView = [[UIWebView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:webView];
NSString *urlAddress = @"";
NSURL *url = [NSURL URLWithString:urlAddress];
NSURLRequest *requestObj = [NSURLRequest requestWithURL:url];
[webView loadRequest:requestObj];
}
@end
IOSWebView::IOSWebView(QQuickItem *parent) :
QQuickItem(parent)
{
}
void IOSWebView::open()
{];
}
@
ViewController.h
@
#ifndef VIEWCONTROLLER_H
#define VIEWCONTROLLER_H
#endif // VIEWCONTROLLER_H
#include <QQuickItem>
#include<QtQuick>
class IOSWebView : public QQuickItem
{
Q_OBJECT
public:
explicit IOSWebView(QQuickItem *parent = 0);
public slots:
void open();
};
@
Edit
A friend has come up with a method that is much more simple
@
UIView *view = static_cast<UIView *>(
QGuiApplication::platformNativeInterface()
->nativeResourceForWindow("uiview",window()));
int x = 0;
int y = 80;
int width = 300;
int height = 200;
UIWebView* webView =[[UIWebView alloc] initWithFrame:CGRectMake(x,y,width,height)];
[view addSubview:webView];
NSString *urlAddress = @"";
NSURL *url = [NSURL URLWithString:urlAddress];
NSURLRequest *requestObj = [NSURLRequest requestWithURL:url];
[webView loadRequest:requestObj];
@
This appends it to the view rather than creating a new one. There is more needed to get it finished but it seems to work. I am still open to any other ideas or comments people have though.
Hi
Thanks a lot for sharing your work !
I'm beginner with implementing my own QQuickItem and use it on QML side.
I compiled your ViewController and here is what I did to expose the IOSWebView type on QML side:
:///qml/testwebview.qml"))); return app.exec();
}@
testwebview.qml:
@import QtQuick 2.0
import IOSWebView 1.0
Rectangle {
IOSWebView { id:webv Component.onCompleted: { console.log("loaded web view") //webv.open() } }
}@
However the application screen remains black at runtime. Calling webv.open() doesn't change anything.
I probably didn't understand how the IOSWebView is supposed to be used. Could you tell me what I've done wrong please ? author="g00dnight" date="1408446341"]
Thanks a lot for the github g00dnight ! I'll give it a try in a few hours as soon as possible. And I'll edit this post to give a feedback.
Hi again g00dnight and thank you so much and thanks to johnc's original post also. I tried your IOSWebView. First the issue was remaining : IOSWebView wasn't appearing in my the QML Window on my iPad 2.
So I compared with another Qt sample () which exposes the iOS Camera component (UIImagePickerController & ) as a QML Component. I noticed that this sample uses:
@import QtQuick.Window 2.0@
As my IOSWebView project was using:
@import QtQuick.Window 2.1@
So I changed for "QtQuick.Window 2.0" and it works! The root cause of my issue was really about the 2.1 version of QtQuick.Window element. Also my first post on this thread was a try with a Rectangle as the root QML element of my main.qml and this had no chance to work. All this is probably related to the following line in ioswebview.mm:
@UIView pMainView = static_cast<UIView>(QGuiApplication::platformNativeInterface()->nativeResourceForWindow("uiview", (QWindow*)window()));@
So I was doing a wrong usage of the QML IOSWebView although it should probably work with "QtQuick.Window 2.1" and I don't know why it doesn't.
I'm actually using Qt 5.3.1 on OSX 10.8.5.
my main.cpp:
@#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include "ioswebview.h":///main.qml"))); return app.exec();
}@
my main.qml:
@import QtQuick 2.2
import QtQuick.Window 2.0
import IOSWebView 1.0
Window {
visible: true
IOSWebView { id:webview url: "" anchors.fill: parent }
}@
- guatedude2
This is a great solution for the missing WebKit on IOS.
Khelkun can you share the IOSWebView source in Git or similar?
Hi,
There's now the "QtWebView": module that provides this
- guatedude2
[quote author="SGaist" date="1411945295"]Hi,
There's now the "QtWebView": module that provides this[/quote]
Cool thanks!
Hi, I wanted to give QtWebView a try on iOS.
I'm using the precompiled Qt 5.4 beta for iOS. Do you know how could I compile it and make it available for my app?
Thanks!
Robert.
- Vincent007
git clone the repo, and then
qmake
make install
after that you should be able to build the example and deploy it via QtCreator.
Thanks, I did exactly that and I'm getting a compile error:
In file included from qwebview_ios.mm:37:
./qwebview_p.h:51:10: fatal error: 'QtWebView/qwebview_global.h' file not found
#include <QtWebView/qwebview_global.h>
Any ideas?
Thanks,
Robert.
Are you on the dev branch ?
How did you proceed to compile it ? Did you install it ?
[quote author="SGaist" date="1413751591"]Are you on the dev branch ?[/quote]
Yes, that's the only branch in the repo.
[quote author="SGaist" date="1413751591"]How did you proceed to compile it ? Did you install it ?[/quote]
I followed the same steps that Vincent007 suggested:
- clone the git repo
- cd to the repo path
- run qmake
- run make install (got the compile error)
Thanks,
Robert.
Personally I don't build in the sources since you might also want to build for e.g. Android and if something fails it's easier to just delete the faulty build and start from scratch.
Just tested an out of source build and got not problem. However you should rather do:
qmake
# make
# make install
Hi, I finally got it working.
I'm using the last 5.4 beta and I did an in source build. One for each platform (iOS/Android/OSX).
Thanks for your help!
Robert. | https://forum.qt.io/topic/42643/ios-uiwebview-in-qml | CC-MAIN-2017-34 | refinedweb | 1,133 | 59.9 |
Using TCP sockets

Usually when people ask the question in the title they mean "the IP address of the primary network interface". This is not always easy to discover. However, if you happen to have a socket connected to an external host handy, you can use
 lindex [fconfigure $sock -sockname] 0

to get (one of) the IP address(es) of (one of) your network interfaces.
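For example, with a socket already connected to some external host (the host name and port here are just placeholders):

```tcl
# Open a connection to any external host; the local address this socket
# is bound to is the address of the interface used to reach that host.
set sock [socket example.com 80]
puts [lindex [fconfigure $sock -sockname] 0]
close $sock
```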
 proc ip:address {} {
     # find out localhost's IP address
     # courtesy David Gravereaux, Heribert Dahms
     set TheServer [socket -server none -myaddr [info hostname] 0]
     set MyIP [lindex [fconfigure $TheServer -sockname] 0]
     close $TheServer
     return $MyIP
 }

(Note that the above procedure can go wrong in several different ways.) For example, depending on your network, your hostname may be something that has nothing to do with your external IP address. Connecting to an external host is probably the only way to know that that IP is used to connect to that host. When writing server applications you should really be aware that quite a few of them have multiple interfaces for multiple network domains (internet/intranet). This has become quite common: almost all company and home networks use some kind of gateway server with network address translation: one IP for internal traffic, another for external.
An important question you should ask yourself is: do I really care about my IP address? Remember that for most applications you do not need to know your IP address, and in a lot of cases you do, you may be better off asking the user. Most server applications don't need to know their IP address, they simply accept any incoming connection on the specified port. And client applications tend to only need to know the server address, as the server you connect to can immediately see where the connection is coming from (which seems to be the whole point).One reason to care about your own IP address is if you want to build up a string representing a URL to pass to a client. It is easier to know your own IP than to know your own FQDN. And of course, I am talking about host code that does not require configuration; that is, where you want the software to come up usable on a system other than the one on which it was developed...I actually use this:
 proc myIP {} {
     if { [ catch {
         set ip 127.0.0.1
         set sid [ socket -async [ info hostname ] 22 ]
         set ip [ lindex [ fconfigure $sid -sockname ] 0 ]
         ::close $sid
     } err ] } {
         catch { ::close $sid }
         puts stderr "myIP error: '$err' on port 22 (sshd). using 127.0.0.1"
     }
     return $ip
 }

This works for my systems, because if they are not running ssh they are as good as dead anyway. Of course, this does not meet my criterion for general usability.
TV (23 June 2003) This not-so-quick, not-so-dirty hack worked for me for years on various machines, OSes, nets, modems etc.; it didn't really fail, I seem to remember, and isn't too slow. It is taken from pcom with one line removed.

CL notes that the case this does not handle nicely is the common corporate one in which /etc/hosts translates hostname as 127.0.0.1. People often do this with laptops or other machines that move around a network.

TV I wouldn't know; I failed to mention that the majority of tests have been on some 95+ version of the dreaded and widely used Windows OS, though I used Macs at some point, and tested on early Linux. I'll probably be on Red Hat 2.4.20 now to test. Sorry for those who got the idea I included Linux and was sure that would work; I wasn't.

(after a little while) I happen to be on Linux now, on a modern PC-like arch, with upgraded Red Hat 8.0, where the above at least seems to generate a sensible IP address, too. It's on a general-use university net for the moment. Is it possible to have a corporate net where you'd have no Tcl-findable IP address equivalent? Maybe you'd just get your proxy's or gateway's address? Maybe when no TCP/IP is used?

CL has been one of those who most frequently questions, "Do you really want that?" As of mid-2003, I'm more sympathetic to those who've answered, "yes". Several of the methods above are prone to reporting 127.0.0.1 or other undesired answers. The real platform-independent answer is that the address can only be truly known once it's an endpoint of a TCP/IP conversation. At that level, therefore, the most reliable answer is to run
 # This is reporter.tcl.
 socket -server accept 6584
 proc accept {handle origination other} {
     set message "Origination address is $origination."
     puts $message
     puts $handle $message
     close $handle
 }
 vwait forever

on $TARGET machine, then, on the host whose IP one wants,
 telnet $TARGET 6584

or equivalent.
MS: another simple-minded option, if you are connected to the internet
proc getIp {{target} {port 80}} { set s [socket $target $port] set res [fconfigure $s -sockname] close $s lindex $res 0 }RLE (2015-04-11): The above statement should say: "if you are connected to the internet with a route-able IP address. For anyone behind a NAT gateway, this gives the address of the local Ethernet interface, not the globally route-able IP.Note, this is what the machine I'm typing this on gives:
 $ rlwrap tclsh
 proc getIp {{target} {port 80}} {
     set s [socket $target $port]
     set res [fconfigure $s -sockname]
     close $s
     lindex $res 0
 }
 % getIp
 10.0.0.28
 %

If the IP of interest is actually the local machine's IP, it works perfectly. But more often the IP address desired is the internet-visible IP, not the local machine's IP, and those two are not always the same value.
Peter Hansen points out we haven't even started with the complexities that SNAT/DNAT, firewalls, and the like might introduce.
2005/06/10 sbron: As mentioned before, there is no such thing as "my IP address". Usually you want to know which IP address will be used when contacting a certain host. The next piece of code actually seems to work on Linux even if the remote port is not accessible or, more amazingly, if the remote host doesn't exist.
 set anyport 5555  ;# Any random port number
 set sock [socket -async $host $anyport]
 puts [fconfigure $sock -sockname]
 close $sock

Again, this is not a general solution because it doesn't work on Solaris (which of course is where I needed it!). For a non-existing host or port it just returns 0.0.0.0 on Solaris.

The search continues ...
DNS
Using DNS root servers

A modification to find the publicly addressable IP of your machine:
 proc ip:external {} {
     # this uses the TCP DNS port of the DNS root-servers.
     # If these aren't reachable, you probably don't
     # have a working external internet connection anyway.
     set MyIP ""
     foreach a {a b c d e f g h i j k} {
         catch {
             set external [socket $a.root-servers.net 53]
             set MyIP [lindex [fconfigure $external -sockname] 0]
             close $external
         }
         if { ![string equal $MyIP ""] } { break }
     }
     return $MyIP
 }

Depending on your setup, this procedure may take up to six minutes to complete (DNS timeout and route timeout for eleven servers). For me it typically takes a fraction of a second.
Using special-purpose DNS servers

See Public IP.
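One well-known special-purpose service is OpenDNS's myip.opendns.com, which answers with the querier's public address. A sketch using the Tcllib dns package (the resolver host name and the service's continued availability are assumptions):

```tcl
package require dns

# Ask a special-purpose DNS server what address it sees us coming from.
set tok [dns::resolve myip.opendns.com -nameserver resolver1.opendns.com]
dns::wait $tok               ;# block until the reply arrives
puts [dns::address $tok]     ;# the public IP as the resolver sees it
dns::cleanup $tok
```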
Parsing ifconfig (*nix) and ipconfig (Windows)

Likely you want to know all your IP addresses, not just the one the whole world knows about. This works nattily on Solaris and Linux, and some poor sod might extend it to work on windoze:
 proc ifConfig { args } {
     if { [ catch {
         set interfaces [ list ]
         if { [ file executable /usr/sbin/ifconfig ] } {
             catch { ::exec /usr/sbin/ifconfig -a } data
         } elseif { [ file executable /sbin/ifconfig ] } {
             catch { ::exec /sbin/ifconfig -a } data
         } else {
             return -code error "can't find 'ifconfig' executable!"
         }
         set fid [ open /etc/hosts r ]
         set hostdata [ read $fid [ file size /etc/hosts ] ]
         ::close $fid
         foreach line [ split $hostdata "\n" ] {
             array set hosts \
                 [ list [ lindex $line 0 ] [ lrange $line 1 end ] ]
         }
         foreach line [ split $data "\n" ] {
             regexp {^[a-z]+\d+} $line if
             if { [ regexp {^\s+inet\s+(?:addr:)?(\S+)} $line -> ip ] } {
                 lappend interfaces [ list $if $ip $hosts($ip) ]
             }
         }
     } err ] } {
         return -code error "ifConfig: $err"
     }
     return $interfaces
 }
switch $tcl_platform(platform) { unix { ######################## # Parse network configuration information # for unix-like systems #### proc ifconfig {} { variable iplist {} set buffer [exec /sbin/ifconfig] set scan "inet addr:" set addrlist {} foreach line [split $buffer \n] { if [regexp HWaddr $line] { lappend addrlist [lindex $line 0] [lindex $line end] } if ![regexp $scan $line] continue set s [expr [string first $scan $line 0] + \ [string length $scan]] set f [expr [string first " " $line $s] - 1] set addr [string range $line $s $f] if { $addr != "127.0.0.1" } { lappend addrlist $addr } } foreach {device mac addr} $addrlist { set device [string tolower $device] if [catch {exec ps ax | grep dhcpcd | grep $device}] { set dhcp no } else { set dhcp yes } set dhcp [linux_dhcp_device $device] lappend iplist [list $device $addr $mac $dhcp] } return $iplist } } windows { ##################### # Parse network configuration information # Windows NT/98/ME/2K/XP # (Also nabs the mac number for the network card # this is a quick copy and paste from my workstation # inventory script) #### proc ifconfig {} { set iplist {} # Load IP info catch { exec ipconfig /all } data set buffer [split $data \n] set mac_address {} foreach line $buffer { if [regexp -nocase {Description} $line] { set val [string trim [lindex [split $line :] end]] lappend mac_addresses $val } if [regexp -nocase {Physical Address} $line] { set val [string trim [lindex [split $line :] end]] regsub -all {\-} $val : val lappend mac_addresses [string tolower $val] } if [regexp -nocase {DHCP Enabled} $line] { set val [string trim [lindex [split $line :] end]] lappend mac_addresses [string tolower $val] } if [regexp -nocase {IP Address} $line] { set val [string trim [lindex [split $line :] end]] lappend mac_addresses $val } } set devices -1 set ppp 0 foreach {desc mac dhcp ip} $mac_addresses { if [regexp desc "PPP"] { set device ppp[incr ppp] } else { set device eth[incr devices] } lappend iplist [list $device $ip 
$mac $dhcp] } return $iplist } } default { proc ifconfig {} { error "Unsupported Platform" } } }
2005/03/07 skm: I kluged a procedure using ipconfig, and had intended to do something later with a dialog to handle a multihomed machine. Later, I didn't work on this anymore because I wanted to use a better method. The dns package looked promising; lately we're using the socket command. (I never did finish all of the unit tests. BTW, now that I'm using the ActiveState debugger, unit testing is much easier than what I originally did.)
 proc get_ips {} {
     # default ip_list to one element consisting of localhost.
     # If the regexp call succeeds, ip will be set to a list of
     # the IPs. If the computer is not multihomed, the list will consist
     # of one element
     set ip_list {}

     # initialize status variables and warning messages
     set warning {}
     set fail 0
     set re_status 0   ;# normal

     if {[catch {set ipconfig_pid [open |ipconfig]} err]} {
         # query the host for its actual IP configuration
         # get the stack trace and flatten it into one line to make it log-able
         set localInfo [split $::errorInfo \n]
         set warning "Warning: Unable to query the host for its IP\
             configuration. IP configure returned: $err\
             Stack trace: $localInfo"
     } else {
         # get the data from the ipconfig command
         set results [split [read $ipconfig_pid] \n]
         close $ipconfig_pid
         # loop over the results, searching for IP addresses.
         foreach line $results {
             if {[catch {regexp {IP Address[\. ]+: ([0-9\.]+)} \
                     $line match ip} re_status]} {
                 set localInfo [split $::errorInfo \n]
                 set warning "Warning: Failed to run regexp on\
                     the ipconfig output.\
                     Result: $re_status Stack trace: $localInfo"
             } elseif {$re_status == 1} {
                 lappend ip_list $ip
             }
         }
     }
     append msg "Harvested IP address(es) $ip_list" " $warning"
     # report msg here
     puts $msg
     # return ip
     if {[llength $ip_list] == 0} {
         return localhost
     } else {
         return $ip_list
     }
 }

 # unit tests
 # 0. run success case
 # 1. Change the call to ipconfig to: catch {exec ipconfig; error foo}
 # 2. Change the regexp pattern to: {fooIP Address[\. ]+: ([0-9\.]+)}
 # 3. Append ;error foo to the catch around the regexp
Mike Tuxford adds... Similar to the ifConfig proc above for Linux, when you know the location of ifconfig and only need one interface's IP, which is perhaps the most common case, the below should work. Or you could edit a page on The Tcler's Wiki and your IP will appear on the Recent changes.
 proc getIP {type bin} {
     set data [exec $bin $type | grep inet]
     return [string range $data [expr [string first ":" $data]+1] \
         [expr [string first " " $data [expr [string first ":" $data]+1]]-1]]
 }
 puts [getIP ppp0 /sbin/ifconfig]
Using ping

CL has another: "ping -c 1 $known_host", and parse the output for origin.
Using web services

FW: Or, to see what IP your machine is seen as from the outside, do some web scraping:
 package require http
 proc whatIP {} {
     regexp {Your IP Address:</B><BR>\n([0-9.]*)} \
         [http::data [http::geturl whatismyipaddress.com]] {} ip
     return $ip
 }

rdt says that this will probably give you the address of some router, tho :)
tb 2009-08-05: I wrote a script that uses checkip.dyndns.org to grab your current external IP from there and mail it to a given email address. Because my DDNS client is behind a NAT, it doesn't recognize when my provider drops the connection, so I've put this into my [crontab] to send my external IP every 15 minutes. See mailip.tcl.
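The scraping part of such a script can be sketched as follows, assuming checkip.dyndns.org still answers with its well-known "Current IP Address: N.N.N.N" body:

```tcl
package require http

proc externalIP {} {
    set tok [http::geturl http://checkip.dyndns.org/]
    set page [http::data $tok]
    http::cleanup $tok
    # Body looks like: <html><body>Current IP Address: 203.0.113.7</body></html>
    regexp {Current IP Address: ([0-9.]+)} $page -> ip
    return $ip
}
```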
Using TWAPI

A problem with some of the above solutions is that they rely on parsing the output of some other program and hence will not work on localized operating systems. On Windows NT platforms only, TWAPI includes a function to provide IP address information:
 package require twapi
 twapi::get_ip_addresses
neb 2010/09/03: Using dyndns and other similar options isn't an option for me, as our work proxy winds up blocking things seemingly at random. Plus, it just doesn't feel clean. Parsing ipconfig is iffy, and has no guarantees with multihomed machines (including those with virtual NICs). Plus, I have a thing against dependencies =]

Fortunately, on XP-class Windows boxen, it's actually much easier to do it the internal way. Via twapi or ffidl, you can hit a couple of Windows APIs; one that will tell you the IP address of a specific interface, and another that will work out the interface for a given route. Combining those, we can grab the IP for a given side of the PC, without having to go out to the network (although it needs to be connected):
 package require twapi
 puts ips:[lsearch -all -inline -not -exact [::twapi::get_ip_addresses] 127.0.0.1]\n

 proc getiptohost hostip {
     package require twapi
     set idx [::twapi::get_outgoing_interface $hostip]
     array set iplist [lsearch -all -inline -not -exact \
         [::twapi::get_netif_info $idx -ipaddresses] 127.0.0.1]
     if {[info exists iplist(-ipaddresses)]} {
         return [lindex $iplist(-ipaddresses) 0 0]
     } else {
         puts "error"
     }
 }

 puts "getting route..."
 puts [getiptohost $::argv]

This proc has a couple of advantages; it doesn't need the internet (it needs a network, else there isn't an IP to return, but it doesn't actually connect to anything to retrieve the IP. It's entirely self-contained), it will ignore disconnected interfaces (so if your LAN is unplugged, and you are connected via WiFi, that's the interface you get) and on a multihomed machine, it will give you the IP for the interface that is pointing to the IP you give it. Also, the IP you give it doesn't have to exist. If you give it a random wild IP, it will just return the IP for the default NIC.
C:\temp>getip2host 1.1.1.1
ips:74.125.19.68 192.168.56.1
getting route...
74.125.19.68

C:\temp>getip2host 192.168.56.123
ips:74.125.19.68 192.168.56.1
getting route...
192.168.56.1

It's longer than it probably needs to be. Filtering out 127.0.0.1 is redundant; I just wanted to make sure it failed if it ran into something I haven't thought of.
Using the registry on Windows

2005/04/30 sbron: A method that should work on localized Windows too is to get the information from the registry. Based on information provided by kbk in the Tcl Chatroom, I made the following code that will obtain most information you may ever want to know about the available network interfaces:
proc regpath {args} {
    join $args "\\"
}

proc ipConfig {} {
    global tcl_platform
    if {$tcl_platform(os) eq "Windows NT"} {
        # Windows NT/2k/XP
        set regroot [regpath HKEY_LOCAL_MACHINE Software \
            Microsoft "Windows NT" CurrentVersion NetworkCards]
        foreach key [registry keys $regroot] {
            set net [regpath $regroot $key]
            set name [registry get $net Description]
            puts "Description=$name"
            set service [registry get $net ServiceName]
            set path [regpath HKEY_LOCAL_MACHINE System \
                CurrentControlSet Services \
                $service Parameters Tcpip]
            set vars [registry values $path]
            foreach var $vars {
                set val [registry get $path $var]
                puts $var=$val
            }
            puts ""
        }
    } else {
        # Windows 95/98/ME
        set regroot [regpath HKEY_LOCAL_MACHINE System \
            CurrentControlSet Services Class]
        set net [regpath $regroot Net]
        foreach key [registry keys $net] {
            set name [registry get [regpath $net $key] DriverDesc]
            puts "Description=$name"
            set path [regpath $regroot NetTrans $key]
            set vars [registry values $path]
            foreach var $vars {
                set val [registry get $path $var]
                puts $var=$val
            }
            puts ""
        }
    }
}

Unfortunately it seems that dynamic IP addresses are not necessarily stored in the registry, so this code will not be able to find those.

LES: unable to get value "DriverDesc" from key "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Class\Net\NetTrans_0003": File not found.

2005/06/10 sbron: This method isn't the final solution I hoped it would be. I have reverted to scanning the ipconfig output again myself because too many customers reported problems when I used the above code. Maybe I should delete this entry. I was just hoping that someone could use it as a starting point to improve upon.
Using TclX on *nix

2005/06/29 CK: What about the following command?
host_info addresses [info hostname]

CL believes CK is assuming TclX, which happens to provide host_info.
Using Ceptcl

2009-06-06 Stu: With Ceptcl >= 0.4 this can be accomplished with the cepResolve command.
% cepResolve [info hostname]
192.168.2.2
% cepResolve google.com -addresses
74.125.45.100 74.125.127.100 74.125.67.100
Using UDP sockets

crshults 2014/10/01: This is working for me on Windows 7 right now. Going to try a few other environments later (note: I live in tkcon):
package require udp
set udp_socket [udp_open 15351 reuse]
chan configure $udp_socket -blocking 0 -buffering none -mcastadd {224.0.0.251} -mcastloop 1 -remote {224.0.0.251 15351}
chan puts -nonewline $::udp_socket [info hostname]
chan event $udp_socket readable read_data

proc read_data {} {
    if {[chan read $::udp_socket] eq [info hostname]} {
        set ::ip_address [lindex [chan configure $::udp_socket -peer] 0]
        proc get_ip_address {} {
            return $::ip_address
        }
        chan close $::udp_socket
    }
}
Linux command line commands

neb 2011/07/18: I was looking for a way to do the same as my previous post but for Linux, and found what looks like a much simpler way:
[~]$ ip route get 64.64.64.64/24
64.64.64.64 via 192.168.0.1 dev wlan0 src 192.168.0.102
    cache

AFAICT, this gives the appropriate route to a given IP, picking the appropriate interface, and the 'src' field is the IP for that interface. Although the documentation warns that using 'get' alters the usage statistics for the route, so it may be better to first try looking something up in the route cache:
[~]$ ip route show cache 133.40.3.100
133.40.3.100 via 192.168.0.1 dev wlan0 src 192.168.0.102
    cache
133.40.3.100 from 192.168.0.102 via 192.168.0.1 dev wlan0
    cache

The 'src' and 'from' fields are the interface IP.

I was going to pull the 'inet addr' from
ifconfig <if>

after pulling the if from

netstat -i

or

netstat -te

If I wind up having to fall back or anything, and work that out, I will post it too; but 'ip route' looks like a much better alternative.
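The same routing decision that 'ip route get' makes can also be obtained from inside a script, without parsing any command output: calling connect() on a UDP socket sends no packets at all, but it does force the kernel to pick a route, and getsockname() then reveals the source address that was chosen. A sketch of the idea in Python (not from the original page; the target address is arbitrary and is never actually contacted):

```python
import socket

def source_ip_for(target: str, port: int = 53) -> str:
    """Return the local IP the kernel would use to reach `target`.

    connect() on a UDP (datagram) socket sends no packets; it only
    records the peer and forces a routing decision, so this works
    even when nothing is listening at target:port.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((target, port))
        return s.getsockname()[0]  # (addr, port) of our end of the socket
    finally:
        s.close()

if __name__ == "__main__":
    # A loopback target always resolves to the loopback interface.
    print(source_ip_for("127.0.0.1"))
```

Because no datagram is ever sent, this works even when the target address does not exist; it only requires that some route (e.g. a default route) covers it.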
ZB 2009-07-10: For Linux perhaps this will be the most convenient solution:

1. Find out all present network interfaces in /proc/net/dev (the file format is simple).
2. Use the code below (found it here):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <arpa/inet.h>

int main(int argc, char *argv[])
{
    struct ifreq ifr;
    int sock, j, k;
    char *p, addr[32], mask[32], mac[32];

    if (argc < 2) {
        fprintf(stderr, "missing argument, example: eth0\n");
        return 1;
    }
    sock = socket(PF_INET, SOCK_STREAM, 0);
    if (-1 == sock) {
        perror("socket() ");
        return 1;
    }
    strncpy(ifr.ifr_name, argv[1], sizeof(ifr.ifr_name) - 1);
    ifr.ifr_name[sizeof(ifr.ifr_name) - 1] = '\0';

    if (-1 == ioctl(sock, SIOCGIFADDR, &ifr)) {
        perror("ioctl(SIOCGIFADDR) ");
        return 1;
    }
    p = inet_ntoa(((struct sockaddr_in *)(&ifr.ifr_addr))->sin_addr);
    strncpy(addr, p, sizeof(addr) - 1);
    addr[sizeof(addr) - 1] = '\0';

    if (-1 == ioctl(sock, SIOCGIFNETMASK, &ifr)) {
        perror("ioctl(SIOCGIFNETMASK) ");
        return 1;
    }
    p = inet_ntoa(((struct sockaddr_in *)(&ifr.ifr_netmask))->sin_addr);
    strncpy(mask, p, sizeof(mask) - 1);
    mask[sizeof(mask) - 1] = '\0';

    if (-1 == ioctl(sock, SIOCGIFHWADDR, &ifr)) {
        perror("ioctl(SIOCGIFHWADDR) ");
        return 1;
    }
    for (j = 0, k = 0; j < 6; j++) {
        k += snprintf(mac + k, sizeof(mac) - k - 1, j ? ":%02X" : "%02X",
                      (int)(unsigned int)(unsigned char)ifr.ifr_hwaddr.sa_data[j]);
    }
    mac[sizeof(mac) - 1] = '\0';

    printf("\n");
    printf("name: %s\n", ifr.ifr_name);
    printf("address: %s\n", addr);
    printf("netmask: %s\n", mask);
    printf("macaddr: %s\n", mac);
    printf("\n");
    close(sock);
    return 0;
}

Using this code one can even prepare a package for tcllib, especially for IP-address recognition; but I found it 5 minutes ago, so no complaints that I haven't provided it ready-to-use yet. ;)

Something similar for Windows (not tested by myself): here
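The same SIOCGIFADDR ioctl used by the C program above can also be issued from a script. A Python sketch (a hypothetical helper, Linux-only: the ioctl number 0x8915 and the byte offsets into struct ifreq are glibc/Linux assumptions):

```python
import fcntl
import socket
import struct

SIOCGIFADDR = 0x8915  # Linux ioctl number for "get interface address"

def if_addr(ifname: str) -> str:
    """Return the IPv4 address of a named interface, e.g. 'eth0' or 'lo'."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Pack the interface name into a struct ifreq-sized buffer
        # (IFNAMSIZ is 16, so the name is truncated to 15 chars + NUL).
        req = struct.pack('256s', ifname.encode()[:15])
        res = fcntl.ioctl(s.fileno(), SIOCGIFADDR, req)
    finally:
        s.close()
    # The returned buffer holds a struct sockaddr_in at offset 16;
    # its 4 address bytes sit at offsets 20..23.
    return socket.inet_ntoa(res[20:24])

if __name__ == "__main__":
    print(if_addr("lo"))
```

On Linux, if_addr("lo") returns 127.0.0.1; pass "eth0" or similar for a real interface.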
Windows command line commands

Someone suggested, somewhat malformedly:
global env
exec myip [exec ypcat -k hosts | grep $env(HOST) | awk {{print $1}}]
[Lectus]: This works on Windows XP:
proc getIP {} {
    set fd [open "|nslookup" r]
    fconfigure $fd -buffering line
    gets $fd line
    gets $fd line
    return [string trim [string range $line 9 end]]
}

But there is better cross-platform code on this page. ;D
Editing a page on the Tcler's wiki ;-)

Lars H: Given that web browsers tend to be more commonly available than telnet these days, one alternative to such a method would be to open the [Graffiti] page :-), edit (make a trivial change and save), and then look at Recent changes.
Discussion

[Explain why NAT and firewalls can make things more difficult.]

2005/06/29 SRIV: Perhaps STUN is the answer here [1]

[Explain why a web proxy can make using web-services to determine your address difficult.]

HolgerJ 2015-03-18: I wonder why in Java 'getNetworkInterfaces()' works so easily. Couldn't the code behind it be put into the Tcl core?
See also
- Tnm/Scotty/TkInEd
- Regular Expression Examples
- Entry box validation for IP address
- DNS
- TWAPI Network Configuration module [2] | http://wiki.tcl.tk/3015 | CC-MAIN-2018-34 | refinedweb | 3,891 | 58.32 |
A nasty Problem - Java Code
By vaibhavc on Sep 15, 2008
A few days back, I got a problem in which I actually had to get the fields, constructors and method names from a Java class. One might think of using the reflection API, making the problem easy to solve, but that is not the case: the reflection API can only be used on class files, not Java files. Then a quick idea came to mind: compile the file internally, use reflection, and then delete the class file if the user doesn't want to see it. This would never be a tough job if I could use the JDK 6 JavaCompiler API:
Here is the code:
import java.io.IOException;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class JDK6FirstCompile {
    public static void main(String args[]) throws IOException {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int results = compiler.run(null, null, null, "HelloWorld.java");
        System.out.println("Success: " + (results == 0));
    }
}

class HelloWorld {
    public static void main(String args[]) {
        System.out.println("Hello, World");
    }
}

But I was constrained to use JDK 5 or less, so this JDK 6 API was of no use to me :-(. Finally, I came up with a small piece of code which can probably do this in every case. Hopefully it will not lose any generic behavior and will run in all cases.
Here is the code:
import java.io.*;

public class CompileCheck {

    public void compileClasses2() {
        try {
            String command = System.getProperty("java.home") + File.separator
                    + ".." + File.separator + "bin" + File.separator + "javac "
                    + "HelloWorld.java";
            System.out.println(command);
            Runtime.getRuntime().exec(command);
        } catch (Exception e) {
        }
    }

    public static void main(String[] args) {
        CompileCheck c = new CompileCheck();
        c.compileClasses2();
    }
}

In place of HelloWorld, I can surely use any path where the Java file resides. And of course, I can use *.java as well :-). Please let me know your opinion, or any method easier than this, by which I can get the class methods, attributes and all from a Java file.
Read this as this would help you
Posted by Srikanth on September 16, 2008 at 07:50 AM IST #
Oh yes, a better option. Thanks! Just the classpath (tools.jar) issue needs to be resolved by code too.
Posted by Vaibhav on September 21, 2008 at 05:59 PM IST #
28.3. Configuring Persistent Memory for File System Direct Access
File system direct access requires the namespace to be configured to the fsdax mode. This mode allows for the direct access programming model. When a device is configured in the fsdax mode, a file system can be created on top of it and then mounted with the -o dax mount option. Then, any application that performs an mmap() operation on a file on this file system gets direct access to its storage. See the following example:
# ndctl create-namespace --force --reconfig=namespace0.0 --mode=fsdax --map=mem
{
  "dev":"namespace0.0",
  "mode":"fsdax",
  "size":17177772032,
  "uuid":"e6944638-46aa-4e06-a722-0b3f16a5acbf",
  "blockdev":"pmem0"
}
In the example, namespace0.0 is converted to the fsdax mode. With the --map=mem argument, ndctl puts the operating system data structures used for Direct Memory Access (DMA) in system DRAM.
To perform DMA, the kernel requires a data structure for each page in the memory region. The overhead of this data structure is 64 bytes per 4-KiB page. For small devices, the amount of overhead is small enough to fit in DRAM with no problems. For example, the 16-GiB namespace only requires 256 MiB for page structures. Because NVDIMM devices are usually small and expensive, storing the kernel's page tracking data structures in DRAM is preferable, as indicated by the --map=mem parameter.
In the future, NVDIMM devices might be terabytes in size. For such devices, the amount of memory required to store the page tracking data structures might exceed the amount of DRAM in the system. One TiB of persistent memory requires 16 GiB just for page structures. As a result, specifying the --map=dev parameter to store the data structures in the persistent memory itself is preferable in such cases.
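The sizing rule above is simple arithmetic: 64 bytes of metadata per 4-KiB page is a fixed 1/64 of the namespace capacity. A quick check of the two figures quoted in this section (illustrative only, not part of the original guide):

```python
# Page-structure overhead: 64 bytes of kernel metadata per 4-KiB page,
# i.e. a fixed 1/64 of the namespace capacity.
PAGE = 4 * 1024          # 4 KiB page size
OVERHEAD_PER_PAGE = 64   # bytes of metadata per page

def page_struct_overhead(namespace_bytes: int) -> int:
    """Bytes needed for the page tracking structures of a namespace."""
    return (namespace_bytes // PAGE) * OVERHEAD_PER_PAGE

GiB = 1024 ** 3
TiB = 1024 ** 4

# A 16-GiB namespace needs 256 MiB of page structures:
print(page_struct_overhead(16 * GiB) // (1024 ** 2), "MiB")  # 256 MiB
# A 1-TiB namespace needs 16 GiB of page structures:
print(page_struct_overhead(1 * TiB) // GiB, "GiB")           # 16 GiB
```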
After configuring the namespace in the fsdax mode, the namespace is ready for a file system. Starting with Red Hat Enterprise Linux 7.3, both the Ext4 and XFS file systems are enabled to use persistent memory as a Technology Preview. File system creation requires no special arguments. To get the DAX functionality, mount the file system with the dax mount option. For example:
# mkfs -t xfs /dev/pmem0
# mount -o dax /dev/pmem0 /mnt/pmem/
Then, applications can use persistent memory and create files in the /mnt/pmem/ directory, open the files, and use the mmap operation to map the files for direct access.
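The open-then-mmap flow described above can be sketched as follows (illustrative only, not from the Red Hat guide; the path defaults to an ordinary temporary file so the sketch also runs on a non-DAX file system, where the same calls work but go through the page cache instead of straight to persistent memory):

```python
import mmap
import os
import sys

def write_via_mmap(path: str, payload: bytes) -> bytes:
    """Create a file, map it, store bytes through the mapping, read them back.

    On a file system mounted with -o dax, stores through the mapping go
    directly to the underlying persistent memory; on an ordinary file
    system the identical code simply uses the page cache.
    """
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        os.ftruncate(fd, len(payload))          # size the file first
        with mmap.mmap(fd, len(payload)) as m:  # MAP_SHARED by default
            m[:] = payload                      # store through the mapping
            m.flush()                           # msync(): make the data durable
        with open(path, "rb") as f:
            return f.read()
    finally:
        os.close(fd)

if __name__ == "__main__":
    # Use e.g. /mnt/pmem/demo.dat on a real DAX mount; default to /tmp here.
    target = sys.argv[1] if len(sys.argv) > 1 else "/tmp/dax-demo.dat"
    print(write_via_mmap(target, b"hello, pmem"))
```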
When creating partitions on a pmem device to be used for direct access, partitions must be aligned on page boundaries. On the Intel 64 and AMD64 architecture, use at least 4-KiB alignment for the start and end of the partition, but 2 MiB is the preferred alignment. By default, the parted tool aligns partitions on 1-MiB boundaries. For the first partition, specify 2 MiB as the start of the partition. If the size of the partition is a multiple of 2 MiB, all other partitions are also aligned.
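The alignment rule above amounts to rounding the starting offset up to the next 2-MiB boundary; a small helper to illustrate the arithmetic (not part of the Red Hat guide):

```python
ALIGN = 2 * 1024 * 1024  # preferred 2-MiB alignment for pmem partitions

def align_up(offset: int, alignment: int = ALIGN) -> int:
    """Round a byte offset up to the next alignment boundary."""
    return (offset + alignment - 1) // alignment * alignment

# parted's default 1-MiB start gets rounded up to the 2-MiB boundary:
print(align_up(1024 * 1024))      # 2097152
# An already aligned offset is left alone:
print(align_up(4 * 1024 * 1024))  # 4194304
```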
Topics covered: During this video I introduce #include (used to include a header file into a program), using namespace std (this will be discussed at length later on), and cout.
Hello World
“Hello World” is the quintessential “starter program” in most programming languages, we simply include enough files to write this message to the screen, and then try to explain what each part of the program does. In this case the program will look something like this (I don’t have the source code transcribed for this video right now):
#include <iostream>

using namespace std;

int main(){
    cout << "Hello World" << endl;
    return 0;
} //end main
Now let's break the above code down part by part and explain what each part of it does.
#include <iostream>
This line tells our compiler to "include" the library "iostream", which is also known as input/output stream. This library contains all the code required to use the functions cout / cin (covered later) in our programs. For quite some time cin / cout will be all that we're really using iostream for, as many of the more advanced uses of it just aren't sensible at this point.
using namespace std;
Namespaces can be fairly confusing to some people, because until you see what the above program looks like without this statement it's hard to understand what exactly it does, so I'll show you that, and then try to explain it a bit better:
#include <iostream>

int main(){
    std::cout << "Hello World" << std::endl;
    return 0;
}
In the above code you'll notice that I left out the using namespace std; but had to change cout<< to std::cout<<. This is what the using namespace std; line does. It allows us to use items from the 'standard namespace' without mentioning the namespace that they're in. We'll talk about namespaces MUCH later on in the series when we get out of using just std, and maybe get into Boost and other libraries.
int main(){
This line is fairly simple: it shows where the start of the "main" part of our program is. I want you guys to take special notice of the { though. That's known as a Scope Bracket; these can make your life easier or harder based on the habits you develop with them now. You guys should use this as something of a rule when it comes to programming: "Every line I write should end with either a ; or a { / }". This will make your life much easier, especially in the early stages of programming.
cout << "Hello World" << endl;
This line calls the function "cout" from the iostream library, and passes in the text string "Hello World". The part after that (endl) is just to make a new line after it. We'll get more into how this works in subsequent lessons.
return 0;
This line tells the compiler that it can "return" (to the command prompt) if everything went well. The return command will make more sense later. The reason we're using return 0 and not some other number will be covered way later as well.
} //end main
This is really just a closing bracket }. The //end main after it is what's known as a comment. Comments are used by programmers to keep track of what's what in a program. In this case, when you have a longer program with many brackets, you might want to know that this bracket is closing your int main() opening bracket.
Unique neighbors and small paths search solution in Speedy category for Signpost by Phil15
from collections import defaultdict
MOVES = {'NW': (-1, -1), 'N': (-1, 0), 'NE': (-1, 1),
'W': (0, -1), 'E': (0, 1),
'SW': (1, -1), 'S': (1, 0), 'SE': (1, 1)}
def signpost(grid, directions):
nb_rows, nb_cols = len(grid), len(grid[0])
size = nb_rows * nb_cols
def number_compatibility(xA, yA, xB, yB) -> bool:
""" Say if A can go to B according to grid numbers. """
nA, nB = grid[xA][yA], grid[xB][yB]
return nB == nA + 1 if nA and nB else True
# Preprocess what is next/previous to each cell of the grid.
next_neighbors, prev_neighbors = defaultdict(set), defaultdict(set)
fix = {}
# fix[x][0] (/ fix[x][1]) is True when previous (/ next) cell is known.
for i, row in enumerate(grid):
for j, n in enumerate(row):
A = i, j # Cell A is pointing to a few cells B.
fix[A] = [n == 1, n == size]
if n != size: # else no direction given since it's the end.
di, dj = MOVES[directions[i][j]]
B = i + di, j + dj
while 0 <= B[0] < nb_rows and 0 <= B[1] < nb_cols:
if number_compatibility(*A, *B):
next_neighbors[A].add(B)
prev_neighbors[B].add(A)
B = B[0] + di, B[1] + dj
def follow_uniques(position, neighbors):
""" Generate the path from a position, looking in neighbors. """
while len(neighbors[position]) == 1:
for position in neighbors[position]:
yield position
def link(A, B) -> bool:
""" Create a link A --> B by updating data structures. """
if fix[A][1] and fix[B][0]:
return False # We knew this link before.
# Update neighbors and fix a few things.
next_neighbors[A], fix[A][1] = {B}, True
prev_neighbors[B], fix[B][0] = {A}, True
(xA, yA), (xB, yB) = A, B
nA, nB = grid[xA][yA], grid[xB][yB]
# If a A or B have number but not the other, inform next/prev cells.
# Next cells can't point to a previous cell, update neighbors.
for n, (x, y) in enumerate(follow_uniques(A, next_neighbors), 1):
# A --> B --> ... --> (x, y) so (x, y) can't point to A.
# nA < nA+1 < ... < nA + n
next_neighbors[x, y].discard(A)
if nA and not nB:
grid[x][y] = nA + n
for n, (x, y) in enumerate(follow_uniques(B, prev_neighbors), 1):
# B <-- A <-- ... <-- (x, y) so (x, y) can't come from B.
# nB > nB-1 > ... > nB - n
prev_neighbors[x, y].discard(B)
if nB and not nA:
grid[x][y] = nB - n
return True # We learned a new link
def find_unique_neighbors() -> int:
""" Go through the grid looking for unique previous/next neighbors. """
nb_new_links = 0 # This will say if we found something, or not.
for i, j in fix:
if not fix[i, j][0]: # The previous cell is not known yet.
end = i, j # What is the start?
discard = set() # Discard useless previous neighbors.
for start in prev_neighbors[end]:
# Discard cells that already have a next cell fixed.
# Discard cells without a compatible number.
if fix[start][1] or not number_compatibility(*start, *end):
discard.add(start)
prev_neighbors[end] -= discard
if len(prev_neighbors[end]) == 1: # start is known now!
for start in prev_neighbors[end]:
nb_new_links += link(start, end)
if not fix[i, j][1]: # The next cell is not known yet.
start = i, j # What is the end?
discard = set() # Discard useless next neighbors.
for end in next_neighbors[start]:
# Discard cells that already have a previous cell fixed.
# Discard cells without a compatible number.
if fix[end][0] or not number_compatibility(*start, *end):
discard.add(end)
next_neighbors[start] -= discard
if len(next_neighbors[start]) == 1: # end is known now!
for end in next_neighbors[start]:
nb_new_links += link(start, end)
return nb_new_links
def paths(steps, A, B) -> int: # Steps won't be > 3 (it's not needed).
""" Search a common thing to all paths
between A and B in the given number of steps. """
def dfs_links(): # Will generate links for all wanted paths.
stack = [[A]]
while stack:
path = stack.pop()
for neighbor in next_neighbors[path[-1]]:
if neighbor not in path:
new_path = path + [neighbor]
if len(new_path) <= steps:
stack.append(new_path)
elif neighbor == B: # new_path is complete here.
yield set(zip(new_path, new_path[1:]))
return sum(link(a, b) for a, b in set.intersection(*dfs_links()))
def smart_bruteforce() -> int:
""" Search little paths between cells with known numbers. """
starts, ends = {}, {}
for i, j in fix:
n = grid[i][j]
if n:
if not fix[i, j][0]: # (i, j) predecessor is unknown.
ends[n] = i, j
if not fix[i, j][1]: # (i, j) successor is unknown.
starts[n] = i, j
starts, ends = sorted(starts.items()), sorted(ends.items())
return sum(paths(nB - nA, A, B) # No need to look for big paths.
for (nA, A), (nB, B) in zip(starts, ends) if nB - nA <= 3)
smart_bruteforce() # Save time to first search little paths on big grids!
while find_unique_neighbors() or smart_bruteforce():
pass # While we find something, we continue.
return grid
March 14, 2019
Introduction to ASP.NET Core part 12: data annotation of view models
February 13, 2017
Introduction
In the previous post we went through how to insert a new Book entity using a form. We used a view model for this purpose. Two different Create action methods take care of the insertion process in the Books controller. One accepts HTTP GET requests and only presents an empty form with the various book insertion view model properties. The corresponding view has a form that’s built using a Html helper which takes care of setting up the form with the correct naming conventions so that the view model properties can be correctly populated. We also extended our rudimentary layered architecture to simulate that the Book entity is saved in a data store. Finally we also mentioned the usage of the anti-forgery token attribute which guards against a cross site request forgery attack.
In this post we’ll look at how attributes on our attributes can help us refine our view models and how they are rendered in the view.
Attributes on view models
We’ll return to our InsertBookViewModel object:
public class InsertBookViewModel
{
    public string Title { get; set; }
    public string Author { get; set; }
    public int NumberOfPages { get; set; }
    public decimal Price { get; set; }
    public Genre Genre { get; set; }
}
There are various attribute types we can attach to these properties that affect the behaviour of the view model in various ways. These attributes reside in the System.ComponentModel.DataAnnotations namespace. As it stands now our book insertion form is very basic. Apart from the lack of styling, which is of no interest to us at this phase, there’s no data validation, there are no restrictions imposed on the values that the user can enter and the NumberOfPages label is not too fancy.
Controlling the display
The Display attribute can define the label for the property in the view:
public class InsertBookViewModel
{
    [Display(Prompt = "Enter title")]
    public string Title { get; set; }

    public string Author { get; set; }

    [Display(Name = "Number of pages")]
    public int NumberOfPages { get; set; }

    [Display(Name = "Price in USD")]
    public decimal Price { get; set; }

    public Genre Genre { get; set; }
}
The Display attribute has a number of properties, with Name and Prompt probably being the ones used most often. Name defines the label of the property and Prompt will set a placeholder that disappears as soon as the user starts typing in the text box:
Form validation
Data validation has several attributes:
- Required: the user must enter a value
- Range: the user must enter a numeric value within an interval
- MinLength and MaxLength: the string must be either be at least or at most as long as the defined lengths
- Regular expression: the value must match a regular expression, useful for emails, phone numbers, social security numbers, postal codes etc.
- Compare: compare two strings, most useful for password validation
These annotations will provide a basic level of validation of the form data in the backend. It’s important to keep in mind that these attributes won’t magically create client side validation scripts. Also, complex validation will still be part of your domain logic. E.g. at my company the customer can perform load testing of their websites. The rules around who and when they can start a load test are quite complex and intricate which cannot possibly be expressed with data annotations. Data annotations can validate simple things like the passwords match when a new user wants to sign up with our service. This level of validation is perfect to reject the most obvious mistakes. Don’t try to put all your domain validation rules in data annotations, they should be part of the domain classes.
Before we apply some of these attributes we need to extend the book insertion form. We should enter placeholders to show the validation exception messages to our users. There’s a Html helper called ValidationMessageFor that we can apply here:
@model DotNetCoreBookstore.Models.InsertBookViewModel
<html xmlns="">
<head>
    <title>Insert a new Book</title>
</head>
<body>
    <h1>Insert a new book</h1>
    @using (Html.BeginForm())
    {
        <div>
            @Html.LabelFor(m => m.Author)
            @Html.TextBoxFor(m => m.Author)
            @Html.ValidationMessageFor(m => m.Author)
        </div>
        <div>
            @Html.LabelFor(m => m.Title)
            @Html.TextBoxFor(m => m.Title)
            @Html.ValidationMessageFor(m => m.Title)
        </div>
    }
</body>
</html>
Here are a couple of examples for the data validation attributes on the book insertion view model:
public class InsertBookViewModel
{
    [Display(Prompt = "Enter title")]
    [Required(ErrorMessage = "The title is required"), MaxLength(10, ErrorMessage = "A book title cannot exceed 10 characters")]
    public string Title { get; set; }

    [Required(ErrorMessage = "The author is required"), MaxLength(15, ErrorMessage = "A book author cannot exceed 15 characters")]
    public string Author { get; set; }

    [Display(Name = "Number of pages")]
    [Required(ErrorMessage = "Number of pages is required"), Range(1, 10000, ErrorMessage = "Min pages: 1, max pages: 10000")]
    public int NumberOfPages { get; set; }

    [Required(ErrorMessage = "Enter a price"), Range(1, 1000, ErrorMessage = "Min price: 1, max price: 1000")]
    [Display(Name = "Price in USD")]
    public decimal Price { get; set; }

    public Genre Genre { get; set; }
}
We’re not done yet. If you test entering an invalid book now all you’ll see is that the page is redirected to the book index page and the book with the invalid data is still inserted into the data store. Data annotation in itself will not prevent the controller from executing its POST Create action method. We’ll need to check the model state first. If there’s a validation error then the model state will know about it but the exceptions it has in its list won’t be applied automatically in our controller code. The POST Create method will need to be extended as follows:
[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult Create(InsertBookViewModel insertBookViewModel)
{
    if (ModelState.IsValid)
    {
        // insert the book here (code not shown)
        return RedirectToAction("Index");
    }
    return View();
}
If the model state is valid then we enter a book. Otherwise the same view will be presented. MVC will figure out that we mean the current book insertion view presented by GET Create. That’s how the user will see the data validation exceptions:
As soon as you insert valid data the book will be added to the in-memory data store.
There’s also another way to present the data validation messages. The validation summary helper method can be placed within the form:
@using (Html.BeginForm())
{
    @Html.ValidationSummary()
    <div>
    //rest of code ignored
Data type annotation
There’s one more important annotation type called DataType. It gives a hint to MVC what kind of input to render in the view. We’ll misuse the Title attribute for this purpose. As you start typing…
[DataType(DataType.)]
…in Visual Studio above a view model property then you’ll see that it offers a wide range of data types like credit card, password, email, URL etc.
In order to fully reap the benefits of the DataType attribute we need to make a little change in Create.cshtml. The currently used TextBoxFor gives no freedom to MVC to decide what kind of input type it should render. We tell it to show a text box which will be an input of type text:
<input class="input-validation-error" data-
We can ask MVC to help us by using the EditorFor helper method instead:
@Html.LabelFor(m => m.Title)
@Html.EditorFor(m => m.Title)
@Html.ValidationMessageFor(m => m.Title)
Put the following data type annotation above the Title property in InsertBookViewModel:
[DataType(DataType.Url)]
public string Title { get; set; }
Run the application and inspect the HTML source of the Create view:
<input class="text-box single-line" data-
Note how the type was set to “url” instead of “text”.
Now try to enter a book with a title that doesn’t look like a URL. I’m testing in Chrome and it gives me the following validation message:
The DataType enumeration also has a Text value which will set the type to “text” like we had before. Feel free to test the other data types too. The password type is widely used in signup and login forms. Its effect is that the user won’t see the characters in the text box but only some black dots. That’s of course the well know behaviour of password text boxes.
The post has demonstrated an additional advantage with using view models in our controllers. The attributes can be placed in our view models instead of putting them directly on our domain classes which would be very wrong. Domain models should stay as clean as possible and not be littered with view related attributes that are solely in place for the controller and its matching view.
We’ll continue in the next post.
View the list of MVC and Web API related posts here.
Continue your series! Excellent working!
love these series. it’s very good source for learning asp.net core | https://dotnetcodr.com/2017/02/13/introduction-to-asp-net-core-part-12-data-annotation-of-view-models/ | CC-MAIN-2019-04 | refinedweb | 1,459 | 50.06 |
On 1/26/06, Williams, James P <address@hidden> wrote:
> I'm trying to implement automatic dependency generation using the excellent
> idea posted here.
>
> The following email seems to describe the problem I'm having.
>
> In my case, however, I'm trying to implement this in a general way that can
> be #included by Makefiles from many different applications. Also, files
> other than lex.c files can #include yacc.h. So the dependency graph could
> look like this.
>
> prog.exe: goo.o lex.o yacc.o
>
> goo.o: goo.c yacc.h
> lex.o: lex.c yacc.h
> yacc.o: yacc.c
>
> lex.c: lex.l
> yacc.c yacc.h: yacc.y
>
> And just because there's a lex.l doesn't mean there's an associated yacc.y.
>
> How can I teach make these dependencies in a general way without having to
> write them by hand, using "Advanced Auto-Dependencies"? I don't see a way
> to detect the goo.o/yacc.h dependency because it requires that yacc be run
> first. I'm coming to the conclusion that I must return to automatically
> building the dependency files using implicit rules, i.e., "Basic
> Auto-Dependencies" (see first link above), with all of the problems that
> introduces. Is that really the best way to do this?
>
> Thanks,
>
> Jim

It may be slightly inefficient, but you can do it automatically using a kind
of two-pass solution. On the first pass, all the files that generate other
sources are run (such as the lex & yacc commands). On the second pass, since
your generated .h and .c files exist, you can wildcard the sources to build
the program. Hopefully this commented example will help illustrate what I
mean... :)

#### Begin Makefile ####

$(warning make is starting)

# Get a list of all the .c srcs, and find their corresponding objects and
# automatically generated dependency files
srcs := $(wildcard *.c)
objs := $(srcs:.c=.o)
deps := $(srcs:.c=.d)

# The program depends on all the objects in the current directory. Note the
# first time through, $(objs) is incomplete (it doesn't include the yacc and
# lex generated sources). We don't care though, since this rule won't take
# effect until those sources are created and make reloads.
prog: $(objs)
	gcc $^ -o $@

# Get the dependencies from all of the compiler-generated files. Note that
# nowhere in this file is a rule with $(deps) as a target. This prevents
# make from reloading unnecessarily.
-include $(deps)

# Generic rule to create an object from a .c file. The -MMD creates the .d
# file automatically.
%.o: %.c
	gcc -MMD -c $< -o $@

# Find all the lex srcs and their corresponding autodependency stub files (.ad).
# These files are used as a target so that make knows to reload after they
# are changed. This way the second time make loads, the lex.c file will be
# present for the c srcs wildcard. Note that the .ad files don't actually
# contain any dependency information.
lsrcs := $(wildcard *.l)
ldeps := $(lsrcs:.l=.ad)
$(ldeps): %.ad: %.l
	@echo "lex $<"; touch $@; touch $(<:.l=.c)

# Similar to above, only for yacc srcs. They could probably be combined in
# some fashion if you want a smaller Makefile :).
ysrcs := $(wildcard *.y)
ydeps := $(ysrcs:.y=.ad)
$(ydeps): %.ad: %.y
	@echo "yacc $<"; touch $@; touch $(<:.y=.c); touch $(<:.y=.h)

# Include the empty autodependency stubs. The first time make is run in a
# clean area, these .ad files won't exist. So make will build them using
# the lex & yacc rules, then reload. This means no .c -> .o compilation
# happens on the first pass. Also, we don't want make to be dependent on
# these files in a "clean" mode, otherwise a "make clean" will try to
# rebuild the lex and yacc sources before removing them.
ifeq (,$(filter clean,$(MAKECMDGOALS)))
-include $(ydeps) $(ldeps)
endif

# Clean out the objects, dependencies, autodependency stubs, and other
# generated files. You could use patsubst or similar functions to generate
# the lex.c, yacc.c, and yacc.h to be automatically cleaned. Alternatively,
# you could set up the lex and yacc rules to place their generated output
# in another directory. The clean could then just 'rm -rf' that directory.
clean: ; rm -f $(objs) $(deps) $(ldeps) $(ydeps) lex.c yacc.c yacc.h prog

#### End Makefile ####

I think this results in what you would expect (I put the warning at the top
to show when make reloads)...

# A clean build
$ make
Makefile:1: make is starting
lex lex.l
yacc yacc.y
Makefile:1: make is starting
gcc -MMD -c goo.c -o goo.o
gcc -MMD -c lex.c -o lex.o
gcc -MMD -c yacc.c -o yacc.o
gcc goo.o lex.o yacc.o -o prog

# Editing a yacc file - goo.o is rebuilt since goo.d contains the
# dependency on yacc.h automatically.
$ touch yacc.y
$ make
Makefile:1: make is starting
yacc yacc.y
Makefile:1: make is starting
gcc -MMD -c goo.c -o goo.o
gcc -MMD -c yacc.c -o yacc.o
gcc goo.o lex.o yacc.o -o prog

# Editing a regular .c file - make only loads the makefile once
$ touch goo.c
$ make
Makefile:1: make is starting
gcc -MMD -c goo.c -o goo.o
gcc goo.o lex.o yacc.o -o prog

Anyway, the clean target would still need to be fixed up a bit, and running
in multiple directories will probably require additional hackery. Hopefully
that will help some though :)

-Mike
Dead Code: Broken Override
From OWASP
This is a Vulnerability. To view all vulnerabilities, please see the Vulnerability Category page.
Abstract
This method fails to override a similar method in its superclass because their parameter lists do not match.
Description
This method declaration looks like an attempt to override a method in a superclass, but the parameter lists do not match, so the superclass method is not overridden.
Examples
The class DeepFoundation is meant to override the method getArea() in its parent class, but the parameter lists are out of sync.
public class Foundation { public int getArea() { ... } } class DeepFoundation extends Foundation { public int getArea(int a) { ... } } | http://www.owasp.org/index.php/Dead_Code:_Broken_Override | crawl-001 | refinedweb | 106 | 56.25 |
Type: Posts; User: ahmed17
Hi ,
I want to send SMS message througth my Application and I want to read message from USB Modem
How I check the account in SIM
hi,
I have the following problem , I am reading from serial port and draw plots from the serial port in the form and no problem with flicker (I am enable doublebuffersize property) after doing...
is this true
"the problem of SOA is that if you have different component and all component build on com technology and you want to change any component from those you will need to change other's...
Hi,
I am read about WCF and I want to ensure from I am read , if I understand concept right or not ?
What I understand about WCF is :
I understand that WCF has been build upon the previous...
if i have windows application and i have 2 options to use WPF or win forms ... what i should choose?
and when i should use win and wpf ?
hi ,
I am start to learn WCF and i want to correct some concepts
I have some problems in understanding some concepts ..........
SOA is set of distributed components which can be invoked to...
I am try to do this change but error .......
Hi Guys,
I am try to draw line and i want to make line rotate 360 degree and each degree changed regarding as timer intervals.
double x1, y1, r;
private void...
Hi Guys,
I am using hash code to store items in it but when i retrive data using foreach , data not retrived in order so how i can slove this problem ?
All the best,
?????????????????????????????????
hi guys,
i have some conflicts in understanding the render process of XML , HTML has the problem of "cross-browser rendering" which means that you can not control how html tags displayed and...
hi guys,
i want to know how i can check the validate of the following schema , i test it in this site but give me error.
An XML Schema
<?xml...
?????????????
Tables :
Create Table Catgerories (
CatgId int PRIMARY KEY,
Catgname nvarchar(50)
)
hi ,
i make simple method that retrieve data from tables this is easy but my problem happen when i retrieve data from table that have relationship with other table, i make class to set value into ...
Thanks.
each model try to separate code into 3 classes one for db layer, one for presentation (view ) and one of business operations so what is the difference?
what is the difference between MVC Model and 3-tier app ?
i have problem with stored procedure , i create procedure and it run good .
use MyTestDataBase
GO
create proc sp_with_outpt_parameters2
@id int ,
@name nvarchar(40)=Null,
@age int,
@phone...
data not displayed in dataGridView but when i replace dataGridView by dataGrid it work ...why?
SqlConnection sqlCon = new SqlConnection();
SqlCommand sqlCmd = new SqlCommand();
...
thanks for you, but i will say to you why i want simple example , first i spend 5 days in this problem , i take object from jtable and take row in array and pass array with array of column name to...
can u give me simple update on the previous code?
hi,
i have problem with jtable to display data from db into table and here's the code that diplay data in console so what is the update on code to display data in jtable
import java.sql.*;
...
hi,
i want very simple example for making template control, i read more articles but not give me the simplicty?
thanks for your help , as u say i can not changethe value of master Page ID " ctl00 " but can i see where this value stored or its hidden value ? | http://forums.codeguru.com/search.php?s=216aa83a8995e843224efc0e1cc9248d&searchid=7384629 | CC-MAIN-2015-32 | refinedweb | 612 | 67.28 |
#define directives are preprocessor directives. These are either symbolic constants or macros or conditional compilation constructs or other various directives. Let’s see first symbolic constants, for ex.,
#define NAME "What is your name?" #define SIZE 512 #define PI 3.14 #define FOREVER for(;;) #define PRINT printf("values of x = %d and y = %d.\n", x, y)
Notice that we haven’t used ‘;’ to terminate the replacement texts in each #defined symbol above. Actually, when we use them in program, for ex.
int main(void) { char buf[SIZE]; int x = 10, y = 20; FOREVER; /* ; used here */ PRINT; x++; y++; PRINT; return 0; }
‘;’ is used to terminate the symbolic statement as any other C statement. Let’s see what happens when preprocessor operates on the main() above, it becomes,
int main(void) { char buf[512]; int x = 10, y = 20; for(;;); printf("values of x = %d and y = %d.\n", x, y); x++; y++; printf("values of x = %d and y = %d.\n", x, y); return 0; }
So what do you think how will it affect the program execution if you have used ‘;’ in the replacement text, for ex.,
#define PRINT printf("values of x = %d and y = %d.\n", x, y); /* ';' */
and then used #define PRINT in program and preprocessed it, output results as follows,
int main(void) { printf("values of x = %d and y = %d.\n", x, y);; return 0; }
If you notice the main() above, the extra ‘;’ doesn’t affect the program execution. But now consider,
#include <stdio.h> #define TRUE 1 #define PRINT printf("values of x = %d and y = %d.\n", x, y); int main(void) { if (TRUE) PRINT; else printf("Bye!\n"); return 0; }
Notic that extra ‘;’ causes compilation error in the program. So, it’s always a rule to use ‘;’ to terminate the symbolic constants in program and not in their definitions.
Let’s now consider #define macros, for ex.,
#define MUL(a,b) a * b
and use MUL(a,b) in program as,
int main(void)
{
printf(“product of 5 and 10 is %d\n”, MUL(5,6));
return 0;
}
Of course, output displays as
product of 5 and 10 is 50
Now, guess what’ll be the output if you use macor arguments as,
int main(void) { int x = 10, y = 10; printf("product of %d and %d is %d\n", x + 1, y + 1, MUL(x + 1, y + 1)); }
Output displays as,
product of 11 and 11 is 21
It’s not correct at all! Why? Where have execution gone wrong? Let’s explore this by substituting
macro MUL(x + 1, y + 1) by it’s definition in the printf() below,
printf("product of %d and %d is %d\n", x + 1, y + 1, MUL(x + 1, y + 1));
Preprocessor does substitution as follows,
printf("product of %d and %d is %d\n", x + 1, y + 1, x + 1 * y + 1);
And cause gets cleared! Firstly,
1 * y
multiplied, expression became
10 + 10 + 1
which resulted 21. This problem can be easily fixed by using parenthesis in macro declaration as
#define MUL(a,b) ((a) * (b))
Remember that every operand and entire arithmetic expression in macros definition should be properly parenthesised to avoid unexpected results because of adajacent operators in expressions, and within the macro definitions.
Actually, symbolic constants make programs easily maintainable by allowing them to be modified just at their place of declaration irrespective of size of program and how many different places have these been used in program! For ex.,
#include <stdio.h> #define SIZE 10 void disp(char []); int main() { char name[SIZE] = {'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'}; disp(name); return 0; } void disp(char name[]) { int i; printf("name is: "); for (i = 0; i < SIZE; i++) { printf("%c", name[i]); } printf("\n"); }
Notice that had symbolic constant SIZE even required to be modified, it would have to be updated at one place in the program irrespective of program size. Further, scope of #define symbols is throughout the program from the point of declaration.
Let’s come to function-like macros. Where do macros play role in C programs? We have already used MUL(a,b) macro to compute product of two values. What type of values were those? We had used integers there. Of course, we can use MUL(a,b) to compute product of any type of values, whether integers, floats, double, unsigned integers, etc. This means, macros are typeless! Well! We could rather use functions but then we should have declared functions, one for each type of arguments. In addition,
there’s overhead in function call and return. Therefore, macros are efficient over functions where they are very small in size.
Sanfoundry Global Education & Learning Series – 1000 C Tutorials. | http://www.sanfoundry.com/c-tutorials-define-directive-uses-program/ | CC-MAIN-2017-30 | refinedweb | 790 | 61.67 |
QML Syntax Basics
QML is a multi-paradigm language that enables objects to be defined in terms of their attributes and how they relate and respond to changes in other objects. In contrast to purely imperative code, where changes in attributes and behavior are expressed through a series of statements that are processed step by step, QML's declarative syntax integrates attribute and behavioral changes directly into the definitions of individual objects. These attribute definitions can then include imperative code, in the case where complex custom application behavior is needed.
QML source code is generally loaded by the engine through QML documents, which are standalone documents of QML code. These can be used to define QML object types that can then be reused throughout an application. Note that type names must begin with an uppercase letter in order to be declared as QML object types in a QML file.
Import Statements
A QML document may have one or more imports at the top of the file. An import can be any one of:
- a versioned namespace into which types have been registered (e.g., by a plugin)
- a relative directory which contains type-definitions as QML documents
- a JavaScript file
JavaScript file imports must be qualified when imported, so that the properties and methods they provide can be accessed.
The generic form of the various imports are as follows:
import Namespace VersionMajor.VersionMinor
import Namespace VersionMajor.VersionMinor as SingletonTypeIdentifier
import "directory"
import "file.js" as ScriptIdentifier
Examples:
import QtQuick 2.0
import QtQuick.LocalStorage 2.0 as Database
import "../privateComponents"
import "somefile.js" as Script
Please see the QML Syntax - Import Statements documentation for in-depth information about QML imports.
Object Declarations
Syntactically, a block of QML code defines a tree of QML objects to be created. Objects are defined using object declarations that describe the type of object to be created as well as the attributes that are to be given to the object. Each object may also declare child objects using nested object declarations.
An object declaration consists of the name of its object type, followed by a set of curly braces. All attributes and child objects are then declared within these braces.
Here is a simple object declaration:
This declares an object of type Rectangle, followed by a set of curly braces that encompasses the attributes defined for that object. The Rectangle type is a type made available by the
QtQuick module, and the attributes defined in this case are the values of the rectangle's
width,
height and
color properties. (These are properties made available by the Rectangle type, as described in the Rectangle documentation.)
The above object can be loaded by the engine if it is part of a QML document. That is, if the source code is complemented with import statement that imports the
QtQuick module (to make the Rectangle type available), as below:
When placed into a
.qml file and loaded by the QML engine, the above code creates a Rectangle object using the Rectangle type supplied by the
QtQuick module:
Note: If an object definition only has a small number of properties, it can be written on a single line like this, with the properties separated by semi-colons:
Obviously, the Rectangle object declared in this example is very simple indeed, as it defines nothing more than a few property values. To create more useful objects, an object declaration may define many other types of attributes: these are discussed in the QML Object Attributes documentation. Additionally, an object declaration may define child objects, as discussed below.
Child Objects
Any object declaration can define child objects through nested object declarations. In this way, any object declaration implicitly declares an object tree that may contain any number of child objects.
For example, the Rectangle object declaration below includes a Gradient object declaration, which in turn contains two GradientStop declarations:
import QtQuick 2.0 Rectangle { width: 100 height: 100 gradient: Gradient { GradientStop { position: 0.0; color: "yellow" } GradientStop { position: 1.0; color: "green" } } }
When this code is loaded by the engine, it creates an object tree with a Rectangle object at the root; this object has a Gradient child object, which in turn has two GradientStop children.
Note, however, that this is a parent-child relationship in the context of the QML object tree, not in the context of the visual scene. The concept of a parent-child relationship in a visual scene is provided by the Item type from the
QtQuick module, which is the base type for most QML types, as most QML objects are intended to be visually rendered. For example, Rectangle and Text are both Item-based types, and below, a Text object has been declared as a visual child of a Rectangle object:
import QtQuick 2.0 Rectangle { width: 200 height: 200 color: "red" Text { anchors.centerIn: parent text: "Hello, QML!" } }
When the Text object refers to its parent value in the above code, it is referring to its visual parent, not the parent in the object tree. In this case, they are one and the same: the Rectangle object is the parent of the Text object in both the context of the QML object tree as well as the context of the visual scene. However, while the parent property can be modified to change the visual parent, the parent of an object in the context of the object tree cannot be changed from QML.
(Additionally, notice that the Text object has been declared without assigning it to a property of the Rectangle, unlike the earlier example which assigned a Gradient object to the rectangle's
gradient property. This is because the children property of Item has been set as the type's default property to enable this more convenient syntax.)
See the visual parent documentation for more information on the concept of visual parenting with the Item type.
The syntax for commenting in QML is similar to that when processing QML code. They are useful for explaining what a section of code is doing, whether for reference at a later date or for explaining the implementation to others.
Comments can also be used to prevent the execution of code, which is sometimes useful for tracking down problems.
In the above example, the Text object will have normal opacity, since the line opacity: 0.5 has been turned into a. | https://doc.qt.io/archives/qt-5.10/qtqml-syntax-basics.html | CC-MAIN-2019-18 | refinedweb | 1,060 | 51.28 |
Windows Containers - How to Containerize an ASP.NET Web API Application in Windows using Docker
This post is about:
- You have or want to build an ASP.NET Web API app you want to run in a Windows Container
- You want to automate the building of an image to run that Windows-based ASP.NET Web API
Figure 1: How this post can help you
Prerequisite Post
This post can give you some background that I will assume you know for this post:
Background for APIs
Building APIs
Hardly a day goes by where you don't learn of some business opening up its offerings as a web service application programming interface (API). This is a hot topic among developers these days, regardless of whether you're a startup or a large enterprise. These APIs are typically offered as a series of interdependent web services.
Exposing an API through a Windows Container
So in this post I would like to get into the significance of the modern API and then get into a technical discussion about how you might build that out. In addition, I would like to address the use of a Docker container for hosting and running our API in the cloud.
The Programmable Web
The programmable web acts as a directory to all of the various APIs that are available.
Figure 2: The Programmable Web
Build and test locally - deploy to cloud and container
Windows Container Workflow
These will be the general steps we follow in this post.
Figure 3: Workflow for building and running Windows containers
Why containers are important
Perhaps the most important reason containerization is interesting is that it can increase deployment velocity. The main reason for this is that all of an application's dependencies are bundled up with the application itself.
Delivering the application with all its dependencies
Oftentimes dependencies are installed directly in a virtual machine. That means when applications are deployed to that virtual machine, there might be an impedance mismatch between the dependencies on the virtual machine and the ones the application was built against. Bundling the dependencies along with the application in a container minimizes this risk.
Our web application will be deployed with all its dependencies in a container
In our example we will bundle up a specific version of IIS and a specific version of the ASP.NET framework. Once we roll this out as a container, we have a very high degree of confidence that our web application will run as expected, since we will be developing and debugging with the same versions of IIS and ASP.NET.
We will use the ASP.NET MVC Web API to build out our HTTP-based service.
We will use Visual Studio to do so.
Starting with the new project in Visual Studio
We will use the ASP.NET MVC Web API to build out our RESTful API service.
Figure 4: Creating a new project with Visual Studio
Our API will leverage ASP.NET MVC Web API as you see below. Somewhat tangential, we will choose to store our diagnostic information up in the cloud, but that is orthogonal to this post.
Figure 5: Choosing ASP.NET Web Application
We will need to select a template below, which defines the type of application we wish to create.
- Web API
- Do NOT host in the cloud
- No Authentication
Figure 6: Choosing Web API, No Authentication, Do NOT Host in the cloud
We will ignore the notion of authentication for the purposes of this post.
Figure 7: No authentication
Our Visual Studio Solution Explorer should look like the following. You may choose a different solution name but you'll need to keep this in mind with later parts of the code, particularly with the Dockerfile.
Figure 8: Visual Studio Solution Explorer
The ValuesController.cs file contains the code that gets executed when HTTP requests come in.

Notice in the code snippet below that the various HTTP verbs (GET, POST, PUT, DELETE) map to methods.

When we issue a GET request to <web server>/api/values, for example, the Get() method below returns an array of strings: { "Run this ", "from a container" }.
public class ValuesController : ApiController
{
    public IEnumerable<string> Get()
    {
        return new string[] { "Run this ", "from a container" };
    }

    // GET api/values/5
    public string Get(int id)
    {
        return "value";
    }

    // POST api/values
    public void Post([FromBody]string value)
    {
    }

    // PUT api/values/5
    public void Put(int id, [FromBody]string value)
    {
    }

    // DELETE api/values/5
    public void Delete(int id)
    {
    }
}
Figure 9: Opening the ValuesController.cs file
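The verb-to-method mapping above can also be seen from the client's side. The sketch below stands up a tiny local stand-in for the service (not IIS or the container, just enough to mirror the Get() contract) and issues the GET request a browser would:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

PAYLOAD = ["Run this ", "from a container"]  # what Get() returns

class ValuesStub(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/values":  # GET api/values -> Get()
            body = json.dumps(PAYLOAD).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ValuesStub)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/api/values" % server.server_port
values = json.loads(urlopen(url).read().decode())
print(values)  # ['Run this ', 'from a container']
server.shutdown()
```

Against the real deployed container, the same urlopen call would simply target the VM's public IP instead of the local stub.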
Code to modify
Modify the strings you see in the red box to match. These are the strings that get returned to the browser in response to the HTTP request.
Figure 10: modifying the Get() method
Running the solution locally
Hit the F5 key or go to the Debug menu and select Start Debugging.

In this case we are running locally on my laptop; when I ran the project within Visual Studio, it automatically routed me to localhost with the corresponding port.
Figure 11: The view from inside the browser
Provisioning a Windows Server Docker host, Building the image, and running image as container in cloud
Figure 12: Remaining Work
There are a few things that we need to do before we are finished. The first is that we will need a host for our container application. Since the main purpose of this post is to demonstrate how to host a Windows application, we will provision a container-capable Windows Server 2016 Docker host.

From there we will begin the work of building an image that contains the web application we just finished. A few artifacts are required for this, such as a Dockerfile, some PowerShell, and a docker build command.

Let's get started: go to the Azure portal and provision a Windows Server 2016 virtual machine that is capable of running Docker containers. It is currently in Technical Preview 5.
Provision Windows 2016 Docker Host at Portal
Navigate to the Azure portal. This is where we will provision a Windows Server virtual machine that is capable of hosting our Docker containers.
Figure 13: Provisioning Windows server 2016
Be sure to select the version that can support containers.
Figure 14: Containerized Windows server
Entering the basic information about your Windows virtual machine.
Figure 15: Naming your VM
Selecting a virtual machine with two cores and seven GBs of RAM.
Figure 16: Choosing the hardware footprint
It's important to remember to modify the virtual network configuration. Only two address spaces are supported for Docker functionality in Windows Server 2016 Technical Preview 5:

- 192.x.x.x
- 10.x.x.x
Figure 17: Specifying network configuration
Notice that in this case I selected the 10.1.x.x network.
Figure 18: Entering the address space for the subnet
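A quick way to sanity-check a planned address space against the two ranges called out above. This is a small sketch, not an official Azure validation API; the two permitted ranges simply mirror this post's TP5 note:

```python
import ipaddress

# The two address spaces this post's TP5 note calls out.
SUPPORTED = (ipaddress.ip_network("10.0.0.0/8"),
             ipaddress.ip_network("192.0.0.0/8"))

def subnet_supported(cidr):
    """True when every address in `cidr` falls inside a supported range."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(allowed) for allowed in SUPPORTED)

print(subnet_supported("10.1.0.0/16"))    # True  (the subnet picked above)
print(subnet_supported("172.16.0.0/16"))  # False (outside both ranges)
```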
Dockerfile and Docker Build
Once we create this virtual machine in Azure, we will connect to it. From there we will create a "Dockerfile," which is nothing more than a text file that contains instructions on how we wish to build our image. The instructions in the Dockerfile begin by downloading a base image from Docker Hub, which is a central repository for both Linux and Windows-based images. The Dockerfile then continues with the deployment process for our MVC app: it installs some compilation tools, compiles our MVC app, and copies the output over to the web server directory of the image (c:\inetpub\wwwroot). After the build process is done, we will have an image that contains all the necessary binaries to run our MVC Web API app.
Additional Guidance
I borrowed some of the guidelines from Anthony Chu:
Fixing some bugs
I also ran into some bugs that can be easily fixed. Once your container is running, it may not be reachable by client browsers outside of Azure. To fix this problem we will use the following PowerShell command, built around Get-NetNatStaticMapping:
Get-NetNatStaticMapping | ? ExternalPort -eq 80 | Remove-NetNatStaticMapping
So now we will begin the process of provisioning a container-enabled version of Windows Server 2016. Begin by going to the Azure portal and clicking on the +. From there, type Windows 2016 into the search text box.
Figure 19: Provisioning a Windows virtual machine in Azure
You will then see the ability to choose Windows Server 2016 with Containers Tech Preview 5.
Figure 20: Searching for the appropriate image
It is now time to copy our source code to the Windows Server 2016 virtual machine running in Azure. Go to the local directory on the laptop where you are developing your ASP.NET MVC Web API application.

From there we will remotely connect to the virtual machine running in Azure. The next goal will be to copy our MVC application, along with all of its source code, to this running Windows Server virtual machine in Azure.
Figure 21: Remotely connecting to the Windows 2016 server
Figure 22: Copying our entire project from the local laptop used to develop the MVC web app
Now that we have the project in the clipboard, the next step is to go back to our Windows Server running in the cloud and paste it into a folder we create. We will call that folder docker for simplicity's sake.
When working with Docker and containerization, most of your work happens at the command line. In the Linux world, we typically work in a bash environment, while on Windows we simply use either a command prompt or PowerShell.

So let's open PowerShell.
Figure 23: Start the Powershell Command Line
We will create a docker directory in which we will place our work.
Figure 24: Command line to make a directory
Let's be clear that you are pasting into the Windows server 2016 virtual machine running in Azure.
Figure 25: Paste in your application and supporting code
The code below provides some interesting ways for us to deploy our MVC application:

- The base image will be a Windows image with IIS pre-installed. Starting with this base image saves us the time of installing Internet Information Services ourselves.
- We install the Chocolatey tools, which let you install Windows programs from the command line very easily.
- Because we will compile our MVC application prior to deploying it into the image, the next section installs the build tooling.
- A build directory is created and files are copied into it so that the build process can take place in its own directory.
- NuGet packages are restored, a build takes place, and the output files are copied to c:\inetpub\wwwroot.
You will need to pay particular attention to the application name below, AzureCourseAPI.sln, and the related dependencies. You will obviously need to modify this for the name of your project.
# TP5 for technology preview (will not be needed when we go GA)
# FROM microsoft/iis
FROM microsoft/iis:TP5

# Install Chocolatey (tools to automate commandline compiling)
ENV chocolateyUseWindowsCompression false
RUN @powershell -NoProfile -ExecutionPolicy unrestricted -Command "(iex ((new-object net.webclient).DownloadString(''))) >$null 2>&1" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

# Install build tools
RUN powershell add-windowsfeature web-asp-net45 \
    && choco install microsoft-build-tools -y --allow-empty-checksums -version 14.0.23107.10 \
    && choco install dotnet4.6-targetpack --allow-empty-checksums -y \
    && choco install nuget.commandline --allow-empty-checksums -y \
    && nuget install MSBuild.Microsoft.VisualStudio.Web.targets -Version 14.0.0.3 \
    && nuget install WebConfigTransformRunner -Version 1.0.0.1

RUN powershell remove-item C:\inetpub\wwwroot\iisstart.*

# Copy files (temporary work folder)
RUN md c:\build
WORKDIR c:/build
COPY . c:/build

# Restore packages, build, copy
RUN nuget restore \
    && "c:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe" /p:Platform="Any CPU" /p:VisualStudioVersion=12.0 /p:VSToolsPath=c:\MSBuild.Microsoft.VisualStudio.Web.targets.14.0.0.3\tools\VSToolsPath AzureCourseAPI.sln \
    && xcopy c:\build\AzureCourseAPI\* c:\inetpub\wwwroot /s

# NOT NEEDED ANYMORE -> ENTRYPOINT powershell .\InitializeContainer
Dockerfile
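Before running docker build, it can save a failed build to confirm the files the Dockerfile expects are actually present in the build context. A small hedged helper follows; the file names mirror this post's project and are assumptions, so adjust them for yours:

```python
import os
import pathlib
import tempfile

# File names from this post's project; yours will likely differ.
REQUIRED = ["Dockerfile", "AzureCourseAPI.sln"]

def missing_from_context(path, required=REQUIRED):
    """Return the required files that are not present in the build context."""
    return [name for name in required
            if not os.path.exists(os.path.join(path, name))]

# Demo with a throwaway directory containing only a Dockerfile:
ctx = tempfile.mkdtemp()
pathlib.Path(ctx, "Dockerfile").touch()
print(missing_from_context(ctx))  # ['AzureCourseAPI.sln']
```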
InitializeContainer gets executed at the end. The web.config file needs to be transformed once our app gets deployed.
If (Test-Path Env:\ASPNET_ENVIRONMENT)
{
    \WebConfigTransformRunner.1.0.0.1\Tools\WebConfigTransformRunner.exe \inetpub\wwwroot\Web.config "\inetpub\wwwroot\Web.$env:ASPNET_ENVIRONMENT.config" \inetpub\wwwroot\Web.config
}

# prevent container from exiting
powershell
InitializeContainer
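The selection logic in the script above (pick a Web.<environment>.config transform based on the ASPNET_ENVIRONMENT variable) can be sketched as follows; the file names are illustrative:

```python
def transform_config_name(env):
    """Name of the web.config transform to apply, or None if no env is set."""
    environment = env.get("ASPNET_ENVIRONMENT")
    if environment is None:
        return None  # the script above skips the transform in this case
    return "Web.%s.config" % environment

print(transform_config_name({"ASPNET_ENVIRONMENT": "Release"}))  # Web.Release.config
print(transform_config_name({}))                                 # None
```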
Docker Build
At this point we are ready to begin the building of our image.
docker build -t docker-demo .
Docker build
The syntax for the docker build command.
Click image for full size
Figure 26: The docker build command
The next step is to build the image using the docker build command as seen below.
Click image for full size
Figure 27: The docker build command continued...
The docker run command takes the name of our image and runs it as a container.
Docker Run
docker run -d -p 80:80 docker-demo
Docker run
Getting ready to test our running container
There are a few more things to do before we can test our container properly. The first thing we need to do is open up port 80 on the Windows Server virtual machine running in Azure. By default, everything is locked down.
Click image for full size
Figure 28: Public IP address from the portal
Network security groups are the mechanism by which we can open and close ports. A network security group can contain one or more rules. We are adding a rule to open up port 80 below.
Click image for full size
Figure 29: Opening up Port 80
We are now ready to navigate to the public IP address, as indicated in the figure, Public IP address from the portal. The default home page is displayed.
Click image for full size
Figure 30: Home Page for Web Site
The real goal of this exercise is to make an API call to a RESTful endpoint that returns some JSON data. Notice that in the browser we can see the appropriate JSON data being returned.
Click image for full size
Figure 31: JSON Data from API Call
Conclusion
This post demonstrated the implementation of an ASP.NET MVC Web API application running in a Windows container. Interestingly, there is also support for this type of application in a Linux-based container, but that is reserved for a future post. In addition, there will be a forthcoming Windows Nano Server implementation, which will be a much lighter version than what we saw here in this post.
Hopefully, this post provided some value, as some of this was difficult to discover and write about. I welcome your comments below.
Troubleshooting Guidance (orthogonal to this post)
Below are some commands to help you better troubleshoot issues that might arise.
docker inspect docker-demo
This command can tell you about your running container.
[ { "Id": "sha256:27cdd74ae5d66bb59306c62cdd63cd629da4c7fd77d7a9efbf240d0b4882ead7", "RepoTags": [ "docker-demo:latest" ], "RepoDigests": [], "Parent": "sha256:50fcbe5e3653b3ea65d4136957b4d06905ddcb37bf46c4440490f885b99c38dd", "Comment": "", "Created": "2016-10-04T03:36:32.8079573Z", "Container": "98190701562a0a70b100e470f8244d203afaa68cb4ccb64c42ba5bee10817934", "ContainerConfig": { "Hostname": "2ac70997c0f2", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "chocolateyUseWindowsCompression=false" ], "Cmd": [ "cmd", "/S", "/C", "#(nop) ", "ENTRYPOINT [\"cmd\" \"/S\" \"/C\" \"powershell .\\\\InitializeContainer\"]" ], ": {} }, "DockerVersion": "1.12.1", "Author": "", "Config": { "Hostname": "2ac70997c0f2", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "chocolateyUseWindowsCompression=false" ], "Cmd": null, ": {} }, "Architecture": "amd64", "Os": "windows", "Size": 8650981554, "VirtualSize": 8650981554, "GraphDriver": { "Name": "windowsfilter", "Data": { "dir": "C:\\ProgramData\\docker\\windowsfilter\\498f5114b4972b7a19e00c3e7ac1303ad28addd774d6c7b949e9955e2147950e" } }, "RootFS": { "Type": "layers", "Layers": [ "sha256:72f30322e86c1f82bdbdcfaded0eed9554188374b2f7f8aae300279f1f4ca2cb", "sha256:23adcc284270a324a01bb062ac9a6f423f6de9a363fcf54a32e3f82e9d022fc4", "sha256:fbb9343bb3906680e5f668b4c816d04d1befc7e56a284b76bc77c050dfb04f1f", "sha256:ad000fd14864d0700d9b0768366e124dc4c661a652f0697f194cdb5285a5272c", "sha256:8b6bfce4717823dfde8bde9624f8192c83445a554adaec07adf80dc6401890ba", "sha256:8ff4edf470318e6d6bce0246afc6b4cb6826982cd7ef3625ee928a24be048ad8", "sha256:1852364f9fd5c7f143cd52d6103e3eec5ed9a0e909ff0fc979b8250d42cf56bd", "sha256:08325b3804786236045a8979b3575fd8dcd501ff9ca22d9c8fc82699d2c045ad", 
"sha256:7a7f406dcbae5fffbbcd31d90e86be62618e4657fdf9ef6d1af75e86f29fcd19", "sha256:d2d8dc7b30514f85991925669c6f829e909c5634204f2eaa543dbc5ceb811d29", "sha256:da0607f92811e97e941311b3395bb1b9146d91597ab2f21b2e34e503ad57e73f", "sha256:0937ca7b5cbb9ec4a34394c4342f7700d97372ea85cec6006555f96eada4d8c3" ] } } ]
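When you only need a single field out of that JSON, docker inspect's --format flag can extract it directly (docker-demo here is the image we built above):

```shell
# Pull individual fields out of the inspect JSON
docker inspect --format "{{.Os}}" docker-demo   # -> windows
docker inspect --format "{{.Id}}" docker-demo
```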
netstat -ab | findstr ":80"
netstat displays information about network connections for the Transmission Control Protocol (both incoming and outgoing), routing tables, and a number of network interface (network interface controller or software-defined network interface) and network protocol statistics; piping the output through findstr ":80" filters it down to entries on port 80.
Click image for full size
Figure 32: netstat output filtered for port 80
I've just started learning C++ so I am at the basics. Can anyone help me with my code? It's an excercise from my book and I can't seem to get it to do what I want. The calculategrade function has to return the char grade to main and then display. I can't figure out where I'm going wrong.
Code:
#include<iostream>
using namespace std;

void getScore(int &score);
calculateGrade(int score);

int main()
{
    float grade;
    int courseScore;

    cout<<"Line 1: Based on the course score, this program "
        <<"computes the course grade."<<endl;

    getScore(courseScore);
    calculateGrade(courseScore);

    cout<<"Line 7: Your grade for the course is: "<<grade<<endl;

    return 0;
}

void getScore(int &score)
{
    cout<<"Line 4: Enter course score-> ";
    cin>>score;
    cout<<endl<<"Line 6: Course score is: "<<score<<endl;
}

calculateGrade(int score)
{
    float grade;

    if(score >= 90)
        grade = static_cast<char>('65');
    else if(score >= 80)
        grade = static_cast<char>('66');
    else if(score >= 70)
        grade = static_cast<char>('67');
    else if(score >= 60)
        grade = static_cast<char>('68');
    else
        grade = static_cast<char>('70');

    return(grade);
}
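For what it's worth, a minimal corrected sketch of calculateGrade (assuming the intent is to return a letter grade as a char; main would then need something like char grade = calculateGrade(courseScore); before printing):

```cpp
// Corrected sketch: the function needs an explicit return type (char),
// and ordinary character literals like 'A' are clearer and safer than
// casting multi-character literals such as '65'.
char calculateGrade(int score)
{
    if (score >= 90)
        return 'A';
    else if (score >= 80)
        return 'B';
    else if (score >= 70)
        return 'C';
    else if (score >= 60)
        return 'D';
    else
        return 'F';
}
```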
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2a1pre) Gecko/20090719 Minefield/3.6a1pre
Build Identifier:
Extension of the CSS3 Transforms; this looks very interesting.
Apple implemented it in its iPhone 2 and is in the process of adding it to Safari ( )
Reproducible: Always
This is now working in the Webkit nightlies, which means that it will "soon" be implemented in Safari and Google Chrome.
Safari has this on Mac but not on Windows. Chrome doesn't have this on any platform.
What is the reason that Webkit has for making it Mac only as of now? Might the same thing limit our implementation?
According to this: it is now implemented on Windows.
Very much WIP patch queue at
Created attachment 459283 [details] [diff] [review]
Part 1: Fix OpenGL container layer to support transforming children
Created attachment 459284 [details] [diff] [review]
Part 2: Make nsDisplayTransform create a Container layer
Creates a ContainerLayer for nsDisplayTransform objects so rendering of them can be accelerated when an accelerated backend is enabled.
This gives me noticeable improvements in rendering (within a MakeGoFaster window) - cc'ing Aza.
Will push these to try once I figure out how.
Looks like this failed a few tests on try, will debug tomorrow and update.
I think you need to do what I did for opacity: add a new style change hint to nsChangeHint, report that change hint when the transform changes, mark the frame active, etc. See changeset d98f8a21727e and d290d2b97416.
Created attachment 459742 [details] [diff] [review]
Part 2 v2: Layerify nsDisplayTransform
Fixed reftests failures.
Fixed roc's suggestions to get layers timing out working correctly.
Everything looks good except
+ visibleRect.x = 0;
+ visibleRect.y = 0;
+ visibleRect.width = INT_MAX;
+ visibleRect.height = INT_MAX;
This isn't really right, the visible rect might need to extend to negative coordinates.
If nsLayoutUtils::GfxRectToIntRect fails, it's because the visible rect is nearly infinite ... i.e., the transform has scaled the content down to an incredibly small size. In fact, the content will be shrunk to be basically invisible. Is that right? In that case, why not just set the visible rect to empty?
Seems to me that would work in most cases, except for situations where you have one extreme transform with a child element with another extreme transform that cancels out the first one.
So another thing we might want to do is to teach FrameLayerBuilder to use an internal scale factor (or possibly horizontal and vertical scale factors). For example, if we have a transform that scales everything up by 2, we could make all the child layers work at twice the resolution, and draw into the ThebesLayers with a scale applied to the context. This would give better quality results too. We would have to use some heuristics so we didn't change the scale factor too often, so we don't have to rerender content too often. We'd still have this problem of extreme visibleRects for video and maybe canvas and other leaf non-Thebes layers, but we can make those invisible using the first approach with no bad consequences.
If this makes sense, I suggest we go with the empty-rect approach for now, and file a bug to implement the second approach.
Created attachment 461956 [details] [diff] [review]
Part 2 v3: Layerify nsDisplayTransform
Use an empty visible rect for invalid transforms - Try server says yes.
Created attachment 461957 [details] [diff] [review]
Part 3: Convert nsStyleTransformMatrix to be backed by a 4x4 matrix
No API changes to the class, just reworked internals in preparation
Created attachment 461958 [details] [diff] [review]
Part 4: Upgrade gfx3DMatrix
Created attachment 461959 [details] [diff] [review]
Part 5: Use gfx3DMatrix in layout
This changes larges amounts of layout to directly use gfx3DMatrix wherever possible.
This more or less concludes the preparation patches for 3d transforms.
Comment on attachment 461956 [details] [diff] [review]
Part 2 v3: Layerify nsDisplayTransform
Need dbaron review on style system changes
Comment on attachment 461956 [details] [diff] [review]
Part 2 v3: Layerify nsDisplayTransform
r=dbaron on the style system changes
Checked in part 2:
Could this have caused bug 584494?
Yes, I'll look into that today.
Just a little curiosity on my part here - how likely is this to make it into Firefox 4?
I'm hopeful! Slowly crossing things off my list of things to implement and bugs to fix. The end is in sight!
It might be worth starting to request code reviews on the parts that you think are done.
Pushed part 1 to mozilla-central:
Comment on attachment 461958 [details] [diff] [review]
Part 4: Upgrade gfx3DMatrix
I would strongly suggest that we use an externally-developed and tested matrix library instead of growing our own.
If you want something minimalist that fits in one file and has code that's very easy to understand, use CImg:
If you want to know what I'd really recommend, it's Eigen (disclaimer, I'm a contributor to it):
This doesn't seem required for the OpenGL reftest bug - probably a mistake?
webkit's implementation also includes a media query: @media (-webkit-transform-3d)
This comes in rather handy for feature detection as checking something like
'webkitPerspective' in document.body.style
will false positive in recent webkit versions without the graphics-side support.
I know with Firefox's multitouch it also exposes a similar -moz-touch-enabled media query.
Are you planning to include a mediaQuery for your 3D implementation, not only for feature detection but for better style scoping for authors?
Including a media query sounds like a good idea.
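As a sketch of how such a query would let authors scope 3D styles (the -moz-transform-3d feature name here is hypothetical, mirroring WebKit's):

```css
/* Flat fallback for engines without 3D transform support */
.card { background: #ccc; }

/* Only engines with graphics-side 3D support match these queries */
@media (-webkit-transform-3d), (-moz-transform-3d) {
  .card {
    -webkit-transform: rotateY(40deg);
    -moz-transform: rotateY(40deg);
  }
}
```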
is this dead ?
could it reach firefox 4?
Created attachment 525557 [details] [diff] [review]
Part 3: Convert nsStyleTransformMatrix to be backed by a 4x4 matrix v2
Fixed bitrot.
Created attachment 525558 [details] [diff] [review]
Part 4: Upgrade gfx3DMatrix v2
Fixed more bitrot.
Created attachment 525561 [details] [diff] [review]
Part 5: Use gfx3DMatrix in layout v2
More of the same.
+ UntransformRect(mVisibleRect, mFrame, ToReferenceFrame(), &untransformedVisible);
What if this fails? We need at least a comment explaining what happens if we can't untransform.
+ if (disp->mTransform.IsFlat() &&
Let's call this PreservesAxisAlignedRectangles.
+ nsRect untransformedVisible;
+ UntransformRect(mVisibleRect, mFrame, ToReferenceFrame(), &untransformedVisible);
Again, probably need a comment explaining what happens for a singular matrix.
+ if (matrix.IsSingular() || !matrix.Is2D())
+ return PR_FALSE;
It's probably faster to check Is2D and get the 2D matrix, then check whether the 2D matrix is invertible. Maybe add a method Is2DAndInvertible?
+ static gfx3DMatrix GetResultingTransformMatrix(const nsIFrame* aFrame,
const nsPoint& aOrigin,
float aFactor,
const nsRect* aBoundsOverride = nsnull);
Fix indent.
+nsStyleTransformMatrix::IsFlat() const
+{
+ if (_12 == 0.0f && _13 == 0.0f && _14 == 0.0f &&
+ _21 == 0.0f && _23 == 0.0f && _24 == 0.0f &&
+ _31 == 0.0f && _32 == 0.0f && _33 == 1.0f &&
+ _34 == 0.0f && _43 == 0.0f && _44 == 1.0f)
+ return PR_TRUE;
Should probably just call Is2D and then gfxMatrix::PreservesAxisAlignedRectangles.
Comment on attachment 525558 [details] [diff] [review]
Part 4: Upgrade gfx3DMatrix v2
Please use #include <algorithm> instead of nsAlgorithm
Progress update:
I now have a large part of the spec completed and working, with the GL layers backend.
The remaining ToDo list (in approximate order) is:
1) Get the patches cleaned up and ready to land (pref'd off) in a way that won't break unsupported configurations
2) Tests tests tests
3) Add -moz-perspective-origin and a z component to -moz-transform-origin
4) D3D9/10 Backend support
5) Software backend support
6) -moz-transform-style
7) DOM interfaces
8) Transitions/Animations
Comment on attachment 525557 [details] [diff] [review]
Part 3: Convert nsStyleTransformMatrix to be backed by a 4x4 matrix v2
So I haven't dug into this in a whole lot of detail yet, but I do definitely know one major thing that's wrong, and I may as well tell you about that sooner rather than later. (It could even be causing bugs that you're observing...)
The whole mX/mY business is to handle percentage values of translateX() and translateY(), which are percentages relative to the width and height of the element. At the time we compute the style data, we don't know the element's width and height -- that gets computed during layout. So the basic idea of what we've done for 2D is record three separate matrices: the first is the part that's fixed (mMain and mDelta), the second is to be multiplied by the element's width (mX), and the third to be multiplied by the element's height (mY).
In 2-D transforms, the only way to introduce a height-relative transform is in the Y translation component, the only way to introduce a width-relative transform is in the X translation component, and, importantly, the only other matrix components that the operators in 2-D transforms allow the X translation and Y translation components to be copied to are each other (via skew or rotate).
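In matrix form, the scheme described above effectively stores the computed transform as

```latex
M(w, h) = M_{\mathrm{main}} + w\, M_x + h\, M_y
```

where $M_x$ and $M_y$ are zero outside the X- and Y-translation cells, so the final matrix can be assembled once the element's width $w$ and height $h$ are known at layout time.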
That reminds me of a second issue, which I'm less sure about what should happen: the issue of units. WebKit's publication of the transforms spec sort of papered over the fact that CSS doesn't have a "primary" unit type for lengths -- lengths can have different units. In the 2-D transform matrix, the components of the transform matrix technically have the following units:
[ unitless unitless length ]
[ unitless unitless length ]
[ 1/length 1/length unitless ] (but these are fixed 0 0 1)
In the 3-D transform matrix, the units really look like this:
[ unitless unitless length/Z length ]
[ unitless unitless length/Z length ]
[ Z/length Z/length unitless Z ]
[ 1/length 1/length 1/Z unitless ]
It's not clear to me that we want to continue storing those two "length" values as nscoord while storing all the rest as floats, especially given the variety of units in this matrix. But we do certainly need to be careful about units and what conversions are expected. And 3-D transforms do introduce the new problem (not present in 2-D transforms) that there's an assumption that CSS pixels are the canonical unit of length and that there's some unnamed canonical unit of Z distance.
(In reply to comment #36)
>.
And, in particular, the fact that 3-D transforms has the arbitrary 16 value matrix3d() function means that you could end up with width-relative and height-relative values in all 16 components of the matrix. (With 2-D transforms, the two cells that were always 0 prevented them from spreading through most of the matrix.)
Thanks David, I should have spotted that.
This is pushing the limit of my matrix knowledge, is there any way we can represent all the information we need, similar to what the current code is doing?
From what I can see, once the relative data has spread across the matrix there will be no easy way to store everything.
We could keep a linked list of matrix objects and only multiply them together once we can evaluate the length/width values, but this would require heap allocations and feels somewhat ugly.
The current approach is still quite doable: you just need 48 components total: 16 for the "constant", 16 for the width-relative, and 16 for the height-relative.
Er, actually, no, it's not, since you can have quadratic terms now too.
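A sketch of why quadratic terms appear: once two factors of the matrix product each carry a width-relative part, multiplying them out produces a $w^2$ term that no fixed linear decomposition into constant, width-relative, and height-relative matrices can represent:

```latex
(A_0 + w\,A_1)(B_0 + w\,B_1) = A_0 B_0 + w\,(A_0 B_1 + A_1 B_0) + w^2\, A_1 B_1
```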
So I think we'll just need to change to storing the list of transforms in computed style and only computing them into a (16-term) matrix once we know the width. We probably then want to cache the matrix in a frame property.
And, actually, we *already* store the specified transform as nsStyleDisplay::mSpecifiedTransform (since we already need it for animations), so we'd just need to stop storing nsStyleDisplay::mTransform. Then we'd need to compute the transform later on (maybe in nsDisplayTransform? I don't think we should need it at any stage before display lists), and then possibly cache it somewhere (if we do, we'll also need to clear that cache appropriately -- but I'm actually thinking that caching is probably unnecessary). We largely wouldn't need nsStyleTransformMatrix anymore, except we might want to keep it around as a bunch of static methods that produce a gfx3DMatrix from the transform function data.
Or something like that -- it depends on what works out well.
We need to know the transform during reflow to calculate the right overflow area in FinishAndStoreOverflow.
Of course, we have the width and height there. So a utility method to compute the final transform would be fine. I agree we don't even need to cache it on the frame, just cache it in the nsDisplayTransform.
Also, since you're redoing much of the code for interpolation of transforms anyway, you should read
(a proposal on how to do it better than what's currently in the spec).
Created attachment 530273 [details] [diff] [review]
Part 3: Convert nsStyleTransformMatrix to be backed by a 4x4 matrix v3
Huge rewrite to keep transforms as a list of transform functions until the bounds information becomes available.
I'm not sure if the test for transform changed is suitable and I'm getting a few transform reftest failures (anti-aliasing differences) that I think are caused by this being hit and forcing active (and GPU composited) layers.
We also probably want to cache the gfxMatrix inside nsDisplayTransform; how can I tell when the transform and/or bounds have changed so that we can dump the cached value?
This passes all reftests from transforms/ and css-transitions/ (aside from the above mentioned compositing differences), but I still need to check animations tests.
(In reply to comment #45)
> We also probably want to cache the gfxMatrix inside nsDisplayTransform, how can
> I tell when the tranform and/or bounds have changed so that we can dump the
> cached value?
Display list items are recreated every time we paint (or hit-test, or whatever) and styles and layout can't change during that time, so there's no problem.
Is this going to land in Aurora for Firefox 5 or Firefox 6? hmmm..
Created attachment 530964 [details] [diff] [review]
Part 3: Convert nsStyleTransformMatrix to be backed by a 4x4 matrix v4
Fixed crashes and quite a few test failures running test_transitions_per_property.html.
How do we handle computing distance with non-matching transform lists? I'm a bit stuck on how to defer this calculation until the bounds information is available.
We still have 17 failures in the test, 7 are rounding issues (greater than round_error_ok allows), the other 10 are distance calculations failing (since the code for them doesn't do anything).
This is on top of the above mentioned anti-aliasing failures with reftests.
Created attachment 537013 [details] [diff] [review]
Part 3: Convert nsStyleTransformMatrix to be backed by a 4x4 matrix v5
This now passes all tests on tryserver.
I had to disable all tests that attempted to compute a distance with non-matching transform lists (as suggested).
Created attachment 537014 [details] [diff] [review]
Part 4: Upgrade gfx3DMatrix v3
Fixed review comment and moved most implementations into a new gfx3DMatrix.cpp file.
Carrying forward r=jrmuizel.
Created attachment 537016 [details] [diff] [review]
Part 5: Use gfx3DMatrix in layout v3
Rebased and hopefully fixed review comments.
Note that we should never actually get any 3d transforms with this current code, everything should pass Is2D().
Later patches add 3d transforms, and also implement proper untransforming of these.
Created attachment 537017 [details] [diff] [review]
Part 6: Implement the 3d -moz-transform functions
Adds the 3d transform functions to nsStyleTransformMatrix.
Needs parts 7 and 8 to function correctly, just broken up for easier review.
Created attachment 537018 [details] [diff] [review]
Part 7: Layers support for 3d transforms
Add support in layers for 3d transforms.
We need to adjust the z scale of the viewport transform to 0 because all values must be in the -1 to 1 range, and we have an infinite far plane.
Created attachment 537019 [details] [diff] [review]
Part 8: Add ray tracing to untransform 2d points on a 3d plane
Let us actually untransform 3d points wherever possible.
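The idea behind the ray tracing can be sketched as follows (illustrative code, not the gfx3DMatrix API; Matrix4 is a stand-in): a point (u, v, 0, 1) on the element's plane maps through the 4x4 transform to homogeneous screen coordinates, so recovering (u, v) from a screen point (x, y) reduces to a 2x2 linear system.

```cpp
#include <cmath>

// Illustrative stand-in for a 4x4 row-major transform matrix.
struct Matrix4 { float m[4][4]; };

// Untransform screen point (x, y) back onto the element's z = 0 plane.
// Returns false when the plane is edge-on to the viewer (singular system).
bool UntransformPoint(const Matrix4& t, float x, float y,
                      float* u, float* v)
{
    // (u, v, 0, 1) maps to homogeneous (X, Y, Z, W); on screen x = X/W
    // and y = Y/W. With z = 0 the third column drops out, leaving a
    // 2x2 linear system a*u + b*v = e, c*u + d*v = f:
    float a = t.m[0][0] - x * t.m[3][0];
    float b = t.m[0][1] - x * t.m[3][1];
    float c = t.m[1][0] - y * t.m[3][0];
    float d = t.m[1][1] - y * t.m[3][1];
    float e = x * t.m[3][3] - t.m[0][3];
    float f = y * t.m[3][3] - t.m[1][3];
    float det = a * d - b * c;
    if (std::fabs(det) < 1e-12f)
        return false;
    // Cramer's rule.
    *u = (e * d - b * f) / det;
    *v = (a * f - e * c) / det;
    return true;
}
```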
With the pref (layout.3d-transforms-enabled) off (as it is by default), we pass all tryserver tests with this patch queue.
Parts 9 - 12 are still waiting on tryserver results, and possibly more fixes.
I plan to land this as soon as possible to try catch any regressions, and prevent them from being bitrotted again.
Comment on attachment 537016 [details] [diff] [review]
Part 5: Use gfx3DMatrix in layout v3
Review of attachment 537016 [details] [diff] [review]:
-----------------------------------------------------------------
Comment on attachment 537018 [details] [diff] [review]
Part 7: Layers support for 3d transforms
Review of attachment 537018 [details] [diff] [review]:
-----------------------------------------------------------------
Comment on attachment 537019 [details] [diff] [review]
Part 8: Add ray tracing to untransform 2d points on a 3d plane
Review of attachment 537019 [details] [diff] [review]:
-----------------------------------------------------------------
::: gfx/thebes/gfx3DPoint.h
@@ +101,5 @@
> + *this /= Length();
> + }
> +};
> +
> +#endif /* GFX_3DPOINT_H */
\ No newline at end of file
Does it make sense to make a Base3DPoint class like BasePoint, and instantiate it here for gfx3DPoint? I think it probably does.
Created attachment 538995 [details] [diff] [review]
Part 8: Add ray tracing to untransform 2d points on a 3d plane v2
Renamed gfx3DPoint to gfx3DVector (I think it makes more sense given the operations we use it for).
Added Base3DVector.
Created attachment 538996 [details] [diff] [review]
Part 9 - Implement the perspective() transform function and style property.
Created attachment 538997 [details] [diff] [review]
Part 10 - Implement -moz-backface-visible
Created attachment 538998 [details] [diff] [review]
Part 11 - Make -moz-transform-origin also support a z component.
Created attachment 538999 [details] [diff] [review]
Part 12 - Implement -moz-perspective-origin.
All green on tryserver up to this point (with the pref disabled).
Took a guess at appropriate reviewers for each piece, happy for anyone to change them.
Comment on attachment 538995 [details] [diff] [review]
Part 8: Add ray tracing to untransform 2d points on a 3d plane v2
Review of attachment 538995 [details] [diff] [review]:
-----------------------------------------------------------------
Let's break this patch up into the 3DVector stuff, the matrix changes, and the layout changes.
For Base3DVector, "3D" should be last in the name for consistency with Azure and because we can't use 3DVector as a class name.
Also, per discussion on IRC this should be BasePoint3D. We'll have to review it carefully to keep the API as consistent with BasePoint as makes sense.
::: gfx/src/Base3DVector.h
@@ +124,5 @@
> + return sqrt(x*x + y*y + z*z);
> + }
> +
> + void Normalize() {
> + *this /= Length();
What if Length() is zero? At least document what the behavior is.
Comment on attachment 538998 [details] [diff] [review]
Part 11 - Make -moz-transform-origin also support a z component.
Review of attachment 538998 [details] [diff] [review]:
-----------------------------------------------------------------
r=me on everything outside of layout/style, needs dbaron review for the style changes
::: layout/style/nsCSSParser.cpp
@@ +7524,5 @@
> + if (!ParseBoxPositionValues(position, PR_TRUE))
> + return PR_FALSE;
> +
> + PRBool allow3D =
> + mozilla::Preferences::GetBool("layout.3d-transforms.enabled", PR_FALSE);
I think we should cache this somewhere. Use Preferences::AddBoolVarCache.
Maybe you can break out the style system changes in parts 11 and 12 into separate patches from the code that actually uses the new properties?
Benoit, what do you say to that?
(In reply to comment #67)
> Benoit, what do you say to that?
We had a conversation in Real Life (tm) with Joe and we're now in agreement on the set of point/vector classes that we want. Only have one class for points/vectors with 3 coords (x,y,z). Call that e.g. Point3D. Then only have one class for points/vectors with 4 homogeneous coords (x,y,z,w). Call that e.g. Point4D.
It doesn't matter to me whether they're called 'point' or 'vector', what matters to me is that we have only one non-homogeneous class and one homogeneous class, and we're now in agreement on that.
(In reply to comment #63)
> ::: gfx/src/Base3DVector.h
> @@ +124,5 @@
> > + return sqrt(x*x + y*y + z*z);
> > + }
> > +
> > + void Normalize() {
> > + *this /= Length();
>
> What if Length() is zero? At least document what the behavior is.
FWIW I consider that to be purely a documentation issue i.e. I don't think that the code should perform any check. I've found that it's just simpler to design vector/matrix classes in such a way that they behave as similarly as possible to plain floats. Surely the compiler doesn't generate any code to protect against divisions by zero, and neither should we.
. ;)
(In reply to comment #70)
> . ;)
Ah, so we still disagree: I don't agree with having Points with non-homogeneous coordinates vs Vectors with homogeneous coordinates.
In homogeneous coordinates there is no notion of 'length of a vector', since if you multiply all homogeneous coords by 2 it's still the same point. So in a proposal where the only vectors would have homogeneous coords, we'd be able to represent only _unit_ vectors. I am not comfortable with such a limitation.
=== My proposal ===
Vector3D class has 3 coords (x,y,z). You can also simply call it 'Vector'
Vector4D class has 4 homogeneous coords (x,y,z,w). You can also call it 'HVector', where H stands for Homogeneous.
No distinction between 'points' and 'vectors'.
Forgot to reply to that:
(In reply to comment #66)
>
As I see it, Vector3D would be for ordinary 3D points, not for directions at infinity, so if one had implicit casting to homogeneous coords, (x,y,z) would unambiguously become (x,y,z,1). So there is no ambiguity, once this is agreed on.
However, regardless of that, matrix4 * vector3 would have to return a vector4 since the matrix4 can have anything on the fourth row.
.
(In reply to comment #73)
> .
I don't know what's the plan with 2D points, but I feel that it's best to pick names that reflect the difference between affine (x,y,z) and homogeneous (x,y,z,w) coordinates. Homogeneous means that we're doing projective geometry i.e. (x,y,z,w) is the same point as (a*x,a*y,a*z,a*w) for any nonzero a. So maybe my suggestion of Vector4D vs Vector3D was not good, as it wrongly suggests that the only difference is the number of coords.
Well then, Point3D vs HPoint3D?
Why not. Or maybe PointH3D. Instead of H for homogeneous, the difference can also be represented by a P for Projective.
(note: i've been assuming that we wanted to have a class for points with (x,y,z,w) projective coordinates, I don't know exactly what the needs here are, I'm not implying we should).
HPoint3D, PPoint3D, PointH3D, PointP3D ... all work for me. Someone pick one.
PointH3D it is then.
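A minimal sketch of what the agreed-on pair could look like (illustrative only, not the classes as landed):

```cpp
// Affine point: plain (x, y, z) coordinates.
struct Point3D {
    float x, y, z;
};

// Projective point: homogeneous (x, y, z, w) coordinates, where
// (x, y, z, w) and (a*x, a*y, a*z, a*w) name the same point for a != 0.
struct PointH3D {
    float x, y, z, w;
    PointH3D(float aX, float aY, float aZ, float aW)
        : x(aX), y(aY), z(aZ), w(aW) {}
    // An affine point promotes unambiguously with w = 1.
    PointH3D(const Point3D& aP) : x(aP.x), y(aP.y), z(aP.z), w(1.0f) {}
    // Dehomogenize; caller must ensure w != 0.
    Point3D ToPoint3D() const { return Point3D{x / w, y / w, z / w}; }
};
```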
Created attachment 540952 [details] [diff] [review]
Part 8a: Add BasePoint3D and gfxPoint3D
Created attachment 540953 [details] [diff] [review]
Part 8b: Add 3D Point support, and ray tracing to gfx3DMatrix
Created attachment 540954 [details] [diff] [review]
Part 8c: Use ray tracing to untransform 2d points on a 3d plane.
Created attachment 540955 [details] [diff] [review]
Part 10 - Implement -moz-backface-visible
s/gfx3DVector/gfxPoint3D
Created attachment 540956 [details] [diff] [review]
Part 11a: Add nsCSSValueTriplet and optionally read a z component to -moz-transform-origin
Created attachment 540957 [details] [diff] [review]
Part 11b: Layout changes to use a z component for -moz-transform-origin
Created attachment 540958 [details] [diff] [review]
Part 12a: Implement -moz-perspective-origin style property.
Created attachment 540959 [details] [diff] [review]
Part 12b: Layout changes to use -moz-perspective-origin
Comment on attachment 540957 [details] [diff] [review]
Part 11b: Layout changes to use a z component for -moz-transform-origin
Carrying forward r=roc
Comment on attachment 540952 [details] [diff] [review]
Part 8a: Add BasePoint3D and gfxPoint3D
Review of attachment 540952 [details] [diff] [review]:
-----------------------------------------------------------------
::: gfx/src/BasePoint3D.h
@@ +62,5 @@
> + bool operator==(const Sub& aPoint) const {
> + return x == aPoint.x && y == aPoint.y && z == aPoint.z;
> + }
> + bool operator!=(const Sub& aPoint) const {
> + return x != aPoint.x || y != aPoint.y || z != aPoint.z;
For future reference, I usually find it easier and safer to write !(*this == aOther), but you needn't bother changing it.
Comment on attachment 540953 [details] [diff] [review]
Part 8b: Add 3D Point support, and ray tracing to gfx3DMatrix
Review of attachment 540953 [details] [diff] [review]:
-----------------------------------------------------------------
::: gfx/thebes/gfx3DMatrix.h
@@ +117,5 @@
> + */
> + gfxPoint3D Transform3D(const gfxPoint3D& point) const;
> +
> + gfxPoint ProjectPoint(const gfxPoint& aPoint) const;
> + gfxRect ProjectRect(const gfxRect& aRect) const;
Call this ProjectRectBounds since it takes the bounding rect.
Comment on attachment 540954 [details] [diff] [review]
Part 8c: Use ray tracing to untransform 2d points on a 3d plane.
Review of attachment 540954 [details] [diff] [review]:
-----------------------------------------------------------------
::: layout/base/nsLayoutUtils.h
@@ +512,5 @@
> +
> +
> + static nsRect InvertRectToAncestor(nsIFrame* aFrame,
> + const nsRect& aRect,
> + nsIFrame* aStopAtAncestor);
InvertRectToAncestorBounds? InvertRectBoundsToAncestor?
Why not just TransformRectToBoundsInAncestor?
Comment on attachment 540959 [details] [diff] [review]
Part 12b: Layout changes to use -moz-perspective-origin
Review of attachment 540959 [details] [diff] [review]:
-----------------------------------------------------------------
::: layout/base/nsDisplayList.cpp
@@ +2389,5 @@
> + }
> +
> + /*?
@@ +2394,5 @@
> +
> + result.x = -result.x;
> + result.y = -result.y;
> +
> + return result;
return -result;
> > +
> > + /*?
>
A value of (0,0) has the effect of applying perspective with the vanishing point in the centre of the object. So we need to adjust the point so that (50%, 50%) returns (0,0).
Excluding possible typos, this is what WebKit does and it appears to look correct.
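As a sanity check on that adjustment, here is a tiny hypothetical sketch (standalone types, not the patch's nsPoint/nsRect code) of the mapping described above: shift the specified origin so the centre of the box lands at (0, 0), then negate, matching the result.x = -result.x step quoted from the patch:

```cpp
#include <cassert>

struct Point { double x, y; };

// Hypothetical reconstruction of the perspective-origin offset: a specified
// origin of (50%, 50%) of the box must come out as (0, 0).
Point PerspectiveOriginOffset(double originX, double originY,
                              double width, double height) {
  // Shift so the centre of the box becomes the vanishing point...
  Point result = { originX - width / 2.0, originY - height / 2.0 };
  // ...then negate, as in the quoted review snippet.
  return Point{ -result.x, -result.y };
}
```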
Comment on attachment 540959 [details] [diff] [review]
Part 12b: Layout changes to use -moz-perspective-origin
Review of attachment 540959 [details] [diff] [review]:
-----------------------------------------------------------------
Comment on attachment 537017 [details] [diff] [review]
Part 6: Implement the 3d -moz-transform functions
So this treats translateZ() and the Z component of translate3d() differently from what the spec says: the spec says it's a <length> rather than a <number>. What does WebKit do?
ParseSingleTransform's argument should be aIs3D rather than is3d; I spent a while trying to figure out what is3D was used for before realizing it was an out-param.
That said, rather than having a separate switch statement inside ParseSingleTransform, I think you should just pass aIs3D into GetFunctionParseInformation and set it there for each case.
In ParseMozTransform:
* you really really really should cache this preference somewhere rather than getting it every time
* your introduction of |prev| doesn't appear to be used for anything; remove it
I'm going to wait to look at the nsStyleTransformMatrix stuff in this patch until going through the other one again...
Thanks dbaron, I'll fix those and upload a new patch soon.
I'm currently stuck on calculating overflow areas/visible regions etc. for layers with -moz-preserve-3d set.
Consider the (relatively) simple case of
Parent Frame - preserve3d - rotateY(90deg)
- Child Frame 1 - No transform
- Child Frame 2 - RotateY(90deg).
The current GetBounds implementation calls GetBounds on each child, Union()'s the result and then transforms it into local coordinate space. This returns a 0 sized area since Child Frame 2 (CF2) is essentially singular in 2d space, as is Parent Frame (PF).
My solution for finding the bounds of PF, is to instead apply the transformation to each child's GetBounds() result and create a Union of these areas. In the case where the preserve-3d flag is set, and the specific child is also an nsDisplayTransform, we skip the transformation step. When calculating the transformation to use in an nsDisplayTransform, we check for parent frames with preserve-3d and multiply the transforms together.
In short, CF2::GetBounds() is returning an area in PF's coordinate space instead of its own.
The piece I'm stuck on is what to return for CF2::GetBounds() when it's not being used as a component of PF::GetBounds(). Can I just always return an area in the parent's coordinate system? The 'actual' bounds of CF2 are 0-sized, which results in the object not being drawn.
I believe HitTest, ComputeVisibility and FinishAndStoreOverflow are essentially the same problem.
For FinishAndStoreOverflow, will storing the area in the parent's coordinate space cause the frame size for the parent to be changed? We don't want to transform it again.
Ideas appreciated :)
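To make the proposed strategy concrete, here is a minimal 2D sketch (hypothetical Rect type and a plain rotation angle standing in for a full 3D matrix; none of these names are the patch's) of transforming each child's bounds into the parent's space *before* unioning, rather than unioning first and transforming the possibly degenerate result:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Rect { double x, y, w, h; };

// Bounding box of a rect rotated about the origin by `theta` radians.
Rect TransformBounds(const Rect& r, double theta) {
  const double cx[4] = { r.x, r.x + r.w, r.x,       r.x + r.w };
  const double cy[4] = { r.y, r.y,       r.y + r.h, r.y + r.h };
  double xs[4], ys[4];
  for (int i = 0; i < 4; ++i) {
    xs[i] = cx[i] * std::cos(theta) - cy[i] * std::sin(theta);
    ys[i] = cx[i] * std::sin(theta) + cy[i] * std::cos(theta);
  }
  double minX = *std::min_element(xs, xs + 4);
  double maxX = *std::max_element(xs, xs + 4);
  double minY = *std::min_element(ys, ys + 4);
  double maxY = *std::max_element(ys, ys + 4);
  return Rect{ minX, minY, maxX - minX, maxY - minY };
}

// Transform each child's bounds first, then accumulate the union.
Rect UnionTransformedChildBounds(const std::vector<Rect>& children,
                                 double theta) {
  Rect acc{ 0, 0, 0, 0 };
  bool first = true;
  for (const Rect& c : children) {
    Rect t = TransformBounds(c, theta);
    if (first) { acc = t; first = false; continue; }
    double minX = std::min(acc.x, t.x), minY = std::min(acc.y, t.y);
    double maxX = std::max(acc.x + acc.w, t.x + t.w);
    double maxY = std::max(acc.y + acc.h, t.y + t.h);
    acc = Rect{ minX, minY, maxX - minX, maxY - minY };
  }
  return acc;
}
```

The point of the ordering is that with a singular transform (the 2D projection of rotateY(90deg)), union-then-transform collapses everything to a zero-sized area, while transform-then-union preserves each child's individual contribution.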
preserve-3d always confuses me.
Can we pretend the parent frame simply has no transform at all, and propagate its transform down to its children every time we compute the used transform value?
I might be able to do this for some cases.
Where does the size value come from for nsIFrame::FinishAndStoreOverflow?
The problem is that children that *don't* have a transform need to be transformed, yet children that do can probably include the transform with theirs and transform themselves.
If the area given to FinishAndStoreOverflow is the sum of the child areas, then some sections of it would need to be transformed, while others would not. I'm not sure how to achieve this.
aNewSize is computed by Reflow(), it's the border-box size which is about to be returned for the frame by Reflow(). It has not yet been set on the frame. It does *not* include child areas.
aOverflowAreas contains the visual and scrollable overflow areas for this frame, which includes those areas for the child frames. But if you need to, you can recompute the child frames' contribution by iterating over all child frames and asking for their overflow areas, then transforming them, then you can add that back in to the passed-in aOverflowAreas.
I'm still struggling with the handling of preserve-3d. Slow progress is slow.
I had an idea for another possible approach to solving this:
Given a frame structure like:
parent frame - preserves3d - rotatey(90deg)
- child frame - rotatey(90deg)
- child frame - not transformed
The current code will generate a display item tree that looks like:
nsDisplayTransform - rotatey(90deg) - preserve3d
- nsDisplayTransform - rotatey(90deg)
- nsDisplayText - whatever this content was
Is it instead possible to generate:
nsDisplayList
- nsDisplayTransform - rotatey(180deg)
- nsDisplayTransform - rotatey(90deg)
- nsDisplayText - contentcontentcontent
This looks like it will show the correct rendering for this limited example (not sure if nsDisplayList is the correct 'container') and won't require any changes in visibility calculation, hit testing etc.
Are there any problems with this approach that I'm missing?
That sounds good.
So if you have
parent frame - preserves3d - rotatey(90deg)
- child frame - opacity:0.5
- child frame - not transformed
- child frame - rotatey(90deg)
- child frame - not transformed
You'd build
nsDisplayList
- nsDisplayOpacity - opacity:0.5
- nsDisplayTransform - rotatey(90deg)
- nsDisplayText - contentcontentcontent
- nsDisplayTransform - rotatey(180deg)
- nsDisplayTransform - rotatey(90deg)
- nsDisplayText - contentcontentcontent
?
I think that approach should work, and I think it shouldn't be too hard to implement. In BuildDisplayListForStackingContext, when the frame has preserves-3d, you can walk the child display list and do whatever surgery you want to do.
I guess you would walk the display items in the child list:
-- for any nsDisplayTransforms, just fold in the extra transform from the parent
-- for any nsDisplayOpacity or nsDisplayWrapList, recursively walk over the children (since opacity commutes with transforms)
-- for any nsDisplayClip ... I dunno! Introduce a new nsDisplayClipPath which generalizes nsDisplayClipRoundedRect and can clip to an arbitrary path, and transform the rectangle to a quad, replace the nsDisplayClip with an nsDisplayClipPath for that quad, and recursively walk over the children? FrameLayerBuilder would have to know about nsDisplayClipPath. This could probably be done as a followup bug, if we do it at all, since preserving 3D through clipping is reasonably esoteric
-- for any other display item, wrap it in an nsDisplayTransform with the parent's transform
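A toy version of that walk, with a single rotateY angle standing in for a full transform matrix (composing two rotateY()s about the same axis is just adding the angles; the types and names here are illustrative, not the real display-item classes):

```cpp
#include <cassert>
#include <vector>

enum class ItemKind { Transform, Other };
struct Item { ItemKind kind; double rotateY; };  // rotateY in degrees

// For a preserve-3d parent with transform `parentRotateY`, fold the parent's
// transform into child transform items, and wrap everything else in a new
// transform item carrying the parent's transform.
void FoldParentTransform(std::vector<Item>& children, double parentRotateY) {
  for (Item& item : children) {
    if (item.kind == ItemKind::Transform) {
      // Child already has a transform: compose the parent's into it.
      item.rotateY += parentRotateY;
    } else {
      // Untransformed child: wrap it in the parent's transform.
      item.kind = ItemKind::Transform;
      item.rotateY = parentRotateY;
    }
  }
}
```

Running this on the comment-104 example — a rotateY(90deg) child and an untransformed child under a rotateY(90deg) preserve-3d parent — yields the rotateY(180deg) and rotateY(90deg) items sketched in comment 105.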
I have good news and bad news.
This approach seems to work fairly well; I've got it working and rendering the Safari 'poster circle' demo correctly.
I'm not sure how to implement this without moving the preserve-3d handling further down again, onto the actual frames.
Will think about this further.
(In reply to comment #103)
>.
Surely this is a Webkit bug? That behavior makes no sense to me.
So what's the plan here with the preference and how we enable this? Patch 6 introduces a preference for 3D transforms, defaulting to off.
Is there going to be a situation where, on some machines, we have the ability to use graphics capabilities that make doing 3D transforms reasonable, and on others we don't? When we enable 3D transforms, do you expect we'll be able to enable it reasonably across all machines, or will we need to enable it only on some of them?
It depends on the use-case. The software implementation is presumably a lot slower, but it might be OK for some kinds of usage.
The rapid release guidelines (as I understand them) encourage the use of preferences to disable new features if needed. I am working on a software rendering implementation; its performance is yet to be tested.
I want to land code with the preference disabled so we can get some user testing, with lower risk of regressions.
Do you have any ETA on being able to review the first few of these patches? Parts 3-5 seem the most likely to cause regressions (and can't be trivially turned off/backed out), so I'd like to get these baking on m-c as soon as possible (and before they bitrot).
Comment on attachment 537013 [details] [diff] [review]
Part 3: Convert nsStyleTransformMatrix to be backed by a 4x4 matrix v5
>+ PRBool IsIdentity() const {
>+ return xx == 1.0f && yx == 0.0f &&
>+ xy == 0.0f && yy == 1.0f &&
>+ x0 == 0.0f && y0 == 0.0f;
>+ }
Drop the 'f' suffixes, since xx, yx, etc., are double rather than float.
>- (newOrigin + toMozOrigin, disp->mTransform.GetThebesMatrix(bounds, aFactor));
>+ (newOrigin + toMozOrigin, nsStyleTransformMatrix::ReadTransforms(disp->mSpecifiedTransform,
>+ aFrame->GetStyleContext(),
>+ aFrame->PresContext(),
>+ dummy, bounds, aFactor));
Find a way to wrap before the 80th column.
>+ const gfxMatrix& newTransformMatrix = GetTransform(mFrame->PresContext()->AppUnitsPerDevPixel());
Break after the = to fit within 80 columns.
It would be really nice if nsDisplayTransform::BuildLayer didn't need to
use AppUnitsPerDevPixel() (unlike all the other callers) and thus
require the mCachedFactor business. Might be nice to clean that up in a
followup.
nsCSSValueList::CloneExisting() is a confusing name for what you've
done. Instead, call it CloneInto, since it's the *argument* that
receives the clone. Also add a comment to nsCSSValue.h saying what it
does. Finally, reuse Clone() so that the contents of the method are
just:
aList->mValue = mValue;
aList->mNext = mNext ? mNext->Clone() : nsnull;
which also fixes the bug in your implementation that it doesn't
overwrite a non-null next pointer with a null one. Finally, make it
assert at the start that aList->mNext is null (otherwise it would leak).
nsComputedDOMStyle.cpp:
As I said above, I think it's bogus to serialize these numbers as
doubles when they're stored as float accuracy.
Patch 5 doesn't fix this, either.
Please document AddTransformMatrix as taking gfxMatrix parameters in CSS
pixels (in .h and .cpp), since it produces eCSSUnit_Pixel output based
on the translation coordinates in the gfx matrix. I'm still trying to
figure out if nsStyleTransformMatrix::ProcessInterpolateMatrix has a
scaling error here (when dev pixels != css pixels).
... WRITE MORE HERE ...
In AddDifferentTransformLists:
>+ nsCSSValueList* list = arr->Item(2).SetListValue();
>+ aList1->CloneExisting(list);
I think this is clearer on one line (especially if CloneExisting is
renamed to CloneInto):
aList1->CloneInto(arr->Item(2).SetListValue());
>+ ///XXX: If the matrix contains only numbers then we could decompose here.
>+ // If it contains % values, I think we can too, less sure about
>+ // calc values though.
A bunch of things here:
- first line goes past column 80
- use "// FIXME:" rather than "///XXX:".
- % and calc() are pretty much equivalent; it's the same for both.
- However, I'd add that dealing with % and calc() is not possible with
matrix3d(), and having consistency between matrix() and matrix3d() is
valuable
> while ((*resultTail)->mNext) {
>- resultTail = &(*resultTail)->mNext;
>+ resultTail = &(*resultTail)->mNext;
> }
Leave this as it was.
nsStyleAnimation.h:
Please forward-declare gfxMatrix rather than including the header.
nsStyleStruct.cpp:
- if (mTransform != aOther.mTransform)
+ if (mSpecifiedTransform != aOther.mSpecifiedTransform)
This should be
!mSpecifiedTransform != !aOther.mSpecifiedTransform ||
(mSpecifiedTransform && *mSpecifiedTransform != *aOther.mSpecifiedTransform)
since you want deep comparison of the lists rather than pointers.
nsStyleTransformMatrix.cpp:
Could you rename the aFactor param to pretty much everything to be
called aAppUnitsPerMatrixUnit or something like that? Otherwise I think
it's not clear that it can be used to create a matrix in either device
or CSS pixels. Furthermore, you need to fix the comments above
SetToTransformFunction in the .h file -- that currently says it requires
device pixels, but most of the time you actually give it CSS pixels.
Really, it picks which units the matrix is in terms of. (That said, as
I said above, I think it would be good to clean up that inconsistency.)
Please wrap the last line of ProcessTranslatePart.
In nsStyleTransformMatrix::ProcessMatrix could you call the matrix
result rather than temp?
>+ nsAutoPtr<nsCSSValueList> result;
>+ nsCSSValueList **resultTail = getter_Transfers(result);
>+
>+ *resultTail = nsStyleAnimation::AddTransformMatrix(matrix1, coeff1,
>+ matrix2, coeff2);
Instead, write:
nsAutoPtr<nsCSSValueList> result(nsStyleAnimation::AddTransformMatrix(
    matrix1, coeff1, matrix2, coeff2));
I hope all the gfx3DMatrix return values from
nsStyleTransformMatrix::Process* don't lead to too much struct copying.
Have you checked?
>- ProcessSkewHelper(xSkew, ySkew, aMain);
>+ return ProcessSkewHelper(xSkew, ySkew);;
No double ;, please.
Could you rename SetToTransformFunction to MatrixForTransformFunction?
r=dbaron with all of that fixed
Also, even after patch 5, I think there are still some pieces that are awkwardly 2-D only (like animation). That'll need to be fixed before this is enabled, but it's ok for now.
(In reply to comment #108)
>).
gfxMatrix is double because that's what cairo uses.
I think gfx3DMatrix uses floats because they're far better supported on GPUs than doubles.
Yep, we use float matrices for both OpenGL and D3D*. I think the loss of precision is probably better than creating a gfxDouble3DMatrix and converting back to floats at draw time, but I'm open to suggestions.
Animations are definitely awkward with this patch, but I have a followup patch in my queue that implements these properly.
Was the " ... WRITE MORE HERE ..." comment meant to be removed, or is it still an intentional placeholder?
Thanks for the review dbaron!
(In reply to comment #111)
> Was the " ... WRITE MORE HERE ..." comment meant to be removed or is still
> an intentional placeholder?
Er, yeah, one of us should figure out if there's actually a bug there, though it sort of looks like it to me.
Comment on attachment 537017 [details] [diff] [review]
Part 6: Implement the 3d -moz-transform functions
...continued from comment 94.
We should add support for number values on translatex(), translatey(),
translate(), translate3d(), and the last 2 parts of matrix() -- and
treat them as pixels. You should do that as part of this bug --
otherwise this patch leaves things in an oddly inconsistent state.
>+ /* There are several cases to consider.
>+ * Next, the values might be lengths, or they might be percents. If they're
>+ * percents, store them in the dX and dY components. Otherwise, store them in
>+ * the main matrix.
>+ */
You copied this from somewhere, but it no longer belongs. If the
original is still there, delete that too.
>+ temp._11 = 1 + (1 - cosTheta) * (x * x - 1);
>+ temp._12 = -z * sinTheta + (1 - cosTheta) * x * y;
>+ temp._13 = y * sinTheta + (1 - cosTheta) * x * z;
>+ temp._14 = 0.0f;
>+ temp._21 = z * sinTheta + (1 - cosTheta) * x * y;
>+ temp._22 = 1 + (1 - cosTheta) * (y * y - 1);
>+ temp._23 = -x * sinTheta + (1 - cosTheta) * y * z;
>+ temp._24 = 0.0f;
>+ temp._31 = -y * sinTheta + (1 - cosTheta) * x * z;
>+ temp._32 = x * sinTheta + (1 - cosTheta) * y * z;
>+ temp._33 = 1 + (1 - cosTheta) * (z * z - 1);
>+ temp._34 = 0.0f;
>+ temp._41 = 0.0f;
>+ temp._42 = 0.0f;
>+ temp._43 = 0.0f;
>+ temp._44 = 0.0f;
According to the spec _44 should be 1.0f, not 0.0f.
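For reference, the same axis-angle (rotate3d) matrix with the _44 entry corrected to 1.0f, as a self-contained sketch. The matrix type here is a minimal hypothetical stand-in, and it assumes the axis (x, y, z) is already normalised:

```cpp
#include <cassert>
#include <cmath>

struct Matrix4x4 { float m[4][4]; };

// Rotation by `theta` radians about the unit axis (x, y, z), laid out as the
// quoted review code does, but with the bottom-right entry set to 1.0f per
// the spec.
Matrix4x4 Rotate3D(float x, float y, float z, float theta) {
  const float s = std::sin(theta), c = std::cos(theta);
  Matrix4x4 t = {{
    { 1 + (1 - c) * (x * x - 1), -z * s + (1 - c) * x * y,
       y * s + (1 - c) * x * z,  0.0f },
    {  z * s + (1 - c) * x * y,  1 + (1 - c) * (y * y - 1),
      -x * s + (1 - c) * y * z,  0.0f },
    { -y * s + (1 - c) * x * z,   x * s + (1 - c) * y * z,
       1 + (1 - c) * (z * z - 1), 0.0f },
    { 0.0f, 0.0f, 0.0f, 1.0f },  // _44 is 1.0f, not 0.0f
  }};
  return t;
}
```

With _44 = 0.0f the matrix annihilates the homogeneous w component, so a zero rotation would not even be the identity; with 1.0f, theta = 0 reduces cleanly to the identity matrix.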
>+ case eCSSKeyword_scale3d:
>+ return ProcessScale3D(aData);
> case eCSSKeyword_scale:
> return ProcessScale(aData);
Could you stick scale3d() after scale() for consistency with the rest?
r=dbaron with that fixed, though I probably want to look at the pref caching
code once you address that
(Note that the number values thing is probably easiest as a separate patch -- if you merge it with an existing one I'd want to review it.)
Comment on attachment 538996 [details] [diff] [review]
Part 9 - Implement the perspective() transform function and style property.
>Bug 505115 - Part 9 - Implement the perspective() transform function and style property. try: -b d -p linux -u mochitest-4 -t none
You don't want try syntax in your commit message.
I'd also write it as:
Implement the perspective() transform function and the perspective CSS property.
As per patch 6, I think you should cache the pref (which also has value in
encapsulating the "are 3d transforms enabled" decision to one place, which
could be valuable if it's more than just a pref check in future).
>+CSS_PROP_DISPLAY(
>+ -moz-perspective,
>+ _moz_perspective,
>+ CSS_PROP_DOMPROP_PREFIXED(Perspective),
>+ CSS_PROPERTY_PARSE_VALUE,
>+ VARIANT_NONE|VARIANT_INHERIT|VARIANT_NUMBER,
>+ kDisplayKTable,
>+ offsetof(nsStyleDisplay, mChildPerspective),
>+ eStyleAnimType_float)
_moz_perspective -> perspective (don't prefix internals)
kDisplayKTable -> nsnull
put spaces around the |s
The mPerspective and mChildPerspective stuff is really broken -- it's
going to produce incorrect results in a lot of cases due to your code
not disabling sharing in cases where you'd need to disable sharing.
Rather than explain the whole thing, I'm just going to say you should
rip it out. Make the style struct contain the computed value of the
property for that element (what's now mChildPerspective). When you want
the parent's perspective, look at
parentFrame->GetStyleContext->GetParent()->GetStyleDisplay()->mPerspective
(except you'll need to null-check the GetParent() call).
I didn't really look too closely at rest of the code (though the whole
thing with the perspective transform function seems pretty weird to me
-- I haven't dug into that), since this is pretty clearly wrong and
needs substantial reworking because of the mPerspective/mChildPerspective
thing being broken.
(In reply to comment #115)
> When you want
> the parent's perspective, look at
> parentFrame->GetStyleContext->GetParent()->GetStyleDisplay()->mPerspective
> (except you'll need to null-check the GetParent() call).
er, I meant
frame->GetStyleContext()->GetParent() (not parentFrame -- otherwise you'd be walking up twice).
Comment on attachment 540955 [details] [diff] [review]
Part 10 - Implement -moz-backface-visible
>Bug 505115 - Part 9 - Implement the perspective() transform function and style property.
How about calling it Part 10, and using an appropriate commit message
for part 10.
>+CSS_PROP_DISPLAY(
>+ -moz-backface-visibility,
>+ _moz_backface_visibility,
>+ CSS_PROP_DOMPROP_PREFIXED(BackfaceVisibility),
>+ CSS_PROPERTY_PARSE_VALUE,
>+ VARIANT_HK,
>+ kBackfaceVisibilityKTable,
>+ offsetof(nsStyleDisplay, mBackfaceVisible),
>+ eStyleAnimType_EnumU8)
_moz_backface_visibility -> backface_visibility (don't prefix internals)
Please call the member of nsStyleDisplay mBackfaceVisibility to
match the property name rather than calling it mBackfaceVisible.
>+#define NS_STYLE_BACKFACE_VISIBLE 1
>+#define NS_STYLE_BACKFACE_HIDDEN 0
The convention here suggests you should insert VISIBILITY_ into these
names (since they're property-value).
>+ COMPUTED_STYLE_MAP_ENTRY_LAYOUT(_moz_backface_visibility, MozBackfaceVisibility),
This isn't layout-dependent, drop the "_LAYOUT".
I'm puzzled by the nsStyleAnimation changes -- I don't see anything in
the transitions or 3d transform specs saying the property is
transitionable. If it is, though, I'd have expected it to work like
'visibility'... which seems to be what you test for, despite doing the
code entirely differently. What does WebKit do? If the goal is to work
like visibility, why not do it the same way in nsStyleAnimation?
In nsStyleDisplay::CalcDifference, why is reflow required? I'd have
expected repaint and maybe layer stuff.
In property_database.js, please test that "collapse" is not an accepted
value (put it in invalid_values). (That's the value for 'visibility'
that's not valid here.)
>+ // Convert to two vectors on the surface of the plane.
>+ gfxPoint3D ab = a - b;
>+ gfxPoint3D ac = a - c;
>+
>+ return ab.CrossProduct(ac);
I find this a little confusing. ab is the *negative* of the
transformation of the unit Y axis vector, and ac is the *negative* of
the transformation of the unit X axis vector. It seems clearer to use
b-a and c-a.
Also, it seems really weird that your definition of "normal vector" is
something that, with no transformation, points along the negative Z
axis. Is this some standard terminology, or would it make more sense to
take ac.CrossProduct(ab)?
I presume you've tested that this matches WebKit in the case of
scaleZ(-1). The spec seems totally unclear, and it ought to be
clarified (i.e., that it's the results of transforming the X and Y axes
that matter, rather than the result of transforming the Z axis). One of
us should email www-style about that.
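A standalone sketch of the check being discussed, using the b - a / c - a edge vectors suggested above and ordering the cross product so that an untransformed plane's normal points along +Z (so "hidden" is normal.z <= 0, matching the orientation that "makes sense" in the review; all names are illustrative, not the patch's API):

```cpp
#include <cassert>

struct P3 { double x, y, z; };

P3 Cross(const P3& a, const P3& b) {
  return P3{ a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// a = transformed origin, b = transformed unit Y point, c = transformed
// unit X point. (c - a) x (b - a) gives +Z for the identity transform.
P3 NormalVector(const P3& a, const P3& b, const P3& c) {
  P3 ab = { b.x - a.x, b.y - a.y, b.z - a.z };
  P3 ac = { c.x - a.x, c.y - a.y, c.z - a.z };
  return Cross(ac, ab);
}

bool BackfaceHidden(const P3& normal) {
  // With this orientation the element is back-facing when the normal's Z
  // component is non-positive.
  return normal.z <= 0.0;
}
```

Under the identity transform the normal is (0, 0, 1) and the face is visible; after a rotateY(180deg)-style flip (the X axis mapping to (-1, 0, 0)) the normal becomes (0, 0, -1) and the backface is hidden.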
>+ gfxPoint3D normal = newTransformMatrix.GetNormalVector();
>+ if (newTransformMatrix.IsSingular() ||
>+ (mFrame->GetStyleDisplay()->mBackfaceVisible == NS_STYLE_BACKFACE_HIDDEN &&
>+ normal.DotProduct(gfxPoint3D(0,0,1)) >= 0.0)) {
>+ return nsnull;
>+ }
Seems better to skip the variable |normal| and just put the whole
expression inside the if().
Also, don't use two spaces before the ==.
Also, just check if normal.z >= 0 and skip the DotProduct().
And if you change GetNormalVector() to work the way that makes sense to
me, make it <= 0 here :-)
And please add a comment in gfx3DMatrix.h saying what GetNormalVector does.
Marking as review- because I'd like to look at this again for the
animation issues; otherwise I'd be fine with you making the changes
without my looking again.
Comment on attachment 540956 [details] [diff] [review]
Part 11a: Add nsCSSValueTriplet and optionally read a z component to -moz-transform-origin
>+ if (CheckEndProperty() || !ParseVariant(depth, VARIANT_AHL, nsnull) || !allow3D) {
VARIANT_AHL is wrong; as far as I can tell it should just be
VARIANT_LENGTH | VARIANT_CALC.
Additionally, you should explicitly not look for a third value if the
first value has unit inherit or initial.
Also, wrap before the 80th column.
You need (again) to cache the preference.
I'd probably have preferred use of nsCSSValue::Array over creating
nsCSSValueTriplet, but I suppose I can live with this approach.
In nsCSSValue.h, could you make Triplet 51, and bump the current 51-56,
to be consistent with the order you use in the code?
You need to add a case to nsCSSValue::operator==, which you missed.
I'm concerned that you serialize (in the specified value serialization;
the computed value serialization is fine, and also has fewer constraints
about needing to be faithful to what was specified) a transform-origin
without a third value by adding that value, since we've generally tried
to serialize to the most supported (i.e., oldest) syntax. One way to do
that is to use the eCSSUnit_Null on the Z value to distinguish these
cases; I think that would require adjusting a few assertions
(nsCSSValue::SetTripletValue's assertion about null)... though
assertions already inconsistent with what's handled in
nsCSSValueTriplet::AppendToString. (The animation code can still
maintain the invariant that the third value is non-null... though when
the resulting value is zero it should *set* is as null.) There are
probably other options for handling this as well, but that's the way
that seems obvious to me.
In nsCSSValue.h:
>+ nsCSSValueTriplet(const nsCSSValue& aXValue, const nsCSSValue& aYValue, const nsCSSValue& aZValue)
>+ : mXValue(aXValue), mYValue(aYValue), mZValue(aZValue)
Please wrap before column 80.
In nsRuleNode.cpp, don't give SetTripletCoords a general name; it's
specific to transform origin since it has special treatment of the Z
component. I tend to think it's probably best to inline it into the
caller.
nsStyleAnimation.cpp:
I'd like to look at this more closely after you address the
serialization stuff with the third value. A little more sharing of code
that you're copying from the pair case would be nice, though.
nsStyleStruct.h:
>- nsStyleCoord mTransformOrigin[2]; // [reset] percent, coord, calc
>+ nsStyleCoord mTransformOrigin[3]; // [reset] percent, coord, calc
comment should explain that the third value can't be a percent
I'd like to look at this again once you address the third-value serialization issue, so marking review-.
Comment on attachment 540958 [details] [diff] [review]
Part 12a: Implement -moz-perspective-origin style property.
>diff --git a/dom/interfaces/css/nsIDOMCSS2Properties.idl b/dom/interfaces/css/nsIDOMCSS2Properties.idl
You need to bump the IID here.
(And I forgot to mention that for 2 other patches, but it applies there too!)
>+PRBool CSSParserImpl::ParseMozPerspectiveOrigin()
The indentation of this function is weird. Also, I think you should
share ParseMozTransformOrigin and add a property argument (and use it
for whether to do the third value), rather than have two separate
almost-identical functions.
> CSS_PROP_DISPLAY(
>+ -moz-perspective-origin,
>+ _moz_perspective_origin,
No _moz_ here, again.
>+ CSS_PROP_DOMPROP_PREFIXED(PerspectiveOrigin),
>+ CSS_PROPERTY_PARSE_FUNCTION |
>+ CSS_PROPERTY_STORES_CALC,
Please indent the line after the | following local style.
>+ COMPUTED_STYLE_MAP_ENTRY_LAYOUT(_moz_perspective_origin, MozPerspectiveOrigin),
The use of _LAYOUT here is correct (unlike the last time, when I pointed
out it was wrong). Just pointing out that you shouldn't change this one
too.
nsRuleNode.cpp:
Please match the local 2-space indent.
property_database.js:
Indentation here is wacky.
r=dbaron with that fixed
(In reply to comment #108)
>
> nsComputedDOMStyle.cpp:
>
> As I said above, I think it's bogus to serialize these numbers as
> doubles when they're stored as float accuracy.
>
> Patch 5 doesn't fix this, either.
I've added this to patch 5 now.
>
>.
This is currently in a much later part of the patch queue, I can split this out and bring it forward if needed.
>
> >+ /* .
Alright, done!
>
>?
Good point, made this change too.
>
> I hope all the gfx3DMatrix return values from
> nsStyleTransformMatrix::Process* don't lead to too much struct copying.
> Have you checked?
Looking at the assembly for my debug mac build, it appears that it is constructing the temporary gfx3DMatrix objects directly into a pointer passed by the caller, and not copying the result at all.
Created attachment 547812 [details] [diff] [review]
Part 3: Convert nsStyleTransformMatrix to be backed by a 4x4 matrix v6
Fixed review comments, and rebased against tip.
Created attachment 547813 [details] [diff] [review]
Part 5: Use gfx3DMatrix in layout v4
Fixed serialization of gfx3DMatrix objects to stay as floats. Rebased against tip/Part 3 changes.
Carrying forward r=roc
Landed parts 3-5 on inbound
Created attachment 548074 [details] [diff] [review]
Part 6: Implement the 3d -moz-transform functions v2
Fixed review comments, including caching the preference.
Carrying forward r=dbaron
Created attachment 548075 [details] [diff] [review]
Part 7: Layers support for 3d transforms v2
Rebased against tip/previous changes.
Carrying forward r=roc
Created attachment 548076 [details] [diff] [review]
Part 9 - Implement the perspective() transform function and style property. v2
Fixed review comments and changed handling of perspective() function to match the spec properly.
Created attachment 548077 [details] [diff] [review]
Part 10 - Implement -moz-backface-visible v2
Fixed review comments, removed animations code entirely.
Created attachment 548078 [details] [diff] [review]
Part 11a: Add nsCSSValueTriplet and optionally read a z component to -moz-transform-origin v2
Fixed review comments
Created attachment 548079 [details] [diff] [review]
Part 12a: Implement -moz-perspective-origin style property. v2
Fixed review comments, carrying forward r=dbaron
Created attachment 548080 [details] [diff] [review]
Part 13: Add basic reftests for 3d transforms and expose 3d transform status in GfxInfo
Initial set of basic tests for 3d transforms, that really don't test enough yet. Still thinking of more ideas to test perspective etc.
Created attachment 548083 [details] [diff] [review]
Part 14a: Add -moz-transform-style CSS property
Created attachment 548084 [details] [diff] [review]
Part 14b: Layout changes for preserve-3d
The preserve-3d patches raise a few interesting questions, which also probably need to go through www-style.
The depth sorting in the spec is very vague; although this is intentional, I think it needs to be clarified. Currently I've implemented sorting based on the z translate component of the matrix (same as Chromium does/did).
One interesting case is elements that contain the preserve-3d flag, but no actual transform. Webkit is treating these as still 'transformed' and is including any parent perspective into child transforms. This could also be considered as having children of a preserve-3d frame being brought up into the frame space (if that makes sense).
The other problem case is where there are multiple levels of preserve-3d (such as the WebKit poster circle demo). This has 12 elements transformed into a ring, 3 rings transformed into a cylinder, and the whole cylinder rotates. My depth sorting code arranges elements within a single ring correctly, and each ring correctly, but elements can still weirdly overlap between rings. WebKit appears to be sorting all 36 elements by depth, which seems to go against the fact that a transform creates a stacking context/containing block. I'm not sure what the correct behaviour should be, but I think this needs to be clarified in the spec.
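The z-translation sorting described above amounts to a simple painter's-style sort on one matrix component. A hypothetical sketch (not the real display-item types):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Item3D { int id; float translateZ; };  // Z translation of the matrix

// Sort items within one preserve-3d context by their accumulated matrix's
// Z-translation component: farthest (most negative Z) first, so nearer
// items paint on top. stable_sort keeps document order for equal depths.
void SortByDepth(std::vector<Item3D>& items) {
  std::stable_sort(items.begin(), items.end(),
                   [](const Item3D& a, const Item3D& b) {
                     return a.translateZ < b.translateZ;
                   });
}
```

Sorting on a single Z value per item is exactly why intersecting rings can still overlap weirdly: each element gets one depth, so mutually interpenetrating planes have no correct total order.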
Created attachment 548086 [details] [diff] [review]
Part 15 - Add 4D Vectors, Quaternions and gfx3DMatrix functions
part 3
part 4
part 5
Comment on attachment 540953 [details] [diff] [review]
Part 8b: Add 3D Point support, and ray tracing to gfx3DMatrix
>+gfxRect gfx3DMatrix::ProjectRect(const gfxRect& aRect) const
It would be nice to add a comment explaining what the rect returned by this function is, since it's not just returning the transformed rect (that wouldn't be a gfxRect). If I understand the code correctly, it returns the smallest gfxRect containing the image of aRect under the transformation.
Created attachment 548634 [details] [diff] [review]
Part 15 - Add 4D Vectors, Quaternions and gfx3DMatrix functions v2
Fixed a few bugs
Created attachment 548635 [details] [diff] [review]
Part 16 - Implement transitions/animations for 3d transforms.
Created attachment 548636 [details] [diff] [review]
Part 17 - Add style tests for the new transform functions, and transitions
Initial testing for 3d transitions/animations. I really need to add more tests here, working on it.
Comment on attachment 548076 [details] [diff] [review]
Part 9 - Implement the perspective() transform function and style property. v2
>diff --git a/layout/base/nsDisplayList.cpp b/layout/base/nsDisplayList.cpp
>+#include "mozilla/Preferences.h"
No need for this include anymore.
>+ parentDisp->mChildPerspective != 0.0) {
Per spec, you should test > 0.0, since negative values should also lead to no perspective.
>+ const nsStyleDisplay* parentDisp = aFrame->GetParent()->GetStyleDisplay();
You should null-check the result of aFrame->GetParent() before calling GetStyleDisplay() on it.
>diff --git a/layout/style/nsComputedDOMStyle.cpp b/layout/style/nsComputedDOMStyle.cpp
>+nsIDOMCSSValue*
>+nsComputedDOMStyle::DoGetMozPerspective()
>+{
>+ nsROCSSPrimitiveValue* val = GetROCSSPrimitiveValue();
>+ val->SetNumber(GetStyleDisplay()->mChildPerspective);
>+ return val;
Given that 'none' is the initial value, I think you should probably report '0' as 'none'.
>+ COMPUTED_STYLE_MAP_ENTRY_LAYOUT(perspective, MozPerspective),
Remove the "_LAYOUT".
>diff --git a/layout/style/nsRuleNode.cpp b/layout/style/nsRuleNode.cpp
>--- a/layout/style/nsRuleNode.cpp
>+++ b/layout/style/nsRuleNode.cpp
>@@ -4479,16 +4479,22 @@ nsRuleNode::ComputeDisplayData(void* aSt
> parentDisplay->mTransformOrigin[0],
> parentDisplay->mTransformOrigin[1],
> SETCOORD_LPH | SETCOORD_INITIAL_HALF |
> SETCOORD_BOX_POSITION | SETCOORD_STORE_CALC,
> aContext, mPresContext, canStoreInRuleTree);
> NS_ASSERTION(result, "Malformed -moz-transform-origin parse!");
> }
>
>+ SetFactor(*aRuleData->ValueForPerspective(),
>+ display->mChildPerspective, canStoreInRuleTree,
>+ parentDisplay->mChildPerspective, 0.0f,
>+ SETFCT_NONE);
>+
>+
No need for the double linebreak here.
>diff --git a/layout/style/nsStyleStruct.cpp b/layout/style/nsStyleStruct.cpp
>+ if (!(mChildPerspective == aOther.mChildPerspective))
Just use != rather than ! ( == ).
>diff --git a/layout/style/nsStyleTransformMatrix.cpp b/layout/style/nsStyleTransformMatrix.cpp
>+ if (depth > 0.0f) {
>+ temp._34 = -1.0/depth;
>+ }
The spec says non-positive values should be rejected by the parser, so by this point you should be able to assert that depth > 0.0f -- though I'd hope you haven't done any double-to-float conversions since it was parsed!
That said -- you need to add code to the parser to reject nonpositive values, since you're missing that code.
>diff --git a/layout/style/test/property_database.js b/layout/style/test/property_database.js
>+ "-moz-perspective": {
>+ domProp: "MozPerspective",
>+ inherited: false,
>+ type: CSS_TYPE_LONGHAND,
>+ initial_values: [ "0" ],
you should add "none" here
>+ other_values: [ "1000", "500.2" ],
and add some negative values here
>+ invalid_values: [ "pants" ]
>+ },
r=dbaron with that fixed
(In reply to comment #140)
Thanks for the comments, fixed and running through tryserver now.
>
>.
Where does it say this?
The document I've been referring to says <number>. is newer than that
Oh right, guess I've been reading the wrong thing :)
Thanks for that, I'll switch to parsing lengths then!
WebKit appears to be parsing lengths too.
Created attachment 549620 [details] [diff] [review]
Part 9 - Implement the perspective() transform function and style property. v3
Fixed review comments.
Requesting review again because of the number -> length changes.
Created attachment 549699 [details] [diff] [review]
Part 9 - Implement the perspective() transform function and style property. v4
Fixed test failures.
Comment on attachment 548080 [details] [diff] [review]
Part 13: Add basic reftests for 3d transforms and expose 3d transform status in GfxInfo
Review of attachment 548080 [details] [diff] [review]:
-----------------------------------------------------------------
Instead of exposing whether 3D transforms are enabled, how about just making the tests conditional on GPU acceleration enabled (for now)?
::: layout/reftests/transform-3d/reftest.list
@@ +23,5 @@
> +== rotate3d-2a.html rotatey-1-ref.html
> +!= backface-visibility-1a.html about:blank
> +fails-if(!transforms3d) == backface-visibility-1b.html about:blank
> +fails-if(!transforms3d) != perspective-origin-1a.html rotatex-perspective-1a.html
> +== perspective-origin-1b.html perspective-origin-1a.html
Can you put these in alphabetical order?
Comment on attachment 548084 [details] [diff] [review]
Part 14b: Layout changes for preserve-3d
Review of attachment 548084 [details] [diff] [review]:
-----------------------------------------------------------------
::: layout/base/nsDisplayList.cpp
@@ +2441,4 @@
> {
> NS_PRECONDITION(aFrame, "Cannot get transform matrix for a null frame!");
> + //NS_PRECONDITION(aFrame->GetStyleDisplay()->HasTransform(),
> + // "Cannot get transform matrix if frame isn't transformed!");
Just remove this
@@ +2446,5 @@
> + if (aOutAncestor) {
> + *aOutAncestor = aFrame->GetParent();
> + }
> +
> + if (!aFrame->GetStyleDisplay()->HasTransform()) {
Add comment that this is the preserve-3d case
@@ +2476,5 @@
> + aFrame->PresContext(),
> + dummy, bounds, aFactor);
> + } else {
> + NS_ASSERTION(aFrame->Preserves3DChildren(),
> + "If we don't have a transform, then we must be at least preserving transforms of our children");
Shouldn't this be asserting aFrame->Preserves3D?
@@ +2776,5 @@
> const nsRect* aBoundsOverride)
> {
> NS_PRECONDITION(aFrame, "Can't take the transform based on a null frame!");
> + //NS_PRECONDITION(aFrame->GetStyleDisplay()->HasTransform(),
> + // "Cannot transform a rectangle if there's no transformation!");
Just remove this, and the ones below
::: layout/generic/nsFrame.cpp
@@ +1431!
But can't this assertion fire if you have a preserves-3D parent with a child that has a transform but also a clip?
@@ +1481,5 @@
> */
> if ((mState & NS_FRAME_MAY_BE_TRANSFORMED) &&
> disp->HasTransform()) {
> /* If we have a complex transform, just grab the entire overflow rect instead. */
> + if (Preserves3DChildren() || GetParent()->Preserves3DChildren() || !nsDisplayTransform::UntransformRect(dirtyRect, this, nsPoint(0, 0), &dirtyRect)) {
Just call Preserves3D() here?
@@ ?
@@ +4534,5 @@
> * See bug #452496 for more details.
> */
> +
> + // Check the transformed flags and remove it
> + PRBool isTransformed = (aFlags & INVALIDATE_ALREADY_TRANSFORMED);
"rectIsTransformed" to be clearly different from IsTransformed()?
@@ +4601,5 @@
>
> /* If we're transformed, the matrix will be relative to our
> * cross-doc parent frame.
> */
> + //*aOutAncestor = nsLayoutUtils::GetCrossDocParentFrame(this);
Remove
@@ +6670,5 @@
> + }
> + } else {
> + // When we are preserving 3d we need to iterate over all children separately.
> + // If the child also preserves 3d then their overflow will already been in our
> + // coordinate space, otherwise we need to transform.
Move this code to a helper function? FinishAndStoreOverflow is already a bit long :-)
::: layout/generic/nsIFrame.h
@@ +1192,5 @@
> /**
> + * Returns whether this frame will attempt to preserve the 3d transforms of its
> + * children. This is a direct indicator of -moz-transform-style: preserve-3d.
> + */
> + virtual PRBool Preserves3DChildren() const;
Doesn't need to be virtual?
@@ +1195,5 @@
> + */
> + virtual PRBool Preserves3DChildren() const;
> +
> + /**
> + * Returns whether this child frame will preserve 3d transforms.
Needs a better comment to distinguish it from the previous method.
@@ +1197,5 @@
> +
> + /**
> + * Returns whether this child frame will preserve 3d transforms.
> + */
> + virtual PRBool Preserves3D() const;
Doesn't need to be virtual?
Comment on attachment 548077 [details] [diff] [review]
Part 10 - Implement -moz-backface-visible v2
r=dbaron
Comment on attachment 548078 [details] [diff] [review]
Part 11a: Add nsCSSValueTriplet and optionally read a z component to -moz-transform-origin v2
>+ if (CheckEndProperty() ||
>+ !ParseVariant(depth, VARIANT_LENGTH | VARIANT_CALC, nsnull) ||
>+ !nsLayoutUtils::Are3DTransformsEnabled()) {
I think you can drop the CheckEndProperty || here.
>+ NS_ABORT_IF_FALSE(xValue.GetUnit() != eCSSUnit_Null &&
>+ yValue.GetUnit() != eCSSUnit_Null &&
>+ xValue.GetUnit() != eCSSUnit_Inherit &&
>+ yValue.GetUnit() != eCSSUnit_Inherit &&
>+ zValue.GetUnit() != eCSSUnit_Inherit &&
>+ xValue.GetUnit() != eCSSUnit_Initial &&
>+ yValue.GetUnit() != eCSSUnit_Initial &&
>+ zValue.GetUnit() != eCSSUnit_Initial,
>+ "inappropriate pair value");
"pair" -> "triplet"
Also, please add a comment noting that a null Z value *is* allowed,
since it isn't obvious when skimming.
(both comments repeated for both SetTripletValue variants)
nsRuleNode.cpp:
You still have the SetTripletCoords function, but you're not calling it.
Remove it. (It has bugs, too, but I won't bother describing them!)
It would probably be useful to use mozilla::DebugOnly<PRBool> in
your new code in ComputeDisplayData.
>+ if (valZ.GetUnit() == eCSSUnit_Null) {
>+ display->mTransformOrigin[2].SetCoordValue(0);
You should add a comment here that we've already separately checked
transformOriginValue against eCSSUnit_Null, so this is safe (i.e.,
clearly means 0 rather than unspecified).
r=dbaron with those things fixed
Comment on attachment 549699 [details] [diff] [review]
Part 9 - Implement the perspective() transform function and style property. v4
Carrying forward r=dbaron
Landed parts 6 through 12b on mozilla-inbound:
Excitement!
/be
Cool!
>
> @@ .
I've actually removed this line in my queue because it was causing tests to fail. What exactly is in the Content() group? I was under the impression that everything would have a z-order of 0 (positive/negative z-order content is separate), and no z-position (existing tests won't have 3d transforms), so this would just be a content-order sort and a no-op.
Relatedly, all the 3d content that I've tested ends up in the positive z-order group, so this line doesn't actually appear to be necessary at all.
(In reply to comment #155)
> >
> > @@ .
The scariness is that you're re-sorting everything by CSS z-index and content order, and if any code has deliberately violated that order then you'll be undoing that. I don't remember off the top of my head what does that, but it sounds like that's what's causing your tests failing. There's also an efficiency issue, you're adding an O(N log N) pass over the content list.
> I've actually removed this line in my queue because it was causing tests to
> fail. What exactly is in the Content() group?
Content() contains everything "normal" that doesn't belong in one of the other lists. See nsDisplayList.h.
> Relatedly, all the 3d content that I've tested ends up in the positive
> z-order group, so this line doesn't actually appear to be necessary at all.
Right, nsDisplayTransform items should always be in the PositionedDescendants list since they are pseudo-stacking-contexts.
Is it planned to have a media query available to test for 3d transform support?
Regards,
Lr
(In reply to Louis-Rémi BABE from comment #158)
> Is it planned to have a media query available to test for 3d transform
> support?
This feature can't be detected afaik. So a mediaQuery sounds appropriate. Webkit does that, right?
(In reply to Paul Rouget [:paul] from comment #159)
> (In reply to Louis-Rémi BABE from comment #158)
> > Is it planned to have a media query available to test for 3d transform
> > support?
>
> This feature can't be detected afaik. So a mediaQuery sounds appropriate.
> Webkit does that, right?
There's also dbaron's @supports[1], but I guess that wouldn't be ready in time.
[1]
hi guys. i really love how much progress and effort goes into supporting new standards and bringing the web forward. i've been following this bug report for quite some time now.
but every time i see how some parts are advancing and getting care from so many, it feels funny how other small details are neglected despite having hideous bugs for years.
specifically, i'm talking about some elements not being stylable, partly or at all (bug 418833, bug 52500, bug 402625 and bug 79107), buttons still have this invisible-yet-padded “::-moz-focus-inner” (bug 140562), and cursor themes are not completely supported (bug 609889) (firefox uses its own 90s-era cursors sometimes)
i hope this gets one or two of you to look at these, and my whining didn't distract you too much, but i think new standards are not everything, and little old inconsistencies matter, too. i just tried to direct some of the positive energy here elsewhere, where it is needed imho, too.
You could technically feature detect by setting a 3d transform on something and examining the computed style, but that's annoying and why we implemented the media query in webkit.
Currently the plan is to support 3d-transforms everywhere (see bug 675837). The performance won't be great without GPU acceleration, so maybe we need a query for whether transforms are GPU-accelerated.
At least the Apple port of WebKit for Safari only does 3d in accelerated content. We don't have a software fallback. I'm not sure if other ports do. In other words, for Safari our media query for 3d was basically enough to guarantee performance.
We were quite hesitant to introduce a media query that is used as feature detection. As roc says, this is really testing performance, not the media. Oh well :(
Created attachment 551300 [details] [diff] [review]
Part 14b: Layout changes for preserve-3d v2
Fixed review comments
Comment on attachment 551300 [details] [diff] [review]
Part 14b: Layout changes for preserve-3d v2
Review of attachment 551300 [details] [diff] [review]:
-----------------------------------------------------------------
Somewhere you need a comment or two to explain the general setup: how preserve-3d transforms are applied to the display list (in BuildDisplayListForStackingContext, I guess), and how invalidation is handled (in Invalidate somewhere).
::: layout/base/nsDisplayList.cpp
@@ +772,5 @@
> + nsIFrame* ancestor;
> + gfx3DMatrix matrix1 = aItem1->GetUnderlyingFrame()->GetTransformMatrix(&ancestor);
> + gfx3DMatrix matrix2 = aItem2->GetUnderlyingFrame()->GetTransformMatrix(&ancestor);
> +
> + return matrix1._43 < matrix2._43;
If these are equal, we should check IsContentLEQ, right?
::: layout/generic/nsFrame.cpp
@@ +1444,5 @@
> +{
> + nsresult rv = NS_OK;
> + nsDisplayList newList;
> + while (nsDisplayItem *item = aList->RemoveBottom()) {
> + if (item->GetType() == nsDisplayItem::TYPE_TRANSFORM && item->GetUnderlyingFrame()->GetParent()->Preserves3DChildren()) {
How about check item->GetUnderlyingFrame() not null, then check item->GetUnderlyingFrame()->GetParent()->Preserves3DChildren(), then switch on item->GetType()?
@@ +1445.
It looks like this has landed in a partially-enabled state, which causes site compat issues. See bug 677173.
Hmm right. We can't really turn off IDL attributes based on a pref, of course :-(. If we can back out just the CSSDeclaration part of the patch, we should do that.
> We can't really turn off IDL attributes based on a pref, of course
You sort of can, if it lives on a separate interface; we have some code in nsDOMClassInfo.cpp right now that does that sort of thing.
Should probably track for Firefox 8 given the site compat issues.
Is it worth creating a new interface just for the 3d transform properties, with the plan of having them unconditionally defined in the near future?
It might be easier to just remove the properties, since this should only be needed temporarily.
(In reply to Matt Woodrow (:mattwoodrow) from comment #171)
> It might be easier to just remove the properties, since this should only be
> needed temporarily.
Let's do that.
How would I do this exactly?
Just removing the .idl entries fails to compile because the macros in nsCSSPropList.h reference them.
Created attachment 552267 [details] [diff] [review]
Part 14b: Layout changes for preserve-3d v3
Where do you get errors from removing the IDL entries? It's probably fixable, with hacks.
(In reply to David Baron [:dbaron] from comment #175)
>.
Yes, I just did that code audit and I believe it does the right thing. But the change needs to apply to both versions of LookupProperty.
(In reply to Matt Woodrow (:mattwoodrow) from comment #176)
>
Just add the declarations to nsDOMCSSDeclaration.h manually, right below NS_DECL_NSIDOMCSS2PROPERTIES.
Comment on attachment 548634 [details] [diff] [review]
Part 15 - Add 4D Vectors, Quaternions and gfx3DMatrix functions v2
Review of attachment 548634 [details] [diff] [review]:
-----------------------------------------------------------------
So, the operator[] issues, the order of translation in Inverse(), the left-over debugging code, and the MAX macro in a header must be fixed. The rest are suggestions you can take or leave as you like.
::: gfx/2d/BasePoint3D.h
@@ +68,5 @@
> + return y;
> + } else {
> + return z;
> + }
> + }
This function made me throw up in my mouth a little. Just saying.
:::);
Should be w - aPoint.w, not +.
::: gfx/2d/Makefile.in
@@ +51,5 @@
> EXPORTS_mozilla/gfx = \
> 2D.h \
> BasePoint.h \
> BasePoint3D.h \
> + BasePoint4D.h \
You're using tabs here when the rest of the list (except the $(NULL) line) isn't.
::: gfx/thebes/gfx3DMatrix.cpp
@@ +114,5 @@
> return *this;
> }
>
> +gfxPointH3D&
> +gfx3DMatrix::operator[](const int aIndex)
Is there any reason to declare aIndex const?
@@ +118,5 @@
> +gfx3DMatrix::operator[](const int aIndex)
> +{
> + NS_ABORT_IF_FALSE(aIndex >= 0 && aIndex <= 3, "Invalid matrix array index");
> + if (aIndex == 0) {
> + return *reinterpret_cast<gfxPointH3D*>(&_11);
So, this kind of thing is technically invalid, though I don't know a compiler/architecture combination where it will actually break.
The only way to guarantee, at a language level, that a series of memory cells are adjacent in memory, with no padding for alignment or whatever else between them, is to put them in an array.
However, if you're going to rely on _11 through _14 being adjacent in memory, why not rely on _11 through _44 being adjacent? Then you can get rid of the if/else chain and just use
return *reinterpret_cast<gfxPointH3D*>((&_11)+4*aIndex);
directly, and the whole thing can go in the header so it can be inlined, as it becomes basically one lea instruction on x86. A similar approach will eliminate the if/else chain in BasePoint[34]D and even in TransposedVector()/SetTransposedVector(). That at least has the advantage of generating reasonable code, while the current construction will create surprisingly slow code for some future caller that naively expects operator[] on a matrix class to be implemented efficiently.
In other words, if you're going to make the assumption these things are tightly packed, then you should actually use that assumption, rather than doing things halfway. Otherwise you're relying on loop unrolling to get reasonable code out of things like Normalize() and Transposed() below (looking at the asm, not only does gcc not unroll the loops, it doesn't inline the functions that are available to be inlined, either, likely because all the conditionals make them huge).
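For what it's worth, a sketch of the fully packed alternative (assumed row-major layout, illustrative names, not the actual gfx3DMatrix): storing all 16 components in a single array makes adjacency a language guarantee rather than an assumption, and operator[] collapses to a constant offset.

```cpp
#include <cassert>

// Sketch of the packed-storage suggestion (assumed layout): keep all 16
// components in one array so operator[] is a constant-time offset instead
// of an if/else chain, and so adjacency is guaranteed by the language
// rather than assumed via reinterpret_cast across named members.
struct Matrix4x4 {
  double components[16];  // row-major: row i occupies components[4*i .. 4*i+3]

  double* operator[](int aIndex) { return &components[4 * aIndex]; }
  const double* operator[](int aIndex) const { return &components[4 * aIndex]; }
};
```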
@@ +129,5 @@
> + }
> +}
> +
> +const gfxPointH3D&
> +gfx3DMatrix::operator[](const int aIndex) const
Is there any reason to declare aIndex const?
@@ +249,5 @@
> +gfx3DMatrix
> +gfx3DMatrix::Inverse3x3() const
> +{
> + gfxFloat det = Determinant3x3();
> + if (det == 0.0) {
So, gfxMatrix::Invert uses cairo_matrix_invert(), which handles singular matrices slightly differently.
1) It also fails if !ISFINITE(det).
2) Because of the way it is called, the inverse of a singular matrix is (usually) the original singular matrix, rather than the identity. I say "usually" because in the simple scaling/translation case, it inverts the X row before checking to see if the Y row is invertible, but I'd call that behavior a bug in Cairo (the documentation only says the matrix is modified if the inversion process is successful, and in general it's bad form to partially modify things and then return an error). It also doesn't check ISFINITE in that case, which is probably another bug.
I don't know how important preserving either of these semantics are. Given the aforementioned Cairo bugs, probably not very.
@@ +264,5 @@
> + temp._23 = _13 * _21 - _11 * _23;
> + temp._31 = _21 * _32 - _22 * _31;
> + temp._32 = _31 * _12 - _11 * _32;
> + temp._33 = _11 * _22 - _12 * _21;
> + temp /= det;
I recommend temp *= (1/det).
@@ +293,5 @@
> + */
> + gfx3DMatrix translation;
> + translation[3] = gfxPointH3D(-_41, -_42, -_43, 1);
> +
> + matrix3 = Inverse3x3() * translation;
Is this really correct? I get Inverse[{{a,b,c,0},{d,e,f,0},{g,e,h,0},{x,y,z,1}}] == {{1,0,0,0},{0,1,0,0},{0,0,1,0},{-x,-y,-z,1}}.Inverse[{{a,b,c,0},{d,e,f,0},{g,e,h,0},{0,0,0,1}}]. I.e.,
matrix3 = translation * Inverse3x3();
Also, the vast majority of these multiplications are by 0 or 1, but because things get stored to memory and then read back, the optimizer is unlikely to eliminate them all. Perhaps it's worth expanding them out, e.g.:
matrix3 = Inverse3x3();
matrix3[3] = gfxPointH3D(-_41*matrix3._11 - _42*matrix3._22 - _43*matrix3._33,
...
That reduces the number of multiplications from 64 to 9. If you opt not to do that, you should still at least use the gfx3DMatrix::Translation() factory, rather than the translation[3] = ... construction you use above.
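A quick numeric check of that ordering (illustrative code, row-vector convention as in the 2D gfxMatrix, not the real class) confirms that the translation factor belongs on the left:

```cpp
#include <cassert>
#include <cmath>

// Sanity check: for an affine M = [A 0; t 1] in row-vector convention, the
// inverse is Translation(-t) * Inverse3x3(A), i.e. translation on the LEFT.
struct M4 { double m[4][4]; };

M4 Mul(const M4& a, const M4& b) {
  M4 r = {};
  for (int i = 0; i < 4; i++)
    for (int j = 0; j < 4; j++)
      for (int k = 0; k < 4; k++)
        r.m[i][j] += a.m[i][k] * b.m[k][j];
  return r;
}

bool IsIdentity(const M4& a) {
  for (int i = 0; i < 4; i++)
    for (int j = 0; j < 4; j++)
      if (std::fabs(a.m[i][j] - (i == j ? 1.0 : 0.0)) > 1e-12)
        return false;
  return true;
}
```

With A = diag(2, 4, 5) and t = (7, 8, 9), multiplying the original matrix by Translation(-t) * Inverse3x3 yields the identity; the other order does not.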
@@ +294,5 @@
> + gfx3DMatrix translation;
> + translation[3] = gfxPointH3D(-_41, -_42, -_43, 1);
> +
> + matrix3 = Inverse3x3() * translation;
> + test = PR_TRUE;
So, this looks like left-over debugging code. In particular, you aren't actually returning the inverse you just computed, but are continuing on to compute the full 4x4 inverse.
@@ +301,3 @@
> gfxFloat det = Determinant();
> if (det == 0.0) {
> + return gfx3DMatrix();
Ah, I see you're actually explicitly changing the behavior away from the Cairo behavior (i.e., returning the identity instead of the original, singular matrix). Is there a reason for this? Perhaps it should be documented?
@@ +356,4 @@
>
> + temp /= det;
> +
> + //NS_ABORT_IF_FALSE(!test || matrix3 == temp, "AAAAH");
Either #ifdef this to actually work in debug mode, or remove it entirely. Don't just leave it here commented out.
@@ +407,5 @@
> +gfx3DMatrix::Normalize()
> +{
> + for (int i = 0; i < 4; i++) {
> + for (int j = 0; j < 4; j++) {
> + this->operator [](i)[j] /= this->operator [](3)[3];
Perhaps (*this)[i][j] would be cleaner?
@@ +465,5 @@
> + return gfxPointH3D(x, y, z, w);
> +}
> +
> +gfxPointH3D
> +gfx3DMatrix::Transform4DLeft(const gfxPointH3D& aPoint) const
"Left" messes with my head. It's correct for OpenGL, where points are column vectors multiplied on the right and the matrix is stored in column-major order, but it's backwards for D3D, where points are row vectors multiplied on the left to begin with (and in particular, this is how multiplication is documented to work for the 2D gfxMatrix class).
Perhaps TransposeTransform4D?
::: gfx/thebes/gfxQuaternion.h
@@ +40,5 @@
> +
> +#include "mozilla/gfx/BasePoint4D.h"
> +#include "gfx3DMatrix.h"
> +
> +#define MAX(a,b) ((a)>(b)?(a):(b))
It's probably not a great idea to put a macro named MAX in a header file like this. I also recommend documenting that it has to return b when a==b, and why. Otherwise someone is likely to replace it with some other MAX implementation and break things without realizing it.
@@ +67,5 @@
> + gfxFloat dot = DotProduct(aOther);
> + if (dot == 1.0) {
> + return *this;
> + }
> + gfxFloat theta = acos(dot);
Rounding errors in the quaternion normalization or dot product could potentially give you a value slightly outside the range [-1,1], which will make acos() return NaN. I recommend clamping dot to that range.
@@ +70,5 @@
> + }
> + gfxFloat theta = acos(dot);
> +
> + gfxQuaternion left = *this;
> + left *= sin((1 - aCoeff) * theta) / sin(theta);
Instead of dividing by sin(theta), it may be better to use
gfxFloat rsintheta = 1/sqrt(1 - dot*dot);
and then multiply by rsintheta. Square roots, and particularly reciprocal square roots, are much easier to compute.
@@ +73,5 @@
> + gfxQuaternion left = *this;
> + left *= sin((1 - aCoeff) * theta) / sin(theta);
> +
> + gfxQuaternion right = aOther;
> + right *= sin(aCoeff * theta) / sin(theta);
You can also replace the remaining two calls to sin() with a call to sin() and cos() on the same angle (which is faster on many platforms). E.g., using the identity
sin((1 - aCoeff) * theta)/sin(theta)
== sin(theta - aCoeff*theta)/sin(theta)
== (sin(theta)*cos(aCoeff*theta) - cos(theta)*sin(aCoeff*theta))/sin(theta)
== sin(theta)*cos(aCoeff*theta)/sin(theta) - cos(theta)*sin(aCoeff*theta)/sin(theta)
== cos(aCoeff*theta) - dot*sin(aCoeff*theta)/sin(theta)
leads to the code:
gfxFloat w = sin(aCoeff*theta)*rsintheta;
left *= cos(aCoeff*theta) - dot*w;
right *= w;
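Putting these suggestions together with the clamping noted earlier, a self-contained sketch of the optimized interpolation (hypothetical names, not the actual gfxQuaternion code; the antipodal dot == -1 case is left out of this sketch) might look like:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the suggested Slerp optimizations: clamp the dot product before
// acos(), compute 1/sin(theta) as a reciprocal square root, and use the trig
// identity above to drop one sin() call. Names are illustrative only.
struct Quat { double x, y, z, w; };

Quat Slerp(const Quat& a, const Quat& b, double t) {
  double dot = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
  // Rounding can push |dot| slightly past 1, which would make acos() NaN.
  if (dot > 1.0) dot = 1.0;
  if (dot < -1.0) dot = -1.0;
  if (dot == 1.0) {
    return a;  // Coincident quaternions: nothing to interpolate.
  }
  double theta = std::acos(dot);
  double rsintheta = 1.0 / std::sqrt(1.0 - dot * dot);  // == 1/sin(theta)
  // sin((1-t)*theta)/sin(theta) == cos(t*theta) - dot*sin(t*theta)/sin(theta)
  double rightW = std::sin(t * theta) * rsintheta;
  double leftW = std::cos(t * theta) - dot * rightW;
  return { a.x * leftW + b.x * rightW, a.y * leftW + b.y * rightW,
           a.z * leftW + b.z * rightW, a.w * leftW + b.w * rightW };
}
```

Interpolating halfway between the identity and a 90-degree rotation about z should give a 45-degree rotation about z, which is an easy spot check.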
@@ +89,5 @@
> + temp[1][1] = 1 - (2 * x * x) - (2 * z * z);
> + temp[1][2] = (2 * y * z) + (2 * w * x);
> + temp[2][0] = (2 * x * z) + (2 * w * y);
> + temp[2][1] = (2 * y * z) - (2 * w * x);
> + temp[2][2] = 1 - (2 * x * x) - (2 * y * y);
Recommend factoring out the 2's in each term, to save 9 multiplies, e.g.,
temp[0][0] = 1 - 2*(y * y + z * z);
The compiler is generally conservative about doing such optimizations for you unless you compile with -ffast-math or an equivalent.
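For reference, the full conversion with the 2's factored out (note the plus inside the parentheses; hypothetical helper, row-vector convention matching the quoted code):

```cpp
#include <cassert>
#include <cmath>

// Quaternion-to-rotation-matrix conversion with the common factor of 2
// pulled out of each term (row-vector convention). Illustrative names only.
struct Quat { double x, y, z, w; };

void ToRotationMatrix(const Quat& q, double out[3][3]) {
  out[0][0] = 1 - 2 * (q.y * q.y + q.z * q.z);
  out[0][1] = 2 * (q.x * q.y + q.w * q.z);
  out[0][2] = 2 * (q.x * q.z - q.w * q.y);
  out[1][0] = 2 * (q.x * q.y - q.w * q.z);
  out[1][1] = 1 - 2 * (q.x * q.x + q.z * q.z);
  out[1][2] = 2 * (q.y * q.z + q.w * q.x);
  out[2][0] = 2 * (q.x * q.z + q.w * q.y);
  out[2][1] = 2 * (q.y * q.z - q.w * q.x);
  out[2][2] = 1 - 2 * (q.x * q.x + q.y * q.y);
}
```

A 90-degree rotation about z (x = y = 0, z = w = sqrt(2)/2) should map the x axis to the y axis, which checks the factored form against the expanded one.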
Comment on attachment 548635 [details] [diff] [review]
Part 16 - Implement transitions/animations for 3d transforms.
Review of attachment 548635 [details] [diff] [review]:
-----------------------------------------------------------------
r+ with some suggestions, but I did not review the style stuff carefully, as I'm relying on the fact that dbaron knows that code much better than I do.
:::);
You should just fix this in Part 15, not here.
::: layout/style/nsStyleAnimation.cpp
@@ +1115,5 @@
>
> float scaleY = sqrt(C * C + D * D);
> C /= scaleY;
> D /= scaleY;
> +
Whitespace-only change.
@@ .
@@ +1147,5 @@
> +static PRBool
> +Decompose3DMatrix(const gfx3DMatrix &aMatrix, gfxPoint3D &aScale,
> + float aShear[3], gfx3DMatrix &aRotate,
> + gfxPoint3D &aTranslate, gfxPointH3D &aPerspective)
> +{
This is a pretty straightforward translation of the code from to our C++ API, except you dropped all the comments from the original code. It would be nice if you could preserve those. Referencing that link would be a good idea, too.
@@ +1194,5 @@
> + row[i].z = local[i].z;
> + }
> +
> + aScale.x = row[0].Length();
> + row[0].Normalize();
You just computed Length(), but Normalize() will compute it again internally. Recommend just dividing by the length you already computed. Ditto for the other two occurrences below.
@@ +1197,5 @@
> + aScale.x = row[0].Length();
> + row[0].Normalize();
> +
> + aShear[XYSHEAR] = row[0].DotProduct(row[1]);
> + row[1] += row[0] * -aShear[XYSHEAR];
row[1] -= row[0] * aShear[XYSHEAR] perhaps? Ditto for the other two lines below.
@@ +1224,5 @@
> +
> + for (int i =0; i < 3; i++) {
> + aRotate[i] = gfxPointH3D(row[i].x, row[i].y, row[i].z, 0);
> + }
> + aRotate[3] = gfxPointH3D(0, 0, 0, 1);
Is it really worth breaking out local into row[]? Operating on the rows of local directly would add 15 FMA's (assuming I can count), but the copies add 18 loads/stores, not counting loop overhead, and a bunch of code.
@@ +1310,4 @@
>
> + gfxPointH3D perspective =
> + InterpolateNumerically(perspective1, perspective2, aProgress);
> + if (perspective != gfxPointH3D(0, 0, 0, 1)) {
What's the point of this check? Can't we just write perspective into result unconditionally? It would seem cheaper to always do that than to even do this test. I'm dubious about the utility of some of the other checks below, as well.
@@ +1317,5 @@
> + gfxPoint3D translate =
> + InterpolateNumerically(translate1, translate2, aProgress);
> + if (translate != gfxPoint3D(0, 0, 0)) {
> + gfx3DMatrix temp = gfx3DMatrix::Translation(translate);
> + result = temp * result;
As in the last patch, you can reduce this from 64 multiplies to 9. Perhaps you should just add a function for it to gfx3DMatrix?
@@ +1322,5 @@
> + }
> +
> + gfxQuaternion q1(rotate1);
> + gfxQuaternion q2(rotate2);
> + gfxQuaternion q3 = q1.Slerp(q2, aProgress);
Perhaps it would be better to have Decompose3DMatrix return a gfxQuaternion directly. Especially combined with the suggestion to just keep things in "local", it would avoid copying everything from local to aRotate at the end.
@@ +1335,5 @@
> + InterpolateNumerically(shear1[YZSHEAR], shear2[YZSHEAR], aProgress);
> + if (yzshear != 0.0) {
> + gfx3DMatrix temp;
> + temp._32 = yzshear;
> + result = temp * result;
This can also be reduced to result[2] += yzshear*result[1], going from 64 multiplies to 4 and 3 lines of code to 1.
@@ +1358,5 @@
> + gfxPoint3D scale =
> + InterpolateNumerically(scale1, scale2, aProgress);
> + if (scale != gfxPoint3D(1.0, 1.0, 1.0)) {
> + gfx3DMatrix temp = gfx3DMatrix::Scale(scale.x, scale.y, scale.z);
> + result = temp * result;
This can also be reduced from 64 multiplies to 12, though it requires 3 lines of code instead of 2.
@@ +1576,4 @@
>
> // FIXME: If the matrix contains only numbers then we could decompose
> // here. We can't do this for matrix3d though, so it's probably
> // best to stay consistent.
I'm not sure that the second sentence here is true any longer. Certainly doing the decomposition once instead of for every step of the animation would save more computation than basically all of the other optimizations I pointed out combined.
I concur with Timothy, a Vector or Matrix class should store all its coefficients as a single one-dimensional array. Then, if you want to address vector coefficients as "x" and "y", implement that as inline methods. It is then completely safe to assume that they are inlined (provided optimization is enabled) and incur zero overhead.
> I'm not sure that the second sentence here is true any longer. Certainly
> doing the decomposition once instead of for every step of the animation
> would save more computation than basically all of the other optimizations I
> pointed out combined.
I believe this would only prevent us from decomposing multiple times for each stage of the animation, i.e. for visible region calculation, and for drawing.
I think we'd need a separate optimization to let us decompose the list once, and then start using those lists for future steps of the animation.
Just use NS_MAX instead of MAX.
(In reply to Timothy B. Terriberry (:derf) from comment #179)
> Comment on attachment 548635 [details] [diff] [review]
> Part 16 - Implement transitions/animations for 3d transforms.
> @@ .
FWIW, people have spent quite some time eradicating PR_ABS. Please be so kind not to revert their work.
Created attachment 553086 [details] [diff] [review]
Part 15 - Add 4D Vectors, Quaternions and gfx3DMatrix functions v3
Fixed most of (if not all) review comments
Created attachment 553087 [details] [diff] [review]
Part 16 - Implement transitions/animations for 3d transforms. v2
Fixed Tim's review comments
Carrying forward r=derf
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #182)
> Just use NS_MAX instead of MAX.
There's potential for a subtle bug here. The first argument to MAX can be -0 instead of 0, and the second one is always 0. They'll compare equal with the standard relational operators, so it depends on the exact implementation which one gets returned. If you pass -0 instead of 0 to sqrt(), you'll get NaN back. Although the current implementation of NS_MAX looks like it does what we want, and it's unlikely to change, I don't see this as a documented part of its interface, and think it's safer to have a custom version where the desired behavior _is_ documented.
You could document it for NS_MAX, and write a test?
Yeah, let's just document it for NS_MAX. Very good catch though, I wouldn't have thought of it.
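A tiny demonstration of the tie-breaking subtlety (hypothetical helper names): the two common MAX spellings agree on ordinary values but return different signed zeros for (-0.0, +0.0), and only signbit() can tell the results apart.

```cpp
#include <cassert>
#include <cmath>

// Two common MAX spellings. They agree whenever a != b, but on a tie the
// first returns b and the second returns a -- which matters when the tie is
// between -0.0 and +0.0, since those compare equal.
double MaxTieB(double a, double b) { return (a > b) ? a : b; }  // tie -> b
double MaxTieA(double a, double b) { return (a < b) ? b : a; }  // tie -> a
```

The MAX macro quoted earlier, ((a)>(b)?(a):(b)), is the tie -> b variant, which is why swapping in a differently-spelled maximum could silently change the -0.0 behavior.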
Jesse is going to add this to his DOM fuzzers.
Comment on attachment 548083 [details] [diff] [review]
Part 14a: Add -moz-transform-style CSS property
We need to do the same things done in bug 677173 to hide these properties
for now. (I'd like to come up with a better solution so that the pref
can fully enable them, though; the main obstacle there is the
nsIDOMCSS2Properties interface. If we had no interface at all, things
would be simpler...)
In property_database.js, you should:
- use tabs to match the rest of the file
- use invalid_values rather than invalid_value
r=dbaron with that
Comment on attachment 548636 [details] [diff] [review]
Part 17 - Add style tests for the new transform functions, and transitions
In property_database.js, you need tabs rather than spaces.
Why do the perspective tests all take <number> arguments when the
spec says perspective() takes lengths? I guess that's a change
in the editor's draft since the working draft. I suppose numbers
do make sense -- except I'd expect perspective() to be consistent
with translatez(). I think we should change one of them.
test_transitions_per_property.html:
Please leave the existing tests using rotate() rather than converting
them all to rotatez(), but it's not a bad idea to test them with
rotatez in *addition*. (Also, if the pref is working right, those
should require the pref... which makes me think that either something
is wrong or you didn't test this with the pref off.)
Some of the horrible expected: fields with matrix3d() values would
have been easier to write with c()... but now that you've written them
they're probably worth leaving.
I'm guessing you didn't test with the pref off, or maybe that there's
another patch in the series to turn the pref on. You probably need a
pref check in property_database.js to make the addition of some of the
values to those arrays condition on the pref. Should be easy enough
with the structure
[ values, for, always ].concat(check_pref() ? [values, for, 3d] : [])
r=dbaron with those things fixed
Comment on attachment 553087 [details] [diff] [review]
Part 16 - Implement transitions/animations for 3d transforms. v2
comments on the nsComputedDOMStyle.cpp part only (haven't gotten through nsStyleAnimation.cpp yet):
So I'm not sure why these changes are in this patch. But that's ok
for now, though I think they should probably be separate.
More importantly, though, this needs to output a result that we can
parse. matrix3d() takes 16 <number>s, as it should, so this is
producing matrix3d() expressions that aren't parseable.
Given appropriate tests in property_database.js (and their being
enabled), this should cause test failures. If it doesn't, there's a
problem.
I think the best way to fix this is:
+ change matrix() to accept <number> or <length> for the
transformation parts, so that matrix() and matrix3d() both
accept the syntax WebKit accepts, but that ours also accepts px for
matrix() as we have in the past
+ change nsComputedDOMStyle::DoGetMozTransform to never output
"px"
review- on this part of the patch because I'd like to look at that
again once you've done it
release drivers are not going to track this for 8 because we believe it is turned off for the 8 release (on aurora today.) If that is not correct, please re-nominate.
Created attachment 555944 [details] [diff] [review]
Part 16 - Implement transitions/animations for 3d transforms. v3
That bug did indeed break tests.
I've gone for the simple solution of just printing 3d matrices without 'px'. This <length>/<number> problem exists for other functions too, so I'd like to handle it all in one single patch.
Created attachment 555946 [details] [diff] [review]
Part 17 - Add style tests for the new transform functions, and transitions v2
Updated so these actually pass again with the current patch queue.
Indeed, I've only tested these with 3d transforms enabled. Not setting review flag for now until I implement handling these without transforms enabled.
Created attachment 555950 [details] [diff] [review]
Part 17 - Add style tests for the new transform functions, and transitions v3
Updated test_transitions_per_property.html
Created attachment 555951 [details] [diff] [review]
Part 18 - Make the perspective() transform function actually fail on numbers <= 0
Comment on attachment 553086 [details] [diff] [review]
Part 15 - Add 4D Vectors, Quaternions and gfx3DMatrix functions v3
Review of attachment 553086 [details] [diff] [review]:
-----------------------------------------------------------------
r+ from me, but you should probably address roc's comment 188.
::: gfx/2d/BasePoint3D.h
@@ +61,5 @@
> // compiler generated default assignment operator
>
> + T& operator[](int aIndex) {
> + NS_ABORT_IF_FALSE(aIndex >= 0 && aIndex <= 2, "Invalid array index");
> + return *reinterpret_cast<T*>((&x)+aIndex);
Is the reinterpret_cast as written here necessary? &x should already be of type T*. One subtlety here is that automated tools like coverity may detect references past the end of the object x as being out-of-bounds. It might be possible that a reinterpret_cast to an array of size 3 instead would avoid triggering bogus reports in coverity, but I have no way to test this.
The same comment applies to all the other places you do this.
::: gfx/thebes/gfx3DMatrix.cpp
@@ +260,5 @@
> + }
> +
> + gfx3DMatrix temp;
> +
> + temp._11 = (_22 * _33 - _23 * _32) / det;
Making the "/ det" local is good (the optimizer doesn't have to deal with things going out to memory and back and trying to figure out if intermediate things have side-effects). But my suggestion was to compute the reciprocal "1 / det" and then multiply by that.
It's just a suggestion, and if you want to ignore me, feel free. It can be a ULP or two less accurate than doing all the divisions, but it should be significantly faster. Not only do divisions take many cycles, but on x86 they are also not pipelined, so all your other floating-point operations will stall, waiting for them.
Comment on attachment 555944 [details] [diff] [review]
Part 16 - Implement transitions/animations for 3d transforms. v3
Review of attachment 555944 [details] [diff] [review]:
-----------------------------------------------------------------
I know I already r+'d this, but I had a few more quick comments anyway.
::: layout/style/nsStyleAnimation.cpp
@@ +1163,5 @@
> + /* Normalize the matrix */
> + local.Normalize();
> +
> + /**
> + * perspective is used to solve for perspective, but it also provides
This sounds pretty funny. Perhaps "This" in place of the first "perspective".
@@ +1205,5 @@
> + /* Compute X scale factor and normalize first row. */
> + aScale.x = local[0].Length();
> + local[0] /= aScale.x;
> +
> + /* Compute XY shear factor and make 2nd local orthogonal to 1st. */
This looks like an erroneous search and replace? I think "2nd row" still makes more sense here than "2nd local" (there's only one variable named "local"). Same for all the places below.
@@ +1228,5 @@
> + aShear[XZSHEAR] /= aScale.z;
> + aShear[YZSHEAR] /= aScale.z;
> +
> + /**
> + * At this point, the matrix (in locals) is orthonormal.
Unless you actually want to rename the variable to "locals"... but I think that's pretty silly.
Created attachment 556169 [details] [diff] [review]
Part 15 - Add 4D Vectors, Quaternions and gfx3DMatrix functions v4
Fixed review comments, carrying forward r=derf
Landed parts 14a, 14b and 15 on inbound:
BTW - if you're up to implementing matrix decomposition, be aware that there was a bug in the spec which was fixed recently.
Use the editor's copy at
(decomposition is in the 2d transforms spec, not 3d)
Thanks Dean!
Do you know what changed exactly? The only difference I can see is the scale[0] *= -1; to scale[i] *= -1; change, which I have fixed already.
Created attachment 556450 [details] [diff] [review]
Part 16 - Implement transitions/animations for 3d transforms. v4
Fixed Tim's comments
Created attachment 556451 [details] [diff] [review]
Part 17 - Add style tests for the new transform functions, and transitions v4
Fixed review comments, and made them all pass with and without the pref.
Carrying forward r=dbaron
Created attachment 556452 [details] [diff] [review]
Part 19: Make all translate functions handle lengths and percents
Created attachment 556453 [details] [diff] [review]
Part 13: Add basic reftests for 3d transforms v3
Lets check the tests in for now, but not add the folder to the main reftest.list.
Then they can be run easily when people need to, and we can trivially enable then when 3d transforms are enabled by default.
(In reply to Matt Woodrow (:mattwoodrow) from comment #208)
> Created attachment 556453 [details] [diff] [review]
> Part 13: Add basic reftests for 3d transforms v3
>
> Lets check the tests in for now, but not add the folder to the main
> reftest.list.
I recommend putting the addition to the main reftest.list in, but commented out, so people know there's a directory that's not being run.
(In reply to David Baron [:dbaron] from comment #209)
> I recommend putting the addition to the main reftest.list in, but commented
> out, so people know there's a directory that's not being run.
Sounds reasonable.
Created attachment 556463 [details] [diff] [review]
Part 13: Add basic reftests for 3d transforms v4
Added disabled reftest.list entry.
Carrying forward r=roc
Landed parts 13 and 17 on inbound:
Created attachment 557395 [details] [diff] [review]
Part 20 - Add more gfx3DMatrix transformation function and use these in nsStyleTransformMatrix
Comment on attachment 557395 [details] [diff] [review]
Part 20 - Add more gfx3DMatrix transformation function and use these in nsStyleTransformMatrix
Review of attachment 557395 [details] [diff] [review]:
-----------------------------------------------------------------
r+ with changes.
::: gfx/thebes/gfx3DMatrix.cpp
@@ +349,5 @@
> + _24 = -sinTheta * temp + cosTheta * _24;
> +}
> +
> +void
> +gfx3DMatrix::Multiply(const gfx3DMatrix& aOther)
This should be called something like PreMultiply (like in gfxMatrix) to distinguish it from the normal operator*=(). It would also be a lot simpler to implement it the same way:
*this = aOther * (*this);
@@ +374,5 @@
> + *this = temp;
> +}
> +
> +void
> +gfx3DMatrix::Multiply(const gfxMatrix& aOther)
Same comment here with respect to the name.
::: layout/style/nsStyleTransformMatrix.cpp
@@ +118,5 @@
> NSAppUnitsToFloatPixels(offset, aAppUnitsPerMatrixUnit);
> }
>
> +static void
> +ProcessTranslatePart(double& aResult,
Can't you just make the ProcessTranslatePart() function above return a float instead of taking an aResult outparam? Then you wouldn't need this extra version.
@@ +258,5 @@
> * | dx 0 0 1 |
> * So E = value
> *
> * Otherwise, we might have a percentage, so we want to set the dX component
> * to the percent.
We're no longer constructing a matrix here, so this comment should be updated. A similar comment showing the equivalent matrix at the declaration of gfx3DMatrix::Translate() (and its friends) might be useful, however. Same for all the other, similar comments below.
@@ +603,5 @@
> float depth;
> ProcessTranslatePart(depth, aData->Item(1), aContext,
> aPresContext, aCanStoreInRuleTree,
> 0, aAppUnitsPerMatrixUnit);
> NS_ASSERTION(depth > 0.0, "Perspective must be positive!");
You've got this same assertion in gfx3DMatrix::Perspective(). You probably only need it in one place, and I think there is better.
::: layout/style/nsStyleTransformMatrix.h
@@ +79,5 @@
> /**
> * Given an nsCSSValue::Array* containing a -moz-transform function,
> * returns a matrix containing the value of that function.
> *
> * @param aData The nsCSSValue::Array* containing the transform function.
The documentation here needs to be updated to include the aMatrix parameter. You may also want to put the full documentation on the public ReadTransforms() instead of this private method.
@@ +102,2 @@
>
> + static void ProcessMatrix(gfx3DMatrix& aMatrix, const nsCSSValue::Array *aData,
All of these functions look like they are only ever used as implementation details of the public nsStyleTransformMatrix functions, and are never referenced from anywhere else. That class has no data members and only static methods (its entire existence looks like an odd stand-in for a namespace). I realize it's good C++ form to expose lots of unnecessary details of your private implementation in the public headers, to spread the joy of recompiling when things change and to tempt people to expose things they can see right there, just under that pesky "private" keyword, but perhaps you should just make these into non-member, static (in the original C sense of the keyword) functions (so they only appear in the .cpp file).
Created attachment 558196 [details] [diff] [review]
Part 20 - Add more gfx3DMatrix transformation function and use these in nsStyleTransformMatrix v2
Fixed review comments, and test failures. Carrying forward r=derf
Comment on attachment 556450 [details] [diff] [review]
Part 16 - Implement transitions/animations for 3d transforms. v4
See also comment 192 for comments on the nsComputedDOMStyle part.
Somewhat large changes needed:
- check both 2d together, not separately
- i.e., Decompose3DMatrix shouldn't call Decompose2DMatrix; caller
should figure out which is right
- and need to be careful to be consistent about what gets set
or left untouched (e.g., Decompose2DMatrix has the odd behavior
that it touches only one of the three parts of aShear. It should
probably set them to 0 appropriately -- and other parts to 0 or 1
as appropriate. And then you don't need to initialize). Though
maybe fixing that should wait until after you've fixed the handling
for matrices that we can't decompose.
- rotate3d() and perspective() should use the decomposition algorithm
(though rotate3d() *could* use quaternions directly, but
interpolating components is probably wrong). I think this requires
changing both the main switch() in AddTransformLists and the test
around the AppendTransformFunction call at the top.
More detailed comments:
In AppendTransformFunction, if you put the:
default:
NS_ERROR("...");
*above* the longest list of cases (so that it falls through into the
nargs=1), you'll end up with more efficient code for opt builds, I
think, since the long list of cases will just translate to default:
Also, AppendTransformFunction should use 2-space indent. But the best
way to fix this is by indenting the lines starting with "case" and
"default" by 2 additional spaces and leaving the code inside them where
it is.
Decompose3DMatrix should be 2 space indent rather than 4 space.
The comment at the top of Decompose3DMatrix should have a comment
saying that it's derived from the code in Graphics Gems, so that it
complies with the license on the graphics gems.
At the end of Decompose3DMatrix, could you change this:
aRotate = local;
to:
aRotate = gfxQuaternion(local);
to make it clearer that there's a matrix -> quaternion assignment there.
At some point we probably want to handle decomposition failure better,
so we claim to be unable to interpolate, rather than interpolating
between the identity matrix and something else.
>+ // TODO: Would it be better to interpolate these as angles? How do we convert back to angles?
probably better to just say why it's better to interpolate as shears
and that we're not interpolating as angles.
In AddTransformLists, at least add a comment suggesting that maybe
this isn't the best way to interpolate perspective.
Could you fix the indentation in the second lines of the function
calls in the translate3d and scale3d cases?
The rotate3d case shouldn't have separate code; it should use the matrix
decomposition.
>+ // rotatez and rotate are identical, is there an easier way to check this?
If you want this comment, put it in the TransformFunctionsMatch
function.
(I also think it might not be that hard to do the general Add here
with 2 coefficients, though maybe I'm missing something.)?
r=dbaron with those changes
gfx3DMatrix::Skew* methods seem poorly named, since Skew is usually
described as an angle but shear as what these take.
gfxQuaternion::Slerp should have a comment expanding the acronym,
perhaps?
Comment on attachment 555951 [details] [diff] [review]
Part 18 - Make the perspective() transform function actually fail on numbers <= 0
I'd say just remove the line you're modifying instead of making the modification; you're inside checks that you're either eCSSToken_Dimension, or that you're eCSSToken_Number and 0. Both token types have a valid mNumber, so you're fine just checking tk->mNumber without a tk->mType check.
r=dbaron with that?)
I'd also prefer not to have VARIANT_* names that don't say what's in them. In other words, if you're adding VARIANT_NUMBER to something, it should also get an N inserted in its name. (And while you're there, I don't see the point of having TRANSFORM in the name -- I'd go with VARIANT_LP_CALC, VARIANT_LPN_CALC, VARIANT_LN_CALC, or similar.)
Also, it's helpful if you include the commit message in patches that you post for review.
>?
>
The problem I'm trying to solve here is that the inputs to AddTransformLists when interpolating from 'none' to a list, is two identical lists and coefficients that don't add to 1 (eg, 0.0, 0.5).
Since coeeficients that add to more than 1 don't make sense for transforms, it's preferable to have a null list object, and a single coefficient that just represents the progress through the animation.
AddDifferentTransformLists is detecting this special case, and inserting a null list, instead of the same list twice. nsStyleTransformMatrix::ProcessInterpolateMatrix detects this empty list, and using an identity matrix for decomposition.
The aList1 == aList2 check in AddTransformLists is there because we construct temporary list objects to pass into AddDifferentTransformLists, and this breaks the detection of identical list pointer.
(In reply to Matt Woodrow (:mattwoodrow) from comment #221)
>
>
Ah, interesting. :-) If WebKit accepts raw lengths there, then we probably should too, but if it doesn't, we should only accept raw lengths where it does (and I know it does in the matrix* functions.)
WebKit apparently doesn't accept raw numbers for translate*, so I'll update the patch to only support this for the matrix* variants.
Created attachment 561953 [details] [diff] [review]
Part 16 - Implement transitions/animations for 3d transforms. v5
Fixed review comments, carrying forward r=dbaron,derf
Created attachment 561961 [details] [diff] [review]
Part 18 - Make the perspective() transform function actually fail on numbers <= 0 v2
Fixed review comment, carrying forward r=dbaron
Created attachment 561962 [details] [diff] [review]
Part 19: Make matrix* functions handle lengths and percents
Removed the changes to the translate* functions, and fixed the TRANSFORM macros as suggested
> Also, it's helpful if you include the commit message in patches that you
> post for review.
Sorry, I usually only add these when I go to land patches. I've added them for all the new patches I just uploaded.
Comment on attachment 561962 [details] [diff] [review]
Part 19: Make matrix* functions handle lengths and percents
You should probably have a test in property_database.js that the following are invalid:
* matrix3d() with a % value in the translatez part
* matrix3d() with a length value in the _44 part
r=dbaron with that
Created attachment 562585 [details] [diff] [review]
Part 21: Enable 3D transforms!
This is just a backout of the patch from bug 677173, plus flipping the pref to enabled.
I want to land this just after the aurora merge to get as much testing coverage as possible before it hits a release.
Do 3D transforms actually work when we don't have an accelerated layers backend?
Yes, we support these in BasicLayers via pixman.
Matt, I noticed this line in your last patch (reftest.list) which needs fixing:
> # 3d transforms - disabled currently
Part 16:
Part 18:
Part 19:
Part 20:
Not sure what else is left to checkin, please close if everything done here (and mark mozilla9) - thanks :-)
Comment on attachment 561953 [details] [diff] [review]
Part 16 - Implement transitions/animations for 3d transforms. v5
># HG changeset patch
># Parent 50b36274e6896899e9b3e1de6b4e0ad75ed60a37
>--- a/layout/style/nsComputedDOMStyle.cpp
>+++ b/layout/style/nsComputedDOMStyle.cpp
>@@ -1053,37 +1053,63 @@ nsComputedDOMStyle::DoGetMozTransform()
>+ resultString.Append(NS_LITERAL_STRING("("));
>+ resultString.Append(NS_LITERAL_STRING(")"));
For future reference, I think these would be ever so slightly more efficient as resultString.Append('('); and resultString.Append(')');.
Landed Part 21 on inbound:
That should conclude this bug, unless it gets backed out :)
\o/
Verified as fixed on Aurora (Firefox 10):
layout/reftests/transform-3d passes on all OSs:
/tests/layout/style/test/test_transitions_per_property.html and layout/base/tests/test_preserve3d_sorting_hit_testing.html are passing on all OSs:
Verified as fixed on Central (Firefox 11):
layout/reftests/transform-3d passes on all OSs:
/tests/layout/style/test/test_transitions_per_property.html and layout/base/tests/test_preserve3d_sorting_hit_testing.html are passing on all OSs:
This bug will only be closed when the CSS3 3D Transforms feature will be verified on a RC.
(In reply to Ioana Budnar [QA] from comment #239)
> This bug will only be closed when the CSS3 3D Transforms feature will be
> verified on a RC.
This bug is already closed. (RESOLVED|FIXED)
Perhaps by "closed" you meant "officially marked as verified"? IIRC, we don't have that requirement in general -- once a bug is verified as fixed on trunk (or whatever branch it lands directly on), that's sufficient to mark the bug as "verified".
(Thank you for the comprehensive verification, btw!)
Daniel,
I meant closed from the QA point of view, which means setting it as VERIFIED FIXED ("officially marked as verified" as you put it). I am not working on this bug by the general requirements because it is a feature tracking bug and I need to make sure the feature enters betas and RC without any (major) issues. Please let me know if you have any concerns about this.
(Ah, ok -- I hadn't realized that we wait for feature-tracking bugs to reach an RC before marking them as verified -- but it may be that I just never noticed that, and it sounds like you know what you're doing, so I'll be quiet. :))
This issue has also been verified as fixed on Firefox 10 beta 1:
layout/reftests/transform-3d passes on all OSs:
/tests/layout/style/test/test_transitions_per_property.html and layout/base/tests/test_preserve3d_sorting_hit_testing.html are passing on all OSs:
The manual tests here also pass with known issues.
This feature has been shipped (all the major issues were fixed). | https://bugzilla.mozilla.org/show_bug.cgi?id=505115 | CC-MAIN-2017-04 | refinedweb | 17,226 | 54.42 |
I'm considering investing in a Cisco ASA5505. As Cisco's own VPN client requires a service subscription, which I am trying to do without, are there any free or low-cost ipsec VPN clients that will work with the ASA and run on Windows XP? Is XP's built-in ipsec client compatible? Any links to guides and walkthroughs would be much appreciated.
Install a virtual server inside the remote network and use RRAS for VPN on it. Then expose the relevant ports through the ASA.
That way all XP/Vista clients can connect reliably.
Mike
Have you taken a look at shrew?. I've used it on windows machines as a replacement for a specific version of the cisco client which was not playing nicely with windows 7.
vpnc combined with network-manager-vpnc is a great option if you're on a linux platform though.
Came across this article on how to get Windows Vista to connect to a Cisco PIX using a native client. Since ASA is mostly just a glorified PIX, these options might also work for you.
I also have to agree with the previous posts that the 'vpnc' client that you can install on Linux is just great, as it does not force any network routes or blockage like the official Cisco client does. You decide what your computer routes through the tunnel, as it should be.
Cisco VPN is proprietary and will not work with anything but Cisco VPN :-) It is IPSec but it is not compatible with other IPSec clients.
A cludge of a solution:
I would consider this a bad solution - too convoluted. You will want to service contract in order to get security updates for your router and to get access to the Cisco Support Site.
If the Cisco device is just too expensive, consider a lower cost alternative, either from a different vendor or "Roll-Your-Own". Just remember RFC 1925 truth #3
With sufficient thrust, pigs fly just fine. However, this is
not necessarily a good idea. It is hard to be sure where they
are going to land, and it could be dangerous sitting under them
as they fly overhead.
With sufficient thrust, pigs fly just fine. However, this is
not necessarily a good idea. It is hard to be sure where they
are going to land, and it could be dangerous sitting under them
as they fly overhead.
Looks like you can run vpnc under Windows via Cygwin.
TEsted and working on
On ubuntu 9.04 -64 bit
Sudo apt-gete install VPNC
sudo apt-get install KVpnc
;)
import the Cisco VPN profile and enjoy
Thanks
kartook
Another way to use VPN
install the Windows Box on VMware Workstation and move that Vmdk file to ubuntu box box download and install VMware player then play that windows.vmdk file
If you want move files from ubuntu to Windows you can connect the Windows box through
nautilus-connect-server
It cool
By posting your answer, you agree to the privacy policy and terms of service.
asked
5 years ago
viewed
8419 times
active
4 years ago | http://serverfault.com/questions/21745/alternative-cisco-vpn-clients-for-windows-xp/38879 | CC-MAIN-2014-23 | refinedweb | 520 | 70.84 |
Mini-Huey¶
MiniHuey provides a very lightweight huey-like API that may be
useful for certain applications. The
MiniHuey consumer runs inside a
greenlet in your main application process. This means there is no separate
consumer process to manage, nor is there any persistence for the
enqueued/scheduled tasks; whenever a task is enqueued or is scheduled to run, a
new greenlet is spawned to execute the task.
MiniHuey may be useful if:
- Your application is a WSGI application.
- Your tasks do stuff like check for spam, send email, make requests to web-based APIs, query a database server.
- You do not need automatic retries, persistence for your message queue, dynamic task revocation.
- You wish to keep things nice and simple and don’t want the overhead of additional process(es) to manage.
MiniHuey may be a bad choice if:
- Your application is incompatible with gevent (e.g. uses asyncio).
- Your tasks do stuff like process large files, crunch numbers, parse large XML or JSON documents, or other CPU or disk-intensive work.
- You need a persistent store for messages and results, so the consumer can be restarted without losing any unprocessed messages.
If you are not sure, then you should probably not use MiniHuey. Use the
regular
Huey instead.
Usage and task declaration:
- class
MiniHuey([name='huey'[, interval=1[, pool_size=None]]])¶
task([validate_func=None])¶
Task decorator similar to
Huey.task()or
Huey.periodic_task(). For tasks that should be scheduled automatically at regular intervals, simply provide a suitable
crontab()definition.
The decorated task will gain a
schedule()method which can be used like the
TaskWrapper.schedule()method.
Examples task declarations:
from huey import crontab from huey.contrib.mini import MiniHuey huey = MiniHuey() @huey.task() def fetch_url(url): return urlopen(url).read() @huey.task(crontab(minute='0', hour='4')) def run_backup(): pass
Example usage. Running tasks and getting results work about the same as regular Huey:
# Executes the task asynchronously in a new greenlet. result = fetch_url('') # Wait for the task to finish. html = result.get()
Scheduling a task for execution:
# Fetch in ~30s. result = fetch_url.schedule(('',), delay=30) # Wait until result is ready, takes ~30s. html = result.get()
start()¶
Start the scheduler in a new green thread. The scheduler is needed if you plan to schedule tasks for execution using the
schedule()method, or if you want to run periodic tasks.
Typically this method should be called when your application starts up. For example, a WSGI application might do something like:
# Always apply gevent monkey-patch before anything else! from gevent import monkey; monkey.patch_all() from my_app import app # flask/bottle/whatever WSGI app. from my_app import mini_huey # Start the scheduler. Returns immediately. mini_huey.start() # Run the WSGI server. from gevent.pywsgi import WSGIServer WSGIServer(('127.0.0.1', 8000), app).serve_forever()
Note
There is not a separate decorator for periodic, or crontab, tasks. Just
use
MiniHuey.task() and pass in a validation function. A
validation function can be generated using the
crontab() function.
Note
Tasks enqueued for immediate execution will be run regardless of whether the scheduler is running. You only need to start the scheduler if you plan to schedule tasks in the future or run periodic tasks. | https://huey.readthedocs.io/en/stable/mini.html | CC-MAIN-2020-16 | refinedweb | 529 | 58.99 |
i am a student taking c++ so i do not know that much about it yet.
but we had a homework assignment that i wrote code for but the first error i get is that is doesn't recognize stdafx.h when i try to include it here is the code can anyone help? i am using microsoft visual c++ 2008 express edition.
Code:#include "stdafx.h" #include <iostream>; using namespace std; using namespace System; int _tmain(int argc, _TCHAR* argv[]) { int city1 = 0; int city2 = 0; int averageTemp= 0; cout << "Enter the name of city 1:"; cin >> city1 cout << "Enter the name of city 2:"; cin >> city2 averageTemp = (city1 * city2)/2 cout << "the average temperature of the two cities is " << averageTemp << endl; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/113795-doesn't-recognize-stdafx-h.html | CC-MAIN-2014-41 | refinedweb | 125 | 72.7 |
that I built and provide a link to the download package so that you can run the whole thing yourself.
If you’re not familiar with Microsoft StreamInsight, here’s a quick recap. StreamInsight is a complex event processing engine that can receive high volumes of data via adapters and pass it through LINQ-authored queries. The result is real-time intelligence about the pattern of events found in the engine. You can read more about it on the Microsoft MSDN page for StreamInsight, my own blog posts on it, or pick up a book by a set of good-looking authors.
Assuming you have StreamInsight 1.1 installed (download here) you can execute my solution, which has these Visual Studio projects:
The first project, DataPublisher is my custom StreamInsight adapter that sends “call center” events to the StreamInsight engine.
The CallCenterAdapterPoint.cs class is my actual input adapter; it leverages the FakeDataSource.cs class, which creates a new CallCenterRequestEventType every 500 milliseconds. The CallCenterRequestEventType has its properties (e.g. product, call type) randomly assigned upon creation.
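Strip away the adapter plumbing and the generator itself is simple. Here is a hedged sketch of what a FakeDataSource-style class does; the property names, value lists, and class shape are my assumptions, not the actual source:

```csharp
using System;
using System.Threading;

// Assumed event shape; the real CallCenterRequestEventType may differ.
public class CallCenterRequestEventType
{
    public string Product { get; set; }
    public string CallType { get; set; }
    public DateTime CallTime { get; set; }
}

public class FakeDataSourceSketch
{
    private readonly Random rand = new Random();
    private readonly string[] products = { "Widget", "Gadget", "Gizmo" };
    private readonly string[] callTypes = { "Complaint", "Inquiry", "Praise" };

    // Produces one randomly populated event every 500 milliseconds.
    public CallCenterRequestEventType NextEvent()
    {
        Thread.Sleep(500);
        return new CallCenterRequestEventType
        {
            Product = products[rand.Next(products.Length)],
            CallType = callTypes[rand.Next(callTypes.Length)],
            CallTime = DateTime.Now
        };
    }
}
```

The input adapter just calls something like NextEvent() in a loop and enqueues the result into the engine.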
The next VS 2010 project that I’ll highlight is my web service adapter (which I describe in depth in this blog post).
I’m going to use this adapter to send complex events from StreamInsight to my Windows Form.
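The core of any StreamInsight output adapter is a dequeue loop that pulls events off the engine's queue and pushes them somewhere. A hedged sketch of the shape this one takes, inside a class derived from the StreamInsight 1.1 PointOutputAdapter base class; PostToService is a placeholder name I've invented for the HTTP call to the web service:

```csharp
// Runs on a worker thread; the engine calls Resume() (which re-enters this
// loop) whenever new events become available after we signal Ready().
private void ConsumeEvents()
{
    PointEvent currentEvent;
    while (true)
    {
        if (AdapterState == AdapterState.Stopping)
        {
            Stopped();
            return;
        }
        if (Dequeue(out currentEvent) == DequeueOperationResult.Empty)
        {
            Ready(); // ask the engine to call Resume() when more events queue up
            return;
        }
        PostToService(currentEvent); // e.g. an HttpWebRequest POST of the payload
        ReleaseEvent(ref currentEvent);
    }
}
```

Releasing each event back to the engine after posting it is important; leaking events will eventually stall the query.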
The next project is my Windows Form project, named EventReceiver.WinUI.
This Windows Form hosts a WCF service that, when invoked, updates the Chart control on the main form.
I had to do some fun work with .NET delegates to successfully host a WCF service and allow the service to update the chart. Seems to work ok.
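The delegate work is the standard WinForms cross-thread dance: the WCF operation executes on a worker thread, but controls may only be touched from the UI thread that created them. A minimal sketch of the pattern; the method name, parameters, and chart series name are assumptions:

```csharp
// Called by the hosted WCF service when StreamInsight pushes a new result.
public void UpdateChart(string callType, int runningTotal)
{
    if (chart1.InvokeRequired)
    {
        // Marshal the call back onto the UI thread that owns the Chart control.
        chart1.Invoke(new Action<string, int>(UpdateChart), callType, runningTotal);
        return;
    }
    chart1.Series[callType].Points.AddY(runningTotal);
}
```

The InvokeRequired/Invoke pair is what lets a service hosted inside a form safely redraw that form's chart.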
The final project, and the meatiest, is the StreamInsightQuery project. This project starts up an embedded StreamInsight server and has a set of six queries that you can play with. The first five are meant to be output to the Tracer (console) adapter. These queries show how to filter events, create tumbling windows and hopping windows, and compute running totals. If you set the one line of code here to the query you want and press F5, you can see StreamInsight in action.
//start SI query for queries #1-5
#region Tracer Adapter Query
var siQuery = query4.ToQuery(myApp, "SI Query", string.Empty,
    typeof(TracerFactory), tracerConfig,
    EventShape.Point, StreamEventOrder.FullyOrdered);
#endregion
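To give a flavor of what the queries themselves might look like, here are hedged sketches of the query styles mentioned above. The stream and field names are my assumptions; the window extension methods come from Microsoft.ComplexEventProcessing.Linq in StreamInsight 1.1:

```csharp
// Filter: only complaint calls pass through.
var complaints = from e in inputStream
                 where e.CallType == "Complaint"
                 select e;

// Tumbling window: non-overlapping 10-second buckets, one count per bucket.
var tumbling = from win in inputStream.TumblingWindow(
                   TimeSpan.FromSeconds(10),
                   HoppingWindowOutputPolicy.ClipToWindowEnd)
               select new { Count = win.Count() };

// Hopping window: a 10-second window recomputed every 2 seconds, so windows overlap.
var hopping = from win in inputStream.HoppingWindow(
                  TimeSpan.FromSeconds(10),
                  TimeSpan.FromSeconds(2),
                  HoppingWindowOutputPolicy.ClipToWindowEnd)
              select new { Count = win.Count() };

// Running totals per call type: group by type, then count within a snapshot window.
var totals = from e in inputStream
             group e by e.CallType into g
             from win in g.SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
             select new { CallType = g.Key, Total = win.Count() };
```

Each of these produces a CepStream that the ToQuery call above binds to an output adapter.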
Cool. If you want to try out the Windows Form chart, simply comment out the previous siQuery variable and uncomment out the one that follows.
//start SI query for query #6
#region Web Adapter Query
var siQuery = query6.ToQuery(myApp, "SI Query", string.Empty,
    typeof(WebOutputFactory), webAdapterConfig,
    EventShape.Point, StreamEventOrder.FullyOrdered);
#endregion
Now, you’ll want to go and manually start up the Windows Form console, click the Start Listening button, and make sure that the status of the service is Open.
We can now press F5 again within VS 2010 and start up our StreamInsight server. Instead of writing events to the Console, StreamInsight is calling the Web adapter and sending messages to the web service hosted by our Windows Form. Within a few seconds after starting the StreamInsight server, we should see our “running totals by call center type” complex events drawing on the Chart.
When you’re finished being mildly impressed, you can shut down the StreamInsight server and then Stop Listening on the Windows Form.
So that’s it. You can download the full source code for this whole demo. StreamInsight is a pretty cool technology and I hope that by making it easy to try it, I’ve motivated you to give it a whirl.
Categories: StreamInsight
really good stuff, I seem to be running into issues with the winform app though. I get this as missing CallCenterEvent_To_EventDashboardService.xslt .. is that missing from your zip file?
Hi Steve, that XSLT file is at the root of the solution (next to the .sln file). You likely need to change the address in the StreamInsight project.
got it, working great. pretty awesome, I have been looking for something like this, to “show” how it all works, great work.
Cool. Glad you found it useful. There are a few moving parts in a StreamInsight solution, so it’s nice when you don’t have to do much to see it in action.
Also, just a note, you may want to mention this, I had to download the samples from codeplex and build two of them to get your solution to build, the OutputTracer and SimpleFileTextWriter
Thanks for letting me know.
hi Steve, While Sample do we need to compile from the codeplex to get the SimpleFileTextWriter?
I have seen that the output adapter send the request using HTTP POST. Why WCF is not used to communicate with the client application?
Hi Kodjo, I was aiming for the lowest common denominator. I didn’t want to assume what the recipient service might look like.
Hi,
Great article and technology, thanks.
I’m more on the database technology stack rather than the visual studio/.NET side, so having a few issues getting this running, wondering if you can assist?
I too had to download the OutputTracer and SimpleFileTextWriter, all good!
But I get StreamInsight.Samples.Adapters.OutputTracer; “the type or namespace ‘StreamInsight’ could not be found”, i tried referencing it, but it doesnt show up! I registered the two adapters above using regasm.
Any help would be much appreciated…..
Thanks
Hi Scott,
Have you installed StreamInsight 1.1 itself? If you look at the path where it’s looking for the SI dlls, are there assemblies there?
Hi Richard,
es installed StreamInsight 1.1 (even went through the repair too, to make sure all ok)
When you say SI dlls, do you mean in C:\Program Files (x86)\Microsoft StreamInsight 1.1\Bin?
The StreamInsight.Samples.Adapters.OutputTracer.dll and StreamInsight.Samples.Adapters.SimpleTextFileWriter.dll werent so copied them in there, still no joy!
Cheers…
Scott,
Have you put the adapter assemblies into the GAC?
i installed streaminsight 1.1, don’t see any StreamInsight.Samples.Adapters.OutputTracer.dll anywhere (neither in the bin folder of streaminsight nor in the Package_StreamInsightSolution… please advise
update
ok, so i only found StreamInsight.Samples.OutputAdapters.Tracer, bu tit doesn’t contain a declaration for TracerKind..
Hmm. I know there have been a couple versions of this sample, and that Tracer stuff had changed. Are you able to run my sample?
Thanks,Really good example.
Hello, Stgeve, I have got installed the StreamInsight 1.2 however when I am running the project running into the issue such as
Error 1 The type or namespace name ‘Adapters’ does not exist in the namespace ‘StreamInsight.Samples’ (are you missing an assembly reference?) C:\Users\Downloads\Package_StreamInsightSolution_v1\Package_StreamInsightSolution\StreamInsightQuery\Program.cs 8 29 StreamInsightQuery
May be the package is missing one of the class?
Hi Richard, As you mentioned, the latest revision must be missing Adapters class which is causing the compilation errors.. Would it be possible to share? I am excited to see your working solution. Thanks!
Might be missing following dll – > StreamInsight.Samples.Adapters.OutputTracer.dll
Hey Ken, the latest Codeplex download for StreamInsight samples has the adapters in there. It may be in a different folder, and thus you’ll have to update my project to point to it, but it should be there.
Richard, I am sorry to say but thelatest codeplex sample does not work either.. May be the adapater file is missing in the upload.. Would you try and see if it really works?
Richard, I got it working.. Thanks!
Cannot get the example on codeplex to work either – any insight on where to find the StreamInsight.Samples.Adapters.OutputTracer.dll
Hi Al, do you not see it in this download ()? I see it in the Adapters folder … | https://seroter.wordpress.com/2011/04/18/sending-streaminsight-events-to-a-windows-form-dashboard-code-included/ | CC-MAIN-2019-39 | refinedweb | 1,288 | 66.74 |
IRC log of grddl-wg on 2007-05-16
Timestamps are in UTC.
14:52:06 [RRSAgent]
RRSAgent has joined #grddl-wg
14:52:06 [RRSAgent]
logging to
15:00:26 [Zakim]
SW_GRDDL()11:00AM has now started
15:00:34 [Zakim]
+john-l
15:02:22 [Zakim]
+DanC
15:03:48 [danja_]
danja_ has joined #grddl-wg
15:05:03 [DanC]
Zakim, read agenda from
15:05:03 [Zakim]
working on it, DanC
15:05:04 [Zakim]
agenda+ Convene GRDDL WG meeting of 2007-05-16T11:00-0400
15:05:06 [Zakim]
agendum 13 added
15:05:07 [Zakim]
agenda+ GRDDL Spec: Edits done
15:05:08 [Zakim]
agendum 14 added
15:05:09 [Zakim]
agenda+ Test Cases
15:05:10 [Zakim]
agendum 15 added
15:05:11 [Zakim]
agenda+ Comments on GRDDL Spec since CR Request
15:05:13 [Zakim]
agendum 16 added
15:05:14 [Zakim]
agenda+ Primer - More Work, Note, or LC?
15:05:15 [Zakim]
agendum 17 added
15:05:16 [Zakim]
agenda+ WWW2007 Tutorial, W3C Track, etc.
15:05:18 [Zakim]
agendum 18 added
15:05:20 [Zakim]
agenda+ Advocating
15:05:22 [Zakim]
agendum 19 added
15:05:24 [Zakim]
agenda+ Relationship with RDFa
15:05:28 [Zakim]
agendum 20 added
15:05:30 [Zakim]
agenda+ Test Cases for Rejection
15:05:32 [Zakim]
agendum 21 added
15:05:36 [Zakim]
done reading agenda, DanC
15:05:41 [DanC]
Zakim, take up item 1
15:05:41 [Zakim]
agendum 1. "Convene GRDDL WG meeting of 2007-05-02T11:00-0400" taken up
15:05:53 [DanC]
Zakim, clear agenda
15:05:53 [Zakim]
agenda cleared
15:05:55 [DanC]
Zakim, read agenda from
15:05:55 [Zakim]
working on it, DanC
15:05:56 [Zakim]
agenda+ Convene GRDDL WG meeting of 2007-05-16T11:00-0400
15:05:58 [Zakim]
agendum 1 added
15:05:59 [Zakim]
agenda+ GRDDL Spec: Edits done
15:06:02 [Zakim]
agendum 2 added
15:06:03 [Zakim]
agenda+ Test Cases
15:06:04 [Zakim]
agendum 3 added
15:06:05 [Zakim]
agenda+ Comments on GRDDL Spec since CR Request
15:06:06 [Zakim]
agendum 4 added
15:06:07 [Zakim]
agenda+ Primer - More Work, Note, or LC?
15:06:09 [Zakim]
agendum 5 added
15:06:16 [Zakim]
agenda+ WWW2007 Tutorial, W3C Track, etc.
15:06:20 [Zakim]
agendum 6 added
15:06:22 [Zakim]
agenda+ Advocating
15:06:26 [Zakim]
agendum 7 added
15:06:28 [Zakim]
agenda+ Relationship with RDFa
15:06:30 [Zakim]
agendum 8 added
15:06:32 [Zakim]
agenda+ Test Cases for Rejection
15:06:36 [Zakim]
agendum 9 added
15:06:39 [Zakim]
done reading agenda, DanC
15:06:47 [DanC]
Zakim, take up item 1
15:06:47 [Zakim]
agendum 1. "Convene GRDDL WG meeting of 2007-05-16T11:00-0400" taken up
15:07:11 [Zakim]
+ +0127368aaaa
15:08:14 [HarryH]
HarryH has joined #grddl-wg
15:08:36 [john-l]
Regrets+ BrianS
15:08:51 [john-l]
Regrets+ IanD
15:09:01 [Zakim]
+HarryH
15:09:05 [john-l]
Regrets+ chimezie
15:09:09 [HarryH]
Zakim, who's on the phone?
15:09:09 [Zakim]
On the phone I see +0127368aaaa, HarryH, john-l, DanC (muted)
15:09:16 [DanC]
Zakim, aaaa is JeremyC
15:09:16 [Zakim]
+JeremyC; got it
15:09:37 [john-l]
Regrets+ FabienG
15:10:00 [DanC]
I 2nd the motion (from the agenda) to approve minutes 2 May
15:10:25 [john-l]
Regrets+ Simone
15:10:27 [Zakim]
-JeremyC
15:10:39 [HarryH]
RESOLVED: to approve GRDDL WG-- 02 May 2007 as a true record
15:11:36 [john-l]
Scribe: john-l
15:11:37 [Zakim]
+JeremyC
15:12:37 [DanC]
I'm available to meet next week, 23 May... er... I'm getting on a plane in the evening...
15:13:38 [DanC]
(I see an agenda request to discuss "base test cases")
15:14:05 [HarryH]
to meet again Wed, 23rd May 11:00-0500.
15:14:10 [DanC]
agenda + base test cases
15:14:22 [HarryH]
RESOLVED: to meet again Wed, 23rd May 11:00-0500.
15:14:28 [HarryH]
Zakim, next item
15:14:28 [Zakim]
agendum 2. "GRDDL Spec: Edits done" taken up
15:14:36 [DanC]
(hmm... I think it's -0400)
15:14:39 [HarryH]
Zakim, next item
15:14:39 [Zakim]
agendum 2 was just opened, HarryH
15:14:48 [HarryH]
RESOLVED: to meet again Wed, 23rd May 11:00-0400
15:14:56 [HarryH]
Zakim, open item 3
15:14:56 [Zakim]
agendum 3. "Test Cases" taken up
15:15:02 [HarryH]
Zakim, open item 2
15:15:02 [Zakim]
agendum 2. "GRDDL Spec: Edits done" taken up
15:15:08 [HarryH]
Zakim, open item 3
15:15:08 [Zakim]
agendum 3. "Test Cases" taken up
15:15:29 [HarryH]
Zakim, open item 2
15:15:29 [Zakim]
agendum 2. "GRDDL Spec: Edits done" taken up
15:15:58 [Zakim]
+??P17
15:16:14 [john-l]
Zakim, P17 is danja
15:16:14 [Zakim]
sorry, john-l, I do not recognize a party named 'P17'
15:16:19 [john-l]
Zakim, P17 is ??danja
15:16:19 [Zakim]
sorry, john-l, I do not recognize a party named 'P17'
15:16:20 [DanC]
(looking at
)
15:16:25 [john-l]
Zakim, ??P17 is danja
15:16:25 [Zakim]
+danja; got it
15:17:57 [danja_]
john-l, next week fine for me
15:19:25 [jjc]
jjc has joined #grddl-wg
15:20:00 [HarryH]
ACTION: for jjc to look at namespace document closely.
15:20:12 [HarryH]
DanC gets victory on actions otherwise.
15:20:17 [HarryH]
Zakim, next item
15:20:17 [Zakim]
agendum 3. "Test Cases" taken up
15:20:41 [DanC]
ACTION: chime to update dom in ack [DONE]
15:20:46 [DanC]
(that was done a week or two ago)
15:21:41 [DanC]
I think I updated
recently...
15:21:51 [danja_]
zakim, mute me
15:21:51 [Zakim]
sorry, danja_, I do not know which phone connection belongs to you
15:21:55 [DanC]
yes... during tutorial prep... 1.13 $ of $Date: 2007/05/06 00:07:08
15:21:59 [HarryH]
15:22:12 [danja_]
zakim, who's on the phone
15:22:12 [Zakim]
I don't understand 'who's on the phone', danja_
15:22:25 [john-l]
Zakim, who's on the phone?
15:22:25 [Zakim]
On the phone I see HarryH, JeremyC, danja, john-l, DanC
15:22:44 [jjc]
Zakim, dandja is danja_
15:22:44 [Zakim]
sorry, jjc, I do not recognize a party named 'dandja'
15:22:50 [jjc]
Zakim, danja is danja_
15:22:50 [Zakim]
+danja_; got it
15:22:53 [danja_]
thanks
15:23:14 [danja_]
zakim, mute me
15:23:14 [Zakim]
danja_ should now be muted
15:23:20 [DanC]
(anybody seen raptor results using the modern EARL namespace etc.?)
15:24:52 [HarryH]
Test-Cases that 2 implementations are not interoperable with:
15:24:52 [HarryH]
htmlbase3, htmlbase4
15:24:52 [HarryH]
noxinclude
15:24:53 [HarryH]
xmlbase1, xmlbase2, xmlbase3, xmlbase4
15:25:58 [john-l]
jjc: The subgroup formed to approve certain tests decided that GRDDL results are "encapsulated" (according to RFC 3986) within the source document at the root element of that document.
15:26:38 [john-l]
jjc: I'm not sure if that led to a reapproval or a removal of an *HTML* base test.
15:26:40 [jjc]
15:26:55 [john-l]
jjc: The case in question is #htmlbase1
15:27:10 [jjc]
15:27:42 [john-l]
jjc: The key question is "what is the correct subject of that statement?"
15:27:53 [jjc]
<base href="
"/>
15:28:09 [john-l]
jjc: (From within the source document)
15:28:50 [john-l]
jjc: We need to find the base URI of the GRDDL results...
15:29:27 [john-l]
... and for this we turn to RFC 3986.
15:30:20 [john-l]
DanC: The GRDDL spec is *not* "normatively silent" on the base URI for GRDDL results.
15:30:28 [john-l]
jjc: So what does the spec say?
15:30:57 [DanC]
"Applying this transformation to po-doc.xml yields RDF/XML; we parse this to an RDF graph (using the URI of the source document,,
as the base URI) "
15:31:21 [danja_]
zakim, unmute me
15:31:21 [Zakim]
danja_ should no longer be muted
15:32:19 [jjc]
The base IRI for interpretting relative IRI references in a serialization of a graph produced by a GRDDL transformation is the IRI of the source document.
15:32:38 [DanC]
(it should be in
)
15:33:17 [jjc]
RESOLVED: Given that a base URI parameter is a parameter whose value is the base URI of the source document, the WG RESOLVES not to define a base URI parameter for transforms.
15:34:15 [DanC]
^that's a quote, not a resolution of this meeting
15:34:17 [john-l]
Seems like this pretty much invalidates the base URI test cases that the subgroup approved.
15:34:39 [HarryH]
Groups can always retract approval and re-approve the fixed test-cases.
15:34:45 [DanC]
the text jjc quotes ("The bse IRI...") is from 1.233 (connolly 01-Mar-07)
15:35:52 [jjc]
5.1. Establishing a Base URI . . . . . . . . . . . . . . . . 28
15:35:52 [jjc]
5.1.1. Base URI Embedded in Content . . . . . . . . . . 29
15:35:52 [jjc]
5.1.2. Base URI from the Encapsulating Entity . . . . . 29
15:35:52 [jjc]
5.1.3. Base URI from the Retrieval URI . . . . . . . . 30
15:35:52 [jjc]
5.1.4. Default Base URI . . . . . . . . . . . . . . . . 30
15:36:05 [danja_]
zakim, mute me
15:36:05 [Zakim]
danja_ should now be muted
15:38:26 [danja_]
zakim, unmute me
15:38:26 [Zakim]
danja_ should no longer be muted
15:39:22 [john-l]
GRDDL.py is another implementation that is closely tracking the spec.
15:40:02 [jjc]
5.1.1. Base URI Embedded in Content
15:40:03 [jjc]
Within certain media types, a base URI for relative references can be
15:40:03 [jjc]
embedded within the content itself so that it can be readily obtained
15:40:03 [jjc]
by a parser. This can be useful for descriptive documents, such as
15:40:03 [jjc]
tables of contents, which may be transmitted to others through
15:40:03 [jjc]
protocols other than their usual retrieval context (e.g., email or
15:40:05 [jjc]
USENET news).
15:40:08 [jjc]
It is beyond the scope of this specification to specify how, for each
15:40:09 [jjc]
media type, a base URI can be embedded. The appropriate syntax, when
15:40:11 [jjc]
available, is described by the data format specification associated
15:40:13 [jjc]
with each media type.
15:40:29 [jjc]
15:40:42 [jjc]
<base href="
"/>
15:41:52 [jjc]
15:42:20 [DanC]
(the
rule assumes you have an RDF/XML document, and those have base-URI built-in)
15:43:47 [john-l]
Do we want xml:base and html:base declarations to affect GRDDL results or not?
15:43:58 [john-l]
... that is, declarations within the source document?
15:44:22 [danja_]
zakim, mute me
15:44:22 [Zakim]
danja_ should now be muted
15:44:34 [danja_]
(dogs!)
15:45:56 [john-l]
jjc: I think that tests currently resolve relative references using base URI information on the root element of the source document.
15:47:30 [HarryH]
ACTION: john-l to implement htmlbase1.
15:48:04 [danja_]
zakim, unmute me
15:48:04 [Zakim]
danja_ should no longer be muted
15:50:22 .
15:50:42 [john-l]
I agree at least with (b).
15:51:36 [HarryH]
ACTION: jjc to write text about base-uri, which may become normative.
15:51:47 [HarryH]
I agree with (a) and (b).
15:53:05 [HarryH]
ACTION: john-l to turn "fails" into passes on GRDDL.py and rebuild test-results, bonus points for Raptor.
15:54:20 [jjc]
On IRI text I think (c) it should mention *base* IRI of root element
15:54:43 [DanC]
ACTION: chimezie to start an index of tests by normative assertion (proxy for feature) [CONTINUES]
15:55:32 [jjc]
(john note there is some SPARQL code that turns fails into not aplicables where appropriate)
15:55:47 [john-l]
Ah, good to know, thanks. In CVS?
15:55:59 [DanC]
(push to agenda review... checking that actions from last time were carried forward
)
15:56:35 [HarryH]
ACTION: [CONTINUES] harry to start an index of tests by issue (not urgent; due 30 May)
15:56:39 [HarryH]
Zakim, next item
15:56:39 [Zakim]
agendum 4. "Comments on GRDDL Spec since CR Request" taken up
15:57:02 [HarryH]
15:57:07 [HarryH]
David Booth had minor editorial comment.
15:58:05 [HarryH]
16:06:32 [john-l]
There was a fair chunk of discussion surrounding the old resolution of the faithful infoset issue.
16:07:38 [john-l]
In this discussion, several members of the group informally agreed that mapping from XML serialization to an XPath root note is a problem larger than the scope of the GRDDL charter.
16:08:19 [john-l]
Some of this discussion indicated that D. Booth might want to take his comment to the XProc working group.
16:08:39 [HarryH]
ACTION: jjc to ask David Booth to reply to current messages on comment list and possibly, if needed, clarify what he wants if he isn't happy.
16:09:10 [HarryH]
16:09:34 [jjc]
Jeremy notes that David has been holding off because he has been discussing off-list with jeremy
16:09:35 [HarryH]
"The failure of a transformation could be interpreted by a processor as
16:09:35 [HarryH]
producing no RDF data (fail silently) or the error could be passed back to
16:09:35 [HarryH]
the invoker.
16:09:35 [HarryH]
On producing non-RDF elements, let me extend your analogy and say that a
16:09:35 [HarryH]
processor returns both 11 and "volunteer". The extraneous value could be
16:09:36 [HarryH]
silently ignored, or the entire answer could be considered meaningless and
16:09:38 [HarryH]
the invoker informed of the error.
16:09:40 [HarryH]
We are suggesting that both cases be acknowledged in the specification,
16:09:42 [HarryH]
and that you specify the behavior allowed by a GRDDL processor."
16:10:19 [john-l]
DanC: He didn't give any reason why we should address his issues.
16:10:55 [jjc]
The extraneous value could be
16:10:55 [jjc]
silently ignored, or the entire answer could be considered meaningless and
16:10:55 [jjc]
the invoker informed of the error.
16:13:31 [john-l]
DanC: What about having a test case addressing the issue?
16:13:46 [john-l]
jjc: We might need additional test machinery.
16:16:30 [john-l]
DanC: These could be informative tests.
16:16:51 [john-l]
jjc: Introduce them with text like "The GRDDL spec doesn't specify what happens in these cases, but..."
16:17:31 [jjc]
"some grddl implementations behave as follows"
16:22:14 [danja_]
zakim, mute me
16:22:16 [Zakim]
danja_ should now be muted
16:22:43 [john-l]
jjc: Some implementations might get partial results.
16:23:07 [john-l]
DanC: But if only some triples get added to the GRDDL result, that would be a mistake.
16:24:25 [danja_]
zakim, unmute me
16:24:25 [Zakim]
danja_ should no longer be muted
16:25:45 [john-l]
HarryH: So a normative test case would solve this problem.
16:26:18 [HarryH]
Correct? jjc to write an normative test case showing that if the transform is broken, you should not get a partial result, you should get no results.
16:26:27 [DanC]
yup
16:26:44 [HarryH]
ACTION: jjc to write an normative test case showing that if the transform is broken, you should not get a partial result, you should get no results.
16:27:36 [danja_]
zakim, mute me
16:27:36 [Zakim]
danja_ should now be muted
16:27:50 [HarryH]
Will talk about 3 remaining comments later.
16:28:05 [HarryH]
Zakim, open item 6
16:28:05 [Zakim]
agendum 6. "WWW2007 Tutorial, W3C Track, etc" taken up
16:28:21 [john-l]
DanC: Room was packed for the tutorial...
16:29:04 [john-l]
... 60 people, good timing, audience followed along.
16:29:07 [HarryH](1
)
16:30:03 [john-l]
HarryH: People liked the spreadsheet example in Fabien's presentation.
16:30:16 [DanC]
(the URIs I've seen from Fabien seem to have /tmp/ in them. anybody have a cool URI?)
16:30:46 [john-l]
HarryH: Companies mentioning the desire to integrate their data, which is often largely stored in spreadsheets.
16:31:00 [HarryH]
W3C Track:
16:31:59 [john-l]
HarryH: Spreadsheets only came up when raised during the Q&A section.
16:32:21 [danja_]
zakim, unmute me
16:32:21 [Zakim]
danja_ should no longer be muted
16:34:21 [HarryH]
ACTION: HarryH to look in the spreadsheet into RDF in primer.
16:34:32 [HarryH]
Anyone want to write Excel XML to RDF tranform?
16:34:39 [HarryH]
Anyone want to write Excel XML to RDF transform?
16:36:08 [HarryH]
Calconnect =
16:36:15 [HarryH]
Add Excel to Primer
16:38:08 [HarryH]
Lee Feigenbaum did Spreadsheets to RDF.
16:38:14 [HarryH]
in Dev track.
16:38:35 [jjc]
16:38:49 [DanC]
ah yes...
... they've been in contact via www-rdf-calendar several times. Matt May attended one of their events in person. I think I blogged about them once.
16:38:59 [jjc]
lists about four docs which appear tp be iCalendar
16:39:53 [HarryH]
ACTION: Danja to pick LeeF over Spreadsheet to RDF examples, does he have XSLT?
16:40:29 [HarryH]
Meeting Adjourned.
16:40:45 [john-l]
Zakim, list participants
16:40:45 [Zakim]
As of this point the attendees have been john-l, DanC, +0127368aaaa, HarryH, JeremyC, danja_
16:40:45 [danja_]
16:42:48 [john-l]
RRSAgent, please make these logs world-readable
16:42:56 [john-l]
RRSAgent, please create the minutes
16:42:56 [RRSAgent]
I have made the request to generate
john-l | http://www.w3.org/2007/05/16-grddl-wg-irc | CC-MAIN-2014-10 | refinedweb | 3,091 | 67.59 |
pipetools 0.2.7
A library that enables function composition similar to using Unix pipes.
Complete documentation in full color.
Pipetools
pipetools is a python package that enables function composition similar to using Unix pipes.
Inspired by Pipe and Околомонадное (whatever that means…)
It allows piping of arbitrary functions and comes with a few handy shortcuts.
Why?
Pipetools attempt to simplify function composition and make it more readable.
Why piping instead of regular composition?
I believe it to be easier to read, write and think about from left to right / top to bottom in the order that it’s actually executed, instead of reversed order as it is with regular function composition ((f • g)(x) == f(g(x))).
Example
Say you want to create a list of python files in a given directory, ordered by filename length, as a string, each file on one line and also with line numbers:
>>> print pyfiles_by_length('../pipetools') 0. main.py 1. utils.py 2. __init__.py 3. ds_builder.py
So you might write it like this:
def pyfiles_by_length(directory): all_files = os.listdir(directory) py_files = [f for f in all_files if f.endswith('.py')] py_files.sort(key=len) numbered = enumerate(py_files) rows = ("{0}. {1}".format(i, f) for i, f in numbered) return '\n'.join(rows)
Or perhaps like this:
def pyfiles_by_length(directory): return '\n'.join('{0}. {1}'.format(*x) for x in enumerate(sorted( [f for f in os.listdir(directory) if f.endswith('.py')], key=len)))
Or, if you’re a mad scientist, you would probably do it like this:
pyfiles_by_length = lambda d: (reduce('{0}\n{1}'.format, map(lambda x: '%d. %s' % x, enumerate(sorted( filter(lambda f: f.endswith('.py'), os.listdir(d)), key=len)))))
But there should be one – and preferably only one – obvious way to do it.
So which one is it? Well, to redeem the situation, pipetools give you yet another possibility!
pyfiles_by_length = (pipe | os.listdir | where(X.endswith('.py')) | sort_by(len) | enumerate | foreach("{0}. {1}") | '\n'.join )
So is this The Right Way™? Probably not, but I think it’s pretty cool, so you should give it a try! Read on to see how it works.
Installation
$ pip install pipetools
Usage
The pipe
The pipe object can be used to pipe functions together to form new functions, and it works like this:
from pipetools import pipe f = pipe | a | b | c f(x) == c(b(a(x)))
A real example, sum of odd numbers from 0 to x:
from functools import partial from itertools import ifilter from pipetools import pipe odd_sum = pipe | xrange | partial(ifilter, lambda x: x % 2) | sum odd_sum(10) # -> 25
Note that the chain up to the sum is lazy.
Automatic partial application in the pipe
As partial application is often useful when piping things together, it is done automatically when the pipe encounters a tuple, so this produces the same result as the previous example:
odd_sum = pipe | xrange | (ifilter, lambda x: x % 2) | sum
As of 0.1.9, this is even more powerful, see X-partial.
Built-in tools
Pipetools contain a set of pipe-utils that solve some common tasks. For example there is a shortcut for the ifilter from our example, called where():
from pipetools import pipe, where odd_sum = pipe | xrange | where(lambda x: x % 2) | sum
Well that might be a bit more readable, but not really a huge improvement, but wait!
If a pipe-util is used as first or second item in the pipe (which happens quite often) the pipe at the beginning can be omitted:
odd_sum = xrange | where(lambda x: x % 2) | sum
See pipe-utils’ documentation.
OK, but what about the ugly lambda?
where(), but also foreach(), sort_by() and other pipe-utils can be quite useful, but require a function as an argument, which can either be a named function – which is OK if it does something complicated – but often it’s something simple, so it’s appropriate to use a lambda. Except Python’s lambdas are quite verbose for simple tasks and the code gets cluttered…
X object to the rescue!
from pipetools import where, X odd_sum = xrange | where(X % 2) | sum
How ‘bout that.
Read more about the X object and it’s limitations.
Automatic string formatting
Since it doesn’t make sense to compose functions with strings, when a pipe (or a pipe-util) encounters a string, it attempts to use it for (advanced) formatting:
>>> countdown = pipe | (xrange, 1) | reversed | foreach('{0}...') | ' '.join | '{0} boom' >>> countdown(5) u'4... 3... 2... 1... boom'
Feeding the pipe
Sometimes it’s useful to create a one-off pipe and immediately run some input through it. And since this is somewhat awkward (and not very readable, especially when the pipe spans multiple lines):
result = (pipe | foo | bar | boo)(some_input)
It can also be done using the > operator:
result = some_input > pipe | foo | bar | boo
Which also isn’t ideal, but I couldn’t think of anything better so far…
But wait, there is more
See the full documentation.
- Downloads (All Versions):
- 56 downloads in the last day
- 319 downloads in the last week
- 1021 downloads in the last month
- Author: Petr Pokorny
- License: MIT
- Categories
- Package Index Owner: 0101
- DOAP record: pipetools-0.2.7.xml | https://pypi.python.org/pypi/pipetools | CC-MAIN-2015-32 | refinedweb | 870 | 61.16 |
* Karl Goetz <karl@kgoetz.id.au> [2009-06-10 03:44-0400]: > On Tue, 2 Jun 2009 00:14:45 -0400 > Micah Anderson <micah@riseup.net> wrote: > > Thanks for your response, sorry about my delay getting back to you. > > > * Karl Goetz <karl@kgoetz.id.au> [2009-06-01 23:31-0400]: > > > The suggestion in #vserver was "if you manage to get a host path on > > > a recent (non broken, i.e. non-debian :) kernel and util-vserver, > > > then it is considered a bug and will be fixed ASAP ... because that > > > basically means that the namespace isolation is not working > > > properly" > > > > > > Is this a valid bug? Is there some debianisms involved that could > > > cause the issues, or is it just another upstream who doesnt like > > > "unoffical" packages? :) > > > > Only one way to find out, build a vanilla upstream, with the patch and > > find out. > > > > However, I cannot reproduce what you have seen, using the same > > kernel. > > Odd. I've just done it again, using the same two vhosts. > (sorry about the wrapping) > > sidvs:~/debomatic/Debomatic# > > wesnoth:~# > mv /home/vservers/sidvs/root/debomatic /home/vservers/autobuilders/root/ > > sidvs:~/debomatic/Debomatic# cd .. > sidvs:/home/vservers/sidvs/vservers/autobuilders/root/debomatic# > > wesnoth is the host. Sounds like you have something funny going on in your guest's fstab, either a bind mount or similar... What does your /etc/vservers/sidvs/fstab have in it? micah
Attachment:
signature.asc
Description: Digital signature | https://lists.debian.org/debian-security/2009/06/msg00048.html | CC-MAIN-2016-50 | refinedweb | 236 | 60.11 |
Creating a Splash Screen
- PDF for offline use:
-
- Sample Code:
-
- Related Articles:
-
- Related Samples:
-
Cold starting an application on an Android device usually takes a couple of seconds. During this time, users need feedback to let them know your application is loading. Here is one way to create a splash screen for your application.
Add Splash Screen Image
First you need a splash screen image. Because Android devices come in various resolutions, you may want to ship several splash screens as described in Google's Best Practices for Supporting Multiple Screens. For simplicity, we'll just ship one here that is 480x800. It should support most phone sizes pretty well, and Android will scale it as best it can.
Add the following image into your
Resources\Drawable folder:
Create a Theme with Your Splash Screen
Android lets us assign themes to an activity. We do it this way because
Android is able to show this immediately, before we even start loading.
Obviously, we want the splash screen to show up as soon after the user launches
our application as possible. The first thing we need to do is to add a new XML file named
/Resources/Values/Styles.xml as shown in the following screenshot from Xamarin Studio:
When this file is added to the project, the build action should automatically be set to AndroidResource as shown in the following screenshot:
Next, edit the contents of
/Resources/Values/Styles.xml and paste in this code:
<resources> <style name="Theme.Splash" parent="android:Theme"> <item name="android:windowBackground">@drawable/splash</item> <item name="android:windowNoTitle">true</item> </style> </resources>
This creates a new theme called
Theme.Splash that sets the window background
to our splash image and turns off the window title bar. (You can remove item
with name="android:windowNoTitle" if you want to keep the title bar.)
Create a Splash Activity
Now we need a new Activity for Android to launch that has our splash image. Add a new activity to your project called SplashActivity, and use the following code:
namespace SplashScreen { using System.Threading; using Android.App; using Android.OS; [Activity(Theme = "@style/Theme.Splash", MainLauncher = true, NoHistory = true)] public class SplashActivity : Activity { protected override void OnCreate(Bundle bundle) { base.OnCreate(bundle); Thread.Sleep(10000); // Simulate a long loading process on app startup. StartActivity(typeof(Activity1)); } } }
In the
[Activity] attribute, we specify:
- MainLauncher - this is the activity that should be launched when the user clicks our icon
- Theme - tell Android to use our theme we made for this activity
- NoHistory - tell Android not to put the activity in the 'back stack', that is, when the user hits the back button from the real application, don't show this activity again
In OnCreate, we simply call StartActivity, passing it the first activity of our application. If you wanted to ensure the splash screen stays up for a certain amount of time, you could add a sleep for a few seconds before calling StartActivity.
Finishing Up
The only thing left we need to do is remove
MainLauncher = true from the
[Activity] attribute on our actual activity. This controls if this activity has
an icon in the application launcher. We want people to always go through our
splash screen activity, so simply remove it from your regular activity.
Now you should be able to compile and run your application, and get a nice splash screen while your application loads!
| http://developer.xamarin.com/guides/android/user_interface/creating_a_splash_screen/ | CC-MAIN-2015-32 | refinedweb | 568 | 53 |
A Beginner’s Guide for Contributing to Xamarin.Forms
David
Xamarin.Forms has been open source for over a year now. In that time, we’ve accepted over 700 pull requests and received many more. Have any of those been yours? If not, this is your invitation to participate! In this article, I’ll outline what kinds of contributions we are looking for and provide a guide to submitting your first bug fix.
What Contributions Will We Accept
In short, we’ll entertain anything that makes the Xamarin.Forms product better in terms of stability, quality, and capability. This may take the shape of:
- Bug fixes
- Feature implementations
- Tests
- Readme and Wiki articles or updates
Before you start opening Pull Requests on the GitHub project, there are some prerequisites.
Sign the .NET Foundation Release
At the time you submit a Pull Request, a .NET Foundation bot will check to make sure you’ve signed a Contribution License Agreement. If you haven’t, you’ll be prompted to do so. We cannot accept any contributions until this is completed.
Review the Coding Style Requirements
We adhere to the .NET Foundation style guide with a few exceptions:
- Don’t use
privateas that’s the default protection level.
- Use hard tabs instead of spaces
- Limit lines to a max of 120 characters
For guidance on implementing some of these in your Visual Studio installation, check our readme.
Let’s Focus on Bug Fixes
Have you found a bug and have a fix you want to contribute? Awesome! Before you go too far, do a quick search on Bugzilla to see if there are any reports of the same issue. When searching for Xamarin.Forms, choose the Advanced Search option and:
- Classification: Xamarin
- Product: Xamarin.Forms
- Component: None or All to search broadly
- Status: All
If there is a matching issue and it’s is marked In Progress, then someone is already working the issue. If a PR is referenced on the issue, then it’s likely pending merge on GitHub. When the issue is Resolved and Fixed, then a Pull Request has been merged to address the issue. Currently, to see if it has been released yet and in which version you’ll need to search our release notes. We have a plan to improve that in the coming weeks.
If there’s a bug in any other incomplete state, or no bug report at all, then you’re in luck and ready to proceed!
Xamarin.Forms Solution Orientation
As you start exploring the Xamarin.Forms solution, it may seem a by daunting. I’ll demystify that for you and show you where to focus your attention.
Control Gallery/
These projects comprise a gallery application including all of the Xamarin.Forms controls available, and more importantly host bug reproductions with UITests inline. When working on fixing a bug, or just to investigate how a control is expected to work, this bare bones but functional app is where you want to look.
Pages Gallery/
As the name implies, this is another gallery application, but this time for the DataPages implementation.
Platforms/
These projects contain the platform specific implementations of services and most importantly UI controls. When Xamarin.Forms renders a Label on iOS it runs the Xamarin.Forms.Platform.iOS/Renderers/LabelRenderer.cs. On Android you’ll get Xamarin.Forms.Platform.Android/Renderers/LabelRenderer.cs, or if you’re using FastRenderers Xamarin.Forms.Platform.Android/FastRenderers/LabelRenderer.cs. When adding controls or fixing controls related bugs, this is where you’ll effect those changes.
Xamarin.Forms/
When looking for core implementations of abstract controls, layouts, bindings, triggers, App Links, and other non-platform specific code, search these projects.
Xamarin.Forms.Maps/ and Xamarin.Forms.Xaml/
These folders are self-explanatory. You probably won’t spend too much time in those projects unless you happen to really understand those domains.
Fixing a Bug
- Clone the Xamarin.Forms code from GitHub master branch or pull to make sure you have the latest.
- Create a new branch to host your changes
- Open the Xamarin.Forms solution and navigate to the Control Gallery > Xamarin.Forms.Controls.Issues > Xamarin.Forms.Controls.Issues.Shared
- Use the _Template.cs to start a new case following the established naming convention of “Bugzilla######.cs” where ###### is the issue id in Bugzilla.
using Xamarin.Forms.CustomAttributes; using Xamarin.Forms.Internals; #if UITEST using Xamarin.UITest; using NUnit.Framework; #endif namespace Xamarin.Forms.Controls.Issues { [Preserve(AllMembers = true)] [Issue(IssueTracker.Bugzilla, 1, "Issue Description", PlatformAffected.Default)] public class Bugzilla1 : TestContentPage // or TestMasterDetailPage, etc ... { protected override void Init() { // Initialize ui here instead of ctor Content = new Label { AutomationId = "IssuePageLabel", Text = "See if I'm here" }; } #if UITEST [Test] public void Issue1Test () { RunningApp.Screenshot ("I am at Issue 1"); RunningApp.WaitForElement (q => q.Marked ("IssuePageLabel")); RunningApp.Screenshot ("I see the Label"); } #endif } }
Implement your reproduction case here. If it’s a more complex case, refer to other issues in this project to find something similar and follow that pattern. In the end you should have a reproduction that demonstrates the problem you are working to solve.
- Select a Control Gallery target project to run on simulator or device.
You should see your repro happening. If not, then it would appear the issue is fixed already.
- Implement your fix and retest the repro.
- Implement a UITest in the reproduction file. If you’re not a UITest pro, again reference some of the other fixes in the source. Once you submit the Pull Request, the tests will be run automatically.
You’re now ready to create a Pull Request. From within your IDE or Git tool of choice, create a Pull Request against the Xamarin.Forms remote on GitHub. This process should take you to the GitHub Pull Request page and populate with the Xamarin.Forms Pull Request template. We ask that you fill out everything you can, and omit anything that doesn’t apply.
Description
What is the current behavior and what is the expected behavior.
Bugs
List any and all Bugzilla reports that this applies to
API Changes
If the surface area of any class changed when making this fix, note the changes here. These may indicate breaking or behavioral changes for other users and their legacy applications.
Checklist
Indicate if you have included tests. If no tests are required or the issue is too difficult to test with a UITest, note that.
And that’s it. Submit the Pull Request and we’ll review it. If there are questions or concerns, the team will submit comments and make requests on code. This review process may feel clinical, so don’t take it personally. Speaking for myself, it’s been a great learning experience to work through the Pull Request process on open source projects, and in nearly every case my contributions have eventually been accepted.
It is also extremely helpful to us and other users if you copy the url to the Pull Request and note it on the referenced Bugzilla issues.
Adding Features
Before you start working on a feature, check out the public Xamarin.Forms Roadmap and do a search of the Evolution forum where we discuss specifications for possible new features and other changes. If you don’t see the feature already covered, open a proposal on the Evolution forum and offer to implement it. The Xamarin.Forms engineering team will review the proposal and provide feedback.
What should you do if you are comfortable implement on one platform, but don’t know enough to implement the rest of the platforms? Go ahead and open the proposal and invite others to participate.
Tests and Wiki
As mentioned, we welcome contributions in these areas as well. We have a several projects for UITests as well as Unit Tests. We have fairly good coverage, but it could always be better. It’s important to test the right things, so if you have any questions please ask.
Our documentation team continues to do amazing work on our developer guides and the API documentation. If you spot any inaccuracies or have suggestions, use the “I have a problem” button in the sidebar and send us details.
If you have a wiki contribution that doesn’t quite fit into those documentation categories, let me know and we’ll consider building up the wiki.
Happy Contributing!
I hope you’ll consider contributing to Xamarin.Forms. There’s nothing quite like the satisfaction of having your pull request merged and knowing you’ve just helped out a huge community of amazing developers.
For more information and to get started:
- open.xamarin.com for more guidance on contributing to Xamarin open source
- github.com/xamarin/Xamarin.Forms
- Evolution forum proposals
- Bugzilla
New roadmap page can be found here: Xamarin Forms Roadmap | https://devblogs.microsoft.com/xamarin/beginners-guide-contributing-xamarin-forms/ | CC-MAIN-2020-29 | refinedweb | 1,460 | 58.58 |
Hello,
Do you know how I could create a grid like this one on Pluto? In this grid, I would like to put numbers in it (not in all the boxes) and draw curves over it.
Have a nice day
Hello,
Do you know how I could create a grid like this one on Pluto? In this grid, I would like to put numbers in it (not in all the boxes) and draw curves over it.
Have a nice day
What does “in Pluto” mean here? Could you not just use a normal plot which has gridlines?
julia> using Plots julia> plot(xticks = 1:20, yticks = 1:20, xlim = (1, 20), ylim = (1,20), aspect_ratio = 1)
By that I mean “using a Pluto notebook”.
Thank you ! And is it possible to “draw” over it as well as include numbers in the boxes?
Sure, you can do
julia> annotate!([3], [3], text("3"))
to put the number “3” on the location (3,3) in the above plot
Another way:
using Plots; gr() # Define grid Nx, Ny = 12, 6 # number of grid cells x1, x2 = 0, 240 # xgrid limits y1, y2 = 0, 120 # ygrid limits x, y = LinRange(x1,x2,Nx+1), LinRange(y1,y2,Ny+1) dx, dy = (x2-x1)/Nx, (y2-y1)/Ny # Plot grid p = plot(legend=false,xlim=(x1,x2),ylim=(y1,y2),ratio=1,grid=false) xi, yi = x1 .+ zeros(Nx+1), y1 .+ zeros(Ny+1) [ plot!(p, fill(x[i],Ny+1), y, lc=:gray, lw=0.2) for i in 1:Nx+1 ]; [ plot!(p, x, fill(y[i],Nx+1), lc=:gray, lw=0.2) for i in 1:Ny+1 ]; # Annotate np points with coords (xr,yr) and draw connecting lines: np = 7 xr, yr = x[rand(1:Nx,np)] .+ dx/2, y[rand(1:Ny,np)] .+ dy/2 labels = string.(1:np) plot!(xr,yr, marker=2.) [annotate!(xr[i], yr[i] + (y2-y1)/20, text(labels[i],7)) for i in 1:np]; plot(p, title="Grid plot", axis=nothing, framestyle=:none)
Or showing the axes:
plot(p, title="Grid plot", xticks=x1:dx*2:x2, yticks=y1:dy*2:y2, framestyle=:box)
And with GMT
(could have used the automatic region detection but would risk to trim out some text)
# Number points with the record number xy = rand(7,2); plot(xy, region=(0., 1.1, 0, 1.1), frame=(axes=:WSen, annot=0.2, grid=0.2), marker=:point, lc=:green) text!(xy, offset=(shift=(0,0.5),), rec_number=true, name="um.png", show=true)
# Use any label we want Dxy = mat2ds(rand(7,2), ["A1","A2","A3","A4","A5","A6","A7"]); plot(Dxy, region=(0., 1.1, 0, 1.1), frame=(axes=:WSen, annot=0.2, grid=0.2), marker=:point, lc=:green) text!(Dxy, offset=(shift=(0,0.5),), name="dois.png", show=true)
Many thanks to you gentlemen!
Could I benefit from your experience?
I wonder if it is possible to draw a random curve that would close. Added to this curve would be another curve that would follow approximately the same path and would be located at a certain distance from the other curve. And also “black out” the squares that are not between the two curves.
@Stephen1, a simple way, but not ideal, is to use plot shapes (and then to plot grid defined in previous post over the shapes).
using Plots; gr() function Shape1(C,r) θ = LinRange(0,2π, 72) C[1] .+ r*(1 .+ 0.2*sin.(4θ)).*sin.(θ), C[2] .+ r*(1 .+ 0.2*sin.(4θ)).*cos.(θ) end C = [120,60]; r = 40; S1 = Shape1(C,r) S2 = (C[1] .+ (S1[1] .- C[1])*0.7, C[2] .+ (S1[2] .- C[2])*0.7) p = plot(legend=false,xlim=(x1,x2),ylim=(y1,y2),ratio=1,grid=false) plot!(S1, seriestype=[:shape],lw =0.5,c=:blue,lc=:black,legend=false,fa=0.2,ratio=1) plot!(S2, seriestype=[:shape],lw =0.5,c=:white,lc=:black,legend=false,fa=1,ratio=1)
or using
Luxor.jl
(can probably be simplified, I am a Luxor Newbie)
using Luxor function drawGrid(X,Y) for i=-X:50:X line(Point(i,-Y),Point(i,Y), :stroke) end for j=-Y:50:Y line(Point(-X,j),Point(X,j), :stroke) end end @svg begin X,Y=250,250 sethue("black") box(Point(0, 0), 2X, 2Y, :fill) innerpoints = [Point(rand(120:160)*sin(i), rand(130:160)*cos(i)) for i in 0:0.1:2π] outerpoints = [Point(rand(190:200)*sin(i), rand(190:200)*cos(i)) for i in 0:0.1:2π] donut = [innerpoints; innerpoints[1]; outerpoints[1]; reverse(outerpoints)] poly(donut, :clip) sethue("black") drawGrid(X,Y) sethue("white") box(Point(0, 0), 2X, 2Y, :fill) clipreset() setcolor(0.5, 0.5, 0.5, 0.5) drawGrid(X,Y) end | https://discourse.julialang.org/t/grid-creation-and-donut-like-display-and-color-filling/59100 | CC-MAIN-2022-21 | refinedweb | 814 | 75 |
pip 1.0.1
pip installs packages. Python packages. An easy_install replacement
The main website for pip is. =
News / Changelog
Next release (1.1) schedule
Beta release mid-July 2011, final release early August. #266 - current working directory when running setup.py clean.
1.0 (2011-04-04)
- Moved main repository to Github:
- Transferred primary maintenance from Ian to Jannis Leidel, Carl Meyer, Brian Rosner
- Fixed #204 - rmtree undefined in mercurial.py. Thanks Kelsey Hightower
- Fixed bug in Git vcs backend that would break during reinstallation.
- Fixed bug in Mercurial vcs backend related to pip freeze and branch/tag resolution.
- Fixed bug in version string parsing related to the suffix “-dev”.
0.8.2
Avoid.
0.8.1
- Added global –user flag as shortcut for –install-option=”–user”. From Ronny Pfannschmidt.
- Added support for PyPI mirrors as defined in PEP 381, from Jannis Leidel.
- Fixed #39 - –install-option=”–prefix=~/.local” ignored with -e. Thanks Ronny Pfannschmidt and Wil Tan.
0.8
- etc. scripts are created (Python-version specific scripts)
- contrib/build-standalone script creates a runnable .zip form of pip, from Jannis Leidel
- Editable git repos are updated when reinstalled
- Fix problem with --editable when multiple .egg-info/ directories are found.
- A number of VCS-related fixes for pip freeze, from Hugo Lopes Tavares.
- Significant test framework changes, from Hugo Lopes Tavares.
0.7.2
- Set zip_safe=False to avoid problems some people are encountering where pip is installed as a zip file.
0.7.1
- Fixed opening of logfile with no directory name. Thanks Alexandre Conrad.
- Temporary files are consistently cleaned up, especially after installing bundles, also from Alex Conrad.
- Tests now require at least ScriptTest 1.0.3.
0.7
- Fixed uninstallation on Windows
- Added pip search command.
- all over the place) and constantly overwrites the file in question. On Unix and Mac OS X this is '$HOME/.pip/pip.log' and on Windows it’s '%HOME%\\pip\\pip.log'. You are still able to override this location with the $PIP_LOG_FILE environment variable. For a complete (appended) logfile use the separate '--log' command line option.
- Fixed an issue with Git that left an editable packge URLs.
-.3
- Fixed import error on Windows with regard to the backwards compatibility package
0.6.2
- Fixed uninstall when /tmp is on a different filesystem.
- Fixed uninstallation of distributions with namespace packages.
0.6.1
- Added support for the https and http-static schemes to the Mercurial and ftp scheme to the Bazaar backend.
- Fixed uninstallation of scripts installed with easy_install.
- Fixed an issue in the package finder that could result in an infinite loop while looking for links.
- Fixed issue with pip bundle and local files (which weren’t being copied into the bundle), from Whit Morriss.
0.6
- Add pip uninstall and uninstall-before upgrade (from Carl Meyer).
- Extended configurability with config files and environment variables.
- Allow packages to be upgraded, e.g., pip install Package==0.1 then OS X Framework layout installs
- Fixed bug preventing uninstall of editables with source outside venv.
- Creates download cache directory if not existing.
0.5.1
- Fixed a couple little bugs, with git and with extensions.
0.5
- option to install ignore package dependencies
- Added --no-index option
- Make -e work zip all its arguments, not just the first.
- Fix some filename issues on Windows.
- Allow the -i and --extra-index-url options in requirements files.
- Fix the way bundle components are unpacked and moved around, to make bundles work.
- Adds -s option to allow the access to the global site-packages if a virtualenv is to be created.
- Fixed support for Subversion 1.6.
0.3.1
- Improved virtualenv restart and various path/cleanup problems on win32.
- Fixed a regression with installing from svn repositories (when not using -e).
- Fixes when installing editable packages that put their source in a subdirectory (like src/).
- Improve pip -h
0.3
- freeze not including -e svn+ when an svn structure is peculiar.
- Allow pip -E to work with a virtualenv that uses a different version of Python than the parent environment.
- Fixed Win32 virtualenv (-E) option.
- Search the links passed in with -f for packages.
- Detect zip files, even when the file doesn’t have a .zip extension and it is served with the wrong Content-Type.
- Installing editable from existing source now works, like pip install -e some/path/ will install the package in some/path/. Most importantly, anything that package requires will also be installed by pip.
- Add a --path option to pip un/zip, so you can avoid zipping files that are outside of where you expect.
- Add --simulate option to pip zip.
0.2.1
- Fixed small problem that prevented using pip.py without actually installing pip.
- Fixed --upgrade, which would download and appear to install upgraded packages, but actually just reinstall the existing package.
- Fixed Windows problem with putting the install record in the right place, and generating the pip script with Setuptools.
- Download links that include embedded spaces or other unsafe characters (those characters get %-encoded).
- Fixed use of URLs in requirement files, and problems with some blank lines.
- Turn some tar file errors into warnings.
0.2
- Renamed to pip, and to install you now do pip install PACKAGE
- Added command pip zip PACKAGE
- Added an option --install-option to which will cache package downloads, so future installations won’t require large downloads. Network access is still required, but just some downloads will be avoided when using this.
0.1.3
- Always use svn checkout (not export) so that tag_svn_revision settings give the revision of the package.
- Don’t update checkouts that came from .pybundle files.
0.1.2
- Improve error text when there are errors fetching HTML pages when seeking packages.
- Improve bundles: include empty directories, make them work with editable packages.
- If you use -E env and the environment env/ doesn’t exist, a new virtual environment will be created.
- Fix dependency_links for finding packages.
0.1.1
- Fixed a NameError exception when running pip outside of a virtualenv environment.
- Added HTTP proxy support (from Prabhu Ramachandran)
- Fixed use of hashlib.md5 on python2.5+ (also from Prabhu Ramachandran)
0.1
- Initial release
- Author: The pip developers
- Keywords: easy_install distutils setuptools egg virtualenv
- License: MIT
- Categories
- Development Status :: 5 - Production/Stable
- Intended Audience :: Developers
- License :: OSI Approved :: MIT License
- Programming Language :: Python :: 2
- Programming Language :: Python :: 2.4
- Programming Language :: Python :: 2.5
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3
- Programming Language :: Python :: 3.1
- Programming Language :: Python :: 3.2
- Topic :: Software Development :: Build Tools
- Package Index Owner: ianb, jezdez, carljm, brosner, dstufft, qwcode
- DOAP record: pip-1.0.1.xml | https://pypi.python.org/pypi/pip/1.0.1 | CC-MAIN-2016-36 | refinedweb | 1,113 | 60.21 |
💬 Using Arduino
This thread contains comments for the article "Using Arduino" posted on MySensors.org.
Hi, just to let you know MySensors libraries don't seem to be found in Arduino IDE build 1.6.12
@doug could you clarify what you mean?
Are you unable to install the MySensors library? if so, what did you do and what message did you get?
- connecties last edited by
I think that Doug means that if you go to Sketch -> Include Library -> Manage Libraries... and enter MySens into the search box there is NO MySensor library to be included.
Just found that out myself. Downloaded the 1.6.12 IDE and installed it on a "clean" mac OS X system.
I test it now and have the same Problem. The Arduino IDE shows no MySens or MySensors Library to add. And so i can't use this. Can anybody help?
- connecties last edited by
As a work around I downloaded the zip file from: and added it with Sketch -> Add Library -> Add .ZIP Library. select the downloaded zip file.
If you are done you can find the MySensors Library in the Sketch -> Add Library drop down list under Contributed Libraries.
It seems that mysensors library is missing in arduino library manager..
I have filed an issue with arduino on github, so hopefully we will get it solved
- mahesh2000 last edited by
Using Arduino IDE 1.6.2, no problem compiling MySensors/AltSoftwareSerial/Test/test.ino. Had to make two changes: add some #defines and include an extra library. I'm using a Nordic nRF24L01+ on a Chinese board. I had to add:
#define MY_RADIO_NRF24
#define MY_RF24_PA_LEVEL RF24_PA_LOW
#define MY_GATEWAY_SERIAL
before
#include <MySensors.h>
Also, had to add the AltSofwareSerial library, otherwise I received the error "AltSoftSerial.h: No such file or directory".
If I don't follow up with a new comment it means that this config worked. I'm on Windows 10, using a fakeduino Uno, hoping to connect to an nRF24LE1 over 250Kbps.
and MySensors libraries. A
- mahesh2000 last edited by
couldn't get nRF24L01+ to work. Making do with two nRF24LE1s communicating with each other.
Would be nice to have a link to the example sketches (the ones using external libraries) here:
- Matt Briggs last edited by
So this is going to cause an argument in the sketch?:
void loop()
{
// This will be called contentiously after setup.
}
Maybe should be "continuously"
Why do i get this error? Any idea how to fix it?
Arduino: 1.6.5 (Windows 8.1), Board: "Arduino Uno"
In file included from C:\Users\Audrey\Documents\Arduino\libraries\MySensors/drivers/RF24/RF24.cpp:23:0,
from C:\Users\Audrey\Documents\Arduino\libraries\MySensors/MySensors.h:290,
from GatewayW5100.ino:116:
C:\Users\Audrey\Documents\Arduino\libraries\MySensors/drivers/RF24/RF24.h:37:17: fatal error: SPI.h: No such file or directory
#include <SPI.h>
^
compilation terminated.
Error compiling.
This report would have more information with
"Show verbose output during compilation"
enabled in File > Preferences.
Messing around with it and got this message, still not idea how to fix it
Arduino: 1.6.5 (Windows 8.1), Board: "Arduino Uno"
Using library MySensors in folder: C:\Users\Audrey\Documents\Arduino\libraries\MySensors
Using library Ethernet in folder: C:\Program Files (x86)\Arduino\libraries\Ethernet
C:\Program Files (x86)\Arduino\hardware\tools\avr/bin/avr-g++ -c -g -Os -w -fno-exceptions -ffunction-sections -fdata-sections -fno-threadsafe-statics -MMD -mmcu=atmega328p -DF_CPU=16000000L -DARDUINO=10605 -DARDUINO_AVR_UNO -DARDUINO_ARCH_AVR -IC:\Program Files (x86)\Arduino\hardware\arduino\avr\cores\arduino -IC:\Program Files (x86)\Arduino\hardware\arduino\avr\variants\standard -IC:\Users\Audrey\Documents\Arduino\libraries\MySensors -IC:\Program Files (x86)\Arduino\libraries\Ethernet\src C:\Users\Audrey\AppData\Local\Temp\build1062139905146989693.tmp\GatewayW5100.cpp -o C:\Users\Audrey\AppData\Local\Temp\build1062139905146989693.tmp\GatewayW5100.cpp.o
In file included from GatewayW5100.ino:2:0:
C:\Users\Audrey\Documents\Arduino\libraries\MySensors/MySensors.h:328:2: error: #error No forward link or gateway feature activated. This means nowhere to send messages! Pretty pointless.
#error No forward link or gateway feature activated. This means nowhere to send messages! Pretty pointless.
^
Error compiling.
@Newzwaver could you post the sketch?
Thank you for your help, the thing is working. It has something to do with the way I have the Arduino software setup, for some reason I can't just copy a sketch and paste it in for a new sketch.
The MySensors library is missing from the arduino library manager again. Should I just log a new issue and reference the previous one?
@benji_lb yes please do. Could you post the link to the issue here when it has been created so we can follow it?
@benji_lb
Fixed. I noticed I was running an older version of the software. Uninstalled completely and reinstalled, worked fine.
Where would be the best place for an Arduino newbie to post to get some assistance? I'm looking at Gateways and i'm unsure which direction to take, and where would be best to post. Would this be OK?
@benji_lb great, thanks for reporting back.
The development category will be good. Big welcome to the MySensors community
Today, the "Basic Structure" section contains just a "normal" Arduino structure.
Wouldn't it be useful to also add a slightly extended version mentioning the split of the original Arduino-setup() to begin()->presentation()->setup() when using recent MySenors libs (since 2.1.0)?
This is not very transparent to quite a few users, and yet some of our examples do either not mention "before()" at all or do not make use of the new logical execution time of setup(), even if this would be helpful (!pcReceived in WaterMeter-Sketch, getControllerConfig(), for metric in Temperature, just to mention some examples) .
Also the text order of setup() still beeing above presentation() in at least some examples may be technically irrelevant but nevertheless misleading to some extend.
@rejoe2 great feedback, thanks! For the examples, I have created
I have updated the "Using Arduino " page. I am not 100% sure I got it right and easy to understand so feedback is very welcome.
@mfalkvidd
Your understanding of my suggestion seems to be 100% correct. I will add some additional comments and some (too complicated but working examples) as links at the github page.
Hi there,
I am on Linux - exclusively! Does that mean I am out even before I started?
3nibble (puzzled)
- napo7 Hardware Contributor last edited by
You're not !
I'm also exclusively on linux, and I use MySensors !
I was using Arduino IDE, and now I'm using atom.io, but both are working on linux !
hi i am using Lora SX1276 as simple transmitter and receiver.
is there any library for sending and receiving the data no gateway involved only simple data transmission and reception. | https://forum.mysensors.org/topic/4766/using-arduino/4 | CC-MAIN-2019-09 | refinedweb | 1,140 | 59.5 |
Summary: Guest blogger, Vinay Pamnani, talks about a cool new WMI tool.
Microsoft Scripting Guy, Ed Wilson, is here. One of the absolutely greatest things about working for Microsoft is the chance to meet (even virtually) and interact with really smart people who absolutely love technology. I recently ran across a post by Vinay (todays guest blogger) in an internal mailing where he was describing his new WMI Explorer tool. I downloaded the tool, played around with it a bit, and thought about writing a post about it—the tool is that good. But then, I decided it would be better if I could persuade the person who actually designed the tool to write the blog post (because obviously he knows it better than I do). So I reached out to Vinay, and he responded enthusiastically. In fact, he told me that he is currently working on a new iteration of the tool. So I said, “The current offering is cool, why don’t you write a quick overview, and when your new tool ships, you can write a couple of blog posts about it.” He agreed, and I am proud to turn things over to our guest blogger for today. Take it away Vinay…
My name is Vinay Pamnani, and I’m a senior support escalation engineer in the System Center Configuration Manager team. I’ve been working with Configuration Manager for over 7 years, but it didn’t take me long to realize that working with Configuration Manager means you need to work with WMI. As a result, I found myself spending a lot of time in wbemtest, which is a useful utility, but at the same time it’s very time consuming. Additionally, if you haven’t worked with wbemtest enough, it can be hard to navigate through wbemtest to get to where you want to be.
After spending enough time in wbemtest, I found some WMI Explorer utilities, and although they were useful, none of them gave me what I was looking for. Most of the tools I looked at had a feature that I liked, but none of them felt like a complete product that had everything I wanted. So, I decided to start a project with the intention of combining the features of currently available WMI Explorer utilities, and adding the features that I specifically wanted.
Note You can download the tool and find a list of features and requirements on CodePlex: WMI Explorer.
WMI Explorer has become integral to the way I work. For example, a few days ago I was looking to find if there’s something in WMI that can give me information about the DHCP server. I pulled up WMI Explorer, selected the root namespace, clicked the Search tab, selected Properties, typed DHCP, and selected the Recursive option. In a matter of seconds, I had a list of every WMI property in every WMI class in every WMI namespace that has a property name that matches “DHCP.” I would compare this with wbemtest…but wait, wbemtest does not have a Search feature! Here are the results from my search:
After reviewing the instances of Win32_NetworkAdapterConfiguration class in root\CIMV2 namespace, I knew that’s what I was looking for. But this class contains a number of instances, and only one of them had the DHCP server populated. I decided to write a script that would get all the instances of Win32_NetworkAdapterConfiguration class so that I could review the output and find the DHCP server. Instead of going to our Scripting Guy, Ed, I pulled up WMI Explorer again and navigated to the Win32_NetworkAdapterConfiguration class in root\CIMV2 namespace, clicked the Scripting tab, and selected VBScript as my script engine of choice. As shown in the following image, I had a script ready to run or save.
My next interaction with WMI involved looking at an embedded property for one of the SMS Provider classes. If you haven’t looked at the value of an embedded property, it’s painfully annoying to find it in wbemtest. Here’s an example of viewing an embedded property in wbemtest (note that I had to navigate through at least five separate windows to see the value for the IISSSLPortsList embedded property, not to mention blindly clicking each property to find the one I was looking for:
Instead of doing this in wbemtest, I pulled up the same instance in WMI Explorer. I expanded the property to get the value, and I was done.
I could talk more about other features of WMI Explorer. However, I’ll save that for another post, which will cover the next release that is currently in the works. In the meantime, check it out and let me know what you think on the WMI Explorer Discussions page.
~Vinay
Thank you, Vinay. This is a great tool, and I look forward to seeing what you have in store for us with the next iteration. Join me tomorrow when we have a guest blog post by Windows PowerShell MVP, Sean Kearney. (Yeah, it is the Scripting Guy’s birthday, and Sean thought he would write a cool post for my birthday present.) WooHoo!
I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.
Ed Wilson, Microsoft Scripting Guy
this is really really really a great tool 🙂 thanks for sharing!
this is really really really a great tool 🙂 thanks for sharing!
Sincerely very helpful. Very similar to finding user classes in ADSIEDIT for use in Powershell
Downloaded this few minutes ago and playing around with it. This tool is amazing. It will enhance my Powershell scripting!!!!
WMI Explorer is amazing! Get it while it’s hot
This tool currently looks nothing like in the screenshots, and it is unable to locate my pointer class (mouse) object. Any clues? Windows 10 | https://blogs.technet.microsoft.com/heyscriptingguy/2014/09/13/weekend-scripter-the-wmi-explorer-tool/ | CC-MAIN-2017-34 | refinedweb | 997 | 69.31 |
Python Functions
In this section, you will learn
- What is a function
- How to create a function
- Types of functions
What is a Python Function
A function is a small block of code that contains a number of statements to perform a specific task. When you have a program of thousands of lines performing different tasks, you should divide it into small modules (blocks), which increases readability and lowers complexity.
How to Define a Python Function
The following is the syntax to define a function:
def functionName(arguments): """This is the definition of this function""" statements return returnParam
- The keyword
defis used to define a function.
functionNameis the name of the function.
argumentsare optional. Arguments provide values to function to perform operations on.
- Colon (
:) ends the function header.
"""This is the definition of this function"""is a
docstringand is optional which describes what the function does.
statementsrefers to the body of the function.
- The statement
returnoptionally returns the result to the caller.
Python Function Example:
def language(p): """Function to print a message""" print("Programming language:", p)
Here a function
language is defined which has one argument
p passing from the caller. Inside the function, there is a docstring and a
p.
Call a Function in Python:
A function can be called from anywhere in the program. A function can be called by its name with the required parameters.
language('Python')
Programming language: Python
The
return Statement:
The
return statement transfers the control back to the codes where the function is called.
It indicates the ending of the function definition.
The syntax of
return is as follows:
return [values_to_be_returned]
If there is no
return statement in a function, a
None object will be returned.
Example of Using
return Statement:
def square(n): return n*n print("Square of 4=", square(4))
Square of 4=16
In this code, the function is called in a
n*n which is evaluated and the result is returned to where the function is called (
Scope and Lifetime of Python Variables:
The scope of a variable is where a variable can be accessed. When a variable is declared inside a function, it is not accessible to the outside of that function. This type of variables is called local variable and is accessed only to the function where it is declared.
The lifetime of a variable is the time during which a variable exists in memory. When a variable is declared inside a function, the memory of it will be released when the control jumps out of the function.
See the example below:
def fun(): a = 12 print("Value of a inside function:", a) a = 24 fun() print("Value of a outside function:", a)
Value of a inside function: 12 Value of a outside function: 24
In this code, variable
a inside the function and variable
a outside the function are different variables.
If you try to access variables declared inside functions from outside, you will come across an error -
NameError name 'x' is not defined. But the variables declared outside of functions have a global scope and can be accessed from inside.
Types of Functions:
Functions in Python can be categorized into two types:
Built-in functions: have a predefined meaning and perform specific tasks.
User-defined functions: are defined by the user containing any number of statements to perform user-defined tasks. | https://www.delftstack.com/tutorial/python-3-basic-tutorial/python-functions/ | CC-MAIN-2018-43 | refinedweb | 562 | 58.62 |
Determines if certain flags are set for a particular attribute.
#include "slapi-plugin.h" int slapi_attr_flag_is_set( Slapi_Attr *attr, unsigned long flag );
This function takes the following parameters:
Attribute that you want to check.
Flag that you want to check in the attribute.
The value of the flag argument can be one of the following:
Flag that determines if the attribute is single-valued.
Flag that determines if the attribute is an operational attribute.
Flag that determines if the attribute is read-only.
This function returns one of the following values:
1 if the specified flag is set.
0 if the specified flag is not set.
This function determines if certain flags are set for the specified attribute. These flags can identify an attribute as a single-valued attribute, an operational attribute, or as a read-only attribute, and are set from the schema when the Slapi_Attr structure is initialized. | http://docs.oracle.com/cd/E19424-01/820-4810/aaidi/index.html | CC-MAIN-2015-40 | refinedweb | 149 | 56.96 |
Member
1 Points
Nov 30, 2019 10:33 AM|James Boelens|LINK
I'm an absolute beginner on EF, and I am stuck with this problem:
I have
2 tables: Toy and Brand
public class Toy { public int Id { get; set; } public string Name { get; set; } public int BrandId { get; set; } public Brand brand { get; set; } } public class Brand { public int BrandId { get; set; } public string Name { get; set; } }
My DBContext file:
public class SOT : DbContext { public SOT(): base("name=SOT") { public virtual DbSet<Brand> brands; set; } public virtual DbSet<Toy> toys { get; set; } }
The Brand data is given in the view, using a dropdown:
@Html.DropDownListFor(m => m.Toy.brand, new SelectList(Model.BrandsList, "BrandId", "Name"), "Select Brand", new { @class = "form-control" })
This seems to work fine, as shown in the page source:
<select class="form-control" id="Toy_Brand" name="Toy.Brand"><option value="">Select Brand</option> <option value="1">LEGO</option> <option value="2">HASBRO</option> <option value="5">Fisher Price</option> </select>
The List<> Brandslist used is created here:
public class NewToyViewModel { public IEnumerable<Brand> BrandsList { get; set; } public Toy Toy { get; set; } }
The view is using this model:
@model Spelotheek.ViewModels.NewToyViewModel
When I add a record, the Brand field is null.
public ActionResult Create(Toy toy) { _context.Toy.Add(toy); _context.SaveChanges();
All fields, except Brand are saved OK What am I doing wrong?
thanks, James
Contributor
4081 Points
Nov 30, 2019 10:11 PM|DA924|LINK
@Html.DropDownListFor(m => m.Toy.BrandId, new SelectList(Model.BrandsList, "BrandId", "Name"), "Select
Brand", new { @class = "form-control" })
I think all that's going to be selected is a value of the selected item in the dropdownlist. It's something like that. What the Brand is in Toy and why you're trying to do something with Brand and a dropdownlist selection, I don't know what that's about. Is that something you made up?
Member
1 Points
2 replies
Last post Dec 01, 2019 07:23 AM by James Boelens | https://forums.asp.net/t/2162019.aspx?Attribute+not+saved | CC-MAIN-2020-05 | refinedweb | 334 | 51.28 |
IRC log of ws-addr on 2006-09-25
Timestamps are in UTC.
19:48:14 [RRSAgent]
RRSAgent has joined #ws-addr
19:48:14 [RRSAgent]
logging to
19:48:29 [bob]
zakim, this will be ws_addrwg
19:48:29 [Zakim]
ok, bob; I see WS_AddrWG()4:00PM scheduled to start in 12 minutes
19:48:54 [bob]
Meeting: Web Services Addressing WG Teleconference
19:49:03 [bob]
Chair: Bob Freund
19:49:50 [bob]
Agenda:
19:52:20 [bob]
rrsagent, make logs public
19:53:14 [David_Illsley]
David_Illsley has joined #ws-addr
19:53:52 [Jonathan]
Jonathan has joined #ws-addr
19:56:09 [bob]
regrets+ katy
19:57:49 [plh]
plh has joined #ws-addr
19:58:38 [pauld]
pauld has joined #ws-addr
19:58:51 [Zakim]
WS_AddrWG()4:00PM has now started
19:58:57 [Zakim]
+Bob_Freund
19:59:08 [plh]
tel:+1.617.258.0992 (Office)
19:59:10 [plh]
oops
19:59:14 [Zakim]
+Plh
19:59:31 [Zakim]
+[IPcaller]
19:59:49 [plh]
zakim, ipcaller is David
19:59:49 [Zakim]
+David; got it
19:59:58 [Zakim]
+Jonathan_Marsh
20:00:01 [David_Illsley]
zakim, mute me
20:00:01 [Zakim]
sorry, David_Illsley, I do not see a party named 'David_Illsley'
20:00:17 [David_Illsley]
zakim, mute David
20:00:17 [Zakim]
David should now be muted
20:00:23 [PaulKnight]
PaulKnight has joined #ws-addr
20:00:35 [anish]
anish has joined #ws-addr
20:00:37 [Zakim]
+Paul_Downey
20:00:47 [TonyR]
TonyR has joined #ws-addr
20:01:41 [David_Illsley]
zakim, David is really me
20:01:41 [Zakim]
+David_Illsley; got it
20:01:47 [prasad]
prasad has joined #ws-addr
20:01:54 [Zakim]
+??P9
20:02:00 [Zakim]
+ +1.503.228.aaaa
20:02:05 [gpilz]
gpilz has joined #ws-addr
20:02:18 [Zakim]
+??P12
20:02:20 [prasad]
zakim, ??P12 is prasad
20:02:20 [Zakim]
+prasad; got it
20:02:25 [bob]
zakim ??P9 is yenling
20:02:26 [Zakim]
+Paul_Knight
20:02:35 [Zakim]
+Gilbert_Pilz
20:02:39 [Zakim]
+Marc_Hadley
20:02:42 [pauld]
zakim, aaaa is Anish
20:02:42 [Zakim]
sorry, pauld, I do not recognize a party named 'aaaa'
20:03:04 [pauld]
zakim, .aaaa is Anish
20:03:04 [Zakim]
sorry, pauld, I do not recognize a party named '.aaaa'
20:03:29 [bob]
zakim, whi is on the phone
20:03:29 [Zakim]
I don't understand 'whi is on the phone', bob
20:03:43 [Zakim]
+??P0
20:03:53 [TonyR]
zakim, ??p0 is me
20:03:54 [Zakim]
+TonyR; got it
20:04:10 [TonyR]
zakim, ??p9 is yinleng
20:04:10 [Zakim]
+yinleng; got it
20:04:24 [Zakim]
+DOrchard
20:05:07 [David_Illsley]
zakim, unmute me
20:05:07 [Zakim]
David_Illsley should no longer be muted
20:05:23 [dorchard]
dorchard has joined #ws-addr
20:05:30 [Zakim]
+[IBM]
20:05:35 [Zakim]
+David_Hull
20:05:44 [Paco]
Paco has joined #ws-addr
20:05:56 [plh]
zakim, ibm holds Paco
20:05:56 [Zakim]
+Paco; got it
20:06:54 [Jonathan]
Scribe: Jonathan
20:07:02 [Zakim]
+GlenD
20:07:04 [dhull]
dhull has joined #ws-addr
20:07:20 [GlenD]
GlenD has joined #ws-addr
20:07:31 [Jonathan]
Topic: Agenda Review
20:07:37 [Jonathan]
Meeting: WS-Addressing WG telcon
20:07:39 [Jonathan]
Chair: Bob
20:07:44 [Jonathan]
Agenda accepted
20:07:57 [Jonathan]
Topic: Approval of minutes
20:08:06 [dhull]
minutes look OK
20:08:16 [Jonathan]
Minutes accepted as mailed
20:08:51 [Jonathan]
Bob: Goal is to reach a conclusion on CR33, which is blocking CR31, which in turn is blocking the end of CR.
20:09:03 [Jonathan]
... Been working this for a couple of months, we need to conclude.
20:09:09 [Jonathan]
Topic: Action Items:
20:09:31 [Jonathan]
2006-08-21: cr31 - Tony Rogers to implement CHANGE 1&2 to the table in preparation for CR-31 PENDING
20:09:52 [Jonathan]
... will be done later today
20:10:18 [Jonathan]
Topic: Coordination and New Issues
20:10:49 [Jonathan]
Bob: Policy is requesting a non-normative reference to WS-Policy.
20:11:00 [Jonathan]
Proposal: "The wsaw:UsingAddressing element MAY also be used in other contexts (e.g., as a policy assertion in a policy framework <such as WS-Policy [REF]>)."
20:11:00 [Jonathan]
20:11:25 [Jonathan]
Philippe: Request didn't go to public list... link here:
20:11:41 [plh]
20:12:12 [Jonathan]
Bob: Controversial?
20:12:23 [Jonathan]
Tony: That's what we were suggesting as well...
20:12:39 [Jonathan]
Bob: No objections heard to adding this as a new issue, closing it by accepting the proposal.
20:12:56 [Jonathan]
Topic: CR033
20:13:15 [Jonathan]
Bob: Proposal 4 was posted last week by Anish, but we didn't have time to go over.
20:13:54 [Jonathan]
Anish: Background: we have the wsaw:Anonymous marker, restricting values of FaultTo and ReplyTo, which we've modified to accommodate "none".
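[Scribe note: for context, a sketch of the wsaw:Anonymous WSDL marker being discussed - per the WSDL Binding CR draft it is a per-operation element in the binding whose value is "optional", "required", or "prohibited" (the binding and operation names here are made up):

```xml
<wsdl:binding name="ExampleBinding" type="tns:ExamplePortType">
  <wsaw:UsingAddressing wsdl:required="true"/>
  <wsdl:operation name="GetQuote">
    <!-- constrains the [address] of the response endpoints (wsa:ReplyTo /
         wsa:FaultTo) that clients may supply on requests to this operation -->
    <wsaw:Anonymous>required</wsaw:Anonymous>
  </wsdl:operation>
</wsdl:binding>
```
]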
20:14:06 [bob]
zakim, who is making noise
20:14:06 [Zakim]
I don't understand 'who is making noise', bob
20:14:23 [dorchard]
zakim, mute me
20:14:23 [Zakim]
DOrchard should now be muted
20:14:32 [dorchard]
zakim, who's speaking?
20:14:38 [David_Illsley]
zakim, mute me
20:14:39 [Zakim]
David_Illsley should now be muted
20:14:44 [Zakim]
dorchard, listening for 10 seconds I heard sound from the following: anish (81%), GlenD (9%)
20:14:47 [Jonathan]
... WS-RX came up with an anonymous template, with slightly different semantics. It says "the current backchannel that has the same uuid as the current makeconnection."
... This isn't composable with wsaw:Anonymous.
20:15:00 [dorchard]
zakim, mute GlenD
20:15:00 [Zakim]
GlenD should now be muted
20:15:13 [Jonathan]
... This isn't composible with wsaw:Anonymous.
20:15:34 [Jonathan]
... We need instead to talk about addressable endpoints rather than equivalent to our anonymous URI.
20:15:51 [Jonathan]
... Some folks asked for something runtime verifiable.
20:17:05 [Jonathan]
... The wsaw:isAnon flag in the message allows the service to verify that the non-Anon URI still should go on the backchannel.
20:17:28 [bob]
20:17:28 [Jonathan]
... Proposal here:
20:18:01 [Jonathan]
... Under "required" and "prohibited", the address has to be anon, none, or marked isAnon="true".
20:18:39 [Jonathan]
... Other specs could use this to define variants of anon or none.
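[Scribe note: a sketch of what a reply EPR carrying Anish's proposed marker might look like - the wsaw:isAnon attribute placement and the template-style URI are illustrative assumptions from the discussion, not agreed syntax:

```xml
<soap:Header>
  <wsa:ReplyTo>
    <!-- not the WS-Addressing anonymous URI, but flagged so that
         wsaw:Anonymous="required" validation accepts it; the service must
         still understand the (hypothetical) URI itself to use the backchannel -->
    <wsa:Address wsaw:isAnon="true">http://example.org/anonymous?id=urn:uuid:0001</wsa:Address>
  </wsa:ReplyTo>
</soap:Header>
```
]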
20:18:59 [dhull]
q+
20:19:42 [bob]
ack dhull
20:20:12 [Jonathan]
David: In allowing none with anon, how did we deal with HTTP response?
20:20:28 [Jonathan]
... HTTP users will not be able to handle none.
20:22:20 [Jonathan]
... To the issue, the changes are interesting, signalling anonymity by something other than through a URI.
20:22:36 [Jonathan]
... Our core is extensible.
20:22:48 [Jonathan]
... Seems to obviate the need for the anon URI.,
20:23:41 [Jonathan]
... Have we thought through how the two work together, and how disruptive it is to what we have,
20:23:48 [anish]
q+
20:23:48 [dorchard]
q+ to ask about putting in the core
20:24:06 [bob]
ack anish
20:24:44 [Jonathan]
Anish: I think it's fairly limited. If this is marked in the WSDL, when you send a request, your FaultTo or ReplyTo can be any URI.
20:24:49 [Jonathan]
q+
20:25:14 [Jonathan]
Anish: If the client doesn't understand it, it simply won't put it in...
20:25:31 [bob]
ack dorch
20:25:32 [Zakim]
dorchard, you wanted to ask about putting in the core
20:25:45 [Jonathan]
DaveO: What about putting it in core?
20:25:45 [Paco]
q+
20:26:13 [Jonathan]
... saw Jonathan's note, agreed putting wsdl-markers into runtime stuff seems suspect.
20:26:32 [Jonathan]
... sent recent mail about putting it into the core (straw man). I'll make the argument it's not an incompatible change.
20:26:49 [Jonathan]
... We could put this in the Core namespace, and look at sending/receiving behaviors.
20:27:03 [Jonathan]
... We can see how forward/backward compatible it is.
20:27:15 [Jonathan]
... I squinted and think it might be a compatible change.
20:27:31 [bob]
q?
20:27:47 [Jonathan]
... If someone comes along and says it's a non-compatible change to the core, then it's hard to want to do as an extension.
20:27:52 [dhull]
+1 in that the document doesn't matter
20:28:22 [Jonathan]
... In a sense this is adding a kind of typing behavior. We only provide a URI there, but we don't have a model for filling in your own values.
20:28:44 [Jonathan]
... Adding run-time typing might be an important change to do.
20:28:50 [dorchard]
zakim, mute me
20:28:50 [Zakim]
DOrchard should now be muted
20:29:00 [bob]
ack jona
20:29:53 [Jonathan]
Jonathan: Marker might mean: ignore any validation you might do based on wsaw:Anonymous="required".
20:30:45 [Jonathan]
Anish: No, that means the ReplyTo and FaultTo must either be wsa anonymous, none, or any other URI.
20:30:57 [TonyR]
q+
20:31:23 [Jonathan]
Jonathan: what's the difference?
20:31:52 [dorchard]
Jonathan's point about "ignoring validation" is somewhat true, the marker does say "ignore the value".
20:32:07 [Jonathan]
Anish: I don't ignore it, I include the recognition of wsaw:isAnon.
20:32:18 [dorchard]
Jonathan: There is no restriction on the value if the anon attrib is set.
20:32:59 [gpilz]
q+
20:33:02 [dorchard]
Jonathan: When WS-A processor sees this, then it ignores the value, therefore it's core
20:33:20 [dorchard]
jonathan: there are 2 different ways of looking at this
20:33:25 [dhull]
so if the server sees isanon=true, then it basically pretends the [address] was anon?
20:33:41 [Jonathan]
Jonathan: Thinks there are two ways to look at this proposal.
20:33:43 [dorchard]
jonathan: 1 way is that the wsdl is overridden, the other is overriding core validation.
20:33:46 [bob]
ack paco
20:33:59 [gpilz]
q-
20:34:19 [Jonathan]
Paco: Is the intent that when you've got the flag, you can put any URI, or one that represents the backchannel?
20:34:50 [dhull]
q+
20:35:14 [Jonathan]
Anish: As far as this marker goes, and as far as WSDL validation goes, then it follows the rules for the marker. You still need to process the special URI.
20:35:31 [Jonathan]
Paco: What if I give you an addressable URI with wsaw:isAnon="true"?
20:35:57 [Jonathan]
Anish: wsaw:Anonymous="required" means the response will always be on the backchannel.
20:36:21 [Jonathan]
... The specific URI in the EPR may have a specific meaning.
20:36:30 [Jonathan]
Paco: That meaning has to be compatible with the backchannel behavior?
20:36:43 [Jonathan]
... My question is about the backward compatibility issue:
20:36:49 [dorchard]
zakim, unmute me
20:36:49 [Zakim]
DOrchard should no longer be muted
20:37:11 [Jonathan]
... A client sends this marker to an old endpoint. Maybe the old endpoint doesn't understand the marker, it will choke because it doesn't understand the URI.
20:37:19 [Jonathan]
... But the WS-A processor will still choke.
20:37:43 [Jonathan]
DaveO: If WS-A processing is done first, it will choke. A layer before addressing might be able to handle it.
20:38:48 [Jonathan]
Paco: Suppose you have WS-A deployed, and we have RM deployed (assume somebody uses special backchannel). Now we publish this spec with the new flag, and everyone crashes.
20:38:49 [gpilz]
q+
20:39:02 [Jonathan]
(not sure I got that whole thing).
20:39:08 [Jonathan]
DaveO: Not backward compatible?
20:39:15 [Jonathan]
Paco: Not convinced it won't break existing endpoints.
20:39:38 [Jonathan]
... A client sends to a previously deployed endpoint. The endpoint breaks.
20:40:09 [Jonathan]
Gil: You're presuming that you can already use the WS-RM URI. But that's why we have this issue in the first place.
20:40:33 [Jonathan]
Paco: Concerned about pushing something to the runtime something we should do in the WSDL.
20:40:59 [Jonathan]
... I think we're too tight in assuming who will manage connections.
20:41:18 [Jonathan]
Anish: Seem to be pointing to the previous proposal.
20:41:35 [Jonathan]
Paco: Yes, sorry to go back there. We should have a better marker than a runtime artifact.
20:42:06 [Jonathan]
DaveO: Do you see any way we can support WS-RM's anonymous with wsaw:Anonymous?
20:42:28 [Jonathan]
... Any change to RM that isn't also incompatible with WS-Addressing?
20:42:52 [Jonathan]
Paco: You put it well - we're assuming too much about how much validation we do.
20:42:58 [anish]
btw, paco, i would be ok with the previous proposal (or some version of it)
20:43:12 [Jonathan]
DaveO: Even changing the marker is not compatible.
20:43:20 [Jonathan]
Paco: Marker isn't done yet. We can change that.
20:43:40 [dorchard]
q?
20:43:41 [anish]
q+
20:43:42 [Jonathan]
... Sympathetic to the problem, but don't like this solution for the backward compatibility reasons.
20:43:55 [gpilz]
q-
20:43:58 [bob]
ack tonyr
20:44:10 [dorchard]
zakim, mute me
20:44:10 [Zakim]
DOrchard should now be muted
20:44:31 [Jonathan]
Tony: Jonathan recorded Anish as saying that anonymous=required means anon, none, or any other uri, but should say any other uri with anon=true.
20:45:21 [Jonathan]
... Thinks Jonathan misminuted it.
20:47:22 [Jonathan]
Jonathan: Trying to tease out what "any other uri" means.
20:47:24 [GlenD]
Tony was exactly right - when isAnon="true" that means "do NOT interpret this URI in the naive way (by just trying to send to it) - something special is going on"
20:47:45 [bob]
q?
20:47:51 [bob]
ack dhull
20:47:52 [Jonathan]
Tony: You need something to process that special URI.
20:50:05 [Jonathan]
DaveH: What's the difference between an anon-like address and an anon-like address marked as isAnon?
20:50:20 [Jonathan]
Tony: Should be no difference on the wire.
20:50:27 [gpilz]
q+
20:50:33 [Jonathan]
DaveH: We're just spelling anonymous differently.
20:50:43 [bob]
ack anish
20:50:49 [Paco]
q+
20:51:09 [Jonathan]
Anish: We want to see WS-A and WS-RM composable. We aren't right now.
20:51:43 [Jonathan]
... I don't want to make it hard to turn on RM on an endpoint that only operates on the backchannel - without removing things from the WSDL.
20:52:01 [Jonathan]
... Several ways to solve this problem.
20:52:21 [Jonathan]
... We can ask WS-RM to redesign, which they already rejected.
20:53:12 [Jonathan]
... They sent one proposal to loosen the tightness between the marker and WS-A anon.
20:53:28 [Jonathan]
... I wrote up the isAnon marker proposal.
20:53:43 [Jonathan]
... I don't want to see that these two specs aren't compatible. Worst solution possible.
20:54:14 [Jonathan]
... Second edition of Core, kicking back to WS-RM, etc. anything is better.
20:54:25 [bob]
ack gpil
20:54:47 [Jonathan]
Gil: Wanted to answer DaveH, what's the difference when you add isAnon=true. Trying to communicate it one more time.
20:55:17 [Jonathan]
... Imagine you have a service that needs to be reliable. Need to retry responses when it determines it must.
20:55:34 [Jonathan]
... Now imagine Alice and Bob connecting to the service. Neither supports a listener.
20:56:07 [Jonathan]
... From the service's perspective, if it needs to resend a response back to Alice, it can't disambiguate between Alice and Bob.
20:56:33 [Jonathan]
... Both are addressed using the anonymous URI. It's got no information to correlate the replies.
20:56:54 [Jonathan]
... We've split out the fact of anon meaning use the back channel, leaving the uuid to disambiguate Alice and Bob.
20:57:07 [Jonathan]
DaveH: Gets that, talked about sending a cookie along.
20:57:35 [Jonathan]
Gil: Alice's wire and Bob's wire are different wires.
20:57:45 [Jonathan]
DaveH: You can't disambiguate that from the request.
20:58:00 [Jonathan]
Gil: Might need to create a new sequence, so you need to know who it came from.
20:58:07 [Jonathan]
DaveH: Can't use wsa:From?
20:58:14 [Jonathan]
Gil: At risk?
20:58:36 [Jonathan]
DaveH: Might be a good use for it.
20:58:36 [bob]
ack paco
20:59:20 [Jonathan]
Paco: With the isAnon marker, when I use the wsaw:Anonymous="required", that means the service trusts you to send a URI that encodes "anonymous".
20:59:44 [Jonathan]
... Key point is that when you use the flag, you trust the client.
20:59:53 [Jonathan]
Tony: That's how I understand it.
21:00:08 [Jonathan]
Paco: You either send the real thing, or we trust you. We give up validation.
21:00:59 [Jonathan]
... if this is fundamentally changing the meaning of the marker. Why don't we make the marker informational: "I always send the response on the backchannel."
21:01:21 [Jonathan]
Anish: I'm fine with proposal 1 which is about asserting what the service can and cannot do.
21:01:42 [Jonathan]
... If the service says it can't do callbacks, it will send a fault.
21:01:58 [Jonathan]
... The complaint was the requirement is too fuzzy. I think that's OK.
21:02:21 [Jonathan]
Paco: Looks like when we put in this marker, and suspend validation.
21:03:44 [Jonathan]
Jonathan: In the absence of WSDL, does isAnon="true" change any behavior?
21:03:56 [Zakim]
-yinleng
21:03:57 [Jonathan]
Tony: Yes, the response will come on the backchannel.
21:04:27 [chad]
chad has joined #ws-addr
21:04:46 [Jonathan]
Anish: Can't say anything about what the behavior might be without WSDL.
21:05:08 [pauld]
can't help thinking this belongs in core
21:05:12 [dhull]
q+
21:05:24 [pauld]
and that ship has sailed, for 1.0 anyway
21:06:09 [pauld]
chad, question: options for cr33, again
21:06:24 [pauld]
chad, option 0: Status Quo
21:06:55 [dhull]
chad, option 1: Status quo, but clarify that "optional" makes no statement about what other addresses will work
21:07:35 [dhull]
chad, option 2: Remove wsaw:anonymous and use Policy
21:07:35 [Jonathan]
Anish: Doesn't change behavior. The client is asserting it's anonymous.
21:07:44 [gpilz]
q+
21:07:53 [gpilz]
q-
21:08:05 [TonyR]
chad option 9: misunderstand the proposal
21:08:14 [marc]
marc has joined #ws-addr
21:08:15 [TonyR]
chad, list options
21:10:13 [Jonathan]
21:11:27 [TonyR]
chad option 4: provide a marker to let a URI slide past the wsaw:Anonymous validation (Anish's proposal of today)
21:12:38 [TonyR]
chad option 5: provide a marker to indicate that a URI requires a response on the backchannel
21:13:02 [pauld]
chad, options?
21:13:56 [dorchard]
q+ to ask about option #4
21:14:20 [dorchard]
zakim, unmute me
21:14:20 [Zakim]
DOrchard should no longer be muted
21:14:23 [TonyR]
chad option 3: Remove wsaw:anonymous and use policy (Katy's proposal)
21:14:54 [Jonathan]
option2: <wsaw:NewConnection> proposal
21:15:01 [Jonathan]
chad, option 2: <wsaw:NewConnection> proposal
21:15:02 [bob]
ack dorch
21:15:02 [Zakim]
dorchard, you wanted to ask about option #4
21:15:38 [Jonathan]
DaveO: If I make up a new URI that means "don't use the backchannel", and I mark it as isAnon="true"
21:15:59 [dhull]
q+ to comment on option 0
21:16:07 [Jonathan]
... that means that if someone has wsaw:Anonymous="required", I will process it.
21:16:23 [Jonathan]
Anish: Yes, but the service will still need to process that URI, if it understands it.
21:16:42 [Jonathan]
DaveO: I'm just going to inject this in there so that WS-A isn't going to fault either.
21:16:47 [gpilz]
q+
21:16:51 [Jonathan]
Anish: Paradox.
21:17:20 [Jonathan]
DaveO: Two levels of validation. WSDL-based, WS-A. We'll hope these aren't incompatible.
21:17:39 [Jonathan]
... Thought the proposal was a bit more strongly-typed. Guess not.
21:17:43 [anish]
q+
21:17:47 [bob]
ack dhull
21:17:47 [Zakim]
dhull, you wanted to comment on option 0
21:17:52 [dorchard]
zakim, mute me
21:17:52 [Zakim]
DOrchard should now be muted
21:18:24 [Jonathan]
DaveH: From what I could tell, some is based on the misconception that wsaw:Anonymous="optional" means any URI will work. It doesn't, just that you can put anon URI there.
21:18:30 [TonyR]
chad, list options
21:18:32 [Jonathan]
... Might want to emphasize that point in the text.
21:18:44 [Jonathan]
... which could be composed with Option 0 - status quo.
21:18:54 [dorchard]
q+ to ask about Paco's "kind of proposal"
21:20:11 [Jonathan]
Option 0.1 Status quo, but clarify that "optional" makes no statement about what other addresses will work
21:20:19 [Jonathan]
Option 2: Change MUST to SHOULD
21:20:26 [marc]
q+ to ask about option 3
21:20:29 [Jonathan]
chad, Option 2: Change MUST to SHOULD
21:20:39 [Jonathan]
chad, Option 0.1 Status quo, but clarify that "optional" makes no statement about what other addresses will work
21:20:49 [anish]
q?
21:20:50 [dorchard]
q?
21:20:58 [Jonathan]
chad, Option 0: Status quo, but (possibly) clarify that "optional" makes no statement about what other addresses will work
21:21:17 [bob]
ack gpilz
21:21:30 [pauld]
chad, say hi
21:21:32 [Jonathan]
Gil: Dug preferred Option 1 so each specification can maintain lists of URIs with anonymous semantics.
21:21:40 [Jonathan]
... right?
21:21:58 [anish]
chad, list options
21:21:59 [bob]
q?
21:22:01 [Jonathan]
Bob: Loosen MUST to SHOULD so that you could rationalize the conflict.
21:22:18 [Jonathan]
Gil: Core already allows other URIs with anonymous semantics, right>?
21:22:22 [Jonathan]
Bob: right.
21:22:24 [bob]
ack anish
21:22:36 [Jonathan]
chad, Option 1: Change MUST to SHOULD
21:22:45 [Jonathan]
chad, option 2: <wsaw:NewConnection> proposal
21:22:46 [TonyR]
chad option 1: change MUST to SHOULD (allows for other specs to define anon URIs)
21:22:48 [Jonathan]
chad, options?
21:23:19 [Jonathan]
Anish: Server needs to understand the URI in the address. If the server doesn't it will barf or make assumptions.
21:23:23 [dorchard]
zakim, unmute me
21:23:23 [Zakim]
DOrchard should no longer be muted
21:23:37 [Jonathan]
... Personally I like (2)...
21:23:43 [bob]
q?
21:24:22 [Jonathan]
Anish: Regardless of whether isAnon is present, there is a URI which, if specified a la WS-RM, and the service doesn't understand it, will cause the service to fault or do something strange.
21:24:33 [Jonathan]
... The service still has to understand what that URI means.
21:25:10 [Jonathan]
... I like (2) because it still validates, just says the service can/can't/must use backchannels.
21:25:31 [bob]
q?
21:25:41 [bob]
ack dorch
21:25:41 [Zakim]
dorchard, you wanted to ask about Paco's "kind of proposal"
21:26:11 [Jonathan]
DaveO: Were you talking about option 1 earlier?
21:26:21 [Jonathan]
Paco: Thinks 1 is closer.
21:26:26 [Jonathan]
Paco: Correction - thinks 2 is closer.
21:26:47 [bob]
ack marc
21:26:47 [Zakim]
marc, you wanted to ask about option 3
21:27:00 [Jonathan]
Marc: Option 3, does this mean?
21:27:04 [Jonathan]
DaveO: Duck and run.
21:27:05 [dorchard]
zakim, mute me
21:27:05 [Zakim]
DOrchard should now be muted
21:28:22 [dhull]
q+
21:29:02 [Jonathan]
Jonathan: Indicates frustration with the baked-ness of wsaw:Anonymous, perhaps we shouldn't be RECOMMENDING this yet.
21:29:18 [Jonathan]
Jonathan: Policy is a distraction.
21:29:33 [GlenD]
GlenD has joined #ws-addr
21:30:00 [Jonathan]
DaveH: Idea is that policy might be a more composable framework for solving these kinds of problems better.
21:30:06 [anish]
i would like to speak against proposal 3 -- without that, there is no way to specify "async" request response, like what rosettanet wants
21:30:17 [anish]
that ==> wsaw:Anonymous
21:30:24 [Jonathan]
... Seems like a serious alternative to coming up with a special-purpose hard-wired WSDL marker.
21:30:42 [dorchard]
q+
21:30:44 [bob]
ack dhull
21:30:50 [gpilz]
q+
21:31:03 [TonyR]
chad, list options
21:31:06 [dorchard]
zakim, unmute me
21:31:06 [Zakim]
DOrchard should no longer be muted
21:31:12 [bob]
ack dorch
21:31:15 [gpilz]
q-
21:31:17 [Jonathan]
DaveO: Does 2 solve RM's problem?
21:31:28 [Jonathan]
Anish: I think so.
21:31:51 [dorchard]
zakim, mute me
21:31:51 [Zakim]
DOrchard should now be muted
21:31:53 [GlenD]
Gotta run, folks. I'm going to abstain on this one, assuming we vote in the next 30 min.
21:32:15 [Jonathan]
... It makes assertions about whether the backchannel can be used.
21:32:19 [Jonathan]
... or must
21:32:32 [Zakim]
-GlenD
21:32:55 [bob]
q?
21:33:09 [Jonathan]
Gil: specifically lists the URIs that indicate the backchannel.
21:33:29 [Jonathan]
Anish: Just shows examples of URIs that indicate the backchannel...
21:33:31 [Zakim]
-prasad
21:34:55 [anish]
vote: 2, 4, 5, 1
21:34:56 [Jonathan]
vote: 3,0
21:34:58 [gpilz]
vote 2, 5, 4, 1
21:35:02 [David_Illsley]
vote 3,2,1
21:35:05 [pauld]
vote: 3, 0
21:35:11 [Paco]
vote: 3, 2, 1
21:35:13 [gpilz]
vote: 2, 5, 4, 1
21:35:20 [marc]
vote: 0, 4
21:35:23 [dhull]
vote: 0, 3, 5, 4
21:35:27 [TonyR]
vote 5, 3, 0, 9, 4, 1
21:35:40 [PaulKnight]
vote 3,2,0
21:36:08 [David_Illsley]
vote: 3,2,1
21:36:08 [Jonathan]
chad, votes?
21:36:09 [plh]
vote 1, 0, 3, 2
21:36:09 [PaulKnight]
vote: 3, 2, 0
21:36:21 [Jonathan]
chad, votes?
21:36:24 [plh]
vote: 1, 0, 3, 2
21:36:25 [TonyR]
vote: 0, 5, 3, 9, 4, 1
21:36:30 [Jonathan]
chad, votes?
21:37:07 [pauld]
chad, count
21:37:08 [chad]
Question: options for cr33, again
21:37:08 [chad]
Option 0: Status quo, but (possibly) clarify that "optional" makes no statement about what other addresses will work (3)
21:37:08 [chad]
Option 1: change MUST to SHOULD (allows for other specs to define anon URIs) (1)
21:37:08 [chad]
Option 2: <wsaw:NewConnection> proposal (2)
21:37:08 [chad]
Option 3: Remove wsaw:anonymous and use policy (Katy's proposal) (5)
21:37:10 [chad]
Option 4: provide a marker to let a URI slide past the wsaw:Anonymous validation (Anish's proposal of today) (0)
21:37:13 [chad]
Option 5: provide a marker to indicate that a URI requires a response on the backchannel (0)
21:37:15 [chad]
Option 9: misunderstand the proposal (0)
21:37:17 [chad]
11 voters: anish (2,4,5,1),David_Illsley (3,2,1),dhull (0,3,5,4),gpilz (2,5,4,1),Jonathan (3,0),marc (0,4),Paco (3,2,1),pauld (3,0),PaulKnight (3,2,0),plh (1,0,3,2),TonyR (0,5,3,9,4,1)
21:37:20 [chad]
Round 1: Count of first place rankings.
21:37:22 [chad]
Round 2: First elimination round.
21:37:24 [chad]
Eliminating candidadates without any votes.
21:37:26 [chad]
Eliminating candidate 4.
21:37:28 [chad]
Eliminating candidate 5.
21:37:30 [chad]
Eliminating candidate 9.
21:37:32 [chad]
Round 3: Eliminating candidate 1.
21:37:34 [chad]
Round 4: Eliminating candidate 2.
21:37:36 [chad]
Round 5: Eliminating candidate 0.
21:37:38 [chad]
Candidate 3 is elected.
21:37:40 [uyalcina]
uyalcina has joined #ws-addr
21:37:40 [chad]
Winner is option 3 - Remove wsaw:anonymous and use policy (Katy's proposal)
21:38:51 [dorchard]
zakim, unmute me
21:38:51 [Zakim]
DOrchard should no longer be muted
21:39:00 [Jonathan]
Bob: 0 and 3 are the most popular options.
21:39:13 [Jonathan]
DaveO: Might want to do a runoff between 3, 0, 2
21:39:49 [pauld]
chad, drop option 9
21:39:49 [chad]
dropped option 9
21:40:07 [pauld]
chad, drop option 4
21:40:07 [chad]
dropped option 4
21:40:12 [jeffm]
jeffm has joined #ws-addr
21:40:49 [pauld]
chad, drop option 1
21:40:49 [chad]
dropped option 1
21:40:51 [anish]
chad, list options
21:41:00 [pauld]
chad, drop option 5
21:41:00 [chad]
dropped option 5
21:41:06 [pauld]
chad, count
21:41:06 [chad]
Question: options for cr33, again
21:41:06 [chad]
Option 0: Status quo, but (possibly) clarify that "optional" makes no statement about what other addresses will work (4)
21:41:06 [chad]
Option 2: <wsaw:NewConnection> proposal (2)
21:41:06 [chad]
Option 3: Remove wsaw:anonymous and use policy (Katy's proposal) (5)
21:41:06 [chad]
11 voters: anish (2),David_Illsley (3,2),dhull (0,3),gpilz (2),Jonathan (3,0),marc (0),Paco (3,2),pauld (3,0),PaulKnight (3,2,0),plh (0,3,2),TonyR (0,3)
21:41:07 [TonyR]
vote: 0,3,2
21:41:09 [chad]
Round 1: Count of first place rankings.
21:41:11 [chad]
Round 2: Eliminating candidate 2.
21:41:13 [chad]
Round 3: Eliminating candidate 0.
21:41:15 [dorchard]
vote: 2
21:41:15 [chad]
Candidate 3 is elected.
21:41:17 [chad]
Winner is option 3 - Remove wsaw:anonymous and use policy (Katy's proposal)
21:41:28 [Jonathan]
vote: 3,0
21:41:31 [anish]
vote: 2, 0
21:41:32 [David_Illsley]
vote: 3
21:41:37 [dorchard]
vote: 2, 0
21:41:37 [PaulKnight]
vote: 3, 0
21:41:38 [pauld]
vote: 3,0
21:41:39 [dhull]
vote: 0, 3
21:41:39 [gpilz]
vote: 2
21:41:41 [uyalcina]
vote:0
21:41:43 [marc]
vote: 0
21:41:43 [Paco]
vote: 3
21:41:51 [jeffm]
vote: 2,0
21:41:56 [plh]
vote: 3,0,2
21:42:18 [pauld]
chad, votes?
21:42:36 [uyalcina]
vote:0,2
21:43:03 [pauld]
chad, count
21:43:03 [chad]
Question: options for cr33, again
21:43:03 [chad]
Option 0: Status quo, but (possibly) clarify that "optional" makes no statement about what other addresses will work (4)
21:43:03 [chad]
Option 2: <wsaw:NewConnection> proposal (4)
21:43:03 [chad]
Option 3: Remove wsaw:anonymous and use policy (Katy's proposal) (6)
21:43:03 [chad]
14 voters: anish (2,0),David_Illsley (3),dhull (0,3),dorchard (2,0),gpilz (2),jeffm (2,0),Jonathan (3,0),marc (0),Paco (3),pauld (3,0),PaulKnight (3,0),plh (3,0,2),TonyR (0,3,2),uyalcina (0,2)
21:43:07 [chad]
Round 1: Count of first place rankings.
21:43:08 [chad]
Round 2: Tie when choosing candidate to eliminate.
21:43:10 [chad]
Tie at round 1 between 0, 2.
21:43:12 [chad]
Tie broken randomly.
21:43:14 [chad]
Eliminating candidate 0.
21:43:17 [chad]
Candidate 3 is elected.
21:43:18 [chad]
Winner is option 3 - Remove wsaw:anonymous and use policy (Katy's proposal)
21:44:20 [dorchard]
zakim, mute me
21:44:20 [Zakim]
DOrchard should now be muted
21:46:58 [dorchard]
zakim, unmute me
21:46:58 [Zakim]
DOrchard should no longer be muted
21:47:03 [Jonathan]
Option 3: 6
21:47:05 [Jonathan]
Option 2: 2
21:47:10 [Jonathan]
Option 0: 1
21:47:27 [anish]
q?
21:47:29 [anish]
q+
21:47:49 [bob]
BT:3
21:47:55 [bob]
MS3
21:48:01 [bob]
IBM: 3
21:48:07 [plh]
s/MS3/MS: 3/
21:48:11 [bob]
CA: 3
21:48:32 [bob]
BEA: 2
21:48:44 [bob]
NORTEL: 3
21:48:48 [dhull]
q+
21:49:12 [pauld]
feels people who voted for 3 also voted for 0
21:49:35 [marc]
SUN: 0
21:49:49 [bob]
TIBCO:3
21:49:51 [anish]
q+
21:49:59 [bob]
ack ani
21:50:05 [dhull]
q+ to understand the objection to 3
21:50:13 [bob]
Oracle: 2
21:50:20 [Zakim]
+JeffM
21:50:28 [Jonathan]
Anish: Option 0 doesn't solve the RM issue, but throws out the baby with the bathwater.
21:50:57 [Jonathan]
... Taking away wsaw:Anonymous after all the work we did loses the really important use case I discussed earlier.
21:51:01 [plh]
s/throws/doesn't throw/
21:51:07 [marc]
i think Anish said that option 3 throws out the baby with the bathwater
21:51:14 [dorchard]
ah
21:51:32 [Jonathan]
... Saying a service needs to provide a callback channel.
21:51:37 [bob]
q?
21:52:14 [Jonathan]
Paco: Don't like 0, because we now have to choose between WS-A and RM.
21:52:27 [Jonathan]
Anish: Provides feedback to the WS-RM WG that they need to rethink.
21:53:11 [anish]
yes, i meant option 3 throws the baby out with the bathwater, not option 0
21:53:17 [Jonathan]
Bob: Option 3 is a significant change, in the expectation of what information is carried in the WSDL. Option 0 continues to keep pressure on RM to figure out an alternative way to get the job done.
21:53:20 [anish]
thx marc, that's what i meant
21:53:57 [Jonathan]
Bob: Objections with the clear winner (3). We'll see if option 0 is something people can live with.
21:54:27 [Jonathan]
Marc: I'd like to point out that we already say we can use Anonoymous in policy.
21:54:37 [Jonathan]
Jonathan: wsaw:UsingAddressing can be used in policy.
21:54:43 [Jonathan]
Paco: Bad way to do anonymous.
21:55:05 [Jonathan]
DaveO: If we're saying wsaw:Anonymous doesn't compose with policy, we should go down that path.
21:55:33 [Jonathan]
Bob: Formal objections against Option 3?
21:55:37 [Jonathan]
DaveO: Yes
21:55:39 [Jonathan]
Anish: Yes
21:56:04 [Jonathan]
DaveH: But you might be OK if it were replaced with a policy marker instead.
21:56:04 [anish]
yes, jonathan, thx
21:56:15 [Jonathan]
DaveO: Yes, but fixing it and removing it are different.
21:57:16 [Jonathan]
Jonathan: Can live with Option 0 if we open a new issue about redesigning wsaw:Anonymous as policy assertions.
21:57:23 [dhull]
q+ to make a quick plug for WSN
21:57:41 [Jonathan]
Paco: Still has some problems with dealing with non-WS-A "special" URIs.
21:57:57 [Jonathan]
... Just recasting it as policy doesn't fully solve the problem.
21:58:02 [Jonathan]
Anish: Might be easier as a policy.
21:58:15 [Jonathan]
DaveO: Paco, you were pushing for 3 in preference for 2?
21:58:33 [anish]
jonathan had sometime back suggested that instead of one wsaw:Anonymous marker we should have 3 different markers
21:58:41 [anish]
which help from a policy POV
21:58:50 [Jonathan]
Paco: 2 will probably tell us we can solve RM's problem. We can live waiting until we solve this problem better, but first cast as a policy assertion, then with the "special" URI problem.
21:59:13 [Jonathan]
DaveO: We want something like this. Can you live with something like what RM has, cast it as a policy, and go from there?
22:00:51 [Jonathan]
Jonathan: Worried about trying to fix the "RM problem" as we cast into policy assertions. Still not sure we can't satisfy RM by tweaks there.
22:01:23 [Jonathan]
DaveO: Don't want to preclude tweaking to accomodate RM.
22:01:31 [Jonathan]
Paco: Not just an RM problem.
22:02:36 [Jonathan]
Jonathan: Postponing the debate on whether we change or RM does.
22:03:06 [Jonathan]
Bob: Put 33 on a backburner while we explore recasting as a policy assertion.
22:03:25 [Zakim]
-TonyR
22:03:29 [Jonathan]
Jonathan: What do we tell RM?
22:04:25 [Jonathan]
Bob: Any objections to leave 33 open while we entertain a new proposal working the policy assertion.
22:04:27 [Jonathan]
none
22:04:47 [dorchard]
zakim, mute me
22:04:47 [Zakim]
DOrchard should now be muted
22:04:54 [Jonathan]
Bob: Any takers to craft the proposal?
22:05:01 [Jonathan]
Anish & Paco volunteer.
22:05:50 [Jonathan]
DaveH: WS-Notification is being voted on as an OASIS standard. Needs 15% of membership to vote. If there is anyone who also is an OASIS member, please vote.
22:06:04 [Jonathan]
Bob: Thanks for the forebearance.
22:06:07 [pauld]
pauld has left #ws-addr
22:06:12 [Zakim]
-DOrchard
22:06:13 [Jonathan]
... Came further today.
22:06:18 [Zakim]
-[IBM]
22:06:19 [Jonathan]
ajourn
22:06:19 [Zakim]
-anish
22:06:20 [Zakim]
-JeffM
22:06:20 [Zakim]
-Paul_Downey
22:06:23 [Zakim]
-Gilbert_Pilz
22:06:24 [Zakim]
-David_Illsley
22:06:24 [Zakim]
-Jonathan_Marsh
22:06:25 [Zakim]
-Bob_Freund
22:06:25 [Zakim]
-Paul_Knight
22:06:27 [Zakim]
-David_Hull
22:06:32 [Jonathan]
RRSAgent, set log member
22:06:33 [TonyR]
s/ajourn/adjourn/
22:06:39 [Jonathan]
rrsagent, draft minutes
22:06:39 [RRSAgent]
I have made the request to generate
Jonathan
22:06:43 [bob]
zakim, make logs public
22:06:43 [Zakim]
I don't understand 'make logs public', bob
22:06:50 [Zakim]
-Plh
22:06:52 [bob]
rrsagent, make logs public
22:06:54 [Jonathan]
s/Joanthan/Jonathan/g
22:07:01 [plh]
zakim, make logs public-visible
22:07:01 [Zakim]
I don't understand 'make logs public-visible', plh
22:07:07 [plh]
rrsagent, make logs public-visible
22:07:13 [RRSAgent]
I have made the request to generate
plh
22:07:24 [bob]
zakim, give me patience
22:07:24 [Zakim]
I don't understand 'give me patience', bob
22:07:27 [plh]
zakim, drop marc
22:07:27 [Zakim]
Marc_Hadley is being disconnected
22:07:29 [Zakim]
WS_AddrWG()4:00PM has ended
22:07:30 [Zakim]
Attendees were Bob_Freund, Plh, Jonathan_Marsh, Paul_Downey, David_Illsley, +1.503.228.aaaa, prasad, Paul_Knight, anish, Gilbert_Pilz, Marc_Hadley, TonyR, yinleng, DOrchard,
22:07:32 [plh]
zakim, bye
22:07:32 [Zakim]
Zakim has left #ws-addr
22:07:33 [Zakim]
... David_Hull, Paco, GlenD, JeffM
22:07:43 [bob]
rrsagent, generate minutes
22:07:43 [RRSAgent]
I have made the request to generate
bob
22:07:48 [TonyR]
TonyR has left #ws-addr
22:08:05 [bob]
rrsagent, make logs public
22:08:15 [bob]
rrsagent, generate minutes
22:08:15 [RRSAgent]
I have made the request to generate
bob
22:08:33 [Jonathan]
thx bob, you can take it from here..
22:08:47 [bob]
Jonathan, thanks for scribing
22:11:53 [bob]
bob has left #ws-addr | http://www.w3.org/2006/09/25-ws-addr-irc | CC-MAIN-2017-30 | refinedweb | 6,746 | 69.62 |
portathemeportatheme
Small, opinionated theme bundler
InstallationInstallation
npm install portatheme
SetupSetup
A theme has these features:
- A set of layouts, which are Pug templates. Every theme must have at least one layout called
default.pug.
- Sass compilation using node-sass and Eyeglass.
- JavaScript compilation and bundling using Webpack with Babel support.
- Static asset copying.
A minimal theme is structured like this:
- theme/- assets/- js/- index.js- sass/- index.scss- templates/- default.pug
Let's break it down.
assetsincludes static assets like images and fonts.
jsincludes JavaScript required by your theme.
sassincludes Sass required by your theme.
templatesincludes Pug templates that will render pages.
Static assets placed at the root of the theme, such as a
robots.txt, will also be copied.
Only the
templates folder is required, and the only required file in that folder is
default.pug. All other features‐static files, Sass, and JavaScript—are optional.
UsageUsage
;// Initialize the theme with a file path// The theme can be in node_modules or a local folderconst catTheme = './cat-theme';// Set an output folder for pages and theme assetscatTheme;// Create a page by passing a file name and data objectcatTheme;// Compile theme assets (static files, Sass, and JavaScript)catTheme
InheritanceInheritance
A theme can inherit the layouts, assets, Sass, and JavaScript of another theme. Here's how it works:
- Any layout in a theme or its parents can be used. If a child theme has a layout with the same name as a parent theme, the child layout will be used.
- All static assets are combined together and copied to the final build folder.
- The child Sass files can import the parent theme's Sass using
@import '<parent theme>';, where
<parent theme>is the folder name of the parent theme.
- The child JavaScript files can import the parent theme's JavaScript using
require('<parent theme>');or
import '<parent theme>';.
Note that if a parent theme has Sass or JavaScript features, the child theme must import the parent theme's files for them to be compiled.
Here's what theme inheritance looks like:
;const catTheme = 'cat-theme';const kittenTheme = './lib/kitten-theme' catTheme;// Any layout in cat-theme/ or kitten-theme/ can be referencedkittenTheme;
If
cat-theme/ has a Sass codebase,
kitten-theme/ can import those files:
;
It works the same with JavaScript:
;
APIAPI
new Theme(location[, parent])new Theme(location[, parent])
Create a new theme.
- location (String): path to theme folder. Should be an absolute path, or relative to the current working directory.
- parent (Theme): Optional. Theme to inherit assets from.
Theme.outputTo(location)Theme.outputTo(location)
Define the output folder when building theme assets and pages. Call this before using other theme methods.
- location (String): path to output to.
Theme.compilePage(dest[, data, layout])Theme.compilePage(dest[, data, layout])
Create a page using one of the theme's layouts, paired with a data object.
- dest (String): name of file to output, e.g.
index.html. The path is relative to the root folder set with
Theme.outputTo().
- data (Object): object to pass to Pug template.
- layout (String): theme template to use. Defaults to
default.
Returns a Promise which resolves when the page has been written to disk, or rejects if there's an error.
Theme.compileString([data, layout])Theme.compileString([data, layout])
Create an HTML string using one of the theme's layouts, paired with a data object.
- data (Object): object to pass to Pug template.
- layout (String): theme template to use. Defaults to
default.
Returns an HTML string.
Theme.build()Theme.build()
Build the assets of a theme: static files, CSS, and JavaScript.
Returns a Promise which resolves when the build process is done, or rejects if there's an error.
Theme.buildAndWatch()Theme.buildAndWatch()
Build the assets of a theme, then watch for changes to theme files and rebuild when necessary.
Returns a Promise which rejects if there's an error. The Promise will never resolve, because file watching goes on indefinitely.
Local DevelopmentLocal Development
git clone portathemenpm installnpm test
LicenseLicense
MIT © Geoff Kimball | https://www.npmjs.com/package/portatheme | CC-MAIN-2022-21 | refinedweb | 664 | 59.9 |
Community TutorialsCOMMUNITY HOME SEARCH TUTORIALS
title: Ingress with NGINX controller on Google Kubernetes Engine description: Learn how to deploy the NGINX Ingress Controller on Google Kubernetes Engine using Helm. author: ameer00 tags: Google Kubernetes Engine, Kubernetes, Ingress, NGINX, NGINX Ingress Controller, Helm date_published: 2018-02-12
This guide explains how to deploy the NGINX Ingress Controller on Google Kubernetes Engine.. The second component is the Ingress Controller which acts upon the rules set by the Ingress Resource, typically via an HTTP or L7 load balancer. It is vital that both pieces are properly configured to route traffic from an outside client to a Kubernetes Service.
NGINX is a popular choice for an Ingress Controller for a variety of features:
- Websocket,.
- Support for JWTs (NGINX Plus only), which allows NGINX Plus to authenticate requests by validating JSON Web Tokens (JWTs).
The following diagram shows the architecture described above:
This tutorial illustrates how to set up a Deployment in Kubernetes with an Ingress Resource using NGINX as the Ingress Controller to route/load balance traffic from external clients to the Deployment. This tutorial explains how to accomplish the following:
- Create a Kubernetes Deployment
- Deploy NGINX Ingress Controller via Helm
- Set up an Ingress Resource object for the Deployment
Objectives
- Deploy a simple Kubernetes web application Deployment
- Deploy NGINX Ingress Controller using the stable helm chart
- Deploy an Ingress Resource for the application that uses NGINX Ingress as the controller
- Test NGINX Ingress functionality by accessing the Google Cloud L4 (TCP/UDP) Load Balancer frontend IP and ensure it can access the web application
Costs
This tutorial uses billable components of Cloud Platform, including:
- Kubernetes Engine
- Google Cloud Load Balancing
Use the Pricing Calculator to generate a cost estimate based on your projected usage.
Before you begin
Create or select a GCP project. GO TO THE PROJECTS PAGE
Enable billing for your project. ENABLE BILLING
Enable the Kubernetes Engine API. ENABLE APIs
Set up your environment
In this section you configure the infrastructure and identities required to complete the tutorial.
Create a Kubernetes Engine cluster using Cloud Shell
You can use Cloud Shell to complete this tutorial. To use Cloud Shell, perform the following steps: OPEN A NEW CLOUD SHELL SESSION
Set your project's default compute zone and create a cluster by running the following commands:
gcloud config set compute/zone us-central1-f gcloud container clusters create nginx-tutorial --num-nodes=2
Install Helm in Cloud Shell
If you already have Helm client and Tiller installed on your cluster, you can skip to the next section.
Helm is a tool that streamlines installing and managing Kubernetes applications and resources. Think of it like apt/yum/homebrew for Kubernetes. Use of helm charts is recommended since they are maintained and typically kept up-to-date by the Kubernetes community.
- Helm has two parts: a client (
helm) and a server (
tiller)
Tillerruns inside of your Kubernetes cluster, and manages releases (installations) of your helm charts.
Helmruns on your laptop, CI/CD, or in our case, the Cloud Shell.
You can install the
helm client in Cloud Shell using the following commands:
curl -o get_helm.sh chmod +x get_helm.sh ./get_helm.sh
The script above fetches the latest version of
helm client and installs it locally in Cloud Shell.
Downloading Preparing to install into /usr/local/bin helm installed into /usr/local/bin/helm Run 'helm init' to configure helm.
Installing Tiller with RBAC enabled
Starting with Kubernetes v1.8+, RBAC is enabled by default. Prior to installing
tiller we need to ensure we have the correct ServiceAccount and ClusterRoleBinding configured for the
tiller service. This allows
tiller to be able to install services in the
default namespace.
Run the following commands to install the server side
tiller to the Kubernetes cluster with RBAC enabled:
kubectl create serviceaccount --namespace kube-system tiller kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller helm init --service-account tiller
Installing Tiller with RBAC disabled
If you do not have RBAC enabled on your Kubernetes installation, you can simply run the following command to install
tiller on your cluster.
helm init
The output below confirms that Tiller is running.
Creating /home/ameer00/.helm Creating /home/ameer00/.helm/repository Creating /home/ameer00/.helm/repository/cache Creating /home/ameer00/.helm/repository/local Creating /home/ameer00/.helm/plugins Creating /home/ameer00/.helm/starters Creating /home/ameer00/.helm/cache/archive Creating /home/ameer00/.helm/repository/repositories.yaml Adding stable repo with URL: Adding local repo with URL: $HELM_HOME has been configured at /home/ameer00/.helm. Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster. Happy Helming!
You can also confirm that
tiller is running by checking for the
tiller_deploy Deployment in the
kube-system namespace. Run the following command:
kubectl get deployments -n kube-system
The output should have a
tiller_deploy Deployment as shown below:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE event-exporter-v0.1.7 1 1 1 1 13m heapster-v1.4.3 1 1 1 1 13m kube-dns 2 2 2 2 13m kube-dns-autoscaler 1 1 1 1 13m kubernetes-dashboard 1 1 1 1 13m l7-default-backend 1 1 1 1 13m tiller-deploy 1 1 1 1 4m
Deploy an application in Kubernetes Engine
You can deploy a simple web based application from the Google Cloud Repository. You use this application as the backend for the Ingress.
From the Cloud Shell, run the following command:
kubectl run hello-app --image=gcr.io/google-samples/hello-app:1.0 --port=8080
This gives the following output:
deployment "hello-app" created
Expose the
hello-app Deployment as a Service by running the following command:
kubectl expose deployment hello-app
This gives the following output:
service "hello-app" exposed
Deploying the NGINX Ingress Controller with Helm
Kubernetes platform allows for administrators to bring their own Ingress Controllers instead of using the cloud provider's built-in offering.
The NGINX controller, deployed as a Service, must be exposed for external
access. This is done using Service
type: LoadBalancer on the NGINX controller
service. On Kubernetes Engine, this creates a Google Cloud Network (TCP/IP) Load Balancer with NGINX
controller Service as a backend. Google Cloud also creates the appropriate
firewall rules within the Service's VPC to allow web HTTP(S) traffic to the load
balancer frontend IP address. Here is a basic flow of the NGINX ingress
solution on Kubernetes Engine.
NGINX Ingress Controller on Kubernetes Engine
Deploy NGINX Ingress Controller with RBAC enabled
If your Kubernetes cluster has RBAC enabled, from the Cloud Shell, deploy an NGINX controller Deployment and Service by running the following command:
helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true
Deploy NGINX Ingress Controller with RBAC disabled
If your Kubernetes cluster has RBAC disabled, from the Cloud Shell, deploy an NGINX controller Deployment and Service by running the following command:
helm install --name nginx-ingress stable/nginx-ingress
In the ouput under
RESOURCES, you should see the following:
==> v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ingress-controller LoadBalancer 10.7.248.226 pending 80:30890/TCP,443:30258/TCP 1s nginx-ingress-default-backend ClusterIP 10.7.245.75 none 80/TCP 1s
Wait a few moments while the GCP L4 Load Balancer gets deployed. Confirm that the
nginx-ingress-controller Service has been deployed and that you have an external IP address associated with the service. Run the following command:
kubectl get service nginx-ingress-controller
You should see the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ingress-controller LoadBalancer 10.7.248.226 35.226.162.176 80:30890/TCP,443:30258/TCP 3m
Notice the second service, nginx-ingress-default-backend. The default backend is a Service which handles all URL paths and hosts the NGINX controller doesn't understand (that is, all the requests that are not mapped with an Ingress Resource). The default backend exposes two URLs:
/healthzthat returns 200
/that returns 404
Configure Ingress Resource to use NGINX Ingress Controller
An Ingress Resource object is a collection of L7 rules for routing inbound traffic to Kubernetes Services. Multiple rules can be defined in one Ingress Resource or they can be split up into multiple Ingress Resource manifests. The Ingress Resource also determines which controller to utilize to serve traffic. This can be set with an annotation,
kubernetes.io/ingress.class, in the metadata section of the Ingress Resource. For the NGINX controller, use the value
nginx as shown below:
annotations: kubernetes.io/ingress.class: nginx
On Kubernetes Engine, if no annotation is defined under the metadata section, the
Ingress Resource uses the GCP GCLB L7 load balancer to serve traffic. This
method can also be forced by setting the annotation's value to
gceas shown below:
annotations: kubernetes.io/ingress.class: gce
Let's create a simple Ingress Resource YAML file which uses the NGINX Ingress Controller and has one path rule defined by typing the following commands:
touch ingress-resource.yaml nano ingress-resource.yaml
Copy the contents of ingress-resource.yaml
into the editor, then press
Ctrl-X, then press
y, then press
Enter to save
the file.
The
kind: Ingress dictates it is an Ingress Resource object. This Ingress Resource defines an inbound L7 rule for path
/hello to service
hello-app on port 8080.
From the Cloud Shell, run the following command:
kubectl apply -f ingress-resource.yaml
Verify that Ingress Resource has been created. Note that the IP address
for the Ingress Resource will not be defined right away (wait a few moments for the
ADDRESS field to get populated):
kubectl get ingress ingress-resource
You should see the following:
NAME HOSTS ADDRESS PORTS AGE ingress-resource * 80 `
Test Ingress and default backend
You should now be able to access the web application by going to the EXTERNAL-IP/hello
address of the NGINX ingress controller (from the output of the
kubectl get service nginx-ingress-controller above).
To check if the default-backend service is working properly, access any path (other than
the path
/hello defined in the Ingress Resource) and ensure you receive a 404
message. For example:
You should get the following message:
Clean up
From the Cloud Shell, run the following commands:
Delete the Ingress Resource object:
kubectl delete -f ingress-resource.yaml
You should see the following:
ingress "demo-ingress" deleted
Delete the NGINX Ingress helm chart:
helm del --purge nginx-ingress
You should see the following:
release "nginx-ingress" deleted
Delete the app:
kubectl delete service hello-app kubectl delete deployment hello-app
You should see the following:
service "hello-app" deleted deployment "hello-app" deleted
Delete the Kubernetes Engine cluster by running the following command:
gcloud container clusters delete nginx-tutorial
You should see the following:
The following clusters will be deleted. - [nginx-tutorial] in [us-central1-f] Do you want to continue (Y/n)? y Deleting cluster nginx-tutorial...done. Deleted [].
Delete the
ingress_resource.yamlfile by running the following command:
rm ingress-resource.yaml. | https://cloud.google.com/community/tutorials/nginx-ingress-gke | CC-MAIN-2019-04 | refinedweb | 1,845 | 51.18 |
This bug is missing log files that will aid in diagnosing the problem. >From a terminal window please run:
Advertising
apport-collect 1684481. -- You received this bug notification because you are a member of Kernel Packages, which is subscribed to linux in Ubuntu. Title: KVM guest execution start apparmor blocks on /dev/ptmx now (regression?) Status in apparmor package in Ubuntu: New Status in linux package in Ubuntu: Incomplete Status in lxd package in Ubuntu: New Bug description: Setup: - Xenial host - lxd guests with Trusty, Xenial, ... - add a LXD profile to allow kvm [3] (inspired by stgraber) - spawn KVM guests in the LXD guests using the different distro release versions - guests are based on the uvtool default template which has a serial console [4] Issue: - guest starting with serial device gets blocked by apparmor and killed on creation - This affects at least ppc64el and x86 (s390x has no serial concept that would match) - This appeared in our usual checks on -proposed releases so maybe we can/should stop something? Last good was "Apr 5, 2017 10:40:50 AM" first bad one "Apr 8, 2017 5:11:22 AM" Background: We use this setup for a while and it was working without a change on our end. Also the fact that it still works in the Trusty LXD makes it somewhat suspicious. Therefore I'd assume an SRUed change in LXD/Kernel/Apparmor might be the reason and open this bug to get your opinion on it. You can look into [1] and search for uvt-kvm create in it. Deny in dmesg: [652759.606218] audit: type=1400 audit(1492671353.134:4520): apparmor="DENIED" operation="open" namespace="root//lxd-testkvm-xenial-from_<var-lib-lxd>" profile="libvirt-668e21f1-fa55-4a30-b325-0ed5cfd55e5b" name="/dev/pts/ptmx" pid=27162 comm="qemu-system-ppc" requested_mask="wr" denied_mask="wr" fsuid=0 ouid=0 Qemu-log: 2017-04-20T06:55:53.139450Z qemu-system-ppc64: -chardev pty,id=charserial0: Failed to create PTY: No such file or directory There was a similar issue on qmeu namespacing (which we don't use on any of these releases) [2]. 
While we surely don't have the "same" issue the debugging on the namespacing might be worth as it could be related. Workaround for now: - drop serial section from guest xml [1]: [2]: [3]: [4]: To manage notifications about this bug go to: -- Mailing list: Post to : kernel-packages@lists.launchpad.net Unsubscribe : More help : | https://www.mail-archive.com/kernel-packages@lists.launchpad.net/msg229532.html | CC-MAIN-2017-17 | refinedweb | 408 | 59.13 |
Overview
Atlassian Sourcetree is a free Git and Mercurial client for Windows.
Atlassian Sourcetree is a free Git and Mercurial client for Mac.
Mosaik API for Java =================== This is an implementation of the mosaik API for simulators written in Java. It hides all the messaging and networking related stuff and provides a simple base class that you can implement. Setup ----- Clone or download this repository onto you computer. Install Apache Ant 1.9 or higher () and build a distribution:: $ cd mosaik-api-java $ ant As soon as Ant is finished, you can copy the *.jar* files from the *dist/* folder into your project. Testing ------- To run the test cases, you will need Apache Ant 1.9 or higher, Python 3 and the virtualenv package for your current Python environment installed. You can then, if you are working on Linux or OS X, run the tests via:: $ ./runtests.sh [-s] Or, if you are working on a Windows machine:: $ runtests.bat "path-to-Python-3-executable" [-s] This will build the API and the test simulators using Ant, create a new Python 3 virtualenv with all required packages installed, let Pytest run the test cases, and finish with some cleanup. You can optionally pass the parameter -s to activate Pytests console output for additional debug information. Documentation ------------- You can find general information about the API in mosaik's docs (). Also, all public classes and methods also have docstrings (there is no pre-built Java doc yet). The *tests/* directory contains an example simulator (ExampleSim) and an example control strategy (ExampleMAS) that may give you an idea of what to do and how things work. Support ------- If you need help or want to give feedback, you are welcome to post something to our mailing list (). You can also browse the archives (). | https://bitbucket.org/mosaik/mosaik-api-java | CC-MAIN-2018-26 | refinedweb | 299 | 64.61 |
This problem can be seen as a graph problem: the nodes represent the words,
and the edges are created only for those words at distance = 1 (where
distance here is computed as the number of different positions between
the words).
As I see it, there are two parts of this problem:
a) Ensuring the paths we find are the shortest
b) Ensuring that we compute the adjacency list in less than O(n^2)
The first part can be done using BFS, as that guarantees shortest paths for
undirected graphs like this. But the second part was a bit hard to guess,
but finally realized that we can do the following:
Save all words in a set for O(1) existence queries
Pre-compute all possible characters among words per position (all have same
size, per problem description)
For a given word, iterate over its k positions
For each position, iterate over its possible characters:
Form a new word by replacing character at current position
Test if word does exist, and it it does then we found an edge
The adjacency list was built in lazy-mode, just when we reached the word
behind the node.
I think worst-case complexity would be O(n * k * m), where
n = number of words
k = size of words
m = maximum number of chars observed among all words x positions
from collections import deque from sys import maxsize as maxint class Solution: def ladderLength(self, beginWord, endWord, wordList): wordList.append(beginWord) words = set(wordList) k = len(beginWord) chars = [set() for _ in range(k)] for w in words: for i in range(k): chars[i].add(w[i]) def neighbors(word): for j in range(k): for c in chars[j]: w = word[:j] + c + word[j + 1:] if w in words and w != word: yield w def search_bfs(start): min_dist = maxint queue = deque([(start, 1)]) visited = set([start]) while queue: word, d = queue.popleft() if word == endWord: min_dist = min(min_dist, d) elif d < min_dist: for w in neighbors(word): if w not in visited: visited.add(w) queue.append((w, d + 1)) return min_dist min_dist = search_bfs(beginWord) return 0 if min_dist == maxint else min_dist | https://discuss.leetcode.com/topic/117341/python3-solution-using-constrained-bfs | CC-MAIN-2018-09 | refinedweb | 362 | 51.45 |
SAP Has recently released its updated version of SAP Mobile Platform SMP 3, With this release SAP has done tremendous updates when compare with their old release 2.2, and SMP 3 even has it’s support for MBO based runtime, while it has lots of new features added.
This Session, covers how to consume the NW Gateway Demo system OData services, using SMP and build HTML5 Application using Sencha Touch with Advanced themes.
I’ve taken Sales Data example to cover during this app build process, and will show how to get the Sales orders, Sales Order Item details, and Products and Revenue. And we’ll cover 3D Graphics using Sencha Touch GPL Version graphs. to learn more about Sencha Touch visit
Curious about What we are about to build, here is the demo video in youtube or click here to see this Demo live in Action
With this working session, we will cover the following topics
- Create Hybrid and Native Applications in SMP 3, (I’ve used SMP Cloud version hosted on Cloud Share, you can get your copy here )
- Configure SMP App to communicate with Back-end System (I’ve used NW Demo system, you can get your copy here)
- Register device in SMP – Click Here
- Test your app using Google Chrome to get the listed services – Click Here
- Develop Front-end using Sencha Touch
- Compile Sencha App into Native apps using Cordova / PhoneGAP.
- Test your App in Physical Device
1. Creating Hybrid / Native Application in SMP
Login to your SMP 3 Admin UI, and create the application
you can use your own namespace for ID.
Then go to Back End Tab
Here is the End point for SAP NW Demo System ()
And now Click on New in Back-end-connections
Goto Authentication
enter the following information
Click Save and you should see the following
Now Switch over to Overview Section, and you will see your newly create application details on the left hand side panel. And make sure you app is Consistent.
2. Configure/Verify SMP App to communicate with Back-end System
Now let’s do the ping test for your app, to make sure it’s talking to Back end system
Now you are done with creating of your Hybrid app in SMP 3.
We will continue the Device Registration, Test the App, Application Monitoring here
We will continue Sencha Touch Coding and Cordova Native wrapper in next post.
[Removed by Moderator]
thanks for your great work,
actually am facing a problem when adding backend system, as i could understood from searching that i should add a certificate to smp keystore, please advice.
Regards,
Mohamed Mohsen
Mohamed Mohsen
Look for “(E). Importing ES1 HTTPS certificate into SMP server keystore” in this blog.
For any further queries, please create a new thread.
Regards,
JK
thanks Jitendra, the blog was very helpful .. I used keystore browser software to import the certificate, Thanks again 🙂
Regards,
Mohamed Mohsen | https://blogs.sap.com/2014/05/30/develop-hybrid-and-native-mobile-apps-using-smp-3-and-sencha-touch-cordova/ | CC-MAIN-2018-47 | refinedweb | 490 | 53.14 |
Categories
(Core :: DOM: Editor, defect, P2)
Tracking
mozilla1.2beta
People
(Reporter: jfriend, Assigned: mjudge)
References
Details
(Keywords: access, helpwanted, platform-parity, Whiteboard: [keybnd] EDITORBASE+ [QAHP] [ADT3 RTM] [ETA 06/04] [KEYBASE+])
Attachments
(4 files, 15 obsolete files)
(This bug imported from BugSplat, Netscape's internal bugsystem. It was known there as bug #53588 Imported into Bugzilla on 03/26/99 10:35) From the champs and it annoys me too. Using PgUp and PgDn keys while composing an HTML message scrolls the window, but doesn't move the cursor. The cursor remains where it was. Keyboard navigation in the editor is rendered fairly useless because you hit PgDn several times, then hit the down arrow and it takes you right back up to one line below where you were originally.
Sorry, this was one of the features that was cut from 4.0.
Remind as per 4/10 bug triage.
it was a nice feature. a very handy one if you had a long doc and needed to scroll, only to find that once you type after scrolling, you're back to where you started.
reopen this bug for 5.0
This should be working correctly on all platforms in 5.0:
Windows -- caret moves
Unix -- caret moves
Macintosh -- caret DOES NOT move
up priority to P1; this feature was completed for 5.0 but Mariner appears to have broken it.
QA Contact: 4079
Status: REOPENED → ASSIGNED
ok ok
Summary: PgUp/PgDn in editor don't move cursor (platform differences) → (feature)PgUp/PgDn in editor don't move cursor (platform differences)
Target Milestone: M10
set milestone M10
Summary: (feature)PgUp/PgDn in editor don't move cursor (platform differences) → (feature)PgUp/PgDn in editor don't move caret (platform differences)
fix summary
Target Milestone: M10 → M12
moving to m12
Whiteboard: post dogfood
Target Milestone: M12 → M15
changing milestone to after dogfood, setting to M15
Summary: (feature)PgUp/PgDn in editor don't move caret (platform differences) → [feature] PgUp/PgDn in editor don't move caret (platform differences)
Whiteboard: post dogfood → [beta f.c.]
feature clean-up work for beta
Summary: [feature] PgUp/PgDn in editor don't move caret (platform differences) → [BETA][feature] PgUp/PgDn in editor don't move caret (platform differences)
Whiteboard: [beta f.c.]
Target Milestone: M15 → M14
moving this up to m14
setting keyword
Putting on PDT- radar for beta1. Would not hold beta for this.
Whiteboard: [PDT-]
move to M15; remove beta1 keyword since this has PDT- designation
M16 for now
OS: Windows NT → All
Whiteboard: [PDT-]
Target Milestone: M15 → M16
updating keyword and status whiteboard to reflect that this is a beta 2 feature work bug that the Composer team deems a must fix for beta 2.
[nsbeta2+ until 5/16]
Whiteboard: Composer feature work → [nsbeta2+ until 5/16] Composer feature work
This is bad...beppe filled in leger...get scoop from her.
Putting on [nsbeta2-] radar. Removing [5/16]. Missed the Netscape 6 feature train. Please set to MFuture
Whiteboard: [nsbeta2+ until 5/16] Composer feature work → [nsbeta2-] Composer feature work
Whiteboard: [nsbeta2-]
adding help wanted to the keywords
Keywords: helpwanted
Keywords: access, mozilla0.9
Target Milestone: Future → ---
clearing milestone and also nominating for mozilla0.9.
Here are some comments from Mike about this bug. I think the idea is:
- to simulate down arrows until you get to the estimated location on the page, trapping for ending up on the same line to avoid infinite loops, and then
- to scroll the page so the caret shows up in approximately the same spot relative to the container, so the user doesn't have to look for the caret each time page down is hit.
The "estimated" location is the tough one -- you need the visual page size (assuming we are in a scroll view). Find the containing scroll view [from the presshell].
Assignee: mjudge → brade
Status: ASSIGNED → NEW
Target Milestone: --- → mozilla1.0
Page Up and Page Down *must not* move the caret on Mac OS. It's in italics in the HIGs.
Brade, this behavior is written in the Apple Human Interface guidelines and is the way Mac OS applications are supposed to work. The pgUp, pdDn, home, end, etc keys are supposed to change the window scroll, not the focus. The guidelines allow the arrow keys to change focus but not the pgUp, etc keys. Every Mac app I use works this way. You can change it for other platforms, but please don't change it on the Mac.
How do Mac users move the insertion point around if pgup, pgdn, home, and end only scroll?
They use a rodent :-( HIG wasn't designed for people with multiple fingers. corvus@mac.com please post a HIG url if you intend for someone who you believe isn't well versed in HIG to do something according to HIG.
Keywords: mozilla0.9
The HIG note re. extended keys is at I also notice the arrow keys are not allowed to move out of multi-line text fields, but the Tab key is. I am one Mac user whose "pgDn pgDn pgDn, then press the up and down arrow to go back to where I started" reflex is so ingrained that if the focus changed when I pressed pgDn I would go insane. I just tried this in 0.9 and the pgUp and pgDn keys didn't work at all. What gives?
Since I've got this bug, you can be sure that the Mac will follow HIG and not move the caret. :-) Mac users have other keybindings for moving the caret if desired (such as command-up arrow moving caret to the top of the window).
Priority: P1 → P2
Summary: [feature] PgUp/PgDn in editor don't move caret (platform differences) → PgUp/PgDn in editor don't move caret (platform differences)
Whiteboard: [keybnd]
f rr
move to 0.9.3 reassign to mjudge Mike--what needs to be done here is implementing: * nsPresShell::PageMove method ( nsPresShell.cpp#3058) and * nsTextInputSelectionImpl::PageMove method ( nsGfxTextControlFrame2.cpp#871)
Assignee: brade → mjudge
Target Milestone: mozilla1.0 → mozilla0.9.3
manager reviewed the need for the fix and agrees, approval for checkin to the trunk and branch after fix has received an r= and sr=, adding nsBranch keyword
This is really annoying, both html and non html mails Catfood to me
Note: shift + pageup/down fails to work as one would assume also :)
missed 0.9.3.
Target Milestone: mozilla0.9.3 → mozilla0.9.4
if Mike can't get this resolved this coming week, this will get reassigned to anthony
Whiteboard: [keybnd] → [keybnd] [nsBranch+]
-->anthonyd
Assignee: mjudge → anthonyd
Target Milestone: mozilla0.9.4 → mozilla0.9.5
-->kin
Assignee: anthonyd → kin
--> mjudge owns selection and selection/caret navigation
Assignee: kin → mjudge
If this is important to someone for 0.9.5, tell me now so I can find another owner for it. mjudge won't be back until 0.9.6.
I think this bug is frustrating and should have been fixed waaaaaaaaaay back (notice this bug was introduced in 1997!)
It's important (and frustrating) to me. With 17 dupes, 31 cc's, and 10 votes, it looks important to many others as well. Textareas need to move the cursor properly (windows/unix) with page up and down.
Whow.. I've never seen a bug this old.. how about fixing it? MozillaQu*st would just kick on this bug.. moving to M10.. to M12, "missed 0.9.3"... Sorry for the sarcasm, Mozilla is great.
IMO, any platform-specific UI behavior should be a preference. If Windows users like the Mac behaviour, let them use Mac behaviour, and vice versa. Also, some desktop environments for UNIX® systems (especially NeXT-derived systems) try to adhere to the Mac behaviours. Would it be too hard to make "PgUp and PgDn move the caret" a preference that defaults one way on Mac builds and the other way on Windows and UNIX builds?
*** Bug 100741 has been marked as a duplicate of this bug. ***
*** Bug 100876 has been marked as a duplicate of this bug. ***
*** Bug 101138 has been marked as a duplicate of this bug. ***
-> 0.9.6
Target Milestone: mozilla0.9.5 → mozilla0.9.6
Let's see... P2, Major, 0.9.6, 15 votes, 38 CCs, and 22 dups. nsbeta1 again? Maybe even nsCatFood? Sorry for the spam, but this bug is four and a half years old.
*** Bug 105070 has been marked as a duplicate of this bug. ***
This bug is four and a half year old, and that is the problem. I think the next bug to fix is this one, so do it please.
This bug is actually *much* older than 4 1/2 years. But that is irrelevant. *IF* this bug is important to you, you should do more than complain in this bug. In fact, I don't think the person who is going to fix this bug has read the whining in this bug (and I can't blame him). If you want this fixed, why don't you try to help? Here are some of my ideas: What does not help get
*** Bug 105401 has been marked as a duplicate of this bug. ***
This problem affects every component: the cursor position doesn't change when you press PageUp or PageDown. A side effect is that you cannot select multiple pages by pressing Shift with PgUp/PgDn. Worse, if you press Right at the end of the text in a TEXTAREA form field with a vertical scroll bar, the cursor jumps to the top of the text. Confirmed with 2001101909 & 2001102203 / Win2k.
*** Bug 100741 has been marked as a duplicate of this bug. ***
Yuki: part of your comment is covered by bug 82151, "Right arrow key at end of a TEXTAREA goes to the beginning".
*** Bug 106232 has been marked as a duplicate of this bug. ***
*** Bug 107361 has been marked as a duplicate of this bug. ***
*** Bug 108158 has been marked as a duplicate of this bug. ***
Whiteboard: [keybnd] → [keybnd] EDITORBASE [QAHP]
I searched on the single word "cursor" trying to find out if this bug existed and got "zaaro boogs", leading me to file yet another duplicate at bug 109935. I doubt most users associate the word "caret" with cursor schizophrenia.
Summary: PgUp/PgDn in editor don't move caret (platform differences) → PgUp/PgDn in editor don't move caret/cursor (platform differences)
0.9.6 is gone -> 0.9.7
Target Milestone: mozilla0.9.6 → mozilla0.9.7
*** Bug 111755 has been marked as a duplicate of this bug. ***
*** Bug 109935 has been marked as a duplicate of this bug. ***
*** Bug 112620 has been marked as a duplicate of this bug. ***
*** Bug 113065 has been marked as a duplicate of this bug. ***
Confirm comment #76: As point of clarification, this errant behavior also applies when filling in forms (e.g.) and composing text email.
might as well move it to 0.9.8 now.. :)
*** Bug 115653 has been marked as a duplicate of this bug. ***
*** Bug 117125 has been marked as a duplicate of this bug. ***
*** Bug 117755 has been marked as a duplicate of this bug. ***
*** Bug 120741 has been marked as a duplicate of this bug. ***
Target Milestone: mozilla0.9.7 → mozilla0.9.8
*** Bug 118799 has been marked as a duplicate of this bug. ***
Mozilla 0.9.8 is nearly out the door, and I don't see a patch yet. Suggest 0.9.9?
marking editorbase plus
Whiteboard: [keybnd] EDITORBASE [QAHP] → [keybnd] EDITORBASE+ [QAHP]
moving to 1.0
Status: NEW → ASSIGNED
Target Milestone: mozilla0.9.8 → mozilla1.0
*** Bug 123773 has been marked as a duplicate of this bug. ***
*** Bug 128231 has been marked as a duplicate of this bug. ***
*** Bug 128036 has been marked as a duplicate of this bug. ***
removing myself from the cc list
*** Bug 131028 has been marked as a duplicate of this bug. ***
editorbase-, nsbeta1- per triage
Why is 4xp missing from keywords? This old thing should have been history long ago.
This makes the composer crap for me. I hate going for the mouse.
I've looked at the sources but I don't think there is a correct, easy fix. I can think of some hacks, for example:
1. Get the XY coordinates of the caret.
2. Is it possible to scroll by page (at least some amount)?
3a. Yes. Scroll by page (this is done now).
4a. Put the caret at the same XY position as it was before.
3b. No. Put the caret at the beginning/end of the edited text.
This shouldn't be too hard to do IF it's possible to get the current XY coords of the caret.
Can someone with the power please add keyword 4xp? Netscape 4 had no such trouble. The presence of this bug is monumentally absurd.
4xp is not correct since Composer in 4.x (or any version prior) did not support page up/down with caret movement.
OS/2 2002032416 Wrong. I use [about:bldlevel: Netscape Communicator 4.6 was created on 17 Dec 2001 at 21:18:23: Support for 128 bit encryption and mail encryption] every day. PgUp/PgDn maintains cursor in the viewable area always. It works in 3.x. It works in 4.x. No excuse for it to not work in Mozilla. Definitely 4xp.
s/maintains cursor in the viewable area always./maintains cursor in the viewable area always while composing plain text email & news./ I've just checked 4.79 for windoze, and the cursor in maintained in the viewable area always while composing plain text email & news and using the PgUp/PgDn keys. I've just checked 4.77 for Caldera OpenLinux 3.1/KDE 2.1, and the PgUp/PgDn keys don't even scroll the text email composition window, much less move the cursor anywhere. (System not used much & prolly misconfigured).
I just tried 4.75 on linux (the version that comes with Redhat 7.1). Page up works, but does not move the caret. So it's not 4xp on linux, though apparently it was on Windows and OS/2. Tom: this shouldn't require anything to do with XY coordinates of the caret. What needs to be done is to change the selection commands (in the selection controller) so that (configurably, since it's different on different platforms) the selection is collapsed to some element of the document that's visible in the page after the scroll. (The caret follows the collapsed selection.) I'm not sure how you find out which part of the document is currently visible, but that might be evident from whatever code it's already using to scroll (I'm not familiar with the scrolling code).
Slackware Linux 8 Netscape(r) Communicator 4.77 Copyright (c) 1994-2001 Netscape Communications Corporation, All rights reserved. PgUp/Down work as expected: they page up or down exactly one screen also moving caret.
*** Bug 133450 has been marked as a duplicate of this bug. ***
Target Milestone: mozilla1.0 → mozilla1.1alpha
*** Bug 137949 has been marked as a duplicate of this bug. ***
*** Bug 138073 has been marked as a duplicate of this bug. ***
How can 1.0 release possibly be justified without a fix for this? This is ancient (over four years old), major, and 4xp. This, bug 118672, bug 118899, bug 137018, and bug 109935 amount to stoppers preventing upgrade from 4.x mailnews.
I can't count. s/four/five/. Sheesh.
Everyone knows this is a very, very old bug. Rather than complaining further, let's discuss the solution. nsTextInputSelectionImpl::PageMove() performs the PgUp/PgDn action. The sources contain code in nsTextInputSelectionImpl::PageMove() that is not currently used; that may be what Tom said in comment 107. (Is this bad code, Akkana?) That code calculates the position of the caret after the scroll and intends to collapse the selection on a frame at that position. Unfortunately, the code was never completed. I tried to complete it. The difficulty is in getting the specified frame from the position (a struct nsPoint). I thought of using nsGfxTextControlFrame2::GetFrameForPoint(), but I couldn't call it from within nsTextInputSelectionImpl::PageMove() because I couldn't refer to nsGfxTextControlFrame2 instances there. Does anyone have good ideas?
What does GetPrimaryFrameFor() on mLimiter give you?
> (Is this bad codes, Akkana?) I'm not familiar with this code at all. Looks like brade wrote the current partial implementation, though Mike is really the expert on how this stuff works.
*** Bug 130380 has been marked as a duplicate of this bug. ***
Target Milestone: mozilla1.1alpha → mozilla1.0
I was inspired by Boris's idea and wrote this patch. It is not a complete version, but it works. I'd like a specification for the behavior: where should the caret appear after PgUp/PgDn is hit? At the top of the area (as Netscape 4.77/UNIX does) or at the bottom? Emacs uses the bottom for PgUp and the top for PgDn; this patch works as Emacs does. Is there any specification? I also need some help: this patch uses nsPresShell::ScrollFrameIntoView(), but that method has difficulty scrolling horizontally to bring the caret into the view area. Any alternatives or ideas?
Could you possibly use scrollSelectionIntoView() ? Does that work for a collapsed selection (ie the caret)?
Cursor should fall in the same relative vertical position in the viewspace as it was before the PgDn/PgUp keys, unless the end of space is reached. In that case, PgUp should leave cursor at 0,0, and PgDn, at 0,last.
I suggest going with the emacs approach myself.... but then, I'm used to it. :)
People who know little or nothing about *nix might like to know what the emacs appoach is, like me.
See comment 123
> Cursor should fall in the same relative vertical position in the viewspace as it > was before the PgDn/PgUp keys, unless the end of space is reached. In that case, > PgUp should leave cursor at 0,0, and PgDn, at 0,last. And it should also keep the same horizontal offset. For lines of size >= the previous line, h should remain unchanged. Since Moz text buffer lines wrap to the next when h pos > current line length, you should only trim h to the line length if the new line length is < the old line length. I.e., if you have thousands of lines all of length 5 and your cursor is at 4, pressing pgdown should never move it to h pos 0. If one of the lines we land on while going down has length 3, we should then be at h pos 3. Sound good? This reflects the vast majority of text editors people use on Windows, Mac OS, and things like pico, cooledit, gnotepad, gedit, the KDE text editor, and many other easy-to-learn editors (emacs or vi options are, IMO, useful to a much smaller audience and should only be available as a non-default option).
Looks to me like an incomplete explanation.
Excellent editor behavior is one of several reasons why I continue to use Netscape 3 (three, not 4) for most email.
> Looks to me like an incomplete explanation. What is this in relation to? If it's to comment 129, please understand that I'm just specifiying the behaviour of the cursor's x position on a line when pgup/down adjustments are applied to the location. A complete description of moving the cursor in all cases in a general editor would require several pages of text, and would be redundant. As for using NS3, no CSS support, no XHTML support -- no thanks :p
Thanks, Boris. ScrollSelectionIntoView() works fine. And I fixed the behavior as comment #125. This patch still has the Emacs-like behavior; for me, it is rather satisfying. For the wrap problem of comment #129, we probably need to know nsSelection::mDesiredX, but there is no way to access it because it is private. Does anyone have any ideas?
Attachment #81290 - Attachment is obsolete: true
I tried nsPresShell::PageMove() the same way. This patch works for Composer and the text/HTML mail composer. It does a good job in most cases, but there are still some bugs (in some cases the caret goes away; in some cases it forgets the horizontal position).
Oops, I misspoke. > And I fixed the behavior as comment #125. Read this as comment #129. What I meant to say is that it tries to keep the horizontal offset.
Comment on attachment 81493 [details] [diff] [review] Proposed Patch for Textarea v1 Let's get this into the trunk. I would remove the #ifdef MAC since the keybindings will take care of calling (or not calling) this code.
Attachment #81493 - Flags: review+
Comment on attachment 81496 [details] [diff] [review] experimental patch for nsPresShell Again, aside from the #ifdef this looks good. Let's give it a try on the trunk.
Attachment #81496 - Flags: review+
Agreed about the ifdefs. This desired behavior is the whole reason we have separate methods for pageMove (which moves the caret) vs scrollPage (which doesn't). The key bindings should call the appropriate method, so there should be no need for platform ifdefs in the C++ code. Currently it looks like both the mac and the global bindings call cmd_scrollPageUp/Down, so one of them probably needs to change to call pageMove.
OK. I removed #ifdef.
Attachment #81493 - Attachment is obsolete: true
For the patch for nsPresShell, I removed #ifdef and debug codes. This patch sometimes behave wrong as said above. Is it ok?
Attachment #81496 - Attachment is obsolete: true
*** Bug 141804 has been marked as a duplicate of this bug. ***
I am testing the latest patches in my Mac build. In Composer, I don't see it page down and move the caret always when I press alt-down arrow. Pressing the down arrow does move it down further. Maybe it's just some table oddity? I'm testing by editing: Alt-up and alt-down don't seem to work in textareas; I'll investigate that (probably a missing keybinding?)
OK, in both patches, I see the problem that shift doesn't work to change selection; is that a known issue? Can these patches be revised to handle that or should we reopen one of the duplicate bugs to this bug which deals with shift issue specifically? I have a patch for the Mac keybindings which should be included with the other patches. I will attach shortly.
Wait, I'm confused about what we're doing here. If we're going to check in a patch, can we make sure we're fixing it right rather than making a bigger mess for someone to clean up later? It's great that we have a patch and we can finally fix this, but I think we should take a little time to make sure it's the right patch before checking in.
What do we have, and what do we want to have? As I understand it, there are two sets of commands, cmd_movePageUp/Down vs. cmd_scrollPageUp/Down. Currently (someone please correct me if this is wrong):
- XP bindings use "scroll" for editor, nothing for textareas.
- Win platform bindings use move for textareas and for editor.
- Mac platform bindings use scroll for textareas and for editor.
- Linux platform bindings use move for textareas and for editor.
- The implementations for both scroll and move do a scroll but do not move the caret.
What we should have (again, correct me if I have this wrong):
- The implementation for move should move the caret; the implementation for scroll should not. (No ifdefs needed; there are two separate routines.)
- XP bindings: use move for textareas and editor.
- Mac platform bindings: use scroll for textareas and editor.
- Unix/Win platform bindings: don't list a binding; default to the XP bindings.
*** Bug 141913 has been marked as a duplicate of this bug. ***
Bug 58332 (listed as being blocked by this bug) is for the shifted version of this functionality (using aExtend parameter in the current patches). Akkana--You are correct about the various commands and their keybindings. I don't think the current patches have any platform ifdefs. This bug covers the core functionality for moving the caret in "page" chunks. I attached the Mac keybinding fix here since it should land with the fixes for this bug. I haven't checked any of the other keybindings yet; those may need changes also. Akkana--could you r= it? Is everyone ok with this being checked in without dealing with the shift (extend) variation working? skamio--would it be possible for you to easily fix the extending issue too (while you're already modifying the code)?
Bug 58332 (the shifted version) is marked future, but to me feels like the same useability issue and should be given the same priority. just my $0.02.
I can't but heartily agree with you. The shifted version (bug #58332) is in my opinion tightly connected to this one (#4302) and both should be resolved at once. If possible, please fix them both for 1.0.
Kathy and I went through this -- she'd prefer keeping the current overrides in all of the platformHTMLBindings files, and remove the lines in the xp file, rather than correcting the xp bindings to do move and removing the linux/win platform overrides. I'm willing to go with that. Here's a patch which removes the existing xp bindings -- they were to the wrong routine, they were only there for editor and not textareas, and they were overridden on all three platforms anyway.
Comment on attachment 82103 [details] [diff] [review] patch to fix mac keybindings for textareas r=akkana on Kathy's mac key bindings.
Attachment #82103 - Flags: review+
One comment on the current implementations: I notice that pageMove acts like Windows scrolling, in other words: if the caret is at the top of the page and I hit page down, it moves the caret to the bottom but doesn't actually page down. On Unix, my expectation is that hitting page down will actually page down (and leave the caret either at the top of the screen or at approximately the same position it was on the previous page). I don't have any problem with this patch going in as is, but if it does, I'd like to either file a new bug or keep this one open to make it possible to make pageMove scroll instead of just moving the caret the first time. (Post 1.0 would be fine for that -- it's not that important, unless there's a quick fix we could put in now.) Re extending selection: neither windows nor unix has bindings for extending the selection with page up/down keys. They probably should, when extending the selection gets addressed.
Comment on attachment 82234 [details] [diff] [review] Patch to remove the unused and incorrect xp bindings r=brade
Attachment #82234 - Flags: review+
Time to mention relative vertical positions, I guess :) Unless the remaining text is smaller than the scroll amount, move the scrolled text by that amount. The cursor should stay at the same relative y offset (i.e., if I have a 4-page document, I'm on line 2, and the page scroll amount is 25, then after page down my "new" caret's absolute position is 27 but its relative position is still 2). In cases where the remaining text is smaller than a page scroll, jump the caret to the end/start of the text. This behavior is once again the most common and (IMO) most logical handling, because it lessens the amount of mouse/arrow work needed to get to a specific position. Plus everyone has used it ;)
I fixed the extending issue, and implemented the behavior of keeping the caret's relative position in the view (as Netscape 4.77/UNIX does; I had misunderstood this before :-)
I fixed the extending issue here, too, and implemented the behavior of keeping the caret's relative position. This patch forgets the horizontal offset in the plaintext mail composer when the caret hits the top or bottom of the document. This is probably because containerHeight includes some margin, so the calculation in this patch gets a wrong value.
Attachment #81817 - Attachment is obsolete: true
This patch fixes the keybinding for the extending issue. If you try two patches above, use this, too.
Comment on attachment 82480 [details] [diff] [review] Patch to fix SHIFT+PgDp/PgDn keybinding for PC and UNIX r=akkana on the linux/windows bindings. The new patches for textareas and pres shell work beautifully on linux, both shifted and unshifted. Thanks, Shotaro! Kathy, have you had a chance to try them on Mac?
Attachment #82480 - Flags: review+
Does the proposed patch fix this problem for caret browsing as well? It should.
What did you mean by "this problem for caret browsing", dmitry? I implemented the caret behavior mentioned in comment #152 and comment #154. Or you meant other problem?
What he means is the following: if you hit "F7" that will toggle on "browse-with-caret" mode, which puts a visible caret (similar to the one in composer) in the browser's content area. The question is whether pagedown/pageup move this caret with the patches in this bug. I suspect the nsPresShell changes handle that case...
Yes, what I was referring to is the f7 "caret browsing." It also has another related problem which I filed today as bug 144000.
Oh, I see. (I hadn't known about "caret browsing" until now...) I tested, and my patch moves the visible caret when I hit PgUp/PgDn. I think you'll love it :-)
*** Bug 144324 has been marked as a duplicate of this bug. ***
Talked to Kin about sr'ing the old patch; he has some reservations about the size and reuse. I factored out some other parts of PresShell to trim down the nsPresShell patch. We also needed to factor SCROLL_PERCENTAGE into the scroll: when you scroll up or down you actually only move 90%. Also, it is OK to let the number go out of bounds and let GetContentOffsets bind it to the view; no need to check for out of bounds there. The next patch is required to move SCROLL_PERCENTAGE to a place where outside code can see it. I will see if I can factor this code further to allow text views to reuse the same code.
Attachment #82478 - Attachment is obsolete: true
see above comment for why we need this patch as well.
*** Bug 144591 has been marked as a duplicate of this bug. ***
This reuses most of the code by passing arguments to CommonPageMove in nsPresShell. There's a small change to nsIPresShell to hold this method, but it's for the good cause of code reuse. The scroll view change is just to move a #define (I'm open to a future bug about using an enum or something else). Basically the same as the original patch with some extra stuff taken out and some aggressive reuse to appease the great Kin ;)
Attachment #82475 - Attachment is obsolete: true
Attachment #83599 - Attachment is obsolete: true
Attachment #83601 - Attachment is obsolete: true
*** Bug 145983 has been marked as a duplicate of this bug. ***
*** Bug 146713 has been marked as a duplicate of this bug. ***
*** Bug 147379 has been marked as a duplicate of this bug. ***
It turns out that my problem starting mozilla was not a problem in that day's build; it's a problem with the patch. When I apply the patch to a working build, then build in xpfe and layout, mozilla -edit no longer brings up a window. It gets through all the usual registering of libraries and nsKeyboard complaints and other warnings, then simply exits with status 11 without ever showing a window or printing an abnormal error message. The last few lines on stdout are: LoadPlugin() /builds/vanilla/mozilla/modules/plugin/samples/default/unix/libnullplugin.so returned 81ee090 GetMIMEDescription() returned "*:.*:All types" WEBSHELL+ = 2 bad FastLoad file version warning: property switchskins already exists warning: property switchskinstitle already exists Note: verifyreflow is disabled Note: styleverifytree is disabled warning: property cp1256 already exists Note: frameverifytree is disabled WARNING: charset = ISO, file nsFontMetricsGTK.cpp, line 1995
hmm let me clarify this. akkana had not built her whole tree so the vtable offsets were wrong for some of her dlls so the crash would happen. after she rebuilt the whole tree she was able to run the build. I am still waiting on her comments for the actual patch now that her tree seems to be wurkin.
So what's the current status of the patch? Can somebody r= and sr= it?
Comment on attachment 83688 [details] [diff] [review] total patch for nsPresShell, gfxtextframe and the small view change >+nsIFrame * >+PresShell::GetPrimaryFrameFromTag(const nsAString& aTagname) Don't we already have methods to get the document root? For instance, I see PresShell::GetRootFrame -- does that get something different from the frame for the body node? And nsIDocument has GetRootContent. Seems like it might be better to use one of these rather than reinventing the wheel. >+ if (NS_COMFALSE == result) //NS_COMFALSE should ALSO break >+ break; >+ if (NS_OK != result || !pos.mResultFrame ) >+ return result?result:NS_ERROR_FAILURE; NS_COMFALSE is deprecated, according to the comment in nsError.h. I know you didn't introduce it in this patch, but can we fix it as long as we're here? Or is this really what you want to do? If so, please add a comment explaining it so people encountering the obsolete NS_COMFALSE will know why it's there. >Index: view/public/nsIScrollableView.h >=================================================================== >+// the percentage of the page that is scrolled on a page up or down >+#define PAGE_SCROLL_PERCENT 0.9 If this has to be in an interface file, might it be better to make it an enum? Otherwise, looks okay and seems to work as expected.
I Didn't get all the way through the diff yet, hopefully I can get some time to complete it next week. Here are my comments so far: --- This comment is very hard to read/understand: + /** + * Scrolling then moving caret placement code in common to text areas and + * content areas should be located in the implementer + * This method will accept the following parameters and perform the scroll + * and caret movement. It remains for the caller to call the final + * ScrollCaretIntoView if that called wants to be sure the caret is always + * visible. --- The code in CommonPageMove() seems to assume that the caret's closest view and the view for the document are the same. I don't think that's a correct assumption since that could throw off the coordinates used in finding where the caret should be placed after the scroll. --- PageMove() calls GetPrimaryFrameFromTag() with "body", does that imply that this feature isn't supposed to work with XML documents. Remember that there is "Browse With Caret" feature in the browser for accessibility that turns on the caret.
bumping off to 1.2
Target Milestone: mozilla1.0 → mozilla1.2beta
*** Bug 153634 has been marked as a duplicate of this bug. ***
Mike, where does this bug stand?
Whiteboard: [keybnd] EDITORBASE- [QAHP] [ADT3 RTM] [ETA 06/04] → [keybnd] EDITORBASE- [QAHP] [ADT3 RTM] [ETA 06/04] [KEYBASE+]
Comment on attachment 83688 [details] [diff] [review] total patch for nsPresShell, gfxtextframe and the small view change Ok, here's the rest of my comments. I think the patch still needs some work: --- Correct the "bahavior" typo: -#if XXXXX_WORK_TO_BE_COMPLETED_XXXXX - // XXX: this code needs to be finished or rewritten // expected bahavior for PageMove is to scroll AND move the caret - nsIScrollableView *scrollableView; --- It bugs me that we have to know the frame structure to get to the Div's area frame in the nsGfxTextControlFrame2.cpp code. Can't we just call GetScrolledView() on the scrollable view and call GetClientData() on it which might actually return the frame we are interested in? If that really does work, then we can avoid having to know about "body" and "div", and the fact that the doc might not even be HTML in both the presShell code and the nsGfxTextControlFrame2 code. I'm also worried about the perf impact of calling doc->GetElementsByTagName() ... does it crawl the entire content tree looking for all elements that could be called "body" or is it optimized to know there is only one body? Here's the code I was referring to that knows the frame hiearchy for the div: + //get the scroll port frame from the gfxscroll frame + if (NS_FAILED(result = frame->FirstChild(context, nsnull, &frame))) + return result; + if (!frame) + return NS_ERROR_UNEXPECTED; + //get the area frame (what we really want) from the scroll port. + if (NS_FAILED(result = frame->FirstChild(context, nsnull, &frame))) + return result; + if (!frame) + return NS_ERROR_UNEXPECTED; --- By the way, in my previous review comments I mentioned that the caret could be in a frame that is in a view that is not the same as the scrolled view. An example of this could be a div with an overflow property that was specified, for example: <div style="overflow: none">a<br>b<br>c<br>d<br>e<br>f<br></div>
Attachment #83688 - Flags: needs-work+
> does it crawl the entire content tree looking for all elements that could be > called "body" At the moment, yes. In a few days it will no longer do so (I have a patch that fixes that and it should land soon). _However_ the code does GetLength() on the list, and at that point we will have to crawl the whole document. We should remove the GetLength() check and just null-check the node Item() returns -- if it's null, we return nsnull.
*** Bug 154868 has been marked as a duplicate of this bug. ***
*** Bug 155810 has been marked as a duplicate of this bug. ***
*** Bug 156469 has been marked as a duplicate of this bug. ***
How will the fix interfere with browser search? At least, bug 156519 will become more important ...
Putting this back on the EDITORBASE+ list based on user feedback. This needs to get fixed.
Whiteboard: [keybnd] EDITORBASE- [QAHP] [ADT3 RTM] [ETA 06/04] [KEYBASE+] → [keybnd] EDITORBASE+ [QAHP] [ADT3 RTM] [ETA 06/04] [KEYBASE+]
when this is fixed, please try scenario in dup bug 147379.
ok so i removed the getlength call on the node list and instead just get the first element on the list and check it for null. I also replaced the predetermined frame structure assumptions and replaced them with this: -- **&frame))) { frame = nsnull; } } } So far so good.
this should fix the comments kin had
spiff, are we ready for r/sr?
at a glance, in PresShell::CommonPageMove, it looks like beginFrameContent is not initialized; is that a problem?
Comment on attachment 92604 [details] [diff] [review] new patch with fixes The issues in comment 176 still need to be addressed. And the typo mentioned in comment 180 is actually in 2 places in this new patch.
Attachment #92604 - Flags: needs-work+
ignore nsHRFrame and nsFrame.cpp in diff. those are for bug 159207.
Attachment #83688 - Attachment is obsolete: true
Attachment #92604 - Attachment is obsolete: true
Comment on attachment 93439 [details] [diff] [review] big patch with all in it. ============= = nsCaret.cpp ============= -- In GetCaretCoordinates() you need to remove the setting of outView immediately after the call to GetViewForRendering() or it will overwrite the value returned by GetViewForRendering(): - GetViewForRendering(theFrame, aRelativeToType, viewOffset, clipRect, drawingView); + GetViewForRendering(theFrame, aRelativeToType, viewOffset, clipRect, &drawingView, outView); if (!drawingView) return NS_ERROR_UNEXPECTED; - + if (outView) + *outView = drawingView; // ramp up to make a rendering context for measuring text. // First, we get the pres context ... -- You'll get brownie points if you clean up the arg naming for GetCaretCoordinates(). :-) ================ = nsIPresShell.h ================ -- I just noticed that you're adding CommonPageMove() to nsIPresShell.h. It should go in nsISelectionController.idl with the other navigaion methods shouldn't it? ====================== = nsGfxScrollFrame.cpp ====================== -- In nsGfxScrollFrame.cpp you declare and initialize point: + nsPoint point; + nsPoint currentPoint; + point.x = 0; + point.y = 0; + //we need to translate the coordinates to the inner several lines above it's first use, which just resets the value: + point = aPoint; + while (view != innerView && innerView) So why not just delay the declaration till the point they are actually used and just initialize it like this: + nsPoint point(aPoint); + nsPoint currentPoint; + + while (view != innerView && innerView) -- I know I was sitting with you when you made this change: + result = GetClosestViewForFrame(aCX, frame, &innerView); + if (NS_FAILED(result)) + innerView = nsnull; I'm thinking now that we should just return an error in the failed case to be consistent with the GetClosestViewForFrame() call before this one. 
-- You can remove this: + +#if 1 +#endif ================= = nsPresShell.cpp ================= -- In nsPresShell.cpp, CommonPageMove() in the class declaration should be grouped with the other nsISelectionController method declarations. + NS_IMETHOD CommonPageMove(PRBool aForward, PRBool aExtend, nsIScrollableView *aScrollableView, nsIFrame *aMainFrame,nsIFrameSelection *aFrameSel); -- As mentioned in prior review comments, you need to get rid of GetPrimaryFrameFromTag() and all it's uses in your new code. Relying on it to give you back the frame for "body" means the code that uses it won't work for XML pages. Replace it with a utility method like GetTopLevelScrolledViewFrame() which returns the clientdata of the top-level scrolled view ... similar to what you do in nsGfxTextControlFrame2.cpp diffs. -- In CommonPageMove() don't be afraid to use vertical white space! :-) -- "bahavior" is still misspelled. + return NS_ERROR_NULL_POINTER; + // expected bahavior for PageMove is to scroll AND move the caret + // and remain relative position of the caret in view. see Bug 4302. 
-- Checking for failure is not necessary here since you are returning anyways: + result = aFrameSel->HandleClick(content, startOffset, startOffset, aExtend, PR_FALSE, PR_TRUE); + if (NS_FAILED(result)) + return result; return result; -- PageMove() already gets the scrollable view, it should just get the frame from it's scrolled view instead of calling: + nsIFrame *frame = GetPrimaryFrameFromTag(NS_LITERAL_STRING("body")); -- CompleteMove() should also use the root scrolled view's frame instead of: + nsIFrame *frame = GetPrimaryFrameFromTag(NS_LITERAL_STRING("body")); ============================ = nsGfxTextControlFrame2.cpp ============================ -- "bahavior" is still misspelled: -#if XXXXX_WORK_TO_BE_COMPLETED_XXXXX - // XXX: this code needs to be finished or rewritten // expected bahavior for PageMove is to scroll AND move the caret - nsIScrollableView *scrollableView; -- I think the word "to" is missing in this comment: + // and remain relative position of the caret in view. see Bug 4302.
Attachment #93439 - Flags: needs-work+
ok besides nsFrame and nsHRFrame which should be ignored in this patch, I think this is the mother of all patches. this should address all the issues above. bahavior isnt in the tree at all as far as I can tell. I moved CommonPageMove to nsSelection and removed it from nsIPresShell. I have added some whitespace to CommonPageMove in my local tree that is not in this patch so you will just have to take my word for it.
Attachment #93439 - Attachment is obsolete: true
mjudge: // expected bahavior for PageMove is to scroll AND move the caret from your patch. you didn't touch that line, but you can fix it while you're at it. it's in layout/html/forms/src/nsGfxTextControlFrame2.cpp near line 872
yes, please fix the typo/misspelling for "bahavior" in that comment. :-)
son of a. i dont understand its in my tree. maybe i posted the wrong patch. I will run it again. trust me I ran a search of all files for this misspelling.
I am pulling new trees to split up my bugs. the last patch has 2 other bugs in it and its getting confusing. This will probably take until friday to get done.
last patch had included some other bugs along with it. I have made a new tree with just this bug included. I believe that this will allow more easy reviews by the reviewers.
Attachment #93620 - Attachment is obsolete: true
needs a review from someone....
> needs a review from someon What kind of review are you looking for ? Ajay
*** Bug 161988 has been marked as a duplicate of this bug. ***
patch will no longer work i guess. it needs to be updated. I will try to post yet another patch.
Comment on attachment 93767 [details] [diff] [review] total patch without other bugs included r = jfrancis; Tried to concentrate on big picture rather than line by line stuff... nsSelection.cpp ================================================ CommonPageMove(): null check for aFrameSel I see that we dont care if selection is collapsed or not... assuming thats ok. ScrollByPages() does not scroll the PAGE_SCROLL_PERCENT, it scrolls a whole page. So the HandleClick() call may be on content that is not scrolled into view. What happens then? nsCaret.cpp ==================================================== This looks ok though I dont pretend to understand the view stuff nsGfxScrollFrame.cpp =========================================== "while (view != innerView && innerView)" - oh sure, make me look up operator precendence you big wanker. btw, I didnt even realize this source file was stil part of build. nsPresShell.cpp ================================================ PageMove(): I see this calls ScrollSelectionIntoView() after CommonPageMove(). I'm worried that the PAGE_SCROLL_PERCENT dealio is going to make us rely on ScrollSelectionIntoView() for some page downs but not for others, depending on how close to the top(bottom) of the page the caret was. will this lead to different behaviors, like getting the view scrolled with the caret centered sometimes, for example? nsGfxScrollFrame2.cpp ========================================== Looks good, though same concerns apply as to when ScrollSelectionIntoView is needed. nsTextControlFrame.cpp ========================================= Ditto. By the way, is there any way to share common code between nsGfxScrollFrame2.cpp & nsGfxScrollFrame2.cpp?
Attachment #93767 - Flags: review+
uhh, last line shoulda been: "nsGfxScrollFrame2.cpp & nsTextControlFrame.cpp?"
ok, so I meant nsGfxTextControlFrame, instead of nsGfxScrollFrame. What's in a name, anyway? I spoke with Kin about my concerns. He pointed out that I was looking at the wrong ScrollByPages() function, and the correct one does use the page scaling constant. So no worries there. And the two sets of changes I asked to be made common are in fact in the same file, which aparently was renamed. So I don't have any issues with this patch other than the perhaps unneeded call to ScrollContentIntoView(), and a missing null check for aFrameSel.
posting new patch with updated tree in it. carrying over the r=
new patch with new make system minus nsGfxTextControlFrame. Havent put in jfrancis's mods yet. I will do so today. I wont post new patch with changes in them unless requested.
Attachment #93767 - Attachment is obsolete: true
Comment on attachment 95309 [details] [diff] [review] patch from /mozilla carrying over the r=
Attachment #95309 - Flags: review+
Comment on attachment 95309 [details] [diff] [review] patch from /mozilla -- I still have trouble parsing this first sentence in this comment, is it possible to reword it? + /** + * Scrolling then moving caret placement code in common to text areas and + * content areas should be located in the implementer -- For CommonPageMove(), there seems to be an implicit assumption that aMainFrame *is* the frame associated with the scrolled view inside aScrollableView. + * @param aScrollableView the view that needs the scrolling + * + * @param aMainFrame the contained parent frame inside the scrollable view. + * If you moved the code that fetched the frame into CommonPageMove() then you could remove the code that does the same thing in PresShell::PageMove() and nsTextInputSelectionImpl::PageMove(). Also, why the 2 different methods of transforming clientData into an nsIFrame? One does a straight cast: + nsIView *scrolledView; + result = scrollableView->GetScrolledView(scrolledView); + // get a frame + void *clientData; + scrolledView->GetClientData(clientData); + nsIFrame *frame = (nsIFrame *)clientData; and the other uses QI: + **)&fram e))) { + frame = nsnull; + } + } + } Address these 2 items, and jfrancis' concerns and you got an sr=kin@netscape.com -- I'm assuming the diffs for the following files are not part of the fix for bug 4302 so I didn't review them: Index: directory/c-sdk/config/Makefile Index: directory/c-sdk/ldap/clients/tools/Makefile Index: directory/c-sdk/ldap/libraries/libiutil/Makefile Index: directory/c-sdk/ldap/libraries/libldif/Makefile Index: directory/c-sdk/ldap/libraries/libssldap/Makefile Index: editor/libeditor/base/nsEditorCommands.cpp Index: editor/libeditor/base/nsEditorCommands.h Index: layout/base/public/nsIPresShell.h Index: layout/html/base/src/nsFrame.cpp
*** Bug 164199 has been marked as a duplicate of this bug. ***
isn't this fixed already?
marking fixed.
Status: ASSIGNED → RESOLVED
Closed: 19 years ago
Resolution: --- → FIXED
Verified fixed in today's OS/2 trunk, which is very broken otherwise.
Oops, make that 99% fixed. The last PgUp should move cursor to 0,0, but only goes to home row, not home column.
The patch for the Shift+PgUp/Dn keybindings wasn't checked in yet () Please check it in too.
Also I don't see the behavior which was mentioned in the comment #217 on Linux. Here the last PgUp moves the caret to (0,0) position.
I have checked in the patches which were not checked in but part of this fix. Shifted keybindings should work as should platform particular issues.
This fix looks to be responsible for a regression causing Page Up and Page Down not to work in Browser (bug 165267). That bug appeared sometime between 2002-08-26-21 (where it does not appear) and 2002-08-28-08 (where it does).
*** Bug 165320 has been marked as a duplicate of this bug. ***
I don't think this bug is the culprit. When only the first patch was checked in the PgUp/Dn worked fine here in my browser from 20020827??. The second patch checked in adds only new keybindings for Shift+PgUp/Dn and couldn't cause the regression. With the 2002082808 build the Shift-PgUp/Dn now correctly selects the text in textareas and without shift it works like before the last checkin. So I say this bug is verified fixed on Linux 2002082808.
> The second patch checked in adds only new keybindings for Shift+PgUp/Dn Wrong. Look at please. What happened was: 1) bindings were added to the various platformHTMLBindings.xml 2) bindings were removed from htmlBindings.xml reverting change #2 fixes the pgup/pgdown problem. I'm not sure why (And that discussion is for bug 165267). But yes, this bug is 100% certainly responsible for the regression.
Sorry for that I thought only this attachment was checked in.
verified in 9/5 trunk. Reopen if anyone disagrees. I am opening a new bug 166938 because now the scrollbar does not go all the way to begin/end when moving the caret to the begin/end. interested parties can cc: themselves on that bug directly.
Status: RESOLVED → VERIFIED
*** Bug 167444 has been marked as a duplicate of this bug. ***
*** Bug 169629 has been marked as a duplicate of this bug. ***
*** Bug 167441 has been marked as a duplicate of this bug. ***
*** Bug 175623 has been marked as a duplicate of this bug. ***
*** Bug 178009 has been marked as a duplicate of this bug. ***
*** Bug 178660 has been marked as a duplicate of this bug. ***
*** Bug 179644 has been marked as a duplicate of this bug. ***
Has this regressed in Firefox 2? I can't select text by clicking inside a page then pressing Shift-Page Down.
This bug is about _editor_ as the summary says. In Firefox, that basically means a textarea. At no point could you select random text on a page with shift-pgdown. | https://bugzilla.mozilla.org/show_bug.cgi?id=4302 | CC-MAIN-2021-04 | refinedweb | 8,481 | 73.78 |
I was helping somebody out with his JavaScript code and my eyes were caught by a section that looked like that:
function randOrd() {
    return (Math.round(Math.random()) - 0.5);
}
coords.sort(randOrd);
alert(coords);
My first thought was: hey, this can't possibly work! But then I did some experimenting and found that it indeed at least seems to provide nicely randomized results.
Then I did some web searching and almost at the top found an article from which this code was almost certainly copied. It looked like a pretty respectable site and author...
But my gut feeling tells me that this must be wrong, especially as the sorting algorithm is not specified by the ECMA standard. I think different sorting algorithms will result in different non-uniform shuffles. Some sorting algorithms may even loop infinitely...
But what do you think?
And as another question... how would I now go and measure how random the results of this shuffling technique are?
update: I did some measurements and posted the results below as one of the answers.
It's never been my favourite way of shuffling, partly because it is implementation-specific as you say. In particular, I seem to remember that the standard library sorting from either Java or .NET (not sure which) can often detect if you end up with an inconsistent comparison between some elements (e.g. you first claim A < B and B < C, but then C < A).

It also ends up as a more complex (in terms of execution time) shuffle than you really need.
I prefer the shuffle algorithm which effectively partitions the collection into "shuffled" (at the start of the collection, initially empty) and "unshuffled" (the rest of the collection). At each step of the algorithm, pick a random unshuffled element (which could be the first one) and swap it with the first unshuffled element - then treat it as shuffled (i.e. mentally move the partition to include it).
This is O(n) and only requires n-1 calls to the random number generator, which is nice. It also produces a genuine shuffle - any element has a 1/n chance of ending up in each space, regardless of its original position (assuming a reasonable RNG). The sorted version approximates to an even distribution (assuming that the random number generator doesn't pick the same value twice, which is highly unlikely if it's returning random doubles) but I find it easier to reason about the shuffle version :)
This approach is called a Fisher-Yates shuffle.
I would regard it as best practice to code up this shuffle once and reuse it everywhere you need to shuffle items. Then you don't need to worry about sort implementations in terms of reliability or complexity. It's only a few lines of code (which I won't attempt in JavaScript!)
The Wikipedia article on shuffling (and in particular the shuffle algorithms section) talks about sorting a random projection - it's worth reading the section on poor implementations of shuffling in general, so you know what to avoid.
After Jon has already covered the theory, here's an implementation:
function shuffle(array) {
    var tmp, current, top = array.length;

    if (top) while (--top) {
        current = Math.floor(Math.random() * (top + 1));
        tmp = array[current];
        array[current] = array[top];
        array[top] = tmp;
    }

    return array;
}
The algorithm is O(n), whereas sorting should be O(n log n). Depending on the overhead of executing JS code compared to the native sort() function, this might lead to a noticeable difference in performance, which should increase with array size.
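To put rough numbers on that claim, here is a quick timing sketch of my own (the array size, timing method, and repeated shuffle definition are illustrative choices, not part of the answer):

```javascript
// My own rough benchmark sketch: Fisher-Yates vs. random-comparator sort.
function shuffle(array) {
    var tmp, current, top = array.length;
    if (top) while (--top) {
        current = Math.floor(Math.random() * (top + 1));
        tmp = array[current];
        array[current] = array[top];
        array[top] = tmp;
    }
    return array;
}

var data = [];
for (var i = 0; i < 100000; i++) data.push(i);

var t0 = Date.now();
shuffle(data.slice(0));                    // O(n) swaps
var t1 = Date.now();
data.slice(0).sort(function() {            // O(n log n) comparisons
    return Math.round(Math.random()) - 0.5;
});
var t2 = Date.now();

console.log("Fisher-Yates: " + (t1 - t0) + " ms, random sort: " + (t2 - t1) + " ms");
```

Absolute numbers vary wildly between engines; only the relative gap is interesting.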
In the comments to bobobobo's answer, I stated that the algorithm in question might not produce evenly distributed probabilities (depending on the implementation of sort()).
My argument goes along these lines: A sorting algorithm requires a certain number c of comparisons, e.g. c = n(n-1)/2 for Bubblesort. Our random comparison function makes the outcome of each comparison equally likely, i.e. there are 2^c equally probable results. Now, each result has to correspond to one of the n! permutations of the array's entries, which makes an even distribution impossible in the general case. (This is a simplification, as the actual number of comparisons needed depends on the input array, but the assertion should still hold.)
As Jon pointed out, this alone is no reason to prefer Fisher-Yates over using sort(), as the random number generator will also map a finite number of pseudo-random values to the n! permutations. But the results of Fisher-Yates should still be better: Math.random() produces a pseudo-random number in the range [0;1[. As JS uses double-precision floating point values, this corresponds to 2^x possible values where 52 ≤ x ≤ 63 (I'm too lazy to find the actual number). A probability distribution generated using Math.random() will stop behaving well if the number of atomic events is of the same order of magnitude.

When using Fisher-Yates, the relevant parameter is the size of the array, which should never approach 2^52 due to practical limitations.
When sorting with a random comparison function, the function basically only cares if the return value is positive or negative, so this will never be a problem. But there is a similar one: Because the comparison function is well-behaved, the 2^c possible results are, as stated, equally probable. If c ~ n log n, then 2^c ~ n^(a·n) where a = const, which makes it at least possible that 2^c is of the same magnitude as (or even less than) n!, thus leading to an uneven distribution, even if the sorting algorithm were to map onto the permutations evenly. Whether this has any practical impact is beyond me.
The real problem is that the sorting algorithms are not guaranteed to map onto the permutations evenly. It's easy to see that Mergesort does, as it's symmetric, but reasoning about something like Bubblesort or, more importantly, Quicksort or Heapsort, is not.
The bottom line: As long as sort() uses Mergesort, you should be reasonably safe except in corner cases (at least I'm hoping that 2^c ≤ n! is a corner case); if not, all bets are off.
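To make the counting argument concrete, here is a small illustration of my own (not part of the original answer) comparing 2^c with n!, using Bubblesort's worst-case c = n(n-1)/2:

```javascript
// Sketch: for n >= 3, n! contains the prime factor 3, so it can never
// divide the 2^c equally likely comparator outcomes evenly.
function factorial(n) {
    var f = 1;
    for (var i = 2; i <= n; i++) f *= i;
    return f;
}

for (var n = 2; n <= 6; n++) {
    var c = n * (n - 1) / 2;          // Bubblesort comparisons (worst case)
    var outcomes = Math.pow(2, c);    // equally probable comparator results
    console.log("n=" + n + ": n! = " + factorial(n) +
                ", 2^c = " + outcomes +
                ", remainder = " + (outcomes % factorial(n)));
}
```

For every n ≥ 3 the remainder is non-zero, so the equally likely outcomes cannot be distributed evenly over the permutations.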
I did some measurements of how random the results of this random sort are...
My technique was to take a small array [1,2,3,4] and create all (4! = 24) permutations of it. Then I would apply the shuffling function to the array a large number of times and count how many times each permutation is generated. A good shuffling algorithm would distribute the results quite evenly over all the permutations, while a bad one would not create that uniform result.
Using the code below I tested in Firefox, Opera, Chrome, IE6/7/8.
Surprisingly for me, the random sort and the real shuffle both created equally uniform distributions. So it seems that (as many have suggested) the main browsers are using merge sort. This of course doesn't mean that there can't be a browser out there that does differently, but I would say it means that this random-sort method is reliable enough to use in practice.
EDIT: This test didn't really measure the randomness (or lack thereof) correctly. See the other answer I posted.
But on the performance side, the shuffle function given by Cristoph was a clear winner. Even for small four-element arrays, the real shuffle performed about twice as fast as the random sort!
// The shuffle function posted by Cristoph.
var shuffle = function(array) {
    var tmp, current, top = array.length;
    if (top) while (--top) {
        current = Math.floor(Math.random() * (top + 1));
        tmp = array[current];
        array[current] = array[top];
        array[top] = tmp;
    }
    return array;
};

// the random sort function
var rnd = function() {
    return Math.round(Math.random()) - 0.5;
};
var randSort = function(A) {
    return A.sort(rnd);
};

var permutations = function(A) {
    if (A.length == 1) {
        return [A];
    } else {
        var perms = [];
        for (var i = 0; i < A.length; i++) {
            var x = A.slice(i, i + 1);
            var xs = A.slice(0, i).concat(A.slice(i + 1));
            var subperms = permutations(xs);
            for (var j = 0; j < subperms.length; j++) {
                perms.push(x.concat(subperms[j]));
            }
        }
        return perms;
    }
};

var test = function(A, iterations, func) {
    // init permutations
    var stats = {};
    var perms = permutations(A);
    for (var i in perms) {
        stats["" + perms[i]] = 0;
    }

    // shuffle many times and gather stats
    var start = new Date();
    for (var i = 0; i < iterations; i++) {
        var shuffled = func(A);
        stats["" + shuffled]++;
    }
    var end = new Date();

    // format result
    var arr = [];
    for (var i in stats) {
        arr.push(i + " " + stats[i]);
    }
    return arr.join("\n") + "\n\nTime taken: " + ((end - start) / 1000) + " seconds.";
};

alert("random sort: " + test([1, 2, 3, 4], 100000, randSort));
alert("shuffle: " + test([1, 2, 3, 4], 100000, shuffle));
Interestingly, Microsoft used the same technique in their pick-random-browser-page.
They used a slightly different comparison function:
function RandomSort(a, b) {
    return (0.5 - Math.random());
}
Looks almost the same to me, but it turned out to be not so random...
So I made some test runs again with the same methodology used in the linked article, and indeed it turned out that the random-sorting method produced flawed results. New test code here:
function shuffle(arr) {
    arr.sort(function(a, b) {
        return (0.5 - Math.random());
    });
}

function shuffle2(arr) {
    arr.sort(function(a, b) {
        return (Math.round(Math.random()) - 0.5);
    });
}

function shuffle3(array) {
    var tmp, current, top = array.length;
    if (top) while (--top) {
        current = Math.floor(Math.random() * (top + 1));
        tmp = array[current];
        array[current] = array[top];
        array[top] = tmp;
    }
    return array;
}

var counts = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0]
];

var arr;
for (var i = 0; i < 100000; i++) {
    arr = [0, 1, 2, 3, 4];
    shuffle3(arr);
    arr.forEach(function(x, i) { counts[x][i]++; });
}

alert(counts.map(function(a) { return a.join(", "); }).join("\n"));
I have placed a simple test page on my website showing the bias of your current browser versus other popular browsers using different methods to shuffle. It shows the terrible bias of just using Math.random() - 0.5, another 'random' shuffle that isn't biased, and the Fisher-Yates method mentioned above.
You can see that on some browsers there is as high as a 50% chance that certain elements will not change place at all during the 'shuffle'!
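The kind of bias that test page measures can also be estimated in a few lines; this is my own sketch (the trial count is arbitrary, and the measured percentage depends entirely on your engine's sort implementation):

```javascript
// Estimate how often the first element stays in place under the
// naive random-comparator "shuffle".
function naiveShuffle(a) {
    return a.slice(0).sort(function() { return Math.random() - 0.5; });
}

var trials = 10000, stays = 0;
for (var t = 0; t < trials; t++) {
    if (naiveShuffle([0, 1, 2, 3, 4])[0] === 0) stays++;
}

console.log("first element unmoved in " +
            (100 * stays / trials).toFixed(1) + "% of trials");
```

An unbiased shuffle of five elements would leave the first one in place about 20% of the time; numbers far above that indicate the bias described above.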
Note: you can make the implementation of the Fisher-Yates shuffle by @Christoph slightly faster for Safari by changing the code to:
function shuffle(array) {
    for (var tmp, cur, top = array.length; top--;) {
        cur = (Math.random() * (top + 1)) << 0;
        tmp = array[cur];
        array[cur] = array[top];
        array[top] = tmp;
    }
    return array;
}
I think it's fine for cases where you're not picky about distribution and you want the source code to be small.
In JavaScript (where the source is transmitted constantly), small makes a difference in bandwidth costs.
It is a hack, certainly. In practice, an infinitely looping algorithm is not likely. If you're sorting objects, you could loop through the coords array and do something like:
for (var i = 0; i < coords.length; i++) coords[i].sortValue = Math.random(); coords.sort(useSortValue) function useSortValue(a, b) { return a.sortValue - b.sortValue; }
(and then loop through them again to remove the sortValue)
Still a hack though. If you want to do it nicely, you have to do it the hard way :)
It's been four years, but I'd like to point out that the random comparator method won't be correctly distributed, no matter what sorting algorithm you use.
Proof:
nelements, there are exactly
n!permutations (i.e. possible shuffles).
The only sizes that could possibly be correctly distributed are n=0,1,2.
As an exercise, try drawing out the decision tree of different sort algorithms for n=3.
There is a gap in the proof: If a sort algorithm depends on the consistency of the comparator, and has unbounded runtime with an inconsistent comparator, it can have an infinite sum of probabilities, which is allowed to add up to 1/6 even if every denominator in the sum is a power of 2. Try to find one.
Also, if a comparator has a fixed chance of giving either answer (e.g.
(Math.random() < P)*2 - 1, for constant
P), the above proof holds. If the comparator instead changes its odds based on previous answers, it may be possible to generate fair results. Finding such a comparator for a given sorting algorithm could be a research paper.
If you're using D3 there is a built-in shuffle function (using Fisher-Yates):
var days = ['Lundi','Mardi','Mercredi','Jeudi','Vendredi','Samedi','Dimanche']; d3.shuffle(days);
And here is Mike going into details about it:
Here's an approach that uses a single array:
The basic logic is:
Code:
for(i=a.length;i--;) a.push(a.splice(Math.floor(Math.random() * (i + 1)),1)[0]);
Can you use the
Array.sort() function to shuffle an array – Yes.
Are the results random enough – No.
Consider the following code snippet:
var array = ["a", "b", "c", "d", "e"]; var stats = {}; array.forEach(function(v) { stats[v] = Array(array.length).fill(0); }); //stats = { // a: [0, 0, 0, ...] // b: [0, 0, 0, ...] // c: [0, 0, 0, ...] // ... // ... //} var i, clone; for (i = 0; i < 100; i++) { clone = array.slice(0); clone.sort(function() { return Math.random() - 0.5; }); clone.forEach(function(v, i) { stats[v][i]++; }); } Object.keys(stats).forEach(function(v, i) { console.log(v + ": [" + stats[v].join(", ") + "]"); })
Sample output:
a [29, 38, 20, 6, 7] b [29, 33, 22, 11, 5] c [17, 14, 32, 17, 20] d [16, 9, 17, 35, 23] e [ 9, 6, 9, 31, 45]
Ideally, the counts should be evenly distributed (for the above example, all counts should be around 20). But they are not. Apparently, the distribution depends on what sorting algorithm is implemented by the browser and how it iterates the array items for sorting.
More insight is provided in this article:
Array.sort() should not be used to shuffle an array
There is nothing wrong with it.
The function you pass to .sort() usually looks something like
function sortingFunc( first, second ) { // example: return first - second ; }
Your job in sortingFunc is to return:
The above sorting function puts things in order.
If you return -'s and +'s randomly as what you have, you get a random ordering.
Like in MySQL:
SELECT * from table ORDER BY rand() | https://javascriptinfo.com/view/13233/is-it-correct-to-use-javascript-array-sort-method-for-shuffling | CC-MAIN-2020-50 | refinedweb | 2,421 | 56.35 |
Spring Boot properties and Camel routes - an example
When you’re writing code you’re often reminded that hard-coding is bad; that including environment-dependent parameters inside your code, such as URLs, credentials and database configuration, is bad.
This advice is no different for writing Camel routes. If you find yourself in a situation where you’re hard-coding something - e.g. a REST service endpoint - then you should probably stop, and consider how you can externalise this configuration, in case the URL changes in future.
Fortunately, if you’re using Spring Boot, you get functionality to help you do this, as standard.
One of the cool features of Spring Boot is its ability to manage your application’s configuration and then pass that to your app, all without you having to explicitly write a line of code. You can read in parameters from config files, environment variables, and lots more. This makes it very easy to swap out different configurations for your app when you need them. Or, to make changes to the way your application runs, without having to recompile it.
When used with Camel, it’s especially useful because if you are developing some routes that are very similar, now you can write the code once and run multiple instances of the same route, with just the parameters modified.
Using application properties in Camel with Spring Boot - step-by-step example
Let’s create a Camel app with Spring Boot, and pass some configuration to it:
Create your Spring Boot application
You probably already know about Spring Initializr, the web tool for creating Spring Boot apps.
But did you know that you can create Spring apps using the command line as well? Command line FTW!
To do this, you just need to install the Spring Boot CLI. This will install the
springcommand into your environment. (Tip: if you’re using Mac OS, it’s also available through Homebrew; info in the link above).
Then, to create a project, you can just run:
spring init -d=camel -g=com.yourcompany your-project-name
Where d=camel states that you want to use the
cameldependency, and g=com.yourcompany is where you give the package name for your generated classes.
Add a Camel route
Once we’ve created our project, we need to add some Camel routes.
Spring just gives us an empty boilerplate project, so let’s add a Camel
RouteBuilderclass. This class will configure a basic route that logs a greeting to the console at regular intervals, using a timer.
Two aspects of the route are going to take values from Spring Boot configuration: the frequency of the timer, and the greeting phrase in the output. We mark parameters with curly braces {{ and }}, e.g.
{{myvariable}}.
So to create the route, create a file DemoRouteBuilder.java with these contents:
import org.apache.camel.builder.RouteBuilder; import org.springframework.context.annotation.Bean; import org.springframework.stereotype.Component; @Component public class DemoRouteBuilder extends RouteBuilder { @Override public void configure() { from("timer:mytimer?period={{timer.period}}") .setBody(simple("{{greeting.word}}, this is Camel running in Spring Boot!")) .to("log:out"); } }
Add configuration parameters
Now we’ll configure the parameters for our application.
Create a file src/main/resources/application.properties (or edit the file if it already exists).
Firstly, add the following line:
camel.springboot.main-run-controller=true
This first line is important as it ensures that Spring Boot stays “up”. In other words, it keeps the Camel route running until you terminate the Spring Boot container.
Then you add two config parameters like this:
timer.period=5000 greeting.word=Nihao
Run your application
Then, to run the app:
mvn spring-boot:run
Once Spring Boot has started up, you will see this in the logs:
2017-08-16 21:24:16.126 INFO 70530 --- [inRunController] o.a.camel.spring.SpringCamelContext : Route: route1 started and consuming from: timer://mytimer?period=5000 2017-08-16 21:24:16.128 INFO 70530 --- [inRunController] o.a.camel.spring.SpringCamelContext : Total 1 routes, of which 1 are started. 2017-08-16 21:24:16.129 INFO 70530 --- [inRunController] o.a.camel.spring.SpringCamelContext : Apache Camel 2.19.2 (CamelContext: camel-1) started in 0.300 seconds 2017-08-16 21:24:17.160 INFO 70530 --- [timer://mytimer] out : Exchange[ExchangePattern: InOnly, BodyType: String, Body: Nihao, this is Camel running in Spring Boot!] 2017-08-16 21:24:22.139 INFO 70530 --- [timer://mytimer] out : Exchange[ExchangePattern: InOnly, BodyType: String, Body: Nihao, this is Camel running in Spring Boot!] 2017-08-16 21:24:27.140 INFO 70530 --- [timer://mytimer] out : Exchange[ExchangePattern: InOnly, BodyType: String, Body: Nihao, this is Camel running in Spring Boot!]
You can also change the configuration properties when you run the route by adding the -D argument, like this:
mvn spring-boot:run -Dgreeting.word=Bonjour
Which will give you this output:
2017-08-16 21:27:06.215 INFO 70651 --- [timer://mytimer] out : Exchange[ExchangePattern: InOnly, BodyType: String, Body: Bonjour, this is Camel running in Spring Boot!] 2017-08-16 21:27:11.167 INFO 70651 --- [timer://mytimer] out : Exchange[ExchangePattern: InOnly, BodyType: String, Body: Bonjour, this is Camel running in Spring Boot!]
Get the example code on GitHub
You’ve just witnessed an example of Spring’s convention over configuration, where Spring establishes a convention for something - in this case, application parameters - making it easy to set them up with minimum fuss.
You can use this powerful feature to easily parameterise your Camel routes, and reduce the number of things you hard-code.
You can read more about properties and configuration in the official Spring Boot documentation.
What do you think? You can use Markdown in your comment.
To write code, indent each line with 4 spaces. Or, to paste a lot of code, you can put it in pastebin.com and share the link in your comment. | https://tomd.xyz/camel-spring-boot-properties/ | CC-MAIN-2022-40 | refinedweb | 981 | 57.77 |
. There are some web scraping libraries out there, namely BeautifulSoup, which are aimed at doing this same sort of task.
On to the code:
import urllib.request import re url = '' req = urllib.request.Request(url) resp = urllib.request.urlopen(req) respData = resp.read()
Up to this point, everything should look pretty typical, as you've seen it all before. We specify our url, our values dict, encode the values, build our request, make our request, and then store the request to respData. We can print it out if we want to see what we're working with. If you are using an IDE, sometimes printing out the source code is not the greatest idea. Many webpages, especially larger ones, have very large amounts of code in their source. Printing all of this out can take quite a while in the IDLE. Personally, I prefer to just view-source. In Google Chrome, for example, control+u will view-source.
Alternatively, you should be able to just right-click on the page and select view-source. Once there, you want to look for your "target data." In our case, we just want to take the paragraph text data. If you're looking for something specific, then what I suggest you do is copy some of the "thing" you are looking for. So in the case of specific paragraph text, highlight some of it, copy it, then view the source. Once there, do a find operation, control+f usually will open one up, then paste in what you are looking for. Once you've done that, you should be able to find some identifiers near what you are looking for. In the case of paragraph data, it is paragraph data because people tell the browser it is. This means usually that there are literally paragraph tags around what we want that look like:
<p>text goes here</p>
Some websites get fancy with their HTML and do things like
<p class="derp">text here</p>
...keep this in mind. With that in mind, most websites just use simple paragraph tags, so let's show that:
paragraphs = re.findall(r'<p>(.*?)</p>',str(respData))
The above regular expression states: Find me anything that starts with a paragraph tag, then in our parenthesis, we say exactly "what" we're looking for, and that's basically any character, except for a newline, one or more repetitions of that character, and finally there may be 0 or 1 of THIS expression. After that, we have a closing paragraph tag. We find as many of these that exist. This will generate a list, which we can then iterate through with:
for eachP in paragraphs: print(eachP)
The output should be a bunch of paragraph data from our website. | https://pythonprogramming.net/parse-website-using-regular-expressions-urllib/ | CC-MAIN-2022-40 | refinedweb | 460 | 72.87 |
Hello all.
I need help (not THAT sort of help). I've been searching for a while, in
vain, for a code snippet to enable me to write a program to browse folders
and files on Windows network shares. I need to be able to browse domains,
workgroups, machine names, folders, files, etc... and I need to do it using
.NET code. Either C# or VB.NET.
I'm trying to write a .NET Windows form control to do this, but I've been
stumped. Can sonbody who has a better understanding than me please help.
Thank you.
You can look at System.DirectoryServices namespace for getting domains, user, groups etc. information. The following link has some useful articles on these topics.
Forum Rules
Development Centers
-- Android Development Center
-- Cloud Development Project Center
-- HTML5 Development Center
-- Windows Mobile Development Center | http://forums.devx.com/showthread.php?154339-DataGridView-Returning-Values-from-Cells&goto=nextoldest | CC-MAIN-2015-14 | refinedweb | 140 | 68.47 |
Hello everyone, i’m trying to do a system for controlling a heat resistence with a thermistor MAX6675 and a Solid State Relay “SSR 40 DA”. What i have in mind is like a 3D Printer Hotend control, so after a calibration i can keep a decent temperature hold starting from the one i’ve set on the project. I started from the “PID_RelayOutput.ino” example you can find in PID Arduino Library.
Now my code looks like this:
#include <PID_v1.h> #include <max6675.h> #include "thermocouples.h" #include "hold.h" #define RELAY_PIN 6 //Define Variables we'll be connecting to double Setpoint, Input, Output; //Specify the links and initial tuning parameters double Kp=2, Ki=5, Kd=1; PID myPID(&Input, &Output, &Setpoint, Kp, Ki, Kd, DIRECT); int WindowSize = 5000; unsigned long windowStartTime; void setup() { Serial.begin(9600); pinMode(RELAY_PIN, OUTPUT); windowStartTime = millis(); //initialize the variables we're linked to Setpoint = 150; //tell the PID to range between 0 and the full window size myPID.SetOutputLimits(0, WindowSize); //turn the PID on myPID.SetMode(AUTOMATIC); delay(500); } void loop() { serial.Update(); Input = thermocouple.readCelsius(); myPID.Compute(); /************************************************ * turn the output pin on/off based on pid output ************************************************/ if (millis() - windowStartTime > WindowSize) { //time to shift the Relay Window windowStartTime += WindowSize; } if (Output < millis() - windowStartTime) digitalWrite(RELAY_PIN, HIGH); else digitalWrite(RELAY_PIN, LOW); }
“hold.h”
double MAXREAD = thermocouple.readCelsius(); class Hold { // Class Member Variables // These are initialized at startup long OnTime; // milliseconds of on-time long Wait; // milliseconds of off-time // These maintain the current state int state; // ledState used to set the LED unsigned long previousMillis; // will store last time LED was updated // Constructor - creates a Hold // and initializes the member variables and state public: Hold(long on, long off) { OnTime = on; Wait = off; state = LOW; previousMillis = 0; } void Update() { // check to see if it's time to change the state of the LED unsigned long currentMillis = millis(); if((state == HIGH) && (currentMillis - previousMillis >= OnTime)) { state = LOW; // Turn it off previousMillis = currentMillis; // Remember the time MAXREAD = thermocouple.readCelsius(); Serial.println(MAXREAD); MAXREAD = 0; } else if ((state == LOW) && (currentMillis - previousMillis >= Wait)) { state = HIGH; // turn it on previousMillis = currentMillis; // Remember the time } } }; Hold serial(50, 1000);
“thermocouples.h”
// Termocouple 1 int DO1 = 2; int CS1 = 3; int CLK1 = 4; MAX6675 thermocouple(CLK1, CS1, DO1);
The problem is that the value read from the MAX6675 seems to be in conflict with the “Input = thermocouple.readCelsius();” in void loop. This is supposed to make the PID library read the value from the Thermistor. The value seems to become fixed and of course anything works anymore.
Maybe my approach to this is totally wrong, i have no idea, is the first time that i play around with PID and honestly i’m finding the documentation really hard to understand.
Plus another thing is that if the temperature is low (like 30° C) the heater element doesn’t fire at all.
If you want to take a look i’m putting in attachments the full project zip file.
PID_MAX6675_SSR.zip (17.2 KB) | https://forum.arduino.cc/t/max6675-pid-thermostat/479674 | CC-MAIN-2021-31 | refinedweb | 504 | 52.09 |
30 July 2009 21:15 [Source: ICIS news]
HOUSTON (ICIS news)--Citi upgraded US major Dow Chemical to “buy” from “hold” behind improving sequential volumes and accelerated cost cutting, the research firm said on Thursday.
Dow reported earnings of 5 cents/share, beating Citi’s expectations of a loss of 5 cents/share, due to improved volumes in electronics and coatings, lower raw material costs and continued cost cutting, Citi said.
“At critical inflection points in the economy, sequential trends matter more than year-over-year changes,” Citi analyst PJ Juvekar said. “Although Dow’s volumes and prices were down 20%, volumes improved 5% sequentially, its first gain in a year.
“Operating rates improved 7% to 75% globally,” he added.
Citi also said Dow was “very adept at cost cutting”, and praised CEO Andrew Liveris for his role in the massive cost-cutting programme in 2001-02 when Dow acquired Union Carbide.
“The company is likely to accelerate its effort this time given the financial urgency and a deeper recession,” Juvekar said.
Citi said Dow had been “quick on its feet” to divest several assets, calling the company’s sale of Morton Salt, its stakes in the TRN refinery and Optimal olefins, glycols and chemicals joint ventures and its calcium chloride business as an “upside surprise”.
Those divestments add up to about $3.3bn (€2.3bn), Citi said, and puts Dow ahead of schedule in paying a bridge loan put in place for the company’s acquisition of Rohm and Haas.
Even so, Citi continued to label Dow as “high risk”, citing the global recession, the substantial amount of debt raised for the Rohm and Haas acquisition and risks in commodity businesses from the new wave of ?xml:namespace>
Dow surged to $21.98/share in mid-day trading on the New York Stock Exchange, up $1.71, or 8.4%.
Citi’s 12-month price target for the stock is $27/share.
($1 = €0.71) | http://www.icis.com/Articles/2009/07/30/9236374/citi-upgrades-us-dow-after-sequential-gains-accelerated-cuts.html | CC-MAIN-2015-22 | refinedweb | 326 | 59.53 |
Package: libxkbfile1 Version: 1:1.0.3-2 Severity: normal When doing something like xkbcomp $DISPLAY /tmp/xkb <edit /tmp/xkb to change key autorepeat state> xkbcomp /tmp/xkb $DISPLAY to change some keys' autorepeat state as viewed by the X server, the autorepeat states are silently unchanged. I believe that this is because the code which would be responsible for changing the state is commented out: #ifdef NOTYET ... #endif in XkbWriteToServer in srvmisc.c. My IBM X40 Thinkpad has two keys which I would like to turn into Hyper and Super modifiers; however, their default state is to autorepeat, and I can't turn this off using xkbcomp. (Using xset -r <keycode> works, however, as does generating an XkbSetControls request manually). I'd be happy to provide more information or provide sample files if that would help. Christophe -- System Information: Debian Release: 4.0 libxkbfile1 depends on: ii libc6 2.3.6.ds1-8 GNU C Library: Shared libraries ii libx11-6 2:1.0.3-4 X11 client-side library ii x11-common 1:7.1.0-10 X Window System (X.Org) infrastruc libxkbfile1 recommends no packages. -- no debconf information | https://lists.debian.org/debian-x/2007/01/msg00860.html | CC-MAIN-2017-30 | refinedweb | 193 | 59.4 |
C library function - putc()
Description
The C library function int putc(int char, FILE *stream) writes a character (an unsigned char) specified by the argument char to the specified stream and advances the position indicator for the stream.
Declaration
Following is the declaration for putc() function.
int putc(int char, FILE *stream)
Parameters
char -- This is the character to be written. The character is passed as its int promotion.
stream -- This is the pointer to a FILE object that identifies the stream where the character is to be written.
Return Value
This function returns the character written as an unsigned char cast to an int or EOF on error.
Example
The following example shows the usage of putc() function.
#include <stdio.h> int main () { FILE *fp; int ch; fp = fopen("file.txt", "w"); for( ch = 33 ; ch <= 100; ch++ ) { putc(ch, fp); } fclose(fp); return(0); }
Let us compile and run the above program that will create a file file.txt in the current directory which will have following content:
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcd
Now let's see); }
Let us compile and run the above program to produce the following result:
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcd | http://www.tutorialspoint.com/c_standard_library/c_function_putc.htm | CC-MAIN-2017-34 | refinedweb | 195 | 63.39 |
ttyslot - find the slot of the current user in the user accounting database (LEGACY)
#include <stdlib.h> int ttyslot(void);. The index is an ordinal number representing the record number in the database of the current user's entry. The first entry in the database is represented by the return value 0.
This interface need not be reentrant.
Upon successful completion, ttyslot() returns the index of the current user's entry in the user accounting database. The ttyslot() function returns -1 if an error was encountered while searching the database or if none of file descriptors 0, 1 or 2 is associated with a terminal device.
No errors are defined.
None.
None.
None.
endutxent(), ttyname(), <stdlib.h>. | http://pubs.opengroup.org/onlinepubs/007908775/xsh/ttyslot.html | CC-MAIN-2017-30 | refinedweb | 117 | 58.89 |
Compile and run the program.
It contains the C++ wages program using a repeat loop in order to enable the user to compute several wages. The loop ends when the user enters -1 for either the hours_worked or the pay_rate. C++ uses the "do" keyword instead of "repeat".
/* Program to compute the weekly wages repeatedly */
/* */
/* Inputs: (keyboard) */
/* 1. No. of hours worked during a week (integer) */
/* 2. Pay rate (dollars per week) (float) */
/* */
/* Output: */
/* weekly wages displayed on screen as float */
/* */
/* Algorithm: wages = hours worked * rate of pay */
/* */
/****************************************************/
#include <iostream>
using namespace std;
int main()
{
int hours_worked;
float pay_rate;
float wages;
do //here starts the repeat loop
{
//read in the hours worked
cout << endl ;
cout << "Number of hours worked during"
<< " the week (integer) = " ;
cin>> hours_worked;
// read in the pay rate
cout << "Weekly pay rate (specify two digits "
<< "after the decimal point) = " ;
cin>>pay_rate;
// compute wages
wages = hours_worked * pay_rate;
// display the result
cout << endl ;
cout << "The weekly wages are: $" << wages << endl ;
}while((hours_worked != -1) && (pay_rate != -1)); //loop ends here
return(0);
} | https://brainmass.com/computer-science/cpp/77080 | CC-MAIN-2016-44 | refinedweb | 170 | 67.18 |
.
The.
Don't forget to browse the sample code in our online code library.
For more articles on testing, be sure to check out our MSDN Magazine archives. Magazine, Ashish Ghoda demonstrates how you can build Document Information Panels in the Microsoft 2007 Office system to capture and manipulate metadata in Office documents.
Don't forget to browse the sample code in our online code library.
Be sure to check out our MSDN Magazine archives for more articles about Office and SharePoint development.
Going Places is a new MSDN Magazine column devoted to mobile device development. The main focus will be on Windows Mobile phones and Tablet PCs, although any topic related to mobility is fair game.
In the April 2008 issue of MSDN Magazine, Mike Calligaro talks about programmatically configuring -- or "provisioning" -- Windows Mobile devices.
If you're an IT manager or support technician, and are manually configuring Windows Mobile devices for your users, stop. This article is for you. If you're a mobile app developer and wish you had an easy way to add your Web page to the browser's favorites, this will help you too. of MSDN Magazine, Declan Brennan shows you how he built a Silverlight-based Web application that folds 3D polyhedron shapes from a flat template. strategies that you can employ to improve scaling: specialization, optimization, and distribution. How you apply them will vary, but the actual strategies are straightforward and consistent.
Check out our MSDN Magazine archives for more about ASP.NET and coding for performance. of MSDN Magazine, Matthew DeVore focuses on how to integrate your extension with the My Extensibility feature.
If you find yourself wanting more information on how to write the actual code extensions to the My namespace, see Simplify Common Tasks by Customizing the My Namespace in the July 2005 issue of MSDN Magazine.
Many applications are written with almost no thought given to performance. But when the need for high performance does present itself, do you have the knowledge, skills, and tools to do a good job?
In the April 2008 issue of MSDN Magazine, Vance Morrison discusses what you need to write high-performance applications with the .NET Framework. Vance will show you how to quantify the expense of various .NET features so you know when it's appropriate to use them.
Check out our MSDN Magazine archives for more about coding for performance.
Recently,.
Scott Mitchell in our Toolbox column has covered quite a few free tools you’ll love. Here are some you may have missed from Scott and some of our other authors as well. Check back again for more free tools next time.
SubSonic is an application toolset that is centered on its ability to completely generate your data access layer. Unlike some other Object Relation Mapping (ORM) frameworks, SubSonic takes the approach of generating and compiling your data access layer as opposed to performing a reflection-based mapping at run time (and for this reason, some would call it a code generation tool rather than an ORM).
Roy Osherove has created a number of free tools that make up for the lack of regular expression support in Visual Studio. The most well-known of these tools is Regulator, a standalone regular expression editor that includes color syntax highlighting, IntelliSense-like hints, a code snippets window, and search access to RegExLib.com, an online library of common regular expression patterns.
Scott Mitchell says Pixie 1.0, by Nattyware, is one of the best color pickers he has come across. And it’s free too.
Finally, check out FxCop, which lets you check a .NET assembly for compliance using a number of different rules. FxCop comes with a set number of rules created by Microsoft, but you can also write your own rules. James Avery wrote about it here, but you have to get it here..
The April 2008 issue of MSDN Magazine is now available online.
In the April issue, we take a look at some new technologies like voice apps and Silverlight animations, plus a look back at performance and scaling optimization. We also feature our first full-scale interview in a long time, with our Editor in Chief Howard Dierking discussion the evolution and future of programming languages with none other than Bjarne Stroustrup.
Here's what's in the issue:
Talk Back: Voice Response Workflows with Speech Server 2007 -- Speech Server 2007 lets you create sophisticated voice-response applications with Microsoft .NET Framework and Visual Studio tool integration. Here’s how. by Michael Dunn
Performance: Scaling Strategies for ASP.NET Applications -- Performance problems can creep into your Web app as it scales up, and when they do, you need to find the causes and the best strategies to address them. by Richard Campbell and Kent Alstad
Silverlight: Building Advanced 3D Animations with Silverlight 2.0 -- Animating with Silverlight is easier than you think. Here we create a 3D app that folds a polyhedron using XAML, C#, and by emulating the DirectX math libraries. by Declan Brennan
Interview++: Bjarne Stroustrup on the Evolution of Languages -- Howard Dierking talks to the inventor of C++, Bjarne Stroustrup, about language zealots, the evolution of programming, and what’s in the future of programming.
Office Development: Manage Metadata with Document Information Panels -- Use Document Information Panels in the Microsoft 2007 Office system to manipulate metadata from Office docs for better discovery and management. by Ashish Ghoda
In the columns we launch a new column about mobile device development: Going Places. We start out with a look at provisioning mobile devices. Among the other columns, Dr. James McCaffrey covers testing SQL stored procedures with LINQ and Michele Leroux Bustamante begins a series of columns on building a WCF-based router service.!
In the March 2008 installment of the Advanced Basics column in MSDN Magazine, Ken Getz shows you how to programmatically interact with the Office Open XML File Formats using Visual Studio 2008, LINQ to XML, and the Community Technology Preview (CTP) edition of the Microsoft SDK for Open XML Formats.
Also in the March 2008 issue, in our Data Points column, John Papa performs practical queries and operations with LINQ, using both LINQ to Objects and LINQ to Entities. John demonstrates how LINQ's standard query operators are enhanced using lambda expressions and how they can let you parse specific information from a sequence and perform complex logic over a sequence.
Again, you can browse the sample code in our online code library.
Trademarks |
Privacy Statement | http://blogs.msdn.com/msdnmagazine/default.aspx | crawl-001 | refinedweb | 1,084 | 53.61 |
Runtime.getRuntime().exec("command /c c:/dos/prog.exe")
and the same for batches.
It throws an IOException when I try that.
It DOES work from the command line, although I noticed that it doesn't open up a new DOS window; it uses the one that is currently running (where the command was executed).
could this be where the problem is?
If the Java interpreter is using command.com and then tries to create a process in the same window, wouldn't that result in an exception like the one I'm getting?
I can either get rid of the Java interpreter and then run the DOS program, OR start the DOS program in a different window and then exit the Java interpreter. It really makes no difference.
Is this the root of my problem?
I don't know; I just bang my head against the wall here (daily).
Thanks
Try this:

c:\\winnt\\system32\\command.com
Now, coming to the problem: exec invokes the Win32 function CreateProcess with the first argument as NULL and the second argument as the command line you passed. Going through the documentation of exec, we find that CreateProcess is invoked with the creation flag DETACHED_PROCESS (a Win32 detail). This means:
"For console processes, the new process does not have access to the console of the parent process. The new process can call the AllocConsole function at a later time to create a new console. This flag cannot be used with the CREATE_NEW_CONSOLE flag."
To solve your problem, first write a C program which calls AllocConsole to create a console and then uses the C standard function system() to invoke the batch file.
Create the exe for the C program and invoke this exe in your Runtime.exec call.
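In case it helps, here is a rough Java-side sketch of an alternative that avoids the C helper entirely: ask the shell itself to open a new console window with its start builtin (assuming an NT-family machine, where start is a cmd.exe builtin; the program path is made up):

```java
public class NewConsole {
    // Build a command line that asks the shell to launch a program in its
    // own console window via the `start` builtin (cmd.exe on NT).
    static String startCommand(String shell, String program) {
        return shell + " /c start " + program;
    }

    public static void main(String[] args) {
        String cmd = startCommand("cmd.exe", "c:\\dos\\prog.exe");
        System.out.println(cmd);
        // On an actual Windows machine you would then run:
        // Runtime.getRuntime().exec(cmd);
    }
}
```

Whether start behaves identically on Win9x's command.com is something to verify on your machine.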
exec returns immediately while the child process runs on its own, so after launching it you can have the impression that nothing is done. If you want the process to be finished before continuing, you have to write something like:
Runtime runtime = Runtime.getRuntime();
String command = ...;
Process process = runtime.exec(command);
process.waitFor(); // the next line is executed AFTER the process has finished execution.
int status = process.exitValue(); // so that you know the status of the execution.
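The wait-then-check pattern above is not Java-specific. For comparison, here is the same idea in Python's subprocess module, also capturing the child's output through pipes, since an exec'd child may have no console of its own (an analogy only, not the poster's Java setup):

```python
import subprocess
import sys

# Run a child process and capture its output through pipes instead of
# relying on an attached console; then wait and check the exit status.
result = subprocess.run(
    [sys.executable, "-c", "print('dir listing would go here')"],
    capture_output=True,  # collect stdout/stderr explicitly
    text=True,
)

print("exit status:", result.returncode)
print("captured:", result.stdout.strip())
```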
I noticed that when I shut down the system command.com was still running but was not visible at all...
also it doesnt seem to be doing anything even if i wait for the process to be finished executing before continuing like fontaine said.
the code was as follows:
Runtime r=Runtime.getRuntime();
Process process= r.exec("c:\\windows\\comma
process.waitFor();
int status= process.exitValue();
System.out.println("Status: " + status);
There was no file called blah.txt created on my system and the status code came back
as "0"
Do I really need to open a different console with C and run a system call with that like evijay talked about?
It makes sense I guess...Java wasnt really written for the kind of things im trying to do with it
Curse platform dependence!
To give you some background on what Im trying to accomplish... Im trying to write a Java Menu system for a DOS program (as silly as that sounds) So what will eventually happen is the java program will call the DOS executable (with the proper arguments according to what button is clicked) and vice-versa
10.7 So why can't I exec common DOS commands this way (as in 10.6)?
A. The reason is that many of the DOS commands are not individual programs, but merely
"functions" of command.com. There is no DIR.EXE or COPY.EXE for example. Instead, one
executes the command processor (shell) explicitly with a request to perform the builtin
command, like so:
Runtime.getRuntime().exec("command.com /c dir");
On NT, the command interpreter is "cmd.exe", so the statement would be Runtime.getRuntime().exec("cmd.exe /c dir");
The following program runs for me (I am on an NT server):
public class Test {
public static void main(String args[]) throws Exception {
Runtime r=Runtime.getRuntime();
Process process= r.exec("command.com /c dir > blah.txt");
process.waitFor();
int status= process.exitValue();
System.out.println("Status: " + status);
}
}
Be also sure that the directory in which you are doing the test is not "read-only".
that seems to work I used the Java FAQ (i SWEAR ive read that thing 10 times and still keep finding new things.. with your help of course)
it even works when i use a short batch file
although when i call a longer batch file (13 lines seems to be the cutoff) the interpreter hangs (i cant even close the window normally)
the commands that are included in the batch file consist of setting environment variables etc (nothing that would take any amount of time)
also if i put 11 lines and 2 spaces it STILL doesnt work.. although if i remove the spaces it works
nothing in the batch file needs to be displayed in another DOS window (yet) and nothing requires user input
obviously im about pulling my hair out here
does this make any sense? (it doesnt to me)
does calling a (long) batch file cause any problems for anyone else?
Which ever command is executed using Runtime.exec, that command will not have a console associated with it !!. This means, what ever output the command prints, it wont be displayed. If the command expects some input, it cant accept!!
So, The only solution is to just call AllocConsole in a c program which inturn may accept a batch file as command line argument and runs the batch file. a sample c program is here
test.c

#include <stdio.h>
#include <stdlib.h>
#include <windows.h>   /* AllocConsole */

int main(int argc, char **argv)
{
    AllocConsole();
    if (argc <= 1) {
        char dummyline[100];
        printf("usage : test.exe batchfilename\n");
        scanf("%s", dummyline);  /* wait so the usage message can be read */
        return 1;
    }
    return system(argv[1]);
}
Once, you compile this (u have to use VC++ or borland c++ and specify appropriate library for AllocConsole function), write your java program like this
Runtime.getRuntime().exec("test.exe mybatchfile.bat");
this will work (it worked in my system !!)
vijay
I do have one of the compilers you specified -- Borland C++ 3.0.
What library is AllocConsole in with your system?
I guess it makes some sense that a separate thread is needed for long batch files.
I have VC++ and AllocConsole is in
Kernel32.lib. You need to include header file wincon.h
I think kernel32.lib must be present in Borland c++ since it is the
basic library in win32 programming. Best of luck
Vijay
Starting a new thread to run batches works like a charm
Thanks alot
Runtime.getRuntime().exec(
that will run the DOS program in a new console | https://www.experts-exchange.com/questions/10058043/exec-a-DOS-program.html | CC-MAIN-2018-13 | refinedweb | 1,120 | 65.22 |
Welcome to a series of Quick Tip lessons in which we'll learn about components in Flash Professional CS5. In this week's tutorial we'll be learning about the Button and Label components.
Brief Overview
In the SWF you see two buttons and two labels. When you click on the top button the Label will update with how many times you have clicked the button. The bottom button acts like a toggle with an on and off state. When you click the bottom button the label changes to show whether the button is on or off.
The bottom label allows different colors in the same text. This is achieved by inserting HTML in the text (which we will also cover in this tutorial). To begin, drag two buttons and two labels to the stage.
In the Properties panel give the first button the instance name "basicButton". If the Properties panel is not showing, go to Menu > Window > Properties or press CTRL+F3.
Set the button's X to 86.00 and its Y to 107.00.
In the Properties panel give the first label the instance name "basicLabel".
Set the label's X to 239.00 and its Y to 107.00
In the Properties panel give the second button the instance name "toggleButton".
Set the button's X to 86.00 and its Y to 234.00
In the Properties panel give the second label the instance name "htmlLabel".
Set the label's X to 239.00 and its Y to 234.00
Step 3: Import Script [Actionscript 3.0]
Uncheck "Automatically declare stage instances".
In the Main.as Open the package declaration and import the classes we will be using.
Add the following to the Main.as:
package {
	import flash.display.MovieClip;
	import fl.controls.Button;
	import fl.controls.Label;
	//needed to automatically size the labels
	import flash.text.TextFieldAutoSize;
	import flash.events.MouseEvent;
	import flash.events.Event;
Step 4: Setup the Main Class
Add the class declaration, extend
MovieClip and set up our constructor function. Add the following to Main.as:
	public class Main extends MovieClip {
		//This is our basicButton component on the stage
		public var basicButton:Button;
		//This is our toggleButton component on the stage
		public var toggleButton:Button;
		//This is our basicLabel component on the stage
		public var basicLabel:Label;
		//This is our htmlLabel component on the stage
		public var htmlLabel:Label;
		//Used to keep track of how many times the user has clicked the button
		var numClicks:Number = 0;

		public function Main() {
			//Used to setup the buttons and add eventListeners
			setupButtons();
			//Used to setup our labels
			setupLabels();
		}
Step 5: Main Constructor Functions
Here we'll define the
setupButton() and
setupLabels() functions.
In the code below we use the
htmlText property of the Label; this allows us to insert HTML into the text string. It should be noted that Flash only supports a limited set of HTML tags; check out the livedocs for a list of supported HTML tags. We'll use the
<font> tag to alter the color of the text.
Add the following to the Main.as
		public function setupButtons():void {
			//sets up the label on the button
			basicButton.label = "Click Me";
			basicButton.addEventListener(MouseEvent.CLICK, basicButtonClick);
			toggleButton.label = "On";
			//We use selected here to put the button in its "On" state
			toggleButton.selected = true;
			//Used to toggle the button..each time you click the button it turns selected to true/false
			toggleButton.toggle = true;
			//The toggle property expects a change so here we use Event.CHANGE not MouseEvent.CLICK
			toggleButton.addEventListener(Event.CHANGE, toggleButtonClick);
		}

		private function setupLabels():void {
			//This sets the label to where it automatically sizes to hold the text passed to it
			basicLabel.autoSize = TextFieldAutoSize.LEFT;
			//Sets the initial text of the label
			basicLabel.text = "You have clicked the button 0 times";
			htmlLabel.autoSize = TextFieldAutoSize.LEFT;
			//To be able to use Flash's supported html tags we use the htmlText of the label
			htmlLabel.htmlText = "The button is <font color ='#00FF00'>On</font>";
		}
Step 6: Event Listeners
Here we'll code our Event Listeners we added to the Buttons. Add the following to the Main.as:
		private function basicButtonClick(e:Event):void {
			//Increments the number of times the user has clicked the button
			numClicks++;
			//Here we cast numClicks to a string since text expects a string
			basicLabel.text = "You have clicked the button " + String(numClicks) + " times";
		}

		private function toggleButtonClick(e:Event):void {
			//If the button is selected we set the htmlText of the label with a green "On"
			//And change the button's label to "ON"
			//Preferably you'd do something useful here like show a movie clip
			if (e.target.selected == true) {
				htmlLabel.htmlText = "The button is <font color ='#00FF00'>On</font>";
				e.target.label = "ON";
				//Do something useful
			} else {
				//If the button is not selected we set the htmlText of the Label with a red Off
				//And change the button's Label to "OFF"
				//Preferably you'd do something useful here like hide a movie clip
				htmlLabel.htmlText = "The button is <font color ='#FF0000'>Off</font>";
				e.target.label = "OFF";
				//Do something useful
			}
		}
Then, close out the class and package declarations with two closing curly braces.
Conclusion
Using Button and Label components is a simple and fast way to have fully functional buttons and labels without having to build them yourself.
You'll notice in the Component Parameters panel of the components that you can check and select certain properties.
Button component properties.
Properties for the Button Component
emphasized: a Boolean value that indicates whether a border is drawn around the Button component when the button is in its up state
enabled: a Boolean value that indicates whether the component can accept user input
label: the text label for the component
labelPlacement: position of the label in relation to a specified icon
selected: a Boolean value that indicates whether a toggle button is toggled in the on or off position
toggle: a Boolean value that indicates whether a button can be toggled
visible: a Boolean value that indicates whether the component instance is visible
Properties for the Label Component
autoSize: indicates how a label is sized and aligned to fit the value of its text property
condenseWhite: a Boolean value that indicates whether extra white space such as spaces and line breaks should be removed from a Label component that contains HTML text
enabled: a Boolean value that indicates whether the component can accept user input
htmlText: the text to be displayed by the Label component, including HTML markup that expresses the styles of that text
selectable: a Boolean value that indicates whether the text can be selected
text: the plain text to be displayed by the Label component.
visible: a Boolean value that indicates whether the component instance is visible
wordWrap: a Boolean value that indicates whether the Label supports word wrapping
The help files are a great place to learn more about the component properties you can set in the Component Parameters panel. Here are the livedocs pages for the Button and for the Label.
Thanks for reading and watch out for more upcoming component Quick Tips!
Name: diC59631 Date: 12/03/98
Obviously the two below operations will not
compile in the same java file.
import java.util.Date ;
import java.sql.Date ;
It would be nice to alias one such as below:
import java.util.Date ;
import java.sql.Date as SqlDate ;
Date => java.util.Date
SqlDate => java.sql.Date
This way instead of leaving out the
import java.sql.Date line and having to
fully qualify the SQL Date class for every
usage, you'd only have to use the SqlDate
alias.
This becomes a much larger issue with package
names like:
com.fake1.western.division.plane.Fuel ;
com.fake2.tiger.landbased.vehicle.Fuel ;
Especially when you need to use both types of
Fuel, but can only use the shortend version
for one of them at most.
"import .. as .." is not a new concept, and
I'm surprised that it wasn't incorporated into
the language specification from the beginning.
(Review ID: 43613)
======================================================================
An alternative syntax is proposed in RFE 4478140:
import jlang=java.lang.*;
alias jlang=java.lang;
The need for an extra keyword is not clear.
###@###.### 2005-04-17 07:20:05 GMT
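For comparison, the requested syntax mirrors Python's long-standing import aliasing, shown here only as prior art in another language (it is not Java):

```python
# Python's import aliasing: two names that would otherwise be awkward
# to use together coexist under distinct local names.
from datetime import date as CalendarDate
from datetime import datetime as Timestamp

release = CalendarDate(1998, 12, 3)
logged_at = Timestamp(1998, 12, 3, 9, 30)
print(release.year, logged_at.hour)  # -> 1998 9
```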
4983159 is now JDK-8061419
EVALUATION
4983159 is the master CR for type aliasing.
WORK AROUND
Name: diC59631 Date: 12/03/98
Always use fully qualified class names.
======================================================================
EVALUATION
This is a minor bit of syntactic sugar. I am not convinced that it is very important in practice. The usual arguments about destabilizing the language,
high bar for changes and creeping featurism apply.
gilad.bracha@eng 1998-12-03 | http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4194542 | CC-MAIN-2016-44 | refinedweb | 272 | 58.58 |
Part of twisted.words.protocols.jabber.xmlstream
Inherited from XmlStream:
Inherited from BaseProtocol (via XmlStream, Protocol):
Inherited from EventDispatcher (via XmlStream):
Reset XML Stream. Resets the XML Parser for incoming data. This is to be used after successfully negotiating a new layer, e.g. TLS and SASL. Note that registered event observers will continue to be in place.
Called when a stream:error element has been received. Dispatches a STREAM_ERROR_EVENT event with the error element to allow for cleanup actions and drops the connection.
Send stream level error.
If we are the receiving entity, and haven't sent the header yet, we sent one first.After sending the stream error, the stream is closed and the transport connection dropped.
Send data over the stream. This overrides xmlstream.XmlStream.send to use the default namespace of the stream header when serializing domish.IElement objects. It is assumed that if you pass an object that provides domish.IElement, it represents a direct child of the stream's root element.
Called when a connection is made. Notifies the authenticator when a connection has been made.
Called when the stream header has been received. The authenticator's streamStarted method will be called.
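The note above that registered event observers stay in place across a stream reset can be sketched with a small, dependency-free dispatcher. This is an illustration of the pattern only, not Twisted's actual EventDispatcher implementation:

```python
class MiniDispatcher:
    """Toy event dispatcher: observers persist across parser resets."""

    def __init__(self):
        self._observers = {}      # event name -> list of callbacks
        self.parser_generation = 0

    def addObserver(self, event, fn):
        self._observers.setdefault(event, []).append(fn)

    def reset(self):
        # Re-create the parser state only; observers are untouched.
        self.parser_generation += 1

    def dispatch(self, event, obj=None):
        for fn in self._observers.get(event, []):
            fn(obj)


seen = []
d = MiniDispatcher()
d.addObserver("//error", seen.append)
d.reset()                      # observers survive the reset
d.dispatch("//error", "boom")
print(seen)  # -> ['boom']
```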
confused with gc
Armstrong Dor
Greenhorn
Joined: Jul 23, 2002
Posts: 5
posted
Jul 23, 2002 11:30:00
0
Hi,
I am not able to understand the code below from Mughal's page 253 & 254 but i know it is finalize() chaining i have gone through these two pages many times but still not able to understand the problem I have two doubts
class SuperBlob {
	static int population;

	public SuperBlob() {
		++population;
	}

	protected void finalize() throws Throwable {
		super.finalize();
		--population;
	}
}

class blob extends SuperBlob {
	int[] fat;

	public blob(int bloatedness) {
		fat = new int[bloatedness];
		System.out.println(bloatedness + ": Hello");
	}

	protected void finalize() throws Throwable {
		System.out.println(": Bye");
		super.finalize();
	}
}

public class Finalizers {
	public static void main(String args[]) {
		int blobsRequired, blobSize;
		try {
			blobsRequired = Integer.parseInt(args[0]);
			blobSize = Integer.parseInt(args[1]);
		} catch (IndexOutOfBoundsException e) {
			return;
		}
		for (int i = 0; i < blobsRequired; i++) {
			new blob(blobSize);
		}
		System.out.println(SuperBlob.population + " blobs alive");
	}
}
when you compile and run the code twice by giveing two types of COMMAND ARGUMENTS as BELOW
java Finalizers 5 50 (the 1st time)
it prints like this
50: Hello
50: Hello
50: Hello
50: Hello
50: Hello
5 blobs alive
------------------------------------------
java Finalizers 5 50000 (the 2nd time)
it prints like this
50000: Hello
50000: Hello
50000: Hello
: Bye
: Bye
50000: Hello
50000: Hello
3 blobs alive
------------------------------------------
the above is correct but what i did not understand is this
1. why cant it print 5 blobs alive when i give the 2nd COMMAND ARGUMENT as 50000
why is it saying 3 blobs alive.
2. why does it print Bye after the 3rd Hello when I give the 2nd COMMAND ARGUMENT as 50000
which does not happen when i give the 2nd COMMAND ARGUMENT 50.
Thank you very much
Armstrong D
Chung Huang
Ranch Hand
Joined: Jun 21, 2002
Posts: 56
posted
Jul 23, 2002 12:41:00
This program shows you the thing about GC; GC will re-claim memory, but the exact time, or the calculation it goes through to find out the time, is platform dependent. In Java, you
can't
force GC to re-claim a particular object, or to re-claim objects at a particular time, or both. This means as a programmer you can never predict just when an object that is eligible for garbage collection would be garbage collected.
This program creates objects that were not assigned any reference, so you know that right after you create them they are eligible to be garbage collected. To show you how it works, the constructor prints out a message showing you that the objects are created. Then, when GC actually does decide to work, the finalize method is executed to show that the object is re-claimed. And at the end, the program prints out just how many objects are still alive (it uses a static int to keep track; notice how it increases in the constructor and decreases in the finalize method).
For your first question, GC did not work (for some reason it decided not to do any work that time). That is why you see 5 objects being created and all 5 objects were still alive when the program ended. The second time, GC did work after the 3rd object was created. Relating to your second question, this is why Bye was printed; Bye was printed twice because 2 out of the 5 objects created were garbage collected, so each of the GCed objects' finalize method was executed. Because 2 of the 5 objects were GCed, at the end of the 2nd run you see that only 3 blobs are alive.
Does this help? And please correct me if I am wrong folks.
Let us be showered in the light of confusion!
Corey McGlone
Ranch Hand
Joined: Dec 20, 2001
Posts: 3271
posted
Jul 23, 2002 12:43:00
This program simply creates objects from the heap and causes them to be of a given size, depending upon the second argument, the "blob size." Notice that, once an object is created, there are no references to it so it is immediately available for garbage collection.
What this program is showing is that, if the objects are of a small size, the garbage collector won't bother to run. In that case, there is plenty of available memory so using up precious processor time to clean up memory is just a waste.
In the second case, we cause the size of each object to be quite large. In this case, after a few objects are created, we need to reclaim some space from memory in order to create more. So, the garbage collection process executes and collects two objects, calling their finalize methods. That's what causes the two "Bye" statements to be there. That's where two objects got garbage collected.
I hope that helps,
Corey
SCJP Tipline, etc.
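The nondeterminism described above, where finalizers run only if and when the collector reclaims an object, can be observed in other runtimes too. Here is a small Python analogy using weakref.finalize (CPython's reference counting makes reclamation far more predictable than the JVM's GC, so this is an analogy only):

```python
import weakref

log = []

class Blob:
    pass

b = Blob()
weakref.finalize(b, log.append, "Bye")  # runs when b is reclaimed

print(log)   # -> [] : object still alive, finalizer not yet run
del b        # drop the last reference; CPython reclaims immediately
print(log)   # -> ['Bye']
```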
Hi everyone:
I have to list three input numbers in ascending order and here's my code so far:
Not perfect yet, but my question is below.

Code:
/*
Variables
num1 The first number input by the user
num2 The second number input by the user
num3 The third number input by the user
*/
#include <stdio.h>
int main()
{
float num1;
float num2;
float num3;
float largest;
float middle;
float smallest;
printf("Enter first number\n" );
scanf( "%f" , &num1 );
printf("Enter second number\n" );
scanf("%f" , &num2);
printf("Enter the third number\n" );
scanf("%f", &num3 );
if (num1 > num2 )
largest = num1;
printf("The largest number is: %.2f\n", num1);
return 0;
}
Instead of writing a HUGE if statements for all three numbers how can I write something like this:
If num1 > num2 and >num3
then the largest number is num1
Any idea how this is done?
Thank you very much.
-Extro | http://cboard.cprogramming.com/c-programming/66420-greater-than-less-than-printable-thread.html | CC-MAIN-2016-07 | refinedweb | 158 | 60.32 |
Build an interactive program that reads in the information about the Inventory objects from the file Inventory.txt into an array of Inventory objects
Provide a Panel on the screen that has a TextField for the product code and a Button called “Lookupâ€. The user should be able to type in a Product code into the JTextField, and hit the button. Your program should then do a Binary Search of the array of Inventory objects to find the appropriate inventory object.
Build a second JPanel on the screen to show the results of the lookup. This panel should have:
A JTextField showing the Price of the Item
A JTextField showing the Quantity on Hand of the Item
A JTextField showing the Status of the lookup
If the item is found, the Price and Quantity on Hand should be displayed. If not found, the Price and Quantity on Hand should be blank, and the Status should say "Item Not Found".
This program can be done with a simple grid layout on the screen, but a more elegant solution would be to have a JPanel on the top of the screen with a Label saying "Product Code", a JTextField for the user entry, and a JButton. Then, a JPanel with the GridLayout could be built for the bottom of the screen with three rows and Labels on each row such as "Price", "Qty on Hand", and "Status".
The JPanel class is not mentioned in the book, but provides a way to divide the screen up. Simply create a Panel with "JPanel p = new JPanel( );" You can then add a Panel to the Frame with the "add" method, followed by the desired section of the screen: BorderLayout.NORTH for the top, or BorderLayout.CENTER for the rest of the screen. Items can then be added to the JPanel just like the JFrame, and they will go to the section of the JFrame given to the JPanel. You can also change the layout of the JPanel with the same setLayout( ) method as the JFrame, but you're just changing one part of the screen.
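The lookup logic itself is independent of the GUI code. Here is a compact sketch of the required binary search over product codes, written in Python for brevity; the (code, price, qty) record layout is an assumption, and the real assignment would implement this in Java:

```python
def lookup(inventory, code):
    """Binary search a list of (code, price, qty) tuples sorted by code."""
    lo, hi = 0, len(inventory) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if inventory[mid][0] == code:
            return inventory[mid]      # found: (code, price, qty)
        elif inventory[mid][0] < code:
            lo = mid + 1
        else:
            hi = mid - 1
    return None                        # caller shows "Item Not Found"


items = [("A10", 2.50, 14), ("B20", 9.99, 3), ("C30", 1.25, 40)]
print(lookup(items, "B20"))  # -> ('B20', 9.99, 3)
print(lookup(items, "Z99"))  # -> None
```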
//PanelSample.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
public class PanelSample extends JFrame
{
public PanelSample()
{
setTitle("INVENTORY REPORT JAVA 205");
setLayout(new BorderLayout());
setSize(400,150);
Container contentPane = getContentPane();
topPanel= new JPanel(new FlowLayout());
lookUp = new JButton("Look Up");
productCode = new JTextField("Enter product code here");
topPanel.add(new JLabel("Product Code:"));
topPanel.add(productCode);
topPanel.add(lookUp);
bottomPanel = new JPanel(new GridLayout(0,2));
price = new JTextField(" ");
quantityOnHand = new JTextField(" ");
status = new JTextField(" ");
bottomPanel.add(new JLabel("Price: " ));
bottomPanel.add(price);
bottomPanel.add(new JLabel("Quantity On Hand : "));
bottomPanel.add(quantityOnHand);
bottomPanel.add(new JLabel("Status: "));
bottomPanel.add(status);
contentPane.add(topPanel, BorderLayout.NORTH);
contentPane.add(bottomPanel, BorderLayout.SOUTH);
addWindowListener(new WindowAdapter(){
public void windowClosing(WindowEvent e)
{
System.exit(0);
} // windowClosing
}); // WindowListener
} // end of constructor
static JPanel topPanel;
static JPanel bottomPanel;
static JButton lookUp;
static JTextField productCode;
static JTextField price;
static JTextField quantityOnHand;
static JTextField status;
public static void main(String [] args)
{
JFrame frame = new PanelSample();
frame.show();
} // main
} // PanelSample
What is PlaidML?
- Accelerates deep learning on AMD, Intel, NVIDIA, ARM, and embedded GPUs.
PlaidML is developed by Vertex.ai. Vertex.ai was founded in 2015 by Jeremy Bruestle and Choong Ng. The startup has been acquired by Intel and will join the Intel’s Artificial Intelligence Products Group.
Why PlaidML?
I use Keras for training neural networks, and most of the time I have to do all the training on Google Colab or Kaggle Kernels because I own a laptop with an AMD GPU (R9 M375).
So with the help of PlaidML, just by adding two lines of code at the start of my Jupyter Notebook, I can use my GPU for training neural networks without using CUDA.
import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
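The important detail in those two lines is ordering: the environment variable must be set before Keras is imported, because the backend is chosen at import time. A minimal, Keras-free sketch of the same set-before-read pattern:

```python
import os

# Set the variable first; anything imported afterwards sees it.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

backend = os.environ["KERAS_BACKEND"]
print(backend)  # -> plaidml.keras.backend
```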
Experiment and Evaluation
Here I use cifar10_cnn.py (only 1 and 10 epochs with no data augmentation) from keras-team on GitHub to conduct the tests. Figure 3 shows the performance of this program on three backends:
- Tensorflow on Intel i7 5500U
- PlaidML on Intel HD Graphics 5500
- PlaidML on AMD Radeon R9 M375
Source: Deep Learning on Medium | http://mc.ai/plaidml-a-alternative-open-source-deep-learning-library-for-all-gpus/ | CC-MAIN-2019-09 | refinedweb | 185 | 63.8 |
03/08/2011 at 15:43, xxxxxxxx wrote:
Here's an example:
import c4d
from c4d import gui
def main() :
obj = doc.GetActiveObject() #Assigns the FFD to a variable
obj[c4d.FFDOBJECT_SIZE]= c4d.Vector(10,10,10) #Sets the GridSize
obj.Message(c4d.MSG_UPDATE) #Updates the object changes
c4d.EventAdd() #Tells C4D that something has changed
if __name__=='__main__':
main()
You almost always need to use c4d.EventAdd() in order to get a proper update to your code. So you'll see it in just about every script. It tells C4D that something has changed. So do a refresh.
In this case. obj.Message(c4d.MSG_UPDATE) isn't absolutely needed. But it is needed in some cases when doing things like moving the individual verts. of an object or spline around.
-ScottA
On 04/08/2011 at 00:57, xxxxxxxx wrote:
Thanks for your reply ScottA.
I should have said, I normally would use an EventAdd() in the code.
Unfortunately, your code still does the same thing.
If you copy it to your Python Script Manager and execute it, you will see that the FFD does not update, even though the grid size values change.
On 04/08/2011 at 08:36, xxxxxxxx wrote:
Sorry about that. I could have sworn it worked yesterday when I tried it.
Looks like it might a bug. Because it works properly in coffee:
var obj = doc->GetActiveObject();
obj#FFDOBJECT_SIZE= vector(10,10,10);
*Shrug*
On 04/08/2011 at 09:37, xxxxxxxx wrote:
BTW.
Here's how to scale an object's matrix:
#Python version to scale an object's matrix
import c4d
from c4d import gui
def main() :
m = op.GetMg()
scale = c4d.Vector(m.v1.GetLength()+.3, #Change number as needed
m.v2.GetLength()+.3, #Change number as needed
m.v3.GetLength()+.3) #Change number as needed
m.v1 = m.v1.GetNormalized() * scale.x
m.v2 = m.v2.GetNormalized() * scale.y
m.v3 = m.v3.GetNormalized() * scale.z
op.SetMg(m);
c4d.EventAdd()
if __name__=='__main__':
main()
Maybe that will bail you out until they fix it?
On 05/08/2011 at 00:53, xxxxxxxx wrote:
Thanks for that Scott, I'll have a play with it when I get time, might work as a work around.
If anyone from maxon could confirm if its a bug or a gap in my knowledge that would be great.
On 20/08/2011 at 08:13, xxxxxxxx wrote:
Hi Scott, I tried scaling the matrix but the FFD doesn't work properly when its non-uniformly scaled.
Unfortunately I can't use CallCommand with Reset Scale because it has a dialogue and I don't want the user to have to use it.
Do you know of a way to scale the FFDs points without affecting the object scale, or a way to correct it after ?
On 20/08/2011 at 10:20, xxxxxxxx wrote:
ASFAIK. The only way to change the scale of the FFD without changing the scale of the mesh is to use Model Tool Mode when changing it.
I did some checking using GetCache on the FFD deformer.
The container cache is updating properly. Which is why the values change properly in the attributes.
But the matrix cache always returns None. So that's probably why scale is not working properly.
I'm not seeing any problems when I scale the matrix by hand to a non uniform scale.
I'm just testing it on a cube. And when I change the size of the matrix to something like 390,330,300 the FFD still deform the cube as expected.:
import c4d
from c4d import gui
def main() :
m = op.GetMg()
scale = c4d.Vector(m.v1.GetLength()+.3, #Sets FFD X size to 390
m.v2.GetLength()+.1, #Sets FFD Y size to 330
m.v3.GetLength()+0) #Leaves FFD Z size at default(300)
m.v1 = m.v1.GetNormalized() * scale.x
m.v2 = m.v2.GetNormalized() * scale.y
m.v3 = m.v3.GetNormalized() * scale.z
op.SetMg(m);
c4d.EventAdd()
if __name__=='__main__':
main()
On 18/01/2013 at 07:14, xxxxxxxx wrote:
Hi,
I up this topic because I faced the same problem, and it seems to still be a bug in R14.
I notice that the PointTag isn't correctly updated when you set the FFD size via Python.
Here the solution I found :
1. Get the Point Tag (c4d.Tpoint)
2. Get Point tag data : pointTag.GetAllHighlevelData()
3. Multiply each point position by the scale factor (newSize / currentSize)
point.x *= newSize.x / currentSize.x
…
4. Set the FFD size : ffd[c4d.FFDOBJECT_SIZE] = size
I hope this will be useful for someone.
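Step 3 is just component-wise multiplication by newSize / currentSize. Outside Cinema 4D, the math can be sketched with plain tuples (this illustrates the arithmetic only; it does not use the c4d API):

```python
def rescale_points(points, old_size, new_size):
    """Scale each lattice point by new_size/old_size, per axis."""
    factor = tuple(n / o for n, o in zip(new_size, old_size))
    return [tuple(c * f for c, f in zip(p, factor)) for p in points]


# Two corner points of a default 300x300x300 FFD lattice.
points = [(-150.0, -150.0, -150.0), (150.0, 150.0, 150.0)]
scaled = rescale_points(points, (300.0, 300.0, 300.0), (150.0, 300.0, 300.0))
print(scaled)  # -> [(-75.0, -150.0, -150.0), (75.0, 150.0, 150.0)]
```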
On 18/01/2013 at 10:36, xxxxxxxx wrote:
Can you please post an example?
Because I don't understand where you're getting the newSize.x & currentSize.x values from.
Example:
import c4d
def main() :
obj = doc.GetActiveObject() #Assigns the FFD to a variable
count = obj.GetPointCount() #The number of points in the deformer
tag = obj.GetTag(c4d.Tpoint) #Get the deformer's default(hidden) point tag
pd = tag.GetAllHighlevelData() #The vector positions of each point in the deformer
for i in xrange(0, count) :
pd[i].x *= 100 / 300 #<----- newSize.x / oldSize.x ?
obj[c4d.FFDOBJECT_SIZE] = c4d.Vector(100,300,300) #<----Still does not work!
obj.Message(c4d.MSG_UPDATE) #Updates the object changes
c4d.EventAdd() #Tells C4D that something has changed
if __name__=='__main__':
main()
On 18/01/2013 at 10:53, xxxxxxxx wrote:
Here an example :
import c4d
def setFFDSize(ffd, size) :
tag = ffd.GetTag(c4d.Tpoint)
points = tag.GetAllHighlevelData()
oldSize = ffd[c4d.FFDOBJECT_SIZE]
scaleFactor = c4d.Vector(size.x / oldSize.x, size.y / oldSize.y, size.z / oldSize.z)
for i, point in enumerate(points) :
points[i].x *= scaleFactor.x
points[i].y *= scaleFactor.y
points[i].z *= scaleFactor.z
tag.SetAllHighlevelData(points)
ffd[c4d.FFDOBJECT_SIZE] = size
ffd.Message(c4d.MSG_UPDATE)
def main() :
setFFDSize(doc.GetActiveObject(), c4d.Vector(1200, 350, 100))
c4d.EventAdd()
if __name__=='__main__':
main()
On 18/01/2013 at 11:03, xxxxxxxx wrote:
You just forgot to set the data :
tag.SetAllHighlevelData(pd)
On 18/01/2013 at 11:29, xxxxxxxx wrote:
Got it.
Thanks for posting your solution.
On 20/01/2013 at 14:55, xxxxxxxx wrote:
Hi XSYANN, thanks for posting this workaround, annoying that they never fixed it properly yet. I even reported the bug to maxon tech support.
My original intention was to write a plugin for adding deformers correctly sized & orientated to the selected objects.
I got quite far without finishing it. Since then, Marc Pearson has written py-deform which supports everything apart from the ffd, for this reason (I spoke with him in a thread about it).
I think I'll give him a heads up that there is a workaround.
Thanks for both of your help on this. | https://plugincafe.maxon.net/topic/5924/6013_struggling-with-ffd--ffdobjectsize/11 | CC-MAIN-2021-43 | refinedweb | 1,138 | 68.77 |
The Trends of the Software Engineering Field in 2021
I’m interested in the trends in the software engineering field. Here are three basic questions.
Which Language will most people desire in 2021?
Which Database will most people desire in 2021?
Which Platform will most people desire in 2021?
The dataset I will use in this article for the answers is from Stack Overflow.
Overview of the Dataset
The name of the dataset is Stack Overflow Annual Developer Survey 2020. With nearly 65,000 responses fielded from over 180 countries and dependent territories, the 2020 Annual Developer Survey examines all aspects of the developer experience from career satisfaction and job search to education and opinions on open-source software. Each row represents the answer of a person.
I downloaded the dataset and put it in the data folder. The below code is used to import the packages and the dataset into Jupyter Notebook.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

df = pd.read_csv('./data/survey_results_public.csv')
num_rows = df.shape[0]  # Provide the number of rows in the dataset
num_cols = df.shape[1]  # Provide the number of columns in the dataset
print("rows=", num_rows, "cols=", num_cols)
print("columns", df.columns.tolist())
The dataset has 61 columns and 64461 rows. There are three columns that could help me find out answers.
‘LanguageDesireNextYear’: which programming, scripting, and markup languages do you want to work in over the next year?
‘DatabaseDesireNextYear’: which database environments do you want to work in over the next year?
’PlatformDesireNextYear’: which platforms do you want to work in over the next year?
Data Preparation
First, let's see what the "LanguageDesireNextYear" values look like. The below code is used to count the values of LanguageDesireNextYear.
desire_L = df.LanguageDesireNextYear.value_counts()  # Provide a pandas series of the counts for each value
The top 5 values:
Python 1152
Rust 528
HTML/CSS;JavaScript;TypeScript 499
C# 461
Go 412
A value may be a combination of languages, which should be split. After the splitting, I add up the counts for each language. Here it's done with a class (code below).
class helper():
    def __init__(self, column, dataframe):
        self.column = column
        self.dataframe = dataframe.dropna(subset=[self.column], how="all")  # Drop only rows with missing values in self.column
        self.dic = {}

    def split_sum_value(self):
        self.dataframe.reset_index(drop=True, inplace=True)
        for i in range(self.dataframe.shape[0]):
            temp_list = self.dataframe[self.column][i].split(";")  # split combination value
            for j in range(len(temp_list)):
                if temp_list[j] in self.dic:
                    self.dic[temp_list[j]] += 1
                else:
                    self.dic[temp_list[j]] = 1
        return self.dic
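The split-and-sum the helper performs is a standard multi-label tally. Stripped of pandas, the same logic can be sketched with the standard library (the answer strings below are toy stand-ins for the survey column, not real survey data):

```python
from collections import Counter

# Toy stand-ins for the semicolon-separated survey answers.
answers = ['Python', 'Rust', 'HTML/CSS;JavaScript;TypeScript', 'Python;Rust', None]

counter = Counter()
for answer in answers:
    if answer:                       # skip missing values, like dropna()
        counter.update(answer.split(';'))  # split combination answers and tally

print(sorted(counter.items()))
# [('HTML/CSS', 1), ('JavaScript', 1), ('Python', 2), ('Rust', 2), ('TypeScript', 1)]
```

The same tally also falls out of `value_counts()` after splitting and exploding the column, but the Counter version shows the core idea without any dependencies.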
The dataset is ready. Let's dive into the exploratory analysis.
Exploratory Analysis
I input ‘LanguageDesireNextYear’ as a column to a helper class. Then I create the bar chart that shows the result of the sum.
# create a helper instance and call the split_sum_value() function
LanguageDesireNextYear_sum = helper("LanguageDesireNextYear", df)
dict_1 = LanguageDesireNextYear_sum.split_sum_value()
dict_1 = dict(sorted(dict_1.items(), key=lambda item: item[1]))

# create bar chart
plt.bar(range(len(dict_1)), list(dict_1.values()), align='center')
plt.title("The rank of LanguageDesireNextYear")
plt.xticks(range(len(dict_1)), list(dict_1.keys()), rotation='vertical')
plt.tight_layout()  # make room for the labels

# save figure
plt.savefig('The_rank_of_LanguageDesireNextYear.png')
plt.show()
In Figure 1–1, the top two languages are Python and JavaScript, which matches my expectation. Python is the language most people use in the artificial intelligence field, which is a hot technology.
Second, I input 'DatabaseDesireNextYear' as a column to the helper class and got Figure 1–2 below.
The most desired database is PostgreSQL, which is a free and open-source relational database management system. The next two are MongoDB and MySQL; these two are also free and open-source.
Third, I input ‘PlatformDesireNextYear’ as a column to the helper class and got Figure 1–3 below.
The plot shows that the most desired platform is Linux.
Conclusion
The Stack Overflow dataset is very useful. I found the answers and produced some visualizations.
Which Language will most people desire in 2021? answer: Python
Which Database will most people desire in 2021? answer: PostgreSQL
Which Platform will most people desire in 2021? answer: Linux
Countable
Countable is a JavaScript function to add live paragraph-, word- and character-counting to an HTML element. Countable is a zero-dependency library and comes in at 1KB when minified and gzipped.
Installation
The preferred method of installation is npm or yarn.
npm i --save-dev countable
yarn add --dev countable
Alternatively, you can download the latest zipball or copy the script directly.
Usage
Countable is available as a Node / CommonJS module, an AMD module and as a global. All methods are accessed on the Countable object directly.
Callbacks
The on and count methods both accept a callback. The given callback is then called whenever needed with a single parameter that carries all the relevant data. The value of this is bound to the current element. Take the following code for an example.
Countable.count(document.getElementById('text'), counter => console.log(this, counter))
=> <textarea id="text"></textarea>, { all: 0, characters: 0, paragraphs: 0, words: 0 }
Countable.on(elements, callback, options)
Bind the callback to all given elements. The callback gets called every time the element's value or text is changed.
Countable.on(area, counter => console.log(counter))
Countable.off(elements)
Remove the bound callback from all given elements.
Countable.off(area)
Countable.count(elements, callback, options)
Similar to Countable.on(), but the callback is only executed once and no events are bound.
Countable.count(area, counter => console.log(counter))
Countable.enabled(elements)
Checks whether the live-counting functionality is bound to the given elements.
Countable.enabled(area)
Options
Countable.on() and Countable.count() both accept a third argument, an options object that allows you to change how Countable treats certain aspects of your element's text.
{ hardReturns: false, stripTags: false, ignore: [] }
By default, paragraphs are split by a single return (a soft return). By setting hardReturns to true, Countable splits paragraphs after two returns.
Depending on your application and audience, you might need to strip HTML tags from the text before counting it. You can do this by setting stripTags to true.
Sometimes it is desirable to ignore certain characters. These can be included in an array and passed using the ignore option.
Browser Support
Countable supports all modern browsers. Full ES5 support is expected, as are some ES6 features, namely let and const.
Apache 1.3 (and 2.0, as far as I can see) calculates the modified time used to compare the If-Modified-
Since request header with and determine validation like this (in http_protocol.c ap_meets_conditions);
mtime = (r->mtime != 0) ? r->mtime : time(NULL);
Later, this time is used to figure out if a 304 Not Modified is sent;
if ((ims >= mtime) && (ims <= r->request_time)) {
return HTTP_NOT_MODIFIED;
The problem is that Apache fakes the mtime to the current time if it can't find r->mtime (e.g., if it's a
CGI script). If a client sends an If-Modified-Since header, I believe there's a race condition whereby an
IMS containing the current time will get a 304 Not Modified, even though the underlying resource has
no concept of mtime (i.e., it doesn't have a Last-Modified).
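To make the race concrete, here is a small Python sketch of the comparison described above. This is an illustration of the logic, not Apache's actual code; the "fixed" variant mirrors the idea of treating mtime == 0 as "never validates" rather than "modified now":

```python
def meets_conditions(mtime, ims, request_time, now):
    # Mirror of the buggy logic: a resource with no mtime (mtime == 0)
    # is treated as if it were modified "now".
    effective_mtime = mtime if mtime != 0 else now
    return effective_mtime <= ims <= request_time  # True -> 304 Not Modified

def meets_conditions_fixed(mtime, ims, request_time, now):
    # One possible fix: a resource with no mtime can never validate.
    if mtime == 0:
        return False
    return mtime <= ims <= request_time

now = 1144864719  # any "current" timestamp
# The CGI response has no Last-Modified; the client echoes the Date header back:
print(meets_conditions(0, now, now, now))        # True  -> spurious 304
print(meets_conditions_fixed(0, now, now, now))  # False -> correct full response
```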
This can be demonstrated against the following script (at-ims.cgi):
#!/usr/local/bin/python
import os
print "Content-Type: text/plain"
print
print os.environ.get("HTTP_IF_MODIFIED_SINCE", "-")
using the following command-line for the client side;
~> URL=""; DATE=`curl -Is -X HEAD $URL | grep ^Date |
cut -d ":" -f 2-`; echo "D-> $DATE"; curl -Is -H "If-Modified-Since: $DATE" $URL
D-> Wed, 12 Apr 2006 17:58:39 GMT
HTTP/1.1 304 Not Modified
Date: Wed, 12 Apr 2006 17:58:39 GMT
Server: Apache/1.3.29
As you can see, Apache incorrectly sends a 304 Not Modified when the If-Modified-Since request
header is set to the current time. For a CGI script that changes on a sub-second basis (e.g., one used
by AJAX), this reduces the granularity of responses to one second, confusing the implementor (because
they don't expect a 304 Not Modified to ever be sent).
All of this would be largely a theoretical problem, because such a resource should never see an If-
Modified-Since request header, having not send a previous Last-Modified response header. However,
Safari/OSX *does* send a If-Modified-Since request header based upon the Date response header when
it doesn't find a Last-Modified response (a bug which I'm reporting to them now).
Safari, of course, should fix themselves, but Apache is also somewhat broken here, as it should not
default mtime to the current time (which may cause problems in other scenarios).
Note that while I see this behaviour on the mnot.net server (Apache 1.3.29), I've had trouble verifying it
on other systems. I suspect that this is because it's a race condition, and therefore hard to reproduce;
that said, it could be something on that particular server instance (it's a shared host, so I don't have access to the source they've used).
However, if I'm correct about the underlying reason, the code hasn't changed from then until HEAD, and
is substantially the same in Apache 2.2.0.
BTW, there's nothing special about that script; I used the one I'm showing Apple's problem with, but it
could easily be as simple as
print "Content-Type: text/plain"
print
print "hi!"
Created attachment 29237 [details]
Fix If-Modified-Since "now" with non-mtime-aware resources, add missing APR_DATE_BAD check.
No response within six years although the fix is easy:
Keep assuming that r->mtime == 0 means "now" but check for mtime == 0 explicitly instead of faking mtime = apr_time_now(). Note that this will break the If-Unmodified-Since check if the requested resource really has an mtime of 1970-01-01T00:00:00 (but how likely is that?).
A thorough fix would be to introduce a "time unset" constant as already stated in a comment by dgaudet.
A quick test shows Mark Nottingham's test case still elicits the bug with httpd trunk (2.5.0-dev) r1470940.
I just retested this and the test case still elicits the bug in 2.4.12, 2.4.13-dev (r1665365), and 2.5.0-dev (r1665394), running on OSX 10.9.5.
There's been enough code change that Carsten Gaebler's patch no longer applies cleanly, but it looks like the current code still has the same bug/behavior as the old code: if the resource has no last-modified date, it acts as if the last-modified date is apr_time_now(). | https://bz.apache.org/bugzilla/show_bug.cgi?id=39287 | CC-MAIN-2020-45 | refinedweb | 723 | 60.14 |
Updated with links of WCF-Part1, WPF, WWF, WCF Tracing FAQ, 8 steps to enable Windows authentication on WCF BasicHttpBinding , Difference between BasicHttpBinding and WsHttpBinding & WCF FAQ Part 3 – 10 security related FAQ. Updated With Links of SilverLight FAQ Part 1 and Part 2.
Introduction?
In this section we will run through a quick FAQ for WCF. I am sure after reading this you will get a good understanding of the fundamentals of WCF.
I
Click here to see Windows Communication Framework (WCF) - Part.
Figure 18:- Duplex Service code
Let us try to understand the duplex concept by doing a small sample. The code snippet is as shown in the above figure. We will extend the previous sample, which was shown in the first question only that now we will provide the notification to the client once the doHugeTask is completed.
The first step is to change the service class. Above is the code snippet of the service class. Again, we have numbered them so that we can understand how to implement it practically line by line. So below is the explanation number by number:-
1 - In the service contract attribute we need to specify the callback contract attribute. This Callback Contract attribute will have the interface, which will define the callback.
2 - This is the interface which client will call. In order that it should be asynchronous, we have defined the one-way attribute on the doHugeTask method.
3 - This is the most important interface, as it forms the bridge of communication between server and client. The server will call the client using the Completed method. The client needs to provide an implementation for the Completed method on the client side, and the server will call that method once it completes the doHugeTask method.
4 and 5 - In this we implement the duplex interface and provide the implementation for the doHugeTask() method.
6 - This is an important step. The OperationContext.Current.GetCallBackChannel will be used to make callback to the client.
7 - We will expose the service using the HTTP protocol. In this sample, because we want to do duplex communication, we need to use wsDualHttpBinding rather than simple HTTP binding. This new binding configuration is then assigned to the endpoint on which the service is hosted.
This completes the server side explanation for duplex.
Figure 19:- Duplex Client side code
Above is the code snippet for client side. So let us explain the above code with numbers.
1- We implement the completed method. This is the callback method, which the server will call when doHugeTask is completed.
2 - In this step we create an object of the InstanceContext class. InstanceContext represents context information of a service. The main use of InstanceContext is to handle incoming messages. In short, the proxy is used to send messages to the server and InstanceContext is used to accept incoming messages.
3 - In this step we pass the InstanceContext object to the constructor of the proxy. This is needed, as the server will use the same channel to communicate with the client.
4 - In this section two windows are shown. The top window is the server's output and the bottom window is the client's. You can see in the bottom window that the server has made a callback to the client.
Note: - You can get the source code for the same in the WCFDuplex folder. Feel free to experiment with it. Try making one simple project of client-to-client chat using WCF duplex fundamentals; I am sure your doubts will be cleared in and out.
Note: - You can find the below sample in the "WCFMultipleProtocolGetHost" folder in the CD provided.
Below is the code snippet pasted from the same sample. As usual, we have numbered them and here is the explanation of the same:-
1 and 2 - As we are hosting the service on two protocols we need to create two objects of the URI. You can also give the URI through the config file. Pass these two objects of the URI in the constructor parameter when creating the service host object.
Figure 20:- Server side code for Multi Protocol hosting
3 – In the config file we need to define two bindings and endpoints, as we are hosting the service on multiple protocols.
Once we are done with the server side coding, it's time to make a client by which we can switch between the protocols and see the results. Below is the code snippet of the client side for multi-protocol hosting.
Figure 21:- Multi Protocol Client code
Let us understand the code:-
1 - In the generated config file we have added two endpoints. When we generate the config file, it generates only one protocol; the other endpoint has to be manually added.
2 - To test the same we have a list box, which has the name value given in the endpoint.
3 - In the list box select event we have then loaded the selected protocol. The selected protocol string needs to be given in the proxy class, and finally we call the proxy class GetTotalCost.
First let us understand why MSMQ came into the picture, and then the rest will follow. Let us take a scenario where your client needs to upload data to a central server. If everything works fine and the server is available to the client 24 hours a day, then there are no issues. In case the server is not available, the clients will fail and the data will not be delivered. That is where MSMQ comes into the picture. It eliminates the need for a persistent connection to a server. Therefore, what you do is deploy an MSMQ server and let the clients post messages to this MSMQ server. When your actual server runs, it just picks up the messages from the queue. In short, neither the client nor the server needs to be up and running at the same time.
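The store-and-forward idea itself is independent of MSMQ and WCF. A toy Python sketch of the decoupling (illustrative only; a real message queue persists messages to disk and survives restarts):

```python
# Toy store-and-forward queue: producer and consumer never need to be
# online at the same time; the queue holds messages in between.
queue = []

def client_send(message):
    queue.append(message)  # the client posts even if the server is down

def server_drain():
    delivered = []
    while queue:
        delivered.append(queue.pop(0))  # the server picks up pending messages later
    return delivered

# The server is down while the client sends:
client_send("order #1")
client_send("order #2")

# The server comes up later and processes the backlog:
print(server_drain())  # ['order #1', 'order #2']
```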
In order to use MSMQ you need to install Message Queuing by clicking on Install Windows Components and selecting MSMQ Queuing. Once that is done we are all set to make our sample of MSMQ using WCF.
Figure 22:- MSMQ Installation
This sample will be simple. We send some messages from the client while keeping the server down. As the server is down, the messages will be posted to the MSMQ server. Then we will run the server service, and the service should pick up the messages and display them.
Figure 23:- MSMQ Server side code walkthrough
Above snippet is the server side code. So let us understand the whole snippet:-
1 – The first thing is the queue name where the messages will go if the server is not running. The queue name we can get from the config file. You can see in the above snippet that we have defined the queue name in the App.config file.
2 – We first check whether this queue exists; if it exists then go ahead, if not then create the queue. Please note you need to import the System.Messaging namespace. Then we can use the MessageQueue class to create a queue if the queue does not exist.
3 – The first thing which should surprise you is why we are creating a URI with the HTTP protocol when we are going to use MSMQ. Ok! This is a bit tricky but fun. As we know, in order to connect to this service we need to generate a client proxy using SVCUTIL. Therefore, this URI will help us do that. It is not actually used by the client to connect.
4 – We need our endpoint and the MSMQ bindings.
5 and 6 – Again as usual we need an interface and an implementation for the same. In the implementation we have created a method called SendMessage.
Once the above steps are executed, run the server and generate the client proxy using SVCUTIL utility.
Now comes the final step: making a client. One change you need to make is to change the address to the MSMQ server in the app.config. In this, we have just looped ten times and fired an offline message each time. Please note to keep your server down.
Figure 24:- MSMQ Client side code
Now go to computer management and you can see your queue messages waiting to be picked by the server. Run the server and below should be the output. If you come back again you will see that there are no messages in the queue.
Figure 25:- MSMQ Server side display
One of the important concepts is to understand when to use the MSMQ protocol. When we expect that the client and server will not be available at the same time, this is the best option to opt for.
Figure 26:- Transactions in WCF
In order to support transaction the service should first support transactions. Above is a simple code snippet of the server service and client and explanation for the same:-
The top code snippet is of the server service and the below code snippet is of the client.
1 - At the interface level the operation contract is attributed with the [TransactionFlow] attribute. There are three values for it: Allowed (which means the operation may or may not be used in a transaction), NotAllowed (where it is never used in a transaction) and Required (where the service can only be used in transactions). The below code snippet currently says that this service can only be used with transactions.
2 - In this section the [ServiceBehavior] attribute specifies the transaction isolation level property. Transaction isolation specifies the degree of isolation most compatible with other applications. So let us review the different levels you can provide for transaction isolation.
The data affected by a transaction is called volatile.
Chaos: - pending changes from more highly isolated transactions cannot be overridden.
Read Committed: - Volatile data can be modified but it cannot be read during the transaction.
Read Uncommitted: - Volatile data can be read as well as modified during the transaction.
Repeatable Read: - Volatile data can be read but not modified during the transaction and new data can be added.
Serializable: - Volatile data can be only read. However, no modifications and adding of new data is allowed.
Snapshot: - Volatile data can be read. However, before modifying the data it verifies whether any other transaction has changed the data. If yes, then it raises an error.
By default, the System. Transactions infrastructure creates Serializable transactions.
3 - This defines the transaction behavior within the service. [OperationBehavior] has a property called TransactionScopeRequired. This setting indicates that the operation must be called within a transaction scope. You can also see TransactionAutoComplete is set to true, which indicates that the transaction will complete by default if there are no errors. If you do not set TransactionAutoComplete to true then you will need to call OperationContext.Current.SetTransactionComplete() to make the transaction complete.
Now let us make a walkthrough of the client side code for the service.
4 and 5 - You can see that from the client we need to define the isolation level and the scope while making the call to our Update Accounts method.
There are scenarios in a project when you want the message to be delivered on time; the timely delivery of the message is more important than the risk of losing a message. In these scenarios, volatile queues are used.
Below is the code snippet which shows how to configure volatile queues. You can see the bindingConfiguration property set to the volatile binding. This code will ensure that the message is delivered on time, but there is a possibility that you can lose data.
<appSettings>
  <!-- use appSetting to configure MSMQ queue name -->
  <add key="queueName" value=".\private$\ServiceModelSamplesVolatile" />
</appSettings>
<system.serviceModel>
  <services>
    <service name="Samples.StockTickerServ"
             behaviorConfiguration="CalculatorServiceBehavior">
      ...
      <!-- Define NetMsmqEndpoint -->
      <endpoint address="net.msmq://localhost/private/ServiceModelSamplesVolatile"
                binding="netMsmqBinding"
                bindingConfiguration="volatileBinding"
                contract="Samples.InterfaceStockTicker" />
      ...
    </service>
  </services>
  <bindings>
    <netMsmqBinding>
      <binding name="volatileBinding"
               durable="false"
               exactlyOnce="false"/>
    </netMsmqBinding>
  </bindings>
  ...
</system.serviceModel>
The main use of a queue is that you do not need the client and the server running at the same time. Therefore, it is possible that a message will lie in the queue for a long time until the server or client picks it up. But there are scenarios where a message is of no use after a certain time; such messages, if not delivered within that time span, should not be sent to the user.
Below is the config snippet which defines for how much time the message should stay in the queue.
<bindings>
  <netMsmqBinding>
    <binding name="MyBindings"
             deadLetterQueue="Custom"
             customDeadLetterQueue="net.msmq://localhost/private/ServiceModelSamples"
             timeToLive="00:00:02"/>
  </netMsmqBinding>
</bindings>
We will upgrade the macOS minimum version to 10.13. I will make that change shortly along with related changes for the precompiled libraries.
Today
There's an issue on macOS that multiple devs stumbled over now, bcf49d13e534. To address that we'd have to drop support for macOS 10.12 and 10.13 from what I can tell.
After discussion with Sergey, we decide not to upgrade the external libraries to C++14, and instead go directly to C++17 in about 6 months from now when we upgrade to VFX reference platform 2021.
Sat, Jun 27
Just thought I should mention @Ray molenkamp (LazyDodo)'s D8126 here.
Thu, Jun 25
embree is done, tbb got added when i took out the erroneous exports, oidn still needs to be done indeed.
I believe we are now ready with the CMake builder updates for 2.90, and that we can do the precompiled library updates now.
Wed, Jun 24
Hi! please use svn precompiled libraries as mentioned at
mkdir ~/blender-git/lib
cd ~/blender-git/lib
svn checkout
For build issues, please ask on
Fri, Jun 19
In T77813#958440, @Germano Cavalcante (mano-wii) wrote:
This report is already getting very confusing.
A lot of pertinent information is scattered in the comments.
And I'm not sure which to look at first.
This is too time consuming for us to track down and we require the bug reporter to narrow down the problem.
This report is already getting very confusing.
A lot of pertinent information is scattered in the comments.
And I'm not sure which to look at first.
Thu, Jun 18
Just tested this issue under my Archlinux/KDE installation with blender installed from official Arch repo (blender 17:2.83-1), do the exact same steps, and also can reproduce this issue.
To reproduce this issue, we need a clean environment with blender 2.83 just installed. I use the zip version instead of the installer. The reproduce steps:
Wed, Jun 17
False alarm. (I mistook "Save Startup File" for "Save preferences")
I can confirm with the official version of blender 2.80, installing through the installer (not the zip)
Tue, Jun 16
- CTest integration via the gtest_discover_tests() command.
Mon, Jun 15
Sun, Jun 14
Sat, Jun 13
Fri, Jun 12
I would like to see Python upgraded to 3.7.7, as there have been a bunch of security fixes.
Tue, Jun 9
That or use a naming convention so every test in a file starts with the same prefix, but that's a bit fragile.
Mon, Jun 8
In D7649#192868, @Sybren A. Stüvel (sybren) wrote:
- Each test invocation then has to load the same binary. I expect this to be potentially quite slow, as it can be 1.5 GB in debug mode. Loading that binary once and then running all tests will probably be faster, although it does increase the chance of one test influencing the test (as they'd then all run in the same process) and would prevent running the tests in parallel.
In D7649#192724, @Brecht Van Lommel (brecht) wrote:
I was expecting the tests to remain individual ctests instead of all being part of one Blender ctest, invoking blender_tests multiple times with different --gtest_filter arguments.
In D7649#192674, @Sybren A. Stüvel (sybren) wrote:
I think it's nice to have the tests in the same namespace as the code under test. It could be in the global namespace too and have a using blender::module; statement at the top. I don't have strong feelings about this one.
- Setup libdirs and liblinks as per Brecht's diff.
- Add include(GTestTesting) to intern/libmv/CMakeLists.txt to get libmv building again.
- Add buildinfo to the blender_test executable.
Sat, Jun 6
This patch does not pass the CMake step for me either, afaict it's missing the BLENDER_SRC_GTEST entirely?
Fri, Jun 5
I had to disable WITH_LIBMV to even get it through cmake
I had to add this to make it build, but I guess this is still work in progress (and not integrated with ctest?).
diff --git a/tests/gtests/CMakeLists.txt b/tests/gtests/CMakeLists.txt
index 68a44b28817..90f610899b1 100644
--- a/tests/gtests/CMakeLists.txt
+++ b/tests/gtests/CMakeLists.txt
@@ -21,9 +21,13 @@ if(WITH_GTESTS)
   endif()
   unset(_test_libs)
- Updated namespaces
- Moved BLENDER_GFLAGS_NAMESPACE/GFLAGS/GLOG defines away from global scope. This does add to the duplication between blender_add_test_lib and BLENDER_SRC_GTEST_EX, so I can look into that later.
- Removed gtests_runner.{cc,h} as these were actuall needed for running the tests from Blender itself.
- Moved remove_strict_flags() into GTestTesting.cmake
- Added some clarifying comments
Please ignore the exact namespace in the test files for now. What I do want to have feedback on, is whether the test should be in:
- the same namespace as the code under test,
- in a tests sub-namespace, or
- in an anonymous sub-namespace.
For the namespace question: I think it should be tests sub-namespace of what's being tested. For example, blender::depsgraph::tests.
When some helpers are needed use anonymous namespace inside of the tests.
This is great news. Would really appreciate if this fix make it to the next 2.83 LTS release!
Thx to @Clément Foucault (fclem), also here it's works fine 👍
If I had symlinked the compilers, this would not have been an issue.
But such scripts fit my purpose so I'd rather move lib/clang folder for my use.
(app versions below)
I couldn't reproduce this problem with Xcode 11.0 and CMake 3.14.1.
Thu, Jun 4
That's good news :)
Thanks to @Clément Foucault (fclem)
Fixed! Works beautifully now. Finally usable on OSX thanks to everyone’s hard work.
I agree, unfortunate typo
Some improvements have been made in this area.
Can anyone confirm if this problem was fixed in the latest builds?
So as noted in comment in rB60bed34f165b, FFMPEG 4.3.2… does not yet exist. Looks like it should be 4.2.3?
Jun 2 2020
I've pretty much picked off the easy bits, the ones still open are linux only, and USD which needs changes to the integration code, @Sybren A. Stüvel (sybren) mind taking this one on linux? once the build issues in blender are taken care of i'll do the windows libs. | https://developer.blender.org/feed/?projectPHIDs=PHID-PROJ-6i7z4qtnswaj7epemlno | CC-MAIN-2020-29 | refinedweb | 1,068 | 75.3 |
#include "avformat.h"
#include "libavutil/avstring.h"
#include "rtpdec.h"
#include "rdt.h"
#include "libavutil/base64.h"
#include "libavutil/md5.h"
#include "rm.h"
#include "internal.h"
#include "avio
Definition at line 553 of file rdt.c.
Referenced by ff_rdt_calc_response_and_checksum().
Definition at line 458 of file rdt.c.
Referenced by real_parse_asm_rulebook().
Register RDT-related dynamic payload handlers with our cache.
Definition at line 569 of file rdt.c.
Referenced by av_register_all().
Definition at line 78 of file rdt.c.
Referenced by ff_rtsp_undo_setup().
Actual data(). 132 of file rdt.c.
Referenced by rdt_parse_sdp_line().
Definition at line 392 of file rdt.c.
Referenced by rdt_parse_sdp_line().
Definition at line 445 of file rdt.c.
Referenced by real_parse_asm_rulebook().
The ASMRuleBook contains a list of comma-separated strings per rule, and each rule is separated by a ;. The last one also has a ; at the end so we can use it as delimiter. Every rule occurs twice, once for when the RTSP packet header marker is set and once for if it isn't. We only read the first because we don't care much (that's what the "odd" variable is for). Each rule contains a set of one or more statements, optionally…
Definition at line 72 of file rdt.c.
Referenced by ff_real_parse_sdp_a_line(). | http://ffmpeg.org/doxygen/0.9/rdt_8c.html | CC-MAIN-2014-41 | refinedweb | 203 | 56.52 |
virtualpathresolver(prefix)
When implementing Virtual Source Objects it's important that you can locate them. While virtual sources do not appear as children of pages they can be navigated to through the database pad. This is achieved through custom virtual path resolvers.
Each source object needs a unique prefix which identifies all instances of it. For instance, if you want to implement a special page (like a feed) that would exist for specific pages, then you can register the virtual path prefix feed with your plugin. Though you should use more descriptive names for your plugins, as these paths are shared.
If a user then resolves the page /my-page@feed, it would invoke your URL resolver with the record my-page and an empty list (the rest of the path segments). If they request /my-page@feed/recent, then it would pass ['recent'] as the path segments.
Here an example that would implement a virtual source for feeds and an associated virtual path resolver:
from lektor.sourceobj import VirtualSourceObject
from lektor.utils import build_url


class Feed(VirtualSourceObject):
    def __init__(self, record, version='default'):
        VirtualSourceObject.__init__(self, record)
        self.version = version

    @property
    def path(self):
        return '%s@feed%s' % (
            self.record.path,
            '/' + self.version if self.version != 'default' else '',
        )

    @property
    def url_path(self):
        return build_url([self.record.url_path, 'feed.xml'])


def on_setup_env(self, **extra):
    @self.env.virtualpathresolver('feed')
    def resolve_virtual_path(record, pieces):
        if not pieces:
            return Feed(record)
        elif pieces == ['recent']:
            return Feed(record, version='recent')
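The resolver mechanics can be illustrated without Lektor itself: a virtual path such as /my-page@feed/recent splits into the record path, the registered prefix, and the remaining pieces handed to your resolver. The helper below is a hypothetical sketch of that split, not Lektor's actual implementation:

```python
# Hypothetical sketch of virtual-path splitting; Lektor's real resolver
# machinery is more involved.

def split_virtual_path(path):
    """Split 'record@prefix/piece1/piece2' into (record, prefix, pieces)."""
    record_path, _, virtual = path.partition("@")
    if not virtual:
        return record_path, None, []
    prefix, *pieces = virtual.split("/")
    return record_path, prefix, pieces

print(split_virtual_path("/my-page@feed"))          # ('/my-page', 'feed', [])
print(split_virtual_path("/my-page@feed/recent"))   # ('/my-page', 'feed', ['recent'])
```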
Question
Create X-bar, R, and s charts for the following data on the diameter of pins produced by an automatic lathe. The data summarize 12 samples of size 5 each, obtained at random on 12 different days. Is the process in control? If it is not, remove the sample that is out of control and redraw the charts.
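Since the sample data is only available in the original image, here is a hedged sketch of how the three charts' control limits would be computed for samples of size 5, using the standard Shewhart constants (for n = 5: A2 ≈ 0.577, D3 = 0, D4 ≈ 2.114, B3 = 0, B4 ≈ 2.089). The diameters below are made-up placeholders, not the data from the question:

```python
import statistics

# Standard Shewhart chart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114
B3, B4 = 0.0, 2.089

# Made-up placeholder samples (5 pins per day); substitute the table's data.
samples = [
    [2.01, 1.98, 2.00, 2.02, 1.99],
    [2.00, 2.03, 1.97, 2.01, 2.00],
    [1.99, 2.00, 2.02, 1.98, 2.01],
]

xbars  = [statistics.mean(s) for s in samples]
ranges = [max(s) - min(s) for s in samples]
sds    = [statistics.stdev(s) for s in samples]

xbarbar, rbar, sbar = map(statistics.mean, (xbars, ranges, sds))

# X-bar chart (limits from R-bar), R chart, and s chart
xbar_ucl, xbar_lcl = xbarbar + A2 * rbar, xbarbar - A2 * rbar
r_ucl,    r_lcl    = D4 * rbar, D3 * rbar
s_ucl,    s_lcl    = B4 * sbar, B3 * sbar

# A sample signals "out of control" if its mean falls outside the limits;
# such samples are removed and the limits recomputed, as the question asks.
out = [i for i, x in enumerate(xbars) if not (xbar_lcl <= x <= xbar_ucl)]
```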
HoloLens (1st gen) and Azure 303: Natural language understanding (LUIS)

This course will teach you how to integrate Language Understanding into a mixed reality application using Azure Cognitive Services, with the Language Understanding API.
Language Understanding (LUIS) is a Microsoft Azure service, which provides applications with the ability to make meaning out of user input, such as through extracting what a person might want, in their own words. This is achieved through machine learning, which understands and learns the input information, and then can reply with detailed, relevant, information. For more information, visit the Azure Language Understanding (LUIS) page.
Having completed this course, you will have a mixed reality immersive headset application which will be able to do the following:
- Capture user input speech, using the Microphone attached to the immersive headset.
- Send the captured dictation to the Azure Language Understanding Intelligent Service (LUIS).
- Have LUIS extract meaning from the sent information, which will be analyzed in an attempt to determine the intent of the user's request.
Development will include the creation of an app where the user will be able to use voice and/or gaze to change the size and the color of the objects in the scene. The use of motion controllers will not be covered.
Be prepared to train LUIS several times, which is covered in Chapter 12. You will get better results the more times LUIS has been trained.
Before you start
To avoid encountering issues building this project, it is strongly suggested that you create the project mentioned in this tutorial in a root or near-root folder (long folder paths can cause issues at build-time).
To allow your machine to enable Dictation, go to Windows Settings > Privacy > Speech, Inking & Typing and press on the button Turn On speech services and typing suggestions.
The code in this tutorial will allow you to record from the Default Microphone Device set on your machine. Make sure the Default Microphone Device is set as the one you wish to use to capture your voice.
If your headset has a built-in microphone, make sure the option “When I wear my headset, switch to headset mic” is turned on in the Mixed Reality Portal settings.
Chapter 1 – Setup Azure Portal
To use the Language Understanding service in Azure, you will need to configure an instance of the service to be made available to your application. In the Azure Portal, click New in the top-left corner, search for Language Understanding, and click Enter.
Note
The word New may have been replaced with Create a resource, in newer portals.
The new page to the right will provide a description of the Language Understanding service. At the bottom left of this page, select the Create button, to create an instance of this service.
Once you have clicked on Create:
Insert your desired Name for this service instance.
Select a Subscription.
Select the Pricing Tier appropriate for you. If this is the first time creating a LUIS Service, a free tier (named F0) should be available to you. The free allocation should be more than sufficient for this course.
Within this tutorial, your application will need to make calls to your service, which is done through using your service’s Subscription Key.
From the Quick start page of your LUIS service, take a copy of your Key; you will need it shortly in the LUIS portal.
In the Service page, click on Language Understanding Portal to be redirected to the webpage which you will use to create your new Service, within the LUIS App.
Chapter 2 – The Language Understanding Portal
In this section you will learn how to make a LUIS App on the LUIS Portal.
Important
Please be aware, that setting up the Entities, Intents, and Utterances within this chapter is only the first step in building your LUIS service: you will also need to retrain the service, several times, so to make it more accurate. Retraining your service is covered in the last Chapter of this course, so ensure that you complete it.
Upon reaching the Language Understanding Portal, you may need to login, if you are not already, with the same credentials as your Azure portal.
If this is your first time using LUIS, you will need to scroll down to the bottom of the welcome page, to find and click on the Create LUIS app button.
Once logged in, click My apps (if you are not in that section currently). You can then click on Create new app.
Give your app a Name.
If your app is supposed to understand a language different from English, you should change the Culture to the appropriate language.
Here you can also add a Description of your new LUIS app.
Once you press Done, you will enter the Build page of your new LUIS application.
There are a few important concepts to understand here:
- Intent, represents the method that will be called following a query from the user. An INTENT may have one or more ENTITIES.
- Entity, is a component of the query that describes information relevant to the INTENT.
- Utterances, are examples of queries provided by the developer, that LUIS will use to train itself.
If these concepts are not perfectly clear, do not worry, as this course will clarify them further in this chapter.
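To make Intent and Entity concrete: for the utterance "make the cube black", the model built in this chapter should report ChangeObjectColor as the top-scoring Intent, with "cube" labeled as a target Entity and "black" as a color Entity. The JSON below is a hand-written illustration of the response shape this course's code consumes, not a captured LUIS reply:

```python
import json

# Hand-written illustration of a LUIS v2 endpoint response (not a real capture).
sample = json.loads("""
{
  "query": "make the cube black",
  "topScoringIntent": { "intent": "ChangeObjectColor", "score": 0.95 },
  "entities": [
    { "entity": "cube",  "type": "target", "startIndex": 9,  "endIndex": 12, "score": 0.91 },
    { "entity": "black", "type": "color",  "startIndex": 14, "endIndex": 18, "score": 0.88 }
  ]
}
""")

intent = sample["topScoringIntent"]["intent"]
# One entity value per entity type -- the same dictionary the tutorial's
# C# AnalyseResponseElements() method builds later.
entities = {e["type"]: e["entity"] for e in sample["entities"]}
```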
You will begin by creating the Entities needed to build this course.
On the left side of the page, click on Entities, then click on Create new entity.
Call the new Entity color, set its type to Simple, then press Done.
Repeat this process to create three (3) more Simple Entities named:
- upsize
- downsize
- target
The result should look like the image below:
At this point you can begin creating Intents.
Warning
Do not delete the None intent.
On the left side of the page, click on Intents, then click on Create new intent.
Call the new Intent ChangeObjectColor.
Important
This Intent name is used within the code later in this course, so for best results, use this name exactly as provided.
Once you confirm the name you will be directed to the Intents Page.
You will notice that there is a textbox asking you to type 5 or more different Utterances.
Note
LUIS converts all Utterances to lower case.
- Insert the following Utterance in the top textbox (currently with the text Type about 5 examples… ), and press Enter:
The color of the cylinder must be red
You will notice that the new Utterance will appear in a list underneath.
Following the same process, insert the following six (6) Utterances:
make the cube black
make the cylinder color white
change the sphere to red
change it to green
make this yellow
change the color of this object to blue
For each Utterance you have created, you must identify which words should be used by LUIS as Entities. In this example you need to label all the colors as a color Entity, and all the possible reference to a target as a target Entity.
To do so, try clicking on the word cylinder in the first Utterance and select target.
Now click on the word red in the first Utterance and select color.
Label the next line also, where cube should be a target, and black should be a color. Notice also the use of the words ‘this’, ‘it’, and ‘this object’, which we are providing, so to have non-specific target types available also.
Repeat the process above until all the Utterances have the Entities labelled. See the below image if you need help.
Tip
When selecting words to label them as entities:
- For single words just click them.
- For a set of two or more words, click at the beginning and then at the end of the set.
Note
You can use the Tokens View toggle button to switch between Entities / Tokens View!
The results should be as seen in the images below, showing the Entities / Tokens View:
At this point press the Train button at the top-right of the page and wait for the small round indicator on it to turn green. This indicates that LUIS has been successfully trained to recognize this Intent.
As an exercise for you, create a new Intent called ChangeObjectSize, using the Entities target, upsize, and downsize.
Following the same process as the previous Intent, insert the following eight (8) Utterances for Size change:
increase the dimensions of that
reduce the size of this
i want the sphere smaller
make the cylinder bigger
size down the sphere
size up the cube
decrease the size of that object
increase the size of this object
The result should be like the one in the image below:
Once both Intents, ChangeObjectColor and ChangeObjectSize, have been created and trained, click on the PUBLISH button on top of the page.
On the Publish page you will finalize and publish your LUIS App so that it can be accessed by your code.
Set the drop down Publish To as Production.
Set the Timezone to your time zone.
Check the box Include all predicted intent scores.
Click on Publish to Production Slot.
In the section Resources and Keys:
- Select the region you set for service instance in the Azure Portal.
- You will notice a Starter_Key element below, ignore it.
- Click on Add Key and insert the Key that you obtained in the Azure Portal when you created your Service instance. If your Azure and LUIS portals are logged into the same user, you will be provided drop-down menus for Tenant name, Subscription Name, and the Key you wish to use (it will have the same name as you provided previously in the Azure Portal).
Important
Underneath Endpoint, take a copy of the endpoint corresponding to the Key you have inserted; you will soon use it in your code.
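As a sketch of how that endpoint gets used: the published endpoint ends with &q=, and the application appends the percent-encoded dictation to it (the C# code later uses Uri.EscapeDataString for this). Every value below — region, app ID, key — is a placeholder, not a real endpoint:

```python
from urllib.parse import quote

# Placeholder endpoint; yours comes from the LUIS portal's Publish page.
luis_endpoint = ("https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/"
                 "00000000-0000-0000-0000-000000000000"
                 "?verbose=true&timezoneOffset=0"
                 "&subscription-key=<YOUR-KEY>&q=")

dictation = "change the sphere to red"
# Percent-encode the dictation, mirroring Uri.EscapeDataString in the C# code.
request_url = luis_endpoint + quote(dictation)
```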
Chapter 3 – Set up the Unity project
The following is a typical set up for developing with mixed reality, and as such, is a good template for other projects.
Open Unity and click New.
You will now need to provide a Unity Project name; insert MR_LUIS. Name your scene LuisScene, then press Save.

Chapter 4 – Create the scene
Important
If you wish to skip the Unity Set up component of this course, and continue straight into code, feel free to download this .unitypackage, import it into your project as a Custom Package, and then continue from Chapter 5.
Right-click in an empty area of the Hierarchy Panel, under 3D Object, add a Plane.
Be aware that when you right-click within the Hierarchy again to create more objects, if you still have the last object selected, the selected object will be the parent of your new object. Avoid this by left-clicking in an empty space within the Hierarchy, and then right-clicking.
Repeat the above procedure to add the following objects:
- Sphere
- Cylinder
- Cube
- 3D Text
The resulting scene Hierarchy should be like the one in the image below:
Left click on the Main Camera to select it; in the Inspector Panel you will see the Camera object with all of its components.
Click on the Add Component button located at the very bottom of the Inspector Panel.
Also make sure that the Transform component of the Main Camera is set to (0,0,0), this can be done by pressing the Gear icon next to the Camera’s Transform component and selecting Reset. The Transform component should then look like:
- Position is set to 0, 0, 0.
- Rotation is set to 0, 0, 0.
Note
For the Microsoft HoloLens, you will need to also change the following, which are part of the Camera component, which is on your Main Camera:
- Clear Flags: Solid Color.
- Background ‘Black, Alpha 0’ – Hex color: #00000000.
Left click on the Plane to select it. In the Inspector Panel set the Transform component with the following values:
Left click on the Sphere to select it. In the Inspector Panel set the Transform component with the following values:
Left click on the Cylinder to select it. In the Inspector Panel set the Transform component with the following values:
Left click on the Cube to select it. In the Inspector Panel set the Transform component with the following values:
Left click on the New Text object to select it. In the Inspector Panel set the Transform component with the following values:
Change Font Size in the Text Mesh component to 50.
Change the name of the Text Mesh object to Dictation Text.
Your Hierarchy Panel structure should now look like this:
The final scene should look like the image below:
Chapter 5 – Create the MicrophoneManager class
The first Script you are going to create is the MicrophoneManager class. Following this, you will create the LuisManager, the Behaviours class, and lastly the Gaze class (feel free to create all these now, though it will be covered as you reach each Chapter).
The MicrophoneManager class is responsible for:
- Detecting the recording device attached to the headset or machine (whichever is the default one).
- Capture the audio (voice) and use dictation to store it as a string.
- Once the voice has paused, submit the dictation to the LuisManager class.
To create this class:
Right-click in the Project Panel, Create > Folder. Call the folder Scripts.
With the Scripts folder created, double click it, to open. Then, within that folder, right-click, Create > C# Script. Name the script MicrophoneManager.
Double click on MicrophoneManager to open it with Visual Studio.
Add the following namespaces to the top of the file:
using UnityEngine; using UnityEngine.Windows.Speech;
Then add the following variables inside the MicrophoneManager class:
public static MicrophoneManager instance; //help to access instance of this object private DictationRecognizer dictationRecognizer; //Component converting speech to text public TextMesh dictationText; //a UI object used to debug dictation result
Code for Awake() and Start() methods now needs to be added. These will be called when the class initializes:
private void Awake() { // allows this class instance to behave like a singleton instance = this; } void Start() { if (Microphone.devices.Length > 0) { StartCapturingAudio(); Debug.Log("Mic Detected"); } }
Now you need the method that the App uses to start and stop the voice capture, and pass it to the LuisManager class, that you will build soon.
/// <summary> /// Start microphone capture, by providing the microphone as a continual audio source (looping), /// then initialise the DictationRecognizer, which will capture spoken words /// </summary> public void StartCapturingAudio() { if (dictationRecognizer == null) { dictationRecognizer = new DictationRecognizer { InitialSilenceTimeoutSeconds = 60, AutoSilenceTimeoutSeconds = 5 }; dictationRecognizer.DictationResult += DictationRecognizer_DictationResult; dictationRecognizer.DictationError += DictationRecognizer_DictationError; } dictationRecognizer.Start(); Debug.Log("Capturing Audio..."); } /// <summary> /// Stop microphone capture /// </summary> public void StopCapturingAudio() { dictationRecognizer.Stop(); Debug.Log("Stop Capturing Audio..."); }
Add a Dictation Handler that will be invoked when the voice pauses. This method will pass the dictation text to the LuisManager class.
/// <summary> /// This handler is called every time the Dictation detects a pause in the speech. /// This method will stop listening for audio, send a request to the LUIS service /// and then start listening again. /// </summary> private void DictationRecognizer_DictationResult(string dictationCaptured, ConfidenceLevel confidence) { StopCapturingAudio(); StartCoroutine(LuisManager.instance.SubmitRequestToLuis(dictationCaptured, StartCapturingAudio)); Debug.Log("Dictation: " + dictationCaptured); dictationText.text = dictationCaptured; } private void DictationRecognizer_DictationError(string error, int hresult) { Debug.Log("Dictation exception: " + error); }
Important
Delete the Update() method since this class will not use it.
Be sure to save your changes in Visual Studio before returning to Unity.
Note
At this point you will notice an error appearing in the Unity Editor Console Panel. This is because the code references the LuisManager class which you will create in the next Chapter.
Chapter 6 – Create the LUISManager class
It is time for you to create the LuisManager class, which will make the call to the Azure LUIS service.
The purpose of this class is to receive the dictation text from the MicrophoneManager class and send it to the Azure Language Understanding API to be analyzed.
This class will deserialize the JSON response and call the appropriate methods of the Behaviours class to trigger an action.
To create this class:
Double click on the Scripts folder, to open it.
Right-click inside the Scripts folder, click Create > C# Script. Name the script LuisManager.
Double click on the script to open it with Visual Studio.
Add the following namespaces to the top of the file:
using System; using System.Collections; using System.Collections.Generic; using System.IO; using UnityEngine; using UnityEngine.Networking;
You will begin by creating three classes inside the LuisManager class (within the same script file, above the Start() method) that will represent the deserialized JSON response from Azure.
[Serializable] //this class represents the LUIS response public class AnalysedQuery { public TopScoringIntentData topScoringIntent; public EntityData[] entities; public string query; } // This class contains the Intent LUIS determines // to be the most likely [Serializable] public class TopScoringIntentData { public string intent; public float score; } // This class contains data for an Entity [Serializable] public class EntityData { public string entity; public string type; public int startIndex; public int endIndex; public float score; }
Next, add the following variables inside the LuisManager class:
public static LuisManager instance; //Substitute the value of luis Endpoint with your own End Point string luisEndpoint = "... add your endpoint from the Luis Portal";
Make sure to insert your LUIS endpoint now (which you will have from your LUIS portal).
Code for the Awake() method now needs to be added. This method will be called when the class initializes:
private void Awake() { // allows this class instance to behave like a singleton instance = this; }
Now you need the methods this application uses to send the dictation received from the MicrophoneManager class to LUIS, and then receive and deserialize the response.
Once the value of the Intent, and associated Entities, have been determined, they are passed to the instance of the Behaviours class to trigger the intended action.
/// <summary> /// Call LUIS to submit a dictation result. /// The done Action is called at the completion of the method. /// </summary> public IEnumerator SubmitRequestToLuis(string dictationResult, Action done) { string queryString = string.Concat(Uri.EscapeDataString(dictationResult)); using (UnityWebRequest unityWebRequest = UnityWebRequest.Get(luisEndpoint + queryString)) { yield return unityWebRequest.SendWebRequest(); if (unityWebRequest.isNetworkError || unityWebRequest.isHttpError) { Debug.Log(unityWebRequest.error); } else { try { AnalysedQuery analysedQuery = JsonUtility.FromJson<AnalysedQuery>(unityWebRequest.downloadHandler.text); //analyse the elements of the response AnalyseResponseElements(analysedQuery); } catch (Exception exception) { Debug.Log("Luis Request Exception Message: " + exception.Message); } } done(); yield return null; } }
Create a new method called AnalyseResponseElements() that will read the resulting AnalysedQuery and determine the Entities. Once those Entities are determined, they will be passed to the instance of the Behaviours class to use in the actions.
private void AnalyseResponseElements(AnalysedQuery aQuery) { string topIntent = aQuery.topScoringIntent.intent; // Create a dictionary of entities associated with their type Dictionary<string, string> entityDic = new Dictionary<string, string>(); foreach (EntityData ed in aQuery.entities) { entityDic.Add(ed.type, ed.entity); } // Depending on the topmost recognized intent, read the entities name switch (aQuery.topScoringIntent.intent) { case "ChangeObjectColor": string targetForColor = null; string color = null; foreach (var pair in entityDic) { if (pair.Key == "target") { targetForColor = pair.Value; } else if (pair.Key == "color") { color = pair.Value; } } Behaviours.instance.ChangeTargetColor(targetForColor, color); break; case "ChangeObjectSize": string targetForSize = null; foreach (var pair in entityDic) { if (pair.Key == "target") { targetForSize = pair.Value; } } if (entityDic.ContainsKey("upsize") == true) { Behaviours.instance.UpSizeTarget(targetForSize); } else if (entityDic.ContainsKey("downsize") == true) { Behaviours.instance.DownSizeTarget(targetForSize); } break; } }
Important
Delete the Start() and Update() methods since this class will not use them.
Be sure to save your changes in Visual Studio before returning to Unity.
Note
At this point you will notice several errors appearing in the Unity Editor Console Panel. This is because the code references the Behaviours class which you will create in the next Chapter.
Chapter 7 – Create the Behaviours class
The Behaviours class will trigger the actions using the Entities provided by the LuisManager class.
To create this class:
Double click on the Scripts folder, to open it.
Right-click inside the Scripts folder, click Create > C# Script. Name the script Behaviours.
Double click on the script to open it with Visual Studio.
Then add the following variables inside the Behaviours class:
public static Behaviours instance; // the following variables are references to possible targets public GameObject sphere; public GameObject cylinder; public GameObject cube; internal GameObject gazedTarget;
Add the Awake() method code. This method will be called when the class initializes:
void Awake() { // allows this class instance to behave like a singleton instance = this; }
The following methods are called by the LuisManager class (which you have created previously) to determine which object is the target of the query and then trigger the appropriate action.
/// <summary> /// Changes the color of the target GameObject by providing the name of the object /// and the name of the color /// </summary> public void ChangeTargetColor(string targetName, string colorName) { GameObject foundTarget = FindTarget(targetName); if (foundTarget != null) { Debug.Log("Changing color " + colorName + " to target: " + foundTarget.name); switch (colorName) { case "blue": foundTarget.GetComponent<Renderer>().material.color = Color.blue; break; case "red": foundTarget.GetComponent<Renderer>().material.color = Color.red; break; case "yellow": foundTarget.GetComponent<Renderer>().material.color = Color.yellow; break; case "green": foundTarget.GetComponent<Renderer>().material.color = Color.green; break; case "white": foundTarget.GetComponent<Renderer>().material.color = Color.white; break; case "black": foundTarget.GetComponent<Renderer>().material.color = Color.black; break; } } } /// <summary> /// Reduces the size of the target GameObject by providing its name /// </summary> public void DownSizeTarget(string targetName) { GameObject foundTarget = FindTarget(targetName); foundTarget.transform.localScale -= new Vector3(0.5F, 0.5F, 0.5F); } /// <summary> /// Increases the size of the target GameObject by providing its name /// </summary> public void UpSizeTarget(string targetName) { GameObject foundTarget = FindTarget(targetName); foundTarget.transform.localScale += new Vector3(0.5F, 0.5F, 0.5F); }
Add the FindTarget() method to determine which of the GameObjects is the target of the current Intent. This method defaults the target to the GameObject being “gazed” if no explicit target is defined in the Entities.
/// <summary> /// Determines which object reference is the target GameObject by providing its name /// </summary> private GameObject FindTarget(string name) { GameObject targetAsGO = null; switch (name) { case "sphere": targetAsGO = sphere; break; case "cylinder": targetAsGO = cylinder; break; case "cube": targetAsGO = cube; break; case "this": // as an example of target words that the user may use when looking at an object case "it": // as this is the default, these are not actually needed in this example case "that": default: // if the target name is none of those above, check if the user is looking at something if (gazedTarget != null) { targetAsGO = gazedTarget; } break; } return targetAsGO; }
Important
Delete the Start() and Update() methods since this class will not use them.
Be sure to save your changes in Visual Studio before returning to Unity.
Chapter 8 – Create the Gaze Class
The last class that you will need to complete this app is the Gaze class. This class updates the reference to the GameObject currently in the user’s visual focus.
To create this Class:
Double click on the Scripts folder, to open it.
Right-click inside the Scripts folder, click Create > C# Script. Name the script Gaze.
Double click on the script to open it with Visual Studio.
Insert the following code for this class:
using UnityEngine; public class Gaze : MonoBehaviour { internal GameObject gazedObject; public float gazeMaxDistance = 300; void Update() { // Uses a raycast from the Main Camera to determine which object is gazed upon. Vector3 fwd = gameObject.transform.TransformDirection(Vector3.forward); Ray ray = new Ray(Camera.main.transform.position, fwd); RaycastHit hit; Debug.DrawRay(Camera.main.transform.position, fwd); if (Physics.Raycast(ray, out hit, gazeMaxDistance) && hit.collider != null) { if (gazedObject == null) { gazedObject = hit.transform.gameObject; // Set the gazedTarget in the Behaviours class Behaviours.instance.gazedTarget = gazedObject; } } else { ResetGaze(); } } // Turn the gaze off, reset the gazeObject in the Behaviours class. public void ResetGaze() { if (gazedObject != null) { Behaviours.instance.gazedTarget = null; gazedObject = null; } } }
Be sure to save your changes in Visual Studio before returning to Unity.
Chapter 9 – Completing the scene setup
To complete the setup of the scene, drag each script that you have created from the Scripts Folder to the Main Camera object in the Hierarchy Panel.
Select the Main Camera and look at the Inspector Panel, you should be able to see each script that you have attached, and you will notice that there are parameters on each script that are yet to be set.
To set these parameters correctly, follow these instructions:
MicrophoneManager:
- From the Hierarchy Panel, drag the Dictation Text object into the Dictation Text parameter value box.
Behaviours, from the Hierarchy Panel:
- Drag the Sphere object into the Sphere reference target box.
- Drag the Cylinder into the Cylinder reference target box.
- Drag the Cube into the Cube reference target box.
Gaze:
- Set the Gaze Max Distance to 300 (if it is not already).
The result should look like the image below:
Chapter 10 – Test in the Unity Editor
Test that the Scene setup is properly implemented.
Ensure that:
- All the scripts are attached to the Main Camera object.
- All the fields in the Main Camera Inspector Panel are assigned properly.
Press the Play button in the Unity Editor. The App should be running within the attached immersive headset.
Try a few utterances, such as:
make the cylinder red
change the cube to yellow
I want the sphere blue
make this to green
change it to white
Chapter 11 – Build and sideload the UWP Solution
Once you have ensured that the application is working in the Unity Editor, you are ready to Build and Deploy.
To Build:
Save the current scene by clicking on File > Save.
Go to File > Build Settings.
Tick the box called Unity C# Projects (useful for seeing and debugging your code once the UWP project is created).
Click on Add Open Scenes, then click Build.
You will be prompted to select the folder where you want to build the Solution.
Create a BUILDS folder and within that folder create another folder with an appropriate name of your choice.
Click Select Folder to begin the build at that location.
Once Unity has finished building (it might take some time), it should open a File Explorer window at the location of your build.
To Deploy on Local Machine:
In Visual Studio, open the solution file that has been created in the previous Chapter.
In the Solution Platform, select x86, Local Machine.
In the Solution Configuration select Debug. Go to the Build menu and click on Deploy Solution to sideload the application to your machine.
Your App should now appear in the list of installed apps, ready to be launched!
Once launched, the App will prompt you to authorize access to the Microphone. Use the Motion Controllers, or Voice Input, or the Keyboard to press the YES button.
Chapter 12 – Improving your LUIS service
Important
This chapter is incredibly important, and may need to be iterated upon several times, as it will help improve the accuracy of your LUIS service: ensure you complete this.
To improve the level of understanding provided by LUIS you need to capture new utterances and use them to re-train your LUIS App.
For example, you might have trained LUIS to understand “Increase” and “Upsize”, but wouldn’t you want your app to also understand words like “Enlarge”?
Once you have used your application a few times, everything you have said will be collected by LUIS and available in the LUIS PORTAL.
Go to your portal application following this LINK, and Log In.
Once you are logged in with your MS Credentials, click on your App name.
Click the Review endpoint utterances button on the left of the page.
You will be shown a list of the Utterances that have been sent to LUIS by your mixed reality Application.
You will notice some highlighted Entities.
By hovering over each highlighted word, you can review each Utterance and determine which Entity has been recognized correctly, which Entities are wrong and which Entities are missed.
In the example above, it was found that the word "spear" had been highlighted as a target, so it was necessary to correct the mistake, which is done by hovering over the word with the mouse and clicking Remove Label.
If you find Utterances that are completely wrong, you can delete them using the Delete button on the right side of the screen.
Or if you feel that LUIS has interpreted the Utterance correctly, you can validate its understanding by using the Add To Aligned Intent button.
Once you have sorted all the displayed Utterances, try and reload the page to see if more are available.
It is very important to repeat this process as many times as possible to improve your application understanding.
Have fun!
Your finished LUIS Integrated application
Congratulations, you built a mixed reality app that leverages the Azure Language Understanding Intelligence Service, to understand what a user says, and act on that information.
Bonus exercises
Exercise 1
While using this application you might notice that if you gaze at the Floor object and ask to change its color, it will do so. Can you work out how to stop your application from changing the Floor color?
Exercise 2
Try extending the LUIS and App capabilities, adding additional functionality for objects in the scene; as an example, create new objects at the Gaze hit point, depending on what the user says, and then be able to use those objects alongside current scene objects, with the existing commands.
This is where things get interesting. Our io_read() function gets called to handle three things:
The first two operate on a file, and the last operates on a directory. So that's the first decision point in our new io_read() handler:
static int io_read (resmgr_context_t *ctp, io_read_t *msg, RESMGR_OCB_T *ocb)
{
    int sts;

    // use the helper function to decide if valid
    if ((sts = iofunc_read_verify (ctp, msg, &ocb->base, NULL)) != EOK) {
        return (sts);
    }

    // decide if we should perform the "file" or "dir" read
    if (S_ISDIR (ocb->base.attr->base.mode)) {
        return (io_read_dir (ctp, msg, ocb));
    } else if (S_ISREG (ocb->base.attr->base.mode)) {
        return (io_read_file (ctp, msg, ocb));
    } else {
        return (EBADF);
    }
}
By looking at the attributes structure's mode field, we can tell if the request is for a file or a directory. After all, we set this bit ourselves when we initialized the attributes structures (the S_IFDIR and S_IFREG values). | http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.cookbook/topic/s2_web_io_read_phase3.html | CC-MAIN-2022-27 | refinedweb | 149 | 67.69 |
COAP:COAP-2180/week6
Contents
- 1 Week 6
- 2 Monday
- 3 Wednesday
- 4 Homework 5/6
- 5 List of teaching materials
1 Week 6
Week six will focus on introducing some popular XML applications
Main topics:
- Discussion of the exam
- Discussion of the term project
- XML namespaces (short recall)
- Introduction to static SVG and SMIL animations
- Discussion of the ePUB format: Quick mention of Calibre, a ePUB generating software and demonstration of Sigil (an ePub editor)
2 Monday
2.1 Discussion of the mid-term
Short discussion of the mid-term.
2.2 Discussion of the term project / DTD refinement
The term project will include:
- A DTD that models a "domain of your choice". This domain should be somewhat text-centric, i.e. one must be able to display the data in a meaningful way with an XSLT stylesheet.
- An extensive XML test file
- Rendering in HTML via XSLT + CSS, therefore an .xsl and a .css file for the resulting HTML
- A second rendering in HTML via XSLT + CSS that will filter and (optionally) rearrange some data.
- An XML Schema (week 7) that adds some data constraints
- A report/documentation in HTML, PDF or Word format (at least 1/2 page, but more if you aim for a top grade)
Other constraints
- All elements can be revisions of prior homework
- Prior to turning in the project, the instructor must validate a draft DTD if the project is different from one of the prior homework. This is to prevent both failure and cheating...
On Wednesday, the instructor may discuss with each student if modifications ought to be made to DTDs made for prior homework (ask !)
Due:
- Presentation/demo on Monday week 8
- Final version on Wednesday week 8
2.3 ID / IDREF attributes
ID attributes require unique values. Below is an example that demonstrates use of an ID, an IDREF, and an IDREFS attribute in order to create links between persons.
<?xml version = "1.0"?>
<!DOCTYPE Folks [
<!ELEMENT Folks (Person*)>
<!ELEMENT Person (Name,Email?)>
<!ATTLIST Person Pin ID #REQUIRED>
<!ATTLIST Person Friend IDREF #IMPLIED>
<!ATTLIST Person Likes IDREFS #IMPLIED>
<!ELEMENT Name (#PCDATA)>
<!ELEMENT Email (#PCDATA)>
]>
<Folks>
  <Person Pin = "N1">
    <Name>Someone</Name>
  </Person>
  <Person Pin = "N2" Friend = "N1">
    <Name>Someone else</Name>
  </Person>
  <Person Pin = "N3" Likes = "N2 N1">
    <Name>Fan</Name>
  </Person>
</Folks>
2.4 XSLT code snippets for embedding pictures
- See also XSLT Tutorial - Basics and the optional XPath tutorial - basics and XSLT to generate SVG tutorial
URL is text of an element (<image>filename.png</image>):

<xsl:template match="image">
  <p>
    <img src="{.}"/>
  </p>
</xsl:template>
URL is defined with a "source" attribute (<image source="filename.png">Cool picture</image>):

<xsl:template match="image">
  <p>
    <img src="{@source}"/><br/>
    <xsl:value-of select="."/> <!-- insert a caption -->
  </p>
</xsl:template>
Links follow the same logic, identify the HTML result you need, and read this
2.5 Creating link HTML links with XSLT
Creating links follows the same logic as dealing with pictures.
- Read: Producing HTML links (chapter from the XSLT Tutorial - Basics)
2.6 Internal tables of contents
Creating an internal table of contents is a bit more complicated. You must
- create internal anchors (<a name="....">...</a>)
- then create links that point to these (<a href="#....">...</a>)
Try to find a solution on the web, e.g. with a Google search like "stackoverflow xslt TOC". Make sure to narrow down good answers to simple solutions. Alternatively, search for "xslt create simple table of content".
New: Read Creating_an_internal_table_of_contents (two examples that may help)
2.7 SVG
- Static SVG tutorial
- SVG/SMIL animation tutorial, Interactive SVG-SMIL animation tutorial
- XSLT to generate SVG tutorial
- Tools
- (Online editor)
- Inkscape (should be installed on your computer)
- Resources
- (largest open source clipart collection)
- (huge collection of icons)
2.8 Code snippets for SVG visualization with XSLT
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="html"/>

<xsl:template match="/">
  <html xmlns="">
    <head>
      <meta charset="utf-8"></meta>
      <title>Anything</title>
    </head>
    <body>
      <xsl:apply-templates/>
    </body>
  </html>
</xsl:template>

<!-- replace "SOMETHING" by the element whose value should drive the bar -->
<xsl:template match="SOMETHING">
  Something here: <xsl:apply-templates/>
  <xsl:variable name="NUMBER" select="."/>
  <svg style="background-color:yellow;float:right" width="20" height="100"
       xmlns="http://www.w3.org/2000/svg">
    <rect x="5px" y="5px" height="{$NUMBER*1.2}" width="10px" fill="green" />
  </svg>
</xsl:template>

</xsl:stylesheet>
3 Wednesday
- SVG (continued ...): Adding an animation to a clipart object.
- Electronic books with Epub
- Time for work on individual projects
3.1 Integration of XML languages with namespaces
Textbook chapters
- Harold, XML in a Nutshell, Chapter 4 Namespaces (more informative)
- Learning XML, Chapter "Markup and Core Concept". Namespace are shortly explained in the Elements Section
4 Homework 5/6
Unless you are behind with homework, pick only one.
4.1 Homework 5
Task
- Revise homework 4 so that the DTD includes internal and external links and pictures. Alternatively, create a new DTD
- Modify or create the XSLT so that it will display pictures and internal as well as external links. Displaying an internal link also can mean that you insert a value of the linked element (e.g. a name).
- Create a CSS for the generated HTML or improve the existing one (either use inline styles or, better, an external CSS file)
- (Optional), output HTML5 including some SVG
Tips:
- Use a template for each of your elements, otherwise make sure that you include contents using xsl:value-of or {...} within HTML attributes.
- Elements that only include other elements and no text should just be made into <div> .... </div> blocks for styling
- For debugging, use an XSLT processor (e.g. in the Exchanger editor) to manually create an HTML file, then look at its structure. You also should validate the file. Alternatively, use IE to look at the HTML transformation.
Deadline and submission:
- Monday week 7 (before start of class)
- Use the world classroom:
- Submit the *.dtd, the *.xml test files, the *.xslt file, the *.css file, multimedia assets (pictures), an optional optional report file (see below)
Evaluation criteria (roughly)
Work considered as weak:
- Incomplete rendering of elements, e.g. pictures do not show.
Work considered as minimalistic:
- Minimal rendering of elements, e.g. no extra information inserted
- XSLT can deal with pictures and links
- Very minimal CSS
Good work and excellent work (B to A-) may include one or several of the following:
- Inserted comments in the XSLT that explain some (not all of the translation)
- XSLT will add output that is needed to make the XML contents understandable
- A report (e.g. in HTML, word, PDF etc.)
- Good styling (usability)
- Good styling (coolness)
- Almost valid HTML produced
- HTML5 + SVG output
- Automatically generated table of contents.
Brilliant work (A)
- Does most of the above, i.e. produces a result that could be used in real life.
4.2 Homework 6 (SVG)
Due: Before start of class, Wednesday week 7
Task (2 options)
- Option A
- Create some HTML5 contents that include an interesting SVG animation.
(Bonus for a start "button", you will have to do some reading...).
- Option B
- Alternatively, create an SVG data visualization made with XSLT.
- Start from a data-rich XML file. There is no need to create a DTD; a well-formed XML file with meaningful tags is enough.
- For the SVG generation you can either use XSLT or use a PHP XML parser (sax, dom or simple) or E4X (Javascript)
- Resources
- Wiki pages: SVG (links), Static SVG tutorial
Upload: all files
5 List of teaching materials
Tour de XML (cancelled this week)
Demonstration
Using the editor
- Exchanger XML Editor (in particular, read the sections concerning XSLT)
- In case you are using another XML editor that can't do XSLT transformations, you could use the debug tools of IE (hit F12).
XSLT and XPath texts
- XSLT Tutorial - Basics
- DTD tutorial (wiki lecture notes, recall)
- XPath tutorial - basics (optional wiki lecture notes for doing more advanced stuff)
- xml-xslt.pdf (slides)
SVG texts (optional)
- Static SVG tutorial (important)
- Using SVG with HTML5 tutorial
- SVG/SMIL animation tutorial
- XSLT to generate SVG tutorial
Examples files (also on the N: drive)
Textbook chapters
If you find that my lecture notes and slides are incomplete, too short or not good enough, reading either one or both texts is mandatory !
DTDs:
- XML in a Nutshell, Chapter 3 Document Type Definitions (start here)
- Learning XML, Chapter 4 Quality Control with Schemas (additional reading)
XSLT:
- Ray, Learning XML, Second Edition, Transformation with XSLT chapter
- Ray, Learning XML, Second Edition, XPath and XPointer chapter (pp. 205-213)
- Harold, XML in a Nutshell, XSL Transformations (XSLT) chapter (optional)
These chapters are available through the world classroom. | https://edutechwiki.unige.ch/en/COAP:COAP-2180/week6 | CC-MAIN-2018-47 | refinedweb | 1,408 | 53.71 |
//Pg.288
#include <cassert>
#include <iostream>
using namespace std;

void deleteKthElement(int k, int& count);

int main()
{
    struct nodeType
    {
        int info;
        nodeType *link;
    };

    nodeType *first, *newNode, *last, *traverse;
    int num;
    int count = 0;

    cout << "Enter a list of integers ending with -999.\n";
    cin >> num;
    first = NULL;

    while (num != -999)
    {
        newNode = new nodeType;
        assert(newNode != NULL);
        newNode->info = num;
        newNode->link = NULL;

        if (first == NULL)
        {
            first = newNode;
            last = newNode;
        }
        else
        {
            last->link = newNode;
            last = newNode;
        }
        cin >> num;
        count++;  //increment the counter to keep track of no. of elements
    } // end while

    //print the linked list prior to deleting the 5th element
    cout << "The linked list before:" << endl;
    traverse = new nodeType;
    assert(traverse != NULL);
    traverse = first;
    while (traverse != NULL)
    {
        cout << traverse->info << endl;
        traverse = traverse->link;
    }

    deleteKthElement(5, count);

    //print the linked list after deleting the 5th element !!
    cout << "The linked list after deleting the 5th element:" << endl;
    traverse = new nodeType;
    assert(traverse != NULL);
    traverse = first;
    while (traverse != NULL)
    {
        cout << traverse->info << endl;
        traverse = traverse->link;
    }
} // end main

void deleteKthElement(int k, int& count)
{
    assert(k <= count);
    nodeType *current, *trailCurrent;
    int i;

    if (first == NULL)
        cerr << "Cannot delete from an empty list!" << endl;
    else if (k == 1)
    {
        current = first;
        first = first->link;
        if (first == NULL)
            last = NULL;
        delete current;
    }
    else
    {
        trailCurrent = first;
        current = first->link;
        i = 2;
        while (i < k)
        {
            trailCurrent = current;
            current = current->link;
            i++;
        }
        trailCurrent->link = current->link;
        if (current == last)
            last = trailCurrent;
    }
}
I'm struggling to get the 5th element deleted in the list
Compiler giving me a lot of errors ; eg. nodeType undeclared, etc.
Please help ? | https://www.daniweb.com/programming/software-development/threads/178310/linked-list-problem | CC-MAIN-2017-51 | refinedweb | 258 | 59.94 |
From: Ed Brey (brey_at_[hidden])
Date: 2001-04-25 08:29:26
From: "Paul A. Bristow" <pbristow_at_[hidden]>
> Please can someone (Jes?) produce a detailed prototype of the format
> I should produce them in (like the one I produced but just for pi say.
> (and an example of how to use them too please).
The following prototype is similar to one that I produced a while back,
which I don't recall hearing any complaint over. I'm sure someone will
let me know if this leaves any requirements unfulfilled:
namespace boost { namespace math {
template<typename T> struct constants {};
template<> struct constants<float> {
static float pi() {return 3.14F;}
};
template<> struct constants<double> {
static double pi() {return 3.14;}
};
template<> struct constants<long double> {
static long double pi() {return 3.14L;}
};
template<typename T> struct tailored_constants: constants<T> {};
#ifdef PLATFORM1
template<> struct tailored_constants<float> {
static float pi() {union {unsigned u; float f;} b = {0x12345678};
return b.f;}
};
template<> struct tailored_constants<double> {
static double pi() {return ...;}
};
template<> struct tailored_constants<long double> {
static long double pi() {return ...;}
};
#endif
}}
Usage would typically be:
typedef boost::math::constants<float> c;
a = c::pi() * r * r;
hypotenuse = isosceles_triangle_leg * c::sqrt2();
// sqrt2 is a placeholder for the proper name according to the naming convention.
I wouldn't worry about the binary values for specific platforms at this
time, meaning ignore the #ifdef section.
The actual file that follows this prototype (as with most prototypes),
can be automatically generated, so there need be no concern over
repetitive strain. Moreover, generation should be preferred over hand
crafting wherever possible for this domain to help avoid human mistakes.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2001/04/11351.php | CC-MAIN-2020-45 | refinedweb | 291 | 57.67 |
Opcodes - More Opcodes information from opnames.h and opcode.h
use Opcodes;

print "Empty opcodes are null and ",
      join ",", map {opname $_}, opaliases(opname2code('null'));

# All LOGOPs
perl -MOpcodes -e'$,=q( );print map {opname $_} grep {opclass($_) == 2} 1..opcodes'

# Ops which can return other than op->next
perl -MOpcodes -e'$,=q( );print map {opname $_} grep {Opcodes::maybranch $_} 1..opcodes'

This module provides information about the perl opcodes: the name, the ppaddr and check function, the flags, the class, the arguments and the description for an OP.
Operator names are typically small lowercase words like enterloop, leaveloop, last, next, redo etc. Sometimes they are rather cryptic like gv2cv, i_ncmp and ftsvtx.
The opcode information functions all take the integer code, 0..MAXO, MAXO being accessed by scalar @opcodes, the length of the opcodes array.
Retrieve information about the Opcodes. All functions are available for export by the package. Function names starting with "op" are automatically exported.
In a scalar context opcodes returns the number of opcodes in this version of perl (361 with perl-5.10).
In a list context it returns a list of all the operators with its properties, a list of [ opcode opname ppaddr check opargs ].
Returns the lowercase name without pp_ for the OP, an integer between 0 and MAXO.
Returns the address of the ppaddr, which can be used to get the aliases for each opcode.
Returns the address of the check function.
Returns the string description of the OP.
Returns the opcode args encoded as integer of the opcode. See below or opcode.pl for the encoding details.
opflags 1-128 + opclass 1-13 << 9 + argnum 1-15.. << 13
Returns the arguments and types encoded as number acccording to the following table, 4 bit for each argument.
'S',  1,  # scalar
'L',  2,  # list
'A',  3,  # array value
'H',  4,  # hash value
'C',  5,  # code value
'F',  6,  # file value
'R',  7,  # scalar reference
+ '?', 8, # optional
Example:
argnum(opname2code('bless')) => 145
145 = 0b10010001 => S S?
first 4 bits 0001 => 1st arg is a Scalar,
next 4 bits 1001 => (bit 8+1) 2nd arg is an optional Scalar
Returns the op class as number according to the following table from opcode.pl:
'0',  0,  # baseop
'1',  1,  # unop
'2',  2,  # binop
'|',  3,  # logop
'@',  4,  # listop
'/',  5,  # pmop
'$',  6,  # svop_or_padop
'#',  7,  # padop
'"',  8,  # pvop_or_svop
'{',  9,  # loop
';', 10,  # cop
'%', 11,  # baseop_or_unop
'-', 12,  # filestatop
'}', 13,  # loopexop
Returns op flags as a number according to the following table from opcode.pl. If in doubt, see your perl source. Warning: there is currently an attempt to change that, but I posted a fix.
'm' => OA_MARK,                 # needs stack mark
'f' => OA_FOLDCONST,            # fold constants
's' => OA_RETSCALAR,            # always produces scalar
't' => OA_TARGET,               # needs target scalar
'T' => OA_TARGET | OA_TARGLEX,  # ... which may be lexical
'i' => OA_RETINTEGER,           # always produces integer (this bit is in question)
'I' => OA_OTHERINT,             # has corresponding int op
'd' => OA_DANGEROUS,            # danger, unknown side effects
'u' => OA_DEFGV,                # defaults to $_
plus not from opcode.pl:
'n' => OA_NOSTACK,    # nothing on the stack, no args and return
'N' => OA_MAYBRANCH   # No next. may return other than PL_op->op_next, maybranch
These not yet:
'S' => OA_MAYSCALAR   # retval may be scalar
'A' => OA_MAYARRAY    # retval may be array
'V' => OA_MAYVOID     # retval may be void
'F' => OA_RETFIXED    # fixed retval type, either S or A or V
All OA_ flag, class and argnum constants from op.h are exported. Additionally, new OA_ flags have been created which are needed for B::CC.
Returns the opcodes for the aliased opcode functions for the given OP, the ops with the same ppaddr.
Does a reverse lookup in the opcodes list to get the opcode for the given name.
Returns whether the OP function may return an op other than op->op_next.
Note that not all OP classes which have op->op_other, op->op_first or op->op_last (higher than UNOP) actually return an op other than op->op_next.
opflags(OP) & 16384
Opcode -- The Perl CORE Opcode module for handling sets of Opcodes used by Safe.
Safe -- Opcode and namespace limited execution compartments
B::CC -- The optimizing perl compiler uses this module. Jit does too, but only for the static information.
Reini Urban
rurban@cpan.org 2010, 2014
Copyright 1995, Malcom Beattie. Copyright 1996, Tim Bunce. Copyright 2010, 2014 Reini Urban. All rights reserved.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/dist/Opcodes/lib/Opcodes.pm | CC-MAIN-2017-13 | refinedweb | 705 | 65.22 |
System Information
FreeBSD 10.1 amd64 GTX520
Blender Version
Broken: 2.72+
Worked: 2.71
Short description of error
An osl script that uses #include "xxx.h" to make use of osl header files that ship with blender (inside addons/cycles/shader) fails to compile.
There are 5 files included that an osl script should be able to utilise - node_color.h node_fresnel.h node_texture.h oslutil.h stdosl.h
While stdosl.h is always included, adding it as an include to a script shouldn't break compiling the script.
Exact steps for others to reproduce the error
Compile an osl script within blender that has an #include line.
Fix
Within OSLShaderManager::osl_compile, located in intern/cycles/render/osl.cpp, the -I option flag and the path it specifies are being added to options as separate items; they need to be added as one string. options is a vector<string_view>, so options.push_back() adds a new item to the list, unlike appending characters to the end of a string.
Just due to proximity - the #if test for the use of string and string_view is wrong. vector<string> is used in osl 1.4.x and vector<string_view> is used in osl 1.5.x and higher - included in this patch.
This patch fixes this issue for me.
diff --git a/intern/cycles/render/osl.cpp b/intern/cycles/render/osl.cpp
index 45b4d28..2200eb4 100644
--- a/intern/cycles/render/osl.cpp
+++ b/intern/cycles/render/osl.cpp
@@ -251,20 +251,19 @@ void OSLShaderManager::shading_system_free()
 
 bool OSLShaderManager::osl_compile(const string& inputfile, const string& outputfile)
 {
-#if OSL_LIBRARY_VERSION_CODE < 10602
-	vector<string_view> options;
-#else
+#if OSL_LIBRARY_VERSION_CODE < 10500
 	vector<string> options;
+#else
+	vector<string_view> options;
 #endif
 	string stdosl_path;
-	string shader_path = path_get("shader");
+	string shader_path = path_get("shader").insert(0,"-I");
 
 	/* specify output file name */
 	options.push_back("-o");
 	options.push_back(outputfile);
 
 	/* specify standard include path */
-	options.push_back("-I");
 	options.push_back(shader_path);
 
 	stdosl_path = path_get("shader/stdosl.h");