text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
I am getting the following error,"The static field IntegerSet.a should be accessed in a static way"
`public static IntegerSet union(IntegerSet setA, IntegerSet setB){ IntegerSet u = new IntegerSet(); for (int i=0;i <= CAPACITY; i++) if (a[i]==true || setB.a[i]==true)// u.a[i]=true; return u;
My Class is as followed:
public class IntegerSet { private static boolean[] a; static final int CAPACITY=100;
My complier is saying that the .a[i] in seB.a[i] and u.a[i] are giving me problem. Read a little about the problem , the solution is to put class instead of the varible since its static but how? | https://www.daniweb.com/programming/software-development/threads/424855/the-static-field-integerset-a-should-be-accessed-in-a-static-way | CC-MAIN-2017-09 | refinedweb | 109 | 58.38 |
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo
Hi,
I am trying to send an e- mail using jython...
I have tried the following code
import smtplib
srv=smtplib.SMTP(localhost)
and it gives this error,
java.net.SocketException: java.net.SocketException: Software caused
connection a
bort: connect
I have also tried connecting to the server by using smtpd.SMTPServer, It
says smtpd.SMTPServer listening to..., but fails to get connected and gives
a socket error
Could someone tell me the source of this exception
_________________________________________________________________
Express yourself instantly with MSN Messenger! Download today it's FREE! | http://sourceforge.net/p/jython/mailman/message/12336744/ | CC-MAIN-2015-18 | refinedweb | 109 | 60.31 |
Barrick Gold Corp. (Symbol: ABX). So
this week we highlight one interesting put contract, and one
interesting call contract, from the April expiration for ABX. The
put contract our YieldBoost algorithm identified as particularly
interesting, is at the $18 strike, which has a bid at the time of
this writing of 63 cents. Collecting that bid as the premium
represents a 3.5% return against the $18 commitment, or a 18.8%
annualized rate of return (at Stock Options Channel we call this
the
YieldBoost
).
Turning to the other side of the option chain, we highlight one
call contract of particular interest for the April expiration, for
shareholders of Barrick Gold Corp. (Symbol: ABX) looking to boost
their income beyond the stock's 1% annualized dividend yield.
Selling the covered call at the $21 strike and collecting the
premium based on the 59 cents bid, annualizes to an additional
16.4% rate of return against the current stock price (this is what
we at Stock Options Channel refer to as the
YieldBoost
), for a total of 17.4% annualized rate in the scenario where the
stock is not called away. Any upside above $21 would be lost if the
stock rises there and is called away, but ABX shares would have to
advance 8.5% from current levels for that to happen, meaning that
in the scenario where the stock is called, the shareholder has
earned a 11.5% return from this trading level, in addition to any
dividends collected before the stock was called.
Top YieldBoost AB? | http://www.nasdaq.com/article/interesting-april-stock-options-for-abx-cm325167 | CC-MAIN-2015-48 | refinedweb | 258 | 62.98 |
Cover | Table of Contents | Colophon.
Noël 25 Ben 76 Clementine 49 Norm 66 Chris 92 Doug 42 Carol 25 Ben 12 Clementine 0 Norm 66 ….)
open. The
openfunction takes at least two parameters: the filehandle and filename you want to associate it with. Perl also gives you some predefined (and preopened) filehandles.
STDINis your program's normal input channel, while
STDOUTis your program's normal output channel. And
STDERRis an additional output channel that allows your program to make snide remarks off to the side while it transforms (or attempts to transform) your input into your output.
openfunction to create filehandles for various purposes (input, output, piping), you need to be able to specify which behavior you want. As you might do on the
0".
/foo/in a conditional, you know you're looking at an ordinary pattern-matching operator:
if (/Windows 95/) { print "Time to upgrade?\n" }
s/foo/bar/, you know it's asking Perl to substitute "bar" for "foo", if possible. We call that the substitution operator. It also happens to return true or false depending on whether it succeeded, but usually it's evaluated for its side effect:
s/Windows/Linux/;
splitoperator uses a regular expression to specify where the data isn't. That is, the regular expression defines the separators that delimit the fields of data. Our Average Example has a couple of trivial examples of this. Lines 5 and 12 each split strings on the space character in order to return a list of words. But you can split on any separator you can specify with a regular expression:
@array = (1 + 2, 3 - 4, 5 * 6, 7 / 8);
foreachloop provides it. The
sort, you'll find the syntax summary:
sort LIST
sortprovides a list context to its arguments.
typewriterfont is likely to be found in Chapter 29..
$, even when referring to a scalar that is part of an array or hash. It works a bit like the English word "the". Thus, we have:
$days,
@days, and
%dayswithout Perl getting confused.
$fooand
@fooare two different variables. Together with the previous rules, it also means that
$foo[1]is an element of
@foototally unrelated to the scalar variable
$foo. This may seem a bit weird, but that's okay, because it is weird.
&, although the funny character is optional when calling the subroutine. Subroutines aren't generally considered lvalues, though recent versions of Perl allow you to return an lvalue from a subroutine and assign to that, so it can look as though you're assigning to the subroutine.
(10/3 == 1/3*10)tend to fail mysteriously.
%.14g" on most machines. Improper conversions of a nonnumeric string like
footo a number count as numeric 0; these trigger warnings if you have them enabled, but are silent otherwise. See Chapter 5, for examples of detecting what sort of data a string holds.
$x = funkshun(); # scalar context $x[1] = funkshun(); # scalar context $x{"ray"} = funkshun(); # scalar context
@x = funkshun(); # list context @x[1] = funkshun(); # list context @x{"ray"} = funkshun(); # list context %x = funkshun(); # list context
($x,$y,$z) = funkshun(); # list context ($x) = funkshun(); # list context
myor
our, so we have:
my $x = funkshun(); # scalar context my @x = funkshun(); # list context my %x = funkshun(); # list context my ($x) = funkshun(); # list context
funkshun()function above) know which context they are in, and return a list in contexts wanting a list but a scalar value in contexts wanting a scalar. (If this is true of an operation, it will be mentioned in the documentation for that operation.) In computer lingo, the operations are
(LIST)
@stuff = ("one", "two", "three");
@stuff, but the scalar assignment:
%map = ('red',0xff0000,'green',0x00ff00,'blue',0x0000ff);
%map = (); # clear the hash first $map{red} = 0xff0000; $map{green} = 0x00ff00; $map{blue} = 0x0000, );
$rec = { NAME => 'John Smith', RANK => 'Captain', SERNO => '951413', };
$field = radio_group( NAME => 'animals', VALUES => ['camel', 'llama', 'ram', 'wolf'], DEFAULT => 'camel', LINEBREAK => 'true', LABELS => \%animal_names, );
*foocontains the values of
$foo,
@foo,
%foo,
&foo, and several interpretations of plain old
foo.) The type prefix of a typeglob is a
*because it represents all types.
$fh = *STDOUT;
$fh = \*STDOUT;
sub newopen { my $path = shift; local *FH; # not my() nor our() open(FH, $path) or return undef; return *FH; # not \*FH! } $fh = newopen('/etc/passwd');
openfunction for other ways to generate new filehandles.
*foo = *bar;
foo" a synonym for every corresponding thing named "
bar". You can alias just one variable from a typeglob by assigning a reference instead:
*foo = \$bar;
$fooan alias for
$bar, but doesn't make
@fooan alias for
@bar, or
%fooan alias for
%bar. All these affect global (package) variables only; lexicals cannot be accessed through symbol table entries. Aliasing global variables like this may seem like a silly thing to want to do, but it turns out that the entire module export/import mechanism is built around this feature, since there's nothing that says the symbol you're aliasing has to be in your namespace. This:
local *Here::blue = \$There::green;
$Here::bluean alias for
$There::green, but doesn't make
@Here::bluean alias for
$info = `finger $user`;
$/to use a different line terminator.)
$?(see Chapter 28 for the interpretation of
$?, also known as
$CHILD_ERROR). Unlike the csh version of this command, no translation is done on the return data--newlines remain newlines. Unlike in any of the shells, single quotes in Perl do not hide variable names in the command from interpretation. To pass a
$through to the shell you need to hide it with a backslash. The
$userin our finger example above is interpolated by Perl, not by the shell. (Because the command undergoes shell processing, see Chapter 23, for security concerns.)
qx//(for "quoted execution"), but the operator works exactly the same way as ordinary backticks. You just get to pick your quote characters. As with similar quoting pseudofunctions, if you happen to choose a single quote as your delimiter, the command string doesn't undergo double-quote interpolation;
! $x # a unary operator $x * $y # a binary operator $x ? $y : $z # a trinary operator print $x, $y, $z # a list operator
chdir) is followed by a left parenthesis as the next token (ignoring whitespace), the operator and its parenthesized arguments are given highest precedence, as if it were a normal function call. The rule is this: If it looks like a function call, it is a function call. You can make it look like a nonfunction by prefixing the parentheses with a unary plus, which does absolutely nothing, semantically speaking--it doesn't even coerce the argument to be numeric.
chdir, we get:
chdir $foo || die; # (chdir $foo) || die chdir($foo) || die; # (chdir $foo) || die chdir ($foo) || die; # (chdir $foo) || die chdir +($foo) || die; # (chdir $foo) || die
*has higher precedence than
chdir, we get:
chdir $foo * 20; # chdir ($foo * 20) chdir($foo) * 20; # (chdir $foo) * 20 chdir ($foo) * 20; # (chdir $foo) * 20 chdir +($foo) * 20; # chdir ($foo * 20)
rand: | http://www.oreilly.com/catalog/9780596000271/toc.html | crawl-001 | refinedweb | 1,146 | 60.24 |
A few months ago, I decided to release Caer, a Computer Vision package available in Python. I found the process to be excruciatingly painful. You can probably guess why — little (and confusing) documentation, lack of good tutorials, and so on.
So I decided to write this article in the hope that it’ll help people who are struggling to do this. We’re going to build a very simple module and make it available to anyone around the world.
The contents of this module follow a very basic structure. There are, in total, four Python files, each of which has a single method within it. We’re going to keep this real simple for now.
base-verysimplemodule --> Base └── verysimplemodule --> Actual Module ├── extras │ ├── multiply.py │ ├── divide.py ├── add.py ├── subtract.py
You will notice that I have a folder called
verysimplemodule which, in turn, has two Python files
add.py and
subtract.py. There is also a folder called
extras (which contains
multiply.py and
divide.py). This folder will form the basis of our Python module.
Bringing out the __init__s
Something that you’ll always find in every Python package is an
__init__.py file. This file will tell Python to treat directories as modules (or sub-modules).
Very simply, it will hold the names of all the methods in all the Python files that are in its immediate directory.
A typical
__init__.py file has the following format:
from file import method # 'method' is a function that is present in a file called 'file.py'
When building packages in Python, you are required to add an
__init__.py file in every sub-directory in your package. These sub-directories are the sub-modules of your package.
For our case, we’ll add our __init__.py files to the ‘actual module’ directory
verysimplemodule, like this:
from add import add from subtract import subtract
and we’re going to do the same for the
extras folder, like this:
from multiply import multiply from divide import divide
Once that’s done, we’re pretty much halfway through the process!
How to set up setup.py
Within the
base-verysimplemodule folder (and in the same directory as our module
verysimplemodule ), we need to add a
setup.py file. This file is essential if you intend to build the actual module in question.
Note: Feel free to name the
setup.py file as you wish. This file is not name-specific as our
__init__.py file is.
Possible name choices are
setup_my_very_awesome_python_package.py and
python_package_setup.py , but it’s usually best practice to stick with
setup.py.
The
setup.py file will contain information about your package, specifically the name of the package, its version, platform-dependencies and a whole lot more.
For our purposes, we’re not going to require advanced meta information, so the following code should suit most packages you build:
from setuptools import setup, find_packages VERSION = '0.0.1' DESCRIPTION = 'My first Python package' LONG_DESCRIPTION = 'My first Python package with a slightly longer description' # Setting up setup( # the name must match the folder name 'verysimplemodule' name="verysimplemodule", version=VERSION, author="Jason Dsouza", author_email="<youremail@email.com>", description=DESCRIPTION, long_description=LONG_DESCRIPTION, packages=find_packages(), install_requires=[], # add any additional packages that # needs to be installed along with your package. Eg: 'caer' keywords=['python', 'first package'], classifiers= [ "Development Status :: 3 - Alpha", "Intended Audience :: Education", "Programming Language :: Python :: 2", "Programming Language :: Python :: 3", "Operating System :: MacOS :: MacOS X", "Operating System :: Microsoft :: Windows", ] )
With that done, all we have to do next is run the following command in the same directory as
base-verysimplemodule:
python setup.py sdist bdist_wheel
This will build all the necessary packages that Python will require. The
sdist and
bdist_wheel commands will create a source distribution and a wheel that you can later upload to PyPi.
PyPi — here we come!
PyPi is the official Python repository where all Python packages are stored. You can think of it as the Github for Python Packages.
To make your Python package available to people around the world, you’ll need to have an account with PyPi.
With that done, we’re all set to upload our package on PyPi. Remember the source distribution and wheel that were built when we ran
python setup.py ? Well, those are what will actually be uploaded to PyPi.
But before you do that, you need to install
twine if you don’t already have it installed. It’s as simple as
pip install twine.
How to upload your package to PyPi
Assuming you have
twine installed, go ahead and run:
twine upload dist/*
This command will upload the contents of the
dist folder that was automatically generated when we ran
python setup.py. You will get a prompt asking you for your PyPi username and password, so go ahead and type those in.
Now, if you’ve followed this tutorial to the T, you might get an error along the lines of repository already exists.
This is usually because there is a name clash between the name of your package and a package that already exists. In other words, change the name of your package — somebody else has already taken that name.
And that’s it!
To proudly
pip install your module, fire up a terminal and run:
pip install <package_name> # in our case, this is pip install verysimplemodule
Watch how Python neatly installs your package from the binaries that were generated earlier.
Open up a Python interactive shell and try importing your package:
>> import verysimplemodule as vsm >> vsm.add(2,5) 7 >> vsm.subtract(5,4) 1
To access the division and multiplication methods (remember that they were in a folder called
extras ?), run:
>> import verysimplemodule as vsm >> vsm.extras.divide(4,2) 2 >> vsm.extras.multiple(5,3) 15
It’s as simple as that.
Congratulations! You’ve just built your first Python package. Albeit very simple, your package is now available to be downloaded by anyone around the world (so long as they have Python, of course).
What’s next?
Test PyPi
The package that we used in this tutorial was an extremely simple module — basic mathematical operations of addition, subtraction, multiplication and division. It doesn’t make sense to upload them directly to PyPi especially since you’re trying this out for the first time.
Lucky for us, there is Test PyPi, a separate instance of PyPi where you can test out and experiment on your package (you will need to sign up for a separate account on the platform).
The process that you follow to upload to Test PyPi is pretty much the same with a few minor changes.
# The following command will upload the package to Test PyPi # You will be asked to provide your Test PyPi credentials twine upload --repository testpypi dist/*
To download projects from Test PyPi:
pip install --index-url "<package_name>"
Advanced Meta Information
The meta information we used in the
setup.py file was very basic. You can add additional information such as multiple maintainers (if any), author email, license information and a whole host of other data.
This article will prove particularly helpful if you intend to do so.
Look at other repositories
Looking at how other repositories have built their packages can prove to be super useful to you.
When building Caer, I would constantly look at how Numpy and Sonnet set up their packages. I would recommend taking a look at Caer’s, Numpy’s, and Tensorflow’s repositories if you plan on building slightly more advanced packages. | https://envo.app/how-to-build-your-very-first-python-package/ | CC-MAIN-2022-33 | refinedweb | 1,252 | 64.3 |
CodePlexProject Hosting for Open Source Software
Hello.
The following tutorial explains the basics of creating a part, driver, shape and use of the Editor
and Display methods from within a driver. When doing an Editor part for the shape we could use the DataAnnotations on the dashboard-side but what about the validation capabilities of the shape being displayed on the front-end using Display method
?
Basically what would do the trick is to make use of the @Html.ValidationMessage() which unfortunately does not work no matter what I do.
I would really appreciate a helping hand with this one :)
For me that helpers just works. What exactly are you trying to do? If you are building an editor you should use the Editor methods, not the Display. Or do you want to have a separate editor for the same part on the backend and on the frontend?
I'm trying to create a simple editor for my front-side. Apparently I misunderstood the use of the Editor overrides. I thought that editor methods allowed me to display editor shape only on my dashboard - LOL.
So... this is what I have now:
Controllers
Controllers\AddressHandlingController.cs
Drivers
Drivers\AddressDisplayDriver.cs
Drivers\AddressEditDriver.cs
Models
Models\AddressDisplayPart.cs
Models\AddressEditPart.cs
Views
Views\Parts
Views\Parts\AddressDisplay.cshtml
Views\Parts\AddressEdit.cshtml
Module.txt
placement.info
Routes.cs
Migrations.cs
And what I should have is a one Driver (instead of two separate :) that will handle display and edit templates (both on the front-side). The driver would look like this than ?
public class AddressEditDriver : ContentPartDriver<AddressEditPart>
{
protected override DriverResult Display(AddressEditPart part, string displayType, dynamic shapeHelper)
{
return ContentShape("Parts_Address", () => shapeHelper.Parts_Address());
}
protected override DriverResult Editor(AddressEditPart part, dynamic shapeHelper) {
return ContentShape("Parts_Address_Edit", () => shapeHelper.EditorTemplate(TemplateName: "Parts/AddressEdit", Model: part, Prefix: Prefix));
}
protected override DriverResult Editor(AddressEditPart part, IUpdateModel updater, dynamic shapeHelper) {
updater.TryUpdateModel(part, Prefix, null, null);
return Editor(part, shapeHelper);
}
}
This looks fine to me and yes, then you should just use the standard editor methods. Be aware though that editors are often designed to be used in the backend only (or rather, to be used by administrators and editors), that means you may bump into issues
properly setting user permissions. But if you make a simple content type this shouldn't be a real issue, but e.g. BodyPart or TagsPart is not meant to be used by ordinary users.
Also if you want a frontend editor AFAIK you'll need a custom controller and build your editor from there (IContentManager.BuildEditor() and friends).
Ok. Let's sum this up so I would have a clear picture what is what here:
Driver with overloads:
Display method - for getting a shape (Views\template.cshtm) to display on the front end
Editor method - for getting a shape (Views\EditorTemplates\template.cshtm) to display on the dashboard
If I want to edit stuff I need to define a Routes.cs and MyController.cs. Within MyController I can get content to edit like so:
ContentItem contentItem = this.ContentManager.Get(id);
dynamic editor = this.ContentManager.BuildEditor(contentItem);
return new ShapeResult(this, shape);
but this would get me a shape of the whole page. For instance if the id=1 I'm getting site configuration shape.
I'm still confused as to how to get a shape from Drivers Editor method to use in my controller.
"Editor method - for getting a shape (Views\EditorTemplates\template.cshtm) to display on the dashboard"
Yes, you should place the template into the EditorTemplates folder if you intend to create editor shapes the common way (how it's also in the tutorial). Now the editor of the content item could be displayed not just on the dashboard but from a custom controller
too, as you've also used.
"If I want to edit stuff I need to define a Routes.cs and MyController.cs."
You only need to define routes if you're not satisfied with the standard one, e.g. MyModule/MyControllersName/MyAction.
"Within MyController I can get content to edit like so:"
This is correct.
"but this would get me a shape of the whole page. For instance if the id=1 I'm getting site configuration shape."
BuildEditor() creates the editor (that means, the content of an editor form, without the <form></form> itself) for the whole content item (unless you've defined editor groups like it's with site settings and e.g. Media settings). You can't simply
build an editor for only one part, if that's what you'd like to do, but e.g. you could hide all the other editor shapes from the Placement.info (by using a match for the url of the action) or attach you part's editor to a new group and build the editor shape
for only that group.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://orchard.codeplex.com/discussions/352165 | CC-MAIN-2017-09 | refinedweb | 841 | 56.96 |
Shooting (2/2)
Our magnificent ship is now shooting innocent flying octopuses.
It can’t stay that way. They need to respond. To shoot back. To fight for freedo… Oups. Sorry.
Using what we did in the last part, we will modify the enemy behavior so it can shoot projectiles too.
The enemy projectile
We will create a new projectile using this sprite:
(Right click to save the image)
If you are as lazy as I am, duplicate the “PlayerShot” prefab, rename it to “EnemyShot1” and change the sprite with the new one above.
To duplicate, you can create an instance by doing a drag and drop on the scene, renaming the created game object and finally saving it as a
Prefab.
Or you could simply duplicate the
Prefab directly inside the folder with the
cmd+D (OS X) or
ctrl+D (Windows) shortcuts.
If you like to do it the hard way, you could also recreate a whole new sprite, rigibody, collider with trigger, etc.
The right scale is
(0.35, 0.35, 1).
You should have something like this:
If you hit “Play”, the shot will move and potentially destroy the enemy. This is because of the “ShotScript” properties (which is harmful for the Poulpi by default).
Don’t change them. Remember the “WeaponScript” from the last part? It will set those values properly.
We have an “EnemyShot1”
Prefab. Remove the instances from the scene if there are some.
Firing
Like we did for the player, we need to add a weapon to the enemy and make him call
Attack(), thus creating projectile.
New scripts and assignments
- Add a “WeaponScript” to the enemy.
- Drag and drop the “EnemyShot1”
Prefabinto the “Shot Prefab” variable of the script.
- Create a new script called “EnemyScript”. It will simply try to trigger the weapon at each frame. A kind of auto-fire.
using UnityEngine; /// <summary> /// Enemy generic behavior /// </summary> public class EnemyScript : MonoBehaviour { private WeaponScript weapon; void Awake() { // Retrieve the weapon only once weapon = GetComponent<WeaponScript>(); } void Update() { // Auto-fire if (weapon != null && weapon.CanAttack) { weapon.Attack(true); } } }
Attach this script to our enemy.
You should have this (observe the slight increase of the shooting rate to
0.75):
Remark: if you are modifying the game object in the scene, remember to save all the changes to the
Prefab using the “Apply” button on the top right of the “Inspector”.
Try to play and look!
Okay, it’s kinda working. The weapon is firing on its right because that’s what we told it to do.
If you rotate the enemy, you can make it fire on its left, but, erm… the sprite is also upside down. That’s not what we want.
So what?! Obviously, we made this mistake for a reason.
Shooting in any direction
The “WeaponScript” has been made in a particular way: you can choose its direction simply by rotating the game object onto which it is attached. We’ve seen that when we rotated the enemy sprite before.
The trick is to create an empty game object as a child of the enemy
Prefab.
We need to:
- Create an “Empty Game Object”. Call it “Weapon”.
- Delete the “WeaponScript” attached to your enemy prefab.
- Add a “WeaponScript” to the “Weapon” object and set the shot prefab property like you did before.
- Rotate the “Weapon” to
(0, 0, 180).
If you did the process on the game object (and not the
Prefab), do not forget to “Apply” the changes.
You should have this:
However, we have a small change to make on the “EnemyScript” script.
In its current state, the “EnemyScript” call to
GetComponent<WeaponScript>() is going to return null. Indeed, the “WeaponScript” is not attached to the same game object anymore.
Fortunately, Unity provides a method that can also look in the children hierarchy of the game object calling it: the
GetComponentInChildren<Type>() method.
Note: like for
GetComponent<>(),
GetComponentInChildren<>() also exists in a plural form :
GetComponentsInChildren<Type>(). Notice the
s after “Component”. This method returns a list instead of the first corresponding component.
In fact, just for fun, we have also added a way to manage multiple weapons. We are just manipulating a list instead of a single instance of the component.
Take a look at the whole “EnemyScript”:
using System.Collections.Generic; using UnityEngine; /// <summary> /// Enemy generic behavior /// </summary> public class EnemyScript : MonoBehaviour { private WeaponScript[] weapons; void Awake() { // Retrieve the weapon only once weapons = GetComponentsInChildren<WeaponScript>(); } void Update() { foreach (WeaponScript weapon in weapons) { // Auto-fire if (weapon != null && weapon.CanAttack) { weapon.Attack(true); } } } }
Finally, update the shot speed by tweaking the public property of the “MoveScript” of the “enemyShot1”
Prefab. It should move faster than the Poulpi speed:
Great, we have a super dangerous Poulpi now.
Bonus: firing in two directions
Firing in two directions is just a few clicks and a duplication in the editor. It doesn’t involve any script:
- Add another weapon to the enemy (by duplicating the first “Weapon”).
- Change the rotation of the second “Weapon”.
The enemy should fire in two directions now.
A possible result:
It’s a good example of using Unity properly: by creating independent scripts like this one and making public some useful variables, you can reduce the amount of code drastically. Less code means less errors.
Hurting the player
Our Poulpies are dangerous, right? Erm, nope. Even if they can shoot, it won’t harm the player character.
We are still invincible. No challenge there.
Simply add a “HealthScript” on the player. Make sure to uncheck the “IsEnemy” field.
Run the game and observe the difference:
Bonus
We are going to give you some hints to go further on the shooting aspect of your game. You can skip this part if you are not interested into more specific shmup thoughts.
Player-enemy collision
Let’s see how we can handle the collision between the player and an enemy, as it is quite frustrating to see them block each other without consequences…
The collision is the result of the intersection of two non-triggers Colliders 2D. We simply need to handle the event
OnCollisionEnter2D in our
PlayerScript:
//PlayerScript.cs //.... void OnCollisionEnter2D(Collision2D collision) { bool damagePlayer = false; // Collision with enemy EnemyScript enemy = collision.gameObject.GetComponent<EnemyScript>(); if (enemy != null) { // Kill the enemy HealthScript enemyHealth = enemy.GetComponent<HealthScript>(); if (enemyHealth != null) enemyHealth.Damage(enemyHealth.hp); damagePlayer = true; } // Damage the player if (damagePlayer) { HealthScript playerHealth = this.GetComponent<HealthScript>(); if (playerHealth != null) playerHealth.Damage(1); } }
On collision, we damage both the player and the enemy by using the
HealthScript component. By doing so, everything related to the health/damage behavior is linked to it.
Pool of projectiles
As you play, you can observe in the “Hierarchy” that game objects are being created and removed only after 20 seconds (unless they hit a player or an enemy).
If you plan to do a danmaku which needs a LOT of bullets, this is not a viable technique anymore.
One of the solution to handle a multitude of bullets is to use a pool. Basically, you can use an array of bullets limited in size. When the array is full, you delete the oldest object and replace it by a new one.
We won’t implement one here but it is quite simple. We used the same technique on a painting script.
You could also reduce the time to live of a bullet so it will disappear more quickly.
Attention: keep in mind that using the
Instantiate method heavily has a cost. You need to use it carefully.
Bullet behavior
A good shooter should have memorable fights.
Some libraries, like BulletML, allows you to easily define some complex and spectacular bullet patterns.
If you are interested in making a complete Shoot’Em Up game, have a look at our BulletML for Unity plugin.
Delaying shots
Add a few armed enemies in the scene and run the game. You should see how synchronous all enemies are.
We could simply add a delay to the weapon: initialize the cooldown to something else than 0. You could use an algorithm or simply put a random number instead.
The speed of the enemies can also be altered with a random value.
Once again, it’s up to you. It depends solely on what you want to achieve with your gameplay.
Next step
We have just learned how to give a weapon to our enemies. We’ve also seen how to reuse some scripts to improve the gameplay.
We have an almost complete shooter! A very basic and hardcore one, admittedly.
Don’t hesitate to add enemies, weapons and to experiment with the properties.
In the next chapter, we will learn how to enhance the background and the scene to create a big level.
| http://pixelnest.io/tutorials/2d-game-unity/shooting-2/ | CC-MAIN-2018-09 | refinedweb | 1,454 | 66.54 |
Bert Bates wrote:Hi Guys,
First off, thanks for taking the time to post these errata!
Second, I'd like to request that before an entry goes on this list, it's discussed in a separate thread - to be sure it's actually an error.
Thanks,
Bert
Exception in thread "main" java.lang.NoClassDefFoundError: myApp/Foo
at GetJar.main(GetJar.java:3)
Caused by: java.lang.ClassNotFoundException: myApp.Foo
...
Page xxxi states there are 72 questions on the exam and you need to get 47 of them correct to pass. On the exam I took there were only 60 questions
Question 7 :
Given:
3. public class Bridge {
4. public enum Suits {
5. CLUBS(20), DIAMONDS(20), HEARTS(30), SPADES(30),
6. NOTRUMP(40) { public int getValue(int bid) {
return ((bid - 1) * 30) + 40;} };
7. Suits(int points) { this.points = points;}
8. private int points;
9. public int getValue(int bid) {return points * bid;}
10. }
11. public static void main(String[] args) {
12. System.out.println(Suits.NOTRUMP.getBidValue(3));
13. System.out.println(Suits.SPADES + " " + Suits.SPADES.points);
14. System.out.println(Suits.values());
15. }
16. }
Which are true? (Choose all that apply.)
A. The output could contain 30
B. The output could contain @bf73fa
C. The output could contain DIAMONDS
D. Compilation fails due to an error on line 6
E. Compilation fails due to an error on line 7
F. Compilation fails due to an error on line 8
G. Compilation fails due to an error on line 9
H. Compilation fails due to an error within lines 12 to 14
Answer:
- A and B are correct. The code compiles and runs without exception. The values()
method returns an array reference, not the contents of the enum, so DIAMONDS is never
printed.
- C, D, E, F, G, and H are incorrect based on the above. (Objective 1.3)
Sarah wrote:Page 803 defines a jar called MyJar.jar but page 804 refers to a jar myApp.jar, which is presumably meant to be the same jar.
Sarah wrote:I took "system classpath" to mean something like jre/lib/ext which is not overridden.
Sarah wrote:Page 798 states "When searching for class files, the java and javac commands don't search the current directory by default".
No cast is needed in this case because d double can hold every piece of information that a long can store.
Alexander Exner wrote:Some misleading code on Page 579
Integer[] ia2 = new Integer[0];
ia2 = iL.toArray(ia2);
does the job as well. To set the length of ia2 to 3 may cause somebody thinking its necessary to create an array of the same size first.
[...] it comes in two flavors: one that returns a new Object array, and one that uses the array you send it as the destination array:.)
The behavior of a sorted map is well-defined even if its ordering is inconsistent with equals; it just fails to obey the general contract of the Map interface. | http://www.coderanch.com/t/499740/java-programmer-SCJP/certification/Discussing-errata-SCJP | CC-MAIN-2014-52 | refinedweb | 499 | 68.67 |
Thanks to the module interface unit and the module implementation unit, you can separate the interface from the implementation when defining a module. Let me show, how.
As promised in my last post C++20: A Simple math Modul, I want to make a short detour on my Clang Odyssee. My detour is a compact refresher to all I wrote in the referred post.
First, I don't want to blame anyone but me. Based on talks from Boris Kolpackov "Building C++ Modules" at the CppCon 2017 or Corentin Jabot "Modules are not a tooling opportunity" I had the impression, that the vendors suggested the following extensions for their module definition:
In the case of the Clang compiler, I was totally wrong. This is my simple math module, which I tried to compile with the Clang compiler.
// math.cppm
export module math;
export int add(int fir, int sec){
return fir + sec;
}
I tried to compile the module with Clang 9 and Clang 10 on Microsoft and Linux. I tried to compile it with the brand-new Clang 11 compiler, built from the sources. Here is one of my many tries.
This command-line should create the module math.pcm. I specified in the command-line -std=c++20 -fmodules-ts and the error message said: module interface compilation requires '-std=c++20' or '-fmodules-ts'. I made all variations of the two flags, added the global module fragment to the module definition, invoked the Clang compiler with additional flags, but the result was always the same.
Then I asked Arthur O'Dwyer and Roland Bock for their help. For Arthur modules worked fine with Clang: "Hello World with C++2a modules". Roland rebuilt its Clang 11 and it worked with my module definition.
Roland and I literally had the same Clang compiler and the same module definition. Character by character, I compared his command-line with mine, and I noticed something.
Mine: clang++ -std=c++20 - -fmodules-ts -stdlib=libc++ -c math.cppm -Xclang -emit-module-interface -o math.pcm
Roland: clang++ -std=c++20 - -fmodules-ts -stdlib=libc++ -c math.cpp -Xclang -emit-module-interface -o math.pcm
Roland gave his module math.cpp cpp, and so did Arthur. Don't give your module definition the suffix cppm.
Now, compiling and using the module was straightforward.
To end this Odyssey here is the client.cpp file and a few words to the necessary flags for the Clang command line.
// client.cpp
import math;
int main() {
add(2000, 20);
}
clang++ -std=c++2a -stdlib=libc++ -c math.cpp -Xclang -emit-module-interface -o math.pcm // (1)
clang++ -std=c++2a -stdlib=libc++ -fprebuilt-module-path=. client.cpp math.pcm -o client // (2)
The module math was straightforward. Let's be a bit more sophisticated.
Here is the first guideline for a module structure:
module; // global module fragment
#include <headers for libraries not modularized so far>
export module math; // module declartion
import <importing of other modules>
<non-exported declarations> // names with only visibiliy inside the module
export namespace math {
<exported declarations> // exported names
}
This guideline serves two purposes. It gives you a simplified structure of a module and also an idea, what I'm going to write about. So, what's new in this module structure?
According to the previously mentioned guideline, I want to refactor the final version of module math from the previous post C++20: A Simple math Modul.
// mathInterfaceUnit.ixx
module;
import std.core;
export module math;
export namespace math {
int add(int fir, int sec);
int getProduct(const std::vector<int>& vec);
}
// mathImplementationUnit.cpp
module math;
import std.core;
int add(int fir, int sec){
return fir + sec;
}
int getProduct(const std::vector<int>& vec) {
return std::accumulate(vec.begin(), vec.end(), 1, std::multiplies<int>());
}
// client3.cpp
import std.core;
import math;
int main() {
std::cout << std::endl;
std::cout << "math::add(2000, 20): " << math::add(2000, 20) << std::endl;
std::vector<int> myVec{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
std::cout << "math::getProduct(myVec): " << math::getProduct(myVec) << std::endl;
std::cout << std::endl;
}
Manually building the executable includes a few steps.
cl.exe /std:c++latest /c /experimental:module mathInterfaceUnit.ixx /EHsc /MD // (1)
cl.exe /std:c++latest /c /experimental:module mathImplementationUnit.cpp /EHsc /MD // (2)
cl.exe /std:c++latest /c /experimental:module client3.cpp /EHsc /MD // (3)
cl.exe client3.obj mathInterfaceUnit.obj mathImplementationUnit.obj // (4)
For the Microsoft compiler, you have to specify the exception handling model (/EHsc) and the multithreading library (/MD). Additionally, use the flag /std:c++latest.
Finally, here is the output of the program:
In the next post, I extend my module math with new features. First, I import modules and export them in one unit; second, I use names that are only visible inside the649
Yesterday 8077
Week 24298
Month 208834
All 5761177
Currently are 242 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
Anyway, my complete clang command is
clang++ -stdlib=libc++ --std=c++2a -fbuiltin-module-map -fmodule-map-file=/usr/include/c++/v1/module.modul emap -fimplicit-modules -fPIC -fprebuilt-module-path=. --precompile -o math.pcm math.cppm | http://modernescpp.com/index.php/c-20-module-interface-unit-and-module-implementation-unit | CC-MAIN-2021-10 | refinedweb | 864 | 58.69 |
3?fa WiicWvc Ipailtjfptgle: ffrMay fpGrruittg, ritolret 3, 1890.
7
V
w
f
te
:v
y
r
IHllll Wif.hitfl Wklflsalfi k Mflniifartiirinrt ffmiftpq
A CIGAlt EOX'S STAMS.
THEY GIVE INFORMATION REGARD
ING THE CIGAR INSIDE.
IT Ton Bay Imported Cigars It Will Pay
You to Take Heed of tho Various
Brands and labels What Most of the
Colored Posters Mean Homo Brands.
The interior wrappings and lithographs
tell about the cigar when the box is opened,
but more can be told about it from the
outside. It requires somewhat close ob
servation to note all the marks on a cigar
box. On a box of imported cigars, for in
stance, there is branded the name of the
manufacturer. That is usually tho name
of some factory and the placo where the
factory is. The name of the factory gives
an Indication about its location. The brand
"Campa Gral de Tabaco del Filipinas"
shows unmistakably where the cigars that
were put in thRt box were made, unless
the brand is a counterfeit. It is seldom
that counterfeit brands ore found on im
ported cigars, as the import stamp is a
guarantee that the cigar has gone through
the custom house. This stamp is put on
first. Each of the Havana factories has
Its stamp, the Garcia, the Clay, tho Caro
lina, or whatever it may be, on the lid of
the box. It may bo hid afterward by tho
revenue stamp and lithographs, but tho
Urst thing done is to brand that name on.
Tho Havana cigars frequently havo
stamped on them also "Unban a," with tho
Spanish abbreviation designating the qual
ity of tobacco or tho bize. This is put on
when the boxes are sorted out to be filled;
tho stamp of tho factory is put on them in
tho first place. This stamp also is not
branded liko the factory Btamp, but is put
on with a stenciL Tho name of the fac
tory can not bo taken off without planing
Into the lid, but tho brand of tho quality
and color can.
Before the cigars are put in the box is fur
ther branded with the color'cluro," "color
adoclaro," "Colorado," "coloradomaduro,"
"maduro," or, as known to Americans, very
mild, mild, medium, fairly strong and
strong. These aro not enough grades to
mark tho various distinctions in color and
strength, but they are generally approxi
mated. Borne brands of Colorado claro ci
gars are milder than the claro cigars of
other brands, but the mildest ones aro al
ways put into tho claro boxes. There has
been soma change iu the strictness of
marking in recent years caused by tho
fact that the American trade prefers
Colorado claro and claro to the maduro
and Colorado maduro.
When the box has beon marked in this
way it is filled and the llnal tacks are put
in. Tho manufacturer usually then pastes
somo advertisements on it in order that
tho box may not be oponed and other cigars
substituted without its being evident to
tho purchaser. Manufacturers have labels
which they paste over tho seams, which
contain advertisements and notices of va
rious kinds. If tho manufacturer has taken
prizes at any international exhibition fac
similes of tho medals will probably bo
found on the labels. Usually thero is a
picture of the factory, with tho firm name,
ooat-of-arms and other designs. In this
uhape the box is ready to bo sent here.
It arrives with tens of thousands of other
boxes and is examined by an inspector.
He pastes over it the import stamp so that
tho box cannot bo opened without destroy
ing tho stamp. Tho import stamp certifier
to tho number of cigars in the box and that
the tax is paid. Besides that thero aro
blank spaces on tho Stamp which tho in
spector fills out with n stenciL
When filled up the stamp shows not only
that the cigars went through the custom
house, but tho steamer in which they came,
the port at which they wero entered, tho
date on which they wero received and
stamped, and the name of tho inspector
who stamped them. This is an unfailing
certificate of tho length of time tho cigar
has been in the country. Tho stamps aro
linely mado, in ordor to prevent counter
feit; there is more tracery and vignetto
work than on the ordinary revenue stamp.
When the import stamp has been pasted
on tho box tho internal rovenuo stamp is
put on before tho cigars can bo sold. Tho
internal revenue stamp is a cheaper affair
on bluish-green paper. It is canceled at
the same time that it is put on, and with a
stamp which, if it were plain, might show
the date; but this stamping is done much
more hastily, and docs not aid in the his
tory of the cigar.
A cigar box with an internal revenue
stamp on it and no import stamp does not
onco in 50,000 times contain imported
cigars, smuggled or otherwise. Some of
the fictitious smugglers who go around
among downtown offices and peddle cigars
which they say are imported produce them
in boxes with only the internal lovenuu
stamp on. Smuggled cigars have no
stamps whatever. Any cigars that go
through the customhouse havo the import
ed stamp and tho internal revenuo stamp
both.
A cigar which has only an internal rev
enuo stamp iias been stamped at somo do
mestic factory. If it was smuggled it was
taken to u factory to be stamped, which
would bo foolishness nnd wasto of money
on tho part of the smuggler, and bosides
he would run a great deal of risk, as tho
internal revenue officer who stamped tho
box could readily tell, if ho was an export
in his business, the difference between tho
boxes used in the Havana trado and the
boxes used in the domestic trade. Thero
uro details in tho way of packing, litho
graphing and branding whioh show unmis
takably, unless they aro very cleverly coun
terfeited. A man who is going to buy cigars nnd
wants to be sure of what ho is getting can
tell by the box if it has not been opened.
It is moro risk to take an opened box, for
nome unscrupulous dealers will put cheap
er cigars into a box which hold high grade
cigars nnd sell them as imported cigars.
Still theso dealers often make mistakes, as
it is hard for them to got the same size of
domestic cigars nnd tho same color. If a
man is buying what is said to lw nn im
ported cigar, and sees that the cigar is
dark while tho box is marked claro, ho
may be sure that there is some imposition
homewbere, probably that the dealer in re
filling the box was not careful enough to
put in cigars of the same color But tho
bt'st way to do is to examine the box first
nnd then to have the dealer open it after
ward to see if the cigars aro what is wauted.
The age of the cigar can be told from tho
import stamp, the color from the brand on
the back of the box, the factory from tho
factory brand and tho bhaj)e from the size
of the box. Almost eerythiug about
cigars which go through the standard
Havana factories can be told without
opening the box. A cigar box with tho
blue label of the Cigarmakers' Interna
tional union does not hold imported but
domestic cigars. Domestic cigars can fur
ther be told by an examination of tho
1kx and the stamp and the warning not to
U60 tho box again, which has on it tha
district and the number of the factory.
According to law the warning must be on
the box; it is a sine sign of a domestic
cigar. New York bun.
rpa Didn't Interfere.
It was growing late, nnd papa crept down
stairs to warn tho young people that it
was too damp to sit outside any longer.
"I don't see why you have to ask me for
my hand, Tom," ho hoard, as ho ruaohed
tho door; "you've been keeping it all th
evening."
Papa retired quietly. Harper's Bazar
JLJL IvlllLUt jLJL HvlLrULilU W. xlXIXJLlilXiAvLIAJLlJLij llvUUvUi
The houses given below are representative ones in their line, and thoroughly reliable. They are furnished thus for ready refer
ence for the South generally, as well as for city and suburban buyers. Dealers and inquirers should correspond direct
with names given.
CHAS. T. CHAMPION,
WHOLESALE
SCHOOL '. BOOKS,
Ay J) SCHOOL SUPPLIES.
Mail Orders will Kcceive Prompt Attention at
EASTERN PRICES.
118 East Douglas Avenue, "Wichita, Kansas.
GLOBE -. IRON -. WORKS,
MANUFACTURERS OF
Steam Engines, Boilers and Pumps, and Dealers in Brass Goods, Rubber and
Hemp Packing, Steam Fittings, Etc. Repairing of all Kinds of Ma
chinery a Specialty. Orders promptly filled for all kinds
of Sheet Iron work. All kinds ol castings made.
A. FLAGGr, Proprietor.
The Stewart Iron Works,
JIAXrFACTUREKS 07
IRON FENCING,
Architectural, Wrought and Cast
J ron Work for Buildings.
Factory: South. Washington Avenue.
Wichita, Kansas.
TO ART DEALERS AND ARTISTS.
Artist's Materials, Picture. Moulding anil Frames
Wholesale and retail. Catalogue free.
MAIL ORDERS PROMPTLY ATTENDED.
F. P. MARTIN, 114 Market St.
iisi-tr iLLbpnoNE::.
THOMAS SHAW
nOLESAT.EniAI.ru IK
Pianos and Organs
Sheet music And hooks. All kinds of mii.lca
pools. Tlraibanl and orchestra music 12JMain
btroet, " Ichita, Kansas.
TrirnWe Eros. & Tlirelkeld,
WHOLESALE
HARDWARE
Special attention to mail orders.
110 E Douglas, - Wichita, Kan
D. W. STALLINGS & SONS,
ilANTFACTUnERS OP
STALLIXG'S I'ALMOLE TOILET SOAP
ltbMiutllTfls the comploxlon and Veoo tlu
bkln Kf t. niuooth clear and healthy. For
sale by druggists and grocers.
520 Chicago Are. - Telephone OO
THE JOHNSTON & LARIMER DRY GOODS CO.,
WHOLEbALK
Dry : Goods, : Notions : and : Furnishing : Goods.
Complete Stock in all the Departments.
119, 121 & 123 N Topeka Ave. Wichita, Kansas.
L. M. COX,
Manufacturing -. Confectioner,
And jobber in Fijrs, Dates, Cigars. Foreign and Domestic Nuts, Cider,
Paper Dags, Paper lioxes. Candy Jars, Trays, Etc.
21o and 217 South Main St., - - - - Wichita, Kansas.
THE C. E. POTiS DRUG CO.
(Formerly Charles E. Potts & Co., Cincinnati, O.)
WHOLESALE DRUGGISTS.
Goods Sold at Si. Louis and Kansas City Prices.
233 and 235 South Main Street, Wichita, Kansas.
BAKER, BLASDEL & CO.,
COK. MARKET AND FIRST ST3., WICHITA, KAN.
Manufacturer wholesale, transfer and forwarding: aprenis, and dealers
in carriages, wagons, farm implements, wind mills, scales, engines and
threshing machinery. "We have on hand a full line of tho following mauufac-
iint-i-i Ki"'a ijj.ii u ciiu iii at quicti
MiuienaKcr uros. ;uig. Co., houtli JJend, Ind.; Enterprise Carriage Co.,
Cincinnati, Ohio; Hoover A Gamble, Miamisburg, Ohio; Esterly Harvesting
Co., AVhitewater, Wis.; Fairbank, Moore & Co., Chicago. 111.; Walton Plow Co.,
IJloomington, 111.; Pekin Plow Co., Pckin, 111.; Avery Planter Co.. Peoria, 111.;
.Tno. Dodds Hay Hack Co., Dayton, Ohio.; Frick Engine Co., Wavnesboro,
Penn.; Massilton Thrasher Co., Massilton, Ohio; Krugslorud A; Douglas Mfg.
Co., St. Louis, Mo.j Hubor Engiuo Co., Marion, Ohio.
ME WICHITA OVERALL AND SniRT MAtfUFACTUEIXG CO.
MANCFACTLKER3 AND JOHBERS OP
Overalls, Jeans, Casslmcre and Cottonade Pants; Duck Lined Coats and Vests;
Fancy Flannel aud Cotton Overshirts; Canton Flannel
Undershirts, Drawers, Etc.
Factory and Salesroom 139 N. Topeka, Wichita. Correspondence Solicited
i tr
A MONSTER SEA SERPENT.
Tire 3Icn Docrlbe a GlRantlo Monstros
ity of I'lalt Trap Luke.
A party of va persons, all gentlemen,
arrived home from a four days' visit to
Fish Trap lako, about threo miles bolow
Galena, 111 , a place not only famous
among anglers thereabouts, but as being
the same lody of water in which was dis
covered the huge serpent fish which
created so much excitement among people
generally and discussion among naturalists
esecially about ten years ago.
The Intense interest which was then ex
cited has been greatly magnified by the
fact that this particular monster, judging
from reliable and detailed accounts that
have been received, has aqain made its ap
pearance, and was seen by every one of the
five gentlemen comprising the party above
referred to on the afternoon of the day
previous to their departure for home.
"We were out in the middle of the lake
trolling for salmon and pike," said one of
tho number on being questioned, "and
were drifting along almost imperceptibly
on its perfectly Bmooth surface toward the
western extremity, propelled by the lisjht
breeze prevailing from the east, when sud
denly the wr.ter about us became fearfully
agitated, the boat commenced to heave and
rock, and we were wondering with
blanched faces what could be the cnue of
the phenomenon when, beforo a single
word could be uttered, a sight presented
itself which not one of us will ever forget
to our dying day, though we live to be as
old as Methuselah.
"Can I describe it in detail?" said the
narrator in resoonse to an inauirv. "Yes.
Wichita, Kansas.
FLOUR DEPOT.
C. Elcnmayer Er. Milling and Elevator Co., of
Ualstead, Kan., carry a full line of hard and soft
wheat flour at their agency la thh city, bend for
prices and samples.
OTTO WEISS. Agent, 253 N Main. Wichita.
L. xi .A. x o,
WHOLESALE
SADDLES,
Saddlery Hardware. Manufacturer of fpror.
Leather and tiiidings, hides, furs, wool and rooes.
HI Douglas Avenue, cor. Water St., "Wichita, Kan.
SWAB & GLOSSER,
TAILORS
And Jobbers of Woolens and Tail
ors Trimmings.
145 X Main Street, - Wichita.
WICHITA BOTTLING WORKS,
OTTO ZIMMEKUANN. Prop.
Bottlers of Ginger Ale. Champagne
Cider, Sada Water, Standard Verve
Food, also General Western
Agents for Win. J.Lemp's Extra Pale.
Cor. First and WacoSts., - Wichita.
E. VAIL & CO.,
IVTIOLESAI E
WATCHES, JEWELRY,
CLOCKS AND SILVER-WEAK.
10(1 E Douglas Are., - Wiihifa.
nonce:
f ior u. so viuaiy impressed itself upon my
mind that were I an artist I could dra v
tho scene with lifelike perfection and ac
curat''. I was looking down into the lake
from the sternsheets of the boat to discov
er, if possible, some cause for the singular
movement of the water which threatened
to submerge us, when like a flash there
emerged from its seething breast, and not
more than ten rods away, a half fish, half
serpent monter, fully thirty feet m
length, the exposure from extremity to ex
tremity being complete, as it lay upon the
water after reaching the surface.
"Its appearance was heralded by a shont
from the huge creature, exactly like that
of a fretful horse, which threw a column
of water into the air like the spouting of a
whale, the mist from which completely
saturated our clothing. The head and
body of the animal to its middle were
shaped like that of a sturgeon, with
a long, narrow top fin and a forward fin
on either side, just back of the gills. The
eyes were black and beadlike, and two
holes for spouting or throwing water were
plainly discernible cs top of the head. The
body from the center to the after extremity
was that of a serpent, without fins of any
kind, tho tail tapering to a point like tha:
of a pilot or bull snake.
The hiaeous monstrosity lay prone
upon the water for a half dozen seconds or
more, when it started for the mouth of
the lake, swimming with great Tokxaty,
its body partly out of the water. Stricken
with awe mingled with terror, we watched
it in silence until it paused out into the
Mississippi, when it dove downward on
reaching the channel and was seen tu
mora. Fortunately Adam Bell, a trusts
Cigar Headquarters. Cor. 3Tain and Pirst Streets.
W. T. BISHOP & SONS,
WICHITA, KANSAS.
Send us a Trial Order or Call and See Us.
FINLAY ROSS
WHOLESALE AND RETAIL
FURNITURE, CARPETS, ETC.
The Largest Establishment in the State.
Nos. 119 and 121 Main Street, Wichita, Kansas.
GETTO-McCLUN G- BOOT AjSTD SHOE CO.,
Manufacturers and Wholesale Dealers la
BOOTS :&: SHOES.
All goods of our own manufacture warranted. Orders by mail
promptly and carefully tilled.
135 and 137 N Market Street, Wichita, Kansas.
CORNER & FARNUM,
EOYAL SPICE MILLS,
Jobbers and Manufacturers, Teas,
Coffees, Spices, Extracts, Baking
Powders, Bluing, Cigars, Etc.
112 and 114 S Emporia Ave.
McCOMB BROS.
Manufacturer. Wholesale and RuUll
Dealers In
SADDLES & SADDLERY HARDWARE.
Correspondence Solicited.
121 E Douglas Ave, Wichita, Kan
BURR FURNITURE CO.,
Wholesale and Retail
FURNITURE
125 East Douglas Ave.
FOREST CITY COAL CO.,
Wholesale aud ltetall Dealers In
C O A. L!
Weir City and Rich Hill Coal
a Specialty.
119 2f Water St., - i V ichita, Kan
SHAFFER MARINE!
Wholesale and Retail
i o.iI, ii'raul Itimiw;', Kouling and
TtiiiMinir Materials.
Telephone 101. ISth St. and 4th Ave, Wichita, Kan.
WICHITA WHOLESALE GROCERY CO.,
Wholesale Grocers,
OFFICE AND WAREHOUSE 21.1 TO 223 SOUTH MARKET STREET.
Keep everything in the grocery line, show rases, Scales and grocers fixtures
Sole agents for the state for '"Grand Republic" cigars, also solo proprietors of
the "Royalty" aud "La Iunoceucia" brands. 15
L -tfiGHIIA CTjvi
Do not
llVflT'VRW.
IS
fPl-iocrn
stood the test
vocirs! nffflinct. nl nPW- 5
iPERIl
Jlll "&LV"
pnmoi-c min
:?Twn o nrettflfllTSa
-iA.il iii.oir.uu30 iiuvgio
1:5yicHiTAfKANSsasell them
worthy gentleman or Uh -i . was witft us,
and was also one of the party who discov
ered the huge marine monstrosity of ten
years ago, and described it as being iden
tically the same in size and general appear
ance as that which we witnessed this after
noon before we struck camp for Galena,
and expresses the opinion that its home is
in one of the partially submerged caverns
which abound along the bayous in the
vicinity of New California."
A remarkable thing, it may bo said,
about Fish Trap lake is its extreme depth,
the bottom in some places never having
been reached with any of the lines used in
sounding. It sets directly beck from the
Mississippi river, and is a mile and a quar
ter in length and a half mile in breadth,
its banks being bordered with heAvy tim
ber and thick shrubbery. Its junction
with the Mississippi is formed by a mouth
thirty or forty rods wide. It vr&s former
ly a veritable haven for anglers, but since
the discovery of the half nab and half ser
pent monstrosity of tec years ao it has
been deserted, except by the more intrepid
sportsmen of this section. Cor. St. Locis
Globe-Democrat.
Rural Ixrer on the Et Hd.
It was a warm and sunshiny afternoon
on the oast side and everybody was out of
doors. A couple came sauntering up Eld
ridge street toward Houston, hand ia
hand, and talking m the smiling sort of
confidence that betokens lovers. Both were
young, and he was dark and stalwart and
she bad the blue eyes, golden hair and
plump form bequeathed by Teutonic an
cestry. They itood at the corner clasping
hands ia farewell, when suddenly a &hrxH
SPRAY YEAST.
The quickest, strongest and purest
Dry Hop Yeast on tho market. "Will
keep a year in any climate.
Price, 5 cents per package of 7 cakes
For sale by all wholesale and retail
grocers.
Manufactured by Corner & Farnnm
factory oorner Kellogg and Moaley
Avenues.
J. A. BISHOP,
Wholesale and Retail
WALL PAPER
Paints, Oils and Glass.
150 y Market St., Wichita, Kan
CHAS. LAWRENCE,
DEALER IX
Pliotograkrs' : Supplies!
102 E Douglas A venue.
Wichita, Kan. Telephone Connection
BUTLER & G-RALEY
Manufacturers and Dealers In
TINWARE, ETC.
All kinds of cans for shipping purposes, rulta,
luklns powder, etc
213 South Main, Wichita, Kan.
The Hyde & Humble Sta. Co.,
Wholesale and Retail Dealers in
Books : and : Stationery,
Wall Paper, Wrapping Papers.
IU y Main St., - Wichita, Kan
"Wichita Trunk Factory
II. HOSSFIELD, rrorietor.
Manufacturer Of, Wholesale
and Retail Dealer in
Trunks, Valises, Satchels,
Shawl and Trunk Straps,
Pocket Books, AVil-
lov Ware, Etc.
125 W Douglas.
experiment
Sf-.tfTfTHlTAcrro- ss-X
urnvv-smtrm
with new FLOURS. 'Wltm
livnnflj llHVA r- . . .-'A I
for sixteen
, r
iflVA TlPVPr .
- 92
ini.jz,e?ea4
been befeated. MTff-Hi
&ategreas3E4
voice smot tfcc sunny atmospnere that en
veloped them and cried, ixiy. Muter,
you've forgot something " He tuml and
looked the question wh'.-h be aid not
speak. "You've forgotten to put a kiss on
her head," was ebouted derisively. Two
pair of hands parted suddenly, a young
man strode briskly toward the Eat nver
as if he had just recollected an appoint
ment in Brooklyn: a yoaag woaaa pd
swiftly down EW ridge street with her chin
in the air, and a tattooed urchin stood
upon his hands on the pavement and clap
ped his heels together in gUw. 2ew York
Letter.
A Sacaclso Magpie.
Out oa Fillmore street tore is a big
magpie that is never ia iu oage, bet mean
ders all over the neighborhood. It Ft&ad
on the curb, with bedraggled tail and
drooping wins. and whL-U "Any Old
Knives to Grind?" If any nun come up
and whistles to zszkc it reprsat it leta out a
wailing shriek expreMre of lb suprrat
di&gnst- A favorite trick of thin bird U to
whiUe for oogs, when every ear withia
three blocks rata to it. Whoa the kp)
oosgresate the mfpie foes p oa a batch
er's boo, and jeers at litem. Of lata the
poandkeprs deputies hn emntfhi ou
the bird and it habit of attracting dogs,
and the harvest faa beo bountiful. Sao
Fraacuoo Examiner.
Thsre are about 105 urdiae factorkn ia
France, and the produce at eeca Uetmrj is
4.0C0or, MceovsUiaiagKObexaa. The
total produce for t year 1K, wMdi I
not fr from the average, was 3iZJlcb,
and there i a net pi oft: on cms C2eW
.iut 56 cents.
JSNGLiSH PAWSK M.AKKKS,
WORK AND WAGES OF WRITERS AND
COMPOSITORS IN LONDON.
Blf Sams Paid Members of editorial
Staffo Hard and Fast tines Within
WhicU Reporters Toll The 'I'rlnt
er's" Unties and Wide Anthority.
News gathering, as it is understood and
practiced in America, is tabooed in En
glish journalism. Reporters work in
restricted paths. They cannot interview
and must not anticipate anything in the
way of news. If Bob Smith, after a long
career of brutality, is at last arrested for
beating his wife, tho bare fact may bo
stated, but the details of the assault must
not ba given until they are bet forth ofli-
m;5T
f'ZZOj
'S fs
"&'
i M. DE DLOWITZ.
dally In tho police court next day. Should
Sir Humphrey Makeshift elope with
Squire Bolingbroko's wifo a discreet silence
must be preservod until a bill for divorce
is filed, and then it is of ten prudent to only
hint at the names of the parties until the
trinl begins. No comments must be mado
on a case sub judice. Everything must
"happen" in the fullest son- of the Urm
beforo it can be presented to novtpior
readers, and then it must bu set forth in a
stiff, established form.
Sensationalism is frownwl upon, person
alities rigidly prohibited aud pitpmnt gos
sip strictly forbidtlwn. In fact an Knglnh
reporter gets no chaaco to "spread" him
self. He may report a speech in the bust
stylo known to the stenographic wit; he
may write up a flro in storootyiKsd phnwrfv,
or give tho list of prizes at a liower show
with the descriptive aid of the catalogue,
hut to presume to exerciso his fancy iu re
cording tho shifting scenes of life would
entail upon him the severest kmd of
censure, if not instant dismissal, with tho
editorial verdict that ho was a disgrace to
the profession.
One result of this policy of repression is
that the reporter becomes a more machine.
He goes through a daily routine mechani
cally, takes and transcribes hia shorthand
notes like an automaton, and luldlitt his
brain with parrot like phrases that have
done duty for tune immemorial, and will
probnbly continue to enhance tho dnincss
of British journalism uiitil the crack of
doom, lie is a reporter pure nnd simple,
whose ears have to be acuter than his eyes,
and whose usefulne5s is graded according
to his ability to write shorthand and his
celerity of transcription. Whatever de
scriptive matter is needed by a British
newspaper is generally done by a "special
commissioner," whose work is heavy com
pared with thut of the average American
news gatherer, but who would feel insulted
were he classed with the ordinary reporters
of his journal.
The highest ambition of an English re
porter is to bo employed on tho parliament
ary staff of a London daily. Great pains
aro taken with parliamentary reporting,
each journal maintaining a separate forco
of from five to eighteen men. Som of tho
newspapers only give the leading speeches,
but Tho Times prints a very full report,
devoting three or four pages a dny to tha
proceedings of both houses. Tho Ilau
sord.iwhich is recognized as the official roo
ord of the debates, is made up largely from
Tho Times' report, member having tho
privilege of clipping from its pages and
extending nnd correcting their remarks
before publication in the oilicial volumes.
The Times lias the largest reporting staff
in parliament. The men begin uilh
"turns" of half an hour or twenty min
utes; late at night the "turns" are cut
down to five or ten minute, and tho re
porters drive Ut Printing House square to
transcribe their notes Tbear quarters aro
very comfortable. An extensive reference
library is at their command, ami so that
they may nut be disturbed in their work of
transcription servants are constantly at
their elbows to supply every want.
In tho matter of accommodation, both
for editors aud reporters, the Koglwh Jour
nals are very liberal. Kach writer ha a
separate room, and an electric bell within
reach to summon an attendant, and flvo or
tix rooms aro usnaliy Mt apart for the re-
OEON6K ACGCBTCS RAUL
porters, who? work ia not very arduous m
rule. In the event of a speech by men
tike Gladstone or Lord Swliebvy Iron six
So eight nva are detailed to report it. awl
their "copy" ia all in y the tle the f-ch
Li delivered. Neither the typerrrU'-r nor
the phonograph hae yet b railed in to
uwist in reporting or trawwriV'K. rtoe
KBinent men hare gradoeted from tins re
porters' gallery of the houae of eomiaoe.
Charles Dtckeoa we a perUatoeeiArr
aaad, and Xr Locy, the able dior-io
shief of The Deiiy Xewa. begM m a r
pornv en toet paper.
Th: city editor of a Lmdoa Jonnwl an
iwers to our fine oriel editor; he i note
rag to do with the new enhaaeaa. eod U
bald solely reapourfbie im toe report f
Stock Exeb'Oge and other flaeineial trar
factions. Tbf chW rrparter hem eftury '
th locsd tnri i j the prt lirea Ie lAHi,r,
lb aainoT couru. ecrtdea, iiren, ee . a
rappUed by "ponT--He " Roerte-i
py j rtii Oel'T-wd t tJ- oaeaptwag
-oeea without rrvitien: thu oewu.y
Je cam at si cot wfceu aruaat sM ixf).
That then art sot saatenra eerrtcttoni
to be aade by political 4keri, woo are
keenly mmiUtc to error of itotMeaes',
show. tL&t tb Kogiftnh reertr. however
defictosl he may l is. otter rort. as
bo reJvd upon iar tb greatest aoearnry 'c
wp-irune: toe uUntaem of public nva
Rr-a when oeodfand work i refliJr 1
fall ebenbaod notes are V&m. T-t
mark ers made by the Mde of importeot
fcrotoeer-4, with tue noit tbet notejsuc
motoriU m ci initial e if a eoeseh be to
b est down a naif or im-Jexrem.
Alamo rrory enajomi.e em apeo r !
England U mwerooa in ttcaealteap wwb
ImmulC drtow roeetre aauroU nmuwe
of from see vz'zzjl to tfte. rtoorften bate
Mt '-lX
JllJflrSi !
ft2?KJ i '( V! -T- -rSi4 -
V'Z0&?rt'
I At J- v
THE MEAT EKBLISH REME1T,
BEECHAM'S PILLS
i iii xrniuud nHii liciiuai lmuimsrt
Werti & guinea SaxP tat Kii
for 25 Cents,
BY ALL. URCGGISTS.
iiuu... m . tutir, ana
the comiKHittirs, ,jrrssraen and atercotyp
ers are granted at ieast a week in the year
with full pay. Then there is a system of
rvmcinne fr nl.l wrvontv in ovfr ?jnjsrt
ment. Englishmen do not chango their
situations mucu. It is no uncommon;
thing to find reporter who have been con
nected with the same journal for thirty or
forty years. When they roach the retiring
age in some oflices they receive n third or
half their salary fer bfe. Compositors fre
quently step in fer pensions af:er loos ser
vice. The "printer." r& the foreman of a Lon
don daijy is called, is a very important per
sonage, and eorrspoeU in omo respecut
to our night editor f ' "make up" la
completely in bit. L.i.i ' keeps careful
track of the copy, ai tl ...-. i i.rd jjoea a long
way with the sub-editor aa to what shall or
shall not run near prow time. Ills men aro
the pick of the craft. Earh one acquired
considerable editorial abtuty: he in not a
; slavish follower of "copy," but reads for
I tho sense, and is on the alert for blunders
of omiion or commission. Hid proofs aro
remarkably cleoq, and tho printer keeps
him up to the highest notch of perfection,
bome of the printers ire a little too auto
cratic and exacting. One on The Standard
some years ago forbade the use of slngt to
protect the realtor after dumping. Thi.1
rule developed exceptional steadiness of
nerve in the chapel, since the slighter
shakines wns dL-AtroiM in the small
1mui-s wheu the 'takes" ran about tho
linos each.
Tho identity of writers fe carefully con
coaled by Enguah journal. Bvun tha
names of the wjftori arc not known out
side th? proft...i n I is always Tho
Times, The Telegraph. The Standard thaB
addroses the public, and in no other cou:
tryistheetiitorial"wo"9opowerfuL Manr
of tho minor correspondent areas ignorant
of the personality of the manager of tho
papers as the readers. They deal with Tho
Standard, 3'he Daily News, or The Time".
Tho pay for newspaper work varU witlv
the cias of journal served. In Louden tS.CXX)
a year is easily tterned by a leader writer,
and there an several w no run high a-i
?10,OOiJ. Eililor inrhief reeive from 113.
000 to Stt.100 Tlw Trleraph pay ltd ParU
corrnspondttut 4t.'0U and xpensen. He IS
a very able writer, and sends daily two
ARTHIBAin FuRKEa.
colnmnn of th munt rMlahle mattrr
printetl in Ennland, in'til, hU lettera ami
perfoet jwn pictures of life in the gay mr-
tropoli, and are cotapemble in point of
attrctivrne)w to the work of the beat jour
nalism in America.
Men of nut like Kdwln Arnold, De IHo
witz and George Augnettu Sela aro said to
earn fabulouw Mima. Arnold i at prHeti&
writing to The Telegraph frutu JapaH, and
Snla haa written a daily editorial for tho
same paper for twenty year. beidea con
tributing largely U periodic 1 literature.
The Timea hae the largeet fcUiff of artlcio
writer f any jonrual in the world. A.
number who havo made their mark in dif
fcrniit branfhci of bw and wlence hnvo
reUinera of from f 1,000 to 91000 a year,
and rtfetve IV) for ich lMtl they ar
called upon to contribute. War eorr
spondcntM Hitd sfjciul eommieetoHen- -men
who do the ilrint4ve work receivo
from 9i ftfK) to 110,000 per annum. Arch
timid Forbe had the MtOr aum. wr i
carte blnnchu tut to expenaea, when he
was with The Daily Xewa After hfe bri I i
iant work during tho wnr.betwern Bam v
ami Tnrkey the proprietor preeented hit .
with a check for &9jm in addition to bU
falary. From tU5 to (flu per column Is th
remuneration nt foretirn i orwupomlentj
not oa the aaiary ht of the b'.'uUng new- '
papers. The rataa paid by toeprioeipU
jfmrnala of the prweincea are aboui ow
third letM all round.
Ijondon reportera receive front tO to f 50
per wrrk. in the provinrMi junlote K't
from W to 110. twnond and third nnni4
from tS to tli&. aud chief of mtjdt from I-
to 140. The Iondon prnea mrelveN mnatfold
reporte of the tranaactiona at the minor
noorts, accidente, Are, eir . and per tktt-t
and four rent per lino for wUl U pui
llabrd. The Time employ eeeeral yon n
leuTbtere to moke report of tho prate!
log of toe higher courte. When race '
unosnaj public intereat ore on aborShnri'l
rfporter are ngnged to furaieh vedmtini
microti nts.
ComfMMltors work on "piaox" mm! mm
in IndoM from 112 to IO week, oeobn!
ng to ability and ItiHl ia "Ufc" iJai
other Engiieh workmn, tbry do not o-r
exert thctualttM A "alogger." er ouo
who la ronUnaelly on the ruan to mek a
"atnng," 1 frowned upra by the ehwprU
They get their hwc or fin nt rogolar itr
ral during tha nini, Ui cbnoel noftafC
man Ve wntt neon tb"on in UkU teoy irt
Time nanda get tr'mt (U to &9 a week. I'l
certoln piovinr',J iovaa time n'a .
peid, uauaily t0'A per wek of fifty tw
boar. roofreederk r!mve from f !' Vo
iL&awnea. Jilit W liJiiATt.
- m uerman peoeamtry a bLf
kmg ezlatud tha tee orI -.f tki pttr trt
or. ae wisf th:::. u n.nery Dryad ir
habiting tt. y.nm ! t rr. ipunet tie
saaiady
you hare, a
COLD or COUCH, j
arute or Jeadinx to J
CONSUMPTION, I
SCOTT
U
EMULSION
i OF Pl'BE C OI LITEIl OIL
awd nTropno3Pnm;s
i
or LtstR juro soda
xr tmxrxxt crxrji-w ron xt.
( T
t. rrrrt-t. ' t. eiU.
Ufcr V'.-T'io '! I'l JlyfpAfMtf
Vy fhjabM a . 'e " rv It
r ie Oft Ur on A ?Ue
riM. iaor IM kuOM im4 rr
coss cjrpTioy.
Sorofwte, Flesh Producer
maee H aowrtec 23 TT 5 IMSIWOI.
KhMkt r lbMMi tw-i.r
MHmuWH
teaer.yeof
xml | txt | http://chroniclingamerica.loc.gov/lccn/sn82014635/1890-10-03/ed-1/seq-7/ocr/ | CC-MAIN-2016-40 | refinedweb | 5,750 | 72.46 |
The prototype of this function is given below.
int remove(const char* filename);
After the operation of the above function, the file is no longer accessible with the name given as the argument. The subsequent attempts to open the file with the same name will fail unless it is created again with that name.
Illustrates the functions remove()
#include <stdio.h>
#include <stdlib.h>
void main()
{
FILE* fptr ;
fptr = fopen("Student file", "w");
clrscr();
if( fptr == NULL) //test if fopen () fails to open file. /
{
printf("File could not be opened.");
exit (1);
}
else
printf("File Student_file is open for writing.\n");
remove ("Student_file");
if(remove ("Student_file") != 0) // test for removal
printf ("Student_file is removed. \n");
freopen ("Student_file", "a", fptr);
/* reopen file for appending*/
if(fptr ==NULL)
printf( "Failed to reopen");
printf ("Student_file is opened again. \n"); | http://ecomputernotes.com/what-is-c/file-handling/function-remove | CC-MAIN-2020-05 | refinedweb | 136 | 59.8 |
So I just downloaded DEVc++, just using an example my teacher has given during class to check if it works. Now that it didn't work and I don't know why. I tried to looking for answers about linking error online, but I can't understand the answers.
So here is the program:
//Program to convert from Fahrenheit to Celcius
#include <iostream>
double fahrenToCelsius (double t);
//precondition:
//t is a valid tempreture in Fahrenheit
//postcondition:
//returns equivalent temp. in Celcius
int main ()
{
using namespace std;
double f;
cout<< "Enter temp. in degrees Fahrenheit. ";
cin >> f;
cout<< f << "in degs. Fahrenheit = "
<< fahrenToCelsius (f) << "in degs. Celcius" ;
cout<<endl;
return 0;
}
double fahrenToCelcius (double t)
{
double c;
c= (t-32.0)*(5.0/9.0);
return c;
}
And here is the problem:
[Linker error] C:\Users\Owner\AppData\Local\Temp\cckex8SZ.o:fahrenToCelsius.cpp: (.text+0x3d): undefined reference to `fahrenToCelsius(double)'
collect2: ld returned 1 exit status
I'm suspecting the program maybe that I saved it wrong? I saved it as fahrentoCelsius.cpp inside the folder "Work" ( I created this folder) which is inside the folder "Dev-cpp".
Sorry, can someone help me delete this post? I just found out my problem was so simple. Nothing about saving. I just have to put the line using namespace std; right underneath #include<iostream>
Well, I solved my own problem, sorry about the trouble again!
View Tag Cloud
Forum Rules | http://forums.codeguru.com/showthread.php?532491-linking-error-I-just-started-learning-C&p=2099461 | CC-MAIN-2015-35 | refinedweb | 238 | 59.5 |
Works for me, thanks
Search Criteria
Package Details: lightspark-git 0.7.2.r528.gc5a377a-1
Dependencies (15)
- boost-libs
- curl (curl-git, curl-http2, curl-http2-git)
- desktop-file-utils
-)
- glew (glew-git, glew-libepoxy)
-2_mixer (sdl2_mixer-hg)
- boost (make)
- cmake (cmake-git) (make)
- git (git-git) (make)
- llvm (llvm-assert, llvm-pypy-stm, llvm-svn) (make)
- nasm (nasm-git) (make)
- gnash-gtk (gnash-git) (optional) – Gnash fallback support
Required by (0)
Sources (1)
Latest Comments
Alad commented on 2016-11-05 23:00
dlandau commented on 2016-11-05 20:32
Updated, could you test now?
Alad commented on 2016-10-23 17:53
Fails to build:
dlandau commented on 2016-06-09 13:36
Seems to have been an upstream errro.!
hugoroy commented on 2014-10-22 20:14
This is now fixed
hugoroy commented on 2014-10-17 12:00
I can't compile it: I get this (some bits translated from French)
[ 36%] Building CXX object src/CMakeFiles/spark.dir/scripting/abc.cpp.o
/tmp/yaourt-tmp-hrd/aur-lightspark-git/src/lightspark/src/scripting/abc.cpp:49:36: fatal error: llvm/Analysis/Verifier.h : No such file or directory
#include <llvm/Analysis/Verifier.h>
^
compilation over.-09-24 07:44
Disowned, please treat it with care :)
sekret commented on 2014-09-21 06:46
Confirmed! But this has to go upstream! Wanna report it there?
neng commented on 2014-09-20 23:03
aur-lightspark-git/src/lightspark/src/scripting/abc.cpp:49:36: fatal error: llvm/Analysis/Verifier.h: No such file or directory
compilation terminated.-07-30 22:52
Adopted and updated. Please give word for anything missing. Strangely the browser plugin builds even without xulrunner. I built it in a chroot environment to be sure.
FredBezies commented on 2014-05-18 09:31
You should try this PKGBUILD, far cleaner and working :
apaugh commented on 2013-11-11 21:52
Is anyone else getting this error? The build seems to have a problem with my version of freetype. Could this be an x64 issue? I haven't been able to figure it out. I tried an older snapshot of the repo (fff7e63) and I have the same issue.
Linking CXX shared library ../x86_64/Release/lib/liblightspark.so
/usr/lib64/i386-linux-gnu/libfreetype.so: could not read symbols: File in wrong format
collect2: error: ld returned 1 exit status
src/CMakeFiles/spark.dir/build.make:2709: recipe for target 'x86_64/Release/lib/liblightspark.so.0.7.2' failed
make[2]: *** [x86_64/Release/lib/liblightspark.so.0.7.2] Error 1
CMakeFiles/Makefile2:157: recipe for target 'src/CMakeFiles/spark.dir/all' failed
make[1]: *** [src/CMakeFiles/spark.dir/all] Error 2
Makefile:136: recipe for target 'all' failed
make: *** [all] Error 2
cpatrick08 commented on 2013-08-21 22:53
might have gotten the errer from migrating to parabola from arch. reinstalled with parabola iso and don't have the problem anymore
mmm commented on 2013-08-21 20:59
hmm..just tested and I cant confirm your troubles, builds fine here (x86_64). Are you using updated versions for all progs?
btw, you can test: download the PKGBUILD from this site, install all dependencies and run makepkg. also, does it create a dir, what is its name?
cpatrick08 commented on 2013-08-21 19:57
get following error message when I run makepkg ==> Making package: lightspark-git 20130215-2 (Wed Aug 21 14:56:48 CDT 2013)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
-> Cloning lightspark git repo...
==> Validating source files with md5sums...
lightspark ... Skipped
==> Extracting sources...
-> Creating working copy of lightspark git repo...
/usr/bin/makepkg: line 1387: cd: lightspark: No such file or directory
==> ERROR: Failed to change to directory lightspark
Aborting...
here is my /usr/bin/makepkg
dlh commented on 2013-08-13 15:36
I have the same issue as @the-kyle
kyle commented on 2013-08-09 17:29
Perhaps I'm missing a package somehow, but the current version fails to build on my system. I am including only one of many similar errors here. Hopefully someone can help.
In file included from /tmp/packerbuild-1000/lightspark-git/lightspark-git/src/lightspark/src/scripting/flash/net/flashnet.h:29:0,
from /tmp/packerbuild-1000/lightspark-git/lightspark-git/src/lightspark/src/swf.h:32,
from /tmp/packerbuild-1000/lightspark-git/lightspark-git/src/lightspark/src/tiny_string.cpp:23:
/tmp/packerbuild-1000/lightspark-git/lightspark-git/src/lightspark/src/backends/decoder.h:31:30: error: ‘AVCODEC_MAX_AUDIO_FRAME_SIZE’ was not declared in this scope
#define MAX_AUDIO_FRAME_SIZE AVCODEC_MAX_AUDIO_FRAME_SIZE
^
/tmp/packerbuild-1000/lightspark-git/lightspark-git/src/lightspark/src/backends/decoder.h:202:19: note: in expansion of macro ‘MAX_AUDIO_FRAME_SIZE’
int16_t samples[MAX_AUDIO_FRAME_SIZE/2];
^
/tmp/packerbuild-1000/lightspark-git/lightspark-git/src/lightspark/src/backends/decoder.h: In constructor ‘lightspark::AudioDecoder::FrameSamples::FrameSamples()’:
/tmp/packerbuild-1000/lightspark-git/lightspark-git/src/lightspark/src/backends/decoder.h:207:26: error: ‘samples’ was not declared in this scope
FrameSamples():current(samples),len(0),time(0){}
^
/tmp/packerbuild-1000/lightspark-git/lightspark-git/src/lightspark/src/backends/decoder.h: At global scope:
/tmp/packerbuild-1000/lightspark-git/lightspark-git/src/lightspark/src/backends/decoder.h:283:2: error: ‘CodecID’ does not name a type
CodecID LSToFFMpegCodec(LS_AUDIO_CODEC lscodec);
^
make[2]: *** [src/CMakeFiles/spark.dir/swftypes.cpp.o] Error 1
FredBezies commented on 2013-07-20 14:42
Cleaner PKGBUILD :
Feel free to use it !
mmm commented on 2013-04-12 16:41
upgraded PKGBUILD to take advantage of new makepkg's ability of VCS packages!
mmm commented on 2013-04-12 16:37
@GuestOne: git/svn and other VCS packages are not "updated", it depends on you, when you build it, a fresh upstream version is used.
Short: wanna new? rebuild!
mmm commented on 2013-04-12 16:37
@GuestOne: git/svn and other VCS packages are not "updated", it depends on you, when you build it, a fresh upstream version is used.
Short: wanna new? rebuild!
heftig commented on 2013-03-29 23:08
No, sorry. Lost interest.
GuestOne commented on 2013-03-29 23:05
Is this package updated or not?
xrchz commented on 2012-05-22 17:24
should the optional dep be pulseaudio rather than libpulse?
heftig commented on 2011-10-30 00:33
Won't do anything about that, sorry. Wait for upstream to fix their LLVM support.
Anonymous comment on 2011-10-29 21:41
dWolf commented on 2011-07-20 15:33
current lightsparc-git don't work with the arch's ffmpeg package. For me it just crashes on youtube or vimeo.
td123 commented on 2011-02-02 18:18
librtmp is a new dependency for lightspark, I don't think there is a package for it
td123 commented on 2011-02-02 05:02
Hello, I'm getting the following error when building this package;
anoknusa commented on 2011-01-18 21:29
I haven't had any problem building Lightspark; however, YouTube videos state that the plugin is outdated. Replacing it with flashhplugin-prerelease takes care of this, but I'd rather go to OSS route. Any idea why this is? I didn't see anything mentioned on the Lightspark homepage or Launchpad.
td123 commented on 2011-01-05 18:29
I have worked through getting lightspark to correctly build
You can really just copy+paste the code, but it does need to be statically linked against libxml++ 2.33.1 or it has to depend on that version (which no package exists for it right now)
Otherwise if you don't, there will be lots of random crashes related to libxml++.
td123 commented on 2010-11-08 20:57
This package fails to compile, please see the lightspark PKGBUILD to see how it is worked around there.
cookiecaper commented on 2010-11-07 09:44
See the following thread:
It appears that LLVM 2.8 has an incompatibility with GCC 4.5 and will sometimes cause this error. It is fixed in LLVM SVN, so you'll either need to upgrade to LLVM SVN or downgrade to 2.7. The guys in #llvm say that they usually don't do point releases.
Anonymous comment on 2010-10-19 08:38
Ftr, this package apparently won't compile with the current LLVM (2.8). You need to revert to 2.7 for it to compile.
Anonymous comment on 2010-10-15 19:31
fails in compilation
td123 commented on 2010-09-25 04:37
Ya, libxml++ was recently used to handle security policies in flash. please add libxml++ to the depends array
Anonymous comment on 2010-09-24 12:14
I got errors trying to build this one from yaourt.
Installing libxml++ fixed it. I guess it should be in the dependencies?
Thnx for the pkg
td123 commented on 2010-09-16 19:18
@skodabenz
Please finish reading the second sentence in the link you posted. PA is still required, but there is now support for a plugin architecture which makes it possible for an alsa backend.
Anonymous comment on 2010-09-14 16:06
Seems like Alsa audio backend is now supported . Pulseaudio dependency can be dropped.
Anonymous comment on 2010-09-12 13:34
requires boost as a build dependency
td123 commented on 2010-09-12 02:51
please add a .install file (an example can be found on the from the lightspark package) because a .desktop and png icons were added upstream
td123 commented on 2010-08-19 19:06
upstream repo has changed to git://github.com/lightspark/lightspark.git
heftig commented on 2010-08-10 23:57
Not out of date. Still builds.
heftig commented on 2010-07-21 20:13
Leftover from inspection, sorry.
Anonymous comment on 2010-07-21 20:12
Uh, why is there a zsh command in this PKGBUILD?
kfgz commented on 2010-07-21 14:32
Please add line conflicts=(lightspark)
kfgz commented on 2010-07-21 14:27
Please add line conflicts=(lightspark)
nTia89 commented on 2010-07-21 09:35
i've installed lightspark but how can i enable it in firefox or other browser ?
thks
heftig commented on 2010-07-20 22:43
Done.
intgr commented on 2010-07-20 21:55
Lightspark now uses fontconfig to locate fonts, so ttf-liberation is no longer a dependency; please remove.
heftig commented on 2010-07-20 20:58
Seems to play youtube videos, but controls don't work.
flamelab commented on 2010-07-20 20:28
Is it usable right now since it supports fontconfig (no liberation font issue) ?
rnestler commented on 2010-07-18 10:44
If I check the created package with namcap I get the following errors and warnings:
lightspark-git E: Dependency detected and not included (libffi) from files ['usr/lib/liblightspark.so.0.4', 'usr/lib/liblightspark.so.0.4.2', 'usr/lib/liblightspark.so']
lightspark-git W: Referenced library 'liblightspark.so.0.4' is an uninstalled dependency
lightspark-git W: Dependency included but already satisfied ('mesa')
lightspark-git W: Dependency included but already satisfied ('sdl')
lightspark-git W: Dependency included but already satisfied ('zlib')
lightspark-git W: Dependency included and not needed ('ttf-liberation')
lightspark-git W: Dependency included but already satisfied ('pcre')
The first one sounds like a real issue.
heftig commented on 2010-06-21 19:37
Patched. Builds again, provided you have glew-1.5.4 (which should be released shortly).
juantascon commented on 2010-06-21 14:40
hi, I am getting this error compiling:
usr/bin/ld: cannot find -lLLVMLinker
you can see the whole log here:
heftig commented on 2010-06-17 21:04
It's not in any usable state yet. Some very simple SWFs actually manage to not crash.
flamelab commented on 2010-06-17 20:44
Have you actually make it work ? It always crashes on my 64bit system.
heftig commented on 2010-06-17 20:26
Updated.
cookiecaper commented on 2010-06-17 18:56
See for a pkgbuild that gets around the lib64 errors.
cookiecaper commented on 2010-06-17 18:39
Now getting the same as MadFish :(
Anonymous comment on 2010-06-17 17:19
Can't build on x86_64. PKGBUILD wants to use "lib", while it is being built to "lib64".
chmod: cannot access `/home/madfish/temp/lightspark/pkg/usr/lib/mozilla/plugins/liblightsparkplugin.so': No such file or directory
==> ERROR: Packaging Failed.
Aborting...
heftig commented on 2010-06-17 10:11
Install libvpx from [extra].
flamelab commented on 2010-06-17 09:06
Anyone else with vp8 errors during compiling ?
heftig commented on 2010-06-17 03:48
The issue is that Arch Linux's GLEW doesn't ship with a pkgconfig file. Anyway, patched.
Anonymous comment on 2010-06-17 00:33
You can kludge it into building by using the PKGBUILD here -
cookiecaper commented on 2010-06-16 22:23
I'm getting the same as intgr and Mystal. Using stable packages on x86-64.
cookiecaper commented on 2010-06-16 22:23
I'm getting the same as intgr and Mystal. Using stable packages.
intgr commented on 2010-06-16 17:27
@Mystal
Same problem here (x86_64, testing, community-testing). They probably broke their git.
Mystal commented on 2010-06-16 17:23
I'm don't believe I'm doing anything wrong, but for some reason cmake cannot find glew. I have it installed, double checked that. Any ideas?
...
-- Found Threads: TRUE
-- Found CURL: /usr/lib/libcurl.so
-- Found ZLIB: /usr/lib/libz.so
-- checking for modules 'gl;libpcrecpp;libavcodec;libavutil;ftgl;x11;libpulse;glu;fontconfig;glew'
-- package 'glew' not found
CMake Error at /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:259 (message):
A required package was not found
Call Stack (most recent call first):
/usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:311 (_pkg_check_modules_internal)
CMakeLists.txt:51 (pkg_check_modules)
...
heftig commented on 2010-06-14 23:00
Ah whoops, that was a typo. Sorry :)
Anonymous comment on 2010-06-14 22:11
Looks like ttf-liberation can be taken out as a dependency:
Also I can't see why gtk1 is required? I haven't tried to build this yet though as I don't currently want pulseaudio.
Anonymous comment on 2010-06-12 10:05
For those having crashs, not working on 64bit or wondering why ttf-liberation is mandatory please read this :
heftig commented on 2010-06-05 23:19
Depends on it. At least according to the included dpkg control file.
intgr commented on 2010-06-05 22:35
Why the dependency on ttf-liberation? I don't want ttf-liberation installed, Adobe Flash works fine without it. It makes no sense.
daimonion commented on 2010-05-20 09:32
Not working for me on 64bit architecture.
heftig commented on 2010-05-19 16:38
Looks like logging has been enabled for Release as well.
Adding glproto to makedepends.
Anonymous comment on 2010-05-19 09:38
+1 for glproto in makedepends, no matter if it's documented as makedependency or not, it's definitely needed.
Anonymo commented on 2010-05-19 06:33
losinggeneration, you can create a package that you maintain called lightspark-debug or something
losinggeneration commented on 2010-05-18 20:46
Fair enough. You're free to do what you want with your package, but since you yourself admit the package isn't usable (because of crashes), it'd make sense to me to have it as the default with debugger symbols so useful backtraces could be submitted upstream and change it to Release when it does become usable.
heftig commented on 2010-05-18 20:42
Not for general use (should this become usable). If you want to debug it, change it yourself.
losinggeneration commented on 2010-05-18 20:42
bah, it's actually RelWithDebInfo not, DebugWithRelInfo
losinggeneration commented on 2010-05-18 20:39
since the package is from Git, and the source was just recently declared beta, wouldn't it be better to have CMAKE_BUILD_TYPE be something more like Debug, or DebugWithRelInfo (rather than Release)? (sorry for the edits...)
losinggeneration commented on 2010-05-18 20:38
pkgconfig is a build dependency
Also, I don't know about baghera's suggestion, I don't have glproto and it seems to compile just fine.
One final thing, since the package is from Git, and the source was just recently declared beta, wouldn't it be better to have CMAKE_BUILD_TYPE be something more like Debug, or DebugWithRelInfo (rather than Release)?
heftig commented on 2010-05-18 20:38
The glproto headers aren't referenced anywhere in the source.
losinggeneration commented on 2010-05-18 20:32
strange, it builds and runs in my arch64 chroot, but not in my arch32 chroot
losinggeneration commented on 2010-05-18 20:27
Also, I don't know about baghera's suggestion, I don't have glproto and it seems to compile just fine.
losinggeneration commented on 2010-05-18 20:26
pkgconfig is a build dependency
Anonymous comment on 2010-05-18 20:13
This build s successfully for i686 by changing the 'arch' in the PKGBUILD. Have not tested it in use as yet, however.
Duologic commented on 2010-05-18 18:23
anyone has a fix or workaround for x64?
flamelab commented on 2010-05-18 17:53
It crashes here as well (x86_64).
baghera commented on 2010-05-18 17:20
You should add glproto to makedeps, it doesn't build without it.
heftig commented on 2010-05-18 17:00
Builds, but crashes. Needs more research. | https://aur.archlinux.org/packages/lightspark-git/?comments=all | CC-MAIN-2017-13 | refinedweb | 2,922 | 57.37 |
Ice Manual Wrong About Properties Initialization
in Help Center
In section 30.9.1, the Ice manual states that one can manage one's own properties (properties not reserved for Ice modules) by initializing the Ice communicator and then using the communicator's getProperties() call. I have tried this little example based on the samples given by the manual:
Well, this does not work! When I call the program (called 'props') with the following command line:
#include <Ice/Ice.h> #include <iostream> int main(int argc, char* argv[]) { Ice::CommunicatorPtr ic = Ice::initialize(argc, argv); std::cout << "max files = " << ic->getProperties()->getPropertyAsInt("Filesystem.MaxFileSize") << std::endl; }
Well, this does not work! When I call the program (called 'props') with the following command line:
it prints out:it prints out:
./props --Filesystem.MaxFileSize=555
max files = 0
0
Have a look at the section "30.9.3 Parsing Properties":
And the code example that follows.
Still, I think that the documentation (section 30.9.1) is misleading in that respect and should be amended accordingly.
Cheers | https://forums.zeroc.com/discussion/comment/34739/ | CC-MAIN-2019-39 | refinedweb | 175 | 57.47 |
Now that the State Machine Activity Pack has been released on Codeplex (see for details) I thought I should resurrect one of the samples I regularly used in the 3.x days to show a state machine example. The point of this example is to show how the state machine activities could be used to control access to a building as it’s a simple example and showcases the main features available in the state machine activity pack.
My example models access control to a door – the host is a WPF application (which could do with some better styling!). Imagine if you could ‘boot’ up a building. In that case each door would for sake of argument be closed and locked – this forms the initial state of the state machine. Thereafter we’re awaiting an event, which is the user unlocking the door. This might be by using a PIN pad or a smart card – the details of this are beyond the scope of the post. An overview of the states and transitions is shown in the image below.
When the user presents their credentials the Unlocked transition occurs – here you could optionally pass the PIN number (or card entry code) into the transition to lookup the code in a database and verify that it is correct. In my example I just transition to the ClosedUnlocked state without any verification.
From this state there are 3 possible things that could happen – the user could open the door (the most likely outcome), they could choose to re-lock the door for some reason, or they might not open the door within a set period in which case you would want the door to automatically re-lock itself and transition back to the ClosedLocked state. So at this point there are 3 transitions executing, two are waiting on an event (more on this in a moment) and the third (Timeout) is waiting for a Delay which in my example is 5 seconds. I’ll explain a little more about the State and Transition classes as they are critical to understanding how the state machine works.
A state machine comprises a collection of states and a collection of transitions between states. There’s always one initial state, and you can optionally have a final state (which completes the execution of the workflow instance). When you enter a state you can optionally run some activities, and similarly when you exit a state you can also execute some activities. To define the entry and exit activities, double click on the state in the designer which will show something like the following (I chose the ClosedLocked state from the above diagram)…
When a state activity executes it calls the Entry activity (which can of course be a Sequence so that you can run multiple activities) and we then find all outgoing transitions. We then execute each transition, which in the case of my state machine example executes a set of activities that create a bookmark and then wait for this bookmark to be resumed. This may seem a bit odd – if you have 3 transitions we schedule the Trigger activity (shown in the image below) for each of the 3 branches. When the state machine engine then does is waits for one of these activities to complete, and whichever gets in first will ‘win’ and that transition will complete.
Possibly the easiest way to view transitions is as a Pick activity. When you are in a given state, we effectively execute each transition branch in a similar manner to that of the Pick activity, and the first activity to complete will complete the transition. You can optionally include a Condition on the transition which is evaluated when the trigger event occurs and the transition is only completed if the condition returns true. In the ClosedLocked state we’re effectively waiting as follows…
I’ve omitted any Actions in the above as the transition branch works in a slightly different manner to Pick. When the transitions Trigger activity completes (and assuming the condition evaluates to true) we’ll also execute the Action activity if you’ve created one.
So, in a state we’re waiting for an activity to complete, and in the above image I have a custom activity shown as the DoorUnlocked and DoorLocked activities. These are instances of the DoorEvent activity which uses a bookmark that can be resumed by the client. The code for this activity is fairly trivial and is shown below…
public class DoorEvent : NativeActivity
{
protected override bool CanInduceIdle
{
get { return true; }
}
[RequiredArgument]
public InArgument<string> BookmarkName { get; set; }
protected override void Execute(NativeActivityContext context)
{
context.CreateBookmark(BookmarkName.Get(context), new BookmarkCallback(OnBookmarkResumed));
}
private void OnBookmarkResumed(NativeActivityContext context, Bookmark bookmark, object state)
{
}
}
Not a great deal going on there really – it has a mandatory BookmarkName property which defines the name of the bookmark we’re using. In the example, the name of the bookmark matches the transition label.
In the example I’m using a WPF host, and using the MVVM pattern (here with only a View Model) to maintain the user interface. The UI needs to know what state the state machine is in in order to resume appropriate bookmarks, so in each state activity I pass the state out to the View Model using another custom activity. This updates a property on the view model which in turn enables/disables parts of the UI.
There are a couple of classes used in the example for view models – the MainViewModel class is the root of the application, and it contains a set of DoorModel instances, one for each door shown on the UI. When a DoorModel is created it creates a WorkflowApplication instance and adds an extension to the instance (IDoor) that the door model implements. It is this extension that the workflow calls back to the door model on when notifying state changes. The pseudo code for the system is shown below…
Thereafter the UI is in charge of the state machine. When a command such as ‘Unlock’ is executed, it simply resumes the appropriate bookmark on the state machine…
_unlockCommand = new DelegateCommand((unused) =>
{
_stateMachine.ResumeBookmark("Unlocked", null);
},
(unused) =>
{
return this.DoorState == DoorState.ClosedLocked;
});
Here I’m using a DelegateCommand class so that I can use lambda functions to respond to the command (and also to define whether the command is valid or not based on the current state).
That’s pretty much all there is to it – as the user clicks around on the UI it executes the appropriate command, this resumes a bookmark which will then force a transition in the workflow, and when a new state is entered this calls back to the view model to indicate the new state. Simple.
There’s one other part of the application I wanted to call out and that’s the use of a custom PlaySound action on each transition. This simply plays a sound to indicate the state of the door (well, sort of!) when the transition completes. As an example there’s one sound used when the timeout happens but another when the door is manually locked. I initially defined these sounds in the ‘Entry’ element of each State activity, but this didn’t indicate the particular transition that had just occurred – i.e. there was no way to distinguish how the state machine had transitioned to the ClosedLocked state to allow me to play a different sound on entry to that activity.
When you run the application you’ll see the following wonderful UI…
If you click on the Unlock button the workflow executes and changes the state to ClosedUnlocked. You’ll also see a couple more buttons show up that correspond to the now valid states (Lock and Open). If you unlock a door and then leave it for around 5 seconds the door will automatically lock again. For all transitions you’ll also be rewarded with some sounds – if these don’t work check the PlaySound activities in each transition and choose your own files to play (I’m on Windows 7 so the directory and files might be different from your machine).
Click here to download the source code and VS project. Feel free to tart up the user interface!
Originally posted by Morgan Skinner on 15th June here | http://blogs.msdn.com/b/ukadc/archive/2010/07/13/a-practical-state-machine-example.aspx | CC-MAIN-2013-48 | refinedweb | 1,386 | 55.07 |
[
]
ASF GitHub Bot commented on DRILL-4387:
---------------------------------------
Github user jinfengni commented on a diff in the pull request:
--- Diff: contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseGroupScan.java
---
@@ -34,6 +34,7 @@
import java.util.concurrent.TimeUnit;
import com.fasterxml.jackson.annotation.JsonCreator;
+import com.google.common.base.Objects;
--- End diff --
right. I'll remove these unused imports. Thanks.
> Improve execution side when it handles skipAll query
> ----------------------------------------------------
>
> Key: DRILL-4387
> URL:
> Project: Apache Drill
> Issue Type: Bug
> Reporter: Jinfeng Ni
> Assignee: Jinfeng Ni
> Fix For: 1.6.0
>
>
> DRILL-4279 changes the planner side and the RecordReader in the execution side when they
handles skipAll query. However, it seems there are other places in the codebase that do not
handle skipAll query efficiently. In particular, in GroupScan or ScanBatchCreator, we will
replace a NULL or empty column list with star column. This essentially will force the execution
side (RecordReader) to fetch all the columns for data source. Such behavior will lead to big
performance overhead for the SCAN operator.
> To improve Drill's performance, we should change those places as well, as a follow-up
work after DRILL-4279.
> One simple example of this problem is:
> {code}
> SELECT DISTINCT substring(dir1, 5) from dfs.`/Path/To/ParquetTable`;
> {code}
> The query does not require any regular column from the parquet file. However, ParquetRowGroupScan
and ParquetScanBatchCreator will put star column as the column list. In case table has dozens
or hundreds of columns, this will make SCAN operator much more expensive than necessary.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332) | http://mail-archives.apache.org/mod_mbox/drill-issues/201602.mbox/%3CJIRA.12939550.1455641798000.87258.1455831918160@Atlassian.JIRA%3E | CC-MAIN-2018-30 | refinedweb | 267 | 50.33 |
Hi i have an array of structs which i am trying to sort. It contains various strings of words and i wan to put them in alpha order. My code compiles fine but it crashes when i call the qsort. Just wondering if anybody could point me to where i am going wrong
I am filling the array from a linked list. When i do a printf the strings in in the arrayI am filling the array from a linked list. When i do a printf the strings in in the arrayCode:/* Structure definitions. */ typedef struct wordStruct* WordTypePtr; typedef struct wordStruct { char* word; unsigned count; WordTypePtr nextWord; } WordType;
My qsortMy qsortCode:WordType *conductor; //array deceleration WordType wordArray[index->totalWords]; //further down wordArray[j].word=conductor->word;
index->totalWords is the count of how big the array is
comp functioncomp functionCode:qsort(*wordArray->word, index->totalWords, sizeof(*wordArray), comp);
Code:int comp(const void *s1, const void *s2) { return (strcmp(*(char **)s1, *(char **)s2)); } | https://cboard.cprogramming.com/c-programming/129248-qsort-array.html | CC-MAIN-2017-13 | refinedweb | 163 | 52.8 |
The class loader in Java is a powerful concept that helps to provide an extensible environment for running Java code with varying degrees of trust. Each piece of byte code in a running program is loaded into the Java virtual machine by a class loader, and Java can grant different security permissions to runtime objects based upon the class loaders used to load them.
Most of the time this mechanism is used implicitly by both the writer and user of a Java program and 'it just works'. However there is quite a lot happening behind the scenes; for example when you run a Java applet some of the classes are being loaded across the Internet while others are read from the local machine. The class loader does the work of getting the byte code from the target Web site and it also helps to enforce the so-called 'sandbox' security model.
Another place where class loaders are used is for Web services. Typically the main application classes are loaded from a WAR (Web ARchive) file but may make use of standard Java classes as well as other classes or JAR (Java ARchive) files that may be shared between multiple applications running inside a single server. In this case the principal reason for the extra class loaders is to ensure that each Web application remains as independent of the others as possible and in particular that there is no conflict should a class with the same name exist in two different WAR files. Java achieves this because each class loader defines a separate name space - two Java classes are the same only if they were loaded with the same class loader. As we shall see this can have some surprising results.
In the case of a browser or a Web server the framework usually provides all the various class loaders. However you can use additional class loaders, and it is surprisingly easy to do so. Java provides an abstract base class, java.lang.ClasssLoader, which all class loaders must extend. The normal model is that each class loader has a link to its 'parent' class loader and all requests for loading classes are first passed to the parent to see if they can be loaded, and only if this delegated load fails does the class loader try to satisfy the load. The class loaders for a Java program form a tree, with the 'bootstrap' class loader as the top node of the tree and this model ensures that standard Java classes, such as String, are found in the usual place and only the application's own classes are loaded with the user-supplied class handler. (Note that this is only a convention and not all class loaders follow the same pattern. In particular it is up to the implementer of a class loader to decide when and if to delegate load requests to its parent)
One important issue when creating a class loader is deciding which class loader to use as the parent. There are several possibilities:
No parent loader. In this case the loader will be responsible for loading all classes.
Use the system class loader. This is the commonest practice.
Use the class loader used to load the current class. This is how Java itself loads dependent classes.
Use a class loader for the current thread context.
Java provides a simple API for getting and setting the default class loader for the current thread context. This can be useful since Java does not provide any way to navigate from a parent class loader to its child class loader(s). I demonstrate setting the thread's default class loader in the example below.
Java provides a standard URLClassLoader that is ready to use, or you can implement your own class loader.
As an example of the first case, you might want to run a Java program on workstations in your organisation, but be able to hold all the Java code centrally on a Web server. Here is some example code that uses the standard java.net.URLClassLoader to instantiate an object from a class held, in this instance, on my own Web site:
/** *This is a trivial example of a class loader. *It loads an object from a class on my own *Web site. */ public class URLExample { private static final String defaultURL = ""; private static final String defaultClass = "articles.java.Welcome"; public static void main( String args[] ) throws Exception { final String targetURL = ( args.length < 1 ) ? defaultURL : args[0]; final String targetClass = ( args.length < 2 ) ? defaultClass : args[1]; // Step 1: create the URL class loader. System.out.println( "Creating class loader for: " + targetURL ); java.net.URL[] urls = { new java.net.URL ( targetURL ) }; ClassLoader newClassLoader = new java.net.URLClassLoader( urls ); Thread.currentThread() .setContextClassLoader ( newClassLoader ); // Step 2: load the class and create an instance of it. System.out.println( "Loading: " + targetClass ); Class urlClass = newClassLoader.loadClass ( targetClass ); Object obj = urlClass.newInstance(); System.out.println( "Object is: \"" + obj.toString() + "\"" ); // Step 3: check the URL of the loaded class. java.net.URL url = obj.getClass().getResource ( "Welcome.class" ); if ( url != null ) { System.out.println( "URL used: " + url.toExternalForm() ); } } }
When I compile and run this program it produces the folllowing output:
Creating class loader for: Loading: articles.java.Welcome Object is: "Welcome from Roger Orr's Web site" URL used:
The URLClassLoader class supplied with standard Java is doing all the hard work. Obviously there is more to write for a complete solution, for example a SecurityManager object may be required in order to provide control over the access rights of the loaded code.
The source code for the 'Welcome.class' looks like this:
package articles.java; public class Welcome { private WelcomeImpl impl = new WelcomeImpl(); public String toString() { return impl.toString(); } }
Notice that the class has a dependency upon WelcomeImpl - but we did not have to load it ourselves. The same class loader newClassLoader we use to load Welcome is used by the system to resolve references to dependent classes, and so the system automatically loaded WelcomeImpl from the Web site as it was not found locally. There is little code needed for this example and 'it just works' as expected.
Although undoubtedly useful the URLClassLoader does not provide everything and there will be cases where a new class loader must be written. This might be because you wish to provide a non-standard way of reading the bytes code or to give additional control over the security of the loaded classes. All you need to do is to override the findClass method in the new class loader to try and locate the byte code for the named class; the implementation of other methods in ClassLoader does not usually need overriding.
Here is a simple example of a class loader which looks for class files with the .clazz extension by providing a findClass method. This automatically produces a class loader that implements the delegation pattern - the new class loader is only used when the parent class loader is not able to find the class. At this point the findClass method shown below is invoked and the myDataLoad method tries to obtain the class data from a .clazz file. Although only an example it does illustrate the principles of writing a simple class loader of your own.
import java.io.*; public class MyClassLoader extends ClassLoader { public MyClassLoader( ClassLoader parent ) { super( parent ); } protected Class findClass(String name) throws ClassNotFoundException { try { byte[] classData = myDataLoad( name ); return defineClass( name, classData, 0, classData.length ); } catch ( Exception ex ) { throw new ClassNotFoundException(); } } // Example: look for byte code in files with .clazz extension private byte[] myDataLoad ( String name ) throws Exception { ByteArrayOutputStream bos = new ByteArrayOutputStream(); InputStream is = getClass().getResourceAsStream ( name + ".clazz" ); if ( is != null ) { int nextByte; while ( ( nextByte = is.read() ) != -1 ) { bos.write( (byte) nextByte ); } } return bos.toByteArray(); } }
We might want to get the Java runtime to install the class loader when the application starts. This can be done by defining the java.system.class.loader property - for this JVM instance - as the class name of our class loader. An object of this class will be constructed at startup, with the 'parent' class loader being the default system class loader. The supplied class loader is then used as the system class loader for the duration of the application.
For example:
C:>javac Hello.java C:>rename Hello.class Hello.clazz C:>java Hello Exception in thread "main" java.lang.NoClass DefFoundError: Hello C:>java -Djava.system.class.loader=MyClass LoaderHello Hello World
In practice, for both applets and Web servers, everything does not always work without problem. Unfortunately from time to time there are interactions between the various class loaders and, in my experience, these are typically rather hard to track down.The sort of problems I have had include:
strange runtime errors caused by different versions of the same class file(s) in different places in the CLASSPATH.
problems with log4j generating NoClassDefFound or ClassCastException errors.
difficulties registering protocol handlers inside a WAR file.
My experience is that resolving these sort of problems is made more difficult by the lack of easy ways to see which class loader was used to load each class in the system. For any given object it is quite easy to track down the class loader - the getClass() method returns the correct 'Class' object and calling the getClassLoader() method then returns the actual class loader used to instantiate this class. The class loader can be null - for classes loaded by the JVM 'bootstrap' class loader.
Since Java treats any classes loaded by different class loaders as different classes it can be critical to find out the exact class loaders involved. However I do not know of a way to list all classes and their loaders. The Java debugger 'JDB' has a 'classes' command but this simply lists all the classes without, as far as I know, any way to break them down by class loader.
I wanted to find a way to list loaded classes and their corresponding class loader so I could try and identify the root cause of this sort of problem. One way is to extract the source for ClassLoader.java, make changes to it to provide additional logging and to place the modified class file in the bootstrap class path before the real ClassLoader. This is a technique giving maximum control, but has a couple of downsides. Firstly you need access to the boot class path - this may not always be easy to achieve. Secondly you must ensure the code modified matches the exact version of the JVM being run. After some experimentation, I decided to use reflection on ClassLoader itself to provide me pretty well what I wanted.
Reflection allows a program to query, at run time, the fields and methods of objects and classes in the system. This feature, by no means unique to Java, provides some techniques of particular use for testing and debugging. For example, a test harness such as JUnit can query at run time the methods and arguments of public methods of a target object and then call all methods matching a particular signature. This sort of programming is very flexible, and automatically tracks changes made to the target class as long as they comply with the appropriate conventions for the test harness. However the downside of late binding like this is that errors such as argument type mismatch are no longer caught by the compiler but only at runtime.
There are two main types of reflection supported for a class; the first type provides access to all the public methods and fields for the class and its superclasses, and this is the commonest use of reflection. However there is a second type of reflection giving access to all the declared methods and fields on a class (not including inherited names). This sort of reflection can be used, subject to the security manager granting permission, to provide read (and write) access even to private members of another object.
I noticed that each ClassLoader contains a 'classes' Vector that is updated by the JVM for each class loaded by this class loader.
[Code from ClassLoader.java in 'java.lang'] // Invoked by the VM to record every loaded class with this loader. void addClass(Class c) { classes.addElement(c); }
I use reflection to obtain the original vector for each traced class loader and replace it with a proxy object that logs each addition using addElement. The steps are simple, although a lot of work is going on under the covers in the JVM to support this functionality. The class for the ClassLoader itself is queried with the getDeclaredField to obtain a 'Field' object for the (private) member 'classes'. This object is then marked as accessible (since by default private fields are not accessible) and finally the field contents are read and written.
The complete code looks something like this:
// Add a hook to a class loader (using reflection) private void hookClassLoader( final ClassLoader currLoader ) { try { java.lang.reflect.Field field = ClassLoader.class.getDeclaredField ( "classes" ); field.setAccessible( true ); final java.util.Vector currClasses = (java.util.Vector)field.get ( currLoader ); field.set( currLoader, new java.util.Vector() { public void addElement( Object o ) { showClass( (Class)o ); currClasses.addElement(o); } }); } catch ( java.lang.Exception ex ) { streamer.println( "Can't hook " + currLoader + ": " + ex ); } }
The end result of running this code against a class loader is that every time the JVM marks the class loader as having loaded a class the showClass method will be called. In this method we can take any action we choose based on the newly loaded class. This could be to simply log the class and its loader, or something more advanced.
When I first used reflection to modify the behaviour of a class in Java like this I was a little surprised - I've done similar tricks in C++ but it involves self-modifying code and assembly instructions.
There are several problems with this approach.
First of all, it requires sufficient security permissions to be able to access the private member of ClassLoader. This is not usually a problem for stand-alone applications but will cause difficulty for applets since the container by default installs a security manager that prevents applet code from having access to the ClassLoader fields.
Secondly, the code is not future proof since it relies upon the behaviour of a private member variable. This does not worry me greatly in this code as it is solely designed to assist in debugging a problem and is not intended to be part of a released program, but some care does need to be taken. What I have done by replacing private member data with a proxy breaks encapsulation.
Thirdly, the technique is not generally applicable since there must be a suitable member variable in the target class - in this case I was able to override Vector.addElement().
Fourthly, the code needs calling for each class loader in the system - but there is no standard way for us to locate them all!
Fifthly, the bootstrap class loader is not included in this code since it is part of the JVM and does not have a corresponding ClassLoader object.
It is possible to partly work around the fourth and fifth problems by registering our own class loader at the head of the chain of class loaders. Remember that each class loader in the system (apart from the JVM's own class loader) has a 'parent' class loader. I use reflection to insert my own class loader as the topmost parent for all class loaders.
Once again I achieve my end by modifying a private member variable of the classloader - this time the 'parent' field.
/** * This method injects a ClassLoadTracer object into the current class loader chain. * @param parent the current active class loader * @return the new (or existing) tracer object */ public static synchronized ClassLoadTracer inject( ClassLoader parent ) { // get the current topmost class loader. ClassLoader root = parent; while ( root.getParent() != null ) root = root.getParent(); if ( root instanceof ClassLoadTracer ) return (ClassLoadTracer)root; ClassLoadTracer newRoot = new ClassLoadTracer( parent ); // reflect on the topmost classloader to install the ClassLoadTracer ... try { // we want root->parent = newRoot; java.lang.reflect.Field field = ClassLoader.class.getDeclaredField( "parent" ); field.setAccessible( true ); field.set( root, newRoot ); } catch ( Exception ex ) { ex.printStackTrace(); System.out.println( "Could not install ClassLoadTracer: " + ex ); } return newRoot; }
The end result of calling the above method against an existing class loader is that the top-most parent becomes an instance of my own ClassLoadTracer class. This class, yet another extension of ClassLoader, overrides the loadClass method to log successful calls to the bootstrap class loader (thus solving the fifth problem listed above). It also keeps track of the current thread context class loader to detect any class loaders added to the system and thus helps to resolve the fourth problem.
Note however that this is only a partial solution since there is no requirement that class loaders will follow the delegation technique and so it is possible that my ClassLoadTracer will never be invoked. However, for the cases I have used it the mechanism seems to work well enough for me to get a log of the classes being loaded by the various class loaders.
Class loaders are powerful, but there does not seem to be enough debugging information supplied as standard to resolve problems when the mechanism breaks down. I have shown a couple of uses of reflection to enable additional tracing to be provided where such problems exist. The techniques shown are of wider use too, enabling some quite flexible debugging techniques that add and remove probes from target objects in the application at runtime.
All the source code for this article is available at:
Thanks are due to Alan Griffiths, Richard Blundell and Phil Bass who reviewed drafts of this article and suggested a number of improvements. | http://accu.org/index.php/journals/285 | CC-MAIN-2015-11 | refinedweb | 2,960 | 53.71 |
What's the best mock framework for Java?
I am the creator of PowerMock so obviously I must recommend that! :-)
PowerMock extends both EasyMock and Mockito with the ability to mock static methods, final and even private methods. The EasyMock support is complete, but the Mockito plugin needs some more work. We are planning to add JMock support as well.
PowerMock is not intended to replace other frameworks, rather it can be used in the tricky situations when other frameworks does't allow mocking. PowerMock also contains other useful features such as suppressing static initializers and constructors.
What's the best framework for creating mock objects in Java? Why? What are the pros and cons of each framework?
I've been having success with JMockit.
It's pretty new, and so it's a bit raw and under-documented. It uses ASM to dynamically redefine the class bytecode, so it can mock out all methods including static, private, constructors, and static initializers. For example:
import mockit.Mockit; ... Mockit.redefineMethods(MyClassWithStaticInit.class, MyReplacementClass.class); ... class MyReplacementClass { public void $init() {...} // replace default constructor public static void $clinit{...} // replace static initializer public static void myStatic{...} // replace static method // etc... }
It has an Expectations interface allowing record/playback scenarios as well:
import mockit.Expectations; import org.testng.annotations.Test; public class ExpecationsTest { private MyClass obj; @Test public void testFoo() { new Expectations(true) { MyClass c; { obj = c; invokeReturning(c.getFoo("foo", false), "bas"); } }; assert "bas".equals(obj.getFoo("foo", false)); Expectations.assertSatisfied(); } public static class MyClass { public String getFoo(String str, boolean bool) { if (bool) { return "foo"; } else { return "bar"; } } } }
The downside is that it requires Java 5/6.
Yes, Mockito is a great framework. I use it together with hamcrest and Google guice to setup my tests.
I used JMock early. I've tried Mockito at my last project and liked it. More concise, more cleaner. PowerMock covers all needs which are absent in Mockito, such as mocking a static code, mocking an instance creation, mocking final classes and methods. So I have all I need to perform my work.
Mockito also provides the option of stubbing methods, matching arguments (like anyInt() and anyString()), verifying the number of invocations (times(3), atLeastOnce(), never()), and more.
I've also found that Mockito is simple and clean.
One thing I don't like about Mockito is that you can't stub static methods.
I started using mocks through JMock, but eventually transitioned to use EasyMock. EasyMock was just that, --easier-- and provided a syntax that felt more natural. I haven't switched since.
I started using mocks with EasyMock. Easy enough to understand, but the replay step was kinda annoying. Mockito removes this, also has a cleaner syntax as it looks like readability was one of its primary goals. I cannot stress enough how important this is, since most of developers will spend their time reading and maintaining existing code, not creating it.
Another nice thing is that interfaces and implementation classes are handled in the same way, unlike in EasyMock where still you need to remember (and check) to use an EasyMock Class Extension.
I've taken a quick look at JMockit recently, and while the laundry list of features is pretty comprehensive, I think the price of this is legibility of resulting code, and having to write more.
For me, Mockito hits the sweet spot, being easy to write and read, and dealing with majority of the situations most code will require. Using Mockito with PowerMock would be my choice.
One thing to consider is that the tool you would choose if you were developing by yourself, or in a small tight-knit team, might not be the best to get for a large company with developers of varying skill levels. Readability, ease of use and simplicity would need more consideration in the latter case. No sense in getting the ultimate mocking framework if a lot of people end up not using it or not maintaining the tests. | http://code.i-harness.com/en/q/58a9 | CC-MAIN-2018-43 | refinedweb | 664 | 57.87 |
[
]
Oliver Deakin commented on HARMONY-6290:
----------------------------------------
Hi Jesse/Nathan,
Running the following test:
import java.io.*;
class FileTest {
public static void main(String[] args) throws Exception {
BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream("test.txt"),
"IBM-1047"));
System.out.println(br.readLine());
}
}
against an EBCDIC test.txt file containing:
Hello<NEL>
World<NEL>
<EOF>
on Windows produces the same output from Harmony, IBM and the RI:
Hello
In other words, even when we are not on zOS platforms the RI treats the NEL character as a
newline character when reading a file in EBCDIC. If we remove the encoding specified to InputStreamReader,
so we revert to reading the file in the native encoding for the Windows platform, then again
all 3 jdks have matching behaviour. When we read the EBCDIC file in the native Windows encoding
the NEL hex value (0x15) is not mapped to the unicode NEL character (\u0085) and is just treated
as a normal character.
So it appears that our code currently behaves the same as the RI, even if the spec does not
mention this special case for the EBCDIC character set. I'm not sure if there is any way for
a character to get mapped to the NEL unicode character (\u0085) when we are not working in
EBCDIC, so it may be the case that the code we have right now has the correct logic.
We could err on the safe side and add an encoding check (check we are reading in EBCDIC and
then check if the character is \u0085) but since we already seem to match the RI behaviour
I'm not sure if that is necessary. What are your thoughts?
Regards,
Oliver
> BufferedReader.readLine() breaks at EBCDIC newline, violating the spec
> ----------------------------------------------------------------------
>
> Key: HARMONY-6290
> URL:
> Project: Harmony
> Issue Type: Bug
> Components: Classlib
> Environment: SVN Revision: 800827
> Reporter: Jesse Wilson
>. | http://mail-archives.apache.org/mod_mbox/harmony-commits/200908.mbox/%3C1566835975.1249468394797.JavaMail.jira@brutus%3E | CC-MAIN-2014-23 | refinedweb | 309 | 57.4 |
Entry 3: My Remembery Is Faulty (or: What's That Address Again?)01/09/2017 at 20:05 • 2 comments, clear the carry, and return. Clearly this is a utility function to get a keypress and return (via the Carry flag) whether or not something was read.
What calls GETKEY? I want to find the thing that responds when you press '7'. Should be easy enough.
Except that it isn't. The code is sprawling; it's clearly compiled (not hand-coded) because there are pieces of code that look like they're doing parts of the same thing, in very different parts of memory. I'd only found a couple places that call GETKEY; from there, I suspect everything is picking up the key code from $18? Hmm. Well, I find code snippits like this that are looking for specific keys:
0x12F4 L12F4 JSR GETKEY ; wait until we get a keypress; S/R go to 1303, others return 0x12F7 BCS L12F4 0x12F9 CMP #$D3 ; 'S' 0x12FB BEQ L1303 0x12FD CMP #$D2 ; 'R' 0x12FF BEQ L1303 0x1301 L1301 CLC 0x1302 RTS 0x1303 L1303 STA $0 ; keypress was 'S' or 'R' from L12F4... only no part of the game I know of uses S or R. But there are pieces like this:
; This is reached normally for every tick 0x18A5 L18A5 LDA KBD ; read from keyboard 0x18A8 BMI L18EF ; this may be "if key not pressed"? 0x18AA LDY $5B0D 0x18AD CPY #$7 0x18AF BCC L18B3 ; branch if screen < 7 0x18B1 LDY #$7 0x18B3 L18B3 LDA L5B04 ; wait out the rest of the cycle? 0x18B6 BEQ L18BB 0x18B8 JSR $FCA8 ; F8ROM:WAIT 0x18BB L18BB LDA $5B0D 0x18BE CMP #$0 ; on screen #0? 0x18C0 BEQ L18C6 ; yes: go to L18C6 0x18C2 CMP #$2 ; on screen #2? 0x18C4 BNE L18EA ; no: go to 18ea 0x18C6 L18C6 LDA #$12 ; this is screen 2 work... 0x18C8 JSR L5511 0x18CB TAX 0x18CC INX... which are identifiably "if you hit key 0-7, then we do something". I dug through a lot of code like this, which is repeated various places, that is so tangled up that I'd need a lot of tea to decipher its intent.
I spent a lot of time changing pieces of the program to BRK statements (0x00) so that I could tell whether or not that bit of code was being executed, and when. None of them were executed when I pressed '7'. Here's another one:
0x1039 L1039 CMP #$B0 ; 0? 0x103B BCC L1003 ; branch if A <'0' 0x103D CMP #$B8 ; 8? 0x103F BCS L1003 ; branch if A >= '8' 0x1041 PHA ; start of handling keys 0-7 0x1042 SEC 0x1043 SBC #$B0 0x1045 STA $5B0D ; 5b0d gets the current screen we're on 0x1048 PLA 0x1049 JSR L564F 0x104C BCS L10B5 0x104E BPL L1001 ; WTF? can't be right. 0x1050 DEC $10 0x1052 ??? 0x1053 ROL $11,X 0x1055 ??? 0x1056 SEI 0x1057 ORA ($B4),Y 0x1059 DEC $11,X 0x105B LDA CV,X 0x105D ??? 0x105E LDX L124F 0x1060 ??? 0x1061 ??? 0x1062 ??? 0x1063 ???This is just after I've identified that $5B0D is what I call CURSCR. When the screen changes, a number from 0 to 7 (presumably) is put in $5B0D. When the game starts, it puts $FF in there (presumably meaning "on the menu"). But hours of poring through code paths that mention $5B0D, and I'm no closer. So I zeroed in on something odd in this particular listing.
It's those question marks.
Not all 256 values are opcodes. Those ??? values are things that don't disassemble properly. And that instruction at 0x104E -- "Branch on Plus to L1001" makes no sense! Back in the first listing, you'll see that there's an instruction at 0x1000 and then one at 0x1003, because the first instruction is three bytes long. How can we be jumping back to 0x1001? Is this really a program that's sophisticated enough to be able to take advantage of gadgets embedded at offset code locations?
I doubt it. More likely, I'm looking at something that's not code.
The Apple ][, by virtue of that "if the high bit is set, then there's a key that's been pressed" feature, winds up setting the high bit on ASCII characters. So you can't see things like text from a straight hex dump. But if you XOR the whole thing by 0x80, then you might see something like...
Now we're talking. I'll just update the disassembler to dump those as strings rather than trying to disassemble them, and then...
0x1271 L1271 CLC 0x1272 RTS 0x1273 JSR F8ROM:INIT 0x1276 JSR F8ROM:HOME 0x1279 JSR $A7AD 0x127C LDA #$0 0x127E STA $A9A1 0x1281 JSR L5479 0x12B9 STR '\cM SAVE / RESET STATE\cM\cM\cM\cMCATALOG (Y OR N) _\0' 0x12B9 BIT KBDSTRB 0x12BC L12BC JSR GETKEY 0x12BF BCS L12BC 0x12C1 CMP #$D9 0x12C3 BNE L12D3 0x12C5 JSR L5479 0x12D3 L12D3 STR '\cM\cDCATALOG\cM\0' 0x12D3 L12D3 JSR L5479 0x12EF STR '\cM\cMSAVE OR RESET (S OR R) '... setting aside some of the minor formatting bugs in the disassembler: this is looking really interesting. Not only do I now see something that's talking about 'S' and 'R', but I also see a clear entry point to the function that is definitely responsible for saving and restoring! Since I never see these messages, I'm assuming that the magic memory regions $A7AD and $A9A1 are implicated in the crash.
Region $A000 is DOS. So this makes sense - we're about to try to invoke some DOS actions. The string at 0x12D3 is chr(4)CATALOG -- where control-D (character 4) is a magic way to tell DOS that you want it to execute a command. And CATALOG is the DOS way to 'ls' or 'dir'. Yes yes yes. This is it, folks. If I were playing hot/cold, we'd be boiling right about now.
The only trouble is that I have no idea what $A7AD is supposed to do. It's not familiar to me at all. None of my reference books (now PDFs; I've long since gotten rid of the paper copies, so thank you whomever scanned these things and put them on the Internet!) mention that vector. So I'm stumped until some Google-Fu lands me here, back in 1980, reading Micro, the 6502 Journal.
As was the fashion at the time, clever people were always coming up with little programs to do this-or-that better than before. They'd publish the listings in publications like BYTE Magazine. Or, apparently, Micro. And on page 9 is "A Little Plus for your Apple II", by Craig Peterson. Thank you Craig. And thank you Micro, for publishing him. Because right there, on the right side of the page, Craig says:
Also, this example is setup for use with 3.2 DOS on a 48K system. If you have 3.1 DOS and 48K memory, use DOS addresses $A7AD and $A99E in place of $A851 and $AA5B in lines 200, 210, 400, 640, and 690.
That's the Rosetta Stone. $A851 I know! That's the vector to re-initialize the DOS hooks so that you can make DOS calls from inside a binary program. Wow, that tells me a lot. Combined with the history I know about the TMI game (that I've read online), I can now reconstruct how this image came to be, and I think I know how to fix it.
TMI was purportedly originally an Integer BASIC game. I never saw it in that form; it was rewritten in machine language in 1980 - compiled, as far as I can tell, by a fairly inefficient compiler that stomps all over zero page and redoes work and whatnot - and must have been originally distributed, in that form, for Apple DOS 3.1.
It had to be for DOS 3.1. That's the DOS 3.1 entry point for the thing I know of in DOS 3.3, that my new bud Craig says was in DOS 3.2.
But this copy is on a DOS 3.3 diskette. How did that happen? And can we just put it back on a DOS 3.1 disk and run it? Well, no, we can't just put it back.
DOS 3.1 and 3.2 used 13 sectors per track on the disk. When Apple were working on 3.3, they found a sneaky way to increase that to 16 sectors per track, which gave them more storage on the media. But it's not backwards-compatible. People had to lug their Apple ][ back to the dealer for an EPROM replacement on the disk drive controllers in order to use DOS 3.3. Which means that my emulators would need a copy of the 13-sector ROM for DOS 3.1 in order to use it. Which I don't have, and have little interest in looking for; I'd have trouble figuring out how to transfer the binary back over to that disk, and then I'd still have some strange nonstandard emulator configuration that I couldn't run easily. Nor could I fix this problem for the world without crazy instructions about replacing ROMs (because, y'know, the world obviously wants Three Mile Island fixed so that they can all play it without it crashing - so that they can melt down a nuclear reactor in peace and all).
My guess is that all of the copies of this disk out on the Internet came from the same person. I'm guessing that someone used MUFFIN or some other Apple sector converter tool (hey, that's how I could get the binary on a 13-sector disk! Still doesn't fix the "need a special ROM to use it" problem, but anyway) - ahem, or some other Apple sector converter tool to copy it on to a 16-sector DOS 3.3 disk. And that one plays the game. Mostly. With a little crashy DOS 3.1 incompatibility left behind.
I doubt this is a copy from the manufacturer, though. I remember playing this game in middle school, and I don't recall it crashing. And I know we were running DOS 3.3, which means 16-sector disks, which means there was a DOS 3.3 compatible version of TMI. It had a well-printed label, as I recall. I'm reasonably sure it was an authentic original. So the manufacturer *had* a copy that was 3.3-clean; not this abomination of a 3.1 copy on a 3.3 disk.
But then again, we don't need to put it back on a 3.1 disk to make it work again.
All we have to do is change which DOS entry point it's using. It's just this one place in the game, these two little bytes (little-endian order), which need to read:
Save. Reboot to disk copy. Copy from ProDOS-mounted Mac volume on to virtual disk. Swap disks, reboot. Aaaannd...
HELLZ YEAH! No crashie. Time to stick my name in here for the old-timey "I done cracked a warez" feeling, but better, since I've just fixed a bug:
That's one 37-ish-year-old bug squashed.
If you want to grab a copy of the disk image for yourself, here's a copy on my webserver or you can now get a copy from the Asimov archive. Or you can grab the original from just about any Apple ][ disk archive and edit the binary yourself. You're welcome, world.
And now if you'll forgive me, I have a nuclear reactor that I have to go melt down.
Entry 2: The Problem01/09/2017 at 18:39 • 0 comments.
Entry 1: History01/09/2017 at 18:12 • 0 comments pretty much nonstop for the next 5 years. (Including - to much of my family's chagrin I'm sure - on vacations.)
In high school, I wound up working for Babbage's Software. Software sales were also nascent. People didn't know what they were buying or why. Software sales folks didn't know what to tell their customers about what to buy. But there were a large (and increasing) number of software titles, many competing in the same space; Babbage's was a pre-Internet way to put shrink-wrapped boxes of software in front of curious eyes, free from the car-dealership feel of computer hardware sellers. And one of the ways that they managed to keep the staff informed was to let the staff take software home and test it out.
I amassed quite a pile of software for my Apple //e. There are some utilities that I remember copying from my middle school. Some that I got from a friend with an Apple //c in early high school. And many, many that I bought while I worked at Babbage's. (I'm not sure I actually made any money there; I think I just funneled it all back in to software!)
So that I can wrap this up and get on with my day, I'll fast forward over the rest of the fun times. I wound up at Drexel University in 1990, where I worked with my roommate's Mac SE (we couldn't afford to buy me a new Mac of my own). The utilities I wanted now existed, and I was wholly ready to move on. My Apple //e went in to a box until about 1994, when a friend - a writer - was looking to buy a word processor. I spent a weekend dumping disk images out a serial port at a friend's house (he had the serial card), and then my Apple //e became a full-time word processor.
Most of the Apple ][ disks I had couldn't be imaged (copy protection was rampant at the time, and those diskettes were getting long in the tooth anyway). The remaining disk images went in to a CVS archive, later converted to git, and eventually supplemented with images found on the 'Net of software that I hadn't been able to copy (but someone had cracked, copied, and uploaded now that they're all abandonware).
Occasionally I'll pull out an Apple ][ emulator and play a game I remember. And one that I remember vividly from my early middle school years is Three Mile Island - a nuclear reactor simulator.
I never owned a copy of this disk, and I must have played it in 6th and 7th grades. I found the image online (probably in the early 2000s) and spent some time playing it. The only problem with it is that it crashes if you press '7'. Which is problematic, because the number keys switch between screens in the game. '7' is supposed to let you save and restore. Instead... instant boom. Bummer for a game that takes hours to play.
Now here I am, in 2016 - and I've got a hankering to play again. Only this time, I don't want to play TMI; I want to fix it. | https://hackaday.io/project/19352/logs | CC-MAIN-2018-17 | refinedweb | 2,517 | 81.73 |
Use the
DELETE /files{/path/to/a/file} transaction to delete a file or a folder from the user's storage area.
A successful transaction return object contains the status flag and a message, as shown below.
{ "ok" : true, "status": "test/today/test123.txt has been deleted" }
If the target does not exist or cannot be deleted, the transaction returns an object containing a
false flag and a message, as shown below.
{ "ok" : false, "status": "test/today/test123.txt is not found or cannot be deleted" }
Send the
/user command using curl:
curl -b cookies -X DELETE
On success, the contents of the file are returned. If the existing JWT is invalid, an HTTP 401 Unauthorized code will be returned.
Make sure Requests is correctly installed in your Python environment, and run the following lines:
import requests # Existing cookies are expected to be stored in the variable 'cookies' r = requests.delete('', cookies=cookies) # Show the returned object r.json() | http://www.jeplus.org/wiki/doku.php?id=docs:users:del_files | CC-MAIN-2021-39 | refinedweb | 159 | 63.59 |
Lazy loading of tooltip text when user onhover the status column
RESOLVED FIXED in Firefox 58
Status
P3
normal
People
(Reporter: gasolin, Assigned: abhinav.koppula, Mentored)
Tracking
({good-first-bug})
Firefox Tracking Flags
(firefox57 fix-optional, firefox58 fixed)
Details
(Whiteboard: [good first bug][mentor-lang=zh][lang=js])
Attachments
(1 attachment)
As Bug #1406312 we found Waterfall's timingBoxes is slow because of its l10n calls. We could make tooltip computation lazily by computed only when user onhover in the column. That will make big performance difference when we have lots of requests in netmonitor.
Whiteboard: [good first bug][mentor-lang=zh][lang=js]
Hi Fred, Can I work on this one?
Flags: needinfo?(gasolin)
I would love to work on this as I'm new to contributing to Mozilla and this seems like a good way to start! Please let me know if I can work on this :)
Hi Abhinav, thanks for provide the patch! Please take a look on test result Some tests related to tooltip are broken because we now show tooltip on mouse hover. You can reference how we test image tooltip and use similar way to mimic mouse hover event to test tooltips nikshepsvn, thanks for interesting on contribute to devtools, please glance on and you could find other available bugs.
Assignee: nobody → abhinav.koppula
Flags: needinfo?(gasolin)
Hi Fred, I'm a bit stuck at this one. This is what I am trying. let target = document.querySelectorAll(".requests-list-column.requests-list-status")[1]; let win = target.ownerDocument.defaultView; EventUtils.synthesizeMouseAtCenter(target, { type: "mousemove" }, win); yield waitUntil(() => target.title != ""); I add this above this line - So I am trying to hover and make sure title attribute is populated for all the `status` columns before verification. I see the hover happening correctly but it keeps waiting and the condition of target.title != "" never gets satisfied. Interestingly, when I put debug points, I am able to see that the control reaches my method of getColumnTitle but still even after the method execution, title attribute isn't set. Can you help me understand where I am going wrong?
Flags: needinfo?(gasolin)
FYR here's how I passed the test by add the mouse hover simulation ``` let request = document.querySelectorAll(".request-list-item")[0]; let requestsListStatus = request.querySelector(".requests-list-status"); EventUtils.synthesizeMouse(requestsListStatus, 0, 0, { type: "mousemove" }, monitor.panelWin); yield waitUntil(() => document.querySelector(".requests-list-column.requests-list-status[title]")); ```
Flags: needinfo?(gasolin)
Thanks Fred. I have updated my PR with many changes. However, I am still facing a weird issue. So I have updated verifyRequestItemTarget to first do a mouseHover and then proceed. Interestingly, when I run browser_net_brotli.js in isolation, all tests pass but when I run the full suite, I still see 21 failures. Moreover, when I run browser_net_brotli.js with --verify option, some tests fail. So I am guessing this patch still needs some rework. Can you help me in understanding why test fails with --verify option?
passing `monitor` param seems eats the benifit of doing mouseHover in verifyRequestItemTarget. I suggest to fix only ~5 test files by adding mouseHover before verifyRequestItemTarget instead
Hi Fred, As per your suggestion, I have fixed around 5 tests by adding mouseHover before verifyRequestItemTarget. I ran `./mach test <test_name>` for each test I fixed and saw that there are no failures now. Can you take a quick look and tell me if this is the way forward?
Comment on attachment 8917512 [details] Bug 1407561 - Lazy loading of tooltip text when user hovers the status column. Looks good, just some small test changes needed before landing. I triggered auto test and we can see if anything left. ::: devtools/client/netmonitor/test/browser_net_brotli.js:39 (Diff revision 3) > + let request = document.querySelector(".request-list-item"); > + let requestsListStatus = request.querySelector(".requests-list-status"); > + EventUtils.synthesizeMouse(requestsListStatus, 0, 0, { type: "mousemove" }, > + monitor.panelWin); > + yield waitUntil(() => > + document.querySelector(".requests-list-column.requests-list-status[title]")); or we can use more specific scope like `requestsListStatus.title` or `request.querySelector(.requests-list-status[title])` if above not work. Same for the rest test cases ::: devtools/client/netmonitor/test/browser_net_filter-01.js:166 (Diff revision 3) > is(!!document.querySelector(".network-details-panel"), true, > "The network details panel should render correctly."); > > // First test with single filters... > testFilterButtons(monitor, "all"); > + yield hoverOverRequests(); can we move `hoverOverRequests` into `testContents` if all testContents need to be done after `hoverOverRequests`? You can define `testContents` as generator via `function* testContents` and do `yield testContents`
Hi Fred, Thanks for the review. Please let me know if any other tests need fixing.
Flags: needinfo?(gasolin)
Hi Abhinav, recent update rename all react component file name so `request-list-column-status` now become `RequestListColumnStatus`. Could you help update accordingly?
Flags: needinfo?(gasolin) → needinfo?(abhinav.koppula)
Hi Fred, Sure, I will rename the file as well while I am addressing the review-comments. Is there any other test that we need to fix apart from the ones I have already fixed in the last patch?
Flags: needinfo?(abhinav.koppula) → needinfo?(gasolin)
I can't apply the patch yet since the file name is changed, I'll test again when the new PR is available. Thanks.
Netmonitor is now changed to ES6 class, so you might need to rebase again
Flags: needinfo?(gasolin)
Hi Fred, I have rebased with the latest code. Can you please check once from your side? Thanks Abhinav
Comment on attachment 8917512 [details] Bug 1407561 - Lazy loading of tooltip text when user hovers the status column. Thanks for update! Local test passed and only 1 thing need to be addressed before landing. ::: devtools/client/netmonitor/src/components/RequestListColumnStatus.js:63 (Diff revision 4) > + div({ className: "requests-list-status-icon", "data-code": code }), > + div({ className: "requests-list-status-code" }, status) > + ) > + ); > > - if (statusText) { > + function getColumnTitle() { we can move getColumnTitle out of render function and pass the item param like Therefore the function can be offloaded from the render function ::: devtools/client/netmonitor/test/browser_net_brotli.js:34 (Diff revision 4) > yield ContentTask.spawn(tab.linkedBrowser, {}, function* () { > content.wrappedJSObject.performRequests(); > }); > yield wait; > > + let request = document.querySelector(".request-list-item"); nit: it will be nice to keep consistency name like `requestItem`across tests, but its fine to not change it.
based on test result, we need to fix 3 more tests: * browser_net_cached-status.js * browser_net_filter-02.js * browser_net_filter-flags.js then we will good to go
Hi Fred, `browser_net_cached-status.js` was already fixed in the previous review request. However, this doesn't pass on TRY. Is it an intermittent issue or something genuine? In the latest review request, I have addressed the review comment, nit and have fixed browser_net_filter-02.js & browser_net_filter-flags.js.
Flags: needinfo?(gasolin)
With new result I saw more unexpected test cases failed Could you help run `./mach test devtools/client/netmonitor/` locally and fix the rest test issues? The good news is the patch did show some perf gain (2~4% for simple netmonitor actions)
Flags: needinfo?(gasolin) → needinfo?(abhinav.koppula)
Hi Fred, I think I have understood the issue. There were 2 issues I feel: 1. This was kind of an implementation bug - Suppose I load some website and then even before the status code appears in the request item, I do a hover over the status. This results in "undefined" being shown as tooltip. Now after the status code is shown in the request item, if I hover again, then we can see the correct status displayed in the tooltip. To fix this, I have added a check of status & statusCode in the onMouseOver function. 2. I feel EventUtils.synthesizeMouse was causing some issue and maybe that's why some tests like "browser_net_cached-status.js" pass when run in isolation but fail when the whole suite is run. I have changed the implementation to use EventUtils.sendMouseEvent which passes all the tests now. On my local, with the new patch, all the tests are passing. Can we push to TRY once?
Flags: needinfo?(abhinav.koppula) → needinfo?(gasolin)
Comment on attachment 8917512 [details] Bug 1407561 - Lazy loading of tooltip text when user hovers the status column. Thanks for figure out and fix the test issue, looks good to me! Only a nit need to be addressed before landing. ::: devtools/client/netmonitor/src/components/RequestListColumnStatus.js:55 (Diff revision 6) > - if (statusText) { > + return ( > + div({ > + className: "requests-list-column requests-list-status", > + onMouseOver: function ({ target }) { > + if (status && statusText) { > + if (!target.title) { conditions can be in the same line, like `if (status && statusText && !target.title)` ::: devtools/client/netmonitor/src/components/RequestListColumnStatus.js:71 (Diff revision 6) > +} > + > +function getColumnTitle(item) { > + let { fromCache, fromServiceWorker, status, statusText } = item; > + let title; > + if (status && statusText) { don't need double check here
Thanks Fred, I have fixed the same.
Comment on attachment 8917512 [details] Bug 1407561 - Lazy loading of tooltip text when user hovers the status column. Looks good, let's wait the test result and land it! Thanks for the great contribution!
Attachment #8917512 - Flags: review?(gasolin) → review+
Pushed by flin@mozilla.com: Lazy loading of tooltip text when user hovers the status column. r=gasolin
Backed out for failing /browser_net_filter-02.js Failure log: Backout:
Flags: needinfo?(abhinav.koppula)
Hi Andreaa, Fred, I'm sorry but I think this failed because I missed changing `EventUtils.synthesizeMouse` to `EventUtils.sendMouseEvent` for 2 tests - filter-02.js and filter-flags.js and both of them failed on autoland. I have fixed both of these tests to use `EventUtils.sendMouseEvent`. Fred, the above change should fix the issue, right?
Flags: needinfo?(abhinav.koppula)
Previous try test are all green before land Though synthesizeMouse mimic more like actual user's behavior, it seems not stable enough for the hover event I think its right to use sendMouseEvent instead. push another try and wait for the test result. Thanks Abhinav for quickly address the test issue!
Flags: needinfo?(gasolin)
Pushed by flin@mozilla.com: Lazy loading of tooltip text when user hovers the status column. r=gasolin
Status: NEW → RESOLVED
Closed: 2 years ago
status-firefox58: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → Firefox 58
Product: Firefox → DevTools | https://bugzilla.mozilla.org/show_bug.cgi?id=1407561 | CC-MAIN-2019-30 | refinedweb | 1,674 | 50.94 |
To read all comments associated with this story, please click here.:
2005-10-23
It seems nobody has pointed out the obvious that a line 'using Some.Namespace' has nothing to do with whether or not functionality from that namespace is used or not.
Ok, 'nothing' is a bit harsh, but consider:
1. 'using Foo.Bar' is only a shorthand to not have to type the full qualifyer each time (eg 'new Foo.Bar.Baz()' vs. just 'new Baz()')
2. IDE's tend to give you the boilerplate using-statements at the top when you create a new class leaving you with a few unused 'using'-statements at the top. | http://www.osnews.com/thread?453423 | CC-MAIN-2015-40 | refinedweb | 110 | 73.78 |
This is a web-based note viewer. I use this in conjuction with a script bound to an F-key that starts an emacsclient with a timestamped text file in a note directory. This allows me to quickly make one-off notes by pressing a single key, then close the emacsclient window and continue with what I was doing before. However, since I deliberately don't include filenames or the like - to make note-taking as quick and low-investment as possible - viewing the notes in a text editor isn't very nice. So this is a Flask application that displays them on a simple web interface.
Feature Creep
I've already added automatic linking of URLs, editing in the browser, and am working on including non-text files. For example, I'd like to have a similar keybinding to take a screenshot and stick it in the same directory.
Integration
I integrate notedir using werkzeug's DispatcherMiddleware like so:
from notedir import app as notedir_app notedir_app.config["NOTEDIR_DIRECTORY"] = "/home/akg/var/notes" app = DispatcherMiddleware(app, { # ... "/notes": notedir_app, }) | https://bitbucket.org/adamkg/notedir/src/2de00b4ea63c/?at=default | CC-MAIN-2015-11 | refinedweb | 178 | 53.31 |
Pointers are powerful features of C++ that differentiates it from other programming languages like Java and Python.
Pointers are used in C++ program to access the memory and manipulate the address.
To understand pointers, you should first know how data is stored on the computer.
Each variable you create in your program is assigned a location in the computer's memory. The value the variable stores is actually stored in the location assigned.
To know where the data is stored, C++ has an & operator. The & (reference) operator gives you the address occupied by a variable.
If
var is a variable then,
&var gives the address of that variable.
#include <iostream> using namespace std; int main() { int var1 = 3; int var2 = 24; int var3 = 17; cout << &var1 << endl; cout << &var2 << endl; cout << &var3 << endl; }
Output
0x7fff5fbff8ac 0x7fff5fbff8a8 0x7fff5fbff8a4
Note: You may not get the same result on your system.
The 0x in the beginning represents the address is in hexadecimal form.
Notice that first address differs from second by 4-bytes and second address differs from third by 4-bytes.
This is because the size of integer (variable of type
int) is 4 bytes in 64-bit system.
C++ gives you the power to manipulate the data in the computer's memory directly. You can assign and de-assign any space in the memory as you wish. This is done using Pointer variables.
Pointers variables are variables that points to a specific address in the memory pointed by another variable.
int *p; OR, int* p;
The statement above defines a pointer variable p. It holds the memory address
The asterisk is a dereference operator which means pointer to.
Here, pointer p is a pointer to
int, i.e., it is pointing to an integer value in the memory address.
Reference operator (&) as discussed above gives the address of a variable.
To get the value stored in the memory address, we use the dereference operator (*).
For example: If a number variable is stored in the memory address 0x123, and it contains a value 5.
The reference (&) operator gives the value 0x123, while the dereference (*) operator gives the value 5.
Note: The (*) sign used in the declaration of C++ pointer is not the dereference pointer. It is just a similar notation that creates a pointer.
C++ Program to demonstrate the working of pointer.
#include <iostream> using namespace std; int main() { int *pc, c; c = 5; cout << "Address of c (&c): " << &c << endl; cout << "Value of c (c): " << c << endl << endl; pc = &c; // Pointer pc holds the memory address of variable c cout << "Address that pointer pc holds (pc): "<< pc << endl; cout << "Content of the address pointer pc holds (*pc): " << *pc << endl << endl; c = 11; // The content inside memory address &c is changed from 5 to 11. cout << "Address pointer pc holds (pc): " << pc << endl; cout << "Content of the address pointer pc holds (*pc): " << *pc << endl << endl; *pc = 2; cout << "Address of c (&c): " << &c << endl; cout << "Value of c (c): " << c << endl << endl; return 0; }
Output
Address of c (&c): 0x7fff5fbff80c Value of c (c): 5 Address that pointer pc holds (pc): 0x7fff5fbff80c Content of the address pointer pc holds (*pc): 5 Address pointer pc holds (pc): 0x7fff5fbff80c Content of the address pointer pc holds (*pc): 11 Address of c (&c): 0x7fff5fbff80c Value of c (c): 2
Explanation of program
c = 5;the value 5 is stored in the address of variable c - 0x7fff5fbff8c.
pc = &c;the pointer pc holds the address of c - 0x7fff5fbff8c, and the expression (dereference operator)
*pcoutputs the value stored in that address, 5.
c = 11;since the address pointer pc holds is the same as c - 0x7fff5fbff8c, change in the value of c is also reflected when the expression
*pcis executed, which now outputs 11.
*pc = 2;it changes the content of the address stored by pc - 0x7fff5fbff8c. This is changed from 11 to 2. So, when we print the value of c, the value is 2 as well.
Suppose, you want pointer pc to point to the address of c. Then,
int c, *pc; pc=c; /*. */
In both cases, pointer pc is not pointing to the address of c.
You should also check out these pointer related tutorials: | https://cdn.programiz.com/cpp-programming/pointers | CC-MAIN-2019-47 | refinedweb | 701 | 61.56 |
Red Hat Bugzilla – Bug 186609
Evolution does not read my system mailbox
Last modified: 2018-04-11 06:51:10 EDT
Description of problem:
Evolution does not show my system mailbox on my imap server
Version-Release number of selected component (if applicable):
evolution-2.6.0-1
How reproducible:
Steps to Reproduce:
1. Configure an account on an imap server (details later)
2. Try to read your mail
3.
Actual results:
System mailbox is not displayed
Expected results:
System mail box should be displayed
Additional info:
Imap server : Fedora core 4, uw-imap-2004g-3.fc4 from Fedora Extras
Folders stored in ~/Courrier on the server can be read normaly.
Option "Overrife server supplied namespace" is checked, Namespace is set to
"Courrier". If the option is unchecked, evolution gets a lot of data from the
server (several minutes at 100Mbit/s), and then use 100% of CPU for a very long
time (I always stopped it before the end...).
The previous version (under Fedora Core 4), or Thunderbird works fine on thise
server. My mailbox on my home access provider (imap.free.fr) is corretly
accessed through Imap and evolution (UW imap also, AFAIK).
I have encountered the same problem. Previously I had no problem reading the
Inbox on my mail server using IMAP with evolution 2.2.3 in FC4. I updated from
FC4 to FC5 by doing a fresh install while preserving the data in my home
directory. I can still read all the folders in ~/mail on my mail server, but I
can't access the Inbox.
The email account in question is configured with the "Override server supplied
namespace" option checked and with Namespace set to "mail" (which worked
before). Unchecking the override option seems not to change anything.
I tried to solve the problem by configuring evolution from scratch again. I
moved ~/.evolution to another directory and removed (and backed up) all of
evolution's gconf keys. When I tried to set up the email account without
checking the "Override server supplied namespace" option, evolution just sat
there consuming 100% of the processor time but doing nothing else (no network
activity). Eventually I killed it. When I selected the override option, it
behaved as before: I could read the folders in ~/mail, but not the Inbox.
Same problem here. It broke on upgrade from FC4 top FC5. It currently works with
Thunderbird in FC5, so I rule out server misconfiguration.
My situation is similar. In FC4 I had "Show only subscribed folders" checked and
had not overrided the server namespace (it worked fine since I subscribed to
folders using SquirelMail, can't remeber why, but I remember doing it just
once). When I upgraded to FC5, evolution would just stuck, doing nothing. Then I
tried to restrict the server namespace to mail (witch is stricly correct in my
setup, even though it worked without the override before). This time it worked,
but the INBOX vanished, with a strage message about it already being subscribed.
If anyone need more info, just ask.
I can only add a "me too" here, since the situation has already been described.
I had to temporary switch to thunderbird, which seems to work all right, but I
really would like to go back to Evolution sometime soon.
Please let me know if I can provide additional information, but my scenario is
the same than the others.
Me too. Simliar scenario described above, but the IMAP server is RHEL3U7.
There is a temporary workaround, for those who have a shell access to their mail
server : on the server, create a symnlink to your system mailbox
(/var/spool/mail/login_name) in your mail directory.
I'm having the same issue: FC5 x86_64 evolution 2.6.0-1 connecting to a RHL9
box running uw-imap 2001a-18.2.legacy. I did manage to use the symlink
workaround to at least view mail.
If you let the evolution try to process the entire contents of the folder (don't
override the server namespace) you will eventually get subscribed to all the
files in your home folder. I do see the INBOX when this completes though.
Depending on how many files you have in your home folder and what they range
from in size, it could take evolution a long time to load. Once it is loaded
and you try to go and unsubscribe all folders but the ones in "~/mail",
evolution will most likely crash.
Is this bug corrected with evolution-2.6.1-1.fc5.2 released today? I don't have
time to test it yet :(
No, it is not..
For me it was fixed with an update to FC5, can't remember exactly when. I has
not reappeared in FC6 or F7. | https://bugzilla.redhat.com/show_bug.cgi?id=186609 | CC-MAIN-2018-17 | refinedweb | 792 | 64.41 |
Why `print` became a function in Python 3
After writing my post on why Python 3 exists which included an explanation about the most common question/complaint about why Python 3 forced textual and binary data into separate types, I heard from people about the second most common question/complaint about Python 3: why did we make
Who can do what?
The
print statement
In its most basic form,
print A is the equivalent of
sys.stdout.write(str(A) + '\n'). If you pass extra arguments separated by commas then they will also be passed to
str() and be printed with a space between each argument. For example,
print A, B, C is equivalent to
sys.stdout.write(' '.join(map(str, [A, B, C])) + '\n'). If there is a trailing comma then the added newline is left off, e.g.
print A, is the same as
sys.stdout.write(str(A)).
Introduced in Python 2.0, the
print >> output, A is the same as
output.write(str(A) + '\n').
The
print function
The equivalent definition of the
import sys def print(*objects, sep=None, end=None, file=None, flush=False): """A Python translation of the C code for builtins.print(). """ if sep is None: sep = ' ' if end is None: end = '\n' if file is None: file = sys.stdout file.write(sep.join(map(str, objects)) + end) if flush: file.flush()
As you may have noticed, all the features of the
print A==
print(A)
print A, B, C==
print(A, B, C)
print A,==
print(A, end='')
print >> output, A==
print(A, file=output)
The obvious thing the
It's all about flexibility
But the real key to the
# Manually ... print A, '...' # For a reusable solution (which also works with a functional print) ... def ellipsis_print(*args): for arg in args: print arg, '', print '...'
But for Python 3, you have much better solutions:
# Manually ... print(A, end='...\n') # Multiple reusable solutions that won't work with a syntactic print... ellipsis_print = lambda *args, **kwargs: print(*args, **kwargs, end='...\n') # Or ... import functools ellipsis_print = functools.partial(print, end='...\n')
In other words, with
builtins.print while you can't do that with a statement.
The flexibility that the Python development team gains is being unshackled from having to make the features of
It should also be mentioned that as a general guideline, syntax is reserved for things that are either impossible to do otherwise or there is a clear readability benefit to the syntax. In the case of
print A and
print(A) is negligible and thus there is no loss in readability. And since we were able to completely replace
You might also be interested in these articles... | http://www.snarky.ca/why-print-became-a-function-in-python-3 | CC-MAIN-2016-30 | refinedweb | 446 | 63.19 |
How to hide status bar in scene?
How can I keep the status bar (giving time, date, .... battery level) from appearing at the top of my scene?
import scene import ui class Loop(scene.Scene): def setup(self): self.background_color = 'white' def update(self): time = self.t v = ui.View(frame=(0, 0, 1366, 1024)) v.present('full_screen', hide_title_bar = True)
- list item
import objc_util objc_util.UIApplication.sharedApplication().statusBar().hidden = True
Add that after you present your script.
That causes my screen to appear briefly, then the program terminates.
@Tey That's because Pythonista crashes, this function needs to run in main thread
See file _objc_exception.txt
The app was terminated due to an Objective-C exception. Details below: 2020-02-19 10:39:50.178902 threading violation: expected the main thread
Try so
@on_main_thread def x(): objc_util.UIApplication.sharedApplication().statusBar().hidden = True x()
No more crash but does not work, perhaps due to ios13
@Tey, just checking the question. If I run the below, I get a full red screen with no extras except the ”X” to close. What happens on your device, and which device is it?
import scene class Loop(scene.Scene): def setup(self): self.background_color = 'red' scene.run(Loop())
include title_bar_color='black' in present.
It works but I don't understand why.
The status bar either disappears, either has its characters white???
@cvp, because
sceneis not
ui, and this is how it is supposed to work?
uihas issues not really supporting removing the extras with
present*), but if OP had the issue with
sceneas the title suggests, I do not see the problem.
*) Workaround:
runa scene, then use its
viewattribute as you would any other
uiroot view.
@mikael Already understood in the past and forgotten. 70 years old today, thus perhaps too old for this kind of stuff 😢 If you say "yes", I kill you 😂
70 years old today
How would you use Python to determine how many days old you are? Hint: use actual birthdate and 70 year birthday date to avoid any off-by-one errors for leap years.
Extra credit: How many leap seconds have you live through so far?!?
How would you use Python to determine how many days old you are?
Sure you will have shorter
from datetime import date d0 = date(1950,2,19) # birthdate d1 = date.today() delta = d1 - d0 print(delta.days)
My ... we do wander off topic don’t we?
I’m running 13.3.1 on an iPad Pro (12.9 inch) (2nd generation)
My problem is with the status bar, not the title bar. The title bar is the one with the X used to terminate the program. It can be made to completely disappear using hide_title_bar = True, as in my original post.
The status bar is an Apple thing. It contains time and date at the left and battery indicator among other things at the left. You see it at the top of your main screen.
So, to summarize:
@JonB your suggestion of
import objc_util
objc_util.UIApplication.sharedApplication().statusBar().hidden = True
for reasons unknown to me no longer causes the screen to appear for a split second before Pythonista terminates. The program no longer terminates but the status bar is still present.
@cvp your suggestion of
“@on_main_thread
def x():
objc_util.UIApplication.sharedApplication().statusBar().hidden = True
x() “
causes a NameError, name on_main_thread is not defined. And the status bar is still there.
@cvp I have you beat by a decade. My first computer was an IBM 1620, used paper tape, took 180 microseconds to execute a NOOP and had a 40K digital memory. None of this newfangled octal or hexadecimal. Seventy may be the new fifty but eighty is still the same old eighty!
@Tey The suggestion of @JonB is correct but, since iOS 13 needs to run in main thread.
You can have the reason of the error in the _objc_exception.txt file after the crash
Thus, you need to do
from objc_util import * . . . @on_main_thread # comes from objc_util def x(): . .
Thus, so no more crash but status bar still there because jonb suggestion is no more valid in iOS 13
@on_main_thread
def x():
objc_util.UIApplication.sharedApplication().statusBar().hidden = True
x() “
causes a NameError, name on_main_thread is not defined.
NameError because we
import objc_util
Instead of
from objc_util import *
Still get the NameError. Just to be clear, below is what ran. I’ve also run it with @JonB lines after v.present but of course still get the name error.
I’ll give up on this for now because in actual practice my screen is black not white and the status info only shows up when something light colored passes behind the status bar.
Thanks everyone for trying.
import scene import ui from objc_util import * class Loop(scene.Scene): def setup(self): self.background_color = 'white' def update(self): time = self.t v = ui.View(frame=(0, 0, 1366, 1024)) @on_main_thread # comes from objc_util def x(): objc_util.UIApplication.sharedApplication().statusBar().hidden = True x() v.present('full_screen', hide_title_bar = True)
@Tey, this seems solvable. Could you first clarify whether you are developing with the ui or scene? I would expect scene if this is some type of a game, and ui otherwise.
Your code snippet imports both, but only uses ui, as the scene class seems not to be used.
If you are really using scene only, you should not be using
present, only
scene.run.
If you are using ui, you can use the workaround I mentioned above to hide the Apple stuff at the top. | https://forum.omz-software.com/topic/6179/how-to-hide-status-bar-in-scene | CC-MAIN-2021-39 | refinedweb | 919 | 68.87 |
No additional agenda items added. John Ibbotson noted the Usage Scenarios would be published as a WD (this was in David Fallside's agenda update).
Fallside proposed that the approval of the F2F minutes be postponed until next week. The minutes still need to be cleaned up and completed. No objections from the WG on this proposal. The minutes from the 12/5 XML Protocol Working Group teleconference were approved as posted.
Henrik Nielsen requested that all action items be updated by Friday. Hugo noted that the list of issues are typically updated during the telcon itself.
SOAP Version 1.2 Part 0: Primer - No status provided (Nilo Mitra joined late). Per Yves Lafon, there were minor edits made to the document to make it compliant with publication rules; no content changes though. After changes are made, the document will be ready to go. Parts 1 and 2. Editors: Jean-Jacques said the HTML conversion has taken a week. SOAP Version 1.2 Part 2: Adjuncts. Jean-Jacques: TBTF material has now been fully incorporated into Part 2, and converted to valid XML. Most comments on the TBTF text have also been incorporated, apart from a few exceptions, in particular adding labels to the arrows of the state transition diagram. David Fallside: You have not yet included my proposed rewrite of the new section 6! Jean-Jacques: Yes I have, but only in the XML version. I was waiting for prior WG approval before generating the HTML version! David Fallside: Oh!... There have only been positive comments on this text so far, so I propose that we accept this rewrite. No objection, so David's rewrite is accepted. In the background, Yves regenerates the HTML for Part 2 from the XML provided by Jean-Jacques. SOAP Version 1.2 Part 1: Messaging Framework. Marc Hadley: All issues resolutions and editorial changes have been incorporated. Per Fallside, actual publication decision is the subject of another agenda item. The group thanks Jean-Jacques Moreau for his tremendous efforts on the HTML to XML conversions. ETF - The group met briefly last week. There was not a lot of work done and attendance has been sparse. There is a list of action items that the ETF is working through to provide resolutions to issues. Next call will be Thursday, 13 December. TBTF - The task force has not met since the F2F meeting. We need to restart this effort shortly. There are a number of issues on its plate. Conformance - Per Hugo, David Turner from Microsoft has provided legal text such that we can use SOAPBuilders tests. Hugo needs to consult the W3C Legal (or Joseph Reagle) to determine if the legal text is acceptable. Usage Scenarios - Thanks to Yves Lafon for getting the scenarios converted from Word to HTML particularly the tables. Ibbotson is working on the converting the HTML version of the document to XML and DTD. Jean-Jacques Moreau will send DTD used in the spec to Ibbotson. Issues List - Per David Fallside, the issue list maintainers are backed up. When do they expect the issues list to be completed? Yves Lafon will update issues list by end of this week. There was a question as to who is the official issues list maintainer? Yves responded that he was doing it since Hugo was busy. Hugo indicated that he has no time to update it.
[23] and Exclusive Canonical XML [24]. There are no responses to the e-mail from David Orchard [4] that proposed a repsponse. Per Orchard, a liaison group was formed as a result of his 3 July e-mail, but nothing happened. Orchard said that there are two ways of going forward (1) send the response as the opinion of one member, or (2) send the response as WG's opinion and say nobody disagrees with it. Orchard added that there appears to be great apathy on this issue. Per David Fallside, you can infer some information from the lack of response. Fallside added that the XMLP WG was told at last week's telcon that we were looking for an XMLP response for last call to the XML Encryption group, and so it is fair that if nobody responded, this would go out as the WG's response. Action: Orchard will combine the two responses (the other from Yen-Ling Husband) as well as Paul Denning's comments. Orchard will draft and post the combined response today so the WG can see the final response. Fallside will send response to the XML Encryption Working Group on Friday, assuming no WG pushback.
Is there any objection to asking the W3C to publish Parts 0 [6], 1 [7], 2 [8] of the SOAP 1.2 specification and the Usage Scernarios as WDs? There has been no objection to the Part 2/Sec 6 revised text [9], and it is expected this will be added into the publication version. Note editorial comments [10]. Fallside: We have not seen the final HTML texts for the three pieces. We would like to see the final versions of Parts 0, 1,2, and Usage Scenario in their final HTML form before we send them to W3C. Lafon: Final HTML will be available within an hour. Fallside: There are no comments on Parts 0, 1, and 2. I propose to employ the same review mechanism as for Orchard's texts, and to send WDs to W3C for publication on Friday. No objections received. Fallside: If publication is succesful, the documents should be published early next week.
ref'ing data that may be stripped from a message [11]. See Jacek's proposed resolution [16], and subsequent discussion, including a couple of modified proposals, [17b ] [18] Henrik Nielsen and Jean Jacques had the most discussion about this issue. Henrik Nielsen noted this issue was discussed at the f2f, and he asked what are the requirements for passing hrefs and actually dereferencing them as part of message processing? HFN noted some links are not relevant, such a link to a photo of my dog. Henrik: In case you want to resolve an href and you can't, this is the fault you can generate. There was some follow-up discussion, Noah also had a point and discussion died. There are two issues that are slightly mixed up. Several proposals were made. The first proposal is to always require resolution of an href and require a fault. A second proposal says we don't say anything about requiring resolution but if resolution is attempted and fails, it is a fault, and the fault mechanism is used. Noah: Henrik seems to be responding to #168 which is related, but are we talking about #170 now? Henrik: Yes those are the two that I may have mixed up. Noah agreed that we can put aside his comments about what a binding specification should or can say about attachments. He was comfortable with HFN's direction on Henrik's proposal about dangling hrefs that don't relate to attachments. Noah surmised that SOAP might say a fault "may" be generated vs a fault "must" be generated. When using encoding There was some dialog on serializing graphs (with missing edges, etc.). The sending nodes may encode graphs (with hrefs) which are incomplete. This is the conversation that leads us toward #168. Stuart suggested that IDRefs are more limiting and may be better in SOAP messages than HREFs. David Orchard did say that Microsoft had in the past produced a Biztalk canonicalization proposal that use IDRefs. There was some debate about simplicity and consistency. The crux of the issue is do SOAP processors have to look at all HREFs? (Scribe's notes on the rest of this agenda item are incomplete.) Ibbotson said there are two approaches: 1. The model itself links to resources 2. The encoding allows ability to express links to a resource. Since there was no convergence of the discussion, Fallside ruled that the issue should go back to email, and he noted that we need a way of bridging two ideas: (1) within the SOAP model (2) the mapping of nodes and serialization.
attribute clashes on refs [12]. Do we accept Murali's proposed resolution [13]? Murali drafted text for closing issue #171 based on the f2f discussions surrounding this issue. There were a couple of responses. Noah recommended that text be added to clarify the relationships between xsi:type and graph nodes. Murali did not have the opportunity to follow up with Noah. The other comment was from Jean-Jacques. We should talk about attributes other than xsi:type, though Murali feels it might not be necessary. Noah clarifies his point. He agrees with "if you want to know about xsi:type to read the Schema spec". He would like to see "Everything you find in the Schema spec about xsi:type applies. Because we are describing a graph model here, it is the case, that when you use an xsi:type as an element, you are specifying the type of the node in the graph". Noah: What is it do we want to say about schema validation? Schema validation is not the issue on the table. When you are doing schema validation, the Schema spec tells you what xsi:type means and all that applies. Whether or not you are validating schema, when xsi:type names one of the 12 built-in types, then essentially the node is typed unconditionally. Two texts were included in Murali's proposal - minimal text and the verbose text. Though the minimal text is ok, it was generally agreed by email that the verbose text is much better. It specifically acknowledges the problem expressed in issue #171. Noah proposed (1) add before the second sentence the "type of node" (2) a third sentence to say "Furthermore, regardless of whether schema validation is employed, when xsi:type names is one of the built-in types, the SOAP processor MUST recognize the node being the corresponding..." Action: Murali will expand the proposed (verbose) text taking into account Noah's concerns by 12/13.
This agenda item was not discussed because it concerned an issue that has already been closed.
support for resource constrained devices [19] During 11/14 telecon discussion of this issue, we changed R309 (see 11/14 minutes) as a prerequisite to resolving the issue. Mark Baker has proposed resolution text [20] in light of our change to R309. There were a couple more emails on the subject - what is our final proposal? Mark Baker continues to work on Issue 40. He needs to include Mike Champion's recent text. There were no other comments/discussions on this issue. Discussion postponed for next week's telecon.
clarify use of enc:offset and enc:position [21] (time permitting). A proposal is forthcoming, details shortly [22]. Glen Daniels' text is not on the issues list and the feedback from his email was unknown. Glen should forward status on his query to SoapBuilders. Jacek said a related issue #144 may be closed. Action: Stuart Williams took the action to ping Glen to send his email.
Henrik summarized the issue as how to handle multiple fault codes results, for example like the XML dot notation. After we decided to remove the dot notation, the issue remained whether or not to have only one fault code. If fault codes are defined by arbitrary namespaces, we can have a situation where a SOAP node does not know a particular extension has no idea what is going on in the message. Henrik's proposal is that the 1st fault code is defined in SOAP 1.2. Part 1 defines all that SOAP nodes will know plus additional q-names. This should not change the existing situation with respect to boxcarring. We want a relatively well-known mechanism for what went wrong and be able to extend it with new fault codes. The proposal is based on one from Martin Gudgin and Rich Salz. It does not change the SOAP fault mechanism or the ability to carry multiple fault codes. Fallside asked if in fact HFN's proposal differed from MartinG's in the sense that MartinG's proposed fault value/fault subValue would exist in hierarchical tree. HFN agreed that this was the difference. Henrik advocates a "flat" SOAP fault mechanism with Value (MustUnderstand) and Sub (May understand). Sub is a specialization and there may be multiple Subs at the same level. The other approach is hierarchical subs in the SOAP fault. Both approaches were discussed. Per Henrik, there is only one value and multiple subs. Doug Davis proposed to get rid of the sub and allow multiple values, with the namespace of the value child be the indicator of whether you understand it or not. Fallside asked if there is a separator convention between the value inside that value element. Davis assumes there will be different value elements. There will be no distinction as what is actually defined by SOAP or not defined by SOAP. It would be identified by the namespace. Fallside added that the content of the elements would make the distinction rather than the surrounding elements. Marc Hadley wondered about interoperability between SOAP versions if we choose the the flat approach which is a change. Fallside polled the group for those in favor of "flat" vs "hierarchical" approaches. A large majority favored the hierarchical approach. Action: Marc Hadley took the action to write a schema fragment to describe fault code for hierarchical option by end of this week. | http://www.w3.org/2000/xp/Group/1/12/12-minutes.html | CC-MAIN-2014-35 | refinedweb | 2,253 | 65.73 |
paths.py not installed
Bug Description
# make install
Creating architecture independent directories...
[...]
Compiling /testing/
Traceback (most recent call last):
File "bin/update", line 44, in ?
import paths
ImportError: No module named paths
make: *** [update] Error 1
Exit 2
[http://
Complete output attached...
Something else weird is going on with your installation.
Why did you get this:
(cd email-1.2 ; /usr/bin/python setup.py --quiet install
--install-lib /opt/mailman2.
Traceback (most recent call last):
File "setup.py", line 11, in ?
from distutils.core import setup
ImportError: No module named distutils.core
make[1]: *** [install-packages] Error 1
i.e. why couldn't Python find distutils.core? Do you have
some weird PYTHONHOME envar setting? Try firing up Python
from the command line and importing distutils.core.
This failure is preventing the rest of misc/Makefile's
install target from completing, which is why bin/update
doesn't find paths.py.
Your Python installation looks messed up.
o.k. -- This seems to be a problem with Debian's Python
installation. I'll investigate and diskuss the matter there...
Thanks!
I can't reproduce this. Look at your make output and be
sure you don't have a permission problem somewhere. | https://bugs.launchpad.net/mailman/+bug/265619 | CC-MAIN-2015-48 | refinedweb | 201 | 62.64 |
.
Good article. How would would you go about adding IoC (Inversion of Control)? for example: event registration could be done via attributes to hook up publishers to subscribers? Or another example setting up business objects within a plugin.
Wow! What a great question. I think there are several questions in there 🙂
We are planning to give more guidance on using patterns, etc.. on our model. In some respects there are several patterns you employ (or may employ) in using our model (e.g., Proxy, Adapter, Bridge, …). The key to our model is a separation of implementation and integration. After initial activation we get out of the way. Which of course implies that you need to build your object model on pattern(s).
We will give an example of how events are enabled over our model.
We will give an example of how a traditionally thought of Add-In (e.g., Office Add-In) works over our model. The article shows an Add-in providing a service to a Host not an Add-in automating a Host.
I’m not clear on your BO plugin question but if you are referring to an Add-in (sorry but we use the term Add-in not plug-in) that also acts as a Host (e.g., Windows media player) we support this model. Or if you are refering to plugable pipelines we also support this.
Great…
I’ve seen this question posted several times, but need to post it again… When will it be released?
Oh also, WCF example, for example, WCF interface mapped and a plugin interface mapped into a single instance to a module…
Thus from the programmers view point it can be coming over the web, or UI it uses the same interface?
re: When it is released –
re: WCF. We reconcile the models via an interface/contract. In fact we call it a contract because we worked with the WCF team. There is certainly more we could do to bring the models together. The models are similar wrt a proxy pattern. The WCF model is fundamentaly based upon a messaging model and reconciling to a single model for cross machine and intra-machine would be difficult.
Can this technique be used in Web apps? I am working on a project that is a web app and we would like to develop a framework that enables develoeprs to build tools against our app and then treat those tools as add-ins. Would this be the technique we should be moving towards?
We designed these features to be used in both clients and servers including web applications. We are also capable of running in partial trust so if you need, for example, to host add-ins from a partially trusted IIS/ASP.Net domain you will be able to.
In fact, a few of the internal partners we are working with are building web applications and drove many of our requirments.
Thanks,
Jesse Kaplan
How would an add-in talk to another add-in in your model? Is there some kind of contract between an add-in and the host to find and communicate with other add-ins?
In order to create a host application, should we wait for ‘Orcas’ release? The AddIn namespace is new as of Orcas, correct?
With regards to add-ins talking to add-ins – I’m not sure we envisioned that. Could have possibilities though. Thanks for bringing it up!
Leland
Is it possible for an out-of-process addin to be activated remotely (say on remote server machine on the corporate network)?
re: Add-Ins talking to other add-ins
There are actually two possibilities here. The first is that one add-in can directly activate a second. In our system add-in/host is really just about who calls the Activate API and it’s entirely possible for a component to be both at the same time.
Additionally we provide functionality to make it relativly easy to pass add-ins from one entity (say to the original host) to another (say one of its add-ins). Part of the object model between the host and the add-in would have to allow for the add-in to be passed across. I’ll make a full post on this at a later time but we provide the functionality to allow the host to pass the add-in off to another entity and let that entity talk to the add-in with a completely different “view” than the original host.
re: Release schedules
All of this functionality will first appear in the Orcas release of the .Net Framework. If you can wait to release your product until we release we would certainly encourage it, but if you cannot wait until that time I still encourage you to read about what we’re providing so that you know what pitfalls to look out for and have a migration plan to move over to the framework provided system as it makes sense.
If you do think you can wait until Orcas, or even if you can’t, you don’t need to wait until the final release or even the beta’s to start building or prototyping on our bits. The current CTP has nearly all the features we’ll be delivering and is quite stable. There may be a few code changes required for moving fromt the CTP to betas and final releases but the ammount of effort will be minimal.
re: Remote activation
In Orcas we do not provide anything that would let a client side host application run add-ins on a server. We’ll be coming out with samples/articles that demonstrate how to easily discover add-ins from a server and deploy them on a client, but they would then be run on the client.
How do you plan to support the third category, the UI type addin? Thanks
The question about UI add-ins probably warrants its own post but the short answer is that our model is agnostic as far as UI is concerned and doesn’t do anything that would prevent hosts from designing object models that would allow the add-ins to directly display UI to the user.
The problem is that there is currently no UI framework built on the CLR that is capable of running in multiple AppDomains. We are working with the WPF and WindowsForms teams to enable these scenarios but we’re not sure when that will come online. In the meantime, if your add-ins need to contribute UI directly to the user you can run those add-ins in the same AppDomain as the host: you will lose the sandboxing potential and reliability you get with AppDomain isolation but you still get most of the versioning benefits enabled by our sytsem.
–Jesse Kaplan
What would be the guidance for when to use System.Addin as opposed to CAB?
Hi John. System.AddIn is a platform component and is therefore fully supported and serviced as a Microsoft product.
Whereas, CAB is Prescriptive Guidance, not a product. Don’t get me wrong, they have done some great work and there is a model for receiving some support. And many times their work finds its way into core products.
The PAG CAB is more specifically a UI framework model.
The Add-In model is a “generic” model for extensibility. As such, CAB could certainly be built on the Add-In model!
If your question is why not have a standard contract definition for IWorkspace, IUElementAdapter,.. you find any argument from me
I would love to get feedback on whether or not people would find this valuable.
Thanks for the response JackG. Perhaps part of my original question should have been… Since CAB is a pre-built framework, why would I re-implement the same thing under the Add-In model? I know something about CAB but little about AddIn. Are there any advantages to building my own AddIn based framework? i.e. CAB is heavy handed compared to AddIn, doesn’t provide isolation, etc….? I do like the idea of working with a supported MS product. BTW – I’ve read the MSDN articles – good stuff.
John, Thanks for clarifying. I agree a UI f/x built upon the Add-In model would be a great offering. We have had many discussions internaly about this very subject. I will pass on this desire to our sister team. If others out there are also interested, please let us know. Customer demand certainly drives our feature work. JackG
Regarding support for UI Add-In’s – Here you go!
<a href= >jodo upskirt gameshow</a> | https://blogs.msdn.microsoft.com/clraddins/2007/02/08/2nd-msdn-magazine-article-hits-the-web-jack-gudenkauf/ | CC-MAIN-2019-18 | refinedweb | 1,445 | 71.95 |
Post your Comment
Java get default Locale
Java get default Locale
..., or cultural region. We can get and set our default
locale through the java programs. For setting or getting default Locale we need
to call static method getDefault
Setting the default Locale
Setting the default Locale
This section shows you how to set the default locale.
Example presented in the section, illustrates you how to
get and set the default
locale.
Java Set Default Locale
Java Set Default Locale
Setting Default Locale Example in Java
Locale can be set also...
In our example of setting default locale value we can
get the value
Java get System Locale
Java get System Locale
In this section, you will learn how to obtain the locale. We are
providing you an example which will obtain the locale of the system
JSP Locale
JSP Locale
JSP Locale is used to get the preferred locale... illustrate you how to get the locale from the jsp page. The
locale.jsp include a page
Java get System Locale
Java get System Locale
In this section, you will learn how to obtain the locale. We are
providing you an example which will obtain the locale of the system by using
How to get country locale from a http request
How to get country locale from a http request Hi,
I Have a requirement like , i need to get the country locale(from whcih country he is logged in) when user login to a web application. Based on that country locale i am going
Locale with SimpleDateFormat - Java Beginners
parameter, it will format the date according to the default Locale.
import...Locale with SimpleDateFormat What does that mean when instantiating a SimpleDateFormat object,specify a locale?
Its just a good programming
date_default_timezone_get
date_default_timezone_get
date_default_timezone_get function... in a script. It returns a string.
Description
string date_default_timezone_get..., date_default_timezone_get will return a default timezone of UTC.
Examples
Locale in Spring MVC
. It takes defaultLocale property to set the default
locale.
<bean...=fr_FR to set the locale to French and get the
message from French properties... as default file when the locale is set to
default locale. The file name must
Formatting and Parsing a Locale-Specific Percentage
returns a percentage format for the current default locale and the method format...Formatting and Parsing a Locale-Specific Percentage
In this section, you will learn how to format and parse a locale-specific
percentage.
On the basis
Java get default Charset
Java get default Charset
We can achieve the default charset
through the java program...:\javaexamples>java GetDefaultCharset
Default encoding is : Cp1252
C
Select DropDown Default
= stateChange;
xmlHttp.open("GET", url, true...;
<FORM action="HomePage.jsp" method="get">
<table align
Locale Specific Date validations through JavaScript/Ajax...
Locale Specific Date validations through JavaScript/Ajax... Hi,
I am developing an JSP Application which supports I18N(Internationalization). In my... MY Field!";
If(lanuage="Denmak"){
//You can even Get this value from
Locale class
Locale class What is the Locale class
date_default_timezone_set
_set('India/Kolkata');
$script_tz = date_default_timezone_get();
if (strcmp...
date_default_timezone_set
date_default_timezone_set function sets the default timezone used by all date/time functions in a script. This function
Java Locale
Java Locale How will you load a specific locale?
Hi Friend,
A locale is the part of a user's environment that brings together.../java/javadate/locale-format.shtml
Default constructor generate two random equations
Default constructor generate two random equations Need to create a default constructor in a programmer defined class that generates a random... get 6-0 please help.
package project4;
public class Project4App
Find Default ResultSet Type Example
Find Default ResultSet Type Example:
We are discuss about to find out Default... Type so you can find out the default
ResultSet type using this example. We are using MySql database server for this
example and get the meta data
The java.text package
the default locale or a specific locale.The purpose and
use... locale has four default formats for formatting
and parsing dates. They are called... formats dates using the default locale
(which is "ru_RU"). If the example
Formatting and Parsing a Date Using Default Formats
Formatting and Parsing a Date Using Default Formats
In this section, you will learn how to format and parse a Date using default
formats.
Java has provide... have defined all the default formats so the date
will get displayed in all
Java Get Example
can get and set our default
locale through the java programs.
... charset
through the java program.
Default Locale...
Java Get Example
Java program to get current date
Java program to get current date now
In this example program we have to get... to get the calendar instance using the
default specified time zone
Java Get Example
can get and set our default
locale through the java programs.
... charset
through the java program.
Default Locale
Locale...
Java Get Example
What is the Locale class?
in java program.
thanks,
Hi,
The Locale class in Java is Java... region. IN Java computing, locale describes a set of parameters to classify... is called locale-sensitive.
For details visit What is Locale class in Java
What is Locale - Java Beginners
the following links:
http...What is Locale Hi,
What is Locale? Write example to show the use of Locale.
Thanks
Hi Friend,
A locale is the part
Java Locale
Java Locale
A locale class in java api represents a specific geographical, political,
or cultural region. In java computing, locale defines a set of parameters
Get Length of Array
( ) - This method return you the default time zone and
current locale.
We declare a variable...
Get Length of Array
In this Tutorial we make a simple example to understand Get Length
Default Package
Default Package Can we import default packages???
Yes, you can import the default packages. But there will be no error or exception, if you will not import default packages. Otherwise, importing packages is must
Post your Comment | http://roseindia.net/discussion/22356-Java-get-default-Locale.html | CC-MAIN-2014-35 | refinedweb | 978 | 57.16 |
[
]
Doug Cutting commented on HADOOP-3307:
--------------------------------------
> Why not do an equivalent of NFS mount
On unix, mounts are not managed by a filesystem implementation, but by the kernel. If we
were to add a mount mechanism, we should add it at the generic FileSystem level, not within
HDFS.
In fact, we already have a mount mechanism, but it only permits mounts by scheme, not at any
point in the path. We could add a mechanism to mount a filesystem under an arbitrary path,
or even a regex like "*.har". This could be confusing, however, since a path that looks like
an hdfs: path would really be using some other protocol. And I don't yet see that we need
to add a new mount feature, since I think the existing one is sufficient to implement this
feature. Also, if we use "har:" or "hdfs-ar:" then it is clear that these are not normal
HDFS files.
A feature that might be good to add to the generic FileSystem is symbolic links. Then one
could add a link in HDFS to an archive URI, thus grafting it into the namespace. If one linked
hdfs://host:port/dir/foo/ to hdfs-ar://host:port/dir/foo.har then one could list the files
in the former to get the uris in the latter. But that's beyond the scope of this issue.
This would be good future work to make transparent archives possible.
>. | http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/200804.mbox/%3C1469415788.1209152155796.JavaMail.jira@brutus%3E | CC-MAIN-2014-52 | refinedweb | 241 | 79.09 |
.
tl;dr
Use
forward on
MyApp.Router to forward the request (
%Conn{}) to a custom plug (
MyApp.Plugs.WebhookShunt) which maps
%Conn{} to a route (and thus a controller action) defined on
MyApp.WebhookRouter, based on the data in the request body.
I.e.,
%Conn{} ->
Router ->
WebhookShunt ->
WebhookRouter ->
WebhookController
lv;e (long version; enjoy!)
Let’s restate the problem:
- all requests are being sent to the same webhook callback url
- there are many different possible request payloads
- application requires different computation depending on payload
Let’s say we’re receiving webhooks which contain an
event key in the request body. It describes the event which triggered the webhook and we can use it to determine what code we are going to run.
Below was my first and somewhat naïve implementation. This is what the router looked like:
scope "/", MyAppWeb do post("/webhook", WebhookController, :hook) end
And the
WebhookController:
def hook(conn, params) do case params["event"] do "addition" -> #handle addition "subtraction" -> #handle subtraction "multiplication" -> #handle multiplication "divison" -> #handle division end end
All incoming webhooks go to the same route and therefore, the same controller action.
It took three refactors to get to a satisfactory solution. I will, however, explain each one in this post, as they are logical steps in reaching the final solution and proved interesting learning opportunities:
- Multiple function clauses for controller action
- Plug called from endpoint
- Plug and second router
Multiple function clauses
Let’s start separating the computation into smaller fragments by moving the pattern matching from the
case statement to the function’s definition. We are still using only one route and only one controller action, but we write multiple clauses of that function to match a certain value of the
event key in the params.
Here’s our controller with the multiple clauses:
def hook(conn, %{"event" => "addition"} = params), do: add(params) def hook(conn, %{"event" => "subtraction"} = params), do: subtract(params) def hook(conn, %{"event" => "multiplication"} = params), do: multiply(params) def hook(conn, %{"event" => "division"} = params), do: divide(params)
The request payload will match the clauses for the
hook/2 function and execute different functions depending on what
event was passed in. This refactor is a step in the right direction, but it still doesn’t fit well with the idea that a controller action should handle one specific request. The router serves no real purpose, as there is still only one route, and our code has the potential to get very messy.
Shunting incoming connections
What if we could interfere with the incoming webhook before it hits the router? We could then modify the path of the request depending on the params, match a route and execute the corresponding controller action.
The router would look something like this:
scope "/webhook", MyAppWeb do post("/addition", WebhookController, :add) post("/subtraction", WebhookController, :subtract) post("/multiplication", WebhookController, :multiply) post("/division", WebhookController, :divide) end
And the controller:
def add(conn, params), do: #handle addition def subtract(conn, params), do: #handle subtraction def multiply(conn, params), do: #handle multiplication def divide(conn, params), do: #handle division
In this case, each controller action serves a specific function, the router maps each incoming request to these actions and the code is easily maintainable, well-structured and won’t become jumbled over time. To achieve this, however, we need to change a couple of things.
First off, we need to interfere with the incoming request before it hits the router so it will match our new routes. This is because the webhook callback url is always the same and doesn’t depend on what event triggered it e.g.,
"my_app_url/webhook". You would think we could create a plug for this and simply add it to a custom pipeline for the routes. The problem with this, is the router will invoke the pipeline after it matches a route. Therefore, we cannot modify the request’s path in this pipeline and expect it to match our
addition,
subtraction,
multiplication or
division routes. If we want our new routes to match, we need to intercept the
%Conn{} in a plug called in the endpoint. The endpoint handles starting the web server and transforming requests through several defined plugs before calling the router.
Let’s add a plug called
MyApp.WebhookShunt to the endpoint, just before the router.
defmodule MyApp.Endpoint do # ... plug(MyApp.WebhookShunt) plug(MyApp.Router) end
And let’s create a file called
webhook_shunt.ex and add it to our
plugs folder:
defmodule MyAppWeb.Plug.WebhookShunt do alias Plug.Conn def init(opts), do: opts def call(conn, _opts), do: conn end
The core components of a Phoenix application are plugs. This includes endpoints, routers and controllers. There are two flavors of
Plug, function plugs and module plugs. We’ll be using the latter in this example, but I highly suggest checking out the docs.
Let’s examine the code above, you’ll notice there are two functions already defined:
init/1which initializes any arguments or options to be passed to
call/2(executed at compile time)
call/2which transforms the connection (it’s actually a simple function plug and is executed at run time)
Both of these need to be implemented in a module plug. Let’s modify
call/2 to match the
addition event in the request payload and change the request path to the route we defined for addition:
defmodule MyAppWeb.Plug.WebhookShunt do alias Plug.Conn def init(opts), do: opts def call(%Conn{params: %{"event" => "addition"}} = conn, opts) do conn |> change_path_info(["webhook", "addition"]) |> WebhookRouter.call(opts) end def call(conn, _opts), do: conn def change_path_info(conn, new_path), do: put_in(conn.path_info, new_path) end
change_path_info/2 changes the
path_info property on the
%Conn{}, based on the request payload matched in
call/2, in this case to
"webhook/addition". You’ll notice I also added a no-op function clause for
call/2. If other routes are added and don’t need to be manipulated in the same way as the ones above, we need to make sure the request gets through to the router unmodified.
This strategy isn’t great, however. We are placing code in the endpoint, which will be executed no matter what the request path is. Furthermore, the endpoint is only supposed to (from the docs):
- provide a wrapper for starting and stopping the endpoint as part of a supervision tree
- define an initial plug pipeline for requests to pass through
- host web specific configuration for your application
Interfering with the request to map it to a route at this point would be unidiomatic Phoenix. It would also make the app slower, and harder to maintain and debug.
Forwarding conn to the shunt and calling another router
Instead of intercepting the
%Conn{} in the endpoint, we could forward it from the application’s main router to the
WebhookShunt, modify it and call a second router whose sole purpose would be to handle the incoming webhooks.
- The request hits router which has one path for all webhooks (
"/webhook")
%Conn{}is forwarded to the
WebhookShuntwhich modifies the path based on the request payload
- The
WebhookShuntcalls the
WebhookRouter, passing it the modified
%Conn{}
- The
WebhookRoutermatches the
%Conn{}path and calls the appropriate action on the
WebhookController
I.e.,
%Conn{} ->
Router ->
WebhookShunt ->
WebhookRouter ->
WebhookController
I think this approach is better. We don’t need to modify the endpoint, the router simply forwards anything that matches the webhook path to the shunt and the app’s concerns are clearly separated.
Let’s set up our webhook path in
router.ex:
scope "/", MyAppWeb do forward("/webhook", Plugs.WebhookShunt) end
As long as your external APIs makes a request to this path when you do the setup for the webhook callbacks, every incoming request to this path will be forwarded to the
WebhookShunt.
Let’s refactor
call/2 to handle all events by replacing the hardcoded
"addition" event and path with the
event variable:
defmodule MyAppWeb.Plugs.WebhookShunt do alias Plug.Conn alias MyAppWeb.WebhookRouter def init(opts), do: opts def call(%Conn{params: %{"event" => event}} = conn, opts) do conn |> change_path_info(["webhook", event]) |> WebhookRouter.call(opts) end def change_path_info(conn, new_path), do: put_in(conn.path_info, new_path) end
With this refactor, all our routes must follow the
"webhook/event" pattern. In more complex applications, you might not be able to conveniently use the event name as a part of the path but the principle remains the same.
You’ll notice I’ve removed the no-op
call/2 function clause. This is because we no longer have to handle all potential requests like we did in the endpoint; we can focus entirely on the webhhooks. Now, if we receive a request with an event which doesn’t match a route, Phoenix will raise an error, which is what we want as we don’t know how to handle that request.
Note: if you can’t configure your API to send only the webhooks you’re interested in handling, you should write some code to take care of that.
Let’s also create
webhook_router.ex in the
_web directory:
defmodule MyAppWeb.WebhookRouter do use MyAppWeb, :router scope "/webhook", MyAppWeb do post("/addition", WebhookController, :add) post("/subtraction", WebhookController, :subtract) post("/multiplication", WebhookController, :multiply) post("/division", WebhookController, :divide) end end
The
WebhookRouter is called from the
WebhookShunt with
WebhookRouter.call(conn, opts), and maps the modified
%Conn{}s to the appropriate controller action on the
WebhookController, which looks like this:
def add(conn, params), do: #handle addition def subtract(conn, params), do: #handle subtraction def multiply(conn, params), do: #handle multiplication def divide(conn, params), do: #handle division
I think this last solution ticks all the boxes. Externally, there is still only one webhook callback url; internally, we have a route and a controller action for each event our application needs to handle. Our concerns are therefore clearly separated, making the application extensible and easy to maintain.
So there you have it, handling webhooks in Phoenix. | https://simplabs.com/blog/2018/02/14/handling-webhooks-in-phoenix.html | CC-MAIN-2018-39 | refinedweb | 1,650 | 50.06 |
Agenda
See also: IRC log
<trackbot> Date: 26 January 2010
<dug> test
<dug> pong
Hi Tom
<Tom_Rutt> Hi , let me know when to dial in
<Bob> Yves is working on the zakin dial-up issues
<Tom_Rutt> "the conference is restricted" message is on the phone
<dug> they're working on it
<li> can't dial in 1-617-761-6200, says "conference is restricted at this time"
<asoldano> li, use code 26633
<li> thanks, asoldano
<asoldano> np li
Request for the following issues for tomorrow from Ram:
6435, 6436, 8196, 8205, 8290
These will be dealt with tomorrow.
<dug> MOAP doc:
Doug reviewed the above document.
Key Points:
- How do we advertice "feature" WSDL?
- Then how do we distinguish the different types of WSDL?
<li> sorry have to drop off for an urgent call
The proposal moves the feature wsdl to a position after the advertizement of the feature in the policy assertion.
It also provides a WSDL Mex request only returns the application WSDL. The others come from the assertions.
Tom: Could the feature WSDL ever also be an application WSDL?
Doug: Yes, but probably out of scope of this WG.
Asir: For clarity, we are talking about associating metadata (e.g. WSDL) with the feature policy assertion?
Doug: Yes.
... the proposal is general in the sense that it associates metadata with it feature assertion.
Gil: Asserts that Eventing specifies only one WSDL for an endpoint.
Confirmed A.2.1 of eventing
Doug: The point may be valid if there are multiple WSDLs associated with the assertion.
The policy creator has the option to do it however, the need.
Asir: How do we reference multipl schema docs.
Doug: Thses would be included (via includes) in the WSDL.
Ram: Are we restricting this to one artefact?
Dooug: Short answer - no restriction, but guiadance to wrap these based on the feature type.
There was a short detour into Paul's concerns.
Doug: The same pattern may apply to Paul's issues.
Asir: Asir believes that there is a different pattern.
Look at the transfer assertion, a QName only.
<dug> <wst:Resource ...> xs:QName </wst:Resource>
Doug: This QName referes the type.
<Ram> ping
<Ram> Asir: There is a resource policy assertion param, that indicates the QName.
<Ram> Asir: Do you want to use the same pattern?
Doug: the problem is not really knowing which element in the collection would refer to the resource type.
Asir: Can we use the same pattern in both both?
Doug: Feels they are different,
because in the Ws-T case we don't know which is refeing to the
resources.
... Basically, each feature would have it's own WSDL.
Asir: The same for other
features, e.g. frag's dialect assertion.
... Would like to study more.So on to the next bit.
Status:
- Need a list of assetions where this applies?
- Need to identify the assertions where extenibility need to be addressed.
- Does this apply to Paul's issue?
Doug: For the first, we can do it for the specs we control, but other should be encouraged to follow the pattern.
It is not clear what the question about WS-T means.
- We need to make the ecommendation to other users explicit.
Example walk through:
See WSDL+EventSource.
This shows how to assert that Eventing is supported.
See: Next section
This one defines event description metadata.
This one shows how to add WSDL.
This example may help with the WSDD discussion.
Ram: This does seem to work, but ....
Asir: The mail suggestes that it needs to be more abstract
Doug: E.g. in the PortType.
Bob: Assuming that we are happy with this approach, then the WSDD might be solved by moving this to the PortType.
Next Example:
This one inlines a specific WS-Eventing WSDL document.
This one provides a specific (different) address for events.
Next Example:
Sam as above, but links to metadat rather tan inlined.
Two approaches are shown.
Asir: These are new features:
Doug: Include new feature for Mex (two flavours)
There seems to be a slight difference from the pattern in the Mex document.
Asir: Are the semantics dependent onthe semantics of the parrent?
Doug: MEX does not say anything specific.
So this association may move the semantics, e.g use an include here rather than inlining.
Asir: Why not add a new context free element?
Doug: Do we add new elements to Mex.
Asir: No, use new generic elements.
Doug: Good point, don't use the element from eslewhere.
Asir: Sounds good.
See example Metadat in an EPR
As before the exaples forllow a progressing addition of more complexity.
Review the Summary section:
The EPR discussion may be ssociated to another issue, but not critical here.
Consider droping this issue here and deal with it under the other issue.
Need to check carefully which issues are closed by this approach.
See section "One of the nice things ...."
All metadata could appear in one place.
<Tom_Rutt> which bullet?
See beginning of the document.
By using the ?wsdl (or mex.GetMetadata("wsdl") mapping is very easy to explain.
Asir: Currently their is no equivalent to ?wsdl.
Gil: The application gets to descide what "wsdl" means.
It is unclear what ?wsdl means, so it make this dificult to define.
Gil: The app developer could
create a single document (mostly like ?wsdl).
... Get Metedata can return a lot of stuff. get wsdl would return just one wsdl.
Doug: A get wesdl operation that colsely follows "wsdl pattern.
s./"wsdl/?wsdl/
Jeff: What do I need to know to use the bag of stuf.
Doug: this proposal aim to address this problem.
Jeff: The get wsdl at least we know the schema of what comes back.
A diversion about giving up on GetMetadat entirely.
<MartinC> isnt that a meta-meta model jeff
Asir: Some advanced tools can deal with less information.
Doug: Let's do this and talk about some other details later.
Jeff: the metamodel of WSDL is available.
Bob: Are we agreed that this is a good road to fiollow?
Doug: Next step is to make the Mex and Eventing spec cahnges.
Asir: Why don't we make Doug do them all?
Bob: Only if Asir is really happy with the approach.
Asir: There are some bullets that need addressing. Doug should go for all of them in all the pecs and let's push forward.
Specific on the new elements.
Doug: Wants agreement generall and will attack the spec ASAP.
Resloution: We have directional agreement based on the 5 bullets avove.
Fresh names.
get WSDL.
Postpone Paul's issue when we get there.
List of assertions.
Recommendation to other spec writers
Policy assertion on the porttype.
<Tom_Rutt> what is next?
Since Martin is here 7986 will be next.
Break to 10:50 PST
<Tom_Rutt> Do we know when will 8196 and 8229 (namespace issues) be discussed?
<Bob> tom, sometime this afternoon I hope
Resumming.
Toipc: 7986
<Bob> s/Topic
How does the EventSink know the available policies available?
Gil: An event source can send notification.
The sink can say, please do it this way.
This can be placed in the notify EPR.
How does the sink know what approaches are likely to work.
Doug: If the source can publish notification wsdls this is a sloution
But this is not required, e.g. it uses event descriptions.
No concrete proposal yet.
DaceS: Is this really needed?
Doug: Forcing use of not. wsdl or fault driven discovery is not good enough.
Without Not. WSDL where is thin information available.
Doug: Because Not. WSDL is not required, but for this bit of information to be available, this is the only way at present.
Asir; This is providing policy for the next level of interaction. Similar to Paul's request.
Doug: Proposal to define another policy parameter that could be used in a notify to EPR.
Gil: Identify all the available poilcies and provide only the ID.
Doug: Is this possible?
Asir: You can used named policy
assertions.
... You can name the policy..
A list of experssion can each have a name, but an expression with alternative won't fly.
Resoultion: This seems a reasonable (if not very tasty) direction.
<dug>
Doug: Should be covered by the MOAP proposal.
The support for that resolution being allowed on the portType as well.
Fred Maciel Joins
Gil: A portType now could include binding specific policy.
This is not really a good idea.
An advisory could probably be added, here.
Asir: Accouring to Anton there is something about the associatin cardinality rules.
WU; We know how to do the bindings, but just need the abstract information (ie the association.
Doug: This should be possible at
the portType level.
... It may be silly to put binding info in the portType, but so what. It could also be profilesd.
Gil: It seems that we might be agreeing. The parameters are important.
they have WSDLs, therefore you have to support it
Wu: You only do the matching at the assertion level. The parameters provide further information.
Gil: These parameters are surely important.
Wu: It is untamately no proble to provide binding information in both places.
Resolution: This one should be closed along with MOAP
<Bob>
Gil: What can we say to push developer toward a recommended dialect.
Shold we point to XPath 1.0
<Bob> Support fort he XPath 1.0 dialect (described below) is RECOMMENDED. Alternate filter
<Bob> dialects can be defined. Such dialect definitions MUST include sufficient
<Bob> information for proper application.
Resolved: 8275 as indicated by Bob above.
Gil: Is this implying that filter is recommended.
<Bob> If filtering is supported, then support for the XPath 1.0 dialect (described below) is RECOMMENDED. Alternate filter dialects can be defined. Such dialect definitions MUST include sufficient information for proper application.
Resolution: 8275 as above.
Doug: Does this apply to Enum as
well?
... There is a place for this in Enum.
Reslolution: 8275 can aslo apply to Enumeration.
Gil: There is a proposal and it's a good one.
Resolved: As proposed modifiec by comment #1 (8288)
Ram: there is a dependency between this one and 8302
s/8308/8302/
Ram: These look like the same issue.
<dug>
Ram: Targets two messages (Create/Put Response). Have a look at the text changes.
The curx is that there is something unclear about what happen with extension elements.
Doug: Works for no extension element, but what about when they are there?
Ram: Today the spec requires that you need to have the representation.
Asir: Spec the rep comes first the extensions.
s.say/Spec/
Ram: In the spec, the response says the rep. must be there. Same in Create and Put.
Doug: Likes the Put Response wording.
Gil: Put response sounds good until the last sentence. (assuming mutating ....)
Ram: What this means is that there may be other changes made concurrently.
Gil: this may confuse some implementations.
Yves: This is good to say.
Maybe a separete sentence could be better.
Text requested:
Ram is mailing the text to the mailing list.
Gil: Does this work with 8299 as well as 8302?
Doug: 8180 looks like a part of this as well?>
<wchou> Iq-
<Ram> Remove the sentence "This second child element, if present, MAY be an empty element depending on the resource representation." from the above proposal from CreateResponse.
Gil: 8299 is addressed for put, but this may not be as clear for create response.
Ram: First is the EPR then the representation.
Gil: How do we know that?
The porblem also exists in 8180
Doug: Do we just drop thie extensibility in responses`?
We will do these two tomorrow.
Break for one hour while we tweak the text.
<Zakim> asir, you wanted to ask an orthogonal question
<Bob> we resume
<Ram> ping
<Ram> scribenick Ram
<Ram> Continuing discussion on issue 8302 and 8299
<Ram> Continuing discussion on issue 8302 and 8299
<Bob> proposal at
<Tom_Rutt> gettin started yet?
<Bob> we resume
<Ram> Gil: Presents proposal
<Bob> continuing on 8302
<Ram> Dave: Does an extension in the request result in a subsequent extension in the response.
<Ram> Gil: Should the resource manager create a representation that differs from what was supplied.
<Ram> Doug: The current text was intended to create a resource that is close enough to what was supplied.
<Ram> Gil: The current text does not reflect that.
<Ram> Dug: I am wondering if we should take a step back and approach this differently.
<Ram> Dug: A Resource Manager can change the resource anytime.
<Ram> Dug: It may be useful to put a paragraph that states that above.
<Ram> Dug: The other thing to do is, to look at the problem, and say the bare minimum.
<Ram> Dug: The extensibility seems more important. But should the resource representation be returned?
<Ram> Dug: Should the representation allow for explicit empty element.
<Ram> Ram: Should we discuss 8180?
<Ram> Bob: Park this discussion for the moment.
<Ram> TOPIC issue 8180
<Ram> Proposal:
<Bob> proposal at
<Ram> Gil: Should you return the resource representation?
<Ram> Bob: Sending back a large representation as a result of a small change may not be useful.
<dug> <wst:Resource/>
<Ram> Ram: It may be useful to have empty representation form for Put and Create requests to null out the resource.
<dug> <wst:Resource/>
<Ram> This empty resource must appear on all operations..
<Ram> Then remove returning the resource representation from CreateResponse and PutResponse.
<Tom_Rutt> can the scribe summarize?
<dug> 1) if the representation is in a msg it must be wrapped with <wst:Resource/> no matter what operation we're talking about.
<Ram> Ram: Having the representation in PutResponse and CreateResponse is complicated.
<Ram> Gil: The resource may change change inadvertantly?
<Bob> ack \
<Ram>.
<Ram> Tom:I am in favor of getting rid of the optimization.
<Ram> Tom:I am in favor of getting rid of the optimization.
<Ram> Bob: Are folks OK with adding a wst:Resource element?
<Ram> This allows distinguishing the representation from extension elements.
<Tom_Rutt> Clarification of my question: Is this the semantics of put: "DO WHAT YOU CAN, GIVE A SUCCESS RESPONSE IF YOU CAN DO ANY OF IT, THEN LET THEM DO A GET TO FIND OUT WHAT REALLY HAPPENED?
<Ram>> Ram: How does the client know if the service created the resource as requested or not?
<Ram> Doug: The service cannot guarantee at all.
<Ram> Bob: 8299, 8302, and 8180 are woven together.
<Ram> Bob: Create a comprehensive proposal for all three issues.
<Ram> Dave: I have some reservation about adding <wst:Resource/>.
<Ram> Dug: I would rather have a model that all cases including extensions.
<Ram> Criteria to consider:
<Ram> 1. Allow extensions.
Lightweight
<Ram> 2. Diambiguate the representation <rep><A/></rep> vs <rep/>
<Ram> 3. The resource manager may do more to the created/updated resource than create.
<Ram> 4. Ram: Investigate if there needs to be an resource representation in the CreateResponse and PutResponse.
<Ram> Break for 15 minutes. Resume @ 30:30pm PT.
<Ram> AI: Gil has the AI to prepare proposal for 8299, 8302, and 8180.
<Ram> Bob: The session resumes after the break.
<dug>
<dug> "For example, it would need to define the context (which data) over which the filter operates."
<Ram> Dug: In our zeal to remove advertisement to XPath 1.0, we removed the above sentence.
<Ram> The above amendment to 8275 was accepted without objections.
<Tom_Rutt> tom rutt as youth:
<Bob> ACTION: Gil has the AI to prepare proposal for 8299, 8302, and 8180. [recorded in]
<trackbot> Sorry, couldn't find user - Gil
].
<Ram> Bob: Is 7791 soaked up by MOAP proposal?
<Ram> Dug: Yes, it should mostly.
<Ram> TOPIC issue 7774
<Ram> Proposal:
<Yves>
<Yves>
<Ram> Bob: Discussions?
<Ram> Yves: Looks good.
<Ram> Bob: Any objections?
<Ram> Issue 7774 resolved with above proposal.
<Ram> TOPIC issue 8181
<dug>
<Ram> Issue 8181 resolved as proposed.
<Ram> TOPIC issue 8182
<Ram> Gil: Someone convinced me that this was not a good idea.
<Ram> Dug: Suggest closing with no action since this is addressed.
<Ram> Gil: This is not the case where you can look at the request is not valid.
<Ram> Gil: It is the case where it is only invalid because of the state of the resource at the point of the Put request.
<Ram> Gil: Is there a need for a special fault in this case?
<Ram> Bob: Any objections to close with no action?
<Ram> Issue 8182 closed with no action.
<Ram> TOPIC issue 8191
<Ram> Bob: Gil, what kind of serialization rules are necessary.
<Ram> Asir: What serialization rules are being asked for?
<Ram> Doug: This is NOT an editorial change.
<Ram> Gil: There is no clarity on the WS-Fragment about how the serialization work with Put.
<Ram> Dug: You can construct an XPath that points to multiple things.
<Ram> Ram: How does this relate to the request from the XML Security WG about seperating out the XPath expressions from the core protocol?
<Ram> Gil: Can the Expression point to multiple nodes?
<Ram> Bob: The purpose of XPath is essentially to address a point/location.
<Ram> Bob: What can you do to that point is the question.
<Ram> Bob: Are you going to change all the attributes in a single Fragment Put is a question?
<Ram> Bob: It can get complex.
<Ram> Bob: It can get complex.
<Ram> Dug: The WS-Fragment says for XPath 1.0: "An XPath 1.0 expression MAY evaluate to multiple nodes; because of this the XPath 1.0 language MUST NOT be used with a "Put" or "Create" operation."
<Ram> Gil: A clarification on the Put is sufficient.
<Ram> ACTION ITEM Doug: Work with Gil on proposal.
<trackbot> Sorry, couldn't find user - ITEM
<Ram> TOPIC 8185
<Ashok> ACTION: Dug to work with Gil on proposal for 8191 [recorded in]
<trackbot> Created ACTION-139 - Work with Gil on proposal for 8191 [on Doug Davis - due 2010-02-02].
<Ram> Ram: This related to 8193.
<Ram> Ram: The specification does not talk about inserting attribute nodes.
<Tom_Rutt> hai: transport ack not application ack
<Ram> Bob: We need some concrete text.
<Ram> ACTION: Gil to Produce concrete proposal. [recorded in]
<trackbot> Sorry, couldn't find user - Gil
<Ram> ACTION: gpilz to Produce a concrete proposal. [recorded in]
<trackbot> Created ACTION-140 - Produce a concrete proposal. [on Gilbert Pilz - due 2010-02-02].
<Ram> TOPIC 8193
<dug>
<Ram>
<Ram> Ashok: We are going through these many difficult cases. We ought to remember that there is this specification called XQuery update which does all these. The XQuery has details on how to do the XML update.
<Ram> Dug: Having an attribute to indicate Insert instead of Replace would work for me.
<Tom_Rutt> I am checking out for the day, hear you tomorrow morning
<Ram> Bob: Why can't you use an Insert for non-array nodes?
<Ashok> XQuery update language:
<Ram> Doug: In XPath can you do a nodename/*?
<Ram> Doug: Can you insert an first child?
<Ram> Bob: If the child count is zero, an insert will introduce the first child element.
<Ram> Bob: Insert and Replace has distinct semantics.
<Ram> Doug: Would it work in the case of attributes?
<Ram> Bob: For replacement, you need to know the specific attribute.
<Ram> Bob: I can write a conceptual outline.
<Ram> ACTION Bob to produce a conceptual model
<trackbot> Created ACTION-141 - Produce a conceptual model [on Bob Natale - due 2010-02-02].
<Ashok> ]
<Ram> Conversation on XPath semantics ensuing.
<Ram> XPath allows you to point to a location. Insert and Replace allows you to manipulate data at that location or around it.
<Ram> Break until 4:20pm PT
<Ram> Session resumes.
<Ram> TOPIC 8257
<Ram>
<Ram> Bob: Any objections to resolve as proposed?
<Ram> Issue 8257 resolved as proposed.
<Ram> TOPIC 8258
<Ram> Bob: Any objections to removing Create from "XPath 1.0 language MUST NOT be used with a "Put" or "Create" operation"?
<Ram> Bob: Isn't it true that the ability to support multiple nodes dependent on the expression languages.
<Ram>> Ram: It is important to separate the protocol semantics from expression language semantics.
<Ram> Ram: The protocol should NOT disallow using sophisticated expression languages.
<Ram> Ram: The specification should define at least one usable expression language.
<Ram> TOPIC 8284
<Bob>
<Bob> q JeffM
<Ram> Ram describes his dissertation.
<Ram> Gil: When implementation/tools point to WSDLs, what do I need to do a reasonable event sink?
<Ram> Gil: I want to implement that derivations of the WS-Eventing WSDL is BP compliant.
<Ram> Gil: So the implementations are interoperable.
<Ram> Wu: I don't think we can limit WS-Eventing binding only to HTTP or SOAP 1.1.
<Ram> Correction: I don't think we can limit WS-Eventing binding only to HTTP or SOAP 1.1.
<Ram> Wu: We should follow WS-Addressing and WS-Policy and follow their lead.
<Ram> Wu: It is beyond scope of what we are attempting to do here.
<Ram> Jeff: If we leave it to people's good judgements, why do we need standards?
<Ram> WSDL 1.1 is NOT a standard.
<Ram> BP 1.1 is an ISO standard.
<Ram> Jeff: We want to say WSDL 1.1 as modified by BP 1.1.
<Ram> Jeff: It is not to restrict what is allowed, but to ensure interoperability.
<Ram> Wu: Why doesn't anyone complain about WS-Policy?
<Ashok> WSDL 1.1 W3C Note 15 March 2001
<Ram> Asir: BP 1.2 and 2.0 when it becomes an Final Material can be useful but NOT Basic Profile 1.1.
<Ram> Jeff: Should we defer until BP 2.0 is done.
<Ram> Bob: You could do that.
<Ram> Bob: Should we wait until BP 1.2 and 2.0 become Final Material?
<Ram> Bob: We can safely defer this until after Last Call.
<Ram> Jeff: Could we put some text in the specification to clarify use of WSDL 1.1?
<Ram> Doug: Could we have a section in BP 1.2 and 2.0 that pertains specifically to fixing WSDL 1.1 issues only?
<Ram> Asir: Should we consider closing this without prejudice and reopen it later?
<Ram> Jeff: No we cannot close this. We should defer.
<Ram> Bob: Should we defer this until after Last Call at least.
<Ram> Bob: At least until CR.
<Ram> Jeff: If we can come up with a proposal, we should consider resolving it.
<Ram> Asir: What are the trip wires to process this issue?
<Ram> Bob: 1) A new proposal 2) BP 1.2 and 2.0 are Final Material 3) We reach Candidate Recommendation and we decide what to do.
<Ram> 8284 is an Last Call issue.
<Ram> Meeting is adjourned for the day.
This is scribe.perl Revision: 1.135 of Date: 2009/03/02 03:52:20 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/patter/pattern/ Succeeded: s/plces/both/ Succeeded: s/forother/for other/ Succeeded: s/identift/identify/ Succeeded: s/gack/back/ WARNING: Bad s/// command: s/Toipc Succeeded: s/Toipc/Topic/ Succeeded: s/thsi si/this is/ Succeeded: s/publich/publish/ Succeeded: s/not/Not./ Succeeded: s/name the policy assertion/name the policy./ Succeeded: s/MOAF/MOAP/ Succeeded: s/ti/it/ FAILED: s/8308/8302/ Succeeded: s/8303/8302/ Succeeded: s/Say/Spec/ Succeeded: s/This empty resource may appear on all operations/This empty resource must appear on all operations./ Succeeded: s/1.2/1.1/ No ScribeNick specified. Guessing ScribeNick: DaveS Inferring Scribes: DaveS WARNING: Replacing list of attendees. Old list: vikas [Fujitsu] +39.331.574.aaaa Tom_Rutt asoldano +1.908.696.aabb Li +03531803aacc MartinC New list: [Fujitsu] MartinC Tom_Rutt vikas Default Present: [Fujitsu], MartinC, Tom_Rutt, vikas Present: [Fujitsu] MartinC Tom_Rutt vikas Agenda: WARNING: No meeting chair found! You should specify the meeting chair like this: <dbooth> Chair: dbooth Found Date: 26 Jan 2010 Guessing minutes URL: People with action items: doug dug gil gilbert gpilz WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output] | http://www.w3.org/2010/01/26-ws-ra-minutes.html | CC-MAIN-2015-27 | refinedweb | 4,013 | 69.18 |
Date: 2004-12-08T07:06:28
Editor: AleksanderSlominski <aslom@cs.indiana.edu>
Wiki: Apache Web Services Wiki
Page: ChatAgenda/20041208/ChatLog
URL:
New Page:
{{{
[12/8/2004 8:56 AM] =-= Topic for #apache-axis is ``Apache Axis Web service Framework - WSDL
WSDD SOAP XMLRPC''
[12/8/2004 8:56 AM] =-= Topic for #apache-axis was set by FR^2 on Monday, December 06, 2004
6:58:53 AM
[12/8/2004 8:59 AM] <gdaniels> good $TIME_OF_DAY, folks!
[12/8/2004 9:00 AM] <alek_s> good $DAY_OR_NIGHT :)
[12/8/2004 9:00 AM] <Srinath> Hi glen
[12/8/2004 9:00 AM] <Deepal> Hi all
[12/8/2004 9:00 AM] <Ajith> hi all
[12/8/2004 9:00 AM] <Srinath> good morning/night :)
[12/8/2004 9:01 AM] <Harsha> Hi All
[12/8/2004 9:01 AM] <Srinath> what is sheduled for today?
[12/8/2004 9:01 AM] <Srinath> M1 ?
[12/8/2004 9:01 AM] =-= gdaniels has changed the topic to ``Weekly Axis2 chat''
[12/8/2004 9:01 AM] <gdaniels> Hey, why isn't the chat info on the front page of the
Wiki anymore?
[12/8/2004 9:02 AM] <Ajith> WSDL and deployment
[12/8/2004 9:02 AM] <Srinath> let me see what happen to front page
[12/8/2004 9:02 AM] <Ajith> it is !
[12/8/2004 9:02 AM] <Ajith> aaah the link is missing :)
[12/8/2004 9:02 AM] -->| chathura (~chathura@203.94.95.82) has joined #apache-axis
[12/8/2004 9:02 AM] <Ajith> BTW very small crowd
[12/8/2004 9:02 AM] <Ajith> Hi Chathura
[12/8/2004 9:02 AM] <chathura> hi all
[12/8/2004 9:03 AM] <chathura> hi ajith
[12/8/2004 9:03 AM] <Srinath> the chat info is at
[12/8/2004 9:03 AM] <gdaniels> Small crowd often means good ability to get things done
:)
[12/8/2004 9:03 AM] <gdaniels> Why is it there?
[12/8/2004 9:03 AM] <Ajith> :)
[12/8/2004 9:03 AM] <chathura> :)
[12/8/2004 9:03 AM] <Deepal> :)
[12/8/2004 9:03 AM] <gdaniels> It's not about people, it's about the development process,
should definitely be on the first page IMO.
[12/8/2004 9:04 AM] <Srinath> will fix it
[12/8/2004 9:04 AM] <alek_s>
[12/8/2004 9:05 AM] <gdaniels> I'd like to talk about the ServiceDesc thread
[12/8/2004 9:05 AM] <Srinath> me too
[12/8/2004 9:05 AM] <Srinath> there is a mail regarding that
[12/8/2004 9:05 AM] <chathura> +1:)
[12/8/2004 9:05 AM] <Srinath> let me find subject
[12/8/2004 9:05 AM] <gdaniels> yes, hence "thread"
[12/8/2004 9:06 AM] <gdaniels> Using the WSDLService in the registry[continuingRE: [Axis2][Engine]Step
by Step;Engine, Engine registry deployment and the Phase Resolver]
[12/8/2004 9:06 AM] <Srinath> mail subjectr:Using the WSDLService in the registry[continuingRE:
[Axis2][Engine]Step by Step;Engine, Engine registry deployment and the Phase Resolver]
[12/8/2004 9:06 AM] <gdaniels> :)
[12/8/2004 9:06 AM] -->| sanjiva (~sanjiva@203.94.84.117) has joined #apache-axis
[12/8/2004 9:06 AM] <Srinath> yes glen :)
[12/8/2004 9:06 AM] <sanjiva> hi yall sorry to be late
[12/8/2004 9:06 AM] <Harsha> Hi Sanjiva
[12/8/2004 9:07 AM] <gdaniels> That's OK, Sanjiva, we assigned you the AI of finishing
the code for us by next week.
[12/8/2004 9:07 AM] <gdaniels> Hope that's OK! :)
[12/8/2004 9:07 AM] <sanjiva> ah sure! :-)
[12/8/2004 9:07 AM] <Srinath> +1
[12/8/2004 9:07 AM] <Srinath> :D
[12/8/2004 9:07 AM] <gdaniels> So we're about to discuss ServiceDesc, it seems
[12/8/2004 9:07 AM] <Deepal> yep.
[12/8/2004 9:07 AM] <gdaniels> At the F2F we talked about using the WSDL structure to
hang our metadata off of.
[12/8/2004 9:07 AM] <Ajith> chathura?
[12/8/2004 9:08 AM] <gdaniels> I'm wondering why we wouldn't want to do that still.
[12/8/2004 9:08 AM] <gdaniels> (I used to hate that idea, but I'm liking it more and
more)
[12/8/2004 9:08 AM] <Ajith> mmm wouldn't that corrupt the WOM
[12/8/2004 9:08 AM] <Ajith> ?
[12/8/2004 9:08 AM] <gdaniels> not at all
[12/8/2004 9:08 AM] <chathura> no ajith
[12/8/2004 9:08 AM] <chathura> the reason was
[12/8/2004 9:08 AM] <Srinath> glen that means there is no metadata .. and we populate
info to WSDLService?
[12/8/2004 9:08 AM] <gdaniels> WSDL is extensible, first of all. Also, the WSDL model
we use should be even more extensible (stuff can hang off it that isn't necessarily serialized)
[12/8/2004 9:08 AM] <sanjiva> Ah yes .. let's talk about that .. I think we are still
doing that .. so let's assume we will introduce a "userData" property to every type of node
in the WSDL component model
[12/8/2004 9:08 AM] <chathura> we wanted give a better api for the engine programmer
[12/8/2004 9:09 AM] <sanjiva> let that be a hashtable
[12/8/2004 9:09 AM] <gdaniels> Srinath - of course there's metadata!
[12/8/2004 9:09 AM] <gdaniels> We love metadata
[12/8/2004 9:09 AM] <gdaniels> It's just where it lives, and on what kind of objects
[12/8/2004 9:09 AM] <sanjiva> then we can put whatever stuff we want .. like per operation
specific modules or whatever
[12/8/2004 9:09 AM] <gdaniels> +1 Sanjiva
[12/8/2004 9:09 AM] <gdaniels> That's the general idea
[12/8/2004 9:10 AM] <Srinath> +1
[12/8/2004 9:10 AM] <gdaniels> then if there are specific things we want to bake in,
we can put those in outside the userData part
[12/8/2004 9:10 AM] <sanjiva> then the only "ugliness" is that to get say the inFlowChain
for an operation, you'd need to do:
[12/8/2004 9:10 AM] <gdaniels> if you look at the 1.X ServiceDesc classes, a lot of
that info is the exact same info you need in a WSDL model
[12/8/2004 9:10 AM] <sanjiva> service.operation.getUserData().get("inFlowChain") ..
[12/8/2004 9:10 AM] <sanjiva> the only reason I'm for keeping the ServiceDesc class
is to make that call look like OperationDesc.getInFlowChain ()
[12/8/2004 9:10 AM] <gdaniels> Sanjiva: we could also do this:
[12/8/2004 9:11 AM] <gdaniels> class AxisOperation extends WSDLOperation {
[12/8/2004 9:11 AM] <sanjiva> undernearth it should still be doing that call .. going
to the WSDL model and pulling the data out
[12/8/2004 9:11 AM] <gdaniels> public InFlowChain getInFlowChain();
[12/8/2004 9:11 AM] <gdaniels> }
[12/8/2004 9:11 AM] <gdaniels> etc
[12/8/2004 9:11 AM] <sanjiva> ah that's a better plan!
[12/8/2004 9:11 AM] <gdaniels> lots of ways to skin this cat
[12/8/2004 9:11 AM] <sanjiva> So instead of OperationDesc we have AxisOperation .. I
like it!
[12/8/2004 9:12 AM] <sanjiva> we'd still need to cast the type I guess to get to AxisOperation
(as Interface.getOperation(qname) will return an Operation) but that's ok
[12/8/2004 9:12 AM] <Srinath> yes the additional info can be stroed like e.g. HashTable
properties .. but we can wrap can give nice API by ServiceDesc
[12/8/2004 9:12 AM] <gdaniels> we need to think about what kinds of stuff we put in
there, and how/if it affects the WSDL serialization, but I think it can work
[12/8/2004 9:12 AM] <Srinath> 1)Handlers 2)properties
[12/8/2004 9:12 AM] <Srinath> ect ..
[12/8/2004 9:13 AM] <chathura> so this is basically a wrapper am i right
[12/8/2004 9:13 AM] <gdaniels> Well here's a question. Where does the info about the
back-end implementation class go?
[12/8/2004 9:13 AM] <chathura> a wrapper that will wrap the component model
[12/8/2004 9:13 AM] <gdaniels> chathura: Yup
[12/8/2004 9:13 AM] <Deepal> Handler mean not only the executable handler ?
[12/8/2004 9:13 AM] <sanjiva> Yes at one point I was proposing that we use the WSDL
syntax itself instead of our own service.wsdd, but I changed my mind after convincing myself
that the WSDL syntax would be very painful for users
[12/8/2004 9:13 AM] <Deepal> it should also keep pahse rules etc...
[12/8/2004 9:14 AM] <sanjiva> Glen: to me the impl class stuff is an endpoint .. so
it goes in the Endpoint object
[12/8/2004 9:14 AM] <alek_s> Just a comment: i think it would be good to consider more
type safe to cast metadata into Java interfaces
[12/8/2004 9:14 AM] <Srinath> glen e.g. that back end info go in to the WSDLService
as a property .. but it is nicley wrapped by ServiceDesc
[12/8/2004 9:14 AM] <gdaniels> I think it's more appropriate in the service, actually,
snajiva
[12/8/2004 9:14 AM] <gdaniels> s/snajiva/sanjiva/
[12/8/2004 9:15 AM] <gdaniels> different endpoints are just different ways to get to
the "same thing"
[12/8/2004 9:15 AM] <sanjiva> Hmmm. Maybe I snafooed. Hmm. But wait, that's not consistent
with soap endpoints for example ..
[12/8/2004 9:15 AM] <alek_s> something like: FlowChainMetada fcmd = (FlowChainMetada)
service.operation.getUserDataCast(FlowChainMetada.class); fcmd.doWhatever()
[12/8/2004 9:16 AM] <Deepal> alke : if you are refer to excution cahin
[12/8/2004 9:16 AM] <chathura> hmm my idea was it should go into the service ....
[12/8/2004 9:17 AM] <Deepal> we will do somthing smilar to thet
[12/8/2004 9:17 AM] <Deepal> that
[12/8/2004 9:17 AM] <alek_s> i do nto like hashtables annd mahoc strings constants -
it creates second kind of API that is very fragile
[12/8/2004 9:18 AM] <gdaniels> alek: I don't agree that it's fragile
[12/8/2004 9:18 AM] <sanjiva> So now this is going back to what I was proposing back
at the F2F ... treat a service as a WSDL binding (ala WSIF).
[12/8/2004 9:18 AM] <gdaniels> I think that kind of thing is VERY useful for loosely
coupled and extensible portions of a system
[12/8/2004 9:18 AM] <sanjiva> Alek: if the string key for the hashtable is burnt into
a subclass its not fragile right?
[12/8/2004 9:18 AM] <Srinath> glemn, Sir can u explained bit more on endpoint?
[12/8/2004 9:18 AM] <Srinath> sorry glen
[12/8/2004 9:18 AM] <sanjiva> let's finish the fragility thread first :)
[12/8/2004 9:18 AM] <alek_s> Glen: it is fragile as it is not checked during compiel
time
[12/8/2004 9:19 AM] <sanjiva> things get really fragile if we start having multiple
threads
[12/8/2004 9:19 AM] <alek_s> Sanjiva: yes - you can add wrappers and they hide problems
[12/8/2004 9:19 AM] <sanjiva> Alek: How do you make it compile time checked? The WSDL
model is a constant .. we can't put Axis2 specific stuff in there; all we can put are hooks
[12/8/2004 9:20 AM] <Srinath> or properties
[12/8/2004 9:20 AM] <gdaniels> alek: Extensibility means to some extent a lack of compile
time checking. I think that's OK. You can mitigate it by providing a casting API, but I
tend not to love those.
[12/8/2004 9:21 AM] <sanjiva> Srinath: ??
[12/8/2004 9:21 AM] <alek_s> Glen: i am more concerned about having it somewhat under
control
[12/8/2004 9:21 AM] <Srinath> means hooks = properties
[12/8/2004 9:22 AM] <alek_s> Glen: and i am talking abiut dynamic casts so it is also
extensible ...
[12/8/2004 9:22 AM] <chathura> well this is how i feel there are things that we shouldnt
change WSDL component based thing we did so i doubt we can make it compile time checked at
all unless
[12/8/2004 9:22 AM] <chathura> we come up witht a parellel object hierarchy
[12/8/2004 9:22 AM] <gdaniels> Hm. I think we need to ground this discussion in specific
examples to make sure we aren't talking around each other.
[12/8/2004 9:22 AM] <Srinath> sir, by "or properties" I means hooks=propertis
[12/8/2004 9:23 AM] <alek_s> Wouldnt it be better to have XML tree as place to hang
user data for extensions?
[12/8/2004 9:23 AM] <gdaniels> Alek, let's say I want to add a property at runtime to
a service, like I can do today with service.setOption()
[12/8/2004 9:24 AM] <gdaniels> Do you think we should take out the setOption(String,
Object)/getOption() APIs?
[12/8/2004 9:24 AM] <gdaniels> I would be STRONGLY -1 to that
[12/8/2004 9:24 AM] <gdaniels> msgContext.setProperty/getProperty is one of the major
design strengths of Axis, IMO
[12/8/2004 9:25 AM] <Srinath> glen:yes the setOptions shuould be there !
[12/8/2004 9:25 AM] <alek_s> i would add namespace to property name to avoid clashes
as minimum
[12/8/2004 9:25 AM] <alek_s> setOption(XmlNamesapce ns, String name, Object value)
[12/8/2004 9:25 AM] <gdaniels> Figuring out what should be type-checked actual APIs
and what should be generic Map-like APIs is an art, not a science
[12/8/2004 9:25 AM] <gdaniels> ew
[12/8/2004 9:25 AM] <Srinath> properties are strings is not they?
[12/8/2004 9:25 AM] <alek_s> Glen: this is an oppurtinity to try to make it better ...
[12/8/2004 9:27 AM] <Srinath> alek_s: what is the better option? I mean we have to choose
one at the end
[12/8/2004 9:27 AM] <alek_s> i think Glen is talking about WSDL proeprties not Java
properties
[12/8/2004 9:27 AM] <gdaniels> I'm talking about Java properties, I think...
[12/8/2004 9:27 AM] <gdaniels> i.e. not stuff that necessarily gets written into WSDL
xml
[12/8/2004 9:27 AM] <sanjiva> wait, let's make it more concrete ..
[12/8/2004 9:27 AM] <gdaniels> stuff that exists at runtime
[12/8/2004 9:27 AM] <Srinath> I think wsdl properties become java propertis
[12/8/2004 9:27 AM] <alek_s> key quesiton is how we document those properties in API
and then enforce that right one are used - they are like GOTO - they can show up anywhere
...
[12/8/2004 9:27 AM] <gdaniels> Yes, Srinath
[12/8/2004 9:28 AM] <sanjiva> org.apache.wsdl.WSDLOperation has a method called setProperty
(string, object) and a corresponding getProperty(); yes?
[12/8/2004 9:28 AM] <Srinath> +1
[12/8/2004 9:28 AM] <gdaniels> think so
[12/8/2004 9:28 AM] <chathura> yup
[12/8/2004 9:28 AM] <sanjiva> So Alek's concern is that that's an open invitation for
people to store whatever crap they want in the WSDLOperation, right Alek?
[12/8/2004 9:29 AM] <alek_s> yes and AXIS1 msgContext can contain anthing including
kitchen sink (Engine :))
[12/8/2004 9:29 AM] <sanjiva> and that we have no control over what gets stuffed in
there (nor any compile time checks)
[12/8/2004 9:29 AM] <alek_s> that makes it interestign challenge to serialzie it ...
[12/8/2004 9:29 AM] <alek_s> and for example to pause or persist handler chain
[12/8/2004 9:29 AM] <sanjiva> Yes I agree we need to control (by documentation at least)
what gets stored in message context, but let's focus on WSDLOperation for now
[12/8/2004 9:30 AM] <sanjiva> So, we have another option then - we can use WSDL's extensibility
stuff ..
[12/8/2004 9:30 AM] <Srinath> sir you mean checking the option is known one?
[12/8/2004 9:30 AM] <sanjiva> isntead of setProperty() etc., we can do "addChild (OmElement)
to add a child to the WSDLOperation ..
[12/8/2004 9:30 AM] <gdaniels> ew ew ew
[12/8/2004 9:30 AM] <gdaniels> -1 to that, sanjiva
[12/8/2004 9:30 AM] <sanjiva> I know ;-) ew from me too ...
[12/8/2004 9:31 AM] <alek_s> Sanjiva: so WSDL is new object model that is different
completely from XML Infoset?
[12/8/2004 9:31 AM] <alek_s> Sanjiav: +1 - that would be serializable and XML element
namespaces would protet against collisions
[12/8/2004 9:31 AM] <Deepal> can we add omelemnt to wsdl ?
[12/8/2004 9:31 AM] <Srinath> where the OM element comes in :)
[12/8/2004 9:31 AM] <gdaniels> WSDL object model != XML infoset, Alek, correct.
[12/8/2004 9:32 AM] <Srinath> WOM != OM !!
[12/8/2004 9:32 AM] <sanjiva> Well the problem is if Alek (or someone) wants to serialize
the WSDL object model then the random props can show up ..
[12/8/2004 9:32 AM] <gdaniels> There are two things going on here, and I think we should
separate them out.
[12/8/2004 9:32 AM] <sanjiva> yes I know Srinath but (I forgot too) there can be arbitrary
extensibility in a WSDL document ...
[12/8/2004 9:32 AM] <gdaniels> First, there is the idea of attaching runtime stuff to
the WSDL object model. Our own metadata, etc.
[12/8/2004 9:33 AM] <Srinath> :)
[12/8/2004 9:33 AM] <Srinath> best case we can check the property is it a know one when
we set
[12/8/2004 9:33 AM] <gdaniels> This is SEPARATE from the second thing, which is about
how our own extensions affect the WSDL serialization
[12/8/2004 9:33 AM] <gdaniels> I'm trying to talk about the first, and not the second,
right now.
[12/8/2004 9:33 AM] <sanjiva> +1 Glen ... but do we need to worry about "serializing"/persisting
the first case somehow via the WSDL OM?
[12/8/2004 9:34 AM] <sanjiva> (Yes let's not talk about the 2nd one for now .. that's
the one where we need to re-eval the WSDL OM and AXIOM relationship. Maybe next week.)
[12/8/2004 9:34 AM] <gdaniels> sanjiva: I'm not sure - I could see ways of doing it
where the answer is yes and other ways where the answer is no.
[12/8/2004 9:34 AM] <sanjiva> I think that's part of what concerns Alek .. if we somehow
merge 1 & 2 then you get persistence free
[12/8/2004 9:34 AM] <alek_s> about persitence: if OmElement implements NotXmlSerializable
it would be just skipped and essentially be only available runtime - disappear when serializaed
...
[12/8/2004 9:35 AM] <Deepal> I think it is better if we can talk what we want for the
M1 :)
[12/8/2004 9:35 AM] <alek_s> persistence is oneof fundamental concerns (liek security)
it affect everything
[12/8/2004 9:35 AM] <Srinath> Deeapl +1
[12/8/2004 9:35 AM] <alek_s> and everything is easier if you do not have to worry about
it
[12/8/2004 9:36 AM] <gdaniels> well there are at least two kinds of serialization too
[12/8/2004 9:36 AM] <sanjiva> well Alek let's remember that what we're talking about
is the *runtime* repreesntation of something that's already persistent ...
[12/8/2004 9:36 AM] <gdaniels> there is Java serialization and then WSDL serialization
[12/8/2004 9:36 AM] <Srinath> do we want to worry about the persistance of WSDLService
when read in .. are we lose something if we say it is readonly?
[12/8/2004 9:36 AM] <sanjiva> So its not really necessary to be able to store every
bit of runtime state we create!
[12/8/2004 9:36 AM] <gdaniels> even if for some reason we want to serialize and restore
the WSDLService to/from a database, that doesn't mean that our additional objects would change
the WSDL XML....
[12/8/2004 9:36 AM] <Srinath> Sir +1
[12/8/2004 9:36 AM] <gdaniels> they might be just internal Axis things
[12/8/2004 9:36 AM] <chathura> yup
[12/8/2004 9:37 AM] <alek_s> ok i think it is nto required for M1 - but later those
files may be created and modfied through API right ...
[12/8/2004 9:37 AM] <sanjiva> glen: +1
[12/8/2004 9:37 AM] <Srinath> we might go in to level that WSDLService is readonly
are we?
[12/8/2004 9:37 AM] <gdaniels> There is definitely a call to support things that DO
change the WSDL serialization - but not for M1
[12/8/2004 9:37 AM] <gdaniels> Srinath: readonly -1
[12/8/2004 9:37 AM] <sanjiva> Alek: yes agreed. But we can achieve that by having the
AxisOperation class do the right kind of serialization ..
[12/8/2004 9:37 AM] <gdaniels> You should be able to generate one dynamically if you
want
[12/8/2004 9:38 AM] <chathura> no think we need not provide for garenteed serialisability
for now ...at least
[12/8/2004 9:38 AM] <Srinath> glen: yes I accept .. but it might make things simple
.. anyway let us differ that for now:)
[12/8/2004 9:39 AM] <chathura> i was refering to the axis specific stuff btw
[12/8/2004 9:39 AM] <sanjiva> ok are we converging? shall I try to summarize (at the
risk of getting shot)
[12/8/2004 9:39 AM] <gdaniels> chathura: +1
[12/8/2004 9:39 AM] <gdaniels> sanjiva: go for it
[12/8/2004 9:39 AM] <Srinath> seems we are happy to have a hashmap:)
[12/8/2004 9:40 AM] <sanjiva> (1) we create org.apache.wsdl.* to be the WSDL object
model with an extensibility hook to store arbitrary runtime stuff via a hashmap
[12/8/2004 9:40 AM] <Srinath> +1
[12/8/2004 9:40 AM] <sanjiva> (2) we create o.a.axis.description.AxisOperation etc.
classes that extend the corresponding class in o.a.wsdl and provides a nice API to get/set
the stuff we care about for each WSDL component
[12/8/2004 9:41 AM] <gdaniels> that sounds good, but of course we need to see the actual
design
[12/8/2004 9:41 AM] <alek_s> Let be noted that i am opposed to use of (Hash)Map but
i am OK for now ...
[12/8/2004 9:41 AM] <sanjiva> (3) when a service is deployed the WSDLService structure
is created with all the additional properties etc.
[12/8/2004 9:41 AM] <gdaniels> noted, Alek
[12/8/2004 9:41 AM] <gdaniels> (yay minutes :))
[12/8/2004 9:41 AM] <sanjiva> (4) The registry stores AxisService etc.
[12/8/2004 9:41 AM] <sanjiva> (5) Alek raises a minority objection ;-)
[12/8/2004 9:42 AM] <chathura> :)
[12/8/2004 9:42 AM] <gdaniels> chat.setProperty("minorityObjection
[12/8/2004 9:42 AM] <Srinath> oh :D
[12/8/2004 9:42 AM] <gdaniels> ", aleksThing)
[12/8/2004 9:42 AM] <Deepal> :)
[12/8/2004 9:42 AM] <sanjiva> (6) The Module stuff is still independent .. we'lll prolly
still need a ModuleDesc type thing to store the module data in memory. Need to come up with
a consistent naming scheme to make them fit together nicely.
[12/8/2004 9:42 AM] <Srinath> seems that objection is not persistance and forgotten
by next chat :)
[12/8/2004 9:43 AM] <gdaniels> LOL Srinath
[12/8/2004 9:43 AM] <sanjiva> ;-)
[12/8/2004 9:43 AM] <sanjiva> So, are we rigid enough to go back to service vs. endpoint
to store the impl class etc.?
[12/8/2004 9:43 AM] <chathura> think so
[12/8/2004 9:43 AM] <Srinath> yes I think
[12/8/2004 9:44 AM] <gdaniels> Module stuff - sure, but modules have API from AxisOperation/AxisService/etc
[12/8/2004 9:44 AM] <Deepal> sir cant we keep that inside org.apache.descrption.
[12/8/2004 9:44 AM] <gdaniels> axisOperation.addModule()/getModules()/etc
[12/8/2004 9:44 AM] <sanjiva> Glen: yes, you can look them up etc. from AxisOperation
etc. ..
[12/8/2004 9:44 AM] <Deepal> I mean ModuleDesc
[12/8/2004 9:44 AM] <gdaniels> +1
[12/8/2004 9:44 AM] <sanjiva> Just that you need a ModuleDesc type thing which is clearly
not WSDL related
[12/8/2004 9:44 AM] <Srinath> +1 to keep them in discription
[12/8/2004 9:44 AM] <sanjiva> Actually before moving to the Service/Endpoint discussion,
let's talk about naming schemes ..
[12/8/2004 9:44 AM] <Srinath> it is a extenstion I think :)
[12/8/2004 9:45 AM] <sanjiva> WSDLOperation or WsdlOperation
[12/8/2004 9:45 AM] <Deepal> but what we are going to keep inside that pakage
[12/8/2004 9:45 AM] <Deepal> can be use by Deployemnt module too
[12/8/2004 9:45 AM] <gdaniels> sanjiva: I prefer WSDLOperation
[12/8/2004 9:45 AM] <alek_s> as you know i am for WsdlOperation it is for me easier
to read and more in JavaNaming spiriti :)
[12/8/2004 9:45 AM] <sanjiva> Alek prefers the latter .. I prefer the prior, but not
hard enought to lay down on the tracks for it
[12/8/2004 9:45 AM] <gdaniels> because WSDL is the common way people refer to it
[12/8/2004 9:45 AM] <gdaniels> AxisOperation rather than AXISOperation because that's
the common way people refer to it
[12/8/2004 9:45 AM] <chathura> hmm no need to refactor;)
[12/8/2004 9:45 AM] <sanjiva> So same for XMLPullParser too? What does StAX do BTW?
[12/8/2004 9:46 AM] <alek_s> StAXOMBuilderTest vs StaxOmBuilderTest
[12/8/2004 9:46 AM] <sanjiva> no that's our stuff .. I mean StAX proper
[12/8/2004 9:46 AM] <alek_s> Straming Api for XML
[12/8/2004 9:46 AM] <alek_s> but my favorite is: StAXSOAPModelBuilder ...
[12/8/2004 9:46 AM] <Srinath> Stax is written as StAx
[12/8/2004 9:46 AM] <alek_s> it looks lie XSOAP :)
[12/8/2004 9:47 AM] <sanjiva> Alek, I mean the name of the interface .. is it XMLPullParser
or XmlPullParser or something else?
[12/8/2004 9:47 AM] <alek_s> (XSOAP is my project)
[12/8/2004 9:47 AM] <sanjiva> I though XSUL was your project ;-) u have too many damned
projects that's the problem!
[12/8/2004 9:47 AM] <alek_s> in stax XML convention is used
[12/8/2004 9:47 AM] <alek_s> in XPP Xml convention is used
[12/8/2004 9:47 AM] <Deepal> XSUL ?
[12/8/2004 9:48 AM] <alek_s> all classes are XsulSomething :)
[12/8/2004 9:48 AM] <Srinath> can someone decode XSUL for us :D
[12/8/2004 9:48 AM] <alek_s> XSUL is XML Services "Utility Library"
[12/8/2004 9:49 AM] <alek_s> i no longer think that everything must be SOAPized ;-)
[12/8/2004 9:49 AM] <sanjiva> So XML convention is more the Java rule right? The rule
is if its an acronym you keep it that way .. IIRC
[12/8/2004 9:49 AM] <alek_s> (or RMIzed as it was SoapRMI - learning process ...)
[12/8/2004 9:50 AM] <alek_s> yes - it is prevailing convention based on assumption that
people actually think about XML as acronym and not a word
[12/8/2004 9:50 AM] <Srinath> means we go for WSDL?
[12/8/2004 9:50 AM] <sanjiva> ok shall we just take a vote on the list? We need to decide
.. and its purely subjective so no point wasiting time now I guess.
[12/8/2004 9:50 AM] <alek_s> uf you join more than two actonyms you GETREALHORRORS
[12/8/2004 9:51 AM] <sanjiva> Ah but those are not acryonims GetRealHorrors
[12/8/2004 9:51 AM] <alek_s> it seesm it would be another mojority objection anyway
...
[12/8/2004 9:51 AM] <chathura> :)
[12/8/2004 9:51 AM] <alek_s> GET - Graphical Envorment Task etc ;-)
[12/8/2004 9:51 AM] <sanjiva> ;-) you never know Alek, there's only like 3 people voting
now and Glen and I can attest to the fact that most decision in WSDL were made by the silent
majority
[12/8/2004 9:52 AM] <alek_s> democracy requires participation ...
[12/8/2004 9:52 AM] <gdaniels> I need to take off soon to attend some meeting-related
stuff downstairs. Would love to discuss the engine next week.
[12/8/2004 9:52 AM] <gdaniels> Handlers, phases, etc
[12/8/2004 9:52 AM] <sanjiva> ok, so can we switch topics then again? Store impl info
in WService or WEndpoint?
[12/8/2004 9:52 AM] <gdaniels> I'll put that in wiki
[12/8/2004 9:52 AM] <sanjiva> Glen: sounds good ..
[12/8/2004 9:52 AM] <gdaniels> impl info should be in service
[12/8/2004 9:52 AM] <gdaniels> imho
[12/8/2004 9:53 AM] <sanjiva> Well that's not consistent with WSDL tho Glen
[12/8/2004 9:53 AM] <gdaniels> WSDL doesn't talk about impls
[12/8/2004 9:53 AM] <sanjiva> WSDL doesn't put anything in Service .. only Endpoint
[12/8/2004 9:53 AM] <gdaniels> (does it?)
[12/8/2004 9:53 AM] <sanjiva> well endpoints are analogous to implementations .. "an
endpoint is where the service is available"
[12/8/2004 9:53 AM] <gdaniels> I don't see how that matters, Sanjiva
[12/8/2004 9:53 AM] <gdaniels> We're talking about building a transport-generic service
engine
[12/8/2004 9:53 AM] <sanjiva> do we want to leave room for the possibility of multiple/alternate
implementations for the same service .. down the road
[12/8/2004 9:54 AM] <gdaniels> I can attach a variety of transports (endpoints) to a
given service (service)
[12/8/2004 9:54 AM] <Srinath> service can stroe providers I think ..
[12/8/2004 9:54 AM] <sanjiva> glen: yes, but I'm talking about implementing the service
in Java vs in XSLT, for example
[12/8/2004 9:54 AM] <Srinath> providers will allow differant impls of service
[12/8/2004 9:54 AM] <gdaniels> sanjiva: So this is about providers
[12/8/2004 9:55 AM] <gdaniels> Srinath: yup
[12/8/2004 9:55 AM] <Srinath> :)
[12/8/2004 9:55 AM] <gdaniels> WE only have a Java impl class for certain kinds of services
[12/8/2004 9:55 AM] <gdaniels> the default kind, probably, but not all kinds
[12/8/2004 9:55 AM] <sanjiva> yes indeed .. impl and providers go together
[12/8/2004 9:55 AM] <sanjiva> yes but if we had Xalan integrated (for example) an XSLT
service is an easy thing ..
[12/8/2004 9:56 AM] <gdaniels> So a WSDLService should, IMO, tie to a single provider,
and that provider has various options
[12/8/2004 9:56 AM] <sanjiva> ok. maybe that's overengineering it .. I'm ok with using
Service and we can revisit if we find that to be a limitation in the future.
[12/8/2004 9:56 AM] <chathura> yup .. for m1
[12/8/2004 9:56 AM] <sanjiva> (no I meant even later future .. like Axis2 v2.x :-))
[12/8/2004 9:57 AM] <gdaniels> ok - on that note I'm gonna take off. Enjoy, everyone!
[12/8/2004 9:57 AM] <sanjiva> ok bye
[12/8/2004 9:57 AM] |<-- gdaniels has left irc.freenode.net ()
[12/8/2004 9:57 AM] <sanjiva> oops I have a call too in 2 mins; shall we call it a nite/morning/day?
[12/8/2004 9:57 AM] <chathura> :)
[12/8/2004 9:57 AM] <Harsha> What is the timeline for implmentation?
[12/8/2004 9:57 AM] <Srinath> Harsha dec mid/end
[12/8/2004 9:57 AM] <sanjiva> Chathura, what's your estimate of the WOM for the basic
stuff?
[12/8/2004 9:58 AM] <sanjiva> Harsha, you were looking into the WSDL stuff right? Maybe
you can help with AxisOperation etc. stuff?
[12/8/2004 9:58 AM] <chathura> within next week
[12/8/2004 9:58 AM] <chathura> i can safely say it ll be in god shape
[12/8/2004 9:58 AM] <Harsha> I can help with some development work.
[12/8/2004 9:58 AM] <Deepal> Chathura I can join with u if u want any help
[12/8/2004 9:59 AM] <Deepal> boz we need to intregtae Deployemnt and WSDL
[12/8/2004 9:59 AM] <sanjiva> How's teh WSDL reading stuf coming along? Are you implementing
it using StAX or what?
[12/8/2004 9:59 AM] <Srinath> we might try to integerate WOM prototype too
[12/8/2004 9:59 AM] <Harsha> I was looking at WSDl from a user perspective only. Don't
know much on the internals!
[12/8/2004 9:59 AM] <sanjiva> yes we should put all the pieces to the proto2 dir ASAP
[12/8/2004 10:00 AM] <chathura> ok this org.apache.axis.descrption . Axis* is what i
ll be working now
[12/8/2004 10:00 AM] <Deepal> Harsha : for the M1 deployment is already completed :)
[12/8/2004 10:00 AM] <chathura> wom stuff is ready to be shipped to prototype 2
[12/8/2004 10:00 AM] <Harsha> Deepal: ok:)
[12/8/2004 10:01 AM] <Srinath> Harsha I think we have areas that are not to spread in
WSDL that you can work in
[12/8/2004 10:01 AM] <Srinath> cahtura what do you think .. where the gharsha can hop
in?
[12/8/2004 10:01 AM] <Srinath> sorry means Harsha not g..
[12/8/2004 10:01 AM] <Harsha> Ok Let me know. Given a spec I can build something. I
am nto sure how the devlopment process works here>
[12/8/2004 10:02 AM] <chathura> harsha can you look into this org.apache.axis.description
thing
[12/8/2004 10:02 AM] <Harsha> Where do I start with that?
[12/8/2004 10:03 AM] <Deepal> I can provide bacis requirments from Deployment for the
org.apache.axis.description
[12/8/2004 10:03 AM] <chathura> you ll have to start with the wom
[12/8/2004 10:03 AM] <Srinath> think Harsha and chathura start taling .. I will quit
bye :)
[12/8/2004 10:03 AM] <Srinath> talkin
[12/8/2004 10:03 AM] <Deepal> I mean what I want to be there
[12/8/2004 10:03 AM] <Harsha> Bye Srinath
[12/8/2004 10:04 AM] <chathura> and extend it to provide the functionalities required
by the engine guys
[12/8/2004 10:04 AM] <alek_s> i will send chat log
[12/8/2004 10:04 AM] <alek_s> bye
[12/8/2004 10:04 AM] |<-- Srinath has left irc.freenode.net ("Client Exiting")
[12/8/2004 10:04 AM] <Deepal> bye all
}}} | http://mail-archives.apache.org/mod_mbox/ws-dev/200412.mbox/%3C20041208150629.484.51723@minotaur.apache.org%3E | CC-MAIN-2019-35 | refinedweb | 5,971 | 63.43 |
If you're a web developer, and you've already used some JS frameworks (especially React), then you might be familiar with the concept of CSS-in-JS. Basically, it all boils down to creating your CSS stylesheets through JavaScript, rather than usual CSS. It's somewhat better than solutions like SCSS, because of the continuous access to all the JS goodness that it gives you. It also simplifies the management of your CSS styles and the general development experience (DX) as a whole.
Now, let's remind ourselves TypeScript - JS superset with static type system included. This one further improves DX through additional tooling, suggestions, and type-safety. So, the question should be asked - what would happen, if we mix CSS-in-JS and TypeScript together? Well, I'll tell you - TypeStyle will happen! So, bear with me for this one, as we're going to discover, what kind of goodness can such a combination provides us with, if it's worth your effort and how to use it!
The idea
First, let's take a step back and discuss exactly why anyone would ever mix TS with CSS-in-JS concept. Here, the answer is simply - because why not!? Like really, CSS-in-JS is just a general idea, obviously connected with CSS and JS, while TS is just a JS superset with easy access to all of its underlying features. That's why there's no point of not doing such a thing.
Going even further, it's the possible benefits of such a mix that make it even more interesting! CSS-in-JS concept and libraries implementing it, all aim at making CSS more... "maintainable". As you may know, many of them achieve it in different ways. Ones allow you to define your CSS classes in the form of objects, other - in form of the template literal, and some make the whole stuff even more complex by providing a Babel plugin. Don't get me wrong - all these approaches are good, depending on your use-case of course. But, they also have some more disadvantages...
One thing that almost all these libraries lack is type-safety. Of course, I mean TypeScript. Most of them are written in plain JavaScript, with only some partially-complete external typings. Such state may be the result of how hard creating a suitable, statically-typed API can be, especially for JS representation of CSS. There's just too many CSS properties and specialized rules (like
@media) to do that. Yet, still - we can try!
TypeStyle
So, what is TypeStyle? By now you obviously know - it's a CSS-in-JS library, written in TypeScript. Its main goal is to make CSS maintainable and type-safe. With that said, it also comes with some pretty neat features built-in.
What differentiates TypeStyle from quite a lot of CSS-in-JS libs is that it's runtime-only. By using all CSS-related APIs (I discussed these in my previous post), it simply creates all your stylesheets with JavaScript, instead of doing any preprocessing. In this way, TypeStyle is super "portable". Because of its runtime-based model and small size (~6 KB min-zipped), you can just swap it in and you're ready to go!
The library is also framework-independent. Because of that, TypeStyle tries to mirror CSS design to a much higher degree than some libraries. Of course, this comes with some possible "drawbacks" to some, like - most notably - no auto-prefixing and other post-CSS stuff.
Of course, the biggest feature of TypeStyle is its typings. The API does a great job of allowing TS-powered autocompletion and code hints features. Maybe CSS will never be 100% type-safe, but the library does a good job of taking what we have available today to whole another level.
Basics
So, with some reasoning and introduction behind us, let's dive right into a small overview of TypeStyle API. Keep in mind that it's not really a big library, and its documentation already does its best of explaining all the stuff. With that said, go check it out, if you want to know more.
npm install typestyle
CSS classes
The most basic use of TypeStyle involves creating simple CSS classes.
import { style } from "typestyle"; const className = style({ backgroundColor: "red", width: 100, height: 100 });
By using
style() function, we're creating a new CSS class, which we can later access by the returned, hashed class name. The provided config object can be treated just like any other. This includes destructuring,
Object.assign() and other cool stuff. You can do similar stuff just by supplying any number of config objects to
style() function.
import { style, types } from "typestyle"; const rect: types.NestedCSSProperties = { width: 100, height: 100 }; const className = style({ backgroundColor: "red", ...rect }); // or style({backgroundColor: "red"}, rect);
The use of such patterns will result in losing the type-safety and TS support in all "components" of our style config. If you're using TS and don't want that to happen, you can specify the type for your object directly, with some help from TypeStyle-provided types, just like in the example above.
Nesting
The basic TS support for
style()-like functions is present throughout multiple other CSS-in-JS libraries. What sets TypeStyle apart is the level of this integration. A great example of that is the way that TypeStyle handles pseudo-classes. Take a look:
// ... const className = style({ backgroundColor: "red", ...rect, $nest: { "&:hover": { backgroundColor: "green" } } });
The library requires special nested property -
$nest - in order to supply style config for different pseudo-classes and stuff. This allows TypeScript to inference proper type, and thus, provide all support it can for widely-known pseudo-classes. The
$nest property can also be used for normal nested selectors. Although, keep in mind that such usage leaves you with no TS support, and a class with nested selectors that's somewhat hard to manage in most CSS-in-JS scenarios.
Helpers
Generally, the
style() function is all there is to TypeStyle. It's both simple and intuitive. The rest of the library basically builds upon this functionality, with additional helper functions and other useful tools.
Media queries
Most notable examples of such helpers include
media() function, used for type-safe media queries.
import { style, media } from "typestyle"; // ... const className = style( rect, media({minWidth:0,maxWidth:600}, {backgroundColor: "red"}), media({minWidth:601}, {backgroundColor: "green"}), );
The
media() function is a mixin, outputting a normal style config. You can think of it as a nice replacement for
$nest property.
// ... const className = style( rect, $nest: { "@media only screen and (max-width: 600px)": { backgroundColor: "red" }, // ... } );
Pretty nice, huh? The
$nest property might still be required for some advanced use-cases. Remember that, because we're working in JS/TS, you can always create your own mixins, in order to give some structure and look to your main style config.
Animations
Just like media queries, CSS keyframe animations are an equally "special" feature, that may be hard to use in CSS-in-JS. For that TypeStyle, again, provides nice helper function -
keyframes().
import { style, keyframes } from "typestyle"; // ... const animationName = keyframes({ '0%': { color: 'red' }, '100%': { color: 'green' } }) const className = style({ ...rect, animationName: animationName, animationDuration: '2s', });
The function returns new, hashed name of created animation for your later use. It's this kind of intuitiveness that made me really like this library.
Concatenation
Finally, if you work with React or simple
className property for that matter, you might enjoy
classes() helper. It simply concatenates all supplied class names and returns the result.
import { classes } from "typestyle"; // ... const classStr = classes(className, className2);
Raw CSS
So, as you can see from the examples above, TypeStyle provides a nice, but small set of helper functions. Like really - how much can you pack in 6 KB library? Anyway, the point is that the library doesn't provide helpers for everything. This is something that you can easily create yourself if you like, using mixins, component objects and etc.
You might guess by now that TypeStyle applies all its classes and stuff into a single stylesheet (single
<style/> tag), that's created with the help of some CSS-related Web APIs. It's an important detail to remember when using TypeStyle's raw CSS functions -
cssRule() and
cssRaw().
import { cssRule, cssRaw } from "typestyle"; // ... cssRule(".red-rect", { ...rect backgroundColor: "red" }); cssRaw(` .green-rect { height: 100px; width: 100px; background-color: green; } `);
I don't think that these functions need a deep explanation. First allows you to create a CSS rule with a custom string selector, which is still somewhat type-safe.
cssRaw(), on the other hand, should be used only for loading CSS libraries, and even then - you might be better with a normal, external CSS file. It provides no type-safety at all!
Of course, such functions are extremely useful - especially when you want all your CSS to be written in CSS-in-JS fashion. Such functions can be used with e.g.
@import rule, where the placement matters. That's why it's so important to understand that TypeStyle works on a single stylesheet, as for such use-cases, you should use
cssRaw() before any other CSS-related call, to place your custom rule at the top of the stylesheet.
SSR
I previously mentioned that TypeStyle is runtime-only. This means that it doesn't base on any kind of Babel plugin and stuff at all by default. If you want to argue that it's not the best decision performance-wise, then think again. The performance loss is left unnoticed (for me, at least), and you just really shouldn't trade performance with maintainability. But, if you don't want to change your mind, there's another way.
TypeStyle has built-in support for Server-Side Rendering (SSR) and static page generation. Because of the use of a single stylesheet, TypeStyle provides an easy-to-use function -
getStyles() - to extract all of its rules.
import { style, getStyles } from "typestyle"; // ... const className = style({ backgroundColor: "red" ...rect, }); getStyles(); /* Example result: hashed-class-name { height: 100px; width: 100px; background-color: red } */
By using the
getStyles() function, you can easily use all of the TypeStyle features - including hashed CSS class names, without any (even the tiniest) loss in performance. Just put the result of this call into
<style/> tag of your template file, and you're ready to go! Of course, if you know how it's done, you can even create your very own Babel plugin for that very easily (most likely).
There's more!
As I don't want this post to be a documentation, rather than a simple, beginner-friendly tutorial, we'll stop here. There's still some interesting features and gotchas noted in the official docs. If you're interested in this library, I highly recommend reading the docs - they're impressively well-written! But, even so, with the set of features you learned about in this article, you should easily be able to represent most of your CSS in a type-safe, maintainable and expressive way.
Thoughts?
So, what do you think of TypeStyle? Do you like this somewhat different approach to CSS-in-JS that it represents? Let me know down in the comments section below. Also, if you like the article, consider leaving a reaction, a comment or a suggestion for future posts. For more up-to-date content, follow me on Twitter, my Facebook page or through the weekly newsletter. I hope you enjoyed this one, and have a great day! | https://areknawo.com/a-different-approach-to-css-in-js/ | CC-MAIN-2021-25 | refinedweb | 1,919 | 64.71 |
Pt-G-3
Cliff Says
Here’s a stab at it. Generate a PaymentType model with a single string field.
rails generate model payment_type name:string
Add the three payment types in seeds.rb, and run rake db:seed.
PaymentType.create(:name => "Check") PaymentType.create(:name => "Credit card") PaymentType.create(:name => "Purchase order")
Add a names class method to the PaymentType model.
class PaymentType < ActiveRecord::Base def self.names all.collect { |payment_type| payment_type.name } end end
Change the validation in the Order model. Initially, one might do this:
validates :pay_type, :inclusion => PaymentType.names
Now I’m a Rails n00b, but as far as I can tell, doing validation that way would fetch the payment types just once from the DB at the moment when Ruby defines the Order model class. What if the payment types table in the DB is modified after the Order model class has been defined? Hence, I rewrote the validation to query the DB for the payment types every time the validation is called (at least, I think I did):
Then, in views/orders/_form.html.erb, replaceThen, in views/orders/_form.html.erb, replace
validates_each :pay_type do |model, attr, value| if !PaymentType.names.include?(value) model.errors.add(attr, "Payment type not on the list") end end
<%= f.select :pay_type, Order::PAYMENT_TYPES, :prompt => 'Select a payment method' %>
with
<%= f.select :pay_type, PaymentType.names, :prompt => 'Select a payment method' %>
Wham bam. Added some unit and functional tests too. Works fine. I’m not super satisfied with the messy-looking validates_each block, though. Anyone with a more elegant validation?
Older says (11/09/08)
Probably the best way to avoid later problems in case there is some payment method chances, would be to link an order’s payment method to the payment method’s id. This way, we can always change payment method names without having to update older orders. Then, it would be nice to migrate and add payment_method.id to order. Edit: I tried to but did not go well. I guess there is a way to deal with catalog table I don’t know yet.
Ernesto says (14/09/2011)
than:than:
rails generate migration add_pay_type_id_to_order pay_type_id:integer rails generate scaffold pay_type name:string rake db:migrate
order.rb has_one :pay_type validates :pay_type_id, presence: true
pay_type.rb belongs_to :orde
in app/views/orders/_form.html.erb <div class="field"> <%= f.label :pay_type_id %><br /> <%= collection_select(:order, :pay_type_id, PayType.all, :id, :name) %> </div>
You also need to edit the other views in app/views/orders….
more informations about: starting at 3.2
Pepe says (26/09/2011)
Ernesto solution have some errors in order and pay_type models using has_one and belongs_to rails methods.
belongs_to means that the foreign key is in the table for this class. So belongs_to can ONLY go in the class that holds the foreign key.
has_one means that there is a foreign key in another table that references this class. So has_one can ONLY go in a class that is referenced by a column in another table.
So correct setting are:
order.rb belongs_to :pay_type validates :pay_type_id, presence: true
pay_type.rb has_many :order
In app/views/orders/index.html.rb <td><%= order.pay_type.name %></td>
Diego says (28/05/2012)
Well I also implemented migration code that creates the new PaymentType table and migrates all the existing data using the new data model.
def up create_table :payment_types do |t| t.string :name t.timestamps end PaymentType.create(name: 'Check') PaymentType.create(name: 'Credit Card') PaymentType.create(name: 'Purchase Order') add_column :orders, :payment_type_id, :integer Order.reset_column_information Order.all.each do |order| order.payment_type_id = case order.pay_type when 'Check' 1 when 'Credit Card' 2 when 'Purchase Order' 3 end order.save validate: false end Order.reset_column_information remove_column :orders, :pay_type end def down add_column :orders, :pay_type, :string Order.reset_column_information Order.all.each do |order| order.pay_type = case order.payment_type_id when 1 'Check' when 2 'Credit Card' when 3 'Purchase Order' else 'Credit Card' end order.payment_type_id = 0 order.save validate: false end Order.reset_column_information drop_table :payment_types remove_column :orders, :payment_type_id end
Not the best code in the earth but it was a good opportunity to try some things using ruby :)
Page History
- V27: Diego Kurisaki [almost 4 years ago]
- V26: Diego Kurisaki [almost 4 years ago]
- V25: Diego Kurisaki [almost 4 years ago]
- V24: Diego Kurisaki [almost 4 years ago]
- V22: Ernesto Fries Urioste [over 4 years ago]
- V21: Ernesto Fries Urioste [over 4 years ago]
- V20: Ernesto Fries Urioste [over 4 years ago]
- V19: Ernesto Fries Urioste [over 4 years ago]
- V15: Clifton Crosland [over 4 years ago]
- V14: Clifton Crosland [over 4 years ago] | https://pragprog.com/wikis/wiki/Pt-G-3 | CC-MAIN-2016-22 | refinedweb | 770 | 52.26 |
Time flies! This is already the third installment of PHP Annotated Monthly, our monthly overview where we highlight the most interesting content from around the web, posted by developers like us.
In the past month we’ve learned that the HTML5 recommendation has become final and Docker is coming to Windows, some day. Great news, but there’s more awesome PHP news out there. Read more in this edition of PHP Annotated Monthly!
PHP
A fresh batch of PHP releases is available: PHP 5.4.34, PHP 5.5.18 and PHP 5.6.2 all come with security-related bugfixes. Check the changelog for more details.
A pull request has been submitted for PHP to support read-only properties (see RFC). It introduces a new readonly keyword which makes a property readable for everyone, but writable only to the containing object. Until now we had to hack this into our objects using the __get and __set magic methods, but having a readonly keyword would be so much cleaner. Let’s hope it gets accepted soon!
The PSR-0 autoloading standard has been deprecated in favor of the PSR-4 standard. It makes sense, as they both touch the same topic: autoloading, namespaces and paths.
If you want to know what is happening on PHP’s internals@ mailing list, Pascal Martin does a monthly blog post on noteworthy discussions, RFC’s and much more.
Unserializing objects from unknown data is always a risk. But sometimes we have to, for example when building libraries that may be used by others. How do we prevent PHP Object Injection in such cases? That’s where Stas Malyshev’s filtered unserialize() RFC comes in.
Peter Kokot made a nice overview of how PHP has evolved over the last 20 years.
Frameworks
A new Drupal 8 beta 3 will hit the Internet on Wednesday, November 12th. Drupal 7.32 has been released, containing a number of security fixes.
Symfony 2.6.0 beta 1 was just released, as always backward compatible but with a ton of new features. A number of debugging components have been added, as well as forms and validation improvements. A full list of changes is available from the announcement blog post.
Yii 2.0 is there, so now is a good time to read up on all that is new and what has changed. Three years of development have resulted in a lot of good things, like PHP namespaces, Composer and Bower integration, a dependency injection container and much more. Tuts+ also posted a tutorial on getting started with Yii 2.0.
Community/other
Rafael Dohms blogged about version selection in Composer. He compares the different version constraints we can define in our composer.json file and what effect they have on the versions being installed and Composer performance.
More Composer: Peter Petermann wrote a post about how we can create project skeletons using Composer. A good read, as it definitely helps with not repeating the same tasks over and over again when starting new projects.
In any development community, not only PHP, there has been discussion about how to version REST APIs. Some prefer to version the URL, others say that versioning should happen in the HTTP headers. Flame wars and religious discussions aside, Willem-Jan Zijderveld does an excellent job explaining and implementing API versioning using the HTTP Accept header.
The FriendsOfPHP security advisories database is now public domain. It lists all security vulnerabilities reported in frameworks such as Doctrine, DomPdf, Laravel, SabreDav, Symfony, Swiftmailer, Twig, Yii and Zend Framework. We can upload our composer.lock to check if there are known security issues in our dependencies. Speaking of security… Vic Cherubini blogged about preventing SQL injection attacks.
Nicolas Scolari blogged about Symfony route annotations. He explains an alternate way of defining routes in our application, using annotations instead of YAML files. And Daniel Espendiller wrote an open-source plugin for PhpStorm to provide code completion and navigation for PHP annotations.
Gilles Crettenand covers functional programming in PHP. He explains what functional programming is, what advantages it has, and shows some examples in PHP. By showing us these examples, he shows that in functional programming all output depends solely on the input and makes code more readable and easier to understand.
What do we do when traveling to a country where the power outlets are different from the ones at home? Right! We use a travel adapter to make the socket compatible with a different interface. Guess what? That is exactly what the adapter pattern does for us when writing code.
Haven’t started with Vagrant? Aldo Ziflaj blogged about several ways to get started with PHP on Vagrant. He covers my favorite PuPHPet, which creates a Vagrant virtual machine configuration with just a few clicks. Protobox, Phansible and Rove are covered too. Not really in-depth, but a good starting point!
I find myself guilty of this when submitting talks, too. Anna Filina describes the main thing any talk proposal for a conference should contain: a purpose! Matthew Turland followed up on that with a collection of resources for conference speakers.
Have news to share? Found an interesting read? Have some comments on this post? We’d love to see hear from you in the comments below. Also feel free to reach out to @maartenballiauw on Twitter.
Till next time!
Develop with pleasure!
– JetBrains PhpStorm Team | https://blog.jetbrains.com/phpstorm/2014/11/php-annotated-monthly-november-2014/ | CC-MAIN-2017-13 | refinedweb | 898 | 66.74 |
Compile time error, plus I'm pretty sure the code is not actually doing what I want it to. Like I said, I want a post method that will add a row to the database, and a get that will retrieve a list of the rows (as a JSON). --Zach On Tue, Mar 29, 2011 at 9:29 PM, Michael Snoyman <michael at snoyman.com>wrote: > Are you getting a compile-time error, or is the result at runtime not > what you expected? More details will be helpful here. > > Michael > > On Tue, Mar 29, 2011 at 8:58 PM, Zachary Kessin <zkessin at gmail.com> wrote: > > Ok, I'm trying to setup a handler to take data from a HTTP post, and > store > > it into a > > database, This is what I came up with, but its not right. Can someone > tell > > me what I am doing > > wrong. > > postPostMessageR :: Handler RepJson > > postPostMessageR = do > > liftIO $ putStrLn "POST" > > CharecterPost clients _ <- getYesod > > (text, charecter) <- do > > text <- lookupPostParam "text" > > charecter <- lookupPostParam "charecter" > > case (text, charecter) of > > (Just text', Just charecter') -> return (text', charecter') > > _ -> invalidArgs ["text not provided."] > > > > jsonToRepJson $ jsonMap (("text", text'),( "charecter", charecter')) > > > > --Zach > > _______________________________________________ > > web-devel mailing list > > web-devel at haskell.org > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | http://www.haskell.org/pipermail/web-devel/2011/001137.html | CC-MAIN-2014-23 | refinedweb | 209 | 78.59 |
On Thu, Oct 22, 2009 at 05:47:04AM -0400, Arjun Roy wrote: > 1. All the data stored right now as a python dict would be replaced with a > collection of xml (or any other kind of markup that works) file encoding the > same data. It is fairly important that this data store be user updatable, without negatively impacting RPM upgrades. So should not be one single giant file, but rather lots of individual files so users can easily add in data > The primary storage record would be the <distro>. It would consist of a required > ID, and one or more optional attributes. I think an integer ID is a bad idea because it makes custom local updates hard to resolve wrt later updates without risk of clashing keys. I think it would be better to use RDF here, linking using URIs >> If we used RDF this would allow for <distro rdf: <name>Fedora 10</name> </distro> <distro rdf: <name>Fedora 11</name> <upgrades rdf: </distro> NB, RDF URIs don't have to actually point to any real webpage, they are just a unique namespace >> I don't think this type=str vs type=ver is really adding anything of value here. You can't rely on projects providing nicely parsable version numbers, so in practice you just end up with version strings instead. >(); Either CamelCase or underscores, but please not a mix of both :-( > /* Setting parameters like libvirt version, etc. */ > int osi_setLibParam(osi_lib_t lib, cstring key, cstring val); > int osi_getLibParam(osi_lib_t lib, cstring key); Unless there's a really compelling reason not to, it is better to use standard 'char *' or 'const char *' for strings - it makes life much nicer for people writing bindings to other languages. > > /*); What does the 'Put' method do ? > > /*); Again I don't think the String/Version distinction is really helping here. I think we should just use strings for all :| | http://www.redhat.com/archives/virt-tools-list/2009-October/msg00106.html | CC-MAIN-2013-48 | refinedweb | 316 | 64.95 |
[ Tcllib Table Of Contents | Tcllib Index ]
control(n) 0.1.3 "Tcl Control Flow Commands"
Table Of Contents
Synopsis
- package require Tcl 8.2
- package require control ?0.1.3?
Description
The control package provides a variety of commands that provide additional flow of control structures beyond the built-in ones provided by Tcl. These are commands that in many programming languages might be considered keywords, or a part of the language itself. In Tcl, control flow structures are just commands like everything else.
COMMANDS
- control::control command option ?arg arg ...?
The control command is used as a configuration command for customizing the other public commands of the control package. The command argument names the command to be customized. The set of valid option and subsequent arguments are determined by the command being customized, and are documented with the command.
- control::assert expr ?arg arg ...?
When disabled, the assert command behaves exactly like the no-op command.
When enabled, the assert command evaluates expr as an expression (in the same way that expr evaluates its argument). If evaluation reveals that expr is not a valid boolean expression (according to [string is boolean -strict]), an error is raised. If expr evaluates to a true boolean value (as recognized by if), then assert returns an empty string. Otherwise, the remaining arguments to assert are used to construct a message string. If there are no arguments, the message string is "assertion failed: $expr". If there are arguments, they are joined by join to form the message string. The message string is then appended as an argument to a callback command, and the completed callback command is evaluated in the global namespace.
The assert command can be customized by the control command in two ways:
[control::control assert enabled ?boolean?] queries or sets whether control::assert is enabled. When called without a boolean argument, a boolean value is returned indicating whether the control::assert command is enabled. When called with a valid boolean value as the boolean argument, the control::assert command is enabled or disabled to match the argument, and an empty string is returned.
[control::control assert callback ?command?] queries or sets the callback command that will be called by an enabled assert on assertion failure. When called without a command argument, the current callback command is returned. When called with a command argument, that argument becomes the new assertion failure callback command. Note that an assertion failure callback command is always defined, even when assert is disabled. The default callback command is [return -code error].
Note that control::assert has been written so that in combination with [namespace import], it is possible to use enabled assert commands in some namespaces and disabled assert commands in other namespaces at the same time. This capability is useful so that debugging efforts can be independently controlled module by module.
% package require control % control::control assert enabled 1 % namespace eval one namespace import ::control::assert % control::control assert enabled 0 % namespace eval two namespace import ::control::assert % one::assert {1 == 0} assertion failed: 1 == 0 % two::assert {1 == 0}
- control::do body ?option test?
The do command evaluates the script body repeatedly until the expression test becomes true or as long as (while) test is true, depending on the value of option being until or while. If option and test are omitted the body is evaluated exactly once. After normal completion, do returns an empty string. Exceptional return codes (break, continue, error, etc.) during the evaluation of body are handled in the same way the while command handles them, except as noted in LIMITATIONS, below.
- control::no-op ?arg arg ...?
The no-op command takes any number of arguments and does nothing. It returns an empty string.
LIMITATIONS
Several of the commands provided by the control package accept arguments that are scripts to be evaluated. Due to fundamental limitations of Tcl's catch and return commands, it is not possible for these commands to properly evaluate the command [return -code $code] within one of those script arguments for any value of $code other than ok. In this way, the commands of the control package are limited as compared to Tcl's built-in control flow commands (such as if, while, etc.) and those control flow commands that can be provided by packages coded in C. An example of this difference:
% package require control % proc a {} {while 1 {return -code error a}} % proc b {} {control::do {return -code error b} while 1} % catch a 1 % catch b 0
Bugs, Ideas, Feedback
This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category control of the Tcllib Trackers. Please also report any ideas for enhancements you may have for either package and/or documentation. | http://docs.activestate.com/activetcl/8.5/tcl/tcllib/control/control.html | CC-MAIN-2019-22 | refinedweb | 796 | 54.22 |
Problem written by Chris Piech
You. Ever wondered how they know?
Carbon dating is a way of determining the age of certain archaeological artifacts of biological origin. Your job is to write a program that figures out how old a sample is.
This is a real velociraptor fossil. It's very old.
When an orgnanism is alive, its matter will have a constant amount of a special type of carbon called carbon-14 (c14). It turns out all living things have the same fraction: wood, mammoths, your body right now... small cats... all have the same amount.
An important property of c14 is that it dissapears at a constant rate: every 5,700 years the amount of c14 is halfed (its half life is 5,700 years). For example, if one starts with 100% of the natural occurence of c14:
In 5,700 years we expect there will only be 50% left.
In 11,400 years we would expect there will only have 25% left.
And so on...
When an organism is alive, it keeps replentishing its supply of c14 to the natural level. But when it dies (moment of silence), the amount of c14 starts to decrease. By testing how much c14 is left, scientists know how how long a dead thing has been dead for. Good times.
public class CarbonDatingSolnConst extends ConsoleProgram { private static final double LIVING_C14 = 13.6; private static final int HALF_LIFE = 5730; public void run() { // print some introduction information println("Radioactive molecule = C14"); println("Halflife = " + HALF_LIFE + " years"); println("C14 in living organisms = " + LIVING_C14 + " dpm"); println("-----"); println(""); // adds an extra new-line // repeats forever while(true) { // read the amount of C14 from the user double amount = readDouble("Amount of C14 in your sample: "); // Half life formula to calculate the age: // age = log(fractionLeft) / log(0.5) * 5730 // fractionLeft = amountCO2Left / livingCO2Amount double fractionLeft = amount / LIVING_C14; double age = Math.log(fractionLeft) / Math.log(0.5) * HALF_LIFE; age = Math.round(age); println("Your sample is " + age + " years old."); println(""); } } } | http://web.stanford.edu/class/cs106a/examples/carbonDating.html | CC-MAIN-2019-09 | refinedweb | 330 | 65.12 |
I have had similar problems in building GNU/Emacs in two different versions: emacs 19.34(source downloaded from): I can run ./configure without a problem. When I then type "make" the compile quickly exits with a complaint about /usr/include/unistd.h: at line 568 "parse error before '('", "parse error before __pgrp" and then, at emacs.c 657 "too few arguments to function get pgrp" emacs 20.6(source downloaded from): I can run ./configure without a problem. When I then type "make" the complie quickly exits with a complaint about /usr/include/unistd.h: at line 562, "macro 'setpgrp' requires 2 arguments, but only one given." Needless to say, I am anxious about my /usr/include/unistd.h. I am running Mandrake 8.2, and maybe they have given me a corrupt unistd.h? But I had a friend send me his unistd.h, and "diff" on them reveals no difference. As mentioned, I am running Mandrake 8.2, with gcc 2.96. I give you, just below, lines 547 to 576 of my /usr/include/unistd.h. The tips about submitting bugs tell us that we shouldn't speculate about the cause of difficulties, and so I won't. I shall be very grateful to you for any help that you can give with this problem. Best wishes, Alan McConnell (start)-------------------------------------- #if defined __USE_SVID || defined __USE_BSD || defined __USE_XOPEN_EXTENDED /* Both System V and BSD have `setpgrp' functions, but with different calling conventions. The BSD function is the same as POSIX.1 `setpgid' (above). The System V function takes no arguments and puts the calling process in its on group like `setpgid (0, 0)'. New programs should always use `setpgid' instead. The default in GNU is to provide the System V function. The BSD function is available under -D_BSD_SOURCE. */ # ifndef __FAVOR_BSD /* Set the process group ID of the calling process to its own PID. This is exactly the same as `setpgid (0, 0)'. */ extern int setpgrp (void) __THROW; #else /* Another name for `setpgid' (above). */ # ifdef __REDIRECT extern int __REDIRECT (setpgrp, (__pid_t __pid, __pid_t __pgrp) __THROW, setpgid); # else # define setpgrp setpgid # endif # endif /* Favor BSD. */ #endif /* Use SVID or BSD. */ -------------------------------------(finish) -- Alan McConnell : We do not know one millionth of one percent about anything.(Edison) Education cuts don't heal. | https://lists.gnu.org/archive/html/bug-gnu-emacs/2002-11/msg00115.html | CC-MAIN-2021-10 | refinedweb | 380 | 77.84 |
hey, this is my problem/homework. A user inputs a city and state and the program searches a text file for the name. The latitude and longitude is given in the text file and the distance is calculated.
The stuff in the txt file are in this format: " [Los_Angeles/CA:city] 34.08616 -118.37598 "
I have to make 3 arrays (for city, latitude, and longitude). My problem is that I can't get the array to read the city. Here is my code:
Code:#include<iostream> #include<fstream> #include<iomanip> #include<cstdlib> //includes libraries #include<string> using namespace std; const int SIZE= 25374; string name[SIZE]; double latitude[SIZE]; double longitude[SIZE]; int main() { ifstream inStream; inStream.open("US-PLACES.txt"); for(int i=0;i<=25374;i++){ inStream>>name[i]; cout<<i<<endl; } cout<<"hey"; for(int j=0;j<25374;j++){ cout<<name[i]; } cout<<"hey"; return(0); } | https://cboard.cprogramming.com/cplusplus-programming/35266-copying-string-array.html | CC-MAIN-2018-05 | refinedweb | 151 | 65.62 |
Here's a "Hello, world!" example. First the server:
package require SOAP::Domain namespace eval YourNamespace { proc iam {name} { return "Hello, $name!" } SOAP::export iam } SOAP::Domain::register -prefix /yourPrefix \ -namespace YourNamespace \ -uri YourNamespacePut the server in the CGI directory of your webserver (tclhttpd works great if you don't already have one). Now the client:
package require SOAP SOAP::create iam \ -uri YourNamespace \ -proxy invoke the procedure in the client:
\ -params {name string}\ -params {name string}
(sandbox) 1 % iam busirane Hello, busirane!
AM (7 august 2009) This example was very helpful in setting up a client program (first in Tcl, then in C with Tcl embedded) to connect to a server written in Java. The services were readily available in Tcl, provided the name of the SOAP procedure matches the name of the Java function that implements the service. (I did not pay much attention to the -uri argument though, I do not know what it is for ;)). | http://wiki.tcl.tk/15738 | CC-MAIN-2017-17 | refinedweb | 159 | 57.91 |
Nepomuk provides a Query Library which is a part of the nepomuk-core library. It can be used to dynamically create queries. This Query Library eventually compiles the queries into Sparql which is then run on our database - virtuoso.
It is recommended that you use the Query Library when you're searching for resources which have some direct properties. If the queries are too complex, maybe you should be directly using SPARQL. It all depends on your use case. The only advantage of using the QueryLibrary is that with improvements in the query library, your queries should get faster.
The entire Query Library resides in the Nepomuk2::Query namespace. It consists of a number of terms which can be combined together to create larger terms. Eventually when you've have created the final term, you can pass it to Nepomuk2::Query::Query and run the query
The different kinds of terms are -
Imagine we wanted to find all files that are tagged with a certain tag:
Nepomuk::Tag myTag = someFancyMethod(); // term matching the tag Nepomuk::Query::ResourceTerm tagTerm( myTag ); // term matching tagged resource Nepomuk::Query::ComparisonTerm term( Soprano::Vocabulary::NAO::hasTag(), tagTerm, Nepomuk::Query::ComparisonTerm::Equal ); // build the query Nepomuk::Query::Query query( term );
The Query class provides convenient methods to convert the query into its relevant sparql query, which can then easily be read. Additionally, it also provides a toSearchUrl method. | https://techbase.kde.org/index.php?title=Projects/Nepomuk/QueryLibrary&oldid=80955 | CC-MAIN-2017-26 | refinedweb | 232 | 51.78 |
These problems concern the USMoney class discussed in Section 6.2.
These problems concern the USMoney class discussed in Section 6.2.
a. A default (no-argument) constructor was not included for the USMoney class. Should there be such a constructor? If so, how much money should it represent? Explain.
It would only make sense for such a constructor to construct money with amount 0. However, one can easily construct money of amount 0 by passing 0 in as the only parameter to one of the other constructors.
b. Because USMoney is immutable, the USMoney object representing $1 could theoretically be reused everywhere exactly $1 is needed. Describe a way that the USMoney class could be designed so that only one object is ever created that represents $1.
Instead of public constructors, use a “factory” method. That is, create a public static method in the money class that returns a USMoney object of the desired amount. The USMoney class could maintain a pool of all existing USMoney objects and their amounts. If a user requests a new USMoney object, the factory method could first check the pool to see whether a USMoney object with the given amount exists. If so, it returns that object. If not, the method creates a new USMoney object with the given amount, adds it to the pool, and then returns that object.
c. Should there be additional variations on the getAmount method that return the amount in some form other than the number of cents returned as a long value? Discuss the alternatives and their advantages and disadvantages.
It could return it as a double value, but the issues of round off errors arise. There could be getDollars and getCents methods, but they provide little in the way of convenience.
d. Do we really need a getAmount method at all? What would it be used for?
There are many monetary calculations that the user may want to perform, such as the calculation of mortgage payments on a debt, given as USMoney. Our USMoney methods are not very convenient for such calculations and so it is important to be able to extract the amount of money from the USMoney object to do such calculations.
e. Why do we add and subtract USMoney objects, but we multiply and divide by plain doubles? Why not multiply and divide by USMoney objects, like we did for addition and subtraction?
It doesn’t make sense multiplying two USMoney objects. Would the resulting unit of measurement be USMoney2?
f. Add method headers that follow Javadoc conventions and completely describe the behavior of each of the methods in the USMoney interface.
See the answer to question h below.
g. Show that the minus, dividedBy, and negate methods are just convenience methods by implementing them very simply using the plus and times methods.
See the answer to question h below.
h. Implement the USMoney class using one long instance variable to store the amount of money in cents. Don’t forget to override the inherited equals and hashCode methods.
public class USMoney implements Comparable<USMoney>{ /** the amount of money in cents */ private long amount; /** * constructs a USMoney object with the given value * @param dollars the number of dollars * @param cents the number of cents */ public USMoney(int dollars, int cents) { amount = 100*dollars + cents; } /** * constructs a USMoney object with the given value * @param cents the value in cents */ public USMoney(long cents) { amount = cents; } /** * @return the value of this USMoney object in cents */ public long getAmount() { return amount; } /** * creates a String displaying this money in the standard * form with "$". For example, for one dollar, the result is * "$1.00". For negative one dollar, the result is "-$1.00". * @return the String displaying the value of this money * using "$" notation */ public String toString() { String result = (amount < 0 ? "-$" : "$"); long absAmount = (amount < 0 ? -amount : amount); result += absAmount /100 + "."; long cents = absAmount % 100; if (cents < 10) result += "0"; result += cents; return result; } /** * compares the amount of this money to the amount of some * other money m. * @param m the other USMoney to be compared to this USMoney * @return 0 if equal, -1 if this money is smaller, * 1 if other money is smaller or is null */ public int compareTo(USMoney m) { if(m == null) return 1; else return (amount < m.amount ? -1 : (amount == m.amount ? 0 : 1)); } /** * tests whether Object o is equal to this USMoney. * @param o the Object to be compared to this money * @return true if o is a USMoney object with the same value * as this money */ public boolean equals(Object o) { if (this == o) return true; if (o == null || ! (o instanceof USMoney)) return false; USMoney usMoney = (USMoney) o; return this.amount == usMoney.amount; } /** * @return the hash code value for this money */ public int hashCode() { return (int) (amount ^ (amount >>> 32)); } /** * creates a new USMoney object whose value is the sum of the * value of this USMoney and the USMoney m. * @param m the other USMoney to be added to this money * @return a new USMoney whose value is the sum of this' * value and m's value */ public USMoney plus(USMoney m) { return new USMoney(this.amount + m.amount); } /** * creates a new USMoney object whose value is the difference * of the value of this USMoney and the USMoney m. * @param m the other USMoney to be subtracted from this * @return a new USMoney whose value is the difference of * this' value and m's value */ public USMoney minus(USMoney m) { return this.plus(m.times(-1)); } /** * creates a new USMoney object whose value is the product of * the value of this USMoney and the given factor * @param factor the amount to be multiplied by this money * @return a new USMoney whose value is the product of this' * value and the factor */ public USMoney times(double factor) { return new USMoney((long) (this.amount * factor)); } /** * creates a new USMoney object whose value is the quotient * of the value of this USMoney and the given divisor * @param divisor the amount to be divided into this money * @return a new USMoney whose value is the quotient of this' * value and the divisor */ public USMoney dividedBy(double divisor) { return this.times(1/divisor); } /** * @return a new USMoney object whose value is the negative * of this money */ public USMoney negate() { return this.times(-1); }}
i. Implement the USMoney class so that arbitrarily large amounts of money can be stored in a USMoney object. You are welcome to make minor changes to the method and constructor headers.
This implementation will require use of the BigInteger class or something equivalent..
Either they are already their own complement or the complement doesn’t make sense.
(b) There is no complement to the toString method. For symmetry to this method, which converts a USMoney object into String object, it would be nice to have a method that converts a String into a USMoney object. One such method could be a fromString method that takes a String as its parameter. This method could be a static method in the USMoney class (why static?). Or, instead, we could create a new constructor that takes a String as its parameter and creates the corresponding USMoney object. Either way you do it, the string needs to be parsed to create an object with the appropriate value. For example, if the user wrote
USMoney money = USMoney.fromString("$5.00");
then the variable money could refer to an object corresponding to $5. Note that this method will somehow need to handle Strings that have improper format. We could throw an IllegalMoneyFormatException, a new class of exceptions that we create. Implement the fromString method as part of the implementation of the USMoney class described in part (h) above and have it throw an IllegalMoneyFormatException if the input string is improperly formatted. You will need to create this new exception class yourself.
public class IllegalMoneyFormatException extends RuntimeException { } public static USMoney fromString(String s) { //deal with short strings if(s.length() < 2) throw new IllegalMoneyFormatException(); //deal with negative sign at front int sign = (s.charAt(0)=='-' ? -1 : 1); if(sign == -1) s = s.substring(1); //eliminate the negative sign //deal with dollar sign if( s.charAt(0) != '$') throw new IllegalMoneyFormatException(); s = s.substring(1); //eliminate the dollar sign //deal with the decimal point int periodIndex = s.indexOf('.'); if( periodIndex == -1 || periodIndex != s.length()-3) throw new IllegalMoneyFormatException(); s = s.substring(0,periodIndex) + s.substring(periodIndex+1); //deal with remaining integer digits long value = 0; try { value = Long.parseLong(s); } catch (NumberFormatException e) { throw new IllegalMoneyFormatException(); } if(value < 0) // extra minus sign after the $ sign throw new IllegalMoneyFormatException(); return new USMoney(sign * value); }
k. The times and dividedBy methods result in round-off errors, since they involve floating point calculations. To avoid this problem, we could require the factor and divisors to be integers, but even then the dividedBy method could have round-off errors. One way to avoid this problem is to return an array of USMoneys. That is, the return type of dividedBy would be USMoney[]. This approach is discussed in [1]. It corrects for the problem of lost cents when you divide an amount of money by an integer. If we use dividedBy to divide an amount of money by n, then the size of the array returned by dividedBy is n and each object in that array contains 1/nth of the original amount of money, except that the first few objects in the array have one more cent so that the total sum of the objects in the array adds up to the original amount. For example, if you divide $1 by 3, the dividedBy method will return an array of 3 USMoney objects, the first one corresponding to 34 cents and the last two corresponding to 33 cents. For this exercise, implement these new times and dividedBy methods, assuming that the value of a USMoney object in cents is stored in a long instance variable:
(a) USMoney times(int factor)
(b) USMoney[] dividedBy(int divisor) | http://www.chegg.com/homework-help/object-oriented-design-using-java-1st-edition-chapter-6-solutions-9780072974164 | CC-MAIN-2015-32 | refinedweb | 1,658 | 61.46 |
Technical Support
On-Line Manuals
RL-ARM User's Guide (MDK v4)
#include <rl_usb.h>
BOOL usbh_uninit_all (void);
The usbh_uninit_all function uninitializes the USB Host
Stack and all of the enabled USB Host Controllers Hardware. It can be
used if during the application run time USB Host Stack needs to be
disabled for whatever reason (for example lowering power
consumption). After usbh_uninit function is called only
usbh_init or usbh_init_all should be called for
reinitialization of USB Host Stack and USB Host Controller
Hardware.
The usbh_uninit_all function is part of the RL-USB
Host software stack.
The usbh_uninit_all function returns one of the following
manifest constants.
usbh_engine, usbh_engine_all, usbh_get_error_string, usbh_get_last_error, usbh_init, usbh_init_all, usbh_uninit
#include <rl_usb.h>
int main (void) {
..
usbh_init_all(); // USB Hosts Initialize
..
usbh_uninit_all(); // USB Hosts Uninitial. | http://www.keil.com/support/man/docs/rlarm/rlarm_usbh_uninit_all.htm | CC-MAIN-2019-30 | refinedweb | 126 | 50.84 |
Logstash 6.1.0 has launched! We've got some great new features to talk about! Read about it here, or just head straight over to our downloads page and give it a shot! However, you may want to take a minute to read about breaking changes and release notes first. Read on for what's new in Logstash 6.1.0.
File Based Ruby Scripting Support
We’re proud to announce a great new way to extend Logstash functionality in 6.1.0. Complex modification of events in Logstash is now much easier due to our new feature, file based Ruby scripting via the Logstash Ruby filter. While the Ruby filter already lets you use custom ruby code to modify events, that code must live inside the Logstash configuration file itself, which doesn’t work well for longer pieces of code, and is hard to debug. Additionally, there’s never been a good way to reuse or test that code. File based Ruby scripts can take arguments, letting you reuse code within your Logstash configs easily.
This new feature lets you write Ruby code in an external file, with tests inline in that same file, and reuse that anywhere you’d like. Another nice feature here is that we can generate accurate line numbers in stack traces for exceptions in file based Ruby scripts, making them much easier to debug. The full details for configuring this are available in the Ruby filter docs. For a short example see below:
To configure the ruby filter with a file use:
filter { ruby { # Cancel 90% of events path => "/etc/logstash/drop_percentage.rb" script_params => { "percentage" => 0.9 } } }
The file 'drop_percentage.rb' would look like:
def register(params) @should_reject = params["reject"] end def filter(event) return [] if event.get("message") == @should_reject event.set("field_test", rand(10)) extra_processing(event) [event] end def extra_processing(event) # .. end test "non rejected events" do parameters do { "reject" => "hello" } end in_event { { "message" => "not hello" } } expect("events to flow through") do |events| events.size == 1 end expect("events to mutate") do |events| events.first.get("field_test").kind_of?(Numeric) end end
Experimental New Execution Engine
Logstash 6.1.0 brings some exciting new changes to the Logstash internals. We’ve been working on a full rewrite of the internal execution engine in Logstash. This rewrite moves the core execution logic from JRuby to Java/JVM Bytecode. With this approach we’ll be able to pave the way to more performance improvements in the future, as well as the ability to write optimized Logstash plugins in any JVM language.
This feature is currently disabled by default, and users should note that it is experimental and not yet ready for production. To enable this feature, you’ll need to use the '--experimental-java-execution' flag. We encourage users to try this flag out in test and staging environments and report any bugs found. Our hope is to make this the default execution method sometime in the 6.x timeframe. | https://www.elastic.co/blog/logstash-6-1-0-released | CC-MAIN-2018-39 | refinedweb | 495 | 64.71 |
After writing enough analysis scripts for yourself, you may become
known as an expert to your colleagues, who will want to use your
scripts. Systemtap makes it possible to share in a controlled manner;
to build libraries of scripts that build on each other. In fact, all
of the functions (
pid(), etc.) used in the scripts above come
from tapset scripts like that. A ``tapset'' is just a script that
designed for reuse by installation into a special directory.
Systemtap attempts to resolve references to global symbols (probes,
functions, variables) that are not defined within the script by a
systematic search through the tapset library for scripts that define
those symbols. Tapset scripts are installed under the default
directory named
/usr/share/systemtap/tapset. A user may give
additional directories with the
-I DIR option. Systemtap
searches these directories for script (
.stp) files.
The search process includes subdirectories that are specialized for a
particular kernel version and/or architecture, and ones that name only
larger kernel families. Naturally, the search is ordered from
specific to general, as shown in Figure
.
When a script file is found that defines one of the undefined symbols, that entire file is added to the probing session being analyzed. This search is repeated until no more references can become satisfied. Systemtap signals an error if any are still unresolved.
This mechanism enables several programming idioms. First, it allows
some global symbols to be defined only for applicable kernel
version/architecture pairs, and cause an error if their use is
attempted on an inapplicable host. Similarly, the same symbol can be
defined differently depending on kernels, in much the same way that
different kernel
include/asm/ARCH/ files contain macros that
provide a porting layer.
Another use is to separate the default parameters of a tapset routine
from its implementation. For example, consider a tapset that defines
code for relating elapsed time intervals to process scheduling
activities. The data collection code can be generic with respect to
which time unit (jiffies, wall-clock seconds, cycle counts) it can
use. It should have a default, but should not require additional
run-time checks to let a user choose another.
Figure
shows a way.
A tapset that exports only data may be as useful as ones that exports functions or probe point aliases (see below). Such global data can be computed and kept up-to-date using probes internal to the tapset. Any outside reference to the global variable would incidentally activate all the required probes.
Probe point aliases allow creation of
new probe points from existing ones. This is useful if the new probe
points are named to provide a higher level of abstraction. For
example, the system-calls tapset defines probe point aliases of the
form
syscall.open etc., in terms of lower level ones like
kernel.function("sys_open"). Even if some future kernel
renames
sys_open, the aliased name can remain valid.
A probe point alias definition looks like a normal probe. Both start
with the keyword
probe and have a probe handler statement block
at the end. But where a normal probe just lists its probe points, an
alias creates a new name using the assignment (
=) operator.
Another probe that names the new probe point will create an actual
probe, with the handler of the alias prepended.
This prepending behavior serves several purposes. It allows the alias definition to ``preprocess'' the context of the probe before passing control to the user-specified handler. This has several possible uses:
Figure
demonstrates a probe point alias
definition as well as its use. It demonstrates how a single probe
point alias can expand to multiple probe points, even to other
aliases. It also includes probe point wildcarding. These functions
are designed to compose sensibly.
Sometimes, a tapset needs provide data values from the kernel that
cannot be extracted using ordinary target variables (
$var). This may be because the values are in complicated data structures, may
require lock awareness, or are defined by layers of macros. Systemtap
provides an ``escape hatch'' to go beyond what the language can safely
offer. In certain contexts, you may embed plain raw C in tapsets,
exchanging power for the safety guarantees listed in
section
. End-user scripts may not include
embedded C code, unless systemtap is run with the
-g (``guru''
mode) option. Tapset scripts get guru mode privileges automatically.
Embedded C can be the body of a script function. Instead enclosing
the function body statements in
{ and
}, use
%{
and
%}. Any enclosed C code is literally transcribed into the
kernel module: it is up to you to make it safe and correct. In order
to take parameters and return a value, macros
STAP_ARG_* and
STAP_RETVALUE are made available. The familiar data-gathering
functions
pid(),
execname(), and their neighbours are
all embedded C functions. Figure
contains
another example.
Since systemtap cannot examine the C code to infer these types, an
optional5 annotation
syntax is available to assist the type inference process. Simply
suffix parameter names and/or the function name with
:string or
:long to designate the string or numeric type. In addition,
the script may include a
%{
%} block at the outermost
level of the script, in order to transcribe declarative code like
#include <linux/foo.h>. These enable the embedded C functions
to refer to general kernel types.
There are a number of safety-related constraints that should be observed by developers of embedded C code.
trylocktype call to attempt to take the lock. If that fails, give up, do not block.
Using the tapset search mechanism just described, potentially many script files can become selected for inclusion in a single session. This raises the problem of name collisions, where different tapsets accidentally use the same names for functions/globals. This can result in errors at translate or run time.
To control this problem, systemtap tapset developers are advised to follow naming conventions. Here is some of the guidance.
manpages.
execve?) to update the list incrementally. | http://www.sourceware.org/systemtap/tutorial/Tapsets.html | CC-MAIN-2013-48 | refinedweb | 993 | 56.35 |
Red Hat Bugzilla – Bug 603073
python >>> help() >>> modules command traceback when used without DISPLAY
Last modified: 2016-05-31 21:47:07 EDT
Description of problem:
When testing 526313 over ssh on shared box, I've noticed that 'modules' command in python interactive help probably executes more stuff than necessary, resulting in traceback in the case when no display is available (which is the case of ssh-ing to the box). I also tried to ssh there with -X to see what happens, which results in some gtk (I think) output one would not expect in this context.
Version-Release number of selected component (if applicable):
python-2.6.2-7.el6.x86_64
How reproducible:
always
Steps to Reproduce:
1. ssh <rhel6 box> # I think it should be reproducible even in RHEL6 box in runlevel 3, but I have no such box
2. $ python
3. >>> help()
4. >>> modules abcd
Actual results:
help> modules abcd
Here is a list of matching modules. Enter any module name to get more help.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/site.py", line 429, in __call__
return pydoc.help(*args, **kwds)
File "/usr/lib/python2.6/pydoc.py", line 1720, in __call__
self.interact()
File "/usr/lib/python2.6/pydoc.py", line 1738, in interact
self.help(request)
File "/usr/lib/python2.6/pydoc.py", line 1757, in help
self.listmodules(split(request)[1])
File "/usr/lib/python2.6/pydoc.py", line 1862, in listmodules
apropos(key)
File "/usr/lib/python2.6/pydoc.py", line 1962, in apropos
ModuleScanner().run(callback, key)
File "/usr/lib/python2.6/pydoc.py", line 1927, in run
for importer, modname, ispkg in pkgutil.walk_packages(onerror=onerror):
File "/usr/lib/python2.6/pkgutil.py", line 110, in walk_packages
__import__(name)
File "/usr/lib/python2.6/site-packages/invest/__init__.py", line 6, in <module>
import gtk, gtk.gdk, gconf, gobject
Expected results:
no traceback
Additional info:
When you *have* a display, you can get something like this, which also does not seem really right:
** (.:27993): WARNING **: Trying to register gtype 'WnckWindowState' as flags when in fact it is of type 'GEnum'
** (.:27993): WARNING **: Trying to register gtype 'WnckWindowActions' as flags when in fact it is of type 'GEnum'
** (.:27993): WARNING **: Trying to register gtype 'WnckWindowMoveResizeMask' as flags when in fact it is of type 'GEnum'
2010-06-11 09:24:03.982258: ERROR: Could not load the stocks from /root/.gnome2/invest-applet/stocks.pickle: [Errno 2] No such file or directory: '/root/.gnome2/invest-applet/stocks.pickle'
It seems this is more likely an issue of some misbehaving python package, but I've reported it to python because I'm not sure what the help() >>> modules actually executes.. **
Sorry for not getting to this earlier.
Looks like bug 461419, which I've fixed in Fedora in python-2.6.5-12.fc14
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
New Contents:
The "pydoc -k" command performs a keyword search of the synopses in all installed Python modules. This command failed on modules that did not import, resulting in a traceback. "pydoc -k" now ignores modules that had import exceptions, allowing searches on the remaining. | https://bugzilla.redhat.com/show_bug.cgi?id=603073 | CC-MAIN-2017-26 | refinedweb | 554 | 59.4 |
66213/download-website-scraping-anyone-explain-jupyter-example
Hey,
Web scraping is a technique to automatically access and extracts large amounts of information from a website. So let's see how to use python as our web scraping language. So for this, you need to follow the steps below:
easy_install pip
pip install BeautifulSoup4
4. Before we jump into coding you should know basics oh HTML.
5. Inspecting the page, let's take an example of this website.
6. First, right-click and open your browser’s inspector to inspect the web page.
7. Once you click on inspect, the related HTML will be selected in the browser console.
8. From the result, you will get the price is inside a few levels of HTML codes, which will be:
<div class="basic-quote">
→ <div class="price-container up">
→ <div class="price">.
9. Similarly, if you just click the name “S&P 500 Index”, which is inside:
<div class="basic-quote">
<h1 class="name">.
10. Now we will know the location of the data with the help of class tags.
11. Let's jump on the code, the point we know out data location, we can start coding in web scraper. You need to open your text editor.
12. For that, we need to import all the libraries that we are going to use:
# import libraries
import urllib2
from bs4 import BeautifulSoup
13. Then we need to declare a variable for the URL of the page:
# specify the url
quote_page = ‘paste the url'
14. Then we need to make use of the Python urllib2 to get the HTML page the URL declared:
# query the website and return the html to the variable ‘page’
page = urllib2.urlopen(quote_page)
15. And finally, we can parse the page into BeautifulSoup format so we can use BeautifulSoup to work on that.
# parse the html using beautiful soup and store in variable `store`
store = BeautifulSoup(page, ‘html.parser’)
Now we have a variable, store, containing the HTML of the page. Now we can start coding the part that extracts the data.
16. Here we can extract the content with find(). Since HTML class name is unique on this page, we can simply query:
<div class="name">.
# Take out the <div> of name and get its value
name_box = store.find(‘h1’, attrs={‘class’: ‘name’})
17. Once we get the tag, we can get the data by getting its text.
name = name_box.text.strip() # strip() is used to remove starting and trailing
print name
18. Similarly, we can get the price also:
# get the index price
price_box = store.find(‘div’, attrs={‘class’:’price’})
price = price_box.text
print price
Once you run the program, you will able to see that it prints out the current price of the S&P 500 Index.
I hope this will be helpful to you. And To know more about jupyter, you can go through this
Hi @Mike. First, read both the csv ...READ MORE
i have textfile now i want to ...READ MORE
motion_detection.ipynb
# Python program to implement
# Webcam ...READ MORE
down voteacceptedFor windows:
you could use winsound.SND_ASYNC to play them ...READ MORE
You probably want to use np.ravel_multi_index:
[code]
import numpy ...READ MORE
readline function help to read line in ...READ MORE
For Python 3, try doing this:
import urllib.request, ...READ MORE
Hi, good question.
Let us first assume that ...READ MORE
Hi, @Shubham,
Web scraping is the technique to ...READ MORE
Delete the List and its element:
We have ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/66213/download-website-scraping-anyone-explain-jupyter-example?show=66222 | CC-MAIN-2022-40 | refinedweb | 608 | 75.61 |
Alright so I have to write this code this way as a method with a question input for my homework.
The assignment is make a print tax table method that will print a tax table from 50000 to 60000 tax income range at $50 a row.
The only thing that I cant figure is the computation part where I println and the math is not computing properly. Also the compiler did not like my attempt to increment by 50.
How do I increment by 50 in the while loop?
What is wrong with my println statement?
Why cant I use the identifiers in the println statement?
import java.util.Scanner; public class ComputeTax { public static void main(String[] args) { //Create a Scanner Scanner input = new Scanner(System.in); //Prompt the user to enter filing status System.out.print( "Would You like to print a Tax table of 50000 to 60000 1-Yes, 2-N"); int status = input.nextInt(); double taxableIncome=0; if (status == 1) { printTable(status, taxableIncome); } else { System.out.println("Come back when you want to print in the 50k to 60k range!"); System.exit(0); } public static double printTable(int status, double taxableIncome) { double taxableIncome = 50000; double endingJob = 60000; System.out.println("Taxable\tSingle\tMarried\tMarried\tHead of"); System.out.println("Income\tFiler\tJoint\tSeparate\ta House"); while (taxableIncome <= endingJob) { System.out.println(taxableIncome + "\t" + (6000 * 0.10 + (27950 - 6000) * 0.15 + (taxableIncome - 27950) * 0.27)) + "\t" + (12000 * 0.10 + (46700 - 12000) * 0.15 + (taxableIncome - 46700) * 0.27) + "\t" + (6000 * 0.10 + (23350 - 6000) * 0.15 + (taxableIncome - 23350) * 0.27) + "\t" + (10000 * 0.10 + (37450 - 6000) * 0.15 + (taxableIncome - 37450) * 0.27) + "\n"; taxableIncome + 50; } } } | http://www.javaprogrammingforums.com/whats-wrong-my-code/5906-method-mixup.html | CC-MAIN-2015-35 | refinedweb | 277 | 63.15 |
Namespace
Expand Messages
- Hi all,
When I use "package" in one file, is this word a namespace? I don't understand very well the namespace concept. Can someone help me?
Thanks in advance.
perl -e '$_="tMM naaCt Feocmama_itpUilucoGa";$_.=$1,print $2 while s/(..)(.)//;print substr$_,1,1;'
"...just because I don't know the meaning of my art, does not mean it has no meaning..." S.D.
_______________________________________________________
Novidade no Yahoo! Mail: receba alertas de novas mensagens no seu celular. Registre seu aparelho agora!
[Non-text portions of this message have been removed]
Your message has been successfully submitted and would be delivered to recipients shortly. | https://groups.yahoo.com/neo/groups/perl-beginner/conversations/topics/24678 | CC-MAIN-2017-17 | refinedweb | 109 | 71 |
8 Things I Wish I’d Known When I Started as a Web DeveloperBy Shaumik Daityari
I have been in web development for more than five years and it’s been quite a journey — trying out different technologies and experimenting with different styles. I’ve been successful at times, but I’ve also had my fair share of disappointment. In this post, I’m going to talk about certain realizations I’ve had over the course of my life as a web developer, with the hope you can learn from my mistakes.
1. Write Clean Code
Source: funny-memes.org
The first thing you realize when you start making large applications is that a huge amount of time is required for debugging. Often, you’ll spend more time debugging than writing new code.
In such a situation, it’s highly important you write properly indented and commented code, which adheres to best practices. Imagine you have hundreds of lines of code, and you have no idea what’s causing a small bug. What’s worse is that if you write unreadable code, you’ll probably fail to understand what each snippet does after some time. Are you sure you want to go down that path? Here are some tips to make writing clean code easier.
2. Language first, framework later
People often first learn the tricks of a framework, and then move on to the language. That’s actually not the right way to go.
The popularity of Django can be attributed to the strengths of Python — so it’s important that you get comfortable with Python before trying out Django and not the other way round.
The simple reason here is that if you know about the underlying language, it helps you understand how the framework works. If you have no idea about the trades of a a language, there is no way you will understand why something is done a certain way in the framework.
3. Learn JavaScript, not jQuery
Getting a bit more specific than the idea raised above, I would like to give special emphasis to JavaScript, which is the most accessible language in the world. Any device with a browser is capable of running a JavaScript application.
The mistake that young developers often make is to “learn jQuery”. Questions like these on Quora suggest that jQuery is a very popular option among people who have no idea how the underlying JavaScript works!
jQuery is just a set of wrapper functions written over JavaScript and the only reason people prefer jQuery is because of the fewer number of lines of code required. However, recent versions of JavaScript (or ECMAScript as it is originally known as) have made the syntax more user friendly, making many jQuery functions obsolete.
I am not saying using jQuery is wrong. I am just telling you to follow the correct learning path. If you have no idea about closures and namespaces in JavaScript, how would you explain the use of “$” in jQuery?
4. Don’t just read, implement
I’ve often seen developers read through tutorials or sometimes even whole books without anything much to show for it. However, my biggest concern is how much would you retain if you just read?
If you want to learn Ruby on Rails, try to develop a small application as you are going through the documentation or a set of tutorials. If you want to try the MEAN stack, get it running in your local machine and explore the different options — that’s the best way to learn!
5. Jack of all trades, Master of one
It’s good to explore new technologies, but one must remember to stick to one for most purposes. It’s always tempting for beginners to learn multiple languages at the same time, but it’s advisable to stick to one until you develop a certain level of mastery over it.
Once you have a go-to language for your day-to-day needs, you can move on to new ones. You may also changed your preferred language in this process, but attempting to master one before you move on to others is often a wise decision. Here’s some advice for choosing which one to start with.
6. Learn version control
In today’s world, it’s very rare that you work on a project alone. To collaborate with others, you need to learn something called version control!
Developers usually don’t dive into version control until they absolutely need to do so. However, as version control is necessary to work in a team, it’s a good idea to understand how it works and the basic commands that get you started early on.
7. Learn from the work of others
Mastering a technology on your own is great, but sometimes you learn a lot by just looking at the code of others. Be it your colleagues or random tutorials on the internet, try to find why someone approaches a problem in a certain way — and ask questions if necessary.
It’s also important for developers to realize that it’s impossible to know everything, but the knowledge is out there — you just need to Google it. As a beginner, if you’re stuck there is high probability that someone like you had the same problem in the past and the solution is somewhere out there in the internet (this often happens to the veterans too!)
8. Ask for code reviews (and enjoy them!)
Over the years, code reviews have immensely improved my programming skills. Proper code reviews take time on the part of the reviewer, so you should always ask others to review what you have written — your peers as well as mentors. It helps expose loopholes in your approach, and makes you learn. If you find someone who genuinely takes interest in reviewing your code, take their reviews seriously. Here’s a look at an advanced way to code review.
Lastly, never take code reviews personally. Code is like art — it’s difficult when someone points out errors in what you have created, but code reviews can only make you better — provided you take them the right way!
Conclusion
This post stems from my experiences as a web developer and what I have learned from events that have shaped my life. For further reading, check out this article on how to be a good developer.
What have you learned that you would have loved your younger self to have known? Let us know in the comments below.
I'm guilty of a lot of those points as well, especially number 5's: Jack of all trades, Master of none. When I first started out, I wanted to learn it all and I wanted to learn it all right that instant. I made the mistake of trying to learn too much at once and ended up slowing myself down and even becoming overwhelmed.
A ninth point that others might find useful is to beware of snippets written by others. I've often found myself stuck during the coding process and used to run off to find a snippet that someone else has written to do what I need. Implementing other people's code does work, and it's not a bad thing to do, but make sure that you understand what it's doing and how it works before you forget it. Much like you mentioned in your first point, at a bare minimum, leave a comment with a link to the page where you got the code to remind you what the code does (and also give credit where it's due).
I wish I had learned version control as soon as I was starting to write code. This could have saved me hours of headaches caused by accidentally deleted code. I had to rely on multiple copies of the same project on my PC.
I'll have to disagree with number 3, even so I learned Javascript before jQuery. I see where you are coming from here "learn the foundation and then move to a library". If one learns jQuery, they will be saving a lot of time - and if one learn it and see it as a language, not as just a library, I'm sure they will be able to move to Javascript. Another point: when was the last time you used vanilla JS? I might be wrong, but I guess less and less people are using it.
Yes, that is true! That habit can get you in trouble later.
Happens to most people. In fact, when I first started using revision control, I was really skeptical.
I believe learning jQuery definitely gets your work done quickly, but once you know some jQuery (and no JS) you would want to include those files in every page that you create! (And on a lighter note, people would ask these kinds of questions -)
Oh man. That is brutal.
I definitely disagree with you - learning JQuery once you know JavaScript is trivial - a significant fraction of JQuery questions are asked by people who didn't learn JavaScript first where the answer to their question is one or two lines of native JavaScript. Learning JavaScript after you know JQuery is not much different from if you didn't already know JQuery and so you will spend time to learn it as if it were another completely new language.
I created a site with examples of JavaScript and JQuery code - the several hundred JavaScript examples come first followed by 15 examples that cover JQuery completely - simply because most of the JQuery commands refer back to the JavaScript example that does the same thing and so whole groups of JQuery commands that are used the same way can be dealt with in a single example since the JavaScript page already covers everything else about what each JQuery command does.
Of course there are lots of JavaScript pages that the JQuery doersn't link to because there are lots of things that you can do with JavaScript where JQuery doesn't help at all (such as being able to send information to the server where you don't need a response back - which only requires 2 lines of JavaScript and where the shortest JQuery code to do the same is about 10 times longer - not counting the JQuery library).
lol #1 was funny to me. Thats so me. exactly lol
Okay all good points but I want to add a beginners caveat, especially with regards to 2 & 3. Some times it is good to do things the wrong way just to get them done. Nothing stops the enthusiasm of being a beginner more that trying to do things perfect. There is so much to learn in web development that I would advise beginners to take the short cuts, like jQuery, at first so they can get their projects out the door. Once you have a few projects completed you then have the confidence to go back and do them right (i.e. start learning how jQuery works by learning javaScript). Also don't forget how important being able to make mistakes is in the learning process.
Most importantly don't forget we all have to ask questions when we are learning. The only stupid questions are the ones not asked.
The best thing about jQuery is not that it's less code to do something, but that you can expect the code to work just fine on IE6. jQuery takes all quirks and bugs in to account and makes your life more simple. This is only true if you need support för old IE though. I'm not using jQuery for new projects anymore.
If you learn JavaScript properly then you will very quickly have your own library of code that fixes all of those quirks and can just use the ones you need. Plus once you know JavaScript a few seconds more and you will know jQuery as well - saving about 3/8 of the learning time to learn both.
The way many people use jQuery it is like placing your house into your hand luggage to go on a short trip rather than just taking what you need.
Also there are now two versions of JQuery - one that is huge and provides patches for every imaginable IE6 browser quirk - and one that provides the same functionality for modern browsers that is a small fraction of the size and becoming smaller all the time now that most JQuery commands are being adopted into JavaScript itself. The main purpose of JQuery wasn't to fix the quirks, it was to add commands that the author considered should be in JavaScript and so JQuery itself should disappear within the next few years as what JQuery commands are not in ES6 (due for release in March) will be in ES7. Within 5 years JavaScript will still be around and JQuery will be the history of some of the commands (not implemented using the exact same command but doing the exact same thing).
I'm totally agree with point 2 and 3. So much people who learn how to use Codeigniter and jQuery rather than how to use PHP and Javascipt
Thanks Shaumik Daityari, great article!
always is good to know tips and best practices from others to improve skills and abilities for coding.
I would suggest to focus in what really matters (project goals) customer does not even care how you do it, they want it, and they wanted for "yesterday"
#1: Getting outsourced work that looks clean on the outside, but...
Another thing I wish I knew was when to say no. | https://www.sitepoint.com/8-things-wish-id-known-started-developer/ | CC-MAIN-2016-50 | refinedweb | 2,280 | 75.54 |
The goal of this example is to show you how to serve different content at different URLs.
The, reactor , the object which implements the Twisted main loop, and endpoints, which contains classes for creating listening sockets. We’ll also import File to use as the resource at one of the example URLs.
from twisted.web.server import Site from twisted.web.resource import Resource from twisted.internet import reactor, endpoints from twisted.web.static import File
Now we create a resource which will correspond to the root of the URL hierarchy: all URLs are children of this resource.
root = Resource()
Here comes the interesting part of this example. We’re now going to
create three more resources and attach them to the three
URLs
/foo ,
/bar , and
/baz :
root.putChild(b"foo", File("/tmp")) root.putChild(b"bar", File("/lost+found")) root.putChild(b"baz", File("/opt"))
Last, all that’s required is to create a
Site with the root
resource, associate it with a listening server port, and start the reactor:
factory = Site(root) endpoint = endpoints.TCP4ServerEndpoint(reactor, 8880) endpoint.listen(factory) reactor.run()
With this server running,
will serve a listing of files
from
/tmp , will
serve a listing of files from
/lost+found ,
and will serve a listing of
files from
/opt .
Here’s the whole example uninterrupted:
from twisted.web.server import Site from twisted.web.resource import Resource from twisted.internet import reactor, endpoints from twisted.web.static import File root = Resource() root.putChild(b"foo", File("/tmp")) root.putChild(b"bar", File("/lost+found")) root.putChild(b"baz", File("/opt")) factory = Site(root) endpoint = endpoints.TCP4ServerEndpoint(reactor, 8880) endpoint.listen(factory) reactor.run() | https://twistedmatrix.com/documents/current/web/howto/web-in-60/static-dispatch.html | CC-MAIN-2018-26 | refinedweb | 279 | 53.17 |
- Write a program in C++ to calculate the sum of natural numbers.
- Write a C++ program to find sum of numbers between 1 to N.
- WAP in C++ to find sum of consecutive numbers using for loop.
Natural Numbers are counting numbers, that is 1, 2, 3, 4, 5... etc. Remember, zero and negative numbers are not part of natural numbers. In this program, we will take a positive number N as input from user and find the sum of all natural numbers between 1 to N(including 1 and N).
Algorithm to calculate sum of natural numbers
- Take number of elements(let it be N) as input from user using cin.
- Using a for loop, read N number from user and add them to a sum variable.
- Print the sum of all numbers using cout.
C++ program o find sum of all numbers between 1 to N
/* * C++ Program to find sum of N Natural Numbers */ #include <iostream> using namespace std; int main(){ int N=0, num, i, sum=0; cout << "Enter the number of integers to add"; cin >> N; cout << "Enter " << N << " numbers seperated by space\n"; for(i= 0; i< N; i++){ cin >> num; sum = sum + num; } cout << "SUM = " << sum; return 0; }Output
Enter the number of integers to add: 6 Enter 5 numbers seperated by space 1 2 3 4 5 6 SUM = 21
Related Topics | http://www.techcrashcourse.com/2016/03/cpp-program-to-find-sum-natural-numbers.html | CC-MAIN-2017-17 | refinedweb | 230 | 65.66 |
In my previous aggregator post, I set up Feed and Post models to capture the core information about these items. We can now store the information from the aggregation process in a persistent location, and this can be used by some external program to view the aggregation of content. Now we just need to get the information from somewhere...
The core aggregation logic is quite simple:
- We'll fetch the feed.
- We'll parse it.
- We'll save various bits of feed information to the database.
- For each entry in the feed, either create or update the information in the database.
There are a few optimisations we can do there for various levels of winnage. For example, a pretty big win is that there's no point parsing the feed if we can be sure the feed hasn't changed. An even bigger win given the rate of change of individual feeds on the Internet, is that there's no point fetching the full feed if it hasn't changed - use of eTag and If-Modified-Since can save us from not only unnecessary work, but unnecessary traffic (important in South Africa) and just being a good Internet citizen. A small win is that we don't need to update the database if the entry in the feed hasn't changed.
The Universal Feed Parser is about the best feed parser out there. It has literally thousands of unit tests to ensure it handles the largely varying brokenness of feeds out there, as well as providing a really simple API for handling various syndication formats, without hiding the extra information that more comprehensive syndication formats may provide.
It'll both fetch and parse the feed, so for now it does the first two steps.
The goal is that the aggregator itself will take care of the tedious part of the aggregation process - the bit that does the parsing and standard aggregator logic in terms of when to update feeds and posts and when to exit out early. The actual storage of the posts and feed will differ from project to project, but that core logic is mostly unchanged. I decided to use plain old inheritance for this (Bryn, of course, suggested generic functions...).
I've mostly used a genericised version of the collector parts from FeedJack (a Django-based aggregator) and some ideas from Planet (Planet Venus, to be precise), so thanks to Gustavo Picón, Scott James Remnant, Jeff Waugh, Sam Ruby, and others that contributed to those that I have
stolen^Wborrowed from.
First, set up some options on initialisation - mostly setting up logging so that the calling code can get the sort of logging they'd like out of the application:
class Aggregator(object): def __init__(self, options = None): if not options: options = {} self.options = options self.user_agent = options.get('user_agent', 'aggregate2db/0.1') self.log = options.get('log', None) self.verbose = options.get('verbose', False) if not self.log: self.log = logging.getLogger("Aggregator/%d" % (hash(self),)) self.log.propagate = False if self.verbose: self.log.level = logging.DEBUG else: self.log.level = logging.WARNING self.log.addHandler(logging.StreamHandler(sys.stderr))
Then, the feed processing method is the (current) entry-point:
# Process a feed, updating the feed information and processing the # entries in the feed. # # Returns boolean True or False on whether the feed was updated. def process_feed(self, feed): self.log.info('Processing feed: %s', feed.feed_url) self.log.debug('Feed %s last checked at: %s', feed.feed_url, feed.last_checked) feed.last_checked = datetime.now() feedparser_options = { 'agent': self.user_agent, 'etag': feed.etag, 'modified': feed.last_modified.timetuple(), } parsed_data = feedparser.parse(feed.feed_url, **feedparser_options) if 'status' in parsed_data: self.log.info('Feed %s status: %s', feed.feed_url, parsed_data.status) # Hook for people to handle 301 (permanently moved) or 410 # (gone) response codes more effectively than just logging # about it. self.handleStatus(feed, parsed_data.status, parsed_data) if parsed_data.status == 301: self.log.warning('Feed %s permanently (301) moved to %s', feed.feed_url, parsed_data.href) if parsed_data.status == 302: self.log.debug('Feed %s temporarily (302) moved to %s', feed.feed_url, parsed_data.href) if parsed_data.status == 304: self.log.debug('Feed %s unchanged', feed.feed_url) return False if parsed_data.status == 410: self.log.error('Feed %s has gone away (410)', feed.feed_url) return False if parsed_data.status >= 400: self.log.warning('Feed %s has error status %s', feed.feed_url, parsed_data.status) return False # Update etag so that we don't have to download anything if we don't need to feed.etag = parsed_data.get('etag', '') feed.title = parsed_data.feed.get('title', '') feed.description = parsed_data.feed.get('tagline', '') feed.link = parsed_data.feed.get('link', '') self.log.debug('Feed title: %s', feed.title) self.log.debug('Feed description: %s', feed.description) self.log.debug('Feed link: %s', feed.link) # Get all the existing posts mentioned out of the database - # doing this in one go could be quite a bit more efficient than # doing it one at a time. posts = self.getExistingPostsForEntries(feed, parsed_data.entries) # Keep track of whether we've had any updates - that affects # whether we should forcibly updated last_modified if the feed # doesn't provide any last_modified data. updated = False for entry in parsed_data.entries: try: entry_updated = self.process_entry(feed, entry, posts, parsed_data) updated = updated or entry_updated except: self.log.warning("Entry %s could not be processed", entry.link) # TODO: Hard to debug errors in entry_updated if we just # swallow this error - need to have option to reraise # when in development # Figure out from the data when the feed was last modified, # potentially just using the feed last modified time, but maybe # checking individual entries in the list, or just making up a # time. last_modified = self.determineFeedLastModified(feed, parsed_data, updated) if last_modified: feed.last_modified = last_modified feed.save() return updated
Then, for each entry found in the feed processing, the process_entry method is called:
# Process a feed entry, creating or updating the entry information # # Returns boolean True or False on whether the entry was created/updated. def process_entry(self, feed, entry, posts, parsed_data): # This maps the entry data into a dictionary with keys the same # name as the attributes of a post object. postdata = self.get_entry_data(entry, parsed_data, feed) self.log.info('Considering entry: %s', postdata['link']) self.log.debug('%s - title: %s', postdata['link'], postdata['title']) self.log.debug('%s - guid: %s', postdata['link'], postdata['guid']) # If the post already exists, look it up from the pre-populated # posts using the guid. if postdata['guid'] in posts: post = posts[postdata['guid']] # entryChanged is a hook for potential customisation, as # the default method may not always be the most accurate for # a particular set of feeds. Generally, will check modified # time, if it exists, or compare the actual content. if not self.entryChanged(post, postdata, parsed_data): self.log.info("Entry %s exists, and is unchanged", postdata['link']) return False self.log.info("Entry %s exists, but is changed, updating", postdata['link']) if not postdata['date_modified']: postdata['date_modified'] = post.date_modified for k, v in postdata.items(): setattr(post, k, v) else: self.log.info("Entry %s is a new entry", postdata['link']) # Determine when this post was actually created - which, if # it is not explicitly in the entry, requires checking the # feed and/or headers or just putting the current time if # there's no better indicator postdata['date_modified'] = self.determineEntryLastModified(feed, postdata, entry, parsed_data) post = self.createPost(**postdata) post.save() # Hook for further processing of the entry - tags, updating # search data, &c. self.furtherProcessEntry(post, entry, parsed_data, feed, posts) # Entry was changed return True
There are a few calls to other methods on the object. This serves to keep the functions pretty short as well as allowing alternate implementations to be used. The model-creation/fetching methods (createPost and getPostsForGuids) also need to be provided to persist and retrieve objects. These should always be pretty small - here's for the models from last time:
def createPost(self, **kw): post = Post(**kw) post.save() return post def getPostsForGuids(self, feed, guids): for guid in guids: posts = Post.select((Post.c.feed_id == feed.feed_id) & (Post.c.guid == guid)) if not len(posts): continue yield posts[0]
In my next post, I'll have a download of the code in a usable form (still trying to figure more elegant ways to , and explore adding some extra features like tags and searching using furtherProcessEntry.
Feedparser is indeed quite neat, but I do have a problem or two with it. It would be really nice if it were a bit more lazy about parsing the feeds. Instead of just parsing the entire feed and returning a UserDict it'd be nice if it would return a generator for at least entries. That way it could incrementally parse the item entries in a post so that if you only need to handle and allocate memory for entries that you don't already have in your database. This is especially nice for when you have only a fixed (small) amount of memory you can use.
Why do you opt for options=None in the __init__ signature of the Aggregator class? Isn't this a great time to use **kwargs?
Would save you some lines of code and look a bit more elegant (at least to my eyes).
Thanks for the comments, Alexander. You're quite right on the options thing - I'm not really sure how I managed to miss using kwargs...
Yeah, everyone makes silly mistakes, thats why we should all share our code so we can fix them instead of hiding them behind shrink wrap :)
I notice you're doing a join with sqlalchemy by giving the join parameters (Post.c.feed_id == Feed.c.feed_id) you should checkout one of two things:
join(Klass1.table, Klass2.table).select() or select(and_(Klass1.join_to('backref_to_klass2'), Klass1.c.name == 'fred'))
The latter is really useful if you have a has_and_belongs_to_many() relationship since you don't have to know the names for the intermediary table that is used to do that relationship.
I'm just spamming you all night tonight :/
Anyway, really excited to see what comes of this project; I've recently worked on an aggregator myself (with a very different purpose though) so would like to see a different take on it.
Thanks again Alexander. In this case I'm just using the value in feed.feed_id, but I still learnt something from what you said about join_to. I can see a few places now where I can use it to make my life a lot simpler...
Another interesting article - thanks a lot. But since you're on the topic of RSS feeds, did you know that your feed doesn't look right in Google Reader? For whatever reason, it displays the HTML source instead of the rendered output. Perhaps the warnings from might give a clue?
I'm only pointing this out because I enjoy the blog. Keep up the good work :-)
Simon, I've improved the Atom feed, and it now passes [feedvalidator.org] the Feed Validator [feedvalidator.org]. Thanks for the notice and comment.
...and it looks great in Google Reader too. Thanks a lot! | http://nxsy.org/building-an-aggregator-part-two-core-aggregation-logic | crawl-002 | refinedweb | 1,851 | 57.57 |
I've had my share in doing strange things in weird situations and here are my strange results: we can get much more agility and put more payload onto the Internet Cloud if we switch from the Internet Zeppelins to the personal Internet Jetpacks. I'm no Steve or Bill, comfortable and blessed with the comprehension of that. I'm not after building a great company at all. But people, I'm flying in the Cloud in my Jetpack for a while now (skyshots), would love to share it like the Big Z did: "Hey, guys, the swell is happening over on the north shore!"
In more accurate terms the Cloud Tool I'm bragging about is many times more (not just more) efficient and easy to use then the frameworks of the Big Flyers like Microsoft and Zoho and Amazon and Google, whom I respect immensely. And my humble "apple" does pretty much everything the "oranges" do. Anyways, I took a few well known steps - just sideways (don't ask me why) - and cooked that CloudBurger(aka Datalator aka Jetpack). And here we are, please try the recipe.
1. Start with Declarative (vs. Imperative) Programming. Feel the Data supremacy over the Algorithms.
Replace those new(...) and add(...) directives with a table containing names, types and properties of UI controls when building an application's screens. Add a piece of code to utilize those declarations and issue new() and add() statements in a short standard loop. Add another piece of code - a visual designer to build those screens and save them to those tables.
2. Use Views to build the Application Data Model.
In a data driven application a visual representation of a Database Table - views and forms - usually contains enough information to recreate that Table. Add a piece of code generating a Database Table from already built application's screens (see above). Add another piece of code glueing the Views and the Table together and one more, interpreting the user input. Now you can build and serve a single Table on a standalone computer.
3. Visually define Table relations.
The one-to-many relation is a simple one; it powers such data structures as menu-submenus and department-employees kinds. Modify your visual designer to assign the property of a "kid-parent" type to a Database Table. The "parent" property of the Employees Table should point to the Departments and the "kid" property of the Department Table will point back. The same pair of properties works well to define relations in menu-submenu structures.
4. Define mechanics to navigate along and between related Tables.
The stereotype perception of "related tables" might be reflected in an SQL statement of the following type:
Replace that complex (to non-techie) statement with more "organic" approach: display the parent table and highlight the "current" parent row. Define some interface gestures to move the " current" row inside the Table. Define the "enter" gesture like: a click on the "current" row must take a user along the link by generating and executing that freaky SQL above - silently on the background. When ready, the "kid" table must be visualized. And only those "kid" records, which conform to the "WHERE..." clause, must be brought along. And that is cool - you have a simple pure visual system with no code to write and bugs to chase later. By defining the Table "relation" property (a link), employing an idea of a "View row cursor", synchronized with the "Table row cursor", and defining a "follow the link" action - you can build powerful runtime to navigate inside and along hierarchical data structures. Any hierarchical structures that is, all served by the standard interpreter and a few bytes of a "link" property.
5. Add functionality to work with the content of related Tables.
If the runtime remembers a parent record UID which was used to navigate to the "kid" Table, it might easily generate something like
The same goes for "UPDATE..." statements, when user(or API) uses a Table form. A gesture (or API) might commence a record deletion process, which will generate the only "DELETE..." statement if applied to the kid table, or many "deletes" - for all linked kid records and a parent record - if applied to the parent table. It is not difficult to use the technique on multilevel hierarchies, thus providing full CRUD functionality for visually designed customary hierarchical data structures. Click and connect - like Lego blocks.
6. Add functionality to support many-to-many Tables relations.
If you could convert user gestures into SQL statements with a simple WHERE clause - you can handle more complex ones where WHERE terms are connected with AND, OR and NOT predicates. Believe me it is much harder to come up with straightforward interface representation and gesture language to control many "parent" Tables with selected rows at the same time. And we need them all to create a complex query. If I did it you can do it too.
And it makes the magic work: all the Tables and all the relations are visually built, the interface objects namespace is separated from the Database namespace, all the CRUD operations are enabled by default, all the content always ACID as the runtime controls potentially ambiguous actions and all the business logic beyond default CRUD could be expressed in terms "load the Table(name)", "find the record where...", "get/set the record field(name)" and "save/delete the record". Plus you get the zero-maintenance application Database if the generation of SQL statements is bug-free. Plus you get the SaaS ready platform if your runtime collects stats about Tables usage. My does. And you can actually explain what you are doing to your 5th grader kid or your boss which is always impressive. I have no boss.
7. Make a stack of activated Tables visible.
When surfing along the links in the application database - say from Departments to Employees and further to Employee's Records - you must keep the stack of all "parent" Tables - because they define SQL statements parameters when fetching or altering the "kid" Table data. Now make that stack visible(like I did) or show a three of active records of active Tables (like Project Navigators do). Devote some screen space to help to navigate an application and The User will love it. Remember that even a complex application is limited and well structured unlike the World Wide Web, thus allowing the application map be shown and used.
8. Add support for different data types.
Keep data on the file system or in the Database, but allow your interface access all the different types. I managed to serve only Strings, Numbers, Dates, Times, Currencies, Texts, HTMLs and Images. It would be nice to be able to work with everything else as well - Flash, PDF, DOC, MP3 - you name it. Takes some more time though.
9. Separate the Interface and the Data Model.
Add a messaging layer between your Tables and their interface representation. By now you have keyboard/mouse/touch/kinect gestures defined to control the front end already. Detect such a gesture, code it and pass to the server. RMI, AJAX, SMTP, HTTP, JMS - any messaging technique will do. Mine works on plain TCP sockets. Nothing fancy here - the request-respond protocols are relatively simple and well studied.
At this time it might be nice to add some real-time collaboration to the system. You have to create feedback mechanics to push "server status changed" signals to every active client. And you might not want to open listening port(s) on the client side. Well, I used "long polling" with the client responsible for the lost connection healing.
With such a feedback line you can be sure that all the clients will be notified about the content or design changes. And that is up to you now how to react. I decided that relevant changes of the relevant Tables must be immediately reloaded asynchronously on the background.
That client-server architecture will work with or without messages encoding. I scramble messages just enough to scare away an occasional sniffer. But the system will not be consistent without transactions support and collisions marshalling. You will not want to delegate content locking to the Database of your choice, because your choice of the Database might be changed over time. If the system supposed to work with wide range of Databases - have your own locks: acquire the record lock to edit it, lock all the parent records to prevent them from deletion and lock the layer when you going to redesign it. Release the locks when appropriate.
Now cache everything cacheable on the client side before querying the server and on the server side before querying the Database. The Database takes care about caching queries results internally.
At last - and that's a big part - you have to think about improving the scalability. I did not - my present server is plain multithreaded and does not support clasterization. It will by the time I have applications with hundreds of simultaneous users.
10. Provide convenient API.
You have to pass control to relevant custom event handlers before and after processing interface events on the client side. You may want to call the custom code to process runtime originated events, like painting or not-enough-memory cries. You may also want to custom process the server events, like "content has been changed". In some cases you may want to change the way how the Table(s) visualized - think of a game programming. You will want for sure to add some convenience calls. And it all has to be easy to comprehend and employ.
Conclusion.
At this point you are already far ahead of all the competition. You have a light tool which replaces a whole bunch of technologies. You can build the application when talking to a customer and rebuild it without stopping the server. Your apps are reliable and consistent, easy to deploy and easy to use. You grow your library of styles and reusable Tables, you have your visually designed reports (just another View of the Table(s)) and you can import/export content in CSV format. Your server hosts unlimited number of applications and serves unlimited groups of unlimited users, collects SaaS statistics, does not crash and does not requires attention. Fast, reliable and cheap.
Could it be better? Indeed. Scalability, security, speed, cleaner implementation, support for more data types, more dressed up components-Tables (chat, videoconference, mail client, etc.), support for mobile platforms, API for different languages, IDEs integration (should Datalator be incorporated into IDE or IDE incorporated into Datalator?), Web Server integration (should Datalator be... ). It could be truly RESTful if the business logic is kept on the server - along with the Table content and definition - and loaded on runtime. Self-contained Table - a smart plug-and-play component, what could be better?
Sky is the limit. Prove me right.
The story is told and excitement is over, now comes the time of business development. But speaking about of sometimes strange ways of looking at things - there are a few more perpendicular questions to be answered one day:
Why the Text is the only feasible comprehensive representation of a computer program? Can we create a convenient 3D environment for immersive general programming by team, RPG style?
Why Claude Shannon alphabet is immutable in his famous Source Coding Theorem? Can the theoretical compression limitations be overcome by extending the alphabet?
Why Artificial Intelligence mainstream research studies the logic before recognition? What if we solve the mystery of the Tabula Rasa first?
Well, another stories for another time. I'll keep you posted.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/winning-cloud-race-ten-simple | CC-MAIN-2017-26 | refinedweb | 1,952 | 63.59 |
Problems writing to SD Card
Has anyone else had issues writing to SD on the stack? I can place a file on an sd card and the read the directory and read the file. I can read files that were previously written to the card manually, but writing has been a challenge. I constructed this simple example to test writing and it fails every time. I'm not sure what im doing wrong? (I used to have code to initialize the card, using the proper CS pin but have seen from other examples that it wasn't necesssary on the stack. Writing didn't work for me either way)
#include <M5Stack.h> void setup() { M5.begin(); Serial.begin(9600); char filename[] = "LOGGER00.CSV"; File logfile = SD.open(filename, FILE_WRITE); logfile.close(); if (SD.exists(filename)) { Serial.println("LOGGER00.CSV exists."); } else { Serial.println("LOGGER00.CSV doesn't exist."); } } void loop() { M5.update(); }
The following code should work provided you use a Micro SD / TF card that only has a few files on it.... too many and the display will run off the screen and you won't see the write to card and read back.
#include <M5Stack.h> //Micro SD / TF Card Test void listDir(fs::FS &fs, const char * dirname, uint8_t levels){ // Print blank line on screen M5.Lcd.printf(" \n "); M5.Lcd.printf(" \n "); Serial.printf("Listing directory: %s\n", dirname); M5.Lcd.printf("Listing directory: %s\n", dirname); File root = fs.open(dirname); if(!root){ Serial.println("Failed to open directory"); M5.Lcd.println("Failed to open directory"); return; } if(!root.isDirectory()){ Serial.println("Not a directory"); M5.Lcd.println("Not a directory"); return; } File file = root.openNextFile(); while(file){ if(file.isDirectory()){ Serial.print(" DIR : "); M5.Lcd.print(" DIR : "); Serial.println(file.name()); M5.Lcd.println(file.name()); if(levels){ listDir(fs, file.name(), levels -1); } } else { Serial.print(" FILE: "); M5.Lcd.print(" FILE: "); Serial.print(file.name()); M5.Lcd.print(file.name()); Serial.print(" SIZE: "); M5.Lcd.print(" SIZE: "); Serial.println(file.size()); M5.Lcd.println(file.size()); } file = root.openNextFile(); } } void readFile(fs::FS &fs, const char * path) { Serial.printf("Reading file: %s\n", path); M5.Lcd.printf("Reading file: %s\n", path); File file = fs.open(path); if(!file){ Serial.println("Failed to open file for reading"); M5.Lcd.println("Failed to open file for reading"); return; } Serial.print("Read from file: "); M5.Lcd.print("Read from file: "); while(file.available()){ int ch = file.read(); Serial.write(ch); M5.Lcd.write(ch); } } void writeFile(fs::FS &fs, const char * path, const char * message){ Serial.printf("Writing file: %s\n", path); M5.Lcd.printf("Writing file: %s\n", path); File file = fs.open(path, FILE_WRITE); if(!file){ Serial.println("Failed to open file for writing"); M5.Lcd.println("Failed to open file for writing"); return; } if(file.print(message)){ Serial.println("File written"); M5.Lcd.println("File written"); } else { Serial.println("Write failed"); M5.Lcd.println("Write failed"); } } // the setup routine runs once when M5Stack starts up void setup() { // initialize the M5Stack object M5.begin(); M5.startupLogo(); Wire.begin(); // Lcd display M5.Lcd.setBrightness(100); M5.Lcd.fillScreen(BLACK); M5.Lcd.setCursor(0, 10); M5.Lcd.setTextColor(WHITE); M5.Lcd.setTextSize(1); // Page Header M5.Lcd.fillScreen(BLACK); M5.Lcd.setCursor(0, 05); M5.Lcd.printf(" Testing Micro SD Card Functions:\r\n"); // digitalWrite(TFT_CS, 1); // Print blank line on screen M5.Lcd.printf(" \n "); M5.Lcd.printf(" \n "); listDir(SD, "/", 0); M5.Lcd.printf(""); M5.Lcd.printf(""); // Print blank line on screen M5.Lcd.printf(" \n "); M5.Lcd.printf(" \n "); writeFile(SD, "/hello.txt", "Hello world from M5Stack !!"); M5.Lcd.printf(""); M5.Lcd.printf(""); // Print blank line on screen M5.Lcd.printf(" \n "); M5.Lcd.printf(" \n "); // Print blank line on screen M5.Lcd.printf(" \n "); M5.Lcd.printf(" \n "); readFile(SD, "/hello.txt"); } void loop() { // put your main code here, to run repeatedly: M5.update(); }
Thank you!
I had started with those functions, but my function that retrieved data from the serial port was sending back a string and all these SD card functions require a Const Char * and I didn't have the skills yet to compensate.
Ultimately, it came down to the path. I was using "\filename.ext" and the SD card functions require "/filename.ext." Instead of giving me a nice soft error message like "bad file path" or something, it just compiles and does nothing.
I caught the difference when I was comparing your code to mine. Thanks for the assistance!
@vsthose No problem ! Would be great if the Arduino IDE could identify tiny errors like this.... We wait !
Thank you ! It helps !
Hi Imro, no problem... glad it helped.
That code is taken from the FactoryTest example that is part of the M5Stack library....
The code in the examples is a useful source for learning how to make things happen on the M5Stack. | http://forum.m5stack.com/topic/184/problems-writing-to-sd-card | CC-MAIN-2019-51 | refinedweb | 816 | 55.71 |
![if !(IE 9)]> <![endif]>
I read a post recently about a check of the LibRaw project performed by Coverity SCAN. It stated that nothing interesting had been found. So I decided to try our analyzer PVS-Studio on it.
LibRaw is a library for reading RAW-files obtained from digital photo cameras (CRW/CR2, NEF, RAF, DNG, and others). Website:
The article that induced me to check the project with PVS-Studio can be found here: "On Static Analysis of C++" (RU). Let me quote a short extract from the preface part of the article:
Coverity SCAN: 107 warnings, about a third of which are of High Impact level.
Out of the High Impact level warnings:
About 10 referring to Microsoft STL
Others are of the following kind:
int variable; if(layout==Layout1) variable=value1; if(layout==Layout2) variable=value2;
On this code, I got a warning saying about carelessness in the code - an uninitialized variable. I agree with it as such, because it's no good doing things like that. But in real life, there are two types of layout - and this is explicitly specified in the calling code. That is, the analyzer has enough data to figure out that this is not a 'High impact' error but just a carelessly written code.
Some warnings like about unsigned short when extending to 32-64 bits may be painful. I won't argue with that - the analyzer is right de jure, but de facto these unsigned short's store picture sizes which are not going to grow up to 32767 in the near future.
That is, again, no fixing is needed - in this particular case.
All the rest 'High Impact' issues are simply false positives. Well, the code is not perfect, that's right (I wish you could see it from dcraw !), but all the issues found are not bugs.
Now let's see if the PVS-Studio analyzer managed to find anything worthy after Coverity. I didn't have any expectations of catching super-bugs, of course, but it still was interesting to try.
PVS-Studio generated 46 general warnings (of the first and second severity levels).
Welcome to have a look at the code fragments I found interesting.
void DHT::hide_hots() { .... for (int k = -2; k < 3; k += 2) for (int m = -2; m < 3; m += 2) if (m == 0 && m == 0) continue; else avg += nraw[nr_offset(y + k, x + m)][kc]; .... }
PVS-Studio's warning: V501 There are identical sub-expressions to the left and to the right of the '&&' operator: m == 0 && m == 0 dht_demosaic.cpp 260
I guess we are dealing with a typo here. The check should probably look like this:
if (k == 0 && m == 0)
A similar fragment can be found in the file aahd_demosaic.cpp (line 199).
int main(int argc, char *argv[]) { int ret; .... if( (ret = RawProcessor.open_buffer(iobuffer,st.st_size) != LIBRAW_SUCCESS)) { fprintf(stderr,"Cannot open_buffer %s: %s\n", argv[arg],libraw_strerror(ret)); free(iobuffer); continue; } .... }
PVS-Studio's warning: V593 Consider reviewing the expression of the 'A = B != C' kind. The expression is calculated as following: 'A = (B != C)'. dcraw_emu.cpp 468
This is a bug related to operation priorities. The "RawProcessor.open_buffer(iobuffer,st.st_size) != LIBRAW_SUCCESS" comparison is executed first, then its result is written into the 'ret' variable. If an error occurs, it will cause printing of an incorrect error code into the file. This is not a crucial defect, but it is still worth mentioning.
unsigned CLASS pana_bits (int nbits) { .... return (buf[byte] | buf[byte+1] << 8) >> (vbits & 7) & ~(-1 << nbits); .... }
PVS-Studio's warning: V610 Undefined behavior. Check the shift operator '<<. The left operand '-1' is negative. dcraw_common.cpp 1827
Shifting negative numbers causes undefined behavior. Programmers often use such tricks and get the program pretend to work well. But actually you cannot rely on such a code. To learn more about negative number shifts, see the article "Wade not in unknown waters. Part three".
Similar issues can be found in the following fragments:
void DHT::illustrate_dline(int i) { .... int l = ndir[nr_offset(y, x)] & 8; l >>= 3; l = 1; .... }
PVS-Studio's warning: V519 The 'l' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 671, 672. dht_demosaic.cpp 672
Perhaps this is not a bug and the "l = 1" statement is written deliberately. But the code does look suspicious.
Here's one more suspicious piece of code:
void CLASS identify() { .... if (!load_raw && (maximum = 0xfff)) .... }
PVS-Studio's warning: V560 A part of conditional expression is always true: ((imgdata.color.maximum) = 0xfff). dcraw_common.cpp 8496
Both analyzers have found very few defects. It's natural for small projects. However, it was an interesting experiment checking LibRaw. ... | https://www.viva64.com/en/b/0233/ | CC-MAIN-2018-17 | refinedweb | 785 | 66.94 |
I’ve got a sort-of meta-application that I’m building in Rails 3 for a client. The core of the application is a framework on which we build various “Programs” that a patient can participate in. On the Patient Profile screen, a doctor or other medical professional has the ability to invite patients to participate in a given program. Once a patient has been invited to participate in that program, they will see a program dashboard on their profile page. The doctors and nurses will also see the dashboard.
Here’s the mock up of what this looks like:
Each program has the potential for a very complicated and customized set of actions and workflows within it, all driven through the program dashboard on the patient’s profile.
A Problem Of Coupling
The problem that I was about to run into on this particular requirement, was keeping the code for the dashboard clean and keeping it out of the controller for the patient profile. Sure, we can load up a list of programs that a patient is participating in, but beyond that, it should be the responsibility of the program dashboard to figure out what’s going on, not the patient profile controller or view.
We could have solved this with a simple partial for the program. In the most basic of scenarios this would have worked nicely. However, the functionality that we need on this first program dashboard is anything but “basic”. We have a lot of data to load and sort, filter and prepare, and otherwise work with before we can render the view.
In the end, I did not want to do was clutter up a partial for the dashboard with a bunch of code to load and prep data, or create a bunch of helper methods in the patient profile helper module to do the same. The above options would create far too much coupling between the profile and the programs, which would destroy the ability for this system to be extended with additional programs in the future.
Cells: A Solution For “Partial Controllers”
Not liking the options I was faced with, I asked twitter if there was anything like a “Partial Controller” in Rails 3. What I wanted was a controller that would execute for a given partial. For example, if I had a _my_program_partial.html.haml, I wanted a my_program_controller.rb to go along with it. Ben Scheirman (@subdigital) pointed me to a nice little ruby gem that provides exactly the functionality I’m looking for:
Cells – Compnents for Rails
Directly from the Cells website:
Cells look like controllers. They are super fast. They provide clean encapsulation. They are reuseable across your project. They can be plugged anywhere in your views or controllers.
Call them “mini controllers” . But hey, they are faster!
Call them “partials on steroids” . But wait, they are object-oriented instances, not a loose helper+partial thing.
Implementing The First Program’s Dashboard As A Cell
This was surprisingly easy, and I’m really happy with the result. After installing the cells gem into my app, I ran this from the command line:
rails g cell SomePatientProgram display
This tells the generator to build a cell for SomePatientProgram with a controller method and view called display. The generator created a set of files for me in a new folder called app/cells
app/cells/ app/cells/some_patient_program_cell.rb app/cells/some_patient_program/ app/cells/some_patient_program/display.html.erb
I had to change the display.html.erb to haml, but it works just fine. Once I had all this in place, I needed to pass the PatientProgram in question (the assignment of a Patient to a Program) to cell. From the patient profile view, I made this call:
- @patient_programs.each do |patient_program| render_cell :some_patient_program, :display, :patient_program => patient_program
Then in the display method of the cell’s controller, I grab the patient program from the supplied options and turn it into the pieces that I need, as instance variables.
class SomePatientProgramCell < Cell::Rails def display @patient_program = options[:patient_program] @program = @patient_program.program @patient = @patient_program.patient # ... lots of other data manipulation here render end end
Then in the display.html.haml view, I set up the various layout elements that I need. I’m still working on the final layout, but this is what I have so far:
#some-program %h2 Some Program Title Here %p A long description of the program should go here [...] %fieldset %legend Assessments assessment stuff goes here. %fieldset %legend Panels Panel stuff goes here. %fieldset %legend Follow-up Actions Follow-up stuff goes here.
Once I had that in place, I was able to view the patient profile, invite the patient to the program, and have the program’s cell displayed as I needed:
Note that the bottom section of this screen shot is the actual cell, in place.
A Small Problem And A Workaround: Content_For Doesn’t Work
There is one little “bug” that is annoying me a bit. Due to a bug or design issue or something in Rails 3, calling “content_for” does not work from Cell views. In my case, I wanted to provide a style sheet for the cell’s view and only include it in the header of the page when the cell’s view is rendered. This is important because every “Program” will need it’s own styling for the dashboard, and I can’t put all of the styling for every program in a single stylesheet.
After a few answers from the #cells irc chat room turned up nothing, I found a work around via the content_for bug on the cells github page. At the bottom of the comments, ahmeij mentions using a helper method to put the content of a cell into a named content area. I only want to put the css file for a cell into my :css area, but the principle of what he said helped me come up with a simple helper method for my patients controller:
def render_program_cell(*args) cell_name = args[0] args[0] = cell_name.to_s + "_patient_program" content_for :css do stylesheet_link_tag "#{cell_name}/patient_program" end render_cell *args end
This method is pretty simple, but it does a few things for me. It assumes a naming convention for my programs and the related cells. The first argument is the name of the program, minus “_patient_program”, which is used to figure out where the css file is and use generate the full name of the cell (re-adding the “_patient_program”). Then I call out to the content_for :css and supply the right stylesheet name, again using a convent of having a stylesheet named “(something)_patient_program”. And lastly, I render the actual cell by passing all the params to the Cells helper method.
From there, I can call my method in the patient profile view the same as I was calling the original render_cell method:
- @patient_programs.each do |patient_program| render_program_cell :some_patient_program, :display, :patient_program => patient_program
Now it produces the cell’s view and it puts the cell’s css in the header of the page, the way I want.
A Clean Solution
I generally like the solution that I found. It works well, is simple to work with, etc. Even with the helper method that I had to create, to wrap around my need for css, it’s still a much better solution than what I would have been able to accomplish with standard Rails controllers, views and partials.
Get The Best JavaScript Secrets!
Get the trade secrets of JavaScript pros, and the insider info you need to advance your career!
Post Footer automatically generated by Add Post Footer Plugin for wordpress. | http://lostechies.com/derickbailey/2011/04/11/cells-partial-controllers-and-views-for-rails-3/ | CC-MAIN-2014-52 | refinedweb | 1,271 | 61.26 |
Now I'm meant to implement a function to "doublify" a linked list. That is, a single linked list with the structure only containing a next pointer is to be modified so that it will have a pointer to "previous" as well. And to take a single linked list and implement the previous pointer.
I did it without looking at the solutions posted by my lecturer, and after I finished, I looked the lecturer's code. I got rather confused, my code was a rather simple (well it seemed like it) bit of code to do, it's just the bit in red, the following bit is just testing to see if it was done. While my lecturer's was a nightmare to follow
Now here's the lecturer's solution:Now here's the lecturer's solution:Code:
link doublify (link l) {
char c;
link current = l;
link temp = l;
while(current->next != NULL) {
current = current->next;
current->prev = temp;
temp = temp->next;
}
current = l;
while ((c = getchar()) != EOF) {
switch (c) {
case 'n':
if(current->next != NULL) {
current = current->next;
printf("%c\n", current->item);
}
else if(current->next == NULL) {
printf("No next node, going back to head\n");
current = l;
}
break;
case 'p':
if(current->prev != NULL) {
current = current->prev;
printf("%c\n", current->item);
}
else if(current->prev == NULL) {
printf("No prev node, going back to head\n");
current = l;
}
break;
case 'e':
return 0;
break;
}
}
return (l);
}
Whilst I am sure I could understand it if I spent 30 minutes drawing up his method, since my method seemed to work for me, I don't see why his method is so complicated? Is my method wrong?Whilst I am sure I could understand it if I spent 30 minutes drawing up his method, since my method seemed to work for me, I don't see why his method is so complicated? Is my method wrong?Code:
link doublify (link l)
{
link t,
new = NULL,
nextPrev = NULL;
link *linkPtr = &new;
while (l != NULL) {
t = (link) malloc (sizeof *t);
t->item = l->item;
t->prev = nextPrev;
t->next = NULL;
nextPrev = t;
*linkPtr = t;
linkPtr = &(t->next);
l = l->next;
}
return new;
} | http://cboard.cprogramming.com/c-programming/104102-im-confused-about-link-lists-again-printable-thread.html | CC-MAIN-2016-26 | refinedweb | 363 | 66.88 |
Here in Part V of the Guide, we'll venture into the world of property sheets. When you bring up the properties for a file system object, Explorer shows a property sheet with a tab labeled General. The shell lets us add pages to the property sheet by using a type of shell extension called a property sheet handler.
This article assumes that you understand the basics of shell extensions, and are familiar with the STL collection classes. If you need a refresher on STL, you should read Part II, since the same techniques will be used in this article.
Remember that VC 7 (and probably VC 8) users will need to change some settings before compiling. See the README section in Part I for the details.
Everyone is familiar with Explorer's properties dialogs. More specifically, they are property sheets that contain one or more pages. Each property sheet has a General tab that lists the full path, modified date, and other various stuff. Explorer lets us add our own pages to the property sheets, using a property sheet handler extension. A property sheet handler can also add or replace pages in certain Control Panel applets, but that topic will not be covered here. See my article Adding Custom Pages to Control Panel Applets to learn more about extending applets.
This article presents an extension that lets you modify the created, accessed, and modified times for a file
right from its properties dialog. I will do all property page handling in straight SDK calls, without MFC or ATL.
I haven't tried using an MFC or WTL property page object in an extension; doing so may be tricky because the shell
expects to receive a handle to the sheet (an
HPROPSHEETPAGE), and MFC hides this detail in the
CPropertyPage
implementation.
If you bring up the properties for a .URL file (an Internet shortcut), you can see property sheet handlers in action. The CodeProject tab is a sneak peek at this article's extension. The Web Document tab shows an extension installed by IE.
You should be familiar with the set-up steps now, so I'll skip the instructions for going through the VC wizards.
If you're following along in the wizards, make a new ATL COM app called FileTime, with a C++ implementation
class
CFileTimeShlExt.
Since a property sheet handler operates on all selected files at once, it uses
IShellExtInit as
its initialization interface. We'll need to add
IShellExtInit to the list of interfaces that
CFileTimeShlExt
implements. Again, this should be familiar to you, so I will not repeat the steps here.
The class will also need a list of strings to hold the names of the selected files.
typedef list< basic_string<TCHAR> > string_list; protected: string_list m_lsFiles;
The
Initialize() method will do the same thing as Part II - read in the names of the selected file
and store them in the string list. Here's the beginning of the function:
STDMETHODIMP CFileTimeShlExt::Initialize ( LPCITEMIDLIST pidlFolder, LPDATAOBJECT pDataObj, HKEY hProgID ) { TCHAR szFile[MAX_PATH]; UINT uNumFiles; HDROP hdrop; FORMATETC etc = { CF_HDROP, NULL, DVASPECT_CONTENT, -1, TYMED_HGLOBAL }; STGMEDIUM stg; INITCOMMONCONTROLSEX iccex = { sizeof(INITCOMMONCONTROLSEX), ICC_DATE_CLASSES }; // Init the common controls. InitCommonControlsEx ( &iccex );
We initialize the common controls because our page will use the date/time picker (DTP) control. Next we do all
the mucking about with the
IDataObject interface and get an
HDROP handle for enumerating
the selected files.
// Read the list of items from the data object. They're stored in HDROP // form, so just get the HDROP handle and then use the drag 'n' drop APIs // on it. if ( FAILED( pDataObj->GetData ( &etc, &stg ) )) return E_INVALIDARG; // Get an HDROP handle. hdrop = (HDROP) GlobalLock ( stg.hGlobal ); if ( NULL == hdrop ) { ReleaseStgMedium ( &stg ); return E_INVALIDARG; } // Determine how many files are involved in this operation. uNumFiles = DragQueryFile ( hdrop, 0xFFFFFFFF, NULL, 0 );
Next comes the loop that actually enumerates through the selected files. This extension will only operate on files, not directories, so any directories we come across are ignored.
for ( UINT uFile = 0; uFile < uNumFiles; uFile++ ) { // Get the next filename. if ( 0 == DragQueryFile ( hdrop, uFile, szFile, MAX_PATH ) ) continue; // Skip over directories. We *could* handle directories, since they // keep the creation time/date, but I'm just choosing not to do so // in this example. if ( PathIsDirectory ( szFile ) ) continue; // Add the filename to our list of files to act on. m_lsFiles.push_back ( szFile ); } // end for // Release resources. GlobalUnlock ( stg.hGlobal ); ReleaseStgMedium ( &stg );
The code that enumerates the filenames is the same as before, but there's also something new here. A property
sheet has a limit on the number of pages it can have, defined as the constant
MAXPROPPAGES in prsht.h.
Each file will get its own page, so if our list has more than
MAXPROPPAGES files, it gets truncated
so its size is
MAXPROPPAGES. (Even though
MAXPROPPAGES is currently 100, the property
sheet will not display that many tabs. It maxes out at around 34.)
// Check how many files were selected. If the number is greater than the // maximum number of property pages, truncate our list. if ( m_lsFiles.size() > MAXPROPPAGES ) m_lsFiles.resize ( MAXPROPPAGES ); // If we found any files we can work with, return S_OK. Otherwise, // return E_FAIL so we don't get called again for this right-click // operation. return (m_lsFiles.size() > 0) ? S_OK : E_FAIL; }
If
Initialize() returns
S_OK, Explorer queries for a new interface,
IShellPropSheetExt.
IShellPropSheetExt is quite simple, with only one method that requires an implementation. To add
IShellPropSheetExt
to our class, open FileTimeShlExt.h and add the lines listed here in bold:
class CFileTimeShlExt : public CComObjectRootEx<CComSingleThreadModel>, public CComCoClass<CFileTimeShlExt, &CLSID_FileTimeShlExt>, public IShellExtInit, public IShellPropSheetExt { BEGIN_COM_MAP(CFileTimeShlExt) COM_INTERFACE_ENTRY(IShellExtInit) COM_INTERFACE_ENTRY(IShellPropSheetExt) END_COM_MAP() public: // IShellPropSheetExt STDMETHODIMP AddPages(LPFNADDPROPSHEETPAGE, LPARAM); STDMETHODIMP ReplacePage(UINT, LPFNADDPROPSHEETPAGE, LPARAM) { return E_NOTIMPL; }
The
AddPages() method is the one we'll implement.
ReplacePage() is only used by extensions
that replace pages in Control Panel applets, so we do not need to implement it here. Explorer calls our
AddPages()
function to let us add pages to the property sheet that Explorer sets up.
The parameters to
AddPages() are a function pointer and an
LPARAM, both of which are
used only by the shell.
lpfnAddPageProc points to a function inside the shell that we call to actually
add the pages.
lParam is some mysterious value that's important to the shell. We don't mess with it,
we just pass it right back to the
lpfnAddPageProc function.
STDMETHODIMP CFileTimeShlExt::AddPages ( LPFNADDPROPSHEETPAGE lpfnAddPageProc, LPARAM lParam ) { PROPSHEETPAGE psp; HPROPSHEETPAGE hPage; TCHAR szPageTitle [MAX_PATH]; string_list::const_iterator it, itEnd; for ( it = m_lsFiles.begin(), itEnd = m_lsFiles.end(); it != itEnd; it++ ) { // 'it' points at the next filename. Allocate a new copy of the string // that the page will own. LPCTSTR szFile = _tcsdup ( it->c_str() );
The first thing we do is make a copy of the filename. The reason for this is explained below.
The next step is to create a string to go in our page's tab. The string will be the filename, without the extension. Additionally, the string will be truncated if it's longer than 24 characters. This is totally arbitrary; I chose 24 because it looked good to me. There should be some limit, to prevent the name from running off the end of the tab.
// Strip the path and extension from the filename - this will be the // page title. The name is truncated at 24 chars so it fits on the tab. lstrcpyn ( szPageTitle, it->c_str(), MAX_PATH ); PathStripPath ( szPageTitle ); PathRemoveExtension ( szPageTitle ); szPageTitle[24] = '\0';
Since we're using straight SDK calls to do the property page, we'll have to get our hands dirty with a
PROPSHEETPAGE
struct. Here's the setup for the struct:
psp.dwSize = sizeof(PROPSHEETPAGE); psp.dwFlags = PSP_USEREFPARENT | PSP_USETITLE | PSP_USEICONID | PSP_USECALLBACK; psp.hInstance = _Module.GetResourceInstance(); psp.pszTemplate = MAKEINTRESOURCE(IDD_FILETIME_PROPPAGE); psp.pszIcon = MAKEINTRESOURCE(IDI_TAB_ICON); psp.pszTitle = szPageTitle; psp.pfnDlgProc = PropPageDlgProc; psp.lParam = (LPARAM) szFile; psp.pfnCallback = PropPageCallbackProc; psp.pcRefParent = (UINT*) &_Module.m_nLockCnt;
There are a few important details here that we must pay attention to for the extension to work correctly:
pszIconmember is set to the resource ID of a 16x16 icon, which will be displayed in the tab. Having an icon is optional, of course, but I added an icon to make our page stand out.
pfnDlgProcmember is set to the address of the dialog proc of our page.
lParammember is set to
szFile, which is a copy of the filename the page is associated with.
pfnCallbackmember is set to the address of a callback function that gets called when the page is created and destroyed. The role of this function will be explained later.
pcRefParentmember is set to the address of a member variable inherited from
CComModule. This variable is the lock count of the DLL. The shell increments this count when the property sheet is displayed, to keep our DLL in memory while the sheet is open. The count will be decremented after the sheet is destroyed.
Having set up that struct, we call the API to create the property page.
hPage = CreatePropertySheetPage ( &psp );
If that succeeds, we call the shell's callback function which adds the newly-created page to the property sheet.
The callback returns a
BOOL indicating success or failure. If it fails, we destroy the page.
if ( NULL != hPage ) { // Call the shell's callback function, so it adds the page to // the property sheet. if ( !lpfnAddPageProc ( hPage, lParam ) ) DestroyPropertySheetPage ( hPage ); } } // end for return S_OK; }
Time to deliver on my promise to explain about the duplicate string. The duplicate is needed because after
AddPages()
returns, the shell releases its
IShellPropSheetExt interface, which in turn destroys the
CFileTimeShlExt
object. That means that the property page's dialog proc can't access the
m_lsFiles member of
CFileTimeShlExt.
My solution was to make a copy of each filename, and pass a pointer to that copy to the page. The page owns
that memory, and is responsible for freeing it. If there is more than one selected file, each page gets a copy
of the filename it is associated with. The memory is freed in the
PropPageCallbackProc function, shown
later. This line in
AddPages():
psp.lParam = (LPARAM) szFile;
is the important one. It stores the pointer in the
PROPSHEETPAGE struct, and makes it available
to the page's dialog proc.
Now, on to the property page itself. Here's what the new page looks like. Keep this picture in mind while you're reading over the explanation of how the page works.
Notice there is no last accessed time control. FAT only keeps the last accessed date. Other file systems keep the time, but I have not implemented logic to check the file system. The time will always be stored as 12 midnight if the file system supports the last accessed time field.
The page has two callback functions and two message handlers. These prototypes go at the top of FileTimeShlExt.cpp:
BOOL CALLBACK PropPageDlgProc ( HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam ); UINT CALLBACK PropPageCallbackProc ( HWND hwnd, UINT uMsg, LPPROPSHEETPAGE ppsp ); BOOL OnInitDialog ( HWND hwnd, LPARAM lParam ); BOOL OnApply ( HWND hwnd, PSHNOTIFY* phdr );
The dialog proc is pretty simple. It handles three messages:
WM_INITDIALOG,
PSN_APPLY,
and
DTN_DATETIMECHANGE. Here's the
WM_INITDIALOG part:
BOOL CALLBACK PropPageDlgProc ( HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam ) { BOOL bRet = FALSE; switch ( uMsg ) { case WM_INITDIALOG: bRet = OnInitDialog ( hwnd, lParam ); break;
OnInitDialog() is explained later. Next up is
PSN_APPLY, which is sent if the user
clicks the OK or Apply button.
case WM_NOTIFY: { NMHDR* phdr = (NMHDR*) lParam; switch ( phdr->code ) { case PSN_APPLY: bRet = OnApply ( hwnd, (PSHNOTIFY*) phdr ); break;
And finally,
DTN_DATETIMECHANGE. This one is simple - we just enable the Apply button by sending
a message to the property sheet (which is the parent window of our page).
case DTN_DATETIMECHANGE: // If the user changes any of the DTP controls, enable // the Apply button. SendMessage ( GetParent(hwnd), PSM_CHANGED, (WPARAM) hwnd, 0 ); break; } // end switch } // end case WM_NOTIFY break; } // end switch return bRet; }
So far, so good. The other callback function is called when the page is created or destroyed. We only care about
the latter case, since it's when we can free the duplicate string that was created back in
AddPages().
The
ppsp parameter points at the
PROPSHEETPAGE struct used to create the page, and the
lParam member still points at the duplicate string which must be freed.
UINT CALLBACK PropPageCallbackProc ( HWND hwnd, UINT uMsg, LPPROPSHEETPAGE ppsp ) { if ( PSPCB_RELEASE == uMsg ) free ( (void*) ppsp->lParam ); return 1; }
The function always returns 1 because when the function is called during the creation of the page, it can prevent the page from being created by returning 0. Returning 1 lets the page be created normally. The return value is ignored when the function is called when the page is destroyed.
A lot of important stuff happens in
OnInitDialog(). The
lParam parameter again points
to the
PROPSHEETPAGE struct used to create this page. Its
lParam member points
to that ever-present filename. Since we need to have access to that filename in the
OnApply() function,
we save the pointer using
SetWindowLong().
BOOL OnInitDialog ( HWND hwnd, LPARAM lParam ) { PROPSHEETPAGE* ppsp = (PROPSHEETPAGE*) lParam; LPCTSTR szFile = (LPCTSTR) ppsp->lParam; HANDLE hFind; WIN32_FIND_DATA rFind; // Store the filename in this window's user data area, for later use. SetWindowLong ( hwnd, GWL_USERDATA, (LONG) szFile );
Next, we get the file's created, modified, and accessed times using
FindFirstFile(). If that succeeds,
the DTP controls are initialized with the right data.
hFind = FindFirstFile ( szFile, &rFind ); if ( INVALID_HANDLE_VALUE != hFind ) { // Initialize the DTP controls. SetDTPCtrl ( hwnd, IDC_MODIFIED_DATE, IDC_MODIFIED_TIME, &rFind.ftLastWriteTime ); SetDTPCtrl ( hwnd, IDC_ACCESSED_DATE, 0, &rFind.ftLastAccessTime ); SetDTPCtrl ( hwnd, IDC_CREATED_DATE, IDC_CREATED_TIME, &rFind.ftCreationTime ); FindClose ( hFind ); }
SetDTPCtrl() is a utility function that sets the contents of the DTP controls. You can find the
code at the end of FileTimeShlExt.cpp.
As an added touch, the full path to the file is shown in the static control at the top of the page.
PathSetDlgItemPath ( hwnd, IDC_FILENAME, szFile ); return FALSE; }
The
OnApply() handler does the opposite - it reads the DTP controls and modifies the file's created,
modified, and accessed times. The first step is to retrieve the filename pointer by using
GetWindowLong()
and open the file for writing.
BOOL OnApply ( HWND hwnd, PSHNOTIFY* phdr ) { LPCTSTR szFile = (LPCTSTR) GetWindowLong ( hwnd, GWL_USERDATA ); HANDLE hFile; FILETIME ftModified, ftAccessed, ftCreated; // Open the file. hFile = CreateFile ( szFile, GENERIC_WRITE, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL );
If we can open the file, we read the DTP controls and write the times back to the file.
ReadDTPCtrl()
is the counterpart of
SetDTPCtrl().
if ( INVALID_HANDLE_VALUE != hFile ) { // Retrieve the dates/times from the DTP controls. ReadDTPCtrl ( hwnd, IDC_MODIFIED_DATE, IDC_MODIFIED_TIME, &ftModified ); ReadDTPCtrl ( hwnd, IDC_ACCESSED_DATE, 0, &ftAccessed ); ReadDTPCtrl ( hwnd, IDC_CREATED_DATE, IDC_CREATED_TIME, &ftCreated ); // Change the file's created, accessed, and last modified times. SetFileTime ( hFile, &ftCreated, &ftAccessed, &ftModified ); CloseHandle ( hFile ); } else // <<Error handling omitted>> // Return PSNRET_NOERROR to allow the sheet to close if the user clicked OK. SetWindowLong ( hwnd, DWL_MSGRESULT, PSNRET_NOERROR ); return TRUE; }
Registering a drag and drop handler is similar to registering a context menu extension. The handler can be invoked
for a particular file type, for example all text files. This extension works on any file, so we register
it under the
HKEY_CLASSES_ROOT\* key. Here's the RGS script to register the extension:
HKCR { NoRemove * { NoRemove shellex { NoRemove PropertySheetHandlers { {3FCEF010-09A4-11D4-8D3B-D12F9D3D8B02} } } } }
You might notice that the extension's GUID is the stored as the name of a registry key here, instead of a string value. The documentation and books I've looked at conflict on the correct naming convention, although during my brief testing, both ways worked. I have decided to go with the way Dino Esposito's book (Visual C++ Windows Shell Programming) does it, and put the GUID in the name of the registry key.
As always, on NT-based OSes, we need to add our extension to the list of "approved" extensions. The
code to do this is in the
DllRegisterServer() and
DllUnregisterServer() functions in
the sample project.
Coming up in Part VI, we'll see another new type of extension, the drop handler, which is invoked when shell objects are dropped onto a file..
April 8, 2000: Article first published.
June 6, 2000: Something updated. ;)
May 25, 2006: Updated to cover changes in VC 7.1, cleaned up code snippets, sample project gets themed on XP.
Series Navigation: « Part IV | Part VI »
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/shell/shellextguide5.aspx | crawl-002 | refinedweb | 2,715 | 55.95 |
I'm making an event posting app, but it's not saved in the database after implementing the tag function.Reference code
Event controller
def index @tag_list = Tag.all @events = Event.all @event = current_user.events.new end def new @event = Event.new end def create # binding.pry @event = Event.new (event_params) # binding.pry if @ event.save # binding.pry tag_list = tag_params [: tag_names] .delete (""). split (",") @ event.save_tags (tag_list) redirect_to root_path else else render'new' end end Private Def event_params params.require (: event) .permit (: name,: expectation,: facility_id,: scale_id,: category_id,: volunteer, images: []). merge (user_id: current_user.id) end def tag_params params.require (: event) .permit (: tag_names) end
Event model
What I went toWhat I went to
class Event</pre> <p>Tag model</p> <pre><code>class Tag</pre> <p><br /> New registration view</p> <pre><code><% = form_with model: @event, url: events_path, local: true do | f |%> <% = render'shared/error_messages', model: f.object%> <label>event name</label> <% = f.text_field: name, class :: form_control%> <Dib,> <label>tag</label> <% = f.text_field: tag_names, class: "input-tag"%>
After checking the params in binding.pry after the save method
"zJzUCe4RFWoSTgLeXk64nRjg9Db7BuoC6ZBZkh/1RaRvqyCH5BHfdbB6iScb5n + JGEYk9vpEanaf/Kh5MwUFgQ ==", aaaa "=" "," volunteer "=>" "," tag_names "=>" s, aaaa, rrrrr "," facility_id "=>" 2 "," scale_id "=>" 3 "," category_id "=>" 2 "} permitted: false>, "commit" =>"save", "controller" =>"events", "action" =>"create"} permitted: false> [2] pry (#<EventsController>)>@ event.save =>true
I was able to confirm that the save method was working, but it was not reflected in the database. I don't know where the cause is, so I'd appreciate it if you could give me some advice.Postscript
Event model
def save_tags (tag_list) 26: # binding.pry 27: tag_list.each do | tag | 28: unless find_tag = Tag.find_by (tag_name: tag.downcase) =>29: binding.pry 30: # binding.pry 31: # begin 32: # self.tags.create! (Tag_name: tag) 33: 34: # rescue 35: # nil 36: # end 37: # else 38: #EventTagRelation.create! (event_id: self.id, tag_ids: find_tag.id) 39: # binding.pry 40: end 41: end 42: end
pry content
[1] pry (#<Event>)>tag_name NameError: undefined local variable or method `tag_name'for #<Event: 0x00007fee549252d0> from /Users/user/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/activemodel-6.0.3.4/lib/active_model/attribute_methods.rb:432:in `method_missing' [2] pry (#<Event>)>tag.downcase =>"s"
I entered "s, aaaa, rrrrr" in the tag
- Answer # 1
- Answer # 2
Is there an error?
If it is out, please put it on. Otherwise, the cause cannot be grasped.
Thinking in Esper mode
validates: tagname, uniqueness: true
This feels extra
I'm rescued so I don't understand the error.
Exception measures Please remove once and try
Related articles
- ruby on rails data is not saved in db? ??
- ruby - i want to display the url saved in the db as a hyperlink in the view
- javascript - tab contents are not hidden
- ruby on rails - data cannot be saved in the intermediate table (parameters cannot be passed)
- ruby - account id cannot be saved with stripe
- ruby - validation does not start
- ruby on rails - date_select cannot be saved
- ruby on rails 6 - user information is not updated
- ruby - logut function does not work
- ruby - [rails] data is not saved due to activerecord :: notnullviolation error
- ruby - scss is not reflected
- ruby on rails - saved description in nested controller
- ruby - [rails] the created_at time saved in the db does not reach japan time
- ruby - [rails] actioncontroller::parametermissing is displayed and data is not saved
- ruby on rails - devise::sessionscontroller not found
- ruby on rails 6 - class does not end with end
- ruby - i want to solve the problem that posts are not saved
- ruby on rails - the image saved in the database cannot be displayed
- images are not displayed with css]
The description of validation in the tag model was incorrect.
validates: tagname, presence: true
From
validates: tag_name, presence: true
I was able to implement it at | https://www.tutorialfor.com/questions-325227.htm | CC-MAIN-2022-27 | refinedweb | 622 | 57.87 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How can i access to the email template used for "Portal Access Management".
So you go to a customer/supplier, in "More", you can select "Portal Access Management" and invite people to use the portal, once you've done that an email is sent using an email template and the title of this email is "Your OpenERP Account at Your company". (sorry my karma is not sufficient to attach printscreen :()
I want to be able to modify this template but i can't find it.
Can somebody tell me where is this template (i know where to find the other template but this one doesn't appear there).
Thank You!
For odoo 10, you can open " addons/portal/wizard/portal_wizard.py" as Els wrote.
In that file, you will find:
@api.multi
def _send_email(self):
""" send notification email to a new portal user """
if not self.env.user.email:
raise UserError(_('You must have an email address in your User Preferences to send emails.'))
# determine subject and body in the portal user's language
template = self.env.ref('portal.mail_template_data_portal_welcome')
for wizard_line in self:...
The template "portal.mail_template_data_portal_welcome" is named "Portal: new user" in the admin setting of the email in developper mode.
You will have to look at the user freindly emails, it seems that the name of the sender is the current user in the system. So you might have to modify the pyhon code in the file...:
def extract_email(email):
""" extract the email address from a user-friendly email address """
We just implemented thit here and hope this is the right way to do it.
Ths is a old post, but it is the first one that come on google, so maybe this will save some one else some time.
The way to edit it is to change the translation from *Settings > Translations > Application Terms > Translated Terms*. Search the translated field as "addons/portal/wizard/portal_wizard.py". You will find the email template in the list. Simply enter a translation for your language (even if you use English).
Dear %(name)s,
You have been given access to %(company)s's %(portal)s.
Your login account data is:
Username: %(login)s
Portal: %(portal_url)s
Database: %(db)s
You can set or change your password via the following url:
%(welcome_message)s
--
Odoo - Open Source Business Applications
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/how-can-i-access-to-the-email-template-used-for-portal-access-management-55147 | CC-MAIN-2018-05 | refinedweb | 443 | 64.2 |
Manual upgrade procedure for Hadoop clusters.
Upgrade is an important part of the lifecycle of any software system, especially a distributed multi-component system like Hadoop. This is a step-by-step procedure a Hadoop cluster administrator should follow in order to safely transition the cluster to a newer software version. This is a general procedure, for particular version specific instructions please additionally refer to the release notes and version change descriptions.
The purpose of the procedure is to minimize damage to the data stored in Hadoop during upgrades, which could be a result of the following three types of errors:
Hardware failure is considered normal for the operation of the system, and should be handled by the software.
Software errors, and
Human mistakes
can lead to partial or complete data loss.
In our experience the worst damage to the system is incurred when as a result of a software or human mistake the name node decides that some blocks/files are redundant and issues a command for data nodes to remove the blocks. Although a lot has been done to prevent this behavior the scenario is still possible.
Common assumptions:
Newer versions should provide automatic support and conversion of the older versions data structures.
Downgrades are not supported. In some cases, when e.g. data structure layouts are not affected by particular version the downgrade may be possible. In general, Hadoop does not provide tools to convert data from newer versions to older ones.
Different Hadoop components should be upgraded simultaneously.
Inter-version compatibility is not supported. In some cases when e.g. communication protocols remain unchanged different versions of different components may be compatible. For example, JobTracker v.0.4.0 can communicate with NameNode v.0.3.2. In general, Hadoop does not guarantee compatibility of components of different versions.
Instructions:
Stop map-reduce cluster(s)
bin/stop-mapred.sh
and all client applications running on the DFS cluster.
Run fsck command:
bin/hadoop fsck / -files -blocks -locations > dfs-v-old-fsck-1.log
Fix DFS to the point there are no errors. The resulting file will contain complete block map of the file system.
Note. Redirecting the fsck output is recommend for large clusters in order to avoid time consuming output to stdout.
Run lsr command:
bin/hadoop dfs -lsr / > dfs-v-old-lsr-1.log
The resulting file will contain complete namespace of the file system.
Run report command to create a list of data nodes participating in the cluster.
bin/hadoop dfsadmin -report > dfs-v-old-report-1.log
Optionally, copy all or unrecoverable only data stored in DFS to a local file system or a backup instance of DFS.
Optionally, stop and restart DFS cluster, in order to create an up-to-date namespace checkpoint of the old version.
bin/stop-dfs.sh
bin/start-dfs.sh
Optionally, repeat 3, 4, 5, and compare the results with the previous run to ensure the state of the file system remained unchanged.
Copy the following checkpoint files into a backup directory:
dfs.name.dir/edits
dfs.name.dir/image/fsimage
Stop DFS cluster.
bin/stop-dfs.sh
Verify that DFS has really stopped, and there are no DataNode processes running on any nodes.
Install new version of Hadoop software. See GettingStartedWithHadoop and HowToConfigure for details.
Optionally, update the conf/slaves file before starting, to reflect the current set of active nodes.
Optionally, change the configuration of the name node’s and the job tracker’s port numbers, to ignore unreachable nodes that are running the old version, preventing them from connecting and disrupting system operation.
fs.default.name
mapred.job.tracker
Optionally, start name node only.
bin/hadoop-daemon.sh start namenode -upgrade
This should convert the checkpoint to the new version format.
Optionally, run lsr command:
bin/hadoop dfs -lsr / > dfs-v-new-lsr-0.log
and compare with dfs-v-old-lsr-1.log
Start DFS cluster.
bin/start-dfs.sh
Run report command:
bin/hadoop dfsadmin -report > dfs-v-new-report-1.log
and compare with dfs-v-old-report-1.log to ensure all data nodes previously belonging to the cluster are up and running.
Run lsr command:
bin/hadoop dfs -lsr / > dfs-v-new-lsr-1.log
and compare with dfs-v-old-lsr-1.log. These files should be identical unless the format of lsr reporting or the data structures have changed in the new version.
Run fsck command:
bin/hadoop fsck / -files -blocks -locations > dfs-v-new-fsck-1.log
and compare with dfs-v-old-fsck-1.log. These files should be identical, unless the fsck reporting format has changed in the new version.
Start map-reduce cluster
bin/start-mapred.sh
In case of failure the administrator should have the checkpoint files in order to be able to repeat the procedure from the appropriate point or to restart the old version of Hadoop. The *.log files should help in investigating what went wrong during the upgrade.
Enhancements:
This is a list of enhancements intended to simplify the upgrade procedure and to make the upgrade safer in general.
A shutdown function is required for Hadoop that would cleanly shut down the cluster, merging edits into the image, avoiding the restart-DFS phase.
The safe mode implementation will further help to prevent name node from voluntary decisions on block deletion and replication.
A faster fsck is required. Currently fsck processes 1-2 TB per minute.
Hadoop should provide a backup solution as a stand alone application.
Introduce an explicit -upgrade option for DFS (See below) and a related
finalize upgrade command.
Shutdown command:
During the shutdown the name node performs the following actions.
It locks the namespace for further modifications and waits for active leases to expire, and pending block replications and deletions to complete.
Runs fsck, and optionally saves the result in a file provided.
Checkpoints and replicates the namespace image.
Sends shutdown command to all data nodes and verifies they actually turned themselves off by waiting for as long as 5 heartbeat intervals during which no heartbeats should be reported.
Stops all running threads and terminates itself.
Upgrade option for DFS:
The main idea of upgrade is that each version that modifies data structures on disk has its own distinct working directory. For instance, we'd have a "v0.6" and a “v0.7” directory for the name node and for all data nodes. These version directories will be automatically created when a particular file system version is brought up for the first time. If DFS is started with the -upgrade option the new file system version will do the following:
The name node will start in the read-only mode and will read in the old version checkpoint converting it to the new format.
Create a new working directory corresponding to the new version and save the new image into it. The old checkpoint will remain untouched in the working directory corresponding to the old version.
The name node will pass the upgrade request to the data nodes.
Each data node will create a working directory corresponding to the new version. If there is metadata in side files it will be re-generated in the new working directory.
Then the data node will hard link blocks from the old working directory to the new one. The existing blocks will remain untouched in their old directories.
The data node will confirm the upgrade and send its new block report to the name node.
Once the name node received the upgrade confirmations from all data nodes it will run the fsck and then switch to the normal mode when it’s ready to serve clients’ requests.
This ensures that a snapshot of the old data is preserved until the new version is validated and tested to function properly. Following the upgrade the file system can be run for a week or so to gain confidence. It can be rolled back to the old snapshot if it breaks, or the upgrade can be “finalized” by admin using the “finalize upgrade” command, which would remove old version working directories.
Care must be taken to deal with data nodes that are missing during the upgrade stage. In order to deal with such nodes the name node should store the list of data nodes that have completed the upgrade, and reject data nodes that did not confirm the upgrade.
When DFS will allow modification of blocks, this will require copying blocks into the current version working directory before modifying them.
Linking allows the data from several versions of Hadoop to coexist and even evolve on the same hardware without duplicating common parts.
Finalize Upgrade:
When the Hadoop administrator is convinced that the new version works properly he/she/it can issue a “finalize upgrade” request.
The finalize request is first passed to the data nodes so that they could remove their previous version working directories with all block files. This does not necessarily lead to physical removal of the blocks as long as they still are referenced from the new version.
When the name node receives confirmation from all data nodes that current upgrade is finalized it will remove its own old version directory and the checkpoint in it thus completing the upgrade and making it permanent.
The finalize upgrade procedure can run in the background without disrupting the cluster performance. Being in finalize mode the name node will periodically verify confirmations from the data nodes and finalize itself when the load is light.
Simplified Upgrade Procedure:
The new utilities will substantially simplify the upgrade procedure:
Stop map-reduce cluster(s) and all client applications running on the DFS cluster.
Stop DFS using the shutdown command.
Install new version of Hadoop software.
Start DFS cluster with -upgrade option.
Start map-reduce cluster.
Verify the components run properly and finalize the upgrade when convinced. This is done using the -finalizeUpgrade option to the hadoop dfsadmin command.
Upgrade Guide for Hadoop-0.14
The procedure described here applies any Hadoop upgrade. In addition to this page, please read Hadoop-0.14 Upgrade when upgrading to Hadoop-0.14 (Check
release notes). Hadoop-0.14 Upgrade describes how to follow the progress of upgrade that is specific to Hadoop-0.14. | http://wiki.apache.org/hadoop/Hadoop_Upgrade | crawl-002 | refinedweb | 1,707 | 57.16 |
Two years ago this month, I started as a developer at The Outline. At the time, the site was just an idea that existed as a series of design mock ups and small prototypes. We had just three months to build a news website with some pretty ambitious design goals, as well as a CMS to create the visually expressive content that the designs demanded. We chose Elixir and Phoenix as the foundation of our website after being attracted to its concurrency model, reliability, and ergonomics.
Over this time, I have gained a major appreciation for Elixir, not only for the productivity it affords me, but of the business opportunities it has opened up for us. In these past two years, Elixir has gone from 1.3 to 1.7, and great improvements have been introduced by the core team:
- GenStage / Flow
- mix format
- Registry
- Syntax highlighting
- IEx debugging enhancements
- Exception.Blame and other stack trace improvements
- Dynamic Supervisor
As I reach this two year mark, I thought others might benefit from an explanation of why I love Elixir so much after two years, what I still struggle with, and some beginner mistakes that I made early on.
90ms is a 90th percentile response time on The Outline. Our post page route is even faster! We got this performance out of the box, without really any tuning or fine-grained optimizations. For other routes that do not hit the database, we see response times measures in microseconds. This speed allows us to build features that I wouldn’t have even considered possible in other languages.
Elixir is so fast that we haven’t had much need for CDN or service level caching. It’s been a luxury to not have to spend time debugging caching issues between Redis and memcached, which are issues that have kept me up into the wee hours of the morning in past roles. The lack of public cache opens up the path for dynamic content and user-based personalization on initial page load.
While we don’t cache routes at the CDN, we do cache some expensive database queries. For that we use light in-memory caching via ConCache, a wonderful library by Saša Jurić.
It seems like people get started with Phoenix writing JSON apis, and leave the HTML to Preact and other front end frameworks. A lot of the raw site performance we get from Elixir and Phoenix is from its ability to render HTML extremely quickly, on the order of microseconds. Phoenix allows us to have really fast server-rendered pages, and then we let Javascript kick in to add dynamic features. Before reaching for Vue.js or Svelte, consider going old school and rendering your HTML on the server; you might be delighted.
ExUnit gives you so much out of the box. In most of the other languages that I’ve used, testing frameworks are third-party, and setup is often a pain. ExUnit comes bundled with a code coverage tool, and its assertion diffs keep improving! Not only that, you can
mix test --slowestto find your slowest tests, or
mix test --failedto rerun only the tests that failed the last run.
Doctests are easily my favorite part of ExUnit. For the uninitiated, doctests are tests that you write inline in your documentation. They get compiled and run when you do mix test. The power here is two-fold; you get code examples right next to the definition of your code and you know that the examples work.
Having a consistent way to read docs across packages makes things really easy to find. I spent some time taking a data science and machine learning course in Python last month, and I realized exactly how spoiled I’ve been with Elixir documentation. It’s hard to measure the value of a consistent, familiar, and pervasive documentation system. The latest distillery release excepted, every Elixir library’s documentation has the same look and feel..
Think of Phoenix Channels as controllers for Websockets. The socket registers topics which are analogous to a router. At The Outline, we were able to remove thousands of lines of JavaScript by moving code into the Channel. Moving mutable JavaScript into Elixir was a great feeling. It’s always been our goal to ship as little code to the client as possible, and keeping user state in Channels facilitates that in a way that I would not have considered if I was using Node.js or Ruby. The memory overhead of channels has been relatively low, and we didn’t need to make any changes to our infrastructure to support them.
Elixir has been a friendly and helpful community these past two years. I’ve received a ton of advice on the Elixir Slack channel when I’ve asked for help. I’ve also enjoyed attending and speaking at the NYC Elixir Meetup, as well as the Empex and Empex West conferences. I’ve met some great people through these events, including several leaders in the community, and I hope to meet more passionate people in the future!
I’d like to also call out both the ElixirTalk (hi Chris and Desmond) and Elixir Outlaws podcasts, which are fantastic and do a really great job of breaking down interesting problems in the ecosystem.
Sometimes you change a line in a controller or a view, and you end up with a stack trace in your 1000 line module that starts at line 1. The problem? Meta-programming! Despite all the great things that meta-programming gives us in terms of ergonomics, its makes certain types of exceptions really hard to pinpoint. Luckily, not all stack traces are this way, but it can be extremely frustrating when the stack trace leaves you empty handed.
Asynchronous and concurrent code is notoriously hard to debug. What’s harder to debug is asynchronous and concurrent code that you haven’t written. We have some lingering error messages that get printed during random test runs. Attempts to debug them have been futile, so they appear to be heisenbugs. I have a suspicion that our particular issue is with Phoenix Channels and Ecto Sandbox mode, but I haven’t quite narrowed it down. Please let me know if you have!
While I’m really comfortable working with changesets and writing join queries in Ecto, breaking down my code for associations is still hard. Its pretty straightforward when dealing with simple associations, but when you have a data model that involves multiple entities, and you want to create new entities while associating them to existing entities, some things break down for me.
What still does not feel natural to me is where to place code that deals with the
put_assoc and
cast_assoc family of functions. My first tendency would be to put it in the
changeset/2 function in the schema, but you do not always want that logic. Of course, you can have multiple changeset functions, but I haven’t found the right balance for that either. What I’ve started doing is moving association code outside of the schema and changeset, and into the bounded context thats building the association.
What really drew me into Elixir at first was how wonderful it felt to pattern match in function heads. The utility of multiple function heads,
if as an expression rather than a statement, and immutable data structures had me hooked really fast (especially coming from Javascript).
What ended up happening is that I would pattern match at every single opportunity. Without a static type system, pattern matching felt like a friendlier replacement, and I wanted to make use of it at every corner. The problem is that it’s not a type system, and using it as such has drawbacks that are not immediately obvious until you write a certain amount of Elixir code. When you pattern-match gratuitously, you over-specify your code, and you miss opportunities to apply generic code to wider domains, and make that code more difficult to refactor in the future.
While my love of pattern-matching has not gone away, it has become clearer to me when to pattern-match, and more importantly, what level of specificity should I pattern match on. Do I need to pattern-match on this struct, or will a map suffice? Does this private function need to pattern match it arguments when the shape is already clear in its only caller? These nuances become clearer as you write more code, and deciding when and when not to pattern-match is a matter of preference and style.
This is a problem that’s closely related with the desire to pattern-match. Once you start rendering more than the Hello World example of Phoenix, you’re gonna have to start passing data through nested views and templates to fully render a page. When you start passing data down, tend towards being additive rather than regressive.
# Here we’re possibly over pattern matching and over specifying.
# If we want to pass more data down in the future, we have to
# change this function in addition to its caller
def render(“parent.html”, %{content: content}) do
render(“child.html”, %{content: content, extra: data})
end
# This way is less restrictive, and makes maintenance easier
# in the future if we decide to pass more data
def render(“parent.html”, params) do
render(“child.html”, Map.put(params, :extra, :data))
end
When starting to learn about Elixir / Erlang, it’s so tempting to start writing GenServers, Tasks, processes, etc for the problem at hand. Before you do, please read Saša’s To spawn, or not to spawn?, which breaks down when you should reach for processes and when modules / functions are good enough.
Knowing when to implement a protocol, such as Phoenix’s HTML.Safe protocol, can be extremely powerful. I wrote a bit about protocols in my last blog post, Beyond Functions in Elixir: Refactoring for Maintainability. In that post, I walk through implementing a custom Ecto.Type for Markdown, and then automatically converting it to HTML in your templates via protocols.
As soon as you get data from the external world, cast it into a well known shape. For this, Ecto.Changeset is your best friend. When I first started out, I resisted using changesets, as there is a bit of a learning curve, and it seemed easier to shove data right into the database. Don’t do this.
Ecto.Changeset is such a wonderful tool that will save you so much time, and there are many ways to learn it. I haven’t read the Ecto book, but I do recommend reading through the documentation as well as the free What’s new in Ecto 2.1?. José Valim also wrote an excellent blog post describing how to use Ecto Schemas and Changesets to map data between different domains, without those domains necessarily being backed by a database.
- nerves — Craft and deploy bulletproof embedded software in Elixir
- raft — An Elixir implementation of the raft consensus protocol
- Property testing — via PropEr and StreamData
- LiveView — Upcoming Phoenix compatible library from Chris McCord that blends Phoenix Channels and reactive html
Well, thank you for reading this far! These past two years have been a wonderful time. I’m excited to get more involved in the community, and to write more! Say hi on twitter and let me know what else you’d like to hear about! | https://www.tefter.io/bookmarks/52076/readable | CC-MAIN-2020-10 | refinedweb | 1,905 | 61.46 |
Here is what I'm working on:
Write and test two functions enterData() and printCheck() to produce the sample paycheck illustrated in Figure 7-21 on the screen (not including the boxed outline). The items in parenthese should be accepted by enterData() and passed to printCheck() for display.
Here is what I have so far. The display from printCheck() is way off and I can fix that later, so don't worry about that. But what else is wrong here?
Code:#include "stdafx.h" #include <iostream> #include <iomanip> using namespace std; int _tmain(int argc, _TCHAR* argv[]) { void enterData(); void printCheck(int, int, int, char, char, double); enterData(); return 0; } void enterData() { int month, day, year; char firstName, lastName; double amount; cout << "Enter the month number: "; cin >> month; cout << "Enter the day: "; cin >> day; cout << "Enter the year: "; cin >> year; cout << "Enter first name of the person getting paid: "; cin >> firstName; cout << "Enter the last name of the person getting paid: "; cin >> lastName; cout << "Enter the amount being paid: "; cin >> amount; printCheck(month, day, year, firstName, lastName, amount); return; } void printCheck(int mm, int dd, int yy, char first, char last, double money); { cout << setw(40) << fixed << "Zzyz Corp." << setw(20) << "Date: " << mm << "/" << dd << "/" << yy << "\n" << setw(40) << "1164 Sunrise Avenue\n" << setw(40) << "Kalispell, Montana\n\n" << setw(40) << "Pay to the order of: " << first << " " << last << setw(20) << "$" << money << "\n\n" << "UnderSecurity Bank\n" << setw(40) << "Missoula, MT" << setw(20) << "____________________\n" << setw(40) << "Authorized Signature"; return; } | http://cboard.cprogramming.com/cplusplus-programming/99460-little-help-please.html | CC-MAIN-2014-52 | refinedweb | 248 | 55.92 |
1. What is a bloom filter
Bloom filter was proposed by bloom in 1970. It is actually composed of a long binary vector and a series of random mapping functions. Bloom filter can be used to query whether an element is in a set. Its advantage is that its space and time efficiency are much higher than ordinary algorithms, but its disadvantage is that it will be difficult to misjudge and delete.
- It does not store the data itself, so it cannot extract the original data from the bloom filter.
- When it judges the existence of data, there is a certain error: a certain data does not exist, but it may say that the data exists.
- If it determines that the data is not in it, it must be.
- Data can only be added to, not removed.
2. Principle of Bloom filter(Reference articles)
First understand the concept of hash function: a function that converts data of any size into data of a specific size. The converted data is called hash value or hash encoding.
The bottom layer of the bloom filter is a bit array. When a data a is put in, several values are calculated through several hash functions. Suppose that three hash values x, y and Z are calculated through three hash functions. x. Y, Z correspond to the corresponding position of the bit array, and the bit at this position is set to 1 Then the data is put in.
In the following figure, blue / red / Purple indicates that three data are put into the bit array, and their corresponding bits are set to 1
At this time, the data W is not put into the bit array, but the bits corresponding to its corresponding hash value are all 1, then the bloom filter will consider that w exists. This will lead to miscarriage of justice.
3. Application scenarios
Bloom filter is suitable for large amount of data, but it can allow a certain degree of error. For example:
Duplicate crawler URL judgment
When a URL is to be crawled, the bloom filter is used to determine whether it exists. If it does not exist, it will be put in, and if it exists, it will not be processed. The misjudgment is that this URL has not been crawled, but the bloom filter says that it has been crawled, so some URLs will be missing in the crawler process, which will not affect.
Cache penetration
All data is put into a bit array through a bloom filter. When a request comes, if the requested data exists, it will be released; If the data does not exist, it may be blocked and released with a small probability. Then, most malicious requests can be blocked.
4. Java implementation of Bloom filter
The implementation of bloomfilter is provided in the guava package.
import com.google.common.base.Charsets; import com.google.common.hash.BloomFilter; import com.google.common.hash.Funnels; public class BloomFilterTest { public static void main(String[] args) { BloomFilter<CharSequence> bloomFilter = BloomFilter.create(Funnels.stringFunnel(Charsets.UTF_8), 200000, 1E-7); bloomFilter.put("test"); boolean contain = bloomFilter.mightContain("test"); if (contain) System.out.println("contain test"); } } | https://developpaper.com/principle-and-application-scenario-of-bloom-filter/ | CC-MAIN-2022-40 | refinedweb | 527 | 63.09 |
IMPORTANT Generic methods with generic params cannot be used in Source XML as in-rule methods or rule actions. Use plain .Net class(es) if your source object declares generic methods.
If applied to a source object, references an external qualified public method as a rule action. To qualify, the method must return System.Void and be parameterless or have only parameters of source object type or any value type supported by Code Effects.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Interface,
AllowMultiple = true, Inherited = true)]
public class ExternalActionAttribute : System.Attribute,
CodeEffects.Rule.Attributes.IExternalAttribute,
CodeEffects.Rule.Attributes.ISortable
Assembly
Type: System.String
Gets or sets the full name of the assembly that declares the action method class, which can be obtained by executing Assembly.GetAssembly( className ).FullName. If the Type property is not set, this property is required.
Method
Type: System.String
Gets or sets the name of the action method declared outside of the source object that Rule Editor should use as a rule action. This is a required property.
SortOrder
Type: System.Int16
Gets or sets the sort order in which the rule action method is placed in action menu on Rule Area. This property is optional. Default value is 0., which
{
//;
}
}
} | https://codeeffects.com/Doc/Business-Rule-External-Action-Attribute | CC-MAIN-2021-31 | refinedweb | 202 | 51.24 |
On June 4, 2004 05:42 am, Jens Axboe wrote:> On Thu, Jun 03 2004, Andrew Morton wrote:> > Ed Tomlinson <edt@aei.ca> wrote:> > >> > > Hi,> > > > > > I am still getting these ide errors with 7-rc2-mm2. I get the errors even> > > if I mount with barrier=0 (or just defaults). It would seem that something is > > > sending my drive commands it does not understand... > > > > > > May 27 18:18:05 bert kernel: hda: drive_cmd: status=0x51 { DriveReady SeekComplete Error }> > > May 27 18:18:05 bert kernel: hda: drive_cmd: error=0x04 { DriveStatusError }> > > > > > How can we find out what is wrong?> > > > > > This does not seem to be an error that corrupts the fs, it just slows things > > > down when it hits a group of these. Note that they keep poping up - they> > > do stop (I still get them hours after booting).> > > > Jens, do we still have the command bytes available when this error hits?> > It's not trivial, here's a hack that should dump the offending opcode> though.Hi Jens,I applied the patch below and booted into the new kernel (the boot message showed the new compile time). The error messages remained the same - no extra info. Is there another place that prints this (or (!rq) is true)?Ideas?Ed> --- linux-2.6.7-rc2-mm2/drivers/ide/ide.c~ 2004-06-04 11:32:49.286777112 +0200> +++ linux-2.6.7-rc2-mm2/drivers/ide/ide.c 2004-06-04 11:41:47.338870307 +0200> @@ -438,6 +438,30 @@> #endif /* FANCY_STATUS_DUMPS */> printk("\n");> }> + {> + struct request *rq;> + int opcode = 0x100;> +> + spin_lock(&ide_lock);> + rq = HWGROUP(drive)->rq;> + spin_unlock(&ide_lock);> + if (!rq)> + goto out;> + if (rq->flags & (REQ_DRIVE_CMD | REQ_DRIVE_TASK)) {> + char *args = rq->buffer;> + if (args)> + opcode = args[0];> + } else if (rq->flags & REQ_DRIVE_TASKFILE) {> + ide_task_t *args = rq->special;> + if (args) {> + task_struct_t *tf = (task_struct_t *) args->tfRegister;> + opcode = tf->command;> + }> + }> +> + printk("ide: failed opcode was %x\n", opcode);> + }> +out:> local_irq_restore(flags);> return err;> }> -To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2004/6/4/76 | CC-MAIN-2014-10 | refinedweb | 345 | 65.12 |
Created on 2013-08-07 14:22 by kristjan.jonsson, last changed 2013-08-14 21:48 by ncoghlan.
A proposed patch adds two features to context managers:
1)It has always irked me that it was impossible to assemble nested context managers in the python language. See issue #5251.
The main problem, that exceptions in __enter__ cannot be properly handled, is fixed by introducing a new core exception, ContextManagerExit. When raised by __enter__(), the body that the context manager protects is skipped. This exception is in the spirit of other semi-internal exceptions such as GeneratorExit and StopIteration. Using this exception, contextlib.nested can properly handle the case where the body isn't run because of an internal __enter__ exception which is handled by an outer __exit__.
2) The mechanism used in implementing ContextManagerExit above is easily extended to allowing a special context manager: None. This is useful for having _optional_ context managers. E.g. code like this:
with performance_timer():
do_work()
def performance_timer():
if profiling:
return accumulator
return None
None becomes the trivial context manager and its __enter__ and __exit__ calls are skipped, along with their overhead.
This patch implements both features.
In addition, it:
1) reintroduces contextlib.nested, which is based on nested_delayed
2) introduces contextlib.nested_delayed, which solves the other problem with previous versions of nested, that an inner context manager expression shouldn't be evaluated early. contextlib.nested evaluates callables returning context managers, rather than managers directly.
3) Allows contextlib.contextmanager decorated functions to not yield, which amounts to skipping the protected body (implicitly raising ContextManagerExit)
4) unittests for the whole thing.
I'll introduce this stuff on python-ideas as well.
Your use cases are either already addressed by contextlib.ExitStack, or should be addressed in the context of its existence. It is the replacement for contextlib.nested.
IMHO, exitstack is not a very nice construct. It's implementation is far longer than contextlib.nested.
And the chief problem still remains, which has not been addressed until this patch (as far as I know):
In Python, it is impossible to combine existing context managers into a nested one. ExitStack may address a use case of nested context managers, but it doesn't address the basic problem.
ContextManagerExit comes with its own nice little features, too. Now you can write:
@contextlib.contextmanager:
def if_ctxt(condition):
if condition:
yield
#hey look! an if statement as a with statement!
with if_ctxt(condition):
do_work
This can easily be extended, where a context manager can both manage context, _and_ provide optional execution of its block.
Raising it on python-ideas sounds like a good idea, then.
I must admit that I don't understand what you mean by "combining existing context managers into a nested one" that isn't addressed by ExitStack.
Simply put, there is no way in the language to nest two context managers, even though we have full access to their implementation model, i.e. can call __enter__ and __exit__ manually. This reflects badly (pun intended) on Python's reflection and introspection capabilities.
If context managers are to be first class entities in the language, then you ought to be able to write absract code using them, and
assemble complex ones out of simple ones. Hypothetical code here:
def nest(a, b):
# currently not possible
return c
def run_with_context(ctxt, callable):
# abstract executor
with ctxt:
return callable()
run_with_context(nested(a,b), callable)
ExitStack address one use case that contextlib.nested was supposed to solve, namely the cleanup of a dynamic sequence of context managers. But it does this no by creating a new manager, but by providing a programming pattern to follow. In that sensse, the multiple context manager syntax (with (a, b, c): ) is also a hack because it provides language magic to perform what you ought to be able to do dynamically...
Does this makes sense?
Anyway, by providing the ContextManagerExit exception, then sufficient flexibility is added to the context manager mechanism that at least the use case of nested() becomes possible.
Context managers are really interesting things. I was inspired by Raymond Hettinger's talk last pycon to explore their capabilities and this is one of the things I came up with :)
I pitched the idea of making it possible to skip the with statement body
quite some time ago, and Guido convinced me it was a bad idea for much the
same reason he chose PEP 343 over his original PEP 340 design: allowing
suppression of exceptions from __enter__ hides local control flow by
blurring the boundaries between with and if statements.
Regarding nested, we killed that because it was a bug magnet for context
managers that acquire the resource in __init__ (like file objects), not
because it didn't work.
It's trivial to recreate that API on top of ExitStack if you like it,
though. The only thing that doesn't work (relative to actual nested with
statements) is suppressing exceptions raised inside __enter__ methods.
Hi there.
"allowing
suppression of exceptions from __enter__ hides local control flow by
blurring the boundaries between with and if statements.
"
I'm not sure what this means. To me, it is a serious language design flaw that you can write a context manager, and it has a well known interface that you can invoke manually, but that you cannot take two existing context managers and assemble them into a nested one, correctly, however much you wiggle.
In my mind, allowing context managers to skip the managed body breaks new ground. Both, by allowing this "combination" to be possible. And also by opening up new and exciting applications for context managers. If you saw Raymond's talk last Pycon, you should feel inspired to do new and exciting things with them.
the bug-magnet you speak of I already addressed in my patch with nested-delayed, more as a measure of completeness (address both the problems that old "nested" had. The more serious bug (IMHO) is the suppression of __enter__ exceptions.
Nick was probably talking about what is further elaborated in PEP 343. I'd recommend taking a particular look at the "Motivation and Summary" section regarding flow control macros.
I've modified the patch. The problem that nested_delayed was trying to solve are "hybrid" context managers, ones that allocate resources during __init__ and release them at exit. A proper context manager should allocate resources during __enter__, and thus a number of them can be created upfront with impunity.
Added contextlib.proper to turn a hybrid context manager into a proper one by instantiating the hybrid in a delayed fashion.
added contextlib.opened() as a special case that does open() properly.
With this change, and the ability to nest error handling of exceptions stemming from __enter__(), nested now works as intended.
Thanks, Eric.
I read that bit and I can't say that I disagree.
And I'm not necessarily advocating that "skipping the body" become a standard feature of context managers. But it is a necessary functionality if you want to be able to dynamically nest one or more context managers, something I think Python should be able to do, for completeness, if not only for aesthetic beauty.
Having said that, optionally skipping the body is a far cry from the more esoteric constructs achievable with pep 340.
And python _already_ silently skips the body of managed code, if you nest two managers:
@contextmanager errordude:
1 // 0
yield
@contextmanager handler:
try:
yield
except ZeroDivisionError:
pass
with handler, errordude:
do_stuff()
These context managers will skip the execution of f. It will be Python's internal decision to do so, of course. But the "with" statement already has the potential to have the body silently skipped.
What I'm adding here, the ContextManagerExit, is the ability for the context manager itself to make the decision, so that the two context managers above can be coalesced into one:
with nested(handler, errordude):
do_stuff()
The fact that do_stuff can be silently skipped in the first case, where we explicitly have two nested calls, invalidates IMHO the argument that context managers should not affect control flow. why shouldn't it also be skippable in the case of a single context manager?
Using my latest patch, the ExitStack inline example can be rewritten:
with ExitStack() as stack:
files = [stack.enter_context(open(fname)) for fname in filenames]
# All opened files will automatically be closed at the end of
# the with statement, even if attempts to open files later
# in the list raise an exception
becomes:
with nested(opened(fname) for fname in filenames) as files:
do_stuff_with_files(files)
Allowing a context manager to skip the statement body isn't a new proposal,
and I previously argued your side. However, with multiple context managers,
there is no invisible flow control. Two context managers are locally
visible, which means the outer one completely encloses the inner one and
can suppress exceptions it throws. Guido explicitly made the decision to
require two managers at the point of use to achieve that behaviour when I
proposed making the change - he doesn't care about allowing a single
context manager to provide that functionality.
For the other question, how does your version of nested keep people from
doing "nested(open(fname) for name in names)"? That was the core problem
with that style of API: it made it far too easy to introduce a latent
defect when combined with file like objects that eagerly acquire their
resource. It wasn't that it couldn't be used correctly, but that the
natural and obvious way of combining it with open() is silently wrong.
"locally visible" is, I think a very misleading term. How is
with ignore_error, acquire_resource as r:
doo_stuff_with_resource(r) #can be silently skipped
any more locally visible than
with acquire_resource_ignore_error as r:
doo_stuff_with resource(r) # can be silently skipped.
? does the "nested with" syntax immediatelly tell you "hey, the body can be silently skipped"?
Requiring that some context manager patterns must be done with a special syntax is odd. What is more, it prohibits us to abstract away context managers. For instance, you can write a function like this
def execute_with_context(ctxt, fn, args):
with ctxt:
return fn(*args)
but if your context manager is of the kind mentioned, i.e. requiring the double syntax, you are screwed.
Basically, what I'm proposing (and what the patch provides) is that you can write this code:
@contextmanager
def nestedc(ca, cb):
with ca as a, cb as b:
yield a, b
and have it work for _all_ pair of ca, cb. This then, allows context managers to be used like abstract entities, like other objects in the language. It is _not_ about flow control, but about completeness.
A similar pattern for functions is already possible:
def nestedf(fa, fb):
def helper(v):
return fa(fb(v))
return helper
And so, we could write:
execute_with_context(nestedc(ca, cb), nestedf(fa, fb), ('foo',))
Current python does not allow this for arbitrary pairs ca, cb. My version does. This is what I'm advocating. That programmers are given the tool to combine context managers if they want.
As for "contextlib.nested()".
I'm not necessarily advocation its resuciation in the standardlib, but adding that to the patch here to demonstrate how it now _works_.
Here is a simpler version of contextlib.nested:
@contextmanager
def nested_empty():
yield []
@contextmanager
def nested_append(prev, next):
with prev as a, next as b:
a.append(b)
yield a
def nested(*managers):
total = nested_empty()
for mgr in managers:
total = nested_append(total, mgr)
return total
Pretty nice, no?
Now we come to the argument with nested(open(a), open(b)).
I see your point, but I think that the problem is not due to nested, but to open. Deprecating nested, even as a programming pattern demonstration is throwing out the baby with the bathwater.
I´ve coined the term "hybrid context manager" (at least I think I have)to mean resources that are their own context managers. They're hybrid because they are acquired explicitly, but can be released via a context manager. The context manager is a bolt on, an afterthought. Instead of adding __exit__() to files, and allowing
with open(fn) as f: pass
We should have encouraged the use of proper context managers:
with opened(fn) as f: pass
or
with closing(f): pass
Now, we unfortunately have files being context managers and widely see the pattern
with open(fn) as f, open(fn2) as f2:
pass
But how is this bug here:
with nested(open(fn), open(fn2)) as f, f2: pass
any more devuiys than
f, f2 = open(fn), open(fn2)
with f, f2: pass
?
The problem is that files aren't "real" context managers but "hybrids" and this is what we should warn people about. The fact that we do have those hybrids in our code base should not be cause to remove tools that are designed to work with "proper" context managers.
The decision to remove "nested" on these grounds sets the precedence that we cannot have any functions that operate on context managers. In fact, what this is really is saying is this:
"context managers should only be used with the "with" statement and only instantiated in-line.
Anything else may introduce sublte bugs because some context managers are in fact not context managers, but the resource that they manage.
"
In my opinion, it would have been better to deprecate the use of files as context managers, and instead urge people to use proper context managers for the: (the proposed) contextlib.opened and (the existing) contextlib.closing)
K
I think you make a good case, but I already tried and failed to convince Guido of this in PEP 377 (see)
More importantly, see his quoted concerns in
While you have come up with a much simpler *implementation* for PEP 377, which imposes no additional overhead in the typical case (unlike my implementation, which predated the SETUP_WITH opcode and avoided introducing one, which required wrapping every __enter__ call in a separate try/except block), it still adds a new builtin exception type, and I thing needs a new builtin constant as well.
The latter comes in because I think the bound variable name still needs to be set to something, and rather than abusing any existing constant, I think a new SkipWith constant for both "don't call enter/exit" and "with statement body was skipped" would actually be clearer.
I actually think explaining a custom exception and constant is less of a burden than explaining why factoring out certain constructs with @contextmanager and yield doesn't work properly (that's why I wrote PEP 377 in the first place), but Guido is the one that ultimately needs to be convinced of the gain. | https://bugs.python.org/issue18677 | CC-MAIN-2021-21 | refinedweb | 2,453 | 51.58 |
looks something like this:
import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) const store = new Vuex.Store({ state: { user: null }, mutations: { setUser (state, user) { state.user = user } }, })
The store’s state begins with an empty
user object, and a
setUser mutation that can update the state. Then in our application we may want to show the user details:
<template> <div> <p v-Hi {{ user.name }}, welcome back!</p> <p v-else>You should probably log in.</p> </div> </template> <script> export default { computed { user() { return this.$store.state.user } } } </script>
So, when the App loads it shows the user a welcome message if they are logged in. Otherwise, it tells them they need to log in. I know this is a trivial example, but hopefully, you have run into something similar to this.
If you’re like me, the question comes up:
How do I add data to my store before my app loads?
Well, there are a few options.
Set the Initial State
The most naive approach for pre-populating your global store is to set the initial state when you create your store:
import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) const store = new Vuex.Store({ state: { user: { name: "Austin" } }, mutations: { setUser (user) { state.user = user } } })
Obviously this only works if you know ahead of time the details about the user. When we are building out application, we probably won’t know the user’s name, but there is another option.
We can take advantage of
localStorage to keep a copy of the user’s information, however. When they sign in, you set the details in
localStorage, and when they log our, you remove the details from
localStorage.
When the app loads, you can pull the user details from
localStorage and into the initial state:
import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) const store = new Vuex.Store({ state: { user: localStorage.get('user') }, mutations: { setUser (user) { state.user = user } } })
If you’re working with data that does not require super tight security restrictions, then this works pretty well. I would recommend the
vuex-persistedstate library to help automate that.
Keep in mind that you should never store very sensitive data like auth tokens in
localStorage because it can be targeted by XSS attacks. So our example works alright for a user’s name, but not for something like an auth token. Those should only be store in memory (which can still be Vuex, just not persisted).
Request Data When the App Mounts
Now let’s say for whatever reason we don’t want to store data in
localStorage. Our next option might be to leave our initial state empty and allow our application to mount. Once the app has mounted, we can make some HTTP request to our server to get our data, then update the global state:
<template> <div> <p v-Hi {{ user.name }}, welcome back!</p> <p v-else>You should probably log in.</p> </div> </template> <script> export default { computed { user() { return this.$store.state.user } }, async mounted() { const user = await getUser() // Assume getUser returns a user object with a name property this.$store.commit('setUser', user) } } </script>
This works fine, but now we have a weird user experience. The application will load and send off the request, but while the user is waiting for the request to come back, they are seeing the “You should probably log in.” When the request returns, assuming they have a logged in session, that message quickly changes to “Hi
{{ user.name }}, welcome back!”. This flash can look janky.
To fix this flash we can simply show a loading element while the request is out:
<template> <div> <p v-Loading...</p> <p v-Hi {{ user.name }}, welcome back!</p> <p v-else>You should probably log in.</p> </div> </template> <script> export default { data: () => ({ loading: false }), computed { user() { return this.$store.state.user } }, async mounted() { this.loading = true const user = await fetch('/user').then(r => r.json()) // Assume getUser returns a user object with a name property this.$store.commit('setUser', user) this.loading = false } } </script>
Keep in mind that this is a very bare example. In yours, you might have a dedicated component for loading animations, and you may have a
<router-view> component in place of the user messages here. You may also choose to make that HTTP request from a Vuex action. The concept still applies.
Request Data Before App Loads
The last example I’ll look at is making HTTP requests similar to the last, but waiting for the request to return and updating the store before the application ever has a chance to load.
If we keep in mind that a Vuex store is just an object with some properties and methods, we can treat it the same as any other JavaScript object.
We can import our store into our
main.js file (or whatever the entry point for your application is) and invoke our HTTP request before mounting the application:
import Vue from "vue" import store from "./store" import App from "./App.vue" fetch('/user') .then(r => r.json()) .then((user) => { store.commit('setUser', user) new Vue({ store, render: (h) => h(App), }).$mount("#app") }) .catch((error) => { // Don't forget to handle this })
This approach has the benefit of preloading your global store with any data it would need to get from an API before the application loads. This is a convenient way to avoid the previously mentioned issues of janky jumps or managing some loading logic.
However…
There is a major caveat here. It’s true that you don’t have to worry about showing a loading spinner while the HTTP request is pending, but in the meantime, nothing in your app is showing. If your app is a single page application, then your user could be stuck staring at a blank white page until the request returns.
So you aren’t really solving a latency problem, just deciding what sort of UI experience to show while your waiting on data.
Closing Remarks
I don’t have any hard, fast rules on which method here is best. In reality, you may use all three depending on what the data is that you are fetching, and what your application needs are.
I should also mention that although my examples I’m making
fetch requests then using Vuex mutations to commit to the store directly. You could just as easily use Vuex actions to implement the
fetch. You could also apply these same principles to any other state management tool, such as Vue.observable.
That’s it for now, and if you have any comments or questions please let me know. Twitter is a great place to reach me, and you can sign up for the newsletter for more updates like this. | https://www.coodingdessign.com/javascript/3-ways-to-prepopulate-your-vue-js-global-stores-state/ | CC-MAIN-2020-50 | refinedweb | 1,136 | 65.32 |
It's hard to distinguish a constructor from a class in an export list.=20 S | -----Original Message----- | From: Alastair David Reid [mailto:reid@cs.utah.edu]=20 | Sent: 20 September 2001 19:38 | To: Simon Marlow | Cc: Ian Lynagh; haskell@haskell.org; hugs-bugs@haskell.org | Subject: Re: Prelude and (:) and []((:), []) bugs? |=20 |=20 |=20 | > Ah, I forgot that you can't export a constructor on its own. |=20 | You can't? |=20 | I probably knew this once but looking at it now, it seems=20 | kinda surprising. Haskell's module system is supposed to be=20 | just namespace control --nothing more-- so why is it=20 | preventing me from doing something which is perfectly safe=20 | and well-defined? |=20 | I'll readily agree that there's no strong motivation for=20 | exporting a constructor on its own (I think the only reason=20 | Hugs allows it is just so we can export (:) from the Prelude)=20 | but what is the motivation for disallowing it? |=20 | --=20 | Alastair Reid reid@cs.utah.edu =20 | |=20 |=20 | _______________________________________________ | Hugs-Bugs mailing list | Hugs-Bugs@haskell.org=20 | |=20 | http://www.haskell.org/pipermail/haskell/2001-September/007935.html | CC-MAIN-2014-41 | refinedweb | 189 | 65.62 |
Unity: Creating GUI transitions
Posted by Dimitri | May 4th, 2012 |:
using UnityEngine; using System.Collections; public class HorizontalTransitionGUI : MonoBehaviour { //A 4x4 Matrix private Matrix4x4 trsMatrix; //A three dimension vector that will translate GUI coordinate system private Vector3 positionVec; //Two booleans to determine which of the GUI buttons have been pressed private bool next = false; private bool back = false; // Use this for initialization void Start() { //Initialize the matrix trsMatrix = Matrix4x4.identity; //Initialize the Vector positionVec = Vector3.zero; } // Update is called once per frame void Update() { //If the 'next' boolean is true if(next) { //Interpolate the current vector x component until it has the same as value the screen width positionVec.x = Mathf.SmoothStep(positionVec.x, Screen.width,Time.deltaTime*10); /*Make 'trsMatrix' a matrix that translates, rotates and scales the GUI. The position is set to positionVec, the Quaternion is set to identity and the scale is set to one.*/ trsMatrix.SetTRS(positionVec , Quaternion.identity, Vector3.one); } else if(back) //If 'back is true' { //Interpolate the current vector x component until it reaches zero positionVec.x = Mathf.SmoothStep(positionVec.x, 0,Time.deltaTime*10); //Make 'trsMatrix' a matrix that translates, rotates and scales the GUI. trsMatrix.SetTRS(positionVec , Quaternion.identity, Vector3.one); } } void OnGUI() { //The GUI matrix must changed to the trsMatrix GUI.matrix = trsMatrix; //If the button labeled 'Next' is pressed if(GUI.Button(new Rect(Screen.width - 400, 315, 100, 30),"Next")) { next = true; back = false; } //The TextArea that appears on the first screen. GUI.TextArea(new Rect(300,200,Screen.width-600,100), "Click on the 'Next' button to change the Text Area."); //If the button labeled 'Back' is pressed if(GUI.Button(new Rect(-Screen.width + 300, 315, 100, 30),"Back")) { next = false; back = true; } //The TextArea that appears on the second screen GUI.TextArea(new Rect(-Screen.width + 300,200,Screen.width-600,100), "Click on the 'Back' button to return to the previous Text Area."); //To reset to GUI matrix, just make it equal to a 4x4 identity matrix GUI.matrix = Matrix4x4.identity; //A Label that won't change position GUI.Label(new Rect(300,350,500,100),"This Text Label remains at the same position, no matter what."); } }
Right at the top of the script, a Matrix4x4 object, a Vector3 and a pair of boolean variables are being declared. As previously stated, the Matrix4x4 is going to be the one that will replace the GUI matrix later on the code. The Vector3 is there to aid in the task of building the aforementioned Matrix4x4 as a translation matrix. The pair of booleans are declared to tell if one of the buttons have been pressed (lines 7 through 12).
At the Start() method, the Matrix4x4 is initialized as a identity matrix and the Vector3 objects is initialized as a a null vector (lines 15 through 21). The Update() method is where it all happens. In the above script, it’s basically a if else block that checks whether the value of the next and previous booleans are true or false; changing according to the button pressed at the interface (lines 24 through 44). If next is true, the positionVec X component is interpolated the SmoothStep() method, which takes three parameters.
The first one is the value the interpolation is going to start, the second is the value where the interpolation ends and the third, can be thought as the speed which in the interpolation must happen. In the above script, the parameters are, respectively: the positionVec X component, the screen’s width and the Time.deltaTime value multiplied by 10 (line 30). So, what will happen is that the X component of postionVec will be gradually interpolated using the third parameter until the X component of the Vector3 has the same value as the screen width. Effectively, this will move the whole GUI system origin out of the screen when it’s matrix translated with the recently interpolated component of the Vector3.
To create a translation matrix, the TRS() method is called from the Matrix4x4 object, taking three parameters. The first one is going to be positionVec and since the rotation and scale of the GUI system must remain the same, the second and third parameters are filled with Quaternion.identity and Vector3.one (line 34). That’s it! With that, all that’s necessary is to replace the GUI.matrix with the trsMatrix. This can only be done inside the OnGUI() method, which starts at line 46 and ends at line 76.
After replacing the GUI coordinate system with the trsMatrix at line 50, some elements are being rendered, namely a pair of Buttons elements, two TextAreas elements and a Label. All elements, except for the Label are being rendered with their X position relative to the screen’s width. That’s because the GUI’s system reference X value is going to be shifted from zero to Screen.width at the Update() method, so it makes sense to make everything dependent on the screen width. The Label doesn’t change position because the current GUI.matrix it’s using as the reference for positioning is the identity matrix.
Also, the two buttons declared here changes the value of the next and previous boolean variables accordingly. It’s worth mentioned that the OnGUI() method resets the GUI.matrix to identity each call, so line 75 (the GUI.Label() method call) could be placed before line 49 (GUI.matrix = trsMatrix;), yielding the same results.
Final Thoughts
Some readers might be wondering why manipulate the GUI system matrix to create a simple horizontal transition when the same could be easily done by changing the X coordinate of a GUI group. And surely, some are wondering why use the Matrix.TRS() method to make a simple translation, when all it takes is to change the last element of the first line of the trsMatrix (trsMatrix.m03).
For the above example, the aforementioned alternatives could easily be used as it’s just a simple horizontal translation. However, for more complex transitions, such as the ones that changes the rotation and the scale, a matrix is necessary. Remember that, in Unity, there’s no way to change the scale or rotation of some GUI element without altering the current GUI.matrix. Furthermore, on those cases, changing the GUI.matrix is necessary.
Here’s a video of the transition code in action:
Thanks for this,
Very clear, got me out of a Jam.
Keep up the good work mate.
Thanks!
Thank You very much !
Hi…can you give us some tips on how to position the GUI transition to bottom???
thanks man… you’re the best :D
i want a translation to right to left how ? | http://www.41post.com/4766/programming/unity-creting-gui-transitions | CC-MAIN-2020-16 | refinedweb | 1,116 | 63.29 |
Ads Via DevMavens.
Imagine a web form requiring a user to enter a parameter into a TextBox control. An excerpt might look like the following.
<asp:TextBox
<asp:RequiredFieldValidator
Note EnableClientScript=false in the validator. This will simulate a client with scripting turned off (a scenario we should always be prepared to handle), and leads to some interesting behaviors later in some examples that follow.
One way to get this parameter to a second form would be the Response.Redirect approach. We could add a button to the form with the following event handler.
protected void Redirect_Click(object sender, EventArgs e)
{
if (Page.IsValid)
{
Response.Redirect(
"destination.aspx?parameter=" +
Server.UrlEncode(TextBox1.Text)
);
}
}
This parameter is easy to pick up in the destination web form.
protected void Page_Load(object sender, EventArgs e)
string parameter = Request["parameter"];
if(!String.IsNullOrEmpty(parameter))
// party on the parameter
Some people will crib about Response.Redirect needing a round trip to the client, but it is a simple approach. However, the length of the query string is limited, and we may not want the parameter displaying in the destination’s URL. Query strings also tend to lead to “magic string literals” in the source code, but even if we took the time to define a const string variable for both the sender and receiver forms to use, the interface between the two is weak and prone to break.
An alternate approach, one that improves in 2.0, is to use Server.Transfer. I previously mentioned Server.Transfer in criticism of the ASP.NET 2.0 compilation model, but like a blind man with an elephant I hadn’t quite felt the full elephant. This approach has some great improvements in 2.0. The code for the sender might look like the following.
protected void Transfer_Click(object sender, EventArgs e)
Server.Transfer("destination.aspx");
The receiving web form has several options to fetch data from the first web form. All of these options involve a new property on the Page class in 2.0: PreviousPage. PreviousPage represents the originating page in transfer and cross page post back operations. Pulling data in the destination page could look like the following.
if (PreviousPage != null)
TextBox textBox = PreviousPage.FindControl("Parameter")
as TextBox;
if (textBox != null)
{
string parameter = textBox.Text;
Parameter.Text = parameter;
}
}
The problem I have with the approach is that FindControl is easily breakable. All someone has to do is modify change an ID or push the TextBox inside a naming container, and FindControl will return null. You might be thinking about exposing the parameter as a property of the original form, and that’s what I’m thinking, too.
public string ParameterText
get { return Parameter.Text; }
Now we have a different problem. You might think we could just cast PreviousPage to the type of the originating form, but you have to be careful with the new compilation model in ASP.NET 2.0. Each form might compile into a different assembly. Instead of casting we will take advantage of the new compilation model and the @ PreviousPage directive. All we need in the destination form is the following:
<%@ PreviousPageType VirtualPath="~/Default.aspx" %>
The directive will give us a strongly typed PreviousPage property. In other words, instead of retuning a System.Web.UI.Page reference, PreviousPage will return the type of the first web form, like ASP.Default_aspx. This makes the job easy.
string parameter = PreviousPage.ParameterText;
if(!string.IsNullOrEmpty(parameter))
// party!
Parameter.Text = parameter;
As simple as this appears to be, we still have problem. We’ve tied these two forms together. What if we wanted to make the second form a transfer destination for different pages in the application? What if we just didn’t feel comfortable referencing one web form from another? Let’s put the following into the App_Code (or a referenced class library).
using System;
public interface IParameterForm
string ParameterText
get;
Now all we need to do is include IParameterForm in the derivation list for our original page class. We can remove the @ PreviousPage directive from the second form and use the following code instead.
IParameterForm form = PreviousPage as IParameterForm;
if (form != null)
string parameter = form.ParameterText;
if (!string.IsNullOrEmpty(parameter))
{
// party!!
}
It requires a little more code to dig the parameter out, but we have decoupled the forms and gained some flexibility. Any number of pages can transfer into this destination page.
Server.Transfer does come with disadvantages. The most serious is that the URL in the browser does not change. The browser still believes it has posted back and received content for the first web form, so history and book-marking suffer. These issues are fixed with the new cross page postback feature.
A cross page postback all starts with by setting the PostBackUrl property of the Button control.
<asp:Button
We do not need to respond to the ClickEvent for this button. When the user clicks the button, client side JavaScript will set our form’s action attribute to point to the destination page before posting.
In the destination form, a client side post back will give us a non-null PreviousPage property. We can even have a strongly typed PreviousPage property if we use a @ PreviousPage directive. For now, we will leave our destination page code almost as is.
if (PreviousPage != null &&
PreviousPage.IsCrossPagePostBack)
Notice we have an IsCrossPagePostBack property we can evaluate to determine if the request arrived as a result of cross page post back. A value of false means the PreviousPage did a Server.Transfer.
It’s also interesting to note the behavior of the PreviousPage property during a cross page post back. PreviousPage uses lazy evaluation, meaning no previous page will exist until the first time you touch the property. At that point the runtime will load a new instance of the PreviousPage web form and have it start to process the current request. This might sound odd at first, but we’ll see why the approach makes sense.
In order to extract data from the PreviousPage, the PreviousPage will need to be instantiated, load it’s ViewState, and respond to the typical events like Init and Load. With Server.Transfer this happens automatically, because the request arrives at the PreviousPage first, which then has a chance to restore data to it’s controls before handing off processing to the destination web form using Server.Transfer. With a cross page post back the request arrives at the destination web form, which then must ask the PreviousPage to execute in order to restore itself. The PreviousPage will execute all stages up to, but not including, the PreRender event, at which point control returns to the destination page.
Let’s try to make the scenario clear. Validation controls will run server-side when a cross page post back occurs, and the destination page inspects the PreviousPage.IsValid property. As we mentioned earlier, when we touch the PreviousPage property the runtime loads and executes the PreviousPage form, including the validation controls. We must check the IsValid property of our previous page to make sure the page passed all validation tests.
PreviousPage.IsCrossPagePostBack &&
PreviousPage.IsValid)). With Server.Transfer this scenario was not a problem, as we could skip the Transfer when validation failed and let the validation controls display error messages to the user from the original web form.
The above question is one you’ll need to answer when deciding to use cross page post back on pages with validation controls. One answer might be to require a client to enable scripting for your site. You’ll still want to check PreviousPage.IsValid before accepting any input over the network, and spit out some informative error message for the bots and clients with no scripting enabled.
We’ve uncovered four rules of thumb in this article.
By K. Scott Allen
Leave comments and questions about this article on my blog. | http://www.odetocode.com/articles/421.aspx | crawl-002 | refinedweb | 1,311 | 56.76 |
I.
Just store the references to the controls somewhere instead of their names,
e.g. in a List<CheckBox>, then you can access them by index.
Also you actually should not do any of that, use data-templating and
data-binding to create controls for data. If done right you just need to
change a boolean on your data and the check-box will be checked/unchecked.
Did you google around?
There are lot of links that can help.
Try this.Works for me try replacing the contentplaceholder with the form id
may work.
String Value=
(this.Form.FindControl("ContentPlaceHolder1").FindControl("panel").FindControl("txtbx"
) as TextBox).Text;
I.
I think you can differentiate by using the id edit-issue-dialog
if($("#edit-issue-dialog").length){
//u r in edit form, and do your stuff
}else{
//in create form do your stuff
}.
Well I would declare interface, lets say
interface IControl {
void foo();
}
in Project2, have this tab1.ascx implement this interface, and type control
once you find it to this interface. (This interface can be also defined in
Project3 that will be referenced by both Project1 and Project2).
You need to use the x:Name attribute, not just Name.
<uc:StandardDialog x:Name="StandardDialog" ...
Windows Store applications don't automatically create a field for objects
with their Name specified. The x:Name attribute is required if you want a
named field to be created and available to the code behind...");
}
}
}
Do you realize .All("btnK") returns a collection? So, you are doing
.InvokeMember("click") on a Collection :). You cannot do that, you can only
do .InvokeMember("click") on an element for obvious reasons!
Try this:
wb.Document.All("btnK").Item(0).InvokeMember("click")
The .Item(0) returns the first element in the collection returned by
.All("btnK"), and since there will only probably be one item returned,
since there is only one on the page, you want to do the InvokeMember on the
first item, being .Item(0).
May I ask what it is you are developing?
Since you're a new user, please up-vote and/or accept if this answered your
question.);
I cannot say I understand your question completely, but if you wish to make
your non-reference type "variables" (fields?) reference-able, you can put
them inside of a class. For example, create a dictionary field mapping
strings or integer constants to a FieldBase class, then store in that
dictionary all the FieldBase objects you need, and look them up by name.
The FieldBase class can be non-generic, and derived classes generic:
abstract class FieldBase
{
public readonly string Name;
protected FieldBase(string name) { Name = name; }
}
sealed class Field<T> : FieldBase where T : struct
{
public Field(string name) : base(name) { }
public T Value;
}
Add "Name" attribute
Msdn doc :
This is happening because you're calling
base.AddAttributesToRender(writer); at the end if the if statement. Instead
of calling base here just add a line to add the id attribute:
writer.AddAttribute(HtmlTextWriterAttribute.Id, this.ID);
It's correct as long as Courses is CodeName of your sheet but I don't think
so.
As long as Courses is simple name which you could see in Excel App then you
need to change your code into this one:
Sub ListBox7_Change()
With Sheets("Courses").ListBox7
.AddItem "Hi"
End With
End Sub
I asked this a long time ago,
But I think this is how you should do it. Using a factory pattern for user
controls.
Thanks for the guys in the comments section I found the reason. When
rendered the checkbox id is changed from chckTOC to cpmain_chckTOC and the
label tag it was not valid. I've changed the label tag to <label
for='cpmain_chckTOC' /> and now I've got a brand new styled checkbox as
expected.
I've also managed to avoid this hardcode id -> <label
for='cpmain_chckTOC' />, because it would be a problem for example, if
you have your checkbox inside an APS repeater - then each checkbox id will
be different something like "cpmain_repeaterid_chckTOC_1" and then
"cpmain_repeaterid_chckTOC_2" and so on. And in this situation hardcoded id
won't help you.
Instead I've managed to do it using only ASP.NET controls:
<asp:CheckBox
Hold a reference in a class-level variable when you create the DataGridView
control, and then use that variable to add the rows:
//Class-level variables
private DataGridView _gridView;
public void create_Tab_Control()
{
//Logic to create the Tabs
_gridView = new DataGridView();
//Add the DataGridView to the TabControl
}
public void add_row()
{
//Add the row(s) to the DataGridView
_gridView.Rows.Add("column 1", "column 2");
}
Thanks to @satpal
He put his answer and after that deleted it. I don't know why. But he is
right!
Html.RenderAction((string)@ViewBag.InitializeUserControl);
This is working! =
I'm not sure if I understood your question, but assuming you have a Button
with a specified name on a Window. The only way you can access that button
in another class is by having access to that window instance in that class.
I believe the window can be casted as a frameworkelement and you can use
the method FindName
Here to find an element inside that window. However this only works is you
have access to the instantiated Window inside that class.
The fact, that the in Template is used to Path and it assumes a string
value in the Data parameter. But now you use DrawingBrush, usually it is
used for Rectangle control.
So, we change the type of attached depending properties on the type of
DrawingBrush:
public static readonly DependencyProperty IsCheckedOnDataProperty;
public static void SetIsCheckedOnData(DependencyObject DepObject,
DrawingBrush value)
{
DepObject.SetValue(IsCheckedOnDataProperty, value);
}
public static DrawingBrush GetIsCheckedOnData(DependencyObject DepObject)
{
return (DrawingBrush)DepObject.GetValue(IsCheckedOnDataProperty);
}
static CustomCheckBoxClass()
{
PropertyMetadata MyPropertyMetadata = new PropertyMetadata(null);
IsCheckedOnDataProperty = DependencyProperty.RegisterAttached("IsCheck");
} | http://www.w3hello.com/questions/-reference-web-control-in-dynamically-added-user-control-ASP-NET-2- | CC-MAIN-2018-17 | refinedweb | 960 | 56.35 |
See also: Agenda, IRC log
<dka> trackbot, this will be tag
<trackbot> Sorry, dka, I don't understand 'trackbot, this will be tag'. Please refer to <> for help.
<dka> ;alkyd;alksd
<dka> trackbot, start meeting
<trackbot> Date: 01 October 2013
<dka> darobin can you join us this morning?
<darobin> ohai
<darobin> dka: I can join in ~10min, for about 30-45min
<darobin> after that I'm off to tell the lovely people of Lyon how much they'd enjoy making standards
<wycats_> I'm on my way, fyi
<wycats_> Is Alex already there?
<Yves> not yet
<Yves> nor anne
<wycats_> OK I am at his hotel waiting for him
<dka> Ok - we really need to start shortly due to Philippe's schedule.
<wycats_> Anne's here
<dka> Suggest you come over - maybe you can ring up to Alex's room?
<wycats_> I texted him
<dka> ok we have Philippe until 10:30 so we can wait a few minutes
<dka> Robin we are on 0824
<scribe> scribenick: noah
<darobin> [Polyglot has an update about to be published:]
We have Philippe le Hegaret visiting in person and Robin Berjon on the phone
HT: Proposed topics RDFa, polyglot, authoring spec, mediat type
<darobin> [RDFa isn't us, but Microdata is more or less in a bad state it would seem]
TBL: Is it appropriate to talk about proposed solution to the Web IDL problem
AVK: DRM?
HT: EME = Encrypted Media Extensions are actually going into the browsers
<darobin> [the TAG brainstormed about DRM last time (or the time before?), did anything come of that?]
PLH: EME is in HTML WG pushed by
Web & TV interest group. 1.5 years work in HTML wg
subgroup.
... Microsoft, Netflix and Google are active and contributed editors to the specification
... Use case is ability to play back protected premium content
s/XXX/Google/
PLH: Want you to be able to
interact with platform's Content Distribution Management system
(CDM), which provides whatever actual DRM, possibly HW or
SW
... You can also use the API for less constrainted systems, but in practice browser vendors are implementing to use underlying platform
... Mozilla is said to be working on this but their detailed plans aren't clear to me. Henri Sivonen is involved from Mozilla
HT: Google means Webkit?
Several: No, blink
PLH: (scribed missed some details about MPEG Dash?)
HT: Is the CDM target exclusively Windows?
PLH: No, Apple as well
HT: Linux?
PLH: Well, if you want some content on Linux, there are providers who require DRM.
YK: Jeff's opinion last time, with which I disagree, is that it could run on Linux should the community wish to do it. I (Yehuda) think most agree that the necessary support won't be on Linux in practice.
DA: Why can't others write it for Linux
YK: You won't get certs
TBL: Too many assumptions here
YK: OK, I retract, but I'm betting Netflix won't publish to Linux without DRM
TBL: Why?
... There are lots of possible architectures. The problem with building it this way is that it depends on support from machine manufacturer. You could imagine, for example, built into the LCD that does the decryption. You could image a band publishing music, some free some priced.
... You could imagine a deployed infrastructure for deploying the keys. For the musician, likely preferable to having control of keys so centralized. Moving to a more open market is desirable.
YK: I'm not saying we will necessarily not have DRM on Linux, I'm skeptical that companies with content like Netflix will publish to it.
TBL: False assumption that there is only one CPU. Phones have many e.g. I have root access on my Mac and can install open source on it. It's not easy for me to copy something downloaded using iTunes. I know/suspect the machine also has subsystems to which I don't have access.
<Zakim> darobin, you wanted to talk about potential reform action
PLH: Today, to watch Hulu or Netflix you need a Flash plugin. If EME works right, then no need for plugin. Now there's a burden on browsers to work with the platform, and this is especially problematic for open source systems.
<timbl> logger, pointer
<timbl> logger, pointer?
RB: Media is just the tip of the iceberg. We need to anticipate requests for protection of Apps, Books, media. Wondering whether the TAG could help push on question of Web-compatible copyrights.
DA: Bookmark that thought
PLH: So, no plugin, but browser has no control over underlying DRM systems, e.g. issues like accessibility.
HT: The well argued >technical< objection is that it is a step toward taking control of the platform away from the owner. That's the key problem. We are making a link in that chain easier, but the deep evil (if you believe it's evil) is further down the chain
PLH: It's a small step
<Zakim> wycats_, you wanted to ask about the distinction between built-in EME and built-in Flash
YK: I see benefits of standard thing with standard interface. I wouldn't like it but can see benefit. I'm not convinced EME is better than plugin.
PLH: You would not have to ship flash, reduces browser footprint
YK: Do you think that will happen
<darobin> [with EME you push the plugin further down the stack and reduce its surface compared to Flash]
PLH: Yes, streaming etc would need to be addressed
<slightlyoff> AR: necessary but not sufficient
PLH: I'd guess EME will show on mobile platforms over time
AVK: We never standardized the plugin API, and we know plugins have problems. Now we're opening an extra hole.
YK: At least with Flash you could implement it yourself in principle. With EME it's not clear that you can.
AVK: Standardizing this seems counter to the W3C's mission
<wycats_> to be clear, I didn't mean that you could implement the DRM part -- what I meant was that you could implement FLASH yourself
TBL: That's a complex discussion
<wycats_> so the undocumented EME API is worse than the undocumented Flash API as it relates to the web content that each of them is enabling
AVK: I am worried about the Web depending on proprietary technologies not easily ported to other platforms.
Robin Berjon thanks the group but has to leave.
<Zakim> noah, you wanted to say even small steps are symbolically significant
<wycats_> For me, the nut of the issue is that new browsers will not be able to run "web content" without a non-trivial amount of non-technical work
<annevk> +1 to wycats_
<slightlyoff> thanks darobin !
<slightlyoff> wycats_: that seems like a no-op vs the current state, no?
PLH: Regarding Web IDL: it's a metasyntax for writing specifications. Within a few years we'll use that either Web IDL or JS IDL.
<wycats_> slightlyoff: the current state is not in a standard -- so the current content is not "web content" and we're honest about it
<wycats_> the claimed benefit of the new state is that it can be described as "web content"
<wycats_> except it's not actually
<slightlyoff> I accept the semantic distinction.
PLH: Some groups are using the full semantics of WEB IDL but older stuff doesn't support that.
<wycats_> this conversation is nuanced... we should have it in person
PLH: Both Geolocation and Touch Events claim to be using ECMAscript bindings, but when it gets to things like prototypes it doesn't work.
AVK: Browser bugs?
<wycats_> I totally disagree that these are just "browser bugs"
YK: Yes, but it's not that simple, even browser vendors have trouble knowing what's right. Ordinary users have no chance.
<Yves> type conversion is also overlook
PLH: Leads to a situation where
specifications lose credibility because don't match reality.
Conformance tests fail.
... The incentive to fix isn't strong enough.
YK: I agree that in principle this stuff tends to be edge cases, but I have spent say 100 hours of my career burning time figuring out such mismatches between spec and reality. Getting the specs to where they are an accurate guide to practical reality is important (scribe has paraphrased)
AVK: Don't follow. You want specs to say where implementations will go, not where they are.
TBL: Anne, delighted to hear that!
YK: Not making strong statement on that distinction
AVK: Mozilla uses Web IDL for geolocation
AR: Chrome doesn't always follow
the IDL
... Is the issue that prototypes aren't be linearized?
YK: Part of the problem
AR: We in chrome never have performance regression, and have spent years figuring out how to linearize prototypes. So far, haven't found a way that performs.
TBL: Eventually?
AR: Eventually we should be able to do it except maybe for some real edge cases
YK: I'm on chrome going to monkey patch an event target, and on another platforms may not work
AVK: We've had IDL based stuff since forever
AR: Yeah, it's a pain to do it by hand
AVK: Then we moved to Web IDL and some did it piecemeal (Chrome?). We are trying for the whole thing, but across browser versions it's a huge job. Real interop will take you a long time.
YK: So, your view is WEB IDL documents ideal reality, and it's just taking a very long time for real reality to catch up?
AVK: Yes
YK: I have more optimism that
it's a few years, but not decade(s)
... Some things like that some browsers require capture flag and some don't really does bother developers. We play lip service to interop
AVK: Pressure for new features
YK: Maybe, the HTML5 parser was welcomed for improving interop
AR: WEB IDL is high fidelity but to the wrong ideal
YK: But PLH was asking is there enough info there, if so, then browser vendors please prioritize
TBL: So, we have specs that are not really WEB IDL compatible but still use WEB IDL to list parameters
YK: Does prose tell you
TBL: I'm saying it's being used just for the parameter list, with no implication that there are prototypes, setters, getters etc. They are usefully using as a descriptive.
Several: Whoa, there are specifications saying that things like getters and setters >are< implied
TBL: So, let's say the don't have getters and setters, should they not use WEB IDL and instead use something else, or should they say they're using WEB IDL in limited way.
AR: First, do you think the API without getters/setters is good
TBL: Right now, they are trying to document a spec that for better or worse does not intend to have getters and setters
PLH: So, today the protoypes are
being followed by all execpt Webkit and Blink. Mozilla and
Microsot are doing what they reasonably can.
... We need to make sure specs describe deployed reality even if later we hope to improve implementations
TBL: Is it pragmatic for them to use the WebIDL syntax?
AR: Two issues. A) Carries implications that to which they are not signing up, so misleading B) It's going to mislead implementors
TBL: The implementations were done "without thinking" in the sense of without attending to Web IDL
AR: If you are attempting to
describe a Javasript interface that doesn't match WEB IDL, use
Javascript to describe it
... Insofar as Javascript is a poor description language, that's on TC 39 to fix. It is however to write out an interface description, e.g. with a dummy implementation, that can meet the need
PLH: There are a few like that, maybe Web storage
TBL: Or you could clone WEB IDL and rip out pieces
YK: People have enough problem reading/learning WebIDL. Subset version adds confusion.
Someone: so, the browsers are on the road to fixing the APIs
PLH: But doing things like adding getters is very low priority
YK: I can live with that. Web IDL is the desired reality, fixing bugs is low priority
TBL: Not OK. When bugs are likely to be open for ten years, then that's not OK because specs aren't affecting reality
<annevk> plh:
YK: Really going to take ten years for geolocation
AVK: No
YK: So?
TBL: I think I'm hearing browser vendors don't want to fix some of these bugs
YK: I'm more optimistic that implementations are converging on the spec faster than that
DA: Does this fit into bucket of "TAG provides guidance to working groups"? If so, what guidance.
TBL: I think the deep dive here
is valuable.
... Popping back up might lose that value
YK: I think we can get behind saying "standards should drive interop"
DA: But TAG can get involved at detail level as we did with Web audio. Anything like that here?
AVK: The geolocation group is "run" by two people who work on Blink, so they are likely to give the corresponding answer
AR: If it's only geolocation, then we can fix this
AVK: I assume they felt that
fixing this wasn't consistent with getting to REC in a timely
way
... We >want< the prototype and we want it interoperable
YK: Would be useful to know why blink isn't doing it. Was there a deep reason making it really hard, or just didn't push hard on it?
AVK: What's a long time?
YK: Having a spec not interoperate for multiple years
AVK: We have several
YK: We should consider avoiding publication of specs likely not be interoperable for extended periods
<slightlyoff> plh: do you have a pointer to the discussions about this?
<slightlyoff> hey JeniT!
NM: If you're going to write specs that cover ideal behavior and interim, can be useful to give them formal distinction as differently named conformance levels.
AR: Confused. Of what substantive issue is geolocation an example.
<plh>
PLH: Well touch events is more clearcut, but considering geolation, the specification says the XXX object must be supported on the navigator interface, but it is not.
AVK: Makes sense, because navigator was complex and took a long time to convert.
<annevk> To be clear, Firefox Nightly passes that test
AR: When geolocation was published, WEB IDL was only a working draft. What is the core issue, that Blink isn't linearizing prototypes, or is there an issue unique to geolocation?
PLH: I now see that something
told to the director was unduly pessimistic. We thought we
heard there were not two implementations because IE passed but
Firefox failed IDL compliance. We now hear that was a short
term concern, bug fixed, and so we do have two
implementations.
... The touch events one was also presented to (Tim). Looked at running on both Firefox and Chrome mobile. There were pieces of a test failing (scribe isn't sure he got this right)
TBL: Can you do touch evens with trackpad on desktop
PLH: In principle, but I can't check that myself
AVK: If someone had checked the Firefox bug database they would have seen the fix was on the way.
PLH: We somewhat trust the WGs
AVK: Not clear this WG has deep insight into implementations other than Blink
PL: Did I hear someone concerned about need to express things beyond what WEB IDL can do?
YK: Well, I heard Tim speculate that if Web IDL semantics not right for all, the syntax might still be used.
<dka> I will note: we have talked about 3 APIs this morning - Geolocation enables access to an underlying capability of the device which may be implemented using a patent-encumbered technology (GPS); Touch events enables access to an underlying capability of the device which may be implemented using a patent-encumbered technology (multi-touch); EME enables access to an underlying capability of the device which may be implemented using a patent-encumbered technology (some
<dka> underlying DRM).
TBL: They came to me and asked about syntactic conformance (just matches Web IDL grammar). I said "no", we need syntax and semantics here
PLH: Touch events is a bit tricky. IE doesn't implement.
YK: What's the status of the patents
PLH: There is a PAG.
YK: Aren't we moving to pointer events?
PLH: Yes, but there is deployments of Touch Events so they want a 1.0 rec
YK: But not likely to get much attention in the future? Then I don't care about Web IDL so much.
AVK: But I care, because we do it with Web IDL
AR: What's the issue?
<plh>
YK: Should touch events become a
REC if not describable by suitable IDL
... Given that Web IDL is part of the spec, we don't have interoperable implementations of touch events
DA: Is this a TAG issue? Unconvinced.
YK: Broader issue is whether WEB IDL semantics are important when considering interoperable implementations? I think we agree: yes.
PLH: Anything else in remaining 4 minutes.
HT: I want to know where we are on Polyglot
PLH: HTML WG is publishing it.
HT: Henri had an objection
PLH: I think it's moving
forward
... As long as someone is willing to do the work it will move forward.
... On other things: RDFa went to rec awhile ago; on microdata it's going to note
... We're doing the same with Authoring spec
NM: We had a clear, negotiated agreement with HTML WG chairs that authoring spec goes to REC. please check the record from a couple of years ago.
DA: I agree with Noah
PLH: I will check
<plh>
<ht>
PLH: The HTML WG got a new
charter yesterday with two new things 1) new dual license which
can be used for extension specifications
... 2) The DOM4 specification was moved into HTML WG also announced yesterday and can get cc: by license
AVK: I am concerned that the cc: by license is non-GPL compatible.
AR: I'm unhappy disucssing document licenses for software
<plh> Plh: an example is: Is Anne willing to edit the DOM specification in the HTML WG under the new dual license?
<plh> Anne: no, because it's incompatible with GPL
TBL: There are some people in this discussion who have taken the trouble to be quite pedantic about it. The notion that GPL would be incompatible with cc: by is a bit pedantic.
AR: You can't drill on this without asking who would sue whom and why? Who owns the rights? If these are rights granted to the Free Software Foundation, they may be litigious. I think it's less likely that W3C or Mozilla will. There are multiple potential ways to solve this. One might be a covenant not to sue.
PLH: We published a FAQ. Our legal people believe that in this particular case your use of cc: by is compatiable with GPL.
YK: How do you link a spec.
AVK: Given that FSF considers it incompatible...
<ht>
<plh>
<plh>
AR: Henri Sivonen makes an
argument, of which I'm not convinced, that if you are going to
automatically generate help or error text that extracts from
the specification and winds up in code, the code is then
potentially not GPL clean.
... I believe that a covenent should resolve that concern.
... I do understand that this doesn't in the general case resolve the question of whether cc: by leaves you GPL clean
TBL: Definitely not just help text. It's also generating parsers from BNF, etc.
NM: I agree with Tim
DA: The question is, FSF and Creative Commons say it's incompatible and W3C says compatible, are we getting them together.
PLH: Harder than that.
AR: FSF has their own interpretation that they can enforce with respect to the products they control
DA: Thank you Philippe
Philippe leaves
TBL: This about people doing things on principle. Principle is important, and important things follow from principles. Viral licenses are complicated to design and are tweaked occasionally. The fact that FSF has come to this conclusion is a failure of the GPL license. Compatibility with cc: By should be straightforward and should be a requirement for GPL.
AVK: Yes, some of it is principle. I would like my work to be in the public domain, much as Tim got the ideas and software around the Web to be given away.
AR: CC0 is NOT public domain. Having a license and public domain is different thing. In US law, governments can put things in public domain but individuals can't.
SK: But laws typically say that the owner has all rights to do anything.
DA: Which jurisdiction?
SK: I guess I'm speaking of Russia, but that's based closely on Berne convention and should apply at least to most of Europe.
AR: ...and US
... What can the TAG do? Ask US libary of congress to let W3C put into public domain.
TBL: But our members don't want
to put our work into the public domain.
... I did not get the Web into public domain. I got CERN to agree not to charge license. The work I did here is MIT license and copyright MIT, similar to BSD and cc: by.
AR: So key was?
TBL: As I recall, not to charge royalties.
<ht> Yehuda -- yes, this is the "Authoring Spec":
<ht> "HTML5: Edition for Web Authors"
<slightlyoff> big deprecated warning at the top?
<slightlyoff> "This document has been discontinued and is only made available for historical purposes. The HTML specification includes a style switcher that will hide implementer-oriented content."
<ht> It's a Note, whereas Polyglot () is still on the REC track
<ht> But the link to the Authoring spec. from the charter says "The HTML Working Group will complete work on the following existing deliverables of the group:"
<ht> ???
<annevk> timbl:
<annevk> timbl: "The definition of protocols such as HTTP and data formats such as HTML are in the public domain and may be freely used by anyone. -- Tim BL"
<ht> Anne, yes, but as AR and SK said, _saying_ that is an indication of author's intent, but it doesn't, legally, make it so IIUC: only the US gov't can 'make' something public domain wrt US law, I believe.
<annevk> ht: this was back in Switzerland I suspect
<annevk> ht: either way, CC0 covers the US gov case
<trackbot> Meeting: Technical Architecture Group Teleconference
<trackbot> Date: 01 October 2013
<ht> Here is the (relatively) famous analysis of DRM effect on Vista and chip design:
<slightlyoff> Scribe: slightlyoff
DKA: sergey, you volunteered the topic, perhaps you can talk a little bit about it?
<twirl>
SK: in my opinion, the design
guide should serve 2 main goals
... 1.) to be a guide for folks who design APIs
... 2.) to build a guide for API reviews
... the second is guide for the TAG
YK: should non-platform builders trying to design things use the guide? what about ember?
SK: what I've written is very
general
... it should primarily be focused on the web platform
YK: if it can't be useful for library developers, it'll probably fail the idiomaticness test
DKA: my sense is that we should be focusing primarily on the work of spec developers
YK: I'm saying this is an acceptance criteria
AVK: "idiomatic" changes over
time
... what AWB does is different in style from what query does
SK: there are some bits that are specific to our field, but hopefully it will be relatively general
<twirl>
SK: it's not a guide, per sae, but it includes good IDL design notes
<wycats_> AR: Proscriptive text should be more fluid
<wycats_> AR: I would like there to be a more general design guidelines
<wycats_> ... my sense of Robin's guidelines is that they are good tactical advice but not a broad sense of the landscape
(thanks wycats_)
<wycats_> ... we need something that helps people navigate the landscape
<scribe> Scribe: slightlyoff
SK: I think it should have 2
parts
... something to help folks designing interfaces and more general guidance. It could be 2 documents.
YK: some of this stuff is going to need to change, e.g. the advice about globals will change in the light of ES6 modules
DKA: don't we ant to build looking forward to that future?
YK: if we do that, there's a group in TC39 that's refactoring the exiting platform in terms of modules and classes and they'll need to be roped into this effort to make it forward looking
<wycats_> AR: I feel like that is an attractive thing to want to do, but it's maybe too soon
<wycats_> ... since we don't have a lot of design expertise with modules
<wycats_> ... and no consumers among working groups yet
<wycats_> ... I would like to focus on the design challenges that we've observed
YK: new specs will want to use modules in 6 months
AVK: I'm not sure
YK: that's my sense of the timeline
DKA: my main concern for this
work is getting started on what we can get consensus on
... I'd like to have something meaty by the end of the year
YK: if we can get it done by the
end of the year, the current outline seems good
... it sounds like there's a lack of consensus
DKA: until there's something concrete, perhaps we should be talking about what we should say in this document
YK: I think there are some areas where we should be explaining to people how to use modules to help get them over the hurdle of using them
AVK: there's a problem with shipping
YK: slightlyoff is right…there's
a transition problem
... we need to both provide the back off strategy -- someplace to do things with modules in the interim
AVK: once things ship in 2 browsers, I think it'll be "game on"
SK: the platform is evolving
permanently. Every 6 months there will be some changes that
will cause spec authors to need to revise their work
... modules are not blocking this
... the TAG has a role: questions about wether or not to use modules _will_ be directed to the tag, so we'll be on the hook for providing advice; both current state and how to think bout the future
... I don't know if we should write down the current state in this guide, and how to cope, and how to think about designing in the future. Perhaps we should have 2 parts? something general that doesn't change frequently and a part that's more tactical: how to use modules, promises, etc.
<wycats_> Just to be clear, I am not saying that everyone will be using modules in 6 months, but that new specs will want to use them in 6 months
SK: are we agreed about the main goals of the guide?
[agreement]
SK: I have a plan for the
relatively general part
... I'd like to present it and collect your feedback, and if we agree, I'll start working on the general part of the guide
... API developers should be explaining what tasks the API should solve
... how are those tasks solved in other plaforms/APIs (prior art)
... and there should be some rationale and exposition about what to borrow and why to leave behind
... the other question is: what are the use cases?
... i've reviewed 3 APis before this meeting: web audio, web animations, and push notifications
... all presented me the same problem
... I don't know what the common use-cases are and the spec doesn't provide direction about where to find them
... I think that's a major problem with these specs
... I have to find out how they're used and why the current solutions aren't used
AR: it might be good for spec authors to have that sort of text for their own sake, but it doesn't seem like it'll be a shortcut for reviewers
SK: I agree, I do still need to
review and check the background
... when you're reviewing the spec, you need to come to an independent view
... so it's good to have the list to see if the spec authors were hitting the right issues
... for example web audio, it's intended in a browser, but it has no mechanisms for working with audio more than a minute in lenght
... it's important to have the use cases both for the folks doing review and for the folks doing the specs
... the designer should be reviewing the spec at checkpoints to make sure they align with the major design guidance
... else things may drift in scope
... I'll write this down with extended examples
... both good and bad
AR: how do we imagine this being used primarily? In coordination with us? independently?
DKA: my though was that folks who are building specs would use this as an input
AR: so it needs to stand alone and be self-supporting
DKA: agree
SK: we don't want to be going to each spec designer and having to explain the guide
AR: I think that raises the risk for advice that has a sell-by date
YK: yeah, promises, streams, globals
<wycats_> AR: I think the right way to do this is to extract advice out of our interactions with working groups
<wycats_> AR: otherwise we'll find ourselves with a pile of good but eventually contradictory statements that will not hang together
(thanks wycats_ )
DKA: is this a traditional TAG
finding?
... or is this different kind of document?
... TAG findings have publication dates and errata
... or should this be a living doc that only lives on github, etc.
NM: it could even go to REC
DKA: but if the ground is shifting quickly....
YL: we need a clear way of updating it and marking some advice obselete
DKA: agree
SK: I think this document will be
tested when we continue to do reviews
... and we can come up with determination about how it should live as a result
DKA: what can we pull out of our work with WebAudio? WebRTC?
wycats_: there's a chicken-and-egg problem with Alex's approach: it won't help us spread new technology that should be broadly used
AR: agree
SK: next chapter
AR: what do we want to do about it?
YK: I think we're going to have to take a bit of a leading role
SK: the second level that should
be explained in a spec is defining the levels of
abstraction
... what are the data structures?
... how abstract is it?
... what UI does the spec provide?
... how doe the spec objects interact with each other?
... so WA becomes clear: the data structures are very low level and it doesn't have UI
AR: how does this help us or a spec author get clarity on wether or not these are the right choices to make?
SK: it's pretty basic that the levels of abstraction should be outlined. If the abstraction levels are defined, things will compose well
AR: but does this provide practical advice? How does noting the level of abstraction turn into guidance?
SK: I've thought a lot about this….
YK: do we think this sort of document will help folks in the wrong headspace?
SK: I do think it'll help. Won't be a magic bullet,but will help clarify
PL: it'd be nice if we had a clear description of where the levels of abstraction really ARE in the platform. Seems like something the TAG should provide.
YK: agree. That seems like the first thing we should do here
PL: I think a lot of us have pictures of it in our heads, but we haven't annunciated it
YK: if you're crossing layers, at a minimum there should be an API
PL: I think we should define this model
DKA: that seems like a separate part of this document that needs to be written
YK: yeah, this was the thing that
Alex and I had in mind and we should talk about it more
... [ need offline discussion ]
DKA: if you don't have the bandwidth, perhaps you can review?
YK: yeah, need to find time
DKA: you agree with the basic formulation of the plan? draft and let sergey edit?
[ agreement ]
SK: defining the abstraction levels is the hard part
YK: I think there's a growing
consensus
... I'm optimistic that we're more on the same page than not
SK: are there written results?
[ no ]
YK: I think the extensible web manifesto is one work product. Trying to explain the platform in terms of those broad ideas is something we need to do
[ breaking for lunch ]
DKA: after lunch, we have wendy
seltzer joining us to discuss privacy and security
... after that we can loop back to this
(back at 1pm EST)
<dka> trackbot, start meeting
<trackbot> Meeting: Technical Architecture Group Teleconference
<trackbot> Date: 01 October 2013
<dka> Scribe: wycats_
<wycats> Scribe: wycats
Wendy (WS): Security and Privacy expert
WS: Here to talk about where society is going with regard to security and privacy
DKA: We had some ideas about what we can talk about
1) What could we (TAG) be doing with regard to the government snooping situation?
DKA: It's not our role to wade
into the politics
... but the question of securing the web is something that is clearly in the realm of the TAG and Web Architecture
... so could we be giving more guidance about the use or non-use of "security" technologies
2) Publishing and Linking
scribe: is there any action that W3C is planning to take with amicus briefs or anything that we can provide background
<slightlyoff> thanks wycats
3) Some top-level thoughts on "let's make a deal" application security model
DKA: 4) any input into the API
design guide around privacy and security
... 5) What should we be thinking about
WS: Let's sketch out what T&S
is doing right now
... and we'll see if there are architectural questions that TAG can help with
... to influence a "secure and trustworthy web"
DKA: Let's start with (1)
"government snooping"
... AR: Can you outline your security proposals
AR: I have the benefit of leaning
on much smarter people
... specifically Adam Langley, who works on SSL in Chrome
... I would trust AGL with my private data
... I asked him: (1) should the TAG weigh in
... and he said yes
... (2) what specific things can we do around SSL?
... (a) Perfect Forward Secrecy
... (b) strong keys
... (c) cert pinning and OCSP
()
scribe: (d) removing weak TLS
versions
... (e) Strict Transport Security headers
()
scribe: he doesn't believe that
pinning is something most sites can do
... cert pinning is really hard
... OCSP pinning is a performance optimization
... OCSP is a response to the problem of revoking certificates
... browsers can look up in a global list for revoke certs
... high lag / low compliance
YK: issues with captive
portals
... (The previous discussion was about CRLs)
... OCSP is an improvement on CRLs
()
scribe: OCSP is blocking
... therefore sucks
... OCSP pinning is a way to provide a response to the OCSP question as part of the handshake
YK: Is there a solution for OCSP and Captive Portals?
AR: We need to never trust
captive portals on the browser side
... AGL thinks both kinds of pinning are too hard for most publishers
... because OCSP pinning requires relatively frequently updating your certs and maintaining strict security around the key material
YK: Google does pinnin
AR: Yep
HT: I assumed there was an informal consensus that SSL itself isn't fit for purpose
AR: Let's try to be more
specific
... the issue with SSL is trusting the root signer
... you have to trust the root signer for a long time and you have to update them sometime
... there are economic forces that drive prices down
... it's legitimate to have a varying view of their (roots of trusts) compliance efforts
HT: I hit cert errors from sites
I have reason to trust so I blow them off
... and I think it's a universal experience
... which means certs aren't useful
AR: (f) Crypto all the things
DKA: Website authors may tell people to skip cert errors
HT: UofE does this
YK: This is really an issue of self-signed certs
AR: The advice is "spend a couple hundred bucks"
TBL: MIT says to install the MIT root cert
<Yves> install... but what to do when it's revoked?
AR: PFS can be implemented
without high costs
... we need to get more browsers to implement it
... it would be great if they did
... IE doesn't support PFS
... it's an option in the SSL handshake
(scribe suggests reading the wikipedia page for more information -- cannot scribe all the technical details)
<dka>
AR: Proposal: We should advocate that major publishers should provide it and all browsers should implement it
TBL: How do you tell someone to do it
DKA: This really all gets to Wendy's discussion about UI
WS: The user research that I'm aware of is that users are terrible at any question about security
YK: What about the "zomg don't enter this site" malicious modal dialog
AR: I'm looking into it
SK: I think it's over 90% don't follow it
AR: We've made it more and more onerous over time
HT: There is at least one page that I don't know how to get past
AVK: There are issues with captive portals
PL: Is there a standard for captive portals
YK: There is a proposed status code
TBL: Can we (TAG) kill captive portals
YK/AVK: No
AR: We can maybe limit them to reserved IPs
PL: We need a solution before HTTP status codes
TBL: Which adds problems for cert errors
<wseltzer> [the captive portal problem... how many AP devices provide the bulk of these?]
should we be change the OS UI when you're behind a captive portal?
YK: explains the generate_204 solution
()
AVK: There are also issues with timeouts
DKA: Captive portals make people oblivious to security errors
TBL: Captive portals violate web
principles
... it makes HTTP meaningless
<slightlyoff> AGL advises that the data here is pretty fresh:
()
WS: There isn't "one owner" of
this problem
... there are many interactions
... you can't solve it with one solution
<slightlyoff> Click through rates for badware/malware is low, click-through rates SSL warnings is sadly very high = (
WS: but maybe an architectural solution can work across the board to solve it cooperatively
slightlyoff: how do users perceive the difference
<slightlyoff> IIRC, different colors, but I'd have to go dig up the HTML
DKA: Work item Proposal: Recommendations for Captive Portal Owners
YK: You might be able to design a good captive portal
PL: Yes, you can for example block all ports by 80 and 443
TBL: They can look at the user agent
AR: Key strength is in flux
... we can count on governments and non-commercial actors being able to do 2^80 computation
... being able to break 1024 bit keys now or at least soon
... we need to take 1024 keys off the table
... browsers will warn
... (for 1024 keys)
AVK: What's moving faster - ability to generate keys or ability to crack them
Yves: Weak ciphers can also render the keys useless
HT: If you're paranoid you may believe that government agencies may have cracked 1024-bits better than brute force
<slightlyoff> was looking for this earlier....:
YK: It may still be worth protecting ourselves from others than the NSA even if the NSA cracked RSA
TBL: The TAG could just say "don't use 1024 bit keys"
AR: And we can say "don't use the null cipher"
TBL: We can ask the validator suite to add validation for sane SSL practices
AR: And a name-and-shame list
<Yves>
AR: Strong Versions
... 1.1 and 1.2 are the state of play, people should not be using something else
... don't use TLS <= 1.0 or SSL <= 3.0
... Strict Transport Security
...
... Crypto All the Things
... Public services should all be HTTP
Yves: What about caching
HT: HTTP thinks that the web only works because of caching
YK: Public caches are such bad actors that you may wish to use SSL just to opt out of them
HT: The W3C servers were being
brought to their knees by poorly written proxies that were
requesting namespaces
... IBM said bandwidth was cheaper than writing a cache
<slightlyoff> HSTS background:
YK: The mixed content warning has helped with getting SSL support in CDNs
AR: TL;DR We need more
crypto
... the expectation that the web's traffic is mostly unencrypted is "an invitation to be embarrassed later"
DKA: What other unintended consequences are there
(scattered discussion)
<slightlyoff> heh
YL: People tend to use SSL and/or port 443 for new services to avoid proxies messing around
<Yves> especially interception proxies
WS: UI issues
... in some ways we have steered away from recommending UI because browsers see it as a competitive advantage
... but maybe we really do need to make some recommendations now
... so users have a consistent mental model of what good behavior is vs. bad behavior with regard to warnings
... and have a hope of making better decisions about security
... rather than throwing up our hands and letting users choose browsers that make decisions for them
<wseltzer> "warning fatigue"
YK: Why are self-signed certs
still emitting warnings
... Why not "no padlock, no warning"?
AR: You could imagine this
... the only real value is that it makes the work of hackers more difficult
... it also puts encrypted traffic in a different bucket
(if we think this warning is useful, why not yell at the user when they use HTTP)
(surely a self-signed cert is no worse than unencrypted HTTP?)
<wseltzer> (how do you help the user differentiate between seeing a self-signed cert for well-known-site and one for his own/friend's site?
HT: I have my owned self-signed cert -- what's the additional vulnerability if I don't install the self-signed certs
YK: Strict Transport Security may help
AR: It gives you temporal security
TBL/AR: Man in the middle is still easy
("easy")
TBL: I like being able to opt into trusting a specific self-signed cert
YK: I think you are in the top 0.00001% of the mental model of the situation
TBL: Teenagers using facebook
understand friends of friends
... the only major difference is UI
<wseltzer> "we have bad security heuristics"
WS: This is a very hard set of
problems
... what can we do to chip away at it?
... how can we think about the different elements of the threat model?
... for example the pervasive passive adversary vs. the adversary targeting you vs. the adversary targeting a type of communication
... for example crypto all the things helps with passive collections
... and a warning for self-signed-certs interferes with "Crypto All the Things" which helps with passive adversaries
... we now know that there is much more of the passive adversary threat
... so maybe Crypto All the Things is a high priority
... as IETF liaison I'm thinking more about what orgs have what responsibilities
<slightlyoff>
AR: What is IETF thinking
WS: Crypto All the Things
AR: Good Crypto All the Things
WS: Also PFS to avoid passive collection and future cracking
YL: What about DNSSEC?
<annevk> Crypto All the Things Strongly? -> CATS!
<Yves> DANE
<Yves> withint IETF
<slightlyoff>
YK: I want to remove the padlock and the warning for self-signed certs
AVK: File a bug
YK: Sounds good
... I don't think enlisting users to yell at webmasters is a generally effective strategy
TBL: Maybe the browser should send requests to a website so it shows up in their error logs
DKA: Should we have a work item?
AR: Let's get SSL experts as collaborators
DKA: we have some good starting points there
<dka> q
YK: Let's make a starting document that isn't controversial
Sergey: what about expired certs?
AR: No. We need to warn
people
... the idea is to create a class of certs that aren't meant to imply protection from MitM
... what about expired self-signed certs
YK: People aren't willing to say that HTTP traffic is specially warned, so anything better than HTTP shouldn't warn unless the server expresses intent to have a padlock show up
DKA: Let's rope some people
in
... like your friend
ACTION Alex to start writing some text for security recommendations with AGL's help
<trackbot> Created ACTION-831 - Start writing some text for security recommendations with agl's help [on Alex Russell - due 2013-10-08].
(scattered discussion about spy vs. spy icon)
DKA: What should we be thinking
WS: We're focusing on (a) DNT
Header (b) Privacy Interest Group
... we don't have formal security reviews
YK: Is there interest in doing security reviews?
WS: The IETF does it
YK: Why doesn't the W3C do it?
DKA: unknown
... TBL?
TBL: The TAG hasn't recommended
it
... at first, it seemed like lip service
... but there have been a few times where I felt it was useful
... so I would be happy to do it
YK: I'm suggesting that there is a formal security review of specs
(as opposed to the lip service "security considerations" section)
TBL: We do this already for
accessibility and internationalization
... so maybe we should do it for security
YK: I didn't enjoy doing this for JSON API
<wseltzer> bureaucratic hassle--
TBL: Would have been nice if they had to write security considerations when they wrote SMTP
WS: Please submit any documents
that you want wider reviews on to the IG or the Workshop
... there isn't yet any specific plans for the Workshop
<wseltzer> (EU Project)
SK: Part III: Defining Object Responsibilities
<dka> trackbot, start meeting
<trackbot> Meeting: Technical Architecture Group Teleconference
<trackbot> Date: 01 October 2013
SK: This stage can help find
design problems
... IV: Object Interface
... this stage helps ensure consistency with the names used in the details of the APIs
... this feeds into the Platform-Specific guide
DKA: Maybe we can discuss this in terms of an API like Web Audio
SK: I had a push API review that we could use for this
AR: I looked at it
... it seems like a good way to analyze OO APIs
... what about layering?
... how do these things relate to markup?
YK: We asked for example how Web Audio related to <audio>
AR: We should keep these things
front and center
... because it can be easily lost when you're focused on a particular layer
YK: You could imagine an HTML form of the push API
SK: Push API doesn't have any
elements
... it may be problematic to add elements
... there are lots of questions about how all this should work
... I cannot make assumptions about this should look
YK: Not every tag has a visual representation... it's just our declarative tool of choice
AR: What else?
YK: What, if any, new
capabilities does this proposal introduce? If none, what
capabilities is it described in terms of?
... If there isn't a direct JavaScript API for something for performance, what is the rationale?
DKA: What platform invariants exist? Does this proposal violate any?
YK: We know of raciness, but also
things like not leaking same origin
... And doing things like Fetch and Service Workers may make it easier to avoid breaking Same Origin because there's a JS model for what's happening
DKA: What other WGs should we be reaching out to?
YK: I think we should review the Web Components family of specs
ACTION Alex to invite Dmitry to present about Web Components
<trackbot> Created ACTION-832 - Invite dmitry to present about web components [on Alex Russell - due 2013-10-08].
ACTION Yehuda to write some text about capability layering
<trackbot> Created ACTION-833 - Write some text about capability layering [on Yehuda Katz - due 2013-10-08].
ACTION Sergey to start fleshing out an API review document
<trackbot> Error finding 'Sergey'. You can review and register nicknames at <>.
ACTION Сергей to start fleshing out an API review document
<trackbot> Error finding 'Сергей'. You can review and register nicknames at <>.
ACTION twirl to start fleshing out an API review document
<trackbot> Created ACTION-834 - Start fleshing out an api review document [on Сергей / Sergey Константинов / Konstantinov - due 2013-10-08].
<slightlyoff> Scribe: slightlyoff
<annevk>
<annevk> for notes
YK: consensus is that we need
bundling for ES6 modules
... AVK came up with the idea that if you want to represent the file inside a zip, use the fragment identifier
YK: ...but then AVK noted that there's an issue with it and we need to work it out.
<annevk> plinss: can you project maybe
YK: the platform uses fragments to navigate *inside* resources ( generally), but this now changes it to move fragments into part of the resource being identified
TBL: you shouldn't feel constrained to what HTML fragments do
YK: everyone's happy with
fragment semantics
... when you're writing HTML that's RELATIVE to another file (say from a filesystem), it's easy:
... you do <a href="../thinger.html">
... now imagine the cluster of files lives inside a zip file
... it doesn't appear that ".." can be relative to something inside a fragment.
... the inclination was "surely this happened in XML...maybe that was solved"
HT: the media type defines the semantics of fragment identifiers
[ inaudible ]
AVK: if I get an HTML file with
it's own media type, and now the question is "what's the URL of
the document"?
... is there an "outer URL/inner URL" boundary?
<dka> Noting we are also working on a capability URLs doc: - and Yehuda and I have talked over lunch about a possibility of a "URL best practices for web applications" doc… Now we're discussing Zip URLs…
HT: this occurs in multipart-mime as well
TBL: now imagine an address space where you have fragments for things that are named by bits of a fragment
HT: there's a base URI for the virtual installed location for the whole package
<dka> Also, see for what the TAG has said previously on hash URIs in web app context.
HT: and relative URLs are relative to the package
YL: what about a Service Worker in the zip file that can do the resolution?
TBL: my suggestion was to use "/"
[ whiteboard example of <a href="../foo.html"> ] in a zipe with #foo.html and #bar.html
YK: so is the idea that there's a virtual URL?
AVK: yes, there's an internal
addressing scheme
... zip://..../bar.html
TBL: alternative design with
different problems
...
AVK: how does this work?
TBL: 3xx redirect from the unresolveable URL to foo.zip
[ we need something that doesn't require server support ]
TBL: this has nice properties
(natural URLs, etc.)
... requires both server and client support
AVK: the problem with this is
that we've learned that deploying anything at the HTTP layer is
REALLY hard. Even CORS headers is a tall ask.
... this is negotiation style...it's a lot harder
YK: it's trivial to do in
rails
... you need to get it in apache
... in general we reject solutions that require .htaccess files
AVK: if we disregard fragment,
and we disregard the redirect solution, the other ways to do
it...
... you can have nested URLs: outer URL with new scheme
<wycats> I am unwilling to disregard the fragment solution
AVK: there are some nasty problems; URLs are now a stack, zip urls require parsing changing, etc.
TBL: historically that sort of
thing has gone down rather badly
... you might have had "zip:" and then a URL encoded URL
AVK: [ notes that FF has "jar:"
]
... the 4th solution is to have some sort of sub-path
... some new sort of seaparator
... I proposed "$sub=" which would be very unique
... everything after "$sub=" is for local processing
... it works for polyfilling too
YK: I think you should be able to use whatever rules you'd sub in for stuff after "$sub=" you can use in the hash soltuion
AVK: nooo....
TBL: can I put slashes in the post "$sub=" area?
[ yes ]
HT: what about a generated URI for the contents?
YK: can you say that the fragment is part of the base URI for that scheme?
AVK: what scheme? there's no scheme for the fragment
[ discussion about the relativeness of the mimetypes and the address-bar duality ]
AVK: the "$sub=" only affects the
URL parser
... what the IETF specifies isn't what we have
... "$" is illegal...it's reserved
YK: not use "$"?
AVK: people use illegal charachters = (
[ backing up ]
PL: what's the basic
usecase?
... is it something that's always going to do processing on the client? or are we hoping for smart servers that can collaborate and send the right things?
TBL: when people put things on the web as zips, now we've closed them off
[ TBL articulates an example where the server might only want to send individual files, not the whole zip ]
[ discussion about server assistance and what gets sent to the server ]
PL: the reason I asked is that I
can see utility for both versions
... we might want to do a verssion that supports both
... with combinations of smart/dumb clients/servers
YK: for a dumb server, you need to send the extra info in a separate header
PL: I'm suggesting HTTP extnsions
YK: you still need a new separator
PL: if that's our goal, it rules out fragments and new schemes
<wycats> I am warming to the % solution
HT: can I present a different solution?
<wycats> as a new URL separator that means "don't send this to the server, but it's part of the base URI" (and maybe the rest is sent in a header)
HT: suppose we start at the
following point: somebody requests the package, and it's got
the relevenat content-type.
... the client is going to get a directory listing and what you'll get in the URL bar is something with zip://example.com/package.zip/
(and you get a list of URLs in the bundle)
HT: zip://example.org/package.zip/foo.html
TBL: doesnt have enough information! No delimiter
HT: we said we want this to work:
TBL: you don't know where to break it up
[ YK and AVK discuss the public/private URL merits ]
YK: I'm looking at the % solution: send only the base to the server
AVK: % isn't popular because you
can't polyfill it. 400 on apache and issues on IE and IIS
... naked "%" is interesting as it's an extension to URL parsing
... nobody can be using it
YK: polyfilling is about what old
*browsers*, not old servers do
... the fragment has the same problem
AVK: yes
[ discussion about how making it work in old clients ]
TBL: there's always been the problem that when you introduce a new mimetype, there's download vs. understand
YK: in the ideal world, the old
browser would send the request and the server would have a
chance to send the right file
... you could unpack the file on disk next to the real zip
AVK: we need a delimiter, and we need something so unique that it won't conflict in data URLs
(etc.)
YK: is that important?
AVK: useful for feature testing
YK: make a blob!
... we need to find a char that's supported by old browser
s
<Yves> See semicolon
<Yves> [[
<Yves> For
<Yves> example, the semicolon (";") and equals ("=") reserved characters are
<Yves> often used to delimit parameters and parameter values applicable to
<Yves> that segment.
<Yves> ]]
TBL: old browser, new server...
YK: we could say the new semantics are "%", but that "$sub=" is equivalent
HT: what about "|"
[ can't use ';' ]
YK: need something illegal that no browser sends or doesn't occur in the corpus of the web
<annevk> Yves: ; and = work fine in URLs today
<annevk> Yves: people use = in the query string for sure
[ discussion about unpacking for compat ]
AVK: we need to require a mime
type
... otherwise sites that upload zip files become vulnerable to attacks
<Yves> something like /foo.zip;sub=root/blah.html would be usable, even illegal characters might be on the web today
AVK: if they host zip files which have HTML in it, no they are subject to XSS via the HTML inside the zips
PL: does zip content need to be a different origin?
YK: can't be
TBL: do you wnat to say within a site that some bits are different origins?
YK: polyfilling is an interesting
constraint that can help us tie-break...so what's in tie?
... TBL hates "%"
AVK: "%!" would work, but you can't just have "%"
YK: so semantics are,
[base]%![location in zip]
... the base is sent to the server
... and you send a new header with [location in zip]
PL: an old client w/ an old
server, would pass the whole thing to the server and 404
... old client / new server might do the right thing
YK: the reason I want a different
separator is that righ tnow we have 2 semantics : thing to be
sent and thing to be reserved
... but it turns out you really want both
PL: what I like is that you send the second half in an additional header
TBL: why is "?...." weird?
AVK: it's the query string....
TBL: right, servers lop it off and send the zip file
[ discussion of servers throwing away stuff after "?" for static assets ]
YK: sort of works and is
semantically reasonable....emperical question
... IRL, it already has these semantics
... how does ".." resolve with "?"?
<plinss> ‽
[ it jumps behind the "?" ]
<annevk> plinss: if only URLs were not bytes
TBL: my requirements are that I should be able to link within, without, and out with an abs URL
<dka> ¿
YK: "%!" is appealing
AR: what's the case aginst "%!"?
AVK: doesn't work in IE...IE doesn't parse the URL
YL: changing URL parsing on server is unpossible
<dka> ¶
<dka> ∑
AVK: most of the solutions require that sort of change
<annevk> bytes people, URLs are bytes
<plinss> ?!
<annevk> :-)
<dka> £
<annevk>^_^image.jpg
TBL: if I wasn't so enamoured with clean specs, what I'd do is send things, look for 404s, and then look for the zip on disk
AVK: "%!" fails the OldIE test
YK: what about "$!"?
[ discussion of ".." vs. "?" ]
Yves: +1
AVK: so this can't happen after the query string?
[ yes ]
AVK: thre are downlod services that treat the full URL as the addres
[ relative URLs vs query strings ]
[ what about "$!" ? ]
<timbl> [ discussion of ".." vs. "?" ∀ values of ..]
AVK: haven't looked at that
YK: has to not fail on old IE
AVK: and has to work with data URLs?
YK: seems like a good tiebreaker?
AVK: at what level in the URL parser does this enter?
YK: URL parsers need to see it as part of the delimiter
[ data: questions ]
AVK: data: urls do have query
strings
... we could do that, but htne you don't get the zip out of it
AR: love that you're gonna be base64 encoding the zip ;-)
YK: turns out that the limits stick us with a new delimiter that works in old browsers...preferably not alphaneumeric
TBL: the architecture of URLs is
that there are these chunks...
... things that are being sent to various parties but not others...lots of stuff is built on this...and worried that this is new
YK: doesn't break infrastructure
because new browsers won't be sending broken stuff into the
wild and old browsers might, but that's the price for
supporting them
... thought there was a binary choice between sending and not sending to the server, but it turns out there's a union option
... proposal is a new URL delimiter
... [before]$![after]
... [before] is sent to server
... [after] is sent in a new header
[ what about query strings? does it break 'em? ]
PL: should be part of the query string if it's going to break them
[ discussion about ".." relative to "?" ]
[ polyfill case ]
YK: it really is a new semantic, so you actually need new syntax
HT: you're not gonna get it until HTTP 2
YK: why not? this is a client behavior
[ seems old browsers send "$" ]
(encoded or not?)
AVK: not encoded in gecko
YK: wanting somthing that's a delimiter and doesn't send stuff to the server, it means we need to change URLs
AR: agree
TBL: what about "/$/" ?
... it'll 404 on an old server
YK: if it's not used on the web,
seems fine
... do we agree on the shape of the solution?
HT: would like to see something that thinks about what it means to add contains to the ontology of client/server interaction
[ that's what this is ]
TBL: one of the things about having a "/" is that...if you have lots of icons, you could just 303 to the zipfile anyway
HT: I do want the nesting..and
we're not going to get that = \
... I want a way to say I want you to start the whole resolution over again...mounting a zipfile as a filesystem analogy
YK: I thinks that's what this delimiter propsal does
TBL: you want the base URL to be somewhere else?
HT: yeah
YL: new clients will have separate logic for redirects inside zips...need to identify the main content of the file
(HALP, scribe error)
TBL: the whole tag is labouring
under the burden of getting a change to apache...trying to get
mimetypes added...etc.
... apache doesn't change until IETF stability...then a long lag
<Yves> server return a CL ttp://, new clients will recognize that the "container" is a zip and fetch it for further resolution
TBL: it'd be nice to have a zip-aware apache module...
YK: I think that'll happen, but we should also support low-fideltiy modes
TBL: you could unwrap..
(inaudible)
HT: agree that this covers all
teh viable proposals...the idea that you need a new
delimiter
... a mechanism whereby a new syntactic signal strips a part of a URI and puts it in a header...that seems like the only thing that talks about nesting containers
DKA: what's the proposed TAG output?
[ none ]
DKA: are we sure?
... a summary? a note?
<timbl> If you use a /$/ then 1) new clients can just fetch the zip & DTRT. 2) old clients will work if EITHER the server has been hacked to be smart and 303 to the zip file OR someone has unzipped the thing into a directory called $
YK: a note seems good
timbl, agree
[ agreement about a note ]
ACTION wycats draft a note regarding constraints and decisions regarding zip URLs
<trackbot> Created ACTION-835 - Draft a note regarding constraints and decisions regarding zip urls [on Yehuda Katz - due 2013-10-08].
<timbl> If you use a / then 1) new clients cannot just fetch the zip so they may have to use an algorithm on old servers, nut on a new server they get a 303 2) old clients will work if EITHER the server has been hacked to be smart and 303 to the zip file OR someone has unzipped the thing into a directory called $
AR: do these compose? is parsing specified as left-to-right?
YK: yes
PL: yes, it should just work
AVK: you need to repeat the delimiter
[ yes, of course ]
PL: one of the things I was
considering...smart client and server let you compose arbitrary
URLs with these deimilters and the server might be able to help
you zip things up with arbitrary paths
... these delimiters should be equivalent to a slash for caching, etc.
TBL: that's why I like "/"
PL: right, but I need a way to signal that this one is special
TBL: analogus: instead of having
a domain name in the wrong order, thought through inverting
domain names... /org/example/...
... would have been extra work, but nice for lots of reasons
... would like this to be clean in that way
... an appealing property
YK: the syntax isn't expressive enough to help smart server do the Right Thing (™
PL: the protocol about what sort of response I get from what sort of server needs to be specified
AVK: for now we don't need to do the server part
YK: we need to say that there's a header
HT: no 303 needed...it's a mediatype solution
YL: you need a container type
PL: if the server only sends one resource, it needs to send the right mimetype
[ discussion about navigating around inside a zip ]
HT: does apache give you header-based hooks?
PL: rewrite rule
HT: so a vanilla apache will work...don't need to rev the server to get this to work...a sysadmin can do this
TBL: right. There are 2 levels.
The .htaccess level...not obvious how easy that is to make it
work
... can't look at the file tree since there isn't one
HT: uncovered a difference in media type expectations: does this work with any zipfile?
TBL: I'm hearing a requirement that zip files be usable without hassle
AVK: that was the plan but it's
not clear we can do that security-wise
... new problems...cache-manifest issue
AR: can we solve this with cors?
[ no, same-origin all the time ]
YK: need a new opt-in
SK: do we care about encrypted zip files?
[ maybe? ]
YK: what about the username/passwd URL features?
AVK: no, but we need to talk
about the format...what do we support
... and there is some ork on the fetch side about how long things persist, are cached, etc.
... [inaudible] pinning []
AR: ? ?
AVK: you need to keep the zip
alive for some amount of time
... don't want to throw it away quickly
HT: what we're looking for is a
"super slash"
... what we want it to mean is "this is something that redirects...to understand this you ahve to unpack to proceed"
TBL: I prefer a clean
hierarchy
... "//" was a mistake
... it shoudl have been /com/microsoft/.....
... there's level-breaking here...things that aren't really hierarcies
HT: taht's what they're proposing...assimilating it to the notion of slash is reasonable...we can't use "/" as such....
[ missed it...sorry ]
HT: TBL is saying "/" will work
YK: you can't disambiguate
HT: how does the client know?
TBL: it gets a redirection
HT: example.org/foo.zip/index.html
[ discussion of 303 solution ]
TBL: pushing back...why not the server solution?
YK: in my dayjob, I do write
servers, and while I'd love it for servers to be the answer,
but it's not tennable
... e.g., github pages
TBL: a sad case
YK: millions of things on it!
AVK: shouldn't have taken mimetypes out of the body = (
[ argument about archeology ]
(whimsical debate about polyglot zip/html)
YK: what about the mimetype? sniffing?
AVK: sniffing doesn't support all file types
<annevk>
AVK: you can't sniff css, html,
xml, ...
... so you use the extension
AR: you need to not have a manifest...it imperils the streaming
YK: surprised and nervous about
.png being different inside a zip file
... what if it's a GIF?
AVK: image rendering system ignores the mimetype already...no worries
YK: if you take a bunch of
content and zip it up, want it to behave the same
... want the manifest to help me avoid weird behavior
AVK: we could, but I don't want to
AR: type comes through use
YK: existing content may be
relying on the quirks
... and make break if you zip it up
TBL: the server could do a request to itself and bundle up responses
AVK: we should consider the complexity budget
YK: but we're adding a new
thing...a new semantic
... the extension dominating is something we've never had before
... and this has new semantics
AVK: the alternative is a manifest
YK: these semantics are new
AVK: so is zip
... sniffing is not sensible, it doesn't cover all bases
AR: what actually breaks?
AVK: pain text in an iframe...what should it do?
TBL: different behavior for same file at two locations
<annevk> SVG without a MIME type won't work in <img>
SK: we're losing content type and encoding and we can't detect the charset
AVK: you can
YK: we try hard, but it's not 100%
DKA: we have a system like
this...widget
... it has it's own URI spec for internal resources
HT: why aren't you using multi-part mime?
[ charset, text/plain discussion ]
<dka> Just for your bedtime reading, the Widget family of specifications:
thanks!
<dka> Widgets URI scheme:
<ht> TBL: Whatever determines how the server would serve a document (wrt Content-type and charset) should also be what a server will include in a 'zip' file
AR: crazy idea: let service worker hve access to unzipping APIs and let it do this
<ht> AR, That works best if we take TBL's suggestion and just use /
TBL: similar problem in another space...serving some things up raw...tables in the data area
ht, true
TBL: and we mostly don't use FTP servers...we just use HTTP servers
AR: what about subsetting? only a few mime types allowed
[ no ]
TBL: manifest is appealing because you can introduce new mime types on old servers
AR: perhaps zip won't work?
YK: not sure...
[ SVG and image ]
TBL: suppose we say the zip mimetype, there's this set of extensions that map to mime types
<ht> I hear two positions: No manifest, but some combination of sniffing and context-of-use will determine (most) media types, vs. manifest (at the beginning)
[ that's annevk's proposal ]
YK: you can't just zip things
up
... now you have to fix in extensions
[ discussion about constraint of zipping up a dir ]
[ we don't have locale to fall back on ]
AVK: I'm ok with a manifest...stuff that's not in the manifest will be application/octet-stram
<ht> Actually, it was revisiting the point wrt "zipping up a dir" that relative URIs should continue to work
YK: so css has to be in the manifest?
AVK: yes
<ht> W/o a manifest, no-one has come up with a way to guarantee charset
AR: nail in right-click
argument's coffin: ordering for right-click case is likely tobe
very bad, making things slow
... in that case, why not multipart mime?
YK: hrm....
... we need to arrrive at a format that lets us have multiple http documents with their headers
HT: you can zip a multipart mime file
<ht> or, as others seem to prefer, gzip
not sure why I try to participate
<dka> Another red herring for the fire:
<dka> And another:
<dka> One more:
I was going to say that it'd be really nice to have an API that deals in terms of WS Response objects and a map
<timbl> Due to the temporary shutdown of the federal government,
<timbl> the Library of Congress is closed to the public and researchers beginning October 1, 2013 until further notice.
<timbl> Who can keep a copy of the LoC in case the USG goes down? Wikileaks?
<timbl>
<ht> ar, what changes if you move 'down' to the HTTP Response layer? What's better there?
<ht> I guess you get the Transfer Encoding vs. Charset Encoding distinction. . .
ht: I don't think a ton...except that we hve this nice API symmetry
AVK: now we need to define the format if we use multipart-mime
TBL: isn't there content-disposition?
[ lots of discussion about zipping, multipart mime...etc. ]
<ht> AR: I don't want to zip the imaages in a bundle
thanks ht
sorry for failing at the scribing
<ht> Various: multipart-mime allows that some parts are zipped, and some aren't -- use content-encoding: zip/gzip/...
calling for help in filling in the last 20 min
<dka> ajourned
\o/
This is scribe.perl Revision: 1.138 of Date: 2013-04-25 13:59:11 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/bor/nor/ Succeeded: s/HT L/HTML/ Succeeded: s/XXX/Google/ FAILED: s/XXX/Google/ Succeeded: s/WEB IDL/IDL/ Succeeded: s/done by hand/it's a pain to do it by hand/ Succeeded: s/mysefl/myself/ Succeeded: s/The DOM4/2) The DOM4/ Succeeded: s/compatible/incompatible/ Succeeded: s/the browser/the ideas and software around the Web/ Succeeded: s/CC: by/CC0/ Succeeded: s/encrypted/unencrypted/ Succeeded: s/sane/consistent/ Succeeded: s/timbl:/timbl,/ Succeeded: s/TM)/™/ Succeeded: s/ht: true/ht, true/ Succeeded: s/put stuff/fix/ Found ScribeNick: noah Found Scribe: slightlyoff Inferring ScribeNick: slightlyoff Found Scribe: slightlyoff Inferring ScribeNick: slightlyoff Found Scribe: wycats_ Found Scribe: wycats Inferring ScribeNick: wycats Found Scribe: slightlyoff Inferring ScribeNick: slightlyoff Scribes: slightlyoff, wycats_, wycats ScribeNicks: noah, slightlyoff, wycats Default Present: +1.617.715.aaaa, Dan, Peter, Tim, Serguey, Yves, Henry, plh, Anne, Yehuda, [IPcaller], darobin Present: Dan Peter Tim Serguey Yves Henry plh Anne Yehuda Noah Robin Regrets: Jeni Found Date: 01 Oct 2013 Guessing minutes URL: People with action items:[End of scribe.perl diagnostic output] | http://www.w3.org/2001/tag/2013/10/01-minutes.html | CC-MAIN-2014-42 | refinedweb | 12,295 | 69.21 |
Suppose that you don't use an engine like "Unity3D" that has some built-in ways to deal with spritesheets, how would you deal with the "spritesheet problem"? As it is known, spritesheets are better than loading separate .png files for animation purposes (Considering that a character has movement, atack, defense, death, etc, animations). Most of the people, I guess, would take the 0x, 0y pixel colorkey and make it transparent for the whole image AND cut manually all animations and store them in a collection of some sort.
The key point here is automation. If a spritesheet has irregular sprites (For example, the first one is a rectangle of 30 by 25 pixels, and the second one is irregularly far from the first sprite image), one cannot implement a function to cut all next sprites based on the rectangle of the first, because all sprites would have parts missing, etc.
Manually storing every rectangle position in the sprite sheet seems to be a great option for a general game, but the same does not apply to an engine. I'm developing an engine on Pygame/Python, and, therefore, I want a clever way to separate/cut the inner sprite rectangles and return them as a list.
The solution? Looping pixel per pixel and applying some logic based on the colorkey. How would you do that? Would you bother to implement such a function? What do you think about it? For the sake of the topic, here's my method for cutting based on the first rectangle's position and size (It does not work if the spritesheet is irregular):
def getInnerSprites(self, xOffset, yOffset, innerRectWidth, innerRectHeight, innerRectQuantity): """ If the grict is a sprite sheet, it returns a list of sprites based on the first offsets and the width and the height of the sprite rect inside the sprite sheet. """ animation = [] if self.isSpriteSheet == True: for i in range(xOffset, innerRectWidth*innerRectQuantity, innerRectWidth): print i animation.append(self.getSubSurface((i,yOffset,innerRectWidth,innerRectHeight))) else: print "The Grict must be a sprite sheet in order to be animated." return animation
I'll try to implement the "getInnerSpritesByPixel()" method. Spritesheets are a key thing in a complex game like a MMORPG, where almost every item has its own animation. Such method is more than necessary. | http://www.gamedev.net/topic/654308-spritesheet-algorithms/ | CC-MAIN-2015-22 | refinedweb | 383 | 60.45 |
Building iOS Apps with Xamarin and Visual Studio
Learn how to make your first iOS app using Xamarin and Visual Studio, by making a simple photo library viewer.
Version
- C#4, iOS 10, Other
When creating iOS apps, developers typically turn to the languages and IDE provided by Apple: Objective-C / Swift and Xcode. However, this isn’t the only option—you can create iOS apps using a variety of languages and frameworks.
One of the most popular options is Xamarin, a cross-platform framework that allows you to develop iOS, Android, OS X and Windows apps using C# and Visual Studio. The major benefit here is Xamarin can allow you to share code between your iOS and Android app.
Xamarin has a big advantage over other cross-platform frameworks: with Xamarin, your project compiles to native code, and can use native APIs under the hood. This means a well written Xamarin app should be indistinguishable from an app made with Xcode. For more details, check out this great Xamarin vs. Native App Development article.
Xamarin had a big disadvantage too in the past too: its price. Because of the steep licensing cost of $1,000 per platform per year, you’d have to give up your daily latte or frappuccino to even think about affording it … and programming without coffee can get dangerous. Because of this steep price, until recently Xamarin appealed mostly to enterprise projects with big budgets.
However, this recently changed when Microsoft purchased Xamarin and announced that it would be included in all new versions of Visual Studio, including the free Community Edition that’s available to individual developers and small organizations.
Free? Now that’s a price to celebrate!
Besides cost (or lack thereof), Xamarin’s other virtues include allowing programmers to:
- Leverage existing C# libraries and tools to create mobile apps.
- Reuse code between apps on different platforms.
- Share code between ASP.Net backends and customer-facing apps.
Xamarin also offers a choice of tools, depending on your needs. To maximize cross-platform code reuse, use Xamarin Forms. This works especially well for apps that don’t need platform-specific functionality or a particularly custom interface.
If your app does require platform-specific features or designs, use Xamarin.iOS, Xamarin.Android and other platform-specific modules to get direct interaction with native APIs and frameworks. These modules provide the flexibility to create very custom user interfaces, yet still allow sharing of common code across platforms.
In this tutorial, you’ll use Xamarin.iOS to create an iPhone app that displays a user’s photo library.
This tutorial doesn’t require any prior iOS or Xamarin development experience, but to get the most from it you’ll need a basic understanding of C#.
Getting Started
To develop an iOS app with Xamarin and Visual Studio, you’ll ideally need two machines:
- A Windows machine to run Visual Studio and write your project’s code.
- A Mac machine with Xcode installed to act as a build host. This doesn’t have to be a dedicated computer for building, but it must be network accessible during development and testing from your Windows computer.
It greatly helps if your machines are physically near each other, since when you build and run on Windows, the iOS Simulator will load on your Mac.
I can hear some of you saying, “What if I don’t have both machines?!”
For Mac-only users, Xamarin does provide an IDE for OS X, but in this tutorial we will be focusing on the shiny new Visual Studio support. So if you’d like to follow along, you can run Windows as a virtual machine on your Mac. Tools such as VMWare Fusion or the free, open-source VirtualBox make this an effective way to use a single computer.
If using Windows as a virtual machine, you’ll need to ensure that Windows has network access to your Mac. In general, if you can
pingyour Mac’s IP address from inside Windows, you should be good to go.
- For Windows-only users, go buy a Mac right now. I’ll wait! :] If that’s not an option, hosted services such as MacinCloud or Macminicolo provide remote Mac access for building.
This tutorial assumes you’re using separate Mac and Windows computers, but don’t worry—the instructions are basically the same if you’re using Windows inside a virtual machine on your Mac.
Installing Xcode and Xamarin
If you don’t have it already, download and install Xcode on your Mac. This is just like installing any other app from the App Store, but since it’s several gigabytes of data, it may take a while.
After Xcode is installed, download Xamarin Studio onto your Mac. You’ll need to provide your email, but the download is otherwise free. Optional: do a happy dance for all the coffees you can still afford.
Once the download is complete, open the installer package and double click Install Xamarin.app. Accept the terms and conditions and continue.
The installer will search for already-installed tools and check for current platform versions. It will then show you a list of development environments. Make sure Xamarin.iOS is checked, then click Continue.
Next you’ll see a confirmation list summarizing the items to be installed. Click Continue to proceed. You will be given a summary and an option to launch Xamarin Studio. Instead, click Quit to complete the installation.
Installing Visual Studio and Xamarin
For this tutorial you can use any version of Visual Studio, including the free Community Edition. Some features are absent in the Community Edition, but nothing that will prevent you from developing complex apps.
Your Windows computer should meet the Visual Studio minimum system requirements. For a smooth development experience, you’ll want at least 3 GB of RAM.
If you don’t already have Visual Studio installed, download the Community Edition installer by clicking the green Download Community 2015 button on the Community Edition web site.
Run the installer to begin the installation process, and choose the Custom installation option. In the features list, expand Cross Platform Mobile Development, and select C#/.NET (Xamarin v4.0.3) (where v4.0.3 is the current version when this tutorial was written, but will likely be different in the future).
Click Next and wait for the installation to complete. This will likely take a while; go take a walk to burn off all the cookies you ate while installing Xcode. :]
If you already have Visual Studio installed but don’t have the Xamarin tools, go to Programs and Features on your Windows computer and find Visual Studio 2015. Select it, click Change to access its setup, then select Modify.
You’ll find Xamarin under Cross Platform Mobile Development as C#/.NET (Xamarin v4.0.3). Select it and click Update to install.
Whew—that’s a lot of installations, but now you’ve got everything you need!
Creating the App
Open Visual Studio and select File\New\Project. Under Visual C# expand iOS, select iPhone and pick the Single View App template. This template creates an app with a single view controller, which is simply a class that manages a view in an iOS app.
For both the Name and the Solution Name, enter ImageLocation. Choose a location on your computer for your app and click OK to create the project.
Visual Studio will prompt you to prepare your Mac to be the Xamarin build host:
- On the Mac, open System Preferences and select Sharing.
- Turn on Remote Login.
- Change Allow access to Only these users and add a user that has access to Xamarin and Xcode on the Mac.
- Dismiss the instructions and return to your Windows computer.
Back in Visual Studio, you will be asked to select the Mac as the build host. Select your Mac and click Connect. Enter the username and password, then click Login.
You can verify you’re connected by checking the toolbar.
Select iPhone Simulator from the Solution Platform dropdown—this will automatically pick a simulator from the build host. You can also change the device simulator by clicking the small arrow next to the current simulator device.
Build and run by pressing the green Debug arrow or the shortcut key F5.
Your app will compile and execute, but you won’t see it running on Windows. Instead, you’ll see it on your Mac build host. This is why it helps to have your two machines nearby :]
At the recent Evolve conference, Xamarin announced iOS Simulator Remoting that will soon allow you to interact with apps running in Apple’s iOS Simulator as though the simulator were running on your Windows PC. For now, however, you’ll need to interact with the simulator on your Mac.
You should see a splash screen appear on the simulator and then an empty view. Congratulations! Your Xamarin setup is working.
Stop the app by pressing the red stop button (shortcut Shift + F5).
Creating the Collection View
The app will display thumbnails of the user’s photos in a Collection View, which is an iOS control for displaying several items in a grid.
To edit the app’s storyboard, which contains the “scenes” for the app, open Main.storyboard from the Solution Explorer.
Open the Toolbox and type collection into the text box to filter the list of items. Under the Data Views section, drag the Collection View object from the toolbox into the middle of the empty view.
Select the collection view; you should see hollow circles on each side of the view. If instead you see T shapes on each side, click it again to switch to the circles.
Click and drag each circle to the edge of the view until blue lines appear. The edge should snap to this location when you release the mouse button.
Now you’ll set up Auto Layout Constraints for the collection view; these tell the app how the view should be resized when the device rotates. In the toolbar at the top of the storyboard, click on the green plus sign next to the CONSTRAINTS label. This will automatically add constraints for the collection view.
The generated constraints are almost correct, but you’ll need to modify some of them. On the Properties window, switch to the Layout tab and scroll down to the Constraints section.
The two constraints defined from the edges are correct, but the height and width constraints are not. Delete the Width and Height constraints by clicking the X next to each.
Notice how the collection view changes to an orange tint. This is an indicator that the constraints need to be fixed.
Click on the collection view to select it. If you see the circles as before, click again to change the icons to green T shapes. Click and drag the T on the top edge of the collection view up to the green rectangle named Top Layout Guide. Release to create a constraint relative to the top of the view.
Lastly, click and drag the T on the left side of the collection view to the left until you see a blue dotted line. Release to create a constraint relative to the left edge of the view.
At this point, your constraints should look like this:
Configuring the Collection View Cell
You may have noticed the outlined square inside the collection view, inside of which is a red circle containing an exclamation point. This is a collection view cell, which represents a single item in the collection view.
To configure this cell’s size, which is done on the collection view, select the collection view and scroll to the top of the Layout tab. Under Cell Size, set the Width and Height to 100.
Next, click the red circle on the collection view cell. A pop-up will inform you that you haven’t set a reuse identifier for the cell, so select the cell and go to the Widget tab. Scroll down to the Collection Reusable View section and enter ImageCellIdentifier for the Identifier. The error indicator should vanish.
Continue scrolling down to the Interaction Section. Set the Background Color by selecting Predefined and blue.
The scene should look like the following:
Scroll to the top of the Widget section and set the Class as PhotoCollectionImageCell.
Visual Studio will automatically create a class with this name, inheriting from
UICollectionViewCell, and create
PhotoCollectionImageCell.cs for you. Sweet, I wish Xcode did that! :]
Creating the Collection View Data Source
You’ll need to manually create a class to act as the
UICollectionViewDataSource, which will provide data for the collection view.
Right-click on ImageLocation in the Solution Explorer. Select Add \ Class, name the class PhotoCollectionDataSource.cs and click Add.
Open the newly added
PhotoCollectionDataSource.cs and add the following at the top of the file:
using UIKit;
This gives you access to the iOS
UIKit framework.
Change the definition of the class to the following:
public class PhotoCollectionDataSource : UICollectionViewDataSource { }
Remember the reuse identifier you defined on the collection view cell earlier? You’ll use that in this class. Add the following right inside the class definition:
private static readonly string photoCellIdentifier = "ImageCellIdentifier";
The
UICollectionViewDataSource class contains two abstract members you must implement. Add the following right inside the class:
public override UICollectionViewCell GetCell(UICollectionView collectionView, NSIndexPath indexPath) { var imageCell = collectionView.DequeueReusableCell(photoCellIdentifier, indexPath) as PhotoCollectionImageCell; return imageCell; } public override nint GetItemsCount(UICollectionView collectionView, nint section) { return 7; }
GetCell() is responsible for providing a cell to be displayed within the collection view.
DequeueReusableCell reuses any cells that are no longer needed, for example if they’re offscreen, which you then simply return. If no reusable cell is available, a new one is created automatically.
GetItemsCount tells the collection view to display seven items.
Next you’ll add a reference to the collection view to the
ViewController class, which is the view controller that manages the scene containing the collection view. Switch back to Main.storyboard, select the collection view, then select the Widget tab. Enter collectionView for the Name.
Visual Studio will automatically create an instance variable with this name on the
ViewController class.
collectionViewinstance variable automatically generated by Visual Studio.
Open ViewController.cs from the Solution Explorer and add the following field right inside the class:
private PhotoCollectionDataSource photoDataSource;
At the end of
ViewDidLoad(), add these lines to instantiate the data source and connect it to the collection view.
photoDataSource = new PhotoCollectionDataSource(); collectionView.DataSource = photoDataSource;
This way the
photoDataSource will provide the data for the collection view.
Build and run. You should see the collection view with seven blue squares.
Nice – the app is really coming along!
Showing Photos
While blue squares are cool, you’ll next update the data source to actually retrieve photos from the device and display them on the collection view. You’ll use the
Photos framework to access photo and video assets managed by the Photos app.
To start, you’ll add a view to display an image on the collection view cell. Open Main.storyboard again and select the collection view cell. On the Widget tab, scroll down and change the Background color back to the default.
Open the Toolbox, search for Image View, then drag an Image View onto the collection view Cell.
The image view will initially be much larger than the cell; to resize it, select the image view and go to the Properties \ Layout tab. Under the View section, set both the X and Y values to 0 and the Width and Height values to 100.
Switch to the Widget tab for the image view and set the Name as cellImageView. Visual Studio will automatically create a field named
cellImageView for you.
Scroll to the View section and change the Mode to Aspect Fill. This keeps the images from becoming stretched.
partial, which indicates that the field is in another file.
In the Solution Explorer, select the arrow to the left of
PhotoCollectionImageCell.cs to expand the files. Open
PhotoCollectionImageCell.designer.cs to see
cellImageView declared there.
This file is automatically generated; do not not make any changes to it. If you do, they may be overwritten without warning or break links between the class and storyboard, resulting in runtime errors.
Since this field isn’t public, other classes cannot access it. Instead, you’ll need to provide a method to be able to set the image.
Open
PhotoCollectionImageCell.cs and add the following method to the class:
public void SetImage(UIImage image) { cellImageView.Image = image; }
Now you’ll update
PhotoCollectionDataSource to actually retrieve photos.
Add the following at the top of PhotoCollectionDataSource.cs:
using Photos;
Add the following fields to the
PhotoCollectionDataSource:
private PHFetchResult imageFetchResult; private PHImageManager imageManager;
The
imageFetchResult field will hold an ordered list of photo entity objects, and you’ll get this photos list from the
imageManager.
Right above
GetCell(), add the following constructor:
public PhotoCollectionDataSource() { imageFetchResult = PHAsset.FetchAssets(PHAssetMediaType.Image, null); imageManager = new PHImageManager(); }
This constructor gets a list of all image assets in the Photos app and stores the result in the
imageFetchResult field. It then sets the
imageManager, which the app will query for more information about each image.
Dispose of the
imageManager object when the class finishes by adding this destructor below the constructor.
~PhotoCollectionDataSource() { imageManager.Dispose(); }
To make the
GetItemsCount and
GetCell methods use these resources and return images instead of empty cells, change
GetItemsCount() to the following:
public override nint GetItemsCount(UICollectionView collectionView, nint section) { return imageFetchResult.Count; }
Then replace
GetCell with the following:
public override UICollectionViewCell GetCell(UICollectionView collectionView, NSIndexPath indexPath) { var imageCell = collectionView.DequeueReusableCell(photoCellIdentifier, indexPath) as PhotoCollectionImageCell; // 1 var imageAsset = imageFetchResult[indexPath.Item] as PHAsset; // 2 imageManager.RequestImageForAsset(imageAsset, new CoreGraphics.CGSize(100.0, 100.0), PHImageContentMode.AspectFill, new PHImageRequestOptions(), // 3 (UIImage image, NSDictionary info) => { // 4 imageCell.SetImage(image); }); return imageCell; }
Here’s a breakdown of the changes above:
- The
indexPathcontains a reference to which item in the collection view to return. The
Itemproperty is a simple index. Here you get the asset at this index and cast it to a
PHAsset.
- You use
imageManagerto request the image for the asset with a desired size and content mode.
- Many iOS framework methods use deferred execution for requests that can take time to complete, such as
RequestImageForAsset, and take a delegate to be called upon completion. When the request completes, the delegate will be called with the image and information about it.
- Lastly, the image is set on the cell.
Build and run. You’ll see a prompt requesting permission access.
If you select OK, however, the app … doesn’t do anything. So disappointing!
iOS considers access to users’ photos to be sensitive information, and prompts the user for permission. However, the app must also register to be notified when the user has granted this permission, so it can reload its views. You’ll do this next.
Registering for Photo Permission Changes
First, you’ll add a method to the
PhotoCollectionDataSource class to inform it to re-query for photo changes. Add the following to the end of the class:
public void ReloadPhotos() { imageFetchResult = PHAsset.FetchAssets(PHAssetMediaType.Image, null); }
Next, open ViewController.cs and add the following framework to the top of the file:
using Photos;
Then add this code to the end of
ViewDidLoad():
// 1 PHPhotoLibrary.SharedPhotoLibrary.RegisterChangeObserver((changeObserver) => { //2 InvokeOnMainThread(() => { // 3 photoDataSource.ReloadPhotos(); collectionView.ReloadData(); }); });
Here’s what this does:
- The app registers a delegate on the shared photo library to be called whenever the photo library changes.
InvokeOnMainThread()ensures that UI changes are always processed on the main thread; otherwise a crash may result.
- You call
photoDataSource.ReloadPhotos()to reload the photos and
collectionView.ReloadData()to tell the collection view to redraw.
Finally, you’ll handle the initial case, in which the app has not yet been given access to photos, and request permission.
In
ViewDidLoad(), add the following code right before setting
photoDataSource:
if (PHPhotoLibrary.AuthorizationStatus == PHAuthorizationStatus.NotDetermined) { PHPhotoLibrary.RequestAuthorization((PHAuthorizationStatus newStatus) => { }); }
This checks the current authorization status, and if it’s
NotDetermined, explicitly requests permission to access photos.
In order to trigger the photos permission prompt again, reset the iPhone simulator by going to Simulator \ Reset Content and Settings.
Build and run the app. You’ll be prompted for photo permission, and after you press Ok the app will show the collection view with thumbnails for all the device’s photos!
Where to Go From Here?
You can download the completed Visual Studio project from here.
In this tutorial, you learned a bit about how Xamarin works and how to use it to create iOS apps.
The Xamarin Guides Site provides several good resources to learn more about the Xamarin platform. To better understand building cross-platforms apps, view the Xamarin tutorials on building the same app for iOS and Android.
Microsoft’s purchase of Xamarin introduced many exciting changes. The announcements at Microsoft’s Build conference and Xamarin Evolve can give you guidance on Xamarin’s new direction. Xamarin also released videos of the sessions from the recent Evolve Conference that provide more information on working with Xamarin and the future direction of the product.
Do you think you’ll try Xamarin when building apps? If you have any questions or comments about this tutorial, please feel free to post in the comments section below. | https://www.raywenderlich.com/1044-building-ios-apps-with-xamarin-and-visual-studio | CC-MAIN-2020-16 | refinedweb | 3,555 | 56.66 |
I don’t want to offend anybody, but we need to admit it - there are two types of users that we need to keep in mind when developing mobile apps: control freaks and doormats. When we’re talking about video recording and editing apps, it follows that ‘control freaks’ can also be described as ‘creators’ who want full control over their video projects. Control freaks enjoy doing things their own way, and like having complete freedom in the process of video creating and editing. Unlike control freaks and creators, people who are ‘doormats’ prefer things to be easy, and don’t want to have to wade into the nitty gritty of frame-by-frame video editing. They want to tap a few buttons and get a finished project.
Let’s try to apply this classification to existing video editing apps on Google Play to figure out what type of users they target. Take Magisto, for example. You give Magisto some of your photos and videos from the library, choose an editing style (dance, love, fashion, etc.), and pick a soundtrack. Then, a real magic happens, and in a few minutes you receive a push notification saying your movie is ready. See also: Case study: Mobile technologies to develop Periscope
The result looks quite nice actually, but you don’t get that much control over the creative process. Plus, you have to wait (even if it’s only 2 minutes, we know how much you hate waiting). Imagine if Instagram would grab your selfie, spend some time applying their favorite filters to it, and then suggest that you publish that selfie right away.
Furthermore, with Magisto you get a link to the video stored on Magisto’s server, but do not get the video file stored locally on your device. If you want to download the video to show it to your friends at a party, you’re welcome to do this for a monthly subscription fee.
Magisto is a great video app in its class, and may be perfectly fine for users who don’t demand much control. Unfortunately, however, there is nothing I can recommend to a control freak who happens to be an Android user. Why? Because the majority of social video apps for Android that actually process video recording on a user’s device perform quite poorly. But you shouldn’t blame this on poor programming skills of the app developers who made them. The thing is, as much as we love Android, it just isn’t a perfect platform for video. That’s why it’s really hard to find an amazing social video app for Android on Google Play.
Wait!
There is one app we’ve been developing at Yalantis that might be able to fill the niche. I can definitely walk you through the battle we had with Android to make this app a reality. But before we get down to describing the tech part, you should check out our article about video recording and editing apps, and the top performers on the market.
What’s wrong with Android?
To be fully honest with you, developing video recording apps for Android is a big challenge. Using standard things like MediaRecorder isn’t difficult at all, but we’re talking here about more advanced video recording functionality.
Apple provides its fellows with native functionality, AVFoundation to be specific, which allows for developing a great variety of video features for mobile apps, but Google is somehow falling behind.
Despite all the difficulties, we wanted to make a real Instagram for videos, so we accepted the challenge.
What are the coolest video recording features?
In our project we needed to meet the following requirements:
- Recording of several video chunks with a total 15 second duration.
- “Fast Motion” – recording a time-lapse video at a 10FPS.
- “Slow Motion” – recording a video at a 120FPS.
- “Stop Motion” – recording very short videos consisting of 2-3 frames.
- Merging recorded chunks into a single file.
- Looping music over video.
- Reversing a recorded video.
- Cropping video for uploading to Instagram, Facebook etc.
- Adding a watermark.
Where do you start, huh?
At first we made an attempt to use MediaRecorder, the easiest way to implement video recording functionality on Android. Even though MediaRecorder has been used for developing Android apps for years, and is supported by all versions of the operating system, there is no way it can be customized.
On all versions of Android up to 4.3. your video implementation possibilities are quite limited. You can record a video from a phone camera with the help of Camera class and MediaRecorder, apply standard camera color filters (sepia, black and white, and some others), and that’s pretty much it. What’s more, with MediaRecorder recording starts with a 700 millisecond delay, but for recording small video chunks almost a second delay isn’t acceptable.
However, it’s not true that the sun never shines on Android. For Jelly Bean, or Android 4.1. and all the following versions Google added a possibility to use MediaCodec class which allows you to access low-level media codecs (i.e. encoder and decoder components), and MediaExtractor class which helps you extract encoded media data from a data source.
Luckily, for Android 4.3 there appeared MediaMuxer which facilitates muxing (recording several media streams into a single file) for elementary video and audio streams. This component allows for more creative possibilities: you can code and decode video streams, and also perform some processing as well.
It was a hard decision, but
We made the app compatible only with the Android 4.3 and later versions, so that we could use MediaCodec and MediaMuxer for video recording. This way we could avoid delays at recording initialization.
What’s the tech stack anyway?
Before getting down to the project development, we needed to figure out how we can actually work with MediaCodec. We based our research on the example from Google. The Grafika project represents a compilation of different hacks that can help you deal with new ways of recording and editing videos.
Here comes the meaty part. The list of technologies we used includes:
- OpenGL for rendering of captured frames. It would let us use shaders for frame modification and quickly draw the frames on the camera preview screen.
- MediaCodec for recording videos.
- FFmpeg for video processing.
- Mp4parser for merging of recorded video chunks.
- MediaRcorder for Fast Motion.
How can you use the FFmpeg library?
FFmpeg isn’t the easiest thing to work with, to say the least. It’s an open source, cross-platform and very powerful framework for working with videos and images in mobile apps. It’s written in C and has plenty of plug-ins.
The main difficulty in working with ffmpeg is compiling all of the needed plug-ins and adding them to the project. This is a very time-consuming process that would’ve increased the development time and costs by miles and made our client very unhappy. That’s why we found another way to handle the matter.
We bought a license for a ready-made Android library. But even with this expensive work around, there were some non-trivial issues that we needed to deal with.
The particularity of working with ffmpeg is that you need to use it the same way you use an executable file in the command string. In other words, you have to pass a command string with input parameters, and the parameters that need to be applied to the final video. The main issue here is the absence of the possibility to debug to find the problem in case something went wrong. The only source of information is a log file which is recorded during the ffmpeg operation. It takes quite a lot of time to figure out how one or another command works, and also how to compose a complex command which will perform multiple actions at the same time.
We made a firm decision to assemble our own version of the ffmpeg library, and develop a wrapper for it so we could debug an app at any time.
Implementing the video app features
Slow Motion
Slow motion was one of the highest priority requirements for the app we’ve been developing. Unlike iOS devices, nearly all Android phones fail to provide hardware support for video recording at frame rates required for the Slow Motion effect. There are also no system capabilities that would allow us to "activate" this feature on that small fraction of Android devices that have hardware support for this feature.
However, there is always a way out! We could have achieved the Slow Motion effect with software. Here is how this can be done:
Duplicating frames while recording, or prolonging their duration (time that is spent on showing a single frame).
Recording video with a regular frame-rate, and then duplicating each frame while processing.
We’ve seen this implementation live, and to tell the truth, it sucks. If you can't do it right, don't do it at all. We decided to put off the Slow Motion feature for the Android version of the app until better times. Below is a video with the Slow Motion effect that we recorded on an iPhone.
Fast Motion
Implementing the possibility to record time-lapse videos on Android isn’t hard at all. MediaRecorder allows you to set a frame rate of, say, 10FPS (the standard frame rate for video recording is 30FPS), which means every third frame will be recorded. As a result, the recorded video will be three times faster.
private boolean prepareMediaRecorder() { if (camera == null) { return false; } camera.unlock(); if (mediaRecorder == null) { mediaRecorder = new MediaRecorder(); mediaRecorder.setCamera(camera); } mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA); mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4); CamcorderProfile profile = getCamcorderProfile(cameraId); mediaRecorder.setCaptureRate(10); mediaRecorder.setVideoSize(profile.videoFrameWidth, profile.videoFrameHeight); mediaRecorder.setVideoFrameRate(30); mediaRecorder.setVideoEncodingBitRate(profile.videoBitRate); mediaRecorder.setOutputFile(createVideoFile().getPath()); mediaRecorder.setPreviewDisplay(cameraPreview.getHolder().getSurface()); mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264); try { mediaRecorder.prepare(); } catch (Exception e) { releaseMediaRecorder(); return false; } return true; }
Stop Motion
MediaRecorder isn’t suitable for instant recording of several frames. As I already mentioned, it creates a long delay at the start of recording. But using MediaCodec solves the performance problem.
Merging recorded video chunks into a single file
This is one of the app’s main features. After recording several chunks, a user should get a single video file.
Initially, we used ffmpeg for this purpose, but then decided to abandon this solution, because it merges videos with transcoding which increases video processing duration. On the Nexus 5, for example, merging 7-8 video chunks into one 15-second-long video takes 15 seconds, and merging more than 100 chunks can take up to a minute. If you use a higher bitrate or codecs that give a better result with the same bitrate, the process takes even longer.
That’s why we used mp4parser, a library that works in the following way: it pulls encoded data out of container files, creates a new container and folds that data there in a consecutive order. Then, it records the information to the container’s header, and returns a solid single video file.
The only disadvantage of working with mp4parser is that all video chunks need to be encoded with the same parameters (codec type, extension, aspect ratio, etc.). Depending on the number of clips, it takes about 1-4 seconds for mp4parser to process videos.
An example of mp4parser output when merging several video files into one:
public void merge(List<File> parts, File outFile) { try { Movie finalMovie = new Movie(); Track[] tracks = new Track[parts.size()]; for (int i = 0; i < parts.size(); i++) { Movie movie = MovieCreator.build(parts.get(i).getPath()); tracks[i] = movie.getTracks().get(0); } finalMovie.addTrack(new AppendTrack(tracks)); FileOutputStream fos = new FileOutputStream(outFile); BasicContainer container = (BasicContainer) new DefaultMp4Builder().build(finalMovie); container.writeContainer(fos.getChannel()); } catch (IOException e) { Log.e(TAG, "Merge failed", e); } }
Reversing a recorded video
We took the following steps to reverse a video:
- Extracting all frames from a video file, and writing them to internal storage (for example, as jpg files).
- Renaming frames to put them in reverse order.
- Compiling one video from all the files.
An example of the command for breaking a video into files with frames:
ffmpeg -y -i input.mp4 -strict experimental -r 30 -qscale 1 -f image2 -vcodec mjpeg %03d.jpg
After that, you need to rename the files of the frames to make them go in the reverse order (i.e. the first frame becomes the last, and the last becomes the first; the second frame becomes the penultimate and the penultimate becomes the second, and so on).
Then, with the help of the following command, you can assemble a video from frames:
ffmpeg -y -f image2 -i %03d.jpg -r 30 -vcodec mpeg4 -b:v 2100k output.mp4
This solution for reversing a recorded video looks neither elegant nor productive, nor does it perform quickly. Since we don’t want our users to wait, the ability to reverse videos may appear in future versions of our app, but not in the version currently under development.
Gif
The Gif feature is basically about creating short videos consisting of a small number of frames which create a gif effect when looped. Instagram just launched Boomerang, an app that produces these sorts of gifs.
The process of implementing gifs is quite simple – we took 6 photos at a regular interval (in our case, 125ms), then duplicated all the frames except the first and the last one in the reverse order to achieve a smooth reverse effect, and collected the frames into one video with the help of ffmpeg:
ffmpeg -y -f image2 -i %02d.jpg -r 15 -filter:v setpts=2.5*PTS -vcodec libx264 output.mp4
- f is a setup like in the input files
- i %02d.jpg are input files with a dynamic name format (01.jpg, 02.jpg, and so on)
- filter:v setpts=2.5*PTS – increasing the duration of each frame by 2,5
To optimize the user experience we currently create an actual video file at the stage of saving and sharing a video so that a user doesn’t have to wait for a video to be processed. Before that, we work with photos which get stored in RAM and then drawn on the TextureView canvas.
private long drawGif(long startTime) { Canvas canvas = null; try { if (currentFrame >= gif.getFramesCount()) { currentFrame = 0; } Bitmap bitmap = gif.getFrame(currentFrame++); if (bitmap == null) { handler.notifyError(); return startTime; } destRect(frameRect, bitmap.getWidth(), bitmap.getHeight()); canvas = lockCanvas(); canvas.drawBitmap(bitmap, null, frameRect, framePaint); handler.notifyFrameAvailable(); if (showFps) { canvas.drawBitmap(overlayBitmap, 0, 0, null); frameCounter++; if ((System.currentTimeMillis() - startTime) >= 1000) { makeFpsOverlay(String.valueOf(frameCounter) + "fps"); frameCounter = 0; startTime = System.currentTimeMillis(); } } } catch (Exception e) { Timber.e(e, "drawGif failed"); } finally { if (canvas != null) { unlockCanvasAndPost(canvas); } } return startTime; }
public class GifViewThread extends Thread { public void run() { long startTime = System.currentTimeMillis(); try { if (isPlaying()) { gif.initFrames(); } } catch (Exception e) { Timber.e(e, "initFrames failed"); } finally { Timber.d("Loading bitmaps in " + (System.currentTimeMillis() - startTime) + "ms"); } long drawTime = 0; while (running) { if (paused) { try { Thread.sleep(10); } catch (InterruptedException ignored) {} continue; } if (surfaceDone && (System.currentTimeMillis() - drawTime) > FRAME_RATE) { startTime = drawGif(startTime); drawTime = System.currentTimeMillis(); } } } }
The rest of the features
We used ffmpeg to implement all the remaining functions, such as looping music over video, cropping video, and adding a watermark for uploading videos to Instagram, Facebook, and other social media outlets.
Now as I’ve been working on this video project for about 9 months I can say that working with ffmpeg is quite easy if you have some experience. For example, a command that loops an audio track over video looks like this:
ffmpeg -y -ss 00:00:00.00 -t 00:00:02.88 -i input.mp4 -ss 00:00:00.00 -t 00:00:02.88 -i tune.mp3 -map 0:v:0 -map 1:a:0 -vcodec copy -r 30 -b:v 2100k -acodec aac -strict experimental -b:a 48k -ar 44100 output.mp4
- y means that you can rescript files without sending a request
- ss 00:00:00.00 is the time at which to start video processing in a given case
- t 00:00:02.88 is the time by which to continue processing of the input file
- i input.mp4 is the input video-file
- i tune.mp3 is the input audio-file
- map is mapping of video and audio channel
- vcodec is a video-codec setup (in a our case we use the same codec with which the video was encoded)
- r is a frame rate setup
- b:v is bitrate setup for a video channel
- acodec is audio-codec setup (in our case we use AAC encoding)
- ar is a sample rate of the audio channel
- b:a is a bitrate of the audio channel
- output.mp4 is a resulting video file
The command for adding a watermark and video cropping:
ffmpeg -y -i input.mp4 -strict experimental -r 30 -vf movie=watermark.png, scale=1280*0.1094:720*0.1028 [watermark]; [in][watermark] overlay=main_w-overlay_w:main_h-overlay_h, crop=in_w:in_w:0:in_h*in_h/2 [out] -b:v 2100k -vcodec mpeg4 -acodec copy output.mp4
- movie=watermark.png – specifies a watermark image location
- scale=1280*0.1094:720*0.1028 – specifying a watermark size
- [in][watermark] overlay=main_w-overlay_w:main_h-overlay_h, crop=in_w:in_w:0:in_h*in_h/2 [out] – adding a watermark and cropping the video
It’s been a long time since we began developing our video recording application, and we’re now fully equipped with the knowledge to implement video recording functionality in our future projects. Here are some things that we’re still working to implement:
Making our own ffmpeg wrapper and using it at the JNI level which will allow us to increase performance, improve flexibility, and decrease the total weight of the library (because not all modules of ffmpeg are actually needed for the project).
Applying our own filters for recorded gifs and videos.
As you’re reading this, we’re getting closer to our goals.
Read also: | https://yalantis.com/blog/video-recording-app-development-how-we-built-instagram-for-videos/ | CC-MAIN-2016-50 | refinedweb | 3,027 | 64.41 |
[img_assist|nid=836|title=|desc=|link=none|align=right|width=240|height=210]Andrew Glover is co-author of Groovy in Action and wrote a test framework called easyb. The new framework promotes a more "conversational" approach to unit testing. DZone's Steven Devijver caught up with Andrew to hear his thoughts on Test-Driven Development, stories and behavior.
Q: Hey Andrew, you're co-author of Groovy in Action and author of the easyb test framework. What is easyb for?
A: easyb is story verification framework built in the spirit of behavior driven development. It's written with mostly Groovy and made to work with Groovy and Java. With easyb, you can write stories that help validate applications-- say for example, a
Stack. With Stacks you can push and pop values-- the
push method should place things into some collection and the
pop method should remove them, right? Using easyb, you could craft a story, which is essentially a series of scenarios, which read as
given some context
when something happens
then something else should happen
With that format, you could write some scenarios like "pop is called on the stack with one value" or "peek is called on a stack with one value"-- you can then code a particular scenario with
givens,
whens, and
thens like this:
scenario "pop is called on stack with one value", {
given "an empty stack with foo", {
stack = new Stack()
stack.push("foo")
}
when "pop is called", {
popVal = stack.pop()
}
then "foo should be returned", {
popVal.shouldBe "foo"
}
and
then "the stack should be empty", {
stack.empty.shouldBe true
}
}
easyb supports a flexible verification syntax too-- see the
shouldBe call that's auto-magically wired to everything within the context of a scenario? You could use
shouldBeEqual,
shouldBeEqualTo,
shouldEqual and of course the negatives of that as well, like
shouldNotBe, etc, etc.
The intent is to create executable documentation-- someone else, say a customer or business analyst could conceivably write the text of a story and then a developer can then implement things like I did above. Incidentally, when running this particular scenario (which is one of a few found in one story), you can produce a report that describes which stories were run that looks something like this:
Story: single value stack
scenario pop is called on stack with one value
given an empty stack with foo
when pop is called
then foo should be returned
then the stack should be empty
Q: Is easyb intended for unit testing? Or can it also be used for integration testing?
A: Yes and Yes! It's entirely up to you how you use it-- out of the box, the framework comes with a plug-in that supports database management via DbUnit, so there is a lot of support for integration testing. For instance, if you find that a particular story requires some prerequisite data, you can put that data in a database via the
database_model call, which takes standard JDBC connection information (URL, user name, driver, etc) and then a
String representation of DbUnit's standard
FlatXmlDataSet type. So for example, in a
given clause you can seed a database like so:
given "the database is properly seeded", {
database_model("org.hsqldb.jdbcDriver",
"jdbc:hsqldb:hsql://127.0.0.1", "sa", ""){
return new File("./src/conf/seed.xml")
.text.text
}
}
That makes integration testing easy, doesn't it? The DbUnit plug-in is a separate jar file you can download from the easyb website-- just ensure you have that jar in your classpath and easyb will find it if you invoke the
database_model call. Our intent with the plug-ins is to have various ones that facilitate higher level testing, so its feasible that there'll be a Selenium plug-in, for instance.
I tend to find that using a story format even at the object (aka unit testing level) helps drive things in a true TTD fashion-- for instance, take the classic Money example from the JUnit Test Infected example-- given an
add method, you could write a unit test (in JUnit) like so:
public class MoneyTest extends TestCase {
public void testSimpleAdd() {
Money m12CHF= new Money(12, "CHF");
Money m14CHF= new Money(14, "CHF");
Money expected= new Money(26, "CHF");
Money result= m12CHF.add(m14CHF);
Assert.assertTrue(expected.equals(result));
}
}
Writing the same code in easyb is something like this:
scenario "two moneys of the same currency are added", {
given "one money object is added to another", {
total = new Money(12, "CHF").add(new Money(14, "CHF"))
}
then "the total amount should be the sum of the two", {
total.amount().shouldBe 26
}
}
Note though that in the process of writing a simple story, I was forced to think about scenarios (before I typed a line of code) and in doing so, I quickly found myself thinking about other scenarios-- like two
Money objects of different currencies. JUnit doesn't stop you from thinking that way either, but I find that the format of story almost begs it upfront.
Q: What advantages does easyb give to developers?
A: It's easy! Seriously though, easyb forces you think about behavior--- about how things should work and then allows you to express those expectations in a natural manner--
order.getPrice().shouldBe "$56" instead of
assert this and that.
Stories facilitate collaboration too-- they're a mechanism to bridge development and stakeholders, much like Fit and Fitnesse are intended to do. Plus, easyb is fun! The story DSL and the expectation framework are pretty fun to use-- you can chain expectations quickly too, such as
var.shouldNotBe null
and
var.length().shouldEqual 6
The lack of parenthesizes and use of spaces makes that easy to code and easy to read.
Q: What made you decide to write easyb in Groovy and not another dynamic language?
A: Ruby already has RSpec, which is an excellent framework and I wanted to make something like that available for the Java world-- Groovy was a natural choice for me as I'm fairly conversant in it. Bias aside, creating DSLs in Groovy is unbelievably simple-- I also had the luxury of coding the initial prototype with Venkat Subramaniam and Jeff Brown, who are amazingly talented Groovy developers.
Q: According to you, is easyb capable of replacing the regular unit testing frameworks like JUnit or TestNG on development projects?
A: You certainly could if you had strong feelings about it, but TestNG (and JUnit 4) have some compelling features! TestNG, for instance, has phenomenal parametric testing and supports parallel testing, test groups, etc. These are mature, well thought-out frameworks that offer some great features that shouldn't be overlooked. I still use both frameworks where appropriate and encourage people to do the same.
Q: Is there integration with JUnit or TestNG so that easyb stories become part of test suites and continuous integration?
A: At this point there is no integration with JUnit or TestNG; however, you can certainly run your easyb stories via Ant (a Maven plug-in is forthcoming) like so:
<easyb failureProperty="easyb.failed">
<classpath>
<path refid="build.classpath" />
<pathelement path="target/classes" />
</classpath>
<report location="target/story.txt" format="txtstory" />
<behaviors dir="behavior">
<include name="**/*Story.groovy" />
</behaviors>
</easyb>
Thus, you can easily set up a CI process that kicks off easyb stories on a scheduled basis or when ever code changes.
Q: Does easyb have IDE integration?
A: Not at this moment, unfortunately. It is run via a Java class, so you can easily create a profile in Eclipse to run stories.
Thanks for the excellent interview Andrew!
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/easyb-introducing-conversation | CC-MAIN-2017-47 | refinedweb | 1,273 | 59.33 |
structs and classes are virtually identical. Both can contain functions and variables (and both can use inheritance).
The difference between them is that structs are 'public' by default (but can have private members), and classes are 'private' by default (but can have public members).
btw what do you mean by the inclusion guards ( #ifdef BLAH_H / #pragma once)? I have no idea what those are/do but the code seems to be working fine without them..
It's working fine for the moment, but can unexpectedly mess up later when you add more files to your project. Here's why:
C++ compiles files in multiple steps. At a basic level, most C++ compilers compile your code like this:
For every .cpp file, pre-process the text in the .cpp files to resolve any macros and defines and includes, then convert the pre-processed text to abstract code. Once all the .cpp files have been compiled to abstract code, combine and compile all the abstract code into binary/assembly/executable files.
There are two things to notice here:
1) C++ only compiles the source files (not the headers)
2) C++ compiles each source file independently of each other.
Because C++ compilers compile each source file independently, if we declare "struct World" in one .cpp file, other .cpp files don't know anything about "struct World" and give compiler errors because "it doesn't exist" as far as that file's compilation can see.
Oops. So what now? Every file that wants to use 'struct World' must declare it. We don't want to have to manually re-write the details of struct World over and over. So instead, we put it into a header file.
Header files aren't compiled directly. Compilers don't really know about header files.
When you say #include, what you're really saying is, "copy and paste the header file into this location before compiling".
So we write the details of struct World in the header file, and make any interested .cpp file #include struct World's header file.
But now we run into a different problem. Imagine we have two more headers, that both use World.
Player.h includes World.h
Monster.h includes World.h
and main.cpp includes both Player.h and Monster.h
When main.cpp is compiled, Player.h will be copied into main.cpp before being turned into assembly, and World.h was already copied into Player.h. So main.cpp has World.h's code copied into it.
But main.cpp also #includes Monster.h, and Monster.h also includes World.h.
So World.h's code gets copied twice into main.cpp.
When you do something like this in your code:
struct World { //Stuff... }; //More stuff... struct World //AGAIN! { //Stuff... };
...you get error messages; you can't declare the same struct twice!
To prevent this from happening, we use inclusion guards.
We wrap the content of World.h so that it only gets copies once for every .cpp that (indirectly) includes it.
Using the preprocessor language that runs before the code is turned into assembly, we say:
#ifndef NAME /If Not Defined: NAME #define NAME //Define: NAME //...header content goes here... #endif NAME //End our preprocessor if-statement
Because NAME has to be a unique name in your project (and unique from any #defines in any 3rd-party header file your project includes), it's common practice to make NAME be in-all-caps the name of your header file, followed by _H and prefixed with a prefix unique to your project.
Something like this:
#ifdef PROJECTNAME_HEADERNAME_H #define PROJECTNAME_HEADERNAME_H //...content inside Headername.h #endif
Alternatively, instead of all that #ifdef/#define/#endif stuff, you can just simply write "#pragma once" at the top of every one of your header files.
#pragma is for compiler-specific commands, and 'once' is the command being passed to the compiler.
#pragma not actually an standard C++ feature - it's a specific thing compilers can choose to implement - but almost every major modern compiler does implement it, so you're almost certainly on safe ground.
I personally continue to use #ifdef / #define / #endif in my projects, but I feel increasingly silly for doing so.
| http://www.gamedev.net/user/89462-servant-of-the-lord/?tab=reputation&app_tab=forums&type=received&st=15 | CC-MAIN-2014-52 | refinedweb | 700 | 77.23 |
Apr14
Configuration Settings Are A Dependency That Should Be Injected
Created on April 14, 2011 at 14:03 by Paul Hiles | Permalink | 11 comments
Dependency Injection does not just apply to obvious dependencies such as repositories and logging components. It is very important to inject ALL dependencies including the less obvious ones. In my experience, one of the most overlooked areas is configuration. Many people seem perfectly happy to extract settings from config deep within their code. This is plain wrong. If you need to reference an AppSetting in your business logic, inject it. If you need a connection string in your data access code, inject that too.
I am always surprised that whilst people usually take the time to abstract out and inject services, managers, repositories and helpers, one area that is often overlooked is configuration, most commonly AppSettings and ConnectionStrings. I have lost count of the number of times I have encountered classes filled with business logic that use ConfigurationManager.AppSettings deep within them. A configuration file is a dependency. There is no doubt about it and unless you inject those configuration settings, there is no way that you can properly unit test your code.
Note that I say properly unit test, because there are of course ways to allow unit tests to run on these kinds of classes. Anyone that disagrees with the first paragraph is probably thinking 'but I can just add those settings to the app.config in the unit test project'. Yes, this will work but it is a hack. Unit tests are about testing a class in isolation. You typically inject all dependencies via the contructor and in a test, these dependencies are stubbed or mocked depending on whether you are doing state or interaction based testing (you will typically do a mixture of both for classes than have at least one dependency). Unit test projects should never have a configuration file and if they do, I am not sure that those tests should even be called unit tests. Any test that interacts with a non-mocked dependency must surely by nature be called an integration test.
So why is it a hack to add config settings to your unit test project? Well, web.config and app.config are files just like any other. So referencing configuration settings within a class means that that class has a direct dependency on the filesystem. You wouldn't feel so comfortable (i hope) if your unit test opened up a file in My Documents, or downloaded a web page from the Internet, so why do people think that is is acceptable to access web.config?
The fact is that the class under test should not care where it is getting its configuration from. It just needs these settings to carry out its functionality. Each class should have a single responsibility and that does not include knowing where to go to look up a few settings. The class should not know that the settings are coming from a file-based configuration at all. Other options such as a database or a web service are just as valid and it is perfectly possible that the configuration source may change in the future. If you inject in your settings, then such a change would have absolutely no effect on the class using the settings. The same could not be said if you were using ConfigurationManager directly.
What about connection strings?
Object relational mapping (ORM) tools such as Entity Framework and LINQ to SQL deserve a special mention because in-built dependencies in auto-generated code files are easy to miss. Both of these ORMs use a main context class. Linq to SQL uses DataContext. Entity Framework 4.0 uses ObjectContext and 4.1 can also use the new DbContext. Without user modification and when using the default constructor of the context class, both EF and Linq2Sql will read the config file in order to obtain a connection string. Fortunately, it is very simple to change this behavior. All three context classes have constructor overloads that take in a connection string, so it is very simple to modify your IoC container configuration to pass through the connection string. Here is a Unity example:
var connectionString = ConfigurationManager.ConnectionStrings["Test"].ConnectionString; container.RegisterType<IThingManager, ThingManager>(); container.RegisterType<IThingRepository, ThingRepository>(); container.RegisterType<IExampleContext, ExampleContext>( new InjectionConstructor(connectionString));
DbContext is slightly different because you subclass DbContext yourself, so additionally, you need to add a constructor to the derived class and call the base constructor.
public class ExampleContext : DbContext, IExampleContext { public ExampleContext(string connectionString) : base(connectionString) { } public DbSet<Thing> Things { get; set; } }
Conclusion
Ok, so this post was a bit of a rant, but I seem to see this problem in almost all of the companies I consult for so it certainly appears to be very common. In case you have not picked up on it yet, the point I am trying to make is that configuration is a dependency like any other and should be injected along with your repositories and managers. Connection strings, app settings and custom config sections are all dependencies. No exceptions.
Sharing
If you found this article useful then we would be very grateful if you could help spread the word. Linking to it from your own sites and blogs is ideal. Alternatively, you can use one of the buttons below to submit to various social networking sites.
Added on April 17, 2011 at 17:36 by Jake | Permalink
I have never really thought about this before but you are absolutely right. Thx.
Added on April 18, 2011 at 12:20 by Paul Hadfield | Permalink
You are completely correct, it's amazing how many unit tests suddenly become difficult or impossible because the code has a hard coded reference to configuration nested deep inside.
Added on April 18, 2011 at 13:01 by Oded | Permalink
Well said - I don't think I have worked in a single place that isn't guilty of the above and have also contributed to this sin myself in the past...
Added on April 18, 2011 at 22:50 by Shawn Hinsey | Permalink
I agree with this in principle and have been doing it for years, but one thing I'd like to point out is that your example is pretty dangerous. Instead of injecting raw types and relying on them getting the correct values because you're injecting them by name, you should create custom configuration types and take your dependencies on them instead. This can prevent really annoying errors like needing to figure out what string got injected as your connection string and where it lives.
Using this model, your general approach is:
1) In your application bootstrapper, pull your config settings out of their runtime store (app.config, whatever) and set up a configuration type.
2) Register this type in the container
3) Use it by taking a dependency on it
Added on April 19, 2011 at 09:31 by Paul Hiles | Permalink
@Shawn - Thanks for your feedback. Interesting perspective. I totally agree with you that a custom config is the way to go when you have multiple related configuration settings that you want to inject into a class, but I am not sure why you would go to the trouble of having a custom config section for a single connection string. Can you explain how injecting the connection string into the context as per my example is dangerous?
Added on April 19, 2011 at 19:34 by Shawn Hinsey | Permalink
In your example it's probably ultimately less problematic, but if you're doing container auto-wiring, such as you'd do with Windsor, which is what I'm most familiar with, you run into the problem that if you have more than one type with more than one string parameter, there's no easy way to differentiate them. Even with auto-wiring you can take the approach above, but I've had a couple of hair pulling days spent trying to track down a problem that ultimately boiled down to some subtle change to the configuration causing something else to be injected.
I agree that having a config type just for a connection string is probably suboptimal, so I think for them you are probably better off using the named connection string approach that the default .NET configuration system supports, but if you need to pass in other strings or ints, etc., you're probably better off creating a specific type for them.
Added on April 20, 2011 at 18:29 by Peter Tran | Permalink
If I use DI to inject IExampleContext to my controller then I cannot use SaveChange() as part of the DbContext. Should I inject the DbContext as well? Thanks for the great article!
Added on April 25, 2011 at 20:02 by Paul Hiles | Permalink
@Peter - The IExampleContext interface that is implemented by the concrete context can include the SaveChanges method. You will find a lot of strong opinions regarding the question of which component should be responsible for committing the unit of work. Most people would agree that SaveChanges should not be called from within a repository class - the repository would not be aware of whether the operation it is carrying out is part of a larger unit of work. Depending on your architecture, a popular technique is for the next layer up (in this case the manager) to call SaveChanges which may suit your needs but a purist might argue that the unit of work should really be committed automatically at the end of the request. An HTTP Module is often used to facilitate this.
Added on April 28, 2011 at 16:31 by Neil M | Permalink
I hope the DI/IOC crowd can guide me a little here. I am struggling with best practices (doubtless from not having a complete understanding of DI/IOC/UnitTesting), and have hit this configuration question.
I agree that deeply embedded calls to ConfigurationManager.AppSettings are not good for testing, but how do I propagate such settings down the chain of classes to such a deep level?
I intended to create an instance of a ProjectSettings poco, which gets/sets strongly typed configurable values, and initialise this as part of the app startup from App.config, and expressly in unit test setups. But making this ProjectSettings instance accessible to some deep seated class requires me to either propagate the instance through every class, or use some form of static/singleton pattern. If I use static/singleton, is this still valid during VS2010 parallel unit testing?
What is best? Bolting on an entire DI framework seems excessive for this problem.
Thanks (I hope)
Added on May 11, 2011 at 12:44 by Paul Hiles | Permalink
@Neil - I would move away from the idea of a single ProjectSettings class and just encapsulate the settings that a particular component requires. In this way, your DAL might take in a connection string, an emailer component may take in an EmailSettings class and an Amazon API wrapper might take in a AmazonCredentials class containing username, token, and securityIdentifier. You can encapsulate these as custom config sections if you desire or just build them from AppSettings when you setup your IoC container.
When used correctly, IoC does not require explicit propagation of dependencies all the way down the chain. Instead a component just has a dependency on the next level down and that in turn has dependencies of its own. So for example in MVC, your controller is the top most component. This might take in a business logic class via its constructor. The BLL class takes in a repository class, and the repository takes in an Entity Framework DbContext. Finally, the DbContext takes in a connection string. This dependency graph is automatically constructed by your IoC container when you ask for the controller. Configuration classes are injected in exactly the same way, with no need for statics or singletons.
For a more detailed overview of dependency injection, keep a look out for Mark Seemans's Dependency Injection in .NET book which will be released shortly,
Added on January 20, 2012 at 22:11 by Matthew Marksbury | Permalink
A very important point that I learned and started implementing a few years ago. It should also be noted that the configuration file is not the evil here, it's the usage of it.
For example, being able to quickly change an XML file for a production app is often necessary.
I solved this by wrapping my configuration calls inside of classes that get injected. For example, I might have an interface called IConnectionStringProvider that has a single method called GetConnectionString(). The default implementation of that would use ConfigurationManager to get the setting, but other implementations could use Environment Variables, WebService calls, or whatever approach is needed.
This article has been locked and further comments are not permitted. | http://www.devtrends.co.uk/blog/configuration-settings-are-a-dependency-that-should-be-injected | CC-MAIN-2014-15 | refinedweb | 2,142 | 51.38 |
How do I close a file descriptor in C?
To close a file descriptor, we use the
close system call. Here’s an example:
#include <unistd.h> #include <assert.h> #include <stdio.h> #define WRITE(F, S) write((F), (S), sizeof(S)) int main(void) { // Do some normal writing. assert(0 < WRITE(1, "This is written to stdout via descriptor 1\n")); // Get another reference to the same stdout pipe. int new_stdout = dup(1); // Writing to the new reference also works. assert(0 < WRITE(new_stdout, "This is written to stdout via new descriptor\n")); // Close our original file descriptor to stdout! close(1); // Writing to our new reference still works. // The pipe is only closed when all references to it are closed. assert(0 < WRITE(new_stdout, "This is also written to stdout via new descriptor\n")); // Close our final reference to the stdout pipe. // This closes the write end of the pipe. close(new_stdout); // Now we can't write to the pipe, because the write end of the pipe has been closed. assert(-1 == WRITE(new_stdout, "This should break\n")); perror("Could not write to new_std. | https://jameshfisher.com/2017/02/16/close-file-descriptor.html | CC-MAIN-2019-09 | refinedweb | 186 | 77.53 |
Created on 2020-03-25 19:09 by fdrake, last changed 2020-06-20 19:53 by Ido Michael.
Since xml.etree.cElementTree does not exist in Python 3.9, the statement that it's deprecated should be removed from the documentation.
I will clean this
This issue looks like the same with
Same core problem (module removed with insufficient document update), but a different action is needed for 3.8 and 3.9.
When I started testing an application with 3.9 and found one of the dependencies broken because it was relying directly on xml.etree.cElementTree, I had to dig into the history to determine that it was removed intentionally. Updated documentation would have helped.
I did file an issue on the dependency as well:
Thank you for catching this Fred. I am surprised that some code uses xml.etree.cElementTree without falling back to xml.etree.ElementTree. In Python 3 you can just use xml.etree.ElementTree, in Python 2 you have to fallback to the Python implementation because the C implementation was optional.
The common idiom is
try:
import xml.etree.cElementTree as ET
except ImportError:
import xml.etree.ElementTree as ET
The Python 2.7 documentation was not clear that xml.etree.cElementTree was optional, so users who didn't dive into the implementation or build process could easily not have known unless someone with a more limited installation used their code.
For the record, I submitted a fix to the dependent:
Are you still working on this @Manjusaka?
If not, I would like to submit a patch for it.
Although the modules has been deprecated for a long time, the removal came as surprise. We are currently debating to bring the module back and warn users that it will be removed in 3.10.
I'm working on it. I will make a PR today.
Tal, is there a decision to this debate or can I just move the dep. warning?
AFAICT from following the discussion, the decision is to hold off on a few deprecations which were causing most of the breakage. However, this does not appear to include xml.etree.cElementTree. Since that has currently been removed in the 3.9 branch, we should indeed fix the docs accordingly.
(Note that previous versions' docs, especially 3.8, should also be updated in this regard, as per the other issue Fred opened about this, #40064.)
(Also, please note that the "master" branch is now for version 3.10; PRs for version 3.9 should be based on the "3.9" branch, and the pull request made for that branch too.)
(My mistake! This should be done from/against the master branch, and then back-ported to 3.9, as usual.)
No, please don't change the docs yet. I want to re-introduce the cElementTree and go through a proper deprecation phase.
Ah, thanks for the update Christian, I must have misunderstood your intentions.
So should the deprecation note be removed in the "master" branch for 3.10?
Thanks Tal.
Yes, I also got the impression we want to clean this, sorry Christian.
Please let me know how to follow up on this. | https://bugs.python.org/issue40065 | CC-MAIN-2020-45 | refinedweb | 531 | 68.67 |
We know that when the operating system loads the executable, it will scan through its IAT table to locate the DLLs and functions the executable is using. This is done because the OS must map the required DLLs into the executable’s address space.
To be more precise, IAT is the table that contains references between the function names and their virtual addresses, which are exported from different loaded modules. Each executable or DLL library contains a PE header, which has all the information that the executable needs for the operating system to start successfully, including the IAT table. To understand where the IAT table is located, we must first talk about the PE header.
Now we’re ready to explore the actual IAT of the process. Let’s first present the program we’ll be using to do that:
#include "stdafx.h" #include <Windows.h> int _tmain(int argc, _TCHAR* argv[]) { HANDLE hFile = CreateFile(L"C:\\temp.txt", GENERIC_WRITE, 0, NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL); if(hFile == INVALID_HANDLE_VALUE) { printf("Unable to open file."); } else { printf("File successfully opened/created."); } CloseHandle(hFile); getchar(); return 0; }
When we compile the program, another createfilee.exe executable will be created. We can start the createfilee.exe program and let it run. It will stop the execution on the getchar() function call, which will wait until we press certain keystroke. After that, start the WinDbg debugger and attach it to the process like this:
Now we’ll use the !dh command to print the PE header elements that we need. Let’s first print all the options of the !dh command, which we can see below. If we pass the -a parameter to the !dh command, we’ll be printing everything to the console window. If we use the -f parameter, we’ll print only the file headers and with -s we’ll print only the section headers.
In our case, we’ll use the -f parameter because we need to dump the file headers. The output below was generated by the “!dh 00400000 -f” command:
0:002> !dh 00400000 -f File Type: EXECUTABLE IMAGE FILE HEADER VALUES 14C machine (i386) 7 number of sections 515EBA3E time date stamp Fri Apr 05 13:49:18 2013 0 file pointer to symbol table 0 number of symbols E0 size of optional header 102 characteristics Executable 32 bit word machine OPTIONAL HEADER VALUES 10B magic # 10.00 linker version 3800 size of code 3A00 size of initialized data 0 size of uninitialized data 11078 address of entry point 1000 base of code ----- new ----- 00400000 image base 1000 section alignment 200 file alignment 3 subsystem (Windows CUI) 5.01 operating system version 0.00 image version 5.01 subsystem version 1B000 18000 [ 3C] address [size] of Import Directory 19000 [ 459] address [size] of Resource Directory 0 [ 0] address [size] of Exception Directory 0 [ 0] address [size] of Security Directory 1A000 [ 2EC] address [size] of Base Relocation Directory 15720 [ 1C] address [size] of Debug Directory 0 [ 0] address [size] of Description Directory 0 [ 0] address [size] of Special Directory 0 [ 0] address [size] of Thread Storage Directory 0 [ 0] address [size] of Load Configuration Directory 0 [ 0] address [size] of Bound Import Directory 181BC [ 180] address [size] of Import Address Table Directory 0 [ 0] address [size] of Delay Import Directory 0 [ 0] address [size] of COR20 Header Directory 0 [ 0] address [size] of Reserved Directory
At the bottom of the output we can see the data directories that we’re after. The data directory that we want to read is the IAT directory, which is located at the 0x181BC RVA address and is 180 bytes in size. Now we know the exact address of the IAT table in memory: if the base address of the executable is 0x00400000 and the RVA of the IAT in the executable is 0x181BC, then the whole address of the IAT table in memory is 0x00400000+0x181BC = 0x004181BC. Also, the size of the IAT is 0x180 bytes and each entry is 4 bytes in size. This is why the whole command should be as follows:
> dps 004181bc L180/4
The output of that command can be seen below (the whole table is presented even though it might be rather long), just so we can observe all the entries in that table:
0:002> dps 004181bc L180/4 004181bc 7c809be7 kernel32!CloseHandle 004181c0 7c864042 kernel32!UnhandledExceptionFilter 004181c4 7c80de95 kernel32!GetCurrentProcess 004181c8 7c801e1a kernel32!TerminateProcess 004181cc 7c80ac7e kernel32!FreeLibrary 004181d0 7c80e4dd kernel32!GetModuleHandleW 004181d4 7c80ba71 kernel32!VirtualQuery 004181d8 7c80b475 kernel32!GetModuleFileNameW 004181dc 7c80ac61 kernel32!GetProcessHeap 004181e0 7c9100c4 ntdll!RtlAllocateHeap 004181e4 7c90ff2d ntdll!RtlFreeHeap 004181e8 7c8017e9 kernel32!GetSystemTimeAsFileTime 004181ec 7c8099c0 kernel32!GetCurrentProcessId 004181f0 7c8097d0 kernel32!GetCurrentThreadId 004181f4 7c80934a kernel32!GetTickCount 004181f8 7c80a4c7 kernel32!QueryPerformanceCounter 004181fc 7c9132ff ntdll!RtlDecodePointer 00418200 7c8449cd kernel32!SetUnhandledExceptionFilter 00418204 7c80aeeb kernel32!LoadLibraryW 00418208 7c80ae40 kernel32!GetProcAddress 0041820c 7c80be56 kernel32!lstrlenA 00418210 7c812f81 kernel32!RaiseException 00418214 7c809c98 kernel32!MultiByteToWideChar 00418218 7c81f424 kernel32!IsDebuggerPresent 0041821c 7c80a174 kernel32!WideCharToMultiByte 00418220 7c839471 kernel32!HeapSetInformation 00418224 7c809842 kernel32!InterlockedCompareExchange 00418228 7c802446 kernel32!Sleep 0041822c 7c80982e kernel32!InterlockedExchange 00418230 7c9132d9 ntdll!RtlEncodePointer 00418234 7c810cd9 kernel32!CreateFileW 00418238 00000000 0041823c 00000000 00418240 00000000 00418244 00000000 00418248 00000000 0041824c 00000000 00418250 00000000 00418254 00000000 00418258 00000000 0041825c 00000000 00418260 00000000 00418264 00000000 00418268 00000000 0041826c 00000000 00418270 00000000 00418274 00000000 00418278 00000000 0041827c 10322e30 MSVCR100D!_crt_debugger_hook 00418280 10327ce0 MSVCR100D!_wsplitpath_s 00418284 10274390 MSVCR100D!wcscpy_s 00418288 10326190 MSVCR100D!_wmakepath_s 0041828c 10323040 MSVCR100D!_except_handler4_common 00418290 10319d40 MSVCR100D!_onexit 00418294 102496d0 MSVCR100D!_lock 00418298 10319fa0 MSVCR100D!__dllonexit 0041829c 10249720 MSVCR100D!_unlock 004182a0 10316310 MSVCR100D!_invoke_watson 004182a4 103329b0 MSVCR100D!_controlfp_s 004182a8 102fb0c0 MSVCR100D!terminate 004182ac 10248680 MSVCR100D!_initterm_e 004182b0 10248650 MSVCR100D!_initterm 004182b4 103151e0 MSVCR100D!_CrtDbgReportW 004182b8 10319ac0 MSVCR100D!_CrtSetCheckCount 004182bc 10362730 MSVCR100D!__winitenv 004182c0 10248080 MSVCR100D!exit 004182c4 102480c0 MSVCR100D!_cexit 004182c8 1031d090 MSVCR100D!_XcptFilter 004182cc 102480a0 MSVCR100D!_exit 004182d0 10248ce0 MSVCR100D!__wgetmainargs 004182d4 10248100 MSVCR100D!_amsg_exit 004182d8 10245130 MSVCR100D!__set_app_type 004182dc 103635f8 MSVCR100D!_fmode 004182e0 103632fc MSVCR100D!_commode 004182e4 10247580 MSVCR100D!__setusermatherr 004182e8 1031ecd0 MSVCR100D!_configthreadlocale 004182ec 10321270 MSVCR100D!_CRT_RTC_INITW 004182f0 10267ee0 MSVCR100D!printf 004182f4 1025f660 MSVCR100D!getchar 004182f8 00000000 004182fc 00000000 00418300 00000000 00418304 00000000 00418308 00000000 0041830c 00000000 00418310 00000000 00418314 00000000 00418318 00000000 0041831c 00000000 00418320 00000000 00418324 00000000 00418328 00000000 0041832c 00000000 00418330 00000000 00418334 00000000 00418338 00000000
First, we can see a number of entries from the kernel32.dll library and later there are entries from the msvcr100d.dll library. All the entries that we’re directly using in our C++ code have been marked in bold font.
We’ve just figured out the library names used by the executable, and all the function names plus their virtual addresses in memory. Let’s also print all the loaded modules with the lmi command. The output of that command can be seen below:
0:002> lmi start end module name 00400000 0041b000 createfilee C (no symbols) 00940000 00949000 Normaliz (export symbols) C:\WINDOWS\system32\Normaliz.dll 10200000 10373000 MSVCR100D (pdb symbols) C:\WINDOWS\system32\MSVCR100D.dll 3d930000 3da16000 WININET (pdb symbols) C:\WINDOWS\system32\WININET.dll 3dfd0000 3e1bc000 iertutil (pdb symbols) C:\WINDOWS\system32\iertutil.dll 5b860000 5b8b5000 NETAPI32 (pdb symbols) C:\WINDOWS\system32\NETAPI32.dll 5d090000 5d12a000 comctl32_5d090000 (pdb symbols) C:\WINDOWS\system32\comctl32.dll 71aa0000 71aa8000 WS2HELP (pdb symbols) C:\WINDOWS\system32\WS2HELP.dll 71ab0000 71ac7000 WS2_32 (pdb symbols) C:\WINDOWS\system32\WS2_32.dll 76390000 763ad000 IMM32 (pdb symbols) C:\WINDOWS\system32\IMM32.DLL 76b40000 76b6d000 WINMM (pdb symbols) C:\WINDOWS\system32\WINMM.dll 77120000 771ab000 OLEAUT32 (pdb symbols) C:\WINDOWS\system32\OLEAUT32.dll 773d0000 774d3000 comctl32 (pdb symbols) C:\WINDOWS\WinSxS\x86_Microsoft.Windows.Common-Controls_6595b64144ccf1df_6.0.2600.6028_x-ww_61e65202\comctl32.dll 774e0000 7761e000 ole32 (pdb symbols) C:\WINDOWS\system32\ole32.dll 77a80000 77b15000 CRYPT32 (pdb symbols) C:\WINDOWS\system32\CRYPT32.dll 77b20000 77b32000 MSASN1 (pdb symbols) C:\WINDOWS\system32\MSASN1.dll 77c00000 77c08000 VERSION (pdb symbols) C:\WINDOWS\system32\VERSION.dll 77c10000 77c68000 msvcrt (pdb symbols) C:\WINDOWS\system32\msvcrt.dll 77dd0000 77e6b000 ADVAPI32 (pdb symbols) C:\WINDOWS\system32\ADVAPI32.dll 77e70000 77f03000 RPCRT4 (pdb symbols) C:\WINDOWS\system32\RPCRT4.dll 77f10000 77f59000 GDI32 (pdb symbols) C:\WINDOWS\system32\GDI32.dll 77f60000 77fd6000 SHLWAPI (pdb symbols) C:\WINDOWS\system32\SHLWAPI.dll 77fe0000 77ff1000 Secur32 (pdb symbols) C:\WINDOWS\system32\Secur32.dll 78130000 78263000 urlmon (private pdb symbols) C:\WINDOWS\system32\urlmon.dll 7c800000 7c8f6000 kernel32 (pdb symbols) C:\WINDOWS\system32\kernel32.dll 7c900000 7c9b2000 ntdll (pdb symbols) C:\WINDOWS\system32\ntdll.dll 7c9c0000 7d1d7000 SHELL32 (pdb symbols) C:\WINDOWS\system32\SHELL32.dll 7e410000 7e4a1000 USER32 (pdb symbols) C:\WINDOWS\system32\USER32.dll
The two libraries kernel32.dll and msvcr100d.dll have been bolded to be easily found. Notice that their base addresses are 0x10200000 and 0x7c800000, which correlates with all the functions in the IAT table. All those function pointers are correct, because the OS filled the IAT table with correct function pointers when the executable was loaded.
The Import Directory
The import function is the function that’s not located in the current module, but is imported from some other module, usually from several. The information about the function must be kept in the import directory of the current module because when the operating system loads the executable and memory and starts it, it must also load all the dependent libraries in the process’s memory space, so that the program can call those functions.
The import table contains IMAGE_IMPORT_DESCRIPTOR structures, which has the following members:
Each IMAGE_IMPORT_DESCRIPTOR element structure in the import directory contains information about a DLL the current module needs in order to reference its symbols and call its functions. The array will always contain another terminating structure, which has its members initialized to zero.
At the beginning of the IMAGE_IMPORT_DESCRIPTOR, we can see a union data structure being used. Union variables occupy the same memory and are normally used to specify that certain variable can have different variable types. In our case, both variables, the Characteristics and OriginalFirstThunk, have the same variable type DWORD, so the union declaration is only used to make an alias for both of those members.
Remember that the union declaration occupies only 4 bytes in our case (which is the size of the DWORD type) and not 8 bytes; this is how the union declarations work. Because of this, the size of IMAGE_IMPORT_DESCRIPTOR data structure is 20 bytes: 4 bytes for the union declaration and 16 bytes for the TimeDateStamp, ForwarderChain, Name and FirstThunk elements.
We haven’t yet mentioned what the elements of the structure actually mean. This is why we’re describing them below:
- OriginalFirstThunk: this element contains the RVA to the IMAGE_THUNK_DATA structure, which we can see below:
We can see that the IMAGE_THUNK_DATA structure is a union structure, which is 4-bytes in size. When we come to this structure, we must remember that a function can be imported by name or by ordinal. In the case of a latter, the Ordinal field of the union in IMAGE_THUNK_DATA structure will have the most significant, but set to 1 and the ordinal number can be extracted from the least significant bits.
The structure actually contains a pointer to the array of RVAs that point to the IMAGE_IMPORT_BY_NAME structures, terminated by 0. Let’s look at how the IMAGE_IMPORT_BY_NAME structure look, which can be seen below:
There are two elements inside the structure:
- Hint: this field is not of particular importance.
- Name: contains the name of the import function; the field is actually a variable-sized pointer to the string.
Keep in mind that the OriginalFirstThunk will contain as many elements as is the number of imported functions for a particular library. Each imported function name represents one element in the array.
- TimeDateStamp
- ForwarderChain
- Name: contains the RVA address where the name of the library is saved.
- FirstThunk: contains the RVA to the array of IMAGE_THUNK_DATA structures, like the OriginalFirstThunk above. Both arrays contain the same number of elements. The OriginalFirstThunk is an array of names of imported functions, also called the ILT. The FirstThunk is an array of addresses of imported functions, also called the IAT.The OriginalFirstThunk uses the AddressOfData element of the IMAGE_THUNK_DATA structure, which points to another structure that contains the Name element of the library. The FirstThunk uses the Function element of the IMAGE_THUNK_DATA structure, which points to the address of the imported function. When the executable is loaded, the loader must traverse the OriginalFirstThunk array to find all the imported function names the executable is using. Then it must calculate the addresses of the functions and populate the FirstThunk array, so that the functions can be called whenever needed.
Conclusion
To conclude, the Import Table contains one entry for each DLL we’re importing from. Each entry contains Import Lookup Table (ILT) and Import Address Table (IAT) pointers [7]. If we would like to go over the whole PE file structure, there’s a great picture available at, which was provided by the OpenRCE team.
References:
[1] Import Address Table, accessible at.
[2] Dynamic-link library, accessible at.
[3] CreateFile function, accessible at.
[4] Linker Options, accessible at.
[5] PE File Structure, accessible at.
[6] Tutorial 6: Import Table, accessible at.
[7] What’s the difference between “Import Table address” and “Import Address Table address” in Date Directories of PE?, accessible at.
Your bold markings are not showing up within the article. Other than that nice article. | http://resources.infosecinstitute.com/the-import-directory-part-1/ | CC-MAIN-2018-05 | refinedweb | 2,219 | 59.8 |
[Solved] Wrong cosine? Why?
Hello everybody. I would need to calculate the cosine of an angle. I used the math.cos () function, but the result I am getting is wrong. can anyone tell me why? thank you.
I am attaching the text:
import math print(math.cos(60))
in the console, as a result, is:
-0.9524129804151563
Otherwise, can you tell me another method with which to calculate the cosine?
Thank you
RTFM... Return the cosine of x radians.
returns its results in radians
The angle has to be given in radians but cos returns a number between -1 and +1
thank you so much. I had foolishly forgotten ... | https://forum.omz-software.com/topic/6961/solved-wrong-cosine-why/1 | CC-MAIN-2022-33 | refinedweb | 109 | 79.56 |
I just got my BT Shield and popped it onto my arduino to start testing.
I followed all the instructions on:
I changed
#include <NewSoftSerial.h>
to
#include <SoftwareSerial.h>
in the Slave.pde example since it wasn’t working and I read that that was what I should do…
and I changed the other reference to the header to comply with that change :
NewSoftSerial blueToothSerial(RxD,TxD);
to
SoftwareSerial blueToothSerial(RxD,TxD);
I also pulled the connector things (I don’t know what they’re called, but they connect two pins together) and moved them from their original placements to the 0 and 1 pins so that they could connect to the arduino tx and rx pins.
Right now only the tx pin is blinking (and in the serial monitor there is all of this unreadable text like
?00ÀÀÀ@```Àüüü@~~`~0000ÆÀÀü@~`~ 000ÀÀÀ@`øüüü@~~`~0000ÆÀÀü@~`~ 000ÀÀÀ@`øøüüü@~~`~0000ÆÀÀÀÀ`~`~0?00ÀÀÀ@``` øøüüüü@~~`~0000ÆÀÀü@~`
, and when I scan for devices on my android phone, I find no devices so I can’t really test anything else.
Any ideas on what I’m doing wrong? | https://forum.seeedstudio.com/t/bt-shield-not-showing-up-on-android-bluetooth-device-scan/16415 | CC-MAIN-2020-29 | refinedweb | 182 | 70.23 |
MicroPython
What is MicroPython?
MicroPython is a full implementation of the Python 3 programming language that runs directly on embedded hardware like Raspberry Pi Pico. You get an interactive prompt (the REPL) to execute commands immediately via USB Serial, and a built-in filesystem. The Pico port of MicroPython includes modules for accessing low-level chip-specific hardware.
The MicroPython Wiki
- correct MicroPython UF2 file for your board:
-
Raspberry Pi Pico W (with urequests and upip preinstalled)
Then go ahead and:
Push and hold the BOOTSEL button and plug your Pico into the USB port of your Raspberry Pi or other computer. Release the BOOTSEL button after your Pico is connected.
It will mount as a Mass Storage Device called RPI-RP2.
Drag and drop the MicroPython UF2 file onto the RPI-RP2 volume. Your Pico will reboot. You are now running MicroPython.
You can access the REPL via USB Serial.
The Raspberry Pi Pico Python SDK book contains step-by-step instructions for connecting to your Pico and programming it in MicroPython using both the command line and the Thonny IDE.
Where can I find documentation?
You can find information on the MicroPython port to RP2040 at;
- Raspberry Pi Pico Python SDK
A MicroPython environment for RP2040 microcontrollers
- Connecting to the Internet with Raspberry Pi Pico W
Getting Raspberry Pi Pico W online with C/C++ or MicroPython
- RP2 Quick Reference
The official documentation around the RP2040 port of MicroPython
- RP2 Library
The official documentation about the
rp2module in MicroPython
There is also a book by Raspberry Pi Press available written by Gareth Halfacree and Ben Everard.
In "Get Started with MicroPython on Raspberry Pi Pico", you will learn how to use the beginner-friendly language MicroPython to write programs and connect hardware to make your Raspberry Pi Pico interact with the world around it. Using these skills, you can create your own electro-mechanical
You can buy the book on the Raspberry Pi Press site.
Which hardware am I running on?
There is no direct method for software written in MircoPython to discover whether it is running on a Raspberry Pi Pico or a Pico W by looking at the hardware. However, you can tell indirectly by looking to see if network functionality is included in your particular MicroPython firmware:
import network if hasattr(network, "WLAN"): # the board has WLAN capabilities
Alternatively, you can inspect the MicroPython firmware version to check whether it was compiled for Raspberry Pi Pico or for Pico W using the
sys module.
>>> import sys >> sys.implementation (name='micropython', version=(1, 19, 1), _machine='Raspberry Pi Pico W with RP2040', _mpy=4102)
So
if 'Pico W' in sys.implementation._machine can be used to detect whether your firmware was compiled for Pico W. | https://www.raspberrypi.com/documentation/microcontrollers/micropython.html#drag-and-drop-micropython | CC-MAIN-2022-40 | refinedweb | 459 | 51.18 |
Hello:
I wrote a java class to connect to database and then use JSP to call the java class to get the output that is read from the database.
It give me error. I use WebSphere.
================ some of the code in java class ============
..... other
connectdb,closedb database method under try and catch
wrote a select method under try and catch
wrote main and call connectdb, the quary method and closedb
... others
=========== JSP code =============
.... others
import package
call the java class under try and catch
.. others
======== end
Please help?
Do you want to copy the whole code?
Thank you
sem
Toronto
Discussions
Web tier: servlets, JSP, Web frameworks: Java class with JSP
Java class with JSP (1 messages)
Threaded Messages (1)
- Java class with JSP by David Crook on July 11 2005 20:31 EDT
Java class with JSP[ Go to top ]
You don't have nearly enough stuff up there to debug whats going wrong.
- Posted by: David Crook
- Posted on: July 11 2005 20:31 EDT
- in response to sem sam
But if all you want is to read some stuff out of the DB and dump it on you page you may want to look at the JSTL SQL tag libraries. They will connect to the database and do queries straight from the JSP page. Its not very scalable or easy to maintain, but it does work.
Here is a paper describing how to use them, look about half way down..
You can download a free copy of the JSTL tags from apache.
Good luck | http://www.theserverside.com/discussions/thread.tss?thread_id=34847 | CC-MAIN-2014-15 | refinedweb | 257 | 88.16 |
Hello,
I'm trying to update a warranty field in my water meter feature class. It is supposed to add 20 years to the installation date. I used ModelBuilder to get the following code:
# Import arcpy module
import arcpy
# Local variables:
WMeter = "WMeter"
WMeter__2_ = WMeter
# Process: Calculate Field
arcpy.CalculateField_management(WMeter, "WarrantyDate", "DateAdd (\"yyyy\",20,[InstallDate] )", "VB", "")
However, I have to manually enter an edit session for this script to run correctly. If I'm not in an editing session it won't update the field the "WarrantyDate" field. Is there a way with Python or ModelBuilder to make it enter into an editing session? I've attempted:
edit = arcpy.da.Editor(workspace)edit.startEditing(False, True)edit.startOperation()
but I always get:
ERROR 001049: ASync operations not allowed while editing.
I have tried swapping out the true/false statements to see if it would change anything but it doesn't seem to matter if each is true or false, I get ASync either way.
Any help is greatly appreciated!
I'll caution you that an edit session is only required in certain circumstances, and this is probably not one of them.
Can you remove anything to do with an edit session and change your calculate field statement to:
I assume it's getting hung up on the escaped double-quote inside the statement, but not 100% sure. | https://community.esri.com/thread/221473-editing-session-in-modelbuilder-or-python | CC-MAIN-2019-13 | refinedweb | 228 | 53.92 |
spacekid434Member
Content count53
Joined
Last visited
Community Reputation117 Neutral
About spacekid434
- RankMember
How to use git in a mutli-user environment,
spacekid434 replied to spacekid434's topic in For BeginnersI've thought of branching, but I wondered if it was the appropriate thing to have a branch per developer. I saw in the link I found that in git, you can fetch then merge, then push which means you won't get a non-fast-forward error.
How to use git in a mutli-user environment,
spacekid434 replied to spacekid434's topic in For BeginnersDoing some more searching I found this. Looks interesting and sort of describes what I'm looking at. I would still really love some good articles so if anyone has anything It would be really great. Thanks.
How to use git in a mutli-user environment,
spacekid434 posted a topic in For Beginners!
OpenGL error with seperated matrices and GLSL
spacekid434 replied to spacekid434's topic in Graphics and GPU ProgrammingI did some research and managed to write a function to convert from char* to wchar* and then to wstring. Unless I've done something wrong, it appears that there is NO debug information at all after compiling the shaders. So this brings be back to my original problem. Thanks for the help though. I really appreciate it!
OpenGL error with seperated matrices and GLSL
spacekid434 replied to spacekid434's topic in Graphics and GPU ProgrammingI could use printf(), but for the sake of my learning, how do I convert a C string to wchar?
OpenGL error with seperated matrices and GLSL
spacekid434 replied to spacekid434's topic in Graphics and GPU ProgrammingThanks for that. I'm having trouble displaying the info log though. The issue is, I don't know how to convert from an GLchar* to a LPCWSTR for use in a MessageBox(). I'm not using ATL or MFC. Any help there would be greatly appreciated as converting between strings has given me a headache before. Thanks.
OpenGL error with seperated matrices and GLSL
spacekid434 posted a topic in Graphics and GPU ProgrammingHello, I recently started playing around with OpenGL on C++ and after getting some of the basics, a simple GLSL implementation. I tried to add very slight random noise to some simple shaded polygons, where the colors are described at the vertices (I don't know the proper name for this) in a pyramid. I made it use FragCoord to get it coords to calculate the noise value, but this uses screen pixels and so the noise would stay the same no matter the prospective you're looking at the model from. I followed the description of a better method described in the top answer [url=""]here[/url], but this doesn't seem to be working. I can't move the shape around and it's stuck at z=0. I can't change any camera perspectives. I think this may be an error in my GLSL as I'm totally new to it and it's quite alien to me. Here's an image of my problem. The first is what it's meant to look like (without the noise) the second what it looks like: [img][/img] [img][/img] Here is my code: frag.glsl [CODE] varying vec4 verpos; float rand2d(vec2 x){ uint n = floatBitsToUint(x.y * 214013.0 + x.x * 2531011.0); n = n * (n * n * 15731u + 789221u); n = (n >> 9u) | 0x3F800000u; return (2.0 - uintBitsToFloat(n)); } float rand3d(vec3 x){ uint n = floatBitsToUint(x.y * 214013.0 + x.x * 2531011.0 + x.z * 27644437.0); n = n * (n * n * 15731u + 789221u); n = (n >> 9u) | 0x3F800000u; return (2.0 - uintBitsToFloat(n)); } void main(void) { vec4 outColor = gl_Color; outColor *= ((0.2 * rand3d(vec3(verpos.x, verpos.y, verpos.z)) + 0.9); outColor.w = 1.0; gl_FragColor = outColor; } [/CODE] vert.glsl [CODE] uniform mat4 view_matrix; uniform mat4 model_matrix; varying vec4 verpos; void main(){ verpos = model_matrix * gl_Vertex; gl_FontColor = gl_Color; gl_Position = gl_ProjectionMatrix * view_matrix * model_matrix * gl_Vertex; } [/CODE] and my render method where I separate my matrices: [CODE] void App::render(){ start = SDL_GetTicks(); //Clear the matrices and pixel/depth buffer(s) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); //Create model matrix and pass to shader glTranslatef(x, y, -6.0f);//Modify the world matrix GLfloat model_matrix_array[16]; glGetFloatv(GL_MODELVIEW_MATRIX, model_matrix_array); glUniformMatrix4fv(model_matrix, 1, GL_FALSE, model_matrix_array); glLoadIdentity(); //Create view matrix and pass to shader glRotatef(roty, 1.0f, 0.0f, 0.0f);//Modify the view matrix glRotatef(rotx, 0.0f, 1.0f, 0.0f);//Modify the view matrix GLfloat view_matrix_array[16]; glGetFloatv(GL_MODELVIEW_MATRIX, view_matrix_array); glUniformMatrix4fv(view_matrix, 1, GL_FALSE, view_matrix_array); glLoadIdentity(); glBegin(GL_TRIANGLE_FAN); glColor4f(1.0f, 1.0f, 1.0f, 1.0f); glVertex3f(0.0f, 0.0f, 1.0f);Color4f(0.0f, 1.0f, 0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, -1.0f); glEnd(); glBegin(GL_QUADS);End(); SDL_GL_SwapBuffers(); frames++; current = SDL_GetTicks(); } [/CODE] I'm sorry that this is such a big post but I really hope it doesn't put people off and that someone can help me. I would really appreciate it. Thanks.
Perlin noise and procedural generation
spacekid434 replied to spacekid434's topic in For BeginnersThanks, I'll have a look at that website when I'm back home. I did put a link to my perlin demo in the post but here's the image anyway: [img][/img] Thanks
Perlin noise and procedural generation
spacekid434 posted a topic in For BeginnersI've recently been working on a 2d RPG adventure game in Java. I have world loading among other things and I want to use procedural generation for the landscape of the game world. I have had no previous experience with procedural generation so this was quite a learning curve for me and I feel I have done fairly well. I finally managed to make a Perlin noise demo with help from this article [url=""]here[/url] and I'm fairly happy of the [url=""]result[/url] ... [CODE]; } [/CODE].
- Double.toString(MRTROLOLO.jpg)
Export JAR using slick with Eclipse
spacekid434 replied to spacekid434's topic in For BeginnersSorry for bumping, but quite a few hours of work now will be for nothing if someone cant help. I don't even need a proper tutorial. If someone could just supply me with the settings they use or how they do it. Im open to new ideas and im willing to learn along the way. Thanks
repost tic tac toe
spacekid434 replied to phil67rpg's topic in For BeginnersDude, you cant expect us to help you if we can't even your read your code. You might want to fix post or you won't get ANY help.
Export JAR using slick with Eclipse
spacekid434 posted a topic in For BeginnersHello Everyone. I've just started working on a new project and I was just checking the export as Jar to make sure that it works. It's a good thing that I did because for some reason I cant get Eclipse to properly export my project. Im using slick2d which is stored in a folder in my root project directory. (this also contains lwjgl.jar and all the win 64x natives) I have tried playing round with a few setting in the export as jar option and I have also tried export as runnable jar. When I export as 'jar', when I try to open the jar it says "Could not find the main class: com.thecreeperlawyer.enginedemo.EngineDemo. Program will exit.". I have looked into the jar with winrar and the manifest seems to say the right thing (from my limited knowledge of manifests) and I can also find "com\thecreeperlawyer\enginedemo\EngineDemo.class". When I export as 'runnable jar', when I open it nothing happens. Could someone please tell me, or provide a thorough tutorial on how to properly export a slick2D/LWJGL project from Eclipse. Also if anyone could provide (if the tutorial doesn't) the best way to export the libs (slick2d.jar, LWJGL.jar and the natives) that would be very much appreciated. Thanks
Java Null Pointer Exception
spacekid434 posted a topic in For BeginnersHello. I'm currently working a a 2D dungeon crawler game. I have a tile based system with a Tile class. This contains a whole heap of public static Tile. The problem is, when I try to access these from anywhere, I get a Null pointer Exception. I don't how i get this though since I quite clearly instatiate the variables. Heres the code [code]public static Tile backgroundRock; public static Tile bars; public static Tile dirt; public static Tile grass; public static Tile grill; public static Tile platform; public static Tile rock; public static Tile shaft; public static Tile table; public static Tile water; static{ bars = new TileBars(0, 0, 0x404040); dirt = new TileDirt(1, 1, 0x7F3300); grass = new TileGrass(2, 2, 0x267F00); grill = new TileGrill(3, 3, 0x3F2F2F); rock = new TileRock(4, 4, 0x808080); shaft = new TileShaft(5, 5, 0xFFD800); table = new TileTable(6, 6, 0xFF4300); water = new TileWater(7, 7, 0x0094FF); platform = new TilePlatform(8, 8, 0x423300); backgroundRock = new TileBackRock(9, 9, 0xFFFCF4); }[/code] Please help, is there something I'm missing...
Slick: Resource not found
spacekid434 posted a topic in For BeginnersI started working with the slick 2D game engine for my upcoming 2D platformer written in Java. I started working on it today and i set it up according to the Slick website. However when I run it it throws this error: [quote]Thu Oct 13 22:06:41 NZDT 2011 ERROR:Unable to determine Slick build number Thu Oct 13 22:06:41 NZDT 2011 INFO:LWJGL Version: 2.7.1 Thu Oct 13 22:06:41 NZDT 2011 INFO:OriginalDisplayMode: 1920 x 1080 x 32 @60Hz Thu Oct 13 22:06:41 NZDT 2011 INFO:TargetDisplayMode: 854 x 500 x 0 @0Hz Thu Oct 13 22:06:41 NZDT 2011 INFO:Starting display 854x500 Exception in thread "main" java.lang.RuntimeException: Resource not found: org/newdawn/slick/data/defaultfont.png at org.newdawn.slick.util.ResourceLoader.getResourceAsStream(ResourceLoader.java:69) at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:169) at org.newdawn.slick.Image.<init>(Image.java:196) at org.newdawn.slick.Image.<init>(Image.java:170) at org.newdawn.slick.Image.<init>(Image.java:158) at org.newdawn.slick.Image.<init>(Image.java:136) at org.newdawn.slick.AngelCodeFont.<init>(AngelCodeFont.java:104) at org.newdawn.slick.Graphics$1.run(Graphics.java:142) at java.security.AccessController.doPrivileged(Native Method) at org.newdawn.slick.Graphics.<init>(Graphics.java:139) at org.newdawn.slick.GameContainer.initSystem(GameContainer.java:752) at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:378) at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314) at me.spacekid434.Exordium.Game.main(Game.java:72)[/quote] I'm using Eclipse as my IDE. I have included both slick.jar and lwjgl.jar in the build-path and have them in the root directory. Please help! | https://www.gamedev.net/profile/181128-thecreeperlawyer/?tab=gallery | CC-MAIN-2018-05 | refinedweb | 1,820 | 54.52 |
Seriously take your time. It’s done when it’s done.
All that work i have seen by yourself & contributors is amazing.
All the best… Max
Seriously take your time. It’s done when it’s done.
All that work i have seen by yourself & contributors is amazing.
All the best… Max
the master has spoken!! congrats raz!
love to see your huge effort & dedication to this project
always there to help you
cheers max
just bind a value to
:disabled its basic vuejs stuff…
Awesome! Just amazing what we have there! Cheers Max
Awesome!
really interested in your solution… im working on an mail project too
Here is my Alfred workflow for fast and easy documentation search:
best calendar componet out there!
whats your exact problem? please share some code and infos
just an idea, i used this to download a file (item.download is the download url)
downloadItem () { window.location = this.item.download },
please post a
quasar info output
i would try to import the router where you need it:
import router from '/router/index'; //replace with your correct path
changing
/src-electron/main-process/electron-main.js is fine!
consider using
mainWindow.maximize()
can you create a codepen with your issue?
have you searched online for a solution?
->
you might also have to customize the webpack config to specify the new host, but dont know for sure | https://forum.quasar-framework.org/user/max | CC-MAIN-2022-33 | refinedweb | 230 | 69.07 |
Modal Dialog with Qt Components on Meego
How to make a Modal Dialog with Qt Components on MeeGo
There is a QML Dialog element in Qt Quick Components in MeeGo 1.2 Harmattan API. But this dialog is not modal - i.e. it can be closed by pressing on any part of the dialog's window, not only on the buttons. Such behavior is not good in some cases - any accidental touch can close the dialog's window. There is no way through the API to make this dialog not respond on background clicks.
Surely we can make such a window ourselves without using Dialog element, but it is not a quick or proper way.
From the Dialog's source it can be discovered that a background click generates a privateClicked signal. Let's disable it by adding 2 lines into Dialog's creation:
signal privateClicked onPrivateClicked: {}
and we get truly modal dialog.
Full example of page with dialog
import QtQuick 1.1
import com.nokia.meego 1.0
Page {
QueryDialog {
id: quDialog signal privateClicked onPrivateClicked: {} anchors.centerIn: parent titleText: "Modal Dialog" rejectButtonText: "Cancel" onRejected: { console.log ("Rejected");} } Component.onCompleted: quDialog.open()
} | https://wiki.qt.io/Modal_Dialog_with_Qt_Components_on_Meego | CC-MAIN-2019-43 | refinedweb | 192 | 58.69 |
In December I blogged about a little tool that i wrote to analyze hangs in dumps, and i showed the following output, but didnt really get into the details of why the process was stuck in here...
____________________________________________________________________________________________________________________ GC RELATED INFORMATION ____________________________________________________________________________________________________________________ The following threads are GC threads: 18 19 The following threads are waiting for the GC to finish: 14 16 24 26 27 28 30 31 36 37 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 57 58 59 60 62 63 64 65 66 67 68 69 70 71 72 73 74 77 78 The GC was triggered by thread: 75 The GC is working on suspending threads to continue with garbage collection The following threads can't be suspended because preemptive GC is disabled: 23 25 33 34 35 38 56 61 The Finalizer (Thread 20) is not blocked
The issue the customer is running into is a hang during heavy load. The only way to get out of the hang is to recycle the process (IISReset).
Debugging the issue:
I have seen this issue before on a few occations and although as you will see later it was currently fixed in the framework, but in my earlier cases on this we ended up not needing a fix since the customers I worked with made code changes that made it such that they no longer were subject to the issue.
So what is going on here?
Thread 75 triggered a garbage collection by making an allocation that would have made Gen 0 go over its allocation budget.
0:075> kb 2000 ChildEBP RetAddr Args to Child 1124de6c 7c822124 77e6bad8 000002e8 00000000 ntdll!KiFastSystemCallRet 1124de70 77e6bad8 000002e8 00000000 00000000 ntdll!NtWaitForSingleObject+0xc 1124dee0 79e718fd 000002e8 ffffffff 00000000 kernel32!WaitForSingleObjectEx+0xac 1124df24 79e718c6 000002e8 ffffffff 00000000 mscorwks!PEImage::LoadImage+0x199 1124df74 79e7187c ffffffff 00000000 00000000 mscorwks!CLREvent::WaitEx+0x117 1124df84 7a0d0d0f ffffffff 00000000 00000000 mscorwks!CLREvent::Wait+0x17 1124dfa8 7a0d5289 ffffffff 000d4558 106cb970 mscorwks!SVR::gc_heap::wait_for_gc_done+0x99 1124dfcc 7a0d5fa2 00000000 00000000 00000020 mscorwks!SVR::GCHeap::GarbageCollectGeneration+0x267 1124e058 7a0d691f 106cb970 00000020 00000000 mscorwks!SVR::gc_heap::try_allocate_more_space+0x1c0 1124e078 7a0d7ecc 106cb970 00000020 00000000 mscorwks!SVR::gc_heap::allocate_more_space+0x2f 1124e098 7a08bd32 106cb970 00000020 00000002 mscorwks!SVR::GCHeap::Alloc+0x74 1124e0b4 79e7b43e 00000020 00000000 00080000 mscorwks!Alloc+0x60 1124e180 79e8f41c 79157f42 1124e230 00000001 mscorwks!AllocateArrayEx+0x1d1 1124e244 7937f5c2 064f60b8 064f60b8 064f60b8 mscorwks!JIT_NewArr1+0x167 1124e27c 5088a509 00000000 00000000 00000000 mscorlib_ni!System.Reflection.RuntimeMethodInfo.GetParameters()+0x4a ...
Nothing strange there, allocation and garbage collection happens all the time... however a GC is usually extremely fast and in this case we have 45 threads waiting for the GC to finish...
0:014> kb ChildEBP RetAddr Args to Child 01a0fc74 7c822124 77e6bad8 000002e4 00000000 ntdll!KiFastSystemCallRet 01a0fc78 77e6bad8 000002e4 00000000 00000000 ntdll!NtWaitForSingleObject+0xc 01a0fce8 79e718fd 000002e4 ffffffff 00000000 kernel32!WaitForSingleObjectEx+0xac 01a0fd2c 79e718c6 000002e4 ffffffff 00000000 mscorwks!PEImage::LoadImage+0x199 01a0fd7c 79e7187c ffffffff 00000000 00000000 mscorwks!CLREvent::WaitEx+0x117 01a0fd8c 7a0851cb ffffffff 00000000 00000000 mscorwks!CLREvent::Wait+0x17 01a0fd9c 79f40e96 00000000 13d43bb8 0e874858 mscorwks!SVR::GCHeap::WaitUntilGCComplete+0x32 01a0fdd8 79e7385b 00000001 7a0e607b 00000001 mscorwks!Thread::RareDisablePreemptiveGC+0x1a1 01a0fde0 7a0e607b 00000001 13d43838 00000102 mscorwks!GCHolder<1,0,0>::GCHolder<1,0,0>+0x2d 01a0fe2c 7a0e673e 00000000 7a393704 7a114dea mscorwks!Thread::OnThreadTerminate+0x53 01a0fe38 7a114dea 0e874858 13d4385c 00000000 mscorwks!DestroyThread+0x43 01a0fe94 79f79c4f 00000000 00000000 00000000 mscorwks!ThreadpoolMgr::CompletionPortThreadStart+0x33d 01a0ffb8 77e6608b 000c4c00 00000000 00000000 mscorwks!ThreadpoolMgr::intermediateThreadProc+0x49 01a0ffec 00000000 79f79c09 000c4c00 00000000 kernel32!BaseThreadStart+0x34
so for some reason the GC appears to be taking some time...
There are 2 GC threads (one per logical processor), thread 18 and 19 in this case... Thread 19 is simply waiting for work
0:019> kb ChildEBP RetAddr Args to Child 01e2fd68 7c822124 77e6bad8 000002d0 00000000 ntdll!KiFastSystemCallRet 01e2fd6c 77e6bad8 000002d0 00000000 00000000 ntdll!NtWaitForSingleObject+0xc 01e2fddc 79e718fd 000002d0 ffffffff 00000000 kernel32!WaitForSingleObjectEx+0xac 01e2fe20 79e718c6 000002d0 ffffffff 00000000 mscorwks!PEImage::LoadImage+0x199 01e2fe70 79e7187c ffffffff 00000000 00000000 mscorwks!CLREvent::WaitEx+0x117 01e2fe80 7a0d8898 ffffffff 00000000 00000000 mscorwks!CLREvent::Wait+0x17 01e2fea8 7a0d8987 01e2ff00 77e60eb5 01e2fec8 mscorwks!SVR::gc_heap::gc_thread_function+0x58 01e2ffb8 77e6608b 000d5050 00000000 00000000 mscorwks!SVR::gc_heap::gc_thread_stub+0x9b 01e2ffec 00000000 7a0d88eb 000d5050 00000000 kernel32!BaseThreadStart+0x34
But interestingly enough thread 18 is waiting to suspend all managed threads in order to continue the GC (to avoid for anyone to allocate any more data while it is performing the GC)
0:018> kb ChildEBP RetAddr Args to Child 01defb10 7c821524 77e98ef4 00000f64 01defb34 ntdll!KiFastSystemCallRet 01defb14 77e98ef4 00000f64 01defb34 01defe04 ntdll!NtGetContextThread+0xc 01defb24 7a0de046 00000f64 01defb34 00010002 kernel32!GetThreadContext+0x11 01defe04 7a0defc1 00000f64 1069d328 13aa3858 mscorwks!EnsureThreadIsSuspended+0x3f 01defe4c 7a0e290a 00000000 00000000 13aa3874 mscorwks!Thread::SuspendThread+0xd0 01defe9c 7a086e76 00000000 13aa399c 00000000 mscorwks!Thread::SysSuspendForGC+0x5a6 01deff88 7a0d867b 00000001 00000000 000d4368 mscorwks!SVR::GCHeap::SuspendEE+0x16c 01deffa8 7a0d8987 00000000 13aa39ac 01deffec mscorwks!SVR::gc_heap::gc_thread_function+0x3b 01deffb8 77e6608b 000d4368 00000000 00000000 mscorwks!SVR::gc_heap::gc_thread_stub+0x9b 01deffec 00000000 7a0d88eb 000d4368 00000000 kernel32!BaseThreadStart+0x34
Normally suspending all managed threads happens in a matter of nanoseconds, and pretty much the only thing that could cause the process to be stuck while suspending is if some thread has disabled preemptive GC, i.e. told the GC that it is in a state where it can't be disturbed...
Yun Jin describes PreemptiveGC like this in one of his posts
Preemptive GC: also very important. In Rotor, this is m_fPreemptiveGCDisabled field of C++ Thread class. It indicates what GC mode the thread is in: "enabled" in the table means the thread is in preemptive mode where GC could preempt this thread at any time; "disabled" means the thread is in cooperative mode where GC has to wait the thread to give up its current work (the work is related to GC objects so it can't allow GC to move the objects around). When the thread is executing managed code (the current IP is in managed code), it is always in cooperative mode; when the thread is in Execution Engine (unmanaged code), EE code could choose to stay in either mode and could switch mode at any time; when a thread are outside of CLR (e.g, calling into native code using interop), it is always in preemptive mode.
In our case, as we can see from the output from my tool the following threads have preemptive GC disabled (which is very uncommon)
23 25 33 34 35 38 56 61
In the !threads output it looks like this (notice the PreEmptive GC column)
0:018> !threads ThreadCount: 59 UnstartedThread: 0 BackgroundThread: 59 PendingThread: 0 DeadThread: 0 Hosted Runtime: yes PreEmptive GC Alloc Lock ID OSID ThreadOBJ State GC Context Domain Count APT Exception 16 1 1008 000d0828 1808220 Enabled 06b79310:06b79db0 000f0728 1 MTA (Threadpool Worker) 20 2 1e7c 000d62d0 b220 Enabled 00000000:00000000 000cd190 0 MTA (Finalizer) 21 3 1d3c 000db120 1220 Enabled 00000000:00000000 000cd190 0 Ukn 22 4 1c38 000ed3c8 80a220 Enabled 00000000:00000000 000cd190 0 MTA (Threadpool Completion Port) 23 5 19ac 0013a520 180b222 Disabled 06c17754:06c18bd8 00158e70 2 MTA (Threadpool Worker) 24 6 1fd4 0014d1a0 b220 Enabled 00000000:00000000 000f0728 1 MTA 25 7 1568 0e81aae0 180b222 Disabled 02cea570:02cec4a8 00158e70 2 MTA (Threadpool Worker) 26 8 1ad0 0e82bd58 b220 Enabled 00000000:00000000 00158e70 0 MTA 27 9 1f04 0e829310 b220 Enabled 032b0cb8:032b2938 00158e70 0 MTA 28 a ffc 0e820d18 b220 Enabled 070173e4:070193c0 00158e70 1 MTA 14 b 1ee8 0e874858 1800220 Enabled 02b34750:02b34c78 000cd190 0 Ukn (Threadpool Worker) 30 d 1080 0e8aff40 b220 Enabled 00000000:00000000 0e87c310 0 MTA 31 e 11ec 0e8d1f28 8801220 Enabled 02b418e4:02b42c78 000cd190 0 MTA (Threadpool Completion Port) 33 f 1f1c 0e8e6420 180b222 Disabled 06bc1fa0:06bc3dcc 00158e70 2 MTA (Threadpool Worker) 34 10 1a6c 0e8e8b20 180b222 Disabled 02c449b8:02c45d74 00158e70 2 MTA (Threadpool Worker) 35 11 710 0e8ee550 180b222 Disabled 06cdde68:06cdea48 00158e70 2 MTA (Threadpool Worker) 36 c 1ae0 0e875228 180b220 Enabled 072bd88c:072bf508 000cd190 0 MTA (Threadpool Worker) 37 12 19c8 00147290 180b220 Enabled 06bb4b70:06bb5dcc 000f0728 1 MTA (Threadpool Worker) 38 13 18f0 0e8901d8 180b222 Disabled 02c49ce8:02c49d74 00158e70 2 MTA (Threadpool Worker) 39 14 19b8 0e8f7338 180b220 Enabled 073c6fbc:073c6fe8 00158e70 1 MTA (Threadpool Worker) 40 15 1308 0e8f8610 180b220 Enabled 02af10a4:02af10ac 00158e70 3 MTA (Threadpool Worker) 41 16 1f38 0e8f99e8 180b220 Enabled 070c7154:070c761c 000cd190 0 MTA (Threadpool Worker) 42 17 1be0 0e8facc0 180b220 Enabled 06ad3a74:06ad4cc8 000cd190 0 MTA (Threadpool Worker) 43 18 1efc 0e8fbf98 180b220 Enabled 0321b11c:0321c938 000cd190 0 MTA (Threadpool Worker) 44 19 1470 0e8fd160 1801220 Enabled 06ef6770:06ef7178 000cd190 0 MTA (Threadpool Worker) 45 1a 150 0e8fe438 180b220 Enabled 02a15a34:02a161bc 000cd190 0 MTA (Threadpool Worker) 46 1b 1b10 0e8ff710 180b220 Enabled 06b216b8:06b22d00 000f0728 1 MTA (Threadpool Worker) 47 1c 1c8c 10690ac0 180b220 Enabled 02be2430:02be3b00 000f0728 1 MTA (Threadpool Worker) 48 1d 16c8 10691ad8 180b220 Enabled 06b31520:06b3189c 000f0728 1 MTA (Threadpool Worker) 49 1e 1a5c 10692e20 180b220 Enabled 02ab3a48:02ab50ac 000f0728 1 MTA (Threadpool Worker) 50 1f 1908 10694168 180b220 Enabled 06b686fc:06b69db0 000f0728 1 MTA (Threadpool Worker) 51 20 284 106954d0 180b220 Enabled 0319d1e4:0319e900 000cd190 0 MTA (Threadpool Worker) 52 21 1d74 10696708 180b220 Enabled 06bac634:06baddcc 000f0728 1 MTA (Threadpool Worker) 53 22 58c 106979c8 180b220 Enabled 02c14784:02c15d58 000f0728 1 MTA (Threadpool Worker) 54 23 1860 10698d10 180b220 Enabled 072e1984:072e3508 00158e70 1 MTA (Threadpool Worker) 55 24 1c9c 1069ae00 180b220 Enabled 071a4784:071a5508 000cd190 0 MTA (Threadpool Worker) 56 25 15c4 1069d328 180b222 Disabled 06bc7174:06bc7dcc 00158e70 2 MTA (Threadpool Worker) 57 26 1968 106a09e0 180b220 Enabled 0321da48:0321e938 000cd190 0 MTA (Threadpool Worker) 58 27 1e10 106a3608 180b220 Enabled 070172c8:070173c0 000cd190 0 MTA (Threadpool Worker) 59 28 1a18 106a6418 180b220 Enabled 02ac036c:02ac10ac 000f0728 1 MTA (Threadpool Worker) 60 29 1d2c 106a9148 180b220 Enabled 06a9b5a4:06a9cc90 000cd190 0 MTA (Threadpool Worker) 61 2a 1b50 106ac0a8 180b222 Disabled 06cdaf94:06cdbda8 00158e70 2 MTA (Threadpool Worker) 62 2b 96c 106ae8d0 8801220 Enabled 02c00980:02c01d3c 000cd190 0 MTA (Threadpool Completion Port) 63 2c 1b18 106afa50 180b220 Enabled 0322214c:03222938 000cd190 0 MTA (Threadpool Worker) 64 2d 1d78 106b2d48 180b220 Enabled 06f26e78:06f26eb4 000cd190 0 MTA (Threadpool Worker) 65 2e 198c 106b5cb8 180b220 Enabled 06ed8418:06ed8c98 000cd190 0 MTA (Threadpool Worker) 66 2f 1fcc 106b8b28 180b220 Enabled 0728d400:0728d508 00158e70 1 MTA (Threadpool Worker) 67 30 1958 106bb998 180b220 Enabled 0304e7e8:03050420 000cd190 0 MTA (Threadpool Worker) 68 31 1b48 106beb58 180b220 Enabled 031a5b98:031a691c 000cd190 0 MTA (Threadpool Worker) 69 32 1d80 106c19c8 180b220 Enabled 06b13444:06b14d00 000f0728 1 MTA (Threadpool Worker) 70 33 17cc 106c4838 180b220 Enabled 0700d468:0700f3a4 000cd190 0 MTA (Threadpool Worker) 71 34 1424 106c8890 180b220 Enabled 06b6f8bc:06b6fdb0 000f0728 1 MTA (Threadpool Worker) 72 35 1e00 106bd870 180b220 Enabled 06b8dcbc:06b8ddb0 000f0728 1 MTA (Threadpool Worker) 73 36 1d08 106c95e0 180b220 Enabled 071a081c:071a1508 000cd190 0 MTA (Threadpool Worker) 74 37 1fdc 106ca680 180b220 Enabled 02c05810:02c05d3c 000cd190 0 MTA (Threadpool Worker) 75 38 1b8c 106cb930 180b220 Enabled 0335f888:0335f88c 00158e70 2 MTA (Threadpool Worker) 76 39 1928 106cc900 880b220 Enabled 072bc2f0:072bd508 000cd190 0 MTA (Threadpool Completion Port) 77 3a 1fd0 106d7270 8801220 Enabled 02c18ca4:02c19d58 000cd190 0 MTA (Threadpool Completion Port) 78 3b 1640 106d7908 180b220 Enabled 0294474c:02945a7c 000cd190 0 MTA (Threadpool Worker)
Why have these threads disabled preemptive GC, not allowing the GC to suspend them, and ultimately blocking our process?
All the blocked threads are sitting in this type of callstack... trying to enter a lock (JIT_MonTryEnter). Normally you would see a thread either owning the lock or waiting in an awarelock like in this post, but here it is just spinning trying to enter the lock...
0:056> kb 2000 ChildEBP RetAddr Args to Child 10b0f4d8 0eb65afc 06bc5448 06bc53f8 22a9c796 mscorwks!JIT_MonTryEnter+0xad WARNING: Frame IP not in any known module. Following frames may be wrong. 10b0f504 69918f30 10b0f560 06bc5448 06bc5448 0xeb65afc 00000000 00000000 00000000 00000000 00000000 System_Web_Services_ni+0x28f30
unfortunately the managed stack is not very helpful in telling us where the lock was taken
0:056> !clrstack OS Thread Id: 0x15c4 (56) ESP EIP 10b0f8f8 79e73eac [ContextTransitionFrame: 10b0f8f8] 10b0f948 79e73eac [GCFrame: 10b0f948] 10b0faa0 79e73eac [ComMethodFrame: 10b0faa0]
and if we run !syncblk we have no active sync blocks, so no help there when it comes to finding out who owns this lock...
0:056> !syncblk Index SyncBlock MonitorHeld Recursion Owning Thread Info SyncBlock Owner ----------------------------- Total 165 CCW 4 RCW 1 ComClassFactory 0 Free 8
seems like we are stuck between a rock and a hard place here...
I have shown the command !dumpstack before... it shows a mixture of a managed and native callstack but it is a raw stack, meaning that it will pretty much just display any addresses on the stack that happen to be pointing to code. This means that !dumpstack will not give a true stack trace (i.e. like !clrstack and kb where if functionA calls funcitonB we will see functionA directly below functionB in the stack) and anything shown by !dumpstack may or may not be correct. Anyways, with that little warning, here comes !dumpstack:)
0:056> !dumpstack OS Thread Id: 0x15c4 (56) Current frame: mscorwks!JIT_MonTryEnter+0xad ChildEBP RetAddr Caller,Callee 10b0f47c 638c7552 (MethodDesc 0x63a539a0 +0x62 System.Xml.Schema.XmlSchemaSet.Add(System.Xml.Schema.XmlSchemaSet)), calling mscorwks!JIT_MonTryEnter 10b0f480 793463bb (MethodDesc 0x7923bbc8 +0xdb System.Collections.Hashtable..ctor(Int32, Single)), calling mscorwks!JIT_Dbl2IntSSE2 10b0f498 638b36f6 (MethodDesc 0x63a505f0 +0xa6 System.Xml.Schema.SchemaInfo..ctor()), calling (MethodDesc 0x7923bbc8 +0 System.Collections.Hashtable..ctor(Int32, Single)) 10b0f49c 638b3719 (MethodDesc 0x63a505f0 +0xc9 System.Xml.Schema.SchemaInfo..ctor()), calling mscorwks!JIT_Writeable_Thunks_Buf 10b0f4d8 0eb65afc (MethodDesc 0xeb756f0 +0x5c CustomComponent.Tools.Web.Services.Extensions.ValidationExtension.ProcessMessage(System.Web.Services.Protocols.SoapMessage)), calling (MethodDesc 0x63a539a0 +0 System.Xml.Schema.XmlSchemaSet.Add(System.Xml.Schema.XmlSchemaSet)) 10b0f504 69918f30 (MethodDesc 0x699c4798 +0x3c System.Web.Services.Protocols.SoapMessage.RunExtensions(System.Web.Services.Protocols.SoapExtension[], Boolean)) 10b0f518 699227b3 (MethodDesc 0x699c5250 +0x2cb System.Web.Services.Protocols.SoapServerProtocol.Initialize()), calling (MethodDesc 0x699c4798 +0 System.Web.Services.Protocols.SoapMessage.RunExtensions(System.Web.Services.Protocols.SoapExtension[], Boolean)) ...
!dumpstack is a bit tricky to deal with, but in the stack above if we start from the bottom of the part of the stack that i've shown we have SoapServerProtocol.Initialize() which calls SoapMessage.RunExtensions...
this we can trust because !dumpstack is actually telling us that Initialize is calling RunExtensions in this case (assuming that we believe that Initialize was called).
RunExtensions doesn't tell us what it is calling but that is because it is calling into a custom component
CustomComponent.Tools.Web.Services.Extensions.ValidationExtension.ProcessMessage.
This is in turn calling into XmlSchemaSet.Add which is calling into JIT_MonTryEnter. So long story short... the real stacktrack should look like this
mscorwks!JIT_MonTryEnter+0xad
System.Xml.Schema.XmlSchemaSet.Add(System.Xml.Schema.XmlSchemaSet)
CustomComponent.Tools.Web.Services.Extensions.ValidationExtension.ProcessMessage(System.Web.Services.Protocols.SoapMessage)System.Web.Services.Protocols.SoapMessage.RunExtensions(System.Web.Services.Protocols.SoapExtension[], Boolean)
System.Web.Services.Protocols.SoapServerProtocol.Initialize()
...
!sos.clrstack just wasn't able to rebuild it because of the state it was in
Looking through the output from ~* e !clrstack I find one stack that is currently in XmlSchemaSet.Add and thus is probably the one holding the lock we are trying to enter here
OS Thread Id: 0x1308 (40) ESP EIP 104ce878 7c82ed54 [HelperMethodFrame: 104ce878] 104ce8d0 638b46bf System.Xml.Schema.SchemaNames..ctor(System.Xml.XmlNameTable) 104cebd4 638ca6fb System.Xml.Schema.XmlSchemaSet.GetSchemaNames(System.Xml.XmlNameTable) 104cebe0 638c96fc System.Xml.Schema.XmlSchemaSet.PreprocessSchema(System.Xml.Schema.XmlSchema ByRef, System.String) 104cebf8 638c86b7 System.Xml.Schema.XmlSchemaSet.Add(System.String, System.Xml.Schema.XmlSchema) 104cec04 638c7667 System.Xml.Schema.XmlSchemaSet.Add(System.Xml.Schema.XmlSchemaSet) 104cec60 0eb65afc CustomComponent.Tools.Web.Services.Extensions.ValidationExtension.ProcessMessage(System.Web.Services.Protocols.SoapMessage) 104cec8c 69918f30 System.Web.Services.Protocols.SoapMessage.RunExtensions(System.Web.Services.Protocols.SoapExtension[], Boolean) 104ceca4 699227b3 System.Web.Services.Protocols.SoapServerProtocol.Initialize() 104cece8 6990d904 System.Web.Services.Protocols.ServerProtocolFactory.Create(System.Type, System.Web.HttpContext, System.Web.HttpRequest, System.Web.HttpResponse, Boolean ByRef) 104ced28 699263ab System.Web.Services.Protocols.WebServiceHandlerFactory.CoreGetHandler(System.Type, System.Web.HttpContext, System.Web.HttpRequest, System.Web.HttpResponse) 104ced64 69926329 System.Web.Services.Protocols.WebServiceHandlerFactory.GetHandler(System.Web.HttpContext, System.String, System.String, System.String) 104ced88 65fc057c System.Web.HttpApplication.MapHttpHandler(System.Web.HttpContext, System.String, System.Web.VirtualPath, System.String, Boolean) 104cedcc 65fd58cd System.Web.HttpApplication+MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() 104ceddc 65fc1610 System.Web.HttpApplication.ExecuteStep(IExecutionStep, Boolean ByRef) 104cee1c 65fd32e0 System.Web.HttpApplication+ApplicationStepManager.ResumeSteps(System.Exception) 104cee6c 65fc0225 System.Web.HttpApplication.System.Web.IHttpAsyncHandler.BeginProcessRequest(System.Web.HttpContext, System.AsyncCallback, System.Object) 104cee88 65fc550b System.Web.HttpRuntime.ProcessRequestInternal(System.Web.HttpWorkerRequest) 104ceebc 65fc5212 System.Web.HttpRuntime.ProcessRequestNoDemand(System.Web.HttpWorkerRequest) 104ceec8 65fc3587 System.Web.Hosting.ISAPIRuntime.ProcessRequest(IntPtr, Int32) 104cf078 79f35ee8 [ContextTransitionFrame: 104cf078] 104cf0c8 79f35ee8 [GCFrame: 104cf0c8] 104cf220 79f35ee8 [ComMethodFrame: 104cf220] 0:040> kb ChildEBP RetAddr Args to Child 104ce5d4 7c822124 77e6bad8 000002e8 00000000 ntdll!KiFastSystemCallRet 104ce5d8 77e6bad8 000002e8 00000000 00000000 ntdll!NtWaitForSingleObject+0xc 104ce648 79e718fd 000002e8 ffffffff 00000000 kernel32!WaitForSingleObjectEx+0xac 104ce68c 79e718c6 000002e8 ffffffff 00000000 mscorwks!PEImage::LoadImage+0x199 104ce6dc 79e7187c ffffffff 00000000 00000000 mscorwks!CLREvent::WaitEx+0x117 104ce6ec 7a0d0d0f ffffffff 00000000 00000000 mscorwks!CLREvent::Wait+0x17 104ce710 7a0d5dfa ffffffff 00001037 000d4368 mscorwks!SVR::gc_heap::wait_for_gc_done+0x99 104ce788 7a0d691f 0e8f8650 00000014 00000000 mscorwks!SVR::gc_heap::try_allocate_more_space+0x17 104ce7a8 7a0d7ecc 0e8f8650 00000014 00000000 mscorwks!SVR::gc_heap::allocate_more_space+0x2f 104ce7c8 7a08bd32 0e8f8650 00000014 00000002 mscorwks!SVR::GCHeap::Alloc+0x74 104ce7e4 79e754ff 00000014 00000000 00080000 mscorwks!Alloc+0x60 104ce824 79e755c1 639f59e0 02382edc 02af0090 mscorwks!FastAllocateObject+0x38 104ce8c8 638b46bf 02af0e60 00000000 00000000 mscorwks!JIT_NewFast+0x9e 104ce8cc 02af0e60 00000000 00000000 00000000 System_Xml_ni+0x1146bf WARNING: Frame IP not in any known module. Following frames may be wrong. 104ce8d0 00000000 00000000 00000000 00000000 0x2af0e60
But unfortunately it can't give up this lock and finish what it is doing because it is waiting for the GC to finish, so effectively we are in a deadlock.
Thread 40 holds a lock but to release it it needs the GC to complete. The GC can't complete because it can't suspend thread 56 (and other threads) that have preemptive GC disabled. Thread 56 can't enable preemptive GC until it gets the lock owned by thread 40.
I should add, that the only times I have seen this it has been in the lock in XmlSchemaSet.Add, and under heavy load when multiple threads were trying to access the same XmlSchemaSet.
Solution:
So what can we do about this? Well, a hotfix just came out (KB946644) that will fix this problem so if you are running into this issue you can call into support and ask for that hotfix.
In my earlier cases, the customers have locked around the XmlSchemaSet in the custom component since according to the msdn documentation any instance members of XmlSchemaSet are not guaranteed to be threadsafe. This resolved the issue...
Laters,
Tess
ASP.NET ASP.NET Wiki Beta [Via: Scott Hanselman ] Sharepoint Dev Tip: The SharePoint University of...
Link Listing - February 10, 2008
Tess, is that a CLR hotfix or an XML hotfix? We are seeing the same root problem (gc thread cant suspend due to other threads w/ preemptive disabled waiting on crit secs owned by gc thread) but XML is not in the stacks at all.
this hotfix is specifically for this issue... you might be running into the gc/loaderlock deadlock described here though
there are two versions of it, 1. you load up mixed mode (c++) dlls and you either have managed code in dllmain or other managed entry points like static arrays etc. or 2. you are loading up a dll that references a strong named mixed mode dll and you block in the policy resolution, and the load is done using a native loading method like createobject (something that grabs the loaderlock)
in case #1 you have to recompile the dll with /noentry and follow the articles referenced in that post to make sure you have no entry points. in case #2 you should manually load the referenced dll using assembly.load in application_start for example
Sorry for the convoluted answer, the space is a little short in the comments:) I can write a post on case #2 soon if you think you are running into that, but hopefully this helps you in the meantime...
From looking at the stacks, I know 1 thread (#31) is in gc trying to suspend 2 other threads (#17,30) that are in preemptive disabled mode that are trying to enter crit secs owned by the gc thread (#31), hence the deadlock. (note: the non-gc threads waiting on the crit secs are in exception handlers at the time). This is happening in Microsoft CRM 3.0 (which is an asp.net 1.1 app). I am trying to better understand if this can be caused by the crm app code or if that hotfix was for the CLR itself. It looks like an EE crit sec, and my understanding of preemptive is that it can only be set from the CLR unmanaged code. So based on that I am guessing its a CLR bug, but I could be missing something. There is a US MS Case open, would love if you could take a look at the dump I uploaded 🙂 Waiting for TAMs to get thier acts together.
sounds interesting, the support engineer you are working with probably debugs as much as I 🙂 and is probably specializing in CRM, but send me the case number and support engineers name or email address via the contact me section and I'll talk to him or her to see if they need a second opinion...
Tess, we are having exactly the same issue as described in your article and are also working with MS support to resolve it. I'll send you the details and would appreciate if you could give the person we're working with your opinion as they don't seem to be able to find the hotfix you mentioned in the article and we're running out of options to try on our side. This is how the stack looks in our case:
2f8384b4 0a06f23b System_Xml!System.Xml.Schema.SchemaNames..ctor(System.Xml.XmlNameTable)+0x6d1
0a45e7fc 0a043bf0 System_Xml!System.Xml.Schema.XmlSchemaSet.GetSchemaNames(System.Xml.XmlNameTable)+0x43
0a45e7fc 0a043b98 System_Xml!System.Xml.Schema.XmlSchemaSet.PreprocessSchema(System.Xml.Schema.XmlSchema ByRef, System.String)+0x20
0a45e840 0a04f35a System_Xml!System.Xml.Schema.XmlSchemaSet.Add(System.String, System.Xml.Schema.XmlSchema)+0x28
0a45e840 09f6209a System_Xml!System.Xml.Schema.XmlSchemaSet.Add(System.Xml.Schema.XmlSchemaSet)+0x19a
As you can see it's almost the exact replica of what you have in the article.
Hi Paul,
The article doesn't seem to be public yet but the hotfix is available, if they search on the KB 946644, if they can't find it they can contact me.
Tess
Hi Tess,
It seems like our support person already talked to you and indeed, the hotfix for KB 946644 helped in our case (SP1 by itself didn't). I can't say that the situation is completely reolved though as we're till getting strange errors related to schema validation (102 errors out of 33k+ requests) that look like the following:
System.Xml.Schema.XmlSchemaValidationException: The 'Service' element.ThrowDeclNotFoundWarningOrError(Boolean declFound)
at System.Xml.Schema.XmlSchemaValidator.ValidateElement(String localName, String namespaceUri, XmlSchemaInfo schemaInfo, String xsiType, String xsiNil, String)
Surely enough, the 'Service' element IS declared and the same request is executed just fine right before and right after the one that fails. The element may be different in different errors, but it's always the same error and it's just few lines of code away from the place where we had problems with XMLSchemaSet.Add() call previously. Can it be possibly related to the hotfix? It appears that the schema object gets corrupted in some rare cases (internally), which causes it to loose information about some of the elements that are properly declared in the schema.
Also, we do see abnormal GC patterns that were not observed under 1.1 on the same box with the same application (last time tested three weeks ago). This includes large "% time in GC" (even without any external requests), number of Gen 0 to Gen 1 collections is close to 2:1 rather than to recommended 10:1, Gen 2 Heap size is significantly larger than Gen 0 or Gen 1, number of allocated bytes / sec and number of Gen 0 promoted bytes / sec is large even with zero users and no incoming requests and so on. I sent all the details with graphs and correlations to Michael Noto, who is handling our case (SRX080xxxxx0861).
This difference between 1.1 and 2.0 appears to be somewhat similar to what was reported long time ago by "philippe" in one of the comments to your High CPU in GC article (). Any insight as to what might be causing it? Thanks much.
Paul.
Hi Paul,
off the top of my head I can't say that I've seen this before, and I am currently on vacation so I don't have access to my usual resources.
Based on what I know about the hotfix I don't think it is related though as the hotfix deals with blocking the thread for GCs but of course you can never be sure if it causes sideeffects.
You might want to set up a breakpoint and dump on the first occurrence of this System.Xml.Schema.XmlSchemaValidationException so that you can examine the structures, but I would suggest that it is set up in such a way that the breakpoint is disabled after it's been hit once if they are very frequent.
I very frequently get emails like the one I got this morning: "Tess, It sounds like the hotfix for kb946644
I tried to open a support case to get hotfix KB 946644, and was told that this hotfix has been discontinued. Do you know why this would be the case, or if the problem is being addressed in some other way in a future hotfix?
It would be nice to try the hotfix, at least for experimental purposes, to see whether the problem it addresses really is the cause of our symptoms. Is there any other way for us to get a copy?
I had a look and I can't see that it has been discontinued, from the looks of it it will also be included in SP1. I think the confusion might be in that the kb is not released yet, but the hotfix still seems to be there. Unfortunately I can't post the hotfix here, but if you contact your support specialist again they can contact me if they want.
Hi Tess,
Our team has faced with similar deadlock recently. After several weeks of head's pain we finally found the reason. As far as I didn't find any similar information, I want to share it. Possibly it will help somebody and safe time & nerves.
It was reproduced on .NET framework v4.0.
Information about threads prove that we faced with similar situation.
0:000> !threads
ThreadCount: 21
UnstartedThread: 0
BackgroundThread: 21
PendingThread: 0
DeadThread: 0
Hosted Runtime: no
PreEmptive GC Alloc Lock ID OSID ThreadOBJ State GC Context Domain Count APT Exception
...
11 6 d94 0fa8f048 b220 Enabled 6ef8e9c8:6ef8e9c8 005c7b40 0 MTA (GC) System.ExecutionEngineException (03551120)
...
19 11 13d0 0f58ca78 b220 Disabled 6ef8d70c:6ef8e9b0 005c7b40 2 MTA
...
0:011> !threadpool
CPU utilization: 81%
Worker Thread: Total: 9 Running: 1 Idle: 3 MaxLimit: 2047 MinLimit: 4
Work Request in Queue: 1
AsyncTimerCallbackCompletion TimerInfo@0b7e9db8
--------------------------------------
Number of Timers: 1
--------------------------------------
Completion Port Thread:Total: 0 Free: 0 MaxFree: 8 CurrentLimit: 0 MaxLimit: 1000 MinLimit: 4
Dumps of threads #11 and #19 shows that they use the same instance of one object 3c762ae8. However it is “impossible” because this object is born and die in thread #19. And there is no ways how execution can be split on two threads. Except one. The hole is event that is raised inside thread. Framework send execution of handler to another thread.
As result thread #19 raise event. Then handler in thread #11 do his work, allocate memory, but memory is fragmented and GC start working in this thread. GC try to stop all threads but can’t do it because thread #19 is already blocked by #11.
In our case tablet was quite simple – we replaced event by delegate and hole was closed.
0:011> !dumpstack
OS Thread Id: 0xd94 (11)
Current frame: clr!MethodDesc::GetSig+0x3b
ChildEBP RetAddr Caller,Callee
...
2471efd8 73e0370b clr!Thread::StackWalkFrames+0xc1, calling clr!__security_check_cookie
2471efe0 73f69583 clr!MethodDesc::ReturnsObject+0x24, calling clr!MetaSig::MetaSig
2471f058 3c923948 (MethodDesc 3c762ae8 +0x158 SomeNamespace.SomeAlgoObject`1[[System.Double, mscorlib]].DoSmth(...)), calling 3c8ba030
2471f080 73f6cbf6 clr!Thread::HandledJITCase+0xc4, calling clr!Thread::StackWalkFramesEx
...
0:019> !dumpstack
OS Thread Id: 0x13d0 (19)
Current frame: 3c8ba034
ChildEBP RetAddr Caller,Callee
2810e4c4 3c923948 (MethodDesc 3c762ae8 +0x158 SomeNamespace.SomeAlgoObject`1[[System.Double, mscorlib]].DoSmth(...), calling 3c8ba030
2810e53c 3c923695 (MethodDesc 3c762aa8 +0x65 SomeNamespace.SomeAlgoObject`1[[System.Double, mscorlib]].DoWork(...)), calling (MethodDesc 3c762ae8 +0 SomeNamespace.SomeAlgoObject`1[[System.Double, mscorlib]].DoSmth(...))
... | https://blogs.msdn.microsoft.com/tess/2008/02/11/hang-caused-by-gc-xml-deadlock/ | CC-MAIN-2017-30 | refinedweb | 4,855 | 54.12 |
This chapter is part of the Mix and OTP guide and it depends on previous chapters in this guide. For more information, read the introduction guide or check out the chapter index in the sidebar.
In the previous chapter, we used agents to represent our buckets. In the introduction to mix,, we would need to convert the bucket name (often received from an external client) to atoms, and we should never convert user input to atoms. This is because atoms are not garbage collected. Once an atom is created, it is never reclaimed. Generating atoms from user input would mean the user can inject enough different names to exhaust our system memory!
In practice, it is more likely you will reach the Erlang VM limit for the maximum number of atoms before you run out of memory, which will bring your system down regardless.
Instead of abusing the. Because our registry needs to be able to receive and handle ad-hoc messages from the system, the
Agent API is not enough.
We will use a GenServer to create a registry process that can monitor the bucket processes. GenServer provides industrial strength functionality for building servers in both Elixir and OTP.
Please read the GenServer module documentation for an overview if you haven’t yet. Once you do so, we are ready to proceed.
A GenServer is a process that invokes a limited set of functions under specific conditions. When we used an
Agent, we would keep both the client code and the server code side by side, like this:
def put(bucket, key, value) do Agent.update(bucket, &Map.put(&1, key, value)) end
Let’s break that code apart a bit:
def put(bucket, key, value) do # Here is the client code Agent.update(bucket, fn state -> # Here is the server code Map.put(state, key, value) end) # Back to the client code end
In the code above, we have a process, which we call “the client” sending a request to an agent, “the server”. The request contains an anonymous function, which must be executed by the server.
In a GenServer, the code above would be two separate functions, roughly like this:
def put(bucket, key, value) do # Send the server a :put "instruction" GenServer.call(bucket, {:put, key, value}) end # Server callback def handle_call({:put, key, value}, from, state) do {:noreply, Map.put(state, key, value)} end
There is quite a bit more ceremony in the GenServer code but, as we will see, it brings some benefits too.
For now, we will write only the server callbacks for our bucket registering logic, without providing a proper API, which we will do later.
Create a new file at
lib/kv/registry.ex with the following contents:
defmodule KV.Registry do use GenServer ## Missing Client API - will add this later ## Defining GenServer Callbacks @impl true def init(:ok) do {:ok, %{}} end @impl true def handle_call({:lookup, name}, _from, names) do {:reply, Map.fetch(names, name), names} end @impl true def handle_cast({:create, name}, names) do if Map.has_key?(names, name) do {:noreply, names} else {:ok, bucket} = KV.Bucket.start_link([]) {:noreply, Map.put(names, name, bucket)} end end end
There are two types of requests you can send to a GenServer: calls and casts. Calls are synchronous and the server must send a response back to such requests. While the server computes the response, the client is waiting. Casts are asynchronous: the server won’t send a response back and therefore the client won’t wait for one. Both requests are messages sent to the server, and will be handled in sequence. In the above implementation, we pattern-match on the
:create messages, to be handled as cast, and on the
:lookup messages, to be handled as call.
In order to invoke the callbacks above, we need to go through the corresponding
GenServer functions. Let’s start a registry, create a named bucket, and then look it up:
iex> {:ok, registry} = GenServer.start_link(KV.Registry, :ok) {:ok, #PID<0.136.0>} iex> GenServer.cast(registry, {:create, "shopping"}) :ok iex> {:ok, bk} = GenServer.call(registry, {:lookup, "shopping"}) {:ok, #PID<0.174.0>}
Our
KV.Registry process received a cast with
{:create, "shopping"} and a call with
{:lookup, "shopping"}, in this sequence.
GenServer.cast will immediately return, as soon as the message is sent to the
registry. The
GenServer.call on the other hand, is where we would be waiting for an answer, provided by the above
KV.Registry.handle_call callback.
You may also have noticed that we have added
@impl true before each callback. The
@impl true informs the compiler that our intention for the subsequent function definition is to define a callback. If by any chance we make a mistake in the function name or in the number of arguments, like we define a
handle_call/2, the compiler would warn us there isn’t any
handle_call/2 to define, and would give us the complete list of known callbacks for the
GenServer module.
This is all good and well, but we still want to offer our users an API that allows us to hide our implementation details.
A GenServer is implemented in two parts: the client API and the server callbacks. You can either combine both parts into a single module or you can separate them into a client module and a server module. The client is any process that invokes the client function. The server is always the process identifier or process name that we will explicitly pass as argument to the client API. Here we’ll use a single module for both the server callbacks and the client API.
Edit the file at
lib/kv/registry.ex, filling in the blanks for the client API:
## with the given `name` in `server`. """ def create(server, name) do GenServer.cast(server, {:create, name}) end
The first function is
start_link/1, which starts a new GenServer passing a list of options.
start_link/1 calls out to
GenServer.start_link/3, which takes to
GenServer.start_link/3
The next two functions,
lookup/2 and
create/2, are responsible for sending these requests to the server. In this case, we have used
{:lookup, name} and
{:create, name} respectively. Requests are often specified as tuples, like this, in order to provide more than one “argument” in that first argument slot. It’s common to specify the action being requested as the first element of a tuple, and arguments for that action in the remaining elements.. Let’s recap.
The first is the
init/1 callback, that receives the second argument given to
GenServer.start_link/3 and returns
{:ok, state}, where state is a new map. We can already notice how the
GenServer API makes the client/server segregation more apparent.
start_link/3 happens in the client, while
init/1 is the respective callback that runs on the server.
For
call/2 requests, we implement a
handle_call/3 callback that receives the
request, the process from which we received the request (
_from), and the current server state (
names). The
handle_call/3 callback returns a tuple in the format
{:reply, reply, new_state}. The first element of the tuple,
:reply, indicates that.
There are other tuple formats both
handle_call/3 and
handle_cast/2 callbacks may return. There are also other callbacks like
terminate/2 and
code_change/3 that we could implement. You are welcome to explore the full GenServer documentation to learn more about those.
For now, let’s write some tests to guarantee our GenServer works as expected.
Testing a GenServer is not much different from testing an agent. We will spawn the server on a setup callback and use it throughout our tests. Create a file at
test/kv/registry_test.exs with the following:
defmodule KV.RegistryTest do use ExUnit.Case, async: true setup do registry = start_supervised!(KV.Registry) %{registry: registry} end test "spawns buckets", %{registry: registry} do assert KV.Registry.lookup(registry, "shopping") == :error KV.Registry.create(registry, "shopping") assert {:ok, bucket} = KV.Registry.lookup(registry, "shopping") KV.Bucket.put(bucket, "milk", 1) assert KV.Bucket.get(bucket, "milk") == 1 end end
Our test case first asserts there’s no buckets in our registry, creates a named bucket, looks it up, and asserts it behaves as a bucket.
There is one important difference between the
setup block we wrote for
KV.Registry and the one we wrote for
KV.Bucket. Instead of starting the registry by hand by calling
KV.Registry.start_link/1, we instead called the
start_supervised!/2 function, passing the
KV.Registry module.
The
start_supervised! function was injected into our test module by
use ExUnit.Case. It does the job of starting the
KV.Registry process, by calling its
start_link/1 function. The advantage of using
start_supervised! is that ExUnit will guarantee that the registry process will be shutdown before the next test starts. In other words, it helps guarantee that the state of one test is not going to interfere with the next one in case they depend on shared resources.
When starting processes during your tests, we should always prefer to use
start_supervised!. We recommend you to change the
setup block in
bucket_test.exs to use
start_supervised! too.
Run the tests and they should all pass!
Everything we have done so far could have been implemented with an
Agent. In this section, we will see one of many things that we can achieve with a GenServer that is not possible with an Agent.
Let’s start with a test that describes how we want the registry to behave if a bucket stops or crashes:
test "removes buckets on exit", %{registry: registry} do KV.Registry.create(registry, "shopping") {:ok, bucket} = KV.Registry.lookup(registry, "shopping") Agent.stop(bucket) assert KV.Registry.lookup(registry, "shopping") == :error end
The test above will fail on the last assertion as the bucket name remains in the registry even after we stop the bucket process.
In order to fix this bug, we need the registry to monitor every bucket it spawns. Once we set up a monitor, the registry will receive a notification every time a bucket process exits, allowing us to clean the registry up.
Let’s first play with monitors by starting a new console with
iex -S mix:
iex> {:ok, pid} = KV.Bucket.start_link([]) {:ok, #PID<0.66.0>} iex> Process.monitor(pid) #Reference<0.0.0.551> iex> Agent.stop(pid) :ok iex> flush() {:DOWN, #Reference<0.0.0.551>, :process, #PID<0.66.0>, :normal}
Note
Process.monitor(pid) returns a unique reference that allows us to match upcoming messages to that monitoring reference. After we stop the agent, we can
flush/0 all messages and notice a
:DOWN message arrived, with the exact reference returned by
monitor, notifying that the bucket process exited with reason
:normal.
Let’s reimplement the server callbacks to fix the bug and make the test pass. First, we will modify the GenServer state to two dictionaries: one that contains
name -> pid and another that holds
ref -> name. Then we need to monitor the buckets on
handle_cast/2 as well as implement a
handle_info/2 callback to handle the monitoring messages. The full server callbacks implementation is shown below:
## Server callbacks @impl true def init(:ok) do names = %{} refs = %{} {:ok, {names, refs}} end @impl true def handle_call({:lookup, name}, _from, state) do {names, _} = state {:reply, Map.fetch(names, name), state} end @impl true def handle_cast({:create, name}, {names, refs}) do if Map.has_key?(names, name) do {:noreply, {names, refs}} else {:ok, bucket} = KV.Bucket.start_link([]) ref = Process.monitor(bucket) refs = Map.put(refs, ref, name) names = Map.put(names, name, bucket) {:noreply, {names, refs}} end end @impl true def handle_info({:DOWN, ref, :process, _pid, _reason}, {names, refs}) do {name, refs} = Map.pop(refs, ref) names = Map.delete(names, name) {:noreply, {names, refs}} end @impl true def handle_info(_msg, state) do {:noreply, state} end
Observe that we were able to considerably change the server implementation without changing any of the client API. That’s one of the benefits of explicitly segregating the server and the client.
Finally, different from the other callbacks, we have defined a “catch-all” clause for
handle_info/2 that discards any unknown message. To understand why, let’s move on to the next section.
call,
cast, we have a tiny GenServer cheat sheet.
We have previously learned about links in the Process chapter. Now, with the registry complete, you may be wondering: when should we use monitors and when should we use links?
Links are bi-directional. If you link two processes and one of them crashes, the other side will crash too (unless it is trapping exits). A monitor is uni-directional: only the monitoring process will receive notifications about the monitored one. In other words: use links when you want linked crashes, and monitors when you just want to be informed of crashes, exits, and so on.
Returning to our
handle_cast/2 implementation, you can see the registry is both linking and monitoring the buckets:
{:ok, bucket} = KV.Bucket.start_link([]) ref = Process.monitor(bucket)
This is a bad idea, as we don’t want the registry to crash when a bucket crashes. The proper fix is to actually not link the bucket to the registry. Instead, we will link each bucket to a special type of process called Supervisors, which are explicitly designed to handle failures and crashes. We will learn more about them in the next chapter.
© 2012 Plataformatec
Licensed under the Apache License, Version 2.0. | https://docs.w3cub.com/elixir~1.9/mix-otp/genserver/ | CC-MAIN-2020-45 | refinedweb | 2,246 | 66.03 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.