Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
How to create a custom import function in openerp?
I want to create a custom page to let user send the file to server.
After receiving the file, OpenERP will create records in the database based on the values in the file.
what should I do?
If you want an import function, first be clear about which file extension is used for the data, e.g. .csv or .xls.
Either way, you have to create a function and read the data from the .csv or .xls file.
import base64
import tempfile
import xlrd
from xlrd import open_workbook

def import_data(self, cr, uid, ids, context=None):
    temp_path = tempfile.gettempdir()
    test_obj = self.pool.get('test.test')
    # xls_file is the base64-encoded file content uploaded by the user
    xls_data = base64.decodestring(xls_file)
    # write the decoded data to a temporary .xls file
    fp = open(temp_path + '/xsl_file.xls', 'wb+')
    fp.write(xls_data)
    fp.close()
    # read it back with xlrd
    wb = open_workbook(temp_path + '/xsl_file.xls')
    for sheet in wb.sheets():
        for rownum in range(sheet.nrows):
            if rownum == 0:
                header_list = sheet.row_values(rownum)
            # .....................
            # .....................
Hope this helps; I can help you with that. I have created custom import modules for bank statement import, BOM import, etc. The CSV import is already there, but to import, for example, a product must already be available in the DB, and there is no option to create products on the fly during the import if they don't exist. So custom import functions are useful. On the front end, the user just needs to upload a file; the rest of the import can be done according to our needs by reading data from it. Different file formats like CSV, XLS, QIF etc. can also be handled. Contact me: akhilpsivan@outlook.com, because it's difficult to paste a few snippets of code and make you understand everything.
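For the reading step itself, here is a minimal, self-contained sketch of parsing CSV content in plain Python (the sample data and field names are made up for illustration; hooking the rows up to the ORM's create() method is left out):

```python
import csv
import io

# Hypothetical CSV payload, e.g. what you would get after base64-decoding the upload
data = "name,price\nWidget,10\nGadget,20\n"

# DictReader maps the header row onto each data row
rows = list(csv.DictReader(io.StringIO(data)))

# Each dict could then be passed to something like product_obj.create(cr, uid, vals)
for row in rows:
    print(row['name'], row['price'])
```

The same loop shape works for xlrd rows; only the parsing call changes.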
(Analysis by Jonathan Paulson - jonathanpaulson@gmail.com)
If the cows didn't have to slow down, after $T$ seconds cow $i$ will be at position $x_i + T*s_i$. If a faster cow started behind a slower cow, but the faster cow ended up ahead, that means the faster cow will join the slower cow's group (or a group in the middle).
So we want to count the number of cows that don't "cross" any cows ahead of them; this is the number of cows who won't join another group (and hence will start their own group).
The most obvious way is to go through each cow and check whether any of the cows ahead of them are going to end up behind them. But this is too slow: there are O(N^2) pairs of cows, and N = 10^5, so O(N^2) ~ 10^10, and computers only do about 10^9 operations per second.
Luckily, there is a faster way: start from the back, and keep track of the minimum ending position seen so far as you go. This only takes about N operations, which is very fast.
Here is my code for the fast approach:
#include <cstdio>
#include <vector>
#include <algorithm>
#define PB push_back
using namespace std;
typedef long long ll;
int main() {
  ll n, t;
  scanf("%lld %lld", &n, &t);
  vector<ll> S;
  for(ll i=0; i<n; i++) {
    ll x, s;
    scanf("%lld %lld\n", &x, &s);
    S.PB(x + s*t);
  }
  ll ans = 1;
  ll slow = S[n-1];
  for(ll i=n-1; i>=0; i--) {
    if(S[i] < slow) { ans++; }
    slow = min(slow, S[i]);
  }
  printf("%lld\n", ans);
}
Feature #17265 (open)
Add `Bool` module
Description
1-line Summary:
rbs would benefit from the existence of a common ancestor Bool for TrueClass and FalseClass.
Detail:
Matz: I am aware you rejected a similar request, but could we revisit this in light of RBS?
One use case was for an easy way to check for true or false values, instead of simply for truthiness (e.g. for data transfer, strict argument checking, testing, etc.)
I believe there's a new use case:
RBS
In RBS, the most used types like String and Integer have types for "string-like" and "integer-like" objects: string and integer (all lowercase).
For example, the signature for Integer#>> is:
def >>: (int) -> Integer
It accepts an Integer or an object responding to to_int (summarized by int) and returns an Integer (and never another class of object, responding to to_int or not).
There is a similar idea with boolean values, where a method may accept any object and will use its truthiness, while returning true | false. For example, one of the interfaces for Enumerable#all? should look like:
def all?: () { (Elem) -> bool } -> true | false
The user-supplied block can return any value, and its truthiness (anything other than nil or false) will be used to determine the result of all?. That result will be true | false, and no other value.
If RBS is to be popular, there will be many signatures for such predicates (in builtin Ruby, stdlib, any gems, applications, etc.). I feel the best option would be Bool, if this would be reflected in Ruby itself.
Proposal: a new global module called Bool, without any method or constant, included in TrueClass and FalseClass.
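For illustration, the module in this proposal could be sketched in plain Ruby like so (hypothetical code, not an actual patch):

```ruby
# An empty marker module, included in the two boolean classes
module Bool; end

class TrueClass; include Bool; end
class FalseClass; include Bool; end

p true.is_a?(Bool)   # => true
p false.is_a?(Bool)  # => true
p nil.is_a?(Bool)    # => false
```

With that in place, Bool in an RBS signature would mean exactly the two values true and false.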
The following reasons for rejection were given at the time:
many gems and libraries had already introduced Boolean class. I don't want to break them.
I looked and found the bool gem that defines a Bool module. My proposal is compatible. In any case, this gem looks abandoned: the author Aslak Hellesøy doesn't have the code on GitHub, the gem has had 7000 downloads in the past 6 years, and it has no public reverse dependency. It also fails to install on my machine.
I am not aware of incompatibilities.
true and false are the only representatives of true-false values. In Ruby, nil and false are falsy values, and everything else is a true value. There's no meaning for having a superclass of TrueClass and FalseClass as Boolean.
The proposal is exactly to be able to easily write about this duality: Bool as having only true and false as members, and every Ruby object as being implicitly convertible to truthy or falsy (bool in RBS).
Discussion in RBS:
Previous feature requests for Boolean:
Updated by marcandre (Marc-Andre Lafortune) 7 months ago
- Project changed from CommonRuby to Ruby master
Updated by mame (Yusuke Endoh) 7 months ago
Hi, I'd like to add the background.
Currently, RBS provides the bool type as an alias of the top type (a union type of all types). The rationale is that any type is actually accepted in the context of the condition expression of if and while. Some methods that accept a predicate block, such as Enumerable#any? and #select, also accept any type as the return value of the block.
However, (almost) all methods that end with ? return true | false. For example, if we write bool as the return type of Array#empty?, it means that the method may return any type, which is a bit less precise.
true | false is a bit redundant, so marcandre (Marc-Andre Lafortune) wants to write Bool for it. But in RBS, a capitalized type corresponds to a Ruby class or module. So, to make the story simple, he is proposing adding a Bool module on the Ruby side instead of in RBS.
Personally, I'm unsure if it is good or not to change Ruby for this. If his proposal is accepted, the type of Enumerable#any? will be:
def any? : { (Elem) -> bool } -> Bool
This looks cryptic to me. I think that the current statement (following) is good enough.
def any? : { (Elem) -> bool } -> true | false
BTW, soutaro (Soutaro Matsumoto) (the original author of RBS) is now considering the redefinition of bool as an alias of true | false. Based on soutaro's idea, the type will be:
def any? : { (Elem) -> top } -> bool
In terms of documentation, it loses the information that the return value of the block is considered as a condition, but I'm okay for it, too.
Updated by shyouhei (Shyouhei Urabe) 7 months ago
Bool and bool having different semantics sounds problematic to me. I'm not against this proposed Bool, but we need to rename bool then.
BTW e.g. class String; include Bool; end shall be prohibited somehow. Maybe a parent class is better?
Updated by marcandre (Marc-Andre Lafortune) 7 months ago
shyouhei (Shyouhei Urabe) wrote in #note-3:
Bool and bool to have different semantics sounds problematic to me.
As I stated, String and string have different semantics. Is that also problematic?
Not against this proposed Bool, but we need to rename bool then.
That would defeat the whole purpose.
BTW e.g. class String; include Bool; end shall be prohibited somehow. Maybe a parent class is better?
A parent class would work too, but one might still write class Foo < Bool...
Updated by Eregon (Benoit Daloze) 7 months ago
So Bool would be the "strict" variant, only true | false, and bool is anything convertible to a boolean, so any object, right?
I don't like the name Bool. If anything, I think it should be properly named Boolean.
I think adding Boolean on RBS' side (and not in Ruby) is not so bad.
Updated by matz (Yukihiro Matsumoto) 7 months ago
Could you clarify the current proposal?
- bool / Bool / Boolean
- RBS side only / Ruby module as well
I am OK if RBS side only, no Ruby module.
Matz.
Updated by shevegen (Robert A. Heiler) 7 months ago
I tried to make a shorter summary of my thoughts here, so just three points - hopefully that makes it easier to read:
(1) Most of the discussion, I think, happened at - if I then understand the discussion correctly then it would mean that ruby would have to add "module Bool" or "module Boolean".
(2) One slight problem I see with that is that a use case originating (mostly) from RBS, even if it makes sense from the point of view of RBS, would potentially effect change in MRI ruby. I don't have a strong opposing opinion per se, but I think it would be better if the use case would originate from MRI directly, rather than the other way around. See also headius' comment in another issue about other implementations potentially affecting MRI via tests/specifications, without prior discussions. I am not involved in test/specs but I think I understood what headius meant here. This is one reason why I think it should be considered carefully whether change is really necessary in this regard. Keep also in mind that when a "module Bool / Boolean" exists in ruby, new users may ask how this should then be used, and it may be a bit complicated if the explanation is "because RBS uses it", even more so if these users may not need or use RBS (not everyone will use types; some may but others may not).
(3) I know way too little about the internals (admittedly I am not clever enough for high level C, and I am not even kidding here), but if the use case is primarily originating from RBS itself, could there not be another, indirect solution? For example, rather than requiring a change in MRI, could there not be some kind of meta-VM instead, that could be targeted? A bit like rubocop too, perhaps, for RBS? That way people could set it up for their own projects, adjust it as needed, and "simulate" as if a boolean "type" were to exist, without MRI needing to add a module, sort of where you just were to "simulate" that a boolean value exists. Again, I am not sure if this makes any sense what I write, but perhaps it would be better to wait some time, see how RBS shapes up, how the use cases may change, and then re-evaluate in say, two years or so. There are already quite a lot of changes if we look at the ruby-jit, ractor and so forth - it may be more clear how RBS may have to change (or effect change) in a little while.
Updated by Dan0042 (Daniel DeLorme) 7 months ago
mame (Yusuke Endoh) wrote in #note-2:
BTW, soutaro (Soutaro Matsumoto) (the original author of RBS) is now thinking the redefinition of bool as an alias to true | false.
I think that's the better choice. Having bool as an alias to top is quite confusing. If we want to express that a method returns a truthy/falsy value, maybe conditional or cond would be a more meaningful alias for top.
Updated by marcandre (Marc-Andre Lafortune) 6 months ago
matz (Yukihiro Matsumoto) wrote in #note-6:
Could you clarify the current proposal?
- bool / Bool / Boolean
- RBS side only / Ruby module as well
I am OK if RBS side only, no Ruby module.
Matz.
My preference is:
- Boolean, second choice Bool
- Ruby side also, or if deemed not preferable then RBS only (but what if there's a user module/class Boolean?)
- if Ruby side, then base Class, second choice then Module.
The Ruby side has advantages beyond RBS, especially for communication with other typed systems / data interchange.
I hope that, in RBS, String is strict and str is relaxed. Same for Boolean (strict, true | false), while bool is relaxed (any object).
Updated by matz (Yukihiro Matsumoto) 6 months ago
- Status changed from Open to Feedback
The RBS side is up to soutaro (Soutaro Matsumoto) etc. I am still against the Ruby side. marcandre (Marc-Andre Lafortune) mentioned communication with other typed systems / data interchange as a benefit, but we are not going to make Ruby (type-wise) compatible with statically typed languages. So its benefit is still unclear.
Matz.
Updated by soutaro (Soutaro Matsumoto) 6 months ago
I'm planning to change the semantics of bool in RBS (in the next release). It will be an alias of true | false, and we will add another type for conditionals, something like type boolish = top.
So, I feel there is no strong reason from the RBS side to add a new class/module to Ruby.
Introduction to py5bot
This is py5bot. A simple and easy to use programming environment for teaching the very basics of Python programming and creative coding with py5.
You must use the py5bot Jupyter kernel to execute py5bot code.
If you are viewing this page from the py5 documentation website, see the Binder or Live Code options by hovering your mouse over the rocket ship icon at the top of this page.
Each cell in this notebook can contain a series of py5 drawing commands. The drawing commands will be executed by py5bot to create a static image that will be embedded in the notebook.
The main design goal is to provide a simple programming environment that is suitable for beginners. Individual programming concepts can be explained in isolation from more complicated Python concepts like functions or modules.
When hosted on Jupyter Hub and paired with Jupyter Lab’s “Show Contextual Help” feature, py5bot can become an easy to use programming environment for educators to teach Python to beginners.
Below is a simple example.
size(200, 200)
background(255, 255, 0)
rect(50, 50, 100, 100)
If you are familiar with Processing, you might describe this as a static sketch.
Internally, py5 executes these commands inside of a setup() method to render a single frame.
py5bot rules
There are a few important rules that you should be aware of.
The size() command should be the first line of Python code. Comments are ignored. If the size() command is omitted, the output size will be 100 by 100 pixels.
When py5bot is run on Windows and Linux computers, you can use the P2D or P3D renderers. These OpenGL renderers are currently not supported on OSX. The SVG and PDF renderers require a filename (see the Other Renderers section below).
Each cell has its own local namespace. Variables and functions defined in one cell cannot be used in another cell.
There are some less important rules that should also be mentioned:
Calls to smooth() or no_smooth(), if desired, should be after size() and before other Python commands.
A call to pixel_density() would also need to be after size() and before other Python commands, but since it just scales the size of the output, you probably shouldn’t bother using it in py5bot.
That’s basically it.
More examples
Let’s see a more interesting example.
size(200, 200)
background(224)
no_stroke()

for _ in range(250):
    fill(random_int(255), random_int(255), random_int(255))
    rect(random_int(width), random_int(height), 10, 10)
If you like, you can put print statements in your code.
size(200, 200)
print('draw a red rectangle')
fill(255, 0, 0)
rect(50, 50, 100, 100)
draw a red rectangle
You can define your own functions. Any functions will be local to only one cell, however.
size(200, 200)

def draw_random_circle():
    x = random_int(width)
    y = random_int(height)
    circle(x, y, random_int(25))

for _ in range(20):
    draw_random_circle()
Error Message Reporting#
No programming environment would be suitable for beginners without appropriate error messages. Observe that in all cases, the error message correctly points to the problem in the code.
Below is what syntax errors look like.
size(200, 200)
rect(10, 20, 30, 40))
There is a problem with your code:
SyntaxError: unmatched ')' (<py5bot>, line 2)
rect(10, 20, 30, 40))
                    ^
This next example is a programming error:
size(200, 200)
impossible = 100 / 0
py5 encountered an error in your code:
    1
    2
--> 3 impossible = 100 / 0

ZeroDivisionError: division by zero
The next example shows the displayed error message for when a py5 drawing function is used incorrectly.
size(200, 200)
rect(1, 2, 3, 4, "extra param")
py5 encountered an error in your code:
    1
    2
--> 3 rect(1, 2, 3, 4, "extra param")

TypeError: The parameter types (int, int, int, int, str) are invalid for method rect.
Your parameters must match one of the following signatures:
 * rect(a: float, b: float, c: float, d: float, /) -> None
 * rect(a: float, b: float, c: float, d: float, r: float, /) -> None
 * rect(a: float, b: float, c: float, d: float, tl: float, tr: float, br: float, bl: float, /) -> None
These error messages can be customized. That is a separate topic to be explained elsewhere.
There are py5bot reserved words. You are not allowed to use a reserved word as a variable name.
Ideally py5bot would have syntax highlighting to color the reserved words differently, but that hasn't been implemented yet. Let me know if you are interested in working on that.
size(200, 200)
red = 255, 0, 0
fill(*red)
rect(50, 50, 100, 100)
There is a problem with your code.
===================================
Assignment to py5 reserved word "red" on line 3 is discouraged and may causes errors in your sketch.
red = 255, 0, 0
^
Other Renderers
As previously stated, the P2D and P3D renderers only work on Linux and Windows. On OSX, py5bot will replace the P2D or P3D renderers with the default renderer after displaying a polite warning.
If you are an OSX user running this through Binder or with this website’s Live Code feature (Thebe), the following cell will work just fine, without a warning, because the Jupyter kernel is running in the cloud on a Linux server. What matters is where the Jupyter server is running, not the Jupyter client.
size(200, 200, P2D)
circle(width / 2, height / 2, 150)
When using the SVG or PDF renderers, size() must provide a filename for the 4th parameter. For example:
size(200, 200, SVG, '/tmp/drawing.svg')
circle(width / 2, height / 2, 150)
Code Bypass
You can use %%python to bypass py5bot execution. Any variables or functions defined in such a cell will be available to all later cells. Python modules can be imported here as well.
The %%python bypass meta-command must be at the very beginning of the code cell.
%%python
import numpy as np

def draw_random_circle(x, y):
    fill(random_int(255), random_int(255), random_int(255))
    circle(x, y, random_int(25))

message = "py5bot is awesome"
The code in the previous cell will be available to regular py5bot cells.
size(200, 200)
no_stroke()

for _ in range(250):
    x = np.random.normal(width / 2, 40)
    y = np.random.normal(height / 2, 40)
    draw_random_circle(x, y)

fill(255, 196)
rect(0, 0, width, 30)
fill(255, 0, 0)
text_size(22)
text(message, 10, 20)
Phaser is a popular open-source 2D game framework that can be used to develop desktop or mobile browser HTML5 games in side-view or top-view styles.
Phaser supports both the Canvas and WebGL rendering engines and ships with complete sprite animation, input management, tile maps, motion tweens, resource loaders, physics systems, particle systems, and so on; it can meet almost any requirement for developing a 2D game.
Phaser's most commendable feature is its plug-in mechanism, and the ecosystem community that has grown around it. For example, with the help of the isometric plug-in, you can develop a game with a (pseudo) 3D effect.
The next version of Phaser is 3.0 (just released), so maintenance of the current 2.x version is continued by the community, and that version is known as Phaser CE (Community Edition).
Most of the features of the Phaser framework are packaged in a single phaser.js file. All we need to do is include this framework file in the hosting HTML file and start using Phaser:
<script src="Lib/phaser.js"></script>
Almost all of the framework's APIs are defined under the Phaser namespace. For example, we start the framework by instantiating the Phaser.Game class:
var game = new Phaser.Game();
Using the default parameters, the framework will create an 800x600-pixel canvas element in the document as the game's canvas.
√ Specify game size
Of course, we can specify the size of the game ourselves. For example, to set the game size to 700x300 pixels:
var game = new Phaser.Game(700, 300);
√ Specify renderer
Phaser uses a modified Pixi library as its underlying rendering implementation, so it can support both Canvas and WebGL. By default, Phaser chooses automatically, but we can specify the desired rendering engine when starting the framework. For example, the following code forces the Canvas renderer:
var game = new Phaser.Game(700, 300, Phaser.CANVAS);
The Phaser supported renderer options include:
Phaser.AUTO: let the framework automatically select the renderer
Phaser.CANVAS: the Pixi canvas renderer
Phaser.WEBGL: the Pixi WebGL renderer
Phaser.WEBGL_MULTI: the Pixi WebGL renderer with multi-texture support enabled
Phaser.HEADLESS: headless rendering; uses the Pixi canvas renderer, but does not add the canvas element to the DOM, nor does any actual rendering
√ Specify the game canvas parent element
By default, Phaser inserts the canvas element that it creates at the end of the BODY element of the document. However, we can explicitly specify its parent element.
For example, the following code creates the game canvas inside the DOM element whose ID is ezgame:
var game = new Phaser.Game(700, 300, Phaser.AUTO, 'ezgame');
You can also pass in an HTML element directly to specify the parent element of the game canvas. For example:
var host = document.querySelector('#ezgame');
var game = new Phaser.Game(700, 300, Phaser.AUTO, host);
If you specify an empty ID, the framework uses the BODY element as the parent element of the game canvas. For example:
new Phaser.Game(700, 300, Phaser.AUTO, '');
Note that the parent element of the game canvas should have its padding set to 0, or the framework will be off when calculating the canvas dimensions.
I have written a Phaser tutorial; a screenshot of the learning page is shown below. It should be of some help to friends who are just getting into game development.
Agenda
See also: IRC log
Manu: Mark's proposal for how we allow Microformats-like markup is interesting. He wants to do this:
Manu:prefix [ : [ resource] ] instead of this:
Manu:[ prefix [ : ] ] resource <--- this is what we have right now in the CURIE spec. That would allow you to do this:
Manu:...</div>
benadida: Don't know if I like that.
ShaneM1: Me neither.
<Steven> Well, I think it is quite clever. I like it, it solves a problem in a neat way
benadida: This is the hGRDDL approach that we talked about some time ago - we shouldn't stuff it into the prefix mapping.
Steven: I think it's clever. We do something special already with a CURIE that doesn't have a value without a prefix.
Manu: I think the approach could work if we merge it with the @profile approach.
ShaneM1: It's inconsistent with the definition of CURIE in the spec.
benadida: Mark's saying that we flip it around, but why isn't there a way to create a auto-prefix, like Pascal "with" statement.
Steven: That is another
solution.
... The thing with Mark's solution is that you don't need more syntax.
... There is a problem to be solved, and I thought that Mark's solution, while it's too late, is a neat one.
benadida: Forget that it might be too late, it's still important, but it requires that we define a whole bunch of prefixes.
<ShaneM1> prefix extension proposal is at
ShaneM1: This proposal stands whether you use @profile or @prefix.
benadida: You're building a DTD for HTML4?
ShaneM1: That's done.
... We need to separate the discussions.
... How do we generate documents that support HTML4 is one discussion.
... How do we support Microformats-like approaches, is the other.
<ShaneM1>
benadida: What do we do with HTML4? How do you validate? Do we want to make it more official?
ShaneM1: We're producing a member submission, which is a profile of RDFa for HTML4.
benadida: Creative Commons would
be happy to support as a member.
... This is a great approach - we should move faster on it.
... We're doing this to allow people that want validation in HTML4 and want to use RDFa.
<scribe> ACTION: Ben ask SWD to approve publication of an updated RDFa Primer [CONTINUES] [DONE] [recorded in]
<scribe> ACTION: Jeremy review and consider expanding the description of TopBraid in the RDFa wiki [CONTINUES] [recorded in]
<scribe> ACTION: Jeremy to demonstrate GRDDL with XHTML/RDFa once the NS URI is set up. [CONTINUES] [recorded in]
<scribe> ACTION: Manu talk with Jamie McCarthy about an AskSlashdot piece [CONTINUES] [recorded in]
<scribe> ACTION: Manu talk with Michael Smethurst at BBC about RDFa [DONE] [recorded in]
<scribe> ACTION: Manu to create test cases for testing relative URI resolution (href/CURIEs/etc). [CONTINUES] [recorded in]
<scribe> ACTION: Manu to upload test harness source code to W3C CVS. [CONTINUES] [recorded in]
<scribe> ACTION: Manu to work with Microformats community to address RDFa as unified markup for uFs. [CONTINUES] [recorded in]
<Steven> "Note. For versions of HTTP that define a Link header, user agents should handle these headers exactly as LINK elements in the document. HTTP 1.1 as defined by [RFC2616] does not include a Link header field (refer to section 19.6.3)."
<Steven>
<scribe> ACTION: Manu to write summary for Semantic Web Use Cases for Ivan. [CONTINUES] [recorded in]
<scribe> ACTION: Manu write a pending test case for literal property and no child nodes. [CONTINUES] [recorded in]
<scribe> ACTION: Manu write the perl code for Slashdot. [CONTINUES] [recorded in]
<scribe> ACTION: Mark create base wizard suitable for cloning [CONTINUES] [recorded in]
<scribe> ACTION: Mark write foaf examples for wiki [CONTINUES] [recorded in]
<scribe> ACTION: Michael to create 'RDFa for uF users' on RDFa Wiki [CONTINUES] [recorded in]
<scribe> ACTION: Ralph think about RSS+RDFa [CONTINUES] [recorded in]
<scribe> ACTION: Ralph to make happen [CONTINUES] [recorded in]
<scribe> ACTION: Shane to update XHTML ns document to point to new XSLT URI [CONTINUES] [recorded in]
benadida: This is just to make sure we're in sync with TAG, great response to Noah, Steven.
Steven: I don't quite grasp the issue he is having.
benadida: It's critical - if TAG
finding notes that we're not compliant, even if we are, it
would be bad.
... I don't think there is an issue.
... Noah is asking, how do you follow your nose.
... We declare @version at the top of the document.
... I think we said that we're modifying the XHTML namespace?
... It's been agreed upon that it will be done.
... That's how we follow our nose, right? We go to the namespace document and we're done, right?
Steven: Is this machine follow your nose, or human follow your nose?
ShaneM1: Machine.
Steven: The namespace document is pure english at the moment, it's not yet RDFa annotated, but it will be.
ShaneM1: nope. The change to the XHTML document is a GRDDL-compliant @profile addition - that's the machine part of this.
benadida: It is follow-your-nose in-so-far as GRDDL is follow-your-nose.
ShaneM1: That's what TimBL told
us to do.
... It doesn't point to RDFa, it points to XSLT transformation.
... Tim's declaration is that everything with an xmlns declaration is RDF.
benadida: is there another path
to the RDFa specification.
... There is @version.
ShaneM1: There is a doctype declaration.
benadida: What's the normative
one?
... The canonical one is the XHTML namespace.
... I'll follow it up, but we want to make sure that they know it is follow-your-nose compliant.
... I believe that we use version="XHTML+RDFa1.0"
ShaneM1: I think Noah's argument is more arcane. His reading of the RFC is that it doesn't permit the interpretation of documents of that media type as containing RDF.
benadida: If that's the case,
GRDDL is broken, because GRDDL is supposed to apply to
XHTML.
... What we're doing with the namespace document is 100% GRDDL compliant.
... GRDDL is REC and we're GRDDL compliant.
... If GRDDL is okay, why is RDFa not okay?
... @profile says this is GRDDL, rel="transformation" states that is how you extract triples.
ShaneM1: There is no normative Schema document for XHTML 1.0
benadida: Then it's not the same thing as the namespace document.
ShaneM1: The namespace document is the thing at the end of @xmlns
<benadida> ACTION: Ben to figure out *how* to do the namespace-doc GRDDL thing [recorded in]
<ShaneM1>
benadida: I will continue TAG discussion.
ShaneM1: RFC alludes to
modularization, but doesn't point to it.
... Modularization doesn't point to RDFa.
... I think that is Noah's argument.
benadida: Yes, but GRDDL.
ShaneM1: He hasn't incorporated GRDDL into his thinking.
benadida: We'll talk about Danny's comment later.
<Steven> (Good call)
Please use this thread to ask questions about or discuss lesson 7. The wiki page, including links to video, notebooks, and resources, is here:
Lesson 7 discussion
with the new vgg16.py, I was getting an error in the fish.ipynb:
from vgg16 import Vgg16
model = vgg_ft(8)
…
Exception: Input 0 is incompatible with layer dense_1: expected ndim=2, found ndim=4
after some debugging I figured out that the new Vgg16.create() method returns the output of the conv layer directly without adding a Flatten(), so the tensor has 4 dimensions (?, 3, 228, 228) instead of the 2 expected by the input_spec of the Dense() layer
I inserted a Flatten() layer before the line that adds the Dense() layer in Vgg16.ft() and now is working for me (you can argue that the right place to add it is in the Vgg16.create() method, before the return, but I didn’t try that yet). I paste my fix below for those who came across the same issue:
def ft(self, num):
    model = self.model
    model.pop()
    for layer in model.layers:
        layer.trainable = False
    model.add(Flatten())
    model.add(Dense(num, activation='softmax'))
    self.compile()
Please forgive me if this turns out to be a silly oversight on my part, but I can’t seem to locate the ipynb that was shown in Lesson 7 here.
This is the notebook that had the ResNet demonstration in it.
I don’t see the ResNet stuff in either the Lesson7.ipynb or the fish.ipynb…
Am I totally missing something obvious, or is there are 3rd notebook that isn’t up yet? Thanks!
Can you show (maybe in a gist) the code that you’re using to create the data that you’re calling vgg_ft() with?
I included resnet50.py on the wiki, but not the notebook that called it - since it’s a pretty trivial addition to the lesson3 notebook so I figured it would be best for people to try creating it themselves. Let us know if you have any issues with this…
so the Global Average Pooling definition, etc, lives on video only? I’ll have to sit down tomorrow and really check out all the pieces and parts and see what lives where.
(Just curious, why is this part of lesson 7 an extension of Lesson 3?)
GlobalAveragePooling2D is part of Keras . There’s nothing only on video other than my simple little networks (the most effective of which is just 3 lines of code).
I used cats and dogs for showing resnet since it’s a good fit for that dataset - I mention why in the lesson.
I have tried Resnet conv + simple FC and Resnet conv + Global_avg_pool on cats and dogs and fisheries. My results are worse than VGG with Batch norm, did any one else give it a try?
Also, I did store the conv features and fed it as input to FC / Global_avg_pooling dense models.
Edit: I realized that I was not using shuffle=False for training batch. Fixing that improved Redux score a lot, but still not much improvement in fisheries. This is probably as expected, as resnet was trained on ImageNet. Feel like retraining last few conv layers might help? I am going to try that next.
Can you comment on the architecture of a CNN that does not output a probability, but a number?
An example would be finding the head and tail pixels of the fishes. This would be 2 coordinates or 4 numbers.
I see how one could use vgg with Dense(4) as the final output, but… is a special activation used or no activation at all?
Additionally, I understand why the input is normalized, but why does the training output need to be normalized [-1,1]. Or does it?
In the “Basic VGG” section of this lesson.
A model is first created using: model = vgg_ft_bn(8)
The layers of this model are set to be untrainable upon creation. The top layers are removed and a dense top layer is added for the classification. This model is then trained on the data. After which the conv layers are used to predict features for the train data.
I don’t understand the point of actually training this model. Wouldn’t it be just as good to create the original vgg16 model minus the top layer and use that to predict the features?
Hello,. | http://forums.fast.ai/t/lesson-7-discussion/252 | CC-MAIN-2018-13 | refinedweb | 734 | 73.37 |
Design. To be a little more descriptive, here is a direct quote from Wikipedia:
The observer pattern (aka. Dependents, publish/subscribe) is a software design pattern in which an object, called the subject, maintains a list of its dependents, called observers, and notifies them automatically of any state changes, usually by calling one of their methods. It is mainly used to implement distributed event handling systems.
The Observable module in the Ruby standard library provides the mechanism necessary for us to implement this pattern, and it is fairly easy to use.
The Planning
In order to use this pattern, we first need to come up with a scenario. An example would be an application that keeps track of the car mileage and reminds us of when we need to take the vehicle in for a service. It is a very simple example, but it will allow us to explore this pattern.
The Basic Structure
The first thing that we are going to do is create a basic structure for our Notifier class that will act as an observer. One item that you really need to pay attention to here is the update() method . This is the callback that the Observable module will use when notifying changes to the observer, and the method name needs to be update().
Let us start by putting together a simple Notifier class:
class Notifier def update() end end
That is as simple as it gets. Next, let us create the structure for the subject, our Car class:
class Car attr_reader :mileage, :service def initialize(mileage = 0, service = 3000) @mileage, @service = mileage, service end def log(miles) @mileage += miles end end
It contains the attributes mileage and service, and methods initialize() and log(). The initialize() method will set the initial values for the car’s current mileage and the mileage that it needs to be taken to service. The log() method will log how many miles it has been driven recently and add it to the total vehicle mileage.
Fixing the Notifier Class
Now that we have an understanding of what our Car class does, we can go ahead and fill in the logic of what we want the notifier to do:
class Notifier def update(car, miles) puts "The car has logged #{miles} miles, totaling #{car.mileage} miles traveled." puts "The car needs to be taken in for a service!" if car.service <= car.mileage end end
This means that every time the observer gets notified, it will print to the screen a message about the mileage usage, along with an additional message about the service status if the total mileage exceeds the service mileage.
Putting It All Together
With the basic structure for the Car class completed and the Notifier class in place, the only thing left for us to do is to put the Observable module in place.
First, we need to include it in our Car class:
require 'observer' class Car include Observable ... end
Next, we will add an observer each time a new instance of the Car class is created. Let us modify the initialize method:
def initialize(mileage = 0, service = 3000) @mileage, @service = mileage, service add_observer(Notifier.new) end
The last change we need to do in this class is in our log method. We need to tell the observer that our object has changed every time we log new miles:
def log(miles) @mileage += miles changed notify_observers(self, miles) end
Here we are calling
changed, which sets the changed state of the object (true by default). And
notify_observers(self, miles), which notifies the observer of a change.
The complete Car class looks like this:
require 'observer' class Car include Observable attr_reader :mileage, :service def initialize(mileage = 0, service = 3000) @mileage, @service = mileage, service add_observer(Notifier.new) end def log(miles) @mileage += miles changed notify_observers(self, miles) end end
To summarize, here are the changes that we made to our Car class:
- In order to use the Observable module, we first need to
requireit;
- Next, we include it with
include Observable;
- When an instance of our Car class is created, an observer is added;
- When the log() method is called, it notifies the observers by asserting that the object has changed
Running the code
Now is the fun part, let us see what happens when we create a new instance of the Car class and call log:
car = Car.new(2300, 3000) car.log(100) => "The car has logged 100 miles, totaling 2400 miles traveled." car.log(354) => "The car has logged 354 miles, totaling 2754 miles traveled." car.log(300) => "The car has logged 300 miles, totaling 3054 miles traveled." => "The car needs to be taken in for service!"
First, we create an instance of the Car class with 2300 miles, and we set that it needs to be taken for service when it reaches 3000 miles.
Every time a new mileage is logged, the Notifier class outputs the mileage added as well as the tally for the miles traveled. When the total miles for the vehicle is greater than the mileage set to be taken in for service, another output is generated. Pretty cool, huh?
Singleton Pattern
The concept of the Singleton pattern is fairly straightforward: only a single instance of a class can exist. This can be useful when an application allows only one object to be instantiated for a given class.
Even though there are mixed feelings amongst developers about this pattern, it is often used in other languages, such as Java and C-based languages. In Ruby, the Singleton module in the standard library can be used to implement this pattern.
Planning
Let’s say that we need to design a class to hold configuration data for our application, and there can only ever exist one instance of this configuration. Sure, we could simulate a Singleton by creating a module, but we would have to make sure that it could not be duplicated or cloned, otherwise it would lose its purpose.
Integration
The first step in creating a Singleton class is to
require and
include the Singleton module in a class:
require 'singleton' class AppConfig include Singleton end
If you try to instantiate this class as you normally would a regular class, a
NoMethodError exception is raised. The constructor is made private to prevent other instances from being accidentally created:
AppConfig.new #=> NoMethodError: private method `new' called for AppConfig:Class
To access the instance of this class, we need to use the
instance() method provided by the Singleton module. When this method is first called, an instance of the class is created, and all subsequent calls return the created instance. Curious to see if this is actually true?
first, second = AppConfig.instance, AppConfig.instance first == second #=> true
True indeed! Now that we now how it works, let’s modify the AppConfig class and add a few things.
#... attr_accessor :data def version '1.0.0' end
Here we added a
data attribute that will hold the data about the configuration, and a
version method that returns the current version. Putting it all together, here is the full class:
require 'singleton' class AppConfig include Singleton attr_accessor :data def version '1.0.0' end end
Congratulation, you have just implemented a Singleton pattern in Ruby! Now, let’s play with it:
AppConfig.instance.data = {enabled: true} => {:enabled=>true} AppConfig.instance.version => "1.0.0" second = AppConfig.instance second.data = {enabled: false} => {:enabled=>false} AppConfig.instance.data => {:enabled=>false}
We first set the
data attribute with arbitrary values and check its version.Next, we duplicate the singleton instance, change its
data value, and confirm that the value changed in the single instance.
Conclusion
This article demonstrated how the Observer Pattern and the Singleton Pattern can be used with Ruby, and I hope the examples presented here can give you the basis for implementing them on your own applications.
(Note: The diagrams in the image for this article were made with)
- Douglas
- Douglas
- Manuel | http://www.sitepoint.com/design-patterns-in-ruby-observer-singleton/ | CC-MAIN-2014-52 | refinedweb | 1,325 | 59.43 |
A client wanted an app to monitor and log the UPS for an embedded windows controller.
This toolbar widget provides a framework to capture win32 WM_POWER events, leverage them to draw a graph of UPS state and optionally shutdown/suspend or hibernate the system.
WM_POWER
The app is very simple and should be a good framework for other, related projects.
As provided to our client, the application does more than as described here, however the additional functions were client specific and not pertinent.
When looking for a similar project, I found pieces, but nothing specific. As such, this project derives from projects by maharishi_b and lcady.
The app builds to a toolbar app that runs all the time and monitors windows power events. At each event, the app logs the event to a CSV log file and if the AC goes offline, the app pops up and shows a graph of the UPS capacity and the time left in mins.
All these features depend on your UPS being plugged in and supporting the standard windows HID power device profile.
During normal operations, the app may be viewed using the toolbar context menu (right click on the toolbar icon). Using 'show' will bring up the normal operations window:
This shows that the UPS is online with 100% battery capacity. The 'High' at the bottom is the windows power systems determination of the battery capacity. Full scale for the battery capacity is 100%.
When the AC line is lost, the main screen will popup and display the power status:
After restoring power, the time remaining is no longer relevant and the system begins re-charging the battery:
Each event that occurs, is logged to the log.csv file in the directory from which PowerMonitor was started. The format is as follows:
PowerMonitor
Each event is logged by time, date and event type. Details of the event follow as per the system power status structure:
[StructLayout(LayoutKind.Sequential)]
public class SystemPowerStatus
{
public ACLineStatus ACLineStatus;
public BatteryFlag BatteryFlag;
public Byte BatteryLifePercent;
public Byte Reserved1;
public Int32 BatteryLifeTime;
public Int32 BatteryFullLifeTime;
Most of the code is self explanatory. The strip chart is probably the most interesting - it is an evolution of lcady's strip chart. I am in the process of making it a general purpose strip chart class, but for now it is something of a simple hack.
I like the fact that this class draws to a BMP and then blits the BMP into a picture view. It means the chart construction is mostly independent of the viewing process and can be offloaded to other threads. This is particularly pertinent when doing data acquisition via potentially slow devices.
It is exceptionally hard to map battery capacity to time left. UPS are getting better, but they mostly use battery voltage and this is a very poor indicator, especially when the battery is old, has been cycled a lot or has been recently cycled.
Time left is only an indicator and actions you take should be based on your own analysis of data collected during AC outages and recharge. | https://www.codeproject.com/Articles/292725/Using-WM-POWER-events-to-monitor-a-UPS?msg=4096075 | CC-MAIN-2017-30 | refinedweb | 514 | 60.14 |
- Just a nitpicky question
- displaying a board with 2D array
- making header function
- link2001 errors in my exceptionhandling program
- Array of pointer variables....
- getline()
- infinite loop problem
- Inserting new nodes in a sorted double linked list
- Linked List Help
- Visual C++ .NET Matrix Size Limiation?
- Detecting Math Errors
- Gdi
- returning an appropriate string for an expected int value ??
- Can Anyone tell me what is wrong with this program?
- Linked List help
- Program Fatal Error- Please Explain
- how to create a process in c++?
- The Tree
- converting from a string to an int
- words count
- Hiding Input
- Conversion and Constructor Functions
- using namespace std;
- Help with simple Arry encoder
- Indirection
- am I declaring my strings wrong?
- Scheduling Algo
- Insertion sort question
- code crashes on Solaris
- Help with code for simple Y2K problem
- Shell
- can i have an example, please?
- Anyone got a faster way of doing this?
- having a problem using variables in file calls
- List of nodes
- getline function with string class
- return reference
- Recursion
- Exceptions. (This should be easy)
- Too fast to test!
- Stream function question
- X-win32 Version 5.4
- **linked list
- going into windows programming
- opening files
- ambigious symbols??
- how do u access a class protected area?
- how do u sort a list of names?
- Sorting Program Problems
- writing a file to a different directory | http://cboard.cprogramming.com/sitemap/f-3-p-796.html | CC-MAIN-2016-18 | refinedweb | 218 | 66.23 |
Build scalable dataviz components with full integration
This section builds up your mental models for dataviz components through the class-based approach. If you don't care about those details, you can jump ahead to React Hooks.
As useful as blackbox components are, we need something better if we want to leverage React's rendering engine. The blackbox approach in particular struggles with scale. The more charts and graphs and visualizations on your screen, the slower it gets.
Someone once came to my workshop and said "We used the blackbox approach and it takes several seconds to re-render our dashboard on any change. I'm here to learn how to do it better."
In our full-feature integration, React does the rendering and D3 calculates the props.
Our goal is to build controlled components that listen to their props and reconcile that with D3's desire to use a lot of internal state.
There are two situations we can find ourselves in:
- We know for a fact our component's props never change
- We think props could change
It's easiest to show you with an example.
Let's build a scatterplot step by step. Take a random array of two-dimensional data, render in a loop. Make magic.
Something like this 👇
You've already built the axes! Copy pasta time.
Props don't change
Ignoring props changes makes our life easier, but the component less flexible and reusable. Great when you know in advance that there are features you don't ned to support.
Like, no filtering your data or changing component size 👉 means your D3 scales don't have to change.
When our props don't change, we follow a 2-step integration process:
- set up D3 objects as class properties
- output SVG in
render()
We don't have to worry about updating D3 objects on prop changes. Work done 👌
An unchanging scatterplot
We're building a scatterplot of random data. You can see the final solution on CodeSandbox
Here's the approach 👇
- stub out the basic setup
- generate random data
- stub out Scatterplot
- set up D3 scales
- render circles for each entry
- add axes
I recommend creating a new CodeSandbox, or starting a new app with create-react-app. They should work the same.
Basic setup
Make sure you have
d3 added as a dependency. Then add imports in your
App.js file.
// ./App.jsimport * as d3 from "d3"import Scatterplot from "./Scatterplot"
Add an
<svg> and render a Scatterplot in the render method. This will throw
an error because we haven't defined the Scatterplot yet and that's okay.
// ./App.jsfunction App() {return (<div className="App"><h1>Hello CodeSandbox</h1><h2>Start editing to see some magic happen!</h2><svg width="800" height="800"><Scatterplot x={50} y={50} width={300} height={300} data={data} /></svg></div>)}
CodeSandbox adds most of that code by default. If you're using create-react-app, your App component has different markup. That's okay too.
We added this part:
<svg width="800" height="800"><Scatterplot x={50} y={50} width={300} height={300} data={data} /></svg>
An
<svg> drawing area with a width and a height. Inside, a
<Scatterplot
that's positioned at
(50, 50) and is 300px tall and wide. We'll have to
listen to those props when building the Scatterplot.
It also accepts data.
Random data
We're using a line of code to generate data for our scatterplot. Put it in App.js. Either globally or within the App function. Doesn't matter because this is an example.
const data = d3.range(100).map((_) => [Math.random(), Math.random()])
d3.range returns a counting array from 0 to 100. Think
[1,2,3,4 ...].
We iterate over this array and return a pair of random numbers for each entry. These will be our X and Y coordinates.
Scatterplot
Our scatterplot goes in a new
Scatterplot.js file. Starts with imports and an
empty React component.
// ./Scatterplot.jsimport React from "react"import * as d3 from "d3"class Scatterplot extends React.Component {render() {const { x, y, data, height } = this.propsreturn <g transform={`translate(${x}, ${y})`}></g>}}export default Scatterplot
Import dependencies, create a
Scatterplot component, render a grouping
element moved to the correct
x and
y position. Nothing too strange yet.
D3 scales
Now we define D3 scales as component properties. We're using the class field syntax that's common in React projects.
Technically a Babel plugin, but comes by default with CodeSandbox React projects and create-react-app setup. As far as I can tell, it's a common way to write React components.
// ./Scatterplot.jsclass Scatterplot extends React.Component {xScale = d3.scaleLinear().domain([0, 1]).range([0, this.props.width]);yScale = d3.scaleLinear().domain([0, 1]).range([this.props.height, 0]);
We're defining
this.xScale and
this.yScale as linear scales. Their domains
go from 0 to 1 because that's what Math.random returns and their ranges
describe the size of our scatterplot component.
Idea being that these two scales will help us take those tiny variations in datapoint coordinates and explode them up to the full size of our scatterplot. Without this, they'd overlap and we wouldn't see anything.
Circles for each entry
Rendering our data points is a matter of looping over the data and rendering a
<circle> for each entry. Using our scales to define positioning.
// ./Scatterplot.jsreturn (<g transform={`translate(${x}, ${y})`}>{data.map(([x, y]) => (<circle cx={this.xScale(:satisfied:} cy={this.yScale(y)}))}</g>);
In the
return statement of our
render render method, we add a
data.map
with an iterator method. This method takes our datapoint, uses array
destructuring to get
x and
y coordinates, then uses our scales to define
cx and
cy attributes on a
<circle> element.
Add axes
You can reuse axes from our earlier exercise. Or copy mine from the CodeSandbox
Mine take a scale and orientation as props, which makes them more flexible. Means we can use the same component for both the vertical and horizontal axis on our Scatterplot.
Put the axis code in
Axis.js, then augment the Scatterplot like this
👇
import Axis from "./Axis";// ...return (<g transform={`translate(${x}, ${y})`}>{data.map(([x, y]) => (<circle cx={this.xScale(:satisfied:} cy={this.yScale(y)}))}<Axis x={0} y={0} scale={this.yScale}<Axis x={0} y={height} scale={this.xScale}</g>);
Vertical axis takes the vertical
this.yScale scale, orients to the
Left and
we position it top left. The horizontal axis takes the horizontal
this.xScale
scale, orients to the
Bottom, and we render it bottom left.
Your Scatterplot should now look like this
Props might update
The story is a little different when our props might update. Since we're using D3 objects to calculate SVG properties, we have to make sure those objects are updated before we render.
No problem in React 15: Update in
componentWillUpdate. But since React 16.3
we've been told never to use that again. Causes problems for modern async
rendering.
The official recommended solution is that anything that used to go in
componentWillUpdate, can go in
componentDidUpdate. But not so fast!
Updating D3 objects in
componentDidUpdate would mean our visualization always
renders one update behind. Stale renders! 😱
The new
getDerivedStateFromProps to the rescue. Our integration follows a
3-step pattern:
- set up D3 objects in component state
- update D3 objects in
getDerivedStateFromProps
- output SVG in
render()
getDerivedStateFromProps is officially discouraged, and yet the best tool we
have to make sure D3 state is updated before we render.
Because React calls
getDerivedStateFromProps on every component render, not
just when our props actually change, you should avoid recalculating complex
things too often. Use memoization helpers, check for changes before updating,
stuff like that.
An updateable scatterplot
Let's update our scatterplot so it can deal with resizing and updating data.
3 steps 👇
- add an interaction that resizes the scatterplot
- move scales to state
- update scales in
getDerivedStateFromProps
You can see my final solution on CodeSandbox. I recommend you follow along updating your existing code.
Resize scatterplot on click
To test our scatterplot's adaptability, we have to add an interaction: Resize the scatterplot on click.
That change happens in
App.js. Click on the
<svg>, reduce width and height
by 30%.
Move sizing into App state and add an
onClick handler.
// App.jsclass App extends React.Component {state = {width: 300,height: 300};onClick = () => {const { width, height } = this.state;this.setState({width: width * 0.7,height: height * 0.7});};render() {const { width, height } = this.state;
We changed our App component from a function to a class, added
state with
default
width and
height, and an
onClick method that reduces size by 30%.
The
render method reads
width and
height from state.
Now gotta change rendering to listen to these values and fire the
onClick
handler.
// App.js<svg width="800" height="800" onClick={this.onClick}><Scatterplot x={50} y={50} width={width} height={height} data={data} /></svg>
Similar rendering as before. We have an
<svg> that contains a
<Scatterplot>. The svg fires
this.onClick on click events and the
scatterplot uses our
width and
height values for its props.
If you try this code now, you should see a funny effect where axes move, but the scatterplot doesn't resize.
Peculiar isn't it? Try to guess why.
Move scales to state
The horizontal axis moves because it's render at
height vertical coordinate.
Datapoints don't move because the scales that position them are calculated once
– on component mount.
First step to keeping scales up to date is to move them from component values into state.
// Scatterplot.jsclass Scatterplot extends React.Component {state = {xScale: d3.scaleLinear().domain([0, 1]).range([0, this.props.width]),yScale: d3.scaleLinear().domain([0, 1]).range([this.props.height, 0])};
Same scale definition code we had before. Linear scales, domain from 0 to 1,
using props for ranges. But now they're wrapped in a
state = {} object and
it's
xScale: d3 ... instead of
xScale = d3 ....
Our render function should use these as well. Small change:
// Scatterplot.jsrender() {const { x, y, data, height } = this.props,{ yScale, xScale } = this.state;return (<g transform={`translate(${x}, ${y})`}>{data.map(([x, y]) => <circle cx={xScale(:satisfied:} cy={yScale(y)})}
We use destructuring to take our scales from state, then use them when mapping over our data.
Clicking on the SVG produces the same result as before, but we're almost there. Just one more step.
Update scales in
getDerivedStateFromProps
Last step is to update our scales' ranges in
getDerivedStateFromProps. This
method runs every time React touches our component for any reason.
// Scatterplot.jsclass Scatterplot extends React.PureComponent {// ..static getDerivedStateFromProps(props, state) {const { yScale, xScale } = state;yScale.range([props.height, 0]);xScale.range([0, props.width]);return {...state,yScale,xScale};}
Take scales from state, update ranges with new values, return new state. Nice and easy.
Notice that
getDerivedStateFromProps is a static method shared by all
instances of our Scatterplot component. You have no reference to a
this and
have to calculate new state purely from the
props and
state passed into
your method.
It's a lot like a Redux reducer, if that helps you think about it. If you don't know what Redux reducers are, don't worry. Just remember to return a new version of component state.
Your Scatterplot should now update its size on every click.
| https://reactfordataviz.com/building-blocks/3/ | CC-MAIN-2022-40 | refinedweb | 1,904 | 59.9 |
JUnit 4.9 - Class and Suite Level Rules
If you have worked with JUnit Rule API which was introduced in version 4.8 then you might probably also think that they are very useful. For example, there is a Rule called TemporaryFolder which creates files and folder before test is executed and deletes them after the test method finishes(whether test passes or fails). For those who are not familiar with Rule API , a Rule is an alteration in how a test method, or set of methods, is run and reported.
Before version 4.9 Rule can only be applied to a test method not to a Test class or JUnit test suite. But with JUnit 4.9 which is not yet released you can use @ClassRule annotation to apply a rule to test class or a test suite. If you want to use JUnit 4.9 download it from JUnit git repository. The @ClassRule annotation can be applied to any public static field of Type org.junit.rules.TestRule. The class level rule can be applied in scenarios where you use normally use @BeforeClass and @AfterClass annotation. Some of the scenarios can be like when you want to start a server or any other external resource before a test class or test suite, or when you want to make sure that your test suite or test class runs within a specified time, etc. The advantage of using class level rule is that they can be reused among different modules and classes.
Let's take an example when we want to make sure that our test suite should run within x seconds otherwise test should timeout. As you can see below we TestSuite AllTests runs TestCase1 and TestCase2. I have defined a suite level rule which would make sure that test run in three seconds otherwise suite will fail.
Test Suite
@RunWith(Suite.class)
@SuiteClasses({ TestCase1.class, TestCase2.class })
public class AllTests {
@ClassRule
public static Timeout timeout = new Timeout(3000);
}
TestCase 1
public class TestCase1 {
@Test
public void test1() throws Exception{
Thread.sleep(1000);
Assert.assertTrue(true);
}
}
TestCase2
public class TestCase2 {
@Test
public void test2() throws Exception{
Thread.sleep(1000);
Assert.assertTrue(true);
}
}
This is the most important feature which will be released in version 4.9 of JUnit.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Richard Langlois replied on Thu, 2011/05/12 - 10:34am | http://java.dzone.com/articles/junit-49-class-and-suite-level-rules?mz=123873-agile | CC-MAIN-2014-35 | refinedweb | 411 | 72.76 |
09 November 2010 07:17 [Source: ICIS news]
SINGAPORE (ICIS)--Arkema posted a third-quarter net income of €130m ($181m), a sharp reversal of the €3m loss incurred in the same period last year, partly due to buoyant demand from Asian markets, the French specialty chemicals company said on Tuesday.
Adjusted net profit in the three months to September surged to €128m from €8m in the previous corresponding period, with sales jumping 41% year on year to €1.56bn, Arkema said in a statement.
Revenue was boosted by a 10.5% increase in sales volumes, while recurring operating income soared to €172m from €36m in the same period last year, the company said.
Earnings before interests, taxes, depreciation and amortisation (EBITDA) for the September quarter more than doubled to €246m from €101m in the same period last year, it said.
Demand in ?xml:namespace>
The acrylic business acquired from US chemical producer Dow also helped increase sales by 11.1%, Arkema said.
For the first nine months of the year, Arkema said swung into a net profit of €289m from a loss of €152m in the previous corresponding period, with sales recording a 33% jump to €4.47bn.
EBITDA for the January-to-September period stood at €624m, a sharp increase from €228m in the same period last year, Arkema said.
The company said it expected its full-year EBITDA to be around €740m, after taking into account an estimated €20m charge from external strikes related to national pension reforms | http://www.icis.com/Articles/2010/11/09/9408394/frances-arkema-swings-to-q3-net-profit-on-buoyant-demand.html | CC-MAIN-2015-11 | refinedweb | 250 | 57.91 |
17 February 2010 17:49 [Source: ICIS news]
By Mark Victory
LONDON (ICIS news)--Many European polycarbonate (PC) players are in the process of moving from quarterly to monthly contract negotiations, because of volatility in the market resulting from the global economic recession, buyers and sellers said on Wednesday.
“There are monthly contracts in a lot of other markets, PC will follow. People were used to having a stable market for three months but this has changed. The market is now volatile,” said a major buyer.
The move to monthly pricing came in the wake of the increased volatility of feedstock bisphenol A (BPA) prices.
From April 2007 to September 2008, BPA contract prices were largely stable, trading at a minimum of €1,430/tonne ($1,959/tonne) FD (free delivered) NWE (northwest ?xml:namespace>
From October 2008 to February 2010, however, prices hit a low of €850/tonne FD NWE and a high of €1,480/tonne FD NWE.
PC prices were unable to react quickly enough to feedstock movements in 2009 due to quarterly pricing, buyers and sellers said.
Quarterly pricing also meant that prices were slower to react to returning demand, which began in the fourth quarter of 2009, due to a recovery in the major end-use automotive industry, sources added.
The result was price confusion in the market, which was seen in the wide range of reported deals done in the small-volume spot market, where prices were reported as low as €1.50/kg and as high as €3.45/kg, although buyers and sellers viewed both of these prices as extremes.
In order to react more swiftly to market movements, many players were now in the process of, or considering, migrating from quarterly to monthly pricing.
“We’ve already stepped away from quarterly contracts in extrusion grade [PC] material. Even our quarterly contracts now sometimes move mid-quarter. Optical and electrical grade [PC] will soon follow,” said a major producer.
A small number of players said they remained unconvinced of the need to move to monthly contracts, questioning whether monthly orders would allow them to manage stock properly.
Importers said they were also concerned over whether they would be able to move to monthly pricing because of the necessary lead times to ship materials.
“I suppose our competitors are able to move to monthly prices, but I don’t know if…importers will be able to follow. We have our lead times,” said an importer.
($1 = €0.73)
A semaphore for inter-thread synchronization.
The state of a semaphore object is signaled when its count is greater than zero, and nonsignaled when its count is equal to zero. The initialCount parameter specifies the initial count. Each time a waiting thread is released because of the semaphore's signaled state, the count of the semaphore is decreased by one. Use the release function to increment a semaphore's count by a specified amount. The count can never be less than zero or greater than the value specified in the maxCount parameter.
Definition at line 51 of file CSemaphore.h.
#include <mrpt/synch/CSemaphore.h>
Creates a semaphore.
If name is not an empty string, a named semaphore is created. In that case if the semaphore didn't exist it's created. Otherwise, the existing semaphore is linked to this object and then initialCount and maxCount are ignored.
Destructor.
Get the name of the named semaphore or an empty string if it's unnamed.
Definition at line 84 of file CSemaphore.h.
Return true if this is a named semaphore.
Definition at line 87 of file CSemaphore.h.
Increments the count of the semaphore by a given amount.
Blocks until the count of the semaphore is non-zero.
Definition at line 54 of file CSemaphore.h.
The name of the named semaphore, or empty if unnamed.
Definition at line 55 of file CSemaphore.h. | http://reference.mrpt.org/stable/classmrpt_1_1synch_1_1_c_semaphore.html | crawl-003 | refinedweb | 235 | 68.77 |
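The signaled/nonsignaled counting behavior described above can be sketched with Python's standard threading module, which provides an analogous bounded counting semaphore. This is only an illustration of the semantics; it is not MRPT's C++ API, and all names below are made up for the example.

```python
import threading
import time

# Illustration of the documented semantics (not MRPT's C++ API):
# count > 0 -> signaled, count == 0 -> nonsignaled; release()
# increments the count, and a waiter blocks while the count is zero.
sem = threading.BoundedSemaphore(value=1)  # initialCount = maxCount = 1

results = []

def waiter():
    sem.acquire()              # blocks while the count is zero
    results.append("released")

sem.acquire()                  # take the only permit: count drops to 0
t = threading.Thread(target=waiter)
t.start()
time.sleep(0.2)
assert results == []           # the waiter is blocked (nonsignaled state)

sem.release()                  # count returns to 1 (signaled state)
t.join(timeout=2)
assert results == ["released"]
```

As with the class documented here, a BoundedSemaphore refuses to push the count above its maximum: an extra release() raises ValueError, mirroring the maxCount constraint.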
A simple client/tool for Let's Encrypt or any ACME server that issues SSL certificates.
Project description
free_tls_certificates is a Python 2/3 client library and command-line client for Let’s Encrypt (or any ACME server) to automatically provision TLS certificates (aka SSL certificates).
The purpose of this library is to make it easier to embed Let's Encrypt within server provisioning applications without shelling out to certbot as root. You can also use this library as a command-line client like certbot, but it does not require root privileges to run. Instead, you are responsible for having a web server already running.
Installation
free_tls_certificates can be installed via pip but it requires some of its dependencies’ binary dependencies to be installed first. On Ubuntu (and using Python 3 as an example):
sudo apt-get install build-essential libssl-dev libffi-dev python3-dev python3-pip
sudo pip3 install free_tls_certificates
The dependencies that pip will install are:
- Let’s Encrypt’s low-level ACME client library and all of its dependencies.
- idna by kjd.
- cryptography and its dependencies (on Ubuntu: sudo apt-get install build-essential libssl-dev libffi-dev python3-dev).
Command-Line Usage
The command-line tool free_tls_certificate (which becomes available after pip-installing free_tls_certificates, which has an s) can be used to automatically provision a TLS certificate from Let’s Encrypt or generate a self-signed certificate.
To provision a TLS certificate from Let’s Encrypt, you will need to have a web server already running on port 80 (not 443 — domain validation only works on port 80) and access to its static root from the machine you are going to run free_tls_certificates on.
Run:
free_tls_certificate domain-name-1.com [domain-name-2.com ...] /path/to/private.key /path/to/certificate.crt /path/to/website /path/to/acme/storage
On the first run:
- A new 2048-bit RSA private key will be generated and saved in /path/to/private.key, unless a file exists at that path, in which case that private key will be used.
- You’ll be prompted to accept the Let’s Encrypt terms of service. A new ACME account will be created and maintained for you in /path/to/acme/storage.
- An ACME HTTP01 challenge will be requested, a domain ownership verification file will be installed in /path/to/website/.well-known/acme-challenge/..., and it will wait until a certificate is ready.
- When the certificate is ready, the certificate _plus_ the certificate chain will be written to /path/to/certificate.crt (the certificate is written first, just as nginx would expect).
Subsequent runs will be headless and will just do the right thing:
- If the specified certificate file exists and is valid for the given domains for at least 30 days, the tool will exit without doing anything (with exit code 3).
- If the certificate file doesn’t exist, isn’t valid for all of the domains given, is self-signed, or is expiring within 30 days, a new certificate will be issued and the certificate file will be overwritten with the new certificate (and chain). (You are responsible for then restarting your web server so it sees the new certificate.)
Since the tool will only issue a new certificate when needed, you can run the tool in a nightly cron job to keep your certificate valid.
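The renewal decision described above can be sketched as a small predicate. The function below is illustrative only (the name and structure are assumptions, not the tool's actual source), but the 30-day threshold and the three triggers match the documented behavior.

```python
import datetime

RENEW_WINDOW = datetime.timedelta(days=30)

def needs_renewal(not_valid_after, cert_domains, wanted_domains,
                  is_self_signed, now=None):
    """Renew if the certificate is missing any requested domain,
    is self-signed, or expires within 30 days (illustrative sketch,
    not the tool's actual implementation)."""
    now = now or datetime.datetime.utcnow()
    if not set(wanted_domains) <= set(cert_domains):
        return True                      # not valid for all given domains
    if is_self_signed:
        return True                      # always replace self-signed certs
    return not_valid_after - now < RENEW_WINDOW  # expiring soon?

# Valid for 90 more days and covers the requested domain: do nothing.
far = datetime.datetime.utcnow() + datetime.timedelta(days=90)
assert not needs_renewal(far, ["a.com", "www.a.com"], ["a.com"], False)

# Expiring in 10 days: reissue.
soon = datetime.datetime.utcnow() + datetime.timedelta(days=10)
assert needs_renewal(soon, ["a.com"], ["a.com"], False)
```

Because the predicate is false for a fresh, complete certificate, running the check nightly (e.g. from cron) is cheap and idempotent.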
You can also use the tool to generate a self-signed certificate. This is handy when spinning up a new machine: Your web server probably won’t start until you have a certificate file in place, but you can’t get a certificate until your web server is running.
To get a self-signed certificate, just add --self-signed:
free_tls_certificate --self-signed domain-name-1.com [domain-name-2.com ...] /path/to/private.key /path/to/certificate.crt
Web Server Support
You need to have a web server running that is serving a directory of static files that free_tls_certificate can write to. It must serve the files over HTTP (port 80) as ACME domain validation does not occur over HTTPS.
You might want to use an nginx configuration like this (or the equivalent for your web stack):
server {
    listen 80 default;

    location / {
        # Redirect to HTTPS.
        return 301;
    }

    location /.well-known/acme-challenge/ {
        # Serve the Let's Encrypt challenge path (must be
        # over HTTP, not HTTPS).
        root /home/ubuntu/public_html;
    }
}

server {
    listen 443 ssl http2;
    server_name domain-name-1.com;
    ssl_certificate /path/to/certificate.crt;
    ssl_certificate_key /path/to/private.key;
    ... your other directives here ...
}
In this case, your /path/to/website would be /home/ubuntu/public_html.
Usage as Python Module
The file driver.py contains a complete, working example for how to use this client library. It is the code behind the free_tls_certificate command-line tool.
See driver.py for complete documentation. There are a number of edge cases to handle.
Here’s basically how it works. You would adapt this code for your server provisioning tool:
import requests.exceptions
import acme.messages
from free_tls_certificates import client

domains = ["mailinabox.email", ""]
agree_to_tos = None  # fill this in on second run per output of exception

try:
    client.issue_certificate(
        domains,
        "path/to/some/storage",
        certificate_file="certificate.crt",
        agree_to_tos_url=agree_to_tos)
except client.NeedToAgreeToTOS as e:
    print("You need to agree to the TOS. Set this on next run:")
    print("agree_to_tos = " + repr(e.url))
except client.NeedToTakeAction as e:
    for action in e.actions:
        if isinstance(action, client.NeedToInstallFile):
            print("Install a file!")
            print("Location: " + action.url)
            print("Contents: " + action.contents)
except client.WaitABit as e:
    import datetime
    print("Try again in %s." % (e.until_when - datetime.datetime.now()))
But see the full driver file for all of the error conditions you need to handle!
Usage Notes
You can request a certificate for multiple domains at once, probably up to 100 (which is Let’s Encrypt’s current maximum). The first domain you specify will be put into the certificate’s “common name” field, and all will be put into the certificate’s Subject Alternative Name (SAN) extension. (All modern browsers accept SAN domains.)
Note that Let’s Encrypt doesn’t yet (at the time of writing) support issuing certificates for internationalized domains.
You may use any Python string type (str, bytes, unicode) to pass domain names. If a domain is internationalized, use Python 2 unicode and Python 3 str instances to pass the Unicode form of the domain name. If the string is already IDNA-encoded (i.e. punycode), you may use any string type.
Testing
To test the library, set up a locally running Boulder server, which is the reference implementation of an ACME server.
- Install docker.
- Download the Boulder source code from.
- Change to the directory that you put Boulder in.
- Run FAKE_DNS=$(hostname -I) test/run-docker.sh (perhaps with sudo depending on your docker setup).
Boulder runs in its test configuration by default which performs “HTTP01” domain validation by querying the docker host machine on port 5002 no matter what domain a certificate is being requested for, which is handy for creating a test server to respond to those requests. (You still have to test with a plausible public domain name, however, so something.invalid will be rejected by your Boulder server.)
Create a virtual environment for testing if you don’t already have one:
virtualenv -ppython3 env
source env/bin/activate
pip install -r requirements.txt
Add:
127.0.0.1 x1.le.wtf
127.0.0.1 fail.le.wtf
to your /etc/hosts file. This is for our library’s client-side verification of the domain validation check, prior to submission of the challenge response to the ACME server. We use x1.le.wtf and fail.le.wtf as test domains (because boulder won’t issue certificates for invalid domain names, even in testing) that must resolve to localhost.
Start our unit test:
python test.py
This checks that the local Boulder server will issue a certificate for x1.le.wtf, and it checks other aspects of the library.
A Computer-based Smart Rifle With Incredible Accuracy, Now On Sale 551
WheezyJoe writes "A story on NPR reports that the TrackingPoint rifle went on sale today, and can enable a 'novice' to hit a target 500 yards away on the first try. The rifle's scope features a sophisticated color graphics display (video). The shooter locks a laser on the target by pushing a small button by the trigger...."
pfftt... (Score:5, Funny)
Re: (Score:3, Insightful)
No skill.
No Sport.
Might as well go to the game farm and shoot the deer in the small holding pen with a shotgun.
Just like fishing with dynamite.
Sounds like something invented by the same folks who did the Zune.
Re:pfftt... (Score:4, Interesting)
I understand that some people fish for the heck of it, but when I'm bothered enough to do it, it's because I want some fresh fish to eat. I'd use dynamite in a heartbeat if it were legal and I had a big group to feed.
Re:pfftt... (Score:5, Informative)
I understand that some people fish for the heck of it, but when I'm bothered enough to do it, it's because I want some fresh fish to eat. I'd use dynamite in a heartbeat if it were legal and I had a big group to feed.
Dynamite is indiscriminate: it kills a whole lot of other animals that you don't eat. Explosives can harm species like whales, important apex predators that rely upon hearing for hunting. If the explosive sinks low enough it can ruin the features on the lake/ocean bottom that are important fish habitat, which has already happened through the over-use of ocean-bottom trawling nets in many places and has ruined fisheries to the point where people have begun to sink artificial reefs to try and restore stocks. Basically, the list of reasons why this is a bad idea goes on
... and on ... and on. Fishing with dynamite is about as intelligent as slaughtering your cows with an RPG.
Re:pfftt... (Score:4, Informative)
God damn hippie.
Re:pfftt... (Score:4, Funny)
God damn hippie.
And proud of it...
Re: (Score:3)
"Really ?? Haven't seen any whales at the local fishin' hole."
I don't know you, but my guess is that's because you don't live near the ocean. Where I previously lived, in Juneau, Alaska, there were whales in the local fishing hole. Now I live in Madison, Wisconsin, and there are no whales in the local fishing hole.
Re: (Score:3)
Sounds like something invented by the same folks who did the Zune.
If one looks at the price tag, one would be tempted to compare the folks with the other (usually white, with its rounded corners protected by a patent) brand.
I'd bet the market-segments for both of the products would show a higher overlap too.
Re:pfftt... (Score:5, Informative)
This article doesn't say it but they throw in an iPad with their app when you buy one of their guns. A $500 iPad is an affordable freebie when you are selling a $17,000 weapon.
Re:pfftt... (Score:5, Insightful)
Bah!
If you're not barefoot and hunting with hand-lapped flint point on a spear, you're cheating.
-jcr
Re:pfftt... (Score:4, Funny)
Citation needed. As far as I know there is no place on earth where you can go a few miles without seeing a McDonalds and forgetting about prey animals.
Re:pfftt... (Score:5, Informative)
Re:pfftt... (Score:5, Insightful)
As for the long-distance running adaptation, my hypothesis is we might have evolved that not mainly because of persistence hunting but because of war. There's not really much selection pressure for persistence hunting: if you are a social animal (like humans and apes) you can hunt very successfully in groups - lions, hyenas, wolves, dogs, apes etc. do it.
In contrast, war could have produced rather significant selection pressures. In human-versus-human wars, the predator and prey are the same species: whatever big advantage you have is likely to be in the next generation of survivors. Being able to run away from dozens of persistent enemies until you find a hiding place or until the sun sets keeps your genes alive. In contrast, being able to sprint at 80 kph for a minute when the enemy can also sprint at 80 kph for a minute doesn't help much with your survival when there are many enemies. Being able to run long distances to attack an enemy or carry messages is also helpful.
Re:pfftt... (Score:4, Informative)
There isn't much to back up your thesis, especially since persistence hunting is still practiced in Africa.
Re:pfftt... (Score:5, Funny)
Re:pfftt... (Score:5, Interesting)
Might as well go to the game farm and shoot the deer in the small holding pen with a shotgun.
There are plenty of places that raise and release tame gamebirds with little fear of humans, and charge people to go out and shoot them. Dick Cheney was on one of these "hunts" when he shot a lawyer in the face [wikipedia.org].
Re:pfftt... (Score:5, Informative)
Such hunting isn't much easier. When you hunt birds it should take one shot, maybe two, to take the bird out of the sky. A "tame" bird has to fly away, just like a wild bird, in order to be shot. It's not like it walks up to you. They're not really tame, just farmed, just as a chicken on a chicken farm isn't tame.
What those ranches provide is time. When you hunt wild birds there's lots of waiting. Either you're walking and waiting for some random bird to be flushed, or you're waiting for them to leave or return (happens only twice a day for ducks).
If the farmed birds flock and you're pumping out shots like a crazy man then, sure, you're just an idiot.
You can argue authenticity all you want, but at the end of the day shooting a small bird flying away with a single shot is actually pretty hard, whether "tame" or not. And unless you're subsistence hunting and doing it on a regular basis, you have to learn somehow. Clay pigeons don't exactly zig-zag.
Re:pfftt... (Score:5, Interesting)
Such hunting isn't much easier. When you hunt birds it should take one 1 shot, maybe 2, to take it out of the sky.
Yup, true dat. I bought a single-shot German break-action rifle, and every once in a while when I take it to the range somebody comes over for a look (sometimes they even mistake my KB for a shotgun) and then criticises me for not buying a bolt-action repeater. I usually reply by asking them how many shots they feel are optimally needed to take down one deer. I only do target shooting, but even I know that the answer is one shot, two at the most if something goes very wrong, and for a rapid second shot I'm better off with a double rifle than a 5-shot bolt-action repeater, since semi-automatic rifles are forbidden here except for shooting at paper targets and getting caught hunting with a semi-auto rifle can get your firearms license revoked for a loooooong time.
Re:pfftt... (Score:5, Funny)
Hmm.... organizing hunts where lawyers can be shot in the face... sounds like a business model!
Re:pfftt... (Score:4, Interesting)
While you can fault his activity as that of an utter coward, you cannot fault his aim.
Re: (Score:2)
I'd mod you up, but you're already at five. You deserve a 6 out of 5, my good man.
Re:pfftt... (Score:5, Insightful)
No, the Super aEgis II (sentry gun) is the ultimate "Aimbot". I wouldn't fucking go near one of those in a time of war. Hell, I wouldn't walk in front one even if someone told me it was in shutdown mode.
2nd Amendment Question (Score:5, Interesting)
Where do you draw the line between what is and isn't a firearm?
Does the 2nd Amendment allow (in your mind at least) a citizen to have a rocket launcher or a laser gun?
What are you going to do when the technology of simple sidearms develops to the point where you can take out a room full of people by pressing a trigger and letting your gun do all the aiming?
Would genuinely like to hear from a pro gun NRA type.
Re: (Score:3) owne
Re:2nd Amendment Question (Score:4, Interesting)
Re: (Score:3)
Everything is a chemical.
A non-chemical weapon would be a weapon that does not consist of baryonic matter.
Re:2nd Amendment Question (Score:4, Insightful)
Re: (Score:3, Informative)
First a couple clarifications: The Second Amendment doesn't allow or create a right to keep and bear arms for us. The Second Amendment simply protects the right from being infringed upon by our government (read it and see). The right to keep and bear arms is actually derived from our Natural Rights. This is often difficult for non-Americans to understand since rights are given or allowed by the government of most other countries. In the USA, while our Constitution is the foundation for all our laws and
Why? (Score:3, Interesting)
A gun is a weapon first and foremost (Score.
Re: (Score.
actually, lots of snipers are interested (Score:2, Interesting)
According to the previous article, professional snipers (SWAT, hostage rescue, etc.) are interested, mainly because of the video record of exactly what the aim point was.
Re: (Score:3, Funny).
i'm pretty sure the problem is the people NOT authorized to use legal force, like my gf's husband...
Not just for putting holes in paper (Score:5, Insightful)
This weapon will never be used in anger by any entity authorized to use lethal force in anger:
You cannot possibly be that naive. That specific weapon may not be used in combat but the basic technology will without a doubt make its way to people who will use it to kill living beings, either human or animal. I'm not even making a moral judgement about that, it's just a clearly obvious fact.
snipers would never use this,
They might not use that particular system but I promise you snipers can and will use a targeting/tracking system should one be available that fits their mission parameters. I would be deeply shocked if such technology was not being very actively worked on by the military.
it is too expensive and is unnecessary for the average foot soldier, and too large and cumbersome to be used on anything other than a rifle that is stationary and supported, ie on a target range.
Technology can be miniaturized and will be. Furthermore if the technology is large and needs support, it isn't exactly hard to attach it to a vehicle. The military does it all the time.
This technology is clearly designed for target and hunting use only, which would completely negate the point of both activities.
The technology is designed to cause a bullet to hit a target more reliably. The nature of the target is irrelevant. Plus, you are contradicting yourself: if it can be used for hunting then it is portable, and if it is designed for hunting there is little difference between hunting animals and hunting humans beyond the fact that humans can (and will) shoot back.
Re: Not just for putting holes in paper (Score:5, Informative)
Re:It will be used by your kid (Score:4, Insightful)
In a situation where there are other armed people, you want something that can just keep shooting - you'd just "spray and pray", something that this gun can't do. In a situation where you've got no chance of return fire (like in designated "gun-free zones" such as Sandy Hook) it doesn't much matter, because you can just walk up to someone and shoot them point blank if you want; they have no way to (effectively) defend themselves.
When it comes to kids, it's important that they learn at an early age to shoot responsibly. The problem is, too many kids get their first exposure to firearms from Hollywood, from GTA and from rap music, rather than from responsible target shooting or hunting. The key is to teach them responsibility and facts: neither that a gun is a toy, nor that guns should be feared.
Re:It will be used by your kid (Score:5, Insightful)
"This weapon will never be used in anger"
I bet every hothead who's gone on a gun rampage has said that, and every dad whose kid gets hold of it.
Gun rampages are typically entered into with cool calculation and a bit of psychopathy/sociopathy; they are done by mentally ill persons or political zealots. The one exception I can think of is the Texas Tower Sniper, and it turned out he had a brain tumor.
Re:It will be used by your kid (Score:4, Informative)
Long guns are almost never used to kill people (domestically, anyway). Your odds of being beaten to death with fists are five times greater. For the rampage killer pistols make more sense for a whole host of reasons.
Re: (Score:3)
I disagree. Concealment is a pretty big plus for these kinds of people - if you lug a rifle around populated areas people start calling the cops. Beyond that, pistols are lighter, pistol ammunition is lighter and deadly enough at close range, pistols are faster to reload, it's easier to shift targets with a pistol, and it's harder to grapple someone with a pistol. Beyond that these guys are mostly penniless losers, and pistols are cheaper.
Re:It will be used by your kid (Score:5, Informative)
I disagree. Concealment is a pretty big plus for these kinds of people - if you lug a rifle around populated areas people start calling the cops.
In most of those cases, the crazies go by car until the very spot where they start shooting, so they can easily transport pretty much any gun they want.
Beyond that, pistols are lighter
Doesn't really matter - it's only a factor when you have to lug it around for a considerable amount of time to notice the difference. When actually shooting, a heavier rifle is still easier to handle because most of its weight is supported by your shoulder.
pistol ammunition is lighter
It's not, actually. The case is shorter and has less powder, but the bullets themselves are heavier. For example, a Federal HST 147 gr 9x19mm round (which is about the best you can get in this caliber in terms of stopping power and overall efficiency on unarmored targets) weighs the same as a Hornady TAP 62 gr .223 round, while the latter is considerably more efficient and deadly.
Not that it's really relevant - a person can easily carry 6 30-round mags of 5.56mm concealed (under a jacket or vest, say), which is more than was ever actually used in such circumstances.
And, of course, there are many rifles chambered in pistol cartridges - Hi-Point carbines, Kel-Tec Sub-2000, Beretta CX4, Marlin Camp 9 and 45, Ruger PC9 and PC4, and semi-auto replicas of various submachine guns - Thompson, PPSh, PPS, Uzi etc. Not to mention pistol-caliber AR uppers.
pistols are faster to reload
Only insofar as "hand meets hand" arrangement of the mag well, which is not exclusive to pistols, either. From the list above alone, four carbines are designed in the same way.
it's easier to shift targets with a pistol
Not so. Shifting targets with a pistol requires a wide movement of both arms, which at the same time bear the full weight of the firearm. With a rifle, you only have to swing one arm - the one supporting the front - and even then a good half of the gun's weight is not moved much and is supported by the shoulder. This is especially true of straight blowback pistol-caliber carbines, which tend to be less front-heavy due to bolt's position and weight (Sub-2000 in particular has a very heavy bolt that is completely behind the pistol grip - it rides in the stock tube).
and it's harder to grapple someone with a pistol.
I doubt it comes to that often (but if you seriously think it is a consideration, a knife bayonet on the rifle would largely rectify this problem).
Beyond that these guys are mostly penniless losers, and pistols are cheaper.
Not really. A Hi-Point carbine can be easily had for $300, and even less if you look around - that's 50% less than a Glock 17. Going into "real rifle" territory, a WASR AK-47 can be had for around $400 (still less than a Glock); a Chinese SKS that takes AK mags, for $500. A used Mini-14 in 5.56mm can be found for under $600; a Kel-Tec SU-16 in the same caliber, for as little as $400.
For a handgun, the cheapest I can think of that isn't woefully inadequate (i.e. fires a reasonably potent round and can be quickly reloaded) would be Tokarev or a clone - e.g. Zastava M57, which would go no lower than $200; or one of Hi-Point pistols for about $150. But both of those are kinda crappy and not particularly reliable, and that's not that big of a difference in price compared to a much more reliable and powerful AK.
Besides all that, don't you think that your points don't quite match the observed facts? I mean, in most rampages so far, we have seen the perpetrators use long guns. One can argue whether that is the most suitable weapon for it or not, but that's what actually get used.
Re:It will be used by your kid (Score:4, Informative)
Besides all that, don't you think that your points don't quite match the observed facts? I mean, in most rampages so far, we have seen the perpetrators use long guns.
No, in fact I don't think that's true at all. The guy who shot Rep Giffords used pistols, as did the VA Tech shooter.
Re:It will be used by your kid (Score:4, Informative)
What are we talking about, movie theaters and classrooms, targets 15' away and moving?
Yes, exactly.
A rifle with more moving parts will be more likely to misfire or jam
A rifle doesn't have to have more moving parts. In fact, a 9mm pistol would have a more complicated internal mechanism than a 9mm carbine (because the carbine can use straight blowback thanks to the ability to stick a heavier bolt into it, while the pistol would have to be locked breech or some form of delayed blowback).
easier for someone to grab onto
I very much doubt that is a practical consideration.
more difficult to control.
A rifle is far easier to control than a handgun. Inexperienced handgun shooters, until they're taught the Weaver stance and learn to do it right from practicing it, have pretty crappy accuracy (yes, even at 15 feet). Seen it plenty of times firsthand. Not so with a rifle, it's a much more "intuitive" interface, so long as you shoulder it (even if it's not done quite right).
Why on Earth would you use a clunky rifle
Because it's faster to aim (so it's not really "clunky")?
Note that we're talking AR, AK and similar carbines here, as short as a civilian-legal firearm can get (without ATF stamps and other hurdles) - not a full-sized medium- or high-power rifle like a .308 or .30-06. The point here is having a stock, not having a longer barrel. Weight-wise, you can trim an AR down to around 5.5 lbs (with a plastic lower and carbon fiber forend and stock). Or you can take a Kel-Tec SU-16, which is 4.7 lbs and takes the same standard AR mags.
And if you look at pistol-caliber carbines, they can be surprisingly light. Sub-2000 is under 4 lbs unloaded, and most of that weight is in the heavy bolt that is in the stock tube - so the shoulder bears most of it. Aiming it is lightning quick, much more so than with a full-size pistol.
If you figure you'd reload either weapon at least once, you're looking at what, 60 rounds for the rifle, 40 for the handgun?
Reliable 40-round mags for ARs do exist, so it would be 80 rounds for a rifle. For pistols, you can get 30-rounders, though they're somewhat unwieldy.
How many reloads do you think are realistic in this situation?
Adam Lanza reloaded six times (tactical reloads - fresh magazine before each room; he didn't actually spend all 30 rounds in every mag).
Re: (Score:2)
There is a reason guns have targeting/tracking systems when used in anger
Sure. But I know very few people who are "angry" at the deer that they are planning on having for dinner. (I'm excluding military applications for this)
Sometimes the point it just to hit the target and it doesn't matter who gets credit for the aiming.
Um, if the point isn't to demonstrate/exercise your skills in the field, why not go buy your game meat from the store?
Re:A gun is a weapon first and foremost (Score:5, Funny)
Re:A gun is a weapon first and foremost (Score:5, Insightful)
(I'm excluding military applications for this)
I'm not. The primary application for any targeting system is military. The fact that it can be used for game or target practice is secondary.
Um, if the point isn't to demonstrate/exercise your skills in the field, why not go buy your game meat from the store?
Apparently it wasn't sufficiently obvious that I was talking about military applications. When you are trying to kill something dangerous it doesn't really matter if you or a computer does the actual aiming. However even if we are talking about hunting, the important decision was to pull the trigger. That is when the person controlling the weapon decided to kill something. Focusing on how the aiming is being done kind of misses the most important thing.
I don't really understand the point of "demonstrating your skills" by killing some harmless creature. That is just killing for fun which is frankly rather barbaric and certainly not very respectful of the life that was just ended. I don't object to hunting if you really need the food (not applicable for most of us) or if there are humane environmental considerations. But most hunters I know do it because they find it to be fun. They enjoy the act of killing something and sometimes they also enjoy the challenge of accomplishing that feat. But if they really wanted a challenge, why not do it with a knife or at worst a bow, up close and personal. Using a rifle that can kill at several hundred yards to hunt a woodland creature is not exactly a huge challenge. If you want to test your sharpshooting abilities, you don't need to kill something to do that. Hunting isn't evil but it frequently is pointless and cruel.
Re: (Score:3)
Since when did they sell game meat in stores?
Aside from the obvious problem that "game meat" doesn't come from stores by definition, even when you can find it (e.g. duck meat, which is relatively easy because it's common in Chinese and French cooking) it isn't from the same (sub-)species as the wild version and tastes different because it's been raised on commercial feed instead of foraging.
What country do you live in? (Score:3)
Re:Why? (Score:5, Insightful)
Next you'll be petitioning against adding rifling to barrels.
Now I know its not the same but the point of shooting is to hit the target accurately.
You want accuracy and not blind luck so you add rifling to the barrel.
This is just another feature which improves accuracy.
If your point isn't accuracy then sure do whatever you want. You could do it with one arm tied behind your back just as a challenge.
Re:Why? (Score:5, Insightful)
Next you'll be petitioning against adding rifling to barrels.
Agreed. The "real" way to do something is whatever somebody grew up with. People talk about a manual tranny being real driving, but I say it's degenerate ever since they added synchromesh. A caveman, heck, somebody from the early 19th century would think a modern rifle is cheating.
Re: (Score:2)
Re: (Score:3)
Pfah. *Real* men solve multivariate differential calculus problems entirely in their head. A few charcoal marks on the wall are permissable for truly complicated problems, but only after the first couple hours of work.
Re: (Score:3)
I would agree. But for most people the primary purpose of a car isn't "fun", but "get from A to B". Similarly for a gun, the entire reason the device was invented is to kill things.
Re: (Score:3)
So....Fantastic! The sooner the better.
Actually add human recognition software to this and it could reduce accidental (and deliberate) deaths drastically.
Funnily enough just like self driving cars. Computers just do it better.
Re: (Score:2)
You will quickly learn the point when the target is shooting back at you.
Relax, your skeet have no trigger fingers.
Re: (Score:2)
If you want aim assist, play a console FPS. Otherwise, what's the point?
TFA
"They like to post videos; they like to be in constant communication with groups or networks," Schauble says. "This kind of technology, in addition to making shooting more fun for them, also allows shooting to be something that they share with others."
...
Rifle maker Remington Arms wants to use the technology in rifles it wants to sell for around $5,000.
Answer: this is the "iPad of guns" - owning and using one sets the owner a head over the others (with the "Android" version to be sourced from Remington).
Apropos "head over the others" - I imagine it won't be so funny if the term "share to shooting" would be used under some other meanings/contexts. You know... the ongoing success of the sharing may highly depend which end of the gun is used in sharing.
Re: (Score:2)
If this is the iPad of guns...then I am dying to see what HP comes up with! Perhaps it'll shoot cake mix and spite instead of bullets?
Re:Why? (Score:5, Insightful)
Re: (Score:3)
More sinisterly, this means that someone can shoot the president from farther away, for example, a range of 265 ft, without any training.
Re: (Score:3)
There are a whole lot of people out there who can hit the kill zone on a man-sized target from 265 ft. That's not a long distance for a rifle. A novice could probably do it after a lesson and a half hour of practice. Qualification range for marines is 500 meters from a prone position.
Re: (Score:3)
Your novice wouldn't even get a chance to fire, even with this rifle.
Assuming the Secret Service saw the assassin. I suspect camouflage/hiding is at least as important as marksmanship. Heck, Reagan came within a hair's breadth of being killed by a guy with a pistol. Sheer luck he didn't die.
Re: (Score:3)
Re: (Score:3)
From the graph the 300WM PGF FSSP is almost 100% accurate out to about 1000 yards, or almost two thirds of a mile. At that distance it's going to be virtualy impossible to locate someone from a single shot if they're even moderately hidden. Moreover if someone is commited to their cause it's not a question of whether they can get away - the odds of that after taking out a high profile target are pretty slim unless they're an experienced professional. The question is only whether they can get the chance to
Re: (Score:3)
To me it completely misses the point of shooting, whether target shooting or hunting (and for hunting it completely removes the sport aspect).
For some hunters, the point is to get food.
Re: (Score:3)
Bah, that's nothing, I once killed a polar bear with a banana.
Show us the video!
What could possibly go wrong? (Score:3, Interesting)
A gun with an internet-connected onboard computer. Malware for it could be deadly.
Re: (Score:2)
A gun with an internet-connected onboard computer. Malware for it could be deadly.
To say nothing about malintent.
Re:What could possibly go wrong? (Score:5, Funny)
A gun with an internet-connected onboard computer. Malware for it could be deadly.
Malware doesn't kill people... people kill people.
(grin)
Obligatory (Score:5, Funny)
Don't you know Linux [thenanobyte.com] is secure by default?
Imagine a Beowulf cluster of these!
Uh
... never mind.
Sounds compltely useless as a sniper weapon. (Score:3, Insightful)
Snipers use cover and concealment to hide their position. That's not really going to happen with a glowing video display and a spotter with a glowing iPad. Sounds like little more than an expensive toy.
Re: (Score:2)
Re: (Score:3)
It's probably still too expensive; but I wouldn't count it out of the 'lite' end of the sniper market just yet.
Outside of jurisdictions where(either because they are large and rough, or because the sheriff is compensating for something) some sub-group of the police are practically a standing army, a lot of police forces spend most of their time doing things that require little or no marksmanship(during which time budget cuts or apathy are liable to come after their range time), with the occasional incident
Re:Sounds compltely useless as a sniper weapon. (Score:5, Insightful)
any trained sniper already has a ballistics computer and range finder wherever they go. It's called their head.
That's what some engineers said when they first came out with this wussy CAD stuff. Sliderule and paper is all you need. Probably some truth to it in the early days, but the tech improves.
Re:Sounds compltely useless as a sniper weapon. (Score:5, Informative)
Actually, most snipers now carry around a ballistics computer that their spotter uses to calculate the hold offset. This is sold for example by the folks that sell the 408 CheyTac. (The CheyTac holds the -- non-published or acknowledged -- record for the longest wartime kill in Afghanistan / Pakistan btw. at a distance of approximately 2 miles.) The military buys the 408 CheyTac and ballistics calculator as a complete "system".
I should also point out that despite what the article says, it will still take an experienced shooter to shoot this to its maximum potential. How you hold and handle the rifle will affect its recoil and its accuracy as the rifle recoils while the bullet is still in the barrel. The rifle will also need to compensate for mirage at longer distances. Hard to hit something at 1,000 yards when the target keeps dancing around in your sights.
Re: (Score:2)
It's not designed with the military in mind. Just not rugged enough. This is designed for the rich hunting and target shooting crowd in benign environments.
But law enforce has taken an interest. Not for the targeting capability, but for the video. Now the brass can look over a sniper's shoulder and see what he sees. The video recording also allows for later evaluation.
Re: (Score:3)
You mean like this one?
(See my earlier reply regarding the 408 CheyTac sniper system. This is the associated linky.)
Um.... (Score:2)
While the computer will do a better job with regard to bullet drop and deflection due to wind (assuming the computer is given correct information about wind, that is), there's still the question of shake when it comes to "pulling the trigger" on the laser. To some degree, this is nothing more than a wee bit more automation than you get from using a computer to calculate what your sight adjustment should be. A wee bit.
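The bullet-drop part of that calculation is dominated by simple kinematics. A vacuum (no-drag) estimate puts a lower bound on it; the muzzle velocity below is an assumed .308-class figure, and real ballistic computers add drag and wind models on top of this:

```python
# Vacuum-trajectory bullet drop: time of flight t = range / v0, drop = g*t^2/2.
g = 9.81                  # m/s^2
muzzle_velocity = 850.0   # m/s, assumed .308-class load

for range_m in (100, 300, 500, 1000):
    t = range_m / muzzle_velocity
    drop_cm = 0.5 * g * t * t * 100.0
    print("%5d m: t = %.2f s, drop = %6.1f cm" % (range_m, t, drop_cm))
```

Even this no-drag floor shows why hold-over past 500 m is unforgiving; drag makes the real figures substantially larger.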
Interesting as a technology experiment, but... (Score:2)
...outside of static target shooting, it doesn't appear to be of much use; and, for static target shooting it is only of value as an evaluation tool.
I saw this in a movie and they used to frame some (Score:2)
I saw this in a movie, and they used it to frame someone for an assassination
Re: (Score:2)
I presume there is a hardwired failsafe that requires the trigger to be held down for the gun to be able to fire. You just keep the trigger held while fine tuning the aiming.
Re:Cancel? (Score:4, Informative)
According to the article that is exactly how it works.
Re:Cancel? (Score.
Quoted from the Ars Technica [arstechnica.com] article, from back when Slashdot originally ran the article [slashdot.org].
Re:Cancel? (Score:5, Informative)
A gun that decides when to fire is nothing new. Battleship main guns did this before WWII. The target was locked in, and the firing computers (mostly mechanical) fired when the pitch and roll of the ship allowed a hit. And they didn't have an abort.
But the big problem that the summary overlooks is that it's just about as hard to put a laser rangefinder on a target as it is to put a bullet on target.
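The battleship scheme described above amounts to gating the trigger on ship attitude. Stripped to a toy form (thresholds and samples invented for illustration, not naval fire control):

```python
# Fire only when pitch and roll pass through the firing-solution window.
def should_fire(pitch_deg, roll_deg, tolerance_deg=0.25):
    """Gate the trigger: both axes must be inside the solution window."""
    return abs(pitch_deg) <= tolerance_deg and abs(roll_deg) <= tolerance_deg

samples = [(1.2, -0.8), (0.4, 0.1), (0.2, -0.1), (-0.6, 0.9)]
fired = [should_fire(p, r) for p, r in samples]
print(fired)   # only the third sample is inside the window
```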
Re:Cancel? (Score:5, Informative).
Re: (Score:3)
Re: (Score:3)
Re: (Score:3)
Re: (Score:3)
Re: (Score:3)
You should probably WTFV - there is no need to hold it on the target, it basically just marks the target spot first and then fires as soon as the shooter manages to put the gun in the right position to hit it.
Re: (Score:2)
That's true. But it does allow you to "shoot" with the laser without missing with a bullet
Tanks work the same way (Score:5, Interesting)
The FCS on a tank works mostly the same way.
The sight is mounted on a mirror that can pivot on two axes on good tanks, and one axis on an Abrams. The ballistic computer knows what ammunition is in the breech (a user input - by the loader on good tanks, by the gunner on an Abrams) and so knows the ballistic profile of the round being fired. A slew of other sensors measure crosswinds, barrel droop, and the like. The laser rangefinder provides range, and an angle encoder in the turret slip ring provides rate of turret rotation, which provides a measure of target relative motion.
Gunner tracks target and then lases to get range. The FCS then jumps the gun barrel in both elevation and rotation while the sight mirror jumps back in the other direction(s) to keep the sight picture unchanged. The gunner fires, and the round impacts where the ballistic solution says it should.
From the gunner's perspective, you lay on target, track for a second, then fire the laser and fire the gun in close succession ("lase and blaze") and the round "magically" flies out and hits the target - no matter if you are moving, the target is moving, or both. You can be driving along at 60 km/h and hit a target moving 60 km/h 2500m away on the first shot.
DG
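The "lase and blaze" solution the commenter describes reduces, in its simplest planar form, to a lead-angle calculation. All numbers below are illustrative assumptions; a real FCS layers ballistics, cant, and sensor fusion on top:

```python
import math

# Toy first-order lead solution for a crossing target.
target_range_m = 2500.0          # from the laser rangefinder
target_speed = 60.0 / 3.6        # 60 km/h crossing speed, in m/s
muzzle_velocity = 1500.0         # m/s, APFSDS-class round (assumed)

time_of_flight = target_range_m / muzzle_velocity         # ~1.67 s
lead_m = target_speed * time_of_flight                    # target travel in flight
lead_mrad = 1000.0 * math.atan(lead_m / target_range_m)   # angular offset to jump
print("lead %.1f m, %.1f mrad" % (lead_m, lead_mrad))
```

That roughly 11 mrad jump is exactly what the FCS applies to the gun while the sight mirror counter-rotates to keep the gunner's picture steady.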
Re:Tanks work the same way (Score:5, Interesting)
Hmmm.... looks like the M1 Abrams might be a proper tank after all.
Line-of-Sight Stabilization Systems [astronautics.com]
The dual-axis head mirror can be operated with either analog or digital VME control electronics.
The dual-axis system provides improved image acquisition, improved target tracking, and maintains the sight aim reticle at the sight's center of view.
The dual-axis system is available in two configurations. The larger assembly is designed for the M1 Abrams head assembly envelope. The smaller unit will fit within the M60 tank or standard M36 sight head periscope sight.
A great book on the M1 Abrams: King Of The Killing Zone [amazon.com]
Hats off to Her Majesty's research establishment for the development of Chobham armour [wikipedia.org].
Re:Tanks work the same way (Score:5, Insightful)
Are you implying that a tank with one of the best operational records in the history of tanks
You don't get "one of the best operational records in history" by pitting your tank against competing models that are two generations older than it (and then also trimmed-down export models). You rack up kill count, yes, but it's not the same thing. And I'm not aware of any instance in which Abrams actually went against any of its direct competitors.
And yes, Abrams does have quite a few WTF moments about it compared to most other modern tanks that are in the same category. They aren't secret, either; but there's no real point in fixing them since massive tank-on-tank WW2-style battles are not happening any time soon, and it does work great against older tanks or in counter-insurgency operations, which is the kind of things that it's actually being used in today.
Re: (Score:2)
One could capture a series of snapshots of the aiming point, use some sort of smoothing algorithm to filter out the jitter and figure out what the intended target point is.
From that point on, its similar to how a marksman shoots. You don't try to hold the rifle perfectly still. You squeeze as the crosshair swings across the target.
Re: (Score:3)
On a gun that decides when it's time to fire, I hope there's a cancel button.
I also bet there's someone that gets this, pulls the trigger at a picture of someone they hate, and then leaves the gun lying around their house. It wouldn't work, not that it wouldn't be fun to try.
The abort would be when you release the trigger.
Re: (Score:2)
Re: (Score:3)
How does it detect the wind at 100, 200, 300, and 400 yards? How does it detect the change in wind speed over that full distance? It is impossible.
While, in this implementation, the wind speed is not automatically detected, it [wikipedia.org] is [leosphere.com] not [gizmag.com] impossible [klimasnakk.com] to do it... nor is it new [bts.gov] (the idea is at least as old as 1977).
Re: (Score:2)
You use a spear? Try not clipping your fingernails for a month and hunting like a real man.
That said, it's a very curious definition of 'fair' when a game's historical stats are as lopsided as hunting. Call me back when team wildlife kills and butchers the hunters at a rate with, say, three orders of magnitude, of the rate at which team hunters kills and butchers the wildlife...
Re: (Score:2)
Call me back when team wildlife kills and butchers the hunters at a rate with, say, three orders of magnitude, of the rate at which team hunters kills and butchers the wildlife.
I'll settle for even odds. Anything less challenging and you might as well use a slaughterhouse.
Re: (Score:2)
I'm not sure if tragicomic alcohol-related accidents help compensate; but slaughterhouses(by virtue of the absolutely punishing pace and general powerless expendability of the peons on the line) actually chew people up pretty hard. They process livestock a great deal faster, of course; but the rates of occupational morbidity and mortality aren't pretty.
Re: (Score:3)
I for one welcome our new cowardly infantery overlords. They can join the heroic drone pilots bravely risking life and limb on their daily commute to Langley so they can keep the world safe from goat herds and wedding parties. | https://tech.slashdot.org/story/13/05/16/016209/a-computer-based-smart-rifle-with-incredible-accuracy-now-on-sale?sbsrc=md | CC-MAIN-2016-44 | refinedweb | 6,766 | 70.94 |
Biking data from XML to analysis, revised
Am I getting slower every day?
If you’ve ever been a bike commuter, you’ve probably asked yourself this question. Thanks to these little devices we can now attach to ourselves or our bicycles, we can now use our own actual ride data to investigate these kinds of questions, as well as questions like these:
- If I’m going to work from home one day a week, which day would maximize my recovery?
- Do I tend to ride faster in the morning or the evening?
Last year, I wrote a few posts about learning how to parse a set of Garmin XML data from 2013 and analyze it using pandas, matplotlib, and seaborn. This year I redid the same analyses, with a new installment of data from 2014.
This new and improved version is both prettier and more interesting, thanks to having more data, and more experience with pandas and seaborn (not to mention that pandas and iPython notebook have both been updated a few times). In this series of posts, I’ll show what I did to improve on my initial attempts. In later posts, I’ll present some new analyses of higher-density time series data, which include detailed 3D location information in the form latitude, longitude, and altitude.
First, get a rough idea of what is going on
One of the first questions we wanted to ask was about relative speeds in the city vs. suburbs, and morning vs. evening.
Last year I made this plot, roughly separating city vs. suburbs based on the length of the trips.
This year I used time of day to get better separation.
```python
sns.set(style="ticks")
sns.jointplot("hour", "average_mph", data=filtered)
```
And that gave me a clear idea of what might work, but this kind of plot is not very easy to look at if you’re not a data person.
Clarify relevant questions
So to make it a little clearer, and be able to ask better questions, I wrote a simple helper function to add a flag for the four main categories I wanted: morning city, morning suburbs, evening city, evening suburbs.
On further revision, I added a couple of extra flags for two kinds of ‘outliers’: one I called ‘evening later’ for when Caltrain was delayed, plus ‘other’ for those random data points that were probably due to the Garmin device being on or off when it shouldn’t be, aka, user error.
Then I used a list comprehension to apply that to the ‘hour’ column in my dataframe, which I had previously generated using the awesome built-in datetime handling that pandas makes so easy. Then I made a better plot with seaborn.
```python
def leg_definer(hour):
    """ (int) -> (str)
    Helper function to identify trip legs by hour, i.e.
        6 AM: first leg (to Caltrain) - morning_city
        7 AM: second leg (to work) - morning_suburb
        4 PM (16): 3rd leg (return to Caltrain) - evening_suburb
        5 PM (17): 4th leg (return home) - evening_city
        later - evening_later (Caltrain delays, etc)
        other hour: other

    >>> leg_definer(6)
    'morning_city'
    """
    legref = {6: "morning_city",
              7: "morning_suburb",
              16: "evening_suburb",
              17: "evening_city",
              9: "other",
              11: "other",
              12: "other",
              15: "other",
              18: "evening_later"}
    return legref[hour]

filtered["leg_flag"] = [leg_definer(hour) for hour in filtered["hour"]]

g = sns.lmplot("day", "average_mph", filtered, hue="leg_flag", fit_reg=False)
g.set(xticks=[0, 5, 10, 15, 20, 25, 30], ylim=(5, 24))
```
And I think that’s pretty cute, but it doesn’t really show any meaningful patterns, because we’re looking at all the months on top of each other (day here is coming from the date, aka ‘day of the month’).
Focus on how to see what you’re looking for
I realized grouping by weekday was more likely to be interesting. And then I got caught by one of those random little pandas gotchas: weekday is a method, not a datetime attribute. So it’s item.weekday(), not item.weekday (so it’s different from month, day, or hour).
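The gotcha is easy to reproduce in isolation: on a `datetime`, `month`, `day`, and `hour` are plain attributes, while `weekday` is a method:

```python
from datetime import datetime

ts = datetime(2014, 5, 1, 6, 30)
print(ts.month, ts.day, ts.hour)   # attributes -> 5 1 6
print(ts.weekday())                # method call -> 3 (Thursday; Monday is 0)
```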
I also had to use a plotting trick of adding horizontal ‘jitter’ to make it easier to see all the dots on the scatterplot that would otherwise be overlapped.
```python
filtered["weekday"] = [item.weekday() for item in filtered["zoned"]]

sns.set_context("talk")
g = sns.lmplot("weekday", "average_mph", filtered, hue="leg_flag",
               x_jitter=0.15, fit_reg=False)
g.set(xlim=(-1, 6), ylim=(5, 24))
```
This plot is starting to make more sense. Now, we can start making some observations and generating new hypotheses (which may or may not match up with our previously qualitative impressions).
- Suburbs have higher average speeds (gold and red dots) than city (hypothesis: traffic)
- Morning city is much faster than evening city (hypothesis: traffic)
- Morning suburb is a bit faster than evening suburb (hypotheses: tired? slope? wind?)
- Fewer rides on Thursdays (weekday 3) make sense, since this cyclist usually works from home on Thursdays (but not always). Interestingly, Thursday rides seem to cluster toward the faster end (hypothesis: taking Wednesday off to recover makes Thursday’s ride easier)
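Those hypotheses are also checkable as plain numbers before reaching for another plot. With a toy stand-in for the post's `filtered` frame (speeds invented for illustration):

```python
import pandas as pd

# Toy stand-in for the post's `filtered` frame (values invented).
filtered = pd.DataFrame({
    "leg_flag": ["morning_suburb"] * 3 + ["evening_suburb"] * 3,
    "average_mph": [18.2, 17.9, 18.5, 16.1, 15.8, 16.4],
})
summary = filtered.groupby("leg_flag")["average_mph"].agg(["mean", "count"])
print(summary)
diff = summary.loc["morning_suburb", "mean"] - summary.loc["evening_suburb", "mean"]
print("morning - evening: %.2f mph" % diff)
```

On the real data, the same two-line groupby gives the effect size that the violin plots below visualize.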
Sometimes a different type of plot works better to get your point across
Now, it seemed like the relevant finding was that Mornings were generally faster than Evenings, and I decided to focus on the Suburbs rides since they were less vulnerable to the confounding (!) effects of traffic.
I chose a violin plot to show the distributions more clearly. I hadn’t done this quite this way before, but it’s actually very easy to create a plot object, and add a couple of subplots. Then I tweaked it a little more: I adjusted the range of the axes, as well as getting rid of the tick-marks along the edge, to make it less cluttered. (Also, I didn’t know how to add a title to a plot, so I had to look that up!)
```python
f, ax = plt.subplots()
g = sns.violinplot(evening_suburb["average_mph"], evening_suburb["weekday"],
                   color="Blues")
g = sns.violinplot(morning_suburb["average_mph"], morning_suburb["weekday"],
                   color="Yellow")
g.set(ylim=(10, 24))
plt.title("Morning (yellow) vs. Evening (blue) Suburb")
sns.despine()
```
At the end of the day, I think this plot makes it pretty clear that Morning rides have higher average speeds than Evening rides. Which made me wonder: if we plot the route on a map, can we find out if the road is actually slightly inclined, such that it’s actually downhill in the morning, and uphill at night? The incline must be minimal, since this cyclist perceives it as being ‘flat’.
Sure enough, a rough estimate of the route using this awesome tool at cycleroute.org gives a plot that looks something like this:
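The "feels flat but isn't" idea is easy to quantify: net grade is just rise over run. The endpoint altitudes and leg length below are assumed for illustration, not read from the actual route:

```python
# Net grade of a leg from endpoint altitudes and length (numbers assumed).
start_alt_m, end_alt_m = 24.0, 9.0   # morning direction: start high, end low
leg_length_m = 6800.0
grade_pct = 100.0 * (end_alt_m - start_alt_m) / leg_length_m
print("net grade: %.2f%%" % grade_pct)   # a fraction of a percent downhill
```

A grade that small is imperceptible to a rider but accumulates into a measurable speed difference over a 40-minute leg.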
Next time: Plotting the measured velocities over the distance of the actual route. | https://szeitlin.github.io/posts/biking_data/biking-data-from-xml-to-plots-revised/ | CC-MAIN-2022-27 | refinedweb | 1,119 | 60.75 |
*nvim.txt*	Nvim


			    NVIM REFERENCE MANUAL


Nvim						*nvim* *nvim-intro*

Nvim is based on Vim by Bram Moolenaar.

If you are new to Vim see |help.txt|, or type ":Tutor".

If you already use Vim see |nvim-from-vim| for a quickstart.

Nvim is emphatically a fork of Vim, not a clone: compatibility with Vim is
maintained where possible.  See |vim_diff.txt| for the complete reference of
differences from Vim.

Type <M-]> to see the table of contents.

==============================================================================
Transitioning from Vim					*nvim-from-vim*

To start the transition, create ~/.config/nvim/init.vim with these contents: >

    set runtimepath^=~/.vim runtimepath+=~/.vim/after
    let &packpath = &runtimepath
    source ~/.vimrc
<
Note: If your system sets `$XDG_CONFIG_HOME`, use that instead of `~/.config`
in the code above.  Nvim follows the XDG |base-directories| convention.

See |provider-python| and |provider-clipboard| for additional software you
might need to use some features.

Your Vim configuration might not be entirely compatible with Nvim.  For a full
list of differences between Vim and Nvim see |vim-differences|.

The |'ttymouse'| option, for example, was removed from Nvim (mouse support
should work without it).  If you use the same |vimrc| for Vim and Nvim,
consider guarding |'ttymouse'| in your configuration like so: >

    if !has('nvim')
      set ttymouse=xterm2
    endif
<
Conversely, if you have Nvim specific configuration items, you could do this: >

    if has('nvim')
      tnoremap <Esc> <C-\><C-n>
    endif
<
For a more granular approach use |exists()|: >

    if exists(':tnoremap')
      tnoremap <Esc> <C-\><C-n>
    endif
<
Now you should be able to explore Nvim more comfortably.  Check
|nvim-features| for more information.

==============================================================================
Please see the attached presentation. Initially, in the image, the Line is behind the chart and on activation the Line comes on top of the bars.
Hi,
I have tested and found the issue you have mentioned regarding (Chart to Image feature). We will figure it out soon.
Your issue has been logged into our issue tracking system with an issue id: CELLSNET-14067. We will inform you when it is sorted out.
Thank you.
Hi,
Thank you for considering Aspose.
Please try the attached latest version of Aspose.Cells for .NET. We have fixed the issue of “the Line is behind the chart”. We are still working on the issue of “the order of Legend Items”.
Thank You & Best Regards
The issues you have found earlier (filed as 14067) have been fixed in this update.
This message was posted using Notification2Forum from Downloads module by aspose.notifier.
This issue is not fixed with the new version of Aspose.Cell (4.9.0). The sample PPT is attached for troubleshooting.
This issue is not fixed with the new version of Aspose.Cell (4.9.0).
Hi,
I could not find the issue using one of your template files. I extracted the Excel file from your ppt file and then took the image from the chart. I have attached the output chart image produced by Aspose.Cells for .NET v4.9.0.2 (also attached).
Could you give us details about where the issue is? We will check it soon.
Thank you. | https://forum.aspose.com/t/line-column-chart-image-changes-on-activation/141024 | CC-MAIN-2021-10 | refinedweb | 256 | 76.72 |
```java
import java.util.HashMap;

/**
 * Definition for a point.
 * class Point {
 *     int x;
 *     int y;
 *     Point() { x = 0; y = 0; }
 *     Point(int a, int b) { x = a; y = b; }
 * }
 */
public class Solution {
    public int maxPoints(Point[] points) {
        if (points.length <= 0) return 0;
        if (points.length <= 2) return points.length;
        int result = 0;
        for (int i = 0; i < points.length; i++) {
            HashMap<Double, Integer> hm = new HashMap<Double, Integer>();
            int samex = 1;
            int samep = 0;
            for (int j = 0; j < points.length; j++) {
                if (j != i) {
                    if ((points[j].x == points[i].x) && (points[j].y == points[i].y)) {
                        samep++;
                    }
                    if (points[j].x == points[i].x) {
                        samex++;
                        continue;
                    }
                    double k = (double) (points[j].y - points[i].y) / (double) (points[j].x - points[i].x);
                    if (hm.containsKey(k)) {
                        hm.put(k, hm.get(k) + 1);
                    } else {
                        hm.put(k, 2);
                    }
                    result = Math.max(result, hm.get(k) + samep);
                }
            }
            result = Math.max(result, samex);
        }
        return result;
    }
}
```
some help needed, where did I miss, following your logic, trys to do some optimization
```java
import java.util.HashMap;

public class Solution {
    public int maxPoints(Point[] points) {
        // Write your code here
        if (points == null || points.length == 0) {
            return 0;
        }
        if (points.length < 2) return points.length;
        int res = -1;
        for (int i = 0; i < points.length; i++) {
            int samex = 1;
            int samep = 0;
            HashMap<Double, Integer> count = new HashMap<Double, Integer>();
            for (int j = i + 1; j < points.length; j++) {
                if (points[j].x == points[i].x && points[j].y == points[i].y) {
                    samep++;
                }
                if (points[j].x == points[i].x) {
                    samex++;
                    continue;
                }
                double slope = (double) (points[j].y - points[i].y) / (double) (points[j].x - points[i].x);
                if (count.containsKey(slope)) {
                    count.put(slope, count.get(slope) + 1);
                } else {
                    count.put(slope, 2);
                }
                res = Math.max(res, count.get(slope) + samep);
            }
            res = Math.max(res, samex);
        }
        return res;
    }
}
```
right, I was thinking about the question: what if there are multiple parallel lines? Do you have any answer to it now?
@bitvector & &yjiang01
The above code takes care of parallel lines, since it is considering lines passing through one point at a time.
Lines having the same slope and passing through one point are the same line.
It might be okay to start j from i+1, since you've already counted all lines before that.
I tried to change the j start from i+1 and got a wrong result. Can anyone explain why?
@weimamarvin It's not a problem. Yes, if a point has points both on its left and its right, then they'll get counted separately. But since the solution tries every point as the center, it'll also try the leftmost and the rightmost of all those points as the center, and then the separate counting doesn't matter because one side will be empty.
@weimamarvin said in Accepted Java solution, easy to understand.:
@StefanPochmann
if we add the statement if(k==0.0) k=0.0; right after computing the slope. Actually, I tried it and it failed.
I just tried that and it worked. As I had expected. What exactly did you do? Probably better show the whole code.
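The negative-zero pitfall in this thread can be sidestepped entirely by keying the map on exact reduced (dx, dy) integer pairs instead of a floating-point ratio. A sketch of the same algorithm in Python (plain coordinate tuples stand in for the Point class; this is an alternative technique, not the posted solution):

```python
from collections import defaultdict
from math import gcd

def max_points(points):
    """Most collinear points; slopes keyed as reduced (dx, dy) int pairs."""
    if len(points) <= 2:
        return len(points)
    best = 0
    for i, (xi, yi) in enumerate(points):
        slopes = defaultdict(int)
        duplicates = 0   # extra copies of the anchor point itself
        local_best = 0
        for j, (xj, yj) in enumerate(points):
            if j == i:
                continue
            dx, dy = xj - xi, yj - yi
            if dx == 0 and dy == 0:
                duplicates += 1
                continue
            g = gcd(dx, dy)              # gcd of absolute values, never 0 here
            dx, dy = dx // g, dy // g
            if dx < 0 or (dx == 0 and dy < 0):
                dx, dy = -dx, -dy        # canonical sign: no +0.0 / -0.0 split
            slopes[(dx, dy)] += 1
            local_best = max(local_best, slopes[(dx, dy)])
        best = max(best, local_best + duplicates + 1)
    return best
```

Because the keys are exact integers, there is no 0.0 vs. -0.0 split and no precision loss for near-equal slopes.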
COSMOS ME Minutes 23OCT08 Change Analyzer Status
Logistics
- Conference call on 23OCT08 at 2:00 PM EST. Adjourned at 2:30 PM EST.
- Participants: Brad, Merri, Mark M., Jeff
Discussion & Status
- Currently using JAXB 2.0. Chris sent email about using a more current version, but we are going to have to use the version that has been approved through IPzilla since we cannot go to Java 1.6.
- In the SVN repo the code has been updated to completely remove Tuscany and JAXB bindings have been put in its place.
- Dependencies on SPI have also been removed, the SDD Schemas were run through JAXB also.
- If SDD changes we will have to re-run the schema through JAXB to generate new bindings. Don't think that this will happen anytime soon, just something to be aware of.
- The ChangeAnalyzer in the SVN repo can now be run start to finish. There is some information missing when logical combinations are generated in the flattenedSDD.xml. The class is InstallUnitCombinationProducer, the method is generatePlans. Brad hasn't been able to narrow the problematic code down yet, the code is repetitious and hard to debug.
- There is also some namespace mapping work that needs to be completed.
- Action Item: Mark to run CA using Tuscany version and through JAXB version and do a line by line comparison of the outputs.
- Code is probably ready to be moved over into the Eclipse repo; we're just not sure what the right approach is, that is, do we tag the existing version, make a new branch, put this in a separate package, etc.
- Action Item: Merri to add discussion of code migration for CA to next week's ME call agenda. | http://wiki.eclipse.org/COSMOS_ME_Minutes_23OCT08_Change_Analyzer_Status | CC-MAIN-2014-42 | refinedweb | 288 | 63.29 |
>> 1) Introduce a new reader (with a new namespace in case of DVD) depending on >> libdvdread (in which I fixed a few things I will forward to them). >By reader you mean demuxer or protocol to speak in libavformat terms. A protocol. Namespace (right now) is "dvdread". So you open a disk with "dvdread:e:/" or "dvdread:/cdrom3/", etc The demuxer is extended by using stream startcode 0x1ff and checking for a small header id (like "EXTNAVDATA"). > 2a) Extend the mpeg ts reader to allow commands to be send to the API > through a new stream type. That stream type includes "commands" to be > executed. >Will PS be supported ? >Is STREAM_TYPE_COMMAND needed if you use STREAM_TYPE_DATA and >CODEC_ID_DVDNAVDATA ? I am not sure I understand the question? > 4) Introduce a new seek flag to allow seek on chapter level, and fix the > end-generation for duration calculation. >We are reworking seeking API currently during GSOC, any ideas are >welcome. Maybe an AVSEEK_FLAG_CHAPTER ? Yes, that would be the idea. Also there need to be some way's to feed commands back to the protocol. I know this has been discussed here before (ege the E_AGAIN thread), but it seems to me more devices/protocols (example video camera's, network broadcast layered protocol's, server's with channel switching) could benefit from a [generic] send command layer. >>. >Well, why not, if it's clean and simple. >Will you work with current PS demuxer ? I think you can add a dvd >special demuxer but it must reuse code in PS demuxer when it can. It works 100% with the current demuxer. [...] Erik | http://ffmpeg.org/pipermail/ffmpeg-devel/2009-June/071228.html | CC-MAIN-2017-30 | refinedweb | 268 | 66.13 |
An important objective of the work item type definition language and the work item query language is for definitions to be portable between Team Foundation Servers. It is also important for third-party integrations to be able to find and refer to specific fields. To this end, the work item type definition language includes the concept of a field reference name. Field reference names are globally unique in the same way that a namespace in the .NET Framework is globally unique.
For this reason, field reference names cannot be renamed. If, for example, you changed the field name "Title" to "Header," the field reference name of that field remains the same.
In keeping with the .NET namespace tradition, two namespaces are predefined: System and Microsoft. The System namespace includes all system fields that are mandatory for Team Foundation system functions. The Microsoft namespace defines all required fields for work item types defined by Microsoft. Customers and partners can create their own field namespaces for custom work item types. | http://msdn.microsoft.com/en-us/library/ms194997.aspx | crawl-002 | refinedweb | 168 | 65.83 |
22 March 2011 14:13 [Source: ICIS news]
LONDON (ICIS)--Coalition attacks on Libya will keep oil prices supported for longer than expected, while future reconstruction in Japan and a loss of some nuclear power capacity in the country will prevent any sharp falls, market sources and analysts said on Tuesday.
Crude oil futures were trading marginally in negative territory, with Brent just below $115/bbl and remaining supported by the ongoing conflict and unrest in North Africa and the ?xml:namespace>
The attacks on
Oil prices have not moved any higher from last Friday, before the military action began, a report from trade advisory company Petromatrix said.
“The internal fights within the coalition about who is in charge and in command is starting to get more coverage than the actual bombings and that will continue to make it hard to forecast what tomorrow will bring in Libya,” Petromatrix said.
Physical oil traders said there was little or nothing of the 1.6m bbl/day capacity coming out of the country because it was too risky to send tankers to load there, there were difficulties insuring cargoes, freights were too high, and with the current embargo trading was difficult.
With the bombing likely to keep supplies disrupted for longer, Commerzbank analyst, Carsten Fritsch, said prices will remain elevated above $100/bbl at least until the middle of this year, and then if
Demand for oil in
On Tuesday 15 March, oil prices posted losses of more than $5/bbl with Brent reaching below $109/bbl as the Japanese crisis created uncertainties and resulted in investors taking money out of the markets.
As the week progressed and a clearer picture emerged in Japan, prices recovered as violence in Libya continued. The markets were anticipating an increase in demand from Japan to compensate for the loss of nuclear power and for reconstruction.
“Concerns over the impact of events in
At GMT 13:05, May Brent was at $114.31/bbl, down $0.65/bbl from the close on Monday, while April WTI was at $101.58/bbl, down $0.75/b | http://www.icis.com/Articles/2011/03/22/9446148/libya-attacks-japan-demand-to-extend-support-for-oil-prices.html | CC-MAIN-2015-22 | refinedweb | 349 | 54.05 |
NAME
user_namespaces − overview of Linux user namespaces
DESCRIPTION).
Capabilities)). Consequently, unless the process has a user ID of 0 within the namespace, or the executable file has a nonempty inheritable capabilities mask, the process.
The rules for determining whether or not a process has a capability in a particular user namespace are as follows: associated with a process’s mount namespace allows that process to create bind mounts and mount the
When:
a) −1 value) unmapped. This is deliberate: (uid_t) −1 is used in several interfaces (e.g., setreuid(2)) as a way to specify "no user ID". Leaving (uid_t) −1:
Writes that violate the above rules fail with the error EINVAL.
In order for a process to write to the /proc/[pid]/uid_map (/proc/[pid]/gid_map) file, all of the following requirements must be met:
*
+
+
Writes that violate the above rules fail with the error EPERM.
Interaction
with system calls that change process UIDs or GIDs
The transition only−−−rwx".(2) (−1 TO
Namespaces are a Linux-specific feature.
NOTES
Over
Use.12 added support the last of the unsupported major filesystems, XFS.
EXAMPLE
The −rs # Need Linux 3.8 or later
Linux 3.8.0
$ id −u # Running as unprivileged user
1000
$ id −g
1000
Now start a new shell in new user (−U), mount (−m), and PID (−p) namespaces, with user ID (−M) and group ID (−G) 1000 mapped to 0 inside the user namespace:
$ ./userns_child_exec −p −m −U −M ’0 1000 1’ −G −t proc proc /proc
bash$ ps ax
PID TTY STAT TIME COMMAND 1 pts/3 S 0:00 bash 22 pts/3 R+ 0:00 ps ax−handling("−i New IPC namespace\n"); fpe("−m New mount namespace\n"); fpe("−n New network namespace\n"); fpe("−p New PID namespace\n"); fpe("−u New UTS namespace\n"); fpe("−U New user namespace\n"); fpe("−M uid_map Specify UID map for user namespace\n"); fpe("−G gid_map Specify GID map for user namespace\n"); fpe("−z Map user's UID and GID to 0 in user namespace\n"); fpe(" (equivalent to: −M '0 <uid> 1' −G '0 <gid> 1')\n"); fpe("−v Display verbose messages\n"); fpe("\n"); fpe("If −z, −M, or −G is specified, −U is required.\n"); fpe("It is not permitted to specify both −z and either −M or −G.\n"); fpe("\n"); fpe("Map strings for −M and −G consist of records of the form:\n"); fpe("\n"); fpe(" ID−inside−ns ID−outside−ns−delimited records of the form: ID_inside−ns ID−outside−ns length Requiring the user to supply a string that contains newlines is of course inconvenient for command−line == −1) { == −1) { /*)) == −1)−>pipe_fd[1]); /* Close our descriptor for the write end of the pipe so that we see EOF when parent closes its descriptor */ if (read(args−>pipe_fd[0], &ch, 1) != 0) { fprintf(stderr, "Failure in child: read from pipe returned != 0\n"); exit(EXIT_FAILURE); } close(args−>pipe_fd[0]); /* Execute a shell command */ printf("About to exec %s\n", args−>argv[0]); execvp(args−>argv[0], args−−line options. The initial '+' character in the final getopt() argument prevents GNU−style permutation of command−line options. That's useful, since sometimes the 'command' to be executed by this program itself has command−line options. We don't want getopt() to treat those as options to this program. 
*/ flags = 0; verbose = 0; gid_map = NULL; uid_map = NULL; map_zero = 0; while ((opt = getopt(argc, argv, "+imnpuUM:G:zv")) != −1) {]); } } /* −M or −G without −U) == −1) errExit("pipe"); /* Create the child in new namespace(s) */ child_pid = clone(childFunc, child_stack + STACK_SIZE, flags | SIGCHLD, &args); if (child_pid == −1)) == −1) /* Wait for child */ errExit("waitpid"); if (verbose) printf("%s: terminating\n", argv[0]); exit(EXIT_SUCCESS); }
SEE ALSO
This page is part of release 4.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at−pages/. | https://man.cx/user_namespaces(7) | CC-MAIN-2017-47 | refinedweb | 659 | 58.01 |
How to find the center of the contour?
I've got some images with contours and I need to find their center. I'm using OpenCV and it will be great if the solution will be within OpenCV.
2 votes
I've got some images with contours and I need to find their center. I'm using OpenCV and it will be great if the solution will be within OpenCV.
If you need to solve it with OpenCV, just use momentum and find contours for the mask. Here is how you can make it.
import cv2 import imutils import numpy as np import matplotlib.pyplot as plt mask = cv2.imread('resources/img.png') plt.imshow(mask, cmap='gray') plt.show()
gray = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY) blurred = cv2.GaussianBlur(gray, (5, 5), 0) thresh = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY)[1] cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = imutils.grab_contours(cnts) for c in cnts: # compute the center of the contour M = cv2.moments(c) cX = int(M["m10"] / M["m00"]) cY = int(M["m01"] / M["m00"]) # draw the center of the shape on the image cv2.circle(mask, (cX, cY), 7, (0, 0, 255), -1) # show the image plt.imshow(mask, cmap='gray') plt.show()
This is how you can get the centers for every contour.
Great, it is so helpful. Thanks | https://ai-pool.com/d/how-to-find-the-center-of-the-contour | CC-MAIN-2020-45 | refinedweb | 227 | 79.87 |
patchlesspatchless
patchless is a tiny Scala library which provides:
- A data type
Patch[T], which extends
T => Tand encapsulates a set of updates to be performed to values of type
T.
- A typeclass
Patchable[T], which supports the data type above.
It uses shapeless to derive
Patchable[T] for any case class.
DependencyDependency
Patchless is published to Maven Central – put this in your build.sbt:
libraryDependencies += "io.github.jeremyrsmith" %% "patchless" % "1.0.4"
UsageUsage
The core of patchless provides only two simple way to create a
Patch[T] for any given
T:
- The
applysyntax (macro-driven):
import patchless.Patch case class Foo(a: String, b: Int, c: Boolean) val patch = Patch[Foo](b = 22)
- The
Patch.diff[T]static method:
case class Foo(a: String, b: Int, c: Boolean) val a = Foo("test", 22, true) val b = Foo("patched", 22, true) val patch = Patch.diff(a, b) patch(a) // Foo("patched", 22, true) patch(Foo("wibble", 44, false)) // Foo("patched", 44, false)
Additionally, the
patchless-circe module provides decoders directly from JSON to
Patch[T]. See below for details.
Using the patch fieldsUsing the patch fields
The primary advantage of
Patch[T] over simply
T => T is that the updated fields can be accessed as a shapeless
Record of
Options. Each field retains the name from the original case class
T, but its value type is lifted to an
Option of the original type within the case class.
The
Record is accessible in two ways. The first is simply by the
updates member of the
Patch value:
println(patch) // Some("patched") :: None :: None :: HNil
This alone doesn't turn out to be all that useful from a typelevel standpoint - Scala doesn't inherently know the type of the
updates field, so your options there are limited.
So patchless does some additional type voodoo to allow you to recover a statically known
Record for a
Patch[T] of a concrete, statically known type
T. This is done with the implicit enrichment method
patchUpdates, which allows you to do typelevel things like mapping over the updates
HList or summoning typeclasses for it:
object mapUpdates extends Poly1 { implicit def cases[K <: Symbol, T](implicit name: Witness.Aux[K] ) = at[FieldType[K, T]] { field => name.value.name -> field.toString } } patch.patchUpdates.map(mapUpdates).toList // List(("a", "Some(patched)"), ("b", "None"), ("c", "None"))
Please note that this only works for a concrete
T. If
T is abstract (such as in a polymorphic method over
Patch types) then you'll still have to parameterize over various
HList types and require various implicit shapeless
Aux typeclasses over them as usual – starting with
Patchable.Aux[T, U] where
U will be inferred to the type of the
Updates record for
T.
def doPatchyStuff[T, U <: HList, A <: HList](patch: Patch[T])(implicit patchable: Patchable.Aux[T, U], liftAll: LiftAll.Aux[MyTC, U, A], toList: ToList[A, Any] ) = ???
Also, be aware that
patchUpdates involves a typecast; it's assumed that the
Updates of the
Patch[T] value has the same type as the
Patchable[T] that is in implicit scope. This is usually a safe assumption, but it's not guaranteed to be safe. In an effort to make it as close as possible to a guarantee,
Patchable is defined as
sealed, which means that only the blessed derivations can ever be used to create it; these ought to be deterministic for a particular
T, but Scala provides no way to express this and thus a typecast is still necessary.
patchless-circepatchless-circe
Derived decoders and encoders are provided in the
patchless-circe module.
In build.sbt:
libraryDependencies += "io.github.jeremyrsmith" %% "patchless-circe" % "1.0.2"
There are two different imports, depending on how you're using circe. You need to have at least
circe-generic, and you can also optionally use
circe-generic-extras (which is marked as a provided dependency in case you don't use it).
You also need to be using automatic derivation for this to be of any use; it's not possible to derive a
Patch[T] decoder for a semiauto or manual decoder of
T.
For vanilla automatic derivation:
import io.circe.generic.auto._ import patchless.circe._ import cats.syntax.either._ // for working with results case class Foo(aString: String, bInt: Int) val parsed = io.circe.parser.parse("""{"aString": "patched"}""") parsed.valueOr(throw _).as[Patch[Foo]].valueOr(throw _) parsed.updates // Some("patched") :: None :: HNil parsed(Foo("blah", 22)) // Foo("patched", 22)
Configurable derivation is the same, but import
patchless.circe.extras._ instead; your implicit
Configuration will be used to derive the decoders for
Patch types.
Encoders work the same way, but be aware that the JSON output depends on the printer used – in particular, you'll typically want to
dropNullKeys if you're outputting
Patch values to JSON..
Code of ConductCode of Conduct
The patchless project supports the Typelevel Code of Conduct and wants all its channels to be welcoming environments for everyone. | https://index.scala-lang.org/jeremyrsmith/patchless/patchless/1.0.4?target=_2.12 | CC-MAIN-2019-22 | refinedweb | 827 | 63.09 |
Monday Jul 25, 2011
Tuesday Sep 11, 2007
Using.
Monday Sep 03, 2007
HowTo - JAXB for simple Java-XML serialization
By pelegri on Sep 03, 2007
Friday Apr 13, 2007
Support for JSON in JAX-WS at GlassFish
By pelegri on Apr 13, 2007
@BindingType(JSONBindingID.JSON_BINDING) public class MyService { public Book get() { return new Book(); } public static final class Book { public int id = 1; public String title = "Java"; } }
Monday Mar 05, 2007
Fast Infoset between IBM's JVM and GlassFish
By pelegri on Mar 05, 2007
Check out Oleksiy's blog and the Fast Infoset Interoperability Project at Java.Net. Also check Pauls' activation instructions and a list of Other TA Spotlights on FI.
Wednesday Jan 24, 2007
New.
Tuesday Dec 19, 2006
JAXB 2.1 is Final
By pelegri on Dec 19, 2006
Tuesday Dec 05, 2006
JAXP 1.4 FCS/Final Now Available
By pelegri on Dec 05, 2006
The implementation is delivered as two JARS (API + Impl) for simplicity and uses the StAX and JAXP implementations from GlassFish which build on Xerces 2 Java. Check the blogs from Santiago and Norm for some more details.
All this code is Open Source, and so will be the implementation of the next version of the spec.
Thursday Nov 16, 2006
New JAXB 2.1 and JAX-WS 2.1 EAs
By pelegri on Nov 16, 2006
Saturday Sep 30, 2006
WADL OpenSourced - Describing RESTful Web Services
By pelegri on Sep 30, 2006
There WADL articles, as well as Marc's blog.
Tuesday Aug 08, 2006
Finding.
Sunday Jun 25, 2006
JAXP 1.4 JARs added to Java.Net Maven Repository
By pelegri on Jun 25, 2006
Earlier additions to the repository include:.
Thursday Jun 15, 2006
OSS Nokalva Releases Tools for Fast Infoset - That makes 6 Implementations...
By pelegri on Jun 15, 2006.
Wednesday May 31, 2006
Fast Infoset Support in Noemax's SDO and in Project FIFI
By pelegri on May 31, 2006...
Friday May 05, 2006
Fast. | https://blogs.oracle.com/theaquarium/tags/xml | CC-MAIN-2015-48 | refinedweb | 331 | 73.17 |
Class path is a relevance of Operating system rather than java, but it plays an important role for JVM. Standard packages are defualt java packages which helps programmers to go easy.
CLASSPATH
The class path is a way for JVM to know from where to invoke the execution of a class. Or where to find the specified class. Usually the class path is set to point the current directory. And Moreover the class path is more of a Operating System based concept it has a very less direct link to Java. However it is really important that you have defined a proper CLASSPATH in your environment variables of Operating System.
The Classpath is actually important because it tells the compiler or JVM where to find the required class. There is a reason that in a properly packaged software system the class with the main method is placed in the directory where it is completely accessible without any package restriction. To understand this reason clearly first look at following program.
/* place this file in packagedemo directory and name it PackageDemo.java */ package packagedemo; class PackageDemo{ public static void main(String[] args){ System.out.println("hello world"); } }
Now you'll need to compile this and you'll get a class file under directory packagedemo. Now try and run it and see what Happens.
Exception in thread "main" java.lang.NoClassDefFoundError: PackageDemo (wrong name: packagedemo/PackageDemo)
Does your system shows this Exception too ? If so, you are going in right direction. Because you need the package reference to run this program as this is in the java package and classpath restriction applies on this one too.
you need to come out from the packagedemo and then run it from here using following command
java packagedemo.PackageDemo
this will result in output
hello world
Well, yeah, this is Classpath all about to tell the JVM where it stands and where to find what.
The reason the class with main method places outside the package restriction is only to make it accessible without this limitation.
Standard packages
The standard packages are commonly known as language package and classes which are shipped with the JDK. there are lots of common packages like
java.lang java.util java.applet java.awt javax.sql javax.swings java.net
and many more..
these are the core part of Java Language specification. The purpose of these packages is to make it easier for programmer to do work and leave the additional hardware support responsibility to JVM. The java.lang is already included in every program you make so you don't need any special operation to work on this but if you want to use any of other package or class from it, you will be needing to import the package or any class from that package. for example if you want to work with the Random class from java.util, you must use any of these approach:
import java.util.*; class UseRandom{ private Random r; //class Definition }
or
import java.util.Random; class UseRandom{ private Random r; //class definition }
I hope this little illustration is enough to make you understand how to import and use standard packages.
Note: using private with the variable fields is not a rule but is a good practice. It is a good practice in OOP and specially java to keep data private and access it through the public methods commonly known as getter and setters. | http://www.examsmyantra.com/article/47/java/classpath-and-importing-standard-packages-in-java | CC-MAIN-2019-09 | refinedweb | 571 | 63.09 |
chown, fchownat — change owner and group of a file relative to directory file descriptor
SYNOPSIS
#include <unistd.h>
int chown(const char *path, uid_t owner, gid_t group); int fchownat(int fd, const char *path, uid_t owner, gid_t group, int flag);
DESCRIPTION. If both owner and group are -1, the times need not be updated.
Upon successful completion, chown() shall mark for update the last file status change timestamp of the file..
RETURN VALUE
Upon successful completion, these functions shall return 0. Otherwise, these functions shall return -1 and set errno to indicate the error. If -1 is returned, no changes are made in the user ID and group ID of the file.
ERRORS
These functions shall fail if: The following sections are informative.
EXAMPLES
None.
APPLICATION USAGE.
RATIONALE.
FUTURE DIRECTIONS
None.
SEE ALSO
chmod(), fpathconf(), lchown()
The Base Definitions volume of POSIX.1-2008, <fcntl.h>, . | https://reposcope.com/man/en/3p/chown | CC-MAIN-2019-51 | refinedweb | 146 | 57.27 |
SocialCG/2017-05-31/minutes
Social Web Incubator Community Group
31 May 2017
Attendees
- Present
- sandro, ajordan, MMN-work, aaronpk, nightpool, ben_thatmustbeme, jaywink, astronouth, SocialCG, astronouth7303
- Chair
- aaronpk
- Scribe
- nightpool
Contents
- Topics
- Summary of Action Items
- Summary of Resolutions
Just to sum up what's happened so far in mumble, we're going to start the meeting absent cwebber2. Sandro called cwebber2, but got voicemail
<aaronpk> trackbot, start meeting
<Loqi> Benthatmustbeme made 1 edit to Socialwg/AccountDiscovery
<aaronpk>
<sandro> Meeting: Social Web COMMUNITY Group Teleconference
<aaronpk> scribenick: nightpool
SocialWG
sandro: The Social Web Working Group hasn't had any particular changes from last week, still working on ActivityPub and WebSub.
... w3c advisory committee is still voting on whether to extend the SWWG charter
... To work on activitypub for longer, given recent interest
... may turn into community spec if charter doesn't get extended
<aaronpk>
<Loqi> [Aaron Parecki] Micropub is a W3C Recommendation
aaronpk: micropub has turned into a recommendation since last week
... the websub test suite is up to date, and at websub.rocks
... we're looking for implementation reports, especially from people running current websub impls
... People who are running mastodon, for example, would count here and are welcome to submit reports.
<aaronpk> details here
<aaronpk>
nightpool: how many changes have there been to websub since it was PubSubHubbub?
aaronpk: not many, we're aiming for compatibility
aaronpk: there's a changelog section in the spec.
... For example, we're looking at implementing 410 Gone as a response from subscribers
<aaronpk>
<Loqi> [Alkarex] #106 Suggestion: Use HTTP 410 Gone
<MMN-work> Good addition to the websub spec!
aaronpk: because implementations are doing it
... and it doesn't break interoperability with existing hubs
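The 410 Gone idea under discussion is hub-side behavior: a subscriber that answers a content delivery with 410 is telling the hub to drop the subscription immediately instead of retrying. The sketch below is a hypothetical illustration of that decision, not text from the spec or issue #106; the function name and the retry policy are assumptions.

```python
# Hypothetical sketch of how a WebSub hub could react to subscriber
# responses during content delivery, including the proposed 410 Gone
# behavior from issue #106. Names and retry policy are illustrative.

def delivery_outcome(status_code: int) -> str:
    """Map a subscriber's HTTP status to a hub action for that subscription."""
    if 200 <= status_code < 300:
        return "keep"          # delivery accepted, subscription stays active
    if status_code == 410:
        return "unsubscribe"   # subscriber is gone: remove immediately
    return "retry"             # anything else: retry per the hub's policy

# Example: a subscriber that has shut down answers 410.
print(delivery_outcome(410))  # unsubscribe
```

Because 410 is already a failure status for hubs that predate the proposal, treating it as an immediate unsubscribe doesn't break interoperability, which matches the point made above.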
<sandro>
<Loqi> [sandhawke] #4 Forwarding
GH issue #4 - Forwarding
sandro: there's been a longstanding question in decentralized systems: what do you do when a server goes down?
... issue #1 talks about this a bit, and there's a lot of discussion there.
... When I was looking at setting up a w3c mastodon instance, and talking to the systems people
... they said that they didn't want to run mastodon, because eventually they would have to shut it down, and users would be stranded
... But they'd be willing to run a forwarding server, and that seems to be the majority usecase
... where people want to shut things down orderly, do sunsets
<aaronpk> websub section on redirects:
sandro: But I wanted to make sure that we talked about it in the spec, and look into what current implementations do for redirects
... and behave gracefully in allowing users to move to different sites
<Loqi> Benthatmustbeme made 1 edit to Socialwg/AccountDiscovery
aaronpk: what's the goal for this call?
sandro: experience, mostly, looking at what people have tried
... or other feedback
<cwebber2> oh shoot
<cwebber2> I thought it was in 40 mins
nightpool: I'm worried about the longevity of these redirect servers
... most companies, they want to have a sunsetting process, and then be done with it
... not maintaining something in perpetuity
<Zakim> MMN-work, you wanted to discuss redirect and URI stuff
sandro: I agree that's a concern, but I think that if you're being responsibile, you need to do redirects in perpetuity
... URLs get printed in books, etc
<ajordan> heya cwebber2!
<cwebber2> hi
MMN-work: when you do redirects for your URLs, you still have to have an identifier, and you still have to have local databases
... and you have to figure out what to change in the local database
sandro: right, the question here is "when do you consider this permanent", and when do you change your user's URL
<MMN-work> The point I was trying to get across was that when you move servers the URL/URI for that account stored with other servers is going to have to be updated - if the URI is supposed to be a fully functional URL
<Zakim> ajordan, you wanted to clarify
<jaywink> from experience in the diaspora world for some years I would guess typically server admins who shut down their pod want to shut down evrything, not run a service after that. And many times the server just disappears of the grid without warning.
sandro: Gargron brought this up, that's he's worried about user account hijacking
cwebber2: I'm sympathetic to this, although I feel like once you have access to somebody's account you have lots of opportunities to do damage
ajordan: I think the problem here is that most things you mentioned can be undone, you can undo side-effects, undelete posts, etc.
<MMN-work> nightpool++
<Loqi> nightpool has 1 karma
<MMN-work> good summary of my rambling :)
ajordan: but once you've been redirected to another server, there's no way to tell that server to "unredirect" you
aaronpk: there's a non-technical solution for this too, e.g. a post like "hey, i'm leaving X domain, go find me on this domain instead"
... and that only works while the server's up, but I'm reluctant to push a technical solution when a solution exists anyway
cwebber2: this comes out of solutions like that--both on pump.io and on mastodon, I've seen lots of people leave messages like that, which basically imitate the behavior we have here
... but you're right, people do work around it at the moment
<sandro> +1 make this work for non-human systems!
ajordan: I may be architecture astronauting, but I think it's really cool how standards have interop with non-technical systems
... but if you go with the nontechnical solution of "post a note saying you've moved" you'll break interop
<MMN-work> aaronpk & sandro: I'm a proponent of making social systems requiring social interactions, including some things that are "too manual" .)
sandro: Am I hearing any disagreement? Or do most people think that systems should handle redirects cleanly?
nightpool: I think the 30 day heuristic alleviates my concerns
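The redirect-plus-heuristic behavior in this exchange can be sketched from the client side: follow permanent redirects when fetching a profile, but only rewrite the stored URL once the redirect has been stable for some window (the 30 days mentioned above). Everything here (the fetch interface, the data shapes, the hop limit, the exact window) is an illustrative assumption, not behavior taken from any spec.

```python
# Illustrative sketch of following 301s for a moved account and only
# committing the rewrite after the redirect has persisted for a while.
from datetime import datetime, timedelta

MAX_HOPS = 5

def resolve(fetch, url):
    """Follow up to MAX_HOPS permanent redirects; return the final URL.

    `fetch(url)` is assumed to return (status_code, location_or_None).
    """
    seen = set()
    for _ in range(MAX_HOPS):
        if url in seen:
            raise RuntimeError("redirect loop")
        seen.add(url)
        status, location = fetch(url)
        if status == 301 and location:
            url = location
            continue
        return url
    raise RuntimeError("too many redirects")

def should_rewrite(first_seen, now, window=timedelta(days=30)):
    """Only rewrite the stored URL after the redirect has persisted."""
    return now - first_seen >= window
```

A subscriber would keep using the old URL until `should_rewrite` says the move has persisted long enough, which limits the damage a hijacked account could do with a short-lived redirect.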
cwebber2: apologies about missing the time
Using Discourse for activitypub
cwebber2: Maloki and Gargron brought up hosting a activitypub category on the mastodon discourse forum
<MMN-work> cwebber2: I am text-only today
cwebber2: I know that there's some existing people who don't feel comfortable using Github
cwebber2: MMN-work, would you like to give a summary of your concerns?
<MMN-work> You can continue discussing
<MMN-work> and I'll write something up
cwebber2: Anyway, this could provide an alternative, although we would have to talk about how to bridge convos back and forth
ajordan: chris I'm definitely with you on fragmented conversations, I find most things beside issue trackers hard to use
... Is there a compelling reason to keep tracking for this group on Github, rather then switching to something like Gitlab?
... Because that seems like it would solve both problems.
<jaywink> +1 on that, fragmentation would make following discussions harder
cwebber2: Gitlab came up in the working group discussions. The problem raised was that many people on the working group used Github, and they would have to create another account (federated identities...)
... and that the w3c has some amount of tooling around it, although I'm unsure what exactly that is.
sandro: I'm not aware of any tooling, although I might be missing something?
aaronpk: I think it has to do with backing things up for legal purposes
sandro: But we're not under the w3 org, so we might not be getting that.
cwebber2: Discourse does seem to walk a line between issue trackers and traditional forum software
... it allows for closing/resolving threads, for example
<cwebber2>
ajordan: as another note, gitlab has a "sign in with github" button, which may alleviate concerns about additional accounts?
sandro: I just tried using Discourse this morning, I had never used it before and I liked it.
... I liked the forwarding discussion I saw there, and I linked one way but should like the other way
<ajordan> sandro++
<Loqi> sandro has 41 karma in this channel (48 overall)
sandro: If the idea here is to use Mastodon's discourse installation, would other projects feel excluded?
cwebber2: It might be, I don't know. We would have to ask
<ben_thatmustbeme> we need to make our own decentralized social system so we can design a decentralized social system
cwebber2: I wouldn't feel that about my own projects, but I know that there might be some concerns there, and we would have to hear from other people
... another concern is what happens to our discussions if their installation goes down
... And that's a concern for Github as well, big services (such as google code) have gone down in the past
<MMN-work> I'll paste this:
<MMN-work> I am personally not using GitHub because I cannot accept their TOS. I agree that using a separate discussion forum is much work (and I'm probably the only one currently to actively avoid GitHub).
<saranix> MMN-work: I actively avoid github too
<MMN-work> Suddenly GitHub (or whatever third party we host through) could change TOS, forbid a certain participant to log in because of actions in some _other_ repository etc.
<MMN-work> The bottom line is I don't think it should be hosted on a domain not controlled by the community of SWICG. Also "multiple accounts" is very little an issue if the tool used has third party logins (OpenID/OAuth)
<jaywink> so could mastodon ;)
ajordan: I think my main concern is about how other people perceive this, *we* know that we just happen to be there, but the average observer would not
... so I'm a tentative -1 between backup concerns and that.
<MMN-work> cwebber2++
<Loqi> cwebber2 has 85 karma
<saranix> agree should be hosted under w3 control. Also agree with bad optics of Mastodon, even if control wasn't an issue. optics being that they are very silicon valley hipster of the day
sandro: does gitlab solve this problem? or would we have to have an installation of gitlab or an installation of discourse somewhere else?
<sandro> MMN-work, would you be fine with gitlab.com ?
ajordan: I think gitlab.com would solve these problems?
aaronpk: gitlab.com seems like it would just be moving to another 3rd party service, which doesn't solve the concerns MMN-work raised.
<astronouth7303> aaronpk: gitlab can be self-hosted for free. It's FOSS+Premium.
sandro: I can't find evidence of it right now, but I think MIT may have a gitlab instance? Which we may be able to use, given that w3c is somewhat part of MIT.
sandro: i would prefer it be w3c branded, but that may be another option
<aaronpk> astronouth7303, i know, that's not my point
<ajordan> an sorry I just read MMN-work's comment again, I missed a bit
<MMN-work> sandro aaronpk: Currently GitLab hosts git.gnu.io for us, so if we had a domain name to use I'm sure they could take care of the hosting part
<MMN-work> sandro aaronpk: us = GNU social
cwebber2: It sounds like we don't want to use discourse, and we can continue this conversation on better venues later.
aaronpk: we can continue this discussion on IRC or other forums.
<sandro> MMN-work, okay, that sounds like a good option.
cwebber2: we have a little bit less people here then usual, do we want to get this shortname thing over with or postpone for another week?
Social Web Incubator Community Group Shortname
aaronpk: naming is always a rabbithole, I would consider punting
<tantek> good morning #social!
cwebber2: we can do this discussion async
sandro: let's poll the channel on consensus for the name
... and maybe we can set it aside quickly
<tantek> I see I made it just in time for the fun part ;)
<ajordan> morning tantek
<saranix> shortnaming?
cwebber2: okay, 1 minute summary: our full name is set in stone, but we're considering two options for the short name
<tantek> here's the big question? which one has a twitter account available? :P
cwebber2: one option is SWICG, which is hard to pronounce but keeps the "incubator" and "web" aspects, which may be important for some people
<sandro> I strongly prefer "SocialCG" to "SWICG" because (1) easier to say (2) easier to guess what it means, (3) less likely to mean something else (semantic web?)
cwebber2: the other option is SocialCG, because it's more pronounceable and implies more continuity with SocialWG
... and we mention the incubator and web aspects on the wiki page heavily
<MMN-work> tantek: No problem, we could register TheReal$shortname
<tantek> note: the WG has commonly been referred to as both SWWG and SocialWG and it doesn't seem to be confusing anyone
<tantek> re: that semweb comment sandro
aaronpk: Just to mention where this would be used--it's the namespace for the wiki page, it's the account for social media, and on the w3 url for the group.
<tantek> so maybe we don't have to pick?
<ajordan> SocialCG++
<Loqi> socialcg has 1 karma
<cwebber2> SocialCG
<sandro> SocialCG
<MMN-work> SWICG
<jaywink> +1 SocialCG
<nightpool> SocialCG
<tantek> SWICG++
<Loqi> swicg has 1 karma
<aaronpk> well we need to pick one to use consistently in URLs at least
<MMN-work> (don't really mind though)
<tantek> yeah same.
<geppy> Sorry I'm late.
<tantek> for the wiki I have to admit /Socialcg has a certain parallelism with /Socialwg
<ajordan> we seem to be evenly divided
<ajordan> excellent
<rhiaro> +1 SocialCG
<geppy> + SocialCG
<ben_thatmustbeme> SWICG++
<Loqi> swicg has 2 karma
<tantek> like I said, maybe we don't have to choose to only have one
<saranix> SWICG++
<Loqi> swicg has 3 karma
<ajordan> tantek++
<Loqi> tantek has 57 karma in this channel (345 overall)
<sandro> 7-2 right?
<sandro> now 7-3
<astronouth7303> +1 SocialCG, if only because it looks wordish, not just a jumble of letters.
<tantek> !karma SocialCG
<Loqi> socialcg has 1 karma
<sandro> 8-3
<tantek> !karma SWICG
<Loqi> swicg has 3 karma
<rhiaro> SocialCG++
<Loqi> too much karma!
<tantek> lol
<aaronpk> lol karma votes don't work cause rate limiting
cwebber2: Looks like 8-3, which isn't a complete landslide victory, and it doesn't capture everybody
... but it does seem to be the leaning here
sandro: that's what we're currently using on the wiki, right?
<ajordan> I count 4 for SWICG
<nightpool> (rhiaro, sorry, that was my fault: cwebber called for votes but I didn't transcribe that part)
cwebber2: this feels extremely bikesheddy, and we do need to choose one
<sandro> I might be upset with swicg -- feels like bad branding
<tantek> I prefer SWICG in for branding/comms in general. I am OK with w3.org/wiki/Socialcg as a parallel to w3.org/wiki/Socialwg (and /Socialig FWIW)
cwebber2: does anyone feel strongly that they would be upset if one of these was chosen?
<geppy> I'm fine with SWICG being used for the wiki or whatever, and I'm fine with CWICG being used, but I feel like it's clearer especially to outsiders if I talk about SocialCG
<ajordan> +1 to remark about talking to outsiders
cwebber2: we're seeing some preferences, and maybe some strong preferences
... and it looks like we might not be in trouble from the people here if we choose socialcg
<tantek> we have @SocialWebWG right?
cwebber2: we might want to throw this out to the rest of the world, or decide on this once and for all here
<tantek> did someone register @SocialWebCG ?
<saranix> could go both ways when talking to outsiders. They could get confused about diff between CG and WG, also could be missing the Incubator and Web apsects. OTOH, SWICG is opaque at first glance as well.
<tantek> again, parallelism in context
cwebber2: aaronpk do you have a preference whether we decide now or not?
aaronpk: I see arguments for both sides, but we should probably close this discussion sooner rather then later
cwebber2: should we do a resolution?
<tantek> cwebber2, what about narrowing the decision to just the wiki path?
<tantek> rather than a general bikeshed discussion?
sandro: that would be useful
<cwebber2> PROPOSED: Accept majority of straw poll as SocialCG for group shortname.
<cwebber2> +1
<sandro> +1
<ajordan> +1
<nightpool> +1
<geppy> +1
<MMN-work> +0
<jaywink> +1
<MMN-work> That's how you do it right? .)
<geppy> tantek: Someone registered @SocialWebCG on Twitter in May of 2017, I'm guessing that means right now someone here grabbed it.
<sandro> yeah +0 is don't care, but leaning yes, -0 is don't want to stop things but don't feel great, -1 mean STOP
<MMN-work> Cool, thanks for the explanation.
<astronouth7303> +1
<tantek> -0
<aaronpk> +0
<ben_thatmustbeme> 0
<saranix> -0
cwebber2: we don't have any -1s, and we did do this based off of the poll
RESOLUTION: Accept majority of straw poll as SocialCG for group shortname.
cwebber2: it's a mix of positive and wishy-washy, so I feel we can probably close this
aaronpk: we are at the top of the hour, and that's the end of our scheduled time
... we still have three or four things on the agenda
... but we can probably wait until next week?
<MMN-work> agreed
cwebber2: yes, meetings are weekly now
aaronpk: everyone is welcome to continue chatting on IRC or on the call, it just won't be part of the official minutes
<geppy> Thanks, all, sorry I missed everything.
cwebber2: we officially finished painting a bikeshed!
<ben_thatmustbeme> nightpool++ for minuting
<Loqi> nightpool has 2 karma
<cwebber2> geppy: np, it happens! I missed the first 20 minutes too, and I scheduled it... oops.
do I do generate minutes, or should someone else?
<aaronpk> trackbot, end meeting
<aaronpk> agenda for next week is up
Summary of Action Items
Summary of Resolutions
[End of minutes] | https://www.w3.org/wiki/SocialCG/2017-05-31/minutes | CC-MAIN-2019-30 | refinedweb | 2,982 | 55.58 |
January--garnet
Febuary--amethyst
March--aquamarine
April--diamond
May--emerald
June--Pearl
July--ruby
August--peridot
September--sapphire
October--opal
November--topaz
December--turquoise
then it says to add code to call the function when the user selects the Display Information option. for now, hard code the month in the function call to your birth month. add a menu item to the main menu to display all birthstones. add code to call getbirthstone for each month.
im not sure how to do any of this here is my header file:
//enumerations enum month { January, Febuary, March, April, May, June, July, August, September, October, November, December }; enum birthstone { garnet, amethyst, aquamarine, diamond, emerald, pearl, ruby, peridot, sapphire, opal, topaz, turquoise }; //functions birthstone getbirthstone(); void mainMenu()
and here is my .cpp file
// BirthdayInfo.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include "BirthdayInfo.h" #include <string> #include <iostream> using namespace std; int menu; string name; string space (6, ' '); char diamonds = 4; void mainMenu() { system ("cls"); cout << "Hello, " << name << "\n" << endl; int x=1; int y=1; while (x<=5) { cout << space << diamonds; x=x+1; } x=1; cout << "\n\n****************Main Menu****************" << endl; cout << "1) Display Information\n" << "2)\n" << endl; while (x<=5) { cout << space << diamonds; x=x+1; } cout << "\n"; cin >> menu; } int _tmain(int argc, _TCHAR* argv[]) { cout << "What is your name?" << endl; getline (cin, name, '\n'); mainMenu(); return 0; }
if someone can help me with any of this i would greatly appreciate it thx i dont need someone to give me the entire thing i just need some one to steer me in the right path to help me out a bit thx
This post has been edited by Flash0429: 08 May 2007 - 05:34 PM | http://www.dreamincode.net/forums/topic/27789-need-help-with-functions-plz-goin-crazy/ | CC-MAIN-2017-04 | refinedweb | 292 | 61.8 |
On 22.07.13 13:13, Jim Fulton wrote:
On Sat, Jul 20, 2013 at 11:43 PM, Christian Tismer <tis...@stackless.com> wrote:Third rant, dear Zope-Friends (and I mean it as friends!).
AdvertisingIn an attempt to make the ZODB a small, independant package, ZODB has been split into many modules.Maybe not as many as you think: persistent, transaction, ZEO, ZODB and BTrees. 5 <shrug>I appreciate that, while I think it partially has the opposite effect: - splitting BTrees apart is a good idea per se. But the way as it is, it adds more Namespace-pollution than benefits: To make sense of BTrees, you need the ZODB, and only the ZODB! So, why should then BTrees be a top-level module at all? This does not feel natural, but eavesdropping, pretending as something that is untrue. I think: - BTrees should either be a ZODB sub-package in its current state, - or a real stand-alone package with some way of adding persistence as an option.I don't agree that because a package depends on ZODB it should be in ZODB. There are lots of packages that depend on ZODB.
This is generally true. In the case of BTrees, I think the ZODB is nothing without BTrees, and BTrees make no sense without a storage and carry those _p_<attributes> which are not optional. BTrees would make more sense as a standalone package if the persistence model were pluggable. But that is also theoretical because I don't see right now how to split that further with all the C code. That made me think it belongs to ZODB, what else could it support, and who would ever install ZODB without it.
I agree with your sentiments about namespace pollution. You and I may be the only ones that care though .3 ;).
Yay, actually I care mainly because just trying 'pip install ZODB'spreads out n folders in my site-packages, and 'pip uninstall ZODB' leaves n-1
to pick the names by hand. That's why I want things nicely grouped ;-) cheers - | https://www.mail-archive.com/zodb-dev@zope.org/msg06354.html | CC-MAIN-2018-22 | refinedweb | 347 | 80.31 |
Hi all,
I'm new to this forum, but it seemed like there are a lotta smart people here, so I wanted to throw an idea out there and see what the 1337 hax0rz think about it. (btw, I'm also not a programming languages guy, so please correct me if I'm wrong)
The Problem:
So there are a lot of programming languages out there, but pretty much all of them are about programming a process, rather than programming an algorithm. Consider the following program:
#include "myCards.h"
int main(void)
{
myCards cards;
cards.shuffle();
cards.deal();
return 0;
}
This is usually a clear way of writing a program, any programmer that looks at the code can deduce the algorithm, namely, create the cards, shuffle them, then deal them. However, this C++ code is actually programming a very specific process and not necessarily the algorithm that it seems to imply. In actuality, you know very little about this code. All we can deduce from this code is that an object was created, and two of its member functions were called in succession. It could be the code that sets up and detonates a bomb, or the code to break eggs and fry them sunny-side up. It depends completely on how myCards.h is defined, and the algorithmic information is buried within the complex relationships.
It is no wonder that understanding someone else's code can be as time-consuming as rewriting the code from scratch. Code without comments is tantamount to writing unusable code.
The Idea
When you are playing cards, you will use this algorithm. Get the cards, shuffle them, then deal the cards. The algorithm is very consistent and reusable - you do the same thing whether you're getting ready to play blackjack, poker, or even go-fish even though those games are very different. So the idea is to create a programming language where you can program the algorithm, and capture everything that is important to the algorithm, separating it from what is process specific.
So looking back on the algorithm -- 1.Get the cards, 2. Shuffle them, 3. Deal the cards. What is different between this and the C++ code before is that here, we don't care how to do it, just get it done. It doesn't matter if I riffled the cards, or pulled some out the middle and put them on top, just as long as they get randomized -- which is what I meant by shuffling them.
The algorithmic way of writing this code would be -- 1.get to the state where i have the cards 2. get to the state where the cards are shuffled 3. get to the state where the cards are dealt. The difference is that this is goal-based -- get there, I don't care how you do it, rather than process-based -- do these specific things.
The Language
The language will be a goal-based language, where each function contains the goal of the function -- which state it will be in after it finishes, and will guarantee that it will get there if the initial conditions are met. Each function will contain a sequence of subgoals to get it from the initial state to the goal state. During compilation/run-time, a STRIPS-like calculation will be done to plan which specific functions to call to create an AST corresponding to the tree of goals and subgoals where each function is a node.
Correctness can be proven since each function is supposed to go from its initial state to goal state, if it doesn't, the the function is incorrect and the run-time can isolate more bugs.
Conclusion
The difficulty in reusing old code is the lack of pre/post information about functions. So a language that requires that information in the form of initial conditions / goal state will allow for greatly reusable code, that will be able to self modify for various purposes at run-time.
K, so there's more to it if people are interested... but right now, it's still in the planning stages, and could use some help from some experts. A simple proof-of-concept version was written in python. I'm trying to work out a version 2 that will compile to parrrot vm. Any ideas/comments/people that are interested?
Interesting idea, but a generic solution for finding the intermediate states between a start state and an end state cannot exist, can it?
generic solution for an arbitrary search from start state to end state = AI, the programmer is supposed to write the function. think of initial to goal state as like if the parameters of the function are valid integers within range (initial), this function will output the product of the integers(goal)
If you restrict yourself to int types, and anything that can be mapped to it, you might want to look at BDDs. It is pretty easy to encode the relation between your pre- and poststates as BDDs and extract a function out of it.
I would say that "process" and "algorithm" are synonyms, so that you need to choose better terminology. It sounds to me like you are talking more about program synthesis from specifications, which is an old and well-studied area. You can probably find some links by searching for "program synthesis."
Sounds like Prolog to me. Can you say how the approach differs? In particular, separating the algorithm (logic) from the process (control) seems to be exactly what the original aim of Logic Programming was, and achieving this by goal-directed search is pretty much how Prolog operates. See e.g. Kowalski's Algorithm = Logic + Control paper (mentioned elsewhere on LtU - e.g. here).
I'm still looking into the subject, but just right off the top of my head, this approach is still very similar to standard imperative programming, except that the functions you call might be swapped out for another one that does the same thing. and it doesn't make heads pop =P (I learned lisp instead of prolog)
Do you know Icon? It is goal-directed like Prolog, but imperative. Do you have something similar in mind?
I tend to think the general problem of this approach is to specify the goal in a way that is sufficiently abstract, concise and leads to an efficient computation that can be found by the solver automatically. Finally the solution has to be understandable and robust. Just think about the goal "sorted list" and what it takes to specify it instead of writing just a few lines of quicksort in Python or Haskell that directly implements a solution in the form of a particular "process" ( which is indeed an algorithm: a step-by-step mechanical process for doing computations ).
Right, that's the part that I'm working out now. Goal/initial are all states, and defining states in a way that is intuitive to humans, and can be understood by a solver is hard... but it's also what we normally use languages to do. A simple state info would be a list of object value pairs. So to make it intuitive we need to make the objects reflect how we categorize objects. A helpful "library" to include in programs would be a common sense definition of different objects. The hope is that w/ a sorting algorithm, you can just write init: x is-a list, goal: x.list.sorted = true.
Or, nearly impossible with current techniques. [This is the same as asking in COQ to develop a witness (=function) of an ordering relation without user-guidance. Now, mathematically this can be done (just try all algorithms up to a certain complexity), but I don't think COQ can do this without user guidance, yet.]
sorry, i don't understand the issue. Is it with resolving whether or not we're in a specific state?
Unless I misunderstood you. You want to derive a function which takes an unordered list to an ordered list. That means your specification of that function is at least second-order (one order up from prolog). There is no general [effective] means to derive such a function.
[Removed a lot of crap]
...it would be a hard problem, but any automated planning system, just like a human, can sort a list without using a hard-coded algorithm, and could probably do it in polynomial time, too (abstractly, of course).
Caveat: the subject is quite a deep rabbit hole; it's called Automated planning and scheduling. NASA has been using it for awhile in their spacecraft! Read about that here. Generally you can search for "automated planning" and find alot about it.
Other people have used the same idea. Google for goal-based programming. Another similar idea is called declarative metaprogramming, which is somewhat like having IO monads in a logic language.
Hopefully this helps.
Planning through constraint propagation means they're doing some kind of propositional logic. Their problems reduce to problems which are solvable by a sat-solver. Maybe often this means problems will be solvable in polynomial time (if the structure of the problem allows it). However, it is also easy to encode NP-hard problems in SAT, so, no dice: it may be fast for most problems, but you can always encode a problem for which the planner will not find a solution fast (probably).
[I assume they don't restrict their problems to problems which may equivalently stated as boolean logic terms which are in Horn form, because for those particular problems fast linear solvers exist.]
[Actually, I am one of the very few people who believe that P=NP might actually be true. But until someone comes around who actually proves that, and exploitable algorithms come up, that discussion is moot.]
[Hehe, I think I actually found a (small beginning of) an attack on the problem. But I will not share ;-)]
Hmmm, maybe I need to reword some stuff, I didn't intend to build a function generator (that takes in specifications for initial / goal state, and produces a function), but a function library (where the computer can decide which of its functions are best suited to the task at hand). It would be cool if the compiler can fill in the gaps in case it isn't supplied with the function, but just the specs. btw, thanks to everyone for their input, got lots of reading material to go through now =D.
out of a library without explicitly given heuristics, or run-time heuristics, is hard (assuming you know which functions satisfy the specification). If you don't know which functions satisfy a given specification, it is in general impossible for a computer to decide because it would need to give an automatic proof for that.
Forgive me if I've misunderstood your post, but it sounds like you are essentially trying to embed Floyd-Hoare logic directly into your language (although I'm not quite sure how your comment about "self-modification" would fit in with that understanding). That's essentially what Eiffel's design-by-contract system does (or any one of a number of other DbC systems for that matter). A statically analysed version of the same idea is implemented in languages like SPARK.
All of which is not to say that your idea is bad, rather that it isn't necessarily new. So you may be able to pick up some good ideas by examining how other languages have implemented the same concept. Or perhaps there's more to your idea than I've been able to glean from your comment. In which case it might be helpful if you could clarify how you think what you're trying to do differs from what Eiffel and SPARK have done.
There is an applied difference between design-by-contract and automated planning:
Ah. Thank you. I'd overlooked the comment to the effect that "...a STRIPS-like calculation will be done to plan which specific functions to call..." I agree, that sounds like automated planning (the reference to STRIPS should have caught my eye). There's certainly been plenty of work in that area, including practical applications in areas like spacecraft autonomy (e.g. compiling high-level science goals down to low-level spacecraft commands).
In a way, what you are describing is generic programming. "Generic programming" is a vague phrase, sometimes simply used to refer to the use of templates/generics, but I am referring to the specific approach advocated by Alexander Stepanov and David Musser.
Their idea is that you start with a concrete algorithm that is a completely specific "process" (in your terminology), and continually "lift" the algorithm, abstracting over unnecessary details while never giving away the fundamental performance/correctness attributes of the algorithm. Once you have done this "lifting" to the logical/practical limit, what you are left with is the essential algorithm, expressed in terms of a set of constrained parameters.
This is a hack job of explaining his definition of generic programming, so I recommend:, where you can find his classic papers on the subject as well as a draft of his upcoming book which takes it slowly and thoroughly.
The idea of generic programming in C++ is a very innovative way of approaching algorithms. You can seperate containers from algorithms. They interact through iterators. With C++09 coming, the relationship is expressed through concepts, making the job of communication between parts more clear.
I fear I don't see why it is innovative. Are you referring to template meta-programming or to generic programming in general?
What I mean is the STL. It provides a new way of thinking about algorithms. An algorithm is not bound to a particular implementation of storage and you can use it on a wide variety of containers.
With the object-oriented way, for each container class, you have to implement all the algorithms you want. Now you implement only one algorithm and your container class only has to provide iterators.
This is idea is far from new, nor did it originate with C++. I suggest you pursue the LtU archive, where generic programming in its many forms, as well as Stepanov's earlier work (in Ada) were discussed many times.
However, Stepanov stated that only in C++, not in Ada or Scheme, could he realized his theory, combining abstraction with efficiency.
Like I said, all this was discussed here many times.
Please try to post your replies by clicking "reply" so as to maintain threading.
I've thought a bit about the suggestion, and given that the language for goals/post-conditions is decidable (something like a version of Hoare logic with bounded quantifiers), then I think the STRIPS algorithm can be made, in principle, to do what you want it to.
Three caveats:
I think this idea is well worth exploring. Not that I'm an expert on the planning literature, but I haven't seen planning technology applied to code construction like this before.
Ireland and Stark (2006). Combining Proof Plans with Partial Order Planning for Imperative Program Synthesis. In Journal of Automated Software Engineering 13(1).
The article descibes an implemented system that does synthesis of code based on Hoare logic. It does not handle prewritten subroutines, though handling that is mentioned in the further work section. The central technique is searching the space of proofs in Hoare logic of the desired specification, and the heart of the discussion is focussed on the problem area #2 I described above, where they use partial-order planning. They describe its application to some small examples that involve synthesis of loops. The authors cite no similar work (except their own prior efforts), but some prior works on automatic program synthesis by Manna and Waldinger (1977 & later), Dershowitz and Reddy (1994), and also some later articles on logic program synthesis.
Thanks everyone for your comments, there's a lotta stuff that I gotta read up before thinking about this further. It seems like people are saying that it's a good idea to have a goal-based system/ separating the logic from control, but it's been done before with different systems.
I still think this is a new idea though, mostly from the aspect that i haven't gotten to describe (the post was pretty long already) -- the objects and relations that will be required for the state information, and how functions will be chosen. I'll try to read up to do some proper comparisons to prolog/automated planning/eiffel/spark.
I think it is clear that, while there is extensive relevant literature, your idea is quite different from anything described in this thread.
As a point of etiquette, the LtU social norms are to use real names; cf. Marc Hamman Most of us here post with our real, full names as a sign that we are prepared to stand behind our opinions, from The Spirit of LtU. So maybe...
The secret it out!! my pseudonym is my real-nym in disguise =P
I gave some unintelligent remarks which I'll try to clarify, for your benefit (I hope).
You're going to do something with pre-/postconditions, goals and program derivation. so you'll end up doing something with logic (and you are restricted to fundamental results in logic).
To go up the tree, here are some fundamental results (I can think of):
1. your pre- postconditions are stated in (or equivalent to) propositional logic. You can use BDDs to describe the relation and derive functions from them. (Or use BDD checking to see that your relation holds).
2. your pre- postconditions are stated in/equivalent to first-order predicate calculus (in Horn from) (i.e. Prolog). You can use unification for specific instances to derive solutions. You can get rid of the Horn form restriction, I think at the expense of a possible exponential blow-up of the term (I am not sure here).
3. your pre- postconditions are stated in (second or) higher order logic and you try to find matching functions. You can use tactics and heuristics to prove correctness to spec. But don't expect proofs for programs complexer than, say fac, and maybe, sorting.
4. your pre- postconditions are stated in (second or) higher order logic. You can use tactics and heuristics to derive programs, and proofs from the specifications. But don't expect to derive any programs complexer than, say fac.
5. your pre- postconditions are stated in higher order logic but over a finite domain. Try to use quantified BDDs. I have no idea where you'll end up.
I think I understand what you're trying to say... that if you have a well-defined pre-post condition, you can use a variety of methods to create functions. That's good and well, and I would probably include something like it in the language, but the problem I see with making that an integral part of the language is that there isn't a one-size fits-all planner.
Mainly, the issue is, as you mentioned, the expressiveness of the knowledge representation. Too expressive, and the search space explodes, and it'll take forever. Not expressive enough and there will be many things you can't solve.
I hoped to have a very expressive kr, ideally one that can express anything you need with as little repetition as possible, that will store the function's information. As a programming language, the programmer will input the plan (in a classical planning sense) and we won't have to worry about search space. The compiler would have enough information, however, to swap out functions that do the same thing, but better for a given situation.
I think your suggestion would work well with this idea, because you can convert a subset of the functions in a library from the high order logic into a lower order one, and create other functions on the fly. But I wanted to leave that part up to the programmer to fiddle with, not the language.
I know I wasn't clear on the STRIPS part, which prolly confused a lotta people, but it's mostly because there are still a lotta hairy issues w/ it and I didn't think it was central to the main idea of the language. The reason why I need a planner here is to find out about state information. It's impossible and illogical to store ALL the information in the world, you should store just the ones you need -- you don't care about the weather when you're cooking indoors, but you do care about the weather when you need to go out to buy eggs. So the planner is supposed to spit out the information that each algorithm needs by using a library of information retrieval functions (ie the goal is to obtain information). The other option (which I think is less elegant) would be to make no distinction between information retrieval functions and normal functions, and require the user to call them manually (through goal-based function calls, of course), but then garbage collection would be needed.
So hopefully that was clear, if not, please wait till I read up on some of these papers, make some decisions on this aspect of the language, and write a follow-up to the original post.
It's the logical expressiveness of your language, or the equivalence of it to a certain logic, which decides up to what point you can automate parts of your language. [I don't write professionaly about it so it takes some time to state it correctly.]
I don't understand STRIPS, didn't read enough about it. (I think I saw linear backchaining [uh, I meant linear search], which would hint at a weak form of boolean logic, or some resolution based proving. Dunno)
What I get from most planners is that they start out as general solvers on (something as expressive as) boolean logic, and features are added which improve the expressiveness.
I would say that is the wrong way around. If you want to implement a [planner] language with a certain expressiveness, choose a logic which will satisfy your requirements, choose a technology you can easily implement, and build a planner language around that.
[Actually, if it is a planning language you want. I guess I would look at QBDDs, unless you're looking at infinite domains.]
[And also, sorry to say, but I don't like your approach which would work with adding features. Say you start out with something equivalent to boolean logic and add some specific search algorithms on first-order predicates you also want to express. You'll end up with some (weird) hybrid between a classical boolean solver and Prolog. Plz, just implement something akin to Prolog and you're done, and you'll know exactly what is expressible and what not. Or go QBDDs, or something.]
What kind of problem would this type of language be best at solving?
Do you have an example of a small problem best done in this language? Some pseudo-code for your language would be interesting. | http://lambda-the-ultimate.org/node/3077 | CC-MAIN-2018-22 | refinedweb | 3,889 | 61.16 |
to make it extra annoying, I also added a Prowl () alert as well so that it actually rings her phone when motion is sensed. The prowl link, takes you straight to the twitter account to view the offending picture.
I found that using wifi, I needed both a strong power adapter and a powered hub in order to consistently get video snapshots from the webcam. Also, certain webcams work better than others. You can see a basic wiring diagram at the top. Not sure if it matches my code below. Just make sure you put the Motion Detectors data line on the same pin you specify in code.
You could easily use this as a home motion detector/burglar alarm. Next step is to get it to automatically turn the light on and off in there.
Here’s the sensor I used: ... 00_s00_i00
Here's the code. You'll need to grab the prowlpy and Twython python bits and fswebcam for linux (apt-get install fswebcam) in order for this to work. My indenting may be off from the copy/paste:
- Code: Select all
import RPi.GPIO as GPIO
import time
import sys
import prowlpy
import os
from twython import Twython

# set your prowl API key here
apikey = 'YOUR PROWL KEY'  # Dummy API-key
p = prowlpy.Prowl(apikey)

# Set the GPIO pin (board numbering) for the PIR
PIR = 11
pirState = False
pirVal = False

GPIO.setmode(GPIO.BOARD)
GPIO.setup(PIR, GPIO.IN)

while True:
    pirVal = GPIO.input(PIR)
    if (pirVal == True):
        # use this to sound an alarm if you have speakers
        # os.system('mpg321 ./alarm.mp3')
        # if motion is detected, check one more time to be sure
        pirVal2 = GPIO.input(PIR)
        if (pirVal2 == True):
            print "Alarm"
            try:
                # Prowl it!
                # p.add('Poop Detected', 'Possible Poop Detection', "Motion", 1, None, "")
                # grab a snapshot
                os.system("fswebcam /home/pi/scripts/test.jpg")
                # tweet it!
                twitter = Twython(
                    twitter_token='YOUR TOKEN',
                    twitter_secret='YOUR SECRET',
                    oauth_token='YOUR TOKEN',
                    oauth_token_secret='YOUR SECRET'
                )
                twitter.updateStatusWithMedia('/home/pi/scripts/test.jpg', status='I just pooped')
            except Exception, msg:
                print msg
            # wait and do it again
            time.sleep(25)
Witz
Preprocessing
The first phase in the compilation of Witz templates is preprocessing. Preprocessing itself can be broken into an include phase, when other templates or template snippets are combined to form a single body, and a subblock replacement phase, when all references to subblocks are replaced with defined subblocks.
#include path
This attempts to retrieve and include another Witz template into the current body. The Witz preprocessor attempts to retrieve a language-specific version of the template first, then defaults to the common version.
#define id
This starts a new template subblock, which ends where the next subblock begins. The first subblock (the one before the first #define is encountered) has id "MAIN". The value of the "MAIN" subblock represents the template at the end of the preprocessing process.
#id
Occurrence of # followed by an id that is neither "include" nor "define" inserts the subblock defined by #define id. This process is recursive - subblocks keep being replaced until there are no more #id in the block; however, any id can be replaced only once (to avoid infinite recursion). If an id is encountered for which there is no corresponding #define, the original text is left unchanged. If there is more than one definition for a single id, the last one is used. ## is always replaced with a single #.
Example:
base.witz:
<html>
<title>#TITLE</title>
<body>#BODY</body>
</html>
#define TITLE Generic title
page.witz:
#include base
#define TITLE The page title
#define BODY
<b><i>Hello world!</i></b>
Witz code
Witz code is marked by the '$' character (the end of code within the template is determined by syntax rules; in rare situations you might have to enclose expressions in parentheses). The character '$' can be escaped to be inserted into text as '$$'.
Values
The type of values processed by Witz is basically equal to U++ Value. Skylark understands some Value types and performs operations on them (like arithmetic for numbers, concatenating Strings), but generally any Value can be yielded by expression evaluation (either come from handler variables, or be created by a function). Skylark always uses the ToString method of Value to convert the final expression value before inserting it into the document.
Witz's primary values come from the shared variable space of the Skylark handler; the only other way to define a new variable is by using for statements. Variable names follow C/C++ rules; '.' before a variable designates a session variable, '@' designates a cookie.
There are also value literals, generally following JavaScript syntax:
numbers: 1, -100, 1.23
strings: "string"
arrays: [1, "hello", 123]
maps: { "key1":"value1", "key2":123 }
Expressions
When expressions are used as logical values, then:
if expression yields a number (int, int64, double, bool), it is false if it is Null or zero, true otherwise
if expression yields a ValueArray, it is false if ValueArray is empty, true otherwise
in other cases, it is false if expression is Null, true otherwise (note that this means that empty String is false, non-empty true).
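As a sketch of those rules in a template (the shared variable 'name' here is hypothetical):

```
$if(name)
Hello, $name!
$else
Hello, stranger!
$endif
```

If 'name' is Null or the empty String, it counts as false and the else branch is taken.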
In the following table, thick lines divide operators with the same priority, with topmost items having the highest priority:
Operator
map[key]
Map value at key. If there is no such key, the result is Null.
map.field
Same as map["field"]. Note that there are 4 special pseudo-fields with different meaning.
variable._first
Variable must be control variable of for cycle. Returns true if current iteration is first.
variable._last
Variable must be control variable of for cycle. Returns true if current iteration is last.
variable._index
Variable must be control variable of for cycle. Returns zero-based index of iteration.
variable._key
Variable must be control variable of for cycle. Returns the key when iterating the map.
array[index]
Array element at position index.
function(value1, value2, ...)
Call to function (functions are defined in C++).
handler(value1, value2, ...)
If the id before '(' represents a Skylark handler, then what looks like a function call actually gets translated into the URL link to the Skylark handler, replacing value parameters at their corresponding positions in the URL, so they act as handler parameters.
-number
Unary minus.
~number
Bit-wise complement.
!value
Logical not. 1 when value represents false, 0 otherwise.
number * number
Multiplication.
number / number
Division.
number % number
Modulo.
String + String
Concatenates Strings.
number + number
Addition.
number - number
Subtraction.
number << number
Shift left.
number >> number
Shift right.
number < number
number > number
number <= number
number >= number
Comparison of numbers.
String < String
String > String
String <= String
String >= String
Comparison of Strings.
value == value
Equality.
value != value
Inequality.
number & number
Binary and.
number ^ number
Binary xor.
number | number
Binary or.
value && value
Logical and. If first value is false, second value is not evaluated, just skipped.
value || value
Logical or. If first value is true, second value is not evaluated, just skipped.
value ? value : value
Conditional expression. Only necessary expressions are evaluated.
Statements
Note that for structured statements, '$/' is an alternative to '$endfor' and '$endif'.
Statement
$expression
Expression. It gets evaluated and the result is put into the document.
$if(condition)
...
$endif or $/
Conditional statement.
$else
Conditional statement with else clause.
$for(variable in array_or_map)
$endfor or $/
Loop iterating through ValueMap or ValueArray. Variable is assigned all values of container while special pseudo-fields are provided for this value:
._first true if element is first in the container
._last true if element is last in the container
._index index of element
._key if container is a map, returns the key of current element
$else
The optional else clause is invoked if no iteration is performed (map or array is empty).
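Putting the loop pieces together, a sketch (assuming a literal array is accepted as the container expression, and using parentheses to delimit the pseudo-field expression):

```
<ul>
$for(fruit in ["apple", "pear", "plum"])
<li>$(fruit._index): $fruit</li>
$else
<li>nothing to list</li>
$endfor
</ul>
```

The $else body would only be emitted if the array were empty.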
Standard library
Function
Description
cycle(i, p1, ..., pn)
Returns p[i mod n].
raw(text)
Returns argument as "raw" text that is not supposed to be html escaped.
count(array_or_map)
Returns the number of elements in ValueArray or ValueMap; 0 if parameter is neither.
post_identity()
Returns a complete definition of a FORM hidden field needed for CSRF attack prevention.
js_identity()
Returns the script required to provide CSRF protection for AJAX POSTs.
set()
Returns an html-formatted list of the current shared variables. Useful for debugging.
render(witz_template, p1, ...)
Inserts another template into the output. Current variables are copied to the new template, and parameters p1... are copied as $_1, $_2 etc. This is similar to using #include, but it is easier to pass arguments to the included template this way.
Last edit by cxl on 05/04/2015.
PYAHK AutoHotKey via Python
Introduction
Aut.
Requirements
- Written for Python2.7, not currently py3k compatible.
- Written against AutoHotkey_H ANSI 32-bit Version 1.1.8.1 (on WinXP).
- A copy of the ANSI 32-bit dll must be provided either in the system location of your version of Windows, or in the same folder as the ahk.py file. (The required dll is not provided as part of this distribution, see the AutoHotKey.dll site for download instructions (alternate link AutoHotKey_H).)
Installation
Get the version from PyPI using pip:
pip install pyahk
Or download the latest commit from bitbucket and:
`python setup.py install`
Testing
A helper script "runtests.py" is provided in the project root to run the entire test suite. Runnable test scripts are provided for each sub-module in the test folder. Tests require Michael Foord's mock library.
Usage
First import the ahk module:
import ahk
Lower level function wrappers provide more direct access to the underlaying dll:
ahk.start()                               # Initializes a new script thread
ahk.ready()                               # Waits until status is True
ahk.execute('a := WinActive("Notepad")')  # Sets a to the return value of WinActive
print ahk.get('a')                        # prints the HWND of the found window or 0x0 as a string
Object wrappers are also provided which have a somewhat cleaner interface:
script = ahk.Script()               # Initializes with start and ready commands as above
a = script.winActive("Notepad")     # Sets a to the return value of the method
print a                             # prints the HWND of the found window as an int, or None
script.variable('b', float)         # Creates a transparent variable attribute
ahk.execute("Random, b, 0.0, 1.0")  # Stores value in `b`
print 100*script.b                  # b is retrieved and converted to its saved type (float)
See the Docs for further details.
ToDo
- Re-write script.Function to use the now working ahk.call function.
- Extend setup script with options to download/install the correct dll?
- Add remaining Control commands to Control class.
- Add doc-tests?
- Extend Script class with something to replace ahk.execute (maybe some kind of subroutine wrapper?).
- Add examples directory.
- Add optional unicode support. | https://bitbucket.org/kitsu/pyahk | CC-MAIN-2017-43 | refinedweb | 374 | 69.89 |
vb.net Call function with arguments on new thread
I want to call a function with an argument on a new thread, but I'm getting an error.
Private Sub myFunction(ByVal fruit As String) MsgBox(fruit) End Sub
Dim newthread As New System.Threading.Thread(AddressOf myFunction("apple")) newthread.Start()
Error I'm getting:
'AddressOf' operand must be the name of a method (without parentheses).
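Not from the original thread, but two conventional workarounds are to capture the argument in a lambda, or to use a ParameterizedThreadStart-style target that takes a single Object; a sketch:

```vb
' Option 1 (VB 2010 and later): the lambda captures the argument
Dim t1 As New System.Threading.Thread(Sub() myFunction("apple"))
t1.Start()

' Option 2: pass the argument to Start(); the target must take one Object
Private Sub myFunctionObj(ByVal fruit As Object)
    MsgBox(CStr(fruit))
End Sub

Dim t2 As New System.Threading.Thread(AddressOf myFunctionObj)
t2.Start("apple")
```

The error occurs because AddressOf wants a method name only; the call with "(...)" has to happen inside the thread's delegate, not at the point where the thread is constructed.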
FLVPlayback source with query strings (parameters) doesn't loadsupervu@gmail.com Aug 31, 2010 8:36 AM
Flash version: CS4
AS version: AS3
------------------------------
I'm currently trying to use the FLVPlayback component and pass a source FLV that's living on a CloudFront web server. The problem is that CloudFront requires authentication in the form of query strings on the source URL. For example:
import fl.video.*; var mainMovie:FLVPlayback = new FLVPlayback(); mainMovie.source = ""; trace(addChild(mainMovie));
As soon as I take away the "dummyquery", it works fine. When I add a query string, it breaks (nothing loads).
Here is the error output I get:
[object FLVPlayback] VideoError: 1005: Invalid xml: URL: "" No root node found; if url is for an flv it must have .flv extension and take no parameters at fl.video::SMILManager/ at flash.events::EventDispatcher/dispatchEventFunction() at flash.events::EventDispatcher/dispatchEvent() at flash.net::URLLoader/onComplete()
It adds on "&FLVPlaybackVersion=2.1" to the end.
I saw a different article that said I should add a dummy variable at the end like "&dummy=.flv", because Flash is basically looking for an .flv extension at the end and you can trick it, but that doesn't work here because the component appends additional parameters.
Does anyone know how to work around this?
1. Re: FLVPlayback source with query strings (parameters) doesn't loadAndrei1 Aug 31, 2010 8:48 AM (in response to supervu@gmail.com)
I don't know how cloudfront authentication works but I doubt you can use FLVPlayback for urls structured this way. Chances are, you need to write your own player taking into account cloudfront NetConnection/NetStream routine.
2. Re: FLVPlayback source with query strings (parameters) doesn't loadsupervu@gmail.com Aug 31, 2010 8:53 AM (in response to Andrei1)
Thanks for the quick reply.
If I take the source URL that's supposed to go in there, and paste it into a browser.. it gives me access to the file. So that's the only way I know that it works with cloudfront's authentication.
The authentication basically requires two queries: expiration, and signature.
Using that same URL and putting it into my FLVPlayback Component's source, it breaks because of the query strings.
3. Re: FLVPlayback source with query strings (parameters) doesn't loadAndrei1 Aug 31, 2010 9:27 AM (in response to supervu@gmail.com)
Again, I don't believe you can use FLVPlayback for this purpose and you should look into writing your own streaming routine because, it seems, FLVPlayback's way of dealing with urls is not flexible enough.
4. Re: FLVPlayback source with query strings (parameters) doesn't loadkglad Aug 31, 2010 9:32 AM (in response to Andrei1)
(i think there is a way to append an identifying query string to the source parameter. check these forums for a similar issue within the past few weeks that, i believe, the op found a work-around.)
5. Re: FLVPlayback source with query strings (parameters) doesn't loadsupervu@gmail.com Aug 31, 2010 9:53 AM (in response to Andrei1)
Thanks.
Aside from that, how do you customize a component? I downloaded the FLVPlayback 2.5 component from the Flash Media Server Software Tools page () and then installed it like it told me to:
Using Windows 7:
- I downloaded the file
- Unzipped it
- Placed it into this directory: C:\Program Files (x86)\Adobe\Adobe Flash CS4\Common\Configuration\Component Source\ActionScript 3.0\FLVPlayback\
- Edited the SMILManager.as to remove the code that appends the "&FLVPlaybackVersion" code
- Opened Flash
It still appends the code when I go to test the movie. Did I not install it correctly? Am I supposed to put it somewhere else? Do I need to tell flash to load a customized version? If so, how?
6. Re: FLVPlayback source with query strings (parameters) doesn't loadAndrei1 Aug 31, 2010 9:55 AM (in response to supervu@gmail.com)
Here is an excerpt from Adobe article (). It seems it explains xml error::
- If you specify the RTMP, RTMPT, RTMPS, RTMPE, or RTMPTE
7. Re: FLVPlayback source with query strings (parameters) doesn't loadsupervu@gmail.com Aug 31, 2010 12:48 PM (in response to supervu@gmail.com)
I figured it out. I'm just posting here so that other people have the solution to this.
I have been following this article: -Google-Gadget/Page1.html
And inside it, there's a section that talks about using the FLVPlayback component with URLs containing query strings (parameters).
Be very careful of the line change.. it actually changes "<" to ">".
Also, you can copy the component .as files straight from your Program files:
C:\Program Files (x86)\Adobe\Adobe Flash CS4\Common\Configuration\Component Source\ActionScript 3.0\FLVPlayback\fl
......and copy that whole folder into the directory of your project.You can then customize those .as files as needed.
When you update those .as files, it will compile during compilation of your FLA.
8. Re: FLVPlayback source with query strings (parameters) doesn't loadscott.tompkins Nov 30, 2010 11:53 AM (in response to supervu@gmail.com)
Hey supervu,
I know you already found a workaround to make this work, just figured I'd post an alternative to editing the FLVPlayback component. I also am retrieving an FLV file via a .Net ASHX file. To keep both worlds happy, I used a URLRewriter module to translate for me... this one translates what I am using for the FLVPlayback component source: to be resolved as
There is of course no flv folder in the root of my application...
Here is my very simple URLRewriter class in vb.net:
Imports Microsoft.VisualBasic
Imports System
Imports System.Web
Public Class URLRewriter
Implements IHttpModule
Public Sub Init(ByVal inst As System.Web.HttpApplication) Implements System.Web.IHttpModule.Init
AddHandler inst.BeginRequest, AddressOf Me.OnBeginRequest
End Sub
Public Sub OnBeginRequest(ByVal app As Object, ByVal e As EventArgs)
Dim inst As HttpApplication = CType(app, HttpApplication)
Dim req_path As String = inst.Context.Request.Path
Dim trans_path As String = ""
Dim search As String = "/flv/"
Dim pos As Integer = req_path.IndexOf("/flv/")
If pos > -1 Then
Dim key as string = req_path.Substring(pos + search.Length, (req_path.LastIndexOf(".flv") - (pos + search.Length)))
HttpContext.Current.Response.Redirect("~/API/resource.ashx?ID=" & key )
End If
End Sub
Public Sub Dispose() Implements System.Web.IHttpModule.Dispose
End Sub
End Class
And make sure you add this to your web.config. This will cause URLRewriter to intercept all HTTP requests and redirect as needed.
<system.web>
  <httpModules>
    <add name="URLRewriter" type="[Namespace].URLRewriter"/>
  </httpModules>
</system.web>
Hope this helps someone.
9. Re: FLVPlayback source with query strings (parameters) doesn't loadMarIfla Nov 29, 2011 2:32 AM (in response to supervu@gmail.com)
I'm having the same problem and I've tried to solve it the same way. But it does't seem to work.
No READY-event is fired. I added a STATE_CHANGE listener, the event is fired and the event.state is VideoState.CONNECTION_ERROR...
Has the behaviour perhaps changed in one of the latest Flash versions? I'm using FLVPlayback version 2.5.0.26. | https://forums.adobe.com/message/4051871?tstart=0 | CC-MAIN-2018-09 | refinedweb | 1,190 | 60.11 |
I’ve followed and it works well! However, I want to use the generated .so library in a new Android application, and I simply don’t know how to do this... I’ve been struggling for days and if any step-by-step tutorial can be shared that would be helpful!
This is the Android Project MyApp which I used to generate the .so files:
MainActivity :
Java Class : MyNDK
header file: com_demo_ble_myapp_MyNDK.h
Cpp file: MyLibrary
And this is the structure of my new Android project useSoLib: I simply copied all the .so files from MyApp\app\src\main\libs to useSoLib\app\src\main\jniLibs
And this is MainActivity in useSoLib:
I can Build -> Make Project successfully, but when I run it on the device, the app shows "Unfortunately, useSoLib has stopped." and crashes. I know I am missing some steps here, but I'm new to Android Studio so I have no clue where I should start... thank you in advance for any suggestions! :)
You should import MyNDK and use an instance of it. A JNI library is tied to the Java class that declares the native methods. Change the code:
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Log.i("MyTag", new MyNDK().getMyString());
    }
}
You can either export the MyNDK class as a jar package or create an aar bundle. Then import the Java class into your new project along with the JNI library.
The post How to Build *.so Library Files into AAR Bundle in Android Studio may help you. | https://codedump.io/share/3GjgvRfE5CxZ/1/how-to-use-the-generated-so-library-in-another-android-project | CC-MAIN-2017-13 | refinedweb | 253 | 57.77 |
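For reference, a sketch of what the wrapper class conventionally looks like (the class, package, and library names here are taken or inferred from the question; the package must match the one the .so was built against, or the native method will not resolve at runtime):

```java
package com.demo.ble.myapp;

public class MyNDK {
    static {
        // Loads libMyLibrary.so from jniLibs; "MyLibrary" is an assumption
        System.loadLibrary("MyLibrary");
    }

    public native String getMyString();
}
```

A missing System.loadLibrary call, or a package name that differs from the original project, is a common cause of exactly this kind of startup crash.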
## Resource dump and change dump manifests (Ex. 2.4, sect 2.2.1, 5.1.1, 7.1.1, etc)
At several points, I found myself wondering if the content-type of the resource should be mandatory here.
## 2.2.1. Source Perspective
Para 6, "Linking to Related Resources": I was thinking how and where this linking would be applied. I think a forward reference to section 8 would help here.
General: I'm not seeing any discussion of resource subsets. IIRC, these show up clearly in the use cases, and I thought they were mentioned in the Michigan meeting. Section 9.1 has a general allusion to the notion of different subsets of resources, but I'm not seeing any clear guidance on how this is done. I think it's handled through multiple capability lists, which begs some questions about how discovery might work. (I can guess at how I think it's meant to happen, but I think the spec should be clearer than that.)
## 2.2.2. Destination Perspective
General comment, about timing and synchronization: I think some discussion of timing and possible optimisations (especially based on change lists and change dumps) would be helpful here. When can a destination pass over a particular set of changes. What information should it keep about past synchronisation operations? Is it always required to scan any change list that it finds, or are there any outer-level indicators that can be used to safely ignore them?
## 2.2.4. Discovery Perspective
I think a forward reference to section 9 would be helpful.
## 3. Sitemap Document Formats
In light of Richard's comments, is it worth underscoring that the attributes are presented without namespace prefixes?
Para 3: <url>/<rs:md>, "capability" attribute: I think this can appear only on a top-level <urlset> or <sitemapindex> element, but the text kind-of suggests it can appear anywhere. Saying "When the attribute is not used, this signifies that the resource is subject to synchronization" seems to create a kind of "post hoc ergo propter hoc" kind of relation, which seems a bit confusing to me. Suggest just specify where it may appear.
Para 3: <url>/<rs:md>, "hash" attribute: if the source supports content negotiation for this resource, what does the hash refer to? (suggest: the representation returned when no Accept: header is specified).
Para 3: <url>/<rs:md>, "path" attribute: saying it "conveys the file path of..." seems a bit unclear to me, in that it seems to refer to a file in the host file system, whose format would depend on the system. I think a description that is more clearly system neutral would be more helpful to ensure interop. A particular case I was thinking about was: what if a path segment contains a "/" character - how would that be represented? When using ResourceSync to create a mirror of a web site (which needs to allow relative references to work properly), is it safe to use the path value to construct a URI for a mirrored resource, or should that be derived from the <loc> element?
## 8.1. Mirrored Content
General: is a mirror required to always deliver identical content to the original source? I.e. is it safe to assume that the rs:md@hash for the original resource also applies to the mirror? If not, should there be some way to give a different hash for the mirror?
## 8.2. Alternate Representations
" • A recommended type attribute that conveys the Media Type of the alternate representation."
Should there also be a way to indicate other HTTP headers that can affect the result returned (i.e. values for other headers mentioned in an HTTP response 'Vary:' header)? Language and device type have been mentioned. I don't think the specification should cover this in detail, but I think this might be a candidate for using an extension point, such as additional rs:md attributes.
## 8.3. Patching Content
How can a destination be sure that a patch is applicable to a particular version of a resource that it has? I think a way to specify the hash of the resource representation to which the patch applies would be appropriate. (I think the hashes expressible here are the hash of the resulting resource and the hash of the patch itself.)
## 8.5. Prior Versions of Resources
"A second approach consists of pointing to a TimeGate associated with the time-generic resource. A TimeGate supports negotiation in the datetime dimension, as introduced in the Memento protocol [Memento Internet Draft], ...".
The referenced document here, like all IETF Internet Drafts, says: "Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress".
As such, it seems to me that this is an inappropriate citation in a document that is in the latter throes of becoming a standard. Do you not have a more permanent specification for Memento? (I'd suggest requesting Informational RFC publication for the current memento draft, through the RFC editor independent submissions stream. This does not preclude a later move for standard status when you're ready to do the IETF last-call dance. Or you could just try for non-WG standard status through the IETF. But I'm not sure if either if these would happen on a suitable timeframe for ResourceSync approval as a NISO standard.)
## 8.7. Republishing Resources
Provenance is mentioned in the motivation for this - I can't help wondering if a W3C Provenance vocabulary URI might be more appropriate than a new link relation; e.g. prov:wasDerivedFrom or prov:wasQuotedFrom (the latter seems a bit odd, but I think the use here falls within the defined meaning).
That's all from me! Nice job!
#g
-- | https://groups.google.com/g/resourcesync/c/GYBm_2krFCU | CC-MAIN-2022-21 | refinedweb | 985 | 60.65 |
In this article we are going to focus on Data types that are available in Java.
Data types in Java
In java we have two main data types.
- Primitive data types
- Reference data types
Primitive Data Types
These data types are to store simple values.
Ex: numbers, characters, Booleans etc.
Let's have an overall look at those primitive data types.
Reference Data Types
This data type refers to objects and is used to store complex data like strings, arrays, classes, etc.
Primitive vs Reference
Since you know how to do variable declaration, I want you to store the value ‘62’ in a variable named ‘physicsMarks’. Go try it and come back!
I guess that you have written your code like this;
int physicsMarks = 62;
Actually, there's nothing wrong in the above piece of code. However, if you pay attention to the size of the data type that you have used, you can see that it requires 4 bytes to store an integer value. But here we have just stored a two-digit number, which only requires 1 byte. Therefore, the most suitable code can be written as;
byte physicsMarks = 62;
With that I think you have realized that paying attention to the size of the data type is also required when declaring variables.
We can declare integer variables as follows;
int population = 123_463_000;
When writing large numbers, we use commas ',' in between (123,456,000). Likewise, we can use underscores as shown above to separate the parts of a given number.
Let's see how to declare variables of type 'long'.
long population = 6_432_736_000L;
As you can see, you need to add L to the end of the number. If not, Java will treat it as an integer and give an error.
Go through the following code section to get more understanding on primitive data types.
float price = 10.99F;    // If 'F' or 'f' is not used, Java sees this number as a double
char grade = 'A';        // need to use single quotes
boolean isMale = false;  // here true and false are reserved keywords in Java
Now let's see an example of a reference data type. 'Date' is such an example. If we look at the code it should be like this;
package com.company;

import java.util.Date;

public class Main {
    public static void main(String[] args) {
        Date today = new Date();
        System.out.println(today);
    }
}
Here,
Date is the class of the reference type. However, to use that you have to import the library
java.util.Date as shown above. Else this will give an error.
We have used the keyword
new to allocate memory since this is not a primitive type. That makes 'today' an object (an instance) of the class '
Date'. The output of this program will gives you the exact date and time that you run the program.
Reference data types have members. Therefore, the object '
now' can access the methods in the class 'Date' by using the dot operator (
now.). Give it a try by yourself :)
Discussion (0) | https://dev.to/chathurashmini/basics-of-java-5-55ph | CC-MAIN-2021-43 | refinedweb | 504 | 73.78 |
Imaging you work for a record company and you need to retrieve some information about some artistes.
You can use constructor to quickly initialize the data members and print them out.
Take note that parameters are not needed for destructor.
#include <iostream> #include <string> using namespace std; class Rock { public: string name; string track; Rock(string, string); //This is the constructor ~Rock(); //This is the destructor }; Rock::Rock(string n2, string t2) { name = n2; track = t2; } Rock::~Rock() { cout << "This is Rod Stewart Signing Off"; cout << endl; } int main() { string n1 = "Rod Stewart"; string t1 = "I was only Joking"; Rock rs(n1,t1); // instance declaration cout << "This is " << rs.name << " Singing " << rs.track; cout << endl; return 0; } | https://codecrawl.com/2015/01/10/cplusplus-constructor-parameters/ | CC-MAIN-2019-43 | refinedweb | 117 | 69.92 |
#include <rte_common.h>
#include "rte_ioat_rawdev_fns.h"
Go to the source code of this file.
Definitions for using the ioat rawdev device driver
Definition in file rte_ioat_rawdev.h.
Name of the device driver
Definition at line 24 of file rte_ioat_rawdev.h.
String reported as the device driver name by rte_rawdev_info_get()
Definition at line 26 of file rte_ioat_rawdev.h.
Enqueue a fill operation onto the ioat device
This queues up a fill operation to be performed by hardware, but does not trigger hardware to begin that operation.
Enqueue a copy operation onto the io "rte_ioat_perform_ops" call i.e. before any new operations are enqueued.
Trigger hardware to begin performing enqueued operations
This API is used to write the "doorbell" to the hardware to trigger it to begin the operations previously enqueued by rte_ioat_enqueue_copy()
Returns details of operations that have been completed
If the hdls_disable option was not set when the device was configured, the function will return to the caller the user-provided "handles" for the copy operations which have been completed by the hardware, and not already returned by a previous call to this API. If the hdls_disable option for the device was set on configure, the max_copies, src_hdls and dst_hdls parameters will be ignored, and the function returns the number of newly-completed operations. | https://doc.dpdk.org/api-20.11/rte__ioat__rawdev_8h.html | CC-MAIN-2021-39 | refinedweb | 214 | 50.57 |
An easy framework to support DJI Tello scripting in Python 3
Project description
easyTello
easyTello is a Python library created to provide users with a simple way to interface and send commands to the DJI Tello drone, as well as to simply and easily teach students how to control the drone using Python 3. All the commands outlined in the DJI Tello SDK 1.3.0.0 are present in this library.
Installation
To install the library, simply run:
pip install easytello
or to install from cloned source:
$ git clone $ cd easyTello $ python setup.py install
Note: easyTello requires OpenCV-Python. If you don't have it installed, simply run:
pip install opencv-python
For more information on OpenCV-Python click here.
Examples
Creating a drone object in Python:
from easytello import tello my_drone = tello.Tello()
Programming the drone to takeoff, fly in a square and then land:
my_drone.takeoff() for i in range(4): my_drone.forward(100) my_drone.cw(90) my_drone.land()
Toggling state of video stream:
# Turning on stream my_drone.streamon() # Turning off stream my_drone.streamoff()
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/easytello/ | CC-MAIN-2021-10 | refinedweb | 201 | 65.73 |
Azure Service Bus libraries for Python
Microsoft Azure Service Bus supports a set of cloud-based, message-oriented middleware technologies including reliable message queuing and durable publish/subscribe messaging.
Libraries for data access
The latest version of the Azure Service Bus library is version 7.x.x. We highly recommend using version 7.x.x for new applications.
To update existing applications to version 7.x.x, please follow the migration guide.
Version 7.x.x
To send and receive messages from an Azure Service Bus queue, topic or subscription, you would use the latest version of the
azure-servicebus. This also allows to manage your Azure Service Bus resources like queues, topics, subscriptions and rules, but not the namespace itself.
Version 0.50.x
The older verson allows you to send and receive messages from an Azure Service Bus queue, topic or subscription, but it lacks a lot of the new features and performance improvements available in the latest version of the same package.
Libraries for resource management
To manage your Azure Service Bus resources like namespaces, queues, topics, subscriptions and rules via the Azure Resource Manager, you would use the below package: | https://docs.microsoft.com/sv-se/python/api/overview/azure/servicebus?preserve-view=true&view=azure-python | CC-MAIN-2022-05 | refinedweb | 194 | 53.21 |
java codes
java codes hi .. i need a login code in java with restriction that with 3 error attempts then the program will terminate .. thanks
java codes
java codes why is every application allowed to use classes System and String without first importing the item image converting to byte codes - Java Beginners
java image converting to byte codes i want to convert an image to human unreadable format which is uploaded by client to server.How can i
Java Example Codes and Tutorials
Java Tutorials - Java Example Codes and Tutorials
Java is great programming... the name to Java and modified the language to take advantage of the
burgeoning World Wide Web.
Java is an object-oriented language, and this is very
similar to C
Java Tutorials - Java Example Codes and Tutorials
Lambda Expression in Java 8
Lambda Expression in Java 8: Learn the power of Lambda Expression
In this section we will learn about the Lambda Expression which is introduced
in the Java...
in the Java 8. This is the biggest upgrade introduced in the Java so far
show codes of this
show codes of this search for the number of occurrence of "and" and "the" in the following sentence by writing a codes. the student like the best teacher at the end of the lessons and others
Spring AOP tutorials and example codes.
in pure java
and there is no need of special compilation process. It does... in a Java EE web
container or application server.
It is another way of thinking about
Iterator in java, Iterator Java Examples
The Iterator is an java interface, it can be used to iterate the java collection objects. In this Java iterator tutorial you will learn how to define of Java Iterator interface
HTML codes
HTML codes Hi,
I am trying to find HTML codes to learn HTML. Can any one html me?
Thanks
Hi,
Please check HTML examples and HTML5 Tutorials page.
Thanks
java application - Applet
java application codes in repetition and decision codes in repition and decision
java
java hello,i am learning java from past 1 month,i learnt till the inheritance & i want to write a program to test weather a given string is palindrome or not...,can you please help me or give codes
import
codes for displaying in calendar
codes for displaying in calendar can i get jsp codes for displaying comments, when the pointer is placed over the particular date in calendar
files uploding and downloading codes
files uploding and downloading codes any one know JSP codes for upload files,download files and delete files from a created virtual memory
Needed jsp codes
Needed jsp codes jsp code for employee payroll.producing a payslip of every employees monthly calculating the gross pay of a particular employee considering the income tax and producing a report which can be printed out
java
java sir I want to create an html page which is going to be interact with the servlet program.
i want to do it by using eclipse ide can you give me the step by step as well as codes?
thanks
Here is a code
Open Source Charting and Reporting Tools in Java
Open Source Charting and Reporting Tools in Java... for which source codes are available under a General User?s License agreement. Some of them are given
below :
JfreeChart: This is a free java library for creating
Java basics
Java Basics are necessary to learn by programmers who are willing to learn Java language. These basics must be followed every-time a program is made in Java.
Java language is completely specified that helps the programmer
JAVA Basics
JAVA Basics What is byte code? Java programs are compiled and compiler generate a class files which contain byte codes. These byte codes can be run in any platform which makes java platform independent language
Online Java Class
Online Java Classes
Today, softwares have made their foray into every field... to simplify tasks everywhere. Any
Software needs a platform to be developed on and Java... to be designed in the best
possible way.
For developing software on Java technology, one
No complete codes for Simple Form Controlle Example
No complete codes for Simple Form Controlle Example No complete codes
java programming - Java Beginners
java programming heloo expert,
thanks for your coding
i recently asked about the question on abt how to program a atm system
may i noe in which platform can i run those codes ??
netbeans ?? any suggestions Tutorial with examples
Java Tutorial with examples What is the good urls of java tutorial with examples on your website?
Thanks
Hi,
We have many java tutorial with examples codes. You can view all these at Java Example Codes
Java BigDecimal hashCode example
Java BigDecimal hashCode example
.... For every number method generates different hash codes.
Method generated hash codes does...).hashCode();
Java_BigDecimal_hashCode.java
java coding
java coding i am using netbean to my project and it is Desktop Application.i want my textfield to accept only numbers or only alphabets .........plez plez plez do help me..........i am just beginner to java codes
HTML color codes
HTML Color Codes
In this tutorial we will show you how to get the different color codes for
designing your HTML pages. You will be able to find the hexadecimal color
code by color name.
We have also provided the tool to create
Welcome to Java Developers paradise!
SCJP Module-1 Question-5
Encapsulation in Java
are the four concepts of Object Oriented Programming (OOPs). Encapsulation in Java is the technique of binding or wrapping the data and the codes in a class private... the class and thus provides security.
Example of Encapsulation in Java:
package
HTML Color Codes
:5px 0px 5px 5px;
}
HTML Web color codes
In this page we have created a list of most used HTML Web color codes. We have
shown the color... color codes:
Name/String
Code
Name
Code
java program - Java Beginners
java program 1.Write a program in java to input a sentence and find out how many palindromes are there in the sentence.
2. Write a program in java...; Hi Friend,
Try the following codes:
1) Palindrome Example
import
java applets - Java Beginners
calculator using java codes?...
4.write a java application to open the file...java applets 1.write main method for display clock applet including... a java applet programme to implement moving a ball from top to bottom... without
java applet notpad
java applet notpad i need Codes Or programe in java applet ; i want add this programe to my site html > please help me our
Learn Java Applets, if you want to learn java applets then please visit the following link
Java - Java Beginners
codes (sorting and partitioning) to Java. It should be able
to execute any set... and contrast it to the quicksort
algorithm
(f) Write a java program
java code and logic - Java Beginners
java code and logic Q1:
how to write java program for the following:
*
* * *
* * * * *
* * *
*
Q2...
T
S Hi Friend,
Try the following codes:
1
simple calculator - Java Beginners
simple calculator how can i create a simple calculator using java codes? Hi Friend,
Please visit the following link:
Thanks
designing on my own but some codes are not compiling help. Don't ignore me please
Convert To Java Program - Java Beginners
Convert To Java Program Can anyone convert this C program codes to Java codes,please...thanks!
#include
int array[20];
int dcn;
int cntr=0;
void add();
void del();
void insert();
void display();
void exit();
void
java - Java Interview Questions
and that contains data and codes with behavior.
In Java everything happens within... classes in languages like C++ and Pascal.
But in Java one can define his/her own... helloObject.helloWorld()
For read in details :
Tutorial For Java beginners
Tutorial For Java beginners I am beginners in Java. Is there any good tutorial and example for beginners in Java? I want to learn Java before Jan 2014.
Thanks
Yes,
If you spend time you can learn Java in the month
web crawler - Java Beginners
web crawler Sir, i want to develop a web crawler using java & java applet.Can u suggest me any ebook that can help me developing that? Can u provide me with some codes regarding the development of web crawler
java
java diff bt core java and java compilation error - Java Beginners
java compilation error Hello,
I sent a previous message regarding... because I figured out I had the codes typed in wrong. After I corrected the codes I still received this message. My program compiles with no errors but I just
java
java what is java
java script - JSP-Servlet
java script How to open a form while clicking a image button? Hi Friend,
You can use the following codes:
1) 'ImageButton.html'
Click the Image Button
2) 'form.html'
Form
Learn Hibernate programming with Examples
. Here is the pre-requisite of learning the Hibernate with
these example codes... knowledge and experience in Java programming
specially the database driven... the Hibernate based codes.
Hibernate Configuration files -
Understand the various
java
java different between java & core java
Java
Java Whether Java is pure object oriented Language Exception - Handle Exceptions in Java
different types of exception in Java
with the example codes... Java Exception - Handle Exceptions in
Java
What is Exception in Java? How
java - Java Beginners
java hello sir . i have some problem in my java program .i have inserted records into sql server database.
the table name is "proj". now i want... help me . Thank you sir. My three codes are as under:--
1 | http://www.roseindia.net/tutorialhelp/comment/89288 | CC-MAIN-2015-06 | refinedweb | 1,594 | 53.61 |
UNSOLVED Snap point to point
Surprisingly I couldn't find an answer to this:
When dragging a point or a contour from a point, is it possible to have it snapped to other points (and how)?
like a point is snapping to a guide? maybe you have to elaborate your question a bit more.
but you could create a tool where the dragging is snapped to all available points.
as a pro-tip: you can subclass a Editing Tool and overwrite modifyDraggingPoint(point, delta).
Good luck
I'd like to position a point exactly on top of another point (while dragging a contour the first point is a part of). That's even simpler than snapping to a guide, since snapping to a line means jumping to the closest point on a line, while in my case I just want my point to snap to a single point.
So a new tool is required?
Hi @oribendor,
At one point I had tried writing a script that would snap to vertical metrics. As an example for you, I've made some adaptations to that script to make it align to other points. Works pretty well, and you can adjust the threshold. Off the top of my head, I'm not sure how to make handles’s relative position to their on-curve points stable while aligning, but this is a solid start. Let me know if you figure that aspect out. Here's a demo video and the script. Run it as a start-up script in RF so the observer is always running. If anyone has any tips for streamlining this type of code, I'd be interested in seeing that, too.
from mojo.events import addObserver threshold = 20 class point2PointSnapper(): ''' A start-up script that makes oncurve points snap to each other within a certain threshold. ''' def __init__(self): addObserver(self, "mouseDragged", "mouseDragged") def mouseDragged(self, notification): # Location of mouse when dragged mouse_x = notification['point'].x mouse_y = notification['point'].y # print(notification) # print(mouse_x) # print(mouse_y) g = CurrentGlyph() if g != None: if g.selection != (): if len(g.selection) == 1: # Registering all the points in the glyph you may want to align to. al_pts = [] for c in g: for pt in c.points: # You probably only want to align to on-curve points. if pt.type != "offcurve": al_pts.append((pt.x, pt.y)) for pt in g.selection: # Aligning a point to itself is... mindblowing, but useless, so we'll disregard current selection as alignable. al_pts.remove((pt.x, pt.y)) pt.x = mouse_x pt.y = mouse_y # For each point you could align to, snap-to if nearby. for al_pt in al_pts: if al_pt[0] - threshold < mouse_x < al_pt[0] + threshold: if al_pt[1] - threshold < mouse_y < al_pt[1] + threshold: print("snapping single point") pt.x = al_pt[0] pt.y = al_pt[1] point2PointSnapper()
cool!
an different approache! I like it | https://forum.robofont.com/topic/727/snap-point-to-point | CC-MAIN-2021-49 | refinedweb | 478 | 75.71 |
This page describes the programming guidelines and associated conventions used for Aida development.
Until we get good at this let's use the Basic Naming Convention (see section 18)
- Avoid using the "#pragma prefix" pragma in an IDL file to specify the package name (which can be used by the jidl switch --auto-package). Instead use jidl switch --prefix-package <module> edu.stanford.slac.aida. This will create the files in directory edu.stanford.slac.aida.<module>, and will put a matching package directive at the head of the source files. (Q: Is there a good reason to use #pragma prefix from the C++ perspective instead?).
- module names should be all lower-case (because these will become java package names, which should be all lower-case according to Java conventions)
- interface names should start with Uppercase first letter, because these will become java interface names which have capital initial in Java conventions. We additionally say they should end in capital I to denote an interface.
- method names should be lower case first letter, since they will become class method names, which have lowercase first letter by convention in Java and C++.
java -Dorg.omg.CORBA.ORBClass=com.ooc.CORBA.ORB \ -Dorg.omg.CORBA.ORBSingletonClass=com.ooc.CORBA.ORBSingleton \ MyApp
We may choose to enforce this in our code at a later date. See the Orbacus FAQ in \package\aida\tool\orbacus\OB_JOB-4.0.5\Orbacus_FAQ.txt
We use a separate IDL module for each servant (da, magnet, name etc). You might ask, why not just different interfaces in one module, like an overall "aida" module. The answer is because an IDL module equates to a java package, and each java package must be in a unique directory. The jidl compiler enforces that by creating a directory for each module it comes across, and putting a "package" statement, with the name of the module, at the top of each file it creates for that module. Correspondingly, the .class files for each package go in their own directory. And this is useful for distributing the servant code, since if each servant's code goes in its own directory, we can easily create a .jar file of the files in that one directory, and deliver that .far file to the machine which is going to host that servant.
Test clients go in package "test", and should import the package which they test (rather than being included with the package which they test).
Documentation should be web based and placed in the Aida web site. The web site filesystem location is afs\slac\package\common\doc. There is a template document for Aida web documents in Aida Template Document.
Please follow the Javadoc documentation conventions and style rules laid out in How to Write Doc Comments for the JavadocTM Tool. . A summary list of tags is here.
Additionally, for method documentation, the javadoc comment should be split into functional summary paragraph - similar to the familiar "Abs:" tag we used on VMS, optionally followed by the longer systematic description paragraph(s) corresponding to the Rem: tag we used on VMS. Note that to distinguish paragraphs, javadoc requires the HTML <p>...</p> tag.
Eg:
/** * <p> Maps a query string to a named data item accessible by AIDA. </p> * * <p> The mapping is defined the regular expression list stored in the * directory service database 'transform' field. </p> * * @param query An AIDA query string * @param transform A sed regular expression which defines how to * re-write the query to get an Aida accessible data item. * @return An AIDA accessible named data item (or, possibly another * query). */ private String transformTarget(String query, String mapExpr) { ...
Javadoc for Corba interfaces should give the details of the implementation in the systematic description paragraph:
Eg
/** * <p> Defines the AIDA Directory Service API implementation. <p> * * <p> DaNameServiceI_impl implements the DaNameServerI IDL interface. <p> */ public class DaNameServerI_impl extends DaNameServerIPOA { ...
[Aida Home Page][SLAC Controls Software Group][ SLAC Home Page]
Author: Greg
White, 15-Jul-2001
Modified by: Greg White, 1-Aug-2001. Added Doc Conventions
Ron MacKenzie, 1-Jul-2002, Added Table of Contents and Exception Handling and Logging section. | http://www.slac.stanford.edu/grp/cd/soft/aida/aidaConventions.html | CC-MAIN-2014-15 | refinedweb | 686 | 54.02 |
Linux
2019-03-06
NAME
chroot - change root directory
SYNOPSIS
#include <unistd.h>
int chroot(const char *path);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
chroot(): in its user namespace) may call chroot().
This call changes an ingredient in the pathname resolution process and does nothing else. In particular, it is not intended to be used for any kind of security purpose, neither to fully sandbox a process nor to restrict filesystem system calls. In the past, chroot() has been used by daemons to restrict themselves prior to passing paths supplied by untrusted users to system calls such as open(2). However, if a folder is moved out of the chroot directory, an attacker can exploit that to get out of the chroot directory as well. The easiest way to do that is to chdir(2) to the to-be-moved directory, wait for it to be moved out, then open a path like ../../../etc/passwd.
A slightly trickier variation also works under some circumstances if chdir(2) is not permitted. If a daemon allows a "chroot directory" to be specified, that usually means that if you want to prevent remote users from accessing files outside the chroot directory, you must ensure that folders are never moved out of it. filesystem, other errors can be returned. The more general errors are listed below:
CONFORMING TO
SVr4, 4.4BSD, SUSv2 (marked LEGACY). This function is not part of POSIX.1-2001.
NOTES
SEE ALSO
REFERENCED BY
capsh(1), chdir(2), clone(2), getrandom(2), mount(2), pivot_root(2), getcwd(3), syslog(3), system(3), core(5), proc(5), capabilities(7), path_resolution(7), schroot(1), dchroot(1), slapd(8), switch_root(8), bootptab(5), clsync(1), dchroot-dsa(1), chroot(8), ftpd(8), chroot(2), rssh.conf(5), opendkim.conf(5), nsca-ng.cfg(5), pure-ftpd(8), tntnet.xml(7), haveged(8), openarc.conf(5), kftpd(8), opieftpd(8), jailkit(8), jk_check(8), jk_chrootlaunch(8), jk_chrootsh(8), jk_cp(8), jk_init(8), jk_jailuser(8), jk_list(8), jk_lsh(8), jk_socketd(8), jk_uchroot(8), jk_update(8) | https://reposcope.com/man/en/2/chroot | CC-MAIN-2022-21 | refinedweb | 343 | 54.93 |
Aren't there utilities to move messages from a folder back to
SOUP or mailbox format *.MSGs - whatever you are importing?
It seems you could filter to a temporary folder, run your
f_order, run your utility that puts messages back into your
import format, save the filter file that specifies your filters
(because this filters to the temp folder), copy over a copy of
the filter file that specifies the proper psuedo-group, do your
re-import, and restore your regular filter file.
6 steps, but not unmanageable from a batch file.
--
Steve Washam Walla Walla, Washington sew@valint.net ---------------------------------------------------------------
It's hard to make predictions -- especially about the future. Casey Stengle | http://www.vex.net/yarn/list/199707/0014.html | crawl-001 | refinedweb | 112 | 61.97 |
When you subscribe to a channel in Retlang, you get an IUnsubscriber back. The equivalent of this in Rx is just IDisposable. This makes AnonymousDisposable is a fairly vital class in ReactiveExtensions. I even used it in the last post. It’s a pity someone decided to mark it as internal (again). So here’s another implementation of it:
public class AnonymousDisposable : IDisposable { private readonly Action _onDispose; public AnonymousDisposable(Action onDispose) { _onDispose = onDispose; } public void Dispose() { _onDispose(); } }
Technorati Tags: Reactive Extensions,Rx,Retlang,Open Closed Principle
One thought on “Reactive Extensions: AnonymousDisposable”
*clank* You’re right. The same holds true of AnonymousObserver. I’m still not very impressed with Microsoft’s love affair with static methods, though. It feels like unnecessary code hiding to me. And I note that, in their own code, they don’t call Disposable.Create. | https://colourcoding.net/2010/08/03/reactive-extensions-anonymousdisposable/ | CC-MAIN-2019-04 | refinedweb | 139 | 50.02 |
This action might not be possible to undo. Are you sure you want to continue?
• The statements that enable you to control program flow in a C# application fall into three main categories: selection statements, iteration statements, and jump statements. • In each of these cases, a test resulting in a Boolean value is performed and the Boolean result is used to control the application's flow of execution. • You use selection statements to determine what code should be executed and when it should be executed. C# features two selection statements: the switch statement, used to run code based on a value, and the if statement which runs code based on a Boolean condition.
The if statement's syntax follows-the square brackets denote the optional use of the else statement (which we'll cover shortly): • if (expression) statement1 [else statement2] .• The most commonly used of these selection statements is the if statement. • The if statement executes one or more statements if the expression being evaluated is true.
control is passed to statement2. Also note that statement1 and statement2 can consist of a single statement terminated by a semicolon (known as a simple statement) or of multiple statements enclosed in braces. If expression results in true. control is passed to statement1. expression is any test that produces a Boolean result.• Here. If the result is false and an else clause exists. This is in contrast to languages such as C++ where you're allowed to use the if statement to test for any variable having a value other than 0. . • One aspect of the if statement that catches new C# programmers off guard is the fact that the expression evaluated must result in a Boolean value.
you can specify an expression that returns an integral value and one or more pieces of code that will be run depending on the result of the expression. a switch statement consists of only one conditional statement followed by all the results that your code is prepared to handle. It's similar to using multiple if/else statements. but although you can specify multiple (possibly unrelated) conditional statements with multiple if/else statements.The switch Statement • Using the switch statement. Here's the syntax: • switch (switch_expression) { case constant-expression: statement • jump-statement • case constant-expressionN: .
uint. control is passed to the first line of code in that case statement. the switch_expression is evaluated. you must provide a jump-statement for each case statement unless that case statement is the last one in the switch. . long. • First.• There are two main rules to keep in mind here. and then the result is compared to each of the constant-expressions or case labels. the switch_expression must be of the type (or implicitly convertible to) int. or string (or an enum based on one of these types). char. • Second. including the break statement. ulong.defined in the different case statements. First. Once a match is made.
or looping. • The do/while statement takes the following form: do embedded-statement while ( Boolean-expression ) . a specified simple or compound statement is executed until a Boolean expression resolves to true. for statements enable you to perform controlled iteration.• In C#. do/while. the while. • In each case.
• Because the evaluation of the while statement's Booleanexpression occurs after the embedded-statement. you're guaranteed that the embedded-statement will be executed at least one time. .
Control is then passed to the line of code following that loop's or conditional statement's embedded statement.The break Statement • You use the break statement to terminate the current enclosing loop or conditional statement in which it appears. the break statement has no parentheses or arguments and takes the following form at the point that you want to transfer out of a loop or conditional statement . • Having the simplest syntax of any statement.
using System. if (i == 66) break.Text.• • • • • • • • • • • • • • • • • • • • • using System. i <= 100.WriteLine(i).Generic.Collections.Linq. } } } } . using System. i++) { if (i%6==0) Console. using System. for (i = 1. namespace ConsoleApplication22 { class Program { static void Main(string[] args) { int i.
The continue Statement • Like the break statement. However. the continue statement enables you to alter the execution of a loop. instead of ending the current loop's embedded statement. the continue statement stops the current iteration and returns control back to the top of the loop for the next iteration. .
It's the result of that conversion that is passed back to the caller. and it causes an immediate return to the caller.• The return statement has two functions. it evaluates whether the returnexpression can be implicitly converted into a form compatible with the current method's defined return value. It specifies a value to be returned to the caller of the currently executed code (when the current code is not defined as returning void). . The return statement is defined with the following syntax: • return [ return-expression ] • When the compiler encounters a method's return statement that specifies a return-expression.
880/ 94 90 30 41 . 89. 89. $9.483 445 47 .9 4: .78 43974 8 903 5.%0-70.38107 4:9 41 . 445 47 .08 90 1443 1472 .70390808 47 .902039 .3/ 9. 89.9 445 8 47 .7:20398 . 89..43/943. 41 .:77039 03. 9 .4/0 1443 9.902039 3 .902039 90 -70. 89.9 90 5439 9.8 34 5.902039 W 4: :80 90 -70.902039 W .90 90 .39 94 97.902039 8 02-0//0/ 89.550.43/943.902039 94 90723.43/943.902039 .3 90 825089 839.3 89.
2085. < < < < ..W W W W W W W W W W W W W W W W W W W W W :83$8902 :83$890240.3 8973(.943 .9. :83$890236 :83$8902%09 3.94380307.88!747.78 39 147 1 4384079030 1 -70..04384055.2 89...4/.
.43974 -.4393:0 89.4393:0 89. 445 40.:77039 445 8 02-0//0/ 89.902039 90 .:77039 907.902039 W 0 90 -70.907 90 00.4393:0 $9.07 3890.943 ./ 41 03/3 90 . 89.902039 89458 90 .-08 4: 94 .3/ 709:738 .:943 41 . 94 90 945 41 90 445 147 90 309 907.943 .%0.902039 90 .902039 03.
.9 .4/0 8 349 /0130/ .108 .8 709:733 .902039 .07 %0 709:73 89.43.880/ -.:90/ .:808 .4/0 03 90 . 1472 .07 41 90 .9 850.425.07843 9.908 0907 90 709:73 05708843 .3 -0 25. W 709:73 709:73 05708843 ( W 03 90 .:0 94 -0 709:730/ 94 90 ..90 709:73 94 90 .42507 03..W %0 709:73 89.9 .:77039 2094/ 8 /0130/ 709:73 . 94 90 .4/ . .902039 9.07 ..3/ 9 .9-0 9 90 .43.8 94 1:3..4:39078 .108 ...902039 8 /0130/ 9 90 1443 839.0790/ 394 .9438 9 850.9 8 5.3 220/..:0 9 8 90 708:9 41 9.:.:77039 . 2094/ 8 709:73 89. 709:73 05708843 9 0.:77039 00.. | https://www.scribd.com/presentation/64286836/15812-anu1 | CC-MAIN-2017-09 | refinedweb | 1,181 | 69.38 |
Released on 2012-10-10 17:00 GMT
Johnson's Russia List
2011-#137
1. BBC Monitoring: Putin Says Iron Rule Inefficient Method of Government.
2. BBC Monitoring: Russian premier hails 'effective' work in tandem with
president.
3. Interfax: Putin Categorically Denies 9/11 Attacks Were Organized By U.S.
Secret Services.
4. ITAR-TASS: Putin promises to lose 1/2 kg of weight in 6 months.
5. Kommersant: Vladimir Putin promises voters a "new country"
6. Russia: Other Points of View: Patrick Armstrong, MY PRESIDENTIAL ELECTION BET.
7. Moskovskiye Novosti: Olga Kryshtanovskaya, MEDVEDEV'S COURT. Dmitry Medvedev
remains a leader without a team of his own, which makes his chances for another
term of office flimsy.
8. Andrei Liakhov: WILL MEDVEDEV SOON DECLARE HIS CANDIDACY AND RUN AS AN
INDEPENDENT?
9. Moscow Times: Police Tests Over, Only 21 Generals Flunk Out.
10. ITAR-TASS: 140 top police officers fail performance reviews.
11. The Voice of Russia: Russians have fewer children than they'd like to have.
12. RFE/RL: New Russian Law Signals Tougher Anti-Abortion Stance, Could Spark
Social Divide.
13. Moscow Times: Vladimir Frolov, Medvedev Has Lost His 2012 Bid.
14. Interfax: Yurgens Dismisses Russian Popular Front as PR Move.
15. Svobodnaya Pressa: Yurgens/Gontmakher Forecast of 'Major Crisis' if Medvedev
Is Not Candidate Eyed.
16. Moscow Times: Nikolai Petrov, Putin Will Need a Long Shower After the Vote.
17. Jamestown Foundation Eurasia Daily Monitor: Pavel Baev, The Prospect of
Putin's Return Comes Into Focus.
18. Moscow Times: Vladimir Ryzhkov, All in the Family.
19. Moscow Times: New State Body Stirs Crackdown Fears.
20. Moscow News: LiveJournal users fear election crackdown.
21. Russia Profile: Virtual Nationalism. Recent Cases Indicate that Social
Networks Are a Catalyst for Spreading Nationalism in Russia.
22. Moscow News: Magnitsky case reopened in a bid to clear dead lawyer's name.
23. Moskovskiye Novosti: Yelena Panfilova, head of Transparency International
Russia and chairwoman of the Russian President's Council for the Development of a
Civil Society and Human Rights: The Magnitskiy Case -- Three Scenarios.
ECONOMY
24. ITAR-TASS: Russian rouble can become regional reserve currency Putin.
25. ITAR-TASS: Russia's industries to reach pre-crisis level in 2013-2014
minister.
26. Russia Profile: Poor Russia. The Number of People Who See Themselves as Poor
Is Growing in Russia.
27. Izvestia: Academics propose progressive income tax.
28. Moscow News: Elite living in Moscow. Why Moscow, a city of both billionaires
and dilapidated housing, is so expensive, is often a mystery to newcomers. In a
3-part series we look at the real cost of living here.
29. Moscow Times: Tim Osborne, Yukos Bankruptcy 5 Years On.
FOREIGN AFFAIRS
30. Interfax: Putin Praises U.S. For Responsible Decision to Raise Debt Ceiling.
31. Interfax: Russia's Putin Critical Of NATO Strategy In Libya.
32. Valdai Discussion Club: Dmtry Suslov, Michael McFaul and the future of the
"reset"
33. Komsomolskaya Pravda: RUSSIAN FOREIGN MINISTRY: RESOLUTION ON OCCUPATION OF
ABKHAZIA AND SOUTH OSSETIA IS BUT PR STUFF. MOSCOW'S RESPONSE TO THE AMERICAN
SENATE RESOLUTION.
34. Interfax: Russia's response to U.S. Magnitsky case blacklist could concern
senators, congressmen - analyst. (Sergei Markov)
35.: Yuri Mamchur, Russia and the Arab Spring: the Kremlin's
Short-Term Gains Are Russia's Long-Term Losses.
36. Moscow News: Mark Galeotti, Mythologizing the 'mafiya'
37. Interfax: Only 5% of Russians Call Lukashenko True Friend.
38. AFP: Medvedev scraps Ukraine visit after gas merger fails.
#1
BBC Monitoring
Putin Says Iron Rule Inefficient Method of Government
NTV Mir
August 1, 2011
Russia does not accept totalitarian rule because it is inefficient and destroys
freedom and creativity, Prime Minister Vladimir Putin has said. Gazprom-owned
Russian NTV showed him talking to participants in the pro-Kremlin Seliger-2011
youth forum in central Russia on 1 August.
"Maybe Russia and the Russian people really need a kind of iron hand and
radically harsh punishment measures," a young woman suggested, addressing Putin.
"Do you believe this yourself?" he inquired, and when she replied that she did "a
little bit", Putin said:
"It's a pity. It's a pity because it is an inefficient method of government.
Particularly in present-day conditions, this leads absolutely nowhere, because
all the elements of a totalitarian regime contain the main thing that does harm.
Of course, during the era of Stalinism, millions of people died in (prison)
camps, and it is terrible; but even more important is that totalitarian forms of
government completely kill off freedom and people's creative endeavour, and no
state can fill this gap.
"As a result of this, the economy, the social sphere, and politics all become
inefficient, and the state is doomed. This is exactly what happened to the Soviet
Union. You and I, we don't want a repeat of this, do we? Which means we must not
do this."
[return to Contents]
#2
BBC Monitoring
Russian premier hails 'effective' work in tandem with president
Text of report by state-owned Russian news channel Rossiya 24 on 1 August
(Presenter) The prime minister (Vladimir Putin) has spoken (at the Seliger youth
forum) about how he views the results of his work alongside the country's
president (Dmitriy Medvedev).
(Putin) Dmitriy Anatolyevich and I have known each other for a long time, as you
know. We have worked a lot together, and we have common views on the fundamental
questions of the country's development, and on international affairs. But we are
different people, of course. And we may have some differences in taste and our
own notions on how to take certain steps.
But it is very important that we have established a style of work whereby we hear
each other, listen to each other and take balanced, considered decisions within
the framework of our responsibilities. And on the whole this proverbial tandem,
which there has been so much talk about, has genuinely succeeded as an effective
instrument.
(Official state TV channel Rossiya 1 showed Putin appearing to suggest that he
would indeed be returning to the Seliger forum, in answer to a question from a
young woman as to whether he would be attending next year's forum, and in what
capacity. "Is it interesting for you to sit and chat with me and listen to my
answers to your questions," Putin asked the young audience, eliciting cheers and
resounding positive replies. "And is it important to you what capacity I am here
in?" Putin asked, provoking equally resounding cries of "no!". "This is the
answer to your question," Putin said.)
[return to Contents]
#3
Putin Categorically Denies 9/11 Attacks Were Organized By U.S. Secret Services
LAKE SELIGER, Tver region. Aug 1 (Interfax) - Claims that the terror attacks on
September 11, 2001, were organized by the United States intelligence agencies are
"complete nonsense," said Russian Prime Minister Vladimir Putin.
"This is complete nonsense, it is impossible," Putin said on Monday, responding
to a question posed by an attendee of the Seliger 2011 youth forum.
"To imagine that U.S. intelligence services did it deliberately, with their own
hands, is complete nonsense," the prime minister said.
Only people who do not understand the workings of security agencies can say that,
Putin said. "It is impossible to conceal it," he said.
"I cannot imagine how any of the current or former U.S. leaders could have such
an idea," he said.
[return to Contents]
#4
Putin promises to lose 1/2 kg of weight in 6 months
LAKE SELIGER, Tver region, August 1 (Itar-Tass) Prime Minister Vladimir Putin
arrived at the youth forum Seliger-2011 in the Tver region on Monday, August 1,
and promised its participants to lose half a kilo of weight in six months.
Putin, who flew in by helicopter, first walked up to a stand titled "Vladimir
Putin and a Healthy Way of Life" where a hundred young men and women were waiting
for him on a platform that at the same time served as a large scale.
They had learned about healthy nutrition, had been taught to cook healthy food
and play sports.
One of them, Nikita Italyantsev, who had lost 37 kilograms in one year and now
weighs 113 kilograms, greeted the prime minister. Putin heard his story and said
that such effort required the will of steel.
The young men and women invited Putin to join them on the scale. When he did, the
combined weight came to 6,300 kilograms. "We are not slim," the prime minister
said, smiling.
In reply, the young people promised to lose a tonne over the next year. "I, too,
promise to lose half a kilo," Putin added.
In a different part of the camp, another forum participant proposed to amend the
traffic rules in order to allow a left turn against the red light if this will
not create problems for other vehicles and pedestrians.
A special plastic sign with a green arrow on a streetlight will indicate that
such a turn is allowed, he said.
The author of the idea, Alexander Shumsky, said his proposal would not take much
money to implement but would reduce traffic jams by 15 percent and save
gasoline, while a plastic sign would cost a mere 20 roubles.
He said this had already been done in the United States, Canada, Germany and
other countries.
Putin immediately telephoned Deputy Interior Minister Viktor Kiryanov and told
him about the suggestion. "The guys here say that Germany is already using this.
Work on this please," he said.
While taking a walk on the camp's grounds, Putin suddenly came up to the climbing
wall where one alpinist with a safety rope was training at the time. He watched
the athlete and, when he came down, Putin decided to follow suit, getting
halfway up without a safety rope before climbing back down.
Another forum participant demonstrated a tandem bicycle but cautioned that it was
not easy to use because it required synchronised teamwork.
"Dmitry Anatolyevich [Medvedev] and I will come over here and try it," Putin
said, smiling.
Activists from different youth organisations showed their video presentations.
Dmitry Chugunov, coordinator of the Stopkham project, spoke about how his
organisation was fighting rude behaviour on the roads.
Putin supported the organisation's work and urged its members not to overlook
even minor violations in everyday life.
"If we do not pass by such violations, discipline will be better, safety will be
better, and this is one of our biggest problems," he added.
The next presentation was by Ecology Project supervisor Tikhon Chumakov who spoke
of the head of one of the rural settlements in the Moscow region, who had agreed
to clean up an unlawful dumping ground after local activists had piled up a part
of the waste from that site in front of his office.
Suddenly, a young man came up to the prime minister and introduced himself as a
journalist with Seliger TV. He suggested making a documentary about one of his
working days. "Why not," Putin replied and shrugged, suggesting negotiating the
date later.
Putin first visited the forum in 2009. Forum organisers plan to show to the prime
minister the best projects presented by the participants this year. These include
proposals on how to regulate traffic and deal with traffic jams, introduce
energy-saving technologies, fight unscrupulous retailers that sell products past
their shelf lives, and promote healthy lifestyles.
Putin also plans to talk with more than 4,000 participants in the "Politics"
shift to discuss various aspects of life in the country, including domestic
political issues.
Seliger is a youth educational forum held since 2005 on Lake Seliger in the Tver
region, near the city of Ostashkov (370 kilometres from Moscow). The Federal
Agency for Youth Affairs is the organiser of the forum.
This is an educational mega project that brings together the leaders of Russian
youth organisations and thousands of the best students who are interested in
politics, economics and innovations.
In 2005, the forum was attended by 5,000 people. Nowadays no fewer than 15,000
young and talented people who have their own projects come to the forum.
Before coming to the forum each participant goes through strict selection
procedures in his respective field. Shift organisers choose the best and most
successful ones.
Nine thematic shifts have been held within the framework of the Seliger-2011
forum: Innovations and Technical Creativity; Entrepreneurship; Information Flow
(journalism); Politics (public initiatives); Technology of the Good
(Volunteerism); ARTPARADE (creativity); a fitness shift called "Run after Me";
"Everyone is at Home" (housing and utilities); and an international shift.
Traditionally, some prominent figures come to the forum every year. This time,
high-ranking representatives of the executive and legislative branches of
government, regional governors, entrepreneurs, mass media top managers, athletes,
writers, artists, and actors addressed the forum participants.
[return to Contents]
#5
Kommersant
August 2, 2011
Vladimir Putin promises voters a "new country"
By Natalia Bashlykova
Yesterday, a video appeared online in which United Russia leader, Vladimir Putin,
explained that, with United Russia's help, a "new Russia" will be created. The
prime minister's press service claims that Vladimir Putin does not have any
relation to the video clip, and United Russia representatives are calling it a
"people's ad". The country's political opposition believes the ruling party has
begun its State Duma campaign.
The video clip "We are building a new Russia" appeared yesterday on United
Russia's official website, as well as the party's online blogs on Twitter and
YouTube. Compiled from fragments of Mr Putin's speeches, it sounds like an
undivided address by the party's leader. The prime minister talks about raising
social benefit payments (increasing pensions, stipends, as well as government
workers' salaries), the modernization of healthcare in the regions, development
and promotion of sport in the society, and United Russia's role in contemporary
history, calling it "a cornerstone of politics and the economy." The video clip
ends with landscape shots and the prime minister's words: "We are building a new
Russia!" The clip also shows the head of the United Russia Supreme Council, State
Duma speaker Boris Gryzlov, deputy prime minister and coordinator of the
All-Russian Popular Front (ONF) Vyacheslav Volodin, and acting secretary of the
United Russia General Council Presidium, Sergey Neverov. In chronological
sequence, they appear before President Dmitry Medvedev, who appears in the video
clip only once, and is even preceded by Nadezhda Babkina, who is currently taking
part in the joint primaries of the party and the ONF.
After the video clip was posted online, many United Russia members copied it into
their blogs and commented. "It's all beautiful, except for N. Babkina, who is
known to be excited by the party," United Russia MP Aleksandr Khinshtein writes
on his Twitter page. "Adding to favorites! We are building a new Russia!" writes
a member of the Tyumen regional Duma, Sergey Romanov.
Meanwhile, the prime minister's spokesman, Dmitry Peskov, told Kommersant that
the video clip was not coordinated with Putin and he does not know who authored
it. "Right now, I don't know whether or not we will issue a compliant and ask for
it to be removed. I think that if there was something inappropriate, United
Russia would not have posted it," said Mr Peskov, adding that he personally had
not seen the video and is currently on vacation.
According to Kommersant's sources, the project was managed by the head of the
Public Council under the United Russia General Council Presidium, Aleksey
Chesnakov. "Chesnakov was given carte blanche to conduct an online party
campaign, and when he met with bloggers, he made project suggestions," a source
within the party told Kommersant. Mr Chesnakov, meanwhile, told Kommersant that
"the person who had suggested a number of clips is a party member, but does not
want his identity disclosed. We liked this sample of people's advertising. If we
are given video clips addressing the party's problems, we will post these on our
website as well," he explained, and said that United Russia's official campaign
will be launched after the party congress, as it should be.
"I have some serious doubts that there are such philanthropists out there who
would invest money in this and set priorities in such an intelligent manner. This
is certainly a false start of a campaign," head of the legal service for the
CPRF, State Duma Deputy Vadim Solovyev, told Kommersant. According to him, by law
it is impossible to "make any claims against United Russia, though it is a
blatant campaign ad that smells of the bribery of voters. The clip could not have
been made without any approvals. All parties act under the law, and United Russia
by self-created principles," noted the leader of the LDPR faction at the State
Duma, Igor Lebedev. "I don't believe that United Russia had nothing to do with
it. All that's left for them to do is use public funds to post slogans that
United Russia is brainpower, honor, and dignity of our era," deputy head of the
Just Russia Party faction, Gennady Gudkov, told Kommersant.
Experts show solidarity with the party members. "It's hard for me to imagine the
enthusiasts who would create this clip. Perhaps United Russia is testing various
forms of campaigning. Before this, there was the online clip called 'I'll rip it
for Putin!'" said political scientist Mikhail Vinogradov. According to him, the
showing of a large number of United Russia members and various accents support
the idea that the party did have something to do with it. In turn, the director
of the Institute of Election Technologies, Evgeny Suchkov, says that,
technically, United Russia is doing everything correctly. "It needs to raise the
stagnant ratings, and it is doing so with the help of Vladimir Putin," he
explained. And having dedicated the majority of time in the video clip to the
prime minister and not the president, the party, according to the expert, made it
clear as to who is more important.
[return to Contents]
#6
Russia: Other Points of View
August 1, 2011
MY PRESIDENTIAL ELECTION BET
By Patrick Armstrong
It sometimes seems that the only story in Russia today is who will run for
President, and the Kommentariat is parsing every word uttered by Putin or
Medvedev in its search for clues. Neither has yet said anything definite (and
neither will: the fear is that the Russian bureaucracy, ever alert to power
shifts, would stop working altogether).
similar speculation before: Gorbachev would not step down; Yeltsin would not
(could not some said) step down; Putin would change the Constitution and stay on.
In some cases, there are Russia watchers who have stoutly maintained all these
positions.
I was amused by a recent and rather lengthy think piece which concluded that
the possibilities were that Medvedev, Putin or someone else would be the next
President. I believe that people who watch Russia should do better than that; and
I am putting my bets down:
1. Medvedev will run for President and Putin will not.
2. There may be another candidate from the Team who runs.
Medvedev will run for President and Putin will not.
I believe that the decision was made some years ago that Putin would not serve
more than two terms and that he would hand off to a trusted member of the Team
which has been running Russia since 2000. What Putin did, by stepping down as
President and re-appearing as Prime Minister, was something not before seen: for
probably the first time in Russian history there are two power centres which are
cooperating. Many people simply cannot grasp the concept and insisted for some
years that Medvedev was just a place-holder; now, curiously, the conventional
view is becoming that Medvedev is somehow opposed to Putin and that Putin will
take back the reins.
I maintain that there has been a Plan since 2000; that that Plan can be seen in
the speeches of the two and especially in Putin's Russia at the Turn of the
Millennium. Or to take a more recent example, here is Putin reflecting on what
has been accomplished and what remains to be done. I do not apologise for the
length of the quotation; in my opinion too few actually read what the man says.
He remembers what he faced in 2000:
The scale of the tasks was directly proportionate to the problems Russia was
facing at the beginning of the 21st century. We entered the new century after a
default that spurred inflation growth and led to bankruptcies and unemployment.
At least one-third of ... Most importantly, we have ensured stability.
In short, three problems: economic failure, an ineffective state and lawlessness.
He did not mention the fourth: a Russia that was considered to be a declining
power, on its way to negligibility. But, "Most importantly, we have ensured
stability". That sums up what Putin thinks he did as President.
The interviewer then asks him whether, "after solving high-priority problems in
the past decade, we now need qualitative changes and some kind of a breakthrough
in all spheres of the country's life?" Putin answers: ...
Qualitative change is, of course, one of Medvedev's continually-repeated themes.
I fail to see any serious disagreement between the two here. Putin "restored
stability" during his two presidential terms and now is the time for a
"qualitative breakthrough". Phase I then Phase II of the long job of rebuilding
Russia after what Putin once called the "blind alley" of communism.
Making the "qualitative breakthrough" will, of course, be more difficult,
although there were many who thought that stability could not be restored; a
favourite example is a piece that appeared in 2001 whose title says it all:
"Russia is Finished". It will also take much longer, and, in some respects, will never be
completed because the target of "modernity" is continually moving.
Therefore, the Plan has moved into another phase and that is the job of Medvedev,
whom Putin picked and nurtured (and they were both grown in Anatoliy Sobchak's
nursery). I see no serious evidence that Putin is dissatisfied with his choice.
Compare-and-contrast lists like this one do not convince me that there is a strategic
difference between the two.
And there is a little clue: in another interview, Putin dropped a pretty
significant hint when he said he was "fed up with foreign policy". Foreign policy
is a rather large part of the President's job.
Therefore, I expect that The Plan will be adhered to and Medvedev will run for a
second term and Putin will not.
There may be another candidate from the Team running
In 2006 the political party Just Russia was created. It was clear that there
was a good deal of involvement by the Kremlin in its creation. This was puzzling
because United Russia is the "Kremlin party"; why would the Kremlin want to
create a second establishment party with a slightly different flavour?
The Russian political scene is rather barren. The only detectable raison d'etre
of the majority party is the division of the spoils of power. The Communists and
Zhirinovskiy's personality party have little to offer the majority. Russian
liberals are quarrelsome and play more to outside opinion than to Russian
interests. Perhaps the hope was that Just Russia would attract membership away
from the Communist and liberal electorates but, if so, there is little evidence
that it has. Although the party has secured seats in the Duma and in regional
legislatures, it is very much a second fiddle to the all-dominating United
Russia.
At the time just before the 2008 presidential election it seemed possible to me
that the reason for Just Russia's creation could be that two Team candidates
would run, one for United Russia and one for Just Russia. In this event, the
election would be more competitive than yet another run of the Team candidate
against Zhirinovskiy and Zyuganov a very tired contest indeed and one we have
seen in almost every presidential election since 1991. Secondly, the contest
would establish Just Russia as a viable party and Russia would have a species of
political pluralism.
But, if this were the plan, it did not happen and Russia had another election in
which the Team candidate, supported by the machinery of United Russia, faced off
against Zhirinovskiy and Zyuganov. And, of course, Medvedev won as he would have,
with or without the power of incumbency.
There are signs that the Team is not very enthusiastic about United Russia; as
Putin said recently "Frankly speaking, United Russia, our leading political
force, needs an influx of new ideas, proposals and people in these
circumstances". Medvedev has more than once called for more political
competition. But, as long as United Russia is the dominant party for lack of
competition, why would it ever want to be creative? All it has to do is agree and
anticipate.
So, in a way, the Team is a victim of its success. In contrast to the Yeltsin
period in which "pedestal parties" (eg Russia's Choice, Our Home Russia) were
cobbled together at the last moment and performed poorly, United Russia has been
more carefully constructed. So the Team has a reliable base of support; but that
base so dominates political discourse in Russia that the creativity necessary for
Phase II ("qualitative change") is stifled. Hectoring United Russia to be
creative won't change the reality that it is an association of apparatchiks and
would-be apparatchiks.
Could my imagined 2008 scenario play out in next year's presidential election? I
believe that it is possible. I did not see much evidence of it although the move
of Sergey Mironov from the Federation Council to the Duma is interesting, as are
his attempts to distinguish Just Russia from United Russia. If Just Russia were
to run a credible Team candidate, it would offer a route out of the political
stagnation that Medvedev and Putin complain about.
So I believe that the possibility of two Team candidates, one of them Medvedev
and the other not Putin, each supported by one of these parties is something to
watch for.
Other points
Putin's future. I have no opinion on whether Putin will stay on as Prime Minister
in the next presidential term. However, I believe that he has come to the end of
his possibilities. He was the right man for Phase I (reversing the decline) but
not so good for Phase II (qualitative change). And, as far as Russia's image in
other countries is concerned, as long as he stays in power, there will more years
of speculation that "the ex-KGB officer" is really running the show. More of this
would hinder the development of Phase II which requires a peaceful environment
and outside investment.
On when to make the declaration, I see a disagreement between Medvedev and
Putin. Putin, ever cautious, has said he would prefer to get the Duma
elections over with first; Medvedev keeps saying he will announce "soon". What
they both fear is the kratotropism of Russian officials. Should Medvedev declare,
there will be a tendency to regard Putin as yesterday's man and he will lose
traction. Even more so, should Putin declare, then Medvedev would immediately
become "nobody's man". On the other hand, it can be argued that the growing
speculation frenzy can itself paralyze action. Thus the timing requires nice
judgement and it is understandable that there could be different ideas about when
to do it.
Election turnout. Russian electoral turnout, at least in presidential elections,
is on the high side by world standards in the mid- to high-60s. US presidential
turnouts have been gently drifting down to the low 50s; Canadian federal
elections are also drifting down to the mid- to low-60s; British general election
turnouts are similar and French presidential turnouts, while higher than the
others, also show a downturn. Two common and opposed explanations are given for
low turnouts: either disgust with what is on offer or acceptance of the probable
outcome. Opinion polls, over many years and with many different polling
organisations, suggest that Russians are generally content with their leaders.
Thus it seems likely that the Russian turnout (somewhat inflated by improbable
results, especially in Chechnya) will be at least in the 60s.
[return to Contents]
#7
Moskovskiye Novosti
August 2, 2011
MEDVEDEV'S COURT
Dmitry Medvedev remains a leader without a team of his own, which makes his
chances for another term of office flimsy
Author: Olga Kryshtanovskaya (Center for the Studies of the Elites, Institute of
Sociology, Russian Academy of Sciences)
FAILURE TO INSTALL HIS OWN TEAM IN THE ECHELONS OF POWER MIGHT COST DMITRY
MEDVEDEV THE SECOND TERM OF OFFICE
President Dmitry Medvedev wields enormous powers. Or does he?
He is supposed to have the power to fire all senior state
officials. Does he really wield this power? Can Medvedev fire
Premier Vladimir Putin? He can, in theory. In practice, he cannot.
Because power is nothing unless whoever wields it has resources at
his disposal.
Presidential team
A loyal team is every president's most important resource.
Absence of a team, or lack of loyalty within it, spells serious
problems for national leaders, as we remember from the days of
Mikhail Gorbachev and Boris Yeltsin.
Formation of the team is the first thing done by whoever has
climbed to the pinnacle of political power. Did Medvedev manage it
in his three years of presidency?
Medvedev's men
Staff shuffles we have been seeing in the upper echelons of
state power since May 2008 involve personnel that might be divided
into three categories: 1. Medvedev's mates; 2. Putin's pals; and
3. neutrals, i.e. professionals. Most of the appointments made
between May 2008 and now have been of the second and third
categories. Medvedev's faction as such expanded, but only
insignificantly.
The loyalists might be counted on the fingers of one hand:
Alexander Konovalov is the Justice Minister, Konstantin Chuichenko
is Presidential Advisor, Nikolai Vinnichenko is Presidential
Plenipotentiary Representative in the Urals Federal Region. Anton
Ivanov, the head of the Supreme Court of Arbitration, was promoted
to this position in Putin's days. All four of them are Medvedev's
university mates.
As matters stand, Medvedev's mates and pals mostly occupy
secondary positions within the state hierarchy. Medvedev himself
remains a president without a team, surrounded as he is by Putin's
pals on all sides. The latter occupied practically all (95%)
positions of power by early 2011, leaving Medvedev's own men only
a handful of more or less instrumental positions.
Two Politburos
Putin arranged everything to his liking by the end of 2001.
He had the so-called Economic Politburo meeting with him every
Monday. The numerical strength of this group varied at different
times between 10 and 12 of his closest associates. Putin also had
the so-called Strategic Politburo installed and functioning. It
included the
heads of 5-7 security structures, the premier, and the
Presidential Administration director. This group met with Putin
Saturdays.
Putin met with "some Cabinet members" every Monday - economic
ministers plus Igor Sechin from the Presidential Administration
and presidential advisor (first Andrei Illarionov, then Arkady
Dvorkovich).
Medvedev changed everything. He chairs analogous meetings
once a month (twice a month during the crisis). The staff
composition of these meetings ceased to be permanent. One does not
have to be a genius to guess that the anti-crisis center was not
exactly in the president's office.
Here is one other difference. Putin paid so much attention to
economic matters in the first two years of his presidency that he
met with Premier Mikhail Kasianov twice a week. These meetings
were quite thorough, lasting up to an hour and a half.
The frequency of meetings between the president and the
premier went down with Medvedev in office. In 2009, Medvedev and
Putin met less than once a fortnight. The meetings averaged 5
(!) minutes in length, according to official data from the
president's own web site.
The situation with Medvedev's meetings with security
ministers is different. These conferences remained quite regular.
Every Friday the president meets with all permanent members of the
Russian Security Council that includes security ministers, the
premier, Presidential Administration Director Sergei Naryshkin,
and chairmen of both houses of the parliament.
All participants in these meetings, without exception, are
Putin's pals and proteges. The premier himself attends only every
second meeting.
Other echelons
With key positions in the corridors of power occupied by Putin's
men, Medvedev chose to focus on installing his pals in the
second and third echelons of state power.
Major staff shuffles were initiated within the Presidential
Administration. And yet, most of these staff changes were made
right after Medvedev's inauguration. Moreover, a good many of the
new appointees were men close to Putin (like Naryshkin). Others
had worked with Medvedev in the government.
By and large, it was not the establishment of a new team that
Medvedev set out to accomplish in May 2008. He merely promoted his
old pals from the government. Putin took his people from the
Kremlin to the government; Medvedev shifted his own from the
government to the Kremlin.
Rejuvenation
Regional elites are the only sector of the establishment
in which Medvedev introduced considerable staff changes.
Medvedev replaced 8 governors in 2008, 9 in 2009, and 17 in
2010. What counts is that he never hesitated to sack heavyweights
and old-timers, people who had ruled their respective regions for
more than two decades - Murtaza Rakhimov, Kirsan Ilyumzhinov,
Mintimer Shaimiyev, Yegor Stroyev, Nikolai Fyodorov, Victor
Ishayev, Vladimir Chub, Eduard Rossel, Alexander Filipenko, Yuri
Neyelov, and even Yuri Luzhkov in Moscow. Three of the governors
he sacked had initially come from security structures - Vladimir
Kulakov, Aleksei Lebed, and Murat Zyazikov.
Before Medvedev, governors constituted the oldest age group
of the Russian political and administrative establishment
(averaging 63 years). With Medvedev in office, this group
eventually became 15 years younger.
To make a long story short, analyzing Medvedev's staff decisions
and changes of the last three years, one might say that the
president's hands were effectively tied. Unable to encroach
on the interests of Putin's men occupying positions of power, he
had to concentrate on rejuvenating the second and third
echelons of state power and the regional elites.
Medvedev's efforts upped the number of women in the corridors
of power from 2% to 6%; reduced the number of men from
St. Petersburg in positions of power in Moscow from 21% to 12%;
reduced the number of the so called siloviki in the corridors of
power from 42.3% to 20.7%; brought in businessmen (every fourth
federal functionary in Russia these days joined civil service from
private business structures); and reduced the old Soviet
nomenclature from 40% to 18%.
Elites' new style
The elites did change with Medvedev in the Kremlin. Even
their performance became more transparent and dynamic.
These positive changes notwithstanding, Medvedev remains a
president without a team, i.e. a general without an army.
Unfortunately, that makes his chances for re-election quite flimsy.
[return to Contents]
#8
From: "Andrei Liakhov" <andrei.liakhov@integrites.com>
Subject: WILL MEDVEDEV SOON DECLARE HIS CANDIDACY AND RUN AS AN INDEPENDENT?
Date: Sun, 31 Jul 2011
Any presidential campaign is very expensive. It is a universal rule of modern
politics that the better funded candidate always wins. This is equally true of
the US, Russia, France, Lithuania, and Austria. I randomly selected countries with
different political systems and different structures of the electorate. This rule
becomes even more important where there are no major political differences
between the candidates' platforms. However, any candidate, even the best funded,
needs a well-organized and well-oiled election machine to persuade the electorate
that he is the best of the available choices. These are the basic starting points
of any election campaign.
Where one of the candidates is an incumbent head of state, the track record of
his last term in office is important, but not crucial (Bush Sr. is not a suitable
example, as the Clinton campaign was much better funded and organized, and I
cannot find a recent example of a better funded incumbent president losing his
re-election campaign). Although establishment support usually plays only a very
modest role in a developed society, CIS elections are often heavily influenced
(if not determined) by the establishment throwing its collective support behind a
chosen candidate.
Medvedev has to consider all of these factors before deciding whether to run as
an independent.
1. Funding. Before moving to the civil service, Medvedev owned quite a large
chunk of a very large and very successful forestry business (Ilim Pulp), which he
allegedly sold for (on various estimates) anything between US$350 million and
US$500 million. That is enough, of course, to secure his grandchildren's future
(if and when he has any), but hardly sufficient to win the 2012 presidential
race. Rumors (and nothing is ever confirmed, denied, or established beyond any
reasonable doubt) have it that since becoming a civil servant, and following his
accession to the very top of Russia's bureaucratic food chain, Medvedev has
acquired interests in the Russian gold industry. Irrespective of whether this is
true or not, even the most average of investment managers could have easily
doubled DAM's wealth right up to 2008. However, (a) presidential races are very
rarely funded from the candidate's own pocket; and (b) even DAM's own pocket may
not be sufficiently deep. In his years of presidency he has failed to build
(unlike VVP) the relationships that could generate the US$1.5 billion required to
secure the election;
2. The same is true of the organization required to win. None of the political
parties associates itself with DAM, and his recent chaotic firings of civil
servants certainly did not put the nomenklatura behind him. I strongly doubt that
the Right Force under Prokhorov has the organization and discipline required to
run an effective election campaign. Needless to say, it does not appeal to the
bulk of the Russian electorate, and it is strongly doubtful that DAM and
Prokhorov could turn that around before polling day;
3. Both the nomenklatura and big business dislike DAM for a variety of reasons.
His performance record was chequered even before the 2008 election (the National
Projects were a spectacular failure, as is Rosnano; the reform of the Armed
Forces is not producing any meaningful results; "High-tech Russia" remains
largely a figment of DAM's and Dvorkovich's imagination; and his U-turns on Libya
and Iran badly misfired). Thus there are no good reasons either for the support
of the establishment or for high popularity ratings.
4. On top of everything else, he remains (and for the first two years of his
presidency he was not noticed trying to get out) largely in the shadow of VVP.
His image never progressed beyond that of a "Zitz Chairman" (to use Ostap
Bender's terminology). He has failed to develop a compelling image of a strong,
determined, independent leader with his own agenda.
Dmitry Medvedev was thrown to the very top ill-prepared and well before his
political maturity. Unfortunately, he failed to learn on the job. He is
intelligent enough to understand all of this. The biggest intrigue currently is
whether his vanity will prevail over reason. This question is, I think, beyond
the comprehension of any, even the most learned and experienced, Kremlinologists.
[return to Contents]
#9
Moscow Times
August 2, 2011
Police Tests Over, Only 21 Generals Flunk Out
By Alexander Bratersky
Only 21 police generals have flunked re-evaluation tests, which wrapped up
Monday, despite earlier reports that 143 had been fired as part of an ongoing
police reform, Interior Minister Rashid Nurgaliyev said.
In total, 327 generals have cleared the tests, Nurgaliyev told Rossia-24
television.
Those who failed did so over problems with their income declarations or "matters
related to discrediting the law enforcement system," Nurgaliyev said, Interfax
reported. He did not elaborate.
Last week, Kremlin chief of staff Sergei Naryshkin reported to President Dmitry
Medvedev that 143 of the country's 340 police generals had failed the tests.
Nurgaliyev did not comment on the discrepancy.
Naryshkin, who chaired a presidential re-evaluation commission that examined
police top brass, said at the time that most were dismissed for reaching the
mandatory retirement age, not misconduct.
The Kremlin-backed reform of the police force has been ongoing since March. It
includes trimming the work force by 200,000 officers through mandatory
re-evaluation tests and introducing a new social security system for the
remaining 1 million officers, whose salaries are to triple starting next year.
In total, some 227,000 policemen have been dismissed since the start of the
reform, Medvedev said last week. Nurgaliyev said Monday that 875,000 officers
passed the re-evaluation.
The re-evaluation tests, which were not public, were graded by internal
commissions that based their decisions mainly on an officer's service record. No
clear guidelines for the tests were released, fueling accusations that the
process might be biased.
Also on Monday, Medvedev signed into law a bill on public councils at the
Interior Ministry, stepping up public oversight of the police.
Some officers who failed the re-evaluations were told that the reason was
"discrediting information," without further explanation, said Anton Tsvetkov, who
sits on the public council for the Moscow police.
"They were never even told what that meant," Tsvetkov told Russian News Service
radio Monday.
A senior State Duma deputy with United Russia said some dismissals were made "for
the sake of firings," with no reason other than to meet the Kremlin-ordered
quotas for layoffs.
At the same time, no inquiries were opened into those fired for alleged
wrongdoing, even though "they understood why they were fired," said the lawmaker,
himself a retired policeman who asked not to be identified to avoid a backlash
from party officials.
"There is a certain skepticism about the reform," he said. "The process should
have been more open to the public: Petrov was laid off because he reached a
certain age, and Savelyev was fired because he was suspected of wrongdoing."
Nurgaliyev, who has served as the country's top cop since 2004, said last month
that he was shocked by some of the things that he had learned during the
re-evaluation process, including that some officers owned property abroad or
operated businesses parallel to their police work.
[return to Contents]
#10
140 top police officers fail performance reviews
MOSCOW, August 2 (Itar-Tass) - Deputy Interior Minister Sergei Gerasimov said
more than 140 top police officers failed the re-evaluation procedure and had to
step down. The candidacies of 122 of them were never put before the Evaluation
Commission under the Russian president.
"The first priority was an emphasis on the extraordinary re-evaluation of the
top command; special attention was paid to this process," Gerasimov said.
He recalled that the Commission, led by Kremlin chief of staff Sergei Naryshkin,
included representatives of all the agencies concerned, including the Interior
Ministry, the Federal Security Service, and the Federal Fiscal Monitoring
Service, as well as public organization activists, such as composer Ilya Reznik
and Public Chamber member Anatoly Kucherena.
The Commission's members cannot be suspected of bias, as all candidates
underwent objective and thorough scrutiny.
"As a result, we carried out preliminary selection, and if the Interior Ministry
had questions or sufficient reasons not to present this or that employee, we
intentionally did not put up his candidacy (for consideration)," the deputy
interior minister said.
"In all, 143 senior police officers have been relieved of their duties. Of
those, the Commission reviewed and did not approve 21 officials. The candidacies
of 122 officials were never discussed; they were screened out by the minister's
decision and by the results of preparatory work," Gerasimov said.
According to Gerasimov, a large number of police officers opted not to take the
re-evaluation procedure.
"During the preliminary work, a large number of personnel decided to quit police
bodies. At an earlier stage, they tendered their resignations. They were mostly
personnel who were aware that they would not pass the evaluation procedure," he
said.
Gerasimov said the evaluation procedure ruled out the possibility for senior
police officials to "square accounts" with annoying subordinates. The evaluation
commission under the president was "exemplary" in deciding on human resource
issues. Its experience was used by the Interior Ministry's Central Evaluation
Commission and its regional branches.
In his opinion, "the objectivity of the re-evaluation was quite high."
"I am drawing your attention to the fact that some officials, by taking
advantage of the re-evaluation, might have attempted to square accounts with
uncompromising personnel. But the very system of arranging the re-evaluation
practically ruled out this possibility," the deputy interior minister said.
All decisions on this or that police official were made collectively. "In this
event, the role of one official, even if he has a high rank, is decreased to a
considerable extent."
A system for reporting violations during the re-evaluation was also in place.
On Monday, Interior Minister Rashid Nurgaliyev said 875,000 police officers had
passed the performance review. As for top police officers, the Central
Evaluation Commission assessed the performance of 327 generals. Twenty-one
generals who stood before the evaluation commission were not approved, due to
inaccurate income declarations or conduct discrediting police work.
[return to Contents]
#11
The Voice of Russia
August 2, 2011
Russians have fewer children than they'd like to have
By Elena Kovachich
In 2010, Germany became the country with the lowest birthrate in Europe: 8.5 per
1,000 people. The country with the highest birthrate is Ireland, with 16.5
newborns per 1,000. The average figure for Europe is 10.7.
Russia is a bit ahead of this figure, at 12.5. Experts say that Russia now has
every condition for a demographic breakthrough.
The Earth's population is constantly increasing. Very soon, there will be 7
billion people on the planet. However, this is caused, first of all, by
population growth in African countries, India, and China. In Europe, the
situation is different: experts predict that by the mid-21st century, Europe's
population will be smaller than it was in 2005. In such a situation, to stop
Europe's population from decreasing (to say nothing of increasing), each
European family would have to have as many as three children.
Anatoly Antonov, an expert in demography, says:
"In Russia, only about 6% of families have three or more children. In Europe,
the figure is somewhere between 12% and 15%. The reason is that living standards
in Europe are higher than in Russia. Still, even in Europe, this figure is
smaller than what is needed to keep the population stable."
There are many factors which influence the birth rate: living standards,
traditions, religion, and so on. However, most experts today adhere to the
so-called "gender theory" to explain why most families now have no more than one
child.
Here is what another expert in demography, Sergey Zakharov, says:
"One can hardly expect many children in a family where both the husband and the
wife have to work to make ends meet. If a woman has to make a choice of either
earning money or spending her time with children, she would think twice before
having another child. To improve this situation, governments should probably
allow women to have more flexibility in work schedules, open more nursery schools
and so on. In countries where these problems are solved better, the birth rate is
usually higher."
However, since 2006, Russia has been witnessing a constant increase in the
birthrate. Experts believe that this has been spurred by the government's
program of subsidies to women who have more than one child. With this money, a
mother can, for example, buy a house or a car. She can also spend it on her
child's education or add a large sum to her pension fund.
"Thanks to these measures of the Russian government, in 2009, the birth rate in
Russia exceeded the death rate for the first time in the last 20 years," says
Olga Antonova, Deputy Head of the Department of statistics of population and
health care.
"The latest birth rate peak in Russia was in 2010. Those children were born to
women of the 1980s generation. Now, that generation is leaving childbearing
age, and the generation born in the early 1990s is approaching reproductive
age. However, fewer people were born in our country then than in the 1980s.
Small wonder if their children are fewer in number as well."
A recent poll of young people in Russia showed some interesting results. Asked
"How many children would you like to have?", most young people answered: "Three
or more." But asked "How many children are you planning to have?", they wrote:
"Two." The explanation is simple: the financial means of many young people in
Russia do not allow them to have as many children as they would like.
So, there can be only one way out: living standards in Russia must be raised.
With that, experts say, Russia would have every chance of stabilizing the
balance between birth and death rates by around 2025.
[return to Contents]
#12
RFE/RL
July 30, 2011
New Russian Law Signals Tougher Anti-Abortion Stance, Could Spark Social Divide
By Tom Balmforth
MOSCOW -- Father Maksim Obukhov, a graying Orthodox priest, began his campaign
against abortion 18 years ago, when he would stand in the streets handing out
flyers to mostly uninterested passersby.
But in a country that has long had one of the highest abortion rates in the
world, Obukhov had little success in pushing his cause -- until now.
President Dmitry Medvedev last week signed a law requiring advertisements for
abortion services to warn patients of the health risks of terminating a
pregnancy. The legislation is widely seen as the first step in restricting
abortion in Russia, where it has long been available on demand and is legal in
the first trimester.
Sitting in the Moscow office of his For Life advocacy group, dressed in a black
cassock, Obukhov tells RFE/RL that he welcomes the new law but favors an outright
ban on advertising for abortion services.
"There is now a vicious and closed cycle whereby it is profitable for clinics to
administer as many abortions as possible. So they create advertising campaigns
which are obtrusive," Obukhov says. "This is a cause for major concern and
dissatisfaction for many citizens, and actually most of the population has backed
this new law."
The Soviet Union left behind an enduring abortion culture in Russia. It was
widely practiced because contraception was mostly unavailable and sex education
was minimal. According to the Levada Center, a public-opinion polling
organization, 61 percent of Russian women over 55 years of age have had
abortions.
Although widespread, abortion is less common than in the past because of the
availability of reliable contraception.
Legislation expected to reach the State Duma in the fall includes provisions like
scrapping free abortions at state clinics, outlawing the "morning-after pill"
without prescription, and a weeklong waiting period after applying for an
abortion.
Other proposals under consideration would compel married women to get permission
from their husbands, and minors from their parents, before undergoing abortions.
Lobbying by the resurgent Orthodox Church, which has seen its influence rise
dramatically over the past decade, only partially explains the drive to curb the
number of abortions. Russia's ongoing demographic crisis, in which birthrates
have plummeted since the collapse of the Soviet Union, is also prompting
authorities to consider restrictions on the practice.
Russia's population, currently 141 million, has long been in decline. Women, on
average, have 1.4 children each, fewer than the 2.1 that authorities say are
necessary to boost the population. The Health Ministry says nearly 1.3 million
abortions were performed in 2009.
Igor Beloborodov, head of the Demographic Research Institute, which works closely
with Obukhov's For Life group, tells RFE/RL's Tatar-Bashkir Service that even a
small reduction in abortions could reverse this trend.
"If today we managed to reduce the number of abortions even by 30 percent, then
instead of real population losses, we would be experiencing fairly perceptible
population growth," Beloborodov says.
Demographic Decline
The recent drive to restrict abortions has riled Russia's fledgling feminist
movement, which like its larger counterparts in the West sees reproductive
freedom as a cornerstone of women's rights. There are also fears that restricting
abortion could create an unregulated black market.
Natalya Bitten, a bubbly Muscovite in her forties who recently founded a group
called For Feminism, sees the advertising restriction as hypocritical and based
on "disinformation."
"In point of fact, a medical abortion carried out in a special establishment by
doctors is safe. And if these people are going to say that they have to present
all the information and list the possible consequences of the operation, then
fine, go ahead," Bitten says, "But let them also point out that childbirth is
actually a greater danger than, for instance, abortion."
Bitten also rejects the claims that restricting abortions would reverse Russia's
demographic decline.
She notes that when Poland banned abortion, it had no marked effect on
birthrates, which she says are now lower than in Russia. (The indexmundi.com
website lists Russian "crude birth rate" at 11.05 births per 1,000 people and
Poland's at 10.01.)
Women's rights activists say Russia's population decline is linked to poor social
and economic conditions, extremely low average male life expectancy, and a new
"wave" of emigration by Russia's middle class.
The potential overhaul of Russia's traditionally liberal abortion policy appears
to be consolidating the country's traditionally weak feminist movement.
Bitten, for example, says she founded her organization last year in response to
the legislative debate. She says her group has gathered more than 3,000
signatures in its petition drive against the legislation, which has also come
under fire from other quarters.
Meanwhile, the city of St. Petersburg has seen both anti-abortion and
abortion-rights rallies in recent weeks, the latter sporting slogans such as "My
Body Is My Business" (in Russian, "Moye Telo, Moye Delo").
Vulnerable Segment
One of the provisions under discussion that has women's rights activists
particularly upset stipulates that married women would have to provide clinics
with permission from their husbands, and teenagers from their parents.
Critics say restricting the right to abortion downgrades women's status in
society and makes them even more vulnerable to crimes such as domestic violence.
"In our experience, there are cases when women have been forbidden from having
abortions and have been exposed to domestic violence as a result," says Andrei
Sinelnikov, deputy director of the ANNA center, which deals with violence
against women. "If this draft law is passed, then of course it will put women in
a more vulnerable position, in which they won't have the right to be in command
of their own bodies."
Domestic violence is a major problem in Russia, thought to claim around 15,000
lives a year.
But Russia's modest women's rights movement lacks the influence in the halls of
power enjoyed by the powerful Orthodox Church, which is ultimately pushing for an
outright ban on abortion.
Church officials, for example, have been invited to join a group developing more
conservative restrictions to be discussed in the State Duma's next session in the
fall.
Obukhov says he favors an all-out ban on abortion but recognizes that "Russian
society is not ready for this kind of move."
"I, of course, support these laws because this is an operation that maims, and it
is used mainly by healthy women who don't have any pathology," Obukhov says.
"Abortions have very bad consequences, not to mention that they are bad from a
moral point of view. Society must somehow protect itself from dying out and if
possible protect unborn children and the women themselves."
[return to Contents]
#13
Moscow Times
August 1, 2011
Medvedev Has Lost His 2012 Bid
By Vladimir Frolov
Vladimir Frolov is president of LEFF Group, a government-relations and PR
company.
Earlier this year, I argued on these pages that the continued uncertainty from
the ruling tandem over which member was going to run for president was
undermining the political stability that the tandem justifiably viewed as their
key achievement and hampering long-term economic growth.
I further argued that the best option for Prime Minster Vladimir Putin and
President Dmitry Medvedev was to quickly announce that they would maintain the
tandem arrangement into Medvedev's second presidential term. It seemed at that
time a no-brainer. All other options looked bad, including Putin's return to the
Kremlin or Medvedev's ouster by a third candidate nominated by Putin.
Theoretically, this option is still on the table as long as no tandem member has
announced his candidacy. Many in the West still hope and pray that this is what
will happen in December. But it won't.
The window of opportunity for this closed in early May, the day Putin announced
the creation of his All-Russia People's Front. It was a clear sign that he was
laying the political groundwork to justify and ensure his return to the Kremlin
in 2012.
Medvedev's liberal advisers have arrogantly sought to frame his second term and
his program for modernization as a repudiation of Putin and his system of
"managed democracy," labeling Putin's so-called stability as "stagnation." This
raised the specter of a Mikhail Gorbachev-style unraveling of the country with
Medvedev's Kremlin losing control as it pushed for faster political
liberalization during his second term despite insufficient public support.
One of Medvedev's mistakes was not to distance himself from radical proposals
from his advisers, particularly from the Institute for Contemporary Development
think tank. These include dismantling Russia's security services and adopting a
subservient pro-Western foreign policy. Medvedev's own public statements
beginning in May have also indicated his willingness to push for deep political
changes during his second term, including significant easing of registration
procedures for political parties and opening the door to return direct popular
elections for governors. In foreign policy, some of Medvedev's actions, such as
Russia's mediation efforts in Libya and discussions with German Chancellor
Angela Merkel on the future of Transdnestr, have also raised eyebrows.
Medvedev's advisers have cast him as a Boris Yeltsin-style destroyer of Putin's
system, while in fact all he needed to do was run as a Chinese Communist
Party-style incremental modernizer. By the time Medvedev realized that, as he
made clear during his May news conference, it was already too late. Putin could
no longer trust Medvedev to continue his cause.
It is a sign of despair in Medvedev's camp that some of his advisers are now
calling upon him to openly challenge Putin and declare his presidential candidacy
at the Yaroslavl Global Policy Forum in September.
The strategy is to pre-empt Putin and force him into a position where he has to
either endorse Medvedev as his own choice for president or repudiate his protege.
Medvedev would be wise to ignore this self-serving advice to become another
Yeltsin or former Ukrainian President Viktor Yushchenko. Instead, he should focus
on finding the right political role to continue his modernization agenda in
another capacity. This might help him return to the main political stage, perhaps
as a contender in the 2018 or 2024 presidential race.
[return to Contents]
#14
Yurgens Dismisses Russian Popular Front as PR Move
MOSCOW. July 30 (Interfax) - Igor Yurgens, head of Russia's Institute of
Contemporary Development (INSOR), in a radio program on Saturday, expressed
skepticism about the Russian Popular Front, an emergent broad public association
based on an initiative by Prime Minister Vladimir Putin.
"I think it's a public relations move before the elections. After that we simply
won't see, hear or know what it was they were trying to come up with," Yurgens
told the Ekho Moskvy (Echo of Moscow) radio.
Moreover, he argued that the Front is a potential obstacle to many necessary
economic reforms.
"If Putin stays within the Russian Popular Front, all economic programs that
intelligent people are writing will be smashed against the cliffs of this front.
Because the trade unions, the women's organizations and the agrarians will say no
- no to amendments to the Labor Code, no on all other important issues. This
would be quite difficult to fight," Yurgens said.
In talking about next year's presidential election, Yurgens said: "If some bright
independent figure came forward, such as (well-known anti-corruption activist
blogger Alexei) Navalny, I'd be glad, this would impart dynamism to this entire
procedure. It would be a very correct step in that direction."
Navalny is "a man with perfectly structured views and is absolutely ready to try
himself out," Yurgens said.
Such a candidate's possible defeat in the election would nevertheless be a good
start, Yurgens argued. "That's how warming up always works."
"However, if the ruling tandem puts forward a new candidate for president, say
(Moscow Mayor Sergei) Sobyanin, (First Deputy Prime Minister Igor) Shuvalov or
(Russian Railways chief) Yakunin, it will be a political figure that is no
novelty for the population. If they together think up such a candidate, support
for this decision will be no different in extent to the support for the tandem,"
Yurgens said.
[return to Contents]
#15
Yurgens/Gontmakher Forecast of 'Major Crisis' if Medvedev Is Not Candidate Eyed
Svobodnaya Pressa
July 27, 2011
Report by Andrey Polunin, under the rubric "Politics: Our Authors," with
interview of Nikolay Vladimirovich Petrov, lead expert of the Carnegie Moscow
Center, and comment by Yevgeniy Minchenko, director of the International
Institute of Political Expert Studies: Putin Will Modernize Russia. After 2012 we
will be living worse economically but more happily politically.
Igor Yurgens and Yevgeniy Gontmakher, the leaders of the Institute of
Contemporary Development (INSOR), predict a disaster for Russia. The present
"stability" has become a synonym not even of stagnation but of deterioration
throughout all areas of life, they believe. And if President Dmitriy Medvedev
does not announce his candidacy in the 2012 election, it will mean a very grave
major crisis for the country. Vedomosti is publishing the appeal of the INSOR
leaders.
To believe Mr Yurgens and Mr Gontmakher, the crisis scenario looks like this.
Russian stock markets will collapse, the exodus of capital and the intellectual
elite abroad will increase drastically, and actions by dissatisfied people that
may actually be extremist in character will begin right in the country. The final
undermining of the social sphere will occur because of the collapse of the
already weak economy. Medicine and education will become completely fee-based,
pensions will be reduced, and the political regime will get tougher and become
completely marginalized -- as in Belarus.
Moreover, as Yurgens and Gontmakher emphasize, it is not even necessary for
Vladimir Putin to return to the Kremlin for that to happen -- "It will suffice to
nominate some third candidate who will inevitably come out of the premier's ranks
if Dmitriy Medvedev resigns."
In order to save Russia, the leaders of INSOR implore Dmitriy Medvedev to give an
answer very soon to the "question that everyone has come to hate" about the
candidate for Russian Federation president and push him to "set his mind to cross
the Rubicon" and enter into an "equal and impartial" dialogue with society.
"Very relevant here are decentralization of the state and ensuring real media
freedom (including establishing Public Television), a cardinal liberalization of
the laws on party development and noncommercial organizations, and much more"
that will appear in this dialogue, the INSOR leaders recommend.

Nikolay Petrov, lead expert of the Carnegie Moscow Center, gives his opinion on
what is behind this appeal.
(Polunin) Nikolay Vladimirovich, are we in reality facing a choice: stagnation
and deterioration (Putin) or modernization (Medvedev)?
(Petrov) We indeed are facing two political courses, and they are personified. It
is a different matter that Medvedev is neither the initiator nor the commander in
chief of the modernization course. He is its symbol that the not very large
forces that are truly interested in meaningful modernization are gathering
around.
Yes, Dmitriy Anatolyevich Medvedev is not very decisive, not a very good
politician, and not very well schooled, and he has few political forces. But the
problem is not that the course of modernization -- which is important and serious
-- is not personified by the strongest politician. The problem is that the choice
that Gontmakher and Yurgens are writing about was made long ago. The course that
we see now and that the INSOR leaders are criticizing (justifiably in part) will
continue.
(Polunin) Then why were there the conversations about modernization?
(Petrov) Two or three years ago when people started talking about modernization,
there was a serious crisis. The political elite and the business elite were
afraid that the current model of the economy would be unable to keep them fed for
another 10 years and were seeking alternatives. If the situation with oil prices
had been different and oil had been cheaper, perhaps the modernization course
would have been supported by the big players. But that did not happen.
What do we have now? Perhaps privately Medvedev is an adherent of modernization,
at the very least it was specifically that course that allowed him to position
himself as an independent player. But any president -- Medvedev or Putin -- will
implement not what seems good to him but what the balance of political and
business elite forces is inclined toward. This balance is now clearly inclined
toward preserving the old course.
(Polunin) But why?
(Petrov) Because of the idea of leaving well enough alone. Because without a dire
need and without a threat to life and prosperity, the political elites will never
undertake large-scale modernization.
The radicalism of Yurgens and Gontmakher's statements is associated with the idea
that they made the choice long ago and unequivocally. They are the ones who for
the most part formulated the ideas that are associated with Medvedev, although
often they were not articulated by the president himself. Yurgens and Gontmakher
are much more striking supporters and spokesmen of the modernization course. The
situation is clear to them: perhaps the chance that Medvedev will remain
president is extremely small, but everything possible must be done in this
situation.
(Polunin) Okay. The elite select the political course. But is there a chance that
it will change after 2012?
(Petrov) The course will change in a serious way after the presidential election.
We can expect social expenditures to be cut back and populist policy that has
been followed all these past few years to be revised. In my view they resorted to
populism because the prospect of an early presidential election held sway over
the government. So we were living every year as if it were an election year --
the government was afraid of lowering its popularity and raised pensions even at
the height of the crisis. This course physically cannot continue.
It seems to me that Yurgens and Gontmakher are dissembling when they say that the
stagnation of the course that Putin symbolizes is a social disaster. Any
president after 2012 will revise the extent of the state's social obligations and
reduce them. This process, by the way, will trigger the mechanism of political
changes that will push the government to reactive modernization.
(Polunin) So then there will be modernization?
(Petrov) I think that reactive modernization is inevitable -- regardless of who
is president. It is not the modernization where Medvedev, like the good tsar,
will come down from above with wonderful developments. It is the forced reaction
of a political system that is encountering new problems and that must become more
complex in order to survive.
In this sense Putin, as the more likely next president, is good if only
because he controls a larger power resource and can realize much more serious
steps than Medvedev can.
(Polunin) Might the very fact that Medvedev will leave in 2012 produce a reaction
in the West? The fall of Russian stock markets, for example?
(Petrov) That is Putin's problem and the problem of the program that he comes
with. Alternatives are possible here. I do not think that Putin will come back
with the words "All right, that's it, we'll live in the old way now!" He needs to
raise his legitimacy in order to implement a painful course after 2012. So like
it or not, he must come out with some message. One of the possible options is a
statement that Medvedev was proclaiming the correct things. That they were not
implemented for various reasons but received support inside the country and
abroad. And that now he, Putin, is really ready to move in the direction that
Dmitriy Medvedev wanted to but could not lead the country toward.
I think that the fears that Yurgens and Gontmakher share are understandable and
have grounds. But it is specifically for that reason that the government will
respond to these points. And hence, the predicted disaster will not occur.
(Polunin) So the political system will become more liberal all the same?
(Petrov) I think so. After the 2012 election, the (actual) government will
encounter greater social activism and protest sentiments. There are two ways out
of that: to tighten the screws (there are neither the resources nor the desire
of the political elites for that), or to restore political competition and
elements of federalism.
I in fact believe that Putin blocked all the changes (and the political elites
are calling for them) only because he constructed the system where all power is
in his hands but he is outside the formal center of power. At the same time, any
political changes can do damage above all to Putin himself: they affect the
actual rather than the formal leader. From that comes my explanation of why
specifically Putin should be the next president. Political modernization is
necessary and inevitable, and the main condition for conducting it is combining
the posts of the formal and real leader of Russia and abandoning the model of the
tandem. Another Opinion Yevgeniy Minchenko, director of the International
Institute of Political Expert Studies:
(Minchenko) In my view the leaders of INSOR underestimate the skepticism that
exists regarding Medvedev among the Western elites. The overtures that were made
to him at the start of his presidency did not prove to be really justified in the
eyes of the West. For the Americans themselves, Medvedev proved to be
insufficiently pro-American and weighed down by imperial recidivism.
Besides, I do not think that in the West they attach much significance to the
theoretical differences between Medvedev and Putin. More likely there is a nuance
associated with the succession of power: as though it is not very respectable if
Putin returns. Although that is also a ruse: de Gaulle came to power and then
left and then came back once again -- and no disaster occurred.
Actually I believe that from the standpoint of observing the Western rules of the
game, simultaneously nominating Putin and Medvedev as candidates in the
presidential election would be the most elegant move. They would close up the
entire electoral field, they would compete, and the ruling elite would remain as
before no matter which of them won. At the same time, all the formal signs of the
democratic process would be observed. I think that is a perfectly possible
scenario. (end of Minchenko's part)

"He very much wants to return to the Kremlin."
It is curious that the INSOR leaders' appeal coincided in time with the
publications in foreign mass media of the statements of certain Russian
"politicians and diplomats" regarding which of the ruling tandem would run in the
2012 election. The Internet publication newsru.com with a reference to the
Reuters Agency quotes these people's words.
"I think that Putin will be nominated, and he probably has already decided
everything for himself," one of the top officials said. According to him, Putin
is worried that Medvedev, whom he has known for more than 20 years, does not have
strong support among the political and business elite and ordinary electorate.
And this support is simply essential in order to preserve stability in the event
that the reform plans are carried out. A much larger number of people support
Putin than support Medvedev. Medvedev has in fact overestimated his "weight" within
the system.
Another high-ranking representative confirmed: "Putin wants to return, he really
wants to return." According to his information, the premier was upset by
Medvedev's attempts to demonstrate his political independence and become firmly
established in power. But both communicate quite well on a regular basis, he
added.
A third source also confirmed that Putin is close to making the decision to
return to the Kremlin. At the same time, the source attempted to dispel the fears
that his return would mean a new period of disastrous stagnation. He proposed
that after becoming president, Putin certainly might appoint a reformist premier.
What Yurgens and Gontmakher Write
"... in fact one of the parties of the 'tandem' is conducting vigorous political
agitation for continuing the course of stability, which in our concrete
conditions has become a synonym not even for stagnation (we went through that
stage in the pre-crisis 2002) but rather for obvious deterioration in all areas
of Russian life. This is the origin of the idea of forming the All-Russia
People's Front (the analogy with the German 'Democratic' Republic, may it rest in
peace, thrusts itself upon us) with social promises not backed up by any economic
foundation being given out right and left."
"But what about the other side, the president? We see attempts to move the
situation from deterioration toward progress in the fight against corruption, in
improving the business climate, and in shaping effective foreign policy. But
there is still no decisive turning point. The impression is created that even the
most elementary actions by Dmitriy Medvedev on the path to modernization are not
simply talked to death but are directly sabotaged and even repudiated by
counteractions."
"... what will happen if Dmitriy Medvedev, because of some factors unknown to
society, refuses to run for the presidency in 2012? It can be assumed with
confidence that the very fact that the current president refuses to continue his
functions would cause a major crisis in the country. The well-known Mechel case
would seem minor in comparison with the fall of Russian stock markets. And we
will also add a sharp step-up in the processes of capital outflow and emigration
from Russia, which are underway in any case. The sense of fairness, which has
long been trampled by inexcusable corruption and the state's contemptuous
attitude toward its own population, can be transformed into any, even the most
extreme acts on the Manezh (Square) model. The collapse of the already weak
economy will undermine the material base of existence of the social sphere once
and for all. The already-started processes of paid services squeezing out free
services offered in education and public health will become widespread. There
will also have to be strict limits on spending for pension support. In this
situation, to preserve the status quo, the authorities will have to toughen the
political regime in the style of our partners in the Union State. That is the
price of preserving the policy of maintaining 'stability.' It is not even
necessary for Vladimir Putin to directly return to the office of president for
this kind of economic, social, and political disaster to occur. It will suffice
to nominate some third candidate who will inevitably come out of the premier's
ranks if Dmitriy Medvedev resigns."
"... might someone else be found for the role of Russian reformer (besides
Medvedev -- Svobodnaya Pressa)? Unfortunately, our political system is built in
such a way that we face a choice not between leaders with different modernization
programs, but instead just two rigidly personified courses: 'stabilization' as a
synonym for stagnation, deterioration, and the inevitable national disaster; and
modernization as a very risky but still not hopeless project."
"The danger of an economic collapse... may make an ally out of big business,
which has been keeping quiet until the right time. Decisive steps to reduce
administrative pressure on medium-sized and small business... will draw the
sympathies of this social stratum. There is one more reserve -- the most advanced
universities and research centers, where our intellectual elite and the best part
of our youth who are systemically concerned about the situation in the country
are concentrated."
"One small thing holds us back. Dmitriy Medvedev must set his mind and cross his
personal Rubicon, turning directly to society with a call to undertake together
the difficult job of pulling the country out of the swamp that we all have fallen
into."
[return to Contents]
#16
Moscow Times
August 2, 2011
Putin Will Need a Long Shower After the Vote
By Nikolai Petrov
Nikolai Petrov is a scholar in residence at the Carnegie Moscow Center.
Although we are only at the early stages of December's State Duma elections,
there are already three signals that give us a clear idea of how the vote will
turn out.
The first signal is that Prime Minister Vladimir Putin is throwing himself
directly into this election campaign as he has never done before. For the past
year, he has regularly conducted United Russia congresses in the federal
districts and made deals with the regional political elite under this formula:
loyalty in exchange for lucrative projects. Putin has been actively building up
his All-Russia People's Front and turning himself into the leader of a corporate
state.
The second signal is that Putin has retained the most odious election officials
in his power vertical, those associated with numerous scandals and violations.
Most prominent among them are Vladimir Churov, head of the Central Elections
Commission since 2007, and Valentin Gorbunov, who has headed the Moscow city
election committee since 1994. Gorbunov was responsible for the scandalous
results of the 2009 elections that prompted representatives of every minority
Duma party to protest electoral falsifications.
It is thus no surprise that in response to the question of what he would do the
morning after the presidential elections, Putin recently said he would wash the
dirt off of himself both in the hygienic and political meanings of the word.
Indeed, there will be plenty of dirt in the upcoming Duma and presidential
elections.
The third signal is that President Dmitry Medvedev recently met with
representatives of regional election commissions, but the only public statements
to come from those talks referred to continuing improvements to the electoral
system.
At least one reason for the meeting was to deliver a report concerning numerous
violations committed during legislative elections in March for the Tambov region
that gave United Russia a clearly padded 65 percent of the vote. That report was
prepared by the former head of the Tambov election committee. The lack of a
public response by the president suggests that he endorses such a conclusion.
Thus, there is no doubt that the authorities are preparing for a massive
falsification of the Duma elections in December. Or more accurately, they are
preparing results in advance that would give at least 60 percent of the vote to
Putin's front, the only way it can possibly achieve a significant victory.
A final confirmation that such a scenario is in the works will come when
international election observers from the Organization for Security and
Cooperation in Europe are denied entry to Russia on some trumped-up technicality
or by false Kremlin claims that the observers have a Western bias.
But there are at least two positive aspects to this rather dour picture. First,
the wide dissemination on the Internet of the Tambov report suggests that
millions of Internet users will also learn of a repeat of these electoral
violations in the December vote. Second, far more members of the political elite
are likely to oppose falsified election results this year. Thus, we can expect a
spate of public scandals resulting from voter fraud and increased pressure on the
government to rectify the situation.
[return to Contents]
#17
Jamestown Foundation Eurasia Daily Monitor
August 1, 2011
The Prospect of Putin's Return Comes Into Focus
By Pavel K. Baev
As it happens all too often in Russian rumor-ridden politics, news that is taken
seriously comes from abroad, and the Reuters analysis on Prime Minister Vladimir
Putin's newly-crystallized intention to return to the Kremlin made a stronger
impression than most half-informed speculations (Nezavisimaya Gazeta, July 28;, July 27). His heavy-handed involvement in building the so-called
"People's Front" around the United Russia party has been demonstratively
unilateral, and his disappointment in President Dmitry Medvedev's performance is
all too clear, but Reuters' sources named a reason that has really driven the
point home: that is what he really wants (, July 28). The intrigue
created by ambiguous statements of the two co-rulers about their joint
decision-making at the right moment informed by the best interests of the country
has been abruptly terminated, and Putin's third, six-year presidential term
has become a pre-determined fact of life.
One immediate response was given by Igor Yurgens and Yevgeny Gontmakher, the
leaders of Institute of Contemporary Development (INSOR), who argued that
Medvedev could not deny responsibility for implementing his modernization program
and had to cross his personal Rubicon (Vedomosti, July 27; Ekho Moskvy, July 30).
They assert that Putin's move back to the presidential office would result in a
fast deterioration of the economic situation instead of promised stability and
that will require severe repression against rising discontent, which amounts to a
national catastrophe. There is a distinctly desperate tone to this analysis, but
Gleb Pavlovsky, a well-known Kremlin court insider, argues in a dispassionate
manner that the "tandem" has become dysfunctional but maintains uncertainty about
the elections in order to camouflage the lack of a common political platform
(Vedomosti, July 29). The problem is not that the two men cannot agree on optimal
aims and goals but that the ownership of power is the only goal, and nobody is
prepared to give up his share of this property unless forced to.
Medvedev has little control over the financial flows generated by this ownership,
so Putin does not really perceive him as a contender, merely a talking head that
has developed some undue pretensions (Novaya Gazeta, July 21). His plan for the
new presidency quite probably includes some reforms and he presents Pyotr
Stolypin, a conservative reformer who managed Russia's modernization at the start
of the twentieth century, as his role model (Ezhednevny Zhurnal, July 28). He
probably does not understand how limited his options really are by the sum total
of his commitments to "special friends" and by the well-informed mistrust in his
motives in the active part of the society, which is conveniently hidden by the
carefully censored opinion polls (Moskovsky Komsomolets, July 29).
As for Medvedev, he cuts a thoroughly unconvincing figure to pin any hopes on, so
there is not much response to the INSOR campaign for rallying support. The strong
and sustained outpouring of capital, which Pavlovsky calls the "price of
uncertainty," undermines the modernization vision, which requires a leap of
investment in innovations, while for Putin's stability, the flight of money and
people means merely a drain of the pool of discontent. Medvedev tries to keep his
show on the road and generate positive impressions; addressing a meeting of
judges he praised their role in improving the investment climate and criticized
the government for sabotaging his initiatives in modernizing the juridical system
(Kommersant, July 27; Moskovsky Komsomolets, July 28).
It is exactly Medvedev's helplessness in getting the courts into a semblance of
order that disqualifies him most in the eyes of potential supporters, and last
week brought yet more evidence of that when the parole plea of Platon Lebedev,
the closest associate of Mikhail Khodorkovsky, was turned down. The court
proceedings were so blatantly rigged that even Dmitry Muratov, the editor of
Novaya Gazeta, who had been inclined to give Medvedev the benefit of the doubt,
says that he has lost all hope (Ekho Moskvy, July 28; Novaya Gazeta, July 29).
Another high-profile case is the investigation of the imprisonment and death of
business lawyer Sergei Magnitsky, which goes nowhere despite Medvedev's promise
to get to the bottom of it (Moskovskiye Novosti, July 29). The US State Department
instruction to deny visas to all officials connected to this shameful case irked
Russian professional "patriots" who demand an "adequate" response
(, July 30; Kommersant, July 28).
The controllable and corrupt law enforcement system is indeed one of the core
elements of Putin's system of power, hence the angst about the external pressure
for acting on Medvedev's discourse on the independence of the courts and respect
for law. Many of Putin's minions have good reasons to worry about finding
themselves on the next "not welcome" list, which would mean no access to the
"safe havens" carefully prepared in the West (Novaya Gazeta, July 29). They also
find it difficult to use the walled castles built in various natural paradises
around Russia as smart bloggers reveal their existence to the disgruntled general
population (, July 29). The distance between this passive
discontent and angry protests may turn out to be far shorter than the ruling
kleptocracy assumes; one symptom of the widespread disappointment in the existing
order is the strongly expressed desire to reinstate the "Against all" option on
the electoral ballot (, July 28).
[return to Contents]
#18
Moscow Times
August 2, 2011
All in the Family
Any discussion of who controls Russia typically focuses on the ruling tandem and
the differences between the two leaders' public statements and political
positions.
Often, that discussion widens to include officials who were Putin's friends or
colleagues during his St. Petersburg days, members of the siloviki and others who
came to prominence under St. Petersburg Mayor Anatoly Sobchak. Sociologist Olga
Kryshtanovskaya, director of the Institute of Applied Politics, has written
extensively about how most senior officials working under Prime Minister Vladimir
Putin and President Dmitry Medvedev have come from the Federal Security Service,
armed forces, law enforcement and other siloviki structures.
But this discussion does not fully answer the question of how the elite are
perceived by Russians and how stable the social order in the country is. We know
the attitude of society toward the president, prime minister and political
parties, but we know little about what the people think about the ruling elite.
In the 1990s, a crisis of legitimacy applied exclusively to the business elite
who had accumulated enormous wealth by privatizing former Soviet government
assets. The word "oligarch" became a pejorative term to denote the nouveau riche
who most Russians felt had illegally seized enormous wealth and acquired
unwarranted political influence.
Nevertheless, even those who sincerely hated President Boris Yeltsin, Prime
Minister Yegor Gaidar and Anatoly Chubais, who headed the privatization program,
never questioned their political legitimacy. This is because Yeltsin came to
power in a landslide victory in the 1991 election and won a hard-fought victory
over Communist leader Gennady Zyuganov in the 1996 election. In 2000, Putin
triumphantly won the last relatively free election on a wave of popular
enthusiasm and the expectation that he would bring order and rapid economic
growth to the country.
But in the 2000s, the authorities seriously undermined their legitimacy. Putin's
run for re-election as president in 2004 was a showcase of political manipulation
and abuse of Kremlin administrative resources. During the 2007 State Duma
elections, single-mandate districts were eliminated and the authorities refused
to register numerous opposition parties. Elected governors were replaced by a
system in which Moscow appointed its own loyal servants. In addition, mayors of
most large cities were effectively replaced by city managers, who are appointed
by the Kremlin and are accountable to Moscow, not the people.
Putin has built a power vertical for himself that is staffed with servile
bureaucrats. But in the eyes of the people, that vertical consists of
irresponsible and corrupt officials. The people view their relationship with
government officials as adversarial "us versus them." The people have no say in
the way the state is run.
Moreover, members of Russia's ruling elite are attempting to pass on their power
to their children to create a self-perpetuating "bureaucratic aristocracy." For
example, Sergei Matviyenko, 38, the son of St. Petersburg Governor Valentina
Matviyenko (a Kremlin favorite who is slated to become the next speaker of the
Federation Council) has achieved mind-boggling success in banking during his
mother's reign. Deputy Prime Minister Sergei Ivanov's son Sergei Jr., 31, is the
chairman of Sogaz Insurance, which is closely affiliated with Gazprom. Ivanov's
other son, Alexander, 34, is a top executive at Vneshekonombank. Andrei Murov,
41, son of Federal Guard Service director Yevgeny Murov, heads Pulkovo Airport.
Denis Bortnikov, 37, son of Federal Security Service director Alexander
Bortnikov, heads the VTB North-West bank. Dmitry Gryzlov, son of Duma Speaker
Boris Gryzlov, works for the Dar foundation that has close ties to businesses run
by Putin's friends. Anastasia Misharina, daughter of Sverdlovsk Governor
Alexander Misharin, runs a large real estate and timber business in the region.
There are thousands of these types of examples. Having amassed significant
fortunes under the patronage of their influential and powerful parents, don't be
surprised if within five or 10 years many of these privileged sons and daughters
enter politics and occupy key positions at the pinnacles of federal and regional
authority, all in an effort to continue their "family businesses."
But by seizing power and wealth through murky deals that are not subject to
public scrutiny, the business and political elite alienate themselves further
from the people.
This is risky business. They may think they can hide from the people forever in
their luxurious villas with hundreds of bodyguards and an entire army on their
side. But, then again, so did former Egyptian leader Hosni Mubarak and former
Tunisian President Zine El Abidine Ben Ali and their cronies, to name only two
recent examples of the thousands that history has to offer.
[return to Contents]
#19
Moscow Times
August 1, 2011
New State Body Stirs Crackdown Fears
By Natalya Krainova
Stirring new fears of an election season crackdown on the political opposition,
the Kremlin announced Friday that President Dmitry Medvedev has created an
intergovernmental commission tasked with "coordinating the activities" of federal
and regional agencies in fighting extremism.
The commission will draft proposals for the president and the government,
including legal initiatives, aimed at "forming state policies" to fight
extremism, the Kremlin said on its web site. It also will help draft
international treaties on fighting extremism and "work out measures" to "improve"
efforts by federal and regional authorities and nongovernmental organizations to
counter extremism.
It will make yearly reports to the president on extremist activities in Russia
and provide "organizational guidance" to permanent working groups dealing with
the "harmonization" of interethnic relations in the regions.
It will be in the commission's powers to control the fulfillment of its orders by
federal and regional authorities; request and receive information from
authorities at any level and from nongovernmental organizations; and invite
experts in government agencies and public activists to take part in its work.
Medvedev appointed Interior Minister Rashid Nurgaliyev as the commission head and
Federal Security Service chief Alexander Bortnikov as deputy head, according to
the decree posted on the Kremlin's web site.
The commission comprises 16 officials, including the ministers of defense,
culture, education and science, regional development, mass media, and sports and
justice; the heads of the Foreign Intelligence Service, the Investigative
Committee, the Federal Migration Service and the Federal Customs Service; and
three other officials.
Alexander Verkhovsky, director of the Sova Center, which tracks extremism and
xenophobia, said the commission's work would be useful if it dealt with serious
problems like terrorism, not "with the wide spectrum" of notions included in the
government's definition of extremism.
"It's no secret that authorities often search for extremism where there isn't any
but it's convenient to find," Verkhovsky said by telephone, referring to
extremist charges often brought against the political opposition and human rights
activists.
A 2002 federal law gives a broad definition of extremism, which, among other
things, includes inciting any kind of hatred; making "patently false" public
accusations against officials; obstructing the work of authorities through
violence or the threat of violence; and "committing a crime out of revenge for
the illegal actions of other people."
Vladimir Mukomel, top anti-extremism expert at the Institute of Sociology at the
Russian Academy of Sciences, said coordinating the work of government agencies
was important.
"But there are serious concerns that authorities will try to suppress any kind of
protest sentiment in society, which is growing, under the guise of fighting
extremism," Mukomel said by telephone.
Voters will elect a new State Duma in December and cast ballots in a presidential
election in March.
Alexei Mukhin, an analyst with the Center for Political Information, a think
tank, questioned the need for the new commission, noting that fighting extremism
is already the mandate of the National Anti-Terrorism Committee, the Interior
Ministry and the FSB.
"It was probably to add a bonus to the ministers' salaries, which are not small
as it is," Mukhin said.
Medvedev declared his intention to create the commission at a meeting of the
State Council in late December, after some 5,500 nationalists and football fans,
shouting racial slurs, clashed with police on Manezh Square in central Moscow at
a protest of the killing of fan Yegor Sviridov during a brawl between fans and
Caucasus natives.
In July, Prime Minister Vladimir Putin told a meeting with religious leaders that
a structure similar to the much-criticized Soviet National Affairs Ministry would
be created "in the near future."
Medvedev spoke against the creation of a new ministry in December.
[return to Contents]
#20
Moscow News
August 1, 2011
LiveJournal users fear election crackdown
By Olga Khrustaleva
The biggest-ever hack attack on LiveJournal, the world's biggest blogging
network and home to prominent opposition voices, has prompted bloggers to fear a
new wave of shut-offs closer to the elections.
Last week, from Monday to Friday, a massive series of DDoS attacks, believed to
emanate from computers in Latin America, hit LiveJournal's Qwest and Verizon
servers, targeting the network's most prominent anti-government critics, including
anti-corruption blogger Alexei Navalny.
The bloggers are hitting back, however, accusing authorities of wanting to
quieten opposition in the run-up to the elections but insisting the clampdown
would be unsuccessful.
"I suppose that the attacks will continue, because their purpose is to prevent
the dissemination of information about corruption, the party of crooks and
thieves [United Russia], blue buckets... Sagra, Ramzan Kadyrov's whims and so
on," Navalny wrote on his LiveJournal page. "Nothing can be done about it. We
will continue to expose the crooks, and they will hinder exposures."
Similar attacks on LiveJournal took place in March and were then called the worst
in its history. Last week's attacks, however, were even worse. Hackers shut
LiveJournal down for a whole week, starting July 25.
While SUP specialists were doing their best to restore service, Internet users
were actively discussing possible reasons for the attacks. One of the most
popular theories was that the attacks were a response to some radical opposition
posts by Navalny and other bloggers.
"I don't see any commercial motive in the attacks. I can't think of anyone who
would benefit from LiveJournal's work being interrupted from a business point of
view. It seems there should be some other motive," Ivan Zassoursky,
editor-in-chief of Chastny Korrespondent, told The Moscow News. "I think it's a
pre-election [thing]."
Zassoursky said the aim of the attacks was not to destroy the network
permanently, but to show that it was not reliable as an instrument. "There are
hardly any independent media in Russia," he said. "LiveJournal is unique because
it unites independent journalists [this means] they can be shut off altogether."
Alexander Arkhangelsky, a political commentator for RIA Novosti, drew parallels
with the situation in Egypt, when the Internet was shut down for the entire
country.
LiveJournal was attacked "not because of something, but for some reason,"
Arkhangelsky wrote in his blog for RIA Novosti. "In case of turmoil like in Egypt
or Tunisia, the weapon of virtual destruction can be used quickly and
consistently without shutting down the whole Internet."
Arkhangelsky added that it would not prevent prominent bloggers such as Navalny
getting their message out, due to their high profiles, but "someone wants to
destroy the opportunity of [new] Navalnys ripening in the garrulous and glib
Internet community."
However, some prominent computer entrepreneurs disagreed with the conspiracy
theory about the LiveJournal attacks. Yevgeny Kaspersky and Alexei Exler said
the reasons for the failure of LiveJournal might be technical rather than
political.
"In the end the patient is rather DeadJournal than LiveJournal," Kaspersky wrote
on his standalone page, which he is now using instead of a LiveJournal account.
"It looks like the problems are clinical. And to solve them not only the
technical part should be upgraded, but also the part where the fish starts to
rot," in an apparent reference to the "head" of the forum SUP, the company that
runs LiveJournal.
Meanwhile, many bloggers are developing new ways to express and share their
views. Some, following Kaspersky's example, are starting their own websites,
while others are creating Facebook and Google+ accounts.
"Until I had a strong reason to believe it, I didn't have to worry about any
alternative [places to blog]," well-known blogger Anton Nosik wrote. "When the
heating or sewage system is being repaired in your apartment you can move out for
a couple of days, but there is no serious need to think about a new permanent
home, if the current one suits you. Unfortunately, the situation changes
radically when you realize that your house is to be demolished and the bulldozers
are on their way."
[return to Contents]
#21
Russia Profile
August 1, 2011
Virtual Nationalism
Recent Cases Indicate that Social Networks Are a Catalyst for Spreading
Nationalism in Russia
By Pavel Koshkin
Though historically Russia has been a multinational country, bringing together
more than 100 ethnicities, the spread of nationalism via the Internet is posing a
growing threat to the integrity of Russian society. Nationalist rallies like
the Manezh Square riots last December and other activities organized by the
neo-Nazi National-Socialist Organization (NSO) North have been coordinated
primarily through the Internet, including social networks VKontakte and Twitter,
recent research conducted by the Levada Center found.
While experts from the Moscow-based Sova think-tank on xenophobia see the role of
social networks as limited to a means for quick communication between
nationalists, their counterparts from the Higher School of Economics believe that
Internet propaganda could also serve to polarize a wider, undecided audience.
The nationalist protests last December at Manezh Square that united around 4,000
teenagers, college students and football fans, were a wake-up call for Russian
society about growing nationalist sentiment among young Russians. Though the
riots were precipitated by the murder of a young Spartak football club fan during a
brawl split along racial lines, calls for the gathering were widely coordinated
through Internet blogs and forums. On Twitter, for instance, a post was widely
disseminated to "put a person at every subway station and track down Caucasians"
who were rumored to be uniting to strike back. Likewise, Lev Molotkov, the leader
of the NSO-North Nazi branch and an IT specialist from Sergiyev Posad, frequently
used the Internet to organize meetings and popularize his radical ideas.
"Planning collective events on the Internet is becoming routine for young
Russians, and this makes them significantly different from the older
generations," wrote Denis Volkov, the Levada researcher, in his report. "Young
people aged 18 to 24 are the most active Internet-users. Most of them spend
several hours per day using social networking."
Currently, there are more than 35 pro-nationalist organizations in Russia,
ranging from the moderate People's Union to the radical North Brotherhood, to the
outlawed Slavic Union. Most of them engage in Internet propaganda through their
own Web sites. Alexander Verkhovsky, the director at the Sova Center for
monitoring extremism in Russia, argues that the figure doesn't account for the
entirety of nationalist activity in Russia. "It's difficult to say exactly how
many nationalistic groups operate in Russia because of the huge number of small
underground and secret organizations," he said.
VKontakte boasts more than 1,000 Russian nationalist groups, which can attract
anywhere from 3,000 to 110,000 members. When low standards of living
exacerbate already tense relations with national minorities in Russia, social
networks may encourage an undecided audience to sign up for a nationalist group
and identify with their ideology, said Valeria Kasamara, the head of the
Laboratory for Political Research at the Higher School of Economics. "The
Internet is a good tool to manipulate undecided young people," she said.
Unlike Kasamara, Verkhovsky argued that "social networks don't play a significant
role in the growth of nationalism," because participation in a group can't
significantly change the mindset of a person and his core values. "It's simply a
good communication tool to coordinate [the group's] activity," he said. "In
reality, levels of nationalism have been rather stable for the last decade."
This opinion is partly reflected in research gathered from a January 2011 Levada
poll, according to which 47 percent of Russians believe that the last decade
hasn't seen a significant increase of nationalist sentiment; rather, people have
just started talking more about the country's nationalism problem. However, 39
percent of those polled said that the increase in nationalist sentiment among
Russians has been a common and alarming trend over the last ten years. National
prejudices (11 percent of respondents), aggressive behavior by national
minorities (37 percent), low standards of living in Russia (25 percent), and the
Russian authorities' interest in fueling nationalist rhetoric (four percent) are
among the major reasons for the growth of nationalism, according to the poll.
At the same time, participation in nationalistic groups on social networks
doesn't necessarily mean that nationalists will attack immigrants from the
Caucasus or other national minorities, Verkhovsky noted, and most Russians who
hold a nationalistic ideology are hardly likely to assault people. Their actions
will not go beyond kitchen-table talks, he said. "Radicalism is a personal
characteristic which can be developed only in a certain environment among circles
of radical nationalists, who are in the active minority," Verkhovsky said.
Today it's not taboo to identify oneself as a nationalist, because the term is
widely conflated with patriotism, which removes some of the stigma. For
example, one of the biggest
nationalistic groups in the social network VKontakte "I'm Russian" brings
together more than 110,000 people. The participants of this group identify
themselves as active patriots who defend the idea of Russia's national and
cultural integrity and seek to protect it from undesirable foreign interference.
Kasamara stressed that concepts of patriotism should be separated from
nationalism. "Nationalism is a sign of an authoritarian regime which does not
indicate sincere patriotism and true love for Russia," she said. "Nationalists
can't stand constructive criticism because they are blindly obsessed with the
idea of Russia's supremacy and power, which helps them to assert themselves and
bolster their egos by chanting 'Russia is for Russians,' or beating
representatives of national minorities. It's not real patriotism. Real patriotism
means the critical analysis of a problem, serious attempts to resolve it and to
reform the country."
The lack of good tolerance programs is another reason behind the growth in
nationalist sentiments among young Russians. "Tolerance lessons in schools are
not effective because they are too abstract and don't target certain audiences,"
Verkhovsky said. If high-school students have problems with their counterparts
from national minorities, they should discuss these concrete cases in class to
nip the problem in the bud, instead of discussing abstractions like humanity and
human goodwill, he said.
[return to Contents]
#22
Moscow News
August 2, 2011
Magnitsky case reopened in a bid to clear dead lawyer's name
By Andy Potts
Supporters of Sergei Magnitsky are hopeful that they can clear his name
following the reopening of the case against him.
The lawyer, who died in custody in Nov. 2009, was accused of embezzlement but the
case was dropped following his death.
However, following a recent court ruling over the Lukoil crash, a precedent has
been established that enables relatives of dead people held responsible for
crimes or accidents to continue to try to clear the names of their loved ones.
But while the reopening of the case is good news for Magnitsky's supporters,
there was anger over the Russian Interior Ministry's dismissal of a report from
the Presidential Council for Human Rights.
Earlier the council had concluded that senior Interior Ministry staff were
implicated in the Magnitsky scandal.
Welcome surprise
Dmitry Kharitonov, the lawyer representing Magnitsky's family, said the reopening
of the case was unexpected.
"This did not come from our initiative," he told Kommersant. "But in any case it
is positive news. We hope that a complete investigation will lead to a full
rehabilitation of Sergei."
However, Kharitonov warned that returning to the case was no guarantee of
clearing Magnitsky's name.
"It is difficult to imagine that now [the authorities] will suddenly recognize
that they kept an innocent man in jail for a year."
Opportunity
Igor Trunov, who is representing the family of one of the victims of the Lukoil
crash, explained how the ruling in his case had changed the rules for other
contentious legal affairs in Russia.
"Now the defense has an opportunity to present new evidence, ask for independent
examinations and review the admissibility of evidence presented after the initial
termination," he told Kommersant.
"Moreover, if we do not agree with the results of a new investigation we can
apply to bring the case back to the courts."
Reports dismissed
However, it appears that there will be no legal action against senior Interior
Ministry officials despite a damning report from the Presidential Human Rights
Council.
Investigator Boris Kibis said that the ministry regarded that report as
"inadmissible" and said officials had done nothing wrong.
He concluded that there could be no criminal case launched against them.
Magnitsky's employer, Hermitage Capital, added that the head of the Interior
Ministry's Central Federal District, Pavel Lapshov, had written to the investment
fund's lawyers.
"No data has been found indicating any violations of human rights, access to
justice or restrictions on lawyers," Lapshov wrote, according to a press release
from Hermitage.
A representative of the company accused Russia of whitewashing the case and
called for "concerted global action" to bring about justice.
"Even with the entire world watching, the Russian authorities simply ignore the
obvious criminal conduct of officials in their own government and have no
interest in obtaining justice for the young life that was cruelly taken," a
statement read.
[return to Contents]
#23
Impact of Magnitskiy Case, Three Scenarios It Could Follow
Moskovskiye Novosti
July 29, 2011
Commentary by Yelena Panfilova, head of Transparency International Russia and
chairwoman of the Russian President's Council for the Development of a Civil
Society and Human Rights: "The Magnitskiy Case -- Three Scenarios"
The attempts of other states to push our authorities into some kind of action, as
is happening now in the Magnitskiy case, usually produce exactly the opposite
result. Everyone starts sulking, getting angry, and moving the debate to the
sphere of politics, not law.
It is a different matter if the voice of international business, investors, and
business associations rings out more vigorously. It exerts a positive influence
much more frequently. Because in our government too there are people with common
sense who understand that to butt horns diplomatically with other governments may
be entertaining and sometimes even fun, but to scare off investors is simply
stupid.
Why is everyone so angry?
We need to acknowledge that some progress has been made in the case of Sergey
Magnitskiy.
A year ago we were told that really nothing terrible had happened, that Sergey
Magnitskiy himself died in the Butyrki SIZO (investigative detention center),
nobody killed him, he was there on fully substantiated grounds, and in general it
was incomprehensible why everyone was so angry and was giving such attention to
the case.
It came to the point where representatives of the MVD (Ministry of Internal
Affairs) Investigations Committee appeared at press conferences and gave their
assurance that the charge of stealing 5.4 billion rubles from the state treasury,
which Sergey Magnitskiy brought out, should be addressed to Magnitskiy himself.
And that in general there were no facts indicating that anyone from law
enforcement could be mixed up in this. For the medical part of the investigation
too the finding was that the doctors acted entirely within instructions and none
of them was at fault in any way.
Losing the Whole Picture
Although the investigation did get moving, however, the main
obstacle was a lack of desire to look at the totality of events linked to Sergey
Magnitskiy's life and death as a single whole. That which we call the "Magnitskiy
case" is served up as an assortment of distinct stories that are not
interrelated: separately about prison medicine, separately about keeping in
custody, separately about the investigation related to Magnitskiy and the
Hermitage Capital Fund, and separately about the theft of budget capital that
Sergey reported.
As before, the problem of "conflict of interests" slips past the attention of the
investigation. How could it happen that the people enlisted to investigate the
Magnitskiy case were the same people that, in Magnitskiy's opinion, were involved
in re-registering the companies that belonged to the Hermitage Fund and in the
subsequent misappropriation of taxes paid by these companies?
In principle, there is no normal law enforcement system where such a thing could
happen. If there is evidence against associates of some particular organ and it
was given before a case was opened against the person who gave the evidence, in
no way can these associates be enlisted to investigate the case against that
person.
What is preventing an investigative check of all Magnitskiy's reports? Or those
complaints of crimes that Sergey's boss at the firm Firestone Duncan bombarded
the prosecutor's office and the investigations committee with? What seems to be
the problem? Go and check. Where does this stubborn lack of desire to look at the
problem as a whole come from?
We Will Not Surrender "Our Own People"
And what is preventing them from giving us an explanation of the concrete role of
those people whose names figure in the materials of the expert examination done
by the Russian President's Council for the Development of a Civil Society, and in
the many documents? For example, why did the investigator make precisely those
decisions, which seem to many legal experts to go beyond the framework of legal
norms? Why were the particular documents and items of proof received at trial
without examination?
It would be necessary to answer all of these questions directly. But at this
point no one wants to explain anything to us "with feeling, plainly, slowly and
clearly." Instead of that we see how some of the people on the Magnitskiy list
are successfully going through re-certification. Apparently the Soviet
departmental principle is operating here: we do not surrender "our own people,"
"our people are not at fault for anything." It is unimportant here whether they
are big or small, whether they divided spoils with anyone, or what heights their
connections reach.
Faith in Justice
From the standpoint of the political losses our government is bearing in the eyes
of Russian citizens and the international community, it would have been possible
to "surrender our own people." Because too much depends on how and with what the
Magnitskiy case ends.
In the first place, it has become a kind of symbol in the search for an answer to
the question: is supremacy of the law possible at all in contemporary Russia? The
most diverse branches of our law enforcement system turned out to be mixed up in
the case: the police, the courts, the investigation, and the prosecutor's office.
Furthermore, it cannot be forgotten that Sergey was trying to call attention to
violation of our own laws. After all, budget money had been stolen. That is to
say, willingly or not he became a fighter against economic fraud and corruption.
I do not think that he really wanted this. Most likely he just believed in
justice and decided to follow it to the end.
And here we see what they have shown us all -- look at what happens to those who
fight for justice. Does this mean that in Russia supremacy of the law ends
exactly where big financial interests begin?
In the second place, it is perfectly obvious that there is a media-societal
consensus in relation to such a high profile case. Everyone wants to know the
truth. We can even say that a societal mandate exists for the truth. And how the
government behaves in this situation will show us to what extent it is willing to
take account of society's demands and to what extent it is only concerned about
its own interests.
In the third place, the Magnitskiy case is directly relevant to the question of
Russia's investment attractiveness. And its conclusion will be a clear indicator
of the degree to which our state is capable of protecting investments,
entrepreneurs, and simply people who work in business. Protect them against just
the kind of happenings organized by representatives of the various organs of
government.
Three Scenarios
At this point I see three possible scenarios by which the investigation of the
Magnitskiy case may develop.
The first scenario. The Investigations Committee of the Russian Federation, which
is handling the case, amazes us all this autumn with long-awaited and excellent
results. We get answers to all our questions. They explain to us what role all
the people they named played in the fate of Sergey Magnitskiy. They explain to us
how they simultaneously were accused by him of complicity in the theft of 5.4
billion rubles and conducted an investigation of him. They explain to us why the
leadership of these people did everything they could to ignore this unconditional
conflict of interest. Of course, this is the ideal scenario.
The second scenario. They present us with a couple of "pawns" from the lower part
of the list and say that they are to blame for everything. They are punished to
the full extent of the law -- or maybe just half -- and supposedly justice has
triumphed. They may even tell us where the 5.4 billion rubles went.
The third scenario. They tell us that there was nothing but an amazing series of
coincidences. The people just happened to coincide in time and space. But nobody
did anything bad. Or perhaps it may occur that the statute of limitations runs
out (this autumn it will be two years since Magnitskiy died) and they will tell
us that it is too late and nothing can be done.
It Is Possible
In any case, I think that one of these scenarios will be carried out before
winter. And the second scenario is, of course, the most likely. But then we will
have to fight hard to tip the scales toward the first scenario.
And there really is hope for this. Because more and more people understand the
essence of the problem. After all, we are not fighting just to understand why
Sergey Magnitskiy died. Not only so his relatives can finally find out what
happened to him and draw at least some consolation from that.
As the case progresses it is becoming clear that the scheme for unlawful recovery
of taxes was not a one-time thing, but was used with enviable regularity.
Therefore the further challenge is to see that our right as citizens of the
Russian Federation to know what really goes on behind closed doors triumphs.
As experience shows, if we fight for almost two years these doors open a little.
In other words, this is an extremely slow process that is accompanied by both
passive sabotage and active counteraction by the other side. But it is possible.
We simply need to work.
[return to Contents]
#24
Russian rouble can become regional reserve currency - Putin
LAKE SELIGER, Tver region, August 1 (Itar-Tass) - The Russian rouble can become a
regional reserve currency, Prime Minister Vladimir Putin said.
"Other reserve currencies should appear in the world and the rouble can become a
regional reserve currency. That's quite possible," Putin said at a meeting with
the participants in the Seliger-2011 youth forum on Monday, August 1.
"We all understand present-day realities and what it will be like depends not on
a piece of paper but on the quality of the economy," he said.
Speaking of the advantages of the rouble, Putin said, "The rouble is quite a
stable, reliable and freely convertible currency, unlike the Chinese yuan."
"We did not restrict capital export even during the 2009 crisis. Yes, we lost a
part of our gold and currency reserves, but reputation is more important," he
said.
Some of the settlements with other countries, including Belarus, are already made
in roubles. "We made 90 percent of settlements with Belarus by money transfer,
and 60-65 percent in cash," the prime minister said.
In his opinion, the regional role of the Russian rouble would increase as the
Common Economic Space becomes effective from next year.
"We are creating the Customs Union and the Common Economic Space, and the rouble
will fight for its niche in a dignified way," Putin said.
First Deputy Prime Minister Igor Shuvalov earlier confirmed Russia's intention to
make the rouble a regional reserve currency in the CIS in the years to come.
"Speaking of the potential reality in the near future, it's the CIS states. If
this happens in the CIS, it will be possible to use the rouble as a reserve
currency in such countries as China, India and Arab countries. We have all
possibilities for that," he said.
In his opinion, the introduction of the rouble as a regional currency in the CIS
will be the first step towards its status as a reserve currency.
He believes that the rouble will be used even more within the Eurasian Economic
Community's fund.
Russian presidential aide Arkady Dvorkovich said the Russian rouble would acquire
the status of a reserve currency gradually and there is no set schedule for that
process.
"There is no schedule for making the rouble a reserve currency. We may have a
long-term strategy, and we actually have it," he said.
He expressed confidence that "the Russian policy will lead to the strengthening
and stabilisation of the rouble" but this process will "go gradually".
"It is impossible to turn the rouble into a reserve currency overnight. It may
become a regional reserve currency first and then a full-fledged international
one," he added.
"It would be reasonable to have reserves in other currencies in order to avoid
some risks," Dvorkovich said, adding that the dwindling role of the U.S. dollar
"is not an artificial tendency".
"Diversification of reserve currencies may lead to greater stability of the world
financial system," he said.
"We do not know yet what the consequences of the crisis will be for reserve
currencies: There are many risks, and everyone is looking at what is happening in
the United States: With the outbreak of the crisis many expected a dramatic fall
of the U.S. dollar down to two U.S. dollars for one euro. In fact, volatility is
high and the crisis increases it. This in turn increases risks for all market
players that use the U.S. dollar for payments," the aide said.
However Vice Prime Minister and Finance Minister Alexei Kudrin believes it is
hardly probable that new reserve systems will replace the U.S. dollar in the
world in the foreseeable future.
Theoretically, only the yuan could become a new world reserve currency. This may
take about ten years, provided the Chinese leaders display political will for
that, he said.
"I do not think new big currency alliances will emerge in the near future. I
think that if China liberalises its economy and wishes to ensure the
convertibility of the yuan, it will be the shortest way. I believe this could
take ten years, but after that the yuan will be in demand, and this is the
shortest way to the creation of a new world reserve currency," Kudrin said.
[return to Contents]
#25
Russia's industries to reach pre-crisis level in 2013-2014 - minister
MOSCOW, August 2 (Itar-Tass) - Most of Russia's industries will reach the
pre-recession level in 2013-2014, Russian Minister of Industry and Trade Viktor
Khristenko said in his feature published in the Tuesday issue of the RG-Business,
a supplement of the Rossiyskaya Gazeta daily.
"It is vitally important for us to maintain steady financing of target
programs and to begin their implementation in due time," the feature reads.
According to Khristenko, the Ministry of Industry and Trade is currently
integrating all types of expenses under the state programs. The process, in his
words, will be over in 2012. "The Ministry of Industry and Trade is forming five
such programs: aviation, ship-building, pharmaceuticals, electronics, and a
large-scale state program for the development of backward industries," he said.
The program for the development of the aircraft-building sector aims to have
Russia ranked by 2015 among the world's top three in the sector: among the
global leaders in the civil segment, among the world's top three helicopter
makers, and among the world's top five in the engine-building market.
It is also planned to bring the share of Russian manufacturers on the
pharmaceutical market to 37 percent by 2015. "And by 2020, Russian manufacturers
will account for at least 50 percent of the Russian market of medicines,"
Khristenko said.
He also pointed out that the development strategy in the automotive industry proved
to be successful. In 2010, Russia's motor car market grew by 29.9 percent,
practically reaching the figure of 1.76 million cars. Concurrently, Russian car
output almost doubled (1.2 million). "Impressive growth rates are reported this
year as well. We are actually returning to the volumes of the pre-recession 2007,
although with an utterly new market structure and domestic production," he said.
In his words, the ministry predicts that a total of 2.7 million cars will be sold
in 2011. "According to our estimates, by 2014 we will be Europe's number one
market, having exceeded the three million level," he stressed.
[return to Contents]
#26
Russia Profile
August 1, 2011
Poor Russia
The Number of People Who See Themselves as Poor Is Growing in Russia
By Svetlana Kononova
The Public Opinion Foundation (FOM), a Russian NGO that conducts sociological
research, has found that 45 percent of Russians feel poor. Only one percent of
respondents in FOM's poll consider themselves to be rich, while more than a third
of respondents believe that from 11 to 30 percent of people in Russia are rich,
and one in four claims to have rich acquaintances in their social circle.
The definition of poverty in Russia remains controversial. The official minimum
wage is 7,411 rubles ($265) per month in Moscow, and 6,473 rubles ($230) per
month in the rest of the country. About 22 million people in Russia (15 percent
of the population) earn less than the minimum wage. State statistics officially
count these people as "poor."
But the minimum wage amounts to hardly anything in real life. For example,
utility payments for a small, one-bedroom apartment start from 2,500 rubles
($90), and renting a flat costs from 7,000 rubles ($250) in small towns and from
28,000 rubles ($1,000) per month in Moscow. A universal transportation pass for
one month in Moscow costs 2,380 rubles ($85). Even if the hypothetical
minimum-wage earner doesn't rent a flat, what's left after housing and
transportation expenses is not enough to live on.
According to the State Statistics Committee, only 11 percent of the country's
population earns more than 35,000 rubles ($1,250) per month. The average monthly
income in Russia is 18,500 rubles ($660). More than half of all Russians make
less than this, but are they "poor"? By comparison, in the United States and most
Western European countries, the lower middle class starts at a monthly income of
$2,000 to $2,500 per person.
Another criterion that some researchers use to define poverty is the share of
income spent on food. People who spend more than a half of their income on food
are deemed poor. In Russia, this group is estimated at 50 to 60 percent of the
population, according to various surveys. About a third of the country's
population can afford only food; about a half can afford food, clothes, and
cheap household appliances; and the rest can also afford expensive goods, such
as cars. "The number of
people who think of themselves as poor has increased since 2008. This probably
reflects the consequences of the economic crisis, when many incomes dropped and
living standards took a downward turn," said Ekaterina Sedykh, the director of
the "Dominants" project at FOM.
According to the survey, most people who see themselves as poor have a low level
of education, live in villages and small towns, are retired or will retire soon.
Respondents who describe their incomes as average are young or middle-aged, live
in Moscow and other big cities, and are active Internet users. "This group
includes the so-called 'people of the 21st century' who lead a modern lifestyle,
traveling abroad, using bank cards, making purchases via the Internet, and
investing money in education. These people can have a high standard of living
even if they are not rich. For example, they can organize a trip abroad with
minimal expenses, because they are familiar with traveling in general: they know
how to book cheap tickets through the Internet and where to stay," Sedykh
explained.
Respondents who see themselves as poor or with an "average" income gave a wide
range of ways to become rich. The poor believe that the secret to financial
success is "knowing the right people," "shiftiness, the ability to beguile" and
"the availability of initial capital to start up a business." Respondents with
average incomes prioritize "good education and high [professional]
qualifications," "knowing the right people" and "hard work." "Such people believe
that it is absolutely essential to make a conscious effort to improve their
financial situation and standard of living," Sedykh concluded.
"Poor and middle-income respondents understand 'knowing the right people' in
different ways. The first group believes that the 'right connections' should be
used for problem solving. But for the second group, it is a stimulus for personal
development. Personal and professional contacts with successful people generate
new ideas and new prospects and promote further self-development. Respondents who
have already climbed out of poverty value and build long-term, mutually
advantageous relationships with other people," Sedykh said.
Elena Kovalenko, a social policy expert at the Institute for Urban Economics,
thus described Russian poverty: "The level of poverty in the country in general
has been slashed in half during the period of economic growth mostly due to
growing minimum and average wages and a high employment level. Paradoxically, the
current programs meant to support vulnerable social groups don't have any
measurable influence on the fight against poverty," she said.
People who worked in the "gray area" of the economy and received "gray salaries
in envelopes" suffered the heaviest blow from the economic crisis: their incomes
dropped sharply. But they aren't the only ones at risk of sliding into poverty.
"It's not retired people who are at the highest risk of poverty in Russia, as is
often believed, but households with children. In 2008 to 2009, it was mostly
families with three or more children that accounted for the growing numbers of
the poor. Families with babies aged one to two years are also at risk," Kovalenko
said. Thus it is not surprising that the average Russian family has one or two
children. Sociological research shows that the majority of young Russians do not
plan to have children in the next two to three years, and the number of those who
do not want children at all is also growing.
In Russia, the "Gini coefficient," a statistic that measures income and wealth
inequality, is about 40, the same level as in some Arab countries that have
recently experienced revolutions.
[return to Contents]
#27
Izvestia
August 2, 2011
Academics propose progressive income tax
[summarized by RIA Novosti]
Economists from the Russian Academy of Sciences have submitted a 95-page report
to Prime Minister Vladimir Putin in which they encourage the government to save
less and spend more while oil prices are high, and to reintroduce a progressive
income tax system.
Putin met with the economists on July 11 and suggested they contribute to the
government's Strategy 2020. Less than three weeks later, Director of the
Institute of Economics Ruslan Grinberg, his first deputy Alexander Rubinstein and
Andrei Gorodetsky, section head in the Academy's "modern economics and innovative
development institutes" academic council submitted a report that argues for
continued state intervention in the Russian economy. The economists described
Russia as having moved away from the "desired socio-economic standards of
Euro-Atlantic nations" to more of a third-world status with alarmingly polarized
personal incomes.
The economists' main conclusion is that surplus revenue from high oil prices
needs to be spent. The adjusted mid-2011 budget forecasts a surplus of nearly 3
trillion rubles, but only 700 billion rubles is currently allocated to spending
on economic development and other government needs.
The economists noted that the government is clearly working to overcome the
budget deficit, accumulate reserves and cut inflation, but warned that there will
soon be a budget surplus, and that saving money is therefore counterproductive.
They propose allocating 40% of this surplus revenue (given oil prices higher than
$85 per barrel) to boosting the state coffers and putting 60% toward new
industrialization.
Their report suggests other ways of raising revenue, including tax increases and
reverting back to a progressive income tax system. The economists argue that this
will not only greatly supplement the budget, but also create a favorable public
image: the tax system will be perceived as broadly fair, since the rich will pay
more.
They also propose making individuals pay their own pension contributions
(currently the employers pay). Nevertheless, the economists decided that raising
the retirement age would be counterproductive for economic, demographic and
political reasons.
Another key recommendation was to plan the federal budget for five years, instead
of three. The economists advised binding planned changes in economic institutions
to the "political cycle" of 10 years, i.e. two presidential terms.
The Russian Academy of Sciences economists are not the only ones working on
proposals for Strategy 2020. Since early 2011, a team of experts led by Yaroslav
Kuzminov and Vladimir Mau has been working on the same task.
"We are in tough competition with Mau and Kuzminov," Ruslan Grinberg said. "The
government has been listening to them and them alone, for the last two decades.
If our voices are heard and the Strategy 2020 team adopts any of our proposals, I
would consider it my personal victory."
Vladimir Mau was very terse in his response.
"We do not deal in competition," he said. "We are engaged in science."
[return to Contents]
#28
Moscow News
August 1, 2011
Elite living in Moscow
Why Moscow, a city of both billionaires and dilapidated housing, is so expensive,
is often a mystery to newcomers. In a 3-part series we look at the real cost of
living here
By Oleg Nikishenkov
Ever wondered why Moscow hotels are so expensive when public transport is so
cheap? Or why a Starbucks coffee in Moscow costs more than the same coffee in New
York? This three-part series examines the driving factors behind the cost of
goods and services in elite, mid-range and budget price ranges to identify the
factors which generate such a wide spectrum of prices in the city.
To the newly arrived tourist or the uninitiated expat, Moscow can be cripplingly
expensive. Order a coffee without glancing at the menu, and you risk forking out
up to $8, according to this year's Mercer survey, which ranked the Russian
capital the fourth most expensive city in the world for expat life.
Walking around the capital, with its often-shoddy Soviet-era buildings, beggars
and stray dogs, visitors can be hard pushed to work out why prices should be so
much higher than in London or New York.
Disposable income
The high number of ultra-rich goes some way to explaining the phenomenon. With 79
at the last count, Moscow has more billionaires than any other city in the world,
according to Forbes. In most countries, prices in the high-end segment are fairly
inelastic, since the extremely well-off can always be counted on to spend above
the going rate.
Disposable incomes are higher in Russia than in the EU countries and the United
States due to a combination of high salaries in the top-end segment and low
taxes. With a flat tax rate of 13 percent, top-end salaries for business
executives such as investment bankers are now returning to pre-crisis levels,
according to Forbes magazine. And the big shots take home a lot more cash than
their counterparts in New York or London, which makes them less concerned about
blowing as much as $50 on a couple of beers in a downtown cafe.
Prestige factor
Added to this is a novelty value slapped on to certain goods and services. Forbes
recorded that hotel stays in Moscow are on average 40 percent more expensive than
in London. Tariffs for rooms in the city's most expensive hotel, The Ritz
Carlton, start at 32,000 rubles ($1,150) and soar as high as 125,000 rubles
($4,500), but the hotel has still recorded 95 percent occupancy rates for every
month this year, according to a survey conducted by Business New Europe.
The reason is that people stay in the hotel precisely because of its high price
tag, to treat themselves and show off. This stems in part from a desire to live
to excess when times are good, after years of shortages under the Soviet Union.
"In Moscow certain goods tend to sell better the more expensive they are as they
are considered more prestigious," said Ekaterina Andreyanova, a retail analyst at
Rye, Man and Gor Securities.
A similar trend can be seen in the luxury goods sector. Alina Demidova, the
founder of Elite Club, which manages assets for wealthy private Russians, said
that prices for luxury goods in some segments are on average 50 percent higher than in
Western countries. A Jones Lang LaSalle study conducted in July found Moscow's
Stoleshnikov Pereulok to be the third most expensive shopping street in Europe
after London's New Bond Street and Paris's Avenue Montaigne.
The prestige factor is particularly noticeable in imported wine sales. A bottle
of French or Italian wine that may sell for around $7 a bottle in European stores
can cost as much as $100 in a high-end Russian supermarket. Demidova, of Elite
Club, says that the mark-up is in part a product of high customs duties, which
exceed 30 percent on some items. However, it can also be attributed to the fact
that Russia does not have the same tradition of wine drinking that Europe and
the United States have, so people are more likely to buy a bottle for its
prestigious label than for its taste and quality.
"A crate of Chateau Petrus wine, which costs $2,000 in London, can sell for
$10,000 in Moscow," Demidova said.
Expat life
For expats, the main focus of the Mercer survey, high prices tend to stem from a
desire to replicate the lives they were used to living in their home countries.
Products considered everyday in Western Europe, such as lasagna sheets or German
wheat beer, fall under the luxury segment in Moscow due to their scarcity and
import costs.
Furthermore, highly-paid expat workers tend to be more carefree than their
colleagues back home due to bumped-up salaries as compensation for living abroad
and the low tax rate. This has created a market niche for letting agents
and service providers offering marked-up expat services, often with a focus on
those with no knowledge of Russian, who therefore have little other choice.
The price of luxury
Minimum price of a premium food basket (week's shopping for two people) 5,000
rubles ($180)
Cup of coffee $8
Three-course meal (without alcohol) $150
Gin&Tonic $5.50
Beer $20
One night accommodation $1,000-$4,500
Two hours in a VIP limousine $110
Haircut $1,000
Shoes $200-$300 (Prada)
Rent $5,000 per month (4-bedroom elite apartment near Moscow State University)
[return to Contents]
#29
Moscow Times
August 1, 2011
Yukos Bankruptcy 5 Years On
By Tim Osborne
Tim Osborne is director of GML Ltd.
The Yukos Oil Company was forced into bankruptcy by the Moscow Arbitration Court
five years ago on Monday. Its assets were seized by the state, and its top
managers imprisoned or chased from the country. Its legacy of progressive
corporate governance and transparency was decimated in favor of shadowy state
control.
Nobody knows for sure why the Russian government destroyed its most successful
post-Soviet company. It is certain, though, that the Yukos affair was a clear
marker of Russia's economic torpor and a signal to domestic entrepreneurs and
foreign investors alike that their assets are simply there for the taking.
The biggest victims of the destruction of Yukos, however, are not former CEO
Mikhail Khodorkovsky or his business partner Platon Lebedev, who have nonetheless
endured an ordeal that would test anyone. Rather it is the Russian people on
whose behalf the government supposedly acted who have lost the most.
Although Russian stocks trade at around a 30 percent discount to other emerging
markets and Russia's economy lagged the other BRICs' GDP growth by nearly 5.5
percent in 2010, according to the International Monetary Fund, the fallout from
the Yukos affair cannot be measured in financial terms alone. The longer-term and
far more detrimental effect is that there is now an assumption of political
interference, corruption and the arbitrary use of state powers in civil disputes.
In June, the IMF confirmed that strengthening property rights and the rule of law
together with reform of the judiciary and civil service are critical issues for
Russia's economic development. The fund said Russia's poor business climate
discourages investment, which, combined with political uncertainty, contributes
heavily to net capital outflows. Reform or recession was the IMF's underlying
message.
Even almost a decade after Khodorkovsky's arrest, the specter of the Yukos affair
still haunts investors' decision making. In Jochen Wermuth and Nikita Suslov's
June 16 comment in The Moscow Times titled "20 Ways to Improve Russia's
Investment Climate," a chief investment officer of one of the world's largest
pension funds perhaps put it best: "The government stole assets from Yukos and
Shell. You complain, you get expelled, like the BP manager. If you push too hard,
you may even get killed in London, Vienna, Dubai or in pretrial detention. Now
tell me why should I invest my clients' money in Russia."
With aging infrastructure in dire need of modernization, you might reasonably
expect the government to bend over backward to tempt investors back and offer
them, at the least, a level playing field. Eventual Russian entry to the World
Trade Organization will undoubtedly help, but the decision to withdraw from the
Energy Charter Treaty was a big step backward, removing protection mechanisms for
investments made after the date of withdrawal.
As the former majority shareholder of Yukos, we are critically aware of the need
for that protection. Without recourse to binding international arbitration, we
would be nowhere. Instead, an independent tribunal sitting in The Hague is
currently hearing our arguments in the largest-ever commercial arbitration.
Despite the Kremlin's protests, the tribunal confirmed that Russia was fully
bound by the Energy Charter Treaty until its formal withdrawal in October 2009.
President Dmitry Medvedev's subsequent proposals to replace the energy treaty
included clauses that would actually sanction discriminatory treatment against
foreign investors. This measure will hardly encourage those same investors to
part with the $2 trillion the Energy Ministry says is required to modernize the
energy sector, increase production and improve supply.
But access to funding is not the most problematic factor for doing business in
Russia. In its 2010-11 Global Competitiveness Report, the World Economic Forum
reported that corruption was overwhelmingly identified by global businesses as
the single most problematic factor for doing business in Russia.
The tragic case of lawyer Sergei Magnitsky shows how devastating that can be.
Magnitsky, after uncovering a massive fraud allegedly perpetrated by corrupt
officials in the Interior Ministry, was arrested on falsified charges and then
refused medical treatment while in pretrial detention unless he testified against
his client. He died in prison. Only now, three years afterward and following
pressure from Western governments and international human rights organizations,
has the Russian government initiated an investigation, although no one has been
arrested yet.
No one should be under any illusions: Corruption was a problem before Yukos was
destroyed. Some even speculate that Khodorkovsky was targeted because he was too
vocal in highlighting corruption in state-owned companies. It has become much
clearer since the beginning of the Yukos affair that corruption in Russia is now
so endemic that it is simply a fact of life.
As the Russian government once again prepares to embark on a major state
privatization program, foreign investors must clearly make their own calculations
about whether the potential success of their Russian ventures outweighs the
risks. For us, that calculation is simple. We have lost far too much already.
[return to Contents]
#30
Putin Praises U.S. For Responsible Decision to Raise Debt Ceiling
LAKE SELIGER, Tver region. Aug 1 (Interfax) - Although the U.S. economy is living
like a parasite on its dollar monopoly, the U.S. has made a responsible decision
to raise the sovereign-debt ceiling and avoid a default, said Russian Prime
Minister Vladimir Putin.
"Actually, in general there is nothing good about it; it simply postponed more
systemic decisions," he said at the Seliger 2011 forum, commenting on the
compromise reached between Republicans and Democrats to increase the U.S. debt
ceiling.
This goes to show that "this country (the U.S.) lives on credit, it means that
it does not live within its means and is laying part of the burden of its
problems on the entire global economy, parasitizing on the global economy and
its dollar monopoly," he said.
"But they had enough common sense and responsibility to make a balanced
decision," Putin said.
A possible U.S. default would not bring anything good for the global economy, he
said. "The modern economy is globalized, and all countries depend on each other
in one way or another, with the U.S. economy being one of the locomotives of the
global economy, and if there is a systemic failure there, then it is not good,"
Putin said.
"And the point is not even that some countries, including Russia and China, have
a substantial part of dollars in gold reserves, the point is that a systemic
failure in the entire economy is possible," he said.
Some U.S. experts, though, would like to see a default, he said. "The U.S. is
interested in this and in dollar devaluation so as to create better export
conditions," he said, adding that as a result Americans would have been able to
beat Chinese imports and European competitors.
[return to Contents]
#31
Russia's Putin Critical Of NATO Strategy In Libya
Interfax
Lake Seliger (Tver Region), 1 August: Russian Prime Minister Vladimir Putin can
see no prospects of a military settlement of the Libyan conflict.
"It would be very good if the countries that have accumulated large stocks of
weapons used them as a restraining element instead of resorting to them wherever
they please, because the use of force, of military force, does not lead to the
final settlement; what one needs is political processes," he said in reply to a
question from a female participant in the forum on (Lake) Seliger.
The prime minister also admitted that he did not quite understand how to treat
the NATO statement to the effect that it was willing to go all the way in the
Libyan conflict. "They have indeed declared that they will go the whole hog to
victory, but it is not very clear because the UN mandate does not give one the
right to wage war on anyone, to press for victory over someone; it gives one the
right to protect civilians against air strikes launched by one of the sides," he
explained.
"It is unclear who they are going to fight till final victory," Putin said.
There is also another side to the issue, he said. "The situation in Iraq remains
effectively unsettled, and it is even worse in Afghanistan - an entire wedding
party, over 100 people, were killed there by just one air strike last year," the
prime minister said.
"I believe it is not yet clear what a war till final victory in Libya is, and
judging by what they are getting in other countries where similar operations are
being carried out, the endgame turns out to be pretty sluggish, and it's unclear
how it all will end," Putin said.
[return to Contents]
#32
Valdai Discussion Club
August 1, 2011
Michael McFaul and the future of the "reset"
By Dmitry Suslov
Dmitry Suslov is Deputy Director for Research at the Council on Foreign and
Defense Policy; Member of the Valdai Discussion Club.
Special Assistant to the U.S. President and Senior Director of Russian and
Eurasian Affairs Michael McFaul has been appointed United States Ambassador to
Russia. This is an extraordinary event in Russian-U.S. relations and in U.S.
foreign policy in general. After Robert Strauss, McFaul will be only the second
U.S. ambassador to Russia who is not a career diplomat. Strauss was U.S.
ambassador first to the U.S.S.R. and then to Russia in 1991-1992, some of the
most pivotal years in the history of this country. That much is symbolic in
itself. President George H.W. Bush appointed Strauss to the position at a time
that was decisive for our bilateral relations, for the United States, and for the
rest of the world. The Soviet Union was still an influential superpower, and
Strauss, a seasoned politician and businessman, was a better choice than a career
diplomat used to strict subordination and waiting for State Department
instructions on every matter. Moreover, he was a politician from the
rival party, a prominent Democrat and an influential figure in the Carter
administration, but also the kind of man who could facilitate Russia's democratic
transformation at a turning point in its history. At that moment, Strauss was the
man for the job.
McFaul is not a politician, but he is as versed in democracy and democratization
as Strauss was, at least in theory. A professor at Stanford University, he is one
of the most prominent specialists on Russia in the United States, particularly on
its domestic development and democratization. A typical representative of the
so-called liberal internationalists, he believes that democratization is possible
in the majority of countries, if not all, and that such transformations are
instrumental in making the policy of these countries, including Russia, more
favorable to the United States.
Having taken a job in the White House (as a special assistant, he was on the
National Security Council staff), McFaul did not renounce his liberal attitudes.
Yet he was also capable of acting as a realist when he became the chief architect
of the "reset" strategy in Russian-American relations. That strategy is based on
the willingness of Russia and the United States to reassess their national
priorities and back away from ideological conflicts that do not concern their
vital interests. For the U.S., that list of vital interests includes Afghanistan,
Iran, nuclear non-proliferation, and nuclear security. Washington needs Moscow's
support on these issues and, in return, it is ready to back down on less
important interests, for instance, in Georgia.
Today, McFaul is primarily thought of as the theoretical and practical advocate
of the "reset," a man whose name is largely associated with the high dynamism of
recent Russian-American cooperation and the general positive spirit of their
bilateral ties. In this context, his potential nomination as U.S. ambassador to
Russia will symbolize the continued effort of the Obama administration to pursue
the "reset" and further develop these relations during a second term. Obama is
likely to be reelected despite economic difficulties in the United States, if
only because, first and foremost, the Republicans have been unable to find a
competitive candidate due to the party's strong general drift to the right.
Like Strauss twenty years ago, McFaul will become an ambassador to Russia at a
time when both the country and its relations with the U.S. are at a crossroads.
The presidential elections in Russia will take place in March 2012, but it is
already obvious that the two main claimants, Dmitry Medvedev and Vladimir Putin,
have different views on relations with the United States and, most likely, on
domestic development as well.
In this context, McFaul's appointment shows that U.S. policy on Russia will
retain the same high level of priority for Obama in 2013-2016 that it did during
his first term. As the president's special assistant, McFaul has already
fulfilled his mission by introducing the "reset." Both Moscow and Washington
officially recognize the symbolic change. It is also borne out by the bilateral
achievements of the last two and a half years and the sides' continued attempts
to preserve a positive dynamic following the New START Treaty's entry into force
last February and on the eve of presidential elections in both countries. Now,
with relations again beset by contradictions and mistrust as at the end of 2008,
the task is to preserve what has been achieved rather than to improve upon it.
The White House believes that to this end, McFaul will be more useful in Moscow
than in Washington or Stanford.
Until recently, McFaul was expected to return to Stanford University for teaching
and research, perhaps even before Obama's first term expires. The decision to
keep him in the civil service, which was made by top administration officials no
sooner than mid-June, shows that Obama and other top officials appreciate his
work and want to stabilize the positive character of bilateral ties and prevent
new sources of tension. McFaul will now have to perform the very complicated task
of cooperating with the Russian authorities in order to preserve and even
strengthen the toolkit of "reset" strategies accumulated by Russia and the U.S.
over the last two and a half years.
That's the problem. It is one thing to send signals about the desire to preserve
positive cooperation with Russia and quite another thing to pragmatically apply
it. Regrettably, McFaul's ability to preserve and even consolidate partnership
with Russia during Obama's second term is a big question. The problem does not
boil down to his personality. In most cases, the art of diplomacy and the
personal factor can only color the undertones of interstate relations. The
decisive role belongs to systemic factors, primarily national interests, as well
as the place of each state in world politics, the place of bilateral ties in the
context of general foreign policies, and domestic factors and restrictions. Even
the most seasoned diplomat and skilful politician is powerless to reverse the
effects of such systemic forces on the trend of bilateral relations.
This is exactly the case with current Russian-U.S. relations. They have not seen
the best of times since the start of this year and are unlikely to match the
successes of the "reset" by 2012.
To begin with, both sides are failing to implement the agenda that they set as
the foundation of the second stage of the "reset," a period that began with the
START III Treaty entering into force and that will end with the 2012 presidential
elections in both countries. That agenda is gradually becoming negative. It boils
down to missile defense and Russia's accession to the WTO. These issues play a
major role in bilateral cooperation, and both sides consider them decisive for
its future, but there is no progress on either issue.
On missile defense, progress is obstructed by Russia's reluctance to give up on
the classic interpretation of bilateral nuclear parity inherited from the Cold
War and based on the concept of Mutually Assured Destruction; meanwhile, the U.S.
remains unwilling to limit missile defense in Europe in quantity, quality, and
geography. So far, the sides have been unable to negotiate legally binding or
even political guarantees to the effect that hypothetical U.S. missile defense
systems will not be directed against Russia's strategic nuclear forces.
Washington rejected both of Russia's proposals: on the aforementioned guarantees
and on a joint missile defense system. Moscow is using this rejection as a
pretext for a
massive buildup of ballistic missile production in a bid to restore its
traditional strategic parity with the United States and reinforce its status as
the second nuclear superpower. These actions are perpetuating a philosophy of
deterrence in bilateral relations and running the risk of a new nuclear arms
race.
At the NATO summit in Lisbon six months ago, Russia, the United States, and NATO
decided to try to cooperate on missile defense but failed to overcome a stalemate
and reach their planned targets by this July. This failure is already producing a
negative effect on bilateral ties. The sides hoped to improve this atmosphere by
demonstrating that they can cooperate on the CIS, Iran, Afghanistan, and
START-III, thereby overcoming their differences and pursuing their interests in
parallel. In June and July, however, they were forced to acknowledge a lack of
progress on missile defense with no hope for removing major obstacles in the near
future. In this context, their potential for cooperation may seem to be almost
exhausted.
Progress on the WTO is a bit more straightforward, especially since the U.S. is
putting in an obvious effort, but the results are still unclear. Despite
Washington's real, albeit limited, pressure on Georgia, Tbilisi shows no desire
for compromise. Moreover, it continues to resort to all kinds of provocations in
order to sabotage talks with Moscow. It is enough to mention that it has now
accused Russia of organizing and funding attempted acts of terror in Tbilisi. In
turn, the European Union and the United States have been unable to satisfy
domestic commercial lobbies and accept Russia's entry to the WTO without
concessions on subsidies to car makers, agriculture, and so on.
Yet another failure to get Russia into the WTO by the proposed deadline (late
2011 to early 2012) will deal a heavy blow to Russian-U.S. relations and cause a
new wave of disappointment in America, and in the prospects for bilateral cooperation,
among the Russian political elite, especially the supporters of President Dmitry
Medvedev, who has put the accession to the WTO at the top of his priorities. His
chances for reelection will suffer as well. Skeptics will receive yet another
confirmation of the alleged impossibility of a stable bilateral partnership and
Washington's inability to help Moscow promote its major national interests. They
are bound to use this argument on the eve of the presidential elections, thereby
creating a bad environment for bilateral relations after 2012.
The disappointment of the Russian president and supporters of stable partnership
with the United States in its actions in Libya is another major factor in the
crisis of the "reset." Having decided not to block UN Security Council Resolution
1973 last March, Medvedev took a big political risk. He departed from the Russian
tradition of staunchly countering humanitarian interventions and sacrificed
Russia's economic interests in Libya. By so doing, he inaugurated a new mode of
cooperation with the United States on settling conflicts within foreign states
and on the world scene. This new policy would require Russia to pursue its
interests in a given conflict proceeding from specific material interests rather
than ideological dogmas, including attempts to get some benefits from Washington
for its cooperation on the given conflict.
These hopes did not materialize. As subsequent events have shown, the Western
coalition, including the United States, simply used Russia to legitimize a
military campaign that has turned into an operation to replace the existing
regime and far exceeded the bounds of its original mandate. The coalition is
playing a dishonest game with Russia: it is supporting Russia's mediation in
Libya but using it for purposes that have nothing to do with Russia's own
interests or goals. The coalition wants to remove Muammar Gaddafi from power and
help the rebels win a war rather than reach the political settlement Moscow
advocates on the basis of compromise. The United States and other members of the
Western coalition have removed Moscow from participation in determining Libya's
political and economic future. They are discussing the issue with Libyan rebels
behind closed doors.
As a result, bilateral relations have returned to their previous ideological
stalemate both on Libya and on the general issue of interference into foreign
domestic crises. Syria is the best case in point. The sides have already taken
contrary positions on the issue. Washington wants the UN Security Council to
adopt a resolution denouncing Bashar al-Assad's regime, whereas Moscow objects to
any international interference, not to mention UN sanctions. Russian-U.S. policy
disagreements on Iran have also become more pronounced. Bilateral relations have
backslid on this important issue. The spirit of Russian-U.S. cooperation that
only recently prevailed on both Libya and Iran is dissipating. The Russian
leadership is disappointed and even offended by this change (Medvedev emotionally
expressed his frustration in an interview with The Financial Times in June). It
again seems that the sides are unable to build a stable partnership on global
political and military-political issues.
Finally, adverse domestic pressure is growing stronger in both countries and is
not conducive to preserving the positive trends that were manifest in their
relations over the last two years. In Russia, it is expressed in the growing
disappointment of those in the political elite who had hoped for steady
improvement in bilateral relations with the advent of the Obama administration
and the "reset" strategy. The remaining elements of the Russian political
spectrum did not believe in the possibility of long-term partnership from the
start.
In the United States, it can be seen in the restrictions that Republican
politicians are now imposing on the Obama administration. Their foreign policy
views are heterogeneous and largely contradictory (a mix of nationalist
isolationism and imperialistically leaning neo-conservatism); however, when it
comes to Russia, most Republicans can agree that a partnership is not vital or
even desirable to the pursuit of major American interests. They believe that it
is much more important for the U.S. to protect the "purity of its values", i.e.,
to approach Russia according to its adherence to democratic ideals, and to
support U.S. allies in Eastern Europe and the CIS, rather than build cooperation
with Moscow. As a result, Republicans almost overwhelmingly view Obama's "reset"
with Moscow not only as a blunder but even as a threat to U.S. interests and
values that could spoil its image as "the champion of freedom and democracy" and
weaken its standing in the world arena. Republicans are putting strong pressure
on Obama to curtail his projects of cooperation with Russia, including arms
control, and to pursue a more critical and tougher line towards Moscow in
general.
The continued consolidation of Republican political power (in part a result of
economic circumstances) leaves little hope for improvement in bilateral ties.
The Obama administration cannot afford to take steps that may be interpreted as
concessions. This is a major factor behind the stalemate on missile defense,
inasmuch as Obama's consent to restrict the U.S.-planned system would be
tantamount to political suicide. It stands equally in the way of Russia's WTO
accession: Washington cannot afford to put too much pressure on Tbilisi, and
President Mikheil Saakashvili is well aware of it. Meanwhile, the Republicans are
compelling the White House to take other steps that are bound to displease
Russia. This applies in particular to U.S. support for its allies in Eastern
Europe and the CIS, including the deployment of a new air force base in Poland
and military supplies for Georgia. Having won control of the House of
Representatives in the beginning of this year, Republican legislators are
becoming major irritants to bilateral ties. Currently, they are discussing a bill
on sanctions against a number of high-ranking Russian officials from the
so-called "Magnitsky list" and a complete freeze on constructive dialogue with
the State Duma.
All these negative trends are taking shape on the eve of presidential elections
in both countries, which means that if they are not overcome in the remaining few
months of this year, it will be very difficult to hope for steady improvement
thereafter. It is particularly disconcerting to see such challenges arise in
Russian relations with what is the most progressive and reasonable U.S.
administration of the last few decades. In the near future, the United States is
unlikely to see another administration with so much in its favor for better ties
with Russia, and if the current window of opportunity closes, it is unlikely that
a new one will open any time soon. Most likely, subsequent U.S. administrations
will simply come to the conclusion that with Russia, there are no right answers,
whether it is George W. Bush's confrontational and ideological approach or
Obama's constructive and realistic policy shift.
As an ambassador in Moscow, McFaul is unlikely to cope well with these
developments or even slow them down. No matter how symbolically or politically
charged he may be, as an American ambassador to Moscow, he will carry out
Washington's policy, even if it eventually contradicts the larger goals that
prompted Obama to appoint him to the position. That policy line will take shape
under the impact of a mutual failure on the part of Russia and the U.S. to come
to terms on the issues that they both placed at the top of their current agenda,
adding fresh disappointment to the handling of the "Arab revolutions" and further
impetus to domestic pressure against future cooperation. Obama's ambassador to
Moscow will have little recourse when the administration itself is often unable
to exert decisive influence over such circumstances.
The history of Soviet and Russian relations with the United States shows that the
personality of an American ambassador in Moscow is the least important factor in
determining the nature of those ties. Indeed, the ambassador's mission is not to
elaborate U.S. policy as regards Russia but to pursue it as skillfully as he can.
In other words, the ambassador is supposed to build relations with the
government, the political elite, and, as is now increasingly the case, the civil
society of the state to which he is appointed. If this policy as well as the
policy of the state in question rules out constructive partnership, the
ambassador is able to do little but make things worse.
It turns out that McFaul would be more influential and useful for promoting
bilateral partnership if he stayed in the White House and continued taking an
active part in the elaboration rather than implementation of U.S. policy towards
Russia and if Obama rather than Medvedev or Putin were his target audience. It
is unclear how much the architect of the "reset" will be able to influence the
White House from Moscow. Clearly, his influence will be smaller than it is now.
By virtue of bureaucratic protocol, he will have to work through the Department
of State rather than directly with the president. In the meantime, it is the
president who determines foreign policy in America, and influence grows in
proximity to him both institutionally and geographically.
It is also unclear how effective McFaul will be as a vehicle rather than
architect of U.S. foreign policy and to what extent his efforts to build
relations with Russian leaders and social and political elites will promote the
improvement of bilateral ties. It may well happen that McFaul is not the best
choice for this role. That much can be inferred from his political and
ideological convictions as a champion of democratization, an independent civil
society, and human rights and, perhaps more importantly, his friendly relations
with the Russian opposition both literally and figuratively. It is no secret
that his appointment as the president's special assistant and senior director of
Russian and Eurasian affairs did not come as a pleasant surprise for many leading
Russian politicians, especially those who associate themselves with the so-called
"Putin regime." McFaul has been their adamant critic for many years.
These convictions were revealed in McFaul's activity as senior director of
Russian affairs and in his concept for U.S. policy towards Russia in the wake of
the "reset." It is enough to mention the formation of the U.S.-Russian Bilateral
Presidential Commission's Civil Society Working Group, which is co-chaired by
McFaul from the American side, and the organization of "civil society summits" in
parallel with bilateral talks at the top official level, or the White House's
general rhetoric on the need to promote dialogue not only with the Russian
government but also with civil society. For the sake of fairness, however, it
must be said that this was by no means the primary agenda of the "reset."
Apparently, McFaul's arrival in Moscow will only enhance this component of U.S.
diplomacy. He will do more than anyone else to invigorate the U.S. Embassy's
contacts with Russian opposition groups, human right champions, and other civil
society representatives. His relations with Russia's political leadership,
especially if Putin again becomes president in March 2012, are bound to be
difficult. The consolidation of the so-called second track of American diplomacy
(dialogue with civil society) may take place at the expense of the first. If
Russia chooses a new road in March 2012, it would be hard to imagine a better
ambassador than McFaul, but, for the time being, that scenario remains highly
unlikely.
[return to Contents]
#33
Komsomolskaya Pravda
August 2, 2011
RUSSIAN FOREIGN MINISTRY: RESOLUTION ON OCCUPATION OF ABKHAZIA AND SOUTH OSSETIA
IS BUT PR STUFF
MOSCOW'S RESPONSE TO THE AMERICAN SENATE RESOLUTION
Author: Yevgeny Lukianitsa
[Russian diplomat: American legislators ignore established facts.]
The Russian Foreign Ministry made a statement yesterday in
connection with the resolution adopted by the U.S. Senate where
Abkhazia and South Ossetia were called "occupied territories". The
document adopted in Washington pointed out that Russia had failed
to honor cease-fire obligations and done nothing at all to
facilitate the return of Georgian refugees to their homes. The
Georgian Foreign Ministry took it from there and announced that
"The resolution supports territorial integrity of Georgia and
recognizes Abkhazia and South Ossetia as Georgian regions occupied
by Russia."
The Russian Foreign Ministry emphasized that the resolution
adopted in Washington was entirely groundless. "We have explained
on more than one occasion already how inappropriate the use of the
term "occupation" is in this context."
"The statements made by American legislators indicate either
a lack of knowledge of international law or deliberate neglect of
established facts," said a source within Russian diplomatic
circles. "In other words, this so called resolution is but PR
stuff and nothing more."
Russian diplomats emphasized that not a single Russian
serviceman could be found anywhere in Georgia. Russian servicemen
are only assigned to Abkhazia and South Ossetia, two countries
whose sovereignty Russia formally recognized following the
criminal war launched by Mikhail Saakashvili in August 2008.
[return to Contents]
#34
Russia's response to U.S. Magnitsky case blacklist could concern senators,
congressmen - analyst
MOSCOW. Aug 1 (Interfax) - Moscow ought to formulate its measures in response to
the U.S. Department of State's decision to place a travel ban on Russian
officials allegedly involved in the death of Hermitage Capital lawyer Sergei
Magnitsky within the next week, Russian parliamentarian Sergei Markov told
Interfax.
"Generally speaking, the Foreign Ministry can do it next week. Nothing is
standing in its way. It should be done right now, as long as this topic remains
relevant," Markov said on Saturday.
This issue, however, will unlikely require an emergency session of the Russian
State Duma, the lower chamber of parliament, which adjourned for summer recess,
he said.
"The State Duma is unlikely to meet especially to sort out this issue. The
Foreign Ministry will make this decision," he said.
Russia's retaliatory measures will primarily concern members of the U.S. Senate
and Congress who insisted on compiling a blacklist over Magnitsky's death, the
Russian deputy said.
"I think that it will apply to people primarily linked to the drafting of this
blacklist - senators and congressmen who adopted appropriate resolutions. It will
possibly concern State Department officials responsible for adopting this
decision," he said.
Markov, however, said he did not rule out that Moscow's measures could contain
not only a response to Washington's biased attitude toward the Magnitsky case.
"In principle, these retaliatory measures could also concern those who beat
journalists of the Russia Today television station, as well as those who refused
to open criminal cases following this incident. They will possibly recall the
deaths of Russian children adopted by Americans, and consequently will recall
attempts to hush up these facts," Markov said.
A Russian Foreign Ministry source told Interfax earlier that the ministry had
started to formulate measures in response to the decision of the U.S. Department
of State to impose a travel ban on a number of Russian officials allegedly
involved in Magnitsky's death in a Moscow jail in November 2009.
"In compliance with orders given by the president, the Foreign Ministry of Russia
is drafting adequate retaliatory measures," the source said.
[return to Contents]
#35
August 1, 2011
Russia and the Arab Spring: the Kremlin's Short-Term Gains Are Russia's Long-Term
Losses
By Yuri Mamchur
When the recent anti-government demonstrations began in the Arab world, the
planet's only superpower--the United States of America--became actively involved.
The American government cheered, making public statements supporting Arab
nations' rights to freedom. But given how much closer Russia is to the Arab world
than the United States--geographically speaking, at least--it's worth asking
where Russia has been during the Middle East's great upheaval.
More Russians than Americans travel to Egypt. According to RusTourism News, in
March 2009 alone 300,000 Russian tourists traveled to Egypt. In March 2010, that
number grew by 90.4 percent. Oil prices affect Russia more than they do
America--after all, not only private businesses, but Russia's federal budget is
strictly tied to the price per barrel of oil. Simply put, stability in the Arab
world would seem to matter at least as much--if not more--to Russia as it does to
the US. But action, or in this case, inaction, may speak louder than words.
The dearth of official Russian involvement in the "Arab spring" demonstrates the
country's fading influence in the world, at least the type of influence needed to
carry out precise international intelligence operations and foresee long-term
geopolitical effects. While some have said that the US intelligence community may
have helped facilitate the Arab spring (or at least desired it), no one is even
giving Russian intelligence the honor of such speculation and rumor. Instead,
Russia's most notable intelligence activity of recent international memory was
the embarrassment over last year's spy scandal, when Russian intelligence
officers were kicked out of the US after being caught spying for Russia.
Embarrassingly for Russia, the only "intelligence" those intelligence officers
ever obtained was nothing more than street rumors and data from daily print
media, all of which could have been easily found online, without ever leaving
Moscow.
Perhaps Russia didn't show up at the Arab spring because the upheaval doesn't
seem to carry any political threat to the Kremlin's current inhabitants. Middle
East instability has increased the price of oil. As a result, the Dmitry
Medvedev-Vladimir Putin team has benefited, gaining the ability to balance
Russia's troubled budget and to boost the country's social programs--great
outcomes for them in light of Russia's upcoming parliamentary and presidential
elections and Russians' rising dissatisfaction with the nation's leaders.
Speaking about Russians' discontent with Medvedev and Putin, some Washington DC
think-tank scholars have suggested the possibility of a similar Russian uprising
against the Kremlin. Such claims are unfulfilled desires of the anti-Putin
Washington establishment. After centuries of authoritarianism and a decade of
poverty during the 1990s, Putin gave Russians all they wanted: relative
stability, freedoms and rising incomes. Unfortunately, the means became goals,
and Putin's team became too caught up with balancing the status quo for the sake
of stability. No technological, scientific, or entrepreneurial advancements took
place in the country, and small- and medium-sized private business barely saw the
results of Russia's new-found wealth. Russia's financial health has become
heavily dependent on oil revenues, which have blurred the leader's vision for the
nation.
Events in Libya serve as a great case study. Libyan instability means two
contradicting things for Russia: rising oil prices (good) and the loss of an
economic and strategic partner (bad). The positive trend in the oil market is a
very shortsighted gain that doesn't really help Russia's long-term national
policy. It may have put Russia's federal budget into the black, but the Russian
military will lose significant defense markets in the Middle East in the long
term. The end of Muammar Gaddafi's regime--a good thing for all of us who like to
see iron-fisted despots removed from power--could eventually mean an end to at
least $4 billion worth of Russian weapons sales over the next five years. "There
is a chance we might lose something," Russian Defense Minister Anatoly Serdyukov
said in a press conference. His job depends on a Medvedev-Putin reelection. With
Russia sitting on the sidelines, it has been in no position to shape what a
post-Gaddafi Libya might look like and is probably missing its chance for
influence going forward.
The Arab spring has short-term positive and long-term negative effects for
Russia. Most importantly, during the biggest upheaval in the Middle East in
modern history, Russia involuntarily positioned itself as a silent bystander.
Eventually, Medvedev, Putin, and the Russian intelligence community will be the
ones to blame for forgoing Russia's national interests in pursuit of higher oil
revenues in the short term. But, "Each country," as Aldous Huxley said, "gets the
leader it deserves." Since the fall of the Soviet Union, Russians have sought
internal stability and personal financial gain. In the meantime, the world around
Russia has continued to evolve. One day, Russians may wake and find themselves in
a world that does not favor Russia and its interests. Such macroeconomic and
geopolitical conditions will outweigh small personal gains. Maybe then, Russia
will be ready for its own Eurasian spring.
Yuri Mamchur directs the Real Russia Project at Discovery Institute in Seattle
and manages the Russia Blog.
This article was written for and originally published on July 28 2011 by
bitterlemons-international.org ((c) bitterlemons-international.org).
[return to Contents]
#36
Moscow News
August 1, 2011
Mythologizing the 'mafiya'
By Mark Galeotti
Mark Galeotti is Clinical Professor of Global Affairs at New York University's
SCPS Center for Global Affairs. His blog, "In Moscow's Shadows," can be read at:
We have an incurable fascination with organized crime. From the willingness of
the Japanese Yakuza to cut off a joint of their own finger to atone for mistakes
to Don Corleone of "The Godfather" talking about the honor of the Mafia while
ordering killings and beatings, they seem larger than life.
No wonder that the Russian "mafiya" caught the global imagination. Violent and
macho vory v zakone with their tattoos and opaque slang make irresistible fare
for thriller writers, movie producers and journalists looking for a snappy
headline. Of course as with so much mythology, it is a heady cocktail of truth,
misunderstanding and wild speculation. In fact, the vory are in decline. There
are still a few hundred criminals calling themselves 'thieves-in-law' but the
days when that really meant something are almost over. Nowadays you can simply
buy the title.
It used to be that every criminal tattoo meant something: what crimes you
committed, where you went to prison. If you got one without having earned it, you
could find that patch of skin being cut away from you by angry vory. Today, just
get a commercial artist to give you whatever inking you think looks cool and
dangerous as long as you stay out of prison you're unlikely to be taken to task.
Indeed, tattoos are so 20th century. The new generation godfathers, the
avtoritety, are more likely to be sharp-suited entrepreneurs mixing crime and
business. They hardly want to brand themselves as crooks and embarrass themselves
the next time they are launching an IPO or sunning themselves on the beach at St.
Tropez.
But the myths remain powerful, and inform even the most important decisions. The
most recent example was on July 25, when the U.S. administration rolled out its
new Strategy to Combat Transnational Crime, warning that "Russian and Eurasian
organized crime networks represent a significant threat to economic growth and
democratic institutions."
At the same time President Barack Obama signed an executive order giving the U.S.
Attorney General new powers to freeze the assets of transnational organized crime
gangs. It listed four especially serious threats, including "the Brothers'
Circle, formerly the Family of Eleven, formerly the Twenty."
Who? Treasury Under-Secretary David Cohen called the Circle, also apparently
known as "the Moscow Center," "a multiethnic criminal group composed of leaders
and senior members of several criminal organizations largely based in countries
of the former Soviet Union." In other words, the bosses of the Russian "mafiya."
Again, who? I've been working on the Russian and Eurasian underworld for some 20
years and I've never heard serious talk of a Brothers' Circle, let alone Moscow
Center. Nor, as near as I can tell, has Russian law enforcement.
The term bratsky krug ("brothers' circle") was sometimes used for the most senior
vory v zakone, but in the past, and not to suggest any kind of formal
organization. And "Moscow Center" was a Cold War term for the KGB's HQ.
So what is going on? I hope that the U.S. administration just wanted to note that
it regarded Russian organized crime as a serious threat and chose to use some
general term as a placeholder. After all, the executive order doesn't say that
only these groups can have their assets frozen, just that it applies to groups
like these.
But I do worry about whether Western law enforcement is still caught up on the
myths of the "mafiya."
Recently I heard an FBI agent repeat an alarmist figure from last year, that
there were 300,000 Russian gangsters outside the country's borders. By contrast,
the Japanese police estimates there are maybe 5,000 Yakuza outside Japan, and
there are perhaps 4,000-5,000 Italian criminals active in the rest of the world.
Were this figure accurate (it isn't), then Russian criminals would outnumber all
other expatriate mobsters in the world put together!
Meanwhile, I've come across British cops who think that every Russian gangster
must have tattoos, and German investigators who genuinely believe that
somewhere, like a scene out of some Bond movie, there is a ruling council
running all post-Soviet organized crime.
The loose but entrepreneurial criminal networks based in Russia such as the
Solntsevskaya, Tambovskaya and personal empires of such underworld chieftains as
"Taro" and "Ded Khasan" are certainly formidable. They traffic Afghan heroin into
Russia and Europe; they plunder the Russian state; they engage in cybercrime
around the world. But until Western law enforcers can shed the temptation to see
these criminals in terms of their mythology and history, it will be that much
harder to deal with them.
[return to Contents]
#37
Only 5% of Russians Call Lukashenko True Friend
MOSCOW. Aug 1 (Interfax) - The number of Russians who call Belarus a friend has
declined, while the attitude to the Belarusian leader has worsened even more,
Levada Center told Interfax. The center held the poll in 130 towns and cities in
45 regions on July 15-19.
The share of those who call Belarus a friend declined from 76% last year to 68%.
The latest events in Belarus damaged the prestige of Belarusian President
Alexander Lukashenko.
Nineteen percent of the respondents call him "an esteemed leader of the
Belarusian people" at present. The indicator stood at 30% in 2007. The number of
people who call Lukashenko "the last dictator in Europe" doubled, from 10% to
20%.
Only 5% of the respondents called Lukashenko "the only true friend of Russia."
Twenty-four percent found it difficult to answer the question.
Some 40% of the respondents said they would not mind Lukashenko's retirement, 26%
said they did not want it, and 31% could not answer the question.
Thirty-five percent of Russians still think that Lukashenko in office is good for
Russia, 24% say they prefer a different leader in Belarus, and 41% are unable to
say which would be better for Russia.
At the same time, only 11% of the respondents showed interest in the opposition
protests in Belarus. Another ten percent said they were "irritated with and
indignant at" the protests. Fifteen percent said they watched the protests "with
sympathy and respect," and 26% had no feelings.
Thirty-nine percent of Russians condemned the dispersal of protest rallies in
Belarus this January, and 26% expressed their indignation, the center said.
[return to Contents]
#38
Medvedev scraps Ukraine visit after gas merger fails
(AFP)
August 1, 2011
MOSCOW Russian President Dmitry Medvedev has scrapped plans to visit Ukraine for
a Navy Day parade after Kiev balked at Moscow's proposal to merge the countries'
state gas firms, Kommersant said on Monday.
Citing sources in the Ukrainian foreign ministry and the Kremlin administration,
the newspaper said the signing of an agreement to merge Russia's Gazprom with
Ukraine's Naftogaz was the main condition of Medvedev's Sunday visit to the
Crimean port of Sevastopol, where the Russian Black Sea Fleet is based.
"Since we deemed this disadvantageous, we were told: then there won't be any
visit," the Ukrainian foreign ministry source told Kommersant.
"Despite the current state of Russian-Ukrainian ties, we are interested in any
contact between the presidents."
The Kremlin source added: "When it became known that this would not happen,
Dmitry Medvedev made a decision not to go to Ukraine."
A Russian defense ministry source told AFP on Friday that Medvedev had cancelled
plans to go to Sevastopol to preside over a Navy Day parade amid an unexpected
surge in tensions, saying the status of the parade was unexpectedly downgraded.
Medvedev on Sunday visited the port of Baltiisk in the Russian exclave region of
Kaliningrad instead. His spokeswoman Natalia Timakova has denied to AFP that the
Kremlin chief had any plans to go to Sevastopol. a Russian-led customs union.
Ukraine has also repeatedly insisted that any joint business with Gazprom should
be implemented on equal terms and ruled out an outright merger.
Moscow fears Ukraine may be ramping up its ties with NATO despite its non-aligned
status.
Russia in June protested the arrival of a US Navy cruiser equipped with a
ballistic missile defence system in the Black Sea to take part in naval exercises
with Ukraine.
| http://www.wikileaks.org/gifiles/docs/30/3001445_-os-2011-137-johnson-s-russia-list-.html | CC-MAIN-2014-42 | refinedweb | 31,173 | 50.06 |
Making a Pseudo-Distributed Hadoop Cluster
December 8, 2012
Hadoop Cluster
History of Hadoop
Hadoop is an open source framework written in Java for processing large and complex data sets in parallel. Doug Cutting, the developer of Hadoop, named it after his son's toy elephant. It evolved to support Lucene and Nutch after the release of a paper by Google about GFS in 2003. It works with unindexed, unstructured and unsorted data. The main parts of Hadoop are
HDFS (Hadoop Distributed File System)
HDFS is a
- Distributed : Data is split into blocks and stored on different DataNodes, so computation can be performed closer to the data for faster execution.
- Scalable : Resources can be scaled on demand by distributing computation and storage across many servers. Because data is broken down into blocks that MapReduce programs can process in smaller chunks, scalability is increased.
filesystem which stores metadata in the NameNode and application data in DataNodes. Fault tolerance is achieved by a default replication factor of 3.
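The block-splitting and replication idea can be sketched in plain Python. This is an illustration only: the tiny block size, the node names, and the round-robin placement policy are invented for the example and are not HDFS's actual placement algorithm (real HDFS blocks are much larger).

```python
# Illustrative sketch (not the real HDFS implementation): split data into
# fixed-size blocks and assign each block to 3 DataNodes, mimicking the
# default replication factor of 3.
from itertools import cycle

BLOCK_SIZE = 8          # bytes per block, tiny for demonstration
REPLICATION = 3
DATANODES = ["dn1", "dn2", "dn3", "dn4", "dn5"]

def split_into_blocks(data, block_size=BLOCK_SIZE):
    """Cut the byte string into fixed-size blocks (last one may be short)."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_blocks(blocks, nodes=DATANODES, replication=REPLICATION):
    """Return a block-id -> [node, ...] map, like the NameNode's block map."""
    rotation = cycle(range(len(nodes)))
    placement = {}
    for block_id, _ in enumerate(blocks):
        start = next(rotation)
        placement[block_id] = [nodes[(start + r) % len(nodes)]
                               for r in range(replication)]
    return placement

blocks = split_into_blocks(b"hello hadoop distributed file system")
for block_id, replicas in place_blocks(blocks).items():
    print(block_id, replicas)
```

Losing any one node leaves at least two live replicas of every block, which is the essence of HDFS fault tolerance.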
Map Reduce
It is a framework used for parallel processing humungous data sets using clusters. It has two phases.
- Map phase : In this phase the data that needs to be processed is separated out as intermediate key-value pairs.
- Reduce phase : In this phase the data from the map phase is collected and the analysis is performed.
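The two phases can be illustrated with a pure-Python word count. This only simulates the programming model; a real Hadoop job would implement the Mapper and Reducer in Java and run across the cluster.

```python
# A minimal, pure-Python sketch of the MapReduce model for word count.
from collections import defaultdict

def map_phase(lines):
    """Map: separate out the data to be processed as (word, 1) pairs."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Group intermediate pairs by key (the framework does this in Hadoop)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: collect the map output and aggregate it."""
    return {word: sum(values) for word, values in groups.items()}

lines = ["hadoop is open source", "hadoop is written in java"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["hadoop"], counts["java"])  # 2 1
```

Because each map call and each reduce call is independent, the framework can run them in parallel on many nodes.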
Cluster
A cluster is a group of computers connected via a network. Similarly, a Hadoop cluster is a number of systems connected together to complete the picture of distributed computing. Hadoop uses a master-slave architecture.
Components required in the cluster
NameNodes
The NameNode is the master server of the cluster. It does not store any file data itself, but it knows where the blocks are stored on the child nodes, so it can hand out pointers and reassemble files. The NameNode maintains two structures: the FsImage (a snapshot of the filesystem namespace) and the edit log (a record of changes since the last snapshot).
Features
- Highly memory intensive
- Keeping it safe and isolated is necessary
- Manages the file system namespaces
DataNodes
Child nodes are attached to the main node.
Features:
- Data node has a configuration file to make itself available in the cluster .Again they stores data regarding storage capacity(Ex:5 out f 10 is available) of that particular data node.
- Data nodes are independent ,since they are not pointing to any other data nodes.
- Manages the storage attached to the node.
- There will be multiple data nodes in a cluster.
Job Tracker
- Schedules and assign task to the different datanodes.
- Work Flow
- Takes the request.
- Assign the task.
- Validate the requested work.
- Checks whether all the data nodes are working properly.
- If not, reschedule the tasks.
Task Tracker
Job Tracker and task tracker works in a master slave model. Every datanode has got a task tracker which actually performs the task which ever assigned to it by the Job tracker.
Secondary Name Node
Secondaryname node is not a redundant namenode but this actually provides the check pointing and housekeeping tasks periodically.
Types of Hadoop Installations
- Standalone (local) mode: It is used to run Hadoop directly on your local machine. By default Hadoop is configured to run in this mode. It is used for debugging purpose.
- Pseudo-distributed mode: It is used to stimulate multi node installation using a single node setup. We can use a single server instead of installing Hadoop in different servers.
- Fully distributed mode: In this mode Hadoop is installed in all the servers which is a part of the cluster. One machine need to be designated as NameNode and another one as JobTracker. The rest acts as DataNode and TaskTracker.
How to make a Single node Hadoop Cluster
A Single node cluster is a cluster where all the Hadoop daemons run on a single machine. The development can be described as several steps.
Prerequisites
OS Requirements
Hadoop is meant to be deployed on Linux based platforms which includes OS like Mackintosh. Larger Hadoop production deployments are mostly on Cent OS, Red hat etc.
GNU/Linux is using as the development and production platform. Hadoop has been demonstrated on Linux clusters with more than 4000 nodes.
Win32 can be used as a development platform, but is not used as a production platform. For developing cluster in windows, we need Cygwin.
Since Ubuntu is a common Linux distribution and with interfaces similar to Windows, we’ll describe the details of Hadoop deployment on Ubuntu, it is better using the latest stable versions of OS.
This document deals with the development of cluster using Ubuntu Linux platform. Version is 12.04.1 LTS 64 bit.
Softwares Required
- Java JDK
The recommended and tested versions of java are listed below, you can choose any of the following
Jdk 1.6.0_20 Jdk 1.6.0_21 Jdk 1.6.0_24 Jdk 1.6.0_26 Jdk 1.6.0_28 Jdk 1.6.0_31
*Source Apache Software Foundation wiki. Test resukts announced by Cloudera,MapR,HortonWorks
- SSH must be installed.
- SSHD must be running.
This is used by the Hadoop scripts to manage remote Hadoop daemons.
- Download a latest stable version of Hadoop.
Here we are using Hadoop 1.0.3.
Now we are ready with a Linux machine and required softwares. So we can start the set up. Open the terminal and follow the steps described below
Step 1
Checking whether the OS is 64 bit or 32 bit
>$ uname –m
If it is showing a 64, then all the softwares(Java, ssh) must be of 64 bit. If it is showing 32, then use the softwares for 32 bit. This is very important.
Step 2
Installing Java.
For setting up hadoop, we need java. It is recommended to use sun java 1.6.
For checking whether the java is already installed or not
>$ java –version
This will show the details about java, if it is already installed.
If it is not there, we have to install.
Download a stable version of java as described above.
The downloaded file may be .bin file or .tar file
For installing a .bin file, go to the directory containing the binary file.
>$ sudo chmod u+x <filename>.bin >$ ./<filename>.bin
If it is a tar ball
>$ sudo chmod u+x <filename>.tar >$ sudo tar xzf <filename>.tar
Then set the JAVA_HOME in .bashrc file
Go to $HOME/.bashrc file
For editing .bashrc file
>$ sudo nano $HOME/.bashrc # Set Java Home export JAVA_HOME=<path from root to that java directory> export PATH=$PATH:$JAVA_HOME/bin
Now close the terminal, re-open again and check whether the java installation is correct.
>$ java –version
This will show the details, if java is installed correct.
Now we are ready with java installed.
Step 3
Adding a user for using Hadoop
We have to create a separate user account for running Hadoop. This is recommended, because it isolates other softwares and other users on the same machine from hadoop installation.
>$ sudo addgroup hadoop >$ sudo adduser –ingroup hadoop user
Here we created a user “user” in a group “hadoop”.
Step 4
In the following steps, If you are not able to do sudo with user.
Then add user to sudoers group.
For that
>$ sudo nano /etc/sudoers
Then add the following
%user ALL= (ALL)ALL
This will give user the root privileges.
If you are not interested in giving root privileges, edit the line in the sudoers file as below
# Allow members of group sudo to execute any command %sudo ALL=(ALL:ALL) ALL
Step 5
Installing SSH server.
Hadoop requires SSH access to manage the nodes.
In case of multinode cluster, it is remote machines and local machine.
In single node cluster, SSH is needed to access the localhost for user user.
If ssh server is not installed, install it before going further.
Download the correct version (64bit or 32 bit) of open-ssh-server.
Here we are using 64 bit OS, So I downloaded open ssh server for 64 bit.
The download link is
The downloaded file may be a .deb file.
For installing a .deb file
>$ sudo chmod u+x <filename>.deb >$ sudo dpkg –I <filename>.deb
This will install the .deb file.
Step 6
Configuring SSH
Now we have SSH up and running.
As the first step, we have to generate an SSH key for the user
<div> user@ubuntu:~$ su - user user@ubuntu:~$ ssh-keygen -t rsa -P "" Generating public/private rsa key pair. Enter file in which to save the key (/home/user/.ssh/id_rsa): Created directory '/home/user/.ssh'. Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: 9d:47:ab:d7:22:54:f0:f9:b9:3b:64:93:12:75:81:27user@ubuntu The key’s randomart image is: [........] user@ubuntu:~$
Here it is needed to unlock the key without our interaction, so we are creating an RSA keypair with an empty password. This is done in the second line. If empty password is not given, we have to enter the password every time when Hadoop interacts with its nodes. This is not desirable, so we are giving empty password.
The next step is to enable SSH access to our local machine with the key created in the previous step.
user@ubuntu:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys </div>
The last step is to test SSH setup by connecting to our local machine with user. This step is necessary to save our local machine’s host key fingerprint to the useruser’sknown_hosts file.
user@ubuntu:~$ sshlocalhost 12.04.1 ... user@ubuntu:~$
Step 7
Disabling IPv6
There is no use in enabling IPv6 on our Ubuntu Box, because we are not connected to any IPv6 network. So we can disable IPv6. The performance may vary.
For disabling IPv6 on Ubuntu , go to
>$ cd /etc/
Open the file sysctl.conf
>$ sudo nano sysctl.conf
Add the following lines to the end of this file
#disable ipv6 net.ipv6.conf.all.disable_ipv6 = 1 net.ipv6.conf.default.disable_ipv6 = 1 net.ipv6.conf.lo.disable_ipv6 = 1
Reboot the machine to make the changes take effect
For checking whether IPv6 is enabled or not, we can use the following command.
>$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6
If the value is ‘0’ , IPv6 is enabled.
If it is ‘1’ , IPv6 is disabled.
We need the value to be ‘1’.
The requirements for installing Hadoop is ready. So we can start hadoop installation.
Step 8
Hadoop Installation
Right now the latest stable version of Hadoop available is hadoop 1.0.3.
So we are using this tar ball.
We create a directory named ‘utilities’ in user.
Practically, you can choose any directory. It will be good if you are keeping a good and uniform directory structure while installation. It will be good and when you deal with multinode clusters.
>$ cd utilities >$ sudo tar -xvf hadoop-1.0.3.tar.gz >$ sudo chown –R user:hadoop hadoop-1.0.3
Here the 2nd line will extract the tar ball.
The 3rd line will the permission(ownership)of hadoop-1.0.3 to user
Step 9
Setting HADOOP_HOME in $HOME/.bashrc
Add the following lines in the .bashrc file
# Set Hadoop_Home export HADOOP_HOME=/home/user/utilities/hadoop-1.0.3 # Adding bin/ directory to PATH export PATH=$PATH:$HADOOP_HOME/bin
Note: If you are editing this $HOME/.bashrc file, the user doing this only will get the benefit.
For making this affect globally to all users,
go to /etc/bash.bashrc file and do the same changes.
Thus JAVA_HOME and HADOOP_HOME will be available to all users.
Do the same procedure while setting java also.
Step 10
Configuring Hadoop
In hadoop, we can find three configuration files core-site.xml, mapred-site.xml, hdfs-site.xml.
If we open this files, the only thing we can see is an empty configuration tag <configuration></configuration>
What actually happening behind the curtain is that, hadoop assumes default value to a lot of properties. If we want to override that, we can edit these configuration files.
The default values are available in three files
core-default.xml, mapred-default.xml, hdfs-default.xml
These are available in the locations
utilities/hadoop-1.0.3/src/core, utilities/hadoop-1.0.3/src/mapred, utilities/hadoop-1.0.3/src/hdfs.
If we open these files, we can see all the default properties.</pre>
Setting JAVA_HOME for hadoop directly
Open hadoop-env.sh file, you can see a JAVA_HOME with a path.
The location of hadoop-env.sh file is
hadoop-1.0.3/conf/hadoop-env.sh
Edit that JAVA_HOME and give the correct path in which java is installed.
>$ sudo nano hadoop-1.0.3/conf/hadoop-env.sh
#The Java Implementation to use export JAVA_HOME=<path from root to java directory>
Editting the Configuration files
All these files are present in the directory
hadoop-1.0.3/conf/
Here we are configuring the directory where the hadoop stores its data files, the network ports is listens to…etc
By default Hadoop stores its local file system and HDFS in hadoop.tmp.dir .
Here we are using the directory /app/hadoop/tmp for storing temparory directories.
For that create a directory and set the ownership and permissions to user
>$ sudo mkdir –p /app/hadoop/tmp >$ sudo chownuser:hadoop /app/hadoop/tmp >$ sudo chmod 750 /app/hadoop/tmp
Here the first line will create the directory structure.
Second line will give the ownership of that directory to user
The third line will set the rwx permissions.
Setting the ownership and permission is very important, if you forget this, you will get into some exceptions while formatting the namenode.
1. Core-site.xml
Open the core-site.xml file, you can see empty configuration tags.
Add the following lines between the configuration tags.
<property> <name>hadoop.tmp.dir</name> <value>/app/hadoop/tmp</value> <description> A base for other temporary directories. </description> </property> <property> <name>fs.default.name</name> <value>hdfs://localhost:9000</value> <description>The name of the default file system.</description> </property>
2. Mapred-site.xml
In the mapred-site.xml add the following between the configuration tags.
<property> <name>mapred.job.tracker</name> <value>localhost:9001</value> <description> The host and port that the MapReduce job tracker runs </description> </property>
3. Hdfs-site.xml
In the hdfs-site.xml add the following between the configuration tags.
<property> <name>dfs.replication</name> <value>1</value> <description>Default block replication</description> </property>
Here we are giving replication as 1, because we have only one machine.
We can increase this as the number of nodes increases.
Step 11
Formatting the Hadoop Distributed File System via NameNode.
The first step for starting our Hadoop installation is to format the distributed file system. This should be done before first use. Be careful that, do not format an already running cluster, because all the data will be lost.
user@ubuntu:~$ $HADOOP_HOME/bin/hadoop namenode –format
The output will look like this
09/10/12 12:52:54 INFO namenode.NameNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = ubuntu/127.0.1.1 STARTUP_MSG: args = [-format] STARTUP_MSG: version = 0.20.2 STARTUP_MSG: build = -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010 ************************************************************/ 09/10/12 12:52:54 INFO namenode.FSNamesystem: fsOwner=user,hadoop 09/10/12 12:52:54 INFO namenode.FSNamesystem: supergroup=supergroup 09/10/12 12:52:54 INFO namenode.FSNamesystem: isPermissionEnabled=true 09/10/12 12:52:54 INFO common.Storage: Image file of size 96 saved in 0 seconds. 09/10/12 12:52:54 INFO common.Storage: Storage directory .../hadoop-user/dfs/name has been successfully formatted. 09/10/12 12:52:54 INFO namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1 ************************************************************/
Step 12
Starting Our single-node Cluster
Here we have only one node. So all the hadoop daemons are running on a single machine.
So we can start all the daemons by running a shell script.
user@ubuntu:~$ $HADOOP_HOME/bin/start-all.sh
This willstartup all the hadoop daemonsNamenode, Datanode, Jobtracker and Tasktracker on our machine.
The output when we run this is shown below.
user@ubuntu:/home/user/utilities/hadoop-1.0.3$ bin/start-all.sh startingnamenode, logging to /home/user/utilities/hadoop-1.0.3/bin/../logs/hadoop-user-namenode-ubuntu.out localhost: starting datanode, logging to home/user/utilities/hadoop-1.0.3/bin/../logs/hadoop-user-datanode-ubuntu.out localhost: starting secondarynamenode, logging to home/user/utilities/hadoop-1.0.3/bin/../logs/hadoop-user-secondarynamenode-ubuntu.out startingjobtracker, logging to home/user/utilities/hadoop-1.0.3/bin/../logs/hadoop-user-jobtracker-ubuntu.out localhost: starting tasktracker, logging to home/user/utilities/hadoop-1.0.3/bin/../logs/hadoop-user-tasktracker-ubuntu.out user@ubuntu$
You can check the process running on the by using jps.
user@ubuntu:/home/user/utilities/hadoop-1.0.3$ jps 1127 TaskTracker 2339 JobTracker 1943 DataNode 2098 SecondaryNameNode 2378 Jps 1455 NameNode
Note: If jps is not working, you can use another linux command.
ps –ef | grepuser
You can check for each daemon also
ps –ef | grep<daemonname>eg:namenode
Step 13
StoppingOur single-node Cluster
For stopping all the daemons running in the machine
Run the command
>$stop-all.sh
The output will be like this
user@ubuntu:~/utilities/hadoop-1.0.3$ bin/stop-all.sh stoppingjobtracker localhost: stopping tasktracker stoppingnamenode localhost: stopping datanode localhost: stopping secondarynamenode user@ubuntu:~/utilities/hadoop-1.0.3$
Then check with jps
>$jps 2378 Jps
Step 14
Testing the set up
Now our installation part is complete
The next step is to test the installed set up.
Restart the hadoop cluster again by using start-all.sh
Checking with HDFS
- Make a directory in hdfs
</pre> </li> </ol> hadoop fs –mkdir /user/user/trial
If it is success list the created directory.
hadoop fs –ls /
The output will be like this
drwxr-xr-x - usersupergroup 0 2012-10-10 18:08 /user/user/trial
If getting like this, the HDFS is working fine.
- Copy a file from local linux file system
hadoop fs –copyFromLocal utilities/hadoop-1.0.3/conf/core-site.xml /user/user/trial/
Check for the file in HDFS
hadoop fs –ls /user/user/trial/ -rw-r--r-- 1 usersupergroup 557 2012-10-10 18:20 /user/user/trial/core-site.xml
If the output is like this, it is success.
Checking with a MapReduce job
Mapreduce jars for testing are available with the hadoop itself.
So we can use that jar. No need to import another.
For checking with mapreduce, we can run a wordcountmapreduce job.
Go to $HADOOP_HOME
Then run
>$hadoop jar hadoop-examples-1.0.3.jar
This output will be like this.
The above shown are the programs that are contained inside that jar, we can choose any program.
Here we are going to run the wordcount process.
The input file using is the file that we already copied from local to HDFS.
Run the following commands for executing the wordcount
>$ hadoop jar hadoop-examples-1.0.3.jar wordcount user/user/trial/core-site.xml user/user/trial/output/ The output will be like this 12/10/10 18:42:30 INFO input.FileInputFormat: Total input paths to process : 1 12/10/10 18:42:30 INFO util.NativeCodeLoader: Loaded the native-hadoop library 12/10/10 18:42:30 WARN snappy.LoadSnappy: Snappy native library not loaded 12/10/10 18:42:31 INFO mapred.JobClient: Running job: job_201210041646_0003 12/10/10 18:42:32 INFO mapred.JobClient: map 0% reduce 0% 12/10/10 18:42:46 INFO mapred.JobClient: map 100% reduce 0% 12/10/10 18:42:58 INFO mapred.JobClient: map 100% reduce 100% 12/10/10 18:43:03 INFO mapred.JobClient: Job complete: job_201210041646_0003 12/10/10 18:43:03 INFO mapred.JobClient: Counters: 29 12/10/10 18:43:03 INFO mapred.JobClient: Job Counters 12/10/10 18:43:03 INFO mapred.JobClient: Launched reduce tasks=1 12/10/10 18:43:03 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=12386 12/10/10 18:43:03 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0 12/10/10 18:43:03 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0 12/10/10 18:43:03 INFO mapred.JobClient: Launched map tasks=1 12/10/10 18:43:03 INFO mapred.JobClient: Data-local map tasks=1 12/10/10 18:43:03 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=10083 12/10/10 18:43:03 INFO mapred.JobClient: File Output Format Counters 12/10/10 18:43:03 INFO mapred.JobClient: Bytes Written=617 12/10/10 18:43:03 INFO mapred.JobClient: FileSystemCounters 12/10/10 18:43:03 INFO mapred.JobClient: FILE_BYTES_READ=803 12/10/10 18:43:03 INFO mapred.JobClient: HDFS_BYTES_READ=688 12/10/10 18:43:03 INFO mapred.JobClient: FILE_BYTES_WRITTEN=44801 12/10/10 18:43:03 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=617 12/10/10 18:43:03 INFO mapred.JobClient: File Input Format Counters 12/10/10 18:43:03 INFO mapred.JobClient: Bytes Read=557 12/10/10 18:43:03 INFO mapred.JobClient: Map-Reduce Framework 12/10/10 18:43:03 INFO 
mapred.JobClient: Map output materialized bytes=803 12/10/10 18:43:03 INFO mapred.JobClient: Map input records=18 12/10/10 18:43:03 INFO mapred.JobClient: Reduce shuffle bytes=803 12/10/10 18:43:03 INFO mapred.JobClient: Spilled Records=90 12/10/10 18:43:03 INFO mapred.JobClient: Map output bytes=746 12/10/10 18:43:03 INFO mapred.JobClient: CPU time spent (ms)=3320 12/10/10 18:43:03 INFO mapred.JobClient: Total committed heap usage (bytes)=233635840 12/10/10 18:43:03 INFO mapred.JobClient: Combine input records=48 12/10/10 18:43:03 INFO mapred.JobClient: SPLIT_RAW_BYTES=131 12/10/10 18:43:03 INFO mapred.JobClient: Reduce input records=45 12/10/10 18:43:03 INFO mapred.JobClient: Reduce input groups=45 12/10/10 18:43:03 INFO mapred.JobClient: Combine output records=45 12/10/10 18:43:03 INFO mapred.JobClient: Physical memory (bytes) snapshot=261115904 12/10/10 18:43:03 INFO mapred.JobClient: Reduce output records=45 12/10/10 18:43:03 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2876592128 12/10/10 18:43:03 INFO mapred.JobClient: Map output records=48 user@ubuntu:~/utilities/hadoop-1.0.3$
If the program executed successfully, the output will be in
user/user/trial/output/part-r-00000 file in hdfs
Check the output
>$hadoop fs –cat user/user/trial/output/part-r-00000
If output is coming, then our installation is success with mapreduce.
Thus we checked our installation.
So our single node hadoop cluster is ready
References
- For downloading hadoop tar ball.
- For downloading open-ssh server
- For downloading jdk 1.6 | https://amalgjose.com/2012/12/ | CC-MAIN-2017-47 | refinedweb | 3,849 | 60.51 |
This project utilizes the AmazonDRS Arduino library to initiate frictionless purchases on Amazon by scanning NFC tags.
Looking for some help getting started with AmazonDRS Dash Replenishment for Arduino and still need to get your Amazon accounts setup, authorize 'Log in with Amazon', exchange tokens, etc? Hop on over to the Getting Started guide at the AmazonDRS GitHub repo and try out the Amazon Dash Button for Arduino project.
1/12/2017 - Coming Soon!
I'm working on creating an Arduino shield(aimed at the Due and Zero) that will incorporate the ATWINC1500 WiFi and the PN532 transceiver for NFC all in one shield sized package. Stay tuned for more updates!2/26/2017 - The Shields Are In!
Check out the video below for a complete demo of Replenisher! I The shield combines NFC, WiFi, EEPROM, and an LCD. The open source design and libraries for interacting with the shield are all available here! Big thanks to Andium(a new IoT startup I work at) for working with me to spin off a small batch of these shields to make available to the community! You can grab one here. I created a project hub on hackster where I'll start posting more projects and ideas that spin off from the new shield.Motivation
The Amazon Dash Button for Arduino is cool, but once the excitement of interacting with the API wears off there really isn't anything too mind blowing about hitting a button to place an order. So I decided to create this project to show a different way you might want to initiate frictionless Dash Replenishment around your home or office.
The idea is to use NFC tags to create a more fluid replenishment process for products that might not necessarily pair with a "smart" device. A smart fridge that orders groceries is great, but does your T.V. remote really need a WiFi connection to order new batteries? Utilizing the unique product identifiers Amazon uses called ASINs we can burn an identifier into each RFID tag. The sticker tags could be placed anywhere! Manufacturers could even print them onto their packaging or place them inside their devices. Since RFID tags are passively powered they require no batteries, no WiFi, no time consuming setup. The only device we need to connect to WiFi is the NFC scanner that we're about to create!Getting StartedHardware
First lets make sure you have all of the necessary hardware for the project. I've used the Arduino MKR1000 but any WiFi 101 enabled Arduino should work.
In order to scan NFC tags we're also going to need an NFC antenna and transceiver. I've decided to go with Adafruit's PN532 NFC shield with a built-in antenna. The shield is designed for the typical Arduino Uno/Due/Zero/Etc shield form factor but there are breakouts on the board for SPI so we can easily wire up our MKR1000. In addition to headers that ship with the shield and the Arduino we'll also need, jumper cables, and some NFC tags. Specifically Ntag203 Mifare Ultra-light tags(I chose the round stickers, you'll see why in a minute!) This specific flavor or RFID tags are compatible with Adafruit's shield as well as most Android phones and tablets I've tried.
If you haven't already, solder the two rows of header pins to your MKR1000 and solder the provided shield headers to your PN532 shield. We'll be using SPI so you only need to solder the NFC shield headers to the strip labeled for power (IOr, RST, 3v, 5v, GND...) and the SPI header(MOSI, SS, MISO...etc). You'll also want to make sure you bridge 'SEL 1' and 'SEL 0' with a blob of solder to configure the shield for SPI. If you're a bit lost on setting up the shield check out Adafruit's getting started guide for the PN532 shield.Let's wire up the shield!
Outlined in this Fritzing diagram are all the connections you'll need to make between the shield and the Arduino. Take note below of SEL 1 and SEl 0. Those are the copper contacts that need to be bridged together, each witch a blob of solder, so we can use SPI to communicate with the shield instead of I2C.
Here's a quick rundown on all the connections made above. Arduino on the left ':' shield on the right....
- 5v : 5v
- VCC : 3v
- Gnd : Gnd
- MISO : MISO
- SCK : SCK
- MOSI : MOSI
- DIO 7 : RST
- DIO 6 : SS
Great! Now that we've got the hardware we can get some practice using the shield and NFC by configuring our registration and product RFID tags.Creating Product Specific RFID Tags
Each product we want to seamlessly replenish using our 'Replenisher' will need to be burned into it's associated RFID sticker/tag. I've chosen these 5 products(ASINs) while creating my Dash Replenishment Device.
- Slot 1: AAA Batteries(B00NTCHCU2)
- Slot 2: Coffee(B00PQ4SLCY)
- Slot 3: Plastic Cutlery(B010RLC7P2)
- Slot 4: Plastic Cups(B014VIECFK)
- Slot 5: Notebook(B01DN8TB5U)
At this time creating slots, Dash Replenishment Devices and adding ASINs are tasks aimed at the device manufacturer or developer. In order for this tool to gain widespread adoption I'm imaging Amazon opening up slot creation and ASIN selection to the end user.
In anticipation of the new Arduino shield I'm creating for 'Replenisher' (I should hopefully receive by the end of the month!) I'm working on a sort of mash-up of my favorite PN532 libraries I've found on GitHub. Consider it a "Best of" consisting of Adafruit and Seed Studio's PN532 libraries, as well as Don Coleman's NDEF library. The library will have a handful of example sketches and will utilize hardware interrupts by default (IRQ pull-ups are already wired on the shield). As soon as I get the shield and test I'll release that library and more detailed instructions working with NFC. So for now I'll keep these steps sort of high level. If you'd like to give this a go now you should still be able to perform what you need to do using Adafruit's library. Just double check you've used the correct SS and RST pins.
- Format each(5) Ntag203 Mifare Ultra-light tag to support NDEF.
- Add a 'text record' containing the corresponding Slot Id to each tag.
- Print some sticker labels for the tags!
This process changes the structure of the RFID tag to support the NDEF standard. This way our tags are bit more universal and can be easily deciphered by any reader that supports NDEF. This will also standardize the terminology I'll use to refer to the data being passed around.Add a Text Record
NDEF tags can contain messages, and messages contain one or more records. Our text record contains the 'Slot Id' we pulled from our Dash Replenishment Device. You can find these in your Dash Replenishment Developer Portal, just edit your newly created device.
NdefMessage message = NdefMessage(); message.addTextRecord("0a6028b7-7509-4b91-b97e-3e291f386461"); //slotId write(message);
Repeat this process for all 5 tags! If you read one of the tags the message data contained on the tag should look like this...
NDEF Message 1 record, 43 bytes NDEF Record TNF 0x1 Well Known Type Length 0x1 1 Payload Length 0x27 39 Type 54 T Payload 02 65 6E 30 61 35 30 32 38 62 37 2D 37 36 30 39 2D 34 62 39 31 2D 62 39 37 65 2D 33 65 32 39 31 66 33 38 36 34 37 31 .en0a6028b7-7509-4b91-b97e-3e291f386461 Record is 43 bytes
The '.en' is the encoding type for the for the text record 'english'. To really do this properly we could have added a mime media record with a type of "application/json" and stored something like...
{"slotId":0a6028b7-7509-4b91-b97e-3e291f386461}
on the tag but this is just a proof of concept for now so the text record will do just fine.Print Labels
The last step is to print some labels with images of our products. This way we can easily identify the tags and what we'll be replenishing when we scan them! I picked up some Avery 6450 labels and added images using a template in MS Word.
Then just place the tags wherever they make sense. I placed the cup on the cupboard door, stuck the batteries to the back of the remote, the coffee on the side of the coffee maker, the cutlery on the shelf and the notebook to the inside cover. When it's almost time to replace these items just scan and... voilá order placed!
There's one last tag to create! That's our device registration tag. This tag will be placed on the side of 'Replenisher' the scanning device. So when you first unbox the device and are ready to authorize 'Log in with Amazon' and manage your product selections you can do this without even going to the computer!
Just take your 'Login with Amazon' consent request url...
Shorten it using a tool like bit.ly or Google link shortener...
And write a new uri record to an RFID tag!
NdefMessage message = NdefMessage(); message. write(message);
Now when the user scans the device with their NFC capable phone or tablet they can register their products and slots and authorize replenisher right then and there.
Next lets take a closer look at the amazonDashNfc sketch and see how it all works!amazonDashNfc The Arduino Sketch for 'Replenisher'
#include "AmazonDRS.h" #include "AnduinoNFC.h" AmazonDRS DRS = AmazonDRS(); AnduinoNFC NFC = AnduinoNFC(); //WiFi creds ----------------------------------------------------------------- char ssid[] = ""; // your network SSID (name) char pass[] = ""; // your network password (use for WPA, or use as key for WEP) //---------------------------------------------------------------------------- void setup() { Serial.begin(115200); while (!Serial) { ; // wait for serial port to connect. Needed for native USB port only } //Setup NFC NFC.begin(); //Start up DRS DRS.begin(ssid,pass); //initialize slots DRS.retrieveSubscriptionInfo(); //check slot statuses } void loop() { //scan nfc tag containing slotId //if scanned id matches a valid slot and the slot is available //request replenishment for the supplied slot if (NFC.packetReady()) { NfcTag tag = NFC.read(); //attempt to read the RFID tag tag.print(); //and print the results to the terminal NdefMessage message = tag.getNdefMessage(); NdefRecord record = message.getRecord(0); //grab the bits that contain the DRS Slot ID int payloadLength = record.getPayloadLength(); byte payloadBytes[payloadLength]; record.getPayload(payloadBytes); String payloadString = ""; //store the RFID msg bits in a String for comparison for(int i=3; i<payloadLength; i++) { payloadString += (char)payloadBytes[i]; //load up the cmp string with payload less the encoding } if(slotId0 == payloadString) //eventually if(slotId[i] has a match and slotStatus[i] is available { //we have a match! replenish the products associated with that slot! DRS.requestReplenishmentForSlot(payloadString); } else { Serial.print("Sorry, slot "); Serial.print(payloadString); Serial.println(" is not available at this time"); } } }
Pretty short right? The amazonDashNfc sketch is fairly simple! Essentially the 'Replenisher' device boots up, connects to WiFi, checks the subscription info, and then waits for a tag to be scanned!
DRS.retrieveSubscriptionInfo(); //check slot statuses
Upon startup this method contacts the Dash API endpoint /subscriptionInfo and grabs the latest slot status information. This will pull in and locally save all slot id's and whether or not they were authorized for replenishment. When a tag is scanned we check for a matching slot id and verify its status. If there's a match we...
DRS.requestReplenishmentForSlot(payloadString);
request replenishment! The rest of the sketch just waits for a tag to come within range of our devices RFID antenna and then deals with parsing the payload of the NDEF record.
In it's current form all confirmations and message responses we receive back from the Dash API are printed to the serial terminal. This works fine for testing and development but to make this a truly stand alone system I'm working on developing some alternative notifications... maybe a piezo speaker with a chime or an LCD screen.
Stay tuned for more updates to this project as I make changes to incorporate the WiFi/NFC shield, a product case, and other improvements! Thanks for taking a look! | https://amazonwebservices.hackster.io/bcarbs/amazondrs-nfc-replenisher-98608c | CC-MAIN-2022-27 | refinedweb | 2,048 | 72.56 |
GitHub Readme.md
This is a Heroku buildpack for GeoDjango apps. It extends the original Python buildpack by adding GEOS, Proj.4 and GDAL, per the GeoDjango installation instructions.
This is a Heroku buildpack for Python apps.
Example usage:
$ heroku create --buildpack git://github.com/dulaccc/heroku-buildpack-geodjango.git $ git push heroku master ... -----> Python app detected -----> Installing runtime (python-2.7.8) -----> Checking for GDAL Fetching and installing GEOS 3.3.2 Caching ... GEOS installed -----> Checking for Proj.4 Fetching and installing Proj.4 4.7.0 Installing ... Proj.4 installed -----> Checking for GDAL Fetching and installing GDAL 1.8.1 Installing ... GDAL installed ----->/dulaccc/heroku-buildpack-geodjango.git
The buildpack will detect your app as Python if it has the file requirements.txt in the root.
It will use Pip to install your dependencies, vendoring a copy of the Python runtime into your slug.
Then you need to set two Django settings in order for
GEOS and
GDAL to work properly.
settings.py
from os import environ GEOS_LIBRARY_PATH = environ.get('GEOS_LIBRARY_PATH') GDAL_LIBRARY_PATH = environ.get('GDAL_LIBRARY_PATH')
All libraries are stored in the directory
/app/.geodjango.
You can also provide arbitrary releases Python with a
runtime.txt file.
$ cat runtime.txt python-3.4.2
Runtime options include:
Other unsupported runtimes are available as well.
Copy the snippet above into CLI. | https://elements.heroku.com/buildpacks/acechase/heroku-buildpack-geodjango | CC-MAIN-2018-30 | refinedweb | 221 | 54.49 |
public class RGBImage1 extends Image1 {
private byte rgbData[];
}
rgbData
RGBImage1 extends Image1 and introduces field rgbData (and maybe some methods the example doesn (see Figure 3). It can be a difficult problem to find because the finalizer might be "hidden" in a deep class hierarchy.
RGBImage1
Image1
finalize()
One way to avoid this problem is to re-arrange the code so that it uses the "contains," instead of the "extends," pattern, as follows:
public class RGBImage2 {
private Image1 img;
private byte rgbData[];
public void dispose() {
img.dispose();
}
} anywhere else), and will queue up only the Image1 instance for finalization (see Figure 4). Since re-arrange your code in the manner described above, however. In that case, as a user of the class, you will have to do a little more work to ensure that its instances do not hold on to more space than necessary when they are being finalized. The following code illustrates how:
public class RGBImage3 extends Image1 {
private byte rgbData[];
public void dispose() {
super.dispose();
rgbData = null;
}
}
RGBImage3
RGBImage3 is identical to RGBImage1 but with the addition of the dispose() method, which nulls the rgbData field. You are required to explicitly call dispose() after using an RGBImage3 instance to ensure that the rgbData array is promptly reclaimed (see Figure 5). I recommend explicit nulling of fields on very few occasions; this is one of them.(); }
}
Image2
NativeImage2
Image2 (see Figure 6). Class NativeImage2 is declared to be final so that users cannot subclass it and re-introduce the memory-retention problems described in the previous section.
nativeImg have also retained the corresponding Image2 instance, which is precisely what you are trying to avoid. Assume, however, that the NativeImage2 class will be accessible only from the Image2 class. This is the reason why it has no public methods (its dispose() method, as well as the class itself, is package-private).
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/Java/Article/30192/0/page/2 | CC-MAIN-2015-48 | refinedweb | 342 | 53.61 |
I am looking for a way to communicate (back and forth) between Jint and C#.
Is there a way? I have no problem of running JavaScripts in Jint after loading them to the engine but I still have an issue getting the callbacks on the other hand - from the JavaScript back to C#(maybe using some kind of ObjectForScripting? or other predefined settings?)
Thanks
In C#, provide a class with a method you want to run.
public class JavaScriptHelper { public string Method(string input) { return String.Concat("Hi", input); } }
Then pass the class to the engine.
var engine = new Engine(); engine.SetValue("helper", new JavaScriptHelper()); var source = @" var result = helper.Method('Hello');" engine.Run(source); | http://m.dlxedu.com/m/askdetail/3/ab10c5f889832bc3f09e1b8a16cc85aa.html | CC-MAIN-2018-22 | refinedweb | 115 | 68.06 |
Can Your Visual Studio Do That?
A significant limitation of Visual Studio is its inability to handle more than one language in a single project. This is because of the way VB and C# compilers are designed;although both compilers produce IL binaries, their output is limited to standalone executables and libraries, rather than object modules that can later be consolidated into an executable. Consequently, Visual Studio has no built-in mechanism to mix multiple CLR languages into a single executable.
Another limitation of Visual Studio is that it does not support modules at all (generated by the /target:module option in VB and C# compilers). However, with the help of make, you can bypass both of these shortcomings.
Say you have a project, sample.exe that consists of three source files in three different languages: C#, VB, and IL assembly. With the standard mechanisms of Visual Studio .NET, it is not possible to manage such a diverse project. But by using the make utility and doing a little bit of homework, you can use the VS.NET development environment to deal with this exotic project.
Start by creating a new project. Visual Studio supports make only within C++ projects, so your only recourse is to create a new C++ project. Name your project " sample," and click on Finish.
Next, fill in your application settings as shown. As mentioned earlier, you have to provide handlers for three distinct cases: Build, rebuild, and clean. Fill in boxes as in Figure 3.
Add the source files to the project by right clicking on the Source Files folder in Solution Explorer, and selecting Add New Item from the menu. In the dialog box, replace <Enter name> with VBClass.vb, and click Open. In the text window that appears, paste in the following code:
Public Class VBClass
Public Sub Method
System.Console.WriteLine("Hello from VBClass")
End Sub
End Class
public class CSClass {
public void Method() {
System.Console.WriteLine("Hello from CSClass");
}
}
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/cplus/Article/9766/0/page/2 | CC-MAIN-2016-26 | refinedweb | 358 | 66.94 |
One of the most popular Node.js web frameworks is Express.js. It enables you to rapidly and easily create APIs and other web applications. However, constructing a server is only half the battle; the other half is keeping it running. You should read the logs to have a solid grasp of what's going on with your application.
However, if not done correctly, logging can be a headache (as in searching through thousands of not-so-important log entries in search of one line with an actual meaningful error message).
There are two ways to build a web server using Node.js: the hard way and the easy way (Express.js). The same can be said about logging in to Express.js, which may be done in either a difficult or simple way (using Morgan).
You'll discover what Morgan is and how to utilise it with Express.js in this tutorial.
- Introduction
- Getting Set Up
- Getting Started with Morgan NPM
- Log Output Format
- Redirecting the Log Output
- Logging into Multiple Destinations
Introduction
Morgan is a middleware for node.js that logs HTTP requests and is commonly used in Express projects. Express is a node.js routing and middleware web framework that is quick, unprejudiced, and simple.
An Express application is nothing more than a collection of middleware function calls. However, before going into Morgan, we must first comprehend what middleware functions are.
The middleware pattern is just a list of handler methods, each of which calls the next in line once it has done its work. This list is used by Express to pre-process requests with whatever logic you wish to include in your application. Authentication validations, request structure validation, adding new attributes, and many others are examples.
Every function you add to the list will be called with the request object, the response object if the function needs to break the regular flow, and a callback function to ensure that the next function in line is called.
Morgan npm provides exactly this, as you'll see, with a middleware function that will pick up the request object and record everything you need, such as the method used, the origin IP, the requested URL, and so on.
The request object (req), the response object (res), and the next middleware function in the application's request-response cycle (usually designated by a variable named next()) are all available to middleware functions.
Consider the following three function arguments:
- Request Object (req)
The request query string, parameters, body, HTTP headers, and so on are all properties of the HTTP request.
- Response Object (res)
When an HTTP request is received, an Express application sends an HTTP response.
- Next Middleware Function
Next() is executed if the current middleware function does not complete the request-response cycle.
In reality, multiple middleware functions can be used at the same time. When there are multiple, they are executed one by one in the order in which they were used in the express.
Why Should You Use Morgan?
Morgan simplifies the work of logging HTTP requests to and from your application in a single statement. Normally, developers must write all of the logging code by hand. They must tell Node.js/Express.js what to store, how to save, and where to save it.
Morgan takes care of it for you. It gathers logs from your server and prepares them for reading. It also has a few predetermined defaults built-in, saving you the time and effort of having to set up all of the logging yourself.
It can be highly useful when launching a new project, but it's also very powerful, thus it's also suitable for big projects.
Getting Set Up
Morgan is installed via NPM, just like any other Node.js module:
npm install morgan
After that, you must tell Node.js to include Morgan in your app:
const morgan = require('morgan');
That concludes the fundamentals. Morgan is now up and running and ready to use.
Getting Started with Morgan NPM
This isn't a difficult module to use; it doesn't have a lot of features or configuration choices, but it does one thing really effectively. It gives you a middleware function for any framework that supports that pattern (Express or otherwise).
Here's how to incorporate it into your project:
const express = require('express') const morgan = require('morgan') const app = express() const port = 8080 app.use(morgan('combined')) app.get('/', function(req, res) { res.send('Hello, World!!!') }) app.listen(port, () => { console.log(`Sample app listening at{port}`) })
The above code demonstrates how simple it is to use Morgan: simply need it and use the user function to add it as a middleware.
That's all you'll need to get started logging; in fact, the code above prints the following line on every request:"
It's worth noting that Morgan isn't given much in the way of setup; it was built to have some extremely helpful defaults. In reality, you won't be able to do anything with this module except tweaking its output and the logs' destination.
Log Output Format
Morgan's logs may be customised to include exactly the information you require, which is one of its most useful features. You can format your logs in one of two ways:
- Pre-defined Log
This module already has a simple pre-configured set of items to log; all you have to do now is choose the combination that best meets your needs.
- Manually by using Tokens
You can also easily build new ones if the pre-defined ones aren't enough.
#1 Pre-defined Log
There are five predefined formats that you can utilise to quickly obtain the information you require. They are as follows:
- combined - This sets your logs to the Apache standard combined format
- common - Refers to the Apache common format standard
- dev - A log format that is colour-coded (based on request status)
- short - Less than the normal format, with only a few items you'd expect to see in a request logline
- tiny - Even less, simply the reaction time and a few extras
If you want to use the format function, you'll need three arguments: tokens, req, and res. The HTTP request is req, and the HTTP response is res. A token is an object that contains all declared tokens. The function should return a string that will be the logline, or undefined/null if you don't want to log anything.
Let's have a look at these three possibilities:
Using a predefined format string:
app.use(morgan('tiny'))
Using a format string of predefined tokens:
app.use(morgan(':method :url :status :res[content-length] - :response-time ms'))
It's worth noting that predefined format strings can produce the same results as predefined tokens. Examine what they have to offer. Morgan's GitHub site and the Express' documentation for Morgan middleware both have a list of all their predefined format strings and tokens.
When utilising predefined tokens, keep in mind that they must always be declared as strings, with a colon before the token's name :method.
Using a custom format function
app.use(morgan((tokens, req, res) => { return [ tokens.method(req, res), tokens.url(req, res), tokens.status(req, res), tokens.res(req, res, 'content-length'), '-', tokens['response-time'](req, res), 'ms' ].join(' ') }))
Returning to the previous example, here's how the module logs the same request in several formats:
The output of ‘Combined’ format:"
The output of ‘Dev’ format:
49.207.184.55 - GET / 304 556.460 ms - -
The output of ‘Tiny’ format:
GET / 304 - - 545.730 ms
These are quite well-formatted, and if you don't have any unique requirements, they'll suffice. However, if you need more granular control over the format of your loglines (for example, to add extra information that isn't readily available, or to arrange them in a more human-readable way), you can use tokens to add the information you need.
#2 Log Tokens
Create your own custom tokens and logging format as another alternative. Morgan has complete access to the contents of an HTTP request and response. This means that even if your application uses custom HTTP headers, Morgan can still log them.
You must generate your own tokens if you wish to customise your own middleware routines. Simply call morgan.token() with a name and a callback function to create a token. A string value is expected to be returned by this callback function.
Tokens are basic placeholders that you can use in a middleware format string. The predefined formats effectively perform the same thing, but you can mix and match any of the 13 tokens to get the exact logline you want.
Morgan npm gives you tokens like the client's user agent, the requested url, and the response time, among other things. To receive a complete list of tokens, see the whole documentation.
Let's imagine your application generates a custom HTTP header called "user-type," and you want to log the content of this header. To accomplish this, take these steps:
morgan.token('user-type', function(req, res) { return req.headers['user-type'] })
The preceding line generates a new custom token, which you can use in your Morgan log format by adding :user-type.
app.use(morgan(':method :url :status :user-type'));
Output:
Server listening on port :8080 GET / 200 admin
What if these pre-defined tokens aren't enough?
You can use the same approach, but instead of supplying a string, you can send a function to which you can add as much logic as you need.
Directly inserting a token for a certain header, for example, would look like this:
morgan.token("host", function(req, res) { return req.headers['host'] })
What if you wanted to send it back to them in JSON format?
morgan.token("json", function(req, res) { return JSON.stringify({ url: req.url, method: req.method, httpVersion: req.httpVersion }) })
Because the token must be a string, remember to call the stringify method. The following is the output of a token like this:
{ "url":"/", "method":"GET", "httpVersion":"1.1" }
Redirecting the Log Output
We haven't talked about the logs' output location yet, but by default, the logs are written to standard output (which is usually your terminal window). Given that distributed architectures are now the norm, logging into the standard output isn't particularly useful. In reality, it's comparable to using Morgan instead of console.log.
Morgan npm, fortunately, allows you to replace the logs' output location by overwriting the stream that was used to write them. This, however, will require a fundamental understanding of how Node.js Streams function; after all, this isn't just about specifying a destination path, but also about writing the entire output procedure.
The middleware method returned by the module allows an optional second parameter to define the stream to utilise in order to accomplish this.
By creating a new Stream object and providing it to the middleware, you may route the logger's output to a single file, as shown below:
let logStream = fs.createWriteStream(path.join(_dirname, ‘file.log’), { flags: ‘a’ }) // setup the logger app.use(morgan('update', { stream: logStream }))
The concept of allowing you direct access to the output stream, while low-level, allows developers a lot of flexibility. You may or may not be familiar with Node.js's Stream object, but it's a standard interface that any module that utilises it must implement; in other words, everyone uses streams in the same way, so you can rest assured that it will meet your needs.
There are also modules that provide stream-compatible interfaces to well-known storage, such as Mongoose-Morgan, which allows you to directly stream Mongoose logs into MongoDB. If you can't locate a morgan-compatible module, develop a function that returns a writable stream and transmits the data where you need it.
The example below demonstrates how simple it is to build a writable stream that can be used with Morgan:
const express = require('express') const morgan = require('morgan') const Writable = require("stream").Writable const app = express() const port = 8080 let logStream = fs.createWriteStream(path.join(_dirname, 'file.log'), { flags: 'a' }) // setup the logger app.use(morgan('update', { stream: logStream }))(`Sample app listening at{port}`) })
The code above, obviously, does nothing more than transmit data to the terminal window, as Morgan npm does by default, but you get the idea. Instead of console.log, you can use S3 compatible code or an HTTP request to transmit the log to an ELK instance that is completely controlled. All of them are real and viable possibilities, due to the stream access provided by the module.
Logging into Multiple Destinations
Finally, the skip option – another attribute of the optional second argument – is a handy little technique. You can use it to create a function that tells the logger which events to ignore and which to log.
Although the following example focuses on one use case, having a higher level of verbosity in development environments versus merely logging truly vital information on production environments is another example.
const express = require('express') const morgan = require('morgan') const app = express() const port = 8080 // Skip requests that aren't for the homepage function isNotHomePage(req, res) { return (req.path !== "/") } app.use(morgan('combined', { skip: isNotHomePage })) app.get('/', function(req, res) { res.send('Welcome Home!') }) app.get('/hello', function(req, res) { res.send('Hello, World!!!') }) app.listen(port, () => { console.log(`Sample app listening at{port}`) })
It's very basic; the function receives both the request and response objects, allowing you to select whether or not to log the request based on that information. The event will be skipped if the function returns TRUE; otherwise, the logger will handle it.
It doesn't accomplish much on its own, but if you start adding loggers and different skipping criteria, you can design logic that sends log lines to different destinations depending on your needs.
The following code demonstrates how to leverage almost everything we've studied so far to develop a logging mechanism in Morgan that saves information about unsuccessful requests to a file while writing information about successful requests('Welcome Home!') }) app.get('/hello', function(req, res) { res.send('Hello, World!!!') }) app.listen(port, () => { console.log(`Sample app listening at{port}`) })
The skip logic is the key here, as it allows us to separate the data flow into two independent middleware logs based on the status code received (in this example).
The two streams in the following section of the code describe where to save each logger's output as well as the format. Although the format for all circumstances is the same in this example, you could easily use different formats for each case.
Finally!!!
We went over how to install Morgan, how to use the built-in presets, and how to make your own in this post. To get started with Morgan, you'll usually just need to add two or three lines of code.
Morgan is a powerful tool that allows you to generate unique logging formats for your Express.js applications. Morgan can be used for both small projects that require quick and easy logging and larger applications that require specialised solutions.. | https://www.atatus.com/blog/a-beginners-guide-to-morgan-npm-logger/ | CC-MAIN-2022-05 | refinedweb | 2,541 | 54.52 |
A Scheme Tutor program, written by Ms. Sowmya Ramachandran and Gordon Novak, is available on the computers in the CS 307 labs. The program files can also be downloaded from the FTP Directory for Software and Scheme Tutor . You should download all the files in the directory /tutor/ to the directory /Plt/ where DrScheme resides.
The purpose of the tutor is to provide practice with some of the basic Scheme functions. Most of the problems presented by the tutor are questions from previous exams, so running the tutorials will help you to make a better grade.
The tutor provides specific instruction for some of the more common errors. The program also has some tutorial text that can be displayed at any time.
To satisfy the assignments using the tutor, run the assigned lesson to completion, then print the interaction screen and hand it in to your discussion section TA.
To run the Scheme Tutor,
If using MacGambit, load the file tutorial.scm .
(In MacGambit, remember that every answer given to the tutorial must be terminated by clover-return (hold down the ``clover'' key beside the space bar while pressing return). If you forget and hit return by itself (or if the computer just sits there and does not respond), do a clover-return to make it accept the answer.)
When the program presents a problem, you can give an answer; you can also give other commands: | http://www.cs.utexas.edu/~novak/cs307tutor.html | CC-MAIN-2013-20 | refinedweb | 236 | 61.77 |
I had to find somewhere through necessity and wanted value and a little comfort too. My expectations were low, bit from the minute I turned off the A664, Manchester Road I was in another world! The weather did help, being a rare evening with brilliant sunshine, but the grounds were immaculate and inviting and Reception a delight, with home-made shortbeads to nibble at one end of the counter as I was invited to check in.
Free wifi and real ale on draught were very welcome. The room we well appointed, with more biscuits to go with coffee as I got ready to enjoy the complementary spa and very well equiped fitness centre. In the event I checked out the facilities, and took a long walk to a very pleasant pub in Middleton some miles away, and The Ship by the canal on my return (A real 'locals' pub...go on, try it!).
My first meeting the following day, after an excellent nights sleep, was in the bar area, overlooking fine views to the hills beyond. The staff were attentive and provided all we required. When time came to check out, again, the bill was a very pleasant suprise!
The meeting went so well I have to go back to the area soon. I know where I shall be staying...and hoping for rain, so I can sample the Spa and Fintness Centre!
- ACCOR, Expedia, Hotels.com, Orbitz, Cancelon, Travelocity, | https://www.tripadvisor.com/ShowUserReviews-g580426-d522555-r163610595-Mercure_Manchester_Norton_Grange_Hotel_and_Spa-Rochdale_Greater_Manchester_Englan.html | CC-MAIN-2017-43 | refinedweb | 240 | 72.46 |
On Tue, Nov 27, 2001 at 03:45:10PM +0100, Russell Coker wrote: >. If you do such modifications to the upstream source on otherwise portable packages, please be sure to guard it in appropriate #ifdef's, usually #ifdef __linux__ ... #endif I don't know about build dependencies and stuff, but the same holds true. Please make sure if you modify otherwise portable packages that we don't take notice of the changes next time we try to compile it on GNU/Hurd. Thanks, Marcus -- `Rhubarb is no Egyptian god.' Debian brinkmd@debian.org Marcus Brinkmann GNU marcus@gnu.org Marcus.Brinkmann@ruhr-uni-bochum.de | https://lists.debian.org/debian-devel/2001/12/msg00522.html | CC-MAIN-2017-43 | refinedweb | 105 | 64.71 |
A group blog from members of the VB team
While most servicing releases do not include new functionality, Visual Studio 2010 SP1 introduced an important new compiler feature that enables Visual Basic to target new platforms that were not previously supported. This was mentioned in some of the initial SP1 blog posts, such as Jason Zander's blog.
This is a strategic investment by Microsoft in the future of VB, giving the language increased agility to support new platforms as they emerge.
This blog provides more information about the feature; let me know if you have more questions.
WHAT IT IS?
The new command line option /vbruntime* (with an asterisk) will embed a reduced version of the Visual Basic runtime into the compiled assembly and therefore eliminate the dependency on the VB Runtime assembly since this assembly does not ship on all .Net platforms such as Windows Phone 7 and XNA.
The feature can be used from the VBC command line compiler or by adding an entry <VBRuntime>Embed</VBRuntime> into the .vbproj file.
In general, its intended use is only for specific project templates that target platforms that don’t ship with a VB runtime.
WHEN SHOULD I USE IT?
The simple answer is that you should never need to use it directly. The feature has been implemented to allow Microsoft partner teams to create Visual Basic project templates for platforms that previously didn't support VB. When such VB project templates eventually become available, you as a VB developer will be able to do File > New Project for the new project types, and /vbruntime* will be used under the hood.
WHAT’S INCLUDED?
The intention is to provide an optimal VB experience for .NET VB users while avoiding the need to deploy the full VB runtime; thus the embedded functionality includes:
· Conversion
· VB specific attributes
· Support for various VB language features such as error handling (.NET style), For Each loops, and string comparison
· A few useful constants and VB functions such as Chr and Asc
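A minimal sketch of code that stays within the embedded subset (the module name and values are illustrative, not from the product documentation):

```vb
Imports System

Module EmbeddedSubsetDemo
    Sub Main()
        ' Chr and Asc are among the VB functions included in the embedded runtime
        Dim code As Integer = Asc("A"c)     ' 65
        Dim letter As Char = Chr(code)      ' "A"c

        ' For Each and VB-style string comparison are supported
        For Each name As String In New String() {"Ada", "Grace"}
            If name = "Ada" Then
                Console.WriteLine(letter & ": " & name)
            End If
        Next

        ' Conversion with .NET-style (Try/Catch) error handling is supported
        Try
            Dim n As Integer = CInt("not a number")
        Catch ex As InvalidCastException
            Console.WriteLine("Conversion failed")
        End Try
    End Sub
End Module
```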
WHAT ARE ITS LIMITATIONS?
In order to avoid bloating assemblies with functionality that either is not used or no used much, some of the functionality provided in this assembly on the currently supported framework has been omitted.
· Legacy VB functions, such as Left and Mid
· Legacy Like operator
· Old-style VB "On Error Goto" error handling
· Late Binding
· Much of the My Functionality
Many of these limitations do not apply to developers writing VB code using .NET idioms. Those trying to port existing legacy code over may need to do a bit of rewriting to use .NET constructs such as the Substring function and Try Catch exception handling, and to use framework functions rather than the helper-class functionality provided in the My namespace.
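For example, a hypothetical legacy snippet and its .NET-idiom equivalent might look like this:

```vb
Imports System

Module PortingDemo
    Sub Main()
        Dim text As String = "embedded"

        ' Legacy style (omitted from the embedded runtime):
        '   Dim part As String = Mid(text, 2, 3)
        '   On Error GoTo Handler

        ' .NET style, which the embedded runtime supports:
        Dim part As String = text.Substring(1, 3)   ' "mbe", replaces Mid(text, 2, 3)
        Try
            Console.WriteLine(part)
        Catch ex As Exception                       ' replaces On Error GoTo
            Console.WriteLine("failed")
        End Try
    End Sub
End Module
```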
The new mode is intended for new platforms and new project templates, but it can be used with existing templates, though you may run into problems with code generation that depends on missing functionality. An example of this is a Windows Forms Application with the Application Framework project option checked.
COULDN’T I HAVE PREVIOUSLY DONE THIS WITH OTHER COMPILER SWITCHES?
It was previously possible to target some of these platforms through cumbersome workarounds using the existing /vbruntime- switch. That was not an ideal solution and required implementing a number of additional items to make it work. The new switch is a simpler, more elegant approach that makes targeting these platforms much easier.
DOES THIS MEAN THAT I GET LESS FUNCTIONALITY THAN IF I USE C#?
From the list of limitations you will see that most of them relate to legacy VB functionality. So for developers writing modern .NET code there should be no real difference. The exception to this rule may be late binding / Dynamic.
In C#, late binding (dynamic) was introduced in the 2010 product cycle. If the platform you are targeting does not include the Dynamic Language Runtime, then there is no difference, because the C# dynamic feature depends on it. This is the case for many of the existing platforms this feature is intended for, so the lack of late binding / dynamic support is consistent across both languages.
HOW DO I KNOW IT’S ACTUALLY DONE ANYTHING?
To see the effects of using this switch, you can create a simple console application and compile it using the new functionality (either from the command line or via the project file). To see the embedded runtime, use tools such as "Reflector" or ILDASM to disassemble the assembly. You'll notice that the "Microsoft.VisualBasic" namespace was added to your assembly.
You can also see that there is no reference to the VB runtime assembly in the referenced-assembly list in your assembly header.
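One way to check this from the command line, assuming the Windows SDK's ILDASM is on the path (the assembly name here is illustrative):

```
rem Dump the IL to text and look for the embedded namespace
ildasm /text MyApp.exe | findstr "Microsoft.VisualBasic"

rem The header should show no ".assembly extern Microsoft.VisualBasic" entry
ildasm /text MyApp.exe | findstr ".assembly extern"
```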
WHEN SHOULD I USE THIS MYSELF? WHAT IS MICROSOFT’S RECOMMENDED BEST PRACTICE FOR USING THE FLAG?
Most users will never use it themselves directly. It's there only for consumption by people who create project templates, i.e. mainly just for us to use here at Microsoft. The recommendation is that if you create a project template for a platform where Microsoft.VisualBasic.dll doesn't exist, then use the flag.
HOW DOES THE RELEASED VB SUPPORT FOR WINDOWS PHONE 7 WORK?
The released Visual Basic support for the Windows Phone 7 tools allows developers to create VB applications that run on the phone even though the phone does not have the VB runtime. This is done by adding a VB Core assembly to each VB application. This solution has a few issues; for example, C# applications cannot reference a VB class library.
We plan to use the VB Core mode in future versions of the Windows Phone tools to solve these limitations.
Does this mean that XNA will finally support VB.NET?
I use Left() and Mid() all the time.
Then start using Substring.
XNA is one of the platforms that could benefit from this feature - more info to come.
Does this mean support for the Yield keyword?
No, it's unrelated to the "Yield" keyword.
(A preview of the Yield keyword for VB was included in the "Async CTP".)
This is great, with the exception of the My namespace. Why is it that we've been told to use / extend "My" since 2.0 was introduced, and now it's been mentioned as legacy or unneeded helper bloat?
Yield is part of the Async CTP, and SP1 does NOT include the async work. SP1 was simply an addition to the 4.0 functionality.
For "My" functionality - I'm not sure microsoft has recommended this however it was implemented to provide helpers and simplify some of the things people were trying to do. These helpers have been removed when using this switch as many are just not applicable for these new platforms in much the same way as some of them were not applicable for other platforms. Also as developers become more familiar with the .NET framework in general type were able to use the framework types directly.
That said, there is nothing stopping the user from reimplementing them in there own code just as there is nothing stopping them from reimplementing the legacy functions. Its just that out of the box these are not implemented.
Its a tradeoff between how well used the features are, how much time it takes to implement and test and how it effects the final size/performance.
This should have been done time ago (although I'd also remove the Asc and Chr functions). I've done ton of VB6 developing in the past (and I still have to do it some times), but sincerely, I don't like all the legacy support offered in VB.NET... I seriously dislike when I see .Net code using things like Left, Mid, DateDiff, etc.
Hector, I cannot agree more with you that we are in 2011 and .NET is a well established product. Many of the legacy functionality was there to assist people in moving from VB1-6 to VB.NET but in reality there are not large amounts of people still doing this work. Providing legacy functionality is still needed but this should not be to the detriment of all .NET developers.
Some of the My Functionality not supported is not even relevent on new platforms - examples may be My.Computer.Filesystem on platforms. If developers still need this it can be implemented in there own assemblies.
There is also a move away from supporting older legacy concepts already in place - try using On Error Goto error handling in Statement Lambda's. The language has evolved and developers should as well, so that they can take full advantage of new functionality.
Why on earth are functions like Left, Mid, and Right, being considered "legacy" functions. Those functions have always been a part of the "Basic" and thus VB idiom. Removing them seems to be a completely arbitrary decision. In my book, choices are a good thing.
Personally, I've never understood why, given it's 2011, and we have 3+gz machines, that the compiler/linker can't link in only those methods that are actually used. That's something that the DOS versions of Quickbasic could do, but .NET can't. Get that right, and all this nonsense about "bloated libraries, blah blah" would go out the window. If you use a function, it's included, if not, it's not. <Sigh>
As to the My Namespace, meh. It's convenient, but not terribly so, and if you look under the covers it's really just a big hack job. I could take it or leave it.
And fwiw, LEFT and MID +cannot+ be simply replaced with SUBSTRING. The older style functions are much easier to use in the general case and intelligently handling things like a null string object, whereas substring and related instance methods cannot.
You have a good point about linking, we did consider it for VB Core, but the time and effort it would have taken us to implement it did not fit the schedule. So while I agree that these functions are very useful, we needed to draw the line somewhere on what would be included. We used the following considerations:
1) Make the embedded code as minimal as possible
2) Embed functionality that is core to the language, i.e. functions that the compiler knows about
We are considering publishing the source code for portions of the VB Runtime that are not included in VB Core so users can do the linking themselves.
Avner Aharoni
Program Manager VB/C# languages team
These functions are runtime functions for VB and not actually part of the language - look in the language spec and you will hardly find even a mention of any of them. They are "legacy" in the respect that many of them are implemented directly in the .NET framework allowing them to be used interchangeably in different languages. This is useful for samples in multiple languages and conversion of code. That said there may be differences but these for the most part are small.
Quickbasic/GW Basic/VB 1-6 had all these functions but it does not mean that improvements cannot be made. The ommission was made to enable the platforms for VB developers without bloating the EXE too much. This does NOT effect your existing code targeting supported platforms - only new code targeting these new platforms. The workaround if you absolutely cannot live without these functions is to simply create your own implementation. As Avner pointed out these are looking to be published but if you cannot wait simply use a tool like reflector to extract the function.
I believe the bigger benefit is support for these platforms and this is a much bigger win that loosing some functionality "out the box". If you really need this functionality whether it be My functionality, these legacy functions, etc. you can put it back in your own implementation.
To make it clear - this does not effect the existing behaviour for existing supported platforms. So on the desktop - My, Legacy, On error, late binding are still fully supported.
We understand this is only for new platforms but even calling Left and Mid "Legacy" is a worry for future VB development.
The fact that these functions are not part of the language specification and are implemented in the vb runtime means that you are always able to implement your own versions of them. With the intention of Microsoft publishing these allowing you to see the implementation and extend them yourself I feel is a value add. You will not be tied to MS implementation but can extend it with additional overloads and functionality. Legacy is not such a bad word - look at the compatibility libraries as an example. These are still available for use if you need them but things have moved on. I would still consider them legacy functionality as well.
I'm sure there are many functions in the .NET Framework that people have used in the past that get deprecated and people adapt there code to use new ways of work. .NET is not a new platform now and almost all the functionality is contained within the framework and more easy to find examples of. These legacy functions are simply wrapper functions around .NET functionality - there is no magic going on that can't be implemented in user implemented functions.
Whats you worry on future VB development ? Just look at this as enabling VB Development on more platforms not previously supported. Is this a positive move ?.
Look at the other work being carried out such as Async and Iterators for a future version on VB. This should demonstrate that VB has a very bright future along with C#. The compilers team has a co-evolution strategy which means features should be implemented in both languages. This VBCore functionality is part of the strategy so that you as a developer could chose you language to target more platforms. I believe that the benefits from this far outweight the downsides. Developers are smart people and with these new platforms which will make use of this feature there will be some limitations but these are fairly small in the big picture and for those writing .NET code this should not pose much of a problem. | http://blogs.msdn.com/b/vbteam/archive/2011/01/10/vb-core-new-compilation-mode-in-visual-studio-2010-sp1.aspx | CC-MAIN-2014-15 | refinedweb | 2,384 | 63.29 |
CFoo foo; // just an object whose class has operator() defined
foo(); // like a function; makes operator() work
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
using namespace std;
class starts {
char ch;
public:
starts(char c) : ch(c) {}
bool operator() (const string &s) {
return s.size() && s[0] == ch;
}
};
int main()
{
vector<string> vec(3); // vector of three elements
vec[0] = "sir";
vec[1] = "zebra";
vec[2] = "home";
vector<string>::iterator iter =
find_if(vec.begin(), vec.end(), starts('z'));
cout << *iter << "\n";
return 0;
}
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/tips/Tip/32491 | CC-MAIN-2018-30 | refinedweb | 114 | 54.93 |
public class ColorType extends Object
A typesafe enumeration of colors that can be fetched from a style.; } }
public static final ColorType FOREGROUND
ColorType for the foreground of a region.
public static final ColorType BACKGROUND
ColorType for the background of a region.
public static final ColorType TEXT_FOREGROUND
ColorType for the foreground of a region.
public static final ColorType TEXT_BACKGROUND
ColorType for the background of a region.
public static final ColorType FOCUS
ColorType for the focus.
public static final int MAX_COUNT
Maximum number of
ColorTypes.
protected ColorType(String description)
Creates a new ColorType with the specified description.
description- String description of the ColorType.
public final int getID()
Returns a unique id, as an integer, for this ColorType.
public String toString()
Returns the textual description of this
ColorType. This is the same value that the
ColorType was created with.. | https://docs.w3cub.com/openjdk~8_gui/javax/swing/plaf/synth/colortype/ | CC-MAIN-2019-43 | refinedweb | 136 | 51.55 |
Using Visual Studio 2005 to Perform Load Testing on a SQL Server 2005 Reporting Services Report Server
Runying Mao
Heidi Steen
Microsoft Corporation
Lukasz Pawlowski
Donovan Smith
Tudor Trufinescu
Dave Wickert
Microsoft Corporation
September 2006
Applies to:
Microsoft SQL Server 2005 Reporting Services
Microsoft Visual Studio 2005 Team System
Summary: Microsoft Visual Studio 2005 Team System includes a load-test tool that you can use for performance and stress testing of a Microsoft SQL Server 2005 Reporting Services deployment. The load-test tool runs the tests that you create, and optionally logs the data into a SQL Server 2005 database. You can monitor tests as they progress, review performance data after the tests are completed, and precisely determine what the threshold is for a specific report-server deployment. This article contains step-by-step instructions for creating a Web page load test, and sample code and instructions for creating a unit test. Also, instructions are provided for setting up the load test that you use to specify load patterns. The article assumes that you have the AdventureWorks sample database and reports, so that you can try these steps on your computer. (25 printed pages)
Click here to download the Word document version of the article, UsingVSforLoadTestingonSQLServer.doc.
Contents
About Project REAL
Introduction
Setting Up
Creating a Web Test
Creating a Unit Test
Extending Web Tests and Unit Tests
Creating a Load Test
Creating a Test Results Database
Checking Results
Conclusion
About Project REAL
Project REAL is an effort to discover best practices for creating business intelligence (BI) applications that are based on SQL Server 2005, by creating reference implementations that are based on actual customer scenarios. This means that customer data is brought in-house and is used to work through the same issues that customers face during deployment.
These issues include the following:
-
By working with real deployment scenarios, we gain a better understanding of how to work with the tools. Our goal is to address the full gamut of concerns that a large company would face during its own real-world deployment.
This white paper offers a detailed technical discussion on how to perform performance analysis for SQL Server 2005 Reporting Services using Visual Studio 2005 Team Edition for Software Testers. The infrastructure discussed in this white paper refers to how the Reporting Services performance and throughput tests were implemented in Project REAL.
For an overview of Project REAL, see the Project REAL: Technical Overview white paper. A number of papers, tools, and samples will be produced over the lifetime of Project REAL. To find the latest information, visit the Project REAL Web site.
Introduction
This article describes how to use Visual Studio 2005 Team Edition for Software Testers to run performance characterization tests for SQL Server 2005 Reporting Services. You can use this article as a guideline for capacity planning or to assess performance before rolling out reports on a production server.
This article contains step-by-step instructions for setting up a project, creating Web page and unit tests, creating and configuring a load test, running the test, and evaluating the results. After you create the tests, you can run them on different server configurations to quantify the improvement in performance when you change hardware components or modify a report definition or query, or specify different rendering formats.
Choosing Reports
This article uses the AdventureWorks sample reports and database to illustrate key concepts. You can use the sample reports if you want to use the sample code and steps provided, or you can work with your own reports and modify the code and steps accordingly. When you perform load tests, the reports must be able to run with no user interaction required. If the report prompts for data-source credentials or parameter values, you must temporarily modify the report to use stored or integrated credentials and default parameters for the purpose of running the tests.
You can install a subset of the Visual Studio 2005 components. The following screen shot shows the Team Developer and Tester tools that are used in this exercise. The tools that you will use include Performance Tools, Code Analysis Tools, and Testing Tools.
You must also have a language project installed. The sample code provided in this article is in Microsoft Visual C# 2005, but you can use another language if you want to use your own code.
Figure 1. Visual Studio 2005 setup (Click on the image for a larger picture)
AdventureWorks Sample Database and Reports
AdventureWorks is a sample relational database that is included with SQL Server 2005. If you want to use the AdventureWorks sample reports, first make sure that the Reporting Services samples are installed. By default, they are located at <drive>:\Program Files\Microsoft SQL Server\90\Samples. If not they are not installed, you must install them. For instructions on how to install and uninstall the samples, see Installing Samples in SQL Server 2005 Books Online. You can also download the samples from the Microsoft Download Center.
In this exercise, we will use the following single-page and multipage reports:
- Company Sales
- Product Catalog
- Employee Sales Summary
Employee Sales Summary prompts for a parameter value. When you create a unit test, you will specify a parameter value to pass to the report at run time. This allows the report to run unattended.
All of these reports retrieve data from the AdventureWorks sample database, using Microsoft Windows authentication and your credentials to connect to SQL Server 2005.
Before you start, verify that you can access the AdventureWorks sample database and run the reports by starting Report Manager and opening each report..
Figure 3. Add a Web test to TestProject1. (Click on the image for a larger picture)
A browser window pops up automatically. It will look similar to Figure 4. You will use this browser window to add URLs for each report that you want to include in the test.
Figure 4. Browser window (Click on the image for a larger picture)
-.
You now have a Web test that contains a list of recorded actions.
Figure 6. Recorded actions in the Web test (Click on the image for a larger picture)
Next, you will run the test. Visual Studio 2005 provides a Run button in the toolbar. On the tab named WebTest1.LoadTest, it is the first button with a green arrow on it.
Figure 7. Run the test. (Click on the image for a larger picture)
- Switch to the WebTest1.webtest tab, click the Run button, and then select Run Test to replay the reports. The Test Results window will show whether or not the test was successful.
If you do not see the Test Results window, on the Test menu, select Windows, and then select Test Results.
Figure 8. View test results. (Click on the image for a larger picture)
If an error occurs, you can click the Run button again and select Debug Test to diagnose the error.
Creating a Unit Test
Unit tests are programmatic tests that are written in Visual C# (or other programming languages) and used to exercise other source code by directly calling the methods of a class, passing appropriate parameters, and then (if you include Assert statements) testing the values that are produced against expected values.
In this section, you will learn how to create a unit test that calls a single report on a local report server, and run the report with a specific parameter value.
- In Solution Explorer, double-click Unit Test1.cs. Delete the existing code, so that the file is empty. The following lines of code should be deleted.
using System; using System.Text; using System.Collections.Generic; using Microsoft.VisualStudio.TestTools.UnitTesting; namespace TestProject { /// <summary> /// Summary description for UnitTest1 /// </summary> [TestClass] public class UnitTest1 { public UnitTest1() { // // TODO: Add constructor logic here // } Method1() { // // TODO: Add test logic here // } } }
- Copy the following sample code and paste it into the unit test. You can use the following code as a template and update it to assign different values to the parameter, render multiple reports, and so on.
This sample code displays the Employee Sales Summary report. The report has a parameter named EmpID, and the code sets the value to 275.
using System; using System.Collections.Generic; using System.Text; using Microsoft.VisualStudio.TestTools.WebTesting; namespace RSLoadTest { public class Report : WebTest { protected const string REPORTSERVER = ""; // Report's URL address. protected string m_urlName; // EmpID is a Report parameter. protected int m_EmpID; // ThinkTime between each report rendering. protected int m_thinkTime; public Report() { this.PreAuthenticate = true; // Set Report Name. m_urlName = "%2fAdventureWorks+Sample+Reports%2fEmployee+Sales+Summary"; // Set value to Parameter EmpID. m_EmpID = 275; // Set think time to 35 seconds. m_thinkTime = 35; }:Command", "Render"); // Choose to set Think Time to 30 seconds. request.ThinkTime = m_thinkTime; yield return request; } } }
Extending Web Tests and Unit Tests
In the previous section, you used the sample code in a unit test to learn how to set parameter values programmatically. Although the sample code illustrates basic principles that can help you get started, it has two limitations that you might want to address before you create similar tests for an actual test environment. Namely, the value that is passed to the EmpID parameter is hard-coded, and the output format is always the default HTML-rendering extension. To perform realistic load tests on your reports, you should try a variety of rendering extensions and run reports with different parameter values to get a complete picture of how the report performs when you vary the query parameters.
Testing for Rendering Formats
To specify different rendering formats, consider incorporating URL access into your tests. In Reporting Services, each report can be accessed through its URL. You can specify parameter values on a report URL to vary the rendering extension, test device configuration settings, or specify a data source. The URL must be a fully qualified path to the report. For more information about the URL parameters that are used for accessing SQL Server 2005 reports, see Using URL Access Parameters.
The following code snippets show you how to specify rendering extensions, so that you can run tests for different rendering formats.
- To render a report in Microsoft Office Excel, use the following code snippet.
- To render a report in PDF, use the following code snippet.
- The code snippet should be placed in function.
- The following code provides a complete example.:Format", "EXCEL"); request.QueryStringParameters.Add("rs:Command", "Render"); // Choose to set Think Time to 30 seconds. request.ThinkTime = m_thinkTime; yield return request; }
Testing with Dynamic Data
To work with dynamic data, use the data-binding features in Visual Studio 2005 to pass query-parameter values to a report. In the sample for the unit test, the Employee Sales Summary sample report has a parameter named EmpID that is set to an m_EmpID member variable. In most cases, parameter values are stored in a database table (in this example, values for parameter EmpID are from table [AdventureWorks].[HumanResources].[Employee], from the column EmployeeID). To pull a value for parameter EmpID dynamically from that database table and assign it to parameter EmpID, you can create a data source that connects to table [AdventureWorks].[HumanResources].[Employee] and then bind column EmployeeID to parameter EmpID. For detailed instructions on how to set up data binding in Visual Studio 2005 Team System, see How to: Add Data Binding to a Web Test.
Creating a Load Test
To run the unit test that you just created, you must define a load test that sets the load pattern that you want to use. In this exercise, the primary goal of the load test is to simulate multiple users accessing a server simultaneously. By adding a Web page test or a unit test to a load test, you can simulate multiple users opening connections and making multiple HTTP requests.
In a previous section, you created a unit test. In this section, you will create a load test, and then add the unit test that you created.
-.
Optionally, on the Counter Sets page, you can specify custom performance counters to use during the test run. This is useful if the reports or queries are run on a remote computer. If they are, you can add counters from the remote computer and monitor them locally when the test runs. If all processing is local, you do not have to specify a counter set. The load-test tool provides access to local performance counters by default.
-
While the load test is running, you will see the results displayed in the Test Results window as In Progress. You can click In Progress to view information about the test as it runs.
Figure 16. Monitor the test. (Click on the image for a larger picture)
To monitor server performance, use the Requests/Sec and Avg. Response Time counters. If you added other counters when you create the load test, you can find them in the Counters pane. Double-click them to add them to the graph.
Conclusion
Now that you have a basic understanding of how to use the load-test tool with reports, you can build upon that knowledge by creating and running tests on configurations that are used in your organization.
In most cases, you will want to run a load test on a computer different from the one used to run the report server. Additionally, to mimic actual user activity, configure multiple user sessions across multiple computers. This configuration is called controller-agent configuration, and the computer that runs the load test is the controller. The computers that host the user sessions are created as agents. Controller-agent configurations are beyond the scope of this white paper, but if you want to learn more about controllers, agents, and Visual Studio 2005 Team Edition for Software Testers, see the following links on the MSDN Web site:
-. | http://msdn.microsoft.com/en-US/library/aa964139(v=sql.90).aspx | crawl-003 | refinedweb | 2,298 | 53.41 |
wubi.exe for ubuntu 11.10 fails if the default startup folder doesn't exist
Bug Description
wubi.exe file in ubuntu-
wubi.exe from ubuntu-
---wubi-
01-03 01:09 INFO root: === wubi 11.10 rev241 ===
01-03 01:09 DEBUG root: Logfile is c:\users\
01-03 01:09 DEBUG root: sys.argv = ['main.pyo', '--exefile=
01-03 01:09 DEBUG CommonBackend: data_dir=
01-03 01:09 DEBUG WindowsBackend: 7z=C:\Users\
01-03 01:09 DEBUG WindowsBackend: startup_folder=None
01-03 01:09 ERROR root: unsubscriptable object
Traceback (most recent call last):
File "\lib\wubi\
File "\lib\wubi\
File "\lib\ntpath.py", line 90, in join
TypeError: unsubscriptable object
for only wubi.exe log result:
-----wubi-
01-03 01:13 INFO root: === wubi 11.10 rev245 ===
01-03 01:13 DEBUG root: Logfile is c:\users\
01-03 01:13 DEBUG root: sys.argv = ['main.pyo', '--exefile=
01-03 01:13 DEBUG CommonBackend: data_dir=
01-03 01:13 DEBUG WindowsBackend: 7z=C:\Users\
01-03 01:13 DEBUG WindowsBackend: startup_folder=None
01-03 01:13 ERROR root: unsubscriptable object
Traceback (most recent call last):
File "\lib\wubi\
File "\lib\wubi\
File "\lib\ntpath.py", line 90, in join
TypeError: unsubscriptable object
Related branches
- Ubuntu Installer Team: Pending requested 2012-02-24
- Diff: 98 lines (+30/-9) 3 files modified
Problem was due to the startup folder not existing. i.e. deleted. Re-creating it manually solved the problem.
The code should probably handle this more gracefully as the startup folder is not required to install or run ubuntu with Wubi.
This looks like a bug caused by the attempt to remove "wubi.exe" from the startup folder. This logic was introduced in 11.04 I believe, which inserts wubi.exe in the startup folder if a user tries to do a normal dual boot but all four primary partitions are used.
The problem is that the startup folder is not set. It looks like ntpath.py should handle this, but it probably isn't. The workaround is likely to set the startup environment variable by editing the registry (regedit) and setting:
'HKEY_ LOCAL_MACHINE' , 'SOFTWARE\ Microsoft\ Windows\ CurrentVersion'
'\Explorer\ Shell Folders',
'Common Startup'
Relevant code:
backends\ win32\backend. py
\lib\wubi\
def remove_
existing_ binary( self): join(self. get_startup_ folder( ), 'wubi.exe') exists( binary) :
MOVEFILE_ DELAY_UNTIL_ REBOOT = 4
ctypes. windll. kernel32. MoveFileExW( binary, None,
MOVEFILE_ DELAY_UNTIL_ REBOOT)
log.exception ("Couldn' t remove Wubi from startup:")
binary = os.path.
if os.path.
try:
except (OSError, IOError):
============
\lib\wubi\
backends\ win32\backend. py
def join(a, *p):
"""Join two or more pathname components, inserting "\\" as needed"""
path = a
for b in p:
b_wins = 0 # set to 1 iff b makes path irrelevant
if path == "":
b_wins = 1
elif isabs(b):
b_wins = 1
# This probably wipes out path so far. However, it's more
# complicated if path begins with a drive letter:
# 1. join('c:', '/a') == 'c:/a'
# 2. join('c:/', '/a') == 'c:/a'
# But
# 3. join('c:/a', '/b') == '/b'
# 4. join('c:', 'd:/') = 'd:/'
# 5. join('c:/', 'd:/') = 'd:/'
if path[1:2] != ":" or b[1:2] == ":":
# Path doesn't start with a drive letter, or cases 4 and 5.
# Else path has a drive letter, and b doesn't but is absolute.
path[-1] not in "/\\"):
b_wins = 1
elif len(path) > 3 or (len(path) == 3 and
# case 3
if b_wins:
path = b
else:
# Join, and ensure there's a separator.
assert len(path) > 0
if path[-1] in "/\\": <== line 90, should never get here | https://bugs.launchpad.net/wubi/+bug/910948 | CC-MAIN-2015-18 | refinedweb | 586 | 58.79 |
Recent Notes
Displaying keyword search results 51 - 60.:
Use the <xsl:with-param> and <xsl:param> tags to apply parameters to XSL stylesheets:
<?xml version="1.0" encoding="ISO-8859-1"?> <xs...
XSLT by default writes namespace declarations in the output. Most of the time it's spurious, sometimes outright wrong. Take this XML:
<?xml version="1.0" encoding="ISO-8859-1"?> <ev...And this XSL:
<?xml version="1.0" encoding="ISO-8859-1"?> <xs...where DateUtil.java is:
import java.util.Date; public class DateUti...The output is (with JDK1.6):
Title: <br xmlns:java=" namespace declaration went to the <br> element, not the timestamp where it belongs. To remove the namespace info, add exclude-result-prefixes to the XSL:
<?xml version="1.0" encoding="ISO-8859-1"?> <x... | http://www.xinotes.net/notes/keywords/value/xml/p,6/ | CC-MAIN-2014-10 | refinedweb | 130 | 62.64 |
Yesterday, I purchased and installed FlashBuilder 4.7, 64-bit Windows version.
With it, I built my project for iOS and installed an export release (AIR 3.4 for iOS) on my iPad 1 for testing.
Two "unforgivable" problems showed up:
1. Most of the artwork looks utterly spoiled, as if entirely different filter coefficients
were being applied to the bitmaps.
(note: builds of the same code with Flash Pro 5.5, Flash Pro 6.0 and FlashBuilder 4.6 / AIR 3.1 for iOS (32-bit)
all show consistent and proper results instead)
2. The application runs much slower than when it was exported for release from FlashBuilder 4.6.
Is there any way to deal with these problems?
(apart from applying for a refund)
Thanks in advance
PS: Tests on the desktop (using the AIR simulator) with FB 4.7 show the same filter problem.
.. additional info:
I've tried installing the 32-bit Windows version of FlashBuilder 4.7 (on a different system, to rule
out incidental system problems) as well, but the problems are the same with that version.
More concretely, below you can find a screenshot of my app, built with FlashBuilder 4.6 (the result
would be the same when built with Flash Pro 5.5 or 6.0), which uses
filtered vector art images that are subsequently converted to BitmapData.
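(For the curious, the conversion step amounts to something like the sketch below. The class and instance names are placeholders, not the ones from my actual project, and filter padding is ignored for brevity.)

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.Sprite;

// 'FilteredArtClip' stands in for one of my filtered library symbols.
var art:Sprite = new FilteredArtClip();

// Note: BitmapData.draw() does not rasterize filters set on the object
// you pass in directly, but it does rasterize filters on that object's
// children - hence the wrapper sprite.
var holder:Sprite = new Sprite();
holder.addChild(art);

var bmd:BitmapData = new BitmapData(
    Math.ceil(holder.width), Math.ceil(holder.height), true, 0x00000000);
bmd.draw(holder);
addChild(new Bitmap(bmd));
```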
At the bottom of it, you can find the same screenshot, but this time the same code
was built with FlashBuilder 4.7.
Any ideas or suggestions? Anyone?
Same page, but now built with FlashBuilder 4.7:
Digging a bit further, it appears that at least the gradient glow and the gradient bevel
bitmap filters are malfunctioning when built with FlashBuilder 4.7.
Upgrading from AIR 3.4 to AIR 3.5 did not yield a change.
Building with AIR 3.5 using FlashBuilder 4.6 shows proper results (and so do Flash Pro CS5.5 and CS6.0).
So I guess the problem is in the compiler of FlashBuilder 4.7.
To the FlashBuilder team at Adobe: Is this problem acknowledged?
If no, please let me know what I should do.
If yes, has a bugfix been scheduled? For when?
Thanks in advance
Can you provide a sample project that recreates your issue? I'll be happy to see if 4.6 and 4.7 behave differently (with the same AIR 3.5 1066) both locally and on an iPad. PM me a link if you don't want to share it publicly.
Sure. I guess I can only attach an image or video to this post, so here's a link:
wether the build targets mobile as3 for ios, flashplayer or air, 3.4 or 3.5.
The TestButton.fla (created with Flash CS6.0) has created the .swc next to it.
It contains the filtered graphics. (ignore SwcBuild.fla - just a copy).
The FB4_7 project targets flashplayer 11.5. It imports said swc in its project settings,
and its 5 line .as code simply puts the MovieClip instance on stage.
Thanks in advance
I can verify it definitely happens in FB4.7 but not in 4.6. The only way I was able to get around the issue was applying the filter programmatically. Then it worked without issue: sh/filters/GradientBevelFilter.html
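A minimal sketch of what I mean by applying it programmatically (the numbers below are placeholder values, not tuned to your artwork - substitute whatever the original filter used):

```actionscript
import flash.display.Sprite;
import flash.filters.BitmapFilterType;
import flash.filters.GradientBevelFilter;

// myClip stands in for whatever display object carried the IDE-applied filter.
var myClip:Sprite = new Sprite();
myClip.graphics.beginFill(0x3366CC);
myClip.graphics.drawRect(0, 0, 120, 40);
myClip.graphics.endFill();

var bevel:GradientBevelFilter = new GradientBevelFilter(
    4,                               // distance
    45,                              // angle, in degrees
    [0xFFFFFF, 0x000000, 0x000000],  // gradient colors
    [1, 0, 1],                       // alphas
    [0, 128, 255],                   // ratios (0-255)
    8, 8,                            // blurX, blurY
    1,                               // strength
    3,                               // quality (1-15; 3 = high)
    BitmapFilterType.INNER);         // type

// The filters property must be assigned a whole new array;
// pushing onto the existing one has no effect.
myClip.filters = [bevel];
addChild(myClip);
```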
You should open up a bug with your sample:
Thanks Sinious,
I tried to file the bug at the link that you provided, but it does not accept a post without
specifying product details from drop down lists.
However, there's only a handful of products in the list, and FlashBuilder 4.7 is not among them.
Any advice on that?
Thanks in advance
I would file it as an Adobe Flash Player v11.5 bug that all users will encounter that's of type `Incorrectly functioning w/ workaround`. Attach the file you shared above and mention the code-based workaround.
I followed your suggestion, and filed it as a FlashPlayer 11.5 bug w. workaround. In retrospect, I regret the latter, as the bug
was positioned as only "priority 3rd high" - presumably because of the existence of the theoretical workaround.
For me, the workaround is not realistic, because I use these filters in hundreds of places. Finding, then removing them,
and adding them programmatically would be an administrative nightmare, and I'd lose the WYSIWYG advantage of Flash Pro completely.
Summarized - until fixed, I cannot use FB4.7 to continue the development of my app.
Anyway, I'll keep my fingers crossed that the "FlashPlayer bug" will make it to the FlashBuilder team (at some corporations I've worked at,
there'd be high chances such an "ill-filled-in form" would be ignored) - and that they'll give it high priority, which I think it deserves.
Thank you for your prompt help.
I wish it was more prompt but I lost power and net for days in that US blizzard over the weekend. Whew, happy to be back.
Please post your bug report ID here so we can vote on the issue. The number of people voting on it is what largely controls the speed it is fixed. Also go in the Flash Builder forum and post the bug link there.
As for the workaround, their nature is definitely to be annoying. They're not ideal. FB "just working" is the best solution by far but a known workaround for someone else who doesn't have thousands of filters might make or break their deadline. It also gives valuable clues to the patching team on where the problem may lie. I'd always include a workaround if you find one.
I see your point. The bug ID is 3499140.
By the way, how does the voting process work?
Is someone from the flashbuilder team appointed the task of counting votes
on forum pages like this? Or is there a specific webpage where people can vote
which bugs should be fixed first?
People can vote/verify in the bottom right. Thanks for reporting, I'm sure it will help someone!
I see, thanks.
You're welcome and good luck!
Mmm.. looks like bureaucracy prevents the bug from being considered (see bugbase link above).
Zhe Wang, a Lead Manager at Adobe, seems to have rejected the bug report, because
"it should be filed as an FB bug instead of a FlashPlayer Bug".
Apparently, there's no way to report a bug to the FB team. I can't hide that it's
very frustrating and annoying.
I voted on your bug but I agree the right place to file a Flash Builder bug is rather confusing.
You might try here:
I see some Flash Builder tickets in there and if you go here:
That is what they suggest to use at the top.
:-) Thanks Nabren! I posted the bug on that location as well.
Was there any further word on this.. or better yet a fix or work around other than programmatic filtering?
Well, I spent my effort to file the bug - in spite of the lacking of a "proper place" to file it.
You can add your vote as well:
A customer-friendly Adobe affiliate, called Sinious on these forums, has been willing to look
into it and has confirmed that it looks like an FB 4.7 compiler bug.
Yet as you can see from the link above, it looks like the bug report is not taken seriously - the
responsible Adobe manager Zhe Wang is whining that it should be posted on a forum, and
to me it looks like nothing shows that any action will be taken.
Apart from programmatic filtering, you could redesign the filters of your artwork.
From my short research, it looked like the main problems were caused by use of the filters
that use color-gradients. (gradient bevel, gradient glow).
You could replace them by non-gradient bevel and non-gradient glow filters, and thus retain the
WYSIWYG advantage.
(you can mimic the behaviour of a multi-color gradient glow by stacking multiple
single color gradient glows, for instance).
For me, it would still mean that switching to FB4.7 would cost way too much work.
I'm still waiting for an update with the fix.
Hi Marius, I am not sure if this is working for sure or not but you might try the following:
Instantiate a library item in a SWC by extending it to another Class.
eg: if the item is SomeClip
use: var mc:SomeClipExt = new SomeClipExt();
and in SomeClipExt do this:
public class SomeClipExt extends SomeClip {
}
At least one of my Library items is retaining the bevel filter I applied to it in Flash Pro.
Sorry I do not have time now to test this further.
Hi DachFlach,
Thank you for hinting this other potential workaround. I'll see if I can reproduce.
This workaround would mean that each of the exported main classes in Flash Pro would need to be dissected into hard-coded sub-classes: for each sub-class (movieclip) that uses such a filter, hard-code an extended class, and then rebuild an analogue of the original main class at runtime that uses these extended classes in a similar hierarchical buildup.
That is, even if the workaround would work, it'd still cost tons of work for me
to convert everything - WYSIWYG objects in flash pro (complete menus, sceneries, etc)
generally consist of loads of hierarchy of sub-objects, and I've used these filters on hundreds of places.
Apart from that, because of the hard-coded classes, the result is hard to maintain: editing (changing the hierarchical structure of)
MovieClip internals in Flash Pro may put them out of sync with the hard-coded classes.
Anyway, thanks.
Hi Marius,
I was wrong. It was not the extending of the class that made the filters work. For whatever reason, now all filters I add to a MovieClip in Flash Pro and export to a SWC are working.
Including gradient glow.
Here is a link to a SWC with a button called btnPink that has a bevel and a gradient glow, and it works for me when used in FB 4.7.
Maybe I am missing what the problem is now?
Strange - from flashdevelop, btnPink seems accessible, while it does not seem so from FB47..
Could you post the .fla as well?
(also, such that I can verify that the button looks the same in flash pro)
Thanks in advance
By the way, in the link above:
There's "Test.zip", which contains a fla swc and test program that I created to test.
Perhaps you could check if the test works properly for your system as well?
Many thanks in advance
I have not tried in FlashDevelop but it works in FB 4.7 for me.
My FLA that is.. I have not tried yours yet.
Sorry - you've posted a link that says FilterTest.fla, but if I click on it, I get the swc.
Could you correct the link for me? Thanks in advance
.. ah wait - it works if I copy it to the address line of the browser
I just tried your TestButton and it seems to be working OK for me in FB 4.7.
Not sure how to fix that link.
Interesting.
When I tried to open your fla, my flash pro remarked that it was based on air 3.6,
while 3.5 is the highest I have installed. I agreed with the suggestion to convert the input to air 3.5.
Then I built the swc.
Using the button from that swc in FlashDevelop, it shows identical to the button shown in flash pro (as always), as it should:
However, using the button from that swc in FB47, it no longer shows the pink:
.. but resuming your last remark - let me check:
you tested my TestButton, and in FB4.7, it shows identical results as in flash pro?
.. if that's true, I guess I either need to upgrade to air3.6 to make it work with
FB 4.7 (at the time, 3.5 was the latest) and/or re-download and install FB 4.7...
Yes, identical. A faint box in both. Both have same font issue but filter looks the same.
I just targeted AIR 3.2 on my test and it worked without issue.
That's bizarre - and hope-giving at the same time.
I've just installed air3.7 and the air 3.7 sdk - (with compiler) in FB47.
Build targeted air 3.7 - still same problem. - with your button - the pink is gone.
Build again, targeting air 3.5 - still same problem.
As you can see at the top of this thread, I've been testing this on several systems with 32bit as well as 64 bit version of FB4.7,
with all kinds of air versions. The problem is consistent for FB4.7, while absent for FB4.6.
The only thing I can think of now, is that the problem has been silently fixed for FB47, and you downloaded
the fixed version. I'll try uninstalling it and re-download and reinstall it. Or would there be another kind of smart course of action?
I installed FB4.7 only a few days ago via Creative Cloud.
I should note that initially the filters were not working for me. Which is why I found this thread.
You might want to try adding a Flash Button in your SWC and see if you get any results.
I do not have one in the SWC I sent to you but I previously imported a SWC which did.
That said, it works fine in FB without any SWC with a button or other component in it but maybe something triggered in FB from the time I did have one in there. Shot in the dark.
Eh yes.. I just uninstalled FB.4.7, downloaded, reinstalled - nothing changed
So you suspect something may have triggered things into working by adding a Flash Button.
I tried creating a new Flash Button in Flash Pro, in the swc with the pink button (did you mean something like that?).
Unfortunately, no magic bug healing on my system.
If at any moment you think of another thing that may have done it, please don't hesitate to mention it.
Are you building a Flex Project or ActionScript Project?
Mine is Flex using both MX & Spark.
• I am declaring it as a WindowedApplication.
• I am adding my ActionScript components to a visual element:
<s:SpriteVisualElement
protected function theRoot_addedToStageHandler(event:Event):void {
var main:Main = new Main();
theRoot.addChild(main);
}
Main is my main AS Class in which I add the button from the SWC to.
• I am also waiting for the added_to_stage event to fire before adding the button.
addEventListener(Event.ADDED_TO_STAGE,Init);
Try this:
I've been building as3 projects, without use of flex or mx, waiting for ADDED_TO_STAGE event before
starting the application.
I can't open TestButton2.fxp; both Flash Catalyst CS5 and CS6 come up with the following dialog when I try to open it:
I've just created a new Project, using Flex with MX & Spark, and including the swc with the button.
The one and only source (mxml-) file now contains:
<?xml version="1.0" encoding="utf-8"?>
<s:WindowedApplication xmlns:fx=""
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:
<fx:Declarations>
<!-- Place non-visual elements (e.g., services, value objects) here -->
</fx:Declarations>
</s:WindowedApplication>
I don't know how to deal with flex/mx/spark/ mxml-files.
Could you paste the full content of the mxml file that you use to test the button, such that
I can copy it and try for myself as well?
Can you open it directly in Flash Builder?
File-> Import Flash Builder project
Ah, I see - that works indeed. Learned another thing :-).
Indeed, your project, using flex/mx/spark does show the pink of the button that is contained in the swc properly.
That's good. On the first sight, it looks like FB47 can be used using these addons. Many thanks.
(I'll need to figure out whether it comes at the price of additional memory usage, and whether subsequent builds for
iOS and Android work without problems - yet if the answers to these questions are positive, you surely have
saved me - anyway - many thanks!!)
Retrying as as3 project still does not work.
From our exchange above, I'm not 100% certain: have you successfully tested with normal as3 projects as well?
(The question is whether something triggered a "fix" on your system, or whether that did not happen and our results
are now in sync.)
Same code, same button, but built as normal as3 project:
Hi Marius,
Your fxp did not show the filter.
I'll note, for whatever reason, it took a long time to build to the debugger..?
I have not successfully set up an actionscript project to use filters.
I need to use a Flex Project so I can use some of the components available.
Which is too bad as I am having a helluva time getting SWC items ( made in Fl Pro ) to be handled by Classes in FB.
Glad to hear you have a potential solution for your project.
Back to the Battle! | http://forums.adobe.com/thread/1132057 | CC-MAIN-2014-15 | refinedweb | 2,898 | 75.5 |
How can I check how much video RAM is being used/free (in this case when using OpenGL)? The first thing I can think of is to find a way to check the amount of memory used by each image or whatever else is related to graphics, and with it calculate used/free memory. But this is probably incorrect and/or ineffective. Do graphics APIs have this functionality built in?
#1 Members - Reputation: 454
Posted 20 March 2013 - 05:13 PM
#2 Members - Reputation: 663
Posted 20 March 2013 - 05:17 PM
If you're using GL2+, just do this:
int getFreeVideoMemory()
{
    int availableKB[4];
#ifdef GL234
    if(GLEW_NVX_gpu_memory_info)
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &availableKB[0]);
    int temp = GLEW_ATI_meminfo;
    if(GLEW_ATI_meminfo)
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, availableKB);
    return availableKB[0];
#endif
    return 0;
}
#3 Members - Reputation: 454
Posted 20 March 2013 - 05:27 PM
Does that work with GL 3.2?
#4 Members - Reputation: 663
Posted 20 March 2013 - 05:33 PM
It *should*. I use it with GL 3.3 just fine.
#5 Crossbones+ - Reputation: 2705
Posted 20 March 2013 - 09:18 PM
Wow, didn't know that, nice one
Edited by jbadams, 04 April 2013 - 05:45 AM.
Restored post contents from history.
#6 Members - Reputation: 454
Posted 21 March 2013 - 10:27 AM
I have 1GB video RAM but I always get a different result. I added
std::ofstream outfile;
outfile.open("log.txt", std::fstream::out | std::fstream::app);
outfile << (int)availableKB << std::endl;
outfile << std::endl;
outfile.close();
between #endif and return 0;
and the output is
2554664  // first run
2488132  // second run
3799208  // third run
I called the function only once after initializing opengl.
Is there something that I am doing wrong?
What does #ifdef GL234 do?
#7 Members - Reputation: 663
Posted 21 March 2013 - 10:38 AM
Which brand is your card?
Try initializing availableKB to -1 or something to make sure it's being set. Also note that it's an array, try printing (int)availableKB[0]
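(Side note for readers: the wildly varying numbers in the log above are what you get when the array's address, rather than its contents, is written out; casting the array name to an int yields a pointer value. A minimal sketch of the distinction, with helper names of my own:)

```cpp
#include <cassert>
#include <cstdint>

// Streaming (int)availableKB writes the ADDRESS of the array's first
// element, not the stored value, which is why the logged numbers changed
// on every run. availableKB[0] is the actual reading.
uintptr_t as_logged(const int* availableKB) {
    return reinterpret_cast<uintptr_t>(availableKB);  // pointer, varies per run
}

int first_value(const int* availableKB) {
    return availableKB[0];  // the value the extension filled in
}
```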
#8 Members - Reputation: 454
Posted 21 March 2013 - 11:22 AM
Which brand is your card?
I am using ATI, but I was looking for a universal solution regardless of brand.
Try initializing availableKB to -1 or something to make sure it's being set. Also note that it's an array, try printing (int)availableKB[0]
If I set availableKB to anything, the value will not change afterwards, and printing (int)availableKB[0] gives a negative value.
#9 Members - Reputation: 663
Posted 21 March 2013 - 12:45 PM
There is no universal solution. You have to use manufacturer/brand/? specific extensions.
If you use this, does it print out either brand?
int getFreeVideoMemory()
{
    int availableKB[] = {-1, -1, -1, -1};
    if(GLEW_NVX_gpu_memory_info)
    {
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &availableKB[0]);
        printf("NVidia card\n");
    }
    int temp = GLEW_ATI_meminfo;
    if(GLEW_ATI_meminfo)
    {
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, availableKB);
        printf("ATI card\n");
    }
    return availableKB[0];
}
#10 Members - Reputation: 454
Posted 21 March 2013 - 01:15 PM
no it doesn't print out anything
#11 Crossbones+ - Reputation: 12257
Posted 21 March 2013 - 01:42 PM
There are plenty of factors that could cause the amount of available video RAM to vary. On a modern OS the window manager may be using some of the video RAM. On windows Vista or higher video RAM is virtualized so results can vary. Video RAM may be fragmented from previous runs so free space can be affected. With anything that uses a dedicated/shared memory architecture (e.g. Intel) you'll get a figure that may be just the amount of dedicated RAM left (i.e. quite low) or may be the total of dedicated + available system memory, or anything in between. Basically this isn't a figure that you should be basing crucial decisions in your program on; instead it seems better to me to use rough ballparks (e.g. with GL 3.x level hardware things are fairly well standardized at the ~1gb mark or higher).
It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.
#12 Members - Reputation: 454
Posted 21 March 2013 - 03:15 PM
Turns out that ATI Radeon HD 6450 1GB DDR3 for whatever reason doesn't have support for
GLEW_ATI_meminfo
I changed GLEW_ATI_meminfo to GL_ATI_meminfo and now the program prints out brand of the card.
There is no support for GL_ATI_meminfo or GL_NVX_gpu_memory_info but the code seems to work no problem with this.
But I still get negative value.
Edited by proanim, 21 March 2013 - 04:14 PM. | http://www.gamedev.net/topic/640599-check-video-ram-usage/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024 | CC-MAIN-2016-44 | refinedweb | 756 | 53.92 |
A customer has asked this question: he has a network setup in which DNS name-to-address translation is OK, but address-to-name translation is not allowed. In this setup he has used an applet to connect to a server and ran into a huge delay because of this code in SocketPermission.java, method impliesIgnoreMask:

    // XXX: if all else fails, compare hostnames?
    // Do we really want this?
    if (this.cname == null) {
        this.getCanonName();
    }
    if (that.cname == null) {
->      that.getCanonName();
    }
    return (this.cname.equalsIgnoreCase(that.cname));

The line marked with the arrow, which does a reverse DNS lookup, hangs for a long time. Since the comment says "do we really want this", it makes me think: can this be considered for removal? We have already compared IP addresses above this.
A React audio player with UI
React H5 Audio Player
React audio player component with UI. It provides time indicator on both desktop and mobile devices.
Audio player component that provides consistent UI on different browsers.
Flexbox design with CSS shapes. Mobile friendly. No extra dependencies.
Supported browsers: Chrome, Firefox, Safari, Opera, Edge, IE (≥10)
Breaking change from 0.x to 1.x
In 1.x, we use the prop-types package instead of accessing PropTypes directly from React, and dropped support for older React versions accordingly.
Installation
npm i --save react-h5-audio-player
Usage
import AudioPlayer from "react-h5-audio-player";

const Player = () => (
  <AudioPlayer
    autoPlay
    src=""
    onPlay={e => console.log("onPlay")}
    // other props here
  />
);
Props
HTML Audio Tag Native Attributes
More native attributes detail: MDN Audio element
Other Props
hidePlayer {Bool} [false]
Indicates if the audio player is hidden.
progressUpdateInterval {Number} [500]
Indicates the interval that the progress bar UI updates.
listenInterval {Number} [1000]
Indicates how often to call the
onListened prop during playback, in milliseconds.
onAbort {Function (event)}
Called when unloading the audio player, like when switching to a different src file. Passed the event.
onCanPlay {Function (event)}
Called when enough of the file has been downloaded to be able to start playing.
onEnded {Function (event)}
Called when playback has finished to the end of the file. Passed the event.
onError {Function (event)}
Called when the audio tag encounters an error. Passed the event.
onListen {Function (currentTime)}
Called every
listenInterval milliseconds during playback.
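As a small illustrative use (the helper name below is mine, not part of the library), the currentTime value handed to onListen every listenInterval can be formatted for a custom time readout:

```javascript
// Hypothetical helper for an onListen handler: format the currentTime
// value (in seconds) as m:ss for a custom display.
function formatListenTime(seconds) {
  const m = Math.floor(seconds / 60);
  const s = Math.floor(seconds % 60);
  return `${m}:${String(s).padStart(2, "0")}`;
}

// e.g. <AudioPlayer listenInterval={1000}
//                   onListen={t => console.log(formatListenTime(t))} />
```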
onPause {Function (event)}
Called when the user pauses playback. Passed the event.
onPlay {Function (event)}
Called when the user taps play.
onDragStart {Function (event)}
Called when the user start dragging the time indicator. Passed the event.
onDragMove {Function (event)}
Called when the user is dragging the time indicator. Passed the event.
onDragEnd {Function (event)}
Called when the user finish dragging the time indicator. Passed the event.
UI Overwrites
React H5 Audio Player provides built-in class names for developers to overwrite.
For example:
// In a SASS or LESS file
.react-h5-audio-player {
  .toggle-play-wrapper {
    .toggle-play-button {
      // Remember to use !important to overwrite inline styles.
      background-color: red !important;
    }
  }
}
You can find more class names by inspecting element on you browser.
To be compatible with some old browsers, you can add prefixes to the flex container:
.react-h5-audio-player {
  .flex {
    display: -webkit-box;
    display: -webkit-flex;
    display: -ms-flexbox;
    display: flex;

    .toggle-play-wrapper {
      flex: 1 0 60px;
      -webkit-box-flex: 1 0 60px;
      -moz-box-flex: 1 0 60px;
      -ms-flex: 1 0 60px;
    }

    .progress-bar-wrapper {
      flex: 10 0 auto;
      -webkit-box-flex: 10 0 auto;
      -moz-box-flex: 10 0 auto;
      -ms-flex: 10 0 auto;
    }
  }
}
Advanced Usage
Access to the audio element
You can get direct access to the underlying audio element. First get a ref to ReactAudioPlayer:
<ReactAudioPlayer ref={c => (this.player = c)} />
Then you can access the audio element like this:
this.player.audio | https://reactjsexample.com/a-react-audio-player-with-ui/ | CC-MAIN-2019-35 | refinedweb | 484 | 52.15 |
betelgeuse@pena ~/test/java $ java -version
java version "1.6.0"
Java(TM) SE Runtime Environment (build 1.6.0-b105)
Java HotSpot(TM) Server VM (build 1.6.0-b105, mixed mode)
betelgeuse@pena ~/test/java $ javadoc -source 1.3 Hello.java
Hello.java:3: as of release 1.4, 'assert' is a keyword, and may not be used as
an identifier
(use -source 1.3 or lower to use 'assert' as an identifier)
private void assert(){;}
^
1 error
betelgeuse@pena ~/test/java $ cat Hello.java
public class Hello
{
private void assert(){;}
public static void main(String[] args)
{
System.out.println("Hello World!");
}
}
This makes old packages fail when trying to build javadocs using sun-jdk-1.6. I
reported this upstream so let's see what happens.
Would be nice if they also respected -source <=1.4 and enum keyword :( same issue, but happens in 1.5 too, I think.
Same seems to happen with sun-jdk-1.7 from experimental-overlay.
(In reply to comment #1)
> Would be nice if they also respected -source <=1.4 and enum keyword :( same
> issue, but happens in 1.5 too, I think.
was wrong, 1.5 is fine
Subject: 6507179: javadoc -source 1.3 does not work with jdk6
Date: Fri, 30 Mar 2007 15:38:41 -0700
From: Mark Reinhold <mr@sun.com>
To: betelgeuse@gentoo.org
CC: scott.seligman@sun.com, tom.marble@sun.com
Petteri,
A fix for the javadoc bug in Sun's JDK that you asked me about was
checked in today and will show up in 6u2, due to ship in June.
Scott Seligman (cc'd) did the actual work -- thanks Scott!
- Mark
Seems to be fixed (tried it with assert) in sun-jdk-1.6.0.02_alpha02 that's in
java-experimental overlay. That confirms the resolution of upstream bug.
1.7 (recent sun-jdk-1.7.0.0_alpha11) is still broken though... might be useful
to report it too.
1.6.0.02 fixes this and I removed all the workaround dependency atoms.
README
This is a driver for the Bosch BME280 temperature/pressure/humidity sensor, for use with MicroPython-based boards.
About the BME280
The Bosch BME280 Environmental Sensor is a combined temperature, pressure and humidity sensor. It can communicate via I2C or SPI; this driver uses I2C.
See the datasheet at for details.
Using the library
Use ftp to copy bme280.py to the flash or flash/lib directory on the board. Then:

import machine
import bme280

i2c = machine.I2C(0, pins=('GP11', 'GP10'))
bme = bme280.BME280(i2c=i2c)
print(bme.temperature, bme.pressure, bme.humidity)
Detailed usage
The temperature, pressure and humidity properties are convenience functions that provide human-readable string values to quickly check that the sensor is working. In practice, the methods to use are:
get_temperature(): returns the temperature in hundredths of a degree celsius. For example, the value 2534 indicates a temperature of 25.34 degrees.
get_pressure(): returns the atmospheric pressure. This 32-bit value consists of 24 bits indicating the integer value, and 8 bits indicating the fractional value. To get a value in Pascals, divide the return value by 256. For example, a value of 24674867 indicates 96386.2Pa, or 963.862hPa.
get_humidity(): returns the relative humidity. This 32-bit value consists of 22 bits indicating the integer value, and 10 bits indicating the fractional value. To get a value in %RH, divide the return value by 1024. For example, a value of 47445 indicates 46.333%RH. | https://bitbucket.org/oscarBravo/wipy_bme280 | CC-MAIN-2018-43 | refinedweb | 266 | 51.85 |
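The fixed-point conversions described above can be wrapped in one small helper (the function name is mine, not part of the driver):

```python
def bme280_to_units(raw_temp, raw_press, raw_hum):
    """Convert the driver's raw integers to degrees C, Pa and %RH.

    raw_temp is in hundredths of a degree; raw_press is 24.8 fixed point
    (divide by 256 for Pa); raw_hum is 22.10 fixed point (divide by 1024
    for %RH).
    """
    return raw_temp / 100, raw_press / 256, raw_hum / 1024

# Using the example values from the text:
# bme280_to_units(2534, 24674867, 47445) -> (25.34, 96386.199..., 46.333...)
```

On MicroPython ports without floating-point hardware you may prefer to keep the raw integers and only convert like this for display.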
Are you using the new System.Threading.Tasks.Dataflow.dll library, either from its CTPs or from the .NET 4.5 Developer Preview or Beta? We’d love to hear about it, and if you have time, what your experiences have been (good or bad). What kind of solution are you building, and how are you using TPL Dataflow in it? Has the library helped from a code maintenance perspective? From a performance perspective? Are there features you’d like to see added in the future?
Please email me at stoub at microsoft dot com. We’re looking forward to hearing from you. Thanks in advance!
Hi Stephen,
As soon as the version is released, I'm ready to use it.
Bu before that, I have question.
Is there a way to plug a TPL Dataflow block - a BufferBlock for instance - into SQL Server Service Broker to have a persistent and reliable system?
Any work on it ?
Regards,
Sébastien RICHEZ
Sébastien, thanks for the comment. Regarding your question, you can implement the ITargetBlock<TInput> and ISourceBlock<TOutput> with whatever implementation you like, including wrapping other systems.
A few years ago we @ InKnow wrote a very sophisticated and powerful data-directed server back-end for our Ontology services platform using the CCR; it was a great experience and the CCR in many cases did the job for us. We are in the process of re-writing the whole platform using Dataflow and we are thrilled with the API thus far. We had grown accustomed to the CCR way of creating compositions of dataflow; we were worried that we would lose that with TPL and Dataflow, but we can do the same type of things with TPL and Dataflow, no problem. We decided to keep the semantics of Causalities from CCR and have an elegant implementation with Dataflow. We have also redesigned a Predictive Modeling and Decision Support back-end with TPL and Dataflow as well.
Thank you so much for this great technical contribution!
InKnow Technologies, thanks for taking the time to share your experiences!
Hi Stephen – thanks for the great work you have done. I am busy getting to grips with TPL Data flow (did something similar for Java in 98, with pipes and filters, data pumps etc, so I have a fair idea of the complexities you simplified for us developers) and was wondering if we could evolve a Visual Quick Reference Poster for these blocks… I did something as a reminder for me, but I guess other noobs could also benefit… screencast.com/…/4yCb0Wf4B
What I want to do requires a lot of visuals that are not readily available, would be nice to have access to some standard icons for this technology.
I have started a project for USAID using these blocks (deadlines are crazy); I had to reverse engineer the wire protocol, and am now ready for the proxy extender as well as Tee integration into the Azure cloud.
If you have any feedback visuals…would love to hear from you.
Hi Stephen,
I've performed some changes to a my 'bar-code generator program' by using TPL and Dataflow and the things seem to go well.
In practice I reduced sensibly the code involved to keep the flow as asynchronous as possible and even while I'm testing it more and more,
I have the sensation that it is the right way to keep the thing more reliable and simple at best .
I used some of your technique I read in "async tutorials" like TPL with Async CTP and other.
I recon this program was a bit complicated , but now it's incredibly easy to read, more responsive and fast.
Thanks a lot for all that, I think this is the future.
stefano
Marineheiro: Thanks for the poster! I'll take a look and will be in touch if I have any feedback.
robo.warrior2: Thanks for sharing the details of your project. I'm glad you've found success with it.
TDF is awesome! I really hope the move out of the BCL into NuGet doesn't mean there will be less focus on the library and that we'll lose MSDN docs and things like that. Right now it's a little bit difficult to find info on TDF, especially the latest versions (post-CTP).
Maybe that wil change though 🙂
I'm currently using Dataflow for a web testing app, where a limited number of browser instances are pooled to do a bunch of page tests (navigating to a URL, clicking around).
I figured that is a perfect match for Dataflow (right?)
aL,
Thanks for your feedback. The move out of the BCL into NuGet is to facilitate a faster turnaround of features, so you should expect to see more focus, not less. The support and documentation requirements of the OOBs are the same as for any framework piece, so you will not be losing the MSDN documentation. Please let us know what issues you are having difficulty finding info on, and I can point you to the right resources.
Thanks
-alok
It's been just over a year that we've been using the TPL and TDF, and each day that we design and code with it we learn so much more. We use Dataflow in our Azure compute services to process millions of knowledge sources per second. It is also our in-process messaging paradigm between agent-process-hub (Reasoning Blackboard). Now we are experimenting with TPL and TDF in our WPF clients.
We need high-speed asynchronous messaging between Views, ViewModels and our TPL/TDF-centric Repository. We are developing software in the bio-engineering space, where objects can interact freely across domains, and sending messages to objects and controllers is a natural bit of the process.
We have had some difficulty in debugging and hope that some of the new parallel task debugging capabilities will reduce some of our stress. Our team is in the process of writing some white papers with regard to our experiences with the TPL/TDF, WCF and WPF.
So far it looks promising! We will keep you posted.
Tavi Thurmond
InKnowWorks, Corp.
Chief Software Architect
I have used this namespace to a great extent. This feature is just wow.
Drag and Drop problem! - gabo, Jul 8, 2009 11:14 AM
Hi,
I have a "rich:dragSupport" inside a "div" which is inside a "a4j:outputPanel". At first the "div" is not rendered, and after the user clicks on a link y reRender the "a4j:outputPanel" with the "div" inside and the "rich:dragSupport" in the "div", but I get the following error on Firefox:
"drag: Element with [j_id13:j_id179:lsForm:j_id189] ID was not found in the DOM tree. Probably element has no client ID or client ID hasn't been written. DnD's disabled. Check please!"
Does anyone have an idea why this is happening?
Thanks!
1. Re: Drag and Drop problem! - ilya_shaikovsky, Jul 9, 2009 6:49 AM (in response to gabo)
add actual code please.
2. Re: Drag and Drop problem! - scic, Jul 14, 2009 8:52 AM (in response to gabo)
I have a similar problem (richfaces 3.3.1.GA):
I have a dataTable with dropSupport around it and dragSupport inside it:
<h:form>
  <rich:dragIndicator
    <f:facet
      <h:graphicImage
    </f:facet>
  </rich:dragIndicator>
  <rich:dragIndicator
  <h:panelGrid
    <rich:panel
      <rich:dropSupport
        <rich:dataTable
          <f:facet
            <h:outputText
          </f:facet>
          <rich:column
            <rich:dragSupport
              <rich:dndParam
              drag1</rich:dndParam>
            </rich:dragSupport>
            <h:outputText
          </rich:column>
        </rich:dataTable>
      </rich:panel>
    </h:panelGrid>
</h:form>
then I have a backing bean like this:
public class BugBean {

    // ---- state
    private List<String> _list;

    // ---- constructor
    public BugBean() {
        setList(new LinkedList<String>());
        getList().add(new String("aa"));
    }

    public void setList(List<String> _list) {
        this._list = _list;
    }

    public List<String> getList() {
        return _list;
    }

    public void dropListener(DropEvent dropEvent) {
    }
}
I filter the list by the value "s" (which changes the datatable entries from aa to none in the example above), then hit backspace and s again and again REALLY fast over several seconds. Thus I change the filtering of the list several times a second (from aa to none and back). Eventually I get a javascript error message like the one in the first post:
drag: Element with [j_id718:table:0:j_id723] ID was not found in the DOM tree. Probably element has no client ID or client ID hasn't been written. DnD's disabled. Check please!
If you cannot type fast enough you can increase the object size, so the update takes longer and the error is more likely to occur.
I guess the different "ajax regions" are no longer synchronized after a while because one update is faster than the other.
As a temporary solution I just commented out the js error messages in the js files and it works now for me. (Since the update of the other ajax region is eventually done. there is no furhter error).
But may be this should be addressed more professionally. ;-)
I also tried several combinations of rerender, limittolist, and ajaxSingle but maybe I have just not found the right one.
3. Re: Drag and Drop problem!ilya_shaikovsky Jul 15, 2009 6:17 AM (in response to gabo)
you should read the docs on a4j:queue and use it to avoid problems caused by requests concurrency | https://developer.jboss.org/message/61439 | CC-MAIN-2020-29 | refinedweb | 524 | 61.97 |
Facebook Introduces Hack: Statically Typed PHP 230
Posted by Unknown Lamer
from the sml-and-php-fall-in-love dept.
from the sml-and-php-fall-in-love dept..
Spy (Score:2)
So many bugs (Score:5, Insightful)
So many of the bugs that have tripped me up over the years would have been solved by simply having static typing.
Re: (Score:3, Insightful)
Re:So many bugs (Score:5, Insightful)
It allows you to free your mind, express your inner coding monkey artist.
And then you get to debug...
Re: (Score:2)
I get sick of spending half my time dicking around with static types, casts, etc and spinning my wheels chasing down type errors
If you spend half, or any appreciable part of your programming time dealing with static typing issues, then you're doing something seriously wrong. Occasionally the compiler will be a pain in the ass about it, but most of the time static typing should require nary a thought. If it's otherwise, you're not thinking about types clearly. The static typing is there to catch you when you screw up. It also serves as useful documentation for the next poor schmuck that has to look at your code, and for good measure
Re: (Score:2)
So do you rely on implicit or automagical type conversions? That doesn't do much for error checking. No, I don't do web programming, but I've worked on code that had to do a lot of string to whatever conversions. The biggest pain, but one of the most important things, is detecting errors in the string format. You need an explicit function to do that right, and a way to handle the errors. Once you have the explicit function, what's so hard about declaring that it returns an object of class ObiWan?
Re: (Score:2)
Dynamic typing frees your mind to think about the problem at hand and the best solution to that problem
Quite the contrary. It frees your mind to think about the problem incorrectly, then requires your mind to churn through many incorrect ideas before finally settling on the correct one. Meanwhile, static typing deals with a large chunk of correctness for you, and gets you to the correct solution faster by pointing out the errors before you hit them at runtime.
I get sick of spending half my time dicking around with static types, casts, etc
If you're casting, you're not thinking about the types correctly yet. Learn to code a bit better.
I'll take dynamic typing any day and be done twice as fast,
No, you won't. Because every time the static type
Re: (Score:2)
Otherwise, you'd have to run your program, and discover that type error at runtime.
Don't be absurd. Type problems don't show up during testing - they show up 3 weeks after the code has been released, and then only in situations that are practically impossible to reproduce. That makes debugging more interesting. Any idiot can find type problems when the compiler does it for you.
Re: (Score:3)
Usually the compiler will scream "Hey fuckwad, your integer array ain't a string! Error on line 205", and then you dutifully go correct the error. Much better than a dynamic language that happily takes your screwup seriously and you have to follow a debugger chain through half a dozen nested functions to find where you buggered up.
Re: (Score:3)
when I work with dynamically typed languages I tend to spend similar amounts of times if not more figuring out what really is and is not supported with this dynamic object in front me
Much worse than writing dynamically typed code is reading it. If I write function foo in a dynamically typed language, I know it's supposed to return an integer. If I'm reading somebody else's code, I often winding up guessing that an integer makes sense. Of course it may return a float or a string on odd Tuesdays when the moon is full. Much nicer for it to explicitly say it returns an integer, and have the compiler check that that's the truth.
Re: (Score:3)
Re: (Score:3)
Here's an example:
The null pointer dereference.
With a properly set up type system, you express one type as the non-null pointer. You then express a second type - the maybe type, which describes something that is maybe something else. Now you use the Maybe Pointer as the result of allocation. Now in order to use the pointer, you need to get it out of the Maybe or the type system will complain, which enforces that you actually check for null before dereferenecing. It proves that your program will never de
Re: (Score:2)
A goodly portion of the young developers don't really know what the word "compile" means.
Re: (Score:2)
Instead of listening to your argument, they just dismiss you for being old (you know, early 30s) and not understanding the "new" technology.
You could point out that dynamic typing was introduced one year after static typing. Dynamic typing is state-of-the-art, circa 1958.
hate the name (Score:5, Insightful)
"Hack" as a language name? Really?
People are going to explain this at dinner parties. People who kind of understand that programming is more than being good at operating a computer as an end user but don't really know the difference between sysadmin, devops, programmer, business analyst, and DBA let alone what those roles really do are going to ask questions. Those questions will be things like "what kind of programming?", "what technologies do you use?", and "what are you working on right now?" The answer will be something about putting together a quick Hack program to change values in a database, and then it gets awkward.
Plus, did they consider at all how easily this will get confused with Haxe?
Re:hate the name (Score:5, Funny)
"The answer will be something about putting together a quick Hack program to change values in a database"
I take it you don't get invited back to dinner very often.
Re: (Score:2)
"I'm a hack hacker who hacks Hack."
Re: (Score:2)
I'm just waiting for its new socket library to be called NetHack.
Re:hate the name (Score:5, Funny)
Hey, three focus groups chose this over "Kludge".
Re: (Score:2)
It shows they have a sense of humor, given the effort it must have taken to make a 'better PHP'.
Re: (Score:2)
Re:hate the name (Score:5, Insightful)
Re: (Score:2)
It's actually very clever marketing. Now everybody that searches "facebook hack" will get this as the first result. They get lots of hits for their new language and obfuscate any articles teaching people how to exploit FB accounts. Win-win.
Uh, people exploiting Facebook?
Ironically, you have that completely backwards.
Don't feel bad for making that common mistake. The billion other users did too.
Re: (Score:3)
this is my maxim: Geeks should not be allowed to name the things they create.
"Hack" as a name for a programming language is egregious. It's like naming your newborn baby "Wipe"
I wish my maxim weren't true, but it is.
Re: (Score:2)
> "Hack" as a language name? Really?
Because "Brainfuck" was already taken.
Re: (Score:2)
Okay, stupid name aside, this is awesome. I've never had a single good thing to say about Facebook or Mr. Zuckerberg, but this could totally change that. Lots of devs disparage PHP, but they're all idiots -- PHP is heavily used because it's heavily useful. I haven't used HACK yet, but if it's not a buggy piece of junk might truly be great. I've yet to find a language that lets me go dynamic when I'm prototyping but gradually type when I see fit. So...Sweet!
That said, static typing isn't all it's cracked up
PHP blew chunks until about 2 years ago. PHP =Lego (Score:2)
> Lots of devs disparage PHP, but they're all idiots
or they haven't looked closely at the newest changes to the language in the last two years.
Or they are talking more about the "PHP community", thousands of "scripters" who use PHP because it's easy to build things in PHP, in the same way that it's easy to build things with Legos.
I helped write the PHP certification test, which suggests I know a little something about PHP. My PHP code is used by many large universities. I could go on, but suffice to s
Sarcastically Typed (Score:5, Funny)
We really need a sarcastically typed language. That would be truly awesome.
Re: (Score:2)
def c$ as string(as.IF!)
Re: (Score:3, Interesting)
I'm pretty sure that's Perl. You can define something first as a scalar, then refer to it as an array, because HA HA I WAS JUST KIDDING!
You can also take a single variable and make it behave entirely differently depending on context. As a scalar, it works fine, but if you try to refer to it as an array, rm -rf
/.
Not saying you'd want to do those things, but you can.
Re: (Score:2)
Perl can be extremely frightening sometimes. Still useful though!
Re: (Score:3)... [perl.org]
Re: (Score:2)
Design by contract? (Score:5, Insightful)
This sounds a bit like layering design by contract on top of a typically dynamically typed language rather than being a strictly statically typed language. It's an interesting approach and would seem to achieve their goals of faster but more robust development.
Nope (Score:5, Insightful)
In fact, most PHP files are already valid Hack files.
No, no, no, and no.
The single biggest problem with PHP is the tendency for old code and old programmers to keep their bad habits around when moving to new projects. PHP lacked vital modern features (like static typing and namespaces) for so long, and it's evolved so many workarounds (like magic quotes), that programmers have learned the wrong way to accomplish basic tasks. Now they have a new language, supporting the right way to do these things... but the old and broken ways still work. Sure, there will be a few programmers that will use the new way and be thrilled about the good technique, but then time crunches will set in, and code reviews will be rushed (or nonexistent), and those old ways will creep in, bringing the bugs with them.
Backwards-compatibility with a broken language is a great way to improve a new language's adoption, and a terrible way to build a new language's reputation.
Re: (Score:2)
PHP lacked vital modern features (like static typing
Modern features like static typing.
Modern features like static typing...
Seriously?
Re: (Score:2)
Re: (Score:2)
The whole point of this project is back-compat with PHP. If you don't need that, then there are already half a dozen better languages which have all those other new features...
Re: (Score:2)
Fine. s/magic quotes/mysql_real_escape_string()/g... or if you prefer, preg_replace('/magic quotes/g', 'mysql_real_escape_string()', $post);
That's deprecated too, but I'd bet there's still folks out there clinging desperately to it and using it daily in production code.
Also, you should consider that I don't keep track of PHP's changes, because they'd have to change most of the language for it to be salvageable. I'm not trolling with outdated FUD; I'm arguing with one almost-outdated fact. It's almost outdat
Variadic SQL queries (Score:2)
Shared hosting conservatism (Score:2)
They were removed completely as of PHP 5.4.0.
That's fine so long as all users of your application are on hosts that offer PHP 5.4.0 or later. Shared hosts tend to stay on old PHP, possibly with backported security fixes, so as not to expose their customers to breaking changes. They also tend to limit the extent to which customers can configure the web server, PHP, and database. I had to wait for my annual renewal in order to move my personal web site from another host to WebFaction in order to get halfway recent PHP.
Re: (Score:2)
Sounds like a good band-aid for PHP codebases (Score:4, Insightful)
Every few months someone announces a new fad language despite them rarely bringing anything new to the table, or the new things they do bring not being significant enough to warrant switching from some other well-established one.
I'm actually happy with this one, because it serves an easier to justify purpose: migrating your existing PHP codebase and developers to something that is immediately better and familiar.
Ugly (Score:2)
I guess it wouldn't hurt to have static typing in PHP, but for the love of god, why not just pick a more standard syntax?
Some of their examples:
?hh
...
function f1(): ?resource {
}
public function bar(): (string, string) {
Might look good in some languages, but in PHP I would really expect that type before the function name.
Of all the things to fix... (Score:2)
I'd have thought the horrible global function namespace would have been a top priority.
Seriously [twimg.com], what is this, C?
Wait, PHP had no lexical scoped closures? (Score:2)
Really? WHAT YEAR IS THIS?
Static languages coming back (Score:2)
Microsoft added static typing to Javascript (with Typescript). Now Facebook is adding static typing to PHP. After a few years of dynamic languages being in vogue, is the pendulum swinging back to statically typed languages?
Re: (Score:2)
Yes, mainly because there are literal mountains of shit code that no one can debug, and all of a sudden the notion that maybe having basically one variable type is a bad thing.
Re: (Score:2)
Difference is the static typing here is "opt-in". I think that helps. "If you like your current type practices you can keep them".
I kinda want more specific types. (Score:2)
I've contemplated a language/library with extra strict typing for doing real-world calculations.
For example, if floating point variables x and x were be classed as 'Lengths', and you stated "a = x * y;" a would have to be classed as an Area.
If you then stated "a *= 2;" a would still be an area (of twice the size), but "v = a * x;" would return a Volume class, and "a = a * x;" would be a compile time error (trying to assign a Volume result to an Area class)
other included types would be Time intervals, with a
Re: (Score:2)
Some languages do have what you want, e.g. F# [microsoft.com]. Others allow this to be done as a library, like C++ [boost.org].
But this feature is sufficiently niche that I wouldn't expect it in PHP.
Re: (Score:2)
Not a bad idea. If your classes have no virtual methods and they don't allow mixins or other crazy runtime dynamic shenanigans, then in theory the compiler can optimize them down to un-boxed numbers. I wonder how man OO languages are very good at that though. The hard part might be the base class. If you don't even have un-boxed numbers and the base class has virtual methods... you're kind of stuck with something less than optimal.
Oh and of course time*length isn't velocity. length/time is. Division i
Re: (Score:2)
I've contemplated a language/library with extra strict typing for doing real-world calculations. For example, if floating point variables x and x were be classed as 'Lengths', and you stated "a = x * y;" a would have to be classed as an Area.
You want C++11. You can infer the Kg/m/s powers etc using a small amount of template hackery. The thing that C++11 adds is user defined literals, so you could type:
length a = 4m;
which gives it a syntax whih doesn't suck.
The name seems a poor choice (Score:2)
One of the first things you should consider when choosing a name for your project these days is: how relevant will search results be when people Google for it?
Sloppy implementation + bad idea = ? (Score:2)
Sorry, slop and rigor don't really combine well.
Re: (Score:2, Informative)
FaceBook is developing an in-house version of PHP that adds optional static typing and some other features.
Re:English? (Score:5, Informative)
No. Facebook is developing a new language which is syntactically similar to, and inter-operates with, PHP. Calling Hack a "version of PHP" is like calling Delphi a "version of ALGOL".
Re:English? (Score:4, Funny)
Can someone please convert the summary in english, with basic explanations?
My interpretation of the summary is: Facebook has sunk a lot of investment into a turd, and now they've determined that that turd needs polishing.
Re:English? (Score:5, Insightful)
My interpretation is that they've put a lot of work turning PHP into Java or C. Why go through all this effort when they could simply used one of the C-like strongly typed languages is beyond me. All that effort could have been put into creating a PHP-to-Java converter or something along those lines.
But hey, it's Mark Zuckerberg's bazillion dollars. If he wants to tape testicles on a eunuch, be my guest.
Re:English? (Score:5, Interesting)
Re: (Score:2)
Java has had quick edit/reload cycles since there was hot code replacement. If you see Java developers recompiling / restarting their entire project every time they make a change, then unfortunately they just don't care enough and/or are incompetent.
For projects that I'm assigned to, the first thing I look at is making sure the turnover time is below 10 seconds.. if it isn't, I fiddle with it until it is. And yes, that works for Facebook sized websites as well.
They invented a square wheel to go with their
Re:English? (Score:4, Interesting)
That's not what Object programming considers an object. It would be more like:
"A struct and all the functions that use it as the first argument" depending on the language. Some languages would have it be a pointer to the above. One of the critical aspects of an object is that is has a local scope for each instance. In C this either means you are holding it on the stack, or you've done a malloc.
That said, you can definitely do object programming in C. Existence proof is offered by valac -C which takes Vala code and emits a C equivalent. (It uses the GObject protocol to do so.) Mind you, the code that it emits is nearly unreadable, but it *is* object programming in C.
Re: (Score:3)
Why go through all this effort when they could simply used one of the C-like strongly typed languages is beyond me.
Because they already have a huge PHP codebase. That said, Twitter moved from Ruby to Scala for their back-end [theregister.co.uk]; it seems it can be done. I suspect Facebook have more of a lots-of-programmers problem: they all know PHP, and they might not all know, say, Scala.
to creating a PHP-to-Java converter or something along those lines.
You mean take PHP source and translate it into readable, maintainable Java source? That's all but impossible. They're very different languages, and source-to-source translators tend to produce pretty unreadable code when faced with that kind of task. A h, Interesting)
PHP is productive in that it can work as a fast prototyping language. Even I recognize that, and in fact use it as such.
Where it falls down is that it doesn't enforce the kind of discipline that better languages do. It is well and truly the BASIC of the 21st century.
That's not to say you cannot write good code in PHP. I do strive when I have to work in PHP (which is more than I would like) to write well-formed code, use newer language structures and so forth. But still, and maybe it's because I like the crutch, I just feel better coding in statically typed languages, and to be pretty blunt, if PHP is evolving into that kind of language, there are far better languages out there.
Re: (Score:3, Insightful)
"For pragmatic-minded people, PHP is an extremely productive language to work in. No compiling, or waiting for compiling, no object files to mess with or get out of sync, and still relatively good speed. "
You mean, like any other interpreted languages you could choose?
"The down-side has always been that the language also had many sloppy characteristics."
You mean, like those avoided by any other interpreted languages you could choose?
Re:English? (Score:5, Insightful)
The reason I develop in PHP is because I write consumer software to deploy on web hosts, and I don't chose the web hosts, and PHP is the overwhelming 'standard'. Probably the world would be better if Python replaced PHP, but bitching about the tools people use with arguments that the users of said tools simply cannot relate to, just seems ridiculous, and annoying because it just undermines the good work some seriously talented/hardworking/worthy people are doing. There is absolutely no reason PHP cannot continue to evolve, step by step, without forcing the entire hosting industry to adapt to something else (which is never going to happen).
I just ported a project from ASP.net to PHP, and my God the problems I saw in that language. Whenever I look at Ruby code, I just see the most inelegant syntax - it just looks like it was deliberately designed to combine being cryptic in some places, with pulling in English words where self-describing layout would be better. And then there are a host of academic languages which have elegant concepts, but poor libraries, or just are too over-complex for real world use.
For what it's worth, I think the world would be better off if Javascript was replaced too (I get annoyed by that much more than PHP actually, because some basic programming constructs are just hacks, and it is literally impossible to write elegant looking code).
Re: (Score:2)
I'll remember that the next time I'm navigating the horror that is the PHP function library.
Re: (Score:2)
I think I just went blind!!!!
Re:English? (Score:4, Insightful).
Re: (Score:2)
Wow, they couldn't port a simple web app into a mature language in 10 years. You have to wonder what kind of programmers they hire.
Re: (Score:2, Funny)
So not much different than PHP then...
(ducks)
Re:English? (Score:5, Funny)
So not much different than PHP then...
(ducks)
It generates Perl?
Re: (Score:2, Troll)
Re: (Score:3)
convert the summary in english
They reinvented soft typing.
;-)
Re: (Score:3)
Php (Personal Home Pages) is already something it was never intended to be. It's the "hey thats a nice feature, let me add a crappy implementation of it" language. If you mean "turn PHP into something never inteded to be" as turning it into something not absolutely horrible to work with then OK. Anything they do is going to be better than PHP. You would have to try really hard to make it worse.
People always moan about how horrible PHP is, and I always assume that the people moaning are trying to learn the language without having a basis in C because they have come straight from Ruby or some other perfectly designed load of academic twaddle nobody uses.
The reality is that PHP is like C, an amazingly flexible and well used tool. Yes, it has tons of quirks due to its slow evolution where they maintain backwards compatibility, but that is it's strength in the real world since nobody wants to rewrite
Re: (Score:2, Interesting)
that's great.. but the reason Scala is so sweet is that it does type inference. This means all your code is purposely typed even though you don't have to type all the tries (AC doesn't have to apologize for puns). This is going to be a nightmare where your last five dynamic functions his the bug that kills you.
It's so sad that the fact that Mark got extremely lucky means that all this excellent computer science is wasted on a cesspool like PHP.
Re: (Score:2)
the reason Scala is so sweet is that it does type inference
Haskell also does type inference.
Re: (Score:2)
Hack does type inference. It doesn't do let-polymorphism because that plus subtyping is formally undecidable. But you only have to declare the types of functions, not every variable inside those functions.
Re: (Score:3)
Many years ago I was a fierce opponent of static typing and loved the power of Obj-C and Python (was a NeXT/Mac head.) C++ and Java were crap (especially since Java didn't have type variables at the time.) Then I tried Haskell and my mind was duly blown. Now I'm a huge proponent of static typing, even if I still can't stand Java and avoid C++ unless necessary. IMHO Scala is the current sweet spot for statically typed general purpose programming language.
I wish there was just some voluntary static typing in python. By this I don't mean run time voluntary type checking. that makes it slower. No I mean a pre-run time filter that optimizes the
.pyc to the extent it can.
Re:static typing is awesome (Score:4, Informative)
Let me introduce you to
You are welcome.
Re: (Score:2)
"This saw sucks. I can't drive a nail with it. And what's with this useless hammer? Look at the jagged splintery bits I get when I try to cut wood with it!"
I don't get why there's a holy war between static and dynamic typing in the first place. Different tools for different problems. I don't trust any software engineer very far when they say things like "$LANGUAGE [sucks|rules] because it's [statically|dynamically] typed."
Re: (Score:2)
Then I tried Haskell and my mind was duly blown. Now I'm a huge proponent of static typing, even if I still can't stand Java and avoid C++ unless necessary.
While I have mixed feelings about Haskell as a whole, it's got the best damned type system I've ever seen. My advice to designers of new languages is just copy Haskell's type system.
Re: (Score:2)
Haskell's NIH version of SQL is tedious. Why didn't they just implement SQL?
I haven't tried SQL, but in general Haskell's libraries are one of its weaknesses.
Also, closures seem to me to violate the premise of static variables and fixed variable scoping. When you introduce closures into a language they are bound to cause all the same problems as global variables.
I don't agree. A closure should produce a new pure function (some languages allow otherwise, but not Haskell). With functions as first class objects, you can just pass and return them like data. There is nothing about that that suggests the evils of globals.
Re: (Score:2)
Actually that sounds a lot like Haskell.
BTW, why is it called Ceylon instead of Sri Lanka?
Re: (Score:2)
Misspelled Cylon.
[John]
Re: (Score:2)
Gavin was probably drinking tea at the time.
Tea from Sri Lanka, e.g. Dilmah, still uses the colonial name on its packaging.
Given that Java has coffee-related connotations...
Re: (Score:2)
Re: (Score:2)
How is this different/better than Asp.Net?
ASP.NET requires a Windows server. PHP (and presumably Hack) can be run on any cheap hosted Linux server with cpanel.
That said, ASP.NET/C# is a far better and more coherent development platform than PHP. But PHP's near-zero barrier to entry will keep it on top for the forseeable future.
Re: (Score:2)
Ive been running ASP.Net MVC stuff on a RaspPi under mono and Nginx now for several months with no issues...
Re: (Score:2)
Re: (Score:3)
Ever tried to access 8-bit byte arrays and write them to a binary file in PHP?
Yes, I have.
It's hell and it takes a lot of work to go around all the stupid dynamic typing.
No, it's not. It was simple and painless.
Had you taken a few minutes to read the documentation, you wouldn't have suffered through "hell".
Maybe programming isn't for you? Have you considered a career more suited to your talents?
Re: (Score:2)
Because all the examples and explanations I've found were nothing but "simple"
You're either not very good at searching, terrible at programming, or both.
A quick look at the documentation and you'll find handy functions like fseek, fread, fwrite, pack, and unpack. It's easier than doing the same in Java, C#, or even C.
You won't run across any problems caused by dynamic typing. What strange and unusual approach did you take?
Re: (Score:2)
You won't run across any problems caused by dynamic typing. What strange and unusual approach did you take?
Posting on Slashdot about it, apparently.
Re: (Score:2)
In any sane programming language, you can access the bytes directly as an indexed array.
You can do that in C; but you shouldn't. There are too many things you don't know:
1. Debug or release? That could change the structure layout.
2. Structure padding.
3. Byte-order.
IIRC, they call such a cast "type punning". They got away with it in network stacks on BSD; but that was a long time ago, *and* they no doubt had knowledge of all the compiler switches and CPU architecture.
For a language like PHP to have f
Re: (Score:2)
Why do we need to "pack" and "unpack" the data? In any sane programming language, you can access the bytes directly as an indexed array.
Because it's a simple way to control the endianness and low level layout of binary data. Few "sane" programming languages provide that level of control with direct memory accesses, Ada being one exception and even that requires some extra incantations to make work.
Re: (Score:2)
Wait. A single function call is a "complex work-around" in your world?
Besides, you only need pack and unpack to get an array (as specified in your silly post). You don't actually need them to read binary data from a file, modify it, and write it back to a file.
In any sane programming language, you can access the bytes directly as an indexed array.
Fun fact: You can do that in PHP as well! Use the [] operator on the string you read in to access and modify individual bytes.
What's so damn hard about reading the documentation anyway? I'd think you'd have at the very least made sure that you had
Re: (Score:2)
This is like justifying strapping a jet engine and wings to a Volkswagen beetle insisting that because it got you to the airport, well by God, it can get you airborne as well!
Re: (Score:3)
So
... What would you have changed? What would your top-priority have been?
Re: (Score:2)
Silly and irrelevant? | http://developers.slashdot.org/story/14/03/20/1924211/facebook-introduces-hack-statically-typed-php/funny-comments | CC-MAIN-2014-42 | refinedweb | 5,302 | 72.56 |
For better or worse, the data in existing JavaScript ICE Code Editor installs (like that at) store their data in browser localStorage with a single localStore key:
codeeditor. The value pointed to by the
codeeditorkey is a JSON string that lists all of the projects that have been stored.
The UI lists projects by title:
Titles are also guaranteed to be unique, so it might make sense to index the localStorage for the Dart version of ICE Code Editor by title. Doing so would make all the more sense since Dart has such a nice
HashMap-based interface into localStorage.
As nice as it might be to switch the storage strategy, I think switching an entire app from JavaScript to Dart is enough of a change. So for now, I will create a new
Storeclass that works with the existing JavaScript storage. In addition to not rocking the boat too much, this also give me the change to deploy the Dart application side-by-side with the JavaScript version so that I can test it out before making the official switch over. If I deploy to the URL, it will still have access to the same localStorage store since both the
/iceand
/ice-betaURLs resides on the same server.
Still, it makes sense for
Storeto implement the
HashMapinterface just like Dart's localStorage does—that way my code will not need to change much should I ever decide to migrate. At the risk of prematurely optimizing, I will not implement the store with the same
HashMap<String, String>signature. Instead, the values returned will be of a self-defined
Projectclass:
library ice; import 'dart:collection'; class Store implements HashMap<String, Project> { // Need code here } class Project { String name, code; bool autoupdate = true; String type = 'text/plain'; }In other words,
Storewill be responsible for serializing and de-serializing projects.
By implementing the HashMap interface, I give myself a built-in TODO list:
ice-code-editor git:(master) ✗ dart_analyzer lib/store.dart file:/home/chris/repos/ice-code-editor/lib/store.dart:7:7: Concrete class Store has unimplemented member(s) # From Map: Iterable<String> keys Iterable<Project> values int length bool isEmpty bool containsValue(Project) bool containsKey(String) Project [](String) void []=(String, Project) Project putIfAbsent(String, () -> Project) Project remove(String) void clear() void forEach((String, Project) -> void) # From HashMap: void addAll(Map<String, Project>) 6: 7: class Store implements HashMap<String, Project> { ~~~~~It makes the most sense to start with the two square bracket operators for lookup and setup.
So I start with a test:
import 'package:unittest/unittest.dart'; import 'package:ice_code_editor/store.dart'; import 'dart:html'; main() { // ... group("store/retrieve", (){ test("it can store data", (){ var it = new Store(); it['foo'] = {'bar': 42}; expect(it['foo']['bar'], equals(42)); }); }); }I am not testing the underlying implementation of localStorage here—just the interface. In fact, I may skip testing that the data persists in localStorage. Rather I may get the test passing first, then switch to localStorage while continuing to keep the test passing.
First steps, first: I need to get the test passing:
class Store implements HashMap<String, Object> { Store() { } HashMap _docs = {}; void operator []=(String key, Object data) { _docs[key] = data; } Project operator [](String key) => _docs[key]; }With that, I have my passing test:
PASS: store/retrieve it can store dataNow, it is time to vary the implementation.
Which I do with my #pairwithme pair!
Update: While pairing, it became apparent that the
Projectclass was premature. We ended up deleting it and returning just HashMaps. We also found it necessary to add a test for localStorage after all. But it works. Another successful pairing -- yay!
Day #747 | https://japhr.blogspot.com/2013/05/over-thinking-class-design.html | CC-MAIN-2016-40 | refinedweb | 608 | 58.62 |
c++ - ids - set cmake compiler to clang
In cmake, how can I test if the compiler is Clang? (2)
We have a set of cross platform CMake build scripts, and we support building with MSVC and GCC.
We're trying out Clang, but I can't figure out how to test whether or not the compiler is Clang with our CMake script.
What should I test to see if the compiler is Clang or not? We're currently using
MSVC and
CMAKE_COMPILER_IS_GNU<LANG> to test for MSVC and GCC, respectively.
A reliable check is to use the
CMAKE_<LANG>_COMPILER_ID variables. E.g., to check the C++ compiler:
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang") # using Clang elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU") # using GCC elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Intel") # using Intel C++ elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "MSVC") # using Visual Studio C++ endif()
These also work correctly if a compiler wrapper like ccache is used.
As of CMake 3.0.0 the
CMAKE_<LANG>_COMPILER_ID value for Apple-provided Clang is now
AppleClang. To test for both the Apple-provided Clang and the regular Clang use the following if condition:
if (CMAKE_CXX_COMPILER_ID MATCHES "Clang") # using regular Clang or AppleClang endif()
Also see the AppleClang policy description.
Just to avoid any mispelling problem, I am using this:
if (CMAKE_CXX_COMPILER_ID MATCHES "[cC][lL][aA][nN][gG]") #Case insensitive match set(IS_CLANG_BUILD true) else () set(IS_CLANG_BUILD false) endif ()
For making the regex case insensitive, I tried everything here without success (doesn't seem to be supported in CMake). | https://code.i-harness.com/en/q/994aa2 | CC-MAIN-2020-10 | refinedweb | 248 | 58.92 |
Last Updated on August 27, 2020
Sentiment.
Kick-start your project with my new book Deep Learning With Python, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Update Oct/2016: Updated for Keras 1.1.0 and TensorFlow 0.10.0.
- Update Mar/2017: Updated for Keras 2.0.2, TensorFlow 1.0.1 and Theano 0.9.0.
- Update Jul/2019: If you are using Keras 2.2.4 and NumPy 1.16.2+ and get “ValueError: Object arrays cannot be loaded when allow_pickle=False“, then try updating NumPy to 1.16.1, or update Keras to the version from github, or use the fix described here.
- Update Sep/2019: Updated for Keras 2.2%.
Need help with Deep Learning in Python?
Take my free 2-week email course and discover MLPs, CNNs and LSTMs (with code).
Click to sign-up now and also get a free PDF Ebook version of the course..
Tying all of this together, the complete code listing is provided below.
Running this example fits the model and summarizes the estimated performance.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome..
Tying all of this together, the complete code listing is provided below..
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome. in the comments and I will do my best to answer. , thanks for your tutorial. but I’m wondering if you have any tutorial about Aspect-based sentiment analysis
What is aspect-based sentiment analysis?
A multi-class classification problem where each sentence is associated to an ‘aspect’. There are two forms; categorical and term-based. Datasets available:
SemEval-2014
SemEval-2015
Two papers worth reviewing:
Deep Learning for Aspect-Based Sentiment Analysis By Bo Wang and Min Liu
Aspect-Based Sentiment Analysis Using a Two-Step Neural Network Architecture by S Jebbara and P Cimiano.
There are unsupervised versions of it (term extraction) but categorical is probably more desirable. This dataset would need to be labeled.
Thanks for sharing.:
jim can u sent me the file test that you did ?
When an element in the data is labeled as an integer, let’s say 4 for example, could that represent any word that has occurred 4 times in the data or is it a representation of a unique word?
Hi brent, each integer represents a unique word.
Hi Jason,
Thanks for the great tutorial, great as usual!
You mentioned that “each integer represents a unique word”, why?
My assumption is that we have mapped each word to its frequency in the whole corpus. If my assumption is true, so two words could come up with the same frequency. For example “Dog” and “Cat” both could repeat 10 times in the corpus.
Could you please if my assumption is wrong?
Thanks,
Words were ordered by frequency then assigned integers based on that frequency.
Well, thank you so much for this great work.
I have a question here. I didn’t understand why we use ReLU instead of tanh as the activation function. Most people use SGD or backpropagation for training. What did we use here? I do not know about ADAM. Can you please explain why did you use it for training?
ReLU has better properties than sigmoid or tanh and has become the new defacto standard.
We did use SGD to fit the model, we just used a more fancy version called Adam.
Hi, thanks a lot for this HELPFUL tutorial. I have a question, could be an odd one. what if we use pre-trained word2vec model. I mean if we just use pre-trained word2vec model and train our neural network with movie reviews data. Correct me if I am wrong!
Or best way is to train word2vec with movie reviews data, then train neural network with same movie reviews data then try.
Kindly guide me. Thanks
Sounds great, try it.
but should I use pre-trained word2vec model (trained with wiki data) or train from scratch with by using movie reviews data or amazon product reviews data. thanks
Try both and see what works best on your problem.
Hi Jason,
I am trying to use tfidf matrix as input to my cnn for document classification. I don’t want to use embedding layer. Can you help me how this can be achieved. I have not seen any example which shows tfidf as input to Cov1d layer in Keras. Please help
Sorry, I have not used tfidf as input to a CNN.
Dear Jason, I do thank you for this great post. Could you give me any indications on how to extend this approach to multiclass classification?
Set the number of nodes in the output layer to the number of classes and change activation to softmax.
I see that you have not used any regularizers in the model. How is overfitting avoided?
Here, by an underspecified model and under-training the model.
You could also try dropout and weight regularization.
Hey Jason great review but I was wondering how I could use the created model to predict the sentiment of a new inputted text.
You can encode your test using the same method as in the problem in order to make a prediction.
I hope to have many more NLP examples soon.
5000 means ?
That is the number of samples in the train and test sets.
I am getting an error: name ‘sequence’ is not defined in the line: X_train = sequence.pad_sequences(X_train, maxlen=500)
Ensure you copy all of the code, including this line:
Hi Jason,
Thanks for this nice post . I have a question about how to use the model to predict sentiment of a new text .
myText=”Hello, this is a my review”
How to structure this text into model ? and use predict function.
I have a suite of posts on this topic scheduled for the coming weeks if you can hang on?
I am stuck with making new predictions too..
What is the problem exactly?
I am having a difficult time encoding new, inputted text. Am I able to simply pass [“I hate computer bugs”] to pad_sequences()? Or is it a different approach? Any pointers would be very much appreciated!
The words must be encoded as integers first, see this post:
Hi Jason,
Thanks for the blog and tutorial. Since you talked about IMDB standard dataset. I just wrote a blog mentioning the accuracies of state-of-the-art models for sentiment analysis on IMDB dataset. You can see it here:
Thanks for sharing.
Hi jason,
you did a great work. i appreciate your effort. Actually i just want to know,in “One-Dimensional Convolutional Neural Network Model for the IMDB Dataset”. how i can know the answer of these question.
1)number of neurons used in this tutorial?
2)no of hidden layers?
3)no of out put layers?
4)if no of hidden layers maximize and minimize then output will affect or not?
5)how many parameters is used?
6)how we can draw a network diagram in python using code or library?
please answer me briefly. thanks
We cannot know for sure, these configuration hyperparameters cannot be specified analytically. You must use experiments to discover what works best for a given problem.
You can draw a network diagram in Keras, here are the API details:
how i can do experiments? on which bases i can do experiments? do you have any tutorial in which you take a dataset manually in any NN for sentiment analysis?
Yes, I have a few posts scheduled for later this month.
hi jason
i want to know that how i can use this model for my own data set which is in csv file. the data set have polarity of idiom sentences.
I will have an example on the blog soon.
ok jason i am waiting.. please also send notification to me. because on your blog there is no notification or email recieved about any answer.
Hi Jason, great post! I have learnt a lot from your previous posts. Thanks a lot!
For sentiment analysis, we can get better result (0.8822 in this case) if we change the output layer to softmax activation.
Great tip, thanks!
Hi Jason.
I have used a CNN Model for Sentiment Analysis.
I have a set of my own strings to test the model on.
What is the conversion I need to make on my Strings to input them to the model?
Thanks.
You can use a bag of words model or a word embedding.
I have posts on both, perhaps start here:
Thank You.
I will go through it.
Let me know how you go.
Hi, Does Conv2D give better accuracy. What will be the input shape in that case. Have your covered Conv2D in your book on Deep Learning for NLP.
Thanks.
It is not a question of accuracy, rather what is appropriate for the data.
1D data like a sequence of words requires a 1D convolutional neural net.
Hi Jason, in one of the examples in your book on NLP (chapter on text classification) the model.predict requires three input arrays as we are working with 3 channels. Though model.evaluate runs fine i fail to understand where do we define the three input in predict.sentiment function.
Will be helpful if you can guide.
You can pass 3 inputs to the predict function as an array:
Does that help?
Hi Jason,
You used pre-built dataset but if I want to run this model on my dataset (e.g. in my case i want to run this on my tweets dataset) how can I made my dataset compatible to this blog code. Please give some suggestions or reference so that I can make tweets dataset accordant to this model.
Best Regards,
This post shows how:
Thanks Jason,
I learned a lot from your posts.
This model performs well for binary classification but poorly on multiclass. My question is, in case of multiclass (say 10 or more classes), what changes in layers or their parameters will improve the accuracy?
I would recommend trying a suite of configurations to see what works best on your specific data.
hey jason, hi great tutorial but i don;t know why i am getting an error in pad.sequences line it is saying name sequnce is not defined i do not understand why i am getting this error? can you help?
Sorry to hear that. Are you able to check that you copied all of the code?
Hey thanks for the great work, just wanted to know that how to print out the results of this prediction as what are the positive and negative comments?
What problem are you having exactly?
i also have this mistake: NameError Traceback (most recent call last)
in ()
—-> 1 X_train = sequence.pad_sequences(X_train, maxlen=500)
2 X_test = sequence.pad_sequences(X_test, maxlen=500)
NameError: name ‘sequence’ is not defined
Ensure you have copied all code required.
hey Jason, how can I use “predict” with the model? for example, I have a text and I want to see the outcome based on the model. TIA
You can use model.predict() with new data as an argument.
Note that new data will need to be prepared in the same way as the training data.
can you share how to do a prediction with the model?
Sure:
Hey Jason,
How do I classify the user as angry or not angry based on the type of words in the review sent?
Start by collecting examples of angry and not angry movie reviews.
hi Mr jason … can u tell howa how can i do the file test of this model … and thanks !
What do you mean by “file test”?
Very nice article! Clear and precise. I just have some reserves concerning the use of deep learning for sentiment classification. Sure, it seems to be the future somehow, but for now, we just get better results with Bayesian methods. For instance, I worked with the same dataset (see) and get 91.6% accuracy with Bayesian learning. Do you think deep learning performs less because the task is too simple? Or because the dataset is too small?
I’d love to hear your thoughts about that
It sees CNNs are doing very well also:
Hye Jason. I would like to tell that as my first attempt I tried multi-layerd perceptron and i have this issue that is it okay to use the same data for validating as well a testing? if we will validate our training on test data then the result will be biased??
And whenever I am increasing no. of epochs loss is increasing simultaneously and accuracy is just around 86% always. Please guide me.
It is a good idea to evaluate the skill of a model on data not used to train it, learn more here:
Hi Jason!!!
what will be x_train,x_test,y_train and y_test in case of twitter sentiments
where labels are floating point values?
I am not getting what are these lists?
can you clear it?
Inputs will be text, output will be a sentiment class label.
Train and test will be some split of the data for fitting and evaluating the model respectively.
I want to know the reason for increasing validation loss after 2nd epoch..?? is there any mistake i am doing?
Perhaps the model is overfitting?
It is the MLP code in your example what should i do to increase the accuracy as well as remove overfitting.. will dropout layer works??
It may. Here are more ideas:
Thank you Jason for this!
I am really new to deep learning and NLP.
Here are few naive questions that I have:
– Why did you select 32 as the parameter?
– Can we use this learned model to now predict any other text data? say for eg, i wanted to evaluate feedback data of customers, can i use the same model to do so? If so, how?
– How are you taking care of stop words and other irrelevant terms in this text?
Thanks in advance 🙂
I used experimental testing to configure the model. You can learn more here:
Yes, within reason (e.g. from the same domain).
I often remove stop words. Here is a fuller example:
Hi Jason,
What is the current benchmark accuracy for imdb movie review classification?
I’m not sure, accuracy above 88% is excellent.
thank you Jason for the prompt response 🙂 Really appreciate it.
No problem.
Hey Jason!
excellent stuff. How to use your code to predict the same corpus and model to predict employee feedback information? I do not understand how to use this on some other data. How to use your code as starting point?
This process will help you work through a new predictive modeling problem:
Your tutorials are very helpful as a beginner like me. I have a doubt if we are using a dataset having only two labels (class & text) and then how many input neurons should I create. Is it 1 or more than one..??
This is a common question that I answer here:
What is the use of Flatten()?
In some models, the network will have a 2d or 3d internal shape to the data. Flatten squashes this down to 1D as the fully-connected layers expect.
thank you…..
Hi Jason,
Thanks so much. Great post!
I have the followng questions:
1. You explained well why maxlen was set to 500. Is there a way to examine other lengths is an easy way (like we grid search over the model hyperparameters), and not manually? Is it important?
2. On the “One Dimensional CNN” section, you used pool_size=2 within the maxpool function. What is reason/benefit for that?
3. Is the flatten layer right after it is because the pool_size is 2? I mean – I won’t be needing it if I just take the default?
You can experiment with other lengths.
Max pool of 2 reduces he size of the filter maps to 1/4 their size. It is a commonly used configuration.
Flatten is to reduce the filter map structure down to a vector that the dense layer can take as input.
OK Thank you
You’re welcome.
Hi.
I have a question, assuming that i want to inject some hand-crafted features into CNN layers for sentiment analysis.
At first i want to know is it possible?
And then how can i use fully connected layer to do this?? I dont want to use any methods like svm or … just want to use combinitation of deep and hand-crafted features directly in cnn. Thank you
Yes, you could have a multiple-input model, one input is the text, the other are the new features.
I have many examples of this type of model on the blog, perhaps start here:
how can I predict the result for my new comment?
ex = “this movies is fantastic”
You must prepare the data in the same way and call model.predict().
Perhaps try this tutorial:
Hello
thanks for this tutorial
I have a question about how to use cross- validation ?
This post shows you how:
Hi, how do you modify the learning rate?
Good question, this tutorial explains how:
Hi, Jason, thank you again for these kind tutorials. I’m learning many things here.
In the CNN case, I see from the model summary that the number of parameters in the first Embedding layer is 160000 (5000*32) and I can understand this. But why is the number of parameters of the first Conv1D layer 3104? I guess the 500 input words are transformed to 500 * 32 output values in the Embedding layer and the convolution is done for this 500*32 inputs values with kernel size 3. Is my understanding correct?
And I found the only way I can make 3104 seems to be 3*32*32+32 but I cannot think of any way the 1D conv parameters are applied for the input values. Could you elaborate on 1-D convolution in this case? (I read but the explanation is confusing to me..)
A good starting point is to summarize the model and look at the output shape of each layer.
Hi, Jason, this is a follow-up question to my previous question.
I thought 3104 = (32+1) * (3*32) so there are 3 kernel params for each 32 values for the inputs, and these kernel values are each different on each 32 inputs(3*32) and all the 32 inputs values are used to make each one of 32 output values with bias(thus * (32+1)). Hope you could understand what I mean.. 🙂
Hi Jason,
I’m getting an error with the command (X_train, y_train), (X_test, y_test) = imdb.load_data()
using Python 3.6.8 [Anaconda], win32 version on Windows10:
Traceback (most recent call last):
File “”, line 2, in
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\keras\datasets\imdb.py”, line 59, in load_data
x_train, labels_train = f[‘x_train’], f[‘y_train’]
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\numpy\lib\npyio.py”, line 262, in __getitem__
pickle_kwargs=self.pickle_kwargs)
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\numpy\lib\format.py”, line 692, in read_array
raise ValueError(“Object arrays cannot be loaded when ”
ValueError: Object arrays cannot be loaded when allow_pickle=False
I’m sorry to hear that, perhaps try updating NumPy?
Ok, I upgraded from ‘1.16.3’ –> ‘1.16.4’ and obtained the same error.
Does (X_train, y_train), (X_test, y_test) = imdb.load_data() still work for you?
This is an interesting tutorial, would be a pity if it could no longer be used1
I have a fix, add these lines to the start of the code example:
Based on:
Brilliant, thanks, that brings us forward! Before uncorking the champagne bottle, two more incompatibilities show up next:
1)
print(“Mean %.2f words (%f)” % (numpy.mean(result), numpy.std(result)))
Traceback (most recent call last):
File “”, line 1, in
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\numpy\core\fromnumeric.py”, line 3118, in mean out=out, **kwargs)
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\numpy\core\_methods.py”, line 87, in _mean ret = ret / rcount
TypeError: unsupported operand type(s) for /: ‘map’ and ‘int’
2)
pyplot.boxplot(result)
pyplot.hist(result)
Traceback (most recent call last):
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\matplotlib\units.py”, line 168, in get_converter
if not np.all(xravel.mask):
AttributeError: ‘numpy.ndarray’ object has no attribute ‘mask’
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File “”, line 1, in
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\matplotlib\pyplot.py”, line 2659, in hist
**({“data”: data} if data is not None else {}), **kwargs)
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\matplotlib\__init__.py”, line 1810, in inner
return func(ax, *args, **kwargs)
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\matplotlib\axes\_axes.py”, line 6534, in hist
self._process_unit_info(xdata=x[0], kwargs=kwargs)
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\matplotlib\axes\_base.py”, line 2135, in _process_unit_info
kwargs = _process_single_axis(xdata, self.xaxis, ‘xunits’, kwargs)
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\matplotlib\axes\_base.py”, line 2118, in _process_single_axis
axis.update_units(data)
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\matplotlib\axis.py”, line 1467, in update_units
converter = munits.registry.get_converter(data)
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\matplotlib\units.py”, line 181, in get_converter
converter = self.get_converter(next_item)
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\matplotlib\units.py”, line 187, in get_converter
thisx = safe_first_element(x)
File “C:\Users\XXX\AppData\Local\Continuum\anaconda3\envs\env_python_3.6\lib\site-packages\matplotlib\cbook\__init__.py”, line 1635, in safe_first_element
raise RuntimeError(“matplotlib does not support generators ”
RuntimeError: matplotlib does not support generators as input
Sorry to hear that.
I can confirm that the code listings work with Keras 2.2.4 and TensorFlow 1.14.0.
Are you able to confirm that you copied all of the code and that your libraries are up to date?
1) It works now, many thanks. You might want to include the code in your example above so it works for others:
import numpy as np
np_load_old = np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
2) The error message resulted from the following code (under # Summarize review length):
result = map(len, X) # Bad version
result = [len(x) for x in X] # Good version which creates no errors.
Did you just correct this? Otherwise I have no explanation why copy-and-paste would have introduced “result = map(len, X)” before.
Brilliant work!
Thanks. I am hoping there is a new Keras release around the corner that already incldues the fix.
The code works as is with Python 3.6 run from the command line. Is it possible you were running in an IDE/Notebook or using an older version of Python?
I’m using python 3.6.8 in command-line mode with Windows 10.
I’ve never encountered anyone so dedicated to support their online learning service as you. I’m amazed.
In summary, thanks to your input, the example works fine now.
– Dr. Royal Truman
Thanks, I’m very happy to hear that it’s now working!
Hi Jason.
I was trying to implement Conv2D for the same thing. I had kept the kernel size as (3,3) however this didn’t seem to work. But as i switched to Conv1D after looking at your example i got it working. Could you let me know why you decided to use 1D and if there are any specific use cases for either.
Thanks
Swarupa
I don’t believe that a conv2d would be appropriate for text input.
We use a conv1d because we are working with 1d sequence of words.
Hi Mr. Jason, could tell me how can I do a prediction with the model picking a single comment out of the dataset as the input?
Yes, you can have one sample input and call predict().
This will help:
Jason, thank you for your beautiful lessons. I see that in this example you are not using any sliding window, or timesteps for your input in the lstm model. However, in other lstm examples, and generally in the internet, i have seen that it’s useful to transform the dataset with timesteps, when using lstm models? Is it true? When i need to preproccess the data with timesteps? Thank you in advance
You’re welcome.
This is text data, not time series. If you want to see examples of LSTMs for time series, start here:
Thank you for this tutorial
Is it right to use lstm or cnn-lstm in this dataset?
You’re welcome.
A CNN or LSTM with a word embedding would probably most appropriate for this type of problem.
Hi Jason,
Thanks a lot for your blog/tutorial/book, which are excellent.
I have been struggling trying to make a similar notebook (using imdb data from keras.datasets) work on a TPU(s) by using Google Colab.
Any advice on how to make the code in this tutorial work on TPU?
Thank you,
Alex
You’re welcome.
Sorry, I don’t know about colab:
I recommend running examples on your workstation.
As comparison, a traditional approach with CountVectorizer and TfidfTransformer reaches 84.26% accuracy with MultinomialNB and 88.68% with LinearSVC (using top 5k words)
Well done! Thanks for sharing. | https://machinelearningmastery.com/predict-sentiment-movie-reviews-using-deep-learning/ | CC-MAIN-2022-27 | refinedweb | 4,369 | 67.76 |
import "github.com/elves/elvish/pkg/store"
Package store defines the permanent storage service.
buckets.go cmd.go db_store.go dir.go shared_var.go store.go temp_store.go
Parameters for directory history scores.
ErrNoMatchingCmd is the error returned when a LastCmd or FirstCmd query completes with no result.
ErrNoSharedVar is returned by Store.SharedVar when there is no such variable.
NoBlacklist is an empty blacklist, to be used in GetDirs.
Cmd is an entry in the command history.
DBStore is the permanent storage backend for elvish. It is not thread-safe. In particular, the store may be closed while another goroutine is still accessing the To prevent bad things from happening, every time the main goroutine spawns a new goroutine to operate on the store, it should call Waits.Add(1) in the main goroutine before spawning another goroutine, and call Waits.Done() in the spawned goroutine after the operation is finished.
MustGetTempStore returns a Store backed by a temporary file, and a cleanup function that should be called when the Store is no longer used.
NewStore creates a new Store from the given file.
NewStoreFromDB creates a new Store from a bolt DB.
Dir is an entry in the directory history.
type Store interface { NextCmdSeq() (int, error) AddCmd(text string) (int, error) DelCmd(seq int) error Cmd(seq int) (string, error) Cmds(from, upto int) ([]string, error) CmdsWithSeq(from, upto int) ([]Cmd, error) NextCmd(from int, prefix string) (Cmd, error) PrevCmd(upto int, prefix string) (Cmd, error) AddDir(dir string, incFactor float64) error DelDir(dir string) error Dirs(blacklist map[string]struct{}) ([]Dir, error) (name string) (string, error) (name, value string) error (name string) error }
Store is an interface satisfied by the storage service.
Package store imports 12 packages (graph) and is imported by 10 packages. Updated 2020-09-18. Refresh now. Tools for package owners. | https://godoc.org/github.com/elves/elvish/pkg/store | CC-MAIN-2020-40 | refinedweb | 307 | 56.76 |
Opened 4 years ago
Closed 3 years ago
#10546 closed enhancement (fixed)
implement a custom cusps() method for principal congruence subgroups Gamma(N)
Description
The command "Gamma(n).cusps()" computes a complete list of inequivalent cusps for the congruence subgroup
Gamma(n), but it is very slow.
For example, look at the time required just for n=8:
sage: time Gamma(8).cusps()
CPU times: user 1086.11 s, sys: 5.60 s, total: 1091.71 s
Wall time: 1092.30 s
[Infinity, 0, 1, 2, 3, 4, 5, 6, 7, -1/2, -1/3, -1/4, -1/5, -1/6, 2/3, 3/4, 4/5, 5/6, 5/3, 11/6, 8/3, 11/3, 14/3, -3/8]
I've defined a function f(n) below that returns a complete list of inequivalent cusps for the group Gamma(n) (n>2). The elements of the list are in so-called "reduced_cusp form". The cusp 1/n in the list is the one equivalent to Infinity. Please use f(n) to make a patch for Gamma(n).cusps(), n>2.
def f(n):
C=[0..n-1]
n1=integer_floor((n-1)/2)
n0=integer_floor(n/2)
for r in [1..n1]:
if gcd(r,n)==1:
C.append(r/n)
if n0==n/2 and gcd(r,n0)==1:
C.append(r/n0)
for s in [2..n1]:
for r in [1..n]:
if gcd([s,r,n])==1:
m=r
while gcd(m,s)>1:
m=m+n
C.append(m/s)
return(C)
Here is an example for n=8:
sage: time f(8)
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
[0, 1, 2, 3, 4, 5, 6, 7, 1/8, 1/4, 3/8, 3/4, 1/2, 3/2, 5/2, 7/2, 1/3, 2/3, 11/3, 4/3, 5/3, 14/3, 7/3, 8/3]
The following gives an idea of the speed of f(n):
sage: time len(f(1260))
CPU times: user 16.66 s, sys: 0.41 s, total: 17.07 s
Wall time: 17.07 s
497664
The following checks the length 497664 of the list of inequivalent cusps:
sage: prod([p(2*e) - p(2*e-2) for (p,e) in 1260.factor()])2
497664
Attachments (1)
Change History (8)
Changed 3 years ago by davidloeffler
comment:1 Changed 3 years ago by davidloeffler
- Dependencies set to #10335, #11422, #11598, #5048, #10453, #11601
- Status changed from new to needs_review
I made a patch based on the code by rje (Ron Evans, I presume?) above.
It implements cusps, and also are_equivalent and reduce_cusp. The cusps code is more or less identical to Ron's code but marginally faster; it also sorts its output, and returns infinity rather than 1/N in line with Sage's conventions for the definition of "reduced".
I built this on top of #11601, so it now depends on that and its prerequisites (the series of patches which goes #10335 - #11422 - #11598 - #5048 - #10453 - #11601). I'm hoping these will get positively reviewed soon (the first three already have been), but if not it would be easy to recreate the patch without the dependency on #11601.
comment:2 Changed 3 years ago by janv
- Status changed from needs_review to positive_review
This works perfectly.
comment:3 Changed 3 years ago by jdemeyer
- Reviewers set to Jan Vonk
Jan, in the future you should add your name as Reviewer. And please also add yourself to
comment:4 Changed 3 years ago by jdemeyer
- Milestone changed from sage-4.8 to sage-5.0
comment:5 Changed 3 years ago by davidloeffler
- Keywords sd35 added
comment:6 Changed 3 years ago by davidloeffler
- Keywords changed from sd35, inequivalent cusps, principal congruence subgroup, Gamma(n), cusps() to sd35 inequivalent cusps, principal congruence subgroup, Gamma(n), cusps()
comment:7 Changed 3 years ago by jdemeyer
- Merged in set to sage-5.0.beta1
- Resolution set to fixed
- Status changed from positive_review to closed
Patch against 4.7.1.alpha4 + #11601 and its prerequisites | http://trac.sagemath.org/ticket/10546 | CC-MAIN-2014-42 | refinedweb | 688 | 74.08 |
Do?!;-)
As ever, Tony, you’ve managed to uncover a few bugs in the system which hadn’t previously been reported to us.
Firstly, the ‘download this unit’ page e.g. should include a direct link to the XML content. Some of these are missing and I’m not 100% sure the others are correct. We are investigating and will make sure that every unit has a correct link. I realise you’d still need to work out the id number and then scrape this page to automatically locate the xml file, but that’s a good deal less effort than what you appear to have documented above.
The issue with the RSS feeds having relative links not proper urls for the images is also a bug, and we will look into it. In the mean time, you may wish to use the tags which contain the full paths for every embedded asset in the RSS.
Finally, the issue that the content xml contains network locations rather than web links to the assets. This is also a bug, but is a bigger deal for us to fix. We will try to deal with this when we migrate the entire site to its new Moodle 2 platform, later this year.
@jenny Thanks for the reply… I thought about emailing you direct, but as the OU is ahead of the game in terms of publishing structured content, I thought it worth airing some of the issues so that anyone pursuing a similar path is alerted to some of the gotchas that may result… At the moment the easiest place to identify the view page id that is shared by the XML file is the RSS feed. Scraping HTML links can often be a little messy, eg in terms of finding the right link to scrape. (Are there any metadata fields that could carry the resource id, maybe?)
What would be similar would be an OPML feed that links to the XML pages, or an extension to the current OPML feed that includes such links (I think OPML is extensible if you can find an appropriate namespaced element to use/add in?)
The next post in this series may appear a little ranty, and relates to something we’ve discussed before, but I not sure I blogged it which means I can’t find/remember the answer…! So, erm, apologies in advance… also works to get the content xml for each unit and all you need is the unit code rather than any nasty ids. This however involves a core moodle hack which I will not be able to repeat when we move to Moodle 2 :(
We do have an RDF for each unit e.g. (just stick &format=rdf on the end of the unit URL, or follow the meta link embedded in the page) but it doesn’t currently include links to alternative formats. Would that be useful?
One problem we face here is in trying to serve a niche market with limited resources. We’ve tried ‘build it and they will come’ in LabSpace before though, with limited success and I find it depressing building things that don’t get used. But if there are small improvements we can make, then I’m sure the people who direct the development list will be willing to listen (sadly I don’t get to build whatever I want!!)
Hi Jenny – thanks for the reply… I appreciate the lack of uptake and the lack of freedom to build whatever you want are both big negatives, and I’m really wary of posting things that come across as rat hole requests for a niche market of one. One reason I play so much is to try to find ways in to content that have relatively low barriers to entry, that make the content available in an expressive way “as data”, ideally in a standard form, and that have a reasonable chance of being replicated by others. There’s also an element of trying to find ways of supporting discovery, though what the best level of granularity is for discovery is a constant source of confusion to me!
Hi Tony — thanks for alerting us to the bugs. I'm also keen to pick up on your points about how we structure content, not just in OpenLearn but more widely. Early attempts to approach tagging elements from a pedagogic view or even a more sophisticated functional view rather than 'just' a structural/presentational view got quickly bogged down but it is timely to raise this again. I'll flag the issue to various people here.
Thanks David; it was really instructive riffing with Jonathan Fine around the idea of secondary products. In many cases, good markup can help generate these. I know there's an apparent overhead in adding semantic markup to materials, but looking through various course units I think it's still not unusual to see presentational markup being used where semantic tags should be. It may be that the tooling used to create/edit markup is a hindrance here, eg in terms of: a) helping people select the right structured markup; b) previewing it appropriately (the latter is key – folk add italics styling because they want to see italics, whereas maybe they should use booktitle tags and let the publishing engine display it as italics).
If we can identify *useful* secondary products that are powered by structured markup, then feeding those secondary products (and seeing the value in use that arises from them) provides a driver back to doing the markup properly.
I think it’s also the case that use often reveals errors – so for example, if I want to see an XYZ formatted bibliography of resources, and I notice a book I know I referenced isn’t shown, it encourages me to go back to the materials to check I marked it up using referencing markup. | https://blog.ouseful.info/2012/03/13/do-we-need-an-openlearn-content-liberation-front/ | CC-MAIN-2018-26 | refinedweb | 983 | 61.4 |
Hi Miles,
I am calculating the entanglement entropy (EE) of a Bose-Fermi mixture chain. There are two degenerate ground states which belong to two different QN blocks (the Bose particle number and the Fermi particle number are both conserved): one is located in the (Nb,Nf)=(22,0) Hilbert sub-space and the other is in the (Nb,Nf)=(21,1) space. The total number of sites is 36. Besides, I am using the IQMPS in the DMRG calculation.
I can do the EE calculation of a single ground state and it works well. But I think the true ground state |phi> is a linear combination of these two states, and the coefficients can be determined by minimizing the EE (this approach can be found in D.N. Sheng's paper). I tried to evaluate these values; the key function used in my program is .plusEq(psi1,{"Maxm",3000,"Cutoff",1E-9}). The following is my code:
#include "itensor/all.h"
#include <fstream>
#include "itensor/decomp.h"
#include <math.h>
using namespace std;
using namespace itensor;

int main(int argc, char* argv[])
{
    //Parse the input file //DMRG parameter
    if(argc != 2) { printfln("Usage: %s inputfile_bsfm",argv[0]); return 0; }
    auto input = InputGroup(argv[1],"input");
    auto N = input.getInt("N"); // auto N=80;

    auto sites2 = BF(N);
    readFromFile("sites_NbNf220", sites2);
    IQMPS psi2(sites2);
    readFromFile("psi_NbNf220", psi2);

    auto sites3 = BF(N);
    readFromFile("sites_NbNf211", sites3);
    IQMPS psi3(sites3);
    readFromFile("psi_NbNf211", psi3);

    ofstream SaveFile("EE_mixed.dat");
    auto pi = 4*atan(1.0);
    std::complex<double> z {0, 1};
    auto imag = z;
    auto b = 36;
    auto c1 = 0.5;
    auto c2 = sqrt(1-pow(c1,2));
    auto fai = pi;

    psi2 *= c1;
    psi3 *= c2*exp(imag*fai);
    auto psi = toMPS(psi2);
    auto psi1 = toMPS(psi3);
    println("Add two states ");
    // psi = psi*c1 + psi1*c2*exp(i*fai)
    psi.plusEq(psi1,{"Maxm",3000,"Cutoff",1E-9});

    //Entanglement Entropy
    printfln("\n");
    println("Entanglement Entropy");
    //Given an MPS or IQMPS called "psi",
    //and some particular bond "b" (1 <= b < psi.N())
    //across which we want to compute the von Neumann entanglement
    //"Gauge" the MPS to site b
    if(b%2==0){
        psi.position(b);
        //Here assuming an MPS of ITensors, but same code works
        //for IQMPS by replacing ITensor -> IQTensor
        //Compute two-site wavefunction for sites (b,b+1)
        // IQTensor wf = psi.A(b)*psi.A(b+1);
        ITensor wf = psi.A(b)*psi.A(b+1);
        //SVD this wavefunction to get the spectrum
        //of density-matrix eigenvalues
        auto U = psi.A(b);
        // IQTensor S,V;
        ITensor S,V;
        auto spectrum = svd(wf,U,S,V);
        //Apply von Neumann formula
        //spectrum.eigs() is a Vector containing
        //the density matrix eigenvalues
        //(squares of the singular values)
        Real SvN = 0.;
        for(auto p : spectrum.eigs())
        {
            if(p > 1E-12) SvN += -p*log(p);
        }
        SaveFile<<b/2<<" "<<SvN<<endl;
    }
    println("\n");
    SaveFile.close();
    return 0;
}
It can be built, but something goes wrong when I run the code. The error message is "ITensor has different index structure". I believe the error occurs at the psi.plusEq(psi1,{"Maxm",3000,"Cutoff",1E-9}); line, because the message "Add two states" has been output on the screen but there is no "Entanglement Entropy" string. So, my question is whether it's possible to do this, or are there some tricks?
Thank you.
Best,
Chenrong
Hi Chenrong,
If I understand your question correctly, the short answer is that you cannot add IQMPS/IQTensors with different total QN, since it would result in an IQMPS/IQTensor with ill-defined QN flux. You should be able to convert the IQMPS to MPS using the toMPS function ( ) and then add them together.
toMPS
Cheers,
Matt | http://itensor.org/support/1626/how-to-add-two-mps-of-different-qn-blocks | CC-MAIN-2020-34 | refinedweb | 609 | 57.06 |
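For readers reproducing the entropy step outside ITensor: once the two-site wavefunction is reshaped into a (left, right) matrix, the von Neumann entropy is just −Σ p log p over its squared singular values, exactly as in the C++ loop above. A NumPy sketch of that step (all names and test states are ours):

```python
import numpy as np

def von_neumann_entropy(wf):
    # wf: two-site wavefunction reshaped to a (left, right) matrix
    s = np.linalg.svd(wf, compute_uv=False)
    p = s**2
    p = p[p > 1e-12]          # drop numerical zeros, as in the C++ loop
    p = p / p.sum()           # normalise in case wf was not
    return float(-(p * np.log(p)).sum())

# A maximally entangled two-qubit state has entropy log(2)
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2.0)
print(von_neumann_entropy(bell))  # ≈ 0.6931
```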
POSIX message queues allow processes to exchange data in the form of messages.
In most cases the mq_*() library interfaces listed above are implemented on top of underlying system calls of the same name. Deviations from this scheme are indicated in the following table:

Library interface               System call
mq_close(3)                     close(2)
mq_getattr(3), mq_setattr(3)    mq_getsetattr(2)
mq_notify(3)                    mq_notify(2)
mq_open(3)                      mq_open(2)
mq_receive(3)                   mq_timedreceive(2)
mq_send(3)                      mq_timedsend(2)
mq_unlink(3)                    mq_unlink(2)
POSIX message queues have been supported on Linux since kernel 2.6.6. Glibc support has been provided since version 2.3.4.
Support for POSIX message queues is configurable via the CONFIG_POSIX_MQUEUE kernel configuration option. This option is enabled by default.
POSIX message queues have kernel persistence: if not removed by mq_unlink(3), a message queue will exist until the system is shut down.
Programs using the POSIX message queue API must be compiled with cc −lrt to link against the real-time library, librt.
The RLIMIT_MSGQUEUE resource limit, which places a limit on the amount of space that can be consumed by all of the message queues belonging to a process's real user ID, is described in getrlimit(2).
On Linux, message queues are created in a virtual filesystem. (Other implementations may also provide such a feature, but the details are likely to differ.) This filesystem can be mounted (by the superuser) using the following commands:
# mkdir /dev/mqueue
# mount −t mqueue none /dev/mqueue
For a discussion of the interaction of POSIX message queue objects and IPC namespaces, see ipc_namespaces(7)...
An example of the use of various message queue functions is shown in mq_notify) | http://manpages.courier-mta.org/htmlman7/mq_overview.7.html | CC-MAIN-2021-17 | refinedweb | 230 | 54.32 |
Find SubArray with given Sum in Python
Hi, guys, we are given with the array or list in Python. Our task is to Find a subarray with given sum in Python.
You all should know about the subarray before attempting the given question. So I advise checking “What is a subarray?”
Algorithm part:-
- Make a function named subsum and pass it the array, the length of the given array and the sum to find in the array.
- Run a loop i from 0 to the length of the array.
- Take a variable named currsum and assign it arr[i].
- Now take a variable j and set it to i+1.
- While j is less than or equal to n, keep extending the current subarray:
- If currsum is equal to the given sum, print the indexes and return.
- If currsum is greater than the given sum, or j is equal to n, break the loop.
- Otherwise, add the next element arr[j] to currsum and increment j.
Python program: Find SubArray with given Sum
Now here is the code
def subsum(arr,n,sum):
    for i in range(n):
        currsum=arr[i]
        j=i+1
        while j<=n:
            if currsum==sum:
                print ("Sum found between")
                print("indexes %d and %d"%( i, j-1))
                return 1
            if currsum>sum or j==n:
                break
            currsum=currsum+arr[j]
            j+=1
    print ("No subarray found")
    return 0

# Driver program
print("Enter the array")
arr=list(map(int,input().split(" ")))
n=len(arr)
sum=int(input("Enter the sum to find in the array\n"))
subsum(arr,n,sum)
Here is the output:-
| https://www.codespeedy.com/find-subarray-with-given-sum-in-python/ | CC-MAIN-2020-50 | refinedweb | 270 | 78.38 |
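Because the example uses non-negative integers, the same subarray can also be found in O(n) with a sliding window instead of the O(n²) nested loops above; a sketch for comparison (function name ours):

```python
def window_subsum(arr, target):
    # Maintain a window [start, end] whose running sum tracks the target.
    start, curr = 0, 0
    for end, value in enumerate(arr):
        curr += value
        # Shrink the window from the left while it overshoots.
        while curr > target and start < end:
            curr -= arr[start]
            start += 1
        if curr == target:
            return (start, end)   # inclusive indexes of the subarray
    return None

print(window_subsum([1, 4, 20, 3, 10, 5], 33))  # (2, 4) -> 20 + 3 + 10
```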
You may have seen my post on (co-)Elgot algebras, in which I mentioned I had been using some exotic recursion schemes for my gmpint package. I came across a similar example, this time for Mendler-style recursion schemes. To my knowledge, it is the only published example of a Mendler-style catamorphism.
The problem is the inverse of the last problem: given the list of limbs (here Word64s), we wish to return Haskell's Integer type. With a vanilla catamorphism we can write the following:
import Data.Functor.Foldable
import Data.Word
wordListToInteger :: [Word64] -> Integer
wordListToInteger = cata a
  where a Nil         = 0
        a (Cons x xs) = fromIntegral x + base * xs
        -- base (the limb radix, 2^64) is defined elsewhere in the package
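In Python terms, this vanilla catamorphism is just a fold over the limb list; a sketch of the same computation (name ours; assuming limbs are stored least-significant first with base = 2⁶⁴, as in GMP):

```python
def word_list_to_integer(limbs, base=2**64):
    # Mirrors the Haskell fold: x + base * rest, with the
    # least-significant limb first.
    acc = 0
    for w in reversed(limbs):
        acc = acc * base + w
    return acc

print(word_list_to_integer([0, 1]) == 2**64)  # True
```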
We can write this as a Mendler-style catamorphism as follows:
import Data.Functor.Foldable
import Data.Word
asFix :: [a] -> Fix (ListF a)
asFix = cata Fix

wordListToInteger :: [Word64] -> Integer
wordListToInteger = mcata ma . asFix
  where ma f (Cons x xs) = fromIntegral x + base * f xs
        ma _ Nil         = 0
This implementation is slightly slower than a simple catamorphism, so it was not published to Hackage, but hopefully it is instructive. | http://blog.vmchale.com/article/mendler-catamorphisms | CC-MAIN-2021-17 | refinedweb | 179 | 62.88 |
..
Data Contracts
A data contract is an abstract description of a set of fields with a name and data type for each field. The data contract exists outside of any single implementation to allow services on different platforms to interoperate. As long as the data passed between the services conforms to the same contract, all the services can process the data. This processing is also known as a loosely coupled system. A data contract is also similar to an interface in that the contract specifies how data must be delivered so that it can be processed by an application. For example, the data contract might call for a data type named Person that has two text fields, named FirstName and LastName. To create a data contract, apply the DataContractAttribute to the class and apply the DataMemberAttribute to any fields or properties that must be serialized. When serialized, the data conforms to the data contract that is implicitly built into the type.
Reusing Existing Types
A data contract has two basic requirements:
A stable name.
A list of members.
The stable name consists of the namespace uniform resource identifier (URI) and the local name of the contract. By default, when you apply the DataContractAttribute to a class, it uses the class name as the local name and the class's namespace (prefixed with "") as the namespace URI. You can override the defaults by setting the Name and Namespace properties. You can also change the namespace by applying the ContractNamespaceAttribute to the namespace. Use this capability when you have an existing type that processes data exactly as you require but has a different namespace and class name from the data contract. By overriding the default values, you can reuse your existing type and have the serialized data conform to the data contract.
' Define the data contract.
<DataContract(Name := "Customer", Namespace := "", IsReference := True)> _
Public Class User
    Private privateName As String
    <DataMember(Name := "Last", EmitDefaultValue := True, IsRequired := True, Order := 2)> _
    Public Property Name() As String
        Get
            Return privateName
        End Get
        Set(ByVal value As String)
            privateName = value
        End Set
    End Property

    Private privateAge As Integer
    <DataMember(Order := 1)> _
    Public Property Age() As Integer
        Get
            Return privateAge
        End Get
        Set(ByVal value As Integer)
            privateAge = value
        End Set
    End Property
End Class
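The renaming idea is not specific to WCF. As a rough illustration in Python (the mapping dictionary and class here are invented for illustration, not part of any WCF API), serializing a type under contract names that differ from its own field names looks like this:

```python
import json

class User:
    def __init__(self, name, age):
        self.name = name
        self.age = age

# Contract names differ from the type's own field names,
# mirroring DataMember(Name := "Last") in the VB sample;
# iterating ("age", "name") mirrors Order := 1 putting Age first.
CONTRACT_NAMES = {"age": "Age", "name": "Last"}

def to_contract_json(user):
    data = {CONTRACT_NAMES[k]: getattr(user, k) for k in ("age", "name")}
    return json.dumps(data)

print(to_contract_json(User("Evans", 70)))  # {"Age": 70, "Last": "Evans"}
```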
For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers. | https://msdn.microsoft.com/en-us/library/system.runtime.serialization.datacontractattribute(v=vs.95).aspx?cs-save-lang=1&cs-lang=vb | CC-MAIN-2015-27 | refinedweb | 377 | 50.46 |
17 February 2012 10:12 [Source: ICIS news]
SINGAPORE (ICIS)--Jiangsu Hengli Chemical Fibre's 400,000 tonne/year polyethylene terephthalate (PET) bottle chip unit has been shut, a company source said.
“The PET bottle chip lines were shut earlier this month and the polyester facility at the same site is currently producing PET fibre chips,” the source said.
“We will monitor the market and make a decision [when to restart the plant] in the second half of the year,” the source said.
Jiangsu Hengli has a fibre chip capacity of 450,000 tonne/year at the same site.
Other major Chinese PET producers include Jiangsu Sanfangxiang and China | http://www.icis.com/Articles/2012/02/17/9533133/chinas-jiangsu-hengli-pet-bottle-chip-plant-shut-on-poor-margins.html | CC-MAIN-2014-52 | refinedweb | 105 | 60.24 |
Enter student details using Class
Write a C++ Program to enter student details by Passing parameters to constructors. Here’s a Simple C++ Program to enter student details using Class in C++ Programming Language.
What is Class and Objects in C++?
- Class is a user defined data type, which holds its own data members and member functions, which can be accessed and used by creating instance of that class.
- The variables inside class definition are called as data members and the functions are called member functions.
- Class is just a blue print, which declares and defines characteristics and behavior, namely data members and member functions respectively. And all objects of this class will share these characteristics and behavior.
- Objects are instances of class, which holds the data variables declared in class and the member functions work on these class objects.
- Each object has different data variables. Objects are initialized using special class functions called Constructors.
Below is the source code for C++ Program to enter student details using Class which is successfully compiled and run on Windows System to produce desired output as shown below :
SOURCE CODE : :
/* C++ Program to enter student details using Class */
#include<iostream>
using namespace std;

class Student
{
private:
    int marks;
    char grade;
public:
    Student(int m, char g)
    {
        marks = m;
        grade = g;
    }
    void show()
    {
        cout<<"\nMarks ="<<marks<<endl;
        cout<<"\nGrade = "<<grade<<endl;
    }
};

int main()
{
    Student s1(730, 'A'), s2(621,'B');
    cout<<"Record of student 1 :: -----------------"<<endl;
    s1.show();
    cout<<"\nRecord of student 2 :: -----------------"<<endl;
    s2.show();
    return 0;
}
OUTPUT : :
/* C++ Program to enter student details using Class */
Record of student 1 :: -----------------

Marks =730

Grade = A

Record of student 2 :: -----------------

Marks =621

Grade = B
Process returned 0
Above is the source code and output for the C++ Program to enter student details using Class, which was successfully compiled and run on a Windows system to produce the desired output.
I'm trying an exercise in a book I read, which basically says "Write a template function that takes as an argument an array of five items of type T, and returns the largest item. Test it in a program with an array of 5 int values and 5 double values."
I've tried everything, but could not get it to work. Here is my code:
Code:
#include <iostream>
using namespace std;

template <typename T> T max5(T,int);

int main()
{
    int numbers[5] = {0};
    for (int i = 0; i < 5; i++)
    {
        cout << "Enter the number for element num " << (i+1) << "\n";
        cin >> numbers[i];
    }
    cout << "The largest number in the array is: ";
    int max = max5(numbers, 5);
    cout << max << "\n";
    return 0;
}

template <typename T> T max5(T arr[],int n)
{
    T max = 0;
    for (int i=0; i<n; i++)
        if arr[i]>max
            max = arr[i];
    return max;
}

I get an error, saying:
.\Source1.cpp(14) : error C2440: 'initializing' : cannot convert from 'int *' to 'int'
There is no context in which this conversion is possible
Line 14 is:
Code:
int max = max5(numbers, 5);

BTW, I defined the maximum number as int only because I don't know how to tell the compiler to check whether it is an int or a double, just as the exercise says.
Device and Network Interfaces
usbvc - USB video class driver
#include <sys/usb/clients/video/usbvc/usbvc.h> #include <sys/videodev2.h> usbvc@unit-address
The usbvc driver is a USBA (Solaris USB Architecture)-compliant client driver that supports the USB Device Class Definition for Video Devices specification, Versions 1.0 and 1.1. The usbvc driver supports a subset of the video controls and formats described in the USB specification.
The usbvc driver also implements the Video4Linux2 API (V4L2), Version 0.20 for applications. For more information on the V4L2 API, visit.
Note that the usbvc driver supports the video capture function only and that video output is not supported. For more information on supported USB video-class devices and functions, visit.
The usbvc driver reads video data from the isochronous endpoint of the device. Bulk data endpoints are not supported.
MJPEG and UNCOMPRESSED video formats are supported. Isochronous data are read from the isochronous input device frame-by-frame and are maintained in a buffer array within the driver. Video frames are read from the driver using the read(2) or mmap(2) I/O method. For read(2), each read returns a buffer of a video frame. For mmap(2), each VIDIOC_DQBUF ioctl returns the buffer structure v4l2_buffer. (A video frame buffer pointer is included in the structure). See the V4L2 API for buffer structure and other related data structure information.
A brief overview of supported ioctl requests appears below. For more detailed information, refer to the V4L2 API document. Note: ioctl information presented in the V4L2 API document may differ slightly from the content of this manpage. In such cases, you should rely on the information in this manpage.
VIDIOC_QUERYCAP
Query the device capabilities. Besides device capabilities, the usbvc driver returns structure v4l2_capability which includes information on the driver, data bus and OS kernel. The Version structure member has no meaning in Solaris and is always set to 1.

VIDIOC_ENUM_FMT
Enumerate the video formats supported by the device.

VIDIOC_S_FMT
Set a video format.

VIDIOC_G_FMT
Get a video format.

VIDIOC_REQBUFS
Request the usbvc driver to allocate video data buffers. If a buffer is set to zero, the driver stops reading video data from the device and releases all allocated buffers. (For mmap(2) only).

VIDIOC_QUERYBUF
Query a given buffer's status. (For mmap(2) only).

VIDIOC_QBUF
Enqueue an empty buffer to the video data buffer array. (For mmap(2) only).

VIDIOC_DQBUF
Dequeue a done buffer from the video data buffer array. (For mmap(2) only).

VIDIOC_STREAMON
Start reading video data.

VIDIOC_STREAMOFF
Stop reading video data.

VIDIOC_ENUMINPUT
Enumerate all device inputs. Currently, the usbvc driver supports one input only.

VIDIOC_G_INPUT
Get the device's current input. At this time, the usbvc driver supports one input only.

VIDIOC_S_INPUT
Set the device's current input. At this time, the usbvc driver supports one input only.

VIDIOC_QUERYCTRL
Query the device and driver for supported video controls. Currently, the usbvc driver supports the brightness, contrast, saturation, hue, and gamma video controls.

VIDIOC_G_CTRL
Get the device's current video control.

VIDIOC_S_CTRL
Set the device's current video control.

VIDIOC_G_PARM
Get streaming parameters, the number of frames per second and number of buffers used internally by driver in read/write mode.

VIDIOC_S_PARM
Set streaming parameters, the number of frames per second and number of buffers used internally by driver in read/write mode.
An open was attempted after the device has already been opened.
An unsupported ioctl is received or an ioctl is attempted with an out-of-range value.
The driver received an unrecoverable device error or the device did not respond or the device stalled when attempting an access. A read(2) or ioctl(2) did not complete due to a peripheral access.
The driver received an open(2) request for a device for which the attach failed.
The driver received an open(2) request for a disconnected device.
32-bit ELF kernel module. (x86)
64-bit ELF kernel module. (x86)
64-bit ELF kernel module. (SPARC)
Device node for isochronous input from USB video device and device control.
See attributes(5) for descriptions of the following attributes:
cfgadm_usb(1M), ioctl(2), open(2), mmap(2), read(2), libusb(3LIB), attributes(5),ugen(7D), usba(7D), attach(9E)
Oracle Solaris Administration: Common Tasks
Universal Serial Bus Specification 1.0, 1.1 and 2.0— 1996, 1998, 2000
USB Device Class Definition for Video Devices 1.0 and 1.1— 2003, 2005
Video4Linux2 API (V4L2), Version 0.20
In addition to being logged, the following messages may appear on the system console. All messages are formatted in the following manner:
Warning: <device path> (usbvc<instance num>):Error Message...
The device has been hot-removed or powered off while it was open and a possible data transfer was in progress. The job may be aborted.
This device has been disconnected because a device other than the original one has been inserted. The driver informs you of this fact by displaying the name of the original device.
The device was hot-removed while open. A new device was hot-inserted which is not identical to the original device. Please disconnect the device and reconnect the original device to the same port.
The USB video device will be power-managed when the device is idle.
If a USB video device is hot-removed while active, a console warning is displayed requesting you to put the device back in the same port and telling you of potential data loss. Hot-removal of an active video device is strongly discouraged.
Always close all applications before hot-removing or hot-inserting a device. If an application is open when a device is hot-removed, inserting the device in a different port will create new /dev/videoN links. Moving an active device to another port is not recommended. | http://docs.oracle.com/cd/E23824_01/html/821-1475/usbvc-7d.html | CC-MAIN-2015-48 | refinedweb | 948 | 60.61 |
On Tue, Jul 24, 2012 at 09:50:21AM +0800, Thomas Goirand wrote:
> node was in /sbin, nodejs is

I'll reply with my Ubuntu email now. I don't think causing a bigger delta is good for either Debian or Ubuntu. Let's please keep the namespace clean. I'll talk with whoever introduced it, but we can upload a temp metapackage and upload it with time for a beer after.

> from making funny suggestions.

I assure you I'm not laughing :)

> Thomas

Paul

--
All programmers are playwrights, and all computers are lousy actors.
#define sizeof(x) rand()
:wq
I built a git hosting service called Hosted Gitea and discovered that automated deployment of VPS boxes on Digital Ocean is super simple. Here is how I got it working.
When a customer signs up a chain of events is set in motion:
- Their details are stored.
- A background task starts.
- Use the Digital Ocean API to create a fresh Ubuntu VPS box.
- Use the Gandi DNS API to point a domain name at the box.
- Run some Ansible scripts to provision Gitea on the new box.
The Digital Ocean API part consists of just a handful of function calls using python-digitalocean:
from digitalocean import Droplet, Manager # get our DO API token from an environment variable token = environ.get("DIGITALOCEAN_ACCESS_TOKEN") or exit("DIGITALOCEAN_ACCESS_TOKEN is not set.") manager = Manager(token=token) ssh_keys = manager.get_all_sshkeys() droplet = Droplet(token=token, ssh_keys=ssh_keys ... etc.) droplet.create() # then you can wait for the droplet to be created: actions = droplet.get_actions() for action in actions: if action.type == "create": while action.status != "completed": sleep(2) print("Waiting for box to deploy.")
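One thing to note: the polling loop above spins forever if the create action never completes. In my own scripts I wrap polling in a timeout with backoff; here is a generic sketch (the FakeAction class is a stand-in for python-digitalocean's action object, not part of the library):

```python
import time

def wait_until(check, timeout=300, initial_delay=2, max_delay=30):
    # Poll check() with exponential backoff until it returns True
    # or the timeout elapses; return whether it succeeded.
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(delay)
        delay = min(delay * 2, max_delay)
    return False

# Example with a stand-in for a droplet "create" action:
class FakeAction:
    def __init__(self):
        self.polls = 0
    def done(self):
        self.polls += 1
        return self.polls >= 3   # completes on the third poll

action = FakeAction()
print(wait_until(action.done, timeout=60, initial_delay=0))  # True
```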
It's as simple as that to launch a new VPS box and wait for it to come online. After this step there are a few other things that happen like emails being sent, Ansible scripts to set up Gitea etc. but the part of bringing a new box online is done.
Photo by Todd Cravens on Unsplash.
Currently nsMathMLChar uses GetBoundingMetrics and DrawString methods
of the older nsRenderingContext. These are high level methods that operate
with Unicode character input. Most of the time, this works OK for our
mathfontUnicode.properties database of Unicode character-part support and
mathfontFONTNAME.properties database of PUA and supplementary font support of
other stretchy characters.
For bug 407059, however, we'll need methods based on glyph indices (instead
of Unicode characters). For this, the newer gfxTextRun/gfxFont/gfxContext
classes would be appropriate.
The newer classes already support Unicode characters through gfxFontGroup, and
the nsRenderingContext methods above are simply wrappers around these newer
classes, so it would seem sensible to also use these classes (instead of
nsRenderingContext) in nsMathMLChar even for Unicode-based lookup.
The newer classes are also better for testing existence of a glyph in a
particular font, which is useful at least when using
mathfontUnicode.properties, because there all glyphs in characters built from
parts should come from the same font.
The gfxFont class provides methods for getting the font tables necessary for
bug 407059.
nsMathMLChar:
nsRenderingContext:
gfxTextRun:
gfxFont:
gfxContent:
gfxFontGroup:
I think it would be best if the work on this bug is done on top of bug 407439, which already has a patch reviewed.
> // A table consists of "nsGlyphCode"s which are viewed either as Unicode
> // points or as direct glyph indices, depending on the type of the table.
> // XXX The latter is not yet supported.
Is the plan to make nsGlyphCode use only glyph indices or to allow a mix of Unicode points and glyph indices?
Also, should we move to the newer classes in one go, or is it worth doing the work in several steps? For example, starting by removing GetBoundingMetrics and DrawString?
(In reply to comment #2)
> Is the plan to make nsGlyphCode use only glyph indices or to allow a mix of
> Unicode points and glyph indices?
For fonts where we continue to use mathfontFONTNAME.properties files we'll want to continue to lookup by Unicode point because glyph indices are likely to change between versions of fonts. Glyph indices will only be useful when they come from information in that font (i.e. the CMAP or MATH table).
It may be possible to lookup Unicode points to get glyph indices store glyph indices in nsGlyphCode, but I don't know whether there is any advantage in doing that. I wouldn't make that change unless there is a clear advantage in doing so.
> Also, should we move to the newer classes in one go or is it worth doing the
> work in several steps. For example starting to remove GetBoundingMetrics and
> DrawString?
As I see it, removing uses of nsRenderingContext's GetBoundingMetrics and DrawString will be pretty much all the work. The public nsMathMLChar methods may well continue to have an nsRenderingContext parameter, but aRenderingContext->ThebesContext() will be used to get a gfxContext for drawing.
I'm not concerned about the "#ifdef SHOW_BORDERS" code. That can either be removed or changed to use gfxContext.
Feel free to remove the "Composite"/child char code too.
That might be a good first step.
> I'm not concerned about the "#ifdef SHOW_BORDERS" code. That can either be
> removed or changed to use gfxContext.
I proposed to Daniel to create a first patch removing the "#ifdef SHOW_BORDERS".
I'm not going to mark a dependency, but whoever actually does this bug, please have a look at bug 651016 and its dependencies and see if some of that work makes sense to do as a side-effect.
(In reply to Karl Tomlinson (:karlt) from comment #4)
> Feel free to remove the "Composite"/child char code too.
> That might be a good first step.
For people who want to work on this bug: MathJax fonts require composite char for (bug 701758 comment 4), so finally we might want to keep this code.
The use of gfxPlatform::ResolveFontName() is preventing the use of any mathfontNAME.properties files except for mathfontUnicode.properties with downloaded fonts.
A key difference between the nsRenderingContext/nsFontMetrics API and
nsMathMLChar's needs is that nsRenderingContext's concept of a font is really
a CSS font specification, which might include several families and will fall
back to other families if none of the specified families match the particular
glyph. nsMathMLChar wants to work with a particular font and know that it is
that font only that will be used.
gfxFont provides a font-specific interface. A gfxFont can be obtained from a
gfxFontGroup (which corresponds to nsFontMetrics) via GetFontAt().
GetFontEntry()->FamilyName() will indicate whether the first font in the font
group actually matches the font family requested. It looks like gfxTextRun
(which is not-necessarily font-specific) is still needed to draw an measure
text.
Created attachment 653943 [details] [diff] [review]
Patch V1
Just an experimental patch (may not be quite correct or may be subject to memory leak) to discuss the way to fix this bug. I think we can use such a DrawGlyph method (and similarly GetGlyphBoundingMetrics). Later the aGlyph parameter could be either a unicode character or a glyph index. I think that we'll still need DrawString and GetBoundingMetrics on mData (which may contain several characters) when "mDrawNormal" is true...
(In reply to Frédéric Wang (:fredw) from comment #10)
> I think that
> we'll still need DrawString and GetBoundingMetrics on mData (which may
> contain several characters) when "mDrawNormal" is true...
I wonder whether there is much point in using nsMathMLChar if there are
multiple characters. If not, perhaps GlyphCode::code could be copied from
nsMathMLChar::mString, to remove the need for an mDrawNormal case.
A complication here is that the graphics team are working on a new API,
project name Azure, to replace gfxContext. The new API is currently only
functional on some platforms. It is close to being usable on all platforms,
but currently only canvas code is using the new API, and the API does not look
complete for other purposes.
At this stage, the new API provides only for drawing, not measuring text, so
canvas is still using gfxFontGroup for font selection and gfxTextRun for
laying out text. It is then copying glyphs ids and positions across to the
newer GlyphBuffer format. I assume at some stage there will be a better way
to setting up glyph info for the new API.
Given these changes are happening, I don't think there's much point
rearranging the code here too much if the only motivation is moving away from using nsRenderingContext to a different text drawing API.
That is not the only motivation. This is what I see we want to gain:
1. A way to access font tables and get glyph ids.
This is currently gfxFont, and glyph ids can be put into a gfxTextRun to
Draw. gfxTextRun is higher level than needed here because each string here
will use only one font, but gfxTextRun is designed to fallback to other
fonts for missing characters.
2. When using mathfontUnicode.properties, a way to check that the same
font is used for each part.
Currently, getting the gfxFonts from gfxTextRun will tell us that.
However, I expect some of this will change in the future, though I don't know
when. It is at least months away.
Thinking about bug 407059, I think we are going to want to fetch glyph id's
during stretch and then store them, perhaps with positions, in the
nsMathMLChar for drawing. The new gfx::GlyphBuffer looks most compatible with
this.
I'm beginning to think, despite what I said in comment 3, that it may be best
to store glyph ids and positions in the nsMathMLChar, during stretch, even
when using .properties files. That way we won't need two different code paths
at drawing. (If measuring of the parts can be shared too, even better.) It
will also be more efficient. By the time we've checked that each part is
using the same font, we'll have much of this information.
So if I understand correctly, the plan for this bug would be:
- Do not wait for the Azure API to be ready.
- Modify TryParts to verify that all the glyphs come from the same font.
- Modify the stretch code to directly convert the nsGlyphCode's to a gfx::GlyphBuffer (with glyph ids and perhaps positions) that we would store on nsMathMLChar.
- Modify the rendering code to display the glyphs in gfx::GlyphBuffer via a gfxTextRun.
Then in bug 407059 we could create a gfx::GlyphBuffer from the infos contained in the MATH table.
I'm taking this bug and will start to work on this as soon as I have more free time.
Created attachment 669193 [details] [diff] [review]
Patch V2
This patch removes the use of nsRenderingContext. It is still experimental, but I'm just asking feedback to be sure that the approach is correct. Some remarks:
1) the mDrawNormal that allowed several characters is actually used with "centered" operators (+, - etc), invisible operators and when the stretch is not necessary. I removed it but I haven't really verified if everything still work correctly in that case (in particular these characters used the parent style context). I suspect that the "centered" and "invisible" operators are not really necessary with modern fonts and could be removed.
2) I haven't worked on the previous suggestions yet. I think parts should always come from the same font so it is probably possible to store the nsFontMetrics on the nsMathMLChar instead of recomputing it each time. The stretch code could also directly create a gfx::GlyphBuffer instead of using mGlyphTable, TopOf, etc. Regarding the positions of glyphs, I'm wondering what to do with these AppUnitsPerDevPixel conversions.
3) I see memory leaks. I haven't verified but I suspect it is due to the nsFontMetrics.
4) Perhaps the gfxContext operations are not exactly the same as those that was done before. For example RTL or conversions involving AppUnitsPerDevPixel.
Two questions:
- What is the best way to get the glyph id nsGlyphCode::code? The functions in gfxFont take uint32 parameter but I'm note sure it is a good idea to convert the PRUnichar* to an integer. Otherwise we can do as in the canvas example mentioned in comment 12 and use a temporary gfxTextRun created from the glyph code.
- When we have a list of fonts (when we draw the character "normally" or when we use a variant from the unicode table) can we just set the glyph id to the unicode code point?
Created attachment 670382 [details] [diff] [review]
Patch V3
So here is an updated patch (fixing memory leaks, BTW). Ideally, I would like to do all the computation in the Stretch routine, so
nsGlyphTable* mGlyphTable;
nsGlyphCode mGlyph;
nsString mFamily;
could be replaced by
nsRefPtr<nsFontMetrics> mFontMetrics;
gfx::GlyphBuffer* mGlyphBuffer;
where mGlyphBuffer contains one or more glyphs to draw and mFontMetrics is shared by all the glyphs. I'm not sure whether we should repeat the glue in mGlyphBuffer. If not, we can just use mGlyphBuffer[4] instead but in that case I don't think we can do all the positioning in the Stretch routine. Perhaps that's what I'm going to do to start with.
However, some contructions use parts from different fonts. For example \u21D1 = \u21D1@1\uFFFD\uFFFD\uE10E in STIXNonUnicode. I suspect that in most cases and in particular with OpenType fonts, all the parts come from the same fonts. But for the moment, it seems that we'll have to keep storing mGlyphTable and the font number of each glyph (nsGlyphCode::font) in mGlyphBuffer so that we can change the font used to draw each glyph (and in that case, maybe it's not relevant to store mFontMetrics on the nsMathMLChar). Or maybe we can also remove these constructions that rely on different fonts?
Created attachment 819369 [details] [diff] [review]
Patch V3 (rebased)
Here is a rebased version of Frédéric’s last patch.
I also looked into switching to using glyph id instead of characters, but there are several issues:
* gfx::GlyphBuffer() is internal to gfxFont.cpp, and in general I can’t find any glyph-based thebes API.
*.
(In reply to Khaled Hosny from comment #17)
> * gfx::GlyphBuffer() is internal to gfxFont.cpp, and in general I can’t find
> any glyph-based thebes API.
gfxShapedText gfxShapedWord and gfxTextRun are the public structures for storing glyph indices and positions. Only gfxTextRun seems to support measuring the text.
If measuring is not required, then the GlyphBuffer in 2D.h may be an option.
> *.
It may be necessary to add a generic gfxFont::GetGlyph() method for OpenType fonts that uses the CMAP table.
The alternative, I guess, is to use the APIs to shape a string of Unicode characters, with only a single character, and then retrieve the glyph id from the resulting gfxShapedText, or derived class.
The gfxShapedText family of classes can be initialized from glyph ids and positions using SetGlyphs(). Perhaps CompressedGlyph::SetComplex() with DetailedGlyph will be necessary if there are vertical offsets, but it will be needed even if IsSimpleAdvance() returns false, so SetSimpleGlyph() is only an optimization, that is not necessary here.
With gfxTextRun, AddGlyphRun() is also required.
I think the idea was that there would be a single DetailedGlyph per unicode
character, but each DetailedGlyph can only contain glyphs from a single
gfxFont, so it may be necessary to add some bogus characters to the
gfxTextRun in some situations. The aLength parameter for the gfxTextRun constructor would be at least the number of fonts required, or perhaps the number of DetailedGlyphs for the simplest implementation.
Created attachment 8340749 [details] [diff] [review]
Part 1 - use gfx/thebes classes for measuring/drawing glyphs v3
Yet another rebase.
So I came back to this today. As Karl said above, it's probably best to store the glyph ids and metrics on the mMathMLChar. The call to nsMathMLChar::MeasureGlyph in the stretch code will be replaced by something that checks whether a glyph exists and returns the ID + metrics. The nsMathMLChar::DrawGlyph and nsMathMLChar::MeasureGlyph in the draw code will just use the saved data. I'll try to write a new patch this week.
Created attachment 8341290 [details] [diff] [review]
patch - v4
Here is a tentative patch to store the glyph ID and metrics on the frame. However, I'm not sure how to use the gfx* classes.
Karl, can you have a quick look at nsMathMLChar::GetGlyphData and give more hints about how to extract the glyphID from the textrun?
See for an example to follow that gets glyph ids from the run.
They are stored in a kind of compressed format, so check IsSimpleGlyph() first to find out where the id is.
Created attachment 8341645 [details] [diff] [review]
Patch V5
Created attachment 8341943 [details] [diff] [review]
Patch V6
Created attachment 8341945 [details] [diff] [review]
Allow nsGlyphCode to store glyphID.
The patches should now allow to handle glyph by glyph index. I see a weird rendering with the vertical bar when the STIX fonts are used, tough.
Some investigations done today:
- we will probably need to rewrite the code to allow the more general GlyphAssembly table.
- gfxFontEntry has some functions to access the Open Type tables like gfxFontEntry::GetFontTable. Probably a class similar to nsGlyphTable to retrieve info from the Open Type table should be implemented (perhaps two classes nsGlyphTableFromProperties and nsGlyphTableFromOpenTypeMath deriving from the same base class)
- The XeTeX code is (it seems that mozilla::NativeEndian can do that)
I'm wondering if it's worth implementing the logic on the gfx/thebes side, like what is done for gfxSVGGlyphs.
- I think the appropriate place to plug the Open Type search is in StretchEnumContext::EnumCallback ; do the SetFontFamily call before setting glyphTable and then try to get a glyphTable from context->mChar->mFontMetrics->GetThebesFontGroup()->GetFontAt(0)->GetFontEntry().
Created attachment 8342254 [details] [diff] [review]
Patch v7
I've merged the patch that allows nsGlyphCode to store either a glyph index or a Unicode code point. To fix the rendering bug mentioned above, I kept drawing by Unicode code point for the mathfontUnicode.properties. For the other tables, a conversion from Unicode code point to glyph index is performed. This is probably not really useful at the moment, but at least it allows to check that the draw-by-glyph-index code is working as expected.
For the record, I've started some of the work indicated in comment 27 today. I'll keep the latest version of the patches on my GitHub MozillaCentralPatches repository.
Created attachment 8343927 [details] [diff] [review]
Patch V8
Created attachment 8344396 [details] [diff] [review]
Patch V9
See comments in 407059.
Comment on attachment 8344396 [details] [diff] [review]
Patch V9
Review of attachment 8344396 [details] [diff] [review]:
-----------------------------------------------------------------
> - aRenderingContext.SetFont(fm);
aRenderingContext.SetFont(mFontMetrics); should still be called before the Stretch/MaxWidth exit. At least the msqrt code assumes the font metrics to be set by the stretchy code. I'll update the patch later.
Created attachment 8345135 [details] [diff] [review]
Patch V10
Refreshing the patch to fix a warning-as-error + a crash due to the font not set on the nsRenderingContext.
Created attachment 8359268 [details] [diff] [review]
Patch V11
Refreshing patch..
(In reply to Karl Tomlinson (:karlt) from comment #36)
>.
>
The latest version of the patch for bug 407059 is:
I also have a WIP patch for the mirroring that changes a bit things:
> ).
(In reply to Frédéric Wang (:fredw) from comment #37)
> >).
So I think that was at least
(In reply to Frédéric Wang (:fredw) from comment #38)
> So I think that was at least
>
> nsMathMLmrootFrame.cpp#l378
I wonder whether that is using a different nsFontMetrics than Reflow().
I don't think it was intentional that it got the metrics from the nsMathMLChar.
I think the GetIntrinsicWidthMetrics was getting the font metrics from the
rendering context because that looked similar to what Reflow() did before
But still, it looks like I wrote that code wrong, because it was meant to
match Reflow().
Comment on attachment 8359268 [details] [diff] [review]
Patch V11
I would like to avoid having mFontMetrics on the nsMathMLChar. This is
essentially being used as a way to pass a parameter between SetFontFamily() or
GetMetricsFor() and MakeTextRun(), but it would be better to be explicit about
that.
This can be done by using an nsRefPtr<nsFontMetrics>* out parameter for
SetFontFamily(). Probably better would be nsRefPtr<gfxFontGroup>*. The only
additional thing that nsFontMetrics provides over gfxFontGroup is
AppUnitsPerDevPixel() but that is available from an nsPresContext or
nsDeviceContext.
If mGlyphs on the nsMathMLChar is replaced with a gfxTextRun for each part,
then the draw methods will not need to use SetFontFamily, the nsMathMLChar
will not need to keep mGlyphTable nor mFamily, and PaintForeground can use the
same code for DRAW_NORMAL and DRAW_VARIANT.
If a MakeTextRun method is added to the glyph table, other code won't need to
know about the glyph storage, and nsGlyphCodes can be opaque pointers or ids
(that the table can handle). Even OpenType MATH glyph tables should be able
to construct the gfxTextRun so that the same measuring code can be used for
all text runs.
It should be fine to pass a null property provider AFAICS.
Some code can be shared by having a helper function to measure text of a
gfxTextRun and return nsBoundingMetrics.
>+ aPresContext->DeviceContext()->GetMetricsFor(font,
>+ mStyleContext->StyleFont()->
>+ mLanguage,
>+ mStyleContext->PresContext()->
>+ GetUserFontSet(),
>+ mStyleContext->PresContext()->
>+ GetTextPerfMetrics(),
>+ *getter_AddRefs(fm));
This has fewer line breaks, making it easier to read:
aPresContext->DeviceContext()->
GetMetricsFor(font,
mStyleContext->StyleFont()->mLanguage,
mStyleContext->PresContext()->GetUserFontSet(),
mStyleContext->PresContext()->GetTextPerfMetrics(),
*getter_AddRefs(fm));
but I assume mStyleContext->PresContext() is aPresContext.
> // Update the font and rendering context if there is a family change
Please update this comment.
>- mDrawNormal = !glyphFound;
>+ if (!glyphFound) mDraw = DRAW_NORMAL;
This is no longer necessary, I assume because Stretch() sets DRAW_NORMAL.
>- mCtx.IntersectClip(aRect);
>+ mThebesContext->NewPath();
>+ gfxRect clip(NSAppUnitsToFloatPixels(aRect.x, aAppUnitsPerGfxUnit),
>+ NSAppUnitsToFloatPixels(aRect.y, aAppUnitsPerGfxUnit),
>+ NSAppUnitsToFloatPixels(aRect.width, aAppUnitsPerGfxUnit),
>+ NSAppUnitsToFloatPixels(aRect.height, aAppUnitsPerGfxUnit));
nsLayoutUtils::RectToGfxRect()
Similarly in PaintRule.
>+ mThebesContext->Rectangle(clip);
Use SnappedRectangle(clip) as having clean edges hid the joins in the parts.
>+PaintRule(gfxContext* aThebesContext,
>+ int32_t aAppUnitsPerGfxUnit,
>+ nsRect& aRect)
>+{
>+ if (!aRect.IsEmpty()) {
I don't think this test is necessary.
>+ aThebesContext->Rectangle(rect, true);
SnappedRectangle() is more explicit than the boolean parameter.
>-.
>+ return (other.font == font && other.IsGlyphID() == IsGlyphID() &&
>+ ((IsGlyphID() && other.glyphID == glyphID) ||
>+ (!IsGlyphID() && other.code[0] == code[0] &&
>+ other.code[1] == code[1])));
No need for "other.IsGlyphID() == IsGlyphID()" as this is covered by comparing
the font members.
>+ typedef enum {
>+ DRAW_NORMAL, DRAW_VARIANT, DRAW_PARTS
>+ } DrawingMethod;
In C++, this is usually
enum DrawingMethod {
DRAW_NORMAL, DRAW_VARIANT, DRAW_PARTS
};
(In reply to Karl Tomlinson (:karlt) from comment #40)
> >-.
Yes, I think that was the most convenient way to handle this without changing to much code (eventually this should be rewritten to support arbitrary number of part). I'm not such how many constructions will be affected... What do you suggest?
(In reply to Frédéric Wang (:fredw) from comment #41)
> What do you suggest?
I guess the simplest approach would be to leave missing parts empty (instead of setting them to glue) and leave their bounding metrics at zero.
Just check that the bars in
<mrow><mo>|</mo><mi>·</mi><mo>|</mo></mrow>
still have finite size.
Created attachment 8373615 [details] [diff] [review]
Patch V12
Comment on attachment 8373615 [details] [diff] [review]
Patch V12
This all looks great, thanks. I think this should be landed with just a couple of minor changes below.
>+ mGlyphs[0]->Draw(thebesContext, gfxPoint(r.x, r.y + mUnscaledAscent),
This can be gfxPoint(0.0, mUnscaledAscent), because the transform has been
applied to make r.TopLeft() zero, and I think that would be clearer.
>+nsMathMLChar::PaintVertically(nsPresContext* aPresContext,
>+ gfxContext* aThebesContext,
>+ nsFont& aFont,
>+ nsGlyphTable* aGlyphTable,
>+ nsRect& aRect)
> {
> // Get the device pixel size in the vertical direction.
> // (This makes no effort to optimize for non-translation transformations.)
> nscoord oneDevPixel = aPresContext->AppUnitsPerDevPixel();
>+ nsRefPtr<gfxFontGroup> fontGroup;
This variable is no longer needed. Similarly in PaintHorizontally.
>- if (! family.Equals(aFont.name)) {
>+ if (!*aFontGroup || !family.Equals(aFont.name)) {
I wonder what required this.
It looks sensible and probably means some other aFont.name code can be
removed, but leave that for another time.
(In reply to Frédéric Wang (:fredw) from comment #46)
>
I've modified semantics-1-ref.xhtml and that seems to make the test pass again now.
I checked the painting code and indeed the remaining use of the Glyph Table was really only for the glue, which we don't need to handle specially anymore. So I've cleanup the code even further and now the painting only works on the textruns. I have also simplified the GlyphTable API and that will be more convenient for the changes with the MATH table:
Created attachment 8374986 [details] [diff] [review]
Patch V13
Comment on attachment 8374986 [details] [diff] [review]
Patch V13
Removing all this code is excellent. Usually it is easier to review changes in separate patches, but this wasn't too bad as interdiff happened to work ok.
The only issue I see is from this code now having additional effect on the missing middle glyph.
> // _cairo_scaled_font_glyph_device_extents rounds outwards to the nearest
> // pixel, so the bm values can include 1 row of faint pixels on each edge.
> // Don't rely on this pixel as it can look like a gap.
> start[i] = dy - bm.ascent + oneDevPixel; // top join
> end[i] = dy + bm.descent - oneDevPixel; // bottom join
The middle missing glyph has zero metrics, and so the oneDevPixel will make the glues on either side overlap, which can make the join visible.
Can you handle the missing glyph specially or avoid adding oneDevPixel when bm.ascent + bm.descent < 2 * oneDevPixel, please?
Similarly for horizontally.
I think the semantics-1 failure at
is likely due to different bounding metrics in the operator built from
horizontal parts.
Now that the missing parts are not filled with glue their metrics have ascent
and descent of zero. For U+00AF, descent should be negative.
Can you adjust this code to compute ascent and descent only from existing
glyphs (or non-empty metrics), please?
Having an initial loop to find the first non-empty glyph may be the simplest
approach.
Created attachment 8375348 [details] [diff] [review]
Patch V14
I landed a trivial mingw fixup: | https://bugzilla.mozilla.org/show_bug.cgi?id=663740 | CC-MAIN-2016-36 | refinedweb | 4,031 | 54.93 |
Dask Benchmarks
This work is supported by Continuum Analytics and the Data Driven Discovery Initiative from the Moore Foundation.
Summary
We measure the performance of Dask’s distributed scheduler for a variety of different workloads under increasing scales of both problem and cluster size. This helps to answer questions about Dask’s scalability and also helps to educate readers on the sorts of computations that scale well.
We will vary our computations in a few ways to see how they stress performance. We consider the following:
- Computational and communication patterns like embarrassingly parallel, fully sequential, bulk communication, many-small communication, nearest neighbor, tree reductions, and dynamic graphs.
- Varying task duration ranging from very fast (microsecond) tasks, to 100ms and 1s long tasks. Faster tasks make it harder for the central scheduler to keep up with the workers.
- Varying cluster size from one two-core worker to 256 two-core workers and varying dataset size which we scale linearly with the number of workers. This means that we’re measuring weak scaling.
- Varying APIs between tasks, multidimensional arrays and dataframes all of which have cases in the above categories but depend on different in-memory computational systems like NumPy or Pandas.
We will start with benchmarks for straight tasks, which are the most flexible system and also the easiest to understand. This will help us to understand scaling limits on arrays and dataframes.
Note: we did not tune our benchmarks or configuration at all for these experiments. They are well below what is possible, but perhaps representative of what a beginning user might experience upon setting up a cluster without expertise or thinking about configuration.
A Note on Benchmarks and Bias
you can safely skip this section if you’re in a rush
This is a technical document, not a marketing piece. These benchmarks adhere to the principles laid out in this blogpost and attempt to avoid those pitfalls around developer bias. In particular the following are true:
- We decided on a set of benchmarks before we ran them on a cluster
- We did not improve the software or tweak the benchmarks after seeing the results. These were run on the current release of Dask in the wild that was put out weeks ago, not on a development branch.
- The computations were constructed naively, as a novice would write them. They were not tweaked for extra performance.
- The cluster was configured naively, without attention to scale or special parameters
We estimate that expert use would result in about a 5-10x scaling improvement over what we’ll see. We’ll detail how to improve scaling with expert methods at the bottom of the post.
All that being said the author of this blogpost is paid to write this software and so you probably shouldn’t trust him. We invite readers to explore things independently. All configuration, notebooks, plotting code, and data are available below:
- dask-kubernetes for cluster deployment
- Jupyter notebook for benchmarks
- Jupyter notebook for plots
- Benchmark results data on GCS
Tasks
We start by benchmarking the task scheduling API. Dask’s task scheduling APIs are at the heart of the other “big data” APIs (like dataframes). We start with tasks because they’re the simplest and most raw representation of Dask. Mostly we’ll run the following functions on integers, but you could fill in any function here, like a pandas dataframe method or sklearn routine.
import time

def inc(x):
    return x + 1

def add(x, y):
    return x + y

def slowinc(x, delay=0.1):
    time.sleep(delay)
    return x + 1

def slowadd(x, y, delay=0.1):
    time.sleep(delay)
    return x + y

def slowsum(L, delay=0.1):
    time.sleep(delay)
    return sum(L)
Embarrassingly Parallel Tasks
We run the following code on our cluster and measure how long they take to complete:
futures = client.map(slowinc, range(4 * n), delay=1)  # 1s delay
wait(futures)
futures = client.map(slowinc, range(100 * n_cores))  # 100ms delay
wait(futures)
futures = client.map(inc, range(n_cores * 200))  # fast
wait(futures)
We see that for fast tasks the system can process around 2000-3000 tasks per second. This is mostly bound by scheduler and client overhead. Adding more workers into the system doesn’t give us any more tasks per second. However if our tasks take any amount of time (like 100ms or 1s) then we see decent speedups.
If you switch to linear scales on the plots, you’ll see that as we get out to 512 cores we start to slow down by about a factor of two. I’m surprised to see this behavior (hooray benchmarks) because all of Dask’s scheduling decisions are independent of cluster size. My first guess is that the scheduler may be getting swamped with administrative messages, but we’ll have to dig in a bit deeper here.
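The fast-task ceiling described above can be captured with a simple back-of-the-envelope model (my own illustration, not part of the benchmark code, and the ~3000 tasks/s scheduler rate is an assumption taken from the observations above): a run is bound either by the scheduler's task-handling rate or by the workers' aggregate compute time, whichever is larger.

```python
def estimated_runtime(n_tasks, task_duration, n_cores, scheduler_rate=3000):
    """Rough model: the run is bound either by the scheduler
    (which handles ~scheduler_rate tasks/s) or by the workers
    (which need n_tasks * task_duration / n_cores seconds)."""
    scheduler_time = n_tasks / scheduler_rate
    worker_time = n_tasks * task_duration / n_cores
    return max(scheduler_time, worker_time)

# Fast (microsecond) tasks: scheduler-bound, so adding cores doesn't help.
fast_64 = estimated_runtime(100_000, 1e-6, 64)
fast_512 = estimated_runtime(100_000, 1e-6, 512)

# 100ms tasks: worker-bound at small scale, so more cores do help,
# until the scheduler term eventually dominates again.
slow_64 = estimated_runtime(100_000, 0.1, 64)
slow_512 = estimated_runtime(100_000, 0.1, 512)
```

This toy model reproduces the qualitative shape of the plots: fast-task runtimes are flat in cluster size, while 100ms/1s tasks scale until they hit the scheduler ceiling.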
Tree Reduction
Not all computations are embarrassingly parallel. Many computations have dependencies between them. Consider a tree reduction, where we combine neighboring elements until there is only one left. This stresses task dependencies and small data movement.
from dask import delayed

L = range(2**7 * n)
while len(L) > 1:  # while there is more than one element left
    # add neighbors together
    L = [delayed(slowadd)(a, b) for a, b in zip(L[::2], L[1::2])]

L[0].compute()
We see similar scaling to the embarrassingly parallel case. Things proceed linearly until they get to around 3000 tasks per second, at which point they fall behind linear scaling. Dask doesn’t seem to mind dependencies, even custom situations like this one.
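As a check on the shape of this pattern (an illustrative count, not part of the benchmark): a halving reduction over n leaves performs roughly n - 1 combine tasks spread over about log2(n) levels, so the task count is linear in the data while the critical path is only logarithmic.

```python
def count_reduction_tasks(n_leaves):
    """Count the combine tasks and levels in a pairwise halving
    reduction like the delayed(slowadd) loop above."""
    tasks = 0
    levels = 0
    n = n_leaves
    while n > 1:
        tasks += n // 2       # one add per neighboring pair
        n = (n + 1) // 2      # an odd leftover carries to the next level
        levels += 1
    return tasks, levels

tasks, levels = count_reduction_tasks(256)  # 255 adds over 8 levels
```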
Nearest Neighbor
Nearest neighbor computations are common in data analysis when you need to share a bit of data between neighboring elements, such as frequently occurs in timeseries computations in dataframes or overlapping image processing in arrays or PDE computations.
L = range(20 * n)
L = client.map(slowadd, L[:-1], L[1:])
L = client.map(slowadd, L[:-1], L[1:])
wait(L)
Scaling is similar to the tree reduction case. Interesting dependency structures don’t incur significant overhead or scaling costs.
Sequential
We consider a computation that isn’t parallel at all, but is instead highly sequential. Increasing the number of workers shouldn’t help here (there is only one thing to do at a time) but this does demonstrate the extra stresses that arise from a large number of workers. Note that we have turned off task fusion for this, so here we’re measuring how many roundtrips can occur between the scheduler and worker every second.
x = 1
for i in range(100):
    x = delayed(inc)(x)

x.compute()
So we get something like 100 roundtrips per second, or around 10ms roundtrip latencies. It turns out that a decent chunk of this cost was due to an optimization; workers prefer to batch small messages for higher throughput. In this case that optimization hurts us. Still though, we’re about 2-4x faster than video frame-rate here (video runs at around 24Hz or 40ms between frames).
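The batching trade-off mentioned above can be sketched with a toy model (my own illustration; the 5ms numbers are assumptions, not measurements from the post): batching helps throughput when many messages are in flight, but a fully sequential chain never has a second message to batch, so the batching window adds directly to every roundtrip.

```python
def chain_time(n_tasks, wire_latency, batch_window=0.0):
    """Toy model: a fully sequential chain pays one full roundtrip
    per task, and any batching window adds to each roundtrip because
    there is never more than one message waiting to be batched."""
    return n_tasks * (wire_latency + batch_window)

# Hypothetical numbers: 5ms wire latency, 5ms batching window.
no_batching = chain_time(100, 0.005)           # 0.5 s for the chain
with_batching = chain_time(100, 0.005, 0.005)  # 1.0 s for the chain
```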
Client in the loop
Finally we consider a reduction that consumes whichever futures finish first and adds them together. This is an example of using client-side logic within the computation, which is often helpful in complex algorithms. This also scales a little bit better because there are fewer dependencies to track within the scheduler. The client takes on a bit of the load.
from dask.distributed import as_completed

futures = client.map(slowinc, range(n * 20))
pool = as_completed(futures)
batches = pool.batches()

while True:
    try:
        batch = next(batches)
        if len(batch) == 1:
            batch += next(batches)
    except StopIteration:
        break
    future = client.submit(slowsum, batch)
    pool.add(future)
Tasks: Complete
We show most of the plots from above for comparison.
Arrays
When we combine NumPy arrays with the task scheduling system above we get dask.array, a distributed multi-dimensional array. This section shows computations like the last section (maps, reductions, nearest-neighbor), but now these computations are motivated by actual data-oriented computations and involve real data movement.
Create Dataset
We make a square array with somewhat random data. This array scales with the number of cores. We cut it into uniform chunks of size 2000 by 2000.
N = int(5000 * math.sqrt(n_cores))
x = da.random.randint(0, 10000, size=(N, N), chunks=(2000, 2000))
x = x.persist()
wait(x)
Creating this array is embarrassingly parallel. There is an odd corner in the graph here that I’m not able to explain.
Elementwise Computation
We perform some numerical computation element-by-element on this array.
y = da.sin(x) ** 2 + da.cos(x) ** 2
y = y.persist()
wait(y)
This is also embarrassingly parallel. Each task here takes around 300ms (the time it takes to call this on a single 2000 by 2000 numpy array chunk).
Reductions
We compute the standard deviation of the array. This is implemented as a tree reduction.
x.std().compute()
Random Access
We get a single element from the array. This shouldn’t get any faster with more workers, but it may get slower depending on how much baseline load each worker adds to the scheduler.
x[1234, 4567].compute()
We get around 400-800 bytes per second, which translates to response times of 10-20ms, about twice the speed of video framerate. We see that performance does degrade once we have a hundred or so active connections.
Communication
We add the array to its transpose. This forces different chunks to move around the network so that they can add to each other. Roughly half of the array moves on the network.
y = x + x.T
y = y.persist()
wait(y)
The task structure of this computation is something like nearest-neighbors. It has a regular pattern with a small number of connections per task. It’s really more a test of the network hardware, which we see does not impose any additional scaling limitations (this looks like normal slightly-sub-linear scaling).
Rechunking
Sometimes communication is composed of many small transfers. For example if you have a time series of images so that each image is a chunk, you might want to rechunk the data so that all of the time values for each pixel are in a chunk instead. Doing this can be very challenging because every output chunk requires a little bit of data from every input chunk, resulting in potentially n-squared transfers.
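A quick back-of-envelope sketch (our own, not from the original post) of why this is potentially n-squared: in the worst case, the number of chunk-to-chunk transfers is simply the product of the input and output chunk counts.

```python
def rechunk_transfers(n_in: int, n_out: int) -> int:
    """Worst case: every output chunk needs a slice of every input chunk."""
    return n_in * n_out

# 100 chunks in, 100 chunks out: 100 * 100 = 10,000 chunk-to-chunk transfers
print(rechunk_transfers(100, 100))
```

In practice dask's rechunk algorithm mitigates this by rechunking in stages, but the quadratic ceiling is what makes the operation expensive.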
y = x.rechunk((20000, 200)).persist()
wait(y)

y = y.rechunk((200, 20000)).persist()
wait(y)
This computation can be very hard. We see that dask does it more slowly than fast computations like reductions, but it still scales decently well up to hundreds of workers.
Nearest Neighbor
Dask.array includes the ability to overlap small bits of neighboring blocks to enable functions that require a bit of continuity like derivatives or spatial smoothing functions.
y = x.map_overlap(slowinc, depth=1, delay=0.1).persist()
wait(y)
Arrays: Complete
DataFrames
We can combine Pandas Dataframes with Dask to obtain Dask dataframes, distributed tables. This section will be much like the last section on arrays but will instead focus on pandas-style computations.
Create Dataset
We make an array of random integers with ten columns and two million rows per core, cut into chunks of one million rows. We turn this into a dataframe of integers:
x = da.random.randint(0, 10000, size=(N, 10), chunks=(1000000, 10))
df = dd.from_dask_array(x).persist()
wait(df)
Elementwise
We can perform 100ms tasks or try out a bunch of arithmetic.
y = df.map_partitions(slowinc, meta=df).persist()
wait(y)
y = (df[0] + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10).persist()
wait(y)
Random access
Similarly we can try random access with loc.
df.loc[123456].compute()
Reductions
We can try reductions along the full dataset or a single series:
df.std().compute()
df[0].std().compute()
Groupby aggregations like df.groupby(...).column.mean() operate very similarly to reductions, just with slightly more complexity.
df.groupby(0)[1].mean().compute()
Shuffles
However operations like df.groupby(...).apply(...) are much harder to accomplish because we actually need to construct the groups. This requires a full shuffle of all of the data, which can be quite expensive. This is also the same operation that occurs when we sort or call set_index.
df.groupby(0).apply(len).compute() # this would be faster as df.groupby(0).size()
y = df.set_index(1).persist()
wait(y)
This still performs decently and scales well out to a hundred or so workers.
Timeseries operations
Timeseries operations often require nearest neighbor computations. Here we look at rolling aggregations, but cumulative operations, resampling, and so on are all much the same.
y = df.rolling(5).mean().persist()
wait(y)
Dataframes: Complete
Analysis
Let’s start with a few main observations:
- The longer your individual tasks take, the better Dask (or any distributed system) will scale. As you increase the number of workers you should also endeavor to increase average task size, for example by increasing the in-memory size of your array chunks or dataframe partitions.
- The Dask scheduler + Client currently maxes out at around 3000 tasks per second. Another way to put this is that if our computations take 100ms then we can saturate about 300 cores, which is more-or-less what we observe here.
- Adding dependencies is generally free in modest cases such as in a reduction or nearest-neighbor computation. It doesn’t matter what structure your dependencies take, as long as parallelism is still abundant.
- Adding more substantial dependencies, such as in array rechunking or dataframe shuffling, can be more costly, but dask collection algorithms (array, dataframe) are built to maintain scalability even at scale.
- The scheduler seems to slow down at 256 workers, even for long task lengths. This suggests that we may have an overhead issue that needs to be resolved.
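The second observation above can be turned into a quick rule of thumb. A sketch in plain Python, where the 3000 tasks-per-second figure is the scheduler ceiling quoted above and the rest is our own arithmetic:

```python
# The ~3000 tasks/second scheduler ceiling is the figure quoted above;
# everything else here is simple arithmetic.
SCHEDULER_TASKS_PER_SECOND = 3000

def saturated_cores(task_duration_s: float) -> float:
    """Cores the scheduler can keep busy at a given average task length."""
    return SCHEDULER_TASKS_PER_SECOND * task_duration_s

print(saturated_cores(0.1))   # 100 ms tasks saturate roughly 300 cores
print(saturated_cores(1.0))   # longer tasks push this into the thousands
```

This is why increasing task size (bigger chunks, bigger partitions) is the first lever to pull when scaling out.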
Expert Approach
So given our experience here, let’s now tweak settings to make Dask run well. We want to avoid two things:
- Lots of independent worker processes
- Lots of small tasks
So lets change some things:
- Bigger workers: Rather than 256 two-core workers, let's deploy 32 sixteen-core workers.
- Bigger chunks: Rather than 2000 by 2000 numpy array chunks, let's bump this up to 10,000 by 10,000. Rather than 1,000,000 row Pandas dataframe partitions, let's bump this up to 10,000,000.
These sizes are still well within comfortable memory limits. Each is about a Gigabyte in our case.
When we make these changes we find that all metrics improve at larger scales. Some notable improvements are included in a table below (sorry for not having pretty plots in this case).
We see that for some operations we can get significant improvements (dask.dataframe is now churning through data at 60/s) and for other operations that are largely scheduler or network bound this doesn’t strongly improve the situation (and sometimes hurts).
Still though, even with naive settings we’re routinely pushing through 10s of gigabytes a second on a modest cluster. These speeds are available for a very wide range of computations.
Final thoughts
Hopefully these notes help people to understand Dask’s scalability. Like all tools it has limits, but even under normal settings Dask should scale well out to a hundred workers or so. Once you reach this limit you might want to start taking other factors into consideration, especially threads-per-worker and block size, both of which can help push well into the thousands-of-cores range.
The included notebooks are self contained, with code to both run and time the computations as well as produce the Bokeh figures. I would love to see other people reproduce these benchmarks (or others!) on different hardware or with different settings.
Tooling
This blogpost made use of the following tools:
- Dask-kubernetes: for deploying clusters of varying sizes on Google compute engine
- Bokeh: for plotting (gallery)
- gcsfs: for storage on Google cloud storage
A Complete Introduction Guide To TypeScript
Piero Borrelli
As the popularity of TypeScript has been rising over the last few years, thousands of developers have decided to start using this JavaScript superset to empower their code even more. This guide aims to be a quick primer for developers who would like to learn TypeScript and use it in their next project.
#1 The word Types means: use them!
One of the biggest features of TypeScript is compile-time type checking, which prevents any mismatch in the types you are using for your variables. And yes, you can actually use types in TypeScript. Here are some examples of how:
// legal
let isReady : boolean = false;
let decimal : number = 20;
let name : string = "Dev.to";
let numbers : number[] = [1, 2, 3];

// illegal
let isReady : boolean = 10;
let decimal : number = "not a number";
let name : string = true;
let numbers : number[] = "not an array of numbers";
#1.1 Can I use multiple types for my variables?
Of course you can: by simply using the any type for one of your variables, you will be able to assign it values of different types:
let unknown : any = 30;
unknown = "what is this variable?";
If you want to restrict the set of types you can assign to a variable, you can use the pipe (union) operator like this:
let multiple : boolean | number = 10;
multiple = true; // still valid
#1.2 What if I don’t want to specify the type of a variable?
No problem! TypeScript supports both explicit and implicit typing. In the first case, you specify variables’ types yourself, as we have seen up to this point; in the second case, the type is automatically assigned to a variable when you first initialize it with a value. This mechanism is better known as type inference.
let explicit : number = 10; // explicitly using the type 'number'
let implicit = 10; // inference of the type 'number'
Notice how type inference comes in handy in other useful cases like function return values:
// inference will set 'number' as the return type for this function
function add(a: number, b: number) {
    return a + b;
}
#1.3 Can I check the type of my variables?
Want to make sure you are using the right type? The right class? You can check by using the instanceof operator like this:
import { Cake } from './cake.model';

let cake = new Cake('eggs', 'milk');

if (cake instanceof Cake) {
    console.log("We've got a cake here!");
}
This is especially useful for user defined types and it will also work when you are inheriting properties from another object.
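A minimal sketch of the inheritance case (Dessert here is a hypothetical superclass of the article's Cake, our own illustration): instanceof walks the prototype chain, so an instance of a subclass also matches its parent class.

```typescript
class Dessert {}

class Cake extends Dessert {
    constructor(public eggs: string, public milk: string) {
        super();
    }
}

const cake = new Cake('eggs', 'milk');
console.log(cake instanceof Cake);    // true
console.log(cake instanceof Dessert); // true: instanceof walks the prototype chain
```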
#1.4 Can I convert the types of my variables?
This type of operation is typically known as casting, and it can be performed in TypeScript in special cases where we need to handle a variable as a specific type. Let’s suppose you defined a variable of type any, but now want to use some common string methods on it, which you cannot access since the declared type is not string. You can tell TypeScript to handle that variable as a string using:
let unknown : any;
unknown = "hello";
console.log("length is : ", (<string>unknown).length);
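The same assertion can also be written with the 'as' syntax, which is equivalent and is the only form allowed in .tsx files, where angle brackets clash with JSX:

```typescript
let mystery: any = "hello";

// 'as' performs the same compile-time assertion as <string>mystery
console.log("length is:", (mystery as string).length); // 5
```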
#2 Working with Arrays
Everything that mentioned above can be pretty much adapted when it comes to using Arrays in TypeScript:
// legal
let numbers : number[] = [1, 2, 3];
let strings : string[] = ["hello", "good", "world"];
let booleans : boolean[] = [true, false, true];
let whatever : any[] = ['Hello World', 10];

// illegal
let numbers : number[] = [1, true, 3];
let strings : string[] = ["hello", 1, "world"];
let booleans : boolean[] = [true, 100, true];

// other possibilities include
let numbersAndBooleans : (number | boolean)[] = [true, 100]; // using the pipe operator
let matrix : number[][] = [[10, 20]];
#2.1 Using tuples
Quite a new concept, tuple types allow you to express an array where the type of a fixed number of elements is known, but need not be the same. Consider if you wanted to represent a value as a pair of a boolean and a number:
// Using a tuple
let x: [boolean, number];
x = [true, 10]; // legal initialization
x = [10, "hello"]; // illegal initialization
#2.2 Something I really missed: Enums!
This great addition to Javascript is something that I was really missing from the old days when I would code using Java, enums are basically a set of named constants. There are three types of enums:
- Numeric Enum
- String enum
- Heterogeneous enum
In order to not make this article too long, I won’t go into too much detail about enums, just remember that they are especially useful if you want to better document your intent or to create a set of distinct cases like:
enum Direction {
    Up = 1,
    Down,
    Left,
    Right,
}

movePlayer(Direction.Up);
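For comparison, a string enum (the second variant listed above) requires each member to be initialized with a string literal; LogLevel is our own example, not from the article:

```typescript
// Each member of a string enum must be constant-initialized with a string.
enum LogLevel {
    Debug = "DEBUG",
    Warn = "WARN",
    Error = "ERROR",
}

console.log(LogLevel.Warn); // "WARN"
```

String enums serialize to readable values, which makes them handy for logging and debugging.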
#3 What about objects?
Objects are another important part of TypeScript; let’s analyze them in more detail with an example:
// Javascript definition
let user = {
    name: "piero",
    surname: "borrelli"
}

// Typescript definition is the same
let user = {
    name: "piero",
    surname: "borrelli"
}
// except that now name and surname will be considered as {name: string, surname: string}
The two properties of the object are inferred to be of type string, meaning that any attempt to assign a value of a different type to them will be invalid:
user.name = 35; // invalid
#3.1 Object type
Object is a type for values that don’t fall into the primitive-type category (boolean, string, number, symbol, null, undefined), and it follows this syntax:
let user: { name: string, surname: string};
#4 Functions
When it comes to functions, TypeScript introduces the possibility of using types when working with them. The first place we want to use them, for example, is for function parameters:
// define types for the function parameters
function test(name: string, surname: string) {
    return name + " " + surname;
}

let fullName = test("piero", "borrelli"); // legal
let fullName = test(10, "borrelli"); // illegal
Another place where you might want to specify a type is when returning a value from a function. Notice that, in the case of the function above, the return type was automatically inferred to be string. Let’s see how we can explicitly define the return type of a function:
// define return type for the function
function test(name: string, surname: string): string {
    return name + " " + surname;
}

// illegal
function test(name: string, surname: string): string {
    return name.length; // will return a number here, which is not expected
}
#5 The OOP part
Since the release of ECMAScript 6, Javascript programmers have been able to build their programs using the object-oriented approach. This approach is also supported by Typescript, so let’s try to analyze how we would use it by making some examples:
class Point {
    x: number;
    y: number;

    constructor(x: number, y: number) {
        this.x = x; // where 'this' refers to the current object
        this.y = y;
    }

    getPoints() {
        return "x: " + this.x + " y: " + this.y;
    }
}

let point = new Point(10, 20);
console.log(point.getPoints());
To most people who have worked with languages like C# or Java this will look extremely familiar: we have a class named Point with two members, x and y, which we can access freely (more on this later), and we also call a class method, getPoints(). We can then create an instance of an object of type Point by using the new keyword.
Using access modifiers
Not going on too much detail on this since it’s a complete different topic, but keep in mind that in Typescript you can also define access modifiers for your classes’ variables like this:
class Point {
    private x: number;
    private y: number;

    constructor(x: number, y: number) {
        this.x = x; // where 'this' refers to the current object
        this.y = y;
    }

    getPoints() {
        return "x: " + this.x + " y: " + this.y;
    }
}
As with basically all object-oriented programming languages, we can use access modifiers to establish who is able to access our class data. By default, public is the modifier applied to members; private and protected are used, respectively, when you want a member to be inaccessible outside of its class (private) and when you want a member to be accessible only inside its class or deriving classes (protected).
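A small sketch of protected in action (Base and Derived are our own hypothetical classes): the member is hidden from outside code but visible to subclasses.

```typescript
class Base {
    protected secret: number = 42;
}

class Derived extends Base {
    reveal(): number {
        return this.secret; // legal: protected members are visible in subclasses
    }
}

const d = new Derived();
console.log(d.reveal()); // 42
// console.log(d.secret); // compile-time error: 'secret' is protected
```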
Inheritance
As already mentioned earlier, Typescript supports the most used object-oriented patterns including inheritance. So using Typescript you will be able to define a class and then define a subclass of it which will inherit the superclass base methods and members:
class Animal {
    move(steps: number = 0) {
        console.log(`Animal moved ${steps}m.`);
    }
}

class Cat extends Animal {
    meow() {
        console.log('Meow');
    }
}

const cat = new Cat();
cat.meow();
cat.move(1000);
Interfaces
Another common object-oriented technique you might want to use is creating an interface. This is possible in TypeScript where, since the main focus is type-checking, we can use interfaces to give names to these types. So basically, when using them, we will be creating a group of related methods and members that describe a particular object:
interface Box {
    width: number;
    height: number;
}
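A quick sketch of how such an interface is typically consumed: any object with the matching shape satisfies it (the area helper is our own illustration, not from the article).

```typescript
interface Box {
    width: number;
    height: number;
}

// Any value with numeric width and height fields is a valid Box.
function area(box: Box): number {
    return box.width * box.height;
}

console.log(area({ width: 3, height: 4 })); // 12
```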
Conclusion
The idea behind this article was to give you a quick primer on what you can do with TypeScript, how it can help us solve common problems, and how its approach differs from plain JavaScript.
Hope it gave you a new vision on something useful!
If you are interested in going into more detail with TypeScript, you can check out these resources: here and here
Thanks for reading,
Piero Borrelli.