How do I read arguments from the shell that some other program is streaming through a bash pipe? Secondly, is print i a proper way to stream data to the environment?
My search led me to the subprocess module, but then: The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
However, I do not want one program to spawn the other; they just need to write and read from a fifo.
This will read one line at a time from stdin and print it (you'd obviously do something else with the line, such as split() it into tokens or parse it some other way):
import sys
for line in sys.stdin:
    print line
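If you want to see the tokenizing step mentioned above in action, here is a small self-contained sketch; it uses io.StringIO as a stand-in for sys.stdin so it runs without an actual pipe (a real script would iterate over sys.stdin directly):

```python
import io


def tokenize_stream(stream):
    """Consume a stream one line at a time, splitting each
    line into tokens as suggested above."""
    result = []
    for line in stream:
        result.append(line.split())
    return result


if __name__ == "__main__":
    # Stand-in for sys.stdin so the sketch runs anywhere.
    fake_stdin = io.StringIO("alpha beta\ngamma delta\n")
    print(tokenize_stream(fake_stdin))  # prints [['alpha', 'beta'], ['gamma', 'delta']]
```

When the producer side writes with print, remember that output may be block-buffered when connected to a pipe, so long-running producers should flush after each line.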
Hello, thank you very much for helping me to review this case.
I usually use custom transformers and SQLExecutor to go to the database and query additional attributes of a feature. Sometimes my database reading fails due to some special condition or the business validation indicates that I must generate an inconsistency. In programming languages, there is a stack trace that can be shown to the user where that error was generated.
In FME how can I get the name of the custom transformer where the validation error was generated to show it as part of the message to the user?
Unfortunately it seems that FME has very limited functionality for introspection, and programmatically retrieving e.g. transformer names doesn't seem possible.
I suspect that the simplest solution for you is to introduce e.g. an AttributeCreator in each custom transformer and manually set the transformer name there, which you can then use later on if there's an error.
This is probably the best scenario if you need to access the transformer name later, however it will return the transformer name, not the transformer instance name. If you have the same custom transformer two or more times in the workspace, they will all return the same value.
I filed this with development as FMEENGINE-63743, so we'll see what happens.
Presumably these are rejected features. If so I think there are a couple of log hints to help:
myTransformer1_SQLExecutor_<REJECTED> Feature Counter 0 2147745794 (TeeFactory): Cloned 1 input feature(s) into 0 output feature(s)
If it says "Cloned x input feature(s)" and x is one or more, then you know that transformer failed. And you know the name of the transformer because it appears in the stats (myTransformer1_SQLExecutor).
Also, it would write a temporary file when a failure occurs, and that is logged too:
Stored 1 feature(s) to FME feature store file `C:\Users\imark\AppData\Local\Temp\wb-cache-IntExample-CFDRXe\myTransformer1_SQLExecutor_0 2 fo 2 _lt_REJECTED_gt_ 0 3705cc8a59ac24bbfc5c77f9122c2e82e74253c5.ffsupdating'
So that's another helpful hint.
I don't know if you're trying to get this information programmatically within the workspace, but if you can deal with the log after the fact, then this method works I think.
Still, I think we could do better, and if you want to post an idea about logging on the ideas pages, I happen to know we're doing a major review of logging right now, so it might get some notice.
I for one would like to get a pythonic way of determining the custom transformer instance name. In addition to ediaze's scenario, it would allow for using parameters in the python code, as the parameter names are prefixed by the custom transformer instance name.
You could also put a Logger transformer inside each custom transformer. Set it to log just a message only, where the message includes the custom transformer name. Then you'll get a message in the log to tell you which transformer is running.
I'm not 100% sure I know your scenario, so this is a "try this, it might work" rather than a "this is definitely the way to do it" type of answer!
There is no official way of getting the transformer name, but try this hack in a PythonCaller:
import fme

def get_custom_transf_name(feature):
    for k, v in fme.macroValues.items():
        if k.endswith('_XFORMER_NAME'):
            feature.setAttribute('XFORMER_NAME', v)
It will return the transformer name in the attribute "XFORMER_NAME".
Wouldn't that just give the last custom transformer name, in a workspace that has multiple custom transformers?
Yes I think it would if the code was to run in the main workspace, so the code above should definitely run in a PythonCaller inside each custom transformer. I probably should've made that clear, thanks for pointing it out.
No, even with the PythonCaller inside a custom transformer.
Hello @david_r this is my case:
I try to group the functionality by custom transformers and in one of them an error was generated when accessing the database, I solved the problem, but I was wondering if it is possible to know the source error by some attribute:
According to your code snippet, I tried to get the name DriverDbReader, but what it recovered was CsvReader. So I created an array to see what was happening, and I realized that more than one key ends in _XFORMER_NAME:
Result for "_transformer_name": DriverValidator_CardIdDuplicateFilter, Start, DriverValidator_DriverIdDuplicateFilter, DriverValidator_DriverDbReader, DriverValidator, CsvReader_CsvFileReader, DriverValidator_CardIdEntityFilter,CsvReader
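A quick way to see why several names come back is to collect every matching key instead of overwriting a single attribute. This is only a sketch: a plain dict stands in for fme.macroValues, and the key names are modeled on the output above:

```python
def custom_transformer_names(macro_values):
    """Return every value whose key ends with the transformer-name
    suffix. With several custom transformers in a workspace, more
    than one key matches, which explains the list of names above."""
    return sorted(v for k, v in macro_values.items()
                  if k.endswith('_XFORMER_NAME'))


# Stand-in for fme.macroValues (key names modeled on the question).
macros = {
    'DriverValidator_XFORMER_NAME': 'DriverValidator',
    'CsvReader_XFORMER_NAME': 'CsvReader',
    'WORKSPACE_TITLE': 'IntExample',
}
print(custom_transformer_names(macros))  # prints ['CsvReader', 'DriverValidator']
```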
* Foreach directive used for moving through arrays, 24 * or objects that provide an Iterator. 25 * 26 * @author <a href="mailto:jvanzyl@apache.org">Jason van Zyl</a> 27 * @author <a href="mailto:geirm@optonline.net">Geir Magnusson Jr.</a> 28 * @author Daniel Rall 29 * @version $Id: Foreach.java 945927 2010-05-18 22:21:41Z nbubna $ 30 */ 31 public class Foreach extends Directive 32 { 33 /** 34 * Return name of this directive. 35 * @return The name of this directive. 36 */ 37 public String getName() 38 { 39 return "foreach"; 40 } 41 42 /** 43 * Return type of this directive. 44 * @return The type of this directive. 45 */ 46 public int getType() 47 { 48 return BLOCK; 49 } 50 51 52 } | http://pmd.sourceforge.net/pmd-5.1.2/xref/net/sourceforge/pmd/lang/vm/directive/Foreach.html | CC-MAIN-2018-09 | refinedweb | 116 | 61.43 |
open 1.2.0
Open.dart #
Open whatever you want in Dart, such as URLs, files or executables, regardless of the platform you use.
Documentation #
Development #
License #
Open.dart is distributed under the MIT License.
Changelog #
Version 1.2.0 #
Version 1.1.0 #
- Added a local instance of the xdg-open script, version 1.1.3.
Version 1.0.0 #
- Initial release.
import 'package:open/open.dart';

/// Demonstrates the usage of the `open()` function.
Future<void> main() async {
  // Open a URL in the default browser.
  await open('');

  // Open a URL in the given browser.
  await open('', application: 'firefox');

  // Open a URL in the given browser, using the specified arguments.
  await open('', application: 'chrome', arguments: ['--incognito']);

  // Open an image in the default viewer
  // and wait for the opened application to quit.
  await open('funny.gif', wait: true);
}
Use this package as an executable
1. Install it
You can install the package from the command line:
$ pub global activate open
2. Use it
The package has the following executables:
$ open_cli
Use this package as a library
1. Depend on it
Add this to your package's pubspec.yaml file:
dependencies:
  open: ^1.2.0

Now in your Dart code, you can use:
import 'package:open/open.dart';
We analyzed this package on Jan 16, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
- Dart: 2.7.0
- pana: 0.13.4
Health suggestions
Fix lib/src/io/open.dart. (-1.49 points)
Analysis of lib/src/io/open.dart reported 3 hints:
line 22 col 7: DO use curly braces for all flow control structures.
line 30 col 7: DO use curly braces for all flow control structures.
line 55 col 13: DO use curly braces for all flow control structures.
Fix bin/open_cli.dart. (-0.50 points)
Analysis of bin/open_cli.dart reported 1 hint:
line 18 col 7: DO use curly braces for all flow control structures.
Fix lib/src/io/wsl.dart. (-0.50 points)
Analysis of lib/src/io/wsl.dart reported 1 hint:
line 7 col 5: DO use curly braces for all flow control structures. | https://pub.dev/packages/open | CC-MAIN-2020-05 | refinedweb | 343 | 69.68 |
Welcome to the 112th edition of The Java(tm) Specialists' Newsletter. Helene flew to Durban for a few days so I had the pleasure of being "mom" to our kids Maxi (7) and Connie (3). We had a great time together. Besides being "mom's taxi", we also took the opportunity to drive to Hermanus, a small ex-whaling town about an hour drive, where we saw 10 Southern Right Whales. My camera does not zoom well, and I preferred enjoying the view to taking pictures, but here is a lucky shot of a whale blowing his spout. We saw some tails stuck in the air and a bit of breaching, but I did not capture any of those on camera. Do send me an email if you intend visiting Cape Town, and time permitting, I will show you around Hermanus :)
Some feedback re my plan to get Java programmers involved in the fight against HIV/AIDS, as mentioned last month. After interviewing several people who are working amongst those affected by HIV/AIDS, I have sadly concluded that there is little to nothing we can do to help.
Learning.JavaSpecialists.EU: Please visit our new self-study course catalog to see how you can upskill your Java knowledge.
In my newsletter, I have mentioned my seminar on Design Patterns several times. Over the years, hundreds of Java and C++ programmers have attended and have enhanced their way of thinking regarding OO.
There are several techniques that I use to make sure that people learn:
I still believe that my course is the quickest way in which a programmer can learn to think Object Orientated. My course is cheap (I was told this by two of my most recent customers). The best price works out at only EUR 185 per person per day of training (conditions apply). However, this is still out of reach for quite a large number of software companies.
So, what should you do, if your company cannot afford EUR 185 per day for you to be trained in Design Patterns? There is a very good alternative, the book Head First Design Patterns, which will cost you about USD 30 from Amazon.com. A lot cheaper than any seminar! And if you share the book, 20 people could learn from one book, which works out to USD 1.50 per developer.
"Head First" is a new series of books by O'Reilly. When I opened the predecessor book "Head First Java", I knew O'Reilly had produced a winner. They were using the techniques that worked so effectively in my own courses:
Since the original Design Patterns - Erich Gamma et al (GoF Book), I have been waiting for a book like this, that agrees with my interpretation of GoF and contains well-implemented examples. On the front cover, there is the statement: "Learn why everything your friends know about Factory pattern is probably wrong". This is so true! Martin Fowler, in Refactoring, used the name Factory Method for a static method that constructs objects. When I asked him about that, he admitted that it was quite different to the GoF Factory Method, and wondered why none of his reviewers had picked that up!
My course goes into a lot more depth than the book does, especially during our exercise and discussion times, but for a book, their interpretation of GoF is spectacular. There are a few places where they missed some points, so if you read the book, add these points, and you will pretty much get my course for only USD 30!
java.util.concurrent.Callable and java.util.concurrent.Future to see how a command pattern could return results to the client.
java.awt.Toolkit.
java.util.concurrent.Executors.
What I found interesting was the book's choice of patterns and the examples that they showed were almost identical to mine. We could not have copied from one another, since I wrote my examples long before the book, and my course material is not publicly available. Reading this book is like hearing myself talk, which might explain my enthusiasm for it.
Here are the patterns that I cover: Adapter, Command, Composite, Decorator, Factory Method, Façade, Flyweight, Iterator, Proxy, Singleton, State and Strategy.
And here are the patterns covered in the book: Abstract Factory, Adapter, Command, Composite, Decorator, Factory Method, Façade, Iterator, Observer, Proxy, Singleton, State, Strategy and Template Method.
Almost identical!
I left out Observer and Template Method because they were too simple and the Abstract Factory because I have not used that myself. And I included the Flyweight because it is a performance tuning pattern. In addition, I also added a few J2EE patterns (not listed above).
Another "nice touch" of the book was to teach OO Principles, one at a time:
This book is probably not advanced enough if you are already actively working with patterns. If that is you, I would rather recommend Design Patterns Explained, which in my opinion goes into more depth for the patterns.
The idea of the Null Iterator comes from the GoF book, and is similar to the Null Object Pattern by Bobby Woolf. I don't know if that link is the best, if in doubt, buy the PLoPD3 book. That book is a bit dated though, beware.
Here is an implementation of the NullIterator, according to Kabutz ;-) It is more correct than the one in the book, in that mine throws the exceptions you would expect from a normal empty list. Note that I am suppressing the unchecked warnings using the new annotation @SuppressWarnings. This annotation does not seem to be implemented in the Sun javac compiler, but both IntelliJ IDEA 5.0 and Eclipse 3.1 support it.
import java.util.*;

public class NullIterator implements Iterator {
    private static final Iterator iterator = new NullIterator();

    // since we cannot get any objects with next(),
    // we can safely cast it to your type, so we can
    // suppress the warnings
    @SuppressWarnings("unchecked")
    public static <T> Iterator<T> getIterator() {
        return iterator;
    }

    private NullIterator() {} // always empty!

    public boolean hasNext() { return false; }

    // Empty collections would throw NoSuchElementException
    public Object next() {
        throw new NoSuchElementException("Null Iterator");
    }

    // Should only be called after a next() has been called.
    // Since next() can never be called, the correct exception
    // to throw is the IllegalStateException.
    public void remove() {
        throw new IllegalStateException("Null Iterator");
    }
}
So, if a method needs to indicate that it contains no elements, such as if you want to iterate over the leaves in a Composite pattern, you can simply return the NullIterator instance. This makes the client code easier, since you do not need to test for null anymore.
The Null Object Pattern is a special case of the Strategy pattern and can be used in most places where you are testing for whether a field or method return value are null.
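As a side note, the same idea translates directly to other languages. Here is a rough Python sketch (not from the newsletter), where StopIteration plays the role of NoSuchElementException:

```python
class NullIterator:
    """An always-empty iterator shared as a single instance, so
    callers can iterate without first testing for None."""
    _instance = None

    def __new__(cls):
        # Reuse one shared instance, like the Java static field.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __iter__(self):
        return self

    def __next__(self):
        # Matches an empty collection's behaviour: nothing to return.
        raise StopIteration


class Leaf:
    """A Composite leaf: it has no children, so it hands back the
    shared NullIterator instead of None."""
    def children(self):
        return NullIterator()


print(sum(1 for _ in Leaf().children()))  # prints 0
print(NullIterator() is NullIterator())   # prints True
```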
That's all for this newsletter, I look forward to your comments as always :)
Kind regards
Heinz
P.S. Yes, I do get a tiny commission if you buy the book through the Amazon link that is included in this email. In my experience with other book reviews, only about 5 readers end up buying the book through the link, so it costs me far more to send you this email than I would ever make through referral fees. I am not reviewing that book for the referral fees, but because I feel it is a great book to... | https://www.javaspecialists.eu/archive/Issue112.html | CC-MAIN-2018-13 | refinedweb | 1,235 | 58.92 |
Cocos2D Tutorial for iOS: How To Make A Space Shooter iPhone Game
If you've always wanted to make a space shooter game for the iPhone, this Cocos2D tutorial is for you! You'll learn how to make a complete game from scratch, with no prior experience necessary!
If you’re completely new to programming in general you might want to check out this slightly easier introduction first.
This Cocos2D tutorial is also good for intermediate developers, because it covers some neat effects such as parallax scrolling, pre-allocating CCNodes, accelerometer movement, and particle systems.
Without further ado, let’s get blasting!
Install Cocos2D
To make this game, you’ll need to be a member of the iOS developer program (so you can run this game on your iPhone) and have Xcode and the Cocos2D framework installed.
If you already have Cocos2D installed, feel free to skip to the next section. Otherwise, here are some instructions for how to install Cocos2D on your Mac:
- Download Cocos2D from this page. Be sure to pick the very latest version – at the time of this writing it is 1.0.0-rc2 in the Unstable category. Don’t worry that it says Unstable – it actually works quite well! :]
- Double click the downloaded file to unzip it, and (optionally) store it somewhere safe.
- Open a Terminal (Applications\Utilities\Terminal), and use the cd command to navigate to your cocos2d folder. Then run the ./install-templates.sh command to install your Xcode templates, like the following:
$ cd Downloads $ cd cocos2d-iphone-1.0.0-rc2 $ ./install-templates.sh -f -u
If all works well, you should see several lines saying “Installing xxx template”.
Then restart Xcode, and congrats – you’ve installed Cocos2D!
Hello, Cocos2D!
Let’s get started by creating a “Hello World” Cocos2D project.
Start up Xcode, go to File\New\New Project, choose the iOS\cocos2d template, and click Next. Name the project SpaceGame, click Next, choose a folder to save your project in, and click Create.
Compile and run your project, and you should see “Hello World” appear on the screen:
Adding Resources
To make this iPhone game, you are going to need some art and sound effects with a space theme.
But don’t whip out MS Paint quite yet – luckily my lovely wife has made some cool space game resources you can use in this project!
So go ahead and download the space game resources and unzip them to your hard drive.
Once you’ve unzipped the resources, drag the Backgrounds, Fonts, Particles, Sounds, and Spritesheets folders into the Resources group in your Xcode project. (Basically everything except the Classes folder).:
- Backgrounds: Some images that you’ll use to create a side-scrolling background for the game. Includes images of a galaxy, sunrise, and spatial anomolies (that will move quite slowly), and an image of some space dust (that will go in front and move a little bit faster).
- Fonts: A bitmap font created with Glyph Designer that we’ll use to display some text in the game later on.
- Particles: Some special effects we’ll be using to create the effect of some stars flying by, created with Particle Designer.
- Sounds: Some space-themed background music and sound effects, created with Garage Band and cxfr.
- Spritesheets: Contains an image in the pvr.ccz format containing several smaller images that we’ll be using in the game, including the asteroid, space ship, etc. This was created with Texture Packer – you’ll need this if you want to look at the pvr.ccz.
Don’t worry if you don’t have any of these tools installed – you don’t need them for this tutorial, since you can use these premade files. You can always try out these tools later!
Here’s what Sprites.pvr.ccz looks like by the way:
In case you’re wondering why we’re combining all those images into a large image like this, it’s because it helps conserve memory and improve performance while making the game.
It’s a good practice to get into, so we’re starting you out with it early! :]
Adding a Space Ship
Let’s start things out nice and simple by adding the space ship to the screen!
Start by opening HelloWorldLayer.h, and add two new instance variables inside the @interface:
CCSpriteBatchNode *_batchNode;
CCSprite *_ship;
The first variable (_batchNode) is necessary because we’re storing all of our images inside a single image, and using this helps us batch up all the drawing work.
The second variable (_ship) represents the space ship on the screen.
Next move to HelloWorldLayer.m, and replace the init method with the following:
-(id) init {
    if( (self=[super init])) {
        _batchNode = [CCSpriteBatchNode batchNodeWithFile:@"Sprites.pvr.ccz"]; // 1
        [self addChild:_batchNode]; // 2
        [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"Sprites.plist"]; // 3
        _ship = [CCSprite spriteWithSpriteFrameName:@"SpaceFlier_sm_1.png"]; // 4
        CGSize winSize = [CCDirector sharedDirector].winSize; // 5
        _ship.position = ccp(winSize.width * 0.1, winSize.height * 0.5); // 6
        [_batchNode addChild:_ship z:1]; // 7
    }
    return self;
}
Let’s go over this step by step:
- Creates a CCSpriteBatchNode to batch up all of the drawing of objects from the same large image. Passes in the image name (Sprites.pvr.ccz).
- Adds the CCSpriteBatchNode to the layer so it will be drawn.
- Loads the Sprites.plist file, which contains information on where inside the large image each of the smaller images lies. This lets you easily retrieve the sub-images later with spriteWithSpriteFrameName.
- Creates a new Sprite using the SpaceFlier_sm_1.png image, which is a sub-image within the large image.
- Gets the size of the screen from the CCDirectory – we’ll need this in a second.
- Sets the position of the ship so that it’s 10% along the width of the screen, and 50% along the height. Note that by default the position of the ship is the center of the ship.
- Adds the ship to the batchNode so that the drawing of the sprite is batched up.
Compile and run your project, and you should see your ship image appear on the screen!
Adding Parallax Scrolling
We have a cool space ship on the screen, but it looks like it’s just sitting there! Let’s fix this by adding some cool parallax scrolling to the scene.
But wait a minute – what in the heck is parallax scrolling? In short, it's a technique where background layers scroll more slowly than layers in the foreground, creating an illusion of depth.
It’s really easy to use parallax scrolling in Cocos2D. You just have to do three steps:
- Create a CCParallaxNode, and add it to the layer.
- Create items you wish to scroll, and add them to the CCParallaxNode with addChild:parallaxRatio:positionOffset.
- Move the CCParallaxNode to scroll the background. It will scroll the children of the CCParallaxNode more quickly or slowly based on what you set the parallaxRatio to.
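The arithmetic behind those steps is simple enough to sketch outside of Cocos2D (this is an illustration of the idea, not Cocos2D's actual implementation):

```python
def child_screen_x(parent_x, ratio, offset_x):
    """A parallax child's on-screen position: its fixed offset plus
    the parent node's position scaled by the parallax ratio
    (smaller ratio = slower apparent movement = farther away)."""
    return offset_x + parent_x * ratio


# Scroll the parallax node 1000 points to the left:
print(child_screen_x(-1000, 0.1, 0))   # dust moves 100 points  -> -100.0
print(child_screen_x(-1000, 0.05, 0))  # background moves 50    -> -50.0
```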
Let’s see how this works. Start by opening HelloWorldLayer.h, and add the following inside the @interface:
CCParallaxNode *_backgroundNode;
CCSprite *_spacedust1;
CCSprite *_spacedust2;
CCSprite *_planetsunrise;
CCSprite *_galaxy;
CCSprite *_spacialanomaly;
CCSprite *_spacialanomaly2;
Then switch to HelloWorldLayer.m, and add the following to the bottom of your init method:
// 1) Create the CCParallaxNode
_backgroundNode = [CCParallaxNode node];
[self addChild:_backgroundNode z:-1];

// 2) Create the sprites we'll add to the CCParallaxNode
_spacedust1 = [CCSprite spriteWithFile:@"bg_front_spacedust.png"];
_spacedust2 = [CCSprite spriteWithFile:@"bg_front_spacedust.png"];
_planetsunrise = [CCSprite spriteWithFile:@"bg_planetsunrise.png"];
_galaxy = [CCSprite spriteWithFile:@"bg_galaxy.png"];
_spacialanomaly = [CCSprite spriteWithFile:@"bg_spacialanomaly.png"];
_spacialanomaly2 = [CCSprite spriteWithFile:@"bg_spacialanomaly2.png"];

// 3) Determine relative movement speeds for space dust and background
CGPoint dustSpeed = ccp(0.1, 0.1);
CGPoint bgSpeed = ccp(0.05, 0.05);

// 4) Add children to CCParallaxNode
[_backgroundNode addChild:_spacedust1 z:0 parallaxRatio:dustSpeed positionOffset:ccp(0,winSize.height/2)];
[_backgroundNode addChild:_spacedust2 z:0 parallaxRatio:dustSpeed positionOffset:ccp(_spacedust1.contentSize.width,winSize.height/2)];
[_backgroundNode addChild:_galaxy z:-1 parallaxRatio:bgSpeed positionOffset:ccp(0,winSize.height * 0.7)];
[_backgroundNode addChild:_planetsunrise z:-1 parallaxRatio:bgSpeed positionOffset:ccp(600,winSize.height * 0)];
[_backgroundNode addChild:_spacialanomaly z:-1 parallaxRatio:bgSpeed positionOffset:ccp(900,winSize.height * 0.3)];
[_backgroundNode addChild:_spacialanomaly2 z:-1 parallaxRatio:bgSpeed positionOffset:ccp(1500,winSize.height * 0.9)];
Compile and run your project, and you should see the start of a space scene:
However this isn’t very interesting yet, since nothing is moving!
To move the space dust and backgrounds, all you need to do is move the parallax node itself. For every Y points we move the parallax node, the dust will move 0.1Y points, and the backgrounds will move 0.05Y points.
To move the parallax node, you’ll simply update the position every frame according to a set velocity. Try this out for yourself by making the following changes to HelloWorldLayer.m:
// Add to end of init method
[self scheduleUpdate];

// Add new update method
- (void)update:(ccTime)dt {
    CGPoint backgroundScrollVel = ccp(-1000, 0);
    _backgroundNode.position = ccpAdd(_backgroundNode.position, ccpMult(backgroundScrollVel, dt));
}
Compile and run your project, and things should start to scroll pretty neatly with parallax scrolling!
However, after a few seconds goes by, you’ll notice a major problem: we run out of things to scroll through, and you end up with a blank screen! That would be pretty boring, so let’s see what we can do about this.
Continuous Scrolling
We want the background to keep scrolling endlessly. The strategy we’re going to take to do this is to simply move the background to the right once it has moved offscreen to the left.
One minor problem is that CCParallaxNode currently doesn’t have any way to modify the offset of a child node once it’s added. You can’t simply update the position of the child node itself, because the CCParallaxNode overwrites that each update.
However, I’ve created a category on CCParallaxNode that you can use to solve this problem, which you can find in the resources for this project in the Classes folder. Drag CCParallaxNode-Extras.h and CCParallaxNode-Extras.m into your project, make sure “Copy items into destination group’s folder” is checked, and click Finish.
Then make the following changes to HelloWorldLayer.m to implement continuous scrolling:
// Add to top of file
#import "CCParallaxNode-Extras.h"

// Add at end of your update method
NSArray *spaceDusts = [NSArray arrayWithObjects:_spacedust1, _spacedust2, nil];
for (CCSprite *spaceDust in spaceDusts) {
    if ([_backgroundNode convertToWorldSpace:spaceDust.position].x < -spaceDust.contentSize.width) {
        [_backgroundNode incrementOffset:ccp(2*spaceDust.contentSize.width,0) forChild:spaceDust];
    }
}

NSArray *backgrounds = [NSArray arrayWithObjects:_planetsunrise, _galaxy, _spacialanomaly, _spacialanomaly2, nil];
for (CCSprite *background in backgrounds) {
    if ([_backgroundNode convertToWorldSpace:background.position].x < -background.contentSize.width) {
        [_backgroundNode incrementOffset:ccp(2000,0) forChild:background];
    }
}
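In plain terms, the wrap-around check for the space dust boils down to this (a language-neutral Python sketch; the backgrounds use a fixed 2000-point jump instead of a doubled width):

```python
def wrapped_offset(world_x, sprite_width, offset_x):
    """Once a strip has moved fully off the left edge, bump its
    offset right by two widths so it re-enters from the right
    without any visible seam."""
    if world_x < -sprite_width:
        return offset_x + 2 * sprite_width
    return offset_x


print(wrapped_offset(-101, 100, 0))  # fully offscreen -> prints 200
print(wrapped_offset(-50, 100, 0))   # still visible   -> prints 0
```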
Compile and run your project, and now the background should scroll continuously through a cool space scene!
Adding Stars
No space game would be complete without some stars flying by!
We could create another image with stars on it and add that to the parallax node like we have with the other decorations, but stars are a perfect example of when you'd want to use a particle system.
Particle systems allow you to efficiently create a large number of small objects using the same sprite. Cocos2D gives you a lot of control over configuring particle systems, and Particle Designer is a great way to visually set these up.
But for this tutorial, I've already set up some particle effects for some stars racing from right to left across the screen that we can use. Simply add the following code to the bottom of the init method to set these up:
NSArray *starsArray = [NSArray arrayWithObjects:@"Stars1.plist", @"Stars2.plist", @"Stars3.plist", nil];
for(NSString *stars in starsArray) {
    CCParticleSystemQuad *starsEffect = [CCParticleSystemQuad particleWithFile:stars];
    [self addChild:starsEffect z:1];
}
By adding the particle systems to the layer, they automatically start running. Compile and run to see for yourself, and now you should see some stars flying across the scene!
Moving the Ship with the Accelerometer
So far so good, except this wouldn't be much of a game unless we can move our space ship!
We're going to take the approach of moving the space ship via the accelerometer. As the user tilts the device along the X-axis, the ship will move up and down.
This is actually pretty easy to implement, so let's jump right into it. First, add an instance variable inside the @interface in HelloWorldLayer.h to keep track of the points per second to move the ship along the Y-axis:
float _shipPointsPerSecY;
Then, make the following changes to HelloWorldLayer.m:
// 1) Add to bottom of init
self.isAccelerometerEnabled = YES;

// 2) Add new method
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {

    #define kFilteringFactor 0.1
    #define kRestAccelX -0.6
    #define kShipMaxPointsPerSec (winSize.height*0.5)
    #define kMaxDiffX 0.2

    // Low-pass filter the raw readings; static so the rolling
    // averages persist between callbacks.
    static UIAccelerationValue rollingX = 0, rollingY = 0, rollingZ = 0;
    rollingX = (acceleration.x * kFilteringFactor) + (rollingX * (1.0 - kFilteringFactor));
    rollingY = (acceleration.y * kFilteringFactor) + (rollingY * (1.0 - kFilteringFactor));
    rollingZ = (acceleration.z * kFilteringFactor) + (rollingZ * (1.0 - kFilteringFactor));

    float accelX = acceleration.x - rollingX;
    float accelY = acceleration.y - rollingY;
    float accelZ = acceleration.z - rollingZ;

    CGSize winSize = [CCDirector sharedDirector].winSize;
    float accelDiff = accelX - kRestAccelX;
    float accelFraction = accelDiff / kMaxDiffX;
    float pointsPerSec = kShipMaxPointsPerSec * accelFraction;

    _shipPointsPerSecY = pointsPerSec;
}

// 3) Add to bottom of update
CGSize winSize = [CCDirector sharedDirector].winSize;
float maxY = winSize.height - _ship.contentSize.height/2;
float minY = _ship.contentSize.height/2;
float newY = _ship.position.y + (_shipPointsPerSecY * dt);
newY = MIN(MAX(newY, minY), maxY);
_ship.position = ccp(_ship.position.x, newY);
Let's go over this bit-by-bit:
- Adding this line sets the layer up to receive the accelerometer:didAccelerate: callback.
- This method first smooths the raw accelerometer readings with a simple low-pass filter (the rollingX/Y/Z values), which keeps hand jitter from twitching the ship. Anyway, after we run the filter, we test to see how much it's tilted. A rotation of -0.6 along the x-axis is considered "baseline", and the closer it gets to 0.2 in either direction, the faster the ship is set to move. These values were all gotten via experimentation and what "felt right"!
- Sets the position of the ship based on the points per second to move along the Y axis computed earlier, and the delta time since last update.
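If the filter part is unfamiliar, here is the same low-pass idea reduced to a few lines of Python (an illustrative sketch, not the tutorial's code):

```python
def low_pass(samples, factor=0.1):
    """Blend each new sample with a running value; sudden spikes
    are damped while slow, sustained tilts pass through."""
    rolling = 0.0
    smoothed = []
    for s in samples:
        rolling = s * factor + rolling * (1.0 - factor)
        smoothed.append(rolling)
    return smoothed


# A sudden jump to 1.0 is approached gradually, not instantly:
print([round(x, 3) for x in low_pass([1.0, 1.0, 1.0])])  # prints [0.1, 0.19, 0.271]
```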
Compile and run your project (on your iPhone, the accelerometer does not work on the Simulator), and now you should be able to move the spaceship up and down by tilting your iPhone!
Adding Asteroids
The game is looking good so far, but where's the danger and excitement?! Let's spice things up by adding some wild asteroids to the scene!
The approach we're going to take is every so often, we'll create an asteroid offscreen to the right of the screen. Then we'll run a Cocos2D action to move it to the left of the screen.
We could simply create a new asteroid every time we needed to spawn, but allocating memory is a slow operation and it's best when you can avoid it. So we'll pre-allocate memory for a bunch of asteroids, and simply grab the next available asteroid when we need it.
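The pooling idea is independent of Cocos2D; stripped to its essentials it looks like this (a Python sketch, with dicts standing in for CCSprite objects):

```python
class SpritePool:
    """Pre-allocate every sprite up front, then hand them out
    round-robin; nothing is allocated during gameplay."""

    def __init__(self, size):
        self.sprites = [{"id": i, "visible": False} for i in range(size)]
        self.next_index = 0

    def take(self):
        sprite = self.sprites[self.next_index]
        # Wrap around to 0 after the last slot, reusing old sprites.
        self.next_index = (self.next_index + 1) % len(self.sprites)
        sprite["visible"] = True  # active until its action finishes
        return sprite


pool = SpritePool(3)
print([pool.take()["id"] for _ in range(5)])  # prints [0, 1, 2, 0, 1]
```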
OK, let's see what this looks like. Start by adding a few new instance variables inside the @interface in HelloWorldLayer.h:
CCArray *_asteroids;
int _nextAsteroid;
double _nextAsteroidSpawn;
Then make the following changes to HelloWorldLayer.m:
// Add to top of file
#define kNumAsteroids 15

// Add to bottom of init
_asteroids = [[CCArray alloc] initWithCapacity:kNumAsteroids];
for(int i = 0; i < kNumAsteroids; ++i) {
    CCSprite *asteroid = [CCSprite spriteWithSpriteFrameName:@"asteroid.png"];
    asteroid.visible = NO;
    [_batchNode addChild:asteroid];
    [_asteroids addObject:asteroid];
}

// Add new method, above update loop
- (float)randomValueBetween:(float)low andValue:(float)high {
    return (((float) arc4random() / 0xFFFFFFFFu) * (high - low)) + low;
}

// Add to bottom of update loop
double curTime = CACurrentMediaTime();
if (curTime > _nextAsteroidSpawn) {

    float randSecs = [self randomValueBetween:0.20 andValue:1.0];
    _nextAsteroidSpawn = randSecs + curTime;

    float randY = [self randomValueBetween:0.0 andValue:winSize.height];
    float randDuration = [self randomValueBetween:2.0 andValue:10.0];

    CCSprite *asteroid = [_asteroids objectAtIndex:_nextAsteroid];
    _nextAsteroid++;
    if (_nextAsteroid >= _asteroids.count) _nextAsteroid = 0;

    [asteroid stopAllActions];
    asteroid.position = ccp(winSize.width+asteroid.contentSize.width/2, randY);
    asteroid.visible = YES;

    [asteroid runAction:[CCSequence actions:
        [CCMoveBy actionWithDuration:randDuration position:ccp(-winSize.width-asteroid.contentSize.width, 0)],
        [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)],
        nil]];
}

// Add new method
- (void)setInvisible:(CCNode *)node {
    node.visible = NO;
}
Some things to point out about the above code:
- CCArray is similar to NSArray, but optimized for speed. So it's good to use in Cocos2D when possible.
- Notice that we add all 15 asteroids to the batch node as soon as the game starts, but set them all to invisible. If they're invisible we treat them as inactive.
- We use an instance variable (_nextAsteroidSpawn) to tell us the time to spawn an asteroid next. We always check this in the update loop.
- If you're new to Cocos2D actions, these are easy ways to get sprites to do things over time, such as move, scale, rotate, etc. Here we perform a sequence of two actions: move to the left a good bit, then call a method that will set the asteroid to invisible again.
Compile and run your code, and now you have some asteroids flying across the screen!
Shooting Lasers
I don't know about you, but the first thing I think of when I see asteroids is MUST SHOOT THEM!
So let's take care of this urge by adding the ability to fire lasers! This code will be similar to how we added asteroids, because we'll create an array of reusable laser beams and move them across the screen with actions.
The main difference is we'll be using touch handling to detect when to shoot them.
Start by adding a few new instance variables inside the @interface in HelloWorldLayer.h:
CCArray *_shipLasers; int _nextShipLaser;
Then make the following changes to HelloWorldLayer.m:
// Add to top of file #define kNumLasers 5 // Add to bottom of init _shipLasers = [[CCArray alloc] initWithCapacity:kNumLasers]; for(int i = 0; i < kNumLasers; ++i) { CCSprite *shipLaser = [CCSprite spriteWithSpriteFrameName:@"laserbeam_blue.png"]; shipLaser.visible = NO; [_batchNode addChild:shipLaser]; [_shipLasers addObject:shipLaser]; } self.isTouchEnabled = YES; // Add new method - (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { CGSize winSize = [CCDirector sharedDirector].winSize; CCSprite *shipLaser = [_shipLasers objectAtIndex:_nextShipLaser]; _nextShipLaser++; if (_nextShipLaser >= _shipLasers.count) _nextShipLaser = 0; shipLaser.position = ccpAdd(_ship.position, ccp(shipLaser.contentSize.width/2, 0)); shipLaser.visible = YES; [shipLaser stopAllActions]; [shipLaser runAction:[CCSequence actions: [CCMoveBy actionWithDuration:0.5 position:ccp(winSize.width, 0)], [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)], nil]]; }
So this shows you how easy it is to receive touch events in Cocos2D - just set isTouchEnabled to YES, then you can implement ccTouchesBegan (and/or ccTouchesMoved, ccTouchesEnded, etc)!
Compile and run your code, and now you can go pew-pew!
Basic Collision Detection
So far things look like a game, but don't act like a game, because nothing blows up!
And since I'm naughty by nature (and not 'cause I hate ya), it's time to add some violence into this game!
Start by adding the following to the @interface in HelloWorldLayer.h:
int _lives;
Then add the following to the bottom of the update loop:
for (CCSprite *asteroid in _asteroids) { if (!asteroid.visible) continue; for (CCSprite *shipLaser in _shipLasers) { if (!shipLaser.visible) continue; if (CGRectIntersectsRect(shipLaser.boundingBox, asteroid.boundingBox)) { shipLaser.visible = NO; asteroid.visible = NO; continue; } } if (CGRectIntersectsRect(_ship.boundingBox, asteroid.boundingBox)) { asteroid.visible = NO; [_ship runAction:[CCBlink actionWithDuration:1.0 blinks:9]]; _lives--; } }
This is a very basic method of collision detection that just checks the bounding box of the sprites to see if they collide. Note that this counts transparency, so isn't a perfect way of checking for collisions, but it's good enough for a simple game like this.
For more info on a better way to do collision detection in Cocos2D, check out the How To Use Box2D For Just Collision Detection tutorial.
Compile and run your code, and now things should blow up!
Win/Lose Detection
We're almost done - just need to add a way for the player to win or lose!
We'll make it so the player wins if he survives for 30 seconds, and loses if he gets hit 3 times.
Start by making a few changes to HelloWorldLayer.h:
// Add before @interface typedef enum { kEndReasonWin, kEndReasonLose } EndReason; // Add inside @interface double _gameOverTime; bool _gameOver;
Then make the following changes to HelloWorldLayer.m:
// Add at end of init _lives = 3; double curTime = CACurrentMediaTime(); _gameOverTime = curTime + 30.0; // Add at end of update loop if (_lives <= 0) { [_ship stopAllActions]; _ship.visible = FALSE; [self endScene:kEndReasonLose]; } else if (curTime >= _gameOverTime) { [self endScene:kEndReasonWin]; } // Add new methods above, winSize.height * 0.4); CCMenu *menu = [CCMenu menuWithItems:restartItem, nil]; menu.position = CGPointZero; [self addChild:menu]; [restartItem runAction:[CCScaleTo actionWithDuration:0.5 scale:1.0]]; [label runAction:[CCScaleTo actionWithDuration:0.5 scale:1.0]]; }
Don't worry if you don't understand how the endScene method works - that's some code I've used for a bunch of games for a quick win/lose menu in the past.
The important part is just that you understand the rest of the code - every update loop, you just check to see if the player has won or lost, and call that method if so.
Compile and run the code, and see if you can lose!
Gratuitous Music and Sound Effects
As you know, I can't leave you guys without some awesome sound effects and music to add in!
You've already added the sounds to your project, so just add a bit of code to play them in HelloWorldLayer.m:
// Add to top of file #import "SimpleAudioEngine.h" // Add to bottom of init [[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"SpaceGame.caf" loop:YES]; [[SimpleAudioEngine sharedEngine] preloadEffect:@"explosion_large.caf"]; [[SimpleAudioEngine sharedEngine] preloadEffect:@"laser_ship.caf"]; // Add inside BOTH CGRectIntersectsRect tests [[SimpleAudioEngine sharedEngine] playEffect:@"explosion_large.caf"]; // Add inside ccTouchBegan [[SimpleAudioEngine sharedEngine] playEffect:@"laser_ship.caf"];
And that's it - congratulations, you've made a complete space game for the iPhone from scratch!
Want More?
I had a lot of fun working on this Cocos2D tutorial, so I actually spent a bit more time to polish this up and add some cool new features. Check out the following video to see what I mean:
I'm planning on making this (or something like it) into an actual iPhone game at some point, because I think the core concept is pretty fun.
But I was wondering if maybe you guys might also like me to turn this into a "Space Game Starter Kit" for sale on this site?
The starter kit would include the source code for the game in the video, plus a Cocos2D tutorial that shows how to make it from scratch, with the following features and instructions:
- Using Box2D for Collision Detection
- Using Physics Editor to define shapes
- Resizing Box2D shapes based on Cocos2D sprite scaling
- Detecting Box2D collision points
- Playing animations for the ship and efficient preloading
- Creating the alien ship and movement
- Having enemies with various amounts of health
- Shaking the screen
- Adding Retina-display graphics
- Making it a Universal App for iPhone + iPad
- Creating powerups
- Creating the cool zoom effect
But before I decide to do this, I want to make sure this is something you guys want. If not I won't mind, I have plenty of other stuff to work on lol!
So I'd really appreciate it if you could let me know if you'd like this (or not) with the following poll:
UPDATE: Thank you to everyone who voted! I removed the poll, because I got enough feedback, and by an overwhelming response you guys said you wanted it!
You guys asked for it, you got it! The Space Game Starter Kit is now complete, and available on the new raywenderlich.com store!
Where To Go From Here?
Here is the sample project with all of the code from the above tutorial.
And that concludes the How To Make A Space Shooter iPhone Game tutorial! If you want to learn more about Cocos2D, there are a bunch of other cocos2D tutorials on this site you can check out!
Thank you to @whereswayne from the forums for suggesting this idea! If you have an idea, feel free to suggest one yourself!
I know I still owe you guys a tutorial about OpenGL from a few weeks back, I'm still working on that LOL. It's quite a big subject!
Please join in the forum discussion below if you have any questions or comments! | https://www.raywenderlich.com/3611/cocos2d-tutorial-for-ios-how-to-make-a-space-shooter-iphone-game | CC-MAIN-2018-13 | refinedweb | 4,000 | 56.45 |
In iOS 14, Apple introduced a lot of new additions to the SwiftUI framework like LazyVGrid and LazyHGrid. But
matchedGeometryEffect is the new one that really caught my attention because it allows developers to create some amazing view animations with a few lines of code. The SwiftUI framework already makes it easy for developers to animate changes of a view. The
matchedGeometryEffect modifier just takes the implementation of view animations to the next level.
For any mobile apps, it is very common that you need to transit from one view to another. Creating a delightful transition between views will definitely improve the user experience. With
matchedGeometryEffect, all you need is describe the appearance of two views. The modifier will then compute the difference between those two views and automatically animates the size/position changes.
Feeling confused? No worries. You will understand what I mean after going through the demo apps.
Editor’s Note: This is an excerpt of our Mastering SwiftUI book. To dive deeper into SwiftUI animation and learn more about the SwiftUI framework, you can check out the book here.
Revisiting SwiftUI Animation
Before I walk you through the usage of
matchedGeometryEffect, let’s take a look at how we implement animation using SwiftUI. The figure below shows the beginning and final states of a view. When you tap the circle view on your left, it should grow bigger and move upward. Conversely, if you tap the one on the right, it returns to the original size and position.
The implementation of this tappable circle is very straightforward. Assuming you’ve created a new SwiftUI project, you can update the
ContentView struct like this:
We have a state variable
expand to keep track of the current state of the
Circle view. In both the
.frame and
.offset modifiers, we vary the frame size and offset when the state changes. If you run the app in the preview canvas, you should see the animation when you tap the circle.
Understanding the matchedGeometryEffect Modifier
So, what is
matchedGeometryEffect? How does it simplify the implementation of the view animation? Take a look at the first figure and the code of the circle animation again. We have to figure out the exact value change between the start and the final state. In the example, they are the frame size and the offset.
With the
matchedGeometryEffect modifier, you no longer need to figure out these differences. All you need to do is describe two views: one represents the start state and the other is for the final state.
matchedGeometryEffect will automatically interpolate the size and position between the views.
To create the same animation as shown earlier with
matchedGeometryEffect, you can first declare a namespace variable:
And then, rewrite the
body part like this:
In the code, we created two circle views: one is for the start state and the other is for the final state. When it’s first initialized, we have a
Circle view which is centered and has a width of 150 points. When the
expand state variable is changed from
false to
true, the app displays another
Circle view which is positioned 200 points from the center of the screen and has a width of 300 points.
For both
Circle views, we attach the
matchedGeometryEffect modifier and specify the same ID & namespace. By doing so, SwiftUI computes the size & position difference between these two views and interpolates the transition. Along with the
animation modifier, the framework will automatically animate the transition.
The ID and namespace are used for identifying which views are part of the same transition. This is why both
Circle views use the same ID and namespace.
This is how you use
matchedGeometryEffect to animate transition between two views. If you’ve used Magic Move in Keynote before, this new modifier is very much like Magic Move. To test the animation, I suggest you to run the app in an iPhone simulator. At the time of this writing, it seems there is a bug in Xcode 12 that you can’t test some of the animation properly in the preview canvas.
Morphing From a Circle to a Rounded Rectangle
Let’s try to implement another animated view transition. This time, we will morph a circle into a rounded rectangle. The circle is positioned to the top of the screen, while the rounded rectangle is close to the bottom part of the screen.
By using the same technique you learned earlier, you just need to prepare two views: the circle view and the rounded rectangle view. The
matchedGeometryEffect modifier will then handle the transformation. Now replace the
body variable of the
ContentView struct like this:
We still make use of the
expand state variable to toggle between the circle view and the rounded rectangle view. The code is very similar to the previous example, except that we use a
VStack and a
Spacer to position the view. You may wonder why we use
RoundedRectangle to create the circle. The main reason is that it will give you a more smooth transition.
For both views, we attach the
matchedGeometryEffect modifier and specify the same ID & namespace. That’s all we need to do. The modifier will compare the difference between these two views and animate the changes. If you run the app in the preview canvas or on an iPhone simulator, you will see a nice transition between the circle and the rounded rectangle views. This is the magic of
matchedGeometryEffect.
However, you may notice that the modifier doesn’t animate the color change. This is right.
matchedGeometryEffect only handles the position and size changes.
Exercise #1
Let’s have a simple exercise to test your understanding of
matchedGeometryEffect. Your task is to create the animated transition as shown in the figure below. It starts with an orange circle view. When the circle is tapped, it will transform into a full screen background. You can find the solution in the final project.
Swapping Two Views with Animated Transition
Now that you should have some basic knowledge of
matchedGeometryEffect, let’s continue to see how it can help us create some nice animations. In this example, we will swap the position of two circle views and apply the modifier to create a smooth transition.
We will use a state variable to store the state of the swap and create a namespace variable for
matchedGeometryEffect. Declare the following variable in
ContentView:
By default, the orange circle is on the left side of the screen, while the green circle is positioned on the right. When the user taps any of the circles, it will trigger the swap. You don’t need to figure out how the swap is done when using
matchedGeometryEffect. To create the transition, all you need to do is:
- Create the layout of the orange and green circles before the swap
- Create the layout of the two circles after the swap
If you translate the layout into code, you can write the
body variable like this:
We use a
HStack to layout the two circles horizontally and have a
Spacer in between to create some separation. In case the
swap variable is set to
true, the green circle is placed to the left of the orange circle. Conversely, the green circle is positioned to the right of the orange circle.
As you can see, we just describe the layout of the circle views in difference states and let
matchedGeometryEffect handle the rest. We attach the modifier to each of the
Circle view. However, this time is a bit different. Since we have two different
Circle views to match, we use two distinct IDs for the
matchedGeometryEffect modifier. For the orange circles, we set the identifier to
orangeCircle, while the green circles use the identifier
greenCircle.
Now if you run the app on a simulator, you should see the swap animation when you tap any of the circles.
Exercise #2
Earlier, we use the
matchedGeometryEffect on two circles and swap their position. Your exercise is to apply the same technique but on two images. The figure below shows you the sample UI. When the swap button is tapped, the app swaps the two photos with a nice animation.
You are free to use your own photos. For my demo, I used these free photos from Unsplash.com:
Summary
The introduction of the
matchedGeometryEffect modifier takes the implementation of view animation to the next level. You can create some nice view transitions with much less code. Even if you are a beginner of SwiftUI, you can take advantage of this new modifier to make your app more awesome.
This is a sample chapter of our Mastering SwiftUI book. If you want to learn more about various types of view animation and get the source code, you can check out the book, which has been fully updated for Xcode 12 and iOS 14. | https://www.appcoda.com/matchedgeometryeffect/ | CC-MAIN-2021-17 | refinedweb | 1,481 | 62.98 |
SOLVED Is it possible to export to from Space Center to a multi-page or tall-page PDF?
- StephenNixon last edited by gferreira
I'm getting a lot of value out of mojo.UI
SpaceCenterToPDF(), but finding a major limitation:
The PDFs I export cut off after a single page. So, I can't proof more than a very small sample of text.
Is there a way to export to more than a single page?
My current code is here:
hello @StephenNixon,
the pdf export from Space Center is very low-level (basically just the eps/pdf data which is drawn on screen).
to generate multipage PDF proofs, DrawBot is more appropriate. have a look at Scripts using DrawBot for some examples. you can try reading the Space Center contents and using it as input for the PDF proofs.
hope this helps!
Interesting, okay!
How might I get the contents and sizing (font and window) from the Space Center?
Is this something I would have to add an observer to detect, or is there some other way to detect the content typed in the Space Center?
Shoot, also: do you know if there a way to use the
CurrentFont()as the font in a DrawBot script?
My hope:
- Write a RoboFont script that calls DrawBot as a module
- Within that script, set the font with
font(CurrentFont())
- Get the text and sizing from the
CurrentSpaceCenter()
- Create a drawbot composition with these ingredients
- Save to PDF
The closest things I have found so far:
drawGlyph(glyph)allows me to draw a single RGlyph object to the DrawBot canvas. I guess I could iterate through the text in the space center, and use
drawGlyph()for each, as is written here:
I could test install the font with
CurrentFont().testInstall(), then use that as the
font()in DrawBot, similar to. I would worry that it might be easy to use an old version of the font this way, but I suppose I could even temporarily rename the font to test install and use it, then revert to the normal name at the end of the script. This might get blocked by feature code issues, etc, however.
Is there anything simpler? Either method above would take some level of fuss to make work...
hi @StephenNixon,
do you know if there a way to use the
CurrentFont()as the font in a DrawBot script?
that’s not possible, because:
font()takes a string (font name or path to OpenType font file)
CurrentFont()returns an
RFontobject
I see now that the
SpaceCenterobject is missing from the
mojo.UIdocs, hope to fix it soon. in the meantime, here’s a quick way to access it:
from mojo.UI import CurrentSpaceCenter spaceCenter = CurrentSpaceCenter() print('glyph names:', spaceCenter.get()) print('characters:', spaceCenter.getRaw()) print('fontSize:', spaceCenter.getPointSize()) print('lineheight:', spaceCenter.getLineHeight()) print('tracking:', spaceCenter.getTracking()) print('before:', spaceCenter.getPre()) print('after:', spaceCenter.getAfter()) # list all attributes & methods # print(dir(spaceCenter))
Is there anything simpler?
things are simple if you use the Space Center to proof on screen, and the DrawBot extension to generate PDFs (from UFOs or OTFs).
using the Space Center to create multi-page PDFs is a nice idea, but you’ll need to fill in the gaps because the Space Center was not built to generate PDFs. that’s what makes the idea exciting: it takes some level of fun to make it work ;)
while working on the
SpaceCenterAPI docs, I stumbled across the
glyphRecordsattribute, which returns the computed glyphs (including pre/after contexts, suffix, selected layer, etc). adding a bit of code to fold the glyphs into lines and pages, we can make a simple multi-page PDF exporter:
'''SpaceCenter to multipage PDF''' from mojo.UI import CurrentSpaceCenter # -------- # settings # -------- pageSize = 'A4Landscape' margin = 40 # ------------ # calculations # ------------ f = CurrentFont() spaceCenter = CurrentSpaceCenter() size(pageSize) s = spaceCenter.getPointSize() / f.info.unitsPerEm # scale factor L = (f.info.unitsPerEm + f.info.descender) * s # first line shift w = width() - margin * 2 h = height() - margin * 2 x = margin y = height() - margin - L # ---------- # make pages # ---------- translate(x, y) scale(s) X, Y = 0, 0 for gr in spaceCenter.glyphRecords: # linebreak if (X + gr.glyph.width) * s > w: X = 0 Y -= f.info.unitsPerEm * (1 + spaceCenter.getLineHeight() / 800) # pagebreak if (abs(Y * s) + L) > h: newPage(pageSize) translate(x, y) scale(s) X, Y = 0, 0 with savedState(): translate(X, Y) drawGlyph(gr.glyph) X += gr.glyph.width
you could try adding more features: font name, page number, date and time, glyph metrics, vertical metrics…
I hope this is useful!
Hey Gustavo,
This is amazing; thanks so much for going above and beyond, yet again!
I definitely agree that it's fun to figure this stuff out; I just like to feel relatively confident that I'm not doing something the slow way if there's something simple that I haven't noticed.
I'll try this on proofing very soon, and update the thread with my improvements! :)
- FosterType last edited by
@ThunderNixon I still just use indesign and build proofs manually. Don't worry, you're fine :D
@FosterType haha, thanks for letting me know!
Sometimes, InDesign is definitely the fastest way forward. Other times, though, the siren song of a "one-click" proof is pretty compelling...
it all depends on when the proof is needed in a scope of a design process: is it for you only, for a client, for collaborating typedesigners, your boss...
it all depends on when the proof is needed in a scope of a design process
In this case, I am proofing my own work, plus collaborating with others on UFOs. I want to find the quickest way to look at the glyphs they are drawing and give feedback.
So far, a handy way to do this has been to export the space center to a PDF, which I draw on with the iPad app, Notability. I can then email/screenshot those proofs. It would take longer to do a TestInstall(), then set up a proof in InDesign to make a PDF, then proof that – especially because the TestInstall might conflict with existing versions of the font I have installed on my computer, in general (unless I change naming, but then that's a whole other set of steps).
- StephenNixon last edited by gferreira
Question for @gferreira or @frederik
Having tried Gustavo's excellent starter script, it's a really helpful start! I am now trying to make the text scale so that the PDF width will match the Space Center width, and therefore make it easier to layout basic proofs, right in the Space Center.
However, I'm not quite sure how to get a value for the Space Center width.
If I use
help(CurrentSpaceCenter()), I see (among other things) what seems to be the answer:
| getPosSize(self) | The position and size of the object as a tuple of form *(left, top, width, height)*.
However, if I then use
print(CurrentSpaceCenter().getPosSize()), I simply get this:
(0, 0, 0, 0)
Needless to say, my current Space Center is not
0in size. It looks like this:
Am I perhaps missing some way to access this value from
VanillaBaseObject?
@StephenNixon this will give you the current width & height of the Space Center window:
from mojo.UI import CurrentSpaceCenter (x, y), (w, h) = CurrentSpaceCenter().getNSView().bounds() print(w, h) | https://forum.robofont.com/topic/658/is-it-possible-to-export-to-from-space-center-to-a-multi-page-or-tall-page-pdf | CC-MAIN-2021-49 | refinedweb | 1,219 | 71.55 |
You must have already guessed from the title that today's article will be focusing on bugs in software source code. But not only that: if you are interested not only in C++ and in reading about bugs in other developers' code, but also dig unusual video games and wonder what "roguelikes" are and how they are played, then read on!
While searching for unusual games, I stumbled upon Cataclysm Dark Days Ahead, which stands out among other games thanks to its graphics based on ASCII characters of various colors arranged on the black background.
One thing that amazes you about this and other similar games is how much functionality is built into them. Particularly in Cataclysm, for instance, you can't even create a character without feeling an urge to google some guides because of the dozens of parameters, traits, and initial scenarios available, not to mention the multiple variations of events occurring throughout the game.
Since it's a game with open-source code, and one written in C++, we couldn't walk by without checking it with our static code analyzer PVS-Studio, in the development of which I am actively participating. The project's code is surprisingly high-quality, but it still has some minor defects, some of which I will talk about in this article.
Quite a lot of games have been checked with PVS-Studio already. You can find some examples in our article "Static Analysis in Video Game Development: Top 10 Software Bugs".
Example 1:
This example shows a classic copy-paste error.
V501 There are identical sub-expressions to the left and to the right of the '||' operator: rng(2, 7) < abs(z) || rng(2, 7) < abs(z) overmap.cpp 1503
bool overmap::generate_sub( const int z )
{
    ....
    if( rng( 2, 7 ) < abs( z ) || rng( 2, 7 ) < abs( z ) ) {
        ....
    }
    ....
}
The same condition is checked twice. The programmer copied the expression but forgot to modify the copy. I'm not sure if this is a critical bug, but the fact is that the check doesn't work as it was meant to.
Another similar error:
Example 2:
V728 An excessive check can be simplified. The '(A && B) || (!A && !B)' expression is equivalent to the 'bool(A) == bool(B)' expression. inventory_ui.cpp 199
bool inventory_selector_preset::sort_compare( .... ) const
{
    ....
    const bool left_fav  = g->u.inv.assigned.count( lhs.location->invlet );
    const bool right_fav = g->u.inv.assigned.count( rhs.location->invlet );

    if( ( left_fav && right_fav ) || ( !left_fav && !right_fav ) ) {
        return ....
    }
    ....
}
This condition is logically correct, but it's overcomplicated. Whoever wrote this code should have taken pity on their fellow programmers who will be maintaining it. It could be rewritten in a simpler form: if( left_fav == right_fav ).
Another similar error:
I was surprised to discover that the games going by the name "roguelike" today are only milder representatives of the old genre of roguelike games. It all started with the cult game Rogue, released in 1980, which inspired many students and programmers to create their own games with similar elements. I guess a lot of influence also came from the community of the tabletop game DnD and its variations.
Example 3:
Warnings of this group point to spots that could potentially be optimized, rather than to actual bugs.
V801 Decreased performance. It is better to redefine the second function argument as a reference. Consider replacing 'const .. type' with 'const .. &type'. map.cpp 4644
template <typename Stack>
std::list<item> use_amount_stack( Stack stack, const itype_id type )
{
    std::list<item> ret;
    for( auto a = stack.begin(); a != stack.end() && quantity > 0; ) {
        if( a->use_amount( type, ret ) ) {
            a = stack.erase( a );
        } else {
            ++a;
        }
    }
    return ret;
}
In this code, itype_id is actually a disguised std::string. Since the argument is passed as a constant anyway, which means it's immutable, simply passing a reference to the variable would help to enhance performance and save computational resources by avoiding the copy operation. And even though the string is unlikely to be a long one, copying it every time without good reason is a bad idea - all the more so because this function is called by various callers, which, in turn, also get type from outside and have to copy it.
Similar problems:
Example 4:
V813 Decreased performance. The 'str' argument should probably be rendered as a constant reference. catacharset.cpp 256
std::string base64_encode( std::string str )
{
    if( str.length() > 0 && str[0] == '#' ) {
        return str;
    }

    int input_length = str.length();
    std::string encoded_data( output_length, '\0' );
    ....
    for( int i = 0, j = 0; i < input_length; ) {
        ....
    }

    for( int i = 0; i < mod_table[input_length % 3]; i++ ) {
        encoded_data[output_length - 1 - i] = '=';
    }

    return "#" + encoded_data;
}
Though the argument is non-constant, it doesn't change in the function body in any way. Therefore, for the sake of optimization, a better solution would be to pass it by constant reference rather than force the compiler to create local copies.
This warning didn't come alone either; the total number of warnings of this type is 26.
Similar problems:
Some of the classic roguelike games are still in active development. If you check the GitHub repositories of Cataclysm DDA or NetHack, you'll see that changes are submitted every day. NetHack is actually the oldest game that's still being developed: it was released in July 1987, and its latest version dates back to 2018.
Dwarf Fortress is one of the most popular - though younger - games of the genre. The development started in 2002 and the first version was released in 2006. Its motto "Losing is fun" reflects the fact that it's impossible to win in this game. In 2007, Dwarf Fortress was awarded "Best Roguelike Game of the Year" by voting held annually on the ASCII GAMES site.
By the way, fans might be glad to know that Dwarf Fortress is coming to Steam with enhanced 32-bit graphics added by two experienced modders. The premium version will also get additional music tracks and Steam Workshop support. Owners of paid copies will be able to switch to the old ASCII graphics if they wish.
Examples 5, 6:
Here's a couple of interesting warnings.
V690 The 'JsonObject' class implements a copy constructor, but lacks the '=' operator. It is dangerous to use such a class. json.h 647
class JsonObject {
private:
    ....
    JsonIn *jsin;
    ....
public:
    JsonObject( JsonIn &jsin );
    JsonObject( const JsonObject &jsobj );
    JsonObject() : positions(), start( 0 ), end( 0 ), jsin( NULL ) {}
    ~JsonObject() { finish(); }
    void finish(); // moves the stream to the end of the object
    ....
};

void JsonObject::finish()
{
    ....
}
....
This class has a copy constructor and a destructor but doesn't override the assignment operator. The problem is that the automatically generated assignment operator will simply copy the JsonIn pointer, so after an assignment both JsonObject instances end up pointing to the same JsonIn. I can't say for sure if such a situation can occur in the current version, but somebody will surely fall into this trap one day.
The next class has a similar problem.
V690 The 'JsonArray' class implements a copy constructor, but lacks the '=' operator. It is dangerous to use such a class. json.h 820
class JsonArray {
private:
    ....
    JsonIn *jsin;
    ....
public:
    JsonArray( JsonIn &jsin );
    JsonArray( const JsonArray &jsarr );
    JsonArray() : positions(), ...., jsin( NULL ) {};
    ~JsonArray() { finish(); }
    void finish(); // move the stream position to the end of the array
    ....
};

void JsonArray::finish()
{
    ....
}
The danger of not overriding the assignment operator in a complex class is explained in detail in the article "The Law of The Big Two".
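The fix follows the Rule of Three: once a class declares a copy constructor and a destructor, it should declare the assignment operator too. A minimal sketch of the idea (the class name and members here are illustrative, not the project's actual code):

```cpp
#include <cassert>

// Illustrative resource-owning class. Once a copy constructor and a
// destructor exist, operator= must be written too, or the compiler's
// default member-wise copy would share the owned pointer between objects.
class Tracked {
public:
    explicit Tracked(int v) : value(new int(v)) {}
    Tracked(const Tracked &other) : value(new int(*other.value)) {}
    Tracked &operator=(const Tracked &other) {
        if (this != &other) {      // guard against self-assignment
            *value = *other.value; // deep copy instead of sharing the pointer
        }
        return *this;
    }
    ~Tracked() { delete value; }
    int get() const { return *value; }
private:
    int *value;
};
```

With this operator in place, assignment duplicates the pointed-to state instead of aliasing it, so each object still owns its own resource when the destructors run.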
Examples 7, 8:
These two also deal with assignment operator overriding, but this time specific implementations of it.
V794 The assignment operator should be protected from the case of 'this == &other'. mattack_common.h 49
class StringRef {
public:
    ....
private:
    friend struct StringRefTestAccess;
    char const* m_start;
    size_type m_size;
    char* m_data = nullptr;
    ....
    auto operator = ( StringRef const &other ) noexcept -> StringRef&
    {
        delete[] m_data;
        m_data = nullptr;
        m_start = other.m_start;
        m_size = other.m_size;
        return *this;
    }
This implementation has no protection against self-assignment, which is an unsafe practice: if a reference to *this is passed to this operator, m_data is deleted before the fields are copied, and the object's data can be lost.
Here's a similar example of an improperly overridden assignment operator with a peculiar side effect:
V794 The assignment operator should be protected from the case of 'this == &rhs'. player_activity.cpp 38
player_activity &player_activity::operator=( const player_activity &rhs )
{
    type = rhs.type;
    ....
    targets.clear();
    targets.reserve( rhs.targets.size() );
    std::transform( rhs.targets.begin(), rhs.targets.end(),
                    std::back_inserter( targets ),
                    []( const item_location & e ) {
                        return e.clone();
                    } );
    return *this;
}
This code has no check against self-assignment either, and in addition it fills a vector. During self-assignment, targets and rhs.targets are the same object, so the call to clear() empties the very vector that transform is about to copy from, and all the elements are lost.
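A common way to make such operators safe without an explicit `this == &rhs` check is the copy-and-swap idiom. A sketch with illustrative names (not the game's actual types):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Copy-and-swap: the parameter is taken by value, so a complete copy is
// made before anything in *this is modified. Swapping with that copy
// then makes self-assignment harmless by construction.
class Activity {
public:
    std::vector<int> targets;

    Activity() = default;
    Activity(const Activity &) = default;

    Activity &operator=(Activity rhs) {   // pass by value: copy happens first
        std::swap(targets, rhs.targets);  // then swap with the fresh copy
        return *this;
    }
};
```

During self-assignment, the by-value parameter is copied from the object before clear/transform-style mutation could touch it, so no data is lost.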
In 2008, roguelikes even got a formal definition known under the epic title "Berlin Interpretation". According to it, all such games share the following elements:
Finally, the most important feature of roguelikes is focusing mainly on exploring the world, finding new uses for items, and dungeon crawling.
It's a common situation in Cataclysm DDA for your character to end up frozen to the bone, starving, thirsty, and, to top it all off, having their two legs replaced with six tentacles.
Example 9:
V1028 Possible overflow. Consider casting operands of the 'start + larger' operator to the 'size_t' type, not the result. worldfactory.cpp 638
void worldfactory::draw_mod_list( int &start, .... )
{
    ....
    int larger = ....;
    unsigned int iNum = ....;
    ....
    for( .... ) {
        if( iNum >= static_cast<size_t>( start ) &&
            iNum < static_cast<size_t>( start + larger ) ) {
            ....
        }
        ....
    }
    ....
}
It looks like the programmer wanted to take precautions against an overflow. However, promoting the type of the sum won't make any difference, because the overflow occurs earlier, at the step of adding the values, and the promotion is then applied to an already meaningless value. To avoid this, one of the operands should be cast to a wider type instead: (static_cast<size_t>(start) + larger).
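The difference can be shown with a small sketch (illustrative function, not the project's code): casting one operand forces the addition itself to happen in the wide type.

```cpp
#include <cassert>
#include <cstddef>
#include <limits>

// Widening one operand first makes the addition happen in size_t,
// so it cannot overflow for non-negative int inputs.
std::size_t safe_sum(int start, int larger) {
    return static_cast<std::size_t>(start) + static_cast<std::size_t>(larger);
    // By contrast, static_cast<std::size_t>(start + larger) would add the
    // two ints first; that sum could overflow before the cast is applied.
}
```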
Example 10:
V530 The return value of function 'size' is required to be utilized. worldfactory.cpp 1340
bool worldfactory::world_need_lua_build( std::string world_name )
{
#ifndef LUA
    ....
#endif
    // Prevent unused var error when LUA and RELEASE enabled.
    world_name.size();
    return false;
}
There's one trick for cases like this. If you end up with an unused variable and you want to suppress the compiler warning, simply write (void)world_name instead of calling methods on that variable.
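A sketch of that idiom (the function name is illustrative):

```cpp
#include <cassert>
#include <string>

// The cast to void explicitly discards the value: it silences the
// compiler's "unused variable/parameter" warning without executing
// any member function on the object.
bool world_need_lua_build_stub(std::string world_name) {
    (void)world_name;  // instead of world_name.size();
    return false;
}
```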
Example 11:
V812 Decreased performance. Ineffective use of the 'count' function. It can possibly be replaced by the call to the 'find' function. player.cpp 9600
bool player::read( int inventory_position, const bool continuous )
{
    ....
    player_activity activity;
    if( !continuous ||
        !std::all_of( learners.begin(), learners.end(),
                      [&]( std::pair<npc *, std::string> elem ) {
                          return std::count( activity.values.begin(),
                                             activity.values.end(),
                                             elem.first->getID() ) != 0;
                      } ) ) {
        ....
    }
    ....
}
The fact that count is compared with zero suggests that the programmer wanted to find out if activity contained at least one required element. But count has to walk through the whole container as it counts all occurrences of the element. The job could be done faster by using find, which stops once the first occurrence has been found.
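A sketch of the suggested replacement (illustrative container and values, not the game's actual code):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// std::find returns as soon as the first match is seen, whereas
// std::count must always walk the whole container to tally matches.
bool contains(const std::vector<int> &values, int id) {
    return std::find(values.begin(), values.end(), id) != values.end();
}
```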
Example 12:

This bug is easy to miss unless you know one tricky detail about the char type: EOF is defined as -1. Therefore, when EOF is compared with a variable of type signed char, the condition evaluates to false in almost every case. The only exception is the character whose code is 0xFF (255): when used in the comparison, it turns into -1, making the condition true.
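A minimal illustration of this char/EOF pitfall (this is illustrative code, not the fragment flagged in the project):

```cpp
#include <cassert>
#include <cstdio>

// EOF is an int equal to -1. In the comparison, ch undergoes integer
// promotion, so only the character with code 0xFF (which promotes to
// -1 on common platforms) ever compares equal to EOF.
bool looks_like_eof(signed char ch) {
    return ch == EOF;
}
```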
Example 13:
This small bug may become critical someday. There are good reasons, after all, that it's found on the CWE list as CWE-834. Note that the project has triggered this warning five times.
V663 Infinite loop is possible. The 'cin.eof()' condition is insufficient to break from the loop. Consider adding the 'cin.fail()' function call to the conditional expression. action.cpp 46
void parse_keymap( std::istream &keymap_txt, .... )
{
    while( !keymap_txt.eof() ) {
        ....
    }
}
As the warning says, it's not enough to check for EOF when reading from the file - you also have to check for an input failure by calling cin.fail(). Let's fix the code to make it safer:
while( !keymap_txt.eof() ) {
    if( keymap_txt.fail() ) {
        keymap_txt.clear();
        keymap_txt.ignore( numeric_limits<streamsize>::max(), '\n' );
        break;
    }
    ....
}
The purpose of keymap_txt.clear() is to clear the error state (flag) on the stream after a read error occurs so that you could read the rest of the text. Calling keymap_txt.ignore with the parameters numeric_limits<streamsize>::max() and newline character allows you to skip the remaining part of the string.
There's a much simpler way to stop the read:
while( keymap_txt ) { .... }
When used in a boolean context, the stream converts itself into a value equivalent to true until EOF (or a read failure) is reached.
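A self-contained sketch of that pattern using std::getline, which folds the EOF and failure checks into the loop condition:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Reading while the stream itself converts to true stops cleanly on
// both EOF and read failures, with no separate eof()/fail() bookkeeping.
std::vector<std::string> read_lines(std::istream &in) {
    std::vector<std::string> lines;
    std::string line;
    while (std::getline(in, line)) {
        lines.push_back(line);
    }
    return lines;
}
```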
The most popular roguelike-related games of our time combine the elements of original roguelikes and other genres such as platformers, strategies, and so on. Such games have become known as "roguelike-like" or "roguelite". Among these are such famous titles as Don't Starve, The Binding of Isaac, FTL: Faster Than Light, Darkest Dungeon, and even Diablo.
However, the distinction between roguelike and roguelite can at times be so tiny that you can't tell for sure which category the game belongs in. Some argue that Dwarf Fortress is not a roguelike in the strict sense, while others believe Diablo is a classic roguelike game.
Even though the project proved to be generally high-quality, with only a few serious defects, that doesn't mean it can do without static analysis. The power of static analysis lies in regular use rather than in one-time checks like the one we did here for popularization purposes. When used regularly, static analyzers help you detect bugs at the earliest development stage and, therefore, make them cheaper to fix.
The game is still being intensely developed, with an active modder community working on it. By the way, it has been ported to multiple platforms, including iOS and Android. So, if you are interested, do give it a try! ... | https://www.viva64.com/en/b/0628/ | CC-MAIN-2019-22 | refinedweb | 2,361 | 56.15 |
This blog post is a continuation of a 4-part series on How to use SAP HCP, mobile service for SAP Fiori.
Part 3: Configure Fiori Mobile and build Fiori app
In this third part, we’ll configure Fiori Mobile and test our app on a live device. You’ll learn how to…
- Subscribe your Fiori app in HCP
- Create an Android Signing Profile
- Create your Advanced Configuration File
- Use the Fiori Mobile build process (application wizard, application workflow, etc.)
- Execute the app on the device (new work-around in step 22)
Pre-req:
1. Subscribe Fiori app in SAP HANA Cloud Platform – At the moment, packaged SAP Fiori apps can only be created if subscribed as an HTML5 application in HCP. In future, this will be automated.
- Open SAP HANA Cloud Platform Cockpit
- Click Subscriptions + add New Subscription
- Provider Account – click drop-down and select your account (e.g. P12345trial)
- Application – select your application (e.g. productlisting)
- Subscription Name – append “sub” at the end of the pre-populated app name (e.g. productlistingsub); note that this subscription name is your cloudComponentId in the Fiori Mobile Advanced Configuration File
- Click Save
Step-by-step guide:
1. Open Fiori Mobile Admin Console
- Open HCP Cockpit
- Browse to Services > select Fiori Mobile
- Click Go to Admin Console
2. Add Android Signing Profile
NOTE: If you are using an Apple Signing Profile, you will need to have an iOS Developer Account and must upload your signing certificate, private key pass-phrase, and provisioning profile. Here is a KB article that shows how you can create your own iOS certificate.
- Open Account tab > Signing Profile
- Add New Profile (or use existing profile if you have one)
- You can create a signing profile if you have Java installed on your computer. You can use either the Windows command prompt or Android Studio (if installed) to generate the keystore file. In the example below, I’ll use the Windows command prompt.
- Open the Windows Command Prompt and type the following command (feel free to change the values, but make sure the keypass and storepass passwords are identical, and note the alias)
- keytool -genkeypair -alias password123 -keypass password123 -keystore productlisting.keystore -storepass password123 -dname "CN=Bill Joy, OU=IT, O=ACME, L=Palo Alto, S=CA, C=US" -sigalg MD5withRSA -keyalg RSA -keysize 2048 -validity 99
- Select Android
- Click OK
3. Provide signing profile details and click Save
4. Fiori mobile
- Select Fiori Cloud Edition and click Ping (typically status will let you know if server is available)
- Click OK on Ping Successful window (not pictured)
5. Add Fiori Mobile Application
- Click Applications tab
- Manage Apps
- New Application (NOTE – you will not see Application Type menu unless you have App & Device Management Service enabled)
- Select Application Type – Fiori Mobile
6. Click Get Started to start the App Wizard
7. Select your Fiori app scenario > select (default) – local launchpad (click Next)
- Scenario 1 – “I want to create…” All HTML/JavaScript resources are packaged into the binary and are located on the device when installed; an app re-build is required when making changes to the app, and a notification is triggered on the Admin Console
- Scenario 2 – “I want to mobilize…” This is an optimized Fiori Client build where all HTML/JavaScript is accessed via the cloud; a re-build of the app is not required for this scenario
8. Select your Fiori Server (click Next)
9. Upload an Advanced Configuration File (ACF) – The advanced configuration file is a .json file (you can create one in notepad) that describes an array of components (using properties) that should be packaged with the app. Your ability to create an advanced configuration file is dependent on selecting “I want to create a local launchpad with only the apps I want to mobilize” on the Fiori App Scenario page.
- Browse to upload ACF (please note that fields in ACF are case sensitive)
- Here’s what your sample ACF file code should look like
- id = your app id (it’s located in Web IDE > app_project folder > webapp > manifest.json)
- cloudComponentId = app subscription in HCP
- url = the location under which the application is registered in Fiori Launchpad. By default when deploying the app via SAP WebIDE it is /sap/fiori/[heliumhtml5appname] … it is NOT the subscription name but the name of the app that you subscribe to.
- Click Next
{
  "applications": [
    {
      "id": "com.sap.fiori.products",
      "cloudComponentId": "productlistingsub",
      "url": "/sap/fiori/productlisting"
    }
  ]
}
NOTE: Currently there is no integration with SAP Demo Cloud; therefore, if you try to select and build the available Fiori applications, the apps will not execute on a mobile device.
10. Finish your application – Here, you can build your application or further configure application. In this example, uncheck the option and click Finish.
Application Workflow – After running through the App Creation Wizard, you can modify app information to control your app’s usability and appearance on SAP Mobile Place.
11. Details – no change; here you can modify the app selection, add an app description, and provide more app context. In addition, if you need to re-upload your ACF, you can start your workflow again by clicking “Select Fiori Applications”
12. Insight – Once the application is deployed and consumed, the Administrator can monitor application insights such as usage charts, registration information, and user logs
13. Groups – If using SAP Mobile Place, the Administrator can assign the application to one or more groups
14. Multimedia – For a better user experience, upload an app icon, a splash screen, and optionally a banner image (if using SAP Mobile Place)
- Application – upload Application Icon, Splash Screen, and optionally a Banner Image for SAP Mobile Place
- Screenshot and Videos – optionally add these elements for better usage experience (supported types include jpg, jpeg, png, and mp4)
- Documents – optionally add application usage documentation (supported types include ppt/x, doc/x, txt, pdf)
15. App Settings – no change; Use the SAP Fiori mobile service app settings to define, deploy, and monitor your app
- Application Security – When setting app security, consider the sensitivity of the data in the app. If the data is highly sensitive, requiring a complex password is recommended.
- Feature management – Control which device features (for example: Calendar, Camera, Barcode Scanner) can be used by the application running on the device
- Notifications – Send notifications to iOS and Android devices by using the Notification URL and Application ID under Notification Information
- Logging – Admin can capture log information for review; collect performance metrics and identify performance issues as they occur
16. Categories – no change; Use the Categories tab to sort apps so they appear in logical groupings within SAP Mobile Place
17. Owners – no change; Use the Owner info tab to view the apps you created and therefore own, and to select co-owners for the app.
18. Platforms – Use the Platforms tab to build your app; if you made changes to any of the above settings, please click Save before continuing
- Click Action and select Build
19. Build Summary
NOTE: “debug-enabled” mode is only supported in trial and can only be tested with a physical device. In order to use this feature, you will need Chrome installed as well as the Android SDK (I think). Once your device is connected to your computer, open Chrome and select More Tools > Developer Tools (from Menu). Then, from Console menu, select More tools > Inspect devices. Now run the app on device to debug it.
- Turn off iOS (unless you are building for this platform then leave it on and select your profile)
- For Android – Select your Signing Profile from drop-down
- Click Build
19. Your new signed app is available for consumption; no action is required
20. As a developer, we’ve made it easy to test your app. Simply login to SAP Mobile Place and go to Menu > My Profile > Fiori Mobile Apps
NOTE: Your SAP Mobile Place URL is available via HCP Cockpit > Services > Fiori Mobile > “Go to Mobile Place” or try https://<trial-p#trial>.sapmobileplace.com/ (only replace p# which is your trial id)
21. Click download button to immediately download app to your device (and start install process); else, you can view details of your app by clicking app icon (and download and install app from that page)
22. Once the app is installed, you can run the app (first time authentication will be required)
IMPORTANT NOTE: In order to run the app on your mobile or virtual device, please fix the destination path in your SAP Web IDE project’s /neo-app.json and /webapp/manifest.json files by prepending “/sap” to “/Northwind” in the destination path.
MyApp/neo-app.json:
Before: “path”: “/Northwind”…
After: “path”: “/sap/Northwind”… (do not change the casing)

MyApp/webapp/manifest.json (open with the code editor):
Before: “dataSources”… “/Northwind/V*/northwind/northwind.svc/”,…
After: “dataSources”… “/sap/Northwind/V*/northwind/northwind.svc/”,… (do not change the casing)
This concludes part 3 of this blog… On to part 4, adding push notification.
This is a very smooth and straightforward explanation of Fiori Mobile, Dhimant!!
Thanks
Keshav
I got an error when building the app for the Android platform.
How do I correct this error?
Hi,
Can you please provide specific steps to analyse this further.
1. What kind of App scenario selected? Packed or custom Fiori Client?
2. Do you see app listed in catalog on Applications-> manage Apps?
3. Have you used any localized language while creating application?
Thanks
keshav
Hi,
I just followed the steps from the guide above.
Everything is OK until step 18.
The build progress stops at 80% and then fails.
I’m experiencing error opening Fiori Mobile Admin Console. Invalid account, username or password – Logon Again. It looks like SAML issue?
This should be working now. Please try again.
Thanks
Keshav
Thanks for fixing SAML issue, unfortunately, there is a few more to come –
Tiles are not accessible (numbers are not rendered) – screenshot 1 and there is no + sign to add new profile under (Account tab > Signing Profile > Add New Profile).
The information on the first screen is correct. You have 2 users. You have not enrolled any devices, hence you see zero enrolled devices.
For the signing profile, I have tried with multiple accounts and could not reproduce this problem. What device/browser are you using?
Thanks
Keshav
Hi Experts,
Running step 19, could not generete.
attached, step and log
Hi ,
I to got the same error.Did u found any solution for this.
Dhimant Patel please check below problem.Thanking You
[2016-06-23 14:05:16.42801] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Output] [:processReleaseJavaRes UP-TO-DATE] [Utility=>execCommand]
[2016-06-23 14:05:16.43581] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Output] [:packageRelease] [Utility=>execCommand]
[2016-06-23 14:05:16.44341] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Output] [:assembleRelease] [Utility=>execCommand]
[2016-06-23 14:05:16.45106] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Output] [:cdvBuildRelease] [Utility=>execCommand]
[2016-06-23 14:05:16.45886] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Output] [] [Utility=>execCommand]
[2016-06-23 14:05:16.46659] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Output] [BUILD SUCCESSFUL] [Utility=>execCommand]
[2016-06-23 14:05:16.47420] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Output] [] [Utility=>execCommand]
[2016-06-23 14:05:16.48188] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Output] [Total time: 1 mins 23.128 secs] [Utility=>execCommand]
[2016-06-23 14:05:16.48948] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Output] [Built the following apk(s):] [Utility=>execCommand]
[2016-06-23 14:05:16.49712] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Output] [ /*/*/*/*/*/platforms/android/build/outputs/apk/android-release-unsigned.apk] [Utility=>execCommand]
[2016-06-23 14:05:16.50477] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Return Value: 0] [] [Utility=>execCommand]
[2016-06-23 14:05:16.51248] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Finished Executing Command: cordova build] [] [Utility=>execCommand]
[2016-06-23 14:05:16.52045] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Cordova Build End] [] [Cordova=>build]
[2016-06-23 14:05:16.52817] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Android Sign Application start] [] [CordovaAndriodBuildEntity=>signApplication]
[2016-06-23 14:05:16.83083] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Extracting android Signing Information] [] [CordovaAndriodBuildEntity=>signApplication()]
[2016-06-23 14:05:16.83864] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Decrypting Certificate Start] [] [Utility=>decryptCertificate]
[2016-06-23 14:05:16.84644] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Decrypting Certificate End] [] [Utility=>decryptCertificate]
[2016-06-23 14:05:16.85861] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Executing Command :jarsigner] [] [Utility=>execCommand]
[2016-06-23 14:05:17.32447] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Output] [jarsigner: you must enter key password] [Utility=>execCommand]
[2016-06-23 14:05:17.33318] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Command Return Value: 1] [] [Utility=>execCommand]
[2016-06-23 14:05:17.34103] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Finished Executing Command: jarsigner] [] [Utility=>execCommand]
[2016-06-23 14:05:17.34939] [IT] [576be9c5e595a::p1940837875@trial-p1940837875trial] [Android Sign Application Fail] [] [CordovaAndriodBuildEntity=>signApplication]
Perhaps an issue with signing of your app. If you are still having an issue, please let me know.
Followed all the instructions, but on step 19 the Android build continuously fails. I tried to build with the debug option ticked on step 19, and the build process completed successfully. But when I launch the app on my mobile, an error is shown as
“requestUri”:”sap/fiori/product listing/sap/destinations/Northwind/V2/northwind/northwind.svc/$metadata?sap-documentation=heading”,”statusCode”: 404″
Seeking help..
Thanks
Prakash
In your Web IDE project, please use search function to search for “northwind.svc”. You’ll see three files show up… /PROJECT/.project.json, /PROJECT/webapp/manifest.json, and /PROJECT/dist/manifest.json. Open each file and ensure the uri matches your service (e.g. /sap/destinations/northwind/V2/northwind/northwind.svc)? It’s possible that service was not available but please ensure the path exists and give it a try. Thanks.
Awesome Article 🙂
Thanks Dhimant.
Hi Dhimant Patel,
Running step 19, could not build the app for Android.
The build is failing due to the keystore password. I have checked, and I provided the correct password as mentioned in the steps.
Here is the build log:
[Android Sign Application start] [] [CordovaAndriodBuildEntity=>signApplication]
[2016-06-29 13:18:58.10184] [IT] [5773c78ca9966::i071546@trial-i071546trial] [Extracting android Signing Information] [] [CordovaAndriodBuildEntity=>signApplication()]
[2016-06-29 13:18:58.10960] [IT] [5773c78ca9966::i071546@trial-i071546trial] [Decrypting Certificate Start] [] [Utility=>decryptCertificate]
[2016-06-29 13:18:58.11752] [IT] [5773c78ca9966::i071546@trial-i071546trial] [Decrypting Certificate End] [] [Utility=>decryptCertificate]
[2016-06-29 13:18:58.12952] [IT] [5773c78ca9966::i071546@trial-i071546trial] [Executing Command :jarsigner] [] [Utility=>execCommand]
[2016-06-29 13:18:58.53523] [IT] [5773c78ca9966::i071546@trial-i071546trial] [Command Output] [jarsigner: you must enter key password] [Utility=>execCommand]
[2016-06-29 13:18:58.54387] [IT] [5773c78ca9966::i071546@trial-i071546trial] [Command Return Value: 1] [] [Utility=>execCommand]
[2016-06-29 13:18:58.55179] [IT] [5773c78ca9966::i071546@trial-i071546trial] [Finished Executing Command: jarsigner] [] [Utility=>execCommand]
[2016-06-29 13:18:58.56011] [IT] [5773c78ca9966::i071546@trial-i071546trial] [Android Sign Application Fail] [] [CordovaAndriodBuildEntity=>signApplication]
Can you please help here?
Thanks & regards,
Lalit
Can you re-create your keystore file? Ensure you keep keystore password and alias name the same (e.g. mykey123)
Dhimant,
This actually fixed the issue and build is now successful.
Thanks & regards,
Lalit
In step 11, the build process is reporting the following error message, and the build option is missing?
Please note that this error is generic and is likely due to the app limit being reached in the trial account. There is a limit of two Fiori Mobile apps in a trial account. Please remove existing HCPms and/or Fiori Mobile apps and try again.
I’m unable to log in after successful build in Step 22. (first time authentication will be required) – Registration Error – please check your connection data…
Hi, possibly there was service interruption… did you resolve your issue?
I have exactly the same problem. Is the service alive?
Also, which user name is correct? HCP trial user? The name displayed in SAP Mobile Secure, adding trial-? However, I tried all possible combinations.
Regards,
Jan
Hi Jan, the service is up (as of answering this quetsion).. Your account name is “trial-z####trial”.
Thanks Dhimant. Unfortunately I still cannot login. Also, my HCP trial account is rather old. Could that be a problem?
hi,
I have an interesting question.
So it's OK when you create a simple SAP Fiori Master-Detail Application in Web IDE. When you have selected the “SAP Fiori Master-Detail Application” template, the parameters in your JSON file are:
{
“applications”:[
{
“id”:”com.sap.fiori.products”,
“cloudComponentId”:”productssub”,
“url”:”/sap/fiori/products”
}
]
}
But when you create another application type (I tried to create a SAPUI5 Master Detail Kapsel Offline Application), the url parameter is not the same in the JSON file.
My question: what is the url parameter for a SAPUI5 Master Detail Kapsel Offline Application or other application types?
{
“applications”:[
{
“id”:”com.sap.fiori.products”,
“cloudComponentId”:”productssub”,
“url”:”??????????”
}
]
}
Thanks
The SAPUI5 Master Detail Kapsel Offline App will be supported by Fiori Mobile in the future. At the moment, you need to use a template that has a manifest.json/resource.json file.
So I cannot deploy the offline application? Or is there another alternative?
Because I need to develop an offline app for 1000 users. How can I manage my app with HCP?
Thanks & Regard
Jaouad Essika, the current working model is this…as a developer you create the application and publish the content back to either an on-premise Front End Server or to a Fiori cloud edition account.
At that point, Fiori mobile service can transform the app that you just built into a hybrid application (using Kapsel) using the concept known as “packaging”. As part of the workflow you supply an advanced configuration file which specifies that the app you are packaging supports offline.
Underneath the covers (without you having to interact with it), we do use SAP HCP, mobile service for development and operations (formerly HCPms) to do runtime management and data synchronization.
We do look to add more support to the developer experience, in webIDE, in the future.
Also, you can go ahead and reach to me directly to discuss.
Thanks Britt Womelsdorf you can contact me by email for exchanges more on this subject.
jaouad.essika@gmail.com
Jaouad – I know this is a little bit late, but I wanted to make sure I posted that some of the steps in my original response are no longer necessary. You can build through Web IDE directly as outlined in my blog. We are looking to add a blog on how to build an offline enabled app with the Cloud Build Service soon. Stay tuned!
Hi Britt
I have exactly the same issue. Everything works fine online, but this isn’t possible with the Kapsel Offline Template.
Any help at all would be greatly appreciated.
adamharkus@gmail.com
@Adam, the cloud build service does not currently support the Kapsel Offline Template. See for the list of supported templates. The Kapsel templates are only supported with the local add-on. We are hoping to publish a blog soon about how to create offline applications using the standard templates. Stay tuned!
Marvellous!!!
Thank you so much for getting back to me with the info.
Really appreciate it and look forward to reading the blog.
Adam
Hi Patel,
Nice blog you have made. I have followed your steps and built my app successfully.
Now on HCP Cockpit > Services > Fiori Mobile > “Go to Mobile Place” I accessed my app, but the download/install button doesn't appear. Even if I click on it to see details, the download icon is still not shown, as below. I got an email that the build was successful and that I can download. I downloaded the apk file (78778 KB in size) to my PC and transferred it to my Android phone (version 5.1), but it still could not install. What could I have missed?
Hi, if you browse Mobile Place on your device’s (mobile) browser, do you see download link?
Thank you it worked like a charm.
This screenshot appears to be from a desktop? Does the Install button show on the mobile device itself? One thing you might want to check on the device is the “hamburger icon”, then tap on “My Profile” > “Fiori Mobile Apps”? Do you see it there?
Thanks Britt .I changed the screen to that of mobile and I could be able to see install
While running Step 19, the Android build failed. In the error logs we could see the error – [Android Sign Application Fail] [] [CordovaAndroidBuildEntity=>signApplication]
The rectification to this is – In Step 2.b.b. the keypass and the storepass need to be same password. If both are not same, when I execute Step 19, the Android build fails. may I request to update the same in the blog. Thanks.
Thanks for pointing this out, I have updated instructions.
Hi Dhimant and Britt,
Good stuff! However i am stuck at step 6. When i go to Manage Apps –> New Application –> Select Fiori Mobile —> Press the Get Started button: nothing happens. Tried it on both Chrome / Safari, but no luck. Cancel button does work. Any idea on what might be the case?
You might have too many apps already set up in HCPms and/or Fiori Mobile.
I experienced the same and other bugs as described here: Fiori Mobile Admin Console User Experience Inconsistent
That’s it Jan-Henrich! I’ve checked HCPMs and i did in fact already had two applications configured from previous demo’s. Thanks a lot for sharing!
Kind regards,
Jeroen
Jan-Henrich Mattfeld, Jeroen – Great find. We are working on making that message clearer and also fixing some of the other experiences you found.
Thanks,
Britt
Hello,
were you able to fix the issue encountered in Fiori Mobile the app?
Hi, the issue still exists (i.e. app does not run on mobile device)… I believe issue is with updated Web IDE template that breaks app on mobile devices. We’ll post an update once fixed.
Hi Dhimant, is it the case with the template you used only ?
Good question, I’ll try with other fiori templates tomorrow to see if they all behave same…
Hi Dhimant, I’m running into many errors with fiori’s app deployment to mobile devices by mobile secure. Is there any troubleshooting guide or an updated blog that help us with workarounds, due to the many bugs?
Best regards,
—
Ana
Hi, I’m working on requested guide (hope to have it posted soon). You can feel free to post encountered issues here. Here are basic checks to keep in mind: Trial account only support two (2) apps for mobile; Ensure you publish fiori app to HCP and subscribe the app (per instructions); Ensure your destination is alive; Make sure you have proper signing profiles created in Fiori Mobile Administrator.
Hi, please review step 22 for fix. Again this was “broken” due to modified web ide template(s). Thanks.
Hi Dhimant, I’ve updated the app with the 22 step fix. When I tried to open the app on the device it asks me for login, then it shows me a launchpad with the application in it, the problem is when I try to open the tile inside the launchpad, it shows this error on safari’s web inspector:
Best regards,
—
Ana
Ana Velasquez, I know you have a support case open on the topic. Perhaps we should work through the support case as the communication channel and then post the results if they are valuable to others?
Hi Britt, it’s just that this has me going for days now, and I’m searching in all the possible ways to find a solution. I’ll continue working with the support case and as soon as we’ve got a solution I’ll post it in a blog for others who run into the same problem.
Best regards,
—
Ana
I understand, and I apologize that you are having issues. I think though the support team will be able to help you. The colleagues that are monitoring this post are also involved in helping out with the support ticket. We’ll get it fixed!
Hi Ana,
any news on this? I am facing the same issue.
Thanks,
Andrei
Solved it myself. The advanced config file was wrong; I did not check for case-sensitivity issues, but that was the root problem.
Cheers,
Andrei
Hi Dhimant Patel,
I've posted my issue on this post; could you please take a look at it and tell me if you know a workaround or the proper way to do it?
Fiori app built by Mobile Secure doesn't reach the backend on iOS device
Thanks in advance.
When you go to HCP destinations, are you able to test the connection for your destination? If you are using Mobile Secure (i.e. Fiori Mobile) to create your IPA, you will not need to modify anything on HCPms (your app will display as read-only). For iOS, do you have a proper signing profile created in Mobile Secure? The app will not run if it's not properly signed. Also, did you subscribe to the app in HCP?
Dhimant, I'm able to ping my destination successfully. As described in the question I've posted, when I deploy the app to HCP and then register it on the Fiori Launchpad, the app runs well and reaches the backend.
I did subscribe to the app in HCP, as described in this blog, and the certificates are correctly uploaded to Mobile Secure. If I don't put any call to the backend in the app and deploy it to the device, it opens and runs just fine. But if I add a call to the backend and deploy it to the device, it doesn't work anymore.
Best regards.
—
Juan Lopez
@Juan, would you please fix the destination (step 22) and try again? I just posted it above…
Hi Dhimant,
Thanks for your help; with your suggestion I was able to fix my problem and my application now runs correctly.
It’s really a very helpful blog.
Hi Dhimant,
I'm having problems installing/running the app on my iPhone. Every step went smoothly except the last (22). I chose iOS at the steps where I had to choose between iOS and Android, and did what was necessary for the signing process. The build went smoothly too. I get the icon on my iPhone's main screen, but when tapped, it starts installing and then fails after the white loading circle has finished, with the message 'Impossible de télécharger l'application – "ProductListing" n'a pu être installé pour l'instant', which roughly translates to 'Could not download the application – "ProductListing" could not be installed for now'
Also worth mentioning, I did add the 'sap' to the northwind destination path in both manifest.json and neo-app.json.
Any idea what could prevent the app from downloading?
Joel Provost, this almost always comes down to a signing issue. What type of AppID did you create in your profile? A wildcard, or a specific app ID? What type of iOS cert are you using?
AppID type is “Xcode: iOS Wildcard AppID (*)”
iOS cert type is “Private key, RSA, 2048 bits”
Hi Dhimant,
I am trying to follow your blog. However, I have trouble enabling the FLP in Fiori Mobile in step 4.
When I try to add it, it says I can't add a URL with HTTPS, only HTTP. If I use HTTP instead, it ends up failing with the following message:
Unable to create Fiori Server. Please contact your technical support team for assistance with error token: 57d0b76867080
Any thoughts?
Hi Jakob, this seems like a technical issue… The FLP should have been automatically picked up by FMS, and you should only have had to register it. I saw this issue in production recently, but that was due to a permissions issue… Please send me your trial account number and we'll look into the issue.
Hi Dhimant,
An email should be on its way to you now.
Hi Jakob,
Could you tell me how you fixed this issue?
It is very blocking.
Thanks a lot.
Hi Dhimant,
I have exactly the same issue.
What is the solution ?
Thanks a lot.
Hi Dhimant,
I am facing an issue at step 19: the app build keeps failing at only 7%.
Hi Dhimant,
I tried several times and finally I was able to build the app!
Now I have successfully completed all 22 steps. I have received a mail with a link to download the app on my mobile.
I have downloaded it and it is connecting to the Mobile Place apps!
Thanks & Regards,
Ramesh
Hi Dhimant,
It is really a very helpful blog.
Please keep writing similar blogs, for instance on feature plug-in deployment for mobile apps.
Thanks again for such a wonderful blog!
Regards,
Ramesh
Hi, it’ll be posted soon.
Hi Dhimant,
Once the app is generated and deployed, I cannot seem to delete it in SAP Mobile Secure or in Development and Operations (HCPms, or whatever it is called today 😉 ). The context menu does not offer that option, as opposed to a manual app configuration. In Mobile Secure, the Delete button is grayed out. Any tips on how to do it?
Kind regards,
Jeroen
To delete a “container” in the Mobile Secure/FMS admin, you first have to delete any platform that is not “Ready to Build” in status. Once you delete those, the Delete button will re-enable. It will also delete the App Connection in HCPms.
Thanks Britt, that did the trick!
Regards,
Jeroen
Hi,
How were you able to do it please?
Regards,
Amine
Under the Platforms tab, for any app that shows “New”, “Trial” or “Production”, click the Action icon and select “Delete”. You may have to scroll the Action menu that appears.
Once you delete all those platforms, the “Delete” for the app container should become enabled!
Hi Dhimant/Britt,
I am trying to add my Fiori app here, but under Applications > Manage Apps + New Application I am not getting the 'Get Started' button at all.
This is the first app I am registering with HCPms, and I have not crossed the two-app limit.
I tried with Chrome and IE. For both the button is not appearing.
Please help.
BR
Shibaji
Hi,
I was able to build the app with some warnings. The app is now available in Mobile Place, but the download option is not appearing.
Please help.
BR
Shibaji
In step 8, “Select your Fiori Server (click Next)”, I get this error message:
Unable to retrieve Components from MobilePackager.
If you require help in resolving the issue and would like to open a support ticket,
report this case using error token: 5848339d2b7ce
Any idea about it?
Thx Helmut
This has been resolved for a few weeks now.
As Nash noticed in June, I am facing the same error opening the Fiori Mobile Admin Console: “Invalid account, username or password – Logon Again”.
SAML issue again?
Very nice blog!!
When we use HCPms or Fiori Mobile to build native/hybrid apps for mobile devices, which devices and OS versions are supported?
For Apple devices, which iOS versions are supported for both HCPms and Fiori Mobile apps?
For Android devices, which Android OS versions are supported? And will any Android device on that OS version work, or only specific vendors' devices, like Samsung?
Similarly, what about Windows mobile devices?
Thanks,
Bhavik
You can find all the supported platforms here:
fiori/hcp_mobileservice_fiori_user_guide.pdf
Hi Mark,
I am trying to open the link you provided, but it looks like it is broken, or I am not able to open it.
Here is the link to the old docs
Here is a link to the new docs
I also face this SAML problem:
When I try the link “Go to Admin Console” in the “Fiori Mobile” service of my trial account, I am first redirected to a successful(!) SAML login and then unfortunately to . All configuration and roles for Fiori Mobile were done as documented.
What can I do to successfully get to the Fiori Mobile Admin Console?
Any Idea what I could do?
If you go to you should be asked for an Account and Username. Your account should be trial-s0013822703trial
Enter your username or the email you used to sign up for this trial account.
Unfortunately, same result: after a successful SAML login I arrive at
Wolfgang,
Are you able to put in a support incident under the component MOB-FM? I've looked at your trial server and it is provisioned and available, but I suggest someone in support look through your setup and see what is causing this to fail, as the user information does not appear to be making it to the provisioned account for some reason.
Hi Mark,
thanks for the help. I have created incident 124536 / 2017 .
Regards,
Wolfgang
Hi all,
SAP support finally solved my problem and I can log in now!
Everybody else who could not log in in the past: you should give it a try now!
A big “Thank you!” to all the SAP people who helped us here and provided a fix to this problem!
Regards,
Wolfgang
Hi Dhimant,
I am following “How to use SAP HCP, mobile service for SAP Fiori”. All steps went fine until step 3. I am using my HANA trial account for this. In step 4, I am not getting the Fiori server details. I should get the default Fiori server as but it is not showing in the list under Fiori Mobile. When I tried New Connection, it would not accept https in the virtual host URL. When I tried http, it gave the error “Unable to create Fiori Server. If you require help in resolving the issue and would like to open a support ticket, report this case using error token: 5881ebd84ce16”.
Please help.
hi.
I performed all the steps, but when I get here, I have the following problem. Please help me if you can.
This worked great, but only for the Master-Detail template (an online app).
I tried this with the Kapsel Offline Application template: it displays the tile, which it shouldn't. It's also labelled with the namespace, and it errors out without receiving any OData.
The app works fine online.
Any advice?
The Kapsel Offline Application Template is not supported with this service at this time.
Thanks for confirming Mark.
Hi,
I have the same problem with my Fiori Mobile configuration as was mentioned before when using my Trial HANA Cloud Platform Developer Account. The default Fiori Server is missing (and cannot be entered manually).
Please help.
Hi experts,
In my case, the problem that I'm facing now is:
the Signing Profile tab is not in my SAP Mobile -> Account.
Does anybody have an idea about this problem?
The Manage Signing Profile can be found under Applications. Make sure you have the correct roles assigned to your user through HCP. You will need to be an App Catalog Admin
Hello Guys,
I am facing an issue after installing the app on the mobile. It's asking for a certificate (a 'pkcs#12 file'). Can I download it from Google, or does SAP provide that file?
Please help me here.
Thanks,
Aswin
How did you create your Android Signing Profile? You should not need to install a certificate file on your device if it is signed correctly.
Hello,
I am facing one issue.
I have successfully installed the application on an Android device, and when I try to open the application, a splash screen comes up and then a white screen with the error:
Showing error: Http Status -500 – An internal application error occurred. Request:2065975555 p1942415647trial:flpportal.
I have followed all the required steps & still getting this error.
Can you please help me in this?
Thanks in advance,
Omkar
If you go into your HCP account, under Applications -> Subscriptions, look for the subscription you created. Opening that subscription, is the required destination correctly mapped? It should show green if it is. If it is not, then you need to either change it there, or look at your project's app descriptor file (neo-app.json) and check whether the destination is correct. In the file you should see as part of your routes
Hi Dhimant,
Thank you for the great blog.
I have followed parts 1, 2 & 3 and built the app successfully, but in part 3 I am not able to navigate to Fiori Mobile Place.
I get a message saying it is unavailable or has moved permanently.
Could you please let me know how I can proceed with the next steps?
Regards,
Arati | https://blogs.sap.com/2016/06/03/how-to-use-sap-hcp-mobile-service-for-sap-fiori-part-3/ | CC-MAIN-2018-30 | refinedweb | 6,092 | 65.12 |
A class for constructing level filled graphs for use with ILU(k) class preconditioners.
#include <Tifpack_IlukGraph.hpp>
Tifpack::IlukGraph enables the construction of matrix graphs using level-fill algorithms. The only function required for construction is a getGlobalRowView capability, i.e., the graph that is passed in to the constructor must implement the RowGraph interface defined in Tpetra_RowGraph.
Copy constructor.
Set parameters using a Teuchos::ParameterList object.
This method recognizes two parameter names: Level_fill and Level_overlap. Both are case-insensitive, and in both cases the ParameterEntry must have type int.
This method performs the actual construction of the graph.
Does the actual construction of the overlap matrix graph.
Returns the level of fill used to construct this graph.
Returns the level of overlap used to construct this graph.
Returns the graph of lower triangle of the ILU(k) graph as a Tpetra::CrsGraph.
Returns the graph of upper triangle of the ILU(k) graph as a Tpetra::CrsGraph.
Returns the overlapped graph.
Returns the global number of diagonals in the ILU(k) graph. | http://trilinos.sandia.gov/packages/docs/r10.4/packages/tifpack/doc/html/classTifpack_1_1IlukGraph.html | CC-MAIN-2014-15 | refinedweb | 189 | 51.75 |
Aborting a rake task which uses a subprocess
Discussion in 'Ruby' started by Alex Young, Jan 17,:
The GAVO STC Library
A library to process VO STC specifications
This library aims to ease processing specifications of space-time coordindates (STC) according to the IVOA STC data model with the XML and string serializations. Note that it is at this point an early beta at best. To change this, I welcome feedback, even if it’s just “I’d need X and Y”. Honestly.
More specifically, the library is intended to help in:
- supporting ADQL region specifications and conforming them
- generating registry coverage specifications from simple STC-S
- generating utypes for VOTable embedding of STC information and parsing from them
The implementation should conform to STC-S 1.33; what STC-X is supported conforms to STC-X 1.00 (but see Limitations).
Installation
If you are running a Debian-derived distribution, see Adding the GAVO repository. When you follow that recipe,
aptitude install python-gavostc
is enough.
Otherwise, you will have to install the source distribution. Unpack the .tar.gz and run:
python setup.py install
You will normally need to do this as root for a system-wide installation. There are, however, alternatives, first and foremost a virtual python that will keep your managed directories clean.
This library’s setup is based on setuptools. Thus, it will generally obtain all necessary dependencies from the net. For this to be successful, you will have to have net access.
If all this bothers you, contact the authors.
Usage
Command Line
For experiments, we provide a simple command line tool. Try:
gavostc help
to see what operations it exposes. Here are some examples:
$ gavostc help
Usage: gavostc [options] <command> {<command-args}

Use command 'help' to see commands available.

Options:
  -h, --help            show this help message and exit
  -e, --dump-exception  Dump exceptions.

Commands include:
conform <srcSTCS>. <dstSTCS> -- prints srcSTCS in the system of dstSTCS.
help -- outputs help to stdout.
parseUtypes --- reads the output of utypes and prints quoted STC for it.
parseX <srcFile> -- read STC-X from srcFile and output it as STC-S, - for stdin
resprof <srcSTCS> -- make a resource profile for srcSTCS.
utypes <QSTCS> -- prints the utypes for the quoted STC string <QSTCS>.

$ gavostc resprof "Polygon ICRS 20 20 21 19 18 17" | xmlstarlet fo
<?xml version="1.0"?>
<STCResourceProfile xmlns="" xmlns:
  <AstroCoordSystem id="thgloml">
    <SpaceFrame id="thdbgwl">
      <ICRS/>
      <UNKNOWNRefPos/>
      <SPHERICAL coord_naxes="2"/>
    </SpaceFrame>
  </AstroCoordSystem>
  <AstroCoordArea coord_system_id="thgloml">
    <Polygon frame_id="thdbgwl" unit="deg">
      <Vertex>
        <Position>
          <C1>20.0</C1>
          <C2>20.0</C2>
        </Position>
      </Vertex>
      <Vertex>
        <Position>
          <C1>21.0</C1>
          <C2>19.0</C2>
        </Position>
      </Vertex>
      <Vertex>
        <Position>
          <C1>18.0</C1>
          <C2>17.0</C2>
        </Position>
      </Vertex>
    </Polygon>
  </AstroCoordArea>
</STCResourceProfile>

$ gavostc resprof "Circle FK5 -10 340 3" | gavostc parseX -
Circle FK5 -10.0 340.0 3.0

$ gavostc conform "Position GALACTIC 3 4 VelocityInterval Velocity 0.01 -0.002 unit deg/cy" "Position FK5"
Position FK5 264.371974024 -24.2795040403 VelocityInterval Velocity 0.00768930497899 0.00737459624525 unit deg/cy

$ gavostc utypes 'Redshift TOPOCENTER VELOCITY "z" Error "e_z" PixSize "p_z"'
AstroCoordSystem.RedshiftFrame.value_type = VELOCITY
AstroCoordSystem.RedshiftFrame.DopplerDefinition = OPTICAL
AstroCoordSystem.RedshiftFrame.ReferencePosition = TOPOCENTER
AstroCoords.Redshift.Error -> e_z
AstroCoords.Redshift.Value -> z
AstroCoords.Redshift.PixSize -> p_z

$ gavostc utypes 'Redshift TOPOCENTER VELOCITY "z" Error "e_z" PixSize "p_z"' \
  | gavostc parseUtypes
Redshift TOPOCENTER VELOCITY "z" Error "e_z" PixSize "p_z"
Limitations
- Internally, all dates and times are represented as datetimes, and all information about whether they were JDs or MJDs before is discarded. Thus, you cannot generate STC with M?JDTime.
- All stc:DataModel utypes are ignored. On output and request, only stc:DataModel.URI is generated, fixed to uri stc.STCNamespace.
- “Library” coordinate systems for ECLIPTIC coordinates are not supported since it is unclear to me how the equinox of those is expressed.
- On system transformations, ellipses are not rotated, just moved. No “wiggles” (errors, etc) are touched at all.
- There currently is no real API for “bulk” transforms, i.e., computing a transformation once and then applying it to many coordinates. The code is organized to make it easy to add such a thing, though.
- Serialization of floats and friends is with a fixed format that may lose precision for very accurate values. The solution will probably be a floatFormat attribute on the frame/metadata object, but I’m open to other suggestions.
- Reference positions are not supported in any meaningful way. In particular, when transforming STCs, transformations between all reference positions are identities. This won’t hurt much for galactic or extragalactic objects but of course makes the whole thing useless for solar-system work. If someone points me to a concise collection of pertinent formulae, adding real reference positions transformations should not be hard.
- The behaviour of some transforms (in particular FK5<->FK4) close to the poles needs some attention.
- Empty coordinate values (e.g., 2D data with just one coordinate) are not really supported. Processing them will, in general, work, but will, in general, not yield the expected result. This is fixable, but may require changes in the data model.
- No generic coordinates. Those can probably be added relatively easily, but it would definitely help if someone had a clear use case for them.
- Spectral errors and their “wiggles” (error, size, etc) must be in the same “flavor”, i.e., either frequency, wavelength, or energy. If they are not, the library will silently fail. This is easily fixable, but there’s too much special casing in the code as is, and I consider this a crazy corner case no one will encounter.
- No reference on posAngles, always assumed to be ‘X’.
- Spatial intervals are system-conformed analogously to geometries, so any distance information is disregarded. This will be fixed on request.
- No support for Area.
- Frame handling currently is a big mess; in particular, the system-changing functions assume that the frames on positions, velocities and geometries are identical. I'll probably move towards requiring astroCoords to be in astroSystem.
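Regarding the first limitation above: since the library hands you plain datetimes, any JD/MJD bookkeeping is the caller's job. The conversion itself is simple; a minimal sketch, independent of the library (the helper names are ours, not gavo's):

```python
from datetime import datetime, timedelta

# MJD 0 corresponds to 1858-11-17T00:00:00 (i.e. JD 2400000.5).
MJD_EPOCH = datetime(1858, 11, 17)

def mjd_to_datetime(mjd):
    """Turn a Modified Julian Date into a naive datetime."""
    return MJD_EPOCH + timedelta(days=mjd)

def datetime_to_mjd(dt):
    """Turn a naive datetime back into a Modified Julian Date."""
    return (dt - MJD_EPOCH).total_seconds() / 86400.0
```

With this, values read from MJD columns can be fed to the library as datetimes and written back out without losing the original convention.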
Extensions to STC-S
- After ECLIPTIC, FK4, or FK5, an equinox specification is allowed. This is either J<number> or B<number>.
- For velocities, arbitrary combinations of spaceUnit/timeUnit are allowed.
- To allow the notation of STC Library coordinate systems, you can give a System keyword with an STC Library tag at the end of a phrase (e.g., System TT-ICRS-TOPO). This overwrites all previous system information (e.g., Time ET Position FK4 System TT-ICRS-TOPO will result in TT time scale and ICRS spatial frame). We admit it's not nice, and are open to suggestions for better solutions.
Other Deviations from the Standard
- Units on geometries default to deg, deg when parsing from STC-X.
- The equinox to be used for ECLIPTIC isn’t quite clear from the specs. The library will use a specified equinox if given, else the time value if given, else the equinox will be None (which probably is not terribly useful).
Bugs
- Conversions between TT and TCB are performed using the rough approximation of the explanatory supplement rather than the more exact expression.
- TT should be extended to ET prior to 1973, but this is not done yet.
- STC-S parse errors are frequently not very helpful.
- Invalid STC-X documents may be accepted and yield nonsensical ASTs (this will probably not be fixed since it would require running a validating parser, which with XSD is not funny, but I’m open to suggestions).
API
The public API to the STC library is obtained by:
from gavo import stc
This is assumed for all examples below.
The Data Model
The STC library turns all input into a tree called AST (“Abstract Syntax Tree”, since it abstracts away the details for parsing from whatever serialisation you employ).
The ASTs follow the STC data model quite closely. However, it turned out that – even with the changes already in place – this is quite inconvenient to work with, so we will probably change it after we've gathered some experience. It is quite likely that we will enforce a much stricter separation between data and metadata, i.e., unit, error and such will go from the positions to what is now the frame object.
Thus, we don’t document the data model fully yet. The gory details are in dm.py. Meanwhile, we will try to maintain the following properties:
- All objects in ASTs are considered immutable, i.e., nobody is supposed to change them once they are constructed.
- An AST object has attributes time, place, freq, redshift, velocity, each referring to an object describing the quantity, or None if not given. These are called “positions” in the following.
- An AST object has attributes timeAs, areas, freqAs, redshiftAs, velocityAs containing sequences of intervals or geometries of the respective quantities. These sequences are empty if nothing is specified. They are called areas in the following.
- Both positions and areas have a frame attribute giving the frame (for spatial coordinates, these have flavor, nDim, refFrame, equinox, and refPos attributes, quite like in STC).
- Positions have a values attribute containing either a python float or a tuple of floats (for spatial and velocity coordinates). For time coordinates, a datetime.datetime object is used instead of a float
- Positions have a unit attribute. We will keep this even if all other metadata move to the frame object. The unit attribute follows the coordinate values, i.e., they are tuple-valued when the values are tuples. For velocities and redshifts, there is a velTimeUnit as well.
- ASTs have a cooSystem attribute with, in turn, spaceFrame, timeFrame, spectralFrame, and redshiftFrame attributes.
- NULL is consistently represented as None, except when the values would be sequences, in which case NULL is an empty tuple.
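As an illustration of the shape just described, here are plain dataclasses standing in for the real classes in dm.py (with only a few of the attributes; this is a mock, not the library's code):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)             # AST nodes are treated as immutable
class SpaceFrame:
    flavor: str = "SPHERICAL"
    nDim: int = 2
    refFrame: Optional[str] = None
    equinox: Optional[str] = None
    refPos: Optional[str] = None

@dataclass(frozen=True)
class Position:
    frame: SpaceFrame
    values: Tuple[float, ...] = ()  # tuple-valued for spatial coordinates
    unit: Tuple[str, ...] = ()      # units follow the coordinate values

@dataclass(frozen=True)
class AST:
    place: Optional[Position] = None  # a missing position is None
    areas: Tuple = ()                 # a missing sequence is an empty tuple

place = Position(SpaceFrame(refFrame="ICRS"), (12.0, 12.0), ("deg", "deg"))
ast = AST(place=place)
```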
Parsing STC-X
To parse an STC-X document, use
stc.parseSTCX(literal) -> AST.
Thus, you pass in a string containing STC-X and receive an AST structure.
Since STC documents should in general be rather small, there should be no necessity for a streaming API. If you want to read directly from a file, you could use something like:
def parseFromFile(fName):
    f = open(fName)
    stcxLiteral = f.read()
    f.close()
    return stc.parseSTCX(stcxLiteral)
The return value is a sequence of pairs of
(tagName, ast), where
tagName is the namespace qualified name of the root element of the STC
element. The tagName is present since multiple STC trees may be present
in one STC-X document. The qualification is in standard W3C form, i.e.,
{<namespace URI>}<element name>. If you do not care about
versioning (and you should not need to with this library), you could
find a specific element using a construct like:
def getSTCElement(literal, elementName):
    for rootName, ast in stc.parseSTCX(literal):
        if rootName.endswith('}'+elementName):
            return ast

getSTCElement(open("M81.xml").read(), "ObservationLocation")
Note that the STC library does not contain a validating parser. Invalid STC-X documents will at best give you rather incomprehensible error messages, at worst an AST that has little to do with what was in the document. If you are not sure whether the STC-X you receive is valid, run a schema validator before parsing.
We currently understand a subset of STC-X that matches the expressiveness of STC-S. Most STC-X features that cannot be mapped to STC-S are silently ignored.
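Since the parser is non-validating, even a cheap well-formedness check before calling stc.parseSTCX at least turns XML syntax errors into clear failures. A sketch using the standard library (this catches syntax errors only; it is not a substitute for running an XSD validator):

```python
import xml.etree.ElementTree as ET

def is_well_formed(literal):
    """Cheap sanity check before handing a string to stc.parseSTCX.

    Only catches XML syntax errors; for real validation against the
    STC schema, run a schema validator first."""
    try:
        ET.fromstring(literal)
        return True
    except ET.ParseError:
        return False
```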
Generating STC-X
To generate STC-X, use the
stc.getSTCX(ast, rootElement) -> str
function. Since there are quite a few root elements possible, you have
to explicitly pass one. You can find root elements in
stc.STC. It
is probably a good idea to only use
ObservatoryLocation,
ObservationLocation, and
STCResourceProfile right now. Ask the
authors if you need something else.
There is the shortcut
stc.getSTCXProfile(ast) -> str that is
equivalent to
stc.getSTCX(ast, stc.STC.STCResourceProfile).
Parsing STC-S
To parse an STC-S string into an AST, use
stc.parseSTCS(str) -> ast.
The most common exception this may raise is stc.STCSParseError, though
others are conceivable.
Generating STC-S
To turn an AST into STC-S, use
stc.getSTCS(ast) -> str. If you pass
in ASTs that use features not supported by STC-S, you should get an
STCNotImplementedError or an STCValueError.
Generating Utypes
For embedding STC into VOTables, utypes are used. To turn an AST object
into utypes, use
stc.getUtypes(ast) -> dict, dict. The function
returns a pair of dictionaries:
- the first dictionary, the “system dict”, maps utypes to values. All utypes belonging to AstroCoordSystem go into this group.
- the second dictionary, the “columns dict”, maps values to utypes.
Of course, the columns dict doesn’t make much sense with ASTs
actually containing values. To sensibly use it in a way useful for
VOTables, you can define your columns’ STC using “quoted STC-S”. In
this format, you have identifiers in double quotes instead of normal
STC-S values. Despite the double quotes, only python-compatible
identifiers are allowed, i.e., these are not quoted identifiers in the
SQL sense. The
stc.parseQSTCS(str) -> ast function parses such
strings.
Consider:
In [5]: from gavo import stc
In [6]: stc.getUtypes(stc.parseQSTCS(
   ...:     'Position ICRS "ra" "dec" Error "e_p" "e_p"'))
Out[6]:
({'AstroCoordSystem.SpaceFrame.CoordFlavor': 'SPHERICAL',
  'AstroCoordSystem.SpaceFrame.CoordRefFrame': 'ICRS',
  'AstroCoordSystem.SpaceFrame.ReferencePosition': 'UNKNOWNRefPos'},
 {'dec': 'AstroCoords.Position2D.Value2.C2',
  'e_p': 'AstroCoords.Position2D.Error2Radius',
  'ra': 'AstroCoords.Position2D.Value2.C1'})
Note that there is no silly “namespace prefix” here. Nobody really knows what those prefixes really mean with utypes. When sticking these things into VOTables, you will currently need to stick an “stc:” in front of those.
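A tiny helper for that prefixing step could look like this (a hypothetical convenience function, not part of the library; note the prefix goes on the keys of the system dict but on the values of the columns dict):

```python
def prefix_utypes(sysDict, colDict, prefix="stc:"):
    """Prepend a namespace prefix to the utypes returned by
    stc.getUtypes, for embedding into a VOTable.

    sysDict maps utype -> value, colDict maps column name -> utype."""
    return ({prefix + utype: value for utype, value in sysDict.items()},
            {col: prefix + utype for col, utype in colDict.items()})
```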
Parsing Utypes
When parsing a VOTable, you can gather the utypes encountered into
dictionaries as returned by
getUtypes. You can then pass these to
parseFromUtypes(sysDict, colDict) -> ast. The function does not
expect any namespace prefixes on the utypes.
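How you gather those two dictionaries from an actual VOTable is up to you. One plausible sketch with the standard library, assuming system utypes sit on PARAM elements and column utypes on FIELD elements (a common but not universal layout):

```python
import xml.etree.ElementTree as ET

def gather_utype_dicts(votable_literal, prefix="stc:"):
    """Collect the dictionaries stc.parseFromUtypes expects:
    PARAMs yield utype -> value (system dict), FIELDs yield
    column name -> utype (columns dict)."""
    def strip(utype):
        return utype[len(prefix):] if utype.startswith(prefix) else utype

    sysDict, colDict = {}, {}
    for elm in ET.fromstring(votable_literal).iter():
        tag = elm.tag.rsplit("}", 1)[-1]   # drop any XML namespace
        utype = elm.get("utype")
        if not utype:
            continue
        if tag == "PARAM":
            sysDict[strip(utype)] = elm.get("value")
        elif tag == "FIELD":
            colDict[elm.get("name")] = strip(utype)
    return sysDict, colDict
```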
Conforming
You can force two ASTs to be expressed in the same frames, which we call “conforming”. As mentioned above, currently only reference frames and equinoxes are conformed right now, i.e., the conversion from Galactic to FK5 1980.0 coordinates should work correctly. Reference positions are ignored, i.e. conforming ICRS TOPOCENTER to ICRS BARYCENTER will not change values.
To convert coordinates in ast1 to the frame defined by ast2, use the
stc.conformTo(ast1, ast2) -> ast function. This could look like
this:
>>> p = stc.parseSTCS("Circle ICRS 12 12 1")
>>> stc.conformTo(p, stc.parseSTCS("Position GALACTIC"))
>>> stc.conformTo(p, stc.parseSTCS("Position GALACTIC")).areas[0].center
(121.59990883115164, -50.862855782323962)
Conforming also works for units:
>>> stc.conformTo(p, stc.parseSTCS("Position GALACTIC unit rad")).areas[0].center
(2.1223187792285256, -0.8877243003685894)
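Under the hood, a frame conversion like the ICRS-to-galactic one above is just a rotation on the unit sphere. A self-contained sketch that reproduces the numbers from the example (the matrix holds the commonly tabulated equatorial-to-galactic values; the library keeps its own matrices in spherc.py):

```python
import math

# Standard rotation matrix taking ICRS/J2000 equatorial unit vectors
# to galactic ones (assumed values, as tabulated e.g. in the
# Hipparcos documentation).
EQ2GAL = (
    (-0.0548755604, -0.8734370902, -0.4838350155),
    ( 0.4941094279, -0.4448296300,  0.7469822445),
    (-0.8676661490, -0.1980763734,  0.4559837762),
)

def _apply(m, lon, lat):
    """Rotate spherical coordinates (degrees) by the 3x3 matrix m."""
    lon, lat = math.radians(lon), math.radians(lat)
    v = (math.cos(lat) * math.cos(lon),
         math.cos(lat) * math.sin(lon),
         math.sin(lat))
    w = tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))
    return (math.degrees(math.atan2(w[1], w[0])) % 360.0,
            math.degrees(math.asin(max(-1.0, min(1.0, w[2])))))

def icrs_to_galactic(ra, dec):
    return _apply(EQ2GAL, ra, dec)

def galactic_to_icrs(l, b):
    # The inverse of a rotation matrix is its transpose.
    return _apply(tuple(zip(*EQ2GAL)), l, b)
```

Applied to the circle's center (12, 12), this lands within a fraction of an arcsecond of the library's (121.5999, -50.8629).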
Transformation
For simple transformations, you can ask DaCHS to give you a function just turning simple positions into positions. For instance,
from gavo import stc

toICRS = stc.getSimple2Converter(
    stc.parseSTCS("Position FK4 B1900.0"),
    stc.parseSTCS("Position ICRS"))
print(toICRS(30, 40))
shows how to turn positions given in the B1900 equinox (don't sweat the reference system for data that old) into ICRS.
Equivalence
For some applications it is necessary to decide if two STC specifications are equivalent. Python's built-in equivalence operator requires all values in two ASTs to be identical, except for the values of id attributes.
Frequently, you want to be more lenient:
- you might decide that unspecified values match anything
- you may ignore certain keys entirely (e.g., the reference position when you’re doing extragalactic work or when a parallax error doesn’t matter)
- you may want to view certain combinations as equivalent (e.g., ICRS and J2000 are quite close)
To support this, the STC library lets you define
EquivalencePolicy
objects. There is a default equivalence policy ignoring the reference
position, defining ICRS and FK5 J2000 as equivalent, and matching Nones
to anything. This default policy is available as
stc.defaultPolicy.
It has a single method,
match(sys1, sys2) -> boolean with the
obvious semantics. Note, however, that you pass in systems, i.e.,
ast.cooSystem, rather than the ASTs themselves.
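To make those semantics concrete, here is a toy policy written from scratch. The class name and the dict-based system representation are invented for this sketch; gavo's real implementation lives in stc/eq.py and works on ast.cooSystem objects:

```python
class LenientPolicy:
    # specifications treated as interchangeable, cf. ICRS vs. FK5 J2000
    equivalentFrames = {frozenset(["ICRS", "FK5 J2000"])}
    # keys ignored entirely, cf. the default policy's reference position
    ignoredKeys = {"refPos"}

    def match(self, sys1, sys2):
        for key in (set(sys1) | set(sys2)) - self.ignoredKeys:
            v1, v2 = sys1.get(key), sys2.get(key)
            if v1 is None or v2 is None:    # unspecified matches anything
                continue
            if v1 == v2 or frozenset([v1, v2]) in self.equivalentFrames:
                continue
            return False
        return True

policy = LenientPolicy()
print(policy.match({"frame": "ICRS", "refPos": "TOPOCENTER"},
                   {"frame": "FK5 J2000", "refPos": "BARYCENTER"}))   # True
print(policy.match({"frame": None}, {"frame": "GALACTIC"}))           # True
print(policy.match({"frame": "ICRS"}, {"frame": "GALACTIC"}))         # False
```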
You can define your own equivalence policies. Tell us if you want that
and we’ll document it. In the meantime, check
stc/eq.py.
Hacking
For those considering to contribute code, here is a short map of the source code:
- cli – the command line interface
- common – exceptions, some constants, definition of the AST node base class
- conform – high-level code for transformations between reference systems, units, etc.
- spherc.py, sphermath.py – low-level transformations for spherical coordinate systems used by conform
- times – helpers for converting time formats, plus transformations between time scales used by conform.
- dm – the core data model, i.e. definitions of the classes of the objects making up the ASTs
- stcsast.py, stcxast.py – tree transformers from STC-S and STC-X concrete syntax trees to ASTs.
- stcsgen.py, stcxgen.py – serializers from ASTs to STC-S and STC-X
- utypegen.py, utypeast.py – code generating and parsing utype dictionaries. These are thin wrappers around the STC-X code.
- stcs.py, stcsdefaults.py – a grammar for STC-S and a definition of the defaults used during parsing and generation of STC-S.
- units.py – units defined by STC, and transformations between them
Since the STC serializations and the sheer size of STC are not really amenable to a straightforward implementation, the stc*[gen|ast] code is not exactly easy to read. There’s quite a bit of half-assed metaprogramming going on, and thus these probably are not modules you’d want to touch if you don’t want to invest substantial amounts of time.
The conform, spherc, sphermath, units and times combo, though, shouldn’t be too opaque. Start in conform.py: it contains the “master” code for the transformations (which may need some reorganization when we transform spectral and redshift coordinates as well).
Then, things get fanned out; in the probably most interesting case of spherical coordinates, this leads to spherc.py. That module defines lots of transformations and the getTrafoFunction function. All the spherical coordinate stuff uses an internal representation of STC, 6-vectors and frame triples; see conform.conformSystems on how to obtain these.
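If 6-vectors sound mysterious: their positional part is simply a Cartesian unit vector (velocities fill the remaining components). The spherical↔Cartesian round trip at the heart of sphermath.py can be sketched with the standard library alone (a toy illustration, not gavo's actual code):

```python
import math

def toUnitSphere(alpha, delta):
    # spherical coordinates (rad) -> Cartesian unit vector
    return (math.cos(delta) * math.cos(alpha),
            math.cos(delta) * math.sin(alpha),
            math.sin(delta))

def fromUnitSphere(x, y, z):
    # Cartesian unit vector -> spherical coordinates (rad)
    return math.atan2(y, x), math.asin(z)

# the galactic position from the conforming example, in rad
alpha, delta = 2.1223187792285256, -0.8877243003685894
roundTripped = fromUnitSphere(*toUnitSphere(alpha, delta))
print(roundTripped)
assert abs(roundTripped[0] - alpha) < 1e-12
assert abs(roundTripped[1] - delta) < 1e-12
```

A frame transformation is then essentially a rotation matrix applied between these two conversions.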
To introduce a new transformation, write a function or a matrix implementing it and enter it into the list in the construction of _findTransformsPath.
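Conceptually, the list that _findTransformsPath consults describes a graph whose nodes are frames and whose edges are the directly implemented transformations; finding a transformation amounts to a shortest-path search. Here is a toy version of that idea (frame names and the helper are made up here; the real code in spherc.py chains matrices and handles parameterized frames):

```python
from collections import deque

# toy catalogue of directly implemented transformations
directTrafos = [("FK4", "FK5"), ("FK5", "ICRS"), ("ICRS", "GALACTIC")]

def findTransformsPath(src, dst):
    # build an undirected adjacency list: transformations are invertible
    graph = {}
    for a, b in directTrafos:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    # breadth-first search yields a shortest chain of hops
    queue, seen = deque([(src, [])]), {src}
    while queue:
        frame, path = queue.popleft()
        if frame == dst:
            return path
        for nxt in graph.get(frame, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(frame, nxt)]))
    raise ValueError("No transformation path from %s to %s" % (src, dst))

print(findTransformsPath("FK4", "GALACTIC"))
# [('FK4', 'FK5'), ('FK5', 'ICRS'), ('ICRS', 'GALACTIC')]
```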
Either way: If you’re planning to hack on the library, please let us know at gavo@ari.uni-heidelberg.de. We’ll be delighted to help out with further hints.
Extending STC-S
Here’s an example of an extension to STC-S: let’s handle the planetary ephemeris element.
Checking the schema, you’ll see only two literals are allowed for the ephemeris: JPL-DE200 and JPL-DE405. So, in stcs._getSTCSGrammar, near the definition of refpos, add:
plEphemeris = Keyword("JPL-DE200") | Keyword("JPL-DE405")
The plan is to allow the optional specification of the ephemeris used after refpos. Now grep for the occurrences of refpos and notice that there are quite a number of them. So, rather than fixing all those rules, we change the refpos rule from:
refpos = (Regex(_reFromKeys(stcRefPositions)))("refpos")
to:
refpos = ((Regex(_reFromKeys(stcRefPositions)))("refpos")
    + Optional(plEphemeris("plEphemeris")))
We can test this. In stcstest.STCSSpaceParsesTest, let’s add the sample:
("position", "Position ICRS TOPOCENTER JPL-DE200"),
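Should you want to experiment without a gavo checkout, the effect of the Optional ephemeris piece is easy to mimic with a stdlib regular expression (the keyword lists are abbreviated stand-ins for the sets in stcs.py):

```python
import re

# abbreviated stand-ins for the keyword sets in stcs.py
refpos = r"(?P<refpos>TOPOCENTER|BARYCENTER|GEOCENTER|UNKNOWNRefPos)"
plEphemeris = r"(?:\s+(?P<plEphemeris>JPL-DE200|JPL-DE405))?"
pattern = re.compile(refpos + plEphemeris)

match = pattern.search("Position ICRS TOPOCENTER JPL-DE200")
print(match.group("refpos"), match.group("plEphemeris"))  # TOPOCENTER JPL-DE200

match = pattern.search("Position ICRS TOPOCENTER")
print(match.group("plEphemeris"))  # None
```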
Now, the refpos nodes are handled in the _makeRefpos function, looking like this:
def _makeRefpos(node):
    refposName = node.get("refpos")
    if refposName=="UNKNOWNRefPos":
        refposName = None
    return dm.RefPos(standardOrigin=refposName)
The node passed in here is a pyparsing node. Since in our data model, None is always null/ignored, we can just take the planetary ephemeris if it’s present, and the system will do the right thing if it’s not there:
def _makeRefpos(node):
    refposName = node.get("refpos")
    if refposName=="UNKNOWNRefPos":
        refposName = None
    return dm.RefPos(standardOrigin=refposName,
        planetaryEphemeris=node.get("plEphemeris"))
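The reason this works without a conditional is Python's dict.get combined with the library's None-is-unspecified convention; in miniature (RefPos here is a toy stand-in for dm.RefPos):

```python
class RefPos:
    # toy stand-in for dm.RefPos
    def __init__(self, standardOrigin=None, planetaryEphemeris=None):
        self.standardOrigin = standardOrigin
        self.planetaryEphemeris = planetaryEphemeris

# a parse result without an ephemeris: get() quietly yields None
node = {"refpos": "TOPOCENTER"}
rp = RefPos(standardOrigin=node.get("refpos"),
            planetaryEphemeris=node.get("plEphemeris"))
print(rp.standardOrigin, rp.planetaryEphemeris)  # TOPOCENTER None
```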
Let’s test this; testing STC-S to AST parsing takes place in stctest.py,
so let’s add a method to
CoordSysTest:
def testPlanetaryEphemeris(self):
    ast = stcsast.parseSTCS("Time TT TOPOCENTER JPL-DE200")
    self.assertEqual(
        ast.astroSystem.timeFrame.refPos.planetaryEphemeris,
        "JPL-DE200")
Thus, we can parse the ephemeris spec from STC-S. To generate it, two things need to be done: the DM item must be transformed into the CST the STC-S is built from, and that part of the CST must be flattened out. Both things happen in stcsgen.py. The CST is just nested dictionaries. Refpos handling happens in refPosToCST, so replace:
def refPosToCST(node):
    return {"refpos": node.standardOrigin}
with:
def refPosToCST(node):
    return {
        "refpos": node.standardOrigin,
        "planetaryEphemeris": node.planetaryEphemeris,
    }
To flatten that out to the finished string, the flatteners need to be told that you want that key noticed. Grepping for refpos shows that it’s used in several places. So, let’s define a “common flattener”: a function taking a value and the CST node (i.e., a dictionary) the value was taken from, and returning a string ready for inclusion in the STC-S. The flattener here would look like this:
def _flattenRefPos(val, node):
    return _joinWithNull([node["refpos"], node["planetaryEphemeris"]])
The _joinWithNull call makes sure that empty specifications do not show up in the result.
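The text does not show _joinWithNull itself; a plausible minimal version (the real helper sits in stcsgen.py) simply drops None entries before joining:

```python
def joinWithNull(parts):
    # drop unspecified (None) items so they leave no trace in the STC-S
    return " ".join(p for p in parts if p is not None) or None

print(joinWithNull(["BARYCENTER", "JPL-DE405"]))  # BARYCENTER JPL-DE405
print(joinWithNull(["BARYCENTER", None]))         # BARYCENTER
print(joinWithNull([None, None]))                 # None
```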
This “global” flattener is now entered into
_commonFlatteners, a
dictionary mapping specific CST keys to flatten functions:
_commonFlatteners = {
    ...
    "refpos": _flattenRefPos,
}
The most convenient way to test this is to define a round-trip test.
These again reside in stcstest. Use
BaseGenerationTest and add a
sample pair like this:
("Redshift BARYCENTER JPL-DE405 3.5", "Redshift BARYCENTER JPL-DE405 3.5")
With this, you should be done. | https://dachs-doc.readthedocs.io/stc.html | CC-MAIN-2020-29 | refinedweb | 3,548 | 50.02 |
We're lucky to have Quizlet-loving teachers like Charlotte Cassidy, a computer teacher at San Ramon Valley Christian Academy in Danville, CA (a private K-8 school in the East Bay), who invited the Quizlet team to visit her school. When we heard how much they all used and loved Quizlet, there was no way we could turn her down:
We introduced Quizlet to our students last year and our teachers and students LOVE it. We use it almost across the board in every subject in junior high and it is used every day for a variety of purposes, such as studying, assigning learn mode for homework and using Quizlet for in class tests. Our students and teachers would be so excited and eager to have Quizlet visit our school so that we could share with you all of the ways we use Quizlet.
With students and teachers using Quizlet in nearly every subject, this was clearly a Quizlet Power School we had to check out.
Quizlet in the Classroom
We went to several classes, fixed some bugs (no more recurring invites!), and met with students to field questions about Quizlet and ask them what they thought Quizlet should do next to make their studying experience even better.
First we visited 8th Grade Social Studies to see students using Quizlet to study world geography.
Next, we went to 6th grade Spanish where the teacher was using her SMART board to project Quizlet for the whole class to go over new vocabulary.
Ask the Quizlet Team Anything
Charlotte organized an exciting opportunity to meet with the 6th and 7th grades so that we could ask them questions and they could ask us everything they wanted to know about Quizlet.
Students asked how we built the map on the Quizlet home page that shows people all over the world studying on Quizlet (which David expertly explained). Dave told them how we build our engineering team and how impactful learning to program could be for them. Andrew spoke a lot about how he started Quizlet and what it took to build a website like this.
Learning From Our Users
A major theme from our conversation was that Quizlet made studying fun and easy (or at least as fun as stu-DYING could be) and that an achievement/badge system could make it even better. Quizlet's CEO, Dave, also asked the students how they perceived Quizlet as a brand. We're happy to report that we're pretty cool in their minds and that students view Quizlet very positively, as a common effort between themselves and their teachers to do well in school. And we're pretty sure that a Quizlet t-shirt store would fare pretty well, based on our extensive market research.
We were impressed by how much the students knew about Quizlet and how precisely they could speak about what they liked (and didn't like) about using Quizlet. Thanks so much to Charlotte and the whole school for having us!
Want Quizlet in your class?
The school year is almost over! If you're in the San Francisco Area and use Quizlet, we'd love to stop by your school as well! Write to us at sophia@quizlet.com or drop us a note in feedback.
cool
i LOVE you guys<3
1st and 2nd comment
Super-duper awesome! Hi fivers!!! :)
I wish my school was like this. Although, pretty much my whole grade has an account but none of the teachers do. I LOVE QUIZLET!
6th comment!
7th comment, and so cool!
8th comment and cool :)
I am a student and have my own over 13 account. To help people study, I created a class for my class at school, and with my friend, explained how it worked. My friend and I then helped everyone sign up and most people chose to. Because of this, I have 20 members in my class now! My teacher was very happy! And by 3 days, I have created so many sets! I am very pleased with Quizlet and I hope it can you far in business and get more popular. (In other countries besides the U.S.) I wish you guys could come, but I live in NWI Indiana.
Oh, and thanks so much! However, it would be cool if you have assignments that can be posted, Word documents, PDF Files and more that can be uploaded on another assignment section on Quizlet. You guys saved my homework, test, and more with Quizlet.
Thanks again,
loop2014st
This is the 8th comment but 6th person to comment! Yay, and GO QUIZLET!!!!
11th Comment!
Coolness factor increases greatly with startup tees.
I would like to be able to "follow" people/friend them so that I can send messages or make sets only visible to them (without making them part of a public class).
cool
@loop20141 I have a story exactly like yours! But instead of only my class getting hooked on Quizlet, the entire 7th grade joined! Our average grade went up too! I am so happy with Quizlet.; Thank you so much Quizlet. I wish you could come to my school, but live in Pensacola, Florida.
All the way across the country from you guys :(
Nice.
15th High Five! I think that's pretty good.
Oh ya, go Quizlet!
17th comment!
Quizlet is great and it has helped me and my sophomore grade (about 57 people in the group) throughout 8th grade to currently the end of sophomore year. Our school is Covington Latin School which is a prestigious school and we as students study a lot, we currently have 256 sets just for our sophomore year. Thanks Quizlet, (having skipped 3 grades) I would probably not have been able to get this far with straight A's without you. KEEP IT UP.
20th Comment! Wow. great teacher! Quizlet is really helpful and i use it to study most stuff.
21st century fox haha... thanks quizlet for all your help... COME TO D.C.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
24!!!!
u know it would be AWESOME if u could like make emotes; not like u have to type it and be confused and u guys should add this emote(its supposed look like a robot or a hippe dude) [:|]
Thanks for coming! You guys were awesome and very informative!
Quizlet is fantastic! Keep up the good work!
28th comment!!
BTW That looks awesome! Keep up the good work!
29th comment
30th coment
cool
you guys are cool
Can You please come to my school practically everyone is obbssesed with quizlet.
I wish I lived I United States
35th comment
Who cares what number you are commenting? I love quizlet I work on worldly wise(a vocab thing) and I used to get 5/15 right because I never studied, now I study constantly HOURS cause its fun! Thank you it has made my grades much higher!
I learned about quizlet thru my communications professor, if I never knew about it, I would be doing terrible in all my classes. But honestly, I get kind of bored of it from time to time. There should be funner ways to learn in the quizlet program, I like the testing and flash cards portion though for now..
OMG this is my school! You guys were awesome and we had a lot of fun. My grades have improved dramatically because of quizlet: after studying on quizlet for one day, I got a 100% on two (!!!) difficult science tests. Thank you so much for visiting SRVCA!!!!
thanks
Wow, that second-to-last picture has got to be one heck of a classroom.
thatss cooll..... i got 40th comment. Come to the Netherlands, Quizlet!
when I searched to find a practical questions, I found a great free website that provides for us how to evaluate your self. However,I hope they could improve more and more especially in Medical-Nursing topics in advance process such as free NCLEX test for nursing, Advance health assessment, nursing theory......because my major going to be a MSN nurse in US for 2013-2013. I hope I could see more quiz questions in different nursing topics.... Thanks
That would be amazing. But I'm home-educated. And I don't live in America. I have been on Quizlet for about a year and use it all the time. I have already asked you to do this about half a dozen times, but here it is again. Please give us the option of parentheses and grammar counting in learn mode again. It was a lot better.
1: Because you could learn the the basics of a set without parentheses and then learn more finicky bits by turning parentheses on
2:I do some German. Capitalization rules in German are a bit odd, and I'd like to be able to learn them.
Thank you for making Quizlet!!!!!
41st comment
Visit my school!!! My school is renfroe middle in decatur ga.
42nd comment. drn.
thats so cool guys!! totally visit more schools!
awesome thanks for quizlet it is a great website its helped me and i create sets for my spanish class
visit west morris central
ccccccccccccccccccccccccccccccccccccccc!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
cool!can i go???????????????????????????????????????????????????????????????????????????????????????????????????????????????????
it s good like best!
cool beans
49th. one to leave a msg
Who cares whos first to send a comment? I have 3 words for u quizlet: I LUVE U!!!!!!!!!!!! Tests are ranging in the 90's now cuz of u!
i used to get like 70 on my exams!!!!!!!!
Keep progressing to keep quizlet up!
also when r u gunna ad the multiplayer game???
thanks so much
-Kingfanta, lover of soda and hated by dentists.
Looks like ya'll had fun! I would ask you to visit my school... but I am homeschooled! :P
Keep up the good work. You make learning fun!
I have a question for Quizlet: Can you make it possible to have friends on Quizlet?
Thank you so much for visiting our school! It was such a pleasure hosting the Quizlet team! We are very lucky to be located in the SF Bay Area. Thank you for all that you are doing to support students and teachers! You are doing amazing work and really changing education!
Thank you very much for visiting my class and school. It was a pleasure to meet you and talk with you. Please come again!
It was a pleasure meeting and speaking with all of you. Please come again! You make my life so much easier as a teacher!!!
Great job quizlet! keep up da good work BD
Wow you guys are always up to something! Keep rocking San Fran!
69th! Awesomness!
adding friends on quizlet to share your sets and what not, would be a nice addition!
whos shane moone
Oh, thank goodness for the "no more reccurring invites"! I was getting sick of them...
my name is elahe. I am iranian. I wished i lived in abroad ...
64
100th High Five!
I remember that
Awesome! I started a class, invited my friend, and my friend sent invites, and by the end of the week I had 15 members. Now I have 20, but Quizlet really raised all of our grades. Now I have so many people to accept everyweek. LOL! :)
Cool!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Cool!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!So cool!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Cool, cool, cool!! :D
Thank you for your webside.
I think The quizlet is really good. How can I have this soft ware for my own PC to use?
73
74
75 comment!! Cool, Quizlet!!
****
Quizlet really helps me a lot and I really love it :)
Quizlet really helps me a lot and I really love it :)
cool
wow this is really cool
very impressive
wow re
? sorry i dont know what happened?????
sorry everybody
i love family guy!
omg i just made my own quizlet account for no reason my friend told me about it
86 comment:(
Cool! Actually, if you are taking suggestions, I have a couple ideas...
Make profile experience points. They will show up by your user name and you will earn them by high scores on tests, breaking records on scatter and space race, etc. If you end up making awards, getting awards can get you points, and you can get awards for having points!
(e.g. I get 5 points for making an account. I get 20 points for earning the 100% Correct badge. If I get 1000 experience points I get the XP Master badge)
Because I'm an avid user, I have 1500 points, but my friend who uses doesn't use quizlet as often has only 700 points.)
Make multi-player study options! There could be a game like a quiz show, where a question/definition/word is on the screen, and the first out of up to 4 players who buzzes in gets to try and answer the question/definition/word, and if you get it right you get a point in the game. After about 10 questions, the player with the most points wins, with a experience points pr
Oh yeah, and if you could fix the scatter bug that would be great. I will get an awesome score, beat everyone else, and then I'll check back later and someone will have beat my record with a score of 0.0 seconds. Not cool, man.
I love quiz let! I live in South East Alaska, and often travel via boat, where we have no wifi. I was wondering if there was a way to work the flashcards/Space Race without internet? If so that would be amazing!
cool i love quizlet!!!
cool beans
90th comment!!! WHAT!
nice
92nd comment's a charm :p
Teachers have asked the principal if she culd buy a package for our school. We don't know how many teachers will be in our school in the Fall because our school will be combined with another Chicago Public School because of mass school closings. Does such a discount deal exist?
Hey this is awesome!
You guys should totally do a national tour and visit the schools that have the most quizlet users all over the country, or you could have an essay contest or something like that! I don't know which but you guys need to come out to all the other states instead of staying in SF.
Thanks and I hope you guys are doing great!
This really cool my ELA teacher loves Quizlet and so do I! But none of the other teachers have it and not a lot of students either. :(
You guys should come to Kelly Mill Middle in South Carolina!!!! That would be awesome! :D
Please Visit St. Patricks Episcopal Day school in Washington DC.
Come to CBA Syracuse in New York!!!!
Comment on my sets if you think Quizlet should do a national tour!
That's my school!!! Had tons of fun! Thanks for visiting our school and I love Quizlet
Yeah, thanks for coming. We all really liked it, and we had fun! I hope the t-shirts come out soon!
I am 101. Love Quizlet.
HEY that is basically our class.Our vocab tests are based on quizlet and we spend at least an hour on it at school.
this is my school
cool
I went there. ughfm o.o i left for 6th grade in normal school.o///o i know those people tho huhu :P It was so awkward der :3 or i was x.x oh well very nice xD
Is there any way we can make the "counting parentheses" a button?
ok that is cool guys bat i am new in this program an i look for classroom that i can to study English language , so how can you help me .
Who can helping me , how i can to study English in Quizlet
I wish my school was like this.
Comparison of Pinia and Vuex: Is Pinia a good alternative to Vuex?
Introduction
Pinia is a lightweight state management library for Vue.js that has become very popular recently. It uses the new reactivity system in Vue 3 to build an intuitive and fully typed state management library.
Pinia's success can be attributed to its unique capabilities for managing stored data (extensibility, store module organization, grouping of state changes, multiple store creation, and so on).
On the other hand, Vuex is also a popular state management library built for the Vue framework, and it is the state management library recommended by the Vue core team. Vuex pays great attention to application scalability, developer ergonomics, and developer confidence. It is based on the same flux architecture as Redux.
In this article, we will compare Pinia and Vuex. We will analyze the setup, community strength, and performance of the two frameworks. We will also look at the new changes in Vuex 5 compared to Pinia 2.
Setup
Pinia setup
Pinia is easy to get started with, because it only requires an installation and the creation of a store.
To install Pinia, you can run the following command in the terminal:
yarn add pinia@next
# or with npm
npm install pinia@next
This version is compatible with Vue 3. If you are looking for a version compatible with Vue 2.x, please check the v1 branch.
Pinia is a wrapper around the Vue 3 Composition API. Therefore, you don't need to initialize it as a plugin, unless you need Vue devtools support, SSR support, or webpack code splitting:
// app.js
import { createPinia } from 'pinia'

app.use(createPinia())
In the above snippet, you add Pinia to the Vue.js project so that you can use Pinia's global objects in your code.
Then, create a store by calling the defineStore method with an object containing the state, actions, and getters needed for a basic store:
// stores/todo.js
import { defineStore } from 'pinia'

export const useTodoStore = defineStore({
  id: 'todo',
  state: () => ({
    count: 0,
    title: "Cook noodles",
    done: false
  })
})
Vuex setup
Vuex is also very easy to set up, requiring only an installation and the creation of a store.
To install Vuex, you can execute the following command in the terminal:
npm install vuex@next --save
# or with yarn
yarn add vuex@next --save
Then, create a store by calling the createStore method with an object containing the state, actions, and getters needed for a basic store:
// store.js
import { createStore } from 'vuex'

const useStore = createStore({
  state: {
    todos: [
      { id: 1, title: '...', done: true }
    ]
  },
  getters: {
    doneTodos (state) {
      return state.todos.filter(todo => todo.done)
    }
  }
})
To access Vuex global objects, you need to add Vuex to the Vue.js project root file, as shown below:
// index.js
import { createApp } from 'vue'
import App from './App.vue'
import { useStore } from './store'

createApp(App).use(useStore).mount('#app')
Usage
Pinia usage
Using Pinia, you can access the store as follows:
export default defineComponent({
  setup() {
    const todo = useTodoStore()

    return {
      // only expose the specific piece of state we need
      state: computed(() => todo.title),
    }
  },
})
Note that the state object of the store is omitted when accessing its attributes.
Vuex usage
Using Vuex, you can access the store as follows:
import { computed } from 'vue'

export default {
  setup () {
    const store = useStore()

    return {
      // access state in a computed function
      count: computed(() => store.state.count),
      // access a getter in a computed function
      double: computed(() => store.getters.double)
    }
  }
}
Community and ecosystem strength
At the time of writing this article, Pinia's community is small, which means few contributions on Stack Overflow and few ready-made solutions.
Since Pinia became popular at the beginning of last year and is still gaining momentum, its community is growing rapidly. Hopefully, more contributors and solutions for Pinia will appear soon.
Vuex is the state management library recommended by the Vue.js core team. It has a huge community, with significant contributions from core team members. It is easy to find solutions to Vuex errors on Stack Overflow.
Learning curve and documentation
Both state management libraries are fairly easy to learn because they have good documentation and learning resources on YouTube and third-party blogs. For developers who have previous experience in using Flux architecture libraries such as Redux, MobX, and Recoil, their learning curve is easier.
The documentation for both libraries is great and written in a way that is friendly to both experienced developers and new developers.
GitHub stars
At the time of writing, Pinia has two major versions, v1 and v2, of which v2 has more than 1.6k stars on GitHub. Given that it was originally released in 2019 and is relatively new, it is undoubtedly one of the fastest-growing state management libraries in the Vue.js ecosystem.
At the same time, five stable versions of the Vuex library have been released since the creation of Vuex. Although v5 is in the experimental stage, Vuex's v4 is by far the most stable version, with approximately 26.3k stars on GitHub.
performance
Both Pinia and Vuex are very fast, and in some cases a web application using Pinia will be faster than one using Vuex. This performance edge can be attributed to Pinia's extremely light weight: it is about 1KB in size.
Although Pinia was built with Vue devtools support, some features such as time travel and editing are still not supported, because Vue devtools does not expose the necessary APIs. This is worth noting when development speed and debugging are important for your project.
Comparing Pinia 2 and Vuex 4
Pinia's documentation compares it with Vuex 3 and 4 as follows:
- Mutations no longer exist. They were often considered extremely verbose. They initially brought devtools integration, but that is no longer an issue.
- There is no need to create a custom complex wrapper to support TypeScript, everything is typed, and the API is designed to make use of TS type inference as much as possible.
These are the additional insights Pinia offers when comparing its state management approach with Vuex's:
- Pinia does not support nested stores. Instead, it allows you to create stores as needed. However, stores can still be nested implicitly by importing and using one store inside another.
- Stores are named automatically when they are defined, so there is no need to name modules explicitly.
- Pinia allows you to create multiple stores and lets your bundler code-split them automatically
- Pinia allows getters to be used in other getters
- Pinia allows the use of $patch to group changes on the devtools timeline:
try {
  this.$patch((state) => {
    state.posts.push(post)
    state.user.postsCount++
  })
} catch (error) {
  this.errors.push(error)
}
Comparing Pinia 2 (currently in the alpha stage) with Vuex, we can infer that Pinia is ahead of Vuex 4.
The Vue.js core team has developed an open RFC for Vuex 5, similar to the one used by Pinia. Currently, Vuex is collecting as much feedback from the community as possible through the RFC. Hopefully, the stable version of Vuex 5 will surpass Pinia 2.
According to the creator of Pinia, Eduardo San Martin Morote, who is also a member of the Vue.js core team and was actively involved in the design of Vuex, Pinia and Vuex have more similarities than differences:
Pinia tries to stay as close as possible to Vuex's philosophy. It was designed to test a proposal for the next iteration of Vuex, and it was a success, because we currently have an open RFC for Vuex 5 with an API very similar to the one used by Pinia. My personal intention for this project is to redesign the experience of using a global store while keeping the approachable philosophy of Vue. I keep Pinia's API as close to Vuex's as it continues to move forward, making it easy for people to migrate to Vuex, or even to merge the two projects (under Vuex) in the future.
Although Pinia is good enough to replace Vuex, replacing Vuex is not its goal, so Vuex remains the recommended state management library for Vue.js applications.
Advantages and disadvantages of Vuex and Pinia
Advantages of Vuex
- Supports debugging features such as time travel and editing
- Suitable for large, high-complexity Vue.js projects
Disadvantages of Vuex
- Starting from Vue 3, getter results are not cached the way computed properties are
- Vuex 4 has some issues related to type safety
Advantages of Pinia
- Full TypeScript support: Compared to adding TypeScript in Vuex, adding TypeScript is easier
- Extremely lightweight (about 1KB in size)
- Store actions are dispatched as regular function calls, rather than via the dispatch method or the mapActions helper, which are commonly used in Vuex
- Supports multiple stores
- Supports Vue devtools, SSR, and webpack code splitting
Disadvantages of Pinia
- Does not support debugging features such as time travel and editing
When to use Pinia and when to use Vuex
In my personal experience, Pinia, being lightweight and small, is suitable for small and medium-sized applications. It is also suitable for low-complexity Vue.js projects, because some debugging features such as time travel and editing are still not supported.
Using Vuex for small and medium-sized Vue.js projects is overkill, because it is heavyweight and can noticeably hurt performance. Vuex is therefore suited to large-scale, high-complexity Vue.js projects.
Tinyxml: Problem updating code from diamondback to electric.
Hi all,
I have a node that uses tinyxml to read an XML file.
That node works perfectly in diamondback, but I need to update it to electric, so I followed the instructions:
1- Remove tinyxml dependency in manifest.
2- Change the header file which is included... Now it is just:
#include <tinyxml.h>
3- Link against the library in the CMakeLists...
target_link_libraries(master tinyxml)
It compiles without problems, but when running the node, the behaviour is not the expected and gives some errors.
It seems like the function Value() is not behaving the same way it was in diamondback...
Do you think I should do any change more than what I have already done? Or maybe the library that was using diamondback and the one which installs electric are slightly different?
I will appreciate any help! | https://answers.ros.org/question/34202/tinyxml-problem-updating-code-from-diamonback-to-electric/ | CC-MAIN-2021-21 | refinedweb | 146 | 75.1 |
In today’s Programming Praxis exercise, our task is to write a parser for simple mathematical expressions. Let’s get started, shall we?
Some imports:
import Control.Applicative
import Text.Parsec hiding ((<|>))
We’re going to be using a somewhat different approach than the provided Scheme solution. Rather than doing everything ourselves, we will use the Parsec library, which is the go-to solution for writing parsers in Haskell. This, however, results in a small limitation. Parsec is a parser combinator library, and parser combinators cannot deal with left-recursive grammars. The grammar in the assignment is left-recursive, because if we were to enter the rule expr = expr + term (as I did in my first attempt at solving this), our program would enter an infinite loop. Fortunately, left-recursive grammars can be fairly easily rewritten using the chain functions in Parsec.
An expression is one or more terms, separated by addition and subtraction operators.
expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')
A term works just like an expression, but with multiplication and division.
term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')
Factors need no change from the specified grammar: they’re either numbers or expressions in parentheses.
fact = read <$> many1 digit <|> char '(' *> expr <* char ')'
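The left-recursion-free shape works outside Parsec too. Here is a rough Python transcription of the rewritten grammar, where each chainl1 becomes "parse one operand, then fold any trailing (operator, operand) pairs left-associatively" (the function names are mine, not from the post):

```python
import re

def tokenize(s):
    # numbers, operators and parentheses; whitespace falls through the regex
    return re.findall(r"\d+|[+\-*/()]", s)

def parse_expr(tokens, i):
    # expr = term (('+' | '-') term)*   -- chainl1 term (+/-)
    value, i = parse_term(tokens, i)
    while i < len(tokens) and tokens[i] in "+-":
        op = tokens[i]
        rhs, i = parse_term(tokens, i + 1)
        value = value + rhs if op == "+" else value - rhs  # left-associative fold
    return value, i

def parse_term(tokens, i):
    # term = fact (('*' | '/') fact)*   -- chainl1 fact (*//)
    value, i = parse_fact(tokens, i)
    while i < len(tokens) and tokens[i] in "*/":
        op = tokens[i]
        rhs, i = parse_fact(tokens, i + 1)
        value = value * rhs if op == "*" else value // rhs
    return value, i

def parse_fact(tokens, i):
    # fact = number | '(' expr ')'
    if tokens[i] == "(":
        value, i = parse_expr(tokens, i + 1)
        return value, i + 1  # skip the closing ')'
    return int(tokens[i]), i + 1

def evaluate(s):
    value, _ = parse_expr(tokenize(s), 0)
    return value
```

Because the fold runs left to right, evaluate("2-3-4") gives -5, matching the left-associative behaviour chainl1 provides.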
Evaluating an expression is trivial. The only extra step is to filter out all the spaces, since they have not been defined in the grammar but are present in the test cases.
eval :: String -> Int
eval = either (error . show) id . parse expr "" . filter (/= ' ')
A quick test shows that our rewritten grammar passes all of the test cases correctly:
main :: IO ()
main = mapM_ print
  [ eval "6+2" == 8
  , eval "6-2" == 4
  , eval "6*2" == 12
  , eval "6/2" == 3
  , eval "6 * 2" == 12
  , eval "2+3*4" == 14
  , eval "2*3+4" == 10
  , eval "2+3+4" == 9
  , eval "2-3-4" == -5
  , eval "2*3*4" == 24
  , eval "(2+3)*4" == 20
  , eval "(2*3)+4" == 10
  , eval "2+(3*4)" == 14
  , eval "2*(3+4)" == 14
  , eval "12 * (34 + 56)" == 1080
  ]
Using a parser library and slightly rewriting the grammar reduces the amount of required lines from 48 to 4. That’s a good trade-off in my book.
Tags: bonsai, code, evaluation, expression, Haskell, kata, parsec, parser, praxis, programming
April 19, 2010 at 1:59 pm |
Can I ask what do you use for code listing? I mean, what do you use for syntax highlighting and other things in the code snippets posted?
April 19, 2010 at 2:02 pm |
I use a program called Highlight (). It produces HTML code, which I paste in my post. Not as elegant as a plugin, but since you can’t install any plugins on a wordpress.com blog it’ll have to do.
April 19, 2010 at 3:24 pm |
Thanks. I’ll try to use it too :) | http://bonsaicode.wordpress.com/2010/04/16/programming-praxis-expression-evaluation/ | CC-MAIN-2014-15 | refinedweb | 468 | 71.75 |
Bugs item #1005308, was opened at 2004-08-08 01:10
Message generated for change (Comment added) made by loewis
You can respond by visiting:

Category: Build
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Georg Schwarz (gschwarz)
Assigned to: Nobody/Anonymous (nobody)
Summary: _XOPEN_SOURCE issue on IRIX 5.3

Initial Comment:
On IRIX 5.3 /usr/include/sys/types.h contains:

#if ( !defined(_XOPEN_SOURCE) && !defined(_POSIX_SOURCE) ) || defined(_BSD_TYPES) || defined(_BSD_COMPAT)
/*
 * Nested include for BSD/sockets source compatibility.
 * (The select macros used to be defined here).
 */
#include <sys/bsd_types.h>
#endif

sys/bsd_types.h however defines u_int. If _XOPEN_SOURCE is defined (in pyconfig.h) this results in u_int not being known and the compilation to fail. One way to get around this is to change configure as follows (similarly as it is already being done for some other OSes, it seems...):

--- configure.orig	2004-08-08 00:18:33.000000000 +0200
+++ configure	2004-08-08 00:18:59.000000000 +0200
@@ -1466,6 +1466,8 @@
 # has another value. By not (re)defining it, the defaults come in place.
   AIX/4)
     define_xopen_source=no;;
+  IRIX/5.*)
+    define_xopen_source=no;;
 esac

 if test $define_xopen_source = yes

I am not sure if this is the best way to solve that problem though.

----------------------------------------------------------------------

>Comment By: Martin v. Löwis (loewis)
Date: 2004-08-08 15:22

Message:
Logged In: YES
user_id=21627

Will it build correctly if you define _BSD_TYPES or _BSD_COMPAT? What is the difference between these two (i.e. which one is less intrusive)? Get these defined directly, or would we have to define something else which in turn causes _BSD_TYPES (say) to be defined?
How to handle errors with grace: failing silently is not an option.
Expect the Spanish Inquisition
The first step of handling errors is to identify when an “error” is not an “error!”. This of course depends on your application’s business logic, but in general, some errors are obvious and easy to fix.
- Got a from-to date range where the “to” is before “from”? Switch the order.
- Got a phone number which starts with + or contains dashes where you expect no special characters? Remove them.
- Null collection a problem? Make sure you initialize it before accessing (using lazy initialization or in the constructor).
Don’t interrupt your code flow for errors you can fix, and certainly don’t interrupt your users. If you can understand the problem and fix it yourself — just do it.
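A quick Python sketch of this repair-first approach (the function names and normalization rules are illustrative, not from any particular codebase):

```python
def normalize_date_range(start, end):
    """If the range arrived reversed, silently swap it instead of erroring."""
    return (end, start) if end < start else (start, end)

def normalize_phone(raw):
    """Drop '+', dashes, spaces and other separators before validating."""
    return "".join(ch for ch in raw if ch.isdigit())
```

The caller never sees the problem: normalize_date_range(5, 2) quietly becomes (2, 5), and "+1-555-0100" becomes "15550100".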
Returning Null or Other Magic Values
Null values, -1 where a positive number is expected and other “magic” return values — all these are evils which move the responsibility for error checking to the caller of the function. This is not only a problem because it causes error checking to proliferate and multiply, it is also a problem because it depends on convention and requires your user to be aware of arbitrary implementation details.
Your code will be full of code blocks like these which obscure the application’s logic:
return_value = possibly_return_a_magic_value()
if return_value < 0:
    handle_error()
else:
    do_something()

other_return_value = possibly_nullable_value()
if other_return_value is None:
    handle_null_value()
else:
    do_some_other_thing()
Even if your language has a built in nullable value propagation system — that’s just applying an unreadable patch to flaky code:
var item = returnSomethingWhichCouldBeNull();
var result = item?.Property?.MaybeExists;
if (result.HasValue)
{
    DoSomething();
}
Passing null values to methods is just as problematic, and you’ll often see methods begin with a few lines of checking that the input is valid, but this is truly unnecessary. Most modern languages provide several tools which allow you to be explicit about what you expect and skip those code-cluttering checks, e.g. defining parameters as non-nullable or with an appropriate decorator.
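In Python, for example, that check can be hoisted into a small reusable decorator so individual methods stay clutter-free (require_not_none is a made-up name for this sketch, not a standard library feature):

```python
import functools

def require_not_none(func):
    """Reject None arguments at the boundary so the body can skip null checks."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if any(a is None for a in args) or any(v is None for v in kwargs.values()):
            raise TypeError(f"{func.__name__} does not accept None arguments")
        return func(*args, **kwargs)
    return wrapper

@require_not_none
def register_user(name, email):
    # The body can safely assume both arguments are present.
    return f"{name} <{email}>"
```

The contract is now explicit at the definition site, and violations fail fast with a clear error instead of surfacing as a confusing crash deeper in the call stack.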
Error Codes
Error codes have the same problem as null and other magic values, with the additional complication of having to, well, deal with error codes.
You might decide to return the error code through an “out” parameter:
int errorCode;
var result = getSomething(out errorCode);
if (errorCode == 0)  // by convention, 0 means "no error"
{
    doSomethingWithResult(result);
}
You may choose to wrap all your results in a “Result” construct like this (I’m very guilty of this one, though it was very useful for ajax calls at the time):
public class Result<T>
{
    public T Item { get; set; }

    // At least "ErrorCode" is an enum
    public ErrorCode ErrorCode { get; set; } = ErrorCode.None;

    public bool IsError
    {
        get { return ErrorCode != ErrorCode.None; }
    }
}
public class UsingResultConstruct
{
    ...
    var result = GetResult();
    if (result.IsError)
    {
        switch (result.ErrorCode)
        {
            case ErrorCode.NetworkError:
                HandleNetworkError();
                break;
            case ErrorCode.UserError:
                HandleUserError();
                break;
            default:
                HandleUnknownError();
                break;
        }
    }
    ActuallyDoSomethingWithResult(result);
    ...
}
Yep. That’s really bad. The Item property could still be empty for some reason, there’s no actual guarantee (besides convention) that when the result doesn’t contain an error you can safely access the Item property.
After you’re done with all of this handling, you still have to translate your error code to an error message and do something with it. Often, at this point you’ve obscured the original problem enough that you might not have the exact details of what happened, so you can’t even report the error effectively.
On top of this horribly unnecessarily over-complicated and unreadable code, an even worse problem exists — if you, or someone else, change your internal implementation to handle a new invalid state with a new error code, the calling code will have no way of knowing something which they need to handle has changed and will fail in unpredictable ways.
If At First You Don’t Succeed, Try, Catch, Finally
Before we continue, this might be a good time to mention that code failing silently is not a good thing. Failing silently means errors can go undetected for quite a while before exploding suddenly at inconvenient and unpredictable times. Usually over the weekend. The previous error handling methods allow you to fail silently, so maybe, just maybe, they’re not the best way to go.
At this point, if you’ve read Clean Code you’re probably wondering why anyone would ever do any of that instead of just throwing an exception? If you haven’t, you might think exceptions are the root of all evil. I used to feel the same way, but now I’m not so sure. Bear with me, let’s see if we can agree that exceptions are not all bad, and might even be quite useful. And if you’re writing in a language without exceptions? Well, it is what it is.
An interesting side note, at least to me, is that the default implementation for a new C# method is to throw a NotImplementedException, whereas the default for a new python method is “pass”.
I’m not sure if this is a C# convention or just how my Resharper was configured, but the result is basically setting up python to fail silently. I wonder how many developers have spent a long and sad debugging session trying to figure what was going on, only to find out they had forgotten to implement a placeholder method.
But wait, you could easily create a cluttered mess of error checking and exception throwing which is quite similar to the previous error checking sections!
public MyDataObject UpdateSomething(MyDataObject toUpdate)
{
    if (_dbConnection == null)
    {
        throw new DbConnectionError();
    }
    try
    {
        var newVersion = _dbConnection.Update(toUpdate);
        if (newVersion == null)
        {
            return null;
        }
        MyDataObject result = new MyDataObject(newVersion);
        return result;
    }
    catch (DbConnectionClosedException dbcc)
    {
        throw new DbConnectionError();
    }
    catch (MyDataObjectUnhappyException dou)
    {
        throw new MalformedDataException();
    }
    catch (Exception ex)
    {
        throw new UnknownErrorException();
    }
}
So, of course, throwing exceptions will not protect you from unreadable and unmanageable code. You need to apply exception throwing as a well thought out strategy. If your scope is too big, your application might end up in an inconsistent state. If your scope is too small, you’ll end up with a cluttered mess.
My approach to this problem is as follows:
Consistency rulezzz. You must make sure that your application is always in a consistent state. Ugly code makes me sad, but not as much as actual problems which affect the users of whatever it is your code is actually doing. If that means you have to wrap every couple of lines with a try/catch block — hide them inside another function.
def my_function():
    try:
        do_this()
        do_that()
    except:
        something_bad_happened()
    finally:
        cleanup_resource()
Consolidate errors. It’s fine if you care about different kinds of errors which need to be handled differently, but do your users a favor and hide that internally. Externally, throw a single type of exception just to let your users know something went wrong. They shouldn’t really care about the details, that’s your responsibility.
public MyDataObject UpdateSomething(MyDataObject toUpdate)
{
    try
    {
        var newVersion = _dbConnection.Update(toUpdate);
        MyDataObject result = new MyDataObject(newVersion);
        return result;
    }
    catch (DbConnectionClosedException dbcc)
    {
        HandleDbConnectionClosed();
        throw new UpdateMyDataObjectException();
    }
    catch (MyDataObjectUnhappyException dou)
    {
        RollbackVersion();
        throw new UpdateMyDataObjectException();
    }
    catch (Exception ex)
    {
        throw new UpdateMyDataObjectException();
    }
}
Catch early, catch often. Catch your exceptions as close to the source at the lowest level possible. Maintain consistency and hide the details (as explained above), then try to avoid handling errors until the very top level of your application. Hopefully there aren’t too many levels along the way. If you can pull this off, you’ll be able to clearly separate the normal flow of your application logic from the error handling flow, allowing your code to be clear and concise without mixing concerns.
def my_api():
    try:
        item = get_something_from_the_db()
        new_version = do_something_to_item(item)
        return new_version
    except Exception as ex:
        handle_high_level_exception(ex)
Thanks for reading this far, I hope it was helpful! Also, I’m only starting to form my opinions on this subject, so I’d be really happy to hear what your strategy is for handling errors. The comments section is open! | https://www.freecodecamp.org/news/how-to-handle-errors-with-grace-failing-silently-is-not-an-option-de6ce8f897d7/ | CC-MAIN-2019-43 | refinedweb | 1,341 | 52.19 |
13 February 2012
General knowledge of using the Flash Professional workspace and a basic understanding of concepts needed to use ActionScript 3 in Flash.
As Adobe Flash Professional has grown into a powerful application and game development environment, so has the need to understand media characteristics and related performance optimizations. Working with vector and bitmap images is a fundamental part of visual design in Flash. Image rasterization refers to the process of converting vector graphics into bitmap graphics for performance optimization.
This article explores the pros and cons of using vector and bitmap graphics, as well as the options available for rasterizing images. You'll find a series of simple exercises illustrating how to set up and use the rasterization features at author-time and runtime (click Figure 1). Along the way, you'll learn how to work with the new Export as Bitmap feature—while learning how to use the BitmapData object in ActionScript 3.
Images in Flash come in two flavors: vector images and bitmap images. Each format has advantages and drawbacks. In this section, you'll learn about each format, the pros and cons of each format, and when it is appropriate to use each format.
A vector graphic is a shape drawn with a series of points and lines connecting the points (see Figure 2). For example, a square consists of four corner points with lines connecting each point. A circle contains the same four points, but the lines between them are curved instead of straight. A vector shape has a fill color and an outline (stroke) color. Usually a vector graphic is composed of dozens or more vector shapes that overlap to form a picture.
Vector graphics have the advantage of being lightweight and scalable. Under the hood, vector graphics are defined entirely by the math describing their points and lines, so they have no inherent resolution. As a result, vector graphics are lightweight and can be scaled up and down in size without losing quality. Vector graphics can also be edited and changed at any time.
Vector graphics have the disadvantage of being processor-intensive in some situations. Vector graphics are rendered by the CPU at runtime and have to be re-rendered whenever a change in the graphic occurs. For example, when a vector graphic is used in a tween animation, or if the vector graphic on Stage is overlapped by a tween animation, the shape is rendered again in each frame to display the changes on the screen. Most modern desktop and laptop computers don't have any problem rendering complex groups of vector graphics, but be aware that mobile devices may display visible performance issues.
Use vectors whenever you need to create scalable graphics, work with editable text and shapes, or when flexible content is required for animations.
Tip: The drawing tools in Flash Professional natively draw vector graphics, but in many cases, it is a best practice to publish the graphics as bitmaps in order to improve performance.
A bitmap graphic is an image composed of a grid of dots called pixels (see Figure 3). Each pixel contains a color. Collectively, the grid of colored pixels forms the image. The number of pixels per inch defines the resolution of an image. The common screen resolution for computer monitors is 72 dpi (dots per inch).
Bitmap graphics have the advantage of displaying highly detailed photographic content without the use of CPU rendering. Once the bitmap has downloaded to the display, it does not need to be rendered again.
Bitmap graphics have the disadvantage of producing larger file sizes. The resolution, number of colors, compression scheme, and dimensions of the bitmap all contribute to the file size of the image. Also, since bitmaps have a fixed resolution, they cannot be scaled to larger sizes without incurring a loss of quality. Bitmaps are not editable in Flash; you can use a tool such as Photoshop or Fireworks to edit your bitmap images prior to importing them in Flash.
Use bitmaps for backgrounds and static images that don't need to be edited or scaled. Also, for performance reasons and portability to mobile devices, consider using bitmaps or rasterization techniques whenever possible.
Tip: You can rasterize vector images into bitmap images at author-time or at runtime as a strategy to optimize editable vector graphics.
Depending on the complexity of your Flash movie, you may find that some animations and page transitions seem sluggish or fragmented. This scenario can occur in projects when too many overlapping vector images are redrawn in every frame. The result can be inconsistent frame rates and intermittent pauses in the vector rendering.
Here are a few things to consider that may improve a project's performance:
Use bitmap images for background graphics. Animations often appear on top of larger background graphics. Using bitmaps for the backgrounds will help reduce the resources required to render the graphics and minimize CPU usage.
Tip: You can use ActionScript-based profiler utilities to test the performance of your movie and compare memory usage while optimizing your projects. Check out Shane McCarthy's AS3 SWF Profiler, an option that's easy to use.
Flash Professional and ActionScript 3 provide a handful of options for converting vector graphics to bitmaps. The benefit of using these features is that you can often avoid the performance pitfalls described previously while leaving your artwork editable at author-time. The result can produce projects that require less CPU usage, smoother animation performance and frame rate playback, and improved performance—especially for applications being ported to mobile devices.
The following sections of this article will walk you through concepts and exercises describing the rasterization features in Flash.
You'll explore these author-time features:
You'll also explore these runtime ActionScript features:
Flash Professional CS5.5 introduces the Export as bitmap features as an improved workflow to publish vector graphics as bitmaps in the SWF file. This feature, combined with the pre-existing cache as bitmap and convert to bitmap features, offer powerful, easy to use options while working visually at author-time in the FLA file.
This section provides an overview of the author-time rasterization features in Flash.
The bitmap caching feature is one of the older rasterization features in Flash. Use bitmap caching to allow Flash Player to cache a symbol as a bitmap in RAM memory at runtime. The bitmap caching feature is easy to use. Convert an image to a symbol, make sure it's selected, and then choose the Cache as Bitmap Render option in the Display section of the Property inspector (see Figure 4).
The theory is that bitmap caching allows Flash Player to cache the image to avoid re-rendering the image in each frame. This feature works best when using images that do not animate, and do not have filters or blend modes applied.
Tip: The Cache as bitmap option can actually increase CPU usage when used with images that are part of a tween animation that scales or rotates. Animations containing complex transformations will re-cache the bitmap in each frame—causing them to require more CPU usage than simply using the original vector artwork.
The convert to bitmap and convert selection to bitmap features are handy when you want to convert vector artwork to an actual bitmap at author-time. This feature can be applied to both symbols and drawing objects. To use the feature, select one or more graphics on the Stage and choose Modify > Convert to Bitmap. You can also right-click on the selected objects and choose the Convert to Bitmap option in the context menu that appears.
The convert to bitmap feature literally creates a new bitmap file in the Library from the selected items. You can capture a selection of artwork, convert it to a bitmap, and then edit the bitmap in an image-editing application, such as Photoshop (see Figure 5). After the bitmap is created, symbols are still preserved in the Library as separate editable objects. Drawing objects are no longer editable after they've been converted to a bitmap.
Use the convert to bitmap feature when you want to create a physical bitmap file that can be edited in other drawing programs and saved as an external bitmap image.
Tip: One of the benefits of this approach is that a bitmap item is generated in the Library. Double-click on the bitmap to launch the Bitmap Properties dialog box to adjust the compression setting and see details about the bitmap image.
The new export as bitmap feature allows you to turn any movie clip or button symbol into a bitmap at author-time (see Figure 6). When this option is enabled, Frame 1 of the symbol is displayed as the bitmap. You can set the background to transparent or opaque with a specified background color. While viewing the symbol instance, you see the Frame 1 bitmap. While editing the symbol's timeline, you see the frame actions, frames, and keyframes as expected. Unlike the convert to bitmap feature, export as bitmap allows the symbol and its contents to remain editable in the FLA file. That means you can continue editing and resizing the Flash content as needed, see the bitmap while you are authoring the file, and publish the bitmap from Frame 1 at the same time you publish the movie. When using the export as bitmap feature, a separate bitmap is not created in the Library.
The export as bitmap feature makes it easier to repurpose graphics across applications or banners. Use this approach when you need to port artwork, banner graphics, or UI graphics to applications that will be displayed at different screen sizes. For example, use the export as bitmap feature to resize background artwork and logos as needed across FLA files and still publish the symbol elements as static bitmaps in the SWF files you create.
Tip: Combining the export as bitmap feature with the author-time shared assets feature in the Project panel is a great way to distribute common assets that will be reused and resized across files in a project.
In the simplest scenario, you might find yourself working with a large vector graphic as a background that you want to publish as a bitmap. Doing so will incur additional file size, but it will allow animations and other overlapping content to play back more smoothly.
In this exercise, you'll import a photograph of a group of people and position it as a background image for your movie. For fun and as a special effect, you'll convert the photo to a series of vectors using the Trace Bitmap command, and then set up the adjusted photo to export as a bitmap. You'll also try using the convert to bitmap option to facilitate further editing.
Follow these steps to set up your project folder:
Follow these steps to set up the background image:
Follow these steps to edit the bitmap file outside of Flash:
This simple exercise illustrates how you can create interesting backgrounds using posterized photographs. The export as bitmap feature leaves the vector artwork editable so that you can resize it as needed, but publishes the background as a bitmap so overlapping animations and content can run smoothly. From here, you can add content to the FLA file as desired.
A more complex scenario might involve repurposing a background graphic across project files that use different dimensions. For example, if you were developing a banner ad campaign, you may be tasked with delivering a tagline and logo animation that displays across several different standard banner sizes. Or you might develop a game or application that will be ported to fit different screen sizes, based on web or mobile deployment requirements.
In this exercise, you'll develop a template you can use to create banner ads in different sizes that all use the same background texture. In this scenario, it's important to keep the file sizes well below 40k in the template. To achieve this, you'll use a scalable texture created in Illustrator. The main goal is the ability to duplicate and resize the texture as editable vectors for each banner ad, while still publishing the background graphics as bitmaps.
Follow these steps to set up the banner template:
Tip: There isn't any way to control the compression settings that produce the bitmap when using the export as bitmap feature. Use the Bandwidth Profiler to check the SWF's file size as you work. If you find that you need more control over the compression scheme, use the convert to bitmap feature to adjust and edit the bitmap item in the Library.
At this point you've set up the foundation of the banner template. From here, you can add logos, taglines, and animations to the file as needed. When you want to create a banner ad to fit different dimensions, simply choose File > Save As and save the file with a new name. Edit the new file to change the Stage dimensions, and resize the assets to fit the Stage.
Follow these steps to use the template to create a banner ad with different dimensions:
So that's it. Using a template, you can continue generating new SWF files following the steps listed above. If you'd like to practice, try creating several SWF files to fit standard banner sizes at 120 x 600 pixels, 160 x 600 pixels, 728 x 90 pixels, and so on. Be sure to place any common elements, such as logos or text, in the template file so that they are ready to use when you make copies.
Tip: You can take this example to the next level by using the author-time shared library feature in the Flash CS5.5 Project panel. This enables you to add the Illustrator background pattern to the shared library, so that you can update all the banner files at once any time the pattern is changed. For more information on this process, see Working with Flash projects in the Flash Professional online documentation.
ActionScript 3 offers powerful options for creating and manipulating bitmaps at runtime. These features are most commonly used to create graphic effects and achieve more advanced optimization in Flash games and animation projects.
This section provides an overview of the runtime ActionScript options related to rasterization.
The DisplayObject class in ActionScript 3 contains the cacheAsBitmap property. It's essentially the programmatic way of enabling the cache as bitmap feature in the Property inspector while authoring FLA files. This means that you'll want to follow the same guidelines outlined above for the author-time feature. It's best to only use cacheAsBitmap on display objects that are static or do not animate using scale and rotation.
The cacheAsBitmap property can be applied via ActionScript to any display object. You can name the display object instance in the Property inspector and then use the following code to activate the feature:
myDisplayObject.cacheAsBitmap = true;
The Bitmap class in ActionScript 3 is used to represent a bitmap in code. You can either load a bitmap using the Loader object, or you can create a new Bitmap instance in code and fill it with data from a BitmapData object. The Bitmap class is usually used to data type a bitmap, unless you're creating a bitmap dynamically with BitmapData.
Creating an empty bitmap with ActionScript looks like this:
import flash.display.Bitmap;

var bmp:Bitmap = new Bitmap();
addChild(bmp);
The BitmapData class is the primary tool used to manipulate bitmaps in ActionScript. You can use BitmapData to draw vector content as bitmaps, add effects to bitmaps, and perform pixel-level adjustments, calculations, and comparisons. Projects on the scale of applications and games will commonly use BitmapData and rasterization to optimize animations, page transitions, and content playback.
A BitmapData instance can be instantiated with four parameters: width, height, transparent, and fill color. The following code example shows how you can create a bitmap from scratch:
import flash.display.Bitmap;
import flash.display.BitmapData;

var bmpData:BitmapData = new BitmapData(50, 50, true, 0xFFFFAA00);
var bmp:Bitmap = new Bitmap(bmpData);
addChild(bmp);
Tip: An interesting aspect about working with BitmapData is that it can be used to manipulate the pixels of an existing bitmap, or it can be used to draw bitmaps from scratch.
The BitmapData class can be used to generate a bitmap from any frame in a movie clip symbol. This can be a handy way to optimize and reuse graphics at runtime. Since BitmapData instances can be shared among many different bitmap instances, this approach can be used in animation and gaming projects to create repetitive scenery or game elements that require less use of RAM memory.
In this exercise, you'll begin working with the BitmapData class by simply creating a bitmap snapshot of vector artwork on a frame in a movie clip. Thanks to MyWeb Software for the use of the cool Lion avatar graphic used in this sample project.
Follow these steps to render a bitmap from vector artwork:
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.MovieClip;
import flash.geom.Rectangle;
import flash.geom.Matrix;

// 1. Add symbol to stage
var target:MovieClip = new WalkCycle();
target.scaleX = 2;
target.scaleY = 2;
addChild(target);

// 2. Go to a frame in the symbol
target.gotoAndStop(10);
// 3. Copy the frame to BitmapData
var bounds:Rectangle = target.getBounds(this);
// Create a transparent BitmapData sized to the symbol's bounds and draw
// the current frame into it, offset by the bounds origin. (This listing
// was truncated in the source; the two lines below follow the standard
// BitmapData.draw() pattern implied by the Matrix import above.)
var bmpData:BitmapData = new BitmapData(bounds.width, bounds.height, true, 0x00000000);
bmpData.draw(target, new Matrix(1, 0, 0, 1, -bounds.x, -bounds.y));
// 4. Render Bitmap instance
var bmp:Bitmap = new Bitmap(bmpData);
bmp.x = (1024 - bmp.width) / 2;
bmp.y = (600 - bmp.height) / 2;
addChild(bmp);

// 5. Remove symbol from the Stage
removeChild(target);
If you'd like to take this sample project a step further, you can experiment with generating multiple bitmaps all using the same BitmapData instance.
Vector animations can be sluggish, especially if they fill the screen with lots of complex content. To address this obstacle, many Flash applications and games turn to rasterization of vector content. You can essentially loop through the frames of a vector animation and render each frame as a bitmap. From there, you can play the bitmap frames back using an ENTER_FRAME event loop or a Timer interval function.
In this exercise, you'll generate a bitmap animation from a standard vector animation. You'll use the lion sample again and create a tween animation to move the lion's position on the Stage. Both the walk cycle and your tween will be reflected in the resulting bitmap animations.
Follow these steps to render a vector animation as a bitmap animation:
At this point you've created the vector-based version of the animation. The next task involves adding the ActionScript code to generate the bitmaps.
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.MovieClip;
import flash.geom.Rectangle;
import flash.geom.Matrix;

// 1. Add symbol to the Stage
var target:MovieClip = new WalkAnimation();
target.scaleX = 1.5;
target.scaleY = 1.5;
addChild(target);

// 2. Get bounds of all frames
var bounds:Rectangle = target.getBounds(this);
var framesTotal:uint = target.totalFrames;
for (var j:uint = 0; j < framesTotal; j++)
{
    // Go to frame [j]
    target.gotoAndStop(j);
    // Map out area
    var frBounds:Rectangle = target.getBounds(this);
    if (frBounds.x < bounds.x) bounds.x = frBounds.x;
    if (frBounds.y < bounds.y) bounds.y = frBounds.y;
    if (frBounds.x + frBounds.width > bounds.width) bounds.width = frBounds.x + frBounds.width;
    if (frBounds.y + frBounds.height > bounds.height) bounds.height = frBounds.y + frBounds.height;
}
// 3. Build BitmapData array for all frames var frameData:Vector.<BitmapData> = new Vector.<BitmapData>(framesTotal, true); for(var i:uint=0; i<framesTotal; i++) { // Go to frame [i] target.gotoAndStop(i); // Create bitmap data); // Add to frame data... frameData[i] = bmpData; }
// 4. Remove symbol from stage removeChild(target); // 5. Create container for the animation var bmp:Bitmap = new Bitmap(); bmp.x = 150; bmp.y = (600-bmp.height)/2; addChild(bmp);
// 6. Create a function to show frames function showFrame( fr:uint ):void { bmp.bitmapData = frameData[fr]; } var frame:uint = 0; showFrame(frame);
// 7. Loop at frame rate to display animation function animationFrameHandler( event:Event ):void { frame++; if( frame == framesTotal ){// loop... frame = 0; } showFrame(frame); } addEventListener(Event.ENTER_FRAME,animationFrameHandler);
In a full-scale solution, you would continue adding scripts to include memory optimization, as well as adding other fine-tuned features.
While displaying vector content on the web has always been one of the strengths of working with Flash content, it's becoming more common these days to use bitmaps wherever possible. Consider adopting workflows that allow you to develop content using the flexibility of vector drawing tools while delivering static content as bitmaps in the final published SWF file.
Check out these online resources to get more information on topics described in this article:
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Permissions beyond the scope of this license, pertaining to the examples of code included within this work are available at Adobe. | http://www.adobe.com/devnet/flash/articles/image-rasterization.html | CC-MAIN-2016-22 | refinedweb | 3,511 | 54.42 |
pthread_attr_init()
Initialize a thread-attribute object
Synopsis:
#include <pthread.h> int pthread_attr_init( pthread_attr_t *attr );
Since:
BlackBerry 10.0.0
Arguments:
- attr
- A pointer to the pthread_attr_t structure that you want to initialize. For more information, see below.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The pthread_attr_init() function initializes the thread attributes in the thread attribute object attr to their default values:
- Cancellation requests may be acted on according to the cancellation type (PTHREAD_CANCEL_DEFERRED).
- Cancellation requests are held pending until a cancellation point (PTHREAD_CANCEL_ENABLE).
- Threads are put into a zombie state when they terminate, and they stay in this state until you retrieve their exit status or detach them (PTHREAD_CREATE_JOINABLE).
- Threads inherit the scheduling policy of their parent thread (PTHREAD_INHERIT_SCHED).
- Threads are scheduled against all threads in the system (PTHREAD_SCOPE_SYSTEM).
- The stack attributes are set so that the kernel will allocate a 4 KB stack for new threads and free the stacks when the threads terminate.
After initialization, you can use the pthread_attr_* family of functions to get and set the attributes:
You can also set some non-POSIX attributes; for more information, see ".
Returns:
- EOK
- Success.
Classification:
Last modified: 2014-11-17
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/p/pthread_attr_init.html | CC-MAIN-2020-10 | refinedweb | 222 | 50.23 |
New submission from Hallvard B Furuseth <h.b.furuseth at usit.uio.no>: dlmalloc uses mremap() which is undeclared on Linux, needs _GNU_SOURCE. This can break at least on hosts where void* = 64 bits and int (default return type) 32 bits, since some bits in the return type are lost. A minimal patch is: --- Modules/_ctypes/libffi/src/dlmalloc.c +++ Modules/_ctypes/libffi/src/dlmalloc.c @@ -459,2 +459,4 @@ #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */ +#elif !defined _GNU_SOURCE +#define _GNU_SOURCE 1 /* mremap() in Linux sys/mman.h */ #endif /* WIN32 */ However the (char*)CALL_MREMAP() cast looks like a broken fix for this, it suppresses a warning instead of fixing it. Maybe you should remove the cast and instead assign CALL_MREMAP() to a void*, to catch any similar trouble in the future. ---------- components: Extension Modules messages: 120391 nosy: hfuru priority: normal severity: normal status: open title: dlmalloc.c needs _GNU_SOURCE for mremap() type: behavior versions: Python 3.2 _______________________________________ Python tracker <report at bugs.python.org> <> _______________________________________ | https://mail.python.org/pipermail/new-bugs-announce/2010-November/009132.html | CC-MAIN-2016-40 | refinedweb | 171 | 50.53 |
ODE¶
User Functions¶
These are functions that are imported into the global namespace with from sympy import *. These functions (unlike Hint Functions, below) are intended for use by ordinary users of SymPy.
dsolve()¶
sympy.solvers.ode.
dsolve(eq, func=None, hint='default', simplify=True, ics=None, xi=None, eta=None, x0=0, n=6, **kwargs)[source]¶
- Solves any (supported) kind of ordinary differential equation and system of ordinary differential equations.
Examples
>>> from sympy import Function, dsolve, Eq, Derivative, sin, cos, symbols >>> from sympy.abc import x >>> f = Function('f') >>> dsolve(Derivative(f(x), x, x) + 9*f(x), f(x)) Eq(f(x), C1*sin(3*x) + C2*cos(3*x))
>>> eq = sin(x)*cos(f(x)) + cos(x)*sin(f(x))*f(x).diff(x) >>> dsolve(eq, hint='1st_exact') [Eq(f(x), -acos(C1/cos(x)) + 2*pi), Eq(f(x), acos(C1/cos(x)))] >>> dsolve(eq, hint='almost_linear') [Eq(f(x), -acos(C1/sqrt(-cos(x)**2)) + 2*pi), Eq(f(x), acos(C1/sqrt(-cos(x)**2)))] >>> t = symbols('t') >>> x, y = symbols('x, y', function=True) >>> eq = (Eq(Derivative(x(t),t), 12*t*x(t) + 8*y(t)), Eq(Derivative(y(t),t), 21*x(t) + 7*t*y(t))) >>> dsolve(eq) [Eq(x(t), C1*x0 + C2*x0*Integral(8*exp(Integral(7*t, t))*exp(Integral(12*t, t))/x0**2, t)), Eq(y(t), C1*y0 + C2*(y0*Integral(8*exp(Integral(7*t, t))*exp(Integral(12*t, t))/x0**2, t) + exp(Integral(7*t, t))*exp(Integral(12*t, t))/x0))] >>> eq = (Eq(Derivative(x(t),t),x(t)*y(t)*sin(t)), Eq(Derivative(y(t),t),y(t)**2*sin(t))) >>> dsolve(eq) {Eq(x(t), -exp(C1)/(C2*exp(C1) - cos(t))), Eq(y(t), -1/(C1 - cos(t)))}
For Single Ordinary Differential Equation

It is classified under this when the number of equations in eq is one.

Usage

dsolve(eq, f(x), hint) -> Solve the ordinary differential equation eq for the function f(x), using method hint.
Details
eq can be any supported ordinary differential equation (see the ode docstring for a list of supported methods). This can either be an Equality, or an expression, which is assumed to be equal to 0.

simplify enables simplification by odesimp(). See its docstring for more information. Turn this off, for example, to disable solving of solutions for func or simplification of arbitrary constants. It will still integrate with this hint. Note that the solution may contain more arbitrary constants than the order of the ODE with this option enabled.
xi and eta are the infinitesimal functions of an ordinary differential equation. They are the infinitesimals of the Lie group of point transformations for which the differential equation is invariant. The user can specify values for the infinitesimals. If nothing is specified, xi and eta are calculated using infinitesimals() with the help of various heuristics.
ics is the set of boundary conditions for the differential equation. It should be given in the form of {f(x0): x1, f(x).diff(x).subs(x, x2): x3} and so on. For now initial conditions are implemented only for power series solutions of first-order differential equations, which should be given in the form of {f(x0): x1} (See issue 4720). If nothing is specified for this case, f(0) is assumed to be C0 and the power series solution is calculated about 0.
x0 is the point about which the power series solution of a differential equation is to be evaluated.
n gives the exponent of the dependent variable up to which the power series solution of a differential equation is to be evaluated.

Hints

Aside from the individual solving methods, there are also meta-hints. all applies all relevant classification hints and returns a dictionary of hint:solution terms. all_Integral is the same as all, except if a hint also has a corresponding _Integral hint, it only returns the _Integral hint. This is useful if all causes dsolve() to hang because of a difficult or impossible integral.

Tips

- See test_ode.py for many tests, which serves also as a set of examples for how to use dsolve().
- dsolve() always returns an Equality class (except for the case when the hint is all or all_Integral).
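As an illustrative sketch (not part of the original docstring), the meta-hints can be exercised on a simple separable ODE; the equation below is chosen for illustration only:

```python
from sympy import Function, Derivative
from sympy.abc import x
from sympy.solvers.ode import dsolve, checkodesol

f = Function('f')
ode = Derivative(f(x), x) - x*f(x)  # the separable ODE f'(x) = x*f(x)

# 'best' tries every applicable hint and returns the simplest solution
sol = dsolve(ode, f(x), hint='best')

# any returned solution can be verified with checkodesol()
print(checkodesol(ode, sol))  # -> (True, 0)
```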
For System Of Ordinary Differential Equations
Usage

dsolve(eq, func) -> Solve the system of ordinary differential equations eq for func, a list of functions including \(x(t)\), \(y(t)\), \(z(t)\), where the number of functions in the list depends upon the number of equations provided in eq.

Details

eq can be any supported system of ordinary differential equations. This can either be an Equality, or an expression, which is assumed to be equal to 0.

func holds \(x(t)\) and \(y(t)\), functions of one variable which together with some of their derivatives make up the system of ordinary differential equations eq. It is not necessary to provide this; it will be autodetected (and an error raised if it couldn't be detected).

Hints

The hints are formed from parameters returned by classify_sysode(); combining them gives the hint name used later for forming the method name.
classify_ode()¶
sympy.solvers.ode.
classify_ode(eq, func=None, dict=False, ics=None, **kwargs)[source]¶

Returns a tuple of possible dsolve() classifications for an ODE.

If dict is true, classify_ode() will return a dictionary of hint:match expression terms, intended for internal use by dsolve(). You can get help on different hints by executing help(ode.ode_hintname), where hintname is the name of the hint without _Integral.

See allhints or the ode docstring for a list of all supported hints that can be returned from classify_ode().
Notes
These are remarks on hint names.
_Integral
If a classification has _Integral at the end, it will return the expression with an unevaluated Integral class in it. Note that a hint may do this anyway if integrate() cannot do the integral, though just using an _Integral will do so much faster. Indeed, an _Integral hint will always be faster than its corresponding hint without _Integral because integrate() is an expensive routine. If dsolve() hangs, it is probably because integrate() is hanging on a tough or impossible integral. Try using an _Integral hint or all_Integral to get it to return something.

Note that some hints do not have _Integral counterparts. This is because integrate() is not used in solving the ODE for those methods. For example, \(n\)th order linear homogeneous ODEs with constant coefficients do not require integration to solve, so there is no nth_linear_homogeneous_constant_coeff_Integrate hint. You can easily evaluate any unevaluated Integrals in an expression by doing expr.doit().
Ordinals

Some hints contain an ordinal such as 1st_linear. This is to help differentiate them from other hints, as well as from other methods that may not be implemented yet. If a hint has nth in it, such as the nth_linear hints, this means that the method used applies to ODEs of any order.
indep and dep

Some hints contain the words indep or dep. These reference the independent variable and the dependent function, respectively. For example, if an ODE is in terms of \(f(x)\), then indep will refer to \(x\) and dep will refer to \(f\).
subs

If a hint has the word subs in it, it means the ODE is solved by substituting the expression given after the word subs for a single dummy variable. This is usually in terms of indep and dep as above. The substituted expression will be written only in characters allowed for names of Python objects, meaning operators will be spelled out. For example, indep/dep will be written as indep_div_dep.
coeff

The word coeff in a hint refers to the coefficients of something in the ODE, usually of the derivative terms. This is in contrast to coefficients, as in undetermined_coefficients, which refers to the common name of a method.

_best

Methods that have more than one fundamental way to solve will have a hint for each sub-method and a _best meta-classification. This will evaluate all hints and return the best, using the same considerations as the normal best meta-hint.

Examples

>>> from sympy import Function, classify_ode, Eq
>>> from sympy.abc import x
>>> f = Function('f')
>>> classify_ode(Eq(f(x).diff(x), 0), f(x))
('separable', '1st_linear', '1st_homogeneous_coeff_best', '1st_homogeneous_coeff_subs_indep_div_dep', '1st_homogeneous_coeff_subs_dep_div_indep', '1st_power_series', 'lie_group', 'nth_linear_constant_coeff_homogeneous', 'separable_Integral', '1st_linear_Integral', '1st_homogeneous_coeff_subs_indep_div_dep_Integral', '1st_homogeneous_coeff_subs_dep_div_indep_Integral')
checkodesol()¶
sympy.solvers.ode.
checkodesol(ode, sol, func=None, order='auto', solve_for_func=True)[source]¶
Substitutes sol into ode and checks that the result is 0.

This only works when func is one function, like \(f(x)\). sol can be a single solution or a list of solutions. Each solution may be an Equality that the solution satisfies, e.g. Eq(f(x), C1), or simply an expression that is assumed to be equal to 0. In most cases it will not be necessary to explicitly identify the function, but if the function cannot be inferred from the original equation it can be supplied through the func argument.

If a sequence of solutions is passed, the same sort of container will be used to return the result for each solution.

It tries the following methods, in order, until it finds zero equivalence:

- Substitute the solution for \(f\) in the original equation. This only works if ode is solved for \(f\). It will attempt to solve it first unless solve_for_func == False.
- Take \(n\) derivatives of the solution, where \(n\) is the order of ode, and check to see if that is equal to the solution. This only works on exact ODEs.
- Take the 1st, 2nd, ..., \(n\)th derivatives of the solution, each time solving for the derivative of \(f\) of that order (this will always be possible because \(f\) is a linear operator). Then back substitute each derivative into ode in reverse order.

This function returns a tuple. The first item in the tuple is True if the substitution results in 0, and False otherwise. The second item in the tuple is what the substitution results in. It should always be 0 if the first item is True. Note that this function may sometimes return False even when an expression is identically equal to 0; this only means that it could not prove zero equivalence.
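A minimal usage sketch (the ODE and solutions here are illustrative, not taken from the docstring):

```python
from sympy import Function, Eq, Symbol
from sympy.abc import x
from sympy.solvers.ode import checkodesol

f = Function('f')
C1 = Symbol('C1')

# f'(x) = 2*x has the general solution f(x) = x**2 + C1
ok, residual = checkodesol(f(x).diff(x) - 2*x, Eq(f(x), x**2 + C1))
print(ok, residual)  # -> True 0

# a wrong "solution" fails the check; the second item is the nonzero residue
bad, _ = checkodesol(f(x).diff(x) - 2*x, Eq(f(x), x**3))
print(bad)  # -> False
```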
homogeneous_order()¶
sympy.solvers.ode.
homogeneous_order(eq, *symbols)[source]¶
Determines if a function is homogeneous and if so of what order. A function \(F(x, y, \cdots)\) is homogeneous of order \(n\) if \(F(t x, t y, \cdots) = t^n F(x, y, \cdots)\).

Returns the order \(n\) if \(g\) is homogeneous and None if it is not homogeneous.
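For example (an illustrative sketch):

```python
from sympy import symbols
from sympy.solvers.ode import homogeneous_order

x, y = symbols('x y')

# every term has total degree 2, so the function is homogeneous of order 2
print(homogeneous_order(x**2 + 2*x*y, x, y))  # -> 2

# mixed total degrees (2 and 3), so this is not homogeneous
print(homogeneous_order(x**2 + y**3, x, y))   # -> None
```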
infinitesimals()¶
sympy.solvers.ode.
infinitesimals(eq, func=None, order=None, hint='default', match=None)[source]¶

The infinitesimal functions of an ordinary differential equation, \(\xi(x,y)\) and \(\eta(x,y)\), are the infinitesimals of the Lie group of point transformations for which the differential equation is invariant. They are defined by

\[\left(\frac{\partial X(x,y;\varepsilon)}{\partial\varepsilon }\right)|_{\varepsilon=0} = \xi, \qquad \left(\frac{\partial Y(x,y;\varepsilon)}{\partial\varepsilon }\right)|_{\varepsilon=0} = \eta.\]

To obtain the complete list of infinitesimals found by every heuristic, the hint should be flagged as all. If the infinitesimals for a particular heuristic need to be found, its name can be passed as a flag to hint.
References
- Solving differential equations by Symmetry Groups, John Starrett, pp. 1 - pp. 14
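A short usage sketch (the ODE is illustrative; the exact form of the returned infinitesimals may vary between versions):

```python
from sympy import Function
from sympy.abc import x
from sympy.solvers.ode import infinitesimals

f = Function('f')
eq = f(x).diff(x) - x**2*f(x)  # f'(x) = x**2 * f(x)

# each entry is a dict mapping the xi and eta functions to infinitesimals
infs = infinitesimals(eq, hint='default')
print(infs)
```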
checkinfsol()¶
sympy.solvers.ode.
checkinfsol(eq, infinitesimals, func=None, order=None)[source]¶

Function to check if the given infinitesimals are the actual infinitesimals of the given first order differential equation. This method is specific to the Lie Group Solver of ODEs. It checks by substituting the infinitesimals into the partial differential equation

\[\frac{\partial \eta}{\partial x} + \left(\frac{\partial \eta}{\partial y} - \frac{\partial \xi}{\partial x}\right)*h - \frac{\partial \xi}{\partial y}*h^{2} - \xi\frac{\partial h}{\partial x} - \eta\frac{\partial h}{\partial y} = 0.\]

For each candidate pair of infinitesimals, the second item of the returned tuple is the value obtained after substituting the infinitesimals in the PDE. If the first item is True, then that value would be 0.
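A sketch of checking computed infinitesimals (the ODE is illustrative):

```python
from sympy import Function
from sympy.abc import x
from sympy.solvers.ode import infinitesimals, checkinfsol

f = Function('f')
eq = f(x).diff(x) - x**2*f(x)

# compute candidate infinitesimals, then verify them against the PDE above
infs = infinitesimals(eq, hint='default')
checks = checkinfsol(eq, infs)
# each entry is a (bool, value) pair; the value is 0 when the check passes
```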
Hint Functions¶

These functions are intended for internal use by dsolve() and other functions in this module. Nonetheless, their docstrings contain useful information on the individual ODE solving methods.
allhints¶
sympy.solvers.ode.
allhints= ('separable', '1st_exact', '1st_linear', 'Bernoulli', 'Riccati_special_minus2', '1st_homogeneous_coeff_best', '1st_homogeneous_coeff_subs_indep_div_dep', '1st_homogeneous_coeff_subs_dep_div_indep', 'almost_linear', 'linear_coefficients', 'separable_reduced', '1st_power_series', 'lie_group', 'nth_linear_constant_coeff_homogeneous', 'nth_linear_euler_eq_homogeneous', 'nth_linear_constant_coeff_undetermined_coefficients', 'nth_linear_euler_eq_nonhomogeneous_undetermined_coefficients', 'nth_linear_constant_coeff_variation_of_parameters', 'nth_linear_euler_eq_nonhomogeneous_variation_of_parameters', 'Liouville', '2nd_power_series_ordinary', '2nd_power_series_regular', 'separable_Integral', '1st_exact_Integral', '1st_linear_Integral', 'Bernoulli_Integral', '1st_homogeneous_coeff_subs_indep_div_dep_Integral', '1st_homogeneous_coeff_subs_dep_div_indep_Integral', 'almost_linear_Integral', 'linear_coefficients_Integral', 'separable_reduced_Integral', 'nth_linear_constant_coeff_variation_of_parameters_Integral', 'nth_linear_euler_eq_nonhomogeneous_variation_of_parameters_Integral', 'Liouville_Integral')¶

This is a list of hints in the order that they should be preferred by classify_ode(). In general, hints earlier in the list should produce simpler solutions than later ones, for ODEs that fit both. _Integral hints are grouped at the end of the list, unless there is a method that returns an unevaluable integral most of the time (which go near the end of the list anyway).

The default, all, best, and all_Integral meta-hints should not be included in this list, but _best and _Integral hints should be included.
odesimp¶
sympy.solvers.ode.
odesimp(eq, func, order, constants, hint)[source]¶
Simplifies ODEs, including trying to solve for func and running constantsimp().

It may use knowledge of the type of solution that the hint returns to apply additional simplifications.

It also attempts to integrate any Integrals in the expression, if the hint is not an _Integral hint.

Examples

>>> from sympy import sin, symbols, dsolve, pprint, Function
>>> from sympy.solvers.ode import odesimp
>>> x, u2, C1 = symbols('x, u2, C1')
>>> f = Function('f')
>>> eq = dsolve(x*f(x).diff(x) - f(x) - x*sin(f(x)/x), f(x),
... hint='1st_homogeneous_coeff_subs_indep_div_dep_Integral',
... simplify=False)
>>> pprint(odesimp(eq, f(x), 1, {C1},
... hint='1st_homogeneous_coeff_subs_indep_div_dep'
... ))
    x
--------- = C1
   /f(x)\
tan|----|
   \2*x /
constant_renumber¶
sympy.solvers.ode.
constant_renumber(expr, symbolname, startnumber, endnumber)[source]¶
Renumber arbitrary constants in expr to have numbers 1 through \(N\) where \(N\) is endnumber - startnumber + 1 at most. In the process, this reorders expression terms in a standard way.

This is a simple function that goes through and renumbers any Symbol with a name in the form symbolname + num where num is in the range from startnumber to endnumber.
constantsimp¶
sympy.solvers.ode.
constantsimp(expr, constants)[source]¶
Simplifies an expression with arbitrary constants in it.
This function is written specifically to work with dsolve(), and is not intended for general use.
Simplification is done by “absorbing” the arbitrary constants into other arbitrary constants, numbers, and symbols that they are not independent of.
The symbols must all have the same name with numbers after it, for example, C1, C2, C3. The symbolname here would be 'C', the startnumber would be 1, and the endnumber would be 3. Absorption of constants is done with limited assistance:

- terms of Adds are collected to try join constants so \(e^x (C_1 \cos(x) + C_2 \cos(x))\) will simplify to \(e^x C_1 \cos(x)\);
- powers with exponents that are Adds are expanded so \(e^{C_1 + x}\) will be simplified to \(C_1 e^x\).

Examples

>>> from sympy import symbols
>>> from sympy.solvers.ode import constantsimp
>>> C1, C2, C3, x, y = symbols('C1, C2, C3, x, y')
>>> constantsimp(2*C1*x, {C1, C2, C3})
C1*x
>>> constantsimp(C1 + 2 + x, {C1, C2, C3})
C1 + x
>>> constantsimp(C1*C2 + 2 + C2 + C3*x, {C1, C2, C3})
C1 + C3*x
ode_sol_simplicity¶
sympy.solvers.ode.
ode_sol_simplicity(sol, func, trysolving=True)[source]¶
Returns an extended integer representing how simple a solution to an ODE is.
The following things are considered, in order from most simple to least:
- sol is solved for func.
- sol is not solved for func, but can be if passed to solve (e.g., a solution returned by dsolve(ode, func, simplify=False)).
- If sol is not solved for func, then base the result on the length of sol, as computed by len(str(sol)).
- If sol has any unevaluated Integrals, this will automatically be considered less simple than any of the above.

oo here means the SymPy infinity, which should compare greater than any integer.
If you already know solve() cannot solve sol, you can use trysolving=False to skip that step, which is the only potentially slow step. For example, dsolve() with the simplify=False flag should do this.
If sol is a list of solutions: if the worst solution in the list returns oo it returns that, otherwise it returns len(str(sol)), that is, the length of the string representation of the whole list.
Examples
This function is designed to be passed to min as the key argument:

>>> from sympy import symbols, Function, Eq, tan
>>> from sympy.solvers.ode import ode_sol_simplicity
>>> x, C1, C2 = symbols('x, C1, C2')
>>> f = Function('f')
>>> eq1 = Eq(f(x)/tan(f(x)/(2*x)), C1)
>>> eq2 = Eq(f(x)/tan(f(x)/(2*x) + f(x)), C2)
>>> [ode_sol_simplicity(eq, f(x)) for eq in [eq1, eq2]]
[28, 35]
>>> min([eq1, eq2], key=lambda i: ode_sol_simplicity(i, f(x)))
Eq(f(x)/tan(f(x)/(2*x)), C1)
1st_exact¶
sympy.solvers.ode.
ode_1st_exact(eq, func, order, match)[source]¶
Solves 1st order exact ordinary differential equations.
A 1st order differential equation is called exact if it is the total differential of a function. That is, the differential equation

\[P(x, y) \,\partial{}x + Q(x, y) \,\partial{}y = 0\]

is exact if there is some function \(F(x, y)\) such that \(P(x, y) = \partial{}F/\partial{}x\) and \(Q(x, y) = \partial{}F/\partial{}y\). It can be shown that a necessary and sufficient condition for a first order ODE to be exact is that \(\partial{}P/\partial{}y = \partial{}Q/\partial{}x\). Note: SymPy currently has no way to represent inert substitution on an expression, so the hint 1st_exact_Integral will return an integral with \(dy\). This is supposed to represent the function that you are solving for.
References
- M. Tenenbaum & H. Pollard, “Ordinary Differential Equations”, Dover 1963, pp. 73
Examples
>>> from sympy import Function, dsolve, cos, sin >>> from sympy.abc import x >>> f = Function('f') >>> dsolve(cos(f(x)) - (x*sin(f(x)) - f(x)**2)*f(x).diff(x), ... f(x), hint='1st_exact') Eq(x*cos(f(x)) + f(x)**3/3, C1)
1st_homogeneous_coeff_best¶
sympy.solvers.ode.
ode_1st_homogeneous_coeff_best(eq, func, order, match)[source]¶
Returns the best solution to an ODE from the two hints 1st_homogeneous_coeff_subs_dep_div_indep and 1st_homogeneous_coeff_subs_indep_div_dep. This is as determined by ode_sol_simplicity().
References
- M. Tenenbaum & H. Pollard, “Ordinary Differential Equations”, Dover 1963, pp. 59
1st_homogeneous_coeff_subs_dep_div_indep¶
sympy.solvers.ode.
ode_1st_homogeneous_coeff_subs_dep_div_indep(eq, func, order, match)[source]¶
Solves a 1st order differential equation with homogeneous coefficients using the substitution \(u_1 = \frac{\text{<dependent variable>}}{\text{<independent variable>}}\).
References
- M. Tenenbaum & H. Pollard, “Ordinary Differential Equations”, Dover 1963, pp. 59
1st_homogeneous_coeff_subs_indep_div_dep¶
sympy.solvers.ode.
ode_1st_homogeneous_coeff_subs_indep_div_dep(eq, func, order, match)[source]¶
Solves a 1st order differential equation with homogeneous coefficients using the substitution \(u_2 = \frac{\text{<independent variable>}}{\text{<dependent variable>}}\).
References
- M. Tenenbaum & H. Pollard, “Ordinary Differential Equations”, Dover 1963, pp. 59
1st_linear¶
sympy.solvers.ode.
ode_1st_linear(eq, func, order, match)[source]¶
Solves 1st order linear differential equations.
These are differential equations of the form\[dy/dx + P(x) y = Q(x)\text{.}\]
References
- M. Tenenbaum & H. Pollard, “Ordinary Differential Equations”, Dover 1963, pp. 92
Examples
>>> f = Function('f') >>> pprint(dsolve(Eq(x*diff(f(x), x) - f(x), x**2*sin(x)), ... f(x), '1st_linear')) f(x) = x*(C1 - cos(x))
Bernoulli¶
sympy.solvers.ode.
ode_Bernoulli(eq, func, order, match)[source]¶
Solves Bernoulli differential equations.
These are equations of the form\[dy/dx + P(x) y = Q(x) y^n\text{, }n \ne 1`\text{.}\]
References
- M. Tenenbaum & H. Pollard, “Ordinary Differential Equations”, Dover 1963, pp. 95
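A sketch of solving a Bernoulli equation (the equation is illustrative, not taken from the docstring):

```python
from sympy import Function, Eq, log
from sympy.abc import x
from sympy.solvers.ode import dsolve, checkodesol

f = Function('f')
# x*f'(x) + f(x) = log(x)*f(x)**2 is a Bernoulli equation with n = 2
eq = Eq(x*f(x).diff(x) + f(x), log(x)*f(x)**2)
sol = dsolve(eq, f(x), hint='Bernoulli')
# internally, the substitution u = f(x)**(1 - n) reduces it to a linear ODE
```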
Liouville¶
sympy.solvers.ode.
ode_Liouville(eq, func, order, match)[source]¶
Solves 2nd order Liouville differential equations.
The general form of a Liouville ODE is\[\frac{d^2 y}{dx^2} + g(y) \left(\! \frac{dy}{dx}\!\right)^2 + h(x) \frac{dy}{dx} = 0\text{.}\]
References

- Goldstein and Braun, "Advanced Methods for the Solution of Differential Equations", pp. 98
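A sketch of solving a Liouville equation (illustrative; here \(g(y) = 1/y\) and \(h(x) = 1/x\)):

```python
from sympy import Function, diff
from sympy.abc import x
from sympy.solvers.ode import dsolve

f = Function('f')
# f'' + f'**2/f + f'/x = 0 matches the Liouville form with g(y)=1/y, h(x)=1/x
eq = diff(f(x), x, x) + diff(f(x), x)**2/f(x) + diff(f(x), x)/x
sols = dsolve(eq, f(x), hint='Liouville')
# two branches of the solution are returned, one for each sign of the root
```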
Riccati_special_minus2¶
sympy.solvers.ode.
ode_Riccati_special_minus2(eq, func, order, match)[source]¶
The general Riccati equation has the form\[dy/dx = f(x) y^2 + g(x) y + h(x)\text{.}\]
nth_linear_constant_coeff_homogeneous¶
sympy.solvers.ode.
ode_nth_linear_constant_coeff_homogeneous(eq, func, order, match, returns='sol')[source]¶
Solves an \(n\)th order linear homogeneous differential equation with constant coefficients.
This is an equation of the form\[a_n f^{(n)}(x) + a_{n-1} f^{(n-1)}(x) + \cdots + a_1 f'(x) + a_0 f(x) = 0\text{.}\]
These are usually rather complicated expressions. If the characteristic equation cannot be solved in radicals, the roots are represented as CRootOf instances:

>>> from sympy import Function, dsolve
>>> from sympy.abc import x
>>> f = Function('f')
>>> dsolve(f(x).diff(x, 5) + 10*f(x).diff(x) - 2*f(x), f(x),
... hint='nth_linear_constant_coeff_homogeneous')
Eq(f(x), C1*exp(x*CRootOf(_x**5 + 10*_x - 2, 0)) + C2*exp(x*CRootOf(_x**5 + 10*_x - 2, 1)) + C3*exp(x*CRootOf(_x**5 + 10*_x - 2, 2)) + C4*exp(x*CRootOf(_x**5 + 10*_x - 2, 3)) + C5*exp(x*CRootOf(_x**5 + 10*_x - 2, 4)))
Note that because this method does not involve integration, there is no nth_linear_constant_coeff_homogeneous_Integral hint.
The following is for internal use:
- returns = 'sol' returns the solution to the ODE.
- returns = 'list' returns a list of linearly independent solutions, for use with nonhomogeneous solution methods like variation of parameters and undetermined coefficients. Note that, though the solutions should be linearly independent, this function does not explicitly check that. You can do assert simplify(wronskian(sollist)) != 0 to check for linear independence. Also, assert len(sollist) == order will need to pass.
- returns = 'both' returns a dictionary {'sol': <solution to ODE>, 'list': <list of linearly independent solutions>}.
References
- section: Nonhomogeneous_equation_with_constant_coefficients
- M. Tenenbaum & H. Pollard, “Ordinary Differential Equations”, Dover 1963, pp. 211
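A sketch with a characteristic polynomial that factors over the rationals (illustrative):

```python
from sympy import Function
from sympy.abc import x
from sympy.solvers.ode import dsolve, checkodesol

f = Function('f')
# characteristic equation m**2 - 3*m + 2 = (m - 1)*(m - 2), roots 1 and 2
eq = f(x).diff(x, 2) - 3*f(x).diff(x) + 2*f(x)
sol = dsolve(eq, f(x), hint='nth_linear_constant_coeff_homogeneous')
# the general solution is C1*exp(x) + C2*exp(2*x), up to constant labeling
```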
nth_linear_constant_coeff_undetermined_coefficients¶
sympy.solvers.ode.
ode_nth_linear_constant_coeff_undetermined_coefficients(eq, func, order, match)[source]¶
Solves an \(n\)th order linear differential equation with constant coefficients using the method of undetermined coefficients.
This method works on differential equations of the form

\[a_n f^{(n)}(x) + a_{n-1} f^{(n-1)}(x) + \cdots + a_1 f'(x) + a_0 f(x) = P(x)\text{,}\]

where \(P(x)\) is a function that has a finite number of linearly independent derivatives.
References
- M. Tenenbaum & H. Pollard, “Ordinary Differential Equations”, Dover 1963, pp. 221
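A sketch (illustrative): the right-hand side 4*exp(x) has finitely many linearly independent derivatives, so undetermined coefficients applies.

```python
from sympy import Function, exp
from sympy.abc import x
from sympy.solvers.ode import dsolve, checkodesol

f = Function('f')
eq = f(x).diff(x, 2) + f(x) - 4*exp(x)  # f'' + f = 4*exp(x)
sol = dsolve(eq, f(x),
             hint='nth_linear_constant_coeff_undetermined_coefficients')
# the particular solution is 2*exp(x); the rest is the homogeneous part
```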
nth_linear_constant_coeff_variation_of_parameters¶
sympy.solvers.ode.
ode_nth_linear_constant_coeff_variation_of_parameters(eq, func, order, match)[source]¶
Solves an \(n\)th order linear differential equation with constant coefficients using the method of variation of parameters.
This method works on any differential equations of the form\[f^{(n)}(x) + a_{n-1} f^{(n-1)}(x) + \cdots + a_1 f'(x) + a_0 f(x) = P(x)\text{.}\]
This method works by assuming that the particular solution takes the form

\[\sum_{i=1}^{n} c_i(x) y_i(x)\text{,}\]

where \(y_i\) is the \(i\)th solution to the homogeneous equation. The solution is then solved using Wronskian's and Cramer's Rule. The particular solution is given by

\[\sum_{i=1}^n \left( \int \frac{W_i(x)}{W(x)} \,dx \right) y_i(x) \text{,}\]

where \(W(x)\) is the Wronskian of the fundamental system of \(n\) linearly independent solutions to the homogeneous equation, and \(W_i(x)\) is the Wronskian of the fundamental system with the \(i\)th column replaced with \([0, 0, \cdots, 0, P(x)]\). If this method hangs, try using the nth_linear_constant_coeff_variation_of_parameters_Integral hint and simplifying the integrals manually. Also, prefer using nth_linear_constant_coeff_undetermined_coefficients when it applies, because it does not use integration, making it faster and more reliable.
References
- M. Tenenbaum & H. Pollard, “Ordinary Differential Equations”, Dover 1963, pp. 233
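A sketch (illustrative; for this right-hand side undetermined coefficients would also work, but variation of parameters applies to any \(P(x)\)):

```python
from sympy import Function
from sympy.abc import x
from sympy.solvers.ode import dsolve, checkodesol

f = Function('f')
eq = f(x).diff(x, 2) + f(x) - x  # f'' + f = x
sol = dsolve(eq, f(x),
             hint='nth_linear_constant_coeff_variation_of_parameters')
# homogeneous part C1*sin(x) + C2*cos(x) plus the particular solution x
```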
separable¶
sympy.solvers.ode.
ode_separable(eq, func, order, match)[source]¶

Solves separable 1st order differential equations. This is any differential equation that can be written as \(P(y) \tfrac{dy}{dx} = Q(x)\). The solution can then just be found by rearranging terms and integrating: \(\int P(y) \,dy = \int Q(x) \,dx\).

References
- M. Tenenbaum & H. Pollard, “Ordinary Differential Equations”, Dover 1963, pp. 52
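A sketch (illustrative): f'(x) = x/f(x) separates into f df = x dx.

```python
from sympy import Function, Eq
from sympy.abc import x
from sympy.solvers.ode import dsolve, checkodesol

f = Function('f')
eq = Eq(f(x).diff(x), x/f(x))  # separates into f df = x dx
sols = dsolve(eq, f(x), hint='separable')
# integrating both sides gives f**2/2 = x**2/2 + C, hence two branches
```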
almost_linear¶
sympy.solvers.ode.
ode_almost_linear(eq, func, order, match)[source]¶
Solves an almost-linear differential equation.
The general form of an almost linear differential equation is

\[f(x) g(y) y' + k(x) l(y) + m(x) = 0 \text{, where } l'(y) = g(y)\text{.}\]

This can be solved by substituting \(u = l(y)\), which reduces the equation to a linear differential equation in \(u\).

Examples

>>> from sympy import Function, pprint
>>> from sympy.solvers.ode import dsolve
>>> from sympy.abc import x
>>> f = Function('f')
>>> eq = x*f(x).diff(x) + x*f(x) + 1
>>> dsolve(eq, f(x), hint='almost_linear')
Eq(f(x), (C1 - Ei(x))*exp(-x))
>>> pprint(dsolve(eq, f(x), hint='almost_linear'))
                     -x
f(x) = (C1 - Ei(x))*e
linear_coefficients¶
sympy.solvers.ode.
ode_linear_coefficients(eq, func, order, match)[source]¶
Solves a differential equation with linear coefficients.
The general form of a differential equation with linear coefficients is\[y' + F\left(\!\frac{a_1 x + b_1 y + c_1}{a_2 x + b_2 y + c_2}\!\right) = 0\text{,}\]
where \(a_1\), \(b_1\), \(c_1\), \(a_2\), \(b_2\), \(c_2\) are constants and \(a_1 b_2 - a_2 b_1 \ne 0\).
This can be solved by substituting:

\[x = x' + \frac{b_2 c_1 - b_1 c_2}{a_2 b_1 - a_1 b_2}\]

\[y = y' + \frac{a_1 c_2 - a_2 c_1}{a_2 b_1 - a_1 b_2}\text{.}\]

This substitution reduces the equation to a homogeneous differential equation.

Examples

>>> from sympy import Function, pprint
>>> from sympy.solvers.ode import dsolve
>>> from sympy.abc import x
>>> f = Function('f')
>>> df = f(x).diff(x)
>>> eq = (x + f(x) + 1)*df + (f(x) - 6*x + 1)
>>> dsolve(eq, hint='linear_coefficients')
[Eq(f(x), -x - sqrt(C1 + 7*x**2) - 1), Eq(f(x), -x + sqrt(C1 + 7*x**2) - 1)]
>>> pprint(dsolve(eq, hint='linear_coefficients'))
                  ___________                     ___________
                 /        2                      /        2
[f(x) = -x - \/  C1 + 7*x   - 1, f(x) = -x + \/  C1 + 7*x   - 1]
separable_reduced¶
sympy.solvers.ode.
ode_separable_reduced(eq, func, order, match)[source]¶
Solves a differential equation that can be reduced to the separable form.
The general form of this equation is

\[y' + (y/x) H(x^n y) = 0\text{.}\]

This can be solved by substituting \(u(y) = x^n y\), after which the equation becomes separable.

Examples

>>> from sympy import Function, pprint
>>> from sympy.solvers.ode import dsolve
>>> from sympy.abc import x
>>> f = Function('f')
>>> d = f(x).diff(x)
>>> eq = (x - x**2*f(x))*d - f(x)
>>> dsolve(eq, hint='separable_reduced')
[Eq(f(x), (-sqrt(C1*x**2 + 1) + 1)/x), Eq(f(x), (sqrt(C1*x**2 + 1) + 1)/x)]
>>> pprint(dsolve(eq, hint='separable_reduced'))
             ___________                ___________
            /       2                  /       2
        - \/  C1*x   + 1  + 1      \/  C1*x   + 1  + 1
[f(x) = ---------------------, f(x) = -------------------]
                  x                           x
lie_group¶
sympy.solvers.ode.
ode_lie_group(eq, func, order, match)[source]¶
This hint implements the Lie group method of solving first order differential equations. The aim is to convert the given differential equation from the given coordinate system into another coordinate system where it becomes invariant under the one-parameter Lie group of translations. The converted ODE can then be solved by quadrature. It makes use of the sympy.solvers.ode.infinitesimals() function, which returns the infinitesimals of the transformation.
sympy.solvers.ode.infinitesimals()function which returns the infinitesimals of the transformation.
The coordinates \(r\) and \(s\) can be found by solving the following Partial Differential Equations.\[\xi\frac{\partial r}{\partial x} + \eta\frac{\partial r}{\partial y} = 0\]\[\xi\frac{\partial s}{\partial x} + \eta\frac{\partial s}{\partial y} = 1\]
The differential equation becomes separable in the new coordinate system\[\frac{ds}{dr} = \frac{\frac{\partial s}{\partial x} + h(x, y)\frac{\partial s}{\partial y}}{ \frac{\partial r}{\partial x} + h(x, y)\frac{\partial r}{\partial y}}\]
After finding the solution by integration, it is then converted back to the original coordinate system by substituting \(r\) and \(s\) in terms of \(x\) and \(y\) again.
References
- Solving differential equations by Symmetry Groups, John Starrett, pp. 1 - pp. 14
Examples
>>> from sympy import Function, dsolve, Eq, exp, pprint >>> from sympy.abc import x >>> f = Function('f') >>> pprint(dsolve(f(x).diff(x) + 2*x*f(x) - x*exp(-x**2), f(x), ... hint='lie_group')) / 2\ 2 | x | -x f(x) = |C1 + --|*e \ 2 /
1st_power_series¶
sympy.solvers.ode.
ode_1st_power_series(eq, func, order, match)[source]¶
The power series solution is a method which gives the Taylor series expansion to the solution of a differential equation.
For a first order differential equation \(\frac{dy}{dx} = h(x, y)\), a power series solution exists at a point \(x = x_{0}\) if \(h(x, y)\) is analytic at \(x_{0}\). The solution is given by\[y(x) = y(x_{0}) + \sum_{n = 1}^{\infty} \frac{F_{n}(x_{0},b)(x - x_{0})^n}{n!},\]
where \(y(x_{0}) = b\) is the value of y at the initial value of \(x_{0}\). To compute the values of the \(F_{n}(x_{0},b)\) the following algorithm is followed, until the required number of terms are generated.
- \(F_1 = h(x_{0}, b)\)
- \(F_{n+1} = \frac{\partial F_{n}}{\partial x} + \frac{\partial F_{n}}{\partial y}F_{1}\)
References
- Travis W. Walker, Analytic power series technique for solving first-order differential equations, pp. 17, 18
Examples
>>> from sympy import Function, Derivative, pprint, exp >>> from sympy.solvers.ode import dsolve >>> from sympy.abc import x >>> f = Function('f') >>> eq = exp(x)*(f(x).diff(x)) - f(x) >>> pprint(dsolve(eq, hint='1st_power_series')) 3 4 5 C1*x C1*x C1*x / 6\ f(x) = C1 + C1*x - ----- + ----- + ----- + O\x / 6 24 60
2nd_power_series_ordinary¶
sympy.solvers.ode.
ode_2nd_power_series_ordinary(eq, func, order, match)[source]¶
Gives a power series solution to a second order homogeneous differential equation with polynomial coefficients at an ordinary point. A homogeneous differential equation is of the form\[P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x)y = 0\]
For simplicity it is assumed that \(P(x)\), \(Q(x)\) and \(R(x)\) are polynomials; it is sufficient that \(\frac{Q(x)}{P(x)}\) and \(\frac{R(x)}{P(x)}\) exist at \(x_{0}\). A recurrence relation is obtained by substituting \(y\) as \(\sum_{n=0}^\infty a_{n}x^{n}\) in the differential equation, and equating the \(n\)th term. Using this relation various terms can be generated.
References
- George E. Simmons, "Differential Equations with Applications and Historical Notes", pp. 176 - 184
Examples
>>> from sympy import dsolve, Function, pprint >>> from sympy.abc import x, y >>> f = Function("f") >>> eq = f(x).diff(x, 2) + f(x) >>> pprint(dsolve(eq, hint='2nd_power_series_ordinary')) / 4 2 \ / 2 \ |x x | | x | / 6\ f(x) = C2*|-- - -- + 1| + C1*x*|- -- + 1| + O\x / \24 2 / \ 6 /
2nd_power_series_regular¶
sympy.solvers.ode.
ode_2nd_power_series_regular(eq, func, order, match)[source]¶
Gives a power series solution to a second order homogeneous differential equation with polynomial coefficients at a regular point. A second order homogeneous differential equation is of the form\[P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x)y = 0\]
A point is said to be regular singular at \(x_0\) if \((x - x_0)\frac{Q(x)}{P(x)}\) and \((x - x_0)^{2}\frac{R(x)}{P(x)}\) are analytic at \(x_0\). For simplicity, \(P(x)\), \(Q(x)\) and \(R(x)\) are assumed to be polynomials. The algorithm for finding the power series solutions is:
- Try expressing \((x - x0)P(x)\) and \(((x - x0)^{2})Q(x)\) as power series solutions about x0. Find \(p0\) and \(q0\) which are the constants of the power series expansions.
- Solve the indicial equation \(f(m) = m(m - 1) + m*p0 + q0\), to obtain the roots \(m1\) and \(m2\) of the indicial equation.
- If \(m1 - m2\) is a non integer there exists two series solutions. If \(m1 = m2\), there exists only one solution. If \(m1 - m2\) is an integer, then the existence of one solution is confirmed. The other solution may or may not exist.
The power series solution is of the form \(x^{m}\sum_{n=0}^\infty a_{n}x^{n}\). The coefficients are determined by the following recurrence relation: \(a_{n} = -\frac{\sum_{k=0}^{n-1} \left(q_{n-k} + (m + k)p_{n-k}\right) a_{k}}{f(m + n)}\). For the case in which \(m1 - m2\) is an integer, it can be seen from the recurrence relation that for the lower root \(m\), when \(n\) equals the difference of both the roots, the denominator becomes zero. So if the numerator is not equal to zero, a second series solution exists.
References
- George E. Simmons, "Differential Equations with Applications and Historical Notes", pp. 176 - 184
Examples
>>> from sympy import dsolve, Function, pprint >>> from sympy.abc import x, y >>> f = Function("f") >>> eq = x*(f(x).diff(x, 2)) + 2*(f(x).diff(x)) + x*f(x) >>> pprint(dsolve(eq)) / 6 4 2 \ | x x x | / 4 2 \ C1*|- --- + -- - -- + 1| | x x | \ 720 24 2 / / 6\ f(x) = C2*|--- - -- + 1| + ------------------------ + O\x / \120 6 / x
Lie heuristics¶
These functions are intended for internal use of the Lie Group Solver. Nonetheless, they contain useful information in their docstrings on the algorithms implemented for the various heuristics.
abaco1_simple¶
sympy.solvers.ode.
lie_heuristic_abaco1_simple(match, comp=False)[source]¶
The first heuristic uses the following four sets of assumptions on \(\xi\) and \(\eta\)\[\xi = 0, \eta = f(x)\]\[\xi = 0, \eta = f(y)\]\[\xi = f(x), \eta = 0\]\[\xi = f(y), \eta = 0\]
The success of this heuristic is determined by algebraic factorisation. For the first assumption \(\xi = 0\) and \(\eta\) to be a function of \(x\), the PDE\[\frac{\partial \eta}{\partial x} + (\frac{\partial \eta}{\partial y} - \frac{\partial \xi}{\partial x})*h - \frac{\partial \xi}{\partial y}*h^{2} - \xi*\frac{\partial h}{\partial x} - \eta*\frac{\partial h}{\partial y} = 0\]
reduces to \(f'(x) - f\frac{\partial h}{\partial y} = 0\) If \(\frac{\partial h}{\partial y}\) is a function of \(x\), then this can usually be integrated easily. A similar idea is applied to the other 3 assumptions as well.
References
- E.S. Cheb-Terrab, L.G.S. Duarte and L.A.C.P. da Mota, Computer Algebra Solving of First Order ODEs Using Symmetry Methods, p. 8
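Although these heuristics are internal, they can be exercised through the public `infinitesimals` and `checkinfsol` functions. A minimal sketch (the exact output format of `infinitesimals` may vary between SymPy versions): for \(h = x^2 y\), the first assumption gives \(f'(x) - f(x) x^2 = 0\), so the heuristic should yield \(\xi = 0\), \(\eta = e^{x^3/3}\).

```python
from sympy import Function
from sympy.abc import x
from sympy.solvers.ode import checkinfsol, infinitesimals

f = Function('f')
# first-order ODE f' = x**2 * f, i.e. h(x, y) = x**2 * y
eq = f(x).diff(x) - x**2*f(x)

# request the first heuristic explicitly
infs = infinitesimals(eq, hint='abaco1_simple')

# each (xi, eta) pair returned must satisfy the determining PDE
assert all(ok for ok, _ in checkinfsol(eq, infs))
```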
abaco1_product¶
sympy.solvers.ode.
lie_heuristic_abaco1_product(match, comp=False)[source]¶
The second heuristic uses the following two assumptions on \(\xi\) and \(\eta\)\[\eta = 0, \xi = f(x)*g(y)\]\[\eta = f(x)*g(y), \xi = 0\]
The first assumption of this heuristic holds good if \(\frac{1}{h^{2}}\frac{\partial^2}{\partial x \partial y}\log(h)\) is separable in \(x\) and \(y\); then the separated factor containing \(x\) is \(f(x)\), and \(g(y)\) is obtained by\[e^{\int f\frac{\partial}{\partial x}\left(\frac{1}{f*h}\right)\,dy}\]
References
- E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 7-8
bivariate¶
sympy.solvers.ode.
lie_heuristic_bivariate(match, comp=False)[source]¶
References
- Lie Groups and Differential Equations, pp. 327-329
chi¶
sympy.solvers.ode.
lie_heuristic_chi(match, comp=False)[source]¶
References
- E.S. Cheb-Terrab, L.G.S. Duarte and L.A.C.P. da Mota, Computer Algebra Solving of First Order ODEs Using Symmetry Methods, p. 8
abaco2_similar¶
sympy.solvers.ode.
lie_heuristic_abaco2_similar(match, comp=False)[source]¶
This heuristic uses the following two assumptions on \(\xi\) and \(\eta\)\[\eta = g(x), \xi = f(x)\]\[\eta = f(y), \xi = g(y)\]
For the first assumption,
First \(\frac{\frac{\partial h}{\partial y}}{\frac{\partial^{2} h}{\partial y^{2}}}\) is calculated. Let us say this value is \(A\).
If this is constant, then \(h\) is matched to the form \(A(x) + B(x)e^{ \frac{y}{C}}\) then, \(\frac{e^{\int \frac{A(x)}{C} \,dx}}{B(x)}\) gives \(f(x)\) and \(A(x)*f(x)\) gives \(g(x)\)
Otherwise \(\frac{\frac{\partial A}{\partial x}}{\frac{\partial A}{\partial y}} = \gamma\) is calculated. If
a] \(\gamma\) is a function of \(x\) alone
b] \(\frac{\gamma\frac{\partial h}{\partial y} - \gamma'(x) - \frac{ \partial h}{\partial x}}{h + \gamma} = G\) is a function of \(x\) alone. then, \(e^{\int G \,dx}\) gives \(f(x)\) and \(-\gamma*f(x)\) gives \(g(x)\)
The second assumption holds good if \(\frac{dy}{dx} = h(x, y)\) is rewritten as \(\frac{dy}{dx} = \frac{1}{h(y, x)}\) and the same properties of the first assumption are satisfied. After obtaining \(f(x)\) and \(g(x)\), the coordinates are again interchanged, to get \(\xi\) as \(f(x^*)\) and \(\eta\) as \(g(y^*)\)
References
- E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 10-12
function_sum¶
sympy.solvers.ode.
lie_heuristic_function_sum(match, comp=False)[source]¶
This heuristic uses the following two assumptions on \(\xi\) and \(\eta\)\[\eta = 0, \xi = f(x) + g(y)\]\[\eta = f(x) + g(y), \xi = 0\]
The first assumption of this heuristic holds good if\[\frac{\partial}{\partial y}[(h\frac{\partial^{2}}{ \partial x^{2}}(h^{-1}))^{-1}]\]
is separable in \(x\) and \(y\),
1. The separated factor containing \(y\) is \(\frac{\partial g}{\partial y}\). From this \(g(y)\) can be determined.
2. The separated factor containing \(x\) is \(f''(x)\).
3. \(h\frac{\partial^{2}}{\partial x^{2}}(h^{-1})\) equals \(\frac{f''(x)}{f(x) + g(y)}\). From this \(f(x)\) can be determined.
For both assumptions, the constant factors are separated among \(g(y)\) and \(f''(x)\), such that \(f''(x)\) obtained from item 3 is the same as that obtained from item 2. If this is not possible, then this heuristic fails.
References
- E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 7-8
abaco2_unique_unknown¶
sympy.solvers.ode.
lie_heuristic_abaco2_unique_unknown(match, comp=False)[source]¶
This heuristic assumes the presence of unknown functions or known functions with non-integer powers.
A list of all functions and non-integer powers containing \(x\) and \(y\) is made.
Loop over each element \(f\) in the list, and find \(\frac{\frac{\partial f}{\partial x}}{\frac{\partial f}{\partial y}} = R\)
If it is separable in \(x\) and \(y\), let \(X\) be the factors containing \(x\). Then
- a] Check if \(\xi = X\) and \(\eta = -\frac{X}{R}\) satisfy the PDE. If yes, then return \(\xi\) and \(\eta\)
- b] Check if \(\xi = \frac{-R}{X}\) and \(\eta = -\frac{1}{X}\) satisfy the PDE. If yes, then return \(\xi\) and \(\eta\)
If not, then check if
a] \(\xi = -R,\eta = 1\)
b] \(\xi = 1, \eta = -\frac{1}{R}\)
are solutions.
References
- E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 10-12
abaco2_unique_general¶
sympy.solvers.ode.
lie_heuristic_abaco2_unique_general(match, comp=False)[source]¶
This heuristic finds infinitesimals of the form \(\eta = f(x)\), \(\xi = g(y)\) without making any assumptions on \(h\).
The complete sequence of steps is given in the paper mentioned below.
References
- E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 10-12
linear¶
sympy.solvers.ode.
lie_heuristic_linear(match, comp=False)[source]¶
This heuristic assumes
- \(\xi = ax + by + c\) and
- \(\eta = fx + gy + h\)
After substituting the above assumptions into the determining PDE, it reduces to\[f + (g - a)h - bh^{2} - (ax + by + c)\frac{\partial h}{\partial x} - (fx + gy + h)\frac{\partial h}{\partial y} = 0\]
Solving the reduced PDE by the method of characteristics is impractical. Instead, similar terms are grouped and the resulting system of linear equations is solved. The difference from the bivariate heuristic is that \(h\) need not be a rational function in this case.
References
- E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 10-12
System of ODEs¶
These functions are intended for internal use by
dsolve() for systems of differential equations.
system_of_odes_linear_2eq_order1_type1¶
sympy.solvers.ode.
_linear_2eq_order1_type1(x, y, t, r, eq)[source]¶
It is classified under system of two linear homogeneous first-order constant-coefficient ordinary differential equations.
The equations which come under this type are\[x' = ax + by,\]\[y' = cx + dy\]
The characteristic equation is written as\[\lambda^{2} - (a+d) \lambda + ad - bc = 0\]
and its discriminant is \(D = (a-d)^{2} + 4bc\). There are several cases
1. Case when \(ad - bc \neq 0\). The origin of coordinates, \(x = y = 0\), is the only stationary point; it is
- a node if \(D = 0\)
- a node if \(D > 0\) and \(ad - bc > 0\)
- a saddle if \(D > 0\) and \(ad - bc < 0\)
- a focus if \(D < 0\) and \(a + d \neq 0\)
- a centre if \(D < 0\) and \(a + d = 0\)
1.1. If \(D > 0\). The characteristic equation has two distinct real roots \(\lambda_1\) and \(\lambda_ 2\) . The general solution of the system in question is expressed as\[x = C_1 b e^{\lambda_1 t} + C_2 b e^{\lambda_2 t}\]\[y = C_1 (\lambda_1 - a) e^{\lambda_1 t} + C_2 (\lambda_2 - a) e^{\lambda_2 t}\]
where \(C_1\) and \(C_2\) are arbitrary constants
1.2. If \(D < 0\). The characteristic equation has two conjugate roots, \(\lambda_1 = \sigma + i \beta\) and \(\lambda_2 = \sigma - i \beta\). The general solution of the system is given by\[x = b e^{\sigma t} (C_1 \sin(\beta t) + C_2 \cos(\beta t))\]\[y = e^{\sigma t} ([(\sigma - a) C_1 - \beta C_2] \sin(\beta t) + [\beta C_1 + (\sigma - a) C_2] \cos(\beta t))\]
1.3. If \(D = 0\) and \(a \neq d\). The characteristic equation has two equal roots, \(\lambda_1 = \lambda_2\). The general solution of the system is written as\[x = 2b (C_1 + \frac{C_2}{a-d} + C_2 t) e^{\frac{a+d}{2} t}\]\[y = [(d - a) C_1 + C_2 + (d - a) C_2 t] e^{\frac{a+d}{2} t}\]
1.4. If \(D = 0\) and \(a = d \neq 0\) and \(b = 0\)\[x = C_1 e^{a t} , y = (c C_1 t + C_2) e^{a t}\]
1.5. If \(D = 0\) and \(a = d \neq 0\) and \(c = 0\)\[x = (b C_1 t + C_2) e^{a t} , y = C_1 e^{a t}\]
2. Case when \(ad - bc = 0\) and \(a^{2} + b^{2} > 0\). The whole straight line \(ax + by = 0\) consists of singular points. The original system of differential equations can be rewritten as\[x' = ax + by , y' = k (ax + by)\]
2.1 If \(a + bk \neq 0\), solution will be\[x = b C_1 + C_2 e^{(a + bk) t} , y = -a C_1 + k C_2 e^{(a + bk) t}\]
2.2 If \(a + bk = 0\), solution will be\[x = C_1 (bk t - 1) + b C_2 t , y = k^{2} b C_1 t + (b k^{2} t + 1) C_2\]
system_of_odes_linear_2eq_order1_type2¶
sympy.solvers.ode.
_linear_2eq_order1_type2(x, y, t, r, eq)[source]¶
The equations of this type are\[x' = ax + by + k1 , y' = cx + dy + k2\]
The general solution of this system is given by the sum of a particular solution and the general solution of the corresponding homogeneous system, which is obtained from type 1.
1. When \(ad - bc \neq 0\). The particular solution will be \(x = x_0\) and \(y = y_0\) where \(x_0\) and \(y_0\) are determined by solving linear system of equations\[a x_0 + b y_0 + k1 = 0 , c x_0 + d y_0 + k2 = 0\]
2. When \(ad - bc = 0\) and \(a^{2} + b^{2} > 0\). In this case, the system of equations becomes\[x' = ax + by + k_1 , y' = k (ax + by) + k_2\]
2.1 If \(\sigma = a + bk \neq 0\), particular solution is given by\[x = b \sigma^{-1} (c_1 k - c_2) t - \sigma^{-2} (a c_1 + b c_2)\]\[y = kx + (c_2 - c_1 k) t\]
2.2 If \(\sigma = a + bk = 0\), particular solution is given by\[x = \frac{1}{2} b (c_2 - c_1 k) t^{2} + c_1 t\]\[y = kx + (c_2 - c_1 k) t\]
system_of_odes_linear_2eq_order1_type3¶
sympy.solvers.ode.
_linear_2eq_order1_type3(x, y, t, r, eq)[source]¶
The equations of this type of ode are\[x' = f(t) x + g(t) y\]\[y' = g(t) x + f(t) y\]
The solution of such equations is given by\[x = e^{F} (C_1 e^{G} + C_2 e^{-G}) , y = e^{F} (C_1 e^{G} - C_2 e^{-G})\]
where \(C_1\) and \(C_2\) are arbitrary constants, and\[F = \int f(t) \,dt , G = \int g(t) \,dt\]
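The closed form can be verified symbolically for arbitrary \(f\) and \(g\) by leaving \(F\) and \(G\) as unevaluated integrals (an illustrative check, not part of the solver):

```python
from sympy import Function, Integral, exp, simplify, symbols

t, C1, C2 = symbols('t C1 C2')
f, g = Function('f'), Function('g')

F = Integral(f(t), t)   # F' = f(t)
G = Integral(g(t), t)   # G' = g(t)

x = exp(F)*(C1*exp(G) + C2*exp(-G))
y = exp(F)*(C1*exp(G) - C2*exp(-G))

# both equations of the system are satisfied identically
assert simplify(x.diff(t) - f(t)*x - g(t)*y) == 0
assert simplify(y.diff(t) - g(t)*x - f(t)*y) == 0
```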
system_of_odes_linear_2eq_order1_type4¶
sympy.solvers.ode.
_linear_2eq_order1_type4(x, y, t, r, eq)[source]¶
The equations of this type of ODE are\[x' = f(t) x + g(t) y\]\[y' = -g(t) x + f(t) y\]
The solution is given by\[x = e^{F} (C_1 \cos(G) + C_2 \sin(G)), y = e^{F} (-C_1 \sin(G) + C_2 \cos(G))\]
where \(C_1\) and \(C_2\) are arbitrary constants, and\[F = \int f(t) \,dt , G = \int g(t) \,dt\]
system_of_odes_linear_2eq_order1_type5¶
sympy.solvers.ode.
_linear_2eq_order1_type5(x, y, t, r, eq)[source]¶
The equations of this type of ODE are\[x' = f(t) x + g(t) y\]\[y' = a g(t) x + [f(t) + b g(t)] y\]
The transformation of\[x = e^{\int f(t) \,dt} u , y = e^{\int f(t) \,dt} v , T = \int g(t) \,dt\]
leads to a system of constant coefficient linear differential equations\[u'(T) = v , v'(T) = au + bv\]
system_of_odes_linear_2eq_order1_type6¶
sympy.solvers.ode.
_linear_2eq_order1_type6(x, y, t, r, eq)[source]¶
The equations of this type of ODE are\[x' = f(t) x + g(t) y\]\[y' = a [f(t) + a h(t)] x + a [g(t) - h(t)] y\]
This is solved by first multiplying the first equation by \(-a\) and adding it to the second equation to obtain\[y' - a x' = -a h(t) (y - a x)\]
Setting \(U = y - ax\) and integrating the equation we arrive at\[y - ax = C_1 e^{-a \int h(t) \,dt}\]
Substituting this value of \(y\) into the first equation gives a first-order ODE for \(x\). After solving for \(x\), \(y\) is obtained by substituting the value of \(x\) into the second equation.
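The key identity \(y' - a x' = -a h(t) (y - a x)\) is pure algebra and can be checked by treating \(x\), \(y\) and the coefficient functions as plain symbols at a fixed instant (a sketch):

```python
from sympy import expand, symbols

a, x, y, f, g, h = symbols('a x y f g h')

# right-hand sides of the system at a fixed instant t
dx = f*x + g*y
dy = a*(f + a*h)*x + a*(g - h)*y

# the combination y' - a*x' collapses to -a*h*(y - a*x)
assert expand(dy - a*dx + a*h*(y - a*x)) == 0
```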
system_of_odes_linear_2eq_order1_type7¶
sympy.solvers.ode.
_linear_2eq_order1_type7(x, y, t, r, eq)[source]¶
The equations of this type of ODE are\[x' = f(t) x + g(t) y\]\[y' = h(t) x + p(t) y\]
Differentiating the first equation and substituting the value of \(y\) from second equation will give a second-order linear equation\[g x'' - (fg + gp + g') x' + (fgp - g^{2} h + f g' - f' g) x = 0\]
This equation can be easily integrated if one of the following conditions is satisfied.
- \(fgp - g^{2} h + f g' - f' g = 0\)
- \(fgp - g^{2} h + f g' - f' g = ag, fg + gp + g' = bg\)
If the first condition is satisfied then it is solved by the current dsolve solver, and in the second case it becomes a constant-coefficient differential equation which is also solved by the current solver.
Otherwise, if the above conditions fail, a particular solution is assumed as \(x = x_0(t)\) and \(y = y_0(t)\). Then the general solution is expressed as\[x = C_1 x_0(t) + C_2 x_0(t) \int \frac{g(t) F(t) P(t)}{x_0^{2}(t)} \,dt\]\[y = C_1 y_0(t) + C_2 [\frac{F(t) P(t)}{x_0(t)} + y_0(t) \int \frac{g(t) F(t) P(t)}{x_0^{2}(t)} \,dt]\]
where \(C_1\) and \(C_2\) are arbitrary constants and\[F(t) = e^{\int f(t) \,dt} , P(t) = e^{\int p(t) \,dt}\]
system_of_odes_linear_2eq_order2_type1¶
sympy.solvers.ode.
_linear_2eq_order2_type1(x, y, t, r, eq)[source]¶
System of two constant-coefficient second-order linear homogeneous differential equations\[x'' = ax + by\]\[y'' = cx + dy\]
The characteristic equation for the above system is\[\lambda^4 - (a + d) \lambda^2 + ad - bc = 0\]
whose discriminant is \(D = (a - d)^2 + 4bc\).
- When \(ad - bc \neq 0\)
1.1. If \(D \neq 0\). The characteristic equation has four distinct roots, \(\lambda_1, \lambda_2, \lambda_3, \lambda_4\). The general solution of the system is\[x = C_1 b e^{\lambda_1 t} + C_2 b e^{\lambda_2 t} + C_3 b e^{\lambda_3 t} + C_4 b e^{\lambda_4 t}\]\[y = C_1 (\lambda_1^{2} - a) e^{\lambda_1 t} + C_2 (\lambda_2^{2} - a) e^{\lambda_2 t} + C_3 (\lambda_3^{2} - a) e^{\lambda_3 t} + C_4 (\lambda_4^{2} - a) e^{\lambda_4 t}\]
where \(C_1,..., C_4\) are arbitrary constants.
1.2. If \(D = 0\) and \(a \neq d\):\[x = 2 C_1 (bt + \frac{2bk}{a - d}) e^{\frac{kt}{2}} + 2 C_2 (bt + \frac{2bk}{a - d}) e^{\frac{-kt}{2}} + 2b C_3 t e^{\frac{kt}{2}} + 2b C_4 t e^{\frac{-kt}{2}}\]\[y = C_1 (d - a) t e^{\frac{kt}{2}} + C_2 (d - a) t e^{\frac{-kt}{2}} + C_3 [(d - a) t + 2k] e^{\frac{kt}{2}} + C_4 [(d - a) t - 2k] e^{\frac{-kt}{2}}\]
where \(C_1,..., C_4\) are arbitrary constants and \(k = \sqrt{2 (a + d)}\)
1.3. If \(D = 0\) and \(a = d \neq 0\) and \(b = 0\):\[x = 2 \sqrt{a} C_1 e^{\sqrt{a} t} + 2 \sqrt{a} C_2 e^{-\sqrt{a} t}\]\[y = c C_1 t e^{\sqrt{a} t} - c C_2 t e^{-\sqrt{a} t} + C_3 e^{\sqrt{a} t} + C_4 e^{-\sqrt{a} t}\]
1.4. If \(D = 0\) and \(a = d \neq 0\) and \(c = 0\):\[x = b C_1 t e^{\sqrt{a} t} - b C_2 t e^{-\sqrt{a} t} + C_3 e^{\sqrt{a} t} + C_4 e^{-\sqrt{a} t}\]\[y = 2 \sqrt{a} C_1 e^{\sqrt{a} t} + 2 \sqrt{a} C_2 e^{-\sqrt{a} t}\]
2. When \(ad - bc = 0\) and \(a^2 + b^2 > 0\). Then the original system becomes\[x'' = ax + by\]\[y'' = k (ax + by)\]
2.1. If \(a + bk \neq 0\):\[x = C_1 e^{t \sqrt{a + bk}} + C_2 e^{-t \sqrt{a + bk}} + C_3 bt + C_4 b\]\[y = C_1 k e^{t \sqrt{a + bk}} + C_2 k e^{-t \sqrt{a + bk}} - C_3 at - C_4 a\]
2.2. If \(a + bk = 0\):\[x = C_1 b t^3 + C_2 b t^2 + C_3 t + C_4\]\[y = kx + 6 C_1 t + 2 C_2\]
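As a spot check of case 1.1, take \(a = d = 0\) and \(b = c = 1\), so the system is \(x'' = y\), \(y'' = x\) and the characteristic equation \(\lambda^4 - 1 = 0\) has the four distinct roots \(\pm 1, \pm i\) (a sketch):

```python
from sympy import I, exp, simplify, symbols

t = symbols('t')
C1, C2, C3, C4 = symbols('C1:5')

# roots of lam**4 - 1 = 0 for x'' = y, y'' = x (a = d = 0, b = c = 1)
lams = [1, -1, I, -I]

# x = sum C_i*b*e^{lam_i t}, y = sum C_i*(lam_i**2 - a)*e^{lam_i t}
x = sum(C*exp(lam*t) for C, lam in zip((C1, C2, C3, C4), lams))
y = sum(C*lam**2*exp(lam*t) for C, lam in zip((C1, C2, C3, C4), lams))

assert simplify(x.diff(t, 2) - y) == 0
assert simplify(y.diff(t, 2) - x) == 0
```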
system_of_odes_linear_2eq_order2_type2¶
sympy.solvers.ode.
_linear_2eq_order2_type2(x, y, t, r, eq)[source]¶
The equations in this type are\[x'' = a_1 x + b_1 y + c_1\]\[y'' = a_2 x + b_2 y + c_2\]
The general solution of this system is given by the sum of a particular solution and the general solution of the corresponding homogeneous system, which is a linear system of two second-order equations of type 1.
1. If \(a_1 b_2 - a_2 b_1 \neq 0\). A particular solution will be \(x = x_0\) and \(y = y_0\) where the constants \(x_0\) and \(y_0\) are determined by solving the linear algebraic system\[a_1 x_0 + b_1 y_0 + c_1 = 0, a_2 x_0 + b_2 y_0 + c_2 = 0\]
2. If \(a_1 b_2 - a_2 b_1 = 0\) and \(a_1^2 + b_1^2 > 0\). In this case, the system in question becomes\[x'' = ax + by + c_1, y'' = k (ax + by) + c_2\]
2.1. If \(\sigma = a + bk \neq 0\), the particular solution will be\[x = \frac{1}{2} b \sigma^{-1} (c_1 k - c_2) t^2 - \sigma^{-2} (a c_1 + b c_2)\]\[y = kx + \frac{1}{2} (c_2 - c_1 k) t^2\]
2.2. If \(\sigma = a + bk = 0\), the particular solution will be\[x = \frac{1}{24} b (c_2 - c_1 k) t^4 + \frac{1}{2} c_1 t^2\]\[y = kx + \frac{1}{2} (c_2 - c_1 k) t^2\]
system_of_odes_linear_2eq_order2_type3¶
sympy.solvers.ode.
_linear_2eq_order2_type3(x, y, t, r, eq)[source]¶
This type of equation is used for describing the horizontal motion of a pendulum taking into account the Earth's rotation. For \(a^2 + 4b > 0\), the solution is given by\[x = C_1 \cos(\alpha t) + C_2 \sin(\alpha t) + C_3 \cos(\beta t) + C_4 \sin(\beta t)\]\[y = -C_1 \sin(\alpha t) + C_2 \cos(\alpha t) - C_3 \sin(\beta t) + C_4 \cos(\beta t)\]
where \(C_1,...,C_4\) are arbitrary constants and\[\alpha = \frac{1}{2} a + \frac{1}{2} \sqrt{a^2 + 4b}, \beta = \frac{1}{2} a - \frac{1}{2} \sqrt{a^2 + 4b}\]
system_of_odes_linear_2eq_order2_type4¶
sympy.solvers.ode.
_linear_2eq_order2_type4(x, y, t, r, eq)[source]¶
These equations are found in the theory of oscillations\[x'' + a_1 x' + b_1 y' + c_1 x + d_1 y = k_1 e^{i \omega t}\]\[y'' + a_2 x' + b_2 y' + c_2 x + d_2 y = k_2 e^{i \omega t}\]
The general solution of this linear nonhomogeneous system of constant-coefficient differential equations is given by the sum of its particular solution and the general solution of the corresponding homogeneous system (with \(k_1 = k_2 = 0\))
1. A particular solution is sought by the method of undetermined coefficients in the form\[x = A_* e^{i \omega t}, y = B_* e^{i \omega t}\]
On substituting these expressions into the original system of differential equations, one arrives at a linear nonhomogeneous system of algebraic equations for the coefficients \(A_*\) and \(B_*\).
2. The general solution of the corresponding homogeneous system is sought in the form \(x = A e^{\lambda t}\), \(y = B e^{\lambda t}\). Substituting these into the homogeneous system gives the algebraic system\[(\lambda^{2} + a_1 \lambda + c_1) A + (b_1 \lambda + d_1) B = 0\]\[(a_2 \lambda + c_2) A + (\lambda^{2} + b_2 \lambda + d_2) B = 0\]
The determinant of this system must vanish for nontrivial solutions A, B to exist. This requirement results in the following characteristic equation for \(\lambda\)\[(\lambda^2 + a_1 \lambda + c_1) (\lambda^2 + b_2 \lambda + d_2) - (b_1 \lambda + d_1) (a_2 \lambda + c_2) = 0\]
If all four roots \(\lambda_1,...,\lambda_4\) of this equation are distinct, the general solution of the homogeneous system is\[x = -C_1 (b_1 \lambda_1 + d_1) e^{\lambda_1 t} - C_2 (b_1 \lambda_2 + d_1) e^{\lambda_2 t} - C_3 (b_1 \lambda_3 + d_1) e^{\lambda_3 t} - C_4 (b_1 \lambda_4 + d_1) e^{\lambda_4 t}\]\[y = C_1 (\lambda_1^{2} + a_1 \lambda_1 + c_1) e^{\lambda_1 t} + C_2 (\lambda_2^{2} + a_1 \lambda_2 + c_1) e^{\lambda_2 t} + C_3 (\lambda_3^{2} + a_1 \lambda_3 + c_1) e^{\lambda_3 t} + C_4 (\lambda_4^{2} + a_1 \lambda_4 + c_1) e^{\lambda_4 t}\]
system_of_odes_linear_2eq_order2_type5¶
sympy.solvers.ode.
_linear_2eq_order2_type5(x, y, t, r, eq)[source]¶
The equations which come under this category are\[x'' = a (t y' - y)\]\[y'' = b (t x' - x)\]
The transformation\[u = t x' - x, v = t y' - y\]
leads to the first-order system\[u' = atv, v' = btu\]
The general solution of this system is given by
If \(ab > 0\):\[u = C_1 a e^{\frac{1}{2} \sqrt{ab} t^2} + C_2 a e^{-\frac{1}{2} \sqrt{ab} t^2}\]\[v = C_1 \sqrt{ab} e^{\frac{1}{2} \sqrt{ab} t^2} - C_2 \sqrt{ab} e^{-\frac{1}{2} \sqrt{ab} t^2}\]
If \(ab < 0\):\[u = C_1 a \cos(\frac{1}{2} \sqrt{\left|ab\right|} t^2) + C_2 a \sin(\frac{1}{2} \sqrt{\left|ab\right|} t^2)\]\[v = -C_1 \sqrt{\left|ab\right|} \sin(\frac{1}{2} \sqrt{\left|ab\right|} t^2) + C_2 \sqrt{\left|ab\right|} \cos(\frac{1}{2} \sqrt{\left|ab\right|} t^2)\]
system_of_odes_linear_2eq_order2_type6¶
sympy.solvers.ode.
_linear_2eq_order2_type6(x, y, t, r, eq)[source]¶
The equations are\[x'' = f(t) (a_1 x + b_1 y)\]\[y'' = f(t) (a_2 x + b_2 y)\]
If \(k_1\) and \(k_2\) are roots of the quadratic equation\[k^2 - (a_1 + b_2) k + a_1 b_2 - a_2 b_1 = 0\]
Then by multiplying appropriate constants and adding together original equations we obtain two independent equations:\[z_1'' = k_1 f(t) z_1, z_1 = a_2 x + (k_1 - a_1) y\]\[z_2'' = k_2 f(t) z_2, z_2 = a_2 x + (k_2 - a_1) y\]
Solving these differential equations gives \(z_1\) and \(z_2\); substituting the results back then gives the values of \(x\) and \(y\).
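That \(z_1'' = k_1 f(t) z_1\) holds for each root \(k_1\) of the quadratic can be confirmed symbolically, substituting \(x''\) and \(y''\) from the system (a sketch treating \(x\), \(y\), \(f\) as symbols):

```python
from sympy import simplify, solve, symbols

a1, b1, a2, b2, x, y, f, k = symbols('a1 b1 a2 b2 x y f k')

# roots of k**2 - (a1 + b2)*k + a1*b2 - a2*b1 = 0
roots = solve(k**2 - (a1 + b2)*k + a1*b2 - a2*b1, k)

for k1 in roots:
    z1 = a2*x + (k1 - a1)*y
    # z1'' with x'' = f*(a1*x + b1*y) and y'' = f*(a2*x + b2*y) substituted
    z1pp = a2*f*(a1*x + b1*y) + (k1 - a1)*f*(a2*x + b2*y)
    assert simplify(z1pp - k1*f*z1) == 0
```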
system_of_odes_linear_2eq_order2_type7¶
sympy.solvers.ode.
_linear_2eq_order2_type7(x, y, t, r, eq)[source]¶
The equations are given as\[x'' = f(t) (a_1 x' + b_1 y')\]\[y'' = f(t) (a_2 x' + b_2 y')\]
If \(k_1\) and \(k_2\) are roots of the quadratic equation\[k^2 - (a_1 + b_2) k + a_1 b_2 - a_2 b_1 = 0\]
Then the system can be reduced by adding together the two equations multiplied by appropriate constants give following two independent equations:\[z_1'' = k_1 f(t) z_1', z_1 = a_2 x + (k_1 - a_1) y\]\[z_2'' = k_2 f(t) z_2', z_2 = a_2 x + (k_2 - a_1) y\]
Integrating these and returning to the original variables, one arrives at a linear algebraic system for the unknowns \(x\) and \(y\):\[a_2 x + (k_1 - a_1) y = C_1 \int e^{k_1 F(t)} \,dt + C_2\]\[a_2 x + (k_2 - a_1) y = C_3 \int e^{k_2 F(t)} \,dt + C_4\]
where \(C_1,...,C_4\) are arbitrary constants and \(F(t) = \int f(t) \,dt\)
system_of_odes_linear_2eq_order2_type8¶
sympy.solvers.ode.
_linear_2eq_order2_type8(x, y, t, r, eq)[source]¶
The equations of this category are\[x'' = a f(t) (t y' - y)\]\[y'' = b f(t) (t x' - x)\]
The transformation\[u = t x' - x, v = t y' - y\]
leads to the system of first-order equations\[u' = a t f(t) v, v' = b t f(t) u\]
The general solution of this system has the form
If \(ab > 0\):\[u = C_1 a e^{\sqrt{ab} \int t f(t) \,dt} + C_2 a e^{-\sqrt{ab} \int t f(t) \,dt}\]\[v = C_1 \sqrt{ab} e^{\sqrt{ab} \int t f(t) \,dt} - C_2 \sqrt{ab} e^{-\sqrt{ab} \int t f(t) \,dt}\]
If \(ab < 0\):\[u = C_1 a \cos(\sqrt{\left|ab\right|} \int t f(t) \,dt) + C_2 a \sin(\sqrt{\left|ab\right|} \int t f(t) \,dt)\]\[v = -C_1 \sqrt{\left|ab\right|} \sin(\sqrt{\left|ab\right|} \int t f(t) \,dt) + C_2 \sqrt{\left|ab\right|} \cos(\sqrt{\left|ab\right|} \int t f(t) \,dt)\]
system_of_odes_linear_2eq_order2_type9¶
sympy.solvers.ode.
_linear_2eq_order2_type9(x, y, t, r, eq)[source]¶
This system of equations is of Euler type:\[t^2 x'' + a_1 t x' + b_1 t y' + c_1 x + d_1 y = 0\]\[t^2 y'' + a_2 t x' + b_2 t y' + c_2 x + d_2 y = 0\]
The substitution of \(t = \sigma e^{\tau} (\sigma \neq 0)\) leads to the system of constant coefficient linear differential equations\[x'' + (a_1 - 1) x' + b_1 y' + c_1 x + d_1 y = 0\]\[y'' + a_2 x' + (b_2 - 1) y' + c_2 x + d_2 y = 0\]
Looking for a solution in the form \(x = A e^{\lambda t}\), \(y = B e^{\lambda t}\) leads to the algebraic system\[(\lambda^{2} + (a_1 - 1) \lambda + c_1) A + (b_1 \lambda + d_1) B = 0\]\[(a_2 \lambda + c_2) A + (\lambda^{2} + (b_2 - 1) \lambda + d_2) B = 0\]
The determinant of this system must vanish for nontrivial solutions A, B to exist. This requirement results in the following characteristic equation for \(\lambda\)\[(\lambda^2 + (a_1 - 1) \lambda + c_1) (\lambda^2 + (b_2 - 1) \lambda + d_2) - (b_1 \lambda + d_1) (a_2 \lambda + c_2) = 0\]
If all four roots \(\lambda_1,...,\lambda_4\) of this equation are distinct, the general solution is\[x = -C_1 (b_1 \lambda_1 + d_1) e^{\lambda_1 t} - C_2 (b_1 \lambda_2 + d_1) e^{\lambda_2 t} - C_3 (b_1 \lambda_3 + d_1) e^{\lambda_3 t} - C_4 (b_1 \lambda_4 + d_1) e^{\lambda_4 t}\]\[y = C_1 (\lambda_1^{2} + (a_1 - 1) \lambda_1 + c_1) e^{\lambda_1 t} + C_2 (\lambda_2^{2} + (a_1 - 1) \lambda_2 + c_1) e^{\lambda_2 t} + C_3 (\lambda_3^{2} + (a_1 - 1) \lambda_3 + c_1) e^{\lambda_3 t} + C_4 (\lambda_4^{2} + (a_1 - 1) \lambda_4 + c_1) e^{\lambda_4 t}\]
system_of_odes_linear_2eq_order2_type10¶
sympy.solvers.ode.
_linear_2eq_order2_type10(x, y, t, r, eq)[source]¶
The equations of this category are\[(\alpha t^2 + \beta t + \gamma)^{2} x'' = ax + by\]\[(\alpha t^2 + \beta t + \gamma)^{2} y'' = cx + dy\]
The transformation\[\tau = \int \frac{1}{\alpha t^2 + \beta t + \gamma} \,dt , u = \frac{x}{\sqrt{\left|\alpha t^2 + \beta t + \gamma\right|}} , v = \frac{y}{\sqrt{\left|\alpha t^2 + \beta t + \gamma\right|}}\]
leads to a constant coefficient linear system of equations\[u'' = (a - \alpha \gamma + \frac{1}{4} \beta^{2}) u + b v\]\[v'' = c u + (d - \alpha \gamma + \frac{1}{4} \beta^{2}) v\]
The system obtained can be solved by type 1 of System of two constant-coefficient second-order linear homogeneous differential equations.
system_of_odes_linear_2eq_order2_type11¶
sympy.solvers.ode.
_linear_2eq_order2_type11(x, y, t, r, eq)[source]¶
The equations which come under this type are\[x'' = f(t) (t x' - x) + g(t) (t y' - y)\]\[y'' = h(t) (t x' - x) + p(t) (t y' - y)\]
The transformation\[u = t x' - x, v = t y' - y\]
leads to the linear system of first-order equations\[u' = t f(t) u + t g(t) v, v' = t h(t) u + t p(t) v\]
Substituting the values of \(u\) and \(v\) into the transformed equations gives the values of \(x\) and \(y\) as\[x = C_3 t + t \int \frac{u}{t^2} \,dt , y = C_4 t + t \int \frac{v}{t^2} \,dt.\]
where \(C_3\) and \(C_4\) are arbitrary constants.
system_of_odes_linear_3eq_order1_type1¶
sympy.solvers.ode.
_linear_3eq_order1_type1(x, y, z, t, r, eq)[source]¶
Equations:\[x' = ax\]\[y' = bx + cy\]\[z' = dx + ky + pz\]
Such systems are solved by forward substitution: solving the first equation gives the value of \(x\); substituting it into the second and third equations and solving the second equation gives \(y\); similarly, substituting \(y\) into the third equation gives \(z\).\[x = C_1 e^{at}\]\[y = \frac{b C_1}{a - c} e^{at} + C_2 e^{ct}\]\[z = \frac{C_1}{a - p} (d + \frac{bk}{a - c}) e^{at} + \frac{k C_2}{c - p} e^{ct} + C_3 e^{pt}\]
where \(C_1, C_2\) and \(C_3\) are arbitrary constants.
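The closed form can be verified by direct substitution (a sketch, assuming \(a\), \(c\), \(p\) are pairwise distinct so that no denominator vanishes):

```python
from sympy import exp, simplify, symbols

t, a, b, c, d, k, p = symbols('t a b c d k p')
C1, C2, C3 = symbols('C1:4')

x = C1*exp(a*t)
y = b*C1/(a - c)*exp(a*t) + C2*exp(c*t)
z = (C1/(a - p)*(d + b*k/(a - c))*exp(a*t)
     + k*C2/(c - p)*exp(c*t) + C3*exp(p*t))

# each equation of the triangular system is satisfied identically
assert simplify(x.diff(t) - a*x) == 0
assert simplify(y.diff(t) - b*x - c*y) == 0
assert simplify(z.diff(t) - d*x - k*y - p*z) == 0
```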
system_of_odes_linear_3eq_order1_type2¶
sympy.solvers.ode.
_linear_3eq_order1_type2(x, y, z, t, r, eq)[source]¶
The equations of this type are\[x' = cy - bz\]\[y' = az - cx\]\[z' = bx - ay\]
First integrals:\[ax + by + cz = A \qquad - (1)\]\[x^2 + y^2 + z^2 = B^2 \qquad - (2)\]
where \(A\) and \(B\) are arbitrary constants. It follows from these integrals that the integral curves are circles formed by the intersection of the planes \((1)\) with the spheres \((2)\)
Solution:\[x = a C_0 + k C_1 \cos(kt) + (c C_2 - b C_3) \sin(kt)\]\[y = b C_0 + k C_2 \cos(kt) + (a C_3 - c C_1) \sin(kt)\]\[z = c C_0 + k C_3 \cos(kt) + (b C_1 - a C_2) \sin(kt)\]
where \(k = \sqrt{a^2 + b^2 + c^2}\) and the four constants of integration, \(C_0,...,C_3\), are constrained by a single relation,\[a C_1 + b C_2 + c C_3 = 0\]
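Both first integrals follow from a one-line computation: their derivatives along the flow vanish identically (a quick sketch):

```python
from sympy import expand, symbols

a, b, c, x, y, z = symbols('a b c x y z')

# right-hand sides of x' = c*y - b*z, y' = a*z - c*x, z' = b*x - a*y
dx, dy, dz = c*y - b*z, a*z - c*x, b*x - a*y

# d/dt of (1): a*x + b*y + c*z, and of (2): x**2 + y**2 + z**2
assert expand(a*dx + b*dy + c*dz) == 0
assert expand(2*x*dx + 2*y*dy + 2*z*dz) == 0
```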
system_of_odes_linear_3eq_order1_type3¶
sympy.solvers.ode.
_linear_3eq_order1_type3(x, y, z, t, r, eq)[source]¶
Equations of this system of ODEs\[a x' = bc (y - z)\]\[b y' = ac (z - x)\]\[c z' = ab (x - y)\]
First integral:\[a^2 x + b^2 y + c^2 z = A\]
where \(A\) is an arbitrary constant. It follows that the integral curves are plane curves.
Solution:\[x = C_0 + k C_1 \cos(kt) + a^{-1} bc (C_2 - C_3) \sin(kt)\]\[y = C_0 + k C_2 \cos(kt) + a b^{-1} c (C_3 - C_1) \sin(kt)\]\[z = C_0 + k C_3 \cos(kt) + ab c^{-1} (C_1 - C_2) \sin(kt)\]
where \(k = \sqrt{a^2 + b^2 + c^2}\) and the four constants of integration, \(C_0,...,C_3\), are constrained by a single relation\[a^2 C_1 + b^2 C_2 + c^2 C_3 = 0\]
system_of_odes_linear_3eq_order1_type4¶
sympy.solvers.ode.
_linear_3eq_order1_type4(x, y, z, t, r, eq)[source]¶
Equations:\[x' = (a_1 f(t) + g(t)) x + a_2 f(t) y + a_3 f(t) z\]\[y' = b_1 f(t) x + (b_2 f(t) + g(t)) y + b_3 f(t) z\]\[z' = c_1 f(t) x + c_2 f(t) y + (c_3 f(t) + g(t)) z\]
The transformation\[x = e^{\int g(t) \,dt} u, y = e^{\int g(t) \,dt} v, z = e^{\int g(t) \,dt} w, \tau = \int f(t) \,dt\]
leads to the system of constant coefficient linear differential equations\[u' = a_1 u + a_2 v + a_3 w\]\[v' = b_1 u + b_2 v + b_3 w\]\[w' = c_1 u + c_2 v + c_3 w\]
This system is solved as a homogeneous linear system of \(n\) first-order equations with constant coefficients. Substituting the values of \(u, v\) and \(w\) into the transformation then gives the values of \(x, y\) and \(z\).
system_of_odes_linear_neq_order1_type1¶
sympy.solvers.ode.
_linear_neq_order1_type1(match_)[source]¶
System of \(n\) first-order constant-coefficient linear homogeneous differential equations\[y'_k = a_{k1} y_1 + a_{k2} y_2 +...+ a_{kn} y_n; k = 1,2,...,n\]
or that can be written as \(\vec{y'} = A . \vec{y}\) where \(\vec{y}\) is matrix of \(y_k\) for \(k = 1,2,...n\) and \(A\) is a \(n \times n\) matrix.
Since these equations form a first-order homogeneous linear system, the general solution contains \(n\) linearly independent parts built from exponential functions. Assume \(y = \vec{v} e^{rt}\) is a solution of the system, where \(\vec{v}\) is a vector of coefficients of \(y_1,...,y_n\). Substituting \(y\) and \(y' = r \vec{v} e^{r t}\) into the equation \(\vec{y'} = A . \vec{y}\), we get\[r \vec{v} e^{rt} = A \vec{v} e^{rt}\]\[r \vec{v} = A \vec{v}\]
where \(r\) turns out to be an eigenvalue of \(A\) and the vector \(\vec{v}\) is the eigenvector of \(A\) corresponding to \(r\). There are three possibilities for the eigenvalues of \(A\)
- \(n\) distinct real eigenvalues
- complex conjugate eigenvalues
- eigenvalues with multiplicity \(k\)
1. When all eigenvalues \(r_1,..,r_n\) are distinct with \(n\) different eigenvectors \(v_1,...,v_n\), the solution is given by\[\vec{y} = C_1 e^{r_1 t} \vec{v_1} + C_2 e^{r_2 t} \vec{v_2} +...+ C_n e^{r_n t} \vec{v_n}\]
where \(C_1,C_2,...,C_n\) are arbitrary constants.
2. When some eigenvalues are complex, then in order to make the solution real, we take a linear combination: if \(r = a + bi\) has an eigenvector \(\vec{v} = \vec{w_1} + i \vec{w_2}\), then to obtain real-valued solutions to the system, replace the complex-valued solution \(e^{rx} \vec{v}\) with the real-valued solution \(e^{ax} (\vec{w_1} \cos(bx) - \vec{w_2} \sin(bx))\), and for \(r = a - bi\) replace the solution \(e^{-r x} \vec{v}\) with \(e^{ax} (\vec{w_1} \sin(bx) + \vec{w_2} \cos(bx))\)
3. If some eigenvalues are repeated, then we get fewer than \(n\) linearly independent eigenvectors; we miss some of the solutions and need to construct the missing ones. We do this via generalized eigenvectors, vectors which are not eigenvectors but are close enough that we can use them to write down the remaining solutions. For an eigenvalue \(r\) with eigenvector \(\vec{w}\) we obtain \(\vec{w_2},...,\vec{w_k}\) using\[(A - r I) . \vec{w_2} = \vec{w}\]\[(A - r I) . \vec{w_3} = \vec{w_2}\]\[\vdots\]\[(A - r I) . \vec{w_k} = \vec{w_{k-1}}\]
Then the solutions to the system for the eigenspace are \(e^{rt} [\vec{w}], e^{rt} [t \vec{w} + \vec{w_2}], e^{rt} [\frac{t^2}{2} \vec{w} + t \vec{w_2} + \vec{w_3}], ...,e^{rt} [\frac{t^{k-1}}{(k-1)!} \vec{w} + \frac{t^{k-2}}{(k-2)!} \vec{w_2} +...+ t \vec{w_{k-1}} + \vec{w_k}]\)
If \(\vec{y_1},...,\vec{y_n}\) are the \(n\) solutions obtained from the three cases above, then the general solution to the system \(\vec{y'} = A . \vec{y}\) is\[\vec{y} = C_1 \vec{y_1} + C_2 \vec{y_2} + \cdots + C_n \vec{y_n}\]
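For case 1 the recipe can be carried out directly with `Matrix.eigenvects` (a sketch using a concrete \(2 \times 2\) matrix with distinct real eigenvalues):

```python
from sympy import Matrix, exp, simplify, symbols

t = symbols('t')
A = Matrix([[1, 1], [4, 1]])   # eigenvalues 3 and -1, distinct and real

basis = []
for r, mult, vecs in A.eigenvects():
    for v in vecs:
        basis.append(exp(r*t)*v)   # each e^{r t} * v solves y' = A.y

# every basis solution satisfies y' - A*y = 0
for ysol in basis:
    residual = (ysol.diff(t) - A*ysol).applyfunc(simplify)
    assert residual == Matrix([0, 0])
```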
system_of_odes_nonlinear_2eq_order1_type1¶
sympy.solvers.ode.
_nonlinear_2eq_order1_type1(x, y, t, eq)[source]¶
Equations:\[x' = x^n F(x,y)\]\[y' = g(y) F(x,y)\]
Solution:\[x = \varphi(y), \int \frac{1}{g(y) F(\varphi(y),y)} \,dy = t + C_2\]
where
if \(n \neq 1\)\[\varphi = [C_1 + (1-n) \int \frac{1}{g(y)} \,dy]^{\frac{1}{1-n}}\]
if \(n = 1\)\[\varphi = C_1 e^{\int \frac{1}{g(y)} \,dy}\]
where \(C_1\) and \(C_2\) are arbitrary constants.
system_of_odes_nonlinear_2eq_order1_type2¶
sympy.solvers.ode.
_nonlinear_2eq_order1_type2(x, y, t, eq)[source]¶
Equations:\[x' = e^{\lambda x} F(x,y)\]\[y' = g(y) F(x,y)\]
Solution:\[x = \varphi(y), \int \frac{1}{g(y) F(\varphi(y),y)} \,dy = t + C_2\]
where
if \(\lambda \neq 0\)\[\varphi = -\frac{1}{\lambda} \log(C_1 - \lambda \int \frac{1}{g(y)} \,dy)\]
if \(\lambda = 0\)\[\varphi = C_1 + \int \frac{1}{g(y)} \,dy\]
where \(C_1\) and \(C_2\) are arbitrary constants.
system_of_odes_nonlinear_2eq_order1_type3¶
sympy.solvers.ode.
_nonlinear_2eq_order1_type3(x, y, t, eq)[source]¶
Autonomous system of general form\[x' = F(x,y)\]\[y' = G(x,y)\]
Suppose \(y = y(x, C_1)\), where \(C_1\) is an arbitrary constant, is the general solution of the first-order equation\[F(x,y) y'_x = G(x,y)\]
Then the general solution of the original system of equations has the form\[\int \frac{1}{F(x,y(x,C_1))} \,dx = t + C_2\]
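A worked instance (illustrative; the choice \(F = x\), \(G = x y\) is an assumption for the example): the reduced equation \(F y'_x = G\) becomes \(y' = y\), and the final quadrature reduces to \(\log x = t + C_2\):

```python
from sympy import Eq, Function, dsolve, integrate, log, symbols
from sympy.solvers.ode import checkodesol

x = symbols('x', positive=True)
y = Function('y')

# take F(x, y) = x and G(x, y) = x*y; then F*y'_x = G reads y' = y
reduced = Eq(y(x).diff(x), y(x))
sol = dsolve(reduced)              # general solution of the reduced ODE
assert checkodesol(reduced, sol)[0]

# the quadrature Integral(1/F(x, y(x, C1)), x) = t + C2 here is just log(x)
assert integrate(1/x, x) == log(x)
```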
system_of_odes_nonlinear_2eq_order1_type4¶
sympy.solvers.ode.
_nonlinear_2eq_order1_type4(x, y, t, eq)[source]¶
Equation:\[x' = f_1(x) g_1(y) \phi(x,y,t)\]\[y' = f_2(x) g_2(y) \phi(x,y,t)\]
First integral:\[\int \frac{f_2(x)}{f_1(x)} \,dx - \int \frac{g_1(y)}{g_2(y)} \,dy = C\]
where \(C\) is an arbitrary constant.
On solving the first integral for \(x\) (resp., \(y\)) and on substituting the resulting expression into either equation of the original system, one arrives at a first-order equation for determining \(y\) (resp., \(x\)).
system_of_odes_nonlinear_2eq_order1_type5

sympy.solvers.ode._nonlinear_2eq_order1_type5(func, t, eq)
Clairaut system of ODEs\[x = t x' + F(x',y')\]\[y = t y' + G(x',y')\]
The following are solutions of the system
\((i)\) straight lines:\[x = C_1 t + F(C_1, C_2), y = C_2 t + G(C_1, C_2)\]
where \(C_1\) and \(C_2\) are arbitrary constants;
\((ii)\) envelopes of the above lines;
\((iii)\) continuously differentiable lines made up from segments of the lines \((i)\) and \((ii)\).
system_of_odes_nonlinear_3eq_order1_type1

sympy.solvers.ode._nonlinear_3eq_order1_type1(x, y, z, t, eq)
Equations:\[a x' = (b - c) y z, \enspace b y' = (c - a) z x, \enspace c z' = (a - b) x y\]

First Integrals:\[a x^{2} + b y^{2} = C_1\]\[a x^{2} + c z^{2} = C_2\]

where \(C_1\) and \(C_2\) are arbitrary constants. On solving the integrals for \(y\) and \(z\) and on substituting the resulting expressions into the first equation of the system, we arrive at a separable first-order equation on \(x\). Similarly doing that for the other two equations, we will arrive at first-order equations on \(y\) and \(z\) too.
system_of_odes_nonlinear_3eq_order1_type2

sympy.solvers.ode._nonlinear_3eq_order1_type2(x, y, z, t, eq)
Equations:\[a x' = (b - c) y z f(x, y, z, t)\]\[b y' = (c - a) z x f(x, y, z, t)\]\[c z' = (a - b) x y f(x, y, z, t)\]

First Integrals:\[a x^{2} + b y^{2} = C_1\]\[a x^{2} + c z^{2} = C_2\]

where \(C_1\) and \(C_2\) are arbitrary constants. On solving the integrals for \(y\) and \(z\) and on substituting the resulting expressions into the first equation of the system, we arrive at a first-order differential equation on \(x\). Similarly doing that for the other two equations, we will arrive at first-order equations on \(y\) and \(z\).
system_of_odes_nonlinear_3eq_order1_type3

sympy.solvers.ode._nonlinear_3eq_order1_type3(x, y, z, t, eq)
Equations:\[x' = c F_2 - b F_3, \enspace y' = a F_3 - c F_1, \enspace z' = b F_1 - a F_2\]
where \(F_n = F_n(x, y, z, t)\).
1. First Integral:\[a x + b y + c z = C_1\]where \(C_1\) is an arbitrary constant.
2. If we assume the functions \(F_n\) to be independent of \(t\), i.e. \(F_n = F_n(x, y, z)\), then, on eliminating \(t\) and \(z\) from the first two equations of the system, one arrives at the first-order equation\[\frac{dy}{dx} = \frac{a F_3 (x, y, z) - c F_1 (x, y, z)}{c F_2 (x, y, z) - b F_3 (x, y, z)}\]
where \(z = \frac{1}{c} (C_1 - a x - b y)\)
system_of_odes_nonlinear_3eq_order1_type4

sympy.solvers.ode._nonlinear_3eq_order1_type4(x, y, z, t, eq)
Equations:\[x' = c z F_2 - b y F_3, \enspace y' = a x F_3 - c z F_1, \enspace z' = b y F_1 - a x F_2\]
where \(F_n = F_n (x, y, z, t)\)
1. First integral:\[a x^{2} + b y^{2} + c z^{2} = C_1\]where \(C_1\) is an arbitrary constant.
2. Assuming the function \(F_n\) is independent of \(t\): \(F_n = F_n (x, y, z)\). Then on eliminating \(t\) and \(z\) from the first two equations of the system, one arrives at the first-order equation\[\frac{dy}{dx} = \frac{a x F_3 (x, y, z) - c z F_1 (x, y, z)} {c z F_2 (x, y, z) - b y F_3 (x, y, z)}\]
where \(z = \pm \sqrt{\frac{1}{c} (C_1 - a x^{2} - b y^{2})}\)
system_of_odes_nonlinear_3eq_order1_type5

sympy.solvers.ode._nonlinear_3eq_order1_type5(x, y, t, eq)
Equations:\[x' = x (c F_2 - b F_3), \enspace y' = y (a F_3 - c F_1), \enspace z' = z (b F_1 - a F_2)\]

where the \(F_n = F_n (x, y, z, t)\) are arbitrary functions.
First Integral:\[\left|x\right|^{a} \left|y\right|^{b} \left|z\right|^{c} = C_1\]
where \(C_1\) is an arbitrary constant. If the function \(F_n\) is independent of \(t\), then, by eliminating \(t\) and \(z\) from the first two equations of the system, one arrives at a first-order equation.
Information on the ode module
- Power series solutions for first order differential equations.
- Lie Group method of solving first order differential equations.
- 2nd order Liouville differential equations.
- Power series solutions for second order differential equations at ordinary and regular singular points.
It's traditional for new columnists to spend a paragraph or two introducing themselves.
Downloading and Installing the Generics Specification.
Here's what you need to do:
Make sure you're running JDK 1.4.2 (using java -version). If you're not running JDK 1.4.2, you'll need to download and install it.
A Very Brief Review of the Basics
- Type Parameters are the things in the angle brackets. They're parameters that will get replaced by either classes or parameterized types in member declarations and in object instantiations.
- A Parameterized Type is a class or interface, along with a set of type parameters.
- A Raw Type is the class with all of the type parameters removed.
Erasure

The compiler implements generics by erasure: generic code is translated back into ordinary, non-generic Java. Roughly speaking:

- Type parameters are replaced by their bounds. In the case of <T>, T is mapped to Object. In the case of <T extends Rentable>, T is mapped to Rentable, and so on.
- Casts are inserted wherever necessary, to ensure that the code compiles.
Consider, for example, a ShoppingCart for our video store. Written using generics, the code might look like this:
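A minimal sketch of such a class (the Rentable interface and its getPurchasePrice method are my assumptions; the column doesn't show them):

```java
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

// Assumed interface: the discussion only tells us Rentable items have a price.
interface Rentable {
    double getPurchasePrice();
}

class ShoppingCart<T extends Rentable> {
    private List<T> _contents = new LinkedList<T>();

    public void add(T item) {
        _contents.add(item);
    }

    // Strongly typed: the compiler guarantees every element is a T,
    // so no casts appear anywhere in the class.
    public double getTotalPurchasePrice() {
        double total = 0;
        Iterator<T> i = _contents.iterator();
        while (i.hasNext()) {
            total += i.next().getPurchasePrice();
        }
        return total;
    }

    public Iterator<T> getContents() {
        return _contents.iterator();
    }
}
```

With the type parameter in place, nothing that comes out of _contents ever needs a cast.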
I should also note that the loop in getTotalPurchasePrice is a bit unseemly these days. In JDK 1.5, we'd probably use the nifty new for syntax and write something like this:
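A sketch of that rewrite, packaged as a standalone helper so it compiles on its own (the Rentable type is again an assumption):

```java
import java.util.List;

class CartTotals {
    // Assumed stand-in for the article's Rentable interface.
    interface Rentable {
        double getPurchasePrice();
    }

    // The same computation as the Iterator version, written with the
    // JDK 1.5 enhanced for statement.
    static double getTotalPurchasePrice(List<? extends Rentable> contents) {
        double total = 0;
        for (Rentable rentable : contents) {
            total += rentable.getPurchasePrice();
        }
        return total;
    }
}
```

The for-each form reads the same as the Iterator loop but leaves the iterator plumbing to the compiler.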
Erasure and Accidental Compile-Time Conflicts
One consequence of erasure is that code sometimes doesn't compile because, after erasure, there are conflicts. You probably suspect that the following two classes will have an issue:
class ShoppingCart<T extends DVD> {
    // ...
}

class ShoppingCart<T extends VideoTape> {
    // ...
}
And the reason is pretty easy to discern: under erasure, these classes have the same name. (In fact, you'd probably have run into problems before you got as far as compiling, because both of these classes want to be stored in ShoppingCart.java.)
But there are more subtle name clashes, as well. If two methods erase to the same method, then you'll get a compile time error. For example, the following code doesn't compile, either:
class TwoForOneSpecial<T extends Rentable, W extends Rentable> {
    public void add(T newRentable) {
        //...
    }
    public void add(W newRentable) {
        //...
    }
}
It fails for a pretty obvious reason: the system maps both T and W to Rentable, so the two add methods have identical erasures, and that causes a compile-time error (the compiler currently states "add(T) and add(W) have the same erasure").
No such problem exists for the following class:
class GetAFreeVideoTape<T extends Rentable, W extends VideoTape> {
    public void add(T anything) {
        //...
    }
    public void add(W videotape) {
        //...
    }
}
GetAFreeVideoTape will compile because under erasure, it becomes:
class GetAFreeVideoTape {
    public void add(Rentable anything) {
        //...
    }
    public void add(VideoTape videotape) {
        //...
    }
}
Erasure and Static Variables
Page three of the generics specification (the June 23, 2003 draft) contains an interesting sentence:
The scope of a type parameter is all of the declared class, except any static members or initializers, but including the type parameter section itself.
This is intriguing -- it says that you can't use type parameters in static fields or methods. That is, code like the following isn't allowed:
public class Store<T> {
    private static T STATIC_VARIABLE;
}
At first glance, this seems like a strange restriction to have. But it does make sense, and it's a consequence of erasure.
The heart of the problem is the fact that when you use generics, you're not defining new classes. Erasure maps parameterized types to raw types. So, for example, Vector<String> and Vector<Integer> both get erased to the same class (namely, Vector), and the code that accesses these instances casts the values that it fetches from them.
Why is this problematic? Consider the following class (it won't compile, but pretend for a moment that it could):
public class GlobalEventQueue<T extends Event> {
    // stores all events in a global linked list so that
    // all events are serialized.
    private static LinkedList<T> QUEUE = new LinkedList<T>();

    // ...

    public static synchronized T removeEvent() {
        return QUEUE.removeFirst();
    }

    public static synchronized void addEvent(T event) {
        QUEUE.add(event);
    }
}
GlobalEventQueue<T extends Event> uses a static LinkedList<T> to store events. But you can have many instances of GlobalEventQueue<T extends Event>, and you can subclass it. Suppose you did so, by creating the classes ScreenEventQueue and DiskEventQueue.
public class ScreenEventQueue {
    private GlobalEventQueue<ScreenEvent> _myQueue;

    public ScreenEventQueue() {
        _myQueue = new GlobalEventQueue<ScreenEvent>();
    }

    public ScreenEvent removeEvent() {
        return _myQueue.removeEvent();
    }

    public void addEvent(ScreenEvent event) {
        _myQueue.addEvent(event);
    }
}

public class DiskEventQueue {
    private GlobalEventQueue<DiskEvent> _myQueue;

    public DiskEventQueue() {
        _myQueue = new GlobalEventQueue<DiskEvent>();
    }

    public DiskEvent removeEvent() {
        return _myQueue.removeEvent();
    }

    public void addEvent(DiskEvent event) {
        _myQueue.addEvent(event);
    }
}
What happens if you create an instance of ScreenEventQueue and DiskEventQueue? Well, at runtime, both of these insert and remove objects from the same instance of LinkedList. And they do so by casting the values that they remove. In fact, under erasure, ScreenEventQueue and DiskEventQueue become:
public class ScreenEventQueue {
    private GlobalEventQueue _myQueue;

    public ScreenEventQueue() {
        _myQueue = new GlobalEventQueue();
    }

    public ScreenEvent removeEvent() {
        return (ScreenEvent) _myQueue.removeEvent();
    }

    public void addEvent(ScreenEvent event) {
        _myQueue.addEvent(event);
    }
}

public class DiskEventQueue {
    private GlobalEventQueue _myQueue;

    public DiskEventQueue() {
        _myQueue = new GlobalEventQueue();
    }

    public DiskEvent removeEvent() {
        return (DiskEvent) _myQueue.removeEvent();
    }

    public void addEvent(DiskEvent event) {
        _myQueue.addEvent(event);
    }
}
Which means that, unless there's some synchronization logic somewhere else, we're going to eventually add an instance of DiskEvent to the static LinkedList and then try to cast it as a ScreenEvent when we remove it. At runtime, the transformed code will throw instances of ClassCastException.
More generally, if we let static members be defined using type parameters, it's impossible to avoid the possibility of a ClassCastException being thrown at runtime. And so the specification carefully limits the scope of type parameters to rule out code like the example above.
Bridging
So far, we've only talked about erasure. However, the current implementation of generics uses another form of code transformation as well. This process, which is referred to as bridging, consists of inserting extra methods into objects. And, like erasure, bridging is motivated by backwards compatibility.
To understand bridging, let's extend our example and have our shopping cart sort the tapes before returning them. To do this, we need to define a Comparator. In the generics specification, Comparator has become the parameterized type Comparator<T>. But it's still an interface, and it has the same methods (they've just become more strongly typed). The interface has become:
public interface Comparator<T> {
    int compare(T o1, T o2);
    boolean equals(Object obj);
}
Here's an implementation, RentableComparator:
public class RentableComparator implements Comparator<Rentable> {
    public int compare(Rentable rentable1, Rentable rentable2) {
        if (null == rentable1) {
            if (null == rentable2) {
                return 0;
            }
            return -1;
        }
        if (null == rentable2) {
            return 1;
        }
        return rentable1.getDisplayName().compareTo(rentable2.getDisplayName());
    }
}
This looks pretty similar to the code you write today, and it's exactly what you want: it's a strongly typed comparator. At compile time, the compiler can check that you're using things correctly. Incorporating RentableComparator into ShoppingCart is easy. We just use our comparator to sort the list before returning an Iterator in the getContents method.
public Iterator<T> getContents() {
    RentableComparator comparator = new RentableComparator();
    Collections.sort(_contents, comparator);
    return _contents.iterator();
}
Again, this looks exactly like the code you write today. And, if you're under deadline pressure, you might just write this code, check that it works, and move on.
But if you stop and think about backwards compatibility, this can get pretty confusing. Suppose, for example, you're also using a legacy library that puts objects into instances of Vector and then sorts them, as in the following code:
public void printSortedCollection(Collection collection, Comparator comparator) {
    Vector vector = new Vector(collection);
    Collections.sort(vector, comparator);
    Iterator i = vector.iterator();
    while (i.hasNext()) {
        System.out.println(i.next());
    }
}
The legacy library has to work, as well. When you pass in an instance of RentableComparator, the right thing will happen (the legacy library will sort the vector correctly). This happens in spite of the fact that your comparator implemented public int compare(Rentable rentable1, Rentable rentable2) and the legacy library was expecting public int compare(Object object1, Object object2).
If you're at all familiar with the way inner classes are implemented, you've already guessed the solution to this problem: the generics compiler actually inserts extra methods, called bridge methods, into the parameterized classes (or subclasses) to make sure that the legacy code works. In this case, the compiler will insert code that looks like the following into RentableComparator:
public int compare(Object obj, Object obj1) {
    return compare((Rentable) obj, (Rentable) obj1);
}
This is very nice. With bridging, you get the benefits of static typing in all of your code. And you get backwards compatibility with all of the old libraries you are currently using (or might use).
Final Thoughts
At this point, you're probably a little tired of learning about what the compiler is doing to your code behind the scenes. So in the final section of this column, I'm going to switch gears and talk about what the compiler can't do to your code (at least, as far as I've been able to puzzle it out). In particular, there are two things I wish it did, that it doesn't do.
The first thing on my wish list this Christmas is a typesafe equals method. I really hate code like the following (from the ItemCode class):
public boolean equals(Object object) {
    if (!(object instanceof ItemCode)) {
        return false;
    }
    int otherCode = ((ItemCode) object)._code;
    return (_code == otherCode);
}
This is correct, concise, and perfectly reasonable. But the cast check at the beginning smells bad to me. In most cases, class checks are in there for logical completeness, not because the developer expects them to fail. I'd wager that in many cases the code should really be:
public boolean equals(Object object) {
    if (!(object instanceof ItemCode)) {
        // huh? It's not? Wow. I didn't expect that to happen at all.
        // Oh well. I guess if it's a different class entirely,
        // it's not equal. So returning false is the safe thing to do.
        return false;
    }
    int otherCode = ((ItemCode) object)._code;
    return (_code == otherCode);
}
and that, therefore, all the instanceof really does is obfuscate a logical error. I'll grant you that it might be better to return false than throw a ClassCastException on a production server, but it's not a good thing to do.
Another problem with equals arises from the fact that type parameters are erased. The issue is that, at runtime, we can only check the raw type and not the parameterized type. Put another way, the problem is that the return value in the following code snippet evaluates to true.
Vector<String> vector1 = new Vector<String>();
Vector<Vector<String>> vector2 = new Vector<Vector<String>>();
return vector1.getClass() == vector2.getClass();
The fact that two very different types (Vector<String> and Vector<Vector<String>>) have the same class makes relying on instanceof (or getClass) problematic.
Ideally, I'd like equals to somehow be a generic method, so that I could get rid of the class casts and take advantage of static typing. But I don't see any way to do it (and still take advantage of all of the code out there that calls equals). So I'm leaving this one open as a challenge -- can anyone see a way to have static typing on the arguments to equals and still preserve backwards compatibility?
Another place you still need to cast, and cast correctly, is inside of serialization code. If you use serialization to persist objects, you're either going to use default serialization (which is often unwise, for the reasons outlined in Java Enterprise Best Practices) or you're going to wind up writing code like:
FileOutputStream fos = new FileOutputStream(_persistenceFileName);
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject(settings);
oos.close();

// .....

FileInputStream fis = new FileInputStream(_persistenceFileName);
ObjectInputStream ois = new ObjectInputStream(fis);
Hashtable settings = (Hashtable) ois.readObject();
ois.close();
Wouldn't it be nice if the compiler could check this code too?
And with that thought, I'll end this month's column. In next month's column, I'll talk about how inheritance interacts with generics and how wildcards work. | https://community.oracle.com/docs/DOC-982969 | CC-MAIN-2017-39 | refinedweb | 2,054 | 56.25 |
It appears that the template is correctly parsed, but there is no check as to whether a given template actually is a partial instantiation of a previously defined one - in this case, the class name suffices, of course. A good approach would be to store information on class templates in a relational way (i.e., allowing one to nest partial instantiations with the same class name under the most generic template).
This would mean that the associated parser would need to keep track of template parameters - not evident at all, I know, but I'm willing to try to implement this myself.
Another thing I would like to know whether it is feasible, is to also list typedefs or show any aliases for a namespace next to its name.
namespace MyNamespace  // namespace definition
{
    int a_member;
}

namespace alias_name = MyNamespace;  // namespace alias
The bottom line is, I would need to make some fundamental modifications to it, and as such I think I'll try to hack my own version of the CodeCompletion plugin, something like an AdvancedCodeCompletion, which is specific for my needs.
Is it possible to do this? Is it a setting that I have missed? | http://forums.codeblocks.org/index.php?topic=327.msg1591 | CC-MAIN-2020-24 | refinedweb | 212 | 56.49 |
The result class generated by the seqan3::search algorithm. More...
#include <seqan3/search/search_result.hpp>
The result class generated by the seqan3::search algorithm.
The seqan3::search algorithm returns a range of hits. A single hit is stored in a seqan3::search_result. By default, the search result contains the query id, the reference id where the query matched, and the begin position in the reference where the query sequence starts to match the reference sequence. This information can be accessed via the respective member functions.
The following member functions exist:
Note that the index cursor is not included in a hit by default. If you are trying to use the respective member function, a static_assert will prevent you from doing so. You can configure the result of the search with the output configuration.
Returns the index cursor pointing to the suffix array range where the query was found.
Returns the reference id where the query was found.
The reference id is an arithmetic value that corresponds to the index of the reference text in the index. The order is determined on construction of the index. | https://docs.seqan.de/seqan/3-master-user/classseqan3_1_1search__result.html | CC-MAIN-2021-21 | refinedweb | 185 | 57.57 |
Setting attributes of ui elements with RGB tuples
I hope someone can help me understand dealing with RGB colours in the ui module. When I set an attribute using a so-called RGB color, it does not seem to work as I would expect. In my example below, I looked up the CSS color lightcoral on the net. I got the corresponding values for the hex and RGB values. As you can see, I try to apply the different types of values to the background color of a button. I don't understand the results. I have worked on this for some hours, so frustrating, it is probably so simple. Any advice appreciated. Oh, the reason why I want to get the RGB values working is because I was reading an answer on Stack Overflow about how to programmatically create shades and tints from RGB components. Looks great.
import ui

color_modes = [
    # RGBA as tuple
    (240.0, 128.0, 128.0, 1.0),
    # RGBA as string
    "240.0,128.0,128.0, 1.0",
    # RGB as tuple
    (240, 128, 128),
    # CSS color
    'lightcoral',
    # CSS color as hex
    '#F08080',
    # the RGBA tuple after I printed it out from CSS color lightcoral,
    # then pasted it into the tuple here
    (0.9411759972572327, 0.5019609928131104, 0.5019609928131104, 1),
    # as above but represented as a string
    "(0.9411759972572327, 0.5019609928131104, 0.5019609928131104, 1)",
]

_pad = 10

def std_btn(title=''):
    btn = ui.Button(title=title)
    btn.border_color = 'black'
    btn.border_width = 1
    btn.height = 64
    btn.width = 540
    btn.font = ('<System>', 14)
    btn.tint_color = 'green'
    return btn

if __name__ == '__main__':
    v = ui.View(name='lightcoral CSS color background test')
    for i in range(0, len(color_modes)):
        btn = std_btn('button' + str(i + 1) + ' ' + str(color_modes[i]))
        btn.y = (btn.height * i) + (i * _pad)
        btn.background_color = color_modes[i]
        v.add_subview(btn)
    v.background_color = 'white'
    v.present('sheet')
RGB(A) tuples in pythonista uses a float number between 0 and 1 instead of the range between 0 to 255.
@Sebastian, thanks. The resources I am looking at are all integers, 0-255. Is there an easy way to convert them to floats that you know of? As far as I can see there is no 1:1 relationship.
Sorry I got it. Float division eg, 240.0 / 255.0 is very close. I just didn't know what to search for. Thanks for the help
The code at might help. It creates 752 Pythonista colors from tkinter. Part of its output is
light coral = (0.9411764705882353, 0.5019607843137255, 0.5019607843137255)

;-)
To convert the tuple to the Pythonista format, you'd simply have to create a new tuple with each number divided by 255.
colour = (240, 128, 128)  # as you would have had before
colour = (colour[0]/255.0, colour[1]/255.0, colour[2]/255.0)  # the actual conversion
EDIT: Here's a better version:
colour = (240, 128, 128)
colour = tuple(entry/255.0 for entry in colour)
Using generator expressions not only makes it shorter, but also easier to read.
Thanks for all the feedback. Very useful. Been playing with
def calc_color(RGB, shade_percent): return tuple((entry / 255.0)* shade_percent for entry in RGB)
Saw the shading idea on Stack Overflow. Looks like it could give some pleasant results with little effort. I don't really play with colours, so this is all new to me, but it is fun.
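In case it helps anyone following along, the shade/tint idea can be sketched like this (the function names and factors are mine, not from the Stack Overflow answer):

```python
# Shading scales each channel toward black; tinting moves it toward white.
# Inputs are 0-255 RGB tuples; outputs are 0.0-1.0 floats, which is what
# Pythonista's ui module expects.

def shade(rgb, factor):
    """factor=0 -> unchanged, factor=1 -> black."""
    return tuple((c / 255.0) * (1.0 - factor) for c in rgb)

def tint(rgb, factor):
    """factor=0 -> unchanged, factor=1 -> white."""
    return tuple((c + (255.0 - c) * factor) / 255.0 for c in rgb)

print(tint((240, 128, 128), 1.0))   # (1.0, 1.0, 1.0)
print(shade((240, 128, 128), 0.5))  # each channel at half strength
```

Stepping factor from 0 toward 1 gives a smooth ramp of shades or tints of the base colour.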
Ok, I am sure there is a lot to pick apart about my code. But just been experimenting and learning from the input I received here. I just did this small example. An achievement for me ...
import ui, clipboard, console
from random import randint

def calc_color(RGB, shade_percent):
    return tuple((entry / 255.0) * shade_percent for entry in RGB)

_base_RGB_color = (128, 255, 128)
_percent_increment = .02

'''
i had to declare and use _my_view, because i did not know how to recover
the ui.View object from a ui.ButtonItem. i am guessing you cant, given it
does not inherit from ui.View. i am sure a better way to do it, as most
of my code
'''
_my_view = None

def std_btn(title='', x=0, y=0):
    btn = ui.Button(title=title)
    btn.border_color = 'black'
    btn.border_width = .5
    btn.width = 54
    btn.height = 54
    btn.font = ('<System-Bold>', 18)
    btn.x, btn.y = x, y
    btn.tint_color = 'blue'
    btn.action = btn_action
    return btn

def random_rgb():
    color = (0, 0, 0)
    return tuple((entry + randint(0, 255)) for entry in color)

def btn_action(sender):
    clipboard.set(str(sender.background_color))
    console.hud_alert('Copied', duration=1)

def redraw_grid(sender):
    v = _my_view
    color = random_rgb()
    v.name = str(color)
    percent = 0.
    for btn in v.subviews:
        if type(btn) == ui.Button:
            btn.title = str(percent)
            btn.background_color = calc_color(color, percent)
            percent += _percent_increment
    draw_linear(_my_view, color)

def draw_linear(v, color):
    percent = 0.
    for lb in v.subviews:
        if type(lb) == ui.Label:
            lb.background_color = calc_color(color, percent)
            percent += _percent_increment

if __name__ == '__main__':
    color = random_rgb()
    v = ui.View(name=str(color))
    btn = ui.ButtonItem('Random RGB Color')
    btn.action = redraw_grid
    v.right_button_items = [btn]

    # draw button grid
    percent = 0.
    for i in range(0, 10):
        for j in range(0, 10):
            ctl = std_btn(str(percent), x=j * 54, y=i * 54)
            ctl.tint_color = 'white'
            percent += _percent_increment
            v.add_subview(ctl)

    # draw linear stripe
    percent = 0.
    for i in range(0, 100):
        lb = ui.Label(str(percent))
        lb.width = 5.4
        lb.height = 36
        lb.y = 540
        lb.x = i * 5.4
        lb.border_width = 0
        v.add_subview(lb)

    _my_view = v
    redraw_grid(v)
    v.present('sheet')
@Moe. Really, I love the syntax for the generator. I sort of see how great it is. But because I am so new, it will take a while to stick in my brain and, more importantly, for me to be able to use it more generically and naturally. I struggled to write the random_rgb() func using the generator. But I think what I did in the end was correct. Having to incorporate a function threw me. If I overcomplicated it, I would love to hear back. Thanks again, great learning experience for me as an old guy :)
Simplifying
random_rgb():
def random_rgb(): return randint(0, 255), randint(0, 255), randint(0, 255)
Also in the Pythonista beta, you need to add the line:
v.width = v.height = 540
near the bottom of the script or the view is too small to see or dismiss.
@ccc, thanks for the comments. Yeah, I can see your random_rgb() is a lot simpler. I was just trying to get into the groove of using the other syntax. I can imagine for larger tuples it could be very nice. It is just hard for me to think this way naturally at the moment. Need a lot more experience. But I will keep writing small programs like this to learn.

I had previously specified the width and height of the view, but somehow I removed it from my code when I was trying to clean up. I realise there are far too many literals in my code anyway. I really haven't got my head around the presentation styles and orientation switching as yet, let alone iPad vs. iPhone.
But thanks again.
If you like tuples, check out namedtuples...
import collections

rgb = collections.namedtuple('rgb', 'red green blue')
color = rgb(.1, .2, .3)
print(color)
# color.blue = .5  # can not modify a tuple
@ccc, thanks, I did look at namedtuples before. I see many people have ideas about their performance. For me, it is not an issue, as I am not doing anything demanding on the processor. But again, people say Python is easy to learn. And yes, you can do some amazing things with little knowledge, but in reality you need to learn a lot. Not to put anyone off, but still, it is not as simple as people make out. I can sort of speak Thai (I am Australian), but sort of... So many versions of being able to speak/understand/write a spoken language. I see correlations with the Python language. That being said, I wish I had a language like this in the dark ages :) It is inspired!
Book Information
CFP Exam Calculation Workbook: 400+ Calculations to Prepare for the CFP Exam (2018 Edition)
Description
"The CFP Exam Calculation Workbook" provides over 400 calculation questions to prepare for the demanding CFP Exam. Master exam topics with intensive practice in the areas you'll find on the test. Whether you’re challenging the exam for the first time or trying again after an unsuccessful attempt, you will learn the critical skills needed to master the exam.
Included are practice exams for the following topics:
• Financial Planning Principles
• Life and Disability Insurance
• Income Planning
• Investments
• Retirement Planning
• Estate Planning
Book Preview
CFP Exam Calculation Workbook - Coventry House Publishing
Contents
Section 1: Financial Planning Principles
Questions
Answer Key
Section 2: Life and Disability Insurance
Questions
Answer Key
Section 3: Income Planning
Questions
Answer Key
Section 4: Investments
Questions
Answer Key
Section 5: Retirement Planning
Questions
Answer Key
Section 6: Estate Planning
Questions
Answer Key
Section 1
Financial Planning Principles
Questions
1. Liz wants to deposit an amount today that will last for 5 years. She needs to withdraw $1,300 at the beginning of each 6-month period, and she’ll earn 8% compounded semiannually on her investments. How much does she need to deposit to achieve her goal?
A. $8,250.17
B. $9,364.01
C. $10,965.93
D. $11,374.26
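As a cross-check on this style of problem (my own sketch; the workbook itself doesn't provide code), the present value of an annuity due can be computed directly:

```python
# PV of an annuity due: payments occur at the *beginning* of each period,
# so the ordinary-annuity factor is multiplied by (1 + rate).
def pv_annuity_due(pmt, rate, periods):
    return pmt * (1 - (1 + rate) ** -periods) / rate * (1 + rate)

# Question 1: $1,300 per half-year for 5 years at 8% compounded semiannually,
# so the per-period rate is 0.08 / 2 and there are 5 * 2 periods.
pv = pv_annuity_due(1300, 0.08 / 2, 5 * 2)
print(round(pv, 2))  # 10965.93
```

The same function, with the `(1 + rate)` factor dropped, handles end-of-period (ordinary annuity) questions.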
2. Mary would like to receive the equivalent of $35,000 in today’s dollars at the beginning of each year for the next 7 years. She assumes that inflation will average 3%, and she can earn a 7% compound annual rate of return on her investments. How much does Mary need to invest today in order to achieve her goal?
A. $217,039.87
B. $219,172.72
C. $221,361.56
D. $224,611.80
3. Jacob wants to start his own business in 4 years. He needs to accumulate $150,000 in today’s dollars to fund the start-up costs. He assumes that inflation will average 5%, and he can earn an 8% compound annual rate of return on his investments. What serial payment should Jacob invest at the end of the first year to achieve his goal?
A. $34,175.13
B. $35,083.65
C. $36,472.82
D. $37,727.11
4. John deposited $425 into his money market account at the end of each month for the past 4 years. His account is now worth $24,915. If interest was compounded monthly, what was the average annual compound return that he earned over the 4-year period?
A. 9.9%
B. 10.3%
C. 10.7%
D. 11.1%
5. What is the IRR of a 1-year investment in a REIT, if $200 is invested at the beginning of each month? Assume the REIT’s end of year value is $2,500.
A. 6.33%
B. 7.52%
C. 8.96%
D. 9.02%
6. Ryan wants to save $65,000 for a down payment on a new motor home in 4 years. He can invest $1,200 at the beginning of each month, and he expects to earn 8% compounded monthly on his investments. How much will Ryan have saved in 4 years?
A. $66,454.35
B. $67,129.34
C. $68,070.70
D. $69,310.18
7. Jim would like to save $60,000 for his son’s college education. His son will begin college in 12 years. Assume that Jim can invest $15,000 now, and $500 at the end of each 3-month period. What annual rate of return is required for Jim to achieve his goal?
A. 4.52%
B. 4.79%
C. 4.86%
D. 5.03%
8. Christine’s investment of $3,400,000 produces the following cash flows:
Year 1: $2,100,000
Year 2: $2,200,000
Year 3: $1,600,000
If the discount rate is 7%, what is the net present value (NPV)?
A. $1,670,749.25
B. $1,790,258.62
C. $1,920,102.46
D. $1,980,638.10
9. Rachel takes out a $170,000, 15-year fixed loan at 5.50%. How much interest will she pay by the end of the loan period?
A. $80,028
B. $84,514
C. $89,216
D. $93,073
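Question 9 is a standard amortization computation (again my own sketch, not the book's answer key):

```python
# Fixed-rate mortgage: the monthly payment comes from the annuity formula,
# and total interest = total of all payments minus the principal repaid.
def total_interest(principal, annual_rate, years):
    r = annual_rate / 12          # monthly rate
    n = years * 12                # number of monthly payments
    payment = principal * r / (1 - (1 + r) ** -n)
    return n * payment - principal

interest = total_interest(170_000, 0.055, 15)
print(round(interest))  # total interest paid over the 15 years
```

The result lands closest to the first of the listed figures.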
10. William purchased an investment for $750. He kept the investment for 6 years before selling it. If the internal rate of return for the 6-year period was 7%, what was the final selling price?
A. $1,125.55
B. $1,137.70
C. $1,148.91
D. $1,156.65
11. Jane has been investing $4,125 at the end of each year for the past 18 years. Assuming that she has earned 6.35% compounded annually on her investments, she has accumulated a total of:
A. $128,482.90.
B. $131,794.07.
C. $134,638.44.
D. $137,120.73.
12. Sam purchased an investment for $41,210. He expects it will increase in value at a rate of 7.25% compounded annually for the next 5 years. If his expectations are correct, how much will his investment be worth at the end of the fifth year?
A. $51,365.62
B. $54,599.34
C. $55,182.29
D. $58,477.54
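The compound-growth arithmetic behind this kind of question can be sketched directly (values taken from the question above):

```python
principal = 41_210
rate = 0.0725
years = 5

# Future value under annual compounding: FV = PV * (1 + i)^n
fv = principal * (1 + rate) ** years
print(fv)  # ≈ 58,477.54
```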
13. Beth wants to accumulate $105,000 in 8.5 years to fund her child’s college education. She expects to earn an annual rate of 12.5% compounded quarterly. How much does she need to invest today to achieve her goal?
A. $36,882.01
B. $37,543.98
C. $38,793.35
D. $39,181.69
14. Tony’s investment of $200 produced the following cash flows:
Year 1: $80
Year 2: $110
Year 3: $120
If the required rate of return is 12%, what is the net present value (NPV)?
A. $44.53
B. $46.18
C. $48.34
D. $50.27
15. Megan expects to receive $1,250,000 from an irrevocable trust in 17 years. If the trust is earning an annual rate of 9% compounded quarterly, its current value is:
A. $275,298.90.
B. $277,346.01.
C. $280,321.32.
D. $283,902.87.
16. Carl has been investing $3,200 at the end of each 6-month period to accumulate funds for his daughter’s college tuition. The funds are earning an annual rate of 5% compounded semiannually. When Carl’s daughter begins college in 6 years, how much will the account be worth?
A. $40,763.45
B. $42,526.78
C. $43,992.15
D. $44,145.77
17. Angela purchased an investment for $950. She kept the investment for 5 years and then sold it for $1,300. What was the investment’s internal rate of return (IRR)?
A. 6.5%
B. 7.8%
C. 8.1%
D. 8.5%
18. Alpha Corp. provides the following data regarding cash flows for a capital project:
If the required rate of return is 6%, the net present value (NPV) is:
A. $13,813.96.
B. $14,671.02.
C. $15,092.77.
D. $16,302.25.
19. Jeff sued his former employer and won a judgment that provides him $3,000 at the end of each 6-month period for the next 7 years. If the account that holds his settlement earns an average annual rate of 5% compounded semiannually, how much was the employer initially required to pay Jeff?
A. $33,170.16
B. $35,072.74
C. $37,517.24
D. $39,839.07
The following information relates to questions 20 – 21.
Beta Corporation is investing $900,000 in a new production facility. The present value of the future after-tax cash flows is estimated to be $950,000. Beta Corporation currently has 80,000 outstanding shares of stock with a current market price of $14.00 per share.
20. What will be the value of Beta Corporation after the investment?
A. $1,120,000
B. $1,170,000
C. $1,190,000
D. $1,204,000
21. What will be the value of Beta Corporation’s share price after the investment?
A. $14.63
B. $15.26
C. $15.89
D. $16.12
22. Assuming semiannual compounding, what is the current price of a zero-coupon bond with a $1,000 face value, a yield-to-maturity of 7.98%, and 4 years until maturity?
A. $710.27
B. $719.78
C. $725.53
D. $731.25
23. A bond has a market price of $920 and a face value of $1,000. If the bond pays a 12% semiannual coupon payment and matures in 4 years, what is the bond’s yield-to-maturity?
A. 14.72%
B. 15.19%
C. 16.37%
D. 17.68%
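Yield-to-maturity has no closed form, so a financial calculator searches for it numerically. The same search can be sketched with simple bisection (an illustration only; the function names below are my own):

```python
def bond_price(face, coupon_rate, periods_per_year, years, ytm):
    """Price a bond by discounting its coupons and face value at ytm."""
    n = periods_per_year * years
    c = face * coupon_rate / periods_per_year   # coupon per period
    r = ytm / periods_per_year                  # yield per period
    coupons = sum(c / (1 + r) ** t for t in range(1, n + 1))
    return coupons + face / (1 + r) ** n

def ytm_bisect(price, face, coupon_rate, periods_per_year, years,
               lo=0.0001, hi=1.0, tol=1e-8):
    """Solve for yield-to-maturity by bisection (price falls as yield rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, periods_per_year, years, mid) > price:
            lo = mid   # computed price too high -> yield must be higher
        else:
            hi = mid
    return (lo + hi) / 2

# Question 23: $920 price, $1,000 face, 12% semiannual coupon, 4 years
ytm = ytm_bisect(920, 1000, 0.12, 2, 4)
print(round(ytm * 100, 2))  # ≈ 14.72
```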
24. If comparable bonds are yielding 11.2%, what is the current price of a $1,000 face value bond that pays a 9% semiannual coupon payment and matures in 7 years?
A. $819.14
B. $840.88
C. $895.17
D. $908.04
25. Jessica invests $4,000 today with the expectation that she will receive $9,000 in 5 years. If interest is compounded weekly, what is the average annual rate of return that Jessica will earn?
A. 13.38%
B. 14.12%
C. 15.85%
D. 16.24%
26. What is the IRR of a bond with a current price of $965, a face value of $1,000, a 7% semiannual coupon, and 3 years until maturity?
A. 6.47%
B. 7.10%
C. 8.34%
D. 9.56%
27. Assuming semiannual compounding, what is the IRR of a zero-coupon bond with a $1,000 face value, a current market price of $840, and 3 years until maturity?
A. 5.72%
B. 5.90%
C. 6.03%
D. 6.12%
28. If comparable bonds are yielding 8.8%, what is the intrinsic value of a bond with a $1,000 face value, a 6% semiannual coupon, and 5 years until maturity?
A. $888.68
B. $900.97
C. $912.33
D. $922.35
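The bond valuation behind a question like this one can be sketched by discounting each semiannual coupon and the face value at half the comparable yield (figures taken from the question above):

```python
face = 1_000
coupon = 30          # 6% annual coupon, paid semiannually -> $30 per period
periods = 10         # 5 years x 2 periods per year
r = 0.088 / 2        # comparable yield per semiannual period

price = sum(coupon / (1 + r) ** t for t in range(1, periods + 1))
price += face / (1 + r) ** periods
print(price)  # ≈ 888.68
```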
29. Melinda’s annual gross income is $50,000. If she pays $15,000 in annual income tax, then her total consumer debt payments, such as credit cards and auto loans, should not exceed __________ per month.
A. $583.33
B. $816.67
C. $833.33
D. $1,166.67
30. Victoria’s annual gross income is $100,000. If she pays $25,000 in annual income tax, then her total housing debt costs, including principal, interest, taxes, and insurance, should not exceed __________ per month.
A. $1,166.67
B. $1,750.00
C. $2,333.33
D. $3,000.00
31. Adam’s annual gross income is $80,000. If he pays $20,000 in annual income tax, then his total debt payments should not exceed __________ per month.
A. $1,866.67
B. $2,200.33
C. $2,400.00
D. $2,600.67
32. Sigma Lending Company’s underwriting requirements specify a maximum housing debt-to-income ratio of 28%. If the applicant discloses annual earnings of $75,000, what is the maximum monthly PITI payment the mortgage company will accept?
A. $1,750
B. $1,825
C. $6,250
D. $21,000
33. George earns an annual income of $95,000, and he would like to purchase a new house. He expects to make a 20% down payment and finance the remaining amount. If the mortgage lender will provide a loan equal to 2.5 times annual income, what is the maximum house that George can afford to purchase?
A. $190,000
B. $237,500
C. $284,425
D. $296,875
34. Cameron assumed the loan for a property that he recently acquired. If loan interest payable for the current month is $610, and the interest rate is 4.75%, then the principal balance is:
A. $148,390.65.
B. $154,106.56.
C. $158,126.60.
D. $164,975.50. | https://www.scribd.com/book/366352188/CFP-Exam-Calculation-Workbook-400-Calculations-to-Prepare-for-the-CFP-Exam-2018-Edition | CC-MAIN-2020-16 | refinedweb | 2,002 | 69.41 |
Working with ServletRequest's setCharacterEncoding for UTF-8 form submissions
Aspire, the product that I developed over the last few years while I was working with Java, servlets, JSP and XML, is being used on some SBIR projects involving JetSpeed and a few other open source tools. I got a call the other day indicating that one of the forms is not accepting Vietnamese input. They are getting invalid characters placed into the MySQL database. The usage is pretty straightforward. The user will see a form in a browser and will proceed to type Vietnamese (perhaps copied and pasted from a Vietnamese app). This form will then be submitted to the server. The server is expected to parse the form input into parameters. Mahaveer suggested that perhaps I should be using some kind of character encoding to retrieve the parameters.
When it comes to complicated things such as setting digital watches, programming VCRs, ordering a sub for your spouse (who always has a very discriminating taste), ordering at a local McDonalds drive-through, self checkouts at Home Depot, and of course software, I am a minimalist. If something is not hampering my imagination and productivity I usually don't upgrade. To exaggerate and make a point, I would be quite happy with Windows 95, Tomcat 1.x, FrontPage, and JBuilder 3 for developing server side Java applications that run in any container, while David prods me all the time to upgrade to XP, telling me how wonderful it is. (You can tell I am relegated to the autopilot Windows world.) Anyway, having made the point, I still compile Aspire with servlets 2.1 although the current release may be at 2.4. Not that I don't like the new stuff in this particular instance, but I want to be as backward compatible as possible as long as it is not crippling my style :-). For the curious, I did upgrade to XP because I needed better photo printing software.
Anyway, back to the encoding issue. I don't remember seeing any encoding issues while trying to read form submission parameters before. I remember porting one of our web sites to Japanese without any problem last year. Based on that assumption, I advised Mahaveer that it should not be a problem and that servlets would probably figure out the necessary details to retrieve the parameters. Of course, I was proven wrong.
Character encoding, Form submissions and URLs in browsers
This is what happens.
Default Servlet Behavior
The browser is supposed to set the character encoding in the content type of the post stream. As per the servlet documentation, most browsers do not do this at this time. What this means is that the server side will have to assume a certain encoding to read the stream and also to retrieve any parameters from that stream. Apparently servlets will use the latin-1 character set. Why they do this instead of utf-8 (utf-8 being seemingly more common) I am not sure. This will result in errors. This is what is happening in the above case. Whether this behavior is any different in servlets before 2.3, I am not sure. Either way I have a problem, as Mahaveer is running Tomcat 4.x, which is certainly running on a servlet API that is at least 2.3.
get/setCharacterEncoding(): Suggested mechanism in 2.3
2.3 servlets provide two methods to get and set character encoding on the ServletRequest interface. In cases where the client or the container does not set the encoding, the get method will return null. It is clear that a client has the responsibility and means to set the encoding. But one might ask how a container such as Tomcat can do this. Well, as Tomcat can intercept requests, it can use some external scheme, such as the locale of the browser, to actively determine what the encoding could be. I will discuss this topic in more depth later. Although it is not clear what the best strategy is to determine the character encoding, servlets do offer a set method to set the encoding. This method takes a string value of the encoding such as "UTF-8". If this method is called prior to doing anything else with the incoming stream, we will be in good shape.
Determining the encoding
In this particular case, as we are testing with IE and as IE seems to use utf-8 all the time, the choice is easy enough. I would check the get method first to see if it returns null. If it does, then I need to set the character encoding to utf-8. Perhaps I can be a bit nice and put this in a config file in case one has to change it globally. Another way to do this might be by examining the incoming locale and using a locale-to-character-encoding map. The web.xml already allows this by providing such a table for the response. Even in that case there is a problem. In the case that Mahaveer is dealing with, his locale is English but the encoding is UTF-8 as opposed to latin-1. In my mind this is still an open question. A quick search on the internet on the subject has not been as fruitful as I had hoped.
Delving into the Locale dependency
The web.xml allows for a configuration like the following. Provided these mappings are in place, one doesn't have to use the set encoding explicitly for the response. In the servlet spec this section is described as part of providing the encoding hints for the response. This has no impact on the incoming request. Nevertheless, it won't be that hard to write a servlet filter that can do this pretty easily if this methodology works for you on the request side.
<pre><code>
<locale-encoding-mapping-list>
<locale-encoding-mapping>
<locale>ja</locale>
<encoding>Shift_JIS</encoding>
</locale-encoding-mapping>
</locale-encoding-mapping-list>
</code></pre>
So how to fix the problem
Based on what is learned, it is not hard to make a change to the code where the request object is set with its encoding right upfront. But the trouble with this is that the code now is not backward compatible any more with servlets 2.2 and before. Moreover the change is invasive as if you were to think of a different strategy to determine the character encoding. The obvious way then is to do the change via a servlet filter. With such an approach you can drop in more servlet filters in the future. In my case of Aspire there is an internal http event model that allows me to do the same via configuration and not depend on entries into web.xml. Here is the actual code from Aspire:
<pre><code>
package com.ai.servlets;

import com.ai.application.utils.*;
import com.ai.application.interfaces.*;
import javax.servlet.http.*;
import java.util.*;

public class HttpRequestCharacterEncodingHandler extends DefaultHttpEvents
{
   public void beginRequest(HttpServletRequest request,
                            HttpServletResponse response)
      throws AspireServletException
   {
      try
      {
         String enc = request.getCharacterEncoding();
         if (enc == null)
         {
            String encoding =
               AppObjects.getValue(m_requestName + ".encoding", "UTF-8");
            request.setCharacterEncoding(encoding);
            AppObjects.log("Info:setting encoding to " + encoding);
         }
      }
      catch(java.io.UnsupportedEncodingException x)
      {
         throw new AspireServletException(
            "Error: Encoding error while setting http request char encoding", x);
      }
   }//eof-function
}//eof-class
</code></pre>
Support for encoding in JSPs
If you are using JSPs to process your form inputs, you may be inclined to think that the page directive that sets the content type can automatically resolve this issue. It doesn't. This directive only works for the response. You still have to set the character encoding as in direct servlets. Nevertheless, for those that use JSTL there is a tag, fmt:requestEncoding, that can do the same. JSP also has a pageEncoding directive that deals with the encoding used to write the JSP page itself. For the discussion here, that directive has no relevance to request processing.
Well, I don't do any such fancy thing but it still works!
Even if you don't set the request encoding, things seem to work. This must be because the latin-1 and utf-8 character sets encode the characters used in English (the ASCII range) identically. That is my suspicion as to why it works.
What are the allowed character encodings for servlets?
Because the set method takes an untyped string value for the encoding, it is important to get the names right. There is a URL with a large list of these character encodings (see the references below). Whether the servlets API supports all or only a portion of them is not clear from the docs. The Java web services tutorial has a good article on understanding these character sets as well. Appendix F of the same document states that the Java platform supports only 4 encoding schemes:
US-ASCII
ISO-8859-1
UTF-8
UTF-16
It is worth reading this appendix F as this explains the difference between UTF-8 and UTF-16 concisely and nicely.
References
1. java web services tutorial, Chapter 23
2. List of Java encoding schemes
3. An explanation of Character sets and encodings in Servlets
4. Servlets 2.4 spec, The Request section under internationalization
5. My further notes on this and other java related topics
Development/Tutorials/Plasma4/Ruby/Blinker
Inside the svg

First...
def paintInterface(painter, option, contentsRect)
  puts "ENTER paintInterface, paint height is " + @y.to_s
  @svg.resize(size())
  @svg.paint(painter, 0, 0, "lcd_background")
The paintInterface is another API call..
The elements are rendered on top of each other in the listed order and when all are done they are painted as one to the screenbuffer. Try out moving the line of lcd_background further down the list.
...
This is all we need to program our own monochrome LCD plasmoids. In the next tutorial I'll explain how to create them with an SVG drawing application, as I've already played with SVG drawing.
Interaction. | https://techbase.kde.org/index.php?title=Development/Tutorials/Plasma4/Ruby/Blinker&oldid=49708 | CC-MAIN-2019-51 | refinedweb | 111 | 57.16 |
A new runtime for Lever 0.10.0
I set up the environment for myself to rewrite Lever's runtime. It is a task that is going to take at least 2 months to finish.
In the current version there are 14000 lines of Python and 30000 lines of Lever code.
Since one of the objectives in rewriting is to improve the readability of the code, you may like to read the code in the 'next' branch of the Lever repository.
The rewrite is appropriate because the changes that are needed will change everything anyway. I cleaned out myself a 'next' branch so that the existing runtime stays in working order while I work on the new branch.
Type system changes
I finally discovered how the subtyping type inference can be made to work. I turned the suoml project into a working prototype.
Key to this was understanding that if you have multiple potential combinations of inputs, then you have to go through all of them. Also there is no sense in doing it as a post-processing step. It should be done at the same time when the type constraint are eliminated. There are many other rules that derived from this observation, they need their own blog post at some time.
There is one remaining very hard problem that is the simplification of the resulting type annotations. Some of the ways MLsub does it still apply, but we still end up with an operator graph that doesn't seem like a good fit for simplifying through subset construction.
Computer algebra modules
The main motivation for this change sprung from the necessity to run linear algebra on more things than floats.
The new design replaces multimethods by operations that resolve themselves through coercion. You can only create coercions between types that you create, and it is checked that the produced coercions are unique.
All of this is compatible with the type inferencing, so it remains available if code must run through an optimizing compiler.
I did try to produce good algebraic structures in my previous language. Once I figured out enough things and studied enough abstract algebra, I could have implemented them in the existing language. Doing it right from the beginning is probably a better solution.
Documentation format
The Texopic syntax that I experimented on has to go away. It doesn't look bad at first but has too many surprises and is difficult to implement. I'm planning to move into XML that I didn't even consider at first.
It turns out that plain text was not going to be the mode I write documentation in. Good documentation also needs extensive use of graphics and other means of communication. You need a graphical editor to produce good documentation with the most comfort you can get.
Additionally I explored literate programming and realised that it might be what I want to do in the future. I think it is near to my original plan of writing documentation, upwards from the source code.
So how do you keep the source code as readable text while you have a fancy XML-based graphical documentation format?
I guess this would work, here's file listing:
example.lc
example.lc.doc
The example.lc contents:
import display
# @main This comment would be grabbed from here and end
# up into the documentation from here. The snippet would
# continue up to the next point.
main = ():
    dpy = display.open()
    print(dpy)
The example.lc.doc:
<doc src="example.lc"/>
<h1>The example documentation</h1>
The plain text written would cut itself into
paragraphs by itself when detecting empty lines in the
input text. Everything requiring to be vertical would
cut the paragraph automatically.
You could drop the full snippet like this:
<full-snippet
The snippet would be also available as plain paragraphs
and separable with the 'snippet' 'comment' -tags.
Likewise the documentation format could grasp to the
module contents and reference those.
Several things here make it meaningfully different from the usual XML documents you read. It is the recognition that the XML tags themselves are extremely heavy for the reader.

Also, tracking indentation in a file that can be several hundred pages long is so tedious that you shouldn't at least have the mandatory outermost html and body there.
Documentation effort
I think I have reached the point where documentation can be pleasing to write. But the problem is that it has become too pleasing and I write about all the right things, but don't provide all the context or structure for people to understand my text.
Additionally I have noticed that people simply do not read everything. If I write too much, they start skimming and scanning the texts I write. I notice I do that myself as well, when reading my own texts.
It helps to keep the paragraphs short, but really what helps best is to write to the point and remove or move away everything that goes sideways from what you're trying to tell.
Also I've noticed tendencies to write "marketing hype", the kind of text that tries to impress people but does not really say anything that I would really want to communicate, or tells everything that any other general run-off-the-mill text would write. If anyone else could write it then perhaps I should not write it?
To get better at writing, I propose that the biggest help comes from getting better at reading. Dan Kurland's critical reading may be a good starting point for your research. Once you've become a good reader, you may then become a good writer by reading your own texts as if they were written by someone else.
The lesson about scalability
Overall I do this rewrite to make Lever more scalable without making it considerably less dynamic. The first two changes will make the code much more compact than it was: the more effective each line of code is, the less there is left to maintain that would limit you from scaling up.
You are not always sure what people mean when they say 'scalable'. They might mean to scale up in features or scale up into handling larger inputs at once.
Overzealous static typing hinders feature scalability because it introduces the danger of introducing excessive or unsound invariants, for example, when representing gender with a boolean when you are tracking snails because you were previously tracking mice. Type checker zealotry finds it important to validate that the value of a variable stays within a given set of elements, even if you didn't do anything with that information afterwards.
Static typing may improve mass scalability, but the amount of work and effort you do depends on where you have to start, and how much you have to bend the rest of the software to follow those changes. If everything is statically typed then that effort is humongous.
Static typing may improve feature scalability too if the type annotations are correct and to the point, correctly documenting the boundaries where some abstractly defined operation succeeds. There's also value if you can verify that interfaces do not inadvertently change without a version/standard update.
Ironically people who blast loudest about the merits of static typing are themselves likely to get it wrong in the worst possible ways.
Whether it is one kind of scalability or another, I think the main barrier to scaling up software comes from the documentation effort, or the failure of that documentation effort.
If you have the documentation just right, and it shows that you understand your system extremely well, there won't be limits for being able to scale. | http://boxbase.org/entries/2018/may/14/new-runtime/ | CC-MAIN-2018-34 | refinedweb | 1,284 | 62.07 |
NS(1)                                                                    NS(1)

NAME
       ns - display name space

SYNOPSIS
       ns [ -r ] [ pid ]

DESCRIPTION
       Ns prints a representation of the file name space of the process
       with the named pid, or by default itself. The output is in the form
       of an rc(1) script that could, in principle, recreate the name
       space. The output is produced by reading and reformatting the
       contents of /proc/pid/ns.

       By default, ns rewrites the names of network data files to
       represent the network address that data file is connected to, for
       example replacing /net/tcp/82/data with tcp!123.122.121.9. The -r
       flag suppresses this rewriting.

FILES
       /proc/*/ns

SOURCE
       /sys/src/cmd/ns.c

SEE ALSO
       ps(1), proc(3), namespace(4), namespace(6)

BUGS
       The names of files printed by ns will be inaccurate if a file or
       directory it includes has been renamed.
After running this snippet:
import numpy as np

a = np.array([0.112233445566778899], dtype=np.float32)
b = np.array([0.112233445566778899], dtype=np.float64)
print(a, b)
It prints out:
[0.11223345] [0.11223345]
Why do np.float32 and np.float64 give the same output? The answer: displaying the full precision of a numpy array requires setting print options.
Let's set the option before printing:
import numpy as np

a = np.array([0.112233445566778899], dtype=np.float32)
b = np.array([0.112233445566778899], dtype=np.float64)
np.set_printoptions(precision=18)
print(a, b)
The result has become:
[0.112233445] [0.1122334455667789]
which looks much more reasonable.
Furthermore, why does it print '0.1122334455667789', which has only 16 digits of precision instead of 18? Because float64 only supports about 15~16 significant decimal digits, as this reference says.
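One way to see these limits directly is numpy's finfo, which reports the approximate number of decimal digits each floating type can represent:

```python
import numpy as np

# 'precision' is the approximate number of decimal digits the type can hold
print(np.finfo(np.float32).precision)  # → 6
print(np.finfo(np.float64).precision)  # → 15
```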
There are two parquet files which look different when compared with 'cksum'. But after we export them as CSV files:
import pandas as pd

df = pd.read_parquet("my.parquet")
df.to_csv("my.csv")
...
The two output CSV files are exactly the same.
Then what happened in those two parquet files? Does a parquet file have some hidden metadata in it?
As a matter of fact, a parquet file will save the 'index' of a Pandas DataFrame while a CSV file will not. If we drop the index before writing out the parquet file:
df = df.reset_index(drop=True)
df.to_parquet("my.parquet")
...
These two parquet files would become identical. | https://donghao.org/2020/04/27/recent-learned-tips-abou-numpy-and-pandas/ | CC-MAIN-2021-39 | refinedweb | 236 | 71.21 |
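One detail worth knowing: reset_index returns a new DataFrame rather than modifying the original in place. A small sketch of that behavior (pandas' to_parquet also accepts index=False, which skips writing the index in the first place):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]}, index=[10, 20, 30])

# reset_index returns a fresh frame with a default RangeIndex;
# the original frame keeps its old index
df2 = df.reset_index(drop=True)
print(list(df.index))   # → [10, 20, 30]
print(list(df2.index))  # → [0, 1, 2]

# alternative (not exercised here): df.to_parquet("my.parquet", index=False)
```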
strftime
From cppreference.com
Converts the date and time information from a given calendar time time to a null-terminated multibyte character string str according to format string format. Up to count bytes are written.
Parameters

str    - pointer to the first element of the character array for output
count  - maximum number of bytes to write
format - pointer to a null-terminated multibyte character string specifying the format of conversion
time   - pointer to the date and time information to be formatted
Return value
The number of bytes written into the character array pointed to by str, not including the terminating '\0', on success. If count was reached before the entire string could be stored, 0 is returned and the contents are undefined.
Example
#include <stdio.h>
#include <time.h>

int main()
{
    char buff[50] = {0};
    struct tm my_time = {0};
    my_time.tm_year = 112; // = year 2012
    my_time.tm_mon  = 9;   // = 10th month
    my_time.tm_mday = 9;   // = 9th day
    my_time.tm_hour = 8;   // = 8 hours
    my_time.tm_min  = 10;  // = 10 minutes
    my_time.tm_sec  = 20;  // = 20 secs

    if (strftime(buff, 50, "%Y-%m-%d %H:%M:%S", &my_time) != 19) {
        puts("Error!");
    } else {
        puts(buff);
    }
    return 0;
}
Output:
2012-10-09 08:10:20 | http://en.cppreference.com/mwiki/index.php?title=c/chrono/strftime&oldid=63893 | CC-MAIN-2015-27 | refinedweb | 153 | 63.25 |
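As an aside, Python's time module uses the same C format codes, so the example can be reproduced there (the struct_time tuple below fills in the weekday and year-day fields that C's struct tm would also carry):

```python
import time

# struct_time fields: (year, month, mday, hour, min, sec, wday, yday, isdst)
my_time = time.struct_time((2012, 10, 9, 8, 10, 20, 1, 283, 0))
formatted = time.strftime("%Y-%m-%d %H:%M:%S", my_time)
print(formatted)  # → 2012-10-09 08:10:20
```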
Microsoft's official enterprise support blog for AD DS and more
Hi, Mike here. Have you ever wanted to map a drive for specific users at logon—without using a logon script? Have you ever wanted to change the local administrator’s password on all your client computers? Have you ever wanted to add items to a user’s Start menu? Now you can with Windows Server 2008, which includes Group Policy preferences.
What are Group Policy preferences? Group Policy preferences allow administrators to configure and deploy Windows and application settings that were previously unavailable using Group Policy. The Windows Server 2008 Group Policy Management Console (GPMC) includes Group Policy preferences, which are available when editing domain-based Group Policies. Also, you can manage Group Policy preferences from a Windows Vista Service Pack 1 computer by installing the Remote Server Administration Tools (RSAT), which included the updated version of GPMC.
You first notice a change in the namespace and node structure when editing a domain-based Group Policy object with GPMC. Computer and User Configuration remain; however there are now two categories under each configuration: Policies and Preferences.
Figure 1- Group Policy Preferences nodes
The Policies node contains the familiar node structure found when editing earlier versions of Group Policy. The Preferences node contains all the preference settings, which are categorized into Windows Settings and Control Panel Settings.
Figure 2- Windows and Control Panel Settings
With Group Policy preferences there are many different ways to accomplish a specific task. Each Group Policy preference extension provides configuration properties specific to the extension and common among most preference extensions.
Figure 3 – Extension specific configuration properties
Preference items allow you to fine tune how they apply to users and computer by offering sophisticated targeting features. Using the targeting editor, you can create various targeting conditions to ensure the correct preference item applies to the correct user or computer.
Figure 4 - Targeting Editor
The Client-Side Extensions for GP Preferences are included in Windows Server 2008, and down-level versions will be available as a separate download for:
1. Windows XP Service Pack 2 and above
2. Windows Vista RTM and above
3. Windows Server 2003 SP1 and above
Finally, it’s important to understand that Group Policy preferences are just that – preferences. Unlike policy-enabled components that apply managed policy settings, preferences simply configure the settings as if a person did it. Users can change these settings until the next refresh of Group Policy. For example, when you use Group Policy to configure a screensaver, the option to change it is unavailable (grayed out) for the user. When using preferences, the screensaver is preconfigured per the preference settings; however, the user still has the ability to change the settings (until the next Group Policy refresh—depending on how you configure the preference item).
You can read more details on Group Policy preferences by downloading the Group Policy preferences whitepaper from the Microsoft Download Center.
- Mike Stephens
Hi, Ned here again. Today I’m going to show some interesting new features of Auditing in Windows Vista and Windows Server 2008 that can be used for troubleshooting problems or seeing what’s happening in your environment. I’ll be building upon some of the basic information Dave Beach talked about in ‘Introducing Auditing Changes in Windows 2008’. This is not designed to be the be-all, end-all (in fact, I hope to write some further follow-ups on this), but it should definitely help get the creative juices flowing and help people turn to auditing more often than they have in the NT-2003 days.
First, let’s quickly revisit the changes. Vista and 2008 now support granular auditing, meaning that instead of a couple large categories you can specify very specific elements to be audited. The advantage here is that instead of turning on Object Access auditing then wading through miles of chaff, you can get something as specific as just network file share usage information, or just account lockouts, or just process creation. To see all your options execute (from an elevated CMD prompt):
auditpol /list /subcategory:*
This will return all the new categories and subcategories. To deploy these types of changes in a Domain-based environment, follow ‘How to use Group Policy to configure detailed security auditing settings for Windows Vista client computers in a Windows Server 2003 domain or in a Windows 2000 domain’.
If you’re troubleshooting though and just want to make some specific audit changes to a computer in a domain, there’s an easier way:
Via GPEDIT.MSC.
Via REGEDIT.EXE
1. Open the Registry Editor (REGEDIT.EXE)
2. Locate and then click the following registry subkey:

   HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\LSA

3. Add a new DWORD value of SCENoApplyLegacyAuditPolicy
4. Set the value data to 1
5. Close REGEDIT and restart the machine.
After making this change the computer will no longer apply auditing security policy even when specified in domain-based group policies; this way your changes won’t get overwritten as we proceed. You can read more about this in KB921468.
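If you would rather script this change than click through REGEDIT, the same setting can be captured as a .reg file and imported; this is just a sketch, with the key path, value name, and data taken straight from the steps above:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\LSA]
"SCENoApplyLegacyAuditPolicy"=dword:00000001
```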
It’s critical to realize that something else important happened in Windows Vista and 2008 – Crimson. We finally rewrote the 15-year old Event Service first delivered in NT3.1, and now there are no longer the old scalability and reliability problems with event logs. Need a 16GB security log? No sweat, it can handle it. Plenty more on this here.
So let’s do some cool things with this new system…
Find out when users on Vista are elevating their programs with UAC
Here’s a common scenario: User Account Control is enabled and the users have been added to the Administrators group on their Vista clients. You need to know how often and for what reasons they are elevating their so-called ‘split tokens’. Maybe it’s for audit trail, maybe it’s to identify applications that might need to be updated or rewritten to not need admin credentials, or maybe you just want to prove that when a user complains he gets prompted ‘all the time’ you can respond with ‘Really? Seems like it was just twice this week!’
1. We enable the “Process Creation” subcategory of “Detailed Tracking” category with:
Auditpol.exe /set /subcategory:"process creation" /success:enable
2. We let things cruise for a few days with the user cranking away on his system.
3. We connect to the machine with EVENTVWR.EXE using ‘Connect to Another Computer’.
4. We set a filter by selecting Windows Logs -> Security, then select Filter Current Log. We set a filter for Event ID of 4688 like so:
5. This removes everything from our view except Process Creation events.
6. We now do a Find for TokenElevationTypeFull (2) and we see things like:
Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: 11/6/2007 4:44:57 PM
Event ID: 4688
Task Category: Process Creation
Level: Information
Keywords: Audit Success
User: N/A
Computer: nedpyle04.fabrikam.com
Description:
A new process has been created.

Subject:
   Security ID: FABRIKAM\nedpyle
   Account Name: nedpyle
   Account Domain: FABRIKAM
   Logon ID: 0x329656f

Process Information:
   New Process ID: 0xe44
   New Process Name: C:\Windows\System32\awfulapp.exe
   Token Elevation Type: TokenElevationTypeFull (2)
   Creator Process ID: 0x35c
Did I mention the actual audit descriptions are useful now? :) I might not have known what TokenElevationTypeFull meant, but the event is there to tell me all about it.
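If you export the filtered log to a text file (Event Viewer’s Save As, or wevtutil), a short script can tally the elevated launches for you. A minimal Python sketch, assuming a plain-text export where events are separated by blank lines:

```python
def find_elevated_launches(log_text):
    """Collect the New Process Name from each 4688 event that
    launched with a full (elevated) token."""
    hits = []
    # Assumption: events in the text export are separated by blank lines.
    for event in log_text.split("\n\n"):
        if "Event ID: 4688" not in event:
            continue
        if "TokenElevationTypeFull" not in event:
            continue
        for line in event.splitlines():
            if "New Process Name:" in line:
                hits.append(line.split("New Process Name:", 1)[1].strip())
    return hits
```

Counting `len(find_elevated_launches(...))` over a week of exports gives you the “seems like it was just twice this week” number.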
Find out who is making AD changes unrelated to Accounts, using Windows Server 2008
Another favorite is “who the heck has been messing with group policy?!” In this scenario we have a number of OUs that are managed by branch and group policy administrators. These users have been delegated rights to control group policy for their respective OUs only. After a while though, you find that some of them are making changes that contravene IT policies and standards. What’s worse, there are several people with rights to make changes and no one is admitting anything. How can we figure this out?
By default, Active Directory OUs are already configured with audit entries that track group policies being linked, blocked, or enforced – so we don’t have to muck around in ADSIEDIT or AD Users and Computers this time. So let’s get on with the rest:
1. Enable the auditing subcategory of “Directory Service Changes”, either via AUDITPOL on all DC’s or KB921469. This means going forward you’re tracking people making changes in DS, for anything you have configured to be auditable.
We wait for the hammer to drop…
2. Finally, someone makes one of these changes we don’t like. On any computer joined to that domain running Windows Support Tools, we execute:
repadmin /showmeta "ou=remote branch,dc=cohowinery,dc=com"
… which is the OU we cared about. It returns:
So now we know the when and the where – someone changed group policy links and enforcement on Nov 8 at 7:20PM. We need the who and the what.
We crack open EVENTVWR on our Windows Server 2008 DC (imaginatively named ‘2008SRV10’). We filter for 5136 events and see:
Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: 11/8/2007 7:25:56
gPOptions
   Syntax (OID): 2.5.5.9
   Value: 1
Operation:
   Type: Value Added
   Correlation ID: {8d929c75-e7c8-47a2-9592-835041973fc1}
   Application Correlation ID: -
And:
Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: 11/8/2007 7:26:06
gPLink
   Syntax (OID): 2.5.5.12
   Value: [LDAP://cn={2CC9A70E-DAD5-4707-B1C1-3EE8DC565AC8},cn=policies,cn=system,DC=cohowinery,DC=com;0]
Operation:
   Type: Value Added
   Correlation ID: {cf7540ef-037e-4d6c-bd72-4654561762d5}
   Application Correlation ID: -
gPOptions is an AD attribute that marks ‘Block Policy Inheritance’ – being set to 1 means that this OU will no longer receive policies linked at a higher level than itself. Since we’re setting some policies at the domain level, this is probably not cool!
gPLink is an AD attribute that lists out which policies are linked to an OU (or site, or domain) and whether each link is enforced or not (0 or 1). It contains a value that points to an actual policy object.
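Because gPLink is just a delimited string of `[LDAP://<policy DN>;<options>]` entries, you can pull the linked policies and their per-link options out of an LDAP dump with a few lines of code. A minimal Python sketch (the function name is mine, not an API):

```python
import re

def parse_gplink(gplink):
    """Return (policy DN, link options) pairs from a gPLink value.

    Each link in the attribute looks like [LDAP://<DN>;<options>].
    """
    return [(dn, int(opts))
            for dn, opts in re.findall(r"\[LDAP://([^;\]]+);(\d+)\]", gplink)]

links = parse_gplink(
    "[LDAP://cn={2CC9A70E-DAD5-4707-B1C1-3EE8DC565AC8},"
    "cn=policies,cn=system,DC=cohowinery,DC=com;0]")
```

The numeric options field carries the per-link flags described above; the GUID in the DN is what you feed to dsquery to find the policy’s display name.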
So above, we see now that Robert is our culprit. He’s been blocking corporate group policies specified at the domain, and he’s also linking policies at his own OU. Since he’s allowed to link, let’s see what he actually added. We can either run Windows Server 2008’s built-in ADSIEDIT.MSC or we can execute a command.
Executing (this is all one line):
dsquery * "cn={2CC9A70E-DAD5-4707-B1C1-3EE8DC565AC8},cn=policies,cn=system,DC=cohowinery,DC=com" -scope base -attr displayname
Which returns:
displayname Install Specially Licensed Software
Well heck – he’s also linking a policy that installs some software for which we have limited licensing. And of course, he’s not supposed to. But now we know…
Find out what is changing a registry value at random intervals
Finally, let’s discuss another common problem – at unpredictable intervals, something or someone is making registry changes on Vista computers. You suspect it’s a script or application, but you’re not sure how to figure out the source. Since it takes so long to happen you can’t just stare at the machine, so you decide to enable some auditing.
For this specific scenario, we have a handful of Vista machines that keep ending up in the wrong AD Logical Site. These machines were specially configured in the past to go to a specific site called FLURG rather than use dynamic site coverage.
So here’s what the registry looks like when everything is right:
1. We open REGEDIT and navigate to
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Netlogon\Parameters
2. We right click and choose Permissions. We click Advanced, then Auditing. We add Everyone like below, making sure to only choose ‘This Key Only’ and Set Value for Successful. The reasoning here is that we only want this specific data and no sub-keys that might exist, we only want to see something writing data (certainly not querying!), and we don’t care if it fails.
3. We then enable the auditing subcategory of ‘Registry’ under ‘Object Access’ like so:
Auditpol.exe /set /subcategory:"Registry" /success:enable
We already know this isn’t going to happen for days, and we sure as heck don’t want to keep walking over to this machine every few hours, so let’s configure a subscription to tell us when things change.
4. We turn on the Windows Remote Management service on the affected machine:
winrm quickconfig
… which returns confirmation that the WinRM service and its listener have been configured.
5. We start that Windows Remote Management service.
6. We then go into EVENTVWR on our own administrative machine (not the problem machine!) and click Subscriptions. We then select Create Subscription, which gives us a dialog we can fill out:
7. Make sure to hit advanced and add a user account with rights to access that machine. If you want to get alerts ASAP select the option for Minimize Latency.
8. We then choose ‘Select Events’ and add our filter so that we get Informational events (which Security events always fall under), matching 4657, and auditing success only.
So now, all Event 4657 occurrences on the problem machine will be forwarded to our own machine. But let’s not stop there; after all, I still don’t want to stare at Event Viewer all day at my desk. Let’s make it tell us when something interesting happens by adding a special kind of scheduled task on our workstation:
schtasks /create /tn EventLog /tr eventvwr.exe /sc onevent /ec ForwardedEvents /mo *[System/EventID=4657]
This command adds a scheduled task which:

- Is named EventLog (/tn)
- Runs EVENTVWR.EXE when triggered (/tr)
- Is triggered when an event arrives (/sc onevent) in the ForwardedEvents log (/ec)
- Fires only for events matching the XPath filter *[System/EventID=4657] (/mo)
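The /mo argument to schtasks is an XPath filter, the same query language that Event Viewer custom views and wevtutil use. As a sketch, an equivalent saved filter over the ForwardedEvents log would look something like this:

```xml
<QueryList>
  <Query Id="0" Path="ForwardedEvents">
    <Select Path="ForwardedEvents">*[System[EventID=4657]]</Select>
  </Query>
</QueryList>
```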
So a few days go by and we’re sitting at our desk enjoying some light reading when suddenly Event Viewer opens. Woohoo! We look at the Forwarded Events and see:
Log Name: Security Source: Microsoft-Windows-Security-Auditing Date: 11/6/2007 7:38:50 PM Event ID: 4657 Task Category: Registry Level: Information Keywords: Audit Success User: N/A Computer: nedpyle04.fabrikam.com Description: A registry value was modified.
Subject: Security ID: FABRIKAM\billybob Account Name: billybob Account Domain: FABRIKAM Logon ID: 0x329656f
Object: Object Name: \REGISTRY\MACHINE\SYSTEM\ControlSet001\Services\Netlogon\Parameters Object Value Name: SiteName Handle ID: 0x124 Operation Type: Existing registry value modified
Process Information: Process ID: 0x930 Process Name: C:\Windows\System32\reg.exe
Change Information: Old Value Type: REG_SZ Old Value: Flurg New Value Type: REG_SZ New Value: Nargle
There’s our guy. We walk over and chat with Billy Bob and he admits he’s got a script he’s been using that shells REG.EXE to make some registry changes. And wouldn’t you know it, there’s some old recycled code in it that explains our issue. High five.
I hope this post helped you get familiarized with Vista and Windows Server 2008 auditing capabilities. Please let us know one way or the other.
- Ned Pyle
Hi, Seth Scruggs here from the Directory Services team. Today I’m going to discuss how to troubleshoot certificate enrollment in Windows using a Windows Server 2003 Certification Authority (CA). Before you read on, make sure you have the Windows Server 2003 Resource Kit, the Windows Server 2003 or Windows XP Support tools, and the Windows Server 2003 admin pack installed. All will be needed for troubleshooting!
There are four ways we enroll for certificates in Windows:
This blog is going to specifically cover how to troubleshoot enrollment through the MMC Certificate Snap-in. If you are troubleshooting auto enrollment, the first step is to always try MMC-based enrollment; if you find this fails, there is no point troubleshooting auto enrollment until MMC-based enrollment works.
We call this MMC enrollment because we do this from the Certificates snap-in that can be added to an MMC. To use this snap-in, click Start | Run | MMC. Then in the blank MMC, click File | Add/remove snap-in | Certificates. I am specifically writing this blog to cover user based enrollment (as opposed to computer based enrollment), but the concepts can be applied to both.
MMC or snap-in based enrollment breaks in one of two spots; when you launch the wizard or when you click finish at the end of the wizard.
Let’s start by doing a high level walkthrough of the beginning steps and understand why it might fail here:
1. We first query Active Directory and search for a list of available CAs. When the client retrieves the result of the query, it filters out the results based on the following:
2. Then we query Active Directory for a list of certificate templates. When the client retrieves the result of the query, it filters out the results based on the following:
If the combination of these filters leaves your template or CA list blank, then you receive an error when you launch the wizard:
Do you have this error? Let’s use some tools and troubleshoot this. If you don’t see this error when you launch the Request Wizard, you can read this next section just for fun or skip this and go directly to “Troubleshooting errors when you click finish at the end of the Wizard”.
We started by looking for CAs that are published in AD. Specifically, the client does an LDAP query for objects in the following container:
CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=contoso,DC=com
This container should hold one object for every internal Windows CA installed in the forest. If for some reason the user doesn’t have permission to read this container, the objects beneath it, or there are no objects beneath it, we fail. By default Authenticated Users have read access on this container; so start by verifying this is the case in your forest. We’re going to use ADSIEdit.msc from the Windows Support tools to look at permissions.
Launch ADSIEdit.msc, then expand CN=Configuration | CN=Services | CN=Public Key Services | CN=Enrollment Services. This should look like:
You can see from the screen shot that in my lab there is one object under CN=Enrollment Services. If you don’t have an object in your environment (but you know you have a CA) then something went wrong during installation or an administrator in your forest deleted the object. In either case, you can re-populate the object by logging on to the CA as an Enterprise Admin or Forest Root Domain Admin and doing a backup and restore of the CA in the CertSrv.msc console.
If you have verified that one or more objects actually exist in the CN=Enrollment Services, let’s check permissions. Right click on the CN=Enrollment Services container, select properties and then click on the Security tab. We should see that Authenticated Users have read permission on the container.
If Authenticated Users is not listed here, you’ll need to add this group and assign Read Permissions to this container and any child container beneath it.
You could reset permission on the container to the default permissions as defined by the schema using DSACLS.exe, but this is a shotgun approach and you could remove any custom permissions that were previously delegated to this container. If you feel like doing it anyways, the syntax is here:
OK, so you made sure Authenticated Users have read permissions on CN=Enrollment Services, and you made sure that there is actually one or more objects in the container. The next step is to make sure that your CA is an Enterprise CA and that the object/s in ADSIEDit.msc reflect this. We’ll do this in two steps.
First, logon to your CA and at the command prompt run certutil –cainfo. In the output find the field labeled CA type. If it says Enterprise Subordinate CA or Enterprise Root CA then we are fine. If it displays Standalone Root or Standalone Subordinate CA then MMC enrollment will not be possible and you must choose another request method.
After you have verified that you actually have an Enterprise CA, let’s look at the CA object in ADSIEdit.msc and make sure the flag that identifies it as an Enterprise CA is set correctly. It is very unusual to see the flag set incorrectly, but all the same it is possible. As we did before, launch ADSIEdit.msc, then expand CN=Configuration | CN=Services | CN=Public Key Services | CN=Enrollment Services. Right click the CA in the right pane that you want to enroll from and click properties. Find the flags attribute; and verify that it is set to 10. If it isn’t set to 10, then set it to 10 using ADSIedit.msc and allow for Active Directory replication to complete.
The next step is to make sure that I trust the CA, and that I can make sure the CA is not revoked. Certificate verification is kind of a big topic, and I’m going to barely touch it. If you want in-depth knowledge, you can read this whitepaper.
The easiest way to verify this is to launch PKIView.msc (available in the Windows Server 2003 Resource Kit). Once open, right click on Enterprise PKI and select Manage AD Containers. When this opens, click on the Enrollment Services tab.
We can already tell that the user trusts this CA, because the status is OK. This can be a bit misleading, but is a good start. Now let’s do an in-depth verification:
Highlight the certificate and click View. When the certificate dialog box opens, click on the Details tab, then click Copy to file. Export the certificate as a .cer file (DER or Base-64 encoding is fine). At the command prompt, run certutil –verify C:\filename.cer >verifyresults.txt, replacing C:\filename.cer with the path and file name of the certificate file you exported. After it runs, open verifyresults.txt and scroll down to the very bottom. Any error message at the bottom indicates a chaining or revocation checking problem; either of which would cause an enrollment to fail.
A common chaining problem looks like this:
A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider. 0x800b0109 (-2146762487) ------------------------------------ Verifies against UNTRUSTED root
A typical revocation error looks like this:
ERROR: Verifying leaf certificate revocation status returned The revocation function was unable to check revocation because the revocation server was offline. 0x80092013 (-2146885613).
CertUtil: The revocation function was unable to check revocation because the revocation server was offline.
If you see a revocation checking error message, run certutil –verify –urlfetch C:\filename.cer >urlfetch.txt. Open urlfetch.txt and find the CDP sections of the output. In this section, examine each path and figure out why this client isn’t able to reach the path. A client doesn’t need to be able to reach all paths, but does need to be able to reach at least one path for each CA. An example looks like this:
---------------- Certificate CDP ---------------- Failed "CDP" Time: 0 Error retrieving URL: The specified network resource or device is no longer available. 0x80070037 (WIN32: 55) ldap:///CN=2003Dom%20Enterprise%20Issuing%20CA,CN=2003DOMCA01,CN=CDP,CN=Public%20Key%20Services,CN=Services,CN=Configuration,DC=2003Dom,DC=com?certificateRevocationList?base?objectClass=cRLDistributionPoint
Failed "CDP" Time: 0 Error retrieving URL: The server name or address could not be resolved 0x80072ee7 (WIN32: 12007)
In this case you can see we can’t get to either CRL distribution point URL. Again, we don’t need to get to both, but we have to be able to get to one of them.
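When the urlfetch output is long, a small script can pull out just the failing distribution points instead of scrolling through the whole dump. A minimal Python sketch, assuming each Failed "CDP" line is followed within a few lines by the URL that failed:

```python
def failed_cdp_urls(urlfetch_text):
    """Return the URLs listed under Failed "CDP" entries in
    certutil -verify -urlfetch output."""
    lines = urlfetch_text.splitlines()
    failed = []
    for i, line in enumerate(lines):
        if 'Failed "CDP"' not in line:
            continue
        # Assumption: the offending URL appears within the next few lines.
        for follow in lines[i + 1:i + 4]:
            follow = follow.strip()
            if follow.startswith(("http://", "ldap://")):
                failed.append(follow)
                break
    return failed
```

If the list it returns covers every CDP URL for a CA, that CA’s CRL is unreachable and revocation checking (and therefore enrollment) will fail.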
If everything looks good with chaining and revocation status, let’s move on to CA request permissions. Permissions are placed on the CA and are saved on the object that is stored in the Enrollment Services container. To verify the client has permission to request from the CA, open CertSrv.msc on the CA, right click on the name of the CA, and then click on the Security tab. By default Authenticated Users have the Request Certificates permission. If Authenticated Users don’t have this permission, make sure that the client requesting the certificate is a member of some group that does have this permission.
At this point we have queried for CAs that are available in the forest and we have filtered down our list. If we have identified one or more valid CAs, we do another LDAP query for all of the Certificate Template objects in the following container:
CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=contoso,DC=com
First we look at the permissions on each object returned, and determine if the client has Read and Enroll permissions on the certificate template. An easy way to verify permissions is to logon as the requesting user and run certutil –template on the client (on XP, you must install the Windows Server 2003 Admin pack to use this utility). This will dump all of the certificate templates in the CN=Certificate Templates container; and if the user doesn’t have permission to one of them it will display Access is Denied next to that template.
If the certificate template you want to use has an Access is Denied next to it check the user’s group memberships and/or the permissions on the Certificate Template. To verify group membership of the user, you can run whoami /groups (whoami is part of the Windows Support tools on XP and included in the OS with Windows Server 2003). To check permissions on the certificate template, open CertTmpl.msc, find and double click the certificate template you want, then go to the security tab. If you are making constant changes to this stuff, remember that AD replication latency is a factor.
OK, now that we have confirmed permissions are OK, let’s make sure this CA offers the certificate template we want. The Wizard does this by looking at the Certificate Templates attribute on the objects in CN=Enrollment Services. We can do this by opening certsrv.msc on the CA, then clicking on the Certificate Templates folder. All you need to do is make sure you see the desired template in the list:
If your certificate template is in the list, then you’re good. If not, right click on Certificate Templates, select New Certificate Template to issue, and then choose the correct template to add.
The last thing to check is if the template settings allow it to ever show up in the MMC. If the Certificate Template is set to supply the subject name in the request, it will never appear in the MMC because the MMC (in 2K/XP/2003) doesn’t allow you to enter this value. For the template to be offered in the MMC, the subject name must be built from Active Directory. The setting on the template should look like one of these:
At this point we have covered all of the reasons that a request through the MMC Snap-in might initially fail. Let’s look at what happens when you finish the wizard, and why that might fail.
Let’s do a high level walkthrough of the process when you click the Finish button in the wizard:
If one of these steps fails, you could receive a few different errors, but the most common is:
Let’s break down the steps and figure out why we might see this error message.
We start by reading the certificate template settings and building a request. To do this, we use the Cryptographic Service Provider (CSP) specified by the template (or chose one locally based on the template setting) to generate a key pair and create a key container to store the private key. Then we sign the data in our request with the private key and store the request locally in the Pending Request certificate store.
The most common cause of failure in this step is permissions on the directory where we create key containers. For a user, we create the private key container in the user’s profile in:
%userprofile%\Application Data\Microsoft\Crypto\RSA\<SID of user>
If you look, you might find that you already have dozens of files in this directory, or you could have none. What is important is that the permissions on this directory look like this, with all three security principals having inherited full control.
If yours isn’t like this, restore them to defaults. This should be:
Administrators    Full Control
<The Username>    Full Control
SYSTEM            Full Control
These should be inherited, and apply to This Folder, subfolders, and files. If this looks good and we are sure that writing to this directory isn’t our problem, let’s move on to the next step.
Once we build the request, we send it to the CA using DCOM/RPC. Using the dnsHostName attribute on the CA’s object in CN=Enrollment Services, we grab the DNS name of the server, resolve the server name using DNS, then send an RPC end point mapper (EPM) request to the CA over port 135. The EPM on the CA returns a port to which the client may send the request. In order for any of this DCOM magic to work, both the client and the CA must have DCOM enabled and configured correctly. Check on your client and server by opening the Component Services snap-in: Start | Run | DCOMCNFG. Once open, expand Component Services | Computers. Right click My Computer, select Properties, and then click the Default Properties tab. It should look like this on both the client and server:
Now let’s check for port connectivity. As I stated before, we must have access to port 135; in addition to this the random port range that must be open is 1024 – 65534. This can be changed, but is out of scope for this blog. For additional information check out the ports article.
There are many ways to test connectivity; including telnet.exe, network traces, or portqryui.exe. We are going to use Port Query. Before we test connectivity, let’s find out which ports the CertSrv.exe service is listening on. Since the port is allocated from a dynamic range and is a moving target, we’ll need to do this in two steps. First, dump the list of processes on the Certificate Server:
Tasklist >tasklist.txt
Open this file, and find the PID (process ID) of the certsrv.exe process (in this case it is 1152)
Second, find out which ports are open by PID:
Netstat –ano >ports.txt
Open this file, and find each instance of your PID (again, my PID was 1152). It is expected to see more than one port open for this process, since we have multiple interfaces for the service. We are interested in the port bound to 0.0.0.0, in this case port 1104.
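The two dumps can also be cross-referenced with a short script instead of by eye. A minimal Python sketch, assuming the usual netstat -ano column layout (Proto, Local Address, Foreign Address, State, PID):

```python
def ports_for_pid(netstat_output, pid):
    """Return the local TCP ports a given PID is LISTENING on."""
    ports = []
    for line in netstat_output.splitlines():
        cols = line.split()
        # TCP rows have 5 columns; UDP rows have no State column.
        if (len(cols) == 5 and cols[0] == "TCP"
                and cols[3] == "LISTENING" and cols[4] == str(pid)):
            ports.append(int(cols[1].rsplit(":", 1)[1]))
    return ports
```

You would then focus on the entry bound to 0.0.0.0, as in the walkthrough.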
So, we have two ports of interest, 1104 and 135. Now download, extract, and launch portqryui.exe on the client. Enter the IP or FQDN of the server that hosts Certificate Services, choose Manually input query ports, and enter the port you found plus port 135.
You can see in the output that port 135 was identified as the endpoint mapper and is LISTENING (this is good). Now scroll down to the bottom and look for the evaluation of port 1104.
Again, LISTENING, so we shouldn’t have any connectivity issues.
After checking port connectivity, the next step is to consider DCOM permissions on the CA. In Windows 2K3 SP1, we hardened security on the CertSrv Request DCOM interface. Basically, we create a new Local group called CERTSVC_DCOM_ACCESS and only allow members of this group permission to hit this interface. On a member server, this group is local and it contains the Everyone group. However, if the CA is installed on a domain controller, we create the group as a domain local group and add Domain Users and Domain Computers from that domain. The problem is that this doesn’t include the Domain Controllers group; or the Users, Computers, or Domain controllers from any other domain in the forest.
A good article on this change is here. Basically, make sure that your user is in a group that is a member of the CERTSVC_DCOM_ACCESS group. If you have Windows Server 2003 with SP1 or higher and you don’t see this group, follow the steps in the article to recreate the group using certutil -setreg SetupStatus -SETUP_DCOM_SECURITY_UPDATED_FLAG. Again, if the CA is on a DC, it will be a Domain Local group, otherwise it will be a regular local group.
Keep in mind that if you make adjustments to the group by adding new members, the client must be restarted to build its logon token. One way to test permissions (and connectivity) is to run certutil –config FQDN\CAName –ping at the command prompt on the client where FQDN is the fully qualified name of the server and CAName is the subject name of the certificate for that CA. An example would be:
certutil –config “server01.contoso.com\Contoso Enterprise Sub CA” –ping
If you don’t have permission to the DCOM interface, an Access is Denied message will be returned.
Well, that is about it. Hopefully if you are troubleshooting an issue your problem is now resolved. These steps obviously won’t fix all issues, but easily the top 90%. Please feel free to add comments or other questions!
281260 A Certificate Request That Uses a New Template Is Unsuccessful

833704 "The certificate request failed because of one of the following conditions" error message when you request a certificate in ISA Server 2004

929494 Error message when you request a certificate from a computer that is running Windows Server 2003 with Service Pack 1: "The certificate request failed because of one of the following conditions…"

903220 Description of the changes to DCOM security settings after you install Windows Server 2003 Service Pack 1
- Seth Scruggs
Hi, Randy here. This is my first blog post to help explain authentication and authorization. This post will be helpful in understanding "Access is Denied" messages and how to troubleshoot when these happen. I'd like to start with an explanation of the security token.
When you log on to a system, you provide credentials in order to gain access to resources to which you have permission. You typically set these permissions by going to the Security tab in the properties window of an object and adding a user to the list along with the access that you either allow or deny. This list is referred to as the Access Control List (ACL), and the security token is how it validates you to allow or deny access. The token is an object representing your credentials, and an access check is a search of the ACL to see if there is an entry with your name on it. So what about group membership? Security groups are a way to simplify security administration. If we can group users together, we can define one entry in the access control list for the security group instead of defining an entry for every user in your environment.
Here are two screenshots. The first screenshot is of the ACL on the security properties page of an NTFS folder named Tools. Notice that it lists each of the user or group names and their permissions. The second picture is of the same ACL but with communication blocked to the domain controller. Because we cannot reach a domain controller, we cannot resolve the friendly names in the list and they are displayed as security identifiers (SIDs), for example S-1-5-21-1712426984-1618080182-1209977580-1108. Security identifiers are how computers see users and groups and will be discussed in more detail later in this post. In the second picture we see our local group resolve to a friendly name Administrators. This is because our local computer manages local groups while the domain controller manages domain groups.
When you first log on to your computer, you prove your identity by typing your username and password or by validating with a smartcard. A domain controller validates these credentials and returns your user information and group membership. Your workstation uses this information to compare against its security policy and local security groups to build a security token that accurately represents you, your domain and local group memberships, and all privileges you have on that computer. It is important to remember that your domain group membership is only part of the token; much of the security token is based on information local to the machine where the token is built. This means that your token may be different when it is built on your workstation, where you are an administrator, than when it is built on a file server where you may have more limited access rights.
A good tool to view the contents of a token is TokenSZ. Below are two separate outputs of TokenSZ for a user named Randy in the domain Wingtiptoys.com. The first one is from an XP workstation where I am the local administrator. The second output is from a file server (where I am not a local administrator) that hosts a share I often access. I used the command tokensz /compute_tokensize /dump_groups
Notice the output on my client workstation has more data, but much of this information is very similar between the two outputs. The first portion of both outputs reference information on the Kerberos ticket and the size of the token that is generated. This information is not relevant for this discussion, but if you would like to read more about this output and issues where this information is important, please see the whitepaper Addressing Problems Due to Access Token Limitation.
The first portion is the Security Identifier for our User:
· User
S-1-5-21-1712426984-1618080182-1209977580-1108
S-1-5-21-1712426984-1618080182-1209977580-1108
This number is the unique SID for the user account. This string is what a computer sees when validating access and privileges. When we looked at the Security tab of a directory earlier in this discussion, we saw that the ACL is actually populated with these SIDs. The ACL editor user interface communicates with a domain controller to translate SIDs into friendly names to make our jobs a little easier. You can determine the SID of a friendly name using PSGetSid. Also, the Microsoft Script Center has an article on how to do this in a script with WMI.
Now let’s look at how this number is generated. The first three segments S-1-5 are called the revision level. You will see S-1-5 quite often, this means it is revision level 1 and generated by the Windows Security_NT_Authority. The second 4 segments is the domain identifier, 21-1712426984-1618080182-1209977580. This is a unique number for a domain. If this SID was for a local user account or group, the domain identifier would be a unique number representing that computer. The last dash is the Relative Identifier (RID), 1108, which makes this number unique to any other SID in the domain.
The next portion shows the SIDs for the enumerated groups and where the two outputs differ.
· Groups:
This is the enumeration of all the domain and local groups. Notice that the group list contains different local groups even though we are the same user on the two machines. When my identification was returned from the domain controller, my local workstation checked its local group membership to determine that my domain account is a local administrator to that computer and added this to our token (S-1-5-32-544). We can tell that the domain identifier is 21-1712426984-1618080182-1209977580. If we were a member of a local group created on the box or in a security group in a trusted domain, that entry would have a different domain identifier in its SID. So why are some of these SIDs missing some parts? S-1-5-11 is one example. This is a well known security identifier. These are generic groups that are predefined to the workstation or domain and required for basic functionality. It is important that these security groups have the same SID regardless of domain membership. This way everything does not break if you move a computer into another domain or into a workgroup. If you look through the list in the article, you will see that several of these well known SIDs go beyond simple group membership and are used to define properties of the specific session, such as if these credentials are used as a batch job (S-1-5-3) or as a service (S-1-5-6.)
The next portion defines the Primary Group of the User
· Primary Group:
S-1-5-21-1712426984-1618080182-1209977580-513
S-1-5-21-1712426984-1618080182-1209977580-513
This is not relevant to our discussion but is used for compatibility for the POSIX subsystem. When a user creates an object it will set that object’s primary group to that of the user creating it. The -513 identifies this as the Domain Users group.
The next portion of the token represents privileges. The privileges represent certain actions that this token can perform local to the machine where it was built. These are basic activities such as "log on locally" or "start as a service" or "shutdown the computer." These actions define how much control a user will have to system-wide resources and will often permit the user to call these functions regardless of object-based permissions that may be set. Some of these privileges control logon access to a computer and will block or permit a user from establishing a session to the computer or connecting to the computer altogether. These privileges are defined in a computer's security policy and use the user and group security objects to define who has these privileges. Privileges are another way to perform an access check when actions are more complicated than just matching up to an access control list of an object. When comparing the token output above, you see that we have numerous privileges available to us on the workstation where we are a local administrator versus on the file server. Many of these privileges can be traced back to Windows NT 3.1 and were managed using NTRights. You can still use NTRights to script these privileges, but now they are typically defined in group policy under Computer Configuration\Security Settings\Local Policies\User Rights Assignment.
Whoami is included in Windows Server 2003 and Vista and is available for download for Windows 2000 and Windows XP. Although running Whoami with no switches only returns the name of the currently logged on user, the tool has much more functionality. Whoami /user will show the SID of the logged on user. Whoami /groups displays group membership for the current user, and Whoami /priv shows the security privileges of the current user. An important thing to note about the Whoami /priv output is that the current user has all privileges listed in the output, regardless if the State shows Enabled or Disabled. The State column indicates if the privilege is currently being used, not whether the user actually has that privilege. So if a privilege shows up in the list at all, you have it. Whoami /priv is always an easy way to demonstrate some of the differences between a full security token in Windows Vista versus a restricted token. Just run Whoami /priv under both an elevated and non-elevated command prompt in Vista to see the difference.
To summarize, a security token lists SIDs representing your user account and domain and local group membership. Some of these groups are defined by administrators and others are defined by the operating system based on characteristics of your session. Your token also contains rights and privileges granted by the machine where the token was built. It is important to know that these tokens are unique to every system and how you may access them; whether it is an interactive logon, access of a network resource or credentials passed by a service or batch file. With token in hand, we have proof of our identity and can present this token when accessing a protected object. A good analogy is viewing your token as a key ring; each security identifier and privilege is a different key that may or may not fit the lock placed on objects by the security subsystem. So now we need to look at where these tokens are and who uses them.
I will not be going into great detail on Windows internals, but it will be helpful to understand processes and object management. A process is an activity running on your computer. A process can be an application that you open, or a service that starts, or a task of the operating system, but everything a computer does can be identified as a process. When a process is created, it includes a primary token, representing the credentials used to run the process. The token may represent the local computer account, a service account, or user account based on what it does and how it was created. When a user logs on, the Winlogon process starts a process under the user’s security context. By default this is Explorer.exe, the interactive graphical user interface shell in Windows. This process contains a token that was based on the information collected about the logged on user. Explorer needs access to a resource, it can present that token. Whenever you perform an activity on your desktop, it initializes a child process from the original Explorer.exe. A child process retains the token from its parent and passes this token to any additional processes that it spawns. This is why people have the impression that when they authenticate, everything they do is performed using their security credentials. But not all processes stem from explorer.exe. There are several activities happening that are initiated by the local computer or specify their own credentials, like a service that uses a service startup account. When you troubleshoot authorization, it is important to recognize that each process running may or may not be using the credentials of the person typing at the keyboard or moving the mouse. You can see this using Process Explorer. You are able to view the Primary Token of a process by right-clicking a process and looking at the Security tab. This displays the processes’ security context. 
If I execute runas /user:domainname\differentuser c:\windows\system32\notepad.exe from a prompt, I will see that this process is running under different user credentials.
Notice that the process tree appears to start in two places, System and Explorer. Explorer.exe is the root of the user shell environment and runs under the context of the logged on user. When we open most applications, they open as a process underneath Explorer and inherit the user token used by Explorer (with some exceptions that will be discussed in a future post.) As discussed earlier, this is how we continue using our logged on credentials when navigating through the Windows shell. Now look at the Notepad process running under alternate credentials. This is under its own SVChost.exe because it creates a process with a new logon session. You can right click on several of the services running under Services.exe and you will see credentials of the service startup account for each of the running services. These services do not inherit the token of the parent process, Services.exe, because it is a special process running as the local system that creates new processes with a new logon session.
This ends our first post on tokens. In a future blog post we will look at how processes are able to impersonate other tokens. We will also explore several different ways the operating system can manipulate a token, such as restrictive, elevated, integrity levels and so-called “split” tokens used in Windows Vista.
Cheers,
Randy Turner
Primary Domain Controller Emulator settings
Note here we have selected “tick.usno.navy.mil” and have added/changed our flags according to the Windows Time Service Tools and Settings article:).
So that’s it. If the secondary NTP server was not configured prior to the real problem, an administrator could run into more complications by thinking the issue is internal and makes premature changes which might complicate things. Hopefully this helps you in the future! - Bob Drake | http://blogs.technet.com/b/askds/archive/2007/11.aspx | CC-MAIN-2014-15 | refinedweb | 8,255 | 61.56 |
Hi! I'm fairly new to Java and as ridiculous as this sounds, I'm having a difficult time wrapping my head around methods and how to execute them.
For example, right now I'm making a game for class where the user guesses a number that a random generator pulls up. It's in one class right now called numbersTester but I want to have two classes (numbers and numbersTester) where numbers have the methods. I attempted to do this myself but I didn't know how to pull up the code in numbersTester correctly so I just put it in one class.
Any feedback/help would be appreciated!
import java.util.Random;
import java.util.Scanner;
public class NumbersTester extends Numbers {
public static void main(String[] args) {
Random number = new Random();
int numberToGuess=number.nextInt(100);
Scanner input = new Scanner (System.in);
boolean win = false;
int numberOfTries = 0;
int guess;
while (win == false){
System.out.println("Guess a number between 1 and 100: ");
guess = input.nextInt();
numberOfTries ++;
if (guess == numberToGuess){
win = true;
}
if (guess > 100 || guess < 0){
System.out.println("Invalid number!");
}
else if (guess < numberToGuess){
System.out.println("The number is too low! Guess again.");
}
else if (guess > numberToGuess){
System.out.println("The number is too high! Guess again.");
}
}
System.out.println("\n" + "Game Over! " + "\n" + "You guessed correctly.");
System.out.println("The answer was: " + numberToGuess);
System.out.println(numberOfTries + " tries.");
}
} | http://www.javaprogrammingforums.com/%20java-theory-questions/29037-methods-printingthethread.html | CC-MAIN-2016-07 | refinedweb | 234 | 60.41 |
full text (Score:2, Informative)
BBC News Online technology staff
Tech industry leaders gathered in Brussels have reiterated the growing threat of piracy to the software industry in Europe.
The warning was issued at a conference, organised by the Business Software Alliance (BSA), which attracted delegates from firms such as Microsoft, Apple, Adobe and Symantec.
The meeting was told that in 2000 the software industry in Europe lost $3bn to pirates.
This figure is thought to be only a tiny fraction of the amount of piracy that is going on every day on the internet.
"We can't estimate how much piracy is on the net but in one day we found a million sites under a search for one of the codenames for pirated software," said a BSA spokesperson.
Unacceptable
For an industry that commits millions of pounds to research and development, and that contributes six times as much to Europe's GDP as the consumer goods industry, the levels are unacceptable, the BSA says.
"It is a risk most other businesses don't have to deal with - having 34% of your product stolen," BSA's president Robert Holleyman told the conference.
According to Microsoft lawyer Brad Smith, piracy has transformed the nature of the software industry in Europe.
"If there wasn't piracy there would be more software companies in Russia and Eastern Europe," he said.
Instead Russia has become an enclave for pirated software and Microsoft has recently declared a five-month amnesty for Russian and Ukrainian internet cafes to switch to legally licensed software.
Software pirates range from professional businessmen to teenagers selling illegal programmes from their bedrooms to organised criminals.
Organised crime is giving the BSA the biggest headache.
"Criminal organisations can sell software direct, as well as through retail channels," said Symantec lawyer Art Courville. "So, it is harder to monitor."
Tightening legislation
Europe has a greater rate of piracy than the US - around 34% compared with 25% in the US. Software leaders put this down in part to differing rules in Europe.
"Some countries in Europe had copyright laws dating back to the 1940s," pointed out Apple lawyer Peter Davies.
The last thing that you want is to create havens where the legislation is weaker
BSA spokesperson
That is about to change as the European Commission puts into force a directive intended to harmonise civil laws governing how courts deal with cases involving intellectual property.
All BSA members are hopeful that this will act as a deterrent.
"The last thing that you want is to create havens where the legislation is weaker," said a BSA spokesperson.
Change of attitude
There is also work to be done on educating the public about the importance of intellectual property, especially as a web counter-culture advocating free software, such as music downloads, continues to grow.
Open source software such as Linux is not seen as a threat to the work the BSA is doing, however.
"Linux is a way of developing software whereas piracy is copying," said Microsoft's Brad Smith.
He does believe that stopping the pirates could have a dramatic effect on the current pricing of software, however.
"As the legal market grows, there is more investment in new products and enhanced competition. A healthy market leads to more attractive prices for consumers," he said.
Despite the efforts of the pirates, the software industry in Europe is looking pretty healthy.
It is forecast to grow from £35bn in 2000 to £67bn by 2005.
Re:full text (Score:2, Interesting)
I propose we show the RIAA, the MPAA and the BSA what theft is.
We'll call it - "show those bastards what theft is day"
Here's how it works. You walk into a store, take the latest Hollywood crap film, the latest crap Top 40 album, and any of the BSA's products - Illustrator, Office, etc. Put it in your pocket and leave the store.
That's theft!
The BSA has to realize that their products would not be in the top spot if it wasn't for (at one time legal) copying and installing on home computers. How many art students can afford Illustrator? None. How many art students can learn about Illustrator, do amateur work on it, get a job, use the software in "the real world" and then increase Adobe's sales? All of them.
I mean shit - who out there pays for this crap? I don't. I can't afford to give Microsoft $150 every 9 months for incomplete upgrades. I can't afford to give Adobe $10000 every other year for their upgrades. I use pirated versions - and I don't feel bad about it at all.
If I didn't have a bootleg of Windows 2000 server and workstation, I wouldn't have the skills to perform a migration. By migrating - just one client, to Windows 2000, I help sell several hundred licenses.
If I didn't have bootlegs of all the Adobe software, I wouldn't be able to support them in the real world.
The fact that you can get this shit for free (if you try hard enough) is what keeps qualified tech support people in 2nd and 3rd tier industries.
Shit, if I could download an "evaluation copy" of Reuters or Bloomberg, I would be an expert on that too. Instead, these firms end up ramping up the TCO for their clients because the job skills are impossible to get outside of the client environment.
Hey, it's simple math. More qualified techs for a given product means that more IS departments can provide support for a product, meaning more firms can buy a product. Simple as that. If the product is difficult to support, and people can't roll their own skillset, the product will never grow in its installed base, and the product's future will be left to the people least likely to help it: the software firms' marketing departments.
Hell, marketing departments for software should be banned, as part of the Industry's Best Practices. These shit for brains are the reasons developers get sued out of existence.
Hell, shareware is the key. Look at winzip for crying out loud. You don't really have to pay for their product - but if you are in a corporate environment, they probably buy licenses a few hundred at a time. Winzip's happy. I'm happy. Shit, even when my firm would buy Winzip licenses, I still churn out the registration code with a cracker!
Re:full text (Score:3, Insightful)
Downright misleading. Search for "warez", get a million hits, trumpet about how software is being stolen. "We can't estimate...but we found". If you can't estimate it, why are you spitting out these meaningless figures, other than to feed the press? I think the best counterclaim would be to have people follow the top, say, hundred links and see what percentage of those pages *actually* contain links to pirated files. Be interesting, actually.
There are legitimate numbers that could have been given. The number of Hotline servers serving files containing given strings. The percentage of computers without licenses found when MS audits are conducted. The "number of times transferred" statistic some IRC file serving bots put out. Number of napster or gnutella hits for a search for the name of a piece of software. Call me naive, but it seems that if piracy is as big a problem as the BSA is telling everyone, they should be able to come up with some meaningful statistics.
I also love the BSA's emphasis of organized crime. Most software is pirated through organized crime? Please. Oh, maybe in China or Russia, and I don't live in eastern Europe, so I can't really say there. But in the US (and, I would assume, western Europe), the BSA likes holding up the Mafia on one hand and asking legislators "Don't you want to stop this?" Most software pirated in the US is from casual copying, end of story.
Now, all this doesn't mean that piracy isn't a real, legitimate problem. But that release has as much spin on it as I've ever seen.
As for the token handed to Linux, I don't know why that was in there, unless it was to try to split up the groups of people (pirates, OSS folks) who don't really approve of the BSA.
I don't know if Microsoft is evil, but a search for "microsoft evil" on google spits out a quarter million results...:-)
funny thing is (Score:2, Insightful)
Re:funny thing is (Score:2)
For some time it's been widely accepted that piracy is going to happen, but you make better products and you make it worthwhile to buy the real deal and somehow, magically, those companies manage to stay in business - both in the entertainment industry and in computer software.
Also, that 34% figure is probably way out of whack, but why don't they look at it like this: all the copy protection crap we put on our products is inconveniencing our honest customers in a two to one ratio over pirates.
Yes, that's a good business practice.
Re:funny thing is (Score:2)
Microsoft's mind-numbing arrogance. (Score:5, Insightful)
The funny thing is that so many of the fears of a World Government were that it would come from quasi-socialistic NGO's. But here, the multinationals are coming in and dictating the property model for other countries to use. What if a nation doesn't want to recognize IP as property? What does it cost Microsoft if an entire nation opts out? After all, most Russians and Ukrainians probably aren't getting *any* real benefit from intellectual property laws - how much Russian or Ukrainian-owned software do *you* use? (US companies employing coding sweatshops doesn't count - after all, the IP is owned and enforced in the US.)
News To Me (Score:5, Funny)
And all this time I was under the impression that Linux was an operating system kernel!
Re:News To Me (Score:2)
'Linux is a way of developing software whereas piracy is copying.'
And I always thought "piracy" was the illegal plundering of ships and boats. I love newspeak!
Re:News To Me (Score:5, Funny)
Maybe it's all in how you develop your software.
Normal: "I'm going to compile my latest version of code."
Piracy: "Avast me hardies, I'm going to compile me latest booty! Arrr!"
Re:News To Me (Score:3, Funny)
Jim Lad: Shiver me timbers there be 1000 pirate copies of Windows 98
Parrot: Pieces of 98, Pieces of 98.
Re:News To Me (Score:3, Funny)
Re:News To Me (Score:3, Interesting)
Re:News To Me (Score:3, Insightful)
Man, I totally agree. They're not even worth the diskspace.
Obviously, you've never seen the LP version of the double album "Kiss Alive II". I still have the copy I bought 25 years ago. It folds out to show a vivid color 24x12 inch live concert photo with the band raised on hydraulic platforms in front of a truly impressive array of fireworks and huge orange fireballs. It contains a 12x12 inch book detailing the "Evolution of Kiss". It has two nice big 12 inch vinyl platters. I think it came with some 24x24 inch Kiss posters, but I've lost those over the years.
This package had real value that is still interesting today. I think that side IV even has some good music on it.
I'd bet if you bought the CD version today, you'd be lucky to get a 4 inch sheet of paper with the list of titles on it.
I think the record companies hurt themselves when they started selling $17 products that have almost zero value-add over a bootleg copy.
Re:News To Me (Score:4, Insightful)
Re:News To Me (Score:3, Informative)
This is true for the web, but there are other ways that require much less effort... *cough* IRC *cough*
BSA (Score:3, Informative)
Re:BSA (Score:5, Interesting)
The BSA is nothing more than a legalized protection racket.
Re:BSA (Score:3, Interesting)
They ran ads back in January in the SF Bay Area (e.g. KCBS 740) about how important it is to keep a clean shop and comply by the grace period end. Nothing about imperial stormtroopers installing software on your PC's or Servers, or demanding audits which would be unthinkable in short timeframes, or even the extortion of large wads of cash and total capitulation as the only other option.
Still Unclear on MSFT's Strong Dislike of Linux? (Score:5, Interesting)
Ideas developed and shared undermine Intellectual Property. i.e. If you invented a better mousetrap and GPL'd the design, then MSFT wouldn't be able to get a patent on it, and thus license it for big fees or lock any other developer or competitor out.
Having to include source to something they didn't invent and can't get along without is their problem and, like any reasonable minded person, they don't want problems. They like to keep it simple, by owning or having license agreements on IP.
How anyone actually associates Linux with Piracy is beyond me and reflective of a lack of understanding of the spirit of MSFT's gripes.
Re:Still Unclear on MSFT's Strong Dislike of Linux (Score:3, Insightful)
The problem is that you have software like Napster that represents the freeloader movement getting confused with the free software movement. Popular websites like slashdot do more to hurt than to help with this problem. A lot of people are under the false impression that Linux and open source are about "free beer", and if you believe that, then it's not an enormous stretch to conclude that Linux is about piracy.
Re:Still Unclear on MSFT's Strong Dislike of Linux (Score:4, Insightful)
Ok, if you're going to mark
Secondly, Yes, a lot of people are under the impression that open source means "free as in beer" because it DOES! Look at Freshmeat or SourceForge and try to find some pay products. The percentage probably can't be measured in whole numbers.
Lastly, who the hell that reads
Free beer doesn't sound like freedom... (Score:3, Insightful)
Re:Still Unclear on MSFT's Strong Dislike of Linux (Score:3, Insightful)
"We can't estimate how much piracy is on the net but in one day we found a million sites under a search for one of the codenames for pirated software," said a BSA spokesperson.
I believe the "code word" is WAREZ and I believe that the inference was that because there are a lot of sites advertising "WAREZ" there must be a lot of piracy.
Yes, there are an awful lot of sites that try to sucker people in by having "WAREZ" in their site name. Most of these sites have a lot of shareware, not actual pirated software. So the inference is total crap. Not that there isn't a lot of piracy, just that the hit count for "WAREZ" is no proof of it.
"For an industry that commits millions of pounds to research and development, and that contributes six times as much to Europe's GDP as the consumer goods industry, the levels are unacceptable, the BSA says. "
I thought they just said that "We can't estimate how much piracy is on the net..." If so, how do they know that "the levels are unacceptable?"
"That is about to change as the European Commission puts into force a directive intended to harmonise civil laws governing how courts deal with cases involving intellectual property"
Software piracy is already illegal. So what do they want to do? Make it REALLY, REALLY illegal?
"There is also work to be done on educating the public about the importance of intellectual property, especially as a web counter-culture advocating free software, such as music downloads, continues to grow."
Oh, so now we get to the crux of the matter. We're back on the "Kill Mp3s" track again. They want laws to take away fair use so that they can increase corporate profits.
"Open source software such as Linux is not seen as a threat to the work the BSA is doing, however."
Then why mention it? "Look! Your shoe is untied!"
"Linux is a way of developing software whereas piracy is copying," said Microsoft's Brad Smith.
Wrong again Bozo. Linux is an operating system. Someday Microsoft should try to create one.
"He does believe that stopping the pirates could have a dramatic effect on the current pricing of software, however."
And why does he bundle the discussion of Linux in with the discussion of piracy? He's not using subtle association techniques is he?
"As the legal market grows, there is more investment in new products and enhanced competition. A healthy market leads to more attractive prices for consumers," he said.
So does the open source movement. You can't get a better price than free. The only problem that I see with open source is that society as a whole isn't mature enough to break out of the "take what you can get and give back nothing" attitude. We need to learn to do a better job of voluntarily supporting open source companies.
Re:Still Unclear on MSFT's Strong Dislike of Linux (Score:5, Insightful)
The Linux kernel is more than just a better mousetrap. It's free software.
I believe we are seeing the beginnings of the third and last stage of software. An age where software is largely mutually beneficial to everyone. Much as math and sciences are today and have been for a long time now.
This stage is an unfortunate stage for software businesses. Because they can not continue to exist.
And it's not just the threat of GNU and the large body of free software either. It's economics. Even though software isn't scarce, let's assume it is for the benefit of argument.
What do you do when everyone has the software they need? This is the burden Microsoft has had for a while. So they play every trick in the book. Changing file formats -- more restrictive licensing -- regular upgrades -- huge marketing -- and the creation of new technologies. The hope is to obsolete the previous version of software.
Problem is that this provides almost zero benefit for the customer. Sure -- every so often someone gets a fringe benefit from a new technology. But usually, people are happy with the software they have now.
So, in economics, if the customer gets no benefit from a product, they won't buy it--right? And thats the future as I see it. "Piracy" is the least of their worries. Their business model is about to collapse upon itself.
And the GNU/Linux operating system represents this collapse all too vividly. Microsoft, there is no hope for you.
Re:Still Unclear on MSFT's Strong Dislike of Linux (Score:3, Interesting)
It's easy. Same way we associate "drugs" and "bad". It's all down to an understanding of the way the human brain interprets repetition, association, emphasis and repetition:
Linux is a way of developing software whereas piracy is copying.
Linux is [mumble] software whereas piracy is copying.
Linux is [mumble] software [mumble] piracy [mumble] copying.
Linux is [mumble mumble] piracy.
Linux is piracy.
Linux is piracy.
LINUX is PIRACY.
Incidentally, I am not - repeat NOT - trying to be cute or funny here. Microsoft are mentioning Linux and piracy in the same sentence because they are laying the foundations for Joe Reader to imagine an association. Expect to see a lot more of this in the future, especially once they figure out whether they want to demonise specifically Linux, the GPL, or Open Source in general.
Re:Still Unclear on MSFT's Strong Dislike of Linux (Score:2)
They provide the source, but not the actual ISOs or other form of download. I consider myself pretty savvy when it comes to dealing with OSS software, and I wouldn't want to take on compiling all of the elements of a distro!
Minor difference? Am I nitpicking? Maybe, but it's still important!
Re:Still Unclear on MSFT's Strong Dislike of Linux (Score:2)
No, they don't. If they were "selling Linux", it would be a better deal.
They provide the source, but not the actual ISOs or other form of download. I consider myself pretty savvy when it comes to dealing with OSS software, and I wouldn't want to take on compiling all of the elements of a distro!
Exactly. They're selling the packaging and more importantly, maintenance of that packaging. They are not selling the software itself. In particular, they may fund software development, but this doesn't directly provide them with revenue (because the end result of that development is given away)
Re:Still Unclear on MSFT's Strong Dislike of Linux (Score:2)
Re:Still Unclear on MSFT's Strong Dislike of Linux (Score:3, Interesting)
You sure about that? [redhat.com]
Re:Still Unclear on MSFT's Strong Dislike of Linux (Score:2)
Piracy is copying? (Score:5, Funny)
So 18th century pirates just boarded your ship, copied everything, and left?
Re:Piracy is copying? (Score:5, Funny)
Isn's that what copyleft means????
Re:Linux ok. MS-OS free machines not (Score:3, Insightful)
...but as Linux users, we know better. It is, in fact, a free OS machine.
Re:Linux ok. MS-OS free machines not (Score:3, Insightful)
The beef isn't that IE is bundled with the ``desktop''. It's that IE is being pushed as part of the operating system.
And your KDE comment is wrong as well for a couple of reasons:
Don't try to equate Microsoft's comingling of IE and Windows with KDE's (or Gnome's, etc.) bundling. It's not the same thing.
Dont forget... (Score:4, Funny)
#include "StandardWhineAboutExpensiveSoftware.h"
#include "WhineAboutOnlyUsingItOnce.h"
#include "RantAboutNeedingPhotoshopInsteadOfGimp.h"
etc..
.
Well Duh.... (Score:3, Insightful)
This is another one of those "We'll look like we're compromising on this minor point so that people can buy into our other major point" things. Linux may not be piracy, but it is viral and anti-capitalist and bad for consumers because it's supported by hobbyists with PHDs in CS rather than a major company whose tech support knows as much as their average supportee (is that a word?).
What is the market for Linux like in Europe? Does M$ have any more reason to be worried over there than they do here?
(sorry if this is a dumb question, but I'm an American so I have no clue what goes on outside of my own country)
Re:Well Duh.... (Score:2, Informative)
And in Europe, yes, Linux is much more popular. A number of people don't want to rely on an American company for their OS.
Re:Well Duh.... (Score:3, Interesting)
To those people, Open Source software just got a lot more appealing, because a foreign power can't take it away from you.
Software Sales ARE growing, well the PRICE is! (Score:2, Informative)
So I'm not really sure if I belive that "sales" are increasing. In all reality the cost of standard software is going up and therefore so are "sales."
Wrong assertion from the lawyers (Score:5, Insightful)
If they get away with defining 'piracy'=='copying', even in people's perceptions, the main distribution method of linux will be severely hampered. I can't tell you how many times I've seen someone receive a burnt CD with 'Red Hat xx' scribbled with a magic marker, and they ask something like, "is this legal?". It just 'feels' like you are doing something dirty.
It is only illegal to copy it if you have specifically given up that right. As the GPL says, "Most lices are created with the purpose of taking away your rights..."
another wrong assertion (Score:4, Informative)
You've got that backwards. It is only legal to copy a copyrighted work (other than for fair use) if you've been specifically granted that right by a license (e.g. the GPL). (IANAL)
The default under copyright law is to forbid copying; most shrink-wrap "licenses" try to restrict your rights beyond the ordinary powers of copyright.
Re:another wrong assertion (Score:2)
It is completely in the spirit and letter of the license that I can download any linux distro and install it on an infinite number of machines.
Most people instinctively feel that you must be doing something wrong.
The comment from the lawyer makes it sound like they are 'graciously' allowing that.
Re:Wrong assertion from the lawyers (Score:2).
I think their numbers may be off (Score:4, Interesting)
And wow, it sure took them a long time to figure out the "codeword" for pirated software
:)
Someone needs to call shenannigans... (Score:3, Informative)
I can do a search for "warez" right now and probably come up with at least a million sites. These guys are so full of shit it should be criminal. They are deliberately misleading people about this issue. So, is anyone standing up to call them on it? Who has the clout to be heard there??
Eastern Europe? (Score:3, Insightful)
Re:Eastern Europe? (Score:3, Insightful)
You know how MS got so popular? People took copies home from work, installed it on ther machines so they could 'work' from home.
My 98 system went down this weekend, lost everything. I went to reinstall Ofice 98. Turns out there where in the box that got lost when we moved. You think I'm going to buy my next copy?
Re:Eastern Europe? (Score:2)
+1 Nitpick, +1 correction (Score:2)
"Progress" is usually an intransitive verb, but it does have an (mostly obsolete, but coming back in style as you noted) usage as a transitive verb.
Make up your minds... (Score:3, Funny)
Sheesh. Are there any insulting comparisons Microsoft hasn't yet made?
Lower prices? Yeah, right. (Score:3, Insightful)
Re:Lower prices? Yeah, right. (Score:2)
They were so right.......
Open source: the solution to piracy (Score:2)
New Acronym? (Score:2, Interesting)
What do you expect. Software is write-once sell-many (WOSM)
Opportunity cost (Score:4, Insightful)
Take games for example. They still usually cost around $50 bucks, just like they have for years. I pay $50 dollars for my tax program every year now because, after all, what's $50 bucks? 10 years ago it cost the same and we used to get 5 people together and pay $10 bucks each. Now we just buy it because it's more of a nuisance to pirate than it is to just pony up the cash.
Games are relatively cheap too. If you use a pirated version, half the time you're having problems like, "I need the latest 1.09 patch for such and such bug/feature but it breaks my 1.07 pirated no-cd version". It's just easier to buy it than it is to go surfing warez sites/kazaa, etc. My time is more valuable than that.....surfing for warez takes time away from gaming.
Re:Opportunity cost (Score:2)
Everything else even remotely related to computers has gotten drastically cheaper over the years. Everything except Microsoft's software. It has actually gotten more expensive. Gee... I wonder why that is... could it be... monopoly?!
Re:Opportunity cost (Score:3, Insightful)
Prove this statement. I dare you.
Go back and check the price of the original DOS, the original Windows... Windows 95 and so on. The price of the OS has remained almost constant as long as I can remember.
Then if you go and compare the price of Microsoft's offerings to other comparable products in the industry you'll see software has gotten drastically cheaper because of Microsoft.
Linux Piracy numbers (Score:2, Funny)
Come on MS... come on... grow up!
Duh. (Score:2)
Besides, at least the pirates use windows. Us linux users are much more lowlife, in their opinion.
Well... (Score:2, Offtopic)
aawwwwww...
Easter Europe (Score:5, Interesting)
The problem isn't piracy. It is a lack of respect or even awareness of Intellectual Property in my opinion. There is no respect for it at all, it seems, in these countries. Their legislatures are just now starting to examine laws concerning it. I am not sure which industry is bigger: China's piracy rings or Russia's. In China the piracy goes to aid specific Red Army units (in fact the rings are allegedly controled by Army Generals).
It is an interesting problem. While we want to business with these countries, lack of protections makes it nearly impossible. At least under the rules and structure of Capitalism. While those rules can lead to our current situation where we have an agressively bad and dangerous monopoly controlled by Bill Gates, they generally are good and promote sane business practices. My hope is that Eastern Europe reforms. With China, I don't see and end coming to their ways of doing business.
Re:Easter Europe (Score:2)
Let's hope the Eastern European legislatures crack down on this kind of problem. When Windows XP costs $200, as God intended it to, then open source software will look a lot more attractive to those people.
Piracy works for Microsoft (Score:2)
People learn to use Microsoft and end up paying later, or encouraging other people to use microsoft through a network effect ('everyone uses Word/Excel'). If Microsoft software was available only at full price they would be more likely to try other alternative. The main battle is for mindshare not dollars.
Piracy allows microsoft to effectively sell cheap, without being accused of dumping.
Well, at least they are making that clear (Score:5, Interesting)
It is easy for those of us hip to the open source movement to laugh at this crap from MS, even though we know that some end users and such might be taken in by it. But the depths to which MS FUD penetrates the general IT community is bloody incredible to me.
Yesterday I was talking with a mid-level QA engineer from Apple. This guy is working on a very complex product. He knows how to code.
We start talking about software development, and I mention some things I am working on, mostly centered on Linux. At which point he says:
"That's cool, but anything you do on Linux you would have to give away for free, right?"
Contrary to what everyone is thinking, this guy isn't stupid. He isn't even technically inept. He works on a complex project and knows what he is doing in his problem domain.
Anything that MS might say about Linux and open source that isn't totally negative should be lauded, because a LOT more people than some of us realize, people we think should know better, apparently are buying pretty much everything MS is trying to spread about open source and Linux.
SHHHH!!!! Don't say the code word! (Score:5, Insightful)
I assume here they are referring to "warez". And yes, you will get a LOT of hits if you put that into a search engine. However, before you get TOO excited about it, understand that 99 times out of 100, you're more likely to find porn than pirated software if you actually visit any of those sites. Its a completely meaningless association.
The majority of "warez" trading is done through IRC or usenet. Yet those who are striving to rid the internet of piracy rarely mention these treasure troves. Certainly they get mentioned as the breeding ground for evil "hackers" and for child porn distribution, but as far as piracy goes, they tend to stay rather mum about it.
Could it be that their only real mission is one of sensationalism? They know for a fact that the average clueless newbie will do a hunt for pirated software on the web (because as far as they know, the web IS the internet), and will be disillusioned by all the porn websites, banners, and popups that they will figure its more trouble than its worth. They might trade with their friends and download some mp3's off Morpheus, but that will be the limit of their piracy activities.
However, if lots of news articles spent a great deal of time complaining about the rampant piracy on IRC and usenet and other places, then that clueless newbie might actually decide for once that a clue isn't such a bad thing and venture into that world. "What do you mean that IE can't go there???" But once entrenched in that world, they'll be very difficult to "retrain".
The public at large has been convinced by and large that child pornography and hacking are indeed "Bad things (tm)" and will probably avoid those places that distribute them. But software piracy hasn't reached that degree of evil in most people's eyes. So they will to some small degree actually seek it out. And deep down, there's probably an even bigger fear. Their preverbial sheep might stumble across something dangerous. "What's this here linux thing all about???"
ok. Fine. Mod me down.
-Restil
Bradford L. Smith (Score:3, Informative)?
Re?
I have no idea how they arrive at that number, but in reality ever copy of software that is downloaded (and used, so they people that just trade software and never use any of it don't count) usually costs someone something. If someone needs a photo editing program and they d/l a cracked copy of Photoshop, they most probably would not have paid for Photoshop, so they rightly did not cost Adobe $500 (or whatever the going rate is). Of course this is not always true, but in a general sense. It is the makers of the Gimp and small apps like Paint Shop Pro that have really lost the money (okay, Gimp lost users not money, but they still lost something). These people probably can't afford Photoshop and probably wouldn't have bought it, but they probably can afford a cheaper app (or a free app) but they don't use it because they can pirate Photoshop for free. If they need a photo editing app, they may not have bought Photoshop, but they would have bought something if they really needed it. But instead they bought nothing, choosing to get a pirated version instead. So no, ever person who d/ls and uses a cracked copy of Photoshop is not costing Adobe $500, but they are costing the smaller companies and free software instead.
How bad Piracy can get (Score:2, Interesting)
Right now, Piracy is such a problem in China that it actually has an impact on their economy. However, the piracy is not on software like Microsoft Office or Adobe Photoshop, it's on the software that governs assembly lines and supports large scale manufacturing, etc.
It's so established that there are actual private networks that have been built specifically for shuffling pirated software back and forth.
So why doesn't the government go after these private networks? Because the cost of bandwidth on these networks is much cheaper than the regular service providers...which means you have regular, legal companies using these pirate networks for everyday business use. And to top it all off, the average joe looks at these pirates as the underdog against the big bad govnt. The Chinese government can't touch these nets because they risk putting a lot of small businesses, well, out of business.
That's pretty scary to me.
For future reference... (Score:5, Funny)
Problems with Quoting? (Score:5, Insightful)
Why does the Topic say the lawyer said "is not piracy" when the text of the submission does not use these words? In fact, the text says: "Brad Smith as saying: 'Linux is a way of developing software whereas piracy is copying.'"
Could be just me, but I don't see the words "is not piracy" in there. We couldn't be bothered to use the actual words I suppose?
My Favorite (Score:4, Insightful)
By this standard, I suppose the music industry and (perhaps to a lesser extent) the software industry are "unhealthy". In fact, this makes piracy look pretty attractive, unless these "attractive prices" are cheaper than "free".
Obviously, the reason we have piracy is *because* the current prices aren't "more attractive". Also, not everyone who pirates a program really needs it, especially not for the price that it is selling at.
This goes double for programs that have free alternatives; most people don't really need that new copy of Photoshop 6, but why bother learning about The Gimp when you can just pirate the industry standard? Actually, bundling free alternatives to commercial software would be a good way to decrease piracy, but I doubt that most companies would agree to this, because it might also decrease *SALES*, which is all they really care about. They don't care about their customers, just their money...
Piracy is not good ... (Score:3, Insightful)
I'm surprised that the
-Sean
Software sales numbers & the price of goods (Score:4, Insightful)
To sum up the numbers from the article, the Euro software industry currently makes about 35bln pounds ($51bln USD), and say they're losing about 3bln pounds (4.4bln USD). So, about 8.6% is going down the drain, theoretically, due to piracy, putting aside quibbles about how they did the numbers, etc.
What I want to consider here are two things. First, the cost v. benefit of pursuing pirates, and second, the likelihood of cost improvement for the general software-buying public if piracy were eliminated.
As far as piracy is concerned, once you get past business-level piracy of software and get down to kids in their rooms trading programs, I'd say it becomes impossible. You simply can't police everyone all the time. But this isn't necessarily a bad thing -- maybe today's pimple-faced thief is tomorrow's well-off software purchaser? Software companies, none of whom are hurting, shouldn't let their lawyers obsess about kids in front of their computers.
Second and last, does anyone really think that if they were able to truly get rid of piracy across the board that they'd lower their prices to give you, the honest software buyer, a better deal? Even if they did, that discount in price would never be lower than the amount of piracy currently projected, meaning that a $50 program would only become a $46 program.
Software companies are stuck in a situation where they have to make their product both useful and try to prevent people from stealing it because it's easy to steal. Most people are honest, though, and because of this they make money. Basically, none of them were ignorant of the piracy situation, which has been the same since Win3.1, so they should quit their whining about how the market should be and work in the market that is.
Linux "is not piracy" says Microsoft lawyer... (Score:3, Funny)
press release (Score:5, Insightful)
Dear Jane Wakefield,
In the article titled "Net pirates 'threaten software industry'", posted at
on Monday, 29 April, 2002, 07:52 GMT 08:52 UK, you write down a few items that I don't consider to be entirely correct, and even more that are very one-sided.
Allow me to comment on some of these items:
> The warning was issued at a conference, organised by the Business
> Software Alliance (BSA), which attracted delegates from firms such as
> Microsoft, Apple, Adobe and Symantec.
This sounds like an accomplishment with credits to the BSA, except that the BSA is funded by the firms mentioned, especially Microsoft. Once you check the speakers list against the BSA membership list, you realize that what appears to be a conference is, in fact, a PR meeting.
Pointing this out to the reader would have enabled him to take the points made by these "delegates" with the grain of salt they deserve.
> The meeting was told that in 2000 the software industry in Europe lost
> $3bn to pirates.
I have always been interested in finding out just how BSA and other "independent" researchers arrive at these figures. They don't tell. Any credible claim should name its sources, shouldn't it?
> This figure is thought to be only a tiny fraction of the amount of
> piracy that is going on every day on the internet.
If I interpret "tiny fraction" as less than 10%, I'm at $30bn EVERY DAY, or about 11 trillion per year. The GDP of the UK in 2000 was $1.36 trillion. So these people are telling you that internet piracy is a business 10 times the size of the whole UK economy?
Obviously that is, if you excuse the word, bullshit. The sentence does, however, create the impression that internet piracy is unbelievably huge.
Even so, $30bn is more than Microsoft's worldwide net profits, and a considerable percentage of the total net earnings of europe's software industry. A claim of this size better be substantiated by serious facts and sources. Where are they?
> "We can't estimate how much piracy is on the net but in one day we
> found a million sites under a search for one of the codenames for
> pirated software," said a BSA spokesperson.
One of the "codenames" is "warez" and does indeed return about 4,230,000 hits when put into google.
However, what kind of point does that make? "Buckingham Palace" returns 99,300 hits, but as far as I am aware, there is only one.
More to the point, a search engine just tells you how many sites mention a given topic. Ironically, the BSA's own websites, both at bsa.org and national sites such as bsa.de or bsa.org.tr appear in the above-mentioned search for "warez", because they use the "bad word". A majority of the "real" warez sites are just traps with pornographic advertisement. A little research would have taken an hour or two and been quite revealing.
Warez sites are very real. The BSA, however, having an agenda, is greatly exagerating both their number and capabilities.
Finally, here are a few choice quotes that should have really ticked you off to the fact that the figures are made up:
> The meeting was told that in 2000 the software industry in Europe
> lost $3bn to pirates.
[...]
> Europe has a greater rate of piracy than the US - around 34%
[...]
> It is forecast to grow from £35bn in 2000
Maybe math works differently in america, but even without a calculator I can see that $3bn isn't 34% of $50bn.
It sorries me when I see journalists lifting whole articles almost verbatim out of corporate press releases. It is especially not the kind of reporting I expect from a respectable news source like BBC.
For the record, I am a computer security professional with a telco company. I have been working professionally on the internet for over 5 years, and I have seen the warez scene both from inside (when I was a teenager) and from the outside now that I deal with people abusing our computer resources for these purposes or help the law enforcement agencies to track criminals through our systems.
Piracy is real, no question about it. The BSA, however, justifies its very existence by a gross exageration of the facts, and as a very interested party should not be believed too much.
Re:Atoms != Electrons (Score:2, Insightful)
Re:Atoms != Electrons (Score:4, Insightful)
Here we go - in very tiny words for you, ok?
You go into a store. Software Product A is sitting on the shelf for $10.
You go around to your friend's house. Software Product A is copied to you for free.
Producer of Software Product A has now lost a $10 sale.
Whether you would have bought it for $10 or not is irrelevant - you made a copy, so it obviously has value to you.
Therefore, you are depriving the software company of their profit on that product.
If you disagree with this, then fine, disagree with the software company too - and DON'T USE or COPY THEIR PRODUCT.
Simon
In even tinyer words (or is that more tiny) (Score:2)
1) What if getting a copy the product required no effort on my part?
2) Are all things I get free of charge that are "of value to me" AND that somebody else sells piracy? For example, I hire somebody to clean my windows, I learn how they do it, then fire them and clean my windows myself. Is that piracy? Or shall we patent/copyright "a method for cleaning windows?"
How about a trademark? (Score:4, Funny)
I believe Linus holds the trademark on "Linux," and I've used it to clean Windows off of several systems.
You missed a step (Score:3, Insightful)
You decide that Product A isn't worth $10 to you. (The step you missed)
You go around to your friend's house. Software Product A is copied to you for free.
Producer of Software Product A has now lost a $10 sale.
Oops, except that the producer wouldn't have had that sale anyway. So while the revenue lost to unauthorized redistribution is probably non-zero, it is ceratinly not the total retail value of the number of unauthorized copies.
Re:So did you (Score:3, Insightful)
Clearly, less than $10, but more than nothing. But if I have already decided not to pay for a copy of the software, then logically no action on my part can possibly lose the firm in question a sale. This is self-evidently true. Of course, it depends on whether I would have bought a copy if I was unable to make a copy illegally.
I bet you go to car dealerships and drive cars off the lot that are 'too expensive' for you too.
An imbecilic analogy. If someone steals a car, someone loses a car: a zero-sum game. If, however, someone copies information, the original copy still exists.
Whatever. I thought Adam Waring was funnier anyway.
Zero minus $10 is still Zero! (Score:4, Flamebait)
That's why these enormous piracy figures are such a crock! They impute a negative amount of money to each illegal copy, and figure that as their loss. The only real losses are from those who can afford to buy the software, WOULD buy the software at the price set, and DON'T. And face it, the hassle for those people is not worth it!
Now the article raises another concern: organized criminals selling illegal copies of software. For some reason, this tends to happen in poor countries like Ukraine or Vietnam. Guess what? Selling Windows XP for $99 dollars in Vietnam is more like old-fashioned piracy than selling illegal copies! It's requiring more than the product is worth in time and trouble to those who haven't got the income. Sell your software products in rich countries for full price, but sell it dirt cheap in poor countries, and you'll have GREAT market share down the road.:Atoms != Electrons (Score:2)
Re:Atoms != Electrons (Score:4, Informative)
Software company 'A' sells a piece of software for $159.95. User 'Bob' doesn't have $159.95 to spend on software. Now, if we take the amount of money that company 'A' has when 'Bob' doesn't buy the software, and subtract it from the amount of money that the company 'A' has when 'Bob' pirates the software we get the indisputably correct amount of money that company 'A' has lost from 'Bob's piracy. I hope you can add, because here we go:
$0 - $0 = $0.
That's right, the company lost $0. $0. That's it, just $0.
I'm a professional software developer. That's what I do for a living. I fully understand that software piracy is bad, but to say that every pirated copy is a loss of money is just a lie. Some are, some aren't. I just showed you the math to prove it. Stop spreading you BSA marketing department lies.
If you want people to listen to you, more importantly if you want to influence people, then you have to tell the truth. People aren't stupid, and they can tell when you are lying to them. If you want to convince people to stop pirating software, you will have to find honest arguments, and you should know that there are many of them. Even 'Bob' the hypothetical software pirate can add, so your arguement won't work on him.
Re:Atoms != Electrons (Score:3, Insightful)
My reasoning is math. The numbers add up. Yes it's wrong to pirate software, but the software companies are only loosing money on some pirates, not all of them.
I'm not saying you should be sympathetic with software pirates, and I'm not saying the BSA shouldn't go after companies that are pirating software ('are' should be emphasized in that sentence). I'm just saying that it's lying to say that software companies are loosing $x times the number of pirated copies of their software where 'x' is the amount of revenue they take in on each copy sold.
Re:Atoms != Electrons (Score:3, Insightful)
The price is negotiable in the sense that market demand will either force a price downward as demand decreases or upward as demand increases. Whether or not "Bob" can afford the software has nothing to do with it.
Macromedia has specified that their selling price is $499 and just because Bob cannot afford the cost of Flash then he is certainly not entitled to copy or steal the product.
Again, "justifiable" price is a subjective term. But the price, according to the law, is the value of the product, and therefore the loss is equal to the number of pirated copies times the price.
By your argument, I would be justified in taking your car if you refused to negotiate on a price I could afford.
Re:Atoms != Electrons (Score:2)
Assume that sans this option, I wouldn't
have bought it. That means the value it has
for me is less than the amount they are asking.
Possibly, near zero value.
They lost no profit, because they never would
have made any off me.
It's leeches like you that give leeches a bad name.
Look: they don't want to give you free entertainment. They want to sell entertainment to others.
If you don't like that, don't do it. Don't fuck with other peoples livelihoods just because you're too cheap to buy your toys.
I hope one of these days I get the opportunity to steal from you.
Simon
Re:Atoms != Electrons (Score:5, Insightful)
The case of mass illegal copying of software in China (AFAIK, the worst case) is a different matter, as is the practice of buying a limited number of licenses at a business or government office and then using far more copies than are licensed. In the first case, revenue is being generated, but the developer is not getting paid. In the second, the organization has a contract that they are violating. There are laws against both cases, and the BSA is trying to enforce them. Although I disapprove strongly of their tactics, I believe that they are well within their rights to attempt to enforce licenses. After all, even Evil Empires should be paid for their products.
#pragma old_geezer_mode=on
;-)
Way back in the old days, Borland had a product called "Turbo Pascal", when it came out it had copy protection and a horrible license agreement. After getting raked over the coals by lots of people, they cut the price, dropped the copy protection and horrific license and TP became one of the most popular development products of all time. I actually worked in support for TP, and the official company position was "We don't condone illegal copying, but at our price point we feel it doesn't matter." We'd even answer support calls from illegal copiers, but it was always fun to refer one of these guys to a particular page in the manual, and listen to them squirm
#pragma old_geezer_mode=off
"Pirates" are generally customers who's price point you haven't met (who said that, anyway?). This is the problem that many BSA members need to address. On the flip side, any educational/non-profit organization that doesn't switch to open source solutions will likely run afoul of them, with extremely negative consequences.
Atoms || Electrons != Information (Score:5, Interesting)
Consider this: Would a theatre continue to attract PAYING customers if they ever found out that people were getting in free? Would allowing people who can't afford PowerPoint to use it affect sales to those would can afford it? Should there be 'means testing' for software, like for charity, where you have to undergo an income/background check to see whether you have to pay for it or can qualify to get a free copy??
Re:Was there a question? (Score:3, Insightful) | https://slashdot.org/story/02/04/29/1639219/linux-is-not-piracy-says-microsoft-lawyer | CC-MAIN-2016-50 | refinedweb | 8,952 | 71.95 |
> > I don't know if this will help you or not, but mod_python actually uses
> > subinterpreters, each of which have their own namespace separate from the
> > main interpreter.
Brilliant! Just what I need.
> In short, requests against the directory and a file in the directory map
> to different interpreters when they should be the same. This makes it
> hard to have a default index page served up when directory is accessed.
Fortunately we do not use index.psp files, so it seems this bug won't affect us.
To summarise:
The final 3 lines I added after line 207 in psp.py are:
cwd = os.path.split(self.filename)[0]
if cwd not in sys.path:
    sys.path.append(cwd)
This allows (along with the PythonInterpPerDirectory directive)
multiple developers, on the same machine, to import different packages
(which reside in the same folder as the .psp file) of the same name
into .psp pages which are served from their public_html folders.
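For readers outside mod_python, here is a minimal standalone sketch of the same idea — the helper name and the `.psp` path below are hypothetical stand-ins for `self.filename` inside psp.py:

```python
import os
import sys

def add_script_dir_to_path(filename):
    """Append the directory containing `filename` to sys.path, once."""
    cwd = os.path.split(filename)[0]
    if cwd not in sys.path:
        sys.path.append(cwd)
    return cwd

# A package sitting next to the served .psp file now resolves on import,
# and each developer's public_html directory gets its own entry.
script_dir = add_script_dir_to_path("/home/alice/public_html/page.psp")
```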
Thanks for the assistance.
Sw. | https://modpython.org/pipermail/mod_python/2005-January/017041.html | CC-MAIN-2022-21 | refinedweb | 166 | 67.96 |
Sending Mail through ASP.NET c#
You can send mail through ASP.NET C# by using the following lines of code.
// This code helps in sending emails through ASP.NET C#
// so that they are delivered directly to the inbox, not to spam
// Go through these steps
First Step
Write this code in the web.config file:
<system.net>
  <mailSettings>
    <smtp from="sender Name">
      <network host="mail.Gmail.com" password="Email ID password" userName="sender Email ID" />
    </smtp>
  </mailSettings>
</system.net>
Second Step
// Use the necessary namespace
using System.Web.Mail;
third Step
//function to sending mail
//here create a (smtp) object
public void semdMail()
{
//create mail object
MailMessage mail = new MailMessage();
// mail parameters
mail.To = "Receiver EmailID";
mail.From = "Sender Email ID";
mail.Subject = "Subject";
mail.Body = "Message";
SmtpMail.SmtpServer = "localhost";
SmtpMail.Send(mail);
}
This snippet sends mails successfully...
but when I test this code on my website, all mails arrive in the Spam folder.
All I want is to send mails that show up in the receiver's inbox folder.
Thanks in advance............. | http://www.dotnetspider.com/resources/38149-Sending-Mail-through-ASP-NET-c.aspx | CC-MAIN-2018-34 | refinedweb | 168 | 71.21 |
Passing Blocks as Arguments
A recent project at Turing involved creating an Enigma machine of sorts that encrypted and decrypted messages by rotating each character to a new one based on a randomly generated key and a date, with different rotators based on the location of the character in the message.
Once I had the functionality working, I realized I had a method in both my decrypt and encrypt classes that did the exact same thing, but in reverse in the case of decrypt. It looked like this:
#In the Encrypt class

def rotate_message(message)
  indices_and_rotators = which_rotator(message)
  new_indices = []
  indices_and_rotators.length.times do |i|
    new_indices << (indices_and_rotators[i][0] + (indices_and_rotators[i][1] % 85))
  end
end

def rotate_encrypted_message(message)
  indices_and_rotators = which_rotator(message)
  new_indices = []
  indices_and_rotators.length.times do |i|
    new_indices << (indices_and_rotators[i][0] - (indices_and_rotators[i][1] % 85))
  end
end
Basically, the indices_and_rotators variable contains a 2D array of the initial index of the character, and the number of characters by which it was to be rotated. As you can see, the only difference in these two methods is that I’m adding the index and rotator together in the case of Encrypt, and subtracting them in the case of Decrypt. In the spirit of DRYing up my code, I wanted to do something about this.
I’ll walk through how it works, but first, spoiler alert — here’s what I ended up with:
#this is in the class from which both Encrypt and Decrypt inherit

def rotate_indices(message, &block)
  indices_and_rotators = which_rotator(message)
  @new_indices = []
  indices_and_rotators.length.times do |i|
    @new_indices << block.call(indices_and_rotators[i][0], indices_and_rotators[i][1] % 85)
  end
  ensure_valid_rotator
end

#in Encrypt

def rotate_message(message)
  rotate_indices(message) do |initial, rotation|
    initial + rotation
  end
end

#in Decrypt

def rotate_message(message)
  rotate_indices(message) do |initial, rotation|
    initial - rotation
  end
end
In the actual code, each of these methods is in a different Class and different files, but I’ve brought them together just to simplify viewing. Here’s how this works:
First, do all the setup (calling the which_rotator method, defined previously, and setting up the new_indices variable as an empty array) in the rotate_indices method so you don’t have to do it twice. Note that the new_indices variable now needs to be an instance variable so that it’s accessible across methods.
In the rotate_indices method, which both Encrypt and Decrypt have access to, accept the block as an argument. Do this by declaring &block as an argument of the method. You can call it whatever you want, as long as it starts with the ampersand, and you use the same name below when you call the block (line 6 in my example). This block argument does have to be the last one you pass to your method.
This turns your block into a Proc that you can call elsewhere, and still give it access to your local variables.
Once you’ve created your Proc, you can call it in the different rotate_message methods. initial references indices_and_rotators[i][0] and rotation references indices_and_rotators[i][1]. Then you can pass each the separate method that they need. (Yes, + and — are methods, you can write them the way you can because of Ruby’s syntactic sugar).
This is another brief example. The accumulate method is basically a rewriting of the built in map method.
class Array
  def accumulate(&block)
    result = []
    self.each do |item|
      result << block.call(item)
    end
    result
  end
end

#In use:

[1, 2, 3].accumulate do |number|
  number * number
end

# => [1, 4, 9]
Success! Cheers to Josh Cheek for showing me how this works. | https://adriennedomingus.medium.com/passing-blocks-as-arguments-859db5300f1 | CC-MAIN-2022-21 | refinedweb | 589 | 51.89 |
Hi all,
Years ago I wrote a few android apps, and I've decided to get back in and try it with Xamarin. After going back and forth on ideas, I decided to give the xamarin forms a whirl. From the defaults I generated a forms app with the following:
Xamarin.Forms.Droid
Xamarin.Forms.UWP
Xamarin.Forms.PCL
(DELETED iOS)
Now from my understanding, I created an additional Xamarin.Forms.Droid.UITest project using XUnit (Note: I've tried the default of NUnit too)
I've got a mix of styles (from the PCL) and theme (from Droid).
Recently I asked a question over trying to get my theme recognized within the android test project: Unit testing a Xamarin Forms Android specific code project
I'm now running into a conflict that shows up whether I use xUnit or NUnit as the testing strategy. I feel like it's a clash between the FormsAppCompatActivity and an older activity type, but I'm new to Xamarin and unsure how to approach this.
I get a lot of these type of errors:
Attribute layout_anchorGravity already defined with incompatible format.
The full list of the similar Attribute errors is:
Here is the basic code... At this point, I have a simple test forced to false just to confirm the runner works.
//Within the Android Forms project, I have this inheritance:
public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsAppCompatActivity
{
    // typical code
}

//within the Android UITest project
public class MainActivity : Xunit.Runners.UI.RunnerActivity
{
    protected override void OnCreate(Bundle bundle)
    {
        // tests can be inside the main assembly
        AddTestAssembly(Assembly.GetExecutingAssembly());
        AddExecutionAssembly(typeof(ExtensibilityPointFactory).Assembly);
        // or in any reference assemblies
        //AddTestAssembly(typeof(PortableTests).Assembly);
        // or in any assembly that you load (since JIT is available)

#if false
        // you can use the default or set your own custom writer (e.g. save to web site and tweet it ;-)
        Writer = new TcpTextWriter ("10.0.1.2", 16384);
        // start running the test suites as soon as the application is loaded
        AutoStart = true;
        // crash the application (to ensure it's ended) and return to springboard
        TerminateAfterExecution = true;
#endif
        // you cannot add more assemblies once calling base
        base.OnCreate(bundle);
    }
}
Any help would be much appreciated,
Kelly
P.S. I've also asked about this on stack overflow without resolution:
Answers
I assuming this is the right approach, but I could be completely wrong. The main thing is that I'm trying to do is integration testing (really unit testing android code, but like we all know, we have to go through the emulator at times).
Where this really started was that I had a basic UI that had a mix of styles and themes. The problem when going down this path was the unit testing project saying: hey buddy I can't find the theme that was referenced within the app. Eventually, with the help of an individual on Stack Overflow, that person had told me to change the reference to @style/mytheme. After doing this, the style and themes were seen from the project, but now I get the styling issues. I mentioned above.
Do you others do unit testing or integration testing of your xamarin forms android app with AppCompat activities? How do you approach this? I've looked around, and it seems that most tutorials show this concept pre AppCompat. | https://forums.xamarin.com/discussion/comment/286559 | CC-MAIN-2019-51 | refinedweb | 547 | 53.21 |
05 April 2012 07:54 [Source: ICIS news]
SINGAPORE (ICIS)--China's Sinopec and PetroChina have raised their naphtha prices for April, company sources said.
From 1 April, Sinopec refineries charged yuan (CNY) 7,940/tonne ($1,260/tonne) ex-refinery for naphtha supply to its chemical units, up by CNY410/tonne from March, according to the company source.
PetroChina increased its naphtha prices by CNY173/tonne to CNY6,911/tonne ex-refinery to its chemical units, the source from the company said.
The two companies primarily keep their naphtha resources for their own use. Surplus stock is sold on spot markets, or when the companies will enjoy higher margins selling it than processing the feedstock into chemicals, industry sources said.
hi
i am calling one servlet from another servlet but this error is coming:
HTTP Status 405 - HTTP method GET is not supported by this URL
could anybody please help me find where the problem is? i'm new to this technology and eager to learn... my code is:
FromServlet.java
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
public class FromServlet extends HttpServlet {
public void doPost (HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
RequestDispatcher RD = getServletContext().getRequestDispatcher("/ToServlet?param=method1");
RD.forward(request, response);
}
}
ToServlet.java
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class ToServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException {
        String param;
        param = request.getParameter("param");
        if (param.compareTo("method1") == 0) {
            try {
                method1();
            } catch (Exception ex) {}
        }
    }

    public void method1() {
        //Do something
    }
}
Have you tried using response.sendRedirect() ?
no i have not. as i said earlier, i'm new to this, so could you please change my code so that i can try it?
The idea is that we help you solve your programming problems. vikrantkorde gave you a great hint, maybe you should try it out before you give up.
If you would like someone to do your work for you, I suggest you drop your programming classes and start working towards your MBA.
This topic is now closed. New replies are no longer allowed. | http://community.sitepoint.com/t/calling-one-servlet-form-another-servlet/2540 | CC-MAIN-2014-42 | refinedweb | 229 | 57.16 |
Working with resource strings
Resource strings store the text displayed in the Kentico administration interface (and on the live site in some cases).
- All default resource strings are stored in the cms.resx file, which is located in the project's CMSResources folder.
- You can edit the resource strings that the system stores in the database in the Localization application on the Resource strings tab.
Resource string priority
When loading resource strings, the system uses the following priority:
- database (Localization application)
- custom.resx
- cms.resx
If there are duplicate strings with the same key in all three sources, the system uses the one stored in the database.
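Conceptually, this resolution behaves like a simple fallback chain. As a language-agnostic sketch (illustration only, not Kentico code):

```python
# Model resource-string resolution as a fallback chain:
# the first source in priority order that contains the key wins.
def resolve(key, database, custom_resx, cms_resx):
    for source in (database, custom_resx, cms_resx):  # default priority order
        if key in source:
            return source[key]
    return None

database = {"btn.save": "Save (customized)"}
custom_resx = {"btn.save": "Save", "btn.cancel": "Cancel"}
cms_resx = {"btn.save": "Save", "btn.cancel": "Cancel", "btn.ok": "OK"}
```

With the CMSUseSQLResourceManagerAsPrimary key set to false, the tuple order would simply change to (custom_resx, cms_resx, database).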
To change the priorities, you can add the following key to your web.config:
<add key="CMSUseSQLResourceManagerAsPrimary" value="false" />
When this key is added, the priorities are as follows:
- custom.resx
- cms.resx
- database
Modifying the default UI strings
If you want to modify text in the user interface (including web part dialogs), you can create a custom.resx file and store your strings in this file. The keys used to identify the strings must be the same as in cms.resx file. This procedure allows you to modify the strings without worrying that your changes will be overwritten during an upgrade to a newer version.
If you need to customize strings in a non-English resource file, your custom file must use a name like custom.<culture code>.resx (for example, custom.fr-fr.resx for French).
If you need to retrieve the value of a resource string in your custom code, use the CMS.Helpers.ResHelper.GetString method.
using CMS.Helpers;

...

// Loads the value of the 'stringKey' resource string (in the default culture)
string localizedResult = ResHelper.GetString("stringKey");
Travian: Kingdom utilities for your need.
tkpy
Travian: Kingdom utilities for your need.
It provides several objects that are commonly used in Travian: Kingdom, such as Map, Villages, Notepad, and Farmlist.
Installation
It is recommended to use virtualenv or a similar virtual environment manager for Python.
tkpy depends on the primordial package, so you first need to install primordial.
pip install git+
Getting started
tkpy needs a Gameworld object from the primordial package so it can request data from Travian: Kingdom. Use the authenticate function to retrieve a Gameworld object.
from tkpy import authenticate

driver = authenticate(email='your@email.com', password='your password', gameworld='com12')
Usage
Map
The Map object keeps map data from Travian: Kingdom. To get the map data, you need to call the pull method.
from tkpy import Map

m = Map(driver)
m.pull()
Once you have called the pull method, you can get tile data using several generator methods.
all_villages = list(m.gen_villages())  # get all villages from map
abandoned_valleys = list(m.gen_abandoned_valley())  # get all unsettled tiles from map
oases = list(m.gen_oases())  # get all oases from map
grey_villages = list(m.gen_grey_villages())  # get all grey villages from map
Or, if you want data for a specific tile, you can use the coordinate method.
m.coordinate(0, 0)
<Cell({'id': '536887296', 'landscape': '9013', 'owner': '0'})>
When you call the pull method, you also get player data and kingdom data. The Map object keeps this data, and you can retrieve it using several methods.
player_list = list(m.gen_players())  # get all players
kingdom_list = list(m.gen_kingdoms())  # get all kingdoms
inactive_player_list = list(m.gen_inactive_players())  # get all inactive players
If you want data for a specific player or kingdom, you can use the get_player and get_kingdom methods.
m.get_player('player name')
<Player({'name': 'player name', 'country': 'en', 'tribeId': '1', ...})>
m.get_kingdom('kingdom name')
<Kingdom({'tag': 'kingdom name', 'kingdomId': '9999'})>
If you want to slice the map data down to an area of interest, you can use the slice_map method.
sliced_map = m.slice_map(center=(0, 0), radius=5)

# now you can do the same thing as with the `Map` object
grey_villages = list(sliced_map.gen_grey_villages())
Villages
The Villages object is like Python's built-in dict, so you can access a village using its name as the key. To get village data from Travian: Kingdom, you need to call the pull method first.
from tkpy import Villages

v = Villages(driver)
v.pull()
v['your first village']
<Village({'villageId': '537313245', 'playerId': '001', 'name': 'my first village',...})>
From the Villages object you can get a Village object, and from it you can call send_attack, send_raid, send_defend, send_spy, and send_siege.
If you want to attack, you need to get the troop enum from tkpy.
from tkpy import RomanTroop  # if you are a Roman tribe

first_village = v['your first village']  # get your first village object

units_siege = {RomanTroop.IMPERIAN: 1000, RomanTroop.BATTERING_RAM: 1}  # prepare units
units_attack = {RomanTroop.LEGIONNAIRE: 1000}  # prepare units
units_raid = {RomanTroop.EQUITES_IMPERATORIS: 50}  # prepare units
units_defend = {RomanTroop.PRAETORIAN: 1000}  # prepare units

first_village.send_siege(x=0, y=0, units=units_siege)  # send siege
first_village.send_attack(x=0, y=0, units=units_attack)  # send attack
first_village.send_raid(x=0, y=0, units=units_raid)  # send raid
first_village.send_spy(x=0, y=0, amount=1)  # send spy
first_village.send_defend(x=0, y=0, units=units_defend)  # send defend
From the Village object you can also upgrade a building in the village using the upgrade method, and construct a new building using the construct method. To upgrade or construct a building, you need to get the building enum from tkpy.
from tkpy import BuildingType

first_village = v['your first village']  # get your first village object

first_village.upgrade(building=BuildingType.MAIN_BUILDING)  # upgrade main building
first_village.construct(building=BuildingType.WAREHOUSE)  # construct warehouse
Farmlist
The Farmlist object is like Python's built-in dict, so you can access a farmlist using its name as the key. To get farmlist data from Travian: Kingdom, you need to call the pull method first. From Farmlist you can create a new farmlist by calling create_farmlist.
from tkpy import Farmlist

f = Farmlist(driver)
f.pull()
f['Startup farm list']
<FarmlistEntry({'listId': '1631', 'listName': 'Startup farm list', ...})>
f.create_farmlist('new farmlist')
f['new farmlist']
<FarmlistEntry({'listId': '1632', 'listName': 'new farmlist', ...})>
From the Farmlist object you can get a FarmlistEntry object; from it you can add a new village to the entry and send the entry.
f['Startup farm list'].add(villageId=536887296)  # add a village to 'Startup farm list' by village id
f['Startup farm list'].send(villageId=537051141)  # send 'Startup farm list' from the village with this id
Notepad
Notepad is an object that, when instantiated, creates a new notepad in game. Use the message method to write a new message.
from tkpy import Notepad

n = Notepad(driver)  # a new notepad will appear in game
n.message('this is new message on new notepad')  # write a message to the notepad

# careful: using the `message` method overwrites whatever was previously on the notepad
n.message('old message will be overwritten')
Documentation
For documentation, you can go to this wiki.
Disclaimer
Please note that this is a research project; I am by no means responsible for any usage of these utilities.
Use them at your own risk. I am also not responsible if your accounts get banned due to extensive use of these utilities.
CowHerd
CowHerd is a partially-observed reinforcement learning environment, where the player walks around an area and is rewarded for milking cows. The cows try to escape and the player can place fences to help capture them. The implementation of CowHerd is based on the Crafter environment.
Play Yourself
You can play the game yourself with an interactive window and keyboard input.
The mapping from keys to actions, health level, and inventory state are printed
to the terminal.
# Install with GUI
pip3 install 'cowherd[gui]'

# Start the game
cowherd

# Alternative way to start the game
python3 -m cowherd.run_gui
The following optional command line flags are available:
Training Agents
Installation:
pip3 install -U cowherd
The environment follows the OpenAI Gym interface:
import cowherd

env = cowherd.Env(seed=0)
obs = env.reset()
assert obs.shape == (64, 64, 3)

done = False
while not done:
  action = env.action_space.sample()
  obs, reward, done, info = env.step(action)
Environment Details
Reward
A reward of +1 is given every time the player milks one of the cows.
Termination
Episodes terminate after 1000 steps.
Observation Space
Each observation is an RGB image that shows a local view of the world around
the player, as well as the inventory state of the agent.
Action Space
The action space is categorical. Each action is an integer index representing
one of the possible actions:
Introduction:
I'm a complete beginner writing a blog for the first time; I hope you'll bear with me.
If there are mistakes, I hope experienced readers will point them out; I'll correct them promptly. Thank you.
Find your future goals while you're at your best age.
There is still one year to work hard before interviewing for internships next year. Keep going!
As usual, let's first look at the problem statement:
Given the root node of a binary tree, determine whether it is a balanced binary tree.
If the depths of the left and right subtrees of every node in the binary tree
differ by no more than 1, then it is a balanced binary tree.
Example:
Given a binary tree [3,9,20,null,null,15,7,null,null]
Answer: true
Reading the problem, my first reaction was:
The problem says that to verify an AVL tree, you must ensure that for every node the depths of its left and right subtrees differ by no more than 1.
So the first idea I came up with was to write a function to compute depth (depthdiff). I traverse the nodes from top to bottom; at each node, I pass its left child to depthdiff to compute that subtree's depth, do the same for the right child, and then compare the two.
This approach is rather complex, and its time complexity is O(n log n).
The detailed code, with comments, is as follows:
#include <iostream>
#include <queue>
#include <cmath>
using namespace std;

struct TreeNode {
    int val;
    TreeNode *left;
    TreeNode *right;
    TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}
};

// This is a top-down strategy.
// On closer analysis: by the time the root node has been judged, the work on the
// lower nodes has effectively been done already, so this repeats far too much
// work and is inefficient. Hence the improvement further below.
class Solution {
public:
    bool isBalanced(TreeNode* root) {
        if (root != nullptr) {
            bool judge = false;
            int left = depthdiff(root->left);
            int right = depthdiff(root->right);
            if (abs(left - right) >= 0 && abs(left - right) <= 1) {
                judge = true;
                bool judge1 = isBalanced(root->left);
                bool judge2 = isBalanced(root->right);
                // The current node must qualify, and both subtrees must qualify too
                if (judge == true && judge1 == true && judge2 == true) {
                    return true;
                }
            }
            return false; // If the current node fails, we must return false
        } else {
            return true; // An empty tree needs no checking
        }
    }

    int depthdiff(TreeNode* root) {
        if (root == nullptr) {
            return 0; // An empty subtree has no levels, so return 0
        } else {
            queue<TreeNode*> tar;
            tar.push(root);  // first enqueue the root node
            int cnt = 1;     // current number of levels
            int cntTemp = 1; // number of nodes on the current level
            while (!tar.empty()) {
                int cnt0 = 0; // count the nodes on the next level for the next iteration
                for (int i = 1; i <= cntTemp; i++) {
                    // Dequeue each node on this level, enqueueing its children if any
                    if (tar.front()->left != nullptr) {
                        cnt0++;
                        tar.push(tar.front()->left);
                    }
                    if (tar.front()->right != nullptr) {
                        cnt0++;
                        tar.push(tar.front()->right);
                    }
                    tar.pop(); // discard the node once processed
                }
                cntTemp = cnt0; // after finishing a level, record the size of the next one
                if (!tar.empty()) {
                    cnt++; // only count another level if there are nodes left
                }
            }
            return cnt;
        }
    }
};
(All code runs correctly on LeetCode.)
The problem with method one is obvious:
First, it uses a top-down strategy.
As a quick pen-and-paper sketch shows, this method is very inefficient:
by the time I finish judging the root node, I have already processed every node below it once, and then I process them all over again.
What do we programmers hate most?
Isn't it exactly this kind of meaningless repeated work?
We find it tedious, and the boss and the interviewer find it even more tedious!
Imagine a job that doesn't have to be redone from start to finish every time: the work can be divided up,
and each new step only needs the results already computed by its predecessors.
The second optimization scheme --- remember your thoughts (you're blind, big guys don't mind):
Before saying that, the viewer can draw a tree simply. It is not difficult to find that the depth of a tree is the maximum of two depths of the left and right son nodes of the root node+1
With this formula we'll be fine
Because it's a bottom-up strategy, we have to go to the bottom first before we start, so we take a back-root traversal
Because it is equivalent to traversing from bottom to top, the time complexity is O(N).
The code is as follows:
#include <iostream>
#include <queue>
#include <cmath>
using namespace std;

struct TreeNode {
    int val;
    TreeNode *left;
    TreeNode *right;
    TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}
};

class Solution {
public:
    bool isBalanced(TreeNode* root) {
        // All depths are >= 0, so we let -1 mean "unbalanced";
        // any value >= 0 means the subtree is balanced and gives its depth
        return (depthdiff(root) != -1);
    }

    int depthdiff(TreeNode* root) {
        if (root == nullptr) {
            // Boundary case: we have gone past a leaf node
            return 0; // no levels, so depth 0
        }
        int left = depthdiff(root->left);   // post-order: left subtree first
        int right = depthdiff(root->right); // then the right subtree
        if (left == -1 || right == -1) {
            // Once any node fails, judging the remaining nodes is pointless,
            // so prune: take the fast exit and propagate -1 upward immediately
            return -1;
        } else {
            if (abs(left - right) >= 0 && abs(left - right) <= 1) {
                // Balanced at this node: compute the depth by the formula
                return max(left, right) + 1;
            } else {
                // This node fails, so prune immediately via the fast exit
                return -1;
            }
        }
    }
};
(All code runs correctly on LeetCode.)
Summary:
1. In the first method, the key point is how the queue is used. A queue is very flexible: there is no need to follow the traditional level-order pattern of dequeuing one node and enqueuing its two children, which makes counting levels awkward. Instead, process one whole level at a time: handle every node of the current level, then check whether the queue is empty; if it is not, increment the level counter. So be flexible with queue operations and work level by level.
2. The idea behind the second method is common and important; for example, optimizing the Fibonacci sequence uses the same bottom-up strategy. It is fast and convenient because no work is repeated: each later step reuses the results of the earlier ones.
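As an illustration, bottom-up Fibonacci computes each value exactly once, with every step reusing the two results before it:

```cpp
#include <cassert>

// Bottom-up Fibonacci: start from the base cases and iterate upward,
// so no value is ever recomputed.
long long fib(int n) {
    long long prev = 0, curr = 1;  // fib(0) and fib(1)
    for (int i = 0; i < n; i++) {
        long long next = prev + curr;
        prev = curr;   // slide the window up one step
        curr = next;
    }
    return prev;       // after n steps, prev holds fib(n)
}
```

Compare this with the naive recursive version, which recomputes the same subproblems exponentially many times — the same contrast as the two tree solutions above.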
When you find your program doing the same work over and over, think about using memoization to eliminate the repetition.
When a top-down approach forces you to consider too many cases, try turning it upside down: use post-order traversal to reach the lowest nodes first, then move up level by level. Each later computation reuses earlier results, and the total number of cases shrinks.
3. When some condition makes a large part of the tree irrelevant to the answer, open a fast exit path so you can prune and return immediately.
4. When designing algorithms, always focus on the boundary cases: consider the boundary, consider the boundary (important things are worth saying three times), and design good boundary handling.
I installed the dataextract python library. I get no error messages. However when I try to "import dataextract", the statement fails with the following message. Can someone help me figure out what the problem is?
import dataextract
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import dataextract
File "C:\Python27\lib\site-packages\dataextract\__init__.py", line 15, in <module>
from Base import *
File "C:\Python27\lib\site-packages\dataextract\Base.py", line 17, in <module>
from . import Types
File "C:\Python27\lib\site-packages\dataextract\Types.py", line 17, in <module>
tablib = libs.load_lib
File "C:\Python27\lib\site-packages\dataextract\Libs.py", line 33, in load_lib
self.lib = ctypes.cdll.LoadLibrary(self.lib_path)
File "C:\Python27\lib\ctypes\__init__.py", line 443, in LoadLibrary
return self._dlltype(name)
File "C:\Python27\lib\ctypes\__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
WindowsError: [Error 193] %1 is not a valid Win32 application
The DLLs in the API package must be in the DLL search path (your current working directory or PATH). Please also check that your Python and the API have the same architecture (32- or 64-bit).
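A quick way to check the architecture of the Python interpreter itself (a generic CPython check, not specific to dataextract):

```python
import struct
import platform

# Pointer size in bits: 32 for a 32-bit interpreter, 64 for a 64-bit one.
bits = struct.calcsize("P") * 8
print(bits)
print(platform.architecture()[0])  # e.g. '32bit' or '64bit'
```

If this prints 64 but the installed dataextract DLLs are 32-bit (or vice versa), ctypes raises exactly the "not a valid Win32 application" error shown above.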
Even if you have 64-bit Windows, try downloading the 32-bit version. This worked for me on two 64-bit machines.
User:SuggestBot
From Wikipedia, the free encyclopedia
SuggestBot is a program that attempts to help Wikipedia users find pages to edit. More detail is below.
If you want to see some personalized recommendations for you, please leave your name at User:SuggestBot/Requests.
About SuggestBot
This is a Wikipedia bot belonging to ForteTuba. Its purpose is to match people with pages they might like to contribute to based on their past contributions. It uses a variety of algorithms, including standard information retrieval and collaborative filtering techniques, to make suggestions. It also sometimes points people to the Community Portal, or their past edits, as a source of inspiration.
It mostly runs at the GroupLens Research Lab on various machines, mostly using a recent copy of the Wikipedia database. It downloads people's contributions when making recommendations (to avoid recommending pages they've edited since the last dump) and it also occasionally downloads recent contributions to check whether people are taking recommendations. It is still under development, written in Perl.
It makes suggestions in two ways:
- People who ask, it posts them directly to their talk page, like this.
- For people it randomly picks, it creates a subpage of this user page, puts the links there, and leaves a brief note on their talk page.
If this bot made personalized recommendations for you, please leave feedback on whether they were useful and how to make them better. Comments on recommendations, as well as general comments, suggestions, or complaints, are best left on the bot's talk page. Comments are welcome and valuable, as they will help SuggestBot do a better job of helping Wikipedia.
No one had strong objections on Wikipedia_talk:Bots when this bot was proposed, so I'm running off and on. It has run a couple of pilot tests, and this bot has a page where people can request suggestions. Eventually, the creator of SuggestBot wants it to become a Wikipedia Tool.
Limitations/issues
- It's still not good with non-low-ASCII characters in usernames. Sigh.
- Some people would like wanted articles (redlinks). Hard because the only info we have on a redlink is a title and the pages that link to it -- no edit history to work from. Might be able to do this.
- Someone suggested removing section stubs from the stubs list. Probably the right thing to do.
- Right now you have to make requests each time you want recommendations. It should have an easy way to support repeat customers.
- Should probably remember what's been recommended to a person, and avoid re-recommending for repeat customers.
- Needs to eventually, automatically, re-download lists of articles.
- Automated posting of suggestions/notifications is broken for some talk pages, and I don't know why. Probably redirects, someone pointed this out to me.
- Only reads up to N (=500 as of Mar 6 2006) of a person's most recent edits when making recs. It tries to get older edits from a dump, in order to not recommend articles people have edited in the past, but this isn't perfect because dumps go out of date (there might be a gap between your last 500 edits and any edits it finds in the dump, and articles in that space might be recommended).
- Doesn't handle redirects (also leading to recommendation of already-edited pages). This appears to be a relatively minor issue based on a little bit of testing of recommended items. I'd like to do this on the back end, so that if a person has edited several versions of a page, SuggestBot would "know" that they were all the same page (and maybe do better).
- Ignores anything outside of main namespace (taking article talk pages into account might be interesting, a better representation of people's interests than just edits of articles directly. On the other hand, people often post on talk pages of articles they'd like to see deleted?)...
[edit] Changelog
- Try to improve profiles by ignoring minor and disambig edits. -- 11:28, 7 August 2006 (UTC)
- Kick over coedit recommender to 7-17 dump. -- 11:28, 7 August 2006 (UTC)
- Removed random recommendations, they were rarely followed. -- 05:01, 27 March 2006 (UTC)
- Eliminate most previously edited articles by looking at a relatively recent local dump. Many dumps fail on en, and processing them takes days, so for now we're on a mostly-processed version of the 2-19-06 dump. -- 16:24, 15 March 2006 (UTC)
- Maybe fixed all accented character issues? -- 15:58, 15 March 2006 (UTC)
- Add a filter to not recommend articles in the top N% (N=1) of edited articles -- a better way to handle the controversial article problem, and consistent across recommendation algorithms. -- 00:11, 15 March 2006 (UTC)
- Instead of recommending among all articles, focus on recommending articles tagged as stubs or needing work. (Somewhat like OpenTask but giving more weight to stubs since there are so many more of them.) -- 21:49, 14 March 2006 (UTC)
- Expand edit removing to include protection actions. -- ForteTuba 16:16, 14 March 2006 (UTC)
- Make some random recommendations, to make sure all articles get recommended eventually (a la User:Pearle's maintenance of Template:Opentask). -- 18:48, 13 March 2006 (UTC)
- Harshly penalize articles with lots of links in the link-based recommender, to recommend fewer popular pages (that presumably have less opportunity to contribute to). -- 16:35, 10 March 2006 (UTC)
- Remove many edits as input to recommendations, if the comment suggests they're reverting vandals. These edits appear to cause recommendations to zero in on controversial pages. -- 16:35, 10 March 2006 (UTC)
- Fix many accented character issues. -- 21:41, 7 March 2006 (UTC) | http://ornacle.com/wiki/User:SuggestBot | crawl-002 | refinedweb | 949 | 54.22 |
Yesterday was a fantastic day. I spent the afternoon helping out at the Wintellect booth. There is a rumor I may have been seen challenging attendees to a game of 9-ball in the game room once or twice. I also attended two great sessions, one about dependency injection and the other a panel discussion.
The sessions from day one are now posted online.
The Future of Dependency Injection
This talk by Vojta Jina discussed dependency injection within Angular and what the future will look like. First, he talked a bit about why DI is so important. I think this is something many server-side OO developers take for granted but the nature of JavaScript makes it a little less obvious on the client. He demonstrated the difference between creating instances from within a module versus taking them as a constructor panel, but pointed out this leads to a monolithic and complex “main method” that has to wire everything up. The logical solution is an injector that can keep track of components and provide them as needed when other components require them.
The existing Angular 1.x versions use a module-based approach. Each module is essentially a separate DI container, and modules can depend on other modules so the injector knows how to reach across containers to wire up the dependency graph. This approach works but is also a bit complex to understand and master. The Angular team decided they wanted to focus on simplifying this as much as possible for the 2.x versions, so a new approach is being taken.
The new approach is quite elegant. The concept of containers goes away. This is possible because of the way you reference dependencies. There are two parts. The first is a special statement to import dependencies, that looks something like this:
import {MyComponent} from ‘./myComponent’;
This will look in the myComponent.js file for the definition (either via constructor function or factory) of MyComponent. Anything you wish to expose for dependencies gets exported in a format similar to TypeScript, like this:
export class MyComponent
Of course this is the new format that is not compatible with older JavaScript implementations. For this, Google provides a compiler that builds modern JavaScript out of “future JavaScript” or you can use the direct convention like:
define([‘./myComponent’, function ….
Then to annotate what dependencies a class needs, you annotate like this:
@inject(MyComponent)
class MyDependentComponent {
or the classic:
myDependentComponent.annotations = new Inject(MyComponent);
This is very exciting. It allows for asynchronous dependency resolution and makes asynchronous module loading a first class citizen of the framework. As a veteran of many server-side DI frameworks over the years I’m a huge fan of the changes they are making and look forward to the new version.
Team Panel
The afternoon concluded with a team panel. This included almost a dozen team members answering questions that had been voted on a site. The majority of questions had been previously answered in other talks. A few that stood out included a concern over Dart support. There had been some rumors that the adoption of Dart would de-prioritized JavaScript but the team assured everyone that JavaScript will continue to be first class.
One interesting question was about memory leaks. The team was adamant that Angular is so thoroughly tested that it simply doesn’t leak, but that there are side effects from practices that can cause the leaks. For example, compiling a DOM element and not attaching it to the DOM will cause a leak. We’ve encountered issues with third party controls that create or track DOM elements as well, but in each case we’ve simply had to track the integration point and either call $destroy on the related $scope or call a method on the third-party control to detach the DOM. Angular automatically fires an event when a compiled DOM node is detached that will destroy the associated $scope.
Community
Igor Minar started day two by talking about the Angular community. He explained the vision of changing the way developers feel about the web, and how users experience the web. He shared stories about the early days of Angular when there were few defects and fewer docs. He covered many of the ups and downs, like the 1.2 release breaking a lot of apps until they reverted a “minor” change, the reaction when they pulled support for IE8 and the announcements around Angular for Dart. He also shared how the team uses the leading edge version of Angular internally … literally within hours of a release, hundreds of apps within Google are updated with that version. This has taught them to focus on the stability of changes and “not to ship bugs.” The pre-release builds are pushed to Bower as well.
Pro tip: when building features like progress bars, date controls, etc. instead of building them as an Angular directive, consider building them as a traditional reusable JavaScript entity and then use the directive to glue into Angular. Makes more more reusable code.
At the end of his talk he shared that the team is creating an Angular working group to help steer the direction of future releases. Do you want to contribute to Angular and help it grow? The call to action is available right here.
TypeScript
The next talk covered TypeScript. This is a language Wintellect has had great success using. It is not a separate language to learn, but is a superset of existing JavaScript that tries to conform with upcoming ECMAScript 6 standards. Sean Hess walked through some examples and features of the language. We have experience with several projects that use TypeScript, including projects that switched midstream so we have a great before/after snapshot of the benefits it provided (and surprisingly, even the “JavaScript purists” on the team were won over). Contact me or swing by the Wintellect booth if you want to learn more.
Realtime Apps with Firebase
Anant Narayanan covered Firebase. He started with a discussion of the evolution of web apps and the notion that we are finally at a place with tools like Angular to build real web apps and not just try to paste the app paradigm onto the web page paradigm. Firebase is an API (on top of an application platform) to store and synchronize data in real time. It provides a simple login service that supports several major login providers for authentication and provides a declarative security system for data. Firebase provides “3-way data-binding” by synchronizing data with remote databases for you.
He demonstrated the functionality by building a live “video commenting” app. The concept is that while you are on the page, the comments update real-time as other users are adding comments. It was very impressive to see how easily he was able to drop in the $firebase dependency and create a real-time app on the fly. Firebase provides a hosting service for SPA apps that holds all static assets, JavaScript, etc. Learn more on their site. You can also interact with a sample he built and deployed live.
Scaling Directives
Burke Holland talked about scaling directives. He first covered the history of KendoUI and how once they released it, they went through a phase of various requests that centered around Knockout, leading them to build a Knockout integration. Suddenly in 2013 these requests started shifting to Angular. Burke was given the task of integrating KendoUI to Angular. It was interesting to learn about his approach to this integration and what he learned along the way.
Their approach is interesting. They have a “master directive” that spins through all of the available widgets, then creates a new directive per widget. It binds to the various attributes, and most importantly wires changes to the $scope through the “Change” and “Value” events that exist on every widget. For the more complex widgets they create special one-off directives to handle the complexity (such as their Scheduler).
Zone
I was not able to sit in on the Zone talk as I was invited to a panel interview with Ward Bell, John Papa, and Dan Wahlin to discuss Angular as it relates to the .NET community. The ZoneJS library is “thread-local storage for JavaScript VMs” that provides “execution contexts” to persist across asynchronous calls. I probably am not doing it justice so be sure to check out the video that is already posted online.
Using Angular with Require
Thomas Burleson took the stage to discuss using Angular with Require. He summarized at a high level that Angular injects instances and Require injects classes.
There are 4 general dependency types: load, construction, runtime, and module. Require provides a package dependency manager, a class injector, a JavaScript file loader, and a process to concatenate JavaScript.
Require has three methods: define (define your dependencies and register a factory) using the AMD (Asynchronous Module Definitions) standard, require (callback function invoked when dependencies are resolved), and config to configure source paths and aliases. He did a great job of showing not only how Angular and Require solve different problems, but how they can work together right now.
The talk is posted online here.
He did not have time for the Angular decorator talk, but posted his slides here.
End to End Testing Using Protractor
Julie Ralph discussed end-to-end (E2E) testing with Protractor. Her words: “Shiny new …” She is a software engineer in test at Google. It replaces the Angular Scenario Runner. Test is about confidence in code. Julie says if you give her a suite of unit tests she will not have confidence your system is going to run … they are a good foundation but you need a good set of E2E tests. Protractor uses WebDriver (also known as Selenium). The framework is specific to Angular, because it can use the knowledge of Angular to make it easier to write tests.
Common complaints about E2E: tests are hard to write, they are tough to keep up to date, they are slow and flaky, and it’s tough to debug them (not to mention authentication and cleaning up after tests). Protractor helps by providing a manager to keep WebDriver running and up to date. Protractor uses Jasmine to scaffold tests and a similar configuration to Karma. It also provides some special helpers for interacting with the browser and DOM.
Nice “wait for Angular’ function that will let injection and asynchronous code to complete before continuing your test. In the future, the framework will formalize the contract with Angular and migrate internal tests away from the scenario builder.
ngModelController
Jason Aden then discussed ng-model. It is core to great UI components and responsible for 2-way data-binding. It can be used to port jQuery components and enables declarative components. To add this in you use an ng-model in your directive and require it as part of the directive definition.
It interfaces with ng-form to transport input into a model representation. It takes care of alerting listeners. It watches for model changes and changes model representation back to view representations. In short, it is the key “glue” for data-binding.
Summary
There is more to come but I probably won’t have to time to wrap up before I post so I figured I would close out here. This conference has made it clear that Angular is a force to be reckoned with and is not only gaining attention, but is being used in real world applications with success. The community is excited and supportive and the releases are coming quickly. The conference was incredibly well-organized and an absolute pleasure to attend. I’m looking forward to watching this community grow over the next year and to future events.
Thanks to all of the organizers and sponsors, and of course to the community for helping make this a great set of tools (I know people feel the favor is returned because of what it saves them when building JavaScript apps). Don’t forget the videos are all available online. | https://csharperimage.jeremylikness.com/2014/01/ | CC-MAIN-2018-30 | refinedweb | 2,012 | 62.98 |
Adding an Ebuild to the Wiki
This page describes how to add an official entry for an ebuild to the Funtoo Linux wiki.
Contents.
You can also view all pages having to do with ebuilds (which also includes Package pages themselves) by going to the Category:Ebuilds page. Package pages will be listed by their regular wiki page names, which may make them harder to find.., they should be named as a regular wiki page, with a descriptive English name in the Package namespace. The Package namespace (as well as being part of the Category:Ebuilds category) indicates that this page is about a Package (ebuild.), Maintainer and a Repository. The Repository (as well as Maintainer) field will auto-complete, based on Users and Repositories defined on the wiki. If you don't know the right values for either of these, just leave them blank -- a staff member will fill them in properly for you. Our main goal is to get quality documentation online, so if you can help with that, fantastic! | http://www.funtoo.org/index.php?title=System_resurrection&oldid=4127 | CC-MAIN-2014-42 | refinedweb | 172 | 60.95 |
Full usage of ComboBox - Olalekan Ogunleye - Dec 24, 2010 12:40 PM
Hi All,
I was wondering if someone can help me out on this, or explain to me how to do it. This is the situation:
I want to use a comboBox, which has been prepopulated from a table. When a user selects an option from the comboBox, I want to be able to render a different form based on the user's selection.
Therefore, I would like to ask if someone can direct me to an example of this, or show me a skeleton way of doing it.
Thank you for your anticipated and positive response
1. Re: Full usage of ComboBox - Ilya Sorokoumov - Dec 24, 2010 4:34 PM (in response to Olalekan Ogunleye)
So you want a user to select one of the comboBox items and then show him some specific form based on the selected item. Does the user have to press a button to submit his choice? If so, you can use a4j:include & ajax navigation in order to reRender the page partially. Does that fit for you?
2. Re: Full usage of ComboBox - Olalekan Ogunleye - Dec 24, 2010 8:45 PM (in response to Ilya Sorokoumov)
Thank you for your response, Ilya. However, the user does not need to press a button after selecting an option. The moment the user selects his choice, the corresponding form is displayed; after filling in the form that is displayed, he can press a submit button to submit it. Any direction? Thank you
3. Re: Full usage of ComboBox - Ilya Sorokoumov - Dec 25, 2010 2:04 AM (in response to Olalekan Ogunleye)
1. use a4j:support like in
2. add your forms into a wrapper like the following:

<h:panelGroup id="wrap">
    <h:form id="form1" rendered="#{el1}"> ... </h:form>
    <h:form id="form2" rendered="#{el2}"> ... </h:form>
</h:panelGroup>

3. reRender the wrap element on the change event

WARN: #{elX} must still be true when a user submits formX, otherwise JSF ignores the submit of formX
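The flag-per-form idea above can be sketched in plain Java. This is only an illustration: the class and option names are invented here, and in a real JSF bean the flags would back the #{el1}/#{el2} rendered expressions.

```java
// A plain-Java sketch of the backing state behind the wrapper pattern
// above: one boolean flag per form, flipped when the combo selection
// changes. Class and option names are illustrative, not from the thread.
class FormSwitchBean {
    private boolean el1;
    private boolean el2;

    // Called when the combo value changes. The flags must keep their
    // values until the user submits, since JSF drops a posted form
    // whose rendered flag has meanwhile become false (the WARN above).
    void select(String option) {
        el1 = "first".equals(option);
        el2 = "second".equals(option);
    }

    boolean isEl1() { return el1; }
    boolean isEl2() { return el2; }
}
```

Keeping this state in a bean that survives the render-to-submit gap (session scope, or a4j:keepAlive) is what satisfies the warning above.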
4. Re: Full usage of ComboBox - Ilya Sorokoumov - Dec 26, 2010 11:09 AM (in response to Ilya Sorokoumov)
This article could be very useful for you
5. Re: Full usage of ComboBox - Olalekan Ogunleye - Jan 4, 2011 9:22 AM (in response to Ilya Sorokoumov)
Hi,
I am still having problems implementing this. Can someone please show me an example of how this has been done before, or send me a link? Maybe that could be useful. Thank you
6. Re: Full usage of ComboBox - Ilya Sorokoumov - Jan 4, 2011 9:40 AM (in response to Olalekan Ogunleye)
Could you post the code (JSF page + bean) which you have now, and then I can try to help you.
7. Re: Full usage of ComboBox - Olalekan Ogunleye - Jan 5, 2011 11:06 AM (in response to Ilya Sorokoumov)
Ilya Sorokoumov wrote:
Could you post the code (JSF page + bean) which you have now, and then I can try to help you.
Please find attached the code (JSF only) for the app. The main page is serviceConfiguration, and what I am planning to do is have the content of the other pages shown in serviceConfiguration depending on what a user selects from the comboBox in serviceConfiguration. Thanks for your help.
- servicesSMS.xhtml.zip 947 bytes
-
- serviceconfiguration.xhtml.zip 858 bytes
8. Re: Full usage of ComboBox - Ilya Sorokoumov - Jan 5, 2011 4:11 PM (in response to Olalekan Ogunleye)
I guess that you can do it in the following way:
1) add a valueChangeListener to the comboBox in order to handle the selection event
2) use ui:include inside your rich:panel where you want to have different content
3) use the src attribute of ui:include like this: src="#{yourBean.somePath}". somePath should be changed in your valueChangeListener when the user selects some option in the comboBox.
4) You should reRender your rich:panel when the comboBox value is changed
P.S. If ui:include won't work, try a4j:include instead. But try ui:include before a4j, because there may be some other issues with a4j:include.
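A minimal backing-bean sketch of steps 1 and 3 above, kept as plain Java so it stands alone. The class, method, and page names are illustrative (the page names echo the attachments in this thread); in JSF the listener would receive a ValueChangeEvent and call getNewValue() instead of taking the value directly.

```java
import java.util.HashMap;
import java.util.Map;

// Maps the comboBox selection to the Facelets path that
// ui:include's src="#{yourBean.somePath}" would resolve to.
class ViewSelectorBean {
    private static final Map<String, String> PAGES = new HashMap<>();
    static {
        PAGES.put("SMS", "/serviceSms.xhtml");
        PAGES.put("C3TO", "/serviceC3to.xhtml");
    }

    // Path shown before the user has picked anything.
    private String somePath = "/empty.xhtml";

    // Stand-in for the valueChangeListener of step 1: it only
    // updates somePath; step 4's reRender makes the change visible.
    void valueChanged(String newValue) {
        somePath = PAGES.getOrDefault(newValue, "/empty.xhtml");
    }

    String getSomePath() {
        return somePath;
    }
}
```

After the listener runs, reRendering the rich:panel (step 4) makes ui:include pull in the newly selected page.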
P.S.S. Let me know about how it goes
Regards,
Ilya.
9. Re: Full usage of ComboBox - Olalekan Ogunleye - Jan 11, 2011 5:16 AM (in response to Ilya Sorokoumov)
Hi Ilya,
I have followed the steps you advised, but to no avail. I even tried different variations of the same approach, but I can still only get my comboBox to fetch the value; I cannot display the pages as I want. I have therefore attached my code so you can see what I am doing wrong. I will be very grateful if anyone can tell me what I am doing wrong.
Attached is my code. The needed1.txt and needed2.txt files are backups of the various backing-bean code that I used before I used the ValueChangedBean.java.
Someone please help.
- ValueChangedBean.java.zip 755 bytes
- serviceSms.xhtml.zip 939 bytes
-
- serviceConfiguration.xhtml.zip 934 bytes
- serviceC3to.xhtml.zip 941 bytes
- needed2.txt.zip 958 bytes
- needed1.txt.zip 977 bytes
10. Re: Full usage of ComboBox - Ilya Sorokoumov - Jan 11, 2011 6:01 AM (in response to Olalekan Ogunleye)
Hi Olalekan,
I looked into your code and I have several suggestions for you:
1) Do not use/try something if you are not sure that it's possible. Try to use code from demos/blog posts etc., but do not combine them, because you will never find a mistake, especially if there are a lot of them in your code at the same time.
2) Try to debug your code by using all possible mechanisms (System.out.println, rich:messages, looking at server log files, Firebug, etc.).
3) Revise your knowledge of JSF. Read articles about actions/listeners/validators etc.
4) If you are using some RF component, make sure that you understand how it works. Look at the RichFaces online documentation / demo for the details.
5) Solving every complicated issue consists of solving several smaller issues. Try to split your problem into pieces.
My solution follows below, and it works fine:
package test;
import java.io.Serializable;
public class TestBean implements Serializable {
private static final long serialVersionUID = 1L;
private String state = "";
public void valueChangeListener(javax.faces.event.ValueChangeEvent e) {
state = (String) e.getNewValue();
}
public void setState(String state) {
this.state = state;
}
public String getState() {
return state;
}
}
faces-config.xml:
<managed-bean>
<managed-bean-name>testBean</managed-bean-name>
<managed-bean-class>test.TestBean</managed-bean-class>
<managed-bean-scope>request</managed-bean-scope>
</managed-bean>
xhtml page:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:a4j="http://richfaces.org/a4j"
      xmlns:rich="http://richfaces.org/rich">
<head>
<title>Test</title>
</head>
<body>
<a4j:keepAlive beanName="testBean"></a4j:keepAlive>
<h:form>
    <rich:comboBox value="#{testBean.state}" valueChangeListener="#{testBean.valueChangeListener}">
        <f:selectItem itemValue="option1"/>
        <f:selectItem itemValue="option2"/>
        <f:selectItem itemValue="option3"/>
        <f:selectItem itemValue="option4"/>
        <a4j:support event="onchange" reRender="wrap1, wrap2, wrap3"/>
    </rich:comboBox>
</h:form>
<h:panelGroup id="wrap1">
    <h:panelGroup rendered="#{testBean.state == 'option1'}">
    11111111111
    </h:panelGroup>
</h:panelGroup>
<h:panelGroup id="wrap2">
    <h:panelGroup rendered="#{testBean.state == 'option2'}">
    22222222222
    </h:panelGroup>
</h:panelGroup>
<h:panelGroup id="wrap3">
    <h:panelGroup rendered="#{testBean.state == 'option3'}">
    33333333333
    </h:panelGroup>
</h:panelGroup>
</body>
</html>
Regards,
Ilya.
11. Re: Full usage of ComboBox - Olalekan Ogunleye - Jan 12, 2011 4:07 AM (in response to Ilya Sorokoumov)
Thanks a lot for your help and the detailed advice, Ilya. However, I must say that it is difficult to implement the solution you have advised. The reason is that any time I edit faces-config.xml, my JBoss starts running in a loop and will not allow my apps to be deployed on the server. I have had this problem with pages.xml before, and to this moment I have not figured out why. What I did then was shut down Eclipse and edit pages.xml directly. I did the same with faces-config.xml, but it is not working out for me. I guess that is one of the prices one has to pay for using an open source system. Any idea how to get around this?
Regards
12. Re: Full usage of ComboBox - Ilya Sorokoumov - Jan 12, 2011 4:32 AM (in response to Olalekan Ogunleye)
I'm not an expert in Seam, so I can't tell you a lot. Do you need to define your beans in faces-config.xml if you can use @Name? I just showed my implementation, and it works on JBoss 5.1 / JSF 1.2 / Facelets 1.1.15-B1 / RichFaces 3.3.3.Final. If you use another environment, you should adapt my solution to it. Maybe you should write about your problem (with JBoss & Eclipse) on some other forums (JBoss server/Eclipse etc.). By the way, what environment are you using?
13. Re: Full usage of ComboBox - Olalekan Ogunleye - Jan 12, 2011 4:37 AM (in response to Ilya Sorokoumov)
I am using Eclipse/JBoss, and I am solely developing Seam apps. I do not need to define my beans in faces-config.xml if I can use @Name, but I think I should first implement it the way it is before I start making changes. Thank you
14. Re: Full usage of ComboBox - Olalekan Ogunleye - Jan 12, 2011 9:58 AM (in response to Olalekan Ogunleye)
Hi Ilya,
Thank you for your prompt help. I have looked at the code again, and I found that the form can only be displayed with h:selectOneMenu, as opposed to rich:comboBox. Thus far it works, but the problem I am having now is that the web form I want to display is not displayed. When I change the value in the selectOneMenu, it fires the 11111, 22222, 33333, but it does not display my form, and what I get is "WARNING [renderkit] 'for' attribute cannot be null" in my JBoss console. Can you or anyone tell me what I am doing wrong? Please see my attached code.
serviceConfiguration.xhtml is the main body, and the rest of the *.xhtml files are the forms that I want to render.
Thank you
- ValueChanged.java.zip 273 bytes
- ValuChangeBean.Java.zip 609 bytes
- serviceSms.xhtml.zip 939 bytes
-
- serviceConfiguration.xhtml.zip 859 bytes
- serviceC3to.xhtml.zip 941 bytes | https://developer.jboss.org/message/580221 | CC-MAIN-2016-36 | refinedweb | 1,681 | 73.88 |
for connected embedded systems
Printing
This chapter includes:
- Overview
- Print contexts
- Starting a print job
- Printing the desired widgets
- Suspending and resuming a print job
- Ending a print job
- Freeing the print context
- Example
Overview
Printing and drawing are the same in Photon--the difference depends on the draw context, a data structure that defines where the draw stream (i.e. the draw events) flows:
- by default, to the graphics driver for drawing on the screen
Or:
- to a memory context (or MC) for storing images in memory for later use
Or:
- to a print context (or PC) for printing. See "Print Contexts," below.
To print in Photon:
- Create a print context by calling PpCreatePC().
- Set up the print context automatically via the PtPrintSel widget, or programmatically via PpSetPC().
- Initialize the print job by calling PpStartJob().
- Any time after PpStartJob() is called, make the print context "active" by calling PpContinueJob(). When a print context is active, anything that's drawn via PpPrintWidget() or Pg* calls, including widgets, is directed to the file opened by the print context during the PpStartJob() call.
- Insert page breaks, as required, by calling PpPrintNewPage().
- The print context can be made inactive without terminating the current print job by calling PpSuspendJob(), or by calling PpContinueJob() with a different print context. To resume the print job from where you left off, call PpContinueJob().
- Complete the print job by calling PpEndJob().
- When your application doesn't need to print anything else, call PpReleasePC() to free the print context.
Print contexts
A print context is a PpPrintContext_t structure whose members control how printing is to be done. For information about what's in a print context, see the Photon Library Reference.
Creating a print context
The first step to printing in Photon is to create a print context by calling PpCreatePC():
PpPrintContext_t *pc; pc = PpCreatePC();
Modifying a print context
Once the print context is created, you must set it up properly for your printer and the options (orientation, paper size, etc.) you want to use. You can do this by calling:
- PpLoadDefaultPrinter()
- PpLoadPrinter()
- PpSetPC()
- PtPrintPropSelect()
- PtPrintSelect()
- PtPrintSelection()
These functions are described in the Photon Library Reference.
You can also use PtPrintSel (see the Photon Widget Reference).
You can get a list of available printers by calling PpLoadPrinterList(). When you're finished with the list, call PpFreePrinterList().
Starting a print job
If you're using an application that needs to know anything about the print context, you can use PpGetPC() to get the appropriate information. For example, you might need to know the selected orientation (in order to orient your widgets properly). If you need to know the size of the margins, you can call PpGetCanvas().
Before printing, you must set the source size or resolution. For example:
- If you want a widget to fill the page, set the source size to equal the widget's dimensions. You can call PpSetCanvas() to do this.
- by default, the source resolution is 100 pixels per inch so that fonts are printed at an appealing size. You can get the size of the interior canvas by calling PpGetCanvas(), which gives dimensions that take into account the marginal, nonprintable area.. */ PpGetPC(pc, Pp_PC_NONPRINT_MARGINS, &rect); PpGetPC(pc, Pp_PC_PAPER_SIZE, &dim); size.w = ((dim->w - (rect->ul.x + rect->lr.x)) * 72) / 1000; size.h = ((dim->h - (rect->ul.y + rect->lr.y)) * 72) / 1000; /* Set the source size. */ PpSetPC( pc, Pp_PC_SOURCE_SIZE, &size, 0); /* Start printing the label. */ PpStartJob(pc); PpContinueJob(pc); /* Damage the widget. */ PtDamageWidget(label); PtFlush(); /* Close the PC. */ PpSuspendJob(pc); PpEndJob, 1, args)) == NULL) PtExit (EXIT_FAILURE); /* Create a print context. */ pc = Pp_BEVELpSetPC( pc, Pp_PC_SOURCE_OFFSET, &offset, 0 );
Once the source size and offset have been set, you can start printing:
PpStartJob(pc); PpContinueJob(pc);
PpStartJob() sets up the print context for printing and PpContinueJob() makes the print context active, causing all Photon draw commands to be redirected to the destination specified in the print context.
Printing the desired widgets
After you've made the print context active, you can start printing widgets and so on. This can be done by calling any combination of the following:
- Pg* functions
- PpPrintWidget() -- you can even print a widget that hasn't been unrealized.
Printing a new page
You can force a page break at any point by calling PpPrintNewPage():
PpPrintNewPage(pc);
Note that once you call PpStartJob(), any changes to the print context take effect after the next call to PpPrintNewPage().
Photon assumes that the page numbers increase by one. If this isn't the case, manually set the Pp_PC_PAGE_NUM member of the print context to the correct page number. Don't make the page number decrease because the print drivers might not work properly.
Printing widgets that scroll.
- Call PpPrintWidget() for the widget or parent widget.
- Reset the resize flags for the PtList widget.
- Stop and close the print context.
PtMultiText
To print a PtMultiText widget's entire text, breaking the output into pages:
- Create another multitext widget -- let's call it the print widget -- that isn't visible to the user (i.e. hide it behind the user's multitext widget).
- Get the printer settings for printing: the orientation, page size, and the margins.
- Adjust the printer settings for what you want and then use PpSetPC() to set them.
- Set the print multitext widget's margins to match those of the printer that you just set.
- Use PpStartJob() to start your print job.
- Get the user's multitext widget resources (i.e. text, fonts, number of lines) and set the print multitext widget's resources to match them.
- Go through the user's multitext and get the attributes for each line (color, font, tabs, etc) and set the print multitext widget's attributes accordingly.
- Once you've set all of the attributes to match, specify the top line of the print multitext widget. This positions the widget to start printing.
- Get the number of lines that are completely visible in the print multitext widget, as well as the total number of lines.
- Use PpContinueJob(), PpPrintWidget(), and PpSuspendJob() to print the current page.
- Delete the lines that you just printed from the print multitext widget. Doing this causes the next group of lines that you want to print to become the visible lines of the widget.
- Call PpPrintNewPage() to insert a page break.
- Continue printing pages and deleting the visible lines until you've reached the end of the text in the print multitext widget.
Suspending and resuming a print job
To suspend a print job and direct all draw events back to the graphics driver at any point after calling PpStartJob(), call:
PpSuspendJob( pc );
To resume a print job, reactivating the print context, causing draw events to be directed towards the destination specified in the print context, call:
PpContinueJob( pc );
Ending a print job
When you're finished printing your widgets, the print context must be deactivated and closed. This is done by calling:
PpSuspendJob(pc);
PpEndJob(pc);
All draw events will be directed to the graphics driver.
Freeing the print context
When printing is complete and you no longer need the print context, you can free it, which in turn frees any resources used by it.
If you want to remember any information from the print context for future use, save it by calling PpGetPC() before freeing the print context. For example:
short const *orientation;
...
PpGetPC( pc, Pp_PC_ORIENTATION, &orientation );
To free a print context, call:
PpReleasePC( pc );
Example
This example creates an application with a main window, and a pane with a few widgets on it. When you press the Print button, a Print Selection Dialog appears. When you select this dialog's Print or Preview button, the pane is "drawn" on the printer.
#include <stdio.h>
#include <stdlib.h>
#include <Pt.h>

PtWidget_t *pane, *window;
PpPrintContext_t *pc;

int quit_cb( PtWidget_t *widget, void *data,
             PtCallbackInfo_t *cbinfo)
{
    PpReleasePC (pc);
    exit (EXIT_SUCCESS);
    return (Pt_CONTINUE);
}

int print_cb( PtWidget_t *widget, void *data,
              PtCallbackInfo_t *cbinfo)
{
    int action;

    /* You could make these calls to PpSetPC() right after creating
       the print context. Having it here lets you reuse the print
       context. */
    PhDim_t size = { 850, 1100 };
    PhDim_t size2 = { 200, 150 };

    /* Set the source resolution to be proportional to the size
       of a page. */
    PpSetPC(pc, Pp_PC_SOURCE_SIZE, &size, 0);

    /* Uncomment this to set the source size to be the size of the
       widget. The widget will be scaled when printed. */
    /* PpSetPC(pc, Pp_PC_SOURCE_SIZE, &size2, 0); */

    action = PtPrintSelection(window, NULL, "Demo Print Selector",
                              pc, Pt_PRINTSEL_DFLT_LOOK);

    if (action != Pt_PRINTSEL_CANCEL)
    {
        /* Start printing the pane. Note that we're using the same
           print context for all print jobs. */
        PpStartJob(pc);
        PpContinueJob(pc);

        /* Print the widget. */
        PpPrintWidget(pc, pane, NULL, NULL, 0);

        /* Close the print context. */
        PpSuspendJob(pc);
        PpEndJob(pc);
    }

    return (Pt_CONTINUE);
}

int main(int argc, char *argv[])
{
    PtArg_t args[2];
    PhDim_t win_dim = { 200, 200 };

    PtSetArg(&args[0], Pt_ARG_DIM, &win_dim, 0);
    PtSetArg(&args[1], Pt_ARG_WINDOW_TITLE, "Print Example", 0);
    if ((window = PtAppInit(NULL, &argc, argv, 2, args)) == NULL)
        PtExit(EXIT_FAILURE);

    /* Create a print context. */
    pc = PpCreatePC();

    /* Create the pane and its widgets here, attaching print_cb to
       the Print button's activate callback and quit_cb to the
       window's close action. */

    PtRealizeWidget(window);
    PtMainLoop();
    return (EXIT_SUCCESS);
}
Writing a Simple CGI
I ran into a requirement to implement a simple web application: take an identifier appended to a URL, send a message, then dump the response. I've seen a lot of references to CGI, including in the Java Servlet Specification, yet I've never built a real CGI application. I have written many CGI-based interpreted applications on PHP, Perl, and similar platforms; however, I never had to deal with the CGI interface personally. So I will chronicle my attempt here. For those of you playing along at home, I will be using C++ as my implementation language. If you aren't fluent in C++ I will provide examples and descriptions.
I do assume you have prior knowledge of a C-based language, such as C, C++, Java, or Objective-C, and some experience with a Web-based system such as Perl, PHP, or Servlets.
The CGI Interface
CGI, or Common Gateway Interface, is an application data interface between a web service, such as Apache's HTTPD, and an executable (script or binary) on the local system. When a request is received for a URL mapped to the executable, the web service spawns an instance of the executable. The web service sets a number of environment variables that fill in the details of the request, and passes the request body in via standard input for the POST and PUT methods.
The protocol for CGI 1.1 is defined in RFC 3875, which appears to be the current version at the time of writing this post. Section 6 of the RFC specifies the expected response, and I will go over the output format first. Section 4 defines the request. If you are going to dive into the RFC you should read section 1.4, Terminology, because the authors clarify the terminology used to refer to various components of the system.
The CGI application is expected to provide an HTTP-like response. The format is as follows:
Content-Type: type
Status: Status-Code Status-Text
Header-Name: Header-Value

Response body, perhaps an HTML or XML document.
So the response is pretty simple. It's like writing your own web server, save the request processing... because parsing is as easy as pie...
So how is request data passed? Environment variables, for the most part. Here is a brief list of variables I typically find useful in a web application:
- REQUEST_METHOD -- the HTTP method of the request (GET, POST, etc.)
- PATH_INFO -- the portion of the URL path following the script's name
- QUERY_STRING -- everything after the ? in the URL
- CONTENT_TYPE and CONTENT_LENGTH -- describe the request body, when one is present
- REMOTE_ADDR -- the IP address of the client
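As a quick illustration (mine, not from the original post), each of these can be pulled out with getenv(); a null return tells you the web service didn't set the variable. The helper name showVar below is my own, for illustration only:

```cpp
#include <cstdlib>
#include <string>

// Render "NAME = value" for a CGI environment variable, or
// "NAME = (unset)" when the web service didn't provide it.
static std::string showVar(const char* name) {
    const char* value = std::getenv(name);
    return std::string(name) + " = " + (value ? value : "(unset)");
}
```

Dumping REQUEST_METHOD, PATH_INFO, and QUERY_STRING this way at the top of a text/plain response is a handy first step when debugging a CGI program.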
Nose to the grindstone
Alright, so now we know the input and expected output from our application. Let us start with a simple C++ stub of an application for those who aren’t familiar with the language.
#include <iomanip>
#include <iostream>
#include <stdint.h>

using namespace std;

int main(int argc, char** argv){
  uint8_t result;

  result = 0;

  return result;
}
So all this does is provide a return code of 0 to the calling process, so let us output the string “Hello World!”. We add the following code after line #8:
cout << "Hello World!" << endl;
The object cout is a kind of output stream attached to the standard output of the process. The word endl is known as a stream manipulator, or rather a function which operates on the stream object. The endl stream manipulator writes a new-line character then flushes the stream buffer. You could safely buffer all of your output and I don't think CGI would care.
Standard Output to Web
Well, we now have a first-day C++ application. How about we move on to our problem domain? As our next requirement, why don't we output "Hello World!" to the web! First up is writing the CGI output header. We need to write the content type, the status, then an empty line to mark the start of the body. The following code gets inserted at line #8, right above the last added fragment.
cout << "Content-type: text/plain" << endl;
cout << "Status: 200 OK" << endl;
cout << endl;
Dynamism
I can make up words too :-p. Anyways, my requirement is to interpret the portion of the URL after the CGI script. If you recall from the section above, that is provided in the PATH_INFO environment variable. I'm running my CGI application under a POSIX/*nix system, so we use the C function char* getenv(const char*). The return value is NULL if the variable is not set. So here is a simple application which outputs the extra resource information, minus the leading /.
Okay, it's clearer. I would still add a *little* bit of implementation to the examples, e.g.
@typing.dataclass_transform()
def create_model(cls: type) -> Callable[[T], T]:
    cls.__init__ = ...
    cls.__eq__ = ...
    return cls
One of the things that threw me off was the return type, Callable[[T], T]. That's not really the type a class decorator should return -- but apparently `type` is sufficiently vague that it matches this, *and* mypy ignores class decorators (assuming they return the original class), so this passes mypy. But a callable isn't usable as a base class, so e.g.
class ProUser(User):
    plan: str
wouldn't work (and in fact pyright rejects this, at least if I leave the @dataclass_transform() call out -- I haven't tried installing that version yet).
Wouldn't it make more sense if create_model was typed like this?
T = TypeVar("T", bound=type)
def create_model(cls: T) -> T:
    cls.__init__ = ...
    cls.__eq__ = ...
    return cls
(Is it possible that you were just accidentally using the signature of dataclass_transform() as the signature of the create_model() class decorator? In the "Runtime Behavior" section you show that it returns Callable[[T], T].)
I haven't really tried to understand what you're doing with the field descriptors, I suppose it's similarly narrow?
On Sat, Apr 24, 2021 at 8:00 PM Eric Traut eric@traut.com wrote:
I took a crack at updating the spec with additional explanations and examples. Let me know if that makes things clearer.
I don't think it makes sense to document the full implementation of these decorator functions or metaclasses in the spec. They're quite complex. For example, here's an implementation of the `@dataclass` decorator function:.... The implementations in `attrs`, `pydantic` and other libraries are similarly complex, but they're all doing roughly the same thing — synthesizing methods including an `__init__` method that includes one parameter for each declared data field.
The phrase "dataclass semantics" is meant to refer to the behaviors listed in the "Motivation" section at the beginning of the spec. Additional details can be found in the subsection titled "Dataclass Semantics".
-Eric
--
Eric Traut
Contributor to Pyright & Pylance
Microsoft Corp.
_______________________________________________
Typing-sig mailing list -- typing-sig@python.org
To unsubscribe send an email to typing-sig-leave@python.org
Member address: guido@python.org
Ransomware Making a Comeback 202
snydeq writes "Ransomware is back. After a hiatus of more than two years, a variant of the GpCode program has again been released, kidnapping victims' data and demanding $120 for its return, InfoWorld reports. 'Like the ransomware programs before it, GpCode encrypts a victim's files and then demands payment for the decryption key. The new version of GpCode — labeled GpCode.AX by security firm Kaspersky — comes with a bit more nastiness than previous attempts. The program overwrites files with the encrypted data, causing total loss of the original data, and uses stronger crypto algorithms — RSA-1024 and AES-256 — to scramble the information.'"
Backups (Score:5, Insightful)
Re: (Score:3, Insightful)
And mark your existing backups read-only. Although that might require an OS which wouldn't run this malware anyway.
Re: (Score:2, Insightful)
If your backups are simply on the same machine that you're backing up, you're missing at least 1/2 the point.
Re:Backups (Score:5, Interesting).
Re:Backups (Score:5, Funny)
because when it falls into a river of molten rock, man, it's gone.
Sounds like you learned that from experience. One of the cons of maintaining the data center for Sauron, huh? Hope the pay is good, at least.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The hours are good. But now that you meantion it, most of the minutes are pretty lousy. RESISTANCE IS FUTILE!!!
Re:Backups (Score:5, Insightful)
I hate to break it to ya buddy, but accidental deletion and hardware failure make up 100% of my data loss causes. Shocking, I know. You see, some people actually do patch their software and ensure their OS is up to date, and some people don't run executables from strange places.
Mounted, active storage is perfectly acceptable for backing up all but the absolute most critical of data.
Re: (Score:2)
When I was once responsible for a business computer network, of course we had tape backup and off-site storage, even for the fairly small operation we were. For my
Critical is in the eye of the beholder. (Score:2)
Re: (Score:2)
All the email my wife has ever sent or received is critical: just ask her (and she has been using email for more than 25 years).
Re: (Score:2, Insightful)
Always mounted? That won't save you from an rm -rf / (or would a mounted fsck make the files hard to recover without taking as long as wipe?) I'm assuming you're running a highly secure *nix OS because otherwise, you're asking for it.
I back up my laptop, PDA(s), keychain flash drive, and my home server's boot drive to an encrypted disk on the server that's normally unmounted. As long as the box doesn't get broken into (good luck) and then someone does a dd if=/dev/urandom of=/dev/sdx it'll be safe. A ligh
Re:Backups (Score:5, Insightful)
Amazon et. al. while cheap and off-site and probably pretty secure would require encryption at least. I don't want unencrypted data there. Makes it a bit more cumbersome.
The killer is going to be the upload. I've 2 Mbit up, uploading my data set to Amazon would saturate my pipe for about 55 hours straight. And that's a show stopper.
I'm slowly looking for 64GB USB drives. They exist but the local shop has only 32 GB, so have to look further. That's a much easier solution than Amazon.
Re: (Score:2)
Jungledisk, one of several S3 clients, offers several encryption options. It's a pretty decent service but lacks robust logging.
Re: (Score:2)
The primary show stopper for me is the upload speed. It's just too long. I had a quick look at it; Amazon is looking at the TB range for storage and the GB range for transfers. Most of the charges are for transfers, not for storage.
When you have a 100 Mb pipe to the Internet, yes then it's getting interesting. 1 GB then takes you 1 1/2 minutes, instead of over an hour it takes for me. For your average home connection it's worse, for those people it's simply not an option. To me it seems mainly targeting mi
Re: (Score:3)
Jungledisk uses differential copying, so once you have your original data up there it only needs to copy the changed par
Re: (Score:2)
I understand what you do there. Two problems I have with it:
1) Data that is stored "out there" is to be encrypted, before it's sent out. Do updates work that way in the first place? You can not decrypt data while it's out - storing your decryption key out there with your data pretty much beats the purpose.
2) Archiving. I prefer to keep at least four monthly backups. So if one backup is broken likely the other three are still OK, and against accidental deletion that is found out about only much later.
Re: (Score:3)
I'm just not sure if Jungledisk can do differential updates when you're encrpyting your files. I am not using their latest products so I'm not sure. A lot of the data I'm storing is just my iPhoto library so I am not encrypting that. That's the only potential problem I see for you, if you are changing large files very often and the di
Re: (Score:2)
Re: (Score:3)
just make the key the md5sum or sha1sum or whatever of whichever bitlength you need of a common passphrase you will always remember.
You lose it you can recreate what it was on a new machine with common checksum tools.
Re: (Score:2)
The benefit of Jungledisk is that the backup is online. For very small amounts of data that won't change often (e.g. a key) you don't need to make backups as frequently, and you can use physical security to protect it. For example:
Store a copy of the key in a safety deposit box at your bank.
Or keep a copy on a USB drive that is on your person at all times.
Or come up with a scheme to regenerate your key.
Re: (Score:2)
1. You don't have store a private key. You store a passphrase. If you can't manage to remember a passphrase, see point #2.
2. It's easier to store a piece of paper with a private key somewhere than it is to store a rotating pile of hard drives. Duh.
3. For most people, it's easier to sign up for an online backup service than it is to find a friend to peer bandwidth with, set up sftp, rsync, cron, etc. It's also easier to use an online service th
Re: (Score:2)
The private key can fit on a cheap usb thumb drive, or even a piece of paper. You can put a copies of your key in your safe-deposit box, at your friends' houses, at your relatives houses, at your work and home. The key doesn't change frequently, so you aren't driving around swapping media.
Re: (Score:2)
Upload time is not a big deal - I have about 30 GB uploaded to Mozy, over a 0.5 Mbps upload link. The main thing is to ensure the upload doesn't completely hog your upstream bandwidth, and that subsequent backups use block-level incremental technology, so only the actual data changed is sent.
Mozy and other online backup services are very effective, in addition to a local full system image (ideally to another server not a USB hard drive.) A USB flash drive is not very useful for backup, as it's far too eas
Re: (Score:2)
Virtually any respectable backup application will only ship changes up once the initial backup is complete. It'll saturate your pipe for a few days, but once it's one it's done. After that, it's really not too bad.
Re: (Score:2)
I remember downloading a metric ton of 1.44mb files back in 1998. 56k was fast back then and 55 hours to fill my expensive HDD was the norm.
GOML.
Re: (Score:2)
40Gig could be stored on a big USB stick (yeah, yeah, not really a good backup solution... spare me that). And that USB stick could be taken with you, so it won't get any more "offsite". If you should die in a fire, I guess the data loss (because your USB stick is dying in your pocket in the same fire) should be your least problem. :)
Re: (Score:2)
Exactly, my idea too.
I was more thinking of taking that stick back home, have four of them or so, and rotate. Losing office and home (about 10 km apart) at the same time is not likely.
Re: (Score:2)
Only for the initial upload. Where I work, we have about 10-12TB of data and do a full weekly backup to LTO3 tape over gigabit ethernet and fiber channel. It takes about 55-60 hours to run, which we live with, because we have to.
Chances are, much of your 40GB isn't essential data. Don't back up your pr0n and mp3 collection, and just concentra
Re: (Score:2)
Yes that is important data.
Some 25 GB is my e-mail archive - about 8 years of mails, lots and lots of attachments. Some 5 GB personal photos. A little bit of software that I wrote. And the rest is my documentation (invoices, contracts, finances, etc).
Oh and a bit for my ldap database with all my customer's and supplier's contact information, the /etc tree, and some other system bits to make re-install easier.
Re: (Score:2)
It doesn't sound like you are doing them any favors.
Re:Backups (Score:5, Insightful)
Whenever I see family/friends/co-workers using external drives for "backup" I have to repress the urge to launch into a lecture on the absurdity of relying on a local, always mounted backup.
You know, malware is not the only threat to data. There's also hard disk failures, and human error. "Always-mounted" external disks protect against both.
WesternDigital and all the other purveyors of external hard disks should be ashamed of themselves for promoting their products as a reasonable backup solution.
... and even if you are concerned about "always mounted" being vulnerable to malware, you can always keep your drive securely stashed away, and only connect it once a week to do your backup.
The ONLY kind of calamity that such devices protect you from is accidental deletion or hardware failure.
Which is already quite useful. Even though we like to scoff at windows users, most malware is not interested in trashing user's data, and anti-virus programs still manage to catch most malware (if one is installed).
...or catastrophic disaster (flood, fire, theft).
... which are quite rare compared to the more usual failure modes (hard disk failures, or accidentally deleted the wrong files).
Considering how cheap Amazon S3 [amazon.com] is, off-site backups are finally a real solution for the average person.
You've got to trust Amazon to respect the privacy of your data.
Re:Backups (Score:5, Informative)
Antiviruses catch only a declining percentage of malware, so you can't rely on them - see [wikipedia.org] which shows that even in 2007 the average percentage caught was about 50%. Various independent tests confirm this, particularly for zero-day viruses (i.e. you must rely on heuristics in the AV product, not signatures). In 2007, 23% of infected PCs had up to date antivirus: [pandasecurity.com] and [pandasecurity.com]
Even when there is coverage for a specific virus/trojan, highly polymorphic ones are often not caught - for example the Zeus banking trojan, which steals from bank accounts while hiding the illicit transactions and resulting balance from the user, is missed in 77% of cases - [darkreading.com]
Re: (Score:2)
Re: (Score:3)
How does that work with incremental backups, though? Does that mean if you have 50GB of encrypted data, you would need to upload the entire 50GB every time you change a single file?
Jungledisk can do file level encryption on the fly. This probably isn't a great solution if you're dealing with something like 50GB truecrypt volumes.
Some S3 clients (jungledisk) can send up only the changed parts of files. Of course, if a huge chunk of the 50G has changed, then you're boned. If you are regularly changing huge files of that kind, then another backup solution is probably better suited for you than S3. Either that, or a really fast connection.
Re: (Score:2)
why can't you just encrypt the diff files & upload those?
Re: (Score:2)
Actually I'm in one of the "test markets" for the new caps which will be 36GB for home and 76GB for business, so S3 won't be an option for anyone but businesses much longer. For my customers that need reliable backup on the cheap I actually recommend the WD Essentials, but I recommend TWO drives, one for home and one for work. Once a week they switch them, so at the absolute worst they are looking at a two week loss max instead of a complete loss.
As much as I'd like to be able to have backups all sent to
Re: (Score:2)
Any realistically reliable backup process for home users can't depend on the user doing something daily/weekly such as swapping media. That's a realistic option for people that are very process oriented, or for a business situation where it's your job to swap media. For home users, it's unrealistic to expect people to swap media when they're hardly motivated to install regular system updates.
A solution that maintains its self and is off site is by far the best option. As far as the complaints about slow
Re: (Score:3)
If your PC gets stolen or destroyed and you have a backup on an external hard drive that is stored safely off-site, how are you not protected?
Re: (Score:2)
My brother bought a large external hard disk and moved all of his data on to it in order to re-format his computer. He then stood up, walked away from his desk, caught the cable around his foot and launched the disk at the opposite wall. Bye bye data.
Re: (Score:2)
The problem with remote backup, is the bandwidth requirements...
Most home users have extremely poor upstream connectivity, so uploading all your data to a remote server is not a terribly practical idea.
I use an external (wireless) networked drive to backup my laptop, so whenever i'm at home it gets backed up automatically... This has saved me from hardware failure and would potentially save me from theft if someone stole my laptop (they are less likely to find the wireless drive which is hidden away in the
Re: (Score:2)
Most of "real life" data loss is due to, you guessed it, accidental deletion and hardware breakdown. At least in my experience. Granted, it's been a while since I was employed as helpdesk, but there has not been a single case of malicious deletion, malware related data corruption or other intentional data tampering that would have affected locally accessible and write enabled backups.
Of course offsite backups and the like are important for companies who would be very liable for it if their data was gone. Th
Re: (Score:3)
The ONLY kind of calamity that such devices protect you from is accidental deletion or hardware failure.
Fortunately these are by FAR the most common data loss ailments that will hit your average clueless user. Off-site is just overkill for most. Fire is not something that most people experience in their lives. A hard disk crash, however, is. And accidental deletion most certainly is.
Re: (Score:2).
They're cheap enough to buy several of them and swap them out periodically.
If you have enough crap to justify using public storage, it makes a lot of sense. And, frankly, no amount of encryption can beat simply not transmitting that data.
CrashPlan (Score:2)
CrashPlan is excellent. $50/year for one computer and unlimited space, indefinitely-kept versioning and deleted files, and a daemon that runs in the background all the time, with a separate GUI frontend.
I wish there were a referral plan so I could get something from this plug, but as of now, there's not. :/ haha Anyway, check it out. For a long time I used Duplicity to a web hosting account, but CrashPlan is easier and more reliable.
Re: (Score:2)
In any case, unless Joe average wants to enter a password and/or RSA token code every few h
Re: (Score:2)
Considering how cheap Amazon S3 is, off-site backups are finally a real solution for the average person.
Wow, how do you figure that cheap? Am I missing something? From the calculator on their site it looks like making a 250GB backup would cost you ~$50 the first month, and then ~$25 thereafter (assuming you could do an rsync style backup and your data doesn't change much).
And you ever need to get that 250GB back, it's gonna cost you $40 just to download it!
No thanks. For the cost of one month of service I could buy a TB drive and do it myself
Re: (Score:2)
You mean the two most overwhelmingly common ways people lose data they need?
Re: (Score:2)
And mark your existing backups read-only. Although that might require an OS which wouldn't run this malware anyway.
Oh, you can easily mark files read-only in Windows, always could. The trouble is, it's as easy for malware to re-mark them as read-enabled. As others have said, keep both onsite and offsite backups.
Re: (Score:3)
Exactly.
It makes me wonder how come this kind of scams still work, I mean everyone is backing up their data on off-line media, right? Right? Oh, wait...
Re: (Score:2)
Or teach your backups to be smart and warn you if they notice a significantly larger number of files changing.
In a company (or with you at home), there is usually a fairly stable number of documents getting modified per day and thus their backups need modification. So unless that malware does it REALLY slowly (read: a handful of files per day, tops), you do notice a significant spike of changes.
My... (Score:2)
Encryption (Score:4, Funny)
Re: (Score:3)
But we'll encrypt it again for you! For free!
(What's really scary is that I am tempted now to write ransomware that displays that and an "I agree" button, and only actually encrypts and locks the user out if he clicks that "I agree" button. Just to see how many morons will fall for it)
Allright, bring back the Slot Machines of DOS! (Score:5, Funny)
I remember back when I was running MSDOS 5, and at first Bootup it cut to a screen with a Slot Machine that said it was a Virus holding my MBR and File Allocation Table ransom until I get such and such combination after I pull the Arm. It also said if I tried to turn-off the computer, then all my data is already gone unless I got the sequence in this game to restore my MBR and FAT.
Needless to say, I left the computer on all day and drove to my grandmother's Insanitarium/Old-Folk's home and said I didn't come visit these past 10 years because I've always given her bad luck and now I need her more than ever. She stopped taking her pills, said goodbye to the trees and bushes and pigeons as I walked her to my car, and upon arriving at my desk she knew exactly what to do: she pulled-out her vile of lipstick, puckered some on her mouth, and gave the computer screen a kiss. She was insane again.
Fuck you Slot Machine! I pulled the Arm, and I won back my MBR and FAT. I told my grandmother she could walk back home, and so I gave her $10 to buy some cigarettes and lunch, and I and her Retired-Living Facility have never seen her since.
Preemptive strike (Score:2)
Kaspersky might have labeled it, but only running AVG ensures there's no chance of catching it ;)
Re: (Score:2)
Re: (Score:2)
Unless you run 64-bit and updated to AVG 2011 - [slashdot.org]
Ok, a question or two (Score:5, Interesting)
The whole point of these malware authors is to ransom data for cash, right?
How the hell do they get paid? And if that is an answerable question, that brings question number two.
Why the hell can't the law find them?
There would be a money trail of some sort. The money has to go from victim to the criminal. That is traceable.
Isn't this really just a gigantic "kick me" sign?
Re: (Score:2, Insightful)
If the money ends up going to a country like Somalia what are you going to do?
Ask for the Somali government's help to get your 100 bucks back?
Good luck with that.
Re: (Score:2)
How are you going to make a payment to Somalia?
I doubt they have a working banking system.
Making overseas payments of such small amounts is anyway an issue: bank charges can literally make half that amount disappear en route.
Re: (Score:3)
While Western Union doesn't cover Somalia, it does cover practically everywhere else. Nigeria (or most of sub-Saharan Africa for that matter) is a good place to get lost.
Re: (Score:2)
Talk the RIAA into funding a full-scale invasion of Somalia? They're all pirates, you know :).
Re: (Score:2)
Just an example method of payment, there are exchanges from PayPal US$ to BitCoin [slashdot.org] (and back). It would be easy enough to set this up to ask for credit card details and automate the payment, funds could then be converted back into real money (anonymously) at a later date.
Although I doubt that they are smart enough to do this.
Re:Ok, a question or two (Score:4, Insightful)
How the hell do they get paid?
... and this is the Achilles heel of just about every ransom ploy. Most kidnappings for ransom fail at the "money handover" stage.
Re: (Score:3)
suckers. Usually there's money mules who transfer the money around.. sometimes they're given the job of buying goods and sending those goods to someone else who sells them, etc, etc. It's all traditional money laundering techniques being done by "work from home" saps.
Re: (Score:2)
Re: (Score:2)
I could imagine (but I usually over-estimate peoples intelligence) that the virus might also look for the presence of the right content.
Someone might be reluctant to go to the police with "Officer, Officer, someone encrypted my 100MB of important business data and my 600GB collection of pirated movies and illegal stuff!!!!!"
Re: (Score:3)
I can tell you an example: I was a victim of credit card fraud a couple of years ago (I think it was skimmed at a parking lot accepting credit cards as a pass).
I went to the police after an unautorized payment was made.
They came back to me a few months later with what happened: somebody in Germany got the credit card data from somebody in California to buy stuff to be delivered to Moscow (1 Playstation and a Gameboy). I never understood how such a transaction was accepted for payment with credit card...). The
Re: (Score:2)
Criminal gangs often have mules to collect and launder money for them, these mules are often unsuspecting victims of scams too.
The criminals behind the scams are also often located in countries with very lax law enforcement that either doesn't care about the crimes taking place, or only care that they get their bribes from the criminals.
Re: (Score:2)
Ok, great. I'm like the guys in Office Space who don't know how to launder money.
So. Wanna illuminate me or are you satisfied with being merely cryptic? Because if you make that kind of info public maybe The Community can figure out a way to bring these assholes to justice.
Re: (Score:2)
I was going to link to an auditing web site via 2 URL shorteners, but it wouldn't let me.
Re:Ok, a question or two (Score:4, Informative)
Ok, great. I'm like the guys in Office Space who don't know how to launder money.
So. Wanna illuminate me or are you satisfied with being merely cryptic?
The thing is that most of these sites will ransom you for your credit card info to make the payment; it's almost never just the amount they claim that they want to steal from you.
So you go to their website and enter the info. They return your data. They go and use your credit card to make a deposit to a PayPal account that they've hacked - it's not actually one of theirs, it's an unsuspecting victim's. They run the money through a couple of those, whose purchasing history is actually protected so the cops need a warrant to search through it - which will often just put the wrong person under suspicion.
Eventually they run it to an account outside of the US's Jurisdiction.
Re: (Score:2)
You're just pointing out why you're not creative enough to think of or operate such a scheme. It's very easy to move and collect money anonymously without getting caught; I won't go into specifics, but it can be done via nominee structures.
I can vouch for that. Uncle Osama knows what he's talking about on these matters. By the way hows the cave Ossie?
Re: (Score:2)
That's something that usually does NOT work, because banks (of course not in self interest of cashing in on the lost interest, only for the added security and safety of the money transfer) usually hold the money for a few days before forwarding it to a country where getting it back is near impossible. And in every other case, you may rest assured that the police is already waiting for the person whose account this money should have been sent to and asks him
... well, why.
Western Digital is the way. You depo
Re: (Score:2)
Damn that Western Digital. That's why I only send money via Seagate or Toshiba.
No data is actually encrypted..... (Score:5, Informative)
Fixable possibly, but be careful anyway... (Score:5, Interesting)
I'd feel a little better about the proposed solution (let a disk utility recover the partitions) if they had actually tried a disk utility to see if it could in fact find the partitions and restore them. It does seem like it should work... and copying that thing back by hand is not a task I'd take on lightly with anyone's data but my own.
Also wouldn't the thing that messed up the MBR in the first place still be in your Windows installation? I didn't see that they tried to boot from that drive after repairing the MBR. It could be the ransomware is just waiting for you to reboot and will do something nasty if you've not entered the password. It seems like even after a recovery you should take the drive to a different system and back it up immediately before you tried to boot from it again, but they do not mention that.
Re: (Score:2)
Gpart should be able to do it.
1) 2) 2) -- They can't count to three (Score:4, Funny)
Re: (Score:2)
Re: (Score:2)
Funny how these crooks can write ransomware but they can't count to three: 1) 2) 2) [fortinet.com]
You've obviously never interviewed people for a programming position.
Re: (Score:2)
Re: (Score:3)
TFA says it's a new variant of this virus (which means it may actually encrypt the data)
Re: (Score:2)
Kaspersky's Kamluk says that "Pushing [the] reset/power button on your desktop may save a significant amount of your valuable data!"
Such insightful precautions from teh [sic] professionals! Their advice goes completely against the fact that no data is encrypted.
Reading and writing a 512 byte MBR obviously takes less time than encrypting all your user documents. That is smaller than the size of a new, blank word doc (in the new compressed
.docx format!)
Nobody would hit that power button fast enough.
Re: (Score:2)
32 blank word
.docx's to be exact - 16,384 / 512 = 32
Re: (Score:2)
I'd use TestDisk [wikipedia.org], it actually searches the whole disk for the filesystems. Helped me when a friend brought me a disk with a corrupted partition table.
Who would trust them? (Score:4, Insightful)
Re:Who would trust them? (Score:4, Insightful)
Unless word gets out that you don't get your data back after paying.
And this is the internet. The first thing people will do after this happens is painting it all across facebook and twitter.
Re: (Score:2)
This con has been widely known for many years. It still works.
Re: (Score:2)
Not fully correct - if they refuse to decrypt your PC even after you pay them, you tell everybody who will listen, if even out of frustration, that paying for virus X does not help, leading to the criminals having no trust from their victims. And nobody likes to pay $120 for nothing, so they will most likely lose potential revenue from their scheme. When people pay, they expect something in return (that's what paying means) - if they don't get anything, they tell other people and it matters little whether
Kaspersky (Score:2)
I have an uneasy feeling about Kaspersky in all sorts of situations, including this one. Just saying that the 3 ways to gain from this activity is either to be building the virus or to be building and selling the antivirus.
The third possibility is left to the imagination and that's the one that makes me uneasy.
Re: (Score:2)
Seriously- how many viruses are out there again? Tens of thousands? Do you *really* think writing one more virus is going to have a measurable (positive) impact on anti-virus sales?
On the other hand, if Kaspersky or McAfee would be writing viruses and they were found out, what do you think that would do to their reputation? How many people do you t
Ovekill. (Score:2)
For 90% of victims changing the file name would be adequate "encryption". Simpler yet would be to just delete the files, collect $120 for returning them, and move on to the next victim. After all, these people have already demonstrated their stupidity by downloading the malware in the first place.
Will any makes of Ransomware try to use the DMCA t (Score:2)
Will any makers of ransomware try to use the DMCA to force you to pay?
Or maybe even the fake AV apps may try that some day.
EULA (Score:2)
Imagine if a semi-legitimate company did this. Would they be legally allowed to do it if the EULA said they would?
Re: (Score:2)
That won't be abused.
Re: (Score:3)
Maybe it could rot13 the text of the comment, and then have a javascript autotranslate on click thing. That way it would be worthless for SEO type stuff.
If it got any positive mods whatsoever it wouldn't do it to avoid it being used as a "I disagree" option on otherwise decent posts.
Re: (Score:2)
Not running executables from unknown sources is perfectly practical advice on Linux systems where you're downloading cryptographically signed packages from the vendor of the distro you already have installed (and therefore already trust)...
Similarly on most modern phones which have integrated app stores..
But what about on osx and windows where no such repository exists, and where the default installs are severely lacking in useful applications? | http://it.slashdot.org/story/10/12/03/038255/Ransomware-Making-a-Comeback | CC-MAIN-2015-40 | refinedweb | 5,440 | 71.34 |
HOUSTON (ICIS)--Here is Wednesday's midday Americas markets summary:
CRUDE: May WTI: $99.75/bbl, up 56 cents; May Brent: $106.77/bbl, down 22 cents
NYMEX WTI crude futures worked higher in thin volume as another drawdown in crude inventories at the Cushing, Oklahoma delivery hub overshadowed an overall build in crude stocks and a substantial build in the Gulf Coast refining region. The US stock market staged a slight rally in response to released data showing a rise in orders for durable goods.
RBOB: Apr $2.8930/gal, up 1.02 cents
Reformulated blendstock for oxygen blending (RBOB) gasoline futures traded higher during morning hours after the US Energy Information Administration (EIA) report showed a larger-than-expected decline in gasoline inventories and an increase in consumption rates.
NATURAL GAS: Apr $4.369/MMBtu, down 4.2 cents
The front-month contract slipped from Tuesday’s 2% rise, falling back below the $4.40/MMBtu mark despite slightly cooler near-term weather forecasts and expectations for another bullish gas storage report from the EIA on Thursday.
ETHANE: lower at 28.50 cents/gal
Ethane spot prices were slightly lower in early trading on weaker natural gas futures.
AROMATICS: benzene wider at $4.80-5.05/gal
Prompt benzene spot prices moved to within a wider bid/offer range during the day, according to market participants. There were no fresh trades heard, and the morning range was further apart from $4.86-4.88/gal FOB (free on board) the previous session.
OLEFINS: ethylene done lower at 51.25 cents/lb, RGP wider at 61-62 cents/lb
US March ethylene traded at 51.25 cents/lb early in the day, down from the previous trade at 51.75 cents/lb done on 24 March, as inventories remain high. US March refinery-grade propylene (RGP) bid/offer levels were heard at 61-62 cents/lb, wider than the previous day’s reported trade at 62 cents/lb.
Hi,
Is there anyway that I can access all the properties of a control(activex or normal) during runtime and get their values. I dont want to speicify the property name. I should get the all the property names and their values. It is something like reflection namespace in VB.NEt. Is it posiible in vb6.0
Thanx
Not that I'm aware of... I've only ever tried it in .NET... so I don't know for sure if there is a way in VB6 (never had a reason
You'll have to make use of the TypeLib Information Library in tblinf32.dll
All my Articles
Hannes
Thank you very much. Tlbinf32.dll is the right dll. I got what I need. Once again Thank you..
That is good news. Well done!
Please mark your thread Resolved. You can do this by clicking on Thread Tools ( above your first post ), and then selecting Mark Thread Resolved.
Thank you,
Hannes
Acknowledgment is given for using some contents from Wikipedia.
Data Structures
Introduction - Asymptotic Notation - Arrays - List Structures & Iterators
Stacks & Queues - Trees - Min & Max Heaps - Graphs
Hash Tables - Sets - Tradeoffs

- Arrays Advantages (vs. Link-Lists)
- Index - Fast access to every element in the array using an index [], not so with linked list where elements in beginning must be traversed to your desired element.
- Faster - In general, it is faster to access an element in an array than to access an element in a linked-list.
- Link-Lists Advantages (vs. Arrays)
- Resize - Can easily resize the link-list by adding elements without affecting the majority of the other elements in the link-list.
- Insertion - Can easily insert an element in the middle of a linked-list, (the element is created and then you code pointers to link this element to the other element(s) in the link-list)..
Stacks and Queues

Stacks
The basic linked list implementation is one of the easiest linked list implementations you can do. Structurally it is a linked list.
type Stack<item_type>
    data list:Singly Linked List<item_type>

Analysis
In a linked list, accessing the first element is an O(1) operation because the list contains a pointer directly to the first element.
Using stacks, we can solve many applications, some of which are listed below.
Converting a decimal number into a binary number
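The conversion can be sketched in Python (the surrounding examples use C; this translation is purely illustrative):

```python
def to_binary(n):
    """Convert a non-negative decimal integer to a binary string by
    pushing remainders onto a stack and then popping them back off."""
    if n == 0:
        return "0"
    stack = []                 # a Python list used as a stack
    while n > 0:
        stack.append(n % 2)    # push the least-significant bit
        n //= 2
    bits = []
    while stack:
        bits.append(str(stack.pop()))  # popping reverses the order
    return "".join(bits)
```

Popping the stack is what puts the most-significant bit first, which is the whole reason a stack fits this problem.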
Input: (((2 * 5) - (1 * 2)) / (11 - 9))
Output of the program:
Input entered at the command line: (((2 * 5) - (1 * 2)) / (11 - 9)) [3]
Evaluation of Infix Expression which is not fully parenthesized

Actions that will be performed at the end of each input:

- Opening bracket (2.1): push it onto the character stack and then go to step (1).
- Digit (2.2): push it onto the integer stack, then go to step (1).
- Operator (2.3): do the comparative priority check (2.3.1): if the character stack's top contains an operator with equal or higher priority, then pop it into op, pop a number from the integer stack into op2, and pop another number from the integer stack into op1.
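A runnable sketch of the two-stack algorithm described above, in Python. The names op1/op2 mirror the description; tokens are assumed to be pre-split (so multi-digit numbers arrive as single tokens), and integer division is used to mimic the C version, which is an assumption for negative operands:

```python
def evaluate_infix(tokens):
    """Evaluate a tokenised infix expression using an operator stack
    (ops) and an operand stack (vals)."""
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}

    def apply_top(ops, vals):
        op = ops.pop()
        op2 = vals.pop()       # popped first, like step (2.3.1) above
        op1 = vals.pop()
        if op == '+':   vals.append(op1 + op2)
        elif op == '-': vals.append(op1 - op2)
        elif op == '*': vals.append(op1 * op2)
        else:           vals.append(op1 // op2)  # integer division, as in C

    ops, vals = [], []
    for tok in tokens:
        if tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops[-1] != '(':
                apply_top(ops, vals)
            ops.pop()          # discard the matching '('
        elif tok in prec:
            # pop while the stack top has equal or higher priority
            while ops and ops[-1] != '(' and prec[ops[-1]] >= prec[tok]:
                apply_top(ops, vals)
            ops.append(tok)
        else:
            vals.append(int(tok))   # operand
    while ops:
        apply_top(ops, vals)
    return vals[0]
```

On the article's example input the result is (10 - 2) / (11 - 9) = 4.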
Problem Description
The final partitions look as follows:
Therefore, 48 has been placed in its correct position.

Problem
Input: An array P with n elementsEditEdit
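A minimal partition sketch (Lomuto-style, pivoting on the last element; the exact scheme the original text used is not recoverable from this excerpt, so treat this as one illustrative variant):

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] around the pivot a[hi]: elements <= pivot end
    up on its left, larger ones on its right. Returns the pivot's final
    index, i.e. its correct sorted position."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]   # grow the "<= pivot" region
            i += 1
    a[i], a[hi] = a[hi], a[i]         # drop the pivot into place
    return i
```

After one call the pivot is in its final sorted position, just as 48 is in the worked example above.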
Queues

Performance Analysis

Priority Queue Implementation
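A binary-heap-backed priority queue can be sketched with Python's heapq module (using heapq here is my choice; the book's own implementation is not shown in this excerpt):

```python
import heapq

class PriorityQueue:
    """Min-priority queue: the item with the lowest priority value is
    served first. push and pop are both O(log n)."""
    def __init__(self):
        self._heap = []
        self._count = 0        # tie-breaker keeps insertion order stable

    def push(self, priority, item):
        heapq.heappush(self._heap, (priority, self._count, item))
        self._count += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

    def __len__(self):
        return len(self._heap)
```

The counter in each tuple prevents heapq from ever comparing two items directly, which matters when items are not comparable with each other.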
Related Links

Deques

References
Trees
preorder: 50, 30, 20, 40, 90, 100
inorder: 20, 30, 40, 50, 90, 100
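These traversals can be checked with a short sketch (Python here, though the book's examples are in C/Java; inserting the keys in the order 50, 30, 20, 40, 90, 100 reproduces the tree above):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Standard BST insertion: smaller keys go left, others go right."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def preorder(n):   # node, left subtree, right subtree
    return [] if n is None else [n.key] + preorder(n.left) + preorder(n.right)

def inorder(n):    # left subtree, node, right subtree (sorted order)
    return [] if n is None else inorder(n.left) + [n.key] + inorder(n.right)
```

Note that the inorder traversal of any binary search tree yields the keys in sorted order.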
Balancing
A typical binary search tree looks like this:
Terms

Example
It is assumed that you have already found the node that you want to delete, using the search technique described above.
Case 1: The node you want to delete is a leaf
For example, to delete 40...
- Simply delete the node!
Case 2: The node you want to delete has one child
- Directly connect the child of the node that you want to delete, to the parent of the node that you want to delete.
For example, to delete 90...
- Delete 90, then make 100 the child node of 50.
Case 3: The node you want to delete has two children
To delete a node, you first have to find the child node, called the "successor" or the "replacing node", that will replace the deleted node. This can be done in one of two ways, which are essentially a mirror image of each other.
- Find the left-most node in the right subtree of the node being deleted. After you have found the node you want to delete, go to its right node, then for every node under that, go to its left node until the node has no left node; that node will be the successor.
- Find the right-most node in the left subtree of the node being deleted. After you have found the node you want to delete, go to its left node, then for every node under that, go to its right node until the node has no right node; that node will be the successor.
The following examples use the first method.
Case 1: The successor is the right child of the node being deleted
Case 2: The successor isn't the right child of the node being deleted
This is best shown with an example
To delete 30...
- Move the successor into the place where the deleted node was and make it inherit both of its children. So 35 moves to where 30 was and 20 and 40 become its children.
- Move the successor's (35's) right subtree (if it exists) to where the successor was. So 37 becomes the left child of 40.
Node deletion
In general, remember that a node's left subtree's right-most node is the closest node on the left, and the right subtree's left-most node is the closest node on the right, and either one of these can be chosen to replace the deleted node. The only complication is when the replacing node (i.e., the successor) has a child subtree on the same side as the successor's side of the deleted node; in that case, the easiest thing to do is to always move the same-side subtree of the successor into the position vacated by the successor (i.e., the child of the successor's parent) even if the subtree is empty.
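The three deletion cases, using the first successor rule above (the left-most node of the right subtree), can be sketched as follows; the Node/insert/inorder helpers are repeated so the block stands alone:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(n):
    return [] if n is None else inorder(n.left) + [n.key] + inorder(n.right)

def delete(root, key):
    """Delete key from a BST, handling the three cases described above."""
    if root is None:
        return None                      # key not found: nothing to do
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    else:
        if root.left is None:            # cases 1 and 2: zero or one child
            return root.right
        if root.right is None:
            return root.left
        succ = root.right                # case 3: two children;
        while succ.left:                 # find left-most node of right subtree
            succ = succ.left
        root.key = succ.key              # copy the successor's key up,
        root.right = delete(root.right, succ.key)  # then remove it below
    return root
```

Copying the successor's key and then recursing to delete the successor is what keeps the subtree-relinking logic in one place.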
Red-black trees

A red-black tree is a self-balancing tree structure that uses a color attribute (red or black) on each node to keep the tree approximately balanced.
Executive Summary

(written by an armchair warrior who has never implemented a database. :P beat you to it)
B-Trees were described originally as generalizations of binary search trees, where a binary tree is a 2-node B-Tree, the 2 standing for two children, with 2-1 = 1 key separating the 2 children. Hence a 3-node has 2 values separating 3 children, and an N-node has N children separated by N-1 keys.
package btreemap; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.Comparator; import java.util.List; import java.util.Map; import java.util.Set; import java.util.SortedMap; import java.util.TreeMap; /** can't work without setting a comparator */ public class BTreeMap<K, V> implements SortedMap<K, V> { private static final int NODE_SIZE = 100; = v; } } /** * * - this represents the result of splitting a full block into * a left block, and a right block, and a median key, the right * block and the median key held in a BlockEntry structure as above. * @param <K> * @param <V> * @param <V>g */ static class SplitRootEntry<K, V> { BTBlock<K, V> right; BlockEntry<K, V> entry; SplitRootEntry(BlockEntry<K, V> entry, BTBlock<K, V> right) { this.right = right; this.entry = entry; } SplitRootEntry() { super(); } } /** * this is used to return a result of a possible split , during recursive * calling. * * * * @param <K> * @param <V> */ static class resultAndSplit<K, V> { /** * null , if there is no split. */ SplitRootEntry<K, V> splitNode; V v; resultAndSplit(V v) { this.v = v; } resultAndSplit(SplitRootEntry<K, V> entry, V v) { this.v = v; this.splitNode = entry; } } /** * used to represent the insertion point after searching a block if compare * is zero, then a match was found, and pos is the insertion entry if * compare < 0 and pos == 0 , then the search ended up less than the * leftmost entry else compare > 0 , and the search will be to the immediate * right of pos. * * * */> c) { entries = new ArrayList<BlockEntry<K, V>>(); maxSz = size; this.comparator = c; } /** * PosAndCompare usage: if compare is zero, then a match was found, and * pos is the insertion entry if compare < 0 and pos == 0 , then the * search ended up less than the leftmost entry else compare > 0 , and * the search will be to the immediate right of pos. 
* * * */ // private void blockSearch(K k, PosAndCompare pc) { // for (int i = 0; i < entries.size(); ++i) { // pc.compare = comparator.compare(k, entries.get(i).k); // if (pc.compare == 0) { // pc.pos = i; // return; // } // if (pc.compare < 0 && i == 0) { // pc.pos = 0; // return; // } // // if (pc.compare < 0) { // pc.pos = i - 1; // pc.compare = 1; // return; // } // // } // pc.pos = entries.size() - 1; // pc.compare = 1; // // // binary search, it's hard to get it right ! // // int left = 0; // // int right = entries.size(); // // // // while (left <= right && left < entries.size()) { // // // pc.pos = (right - left) / 2 + left; // // pc.show(n + 1); } else { System.err.println("No block right of #" + i); } } showTabs(n); System.err.println("End of Block Info\n\n"); } private void showTabs(int n) { // TODO Auto-generated method stub for (int i = 0; i < n; ++i) { System.err.print(" "); } } } @Override public SortedMap<K, V> subMap(K fromKey, K toKey) { TreeMap<K, V> map = new TreeMap<K, V>(); root.getRange(map, fromKey, toKey); return map; } @Override public SortedMap<K, V> headMap(K toKey) { // TODO Auto-generated method stub return subMap(root.headKey(), toKey); }; @Override public SortedMap<K, V> tailMap(K fromKey) { // TODO Auto-generated method stub return subMap(fromKey, root.tailKey()); } @Override public K firstKey() { // TODO Auto-generated method stub return root.headKey(); } @Override public K lastKey() { // TODO Auto-generated method stub return root.tailKey(); } @Override public int size() { // TODO Auto-generated method stub return 0; } @Override public boolean isEmpty() { // TODO Auto-generated method stub return false; } @Override public boolean containsKey(Object key) { // TODO Auto-generated method stub return get(key) != null; } @Override public boolean containsValue(Object value) { // TODO Auto-generated method stub return false; } @Override public V get(Object key) { // TODO Auto-generated method stub return root.get((K) key); } @Override 
public V put(K key, V value) { resultAndSplit<K, V> b = root.put(key, value); if (b.splitNode != null) { root = new BTBlock<K, V>(root.maxSz, root.comparator); root.rightBlock = b.splitNode.right; root.entries.add(b.splitNode.entry); } return b.v; } @Override public V remove(Object key) { // TODO Auto-generated method stub return null; } @Override public void putAll(Map<? extends K, ? extends V> m) { // TODO Auto-generated method stub } @Override public void clear() { // TODO Auto-generated method stub } @Override public Set<K> keySet() { // TODO Auto-generated method stub return null; } @Override public Collection<V> values() { // TODO Auto-generated method stub return null; } @Override public Set<java.util.Map.Entry<K, V>> entrySet() { // TODO Auto-generated method stub return null; } }Edit(); } } }
References
- William Ford and William Tapp. Data Structures with C++ using STL. 2nd ed. Upper Saddle River, NJ: Prentice Hall, 2002.
External Links

- Treaps: heaps where elements can be deleted by name

Treaps

Treaps are covered in an example in the Algorithms book: Algorithms/Left_rotation.
Graphs
In a directed graph, the edges point from one vertex to another, while in an undirected graph, they merely connect two vertices.
Weighted Graphs

Adjacency Matrix Representation

Depth-First Search
Start at vertex v, visit its neighbour w, then w's neighbour.
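That visit order can be sketched recursively over an adjacency list (Python; the dictionary-of-lists graph encoding is an assumption, since this excerpt doesn't show one):

```python
def dfs(graph, start, visited=None):
    """Depth-first traversal of an adjacency-list graph; returns the
    vertices in the order they are first visited."""
    if visited is None:
        visited = []
    visited.append(start)
    for neighbour in graph[start]:
        if neighbour not in visited:      # O(n) check; fine for a sketch
            dfs(graph, neighbour, visited)
    return visited
```

Starting at v, the search goes as deep as it can (v, then w, then w's neighbour) before backtracking to try v's remaining neighbours.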
Time complexity and common uses of hash tablesEdit
Hash tables are often used to implement associative arrays, sets and caches. Like arrays, hash tables provide constant-time O(1) lookup on average, regardless of the number of items in the table. However, the rare worst-case lookup time can be as bad as O(n). Compared to other associative array data structures, hash tables are most useful when large numbers of records of data are to be stored.
Hash tables may be used as in-memory data structures. Hash tables may also be adopted for use with persistent data structures; database indexes commonly use disk-based data structures based on hash tables.
Hash tables are used to speed-up string searching in many implementations of data compression.
In computer chess, a hash table is generally used to implement the transposition table.
Choosing a good hash functionEdit
A good hash function is essential for good hash table performance. A poor choice commonly goes undetected.
In pseudocode, inserting into the table looks like this:

function set(key, value)
    i := find_slot(key)
    if slot[i] is occupied       // the key is already in the table
        slot[i].value := value   // overwrite the old value
    else
        slot[i].key := key
        slot[i].value := value

(see language support for associative arrays)
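A hash table's set/get operations can also be sketched with separate chaining in Python (chaining rather than open addressing is a deliberate simplification; Python's built-in hash() stands in for the hash function discussed above):

```python
class HashTable:
    """Separate-chaining hash table sketch: each bucket holds a list of
    (key, value) pairs, giving O(1) average lookup."""
    def __init__(self, buckets=16):
        self._buckets = [[] for _ in range(buckets)]

    def _bucket(self, key):
        return self._buckets[hash(key) % len(self._buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # key present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))       # key absent: append to chain

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```

A production version would also resize (rehash) once the load factor grows, which is what keeps the average chain length, and therefore lookups, O(1).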
Associative array implementation
Hashtables use a lot of memory but can be very fast searches, in linear time.
In general you should use the data structure that best fits what you want to do, rather than trying to find the most efficient way.
[TODO:] Use asymptotic behaviour to decide, most importantly seeing how the structure will be used: an infrequent operation does not need to be fast if it means everything else will be much faster
[TODO:] Can use a table like this one to compare the asymptotic behaviour of every structure for every operation on it.
Sequences (aka lists):
* singly-linked lists can push to the back in O(1) with the modification that you keep a pointer to the last node
Associative containers (sets, associative arrays):
- Please correct any errors
Various Types of Trees
[TODO:] Can also add a table that specifies the best structure for some specific need e.g. For queues, double linked. For stacks, single linked. For sets, hash tables. etc...
[TODO:] Could also contain table with space complexity information (there is a significative cost in using hashtables or lists implemented via arrays, for example). | https://en.m.wikibooks.org/wiki/Data_Structures/All_Chapters | CC-MAIN-2014-15 | refinedweb | 2,175 | 55.34 |
I have built a python addin where a user selects a bunch of polygon features in ArcMap which then queries an Oracle database and provides a spreasheet of the results.
This works well but the final peice of the puzzle is I need to be able to prompt my users to enter their credentials for the oracle database login so that the script can query the database.
For the record, I am using the python module:
import cx_Oracle
to connect to the database using the syntax:
db = cx_Oracle.connect('{0}/{1}@//random.company:1234/db1.company.com'.format(username, password))
to login. To explain, I am passing the username and password into the connection string via the .format() method. Whichever way I design a login prompt, the script will grab the user's credentials and pass them into the connection string.
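For what it's worth, the string plumbing can be factored out so that whatever prompt is eventually used only has to supply the two values. A hedged sketch (the host/port/service below are the placeholder values from the question; getpass masks input at a console, but it will not pop up a dialog inside the ArcMap process, and ArcMap's Python 2 would use raw_input instead of input):

```python
import getpass

def build_connect_string(username, password,
                         host="random.company", port=1234,
                         service="db1.company.com"):
    # Same EZConnect-style string the add-in uses.
    return "{0}/{1}@//{2}:{3}/{4}".format(username, password,
                                          host, port, service)

def console_login():
    # Only works from a real terminal, not from within the ArcMap GUI.
    user = input("Oracle username: ")           # raw_input on Python 2
    pwd = getpass.getpass("Oracle password: ")  # echo is suppressed
    return build_connect_string(user, pwd)
```

Keeping the connection-string assembly separate means the GUI problem (how to collect masked input) stays isolated from the cx_Oracle call.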
In the python addin help documentation, I do not see a function or property anywhere to pop up a dialogue that will accept some sort of textual input. For example, a text box with 2 entry fields. One for the username and one for the password and then a button to accept the credentials. Or some other option that will allow this.
Couple answers to questions you may have:
Q: Why don't you set the environment workspace to the oracle database connection, which gives an automatic dialogue to prompt for credentials:
(i.e arcpy.env.workspace = r"Database Connections\oracleDB.sde")
A: This works fine, however, after logging into the database, I need to query it. Arcpy offers the 'Make Query Layer' tool to build and compile a query in Oracle. After working with this tool for a week or so, it turns out it has a bug which populates incorrect query results (see bug number below). ArcGIS and arcpy do not offer any other option to submit complex queries to Oracle sde connection via python. This is why I choose to use cx_Oracle for this portion of my script. That and it executes much much faster!
BUG-000090452 : The Make Query Layer tool does not respect the Unique fields variable in Model builder, or the oid_fields keyword in Python arcpy.MakeQueryLayer_management() command.
Q: Why don't you use some third party GUI builder like Tkinter?
A: Any use of Tkinter in ArcGIS (Map, Catalogue, etc.) will cause it to crash. From what I've read, ArcGIS will not support third-party GUI builders. Also, I did test a Tkinter widget and it crashed every time when used in ArcMap; outside of ArcMap, it worked fine.
Q: Why not just use a combobox as a textual input for a user name and/or password?
A: The problem I have with this is, when the user types into the combobox, I can't mask the characters the user is entering for the password (i.e. when the user types into the box, only asterisks appear). I don't want to expose their password to a lurking employee looking over their shoulder.
Anyways, what I am looking for is a long shot I think. If anyone can throw an idea or 2 out there, I'm willing to test it.
Long shot... since I don't use addins, but harkening back to 'the day', you could control the visible width of a text box or combo box, but it would accept anything you typed in without truncating characters. If you can make it a couple of characters wide, then the lurker would have to really lean in to catch the password.
Application Cleaner is a simple concept: it performs a search and replace on program code/data to remove any spyware URLs within program code, to stop programs such as web browsers from calling home. It was written in VS2010 using C# and makes extensive use of the System.IO.FileStream and System.IO.FileInfo classes to open, read and then edit program code or data.
The program works by starting a new thread that calls the code shown below: a recursive function used to iterate the directory tree of the folders, which calls the function "ScanFolderFiles" with a DirectoryInfo object to perform the processing.
private static void ScanFolders(DirectoryInfo DInfo)
{//Recursive function used to scan sub-folders if "RootFolderOnly" is not set to true
ScanFolderFiles(DInfo);
if (RootFolderOnly) return;//Nope we don't need to index sub-folders this time
foreach (DirectoryInfo SubDIofo in DInfo.GetDirectories())
{
ScanFolders(SubDIofo);
}
}
private static void ScanFolderFiles(DirectoryInfo DInfo)
{//Scan for files and then call ReadFile to index links
foreach (FileInfo FInfo in DInfo.GetFiles())
{
if (!Running) return;
while (Paused) { Thread.Sleep(2000); }
if (!FInfo.Name.EndsWith(".backup"))
{
ReadFile(FInfo); //Might add a load more links to be displayed
DisplayNewLinks(); //Now display any new links we found
}
}
//We have done with the folder
}
The user is given the option of pausing or terminating the scanner's thread by setting the static bool values Running or Paused to true or false.
ReadFile opens the file with a FileStream that reads the whole file into a byte[] array. The array is then chopped up into 10 KB chunks so that URLs can be searched for and added to a links dictionary, with each Link.Displayed value set to false, before DisplayNewLinks() is called.
private static void DisplayNewLinks()
{//Any link in our dictionary of links that have not been displayed is saved as a string array
foreach (Link L in Scanner.Links.Values)
{
if (!L.Displayed)
{//Show any new links in the list-view
L.Displayed = true;//Only show the link once
AddNewFind(L.Finfo.Name, L.LinkText, L.Find,
L.ChunkCount,L.Start , L.ImageIndex, L.Finfo.Directory.FullName);
}
}
}
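The chunked scan itself is language-neutral; here is a Python sketch of the same idea (the URL regex and the overlap re-scan are my assumptions, not the article's actual C# code — the overlap exists so a URL split across a chunk boundary is still matched in full):

```python
import io
import re

# Printable-ASCII URL pattern; an assumption, not the article's matcher.
URL_RE = re.compile(rb'https?://[\x21-\x7e]+')

def scan_stream(stream, chunk_size=10 * 1024, overlap=256):
    """Read a binary stream in 10 KB chunks (as the article does) and
    collect every URL found; the previous chunk's tail is re-scanned so
    boundary-spanning URLs are not missed."""
    found = set()
    tail = b''
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        data = tail + chunk
        for m in URL_RE.finditer(data):
            found.add(m.group().decode('ascii'))
        tail = data[-overlap:]
    return found

def scan_bytes(data, **kw):
    """Convenience wrapper for scanning an in-memory byte string."""
    return scan_stream(io.BytesIO(data), **kw)
```

Because matches in the overlap region are seen twice, results go into a set, which deduplicates them for free.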
Note that all of the above work is done on the scanner's own thread, and that the AddNewFind function locks a NewFind string before appending the data passed in as '¬'-separated values to the end of that string. A form timer later calls Scanner.GetListViewItems(), which again locks the NewFind string and then converts it into ListViewItems that are appended to the form's main ListView.
public static Dictionary<int, ListViewItem> GetListViewItems()
{//Called on the UI thread and returns new data to be displayed in the list-view
Dictionary<int, ListViewItem> LVItems = new Dictionary<int, ListViewItem>();
string[] Items = null;
lock (NewFind)//We lock it here so we don't get a threading error.
{
Items = NewFind.Replace(Environment.NewLine,
"~").Split('~');//Take a quick copy
NewFind = "";
MessageCount = 0;
}
foreach (string Item in Items)
{//Split the string and covert the data to list-view-items ready to add to the list-view
string[] Data = Item.Split('¬');
if (Data.Length > 2)
{
int Num = 0;
int.TryParse(Data[0], out Num);
ListViewItem LVI = new ListViewItem(Data);//Convert an array to values
if (Num > 0)
LVItems.Add(Num, LVI);
}
}
return LVItems;
}
Reading files and then processing the data is very CPU intensive. Without any Thread.Sleep() calls in the above code, the form timer would hardly get a chance to fire and the form would freeze up, so depending on how many links are waiting to be displayed, the main Scanner thread sleeps now and then to give the UI a chance to catch up. That concludes the part on threading.
Application Cleaner uses color-coded flags in the list-view to denote the type of URL found, so that the user can perform a manual search and replace, or double-click items in the list for auto-replace to do the work. I soon discovered that your average web browser contains about 5,000 URLs embedded in the program code, DLLs and data, with one well-known web browser having over 20,000 URLs (I will come back to this later), so the rush was on to sort the list-view results.
The code that was added to the form to sort the list view is shown below; note from the picture above that columns [0], [4] and [5] are all numeric values and are used in the code below.
private SortOrder OrderBy = SortOrder.Ascending;
private void LVScanner_ColumnClick(object sender, ColumnClickEventArgs e)
{//Sort the list-view but note that this will slow the program down when new rows are added
bool IsNumber = false;
if (OrderBy == SortOrder.Ascending) OrderBy = SortOrder.Descending;
else OrderBy = SortOrder.Ascending;
int Col = int.Parse(e.Column.ToString());
if (Col > 6) return;//We don't have a column number 7
if (Col == 0 || Col == 4 || Col == 5) IsNumber = true;//We need to tell our
//"ListViewItemComparer" to sort the data as a number and not a string
this.LVScanner.ListViewItemSorter = new ListViewItemComparer
(e.Column, OrderBy, IsNumber);//The class was easy to write and is worth a look
LVScanner.Sort();
}
Our code for the ListViewItemComparer class is shown below. It could be expanded to pre-scan some of the column data to decide whether a column is numeric or not, but all that takes time and code, so I decided to simply hard-code the values for now.
using System;
using System.Collections;
using System.Windows.Forms;
using System.Text;
public class ListViewItemComparer : IComparer
{//This class is used so that the listview can be sorted by clicking on the columns
private int col;
private SortOrder order;
private bool IsNumber = false;
public ListViewItemComparer()
{//Constructor
col = 0;
order = SortOrder.Ascending;
}
public ListViewItemComparer(int column, SortOrder order, bool isNumber)
{//Constructor
col = column;
this.order = order;
this.IsNumber = isNumber;
}
public int SafeGetInt(string Text)
{//This also sorts ip-addresses based on the first digit or double numbers
Text = Text.Trim();
int Value = 0;
int End = Text.IndexOf(".");
if (End > -1) Text = Text.Substring(0, End);
End = Text.IndexOf(" ");
if (End > -1) Text = Text.Substring(0, End).Trim();
End = Text.IndexOf("/");
if (End > -1) Text = Text.Substring(0, End);
int.TryParse(Text, out Value);
return Value;
}
public int CompareNumber(object x, object y)
{
int IntX = SafeGetInt(((ListViewItem)x).SubItems[col].Text);
int IntY = SafeGetInt(((ListViewItem)y).SubItems[col].Text);
int returnVal = IntX.CompareTo(IntY);//-1, 0 or 1, so equal values now compare as equal
if (order == SortOrder.Descending)
// Invert the value returned by CompareTo.
returnVal *= -1;
return returnVal;
}
public int Compare(object x, object y)
{
int returnVal = -1;
if (this.IsNumber) return CompareNumber(x, y);
returnVal = String.Compare(((ListViewItem)x).SubItems[col].Text,
((ListViewItem)y).SubItems[col].Text);
// Determine whether the sort order is descending.
if (order == SortOrder.Descending)
// Invert the value returned by String.Compare.
returnVal *= -1;
return returnVal;
}
}
Well, yes, and that's without the cached history or cookies. One of the reasons this number is so high is that when a program is compiled with embedded Photoshop images, each image will contain a URL pointing to ns.adobe.com; then you have w3.org XML schemas all over the place, plus hundreds of URLs needed for SSL certificates from VeriSign & Co, but even then it still leaves a lot.
With so much data, it became necessary to remove this common type of data from the results so that users could see the wood for the trees. The way I decided to do this was to first add the option of hiding SSL-certificate results, and then to make use of a profile that can be loaded by the user to filter the results for a particular type of data.
As you can see, our data is now sorted by state in the picture above, and if you look closely at the top two rows you will see that Application Cleaner also manages to search the program code and data for anything that looks like an IP address, but it sometimes makes a mistake with four-digit version numbers that look the same as an IP address.
Scanning for embedded IP addresses is nice to have because viruses often use these addresses to call home, and this program makes them easy to spot. Many browsers also hard-code Google's DNS server IP address of 8.8.8.8 into the browser so that they can bypass your local security if they are being blocked by URL in the firewall, or if the DNS server / virus blocker is trying to block the requests.
The project uses profile files that configure the scanner for a particular type of data to be indexed; each profile contains a list of values to be automatically searched and replaced. The function used to load profiles is shown below and works a bit like the old Windows .ini files, with a bit of a twist. For me, the Windows registry, with its 400,000 folders, is getting a bit full, and far too many programs get to read its contents to generate fingerprints.
public static string LoadSettingFromFile(string Key, string Default, string Profile)
{//Load a string from the file system if we can find the file or
// just create it, and if the string key is not in the file then
//add it with a default value
string FileName = Environment.CurrentDirectory + "\\" + Profile;
if (!File.Exists(FileName))
{//We need to add a new profile file
File.WriteAllText(FileName, Key + " " + Default + Environment.NewLine);
return Default;
}
string[] Lines = File.ReadAllLines(FileName);
foreach (string Line in Lines)
{//Read all the lines in the file to find our key
if (Line.ToLower().Trim().StartsWith(Key.ToLower()))
return Line.ChopOffBefore(Key).Trim();//Found the key, so return its value (ChopOffBefore() is a string extension method defined elsewhere in the project)
}
StreamWriter SW = File.AppendText(FileName);//We need to add the new key and default to the profile file
SW.WriteLine(Key + " " + Default);
SW.Close();
return Default;
}
I use a proxy server and a DNS server to block spyware on our LAN, and I was getting a bit tired of seeing the proxy-server logs filling up overnight when no one was even browsing. I also knew that even Microsoft was bypassing the Windows proxy-server setting to try to call home using encrypted SSL traffic, because my firewall, as a last line of defence, was blocking these malicious requests from getting out. So I decided to tackle this problem at source by editing the machine code of the programs that were behind the attempts to use backdoors.
It now takes me about half an hour to doctor most browsers using this program so that they are unable to upload super-cookies, the kind that get installed when you have to run one of them "install" programs that uses a downloader to install a browser that makes a profit by leaching and then selling your private data to the highest bidder.
This is my second attempt at this little program. My mistake in the first version was not to index the program contents but instead to do a quick search-and-replace in one go while the file was being read, which proved a bit slow because the data had to be re-scanned each time a search-and-replace was done. That has been fixed now, but it would be nice to somehow make it even faster, since it can take five minutes to scan the index for a project.
Replacing URL strings in sections of machine code was easy so long as the length of the text being replaced is kept the same, to ensure that machine-code jumps in the code all remain valid; for this reason the program will reject any attempt to replace "SpywareHome.com" with "abc.com". This is not the case with XML-type data, so it would be nice to find a fix for this, but it would need to take into account that the last-write date of the files must be preserved, because some programs check for it.
As you can see, I am useless at documentation and the help text for the program needs rewriting from scratch by anyone but you will find the full source code for the project is well commented and you are free to download a copy by clicking the link at the top of the page and to make the project your own.
Enjoy!
Dr Gadgit. | https://www.codeproject.com/Articles/897409/Application-Cleaner-NSA-Killer | CC-MAIN-2018-22 | refinedweb | 2,115 | 56.69 |
OK, so I need to make a program where you put in an email address and it checks for the @ symbol and then makes sure it's not the first character or the last character. Now I know that there are methods like s.islower() (for lowercase) and s.isupper(), but I don't know if there's an s.is"@"().
Can someone help me?
def main():
    print "This program validates a email address"
    email = raw_input("Enter Your email: ")
    if isValid(email):
        print ("That is a valid email")
    else:
        print ("That email is not valid.")
    raw_input("\nPress <enter> to close window.")

def isValid(s):
    if s.is"@"() == 0:
        print "There is no @ symbol"
        return False
    if s.isfirst(@) == 0:
        print "the @ cannot be first"
        return False
    if s.islast(@) == 0:
        print "the @ cannot be last"
        return False
    return True

main()
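There is no s.is"@"() method on strings; the usual way is the in operator together with index() and len(). A working isValid along the lines of the post above (same function name and messages, plus a slight tightening that requires exactly one @) might look like this:

```python
def isValid(s):
    # Require exactly one "@" (a stricter check than just "contains one").
    if s.count("@") != 1:
        print("There is no (single) @ symbol")
        return False
    at = s.index("@")          # position of the "@"
    if at == 0:                # first character?
        print("the @ cannot be first")
        return False
    if at == len(s) - 1:       # last character?
        print("the @ cannot be last")
        return False
    return True
```

Call it exactly as the original main() does, e.g. isValid("user@example.com") returns True.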
This page contains information on how to diagnose and fix common errors. It is divided into sections based on error codes and log messages.
Response code 400
Connecting to:...
TuningFork:Web: Response code: 400
TuningFork:Web: Response message: Bad
You can get this error if your API key is invalid. See Enable the API and the section Configure the plugin.
Multiple Google.Protobuf.dll files
PrecompiledAssemblyException: Multiple precompiled assemblies with the same name Google.Protobuf.dll included for the current platform. Only one assembly with the same name is allowed per platform. Assembly paths: ...
Error: The imported type `Google.Protobuf.IMessage<T>' is defined multiple times
You can get one of these errors if your project contains multiple Google.Protobuf.dll files. Remove one of the .dll files to resolve this conflict.
Attempting to call method ... for which no ahead of time (AOT) code was generated
ExecutionEngineException: Attempting to call method 'Google.Protobuf.Reflection.ReflectionUtil+ReflectionHelper' ... for which no ahead of time (AOT) code was generated.
You can see this error on some versions of Unity. This error occurs if the AOT compiler is not generating code for generic methods. For information on how to force generate required code, see section Ahead-of-time compile (AOT).
The type or namespace name 'Protobuf' does not exist in the namespace 'Google'
The type or namespace name `Protobuf' does not exist in the namespace `Google'. Are you missing an assembly reference?
Make sure your project is using .NET 4.x. Check Player Settings > Other Settings > Configuration > Scripting Runtime Version. | https://developer.android.com/games/sdk/performance-tuner/unity/troubleshooting | CC-MAIN-2020-45 | refinedweb | 255 | 52.05 |
.thousandmonkeys;

import nextapp.echo2.app.TaskQueueHandle;

/**
 * Note to developers who might use this class as an example:
 * Don't. This is a *very unusual* use of asynchronous tasks.
 * See the documentation for examples of how asynchronous tasks
 * might normally be used.
 */
public class GhostTask
        implements Runnable {

    /**
     * Creates and starts a new <code>GhostTask</code>.
     *
     * @param app the application to test
     * @param taskQueue the <code>TaskQueueHandle</code> to which tasks will be
     *        added
     */
    static void start(ThousandMonkeysApp app, TaskQueueHandle taskQueue) {
        app.enqueueTask(taskQueue, new GhostTask(app, taskQueue));
    }

    private TaskQueueHandle taskQueue;
    private ThousandMonkeysApp app;

    /**
     * Creates a new <code>GhostTask</code>.
     *
     * @param app the application to test
     * @param taskQueue the <code>TaskQueueHandle</code> to which tasks will be
     *        added
     */
    private GhostTask(ThousandMonkeysApp app, TaskQueueHandle taskQueue) {
        this.taskQueue = taskQueue;
        this.app = app;
    }

    /**
     * @see java.lang.Runnable#run()
     */
    public void run() {
        app.iterate();
        app.enqueueTask(taskQueue, this);
    }
}
Convert IP addresses to Countries using Python on Windows, Linux, Unix.
IP address to Country program
Why convert IP addresses to Countries?
If you use any kind of log management or intrusion prevention or detection method then it's likely that these can import or make use of a list of known bad IP addresses. If so, then there might be a need to create known entities and enrich the data with geographic attributes.
As an example, there is a list kept at which is a list of networks and IP addresses that are known 'bad'.
The format is:
AA.BB.CC.DD/nn ; <name> <newline>
We would like to turn this into a CSV or similar and add the location.
Another source is found at
Warning: tabs tend to get eaten by blogging software, so some of these examples might be missing code indentation. Python needs this indentation. Download the final code at the end of this article.
GeoIP data source
There is a source of IP address to Location mapping at with a limit of 25 lookups per day. For example, the shot below is for an IP address currently in the 'bad' list: 2.56.0.5
Demo from maxmind
Get the data.
You can see that this is in the Ukraine which is a country notorious for naughty net high-jinx.
The maxmind database is a subscription service with either a site license or a site license with optional updates.
However, there is a free version called 'GeoLite' which is under a Creative Commons Attribution-ShareAlike 3.0 Unported License and you have to include the following as attribution:
"This product includes GeoLite data created by MaxMind, available from"
Download the ZIP for Country or Country and City.
Don't open this in Excel, because it's more than 65,535 rows long and won't load completely. Also note there is a binary version for use with the API and databases. I also discovered that some of the names use non-ASCII Unicode characters.
Here is the format of the file called GeoLiteCity-Blocks.csv
startIpNum, endIpNum ,locId
"7602176" , "7864319" ,"16"
This is the format of the file called GeoLiteCity-Location.csv
locId,country,region,city ,postalCode,latitude,longitude,metroCode,areaCode
562 ,"ZA" ,"02" ,"Mpumalanga" ,"" ,-27.8818,31.5340 , ,
Not all fields are used in all rows.
How do we interpret this data?
Let's do this by example. Let's take the line
"7602176","7864319","16"
from GeoLiteCity-Blocks.csv
and
16,"AT","","","",47.3333,13.3333,,
from GeoLiteCity-Location.csv
"16" is the location code which matches in both files as an index. The code "AT" is the top level domain code for Austria, and you can verify this using google maps by pasting the coordinates 47.3333,13.3333 into maps.google.com directly into the search bar.
These country codes are also listed in a table.
What are those odd-looking IP addresses?
Normally, decimal-dotted IP addresses are used, but in this case, it's a single (quoted) string of digits. Let's assume that it's a single integer that can be computed from an IP address. Let's take a simple address like 1.2.3.4 and create a single integer.
Each digit in the decimal-dotted notation is a power of 256 (Because it's 8 bit and 2^8 = 256) so the formula to convert 1.2.3.4 into a single integer is:
(1*256^3)+(2*256^2)+(3*256^1)+(4*256^0) = 16909060
(You can make Google do that just by "searching" for that formula - cut and paste (1*256^3)+(2*256^2)+(3*256^1)+(4*256^0) into the Google search bar to find out.)
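The same powers-of-256 formula can be written as a small Python function:

```python
def ip_to_int(ip):
    """Convert a dotted-decimal IPv4 address such as '1.2.3.4'
    to the single-integer form used by the GeoLite CSV files."""
    a, b, c, d = (int(octet) for octet in ip.split("."))
    return a * 256**3 + b * 256**2 + c * 256**1 + d * 256**0
```

For example, ip_to_int('1.2.3.4') gives 16909060, matching the worked example above.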
Conversely, to go from an integer to decimal-dotted notation you need to use the following formula:
If C4, D4, E4 and F4 are spreadsheet cells, and x is a named cell containing the ip address integer, then
Given IP address x (as a decimal integer) to convert to C4.D4.E4.F4
C4=INT(x/(256^3))
D4=INT((x-(C4*(256^3)))/(256^2))
E4=INT((x-(C4*(256^3)+D4*(256^2)))/(256))
F4=x-(C4*(256^3)+D4*(256^2)+(E4*256))
e.g. The integer 41735425 = 2.124.213.1
For a spot-check, there is an on line calculator.
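The spreadsheet formulas above translate to integer division and remainder in Python:

```python
def int_to_ip(value):
    """Convert the single-integer form back to dotted-decimal IPv4."""
    octets = []
    for power in (3, 2, 1, 0):
        octets.append(str(value // 256**power))  # like INT(x/256^n)
        value %= 256**power                      # keep the remainder
    return ".".join(octets)
```

For example, int_to_ip(41735425) gives '2.124.213.1', agreeing with the spot-check above.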
Now we need a scripting language.
We could use several languages to perform lookups; it depends what you want to do. A front end could be written in HTML using PHP or JavaScript or similar, or we could use a command-line scripting language like Python or Perl. Both are popular choices. We could also use C.
Python seems like a good choice. It's stable and well documented and has several integrated development environments. It's also platform independent.
We could also use Visual Basic (VB) or one of the GUI-based integrated environments to make a nice GUI front end. But let's stick with a command-line version, because this will let us suck in IP addresses in a batch mode which can then be imported into whatever system you need them for.
Python for Windows.
Let's go for this freeware version.
Download the installer for your operating system and launch it. This will install the files and after that you can launch in interactive shell the program menu (what used to be the start-menu).
Interactive shell
pygeoip
There just happens to be a "pure Python API for MaxMind GeoIP database" documented at code.google.com and we should use this and save ourselves a lot of coding.
We need to find out how to install that module because, as you can see above, it's not seen by the interactive Python shell.
There is a download. Get the file called pygeoip-<version>.tar.gz and the *apidocs.zip for the documentation. It's probably worth reading some documentation on python modules first. (Also here.)
In the interactive shell import the "sys" module (This is always supplied during install).
>>> import sys
Now you can find out which DOS paths are checked when trying to import a module:
>>> print sys.path
['', 'C:\\Windows\\system32\\python27.zip',
'C:\\Python27\\DLLs',
'C:\\Python27\\lib',
'C:\\Python27\\lib\\plat-win',
'C:\\Python27\\lib\\lib-tk',
'C:\\Python27',
'C:\\Python27\\lib\\site-packages',
'C:\\Python27\\lib\\site-packages\\win32',
'C:\\Python27\\lib\\site-packages\\win32\\lib',
'C:\\Python27\\lib\\site-packages\\Pythonwin',
'C:\\Python27\\lib\\site-packages\\setuptools-0.6c11-py2.7.egg-info']
You can see there are several places that you could put the module called pygeoip. The one highlighted contains a lot of *.py files and we could place the contents of the pygeoip.zip file there. However, the zip file contains a file called setup.py and in there we find Python code that will do the module installation for us.
Extract the zip file somewhere and use a DOS prompt and navigate to that directory.
Issue this help command to see what can be done:
C:\Python27\python.exe setup.py --help
This tells us that setup.py install will install the file and that's just what we need.
C:\Python27\python.exe setup.py install
The README
Pure Python GeoIP API. The API is based off of MaxMind's C-based Python API [1],
but the code itself is based on the pure PHP5 API [2] by Jim Winstead and Hans Lellelid.
It is mostly a drop-in replacement, except the
`new` and `open` methods are gone. You should instantiate the GeoIP class yourself:
gi = GeoIP('/path/to/GeoIP.dat', pygeoip.MEMORY_CACHE)
The only supported flags are STANDARD, MMAP_CACHE, and MEMORY_CACHE
If you have any questions or find a bug, have a look at the project page [3] or
contact Jennifer Ennis <zaylea at gmail dot com>
[1]
[2]
[3]
Test your module
RELAUNCH the interactive shell and type:
>>> import sys,pygeoip
This should not return any errors. If there are no errors, then the new module is installed and is ready for use.
Before we can use it, let's revisit the MaxMind download site and get the binary version (not the CSV) because this is cleaner. It's too big for a spreadsheet anyway, and the pygeoip module works with the binary (*.dat) file directly.
Make a folder called C:\GeoIP
Unzip the .dat file into C:\GeoIP
Now we can test it in the interactive Python window.
You should already have imported the sys and pygeoip modules. Check the name of the *.dat file in C:\GeoIP. Mine was called geoLiteCity.dat and so we can do this:
>>> geo = pygeoip.GeoIP('C:\GeoIP\GeoLiteCity.dat')
This makes a 'handle' called geo and we can use it to access the data.
Here is the whole interactive test
>>> import sys,pygeoip
>>> geo = pygeoip.GeoIP('C:\GeoIP\GeoLiteCity.dat')
>>> print geo.record_by_addr('20.2.3.4')['country_name']
United States
Now we can write a python script only a few lines long that will match most IP addresses to a country. (The database is not foolproof).
Open a text editor, put this code inside it, and save it as ip2C.py:
#!/usr/bin/env /c:/Python27/python
import pygeoip, sys
geo = pygeoip.GeoIP('C:\GeoIP\GeoLiteCity.dat')
for row in sys.stdin:
    rec = geo.record_by_addr(row.strip())#Strip the trailing newline before the lookup
    print rec['country_name']
NOTE Python uses white-space indentation as demarcation for code blocks and you have to consistently use the right number of tabs or spaces as indentation. It's one of very few annoying 'features' of Python.
Then create or obtain a list of IP addresses (one per line) and put them into a file called ip.txt and try this from your command prompt:
ip2C.py < ip.txt
You should get a list of countries for each of the IP addresses.
What can we do with this?
There are several internet sites that keep lists of IP addresses involved in 'bad' things. We could pull that data in and add country-data to it. This extra data could be useful for raising alerts on incoming or outgoing internet traffic.
Since the list contents could change frequently, it would be nice to have a way to automatically retrieve them. An ideal program for this is called wget and is available for Windows and *nix. It certainly comes with the excellent cygwin package for Windows.
This is an install for Windows wget follow the obvious links to find the latest version. It installs into C:\Program Files (x86)\GnuWin32
With wget, you can download a website or data. Here is an example:
>"C:\Program Files (x86)\GnuWin32\bin\wget.exe"
This is a website that generates a list of phishing sites and makes them available for download. wget let's you get this from a script. Please visit the website for rules about automated downloading.
This is the format of the .csv version of the data:
phish_id,url,phish_detail_url,submission_time,verified,verification_time,online,target
123456, National Example Bank
wget
--2012-07-12 10:34:06--
Resolving ()... 176.9.54.236
Connecting to ()|176.9.54.236|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 302961 (296K) [text/plain]
Saving to: `all.txt.1'
100%[======================================>] 302,961 85.9K/s in 3.4s
2012-07-12 10:34:17 (85.9 KB/s) - `all.txt.1' saved [302961/302961]
ip2C.py
import pygeoip, sys
geo = pygeoip.GeoIP('C:\GeoIP\GeoLiteCity.dat')
for row in sys.stdin:
    rec = geo.record_by_addr(row.strip())
    print '{},\"{}\"'.format(row.strip() , rec['country_name'])
A simple example
For now, let's use wget to pull down a simple list of IP addresses: See the pane to the right. This could be done on a daily basis.
Now let's modify the script to print the IP address and the country as a CSV. Note that we have to use the format command and wrap the country in double quotes because some countries are listed like "China , Republic of " and we need to protect the comma inside that string. In the Python code, the double quotes are 'escaped' with a back-slash.
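As an aside, instead of escaping the double quotes by hand in a format string, Python's standard csv module can do the quoting for us; here is a minimal sketch writing one row to an in-memory buffer (an assumption for testability; in the real script you would pass sys.stdout instead):

```python
import csv
import io

# The second field contains a comma; QUOTE_ALL wraps every field in
# double quotes so the embedded comma survives in the output.
buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerow(["119.66.129.77", "Korea, Republic of"])
print(buf.getvalue().strip())  # "119.66.129.77","Korea, Republic of"
```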
Try it out
Here is the command, and some sample output.
> python.exe ip2C.py < all.txt
- 119.46.101.34,"Thailand"
- 119.46.133.214,"Thailand"
- 119.46.56.75,"Thailand"
- 119.46.90.28,"Thailand"
- 119.53.196.76,"China"
- 119.57.37.9,"China"
- 119.57.77.163,"China"
- 119.6.7.27,"China"
- 119.62.48.23,"China"
- 119.66.129.77,"Korea, Republic of"
- 119.66.144.93,"Korea, Republic of"
- 119.70.227.139,"Korea, Republic of"
- 119.71.213.76,"Korea, Republic of"
- 119.74.243.150,"Singapore"
- 119.82.252.240,"Cambodia"
- 119.82.73.242,"India"
- 119.84.117.74,"China"
- 119.84.117.75,"China"
- 119.92.225.242,"Philippines"
Let's get more information
In the Python code above, the variable rec is an array. It's what is called an associative array which means that the index is a non-ordinal. That in turn simply means that the index is not a number. Actually, this index is a string of characters. Another term used in this context is 'key value pair'. The key that we used is 'country name'.
There are other values in this database and we can get them all by printing the whole record as follows:
print rec
That's rather simple. Here is a single sample output:
{'city': '', 'time_zone': 'Asia/Taipei', 'longitude': 121.0, 'metro_code': '', 'country_code3': 'TWN', 'latitude': 23.5, 'postal_code': None, 'country_code': 'TW', 'country_name': 'Taiwan'}
From this, we can list the available indexes:
- city
- time_zone
- longitude
- metro_code
- country_code3
- latitude
- postal_code
- country_code
- country_name
... and modify our script to also print out the city ( or any of the other data ).
print '{},\"{}\",\"{}\"'.format(row.strip() , rec['country_name'], rec['city'])
Extracting the Spamhaus IP addresses
Here is sample data from the Spamhaus site mentioned at the start of this article.
; Spamhaus DROP List 07/12/12 - (c) 2012 The Spamhaus Project
; Last-Modified: Tue, 10 Jul 2012 22:45:16 GMT
; Expires: Fri, 13 Jul 2012 01:02:23 GMT
2.56.0.0/14 ; SBL102988
14.192.0.0/19 ; SBL123577
14.192.48.0/21 ; SBL131019
14.192.56.0/22 ; SBL131020
If we pulled this in using wget and then tried to process it with the existing Python script, it would fail because the IP address is not isolated. We need a way to extract it, and we also need to ignore the semi-colon comments. There are several ways to do this; it would be nice to find the most flexible method so that our script works for other formats too. In fact, the "re" module in Python gives us the power of "regular expressions" (I wrote a full hub about this), so we should be able to extract valid IP addresses from each line, no matter where they appear.
The module is imported like this:
import re
It's installed by default so you won't need to find and download a special module.
A better IP address finder
s='kj lkj lkjsdfjj 2.3.4.5/23;ll kjdskj 4.5.6.7 45.76.345.32 1.-6.34. 100.200.300.400./234'
ip=re.findall(r'\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b',s)
print ip
['2.3.4.5', '4.5.6.7']
Use the interactive shell to test it.
import re
s='1.2.3.4/34;sksdfjkl'
ip = re.findall( r'[0-9]+(?:\.[0-9]+){3}', s )
print ip[0]
If you run these lines, then the output is a clean IP address.
But the regex used above will find invalid IP addresses too so we need a better pattern.
This example (to the right) shows that the single line of regex will not only find valid IP addresses, it will also find them no matter how they are buried in the string.
The output is an array, and we can pick these out one by one.
for i in ip:
print i
import re

s=';123//;456#999'
r = re.split(';|#|//',s)
if r[0]=='':
    print 'Nothing'

s='This is an IP that is valid 1.2.3.4 see!; and a comment 1.4.5.6'
r = re.split(';|#|//',s)
if r[0]=='':
    print 'Nothing'
else:
    print r[0]
Ignoring the comment lines in spamhaus
The Spamhaus file uses a semi-colon at the start of a line to indicate that the line should be ignored. We also need to prepare each line by stripping everything to the right of a comment character, because the comment might contain a valid IP address that we should ignore. Several characters are commonly used for comments, so our script should be able to pick from a list of them. Luckily, Python has an easy way to split a string on a set of delimiters: all we need to do is split the string on some common comment delimiters and throw away everything after the comment.
Here is a start, using a single delimiter option of a semi-colon.
import re
s='123;456'
print re.split(';',s)
... and the result:
$ python.exe x.py
['123', '456']
We can add more delimiters and test it thus:
import re
s='123;456'
print re.split(';',s)
s=';123//;456#999'
print re.split(';|#|//',s)
with the result:
$ python.exe x.py
['123', '456']
['', '123', '', '456', '999']
Note that the first element of the output array is null. We can simply use the first element as the input string and ignore it if it is null. This takes care of all the comment lines.
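Putting the comment-splitting together with the IP-address regex from earlier gives a small helper (the function name extract_ips is my own choice, not from the article):

```python
import re

# Matches only syntactically valid dotted-quad IPv4 addresses.
IP_RE = re.compile(
    r'\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}'
    r'(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b')

def extract_ips(line):
    """Drop anything after a ;, # or // comment marker, then return
    every valid IPv4 address found in what is left."""
    data = re.split(r';|#|//', line)[0]
    return IP_RE.findall(data)
```

For a Spamhaus-style row such as "2.56.0.0/14 ; SBL102988" this returns just the address, and comment-only lines yield an empty list.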
The script to the right contains an example of what we need to extract the non-comment portion of an input string.
wget and portability
wget is great, but there is a way to do what we need directly from Python, and this is an advantage for portability. Instead of separately installing wget, and possibly needing to set up paths, and find a way to call wget from python, we can use a module called urllib. Read about it first of course.
This is another module that comes built-in to Python which is great because it's less to worry about when installing onto a new system. Many environments use http proxies to access the internet. The module deals with this transparently if there is no authentication. In a Windows environment, if no proxy environment variables are set, proxy settings are obtained from the registry’s Internet Settings section. Under Linux etc it will take note of the environment variable as in this example:
export http_proxy=""
Example- obtain Spamaus data
Here is sample code and output to pull in data from Spamhaus
import urllib2
response = urllib2.urlopen('')
html = response.read()
print html
Output
; Spamhaus DROP List 07/12/12 - (c) 2012 The Spamhaus Project
; Last-Modified: Tue, 10 Jul 2012 22:45:16 GMT
; Expires: Fri, 13 Jul 2012 02:38:21 GMT
2.56.0.0/14 ; SBL102988
14.192.0.0/19 ; SBL123577
14.192.48.0/21 ; SBL131019
(etc)
This is very good because we can use this in our ip2C.py script to get the data directly from the internet.
It is important to gracefully deal with errors. Python will trap errors using the 'try' keyword. Once we trap an error, it would be a good idea to save the error somewhere. However, for now, let's just report this to the standard error channel.
Sample error handling code
import sys, urllib2
someurl=''
req = urllib2.Request(someurl)
try:
    response = urllib2.urlopen(req)
except urllib2.URLError, e:
    # Report the problem to the standard error channel
    print >> sys.stderr, 'Error fetching', someurl, ':', e
else:
    print 'everything is fine'
Putting it all together
A fully working, copyrighted, licensed script is available called ip2Cdir.py from this link.
You may use it with attribution of the author (me), and of course the maxmind database. See the comments in the file for details.
Run it from the command line on a Windows system with interactive Python installed in the default directory and also with the pygeoip python module installed. You will need internet access and either no proxy, or one that does not require authentication.
To run open a command prompt and type
python ip2Cdir.py
(Assuming your path variable will find python, and the script is in the current directory).
If you have any problems - hit the comment section below. I can't promise a fast response as I probably won't have time to maintain the script. After all, this was just done on a day off feeling sick. I don't get that kind of time often.
#!/usr/bin/env /c:/Python27/python
#
# "This product includes GeoLite data created by MaxMind, available from"
#
# The first line works for 'interactive Python' as installed on Windows.
# For unix/linux/mac etc this line will need to be changed to suit.
#
# This script pulls a list of IP addresses (assumed one per line)
# from the internet at the specified variable 'someurl' below.
# Then it uses an IP address to country database to print a
# csv list containing
#     <IP> , <"Country"> , <"City">
#
# The script may be used to obtain other fields indexed by any of:
#
#     country_name
#     city
#     longitude
#     latitude
#     time_zone
#     metro_code
#     country_code
#     country_code3
#     postal_code
#
# To modify the output, change the output format string.
#
# Obtain and install the module pygeoip before using this script.
# ( download pygeoip-0.2.2.tar.gz and run the Python install script )
# Obtain GeoLiteCity.dat on a regular basis and put it into C:\GeoIP
#
# Author: Jeremy Lee 12/07/2012 version 0.1 beta
#
# License: Use for any purpose under the condition that the Author is
# referenced as above, and the maxmind attribution remains intact.
#
# Potential enhancements:
# 1. Use a .ini file for configuration.
# 2. Provide a GUI to configure the .ini file
# 3. Detect the presence and date of the geoIP data and install automatically
# 4. Make an installer script
# 5. Merge multiple sources of potentially malicious IP address lists.
# 6. Write the output to a file
# 7. Update the output file on a periodic basis
# 8. Parameterise the output based on .ini information using a format string
#
import pygeoip, sys, urllib2, re

geo = pygeoip.GeoIP('C:\GeoIP\GeoLiteCity.dat')
#
# Choose a source of malicious IP addresses here
#
#someurl=''
someurl=''

# Pull in the data from the internet
req = urllib2.Request(someurl)
# Try to open it and report any errors to stderr
try:
    response = urllib2.urlopen(req)
except urllib2.URLError, e:
    print >> sys.stderr, 'Error fetching', someurl, ':', e
    # At this point, the script terminates.
    sys.exit(1)
else:
    # At this point, we have the internet data in 'response'
    while True:
        s = response.readline().strip()
        if len(s) == 0:
            # Terminate the loop because there is no more data in 'response'
            break
        # This next line splits the row where we find delimiters that are used as comments.
        # Presently, a semi-colon, hash or two slashes are considered comments. You can
        # add more if needed.
        r = re.split(';|#|//', s)
        # Now we only use the first element in the split array as data. If the comment
        # was at the beginning, then this element is empty.
        if r[0] != '':
            # The data is not empty - so look for a valid ip address
            ip = re.findall(r'\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b', r[0])
            if ip:
                # An IP address was found, so look it up in the GeoIP database
                rec = geo.record_by_addr(ip[0])
                # If a record came back then 'rec' is not None and we can continue
                if rec:
                    # Modify the following output format as needed
                    print '{},\"{}\",\"{}\"'.format(ip[0], rec['country_name'], rec['city'])
When you work with your data: Contacts, Companies, Deals, Tickets etc, you can:
import data
create new records manually
This all may simply create duplicates in your existing folder.
When importing from a csv file, on the very last step you will be asked to make sure you are not about to import those.
Select a field to match duplicates by (this field MUST be unique e.g. Email, ID, Phone number). Last name field is not unique as you can have contacts with the same last name. Phone number can be used in different formats and still be recognized by the system (e.g. 049 123 45 67, +49-123-45-67, 1234567, etc)
You can then choose: either import the duplicates but merge them with the existing (same records) and update them with the data from the file OR not import the duplicates at all.
2. When working with the folder (e.g. Contacts), you create a new record and by chance create the same contact. We can prevent it:
Go to the settings>duplicate prevention and click on create new rule for a specific folder.
When creating a new rule for duplicate prevention you can select a single unique identifier field by clicking on the drop down menu and selecting the field, for example e-mail address.
You can also create rules composed by more than one field. For example if the contact has the same Name, Family Name and Company.
The system can give you 1 option out of 2: either warn about the potential duplicate BUT allow you to create the record anyway OR forbid the duplicate creation.
You can add as many rules as you need.
3. When you type in the potential duplicate (with the option "forbid duplicate prevention"), the system will show you the existing record with the same value found and will not allow you creating the new record.
Haven’t found the answers you’re looking for? Ask our User Community. | https://help.nethunt.com/en/articles/4198520-duplicate-prevention | CC-MAIN-2022-40 | refinedweb | 331 | 70.33 |
Interview QuestionProgram Managers
Country: United States
@sri The complexities are O(n^2). Temporally this can be explained as we can solve the problem by traversing the matrix, even if you can do it by traversing just half of it. Spatially is easy to see using the worst case in which there is no preexisting connection between the cities. In that case you would need to generate n-1 + n-2 ..... + 1 connections which has a quadratic cost. Basically the question is asking you to generate a clique, you can research the topic to get a deeper understanding.
Solution in Python:
def RoadBuilder(nCities, builtRoads):
solutions = set()
m = [ [0] * nCities for _ in xrange(nCities) ]
for x,y in builtRoads:
m[x][y] = 1
m[y][x] = 1
for x in xrange(nCities):
for y in xrange(x+1, nCities):
if m[x][y] == 0:
solutions.add((x, y))
return solutions
#include <iostream>
using namespace std;
#include <vector>
vector<pair<int, int> > build_roads(int n, vector<pair<int, int> > in) {
vector<vector<int> > matrix(n, vector<int>(n));
for(vector<pair<int, int> >::iterator it = in.begin(); it != in.end(); ++it) {
// check wheth the it->first and it->second within the limits
matrix[it->first][it->second] = 1;
}
vector<pair<int, int> > out;
for(int i =0; i < n; i++) {
for(int j=i+1; j<n; j++) {
if (!matrix[i][j]) {
out.push_back(pair<int,int>(i,j));
}
}
}
return out;
}
int main() {
// your code goes here
int ara[][2] = {{0,1}, {1,2}, {2,3}};
vector<pair<int, int> > in;
for(int k = 0; k < sizeof(ara)/sizeof(ara[0]); k++) {
in.push_back(pair<int, int>(ara[k][0], ara[k][1]));
}
vector<pair<int, int> > result = build_roads(4, in);
for(vector<pair<int, int> >::iterator it = result.begin(); it != result.end(); ++it) {
cout << "{" << it->first<<"," <<it->second<<"}\n";
}
return 0;
}
create the adjacency matrix from the given pairs, build all possible pairs (unidirectional) and add to result if no edge is present. In Java
- Chris May 23, 2017 | https://careercup.com/question?id=5723805839261696 | CC-MAIN-2019-13 | refinedweb | 340 | 51.48 |
On 21/07/2008, at 5:27 AM, Avi Flax wrote:
> While I can now create and "call" a view, I'm still a little confused
> as to the terminology involved and the intended usage patterns.
Yes, it's a very foreign way of thinking if you're coming from SQL-
land. Expect a few hours of staring blankly at strange-looking
javascript and thinking "Huh!?". It'll click eventually though : )
> For instance, you used the term "namespace". What might a namespace
> be? Also, the design document includes the key "views" - are there
> other keys supported? Or is that TBD? Also, apparently I can define
> multiple view functions in the "view" key - that's cool, but I don't
> quite understand the intended usage pattern.
>
> For example, how should I name my namespaces and my views?
Well, I think this is completely up to you. You should probably name
them loosely under their "category". If you look at how some of the
libraries do it, they name them after the document type they're
intended to be associated with - for example if you have the doc.type
"user" you might put any views intended to work on them _view/users.
The "view" key is as far as I know hardcoded and permanent, you're
free to create as many design documents with whatever _design/names
you want but that key is necessary.
> Perhaps like this?
>
>
>
>
>
>
Well they seem pretty reasonable to me. However, you could save
yourself some duplication by defining more than one key in the view
and having something like this:
and then pass the "type" string into it. Obviously that depends how
you are storing the type etc.
> ...? Or maybe I'm missing the point, perhaps it's intended to be as
> open-ended as it is, so we can make up our own usage patterns.
I think you've hit the nail on the head there!
Sho | http://mail-archives.apache.org/mod_mbox/couchdb-user/200807.mbox/%3C20B7CB19-96D4-4898-ACF9-86F428D8133B@gmail.com%3E | CC-MAIN-2014-15 | refinedweb | 321 | 73.07 |
Is it possible to hide the 'Stop' cross in the upper lefthand corner?
I am making an animation movie in Pythonista, which is to be recorded and distributed.
Of course, I use Scene for this purpose. But, there is always the stop-cross in the upper left hand corner visible.
Is there a way to hide this unwanted cross?
You could run the scene with a
SceneView, and present it using the
hide_title_baroption.
Here's a short example with a scene that just draws a green background:
from scene import * class MyScene (Scene): def draw(self): background(0, 0, 1) view = SceneView() view.scene = MyScene() view.present('fullscreen', hide_title_bar=True)
(swipe down with two fingers to close the view)
There's currently no way to hide the status bar (time, battery...) though.
- chrislandau203
This is definitely not working in the recent implementation of Pythonista. Any new ideas for how to solve this? I want to remove the X for the full screen app I'm building.
Discussed here-
Now with objc_util it is possible to hide status bar(battery, time etc)
def hide_close(self, state=True): from objc_util import ObjCInstance v = ObjCInstance(self.view) for x in v.subviews(): #if 'UIButton' in x.description(): if str(x.description()).find('UIButton') >= 0: x.setHidden(state)
This works better for me. | https://forum.omz-software.com/topic/1487/is-it-possible-to-hide-the-stop-cross-in-the-upper-lefthand-corner | CC-MAIN-2020-40 | refinedweb | 219 | 77.33 |
Because the procedure was relatively lengthy and the final outcome was extremely good, i started compiling a guide that must be used during
Here's an excerpt from the document:
Why use DFS as your software repository
Consider the following scenario, you have one Primary/Central Site server with two (2)
secondary servers that connect with a saturated or bandwidth constraint WAN Link,
control of the bandwidth consumed is at high priority.
- You need one logical place to access the data from any of the three sites and always
access the closest one.
- You want to quickly move the data to another drive when hard drive space is limited
and attach this replica in the same namespace in such a way that users won’t notice
the change.
- If something happens in a secondary site’s DFS folder, you want the clients to
fallback to another site.
- You need to organize the packages and any other software in a manner you
understand and apparently can’t do it with SCCM’s software distribution shares
(which are hidden).
All these are the benefits of using DFS with Replication; however there is a cost for using
them and this is a small administrative overhead (it’s really small).
Hope it helps as it did for me!
by: gsimos on 2011-03-23 at 05:19:56ID: 24993 | http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/Systems_Management_Server/A_5042-Guide-for-deploying-software-that-resides-in-DFS-folders-via-Configuration-Manager-2007.html | crawl-003 | refinedweb | 225 | 52.73 |
In this section, you'll learn how to show modal views in order to get confirmation from the user. We'll start with the successful scenario, where an action generates a successful outcome that we want the user to be aware of. Then we'll look at the error scenario, where something went wrong and you don't want the user to move forward without acknowledging the issue.
Let's start by implementing a modal view that's displayed as the result of the user successfully performing an action. Here's the
Modal component that's used to show the user a success confirmation:
import React, { PropTypes } from 'react'; import { View, Text, Modal, } from 'react-native'; import styles from './styles'; // Uses "<Modal>" ...
No credit card required | https://www.oreilly.com/library/view/react-and-react/9781786465658/ch20s02.html | CC-MAIN-2019-22 | refinedweb | 126 | 62.98 |
This post is for beginners who are starting in WCF. I will be focused and help you to create and consume first WCF Service in simplest steps. We will follow steps as following,
At the end of this post you should able to create first WCF Service and consume that in a Console Application. I will keep adding further posts in this series to make WCF easier for you.
Project setup
- Launch any edition of Visual Studio 2010 or 2012.
- Create a project by choosing WCF Service Application project template from WCF tab.
- Delete default created IService1.cs and Service1.svc file.
Step 1: Create Service Contract
To create a Service Contract,
- Right click on project and add New Item
- From Web tab choose WCF Service to add.
- Let us give Calculator.svc name of service.
Next you need to perform following tasks,
- Open ICalculator.cs and remove void DoWork() function.
- Make sure attribute of Interface is set as [ServiceContract]
- Define functions you want to create as part of Contract
- Set Attribute of functions as [OperationContract]
You will create ICalculator Service Contract with four basic calculator operations as following,
ICalculator.cs
using System.ServiceModel; namespace fourstepblogdemo { [ServiceContract] public interface ICalculator { [OperationContract] double AddNumbers(double number1, double number2); [OperationContract] double SubstractNumbers(double number1, double number2); [OperationContract] double MultiplyNumbers(double number1, double number2); [OperationContract] double DivisionNumbers(double number1, double number2); } }
Above you have created a Service Contract with four Operation Contracts. These contracts will be part of Service Contract and exposed to clients. By this step you have created ICalculator Service Contract.
Step 2: Expose Endpoints with Metadata
In this step you need to expose Endpoints and Metadata of service. To do this open Web.config file. We are going to create one Endpoint with basicHttpBinding. We are adding metadata Endpoint also to expose metadata of service. We need metadata at the client side to create proxy.
<services> <service name="fourstepblogdemo.Calculator"> <endpoint address="" contract="fourstepblogdemo.ICalculator" binding="basicHttpBinding"/> <endpoint address="mex" contract="IMetadataExchange" binding="mexHttpBinding"/> </service> </services>
Step 3: Implement Service
In this step we need to implement service. To implement service open Calculator.svc and write codes to implement functions is defined in ServiceContract ICalculator.cs. Delete implementation of DoWork function from Calculator.svc and implement services as following
Calculator.svc.cs
namespace fourstepblogdemo { // NOTE: You can use the "Rename" command on the "Refactor" menu to change the class name "Calculator" in code, svc and config file together. // NOTE: In order to launch WCF Test Client for testing this service, please select Calculator.svc or Calculator.svc.cs at the Solution Explorer and start debugging. public class Calculator : ICalculator { public double AddNumbers(double number1, double number2) { double result = number1 + number2; return result; } public double SubstractNumbers(double number1, double number2) { double result = number1 - number2; return result; } public double MultiplyNumbers(double number1, double number2) { double result = number1 * number2; return result; } public double DivisionNumbers(double number1, double number2) { double result = number1 / number2; return result; } } }
As of now you have created Service and configured Endpoint. Now you need to host service. There are many processes in which a WCF Service can be hosted. Some processes are
- Managed Application
- IIS
- ASP.Net Web Server
- Windows Service
- App Fabric
In this post we are not going into details of WCF Service hosting and we will consider simplest hosting option. Let us host service in ASP.Net Web Server. To host press F5 in visual studio.
In browser you can see Service as following.
To view metadata of Service click on URL of wsdl. You may notice that Service is hosted on localhost.
Step 4: Consume Service
There are various ways a WCF SOAP Service can be consumed in different kind of clients. In this post we will consume service in a Console Application. Launch Visual Studio and create Console Application project.
Now there are two ways you can create proxy at client side.
- Using svcuitl.exe at command prompt
- By adding Service Reference
In this post we will create proxy at client side using Add Service Reference. In Console Project right click on Reference and select option of Add Service Reference. In Add Service Reference dialog copy paste address of Service or if WCF Service and Console Client project is in same solution then click on Discover. If there is no error in Service then you will find Service Reference added as given in following image. If you want you can change name of Reference. I am leaving here default name ServiceReferenc1.
You can consume service at client as following,
- Create instance of proxy class
- Call different operations from service
You can create instance of proxy class as following
And let us say you want to make a call to Add function. That can be done as following
So at client side you can call all four functions of Calculator Service as following
using System; using ConsoleClient.ServiceReference1; namespace ConsoleClient { class Program { static void Main(string[] args) { CalculatorClient proxy = new CalculatorClient(); double addResult = proxy.AddNumbers(9, 3); Console.WriteLine("Result of Add Operation"); Console.WriteLine(addResult); double subResult = proxy.SubstractNumbers(9, 3); Console.WriteLine("Result of Substract Operation"); Console.WriteLine(subResult); double mulResult = proxy.MultiplyNumbers(9, 3); Console.WriteLine("Result of Multiply Operation"); Console.WriteLine(mulResult); double divResult = proxy.MultiplyNumbers(9, 3); Console.WriteLine("Result of Division Operation"); Console.WriteLine(divResult); Console.ReadKey(true); } } }
Press F5 to run Console Client Application. You will get desired result.
Just now you have create a Calculator WCF Service and consumed that in a Console Application. In further posts I will simplify other concepts for WCF for beginners. I hope you find this post useful. Thanks for reading.
9 thoughts on “Four Steps to create first WCF Service: Beginners Series”
Thanks sir to share this. It is so helpful and personally I want to read this type blog. It gives a confidence that I can learn WCF. Looking for your new blog. Thanks Sir
Really it is good article …. but i got a error i can’t add service by giving address…So pls guide me where i went wrong…
sir how to convert existing web services into restful wcf ?
Step 2 seems to be giving me a bunch of problems can you post what the entire Web.config file is supposed to look like?
Sir, How can make my service global, which will be accessible by any application.
I have been hosted in IIS | https://debugmode.net/2013/04/16/four-steps-to-create-first-wcf-service-beginners-series/ | CC-MAIN-2017-09 | refinedweb | 1,065 | 51.04 |
While I haven't specifically tested this patch it looks reasonable to me. Are you going to do an engineering/commit cycle with it?
-Matt Matthew Dillon <[EMAIL PROTECTED]> :The patch initializes nbuf (and many other things) statically again. :The only losses are slight bloat of the data section and the ability :... : :%%% :Index: subr_param.c :=================================================================== :RCS file: /home/ncvs/src/sys/kern/subr_param.c,v :retrieving revision 1.52 :diff -u -2 -r1.52 subr_param.c :--- subr_param.c 6 Feb 2002 01:19:19 -0000 1.52 :+++ subr_param.c 23 Feb 2002 07:44:45 -0000 :@@ -56,31 +56,27 @@ : #define HZ 100 : #endif :-#define NPROC (20 + 16 * maxusers) :... : :Bruce To Unsubscribe: send mail to [EMAIL PROTECTED] with "unsubscribe freebsd-current" in the body of the message | https://www.mail-archive.com/freebsd-current@freebsd.org/msg35546.html | CC-MAIN-2018-47 | refinedweb | 127 | 60.92 |
Content-type: text/html
XmInstallImage - A pixmap caching function that adds an image to the pixmap cache
#include <Xm/Xm.h>
Boolean XmInstallImage (image, image_name). Points to the image structure to be installed. The installation process does not make a local copy of the image. Therefore, the application should not destroy the image until it is uninstalled from the caching functions. Specifies a string that the application uses to name the image. After installation, this name can be used in .Xdefaults for referencing the image. A local copy of the name is created by the image caching functions.X), XmGetPixmap(3X), XmDestroyPixmap(3X) | http://backdrift.org/man/tru64/man3/XmInstallImage.3X.html | CC-MAIN-2016-44 | refinedweb | 103 | 61.73 |
import pylab as plt import time as time from math import log from sys import stderr import random as random def timer(f, *args, **kwargs): """ This can be used for testing the speed of certain operations. """ start = time.perf_counter() f(*args, **kwargs) return int((time.perf_counter() - start)*10**9)
There are several functions required to perform RSA. The most interesting is
bezout(a,b) which solves for integer s and t given positive integers a and b so that sa+tb=gcd(a,b)
def gcd(m,n): k = m % n return n if k == 0 else gcd(n, k)
def bezout(a, b): """ bezout: Finds s and t so that s*m + t*n = gcd If gcd(m,n) = 1, then a mod n = m^-1 mod n The point at each stage of the Eucldean algorithm Assume a >= b(not necessary) r_0 = a, r_1 = b, r_2 = r_0 mod r_1, ... r_i = r_{i-2} mod r_{i-1} So r_0 > r_1 > r_2 ... >= 0 At each stage s_i*a + t_i*b = a_i q_i*a_{i-1} + a_i = a_{i-2} q_i(s_{i-1}*a + t_{i-1}*b) + a_i = s_{i-2}*a + t_{i-2}*b [s_i, t_i] = [s_{i-2}, t_{i-2}] - q_i*[s_{i-1}, t_{i-1}] (*) and q_i = floor(a_{i-2}/a_{i-1}) As soon as r_{i} = 0 we are done and want to take s_{i-1}, t_{i - 1} (*) indicates why I use "pairs" rather than s, t Themain point is there is some s, t so that: gcd(a,b) = gcd(b, a mod b) = s*a + t*b Tracking this gives the algorithm """ a0 = [1, 0] # [s_{i-2}, t_{i-2}] a1 = [0, 1] # [s_{i-1}, t_{i-1}] B = [a, b] r = b while r > 0: aa = a0[0]*B[0] + a0[1]*B[1] # a_{i-2} bb = a1[0]*B[0] + a1[1]*B[1] # a_{i-1} r = aa % bb # a_i if r == 0: break q = aa // bb # q_i tmp = [x - q*y for x,y in zip(a0, a1)] # [s_i, t_i] a0 = a1 # reset for next round a1 = tmp; s = a1[0]; t = a1[1]; gcd = bb; return (s, t, gcd)
Computing memodϕ(n) or cdmodϕ(n) can take a long time if exponentiation without thinking. The following uses about log2(d) multiplication and mod operations instead of the d many of these operations if
m ** d % n were used.
def fast_exp(m,e,n): if e == 0: return 1 if e == 1: return m % n tmp = fast_exp(m, e >> 1, n) if (e >> 1) < ((e + 1) >> 1): return (tmp * tmp * m) % n return (tmp * tmp) % n
The following plot indictes the time difference between using fast_exp and not for relatively small exponents. The blue is the running time of regular exponentiation in nanoseconds.
ns = [i*20 for i in range(0,100)] b = random.randint(2**20,2**21) N = 2**63 times1 = [timer(fast_exp, b, i, N) for i in ns] times2 = [timer(lambda : b**i % N) for i in ns] fig, ax = plt.subplots() ax.set_ylim(0, 200000) ax.plot(times1, 'r.') pt = ax.plot(times2,'b.')
The next two helper functions convert a text message to an integer and vice versa.
def message_to_int(message): """ Simply feed in one ascii code for a character at a time. So message_to_int("Hi") -> 0x4869, since, H -> 0x48 and i -> 0x69. """ int_message = 0 for c in message: int_message = (int_message << 8) + ord(c) return int_message def int_to_message(n): message = "" while n > 0: B = n % 256 n = n >> 8 c = chr(B) message = c + message return message
Two more helper functions, isPrime does the obvious test, it optimizes a bit by testing only odd possible factors for primes and only up to the square root of n. The function Primes does a complete prime factorization of n. It is really the running time of these and similar functions that make RSA secure. If you try the isPrime on p or q below in the "Real World" parameters section, then don't plan on seeing the process terminate. To test primality on truely large numbers probabilistic primality tests are used. It is also useful to know about how likely you are to find a prime, the Prime Number Theorem basically provides that if an odd number is chosen at random that is around n bits, then the probability that that number is prime is approximately nln(2)2. so if we choose a random odd number, p~, that is around 516 bits, then P(p~∈Primes)=216⋅log(2)2≈1.34%. After 53 random attempts the probability for finding a prime is already over 50%.
def isPrime(n): if n == 2: return True if n % 2 == 0: return False i = 1 while (2*i+1)*(2*i + 1) <= n: if n % (2*i + 1) == 0: return False i += 1 return True def Primes(n): i = 2 a = [] while i*i <= n: while n % i == 0: n = n//i a.append(i) i = i + 1 if n > 1: a.append(n) return a
Here would be some "real world" prime pairs. The primes p and q are 516 bits making n 1024 bits as required for RSA, but our codes run too slow on these.
p = 0xE0DFD2C2A288ACEBC705EFAB30E4447541A8C5A47A37185C5A9CB98389CE4DE19199AA3069B404FD98C801568CB9170EB712BF10B4955CE9C9DC8CE6855C6123 q = 0xEBE0FCF21866FD9A9F0D72F7994875A8D92E67AEE4B515136B2A778A8048B149828AEA30BD0BA34B977982A3D42168F594CA99F3981DDABFAB2369F229640115 n = 0xCF33188211FDF6052BDBB1A37235E0ABB5978A45C71FD381A91AD12FC76DA0544C47568AC83D855D47CA8D8A779579AB72E635D0B0AAAC22D28341E998E90F82122A2C06090F43A37E0203C2B72E401FD06890EC8EAD4F07E686E906F01B2468AE7B30CBD670255C1FEDE1A2762CF4392C0759499CC0ABECFF008728D9A11ADF e = 0x40B028E1E4CCF07537643101FF72444A0BE1D7682F1EDB553E3AB4F6DD8293CA1945DB12D796AE9244D60565C2EB692A89B8881D58D278562ED60066DD8211E67315CF89857167206120405B08B54D10D4EC4ED4253C75FA74098FE3F7FB751FF5121353C554391E114C85B56A9725E9BD5685D6C9C7EED8EE442366353DC39 d = 0xC21A93EE751A8D4FBFD77285D79D6768C58EBF283743D2889A395F266C78F4A28E86F545960C2CE01EB8AD5246905163B28D0B8BAABB959CC03F4EC499186168AE9ED6D88058898907E61C7CCCC584D65D801CFE32DFC983707F87F5AA6AE4B9E77B9CE630E2C0DF05841B5E4984D059A35D7270D500514891F7B77B804BED81 # You can test that d = e^-1 mod phi phi = (p-1)*(q-1) check = "e and d are inverses mod phi" if e * d % phi == 1 else "e and d are not inverses mod phi" print(check, file=stderr) # You can even run bezout(e, (p - 1)*(q - 1)) to get d (so this is clearly not slow operation!) (d, _, _) = bezout(e, phi) hex(d % phi)
These function generate primes to use for RSA. You can choose the size of the primes to look for. Do not try a size like 1024 bits as required for real RSA applications!
import random as random def gen_prime(bits=32): """ Search for a prime around 32 bits. """ p = random.randint(2**bits, 2**(bits+1)) while not isPrime(p): p = random.randint(2**bits, 2**(bits+1)) return p def gen_prime_pair(seed=None, bits=32): random.seed(seed) p = gen_prime(bits) q = p while (q == p): q = gen_prime(bits) return p,q
Here is the actual RSA. The first function uses the gen_prime_pair together with a seed for the random number generator (the "pass phrase" -- really just noise.) The encoding function simply takes the message, converts it to an integer, m, which must be <n−1 and then computes c=memodn. This is the cypher text. The decoding function, then computes cdmodn which is again m and then reconstructs the text from its numeric representation.
def RSA_gen_key_pair(pass_phrase, bits = 32): """ Decide if you wish to use a 32 bit or a 64 bit primes and enter a pass phrase. This generates a (public, private) pair, public = (n, e) where n = p*q, private = d so that e*d = 1 (mod phi) where phi = (p-1)*(q-1). The message to be encoded must be < n and so if n has k-bits, then the message length is roughly k/8. (8 bits = 1 byte per character.) Since n is roughly 2 * bits, bits should be set to "message length" * 4 at least. """ p, q = gen_prime_pair(pass_phrase, bits) n = p * q m = (p - 1)*(q - 1) gcd = 0 while gcd != 1: e = random.randint(23, n) (d, s, gcd) = bezout(e, m) public = (n, e) private = d return public, private % m def RSA_encode(public, message): m = message_to_int(message) if m > public[0] - 1: print("Message is too long only %s charaters are allowed."%(int((log(public[0])/log(2))/8)),file=stderr) return fast_exp(m, public[1], public[0]) def RSA_decode(public, private, cypher): return int_to_message(fast_exp(cypher, private, public[0]))
There are things to watch out for. Suppose I wish to code "Hello", this is five 8 bit (1 byte) characters, or 40 bits. For n to have 40 bits we need p and q to have around 20 bits so when running the key_gen, we need to use bits = 20.
public, private = RSA_gen_key_pair("My Secret Key", 20)
cypher = RSA_encode(public, "Hello")
print("The hex version of the cypher text is {} this is what is \ transmitted \nalong with the public key (n, e) = ({},{}).".format(hex(cypher),hex(public[0]), public[1]))
message = RSA_decode(public, private, cypher) print(message)
The following simulates a brute force break of a 32 bit RSA encryption. The primes are chosen to be 16 bit, so the search space has size 216=65536.
public, private = RSA_gen_key_pair("My Secret Key", 16) cypher = RSA_encode(public, "CODE")
Brute force factor n and comput the private key, then decrypt.
n = public[0] e = public[1] p, q = Primes(n) phi = (p - 1) * (q - 1) (d, _, _) = bezout(e, phi) d = d % phi plain_text = fast_exp(cypher, d, n) message = int_to_message(plain_text) | https://share.cocalc.com/share/fa3e56c84c4012296dfc5beda5aa9b0b90f0a686/RSA.ipynb?viewer=embed | CC-MAIN-2020-10 | refinedweb | 1,493 | 65.35 |
#include <wx/infobar.h>.
wxInfoBar calls its parent wxWindow::Layout() method and assumes that it will change the parent layout appropriately depending on whether the info bar itself is shown or hidden. Usually this is achieved by simply using a sizer for the parent window layout and adding wxInfoBar to this sizer as one of the items. Considering the usual placement of the info bars, normally this sizer should be a vertical wxBoxSizer and the bar its first or last element so the simplest possible example of using this class would be:
See the dialogs sample for more sophisticated examples.
Currently this class is implemented generically (i.e. in the same platform-independent way for all ports) and also natively in wxGTK but the native implementation requires a recent – as of this writing – GTK+ 2.18 version.
Add a button to be shown in the info bar.
The button added by this method will be shown to the right of the text (in LTR layout), with each successive button being added to the right of the previous one. If any buttons are added to the info bar using this method, the default "Close" button is not shown as it is assumed that the extra buttons already allow the user to close it.
Clicking the button will generate a normal EVT_COMMAND_BUTTON_CLICKED event which can be handled as usual. The default handler in wxInfoBar itself closes the window whenever a button in it is clicked so if you wish the info bar to be hidden when the button is clicked, simply call
event.Skip() in the button handler to let the base class handler do it (calling Dismiss() explicitly works too, of course). On the other hand, if you don't skip the event, the info bar will remain opened so make sure to do it for at least some buttons to allow the user to close it.
Notice that the generic wxInfoBar implementation handles the button events itself and so they are not propagated to the info bar parent and you need to either inherit from wxInfoBar and handle them in your derived class or use wxEvtHandler::Connect(), as is done in the dialogs sample, to handle the button events in the parent frame.
Create the info bar window.
Notice that unlike most of the other wxWindow-derived classes, wxInfoBar is created hidden and is only shown when ShowMessage() is called. This is more convenient as usually the info bar is created to be shown at some later time and not immediately and so creating it hidden avoids the need to call Hide() explicitly from the code using it.
This should be only called if the object was created using its default constructor.
Return the effect animation duration currently used.
Return the effect currently used for hiding the bar.
Return the effect currently used for showing the bar.
Remove a button previously added by AddButton().
Set the duration of the animation used when showing or hiding the bar.
By default, 500ms duration is used.
Set the effects to use when showing and hiding the bar.
Either or both of the parameters can be set to wxSHOW_EFFECT_NONE to disable using effects entirely.
By default, the info bar uses wxSHOW_EFFECT_SLIDE_TO_BOTTOM effect for showing itself and wxSHOW_EFFECT_SLIDE_TO_TOP for hiding if it is the first element of the containing sizer and reverse effects if it's the last one. If it is neither the first nor the last element, no effect is used to avoid the use of an inappropriate one and this function must be called if an effect is desired.
Show a message in the bar.
If the bar is currently hidden, it will be shown. Otherwise its message will be updated in place. | http://docs.wxwidgets.org/3.0/classwx_info_bar.html | CC-MAIN-2018-34 | refinedweb | 621 | 59.33 |
77
While tidying up up some of the examples recently I noticed that the C++
header file plstream.h includes plplot.h. Currently it needs to do that
because it needs the various constants, pltr? functions etc. This is all
very well but it does clutter the namespace a little - particularly with
both the C and the C++ versions of the API functions being available.
Potentially this could lead to confusion, although currently nearly all
the C++ versions in plstream just call the related C version.
I guess the "best" solution would be to split the constants etc out from
the C API definitions. Only the constants need then be included by
plstream.h, and they could be included either in the plstream of in
their own namespace (plplot maybe?). This is going to require some
fairly major rejigging of plplot.h and probably plstream.h though. Since
this could have a big impact I wanted to discuss it on the list first.
Of course this also has some API implications since might require code
to be changed to use the new namespace. I would propose we made the
necessary structural changes, but for now the default is the old
behaviour. The new, tighter, namespace scheme could be enabled via a
define perhaps. If we change the C++ examples to use this then hopefully
it will encourage people to adopt it before the C++ API becomes too
widespread. At some later stage when we are making other API changes we
could remove support for the old, laxer, scheme.
Any thoughts?
Andrew).
A good reason I can think of is that it is a safe limit for
all drivers. In that case, I would have to look for other
solutions.
Can anyone comment on that?
Regards,
Arjen
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/plplot/mailman/plplot-devel/?viewmonth=200406&page=3&style=flat | CC-MAIN-2016-40 | refinedweb | 339 | 73.68 |
Before reading this article, please go through the following article.
The scale animation behavior allows you to change a control’s scale by increasing or decreasing the control through animation. For example, perhaps you want an entry field to change size when the user taps it.
Reading this article, you can learn how to scale an image in UWP apps development, with XAML and Visual C#.
The following important tools are required for developing UWP.
Now, we can discuss step by step app development.
Step 1
Open Visual Studio 2015. Go to Start -> New Project-> Select Universal (under Visual C# -> Windows) -> Blank App -> give a suitable name for your app (UWPScaleImg) -> OK.
After choosing the Target and Minimum platform version that your application will support, the Project creates App.xaml and MainPage.xaml. Step 2
Open the file MainPage.xaml in Solution Explorer and add the Microsoft.Toolkit.Uwp.UI.Animations reference in the project. For adding Reference, right click your project (UWPScaleImg) and select "Manage NuGet Packages".
Choose "Browse" and search Microsoft.Toolkit.Uwp.UI.Animations.
Add images in the Assets folder. Step 5
Add the image control for displaying the image for scaling. Step 6
Add the following Xaml namespaces and code for Scaling in Mainpage.xaml.
View All | http://www.c-sharpcorner.com/article/scaling-an-image-using-uwp-with-xaml-and-c-sharp/ | CC-MAIN-2017-43 | refinedweb | 211 | 60.31 |
Win.
Aw come on, Raymond. You should know that most readers come here for the analysis!
@someone else: I don’t know. Quite a large number of people seem to come here just for the nitpicking and vista-bashing.
Back to the topic at hand, it makes sense. Once a process is launched and has used its own command line, it and a lot else has little use for the original command line. I was surprised it’s even kept somewhere.
Well, no, it would be useful for programs like Process Explorer (already mentioned) to give the user a better idea of what’s running on their system than "Host Process for Windows Services", either to determine whether it’s malicious or to track down what’s taking up so much RAM/disk space. I could even think of cases where taking action automatically on such information is useful. Think antiviruses removing known threats based on command line arguments.
@Alexandre: Well, you come for the analysis but you stay for the vista-bashing :-).
"….Process Explorer (already mentioned) to give the user a better idea of what’s running on their system than "Host Process for Windows Services","
not only does process explorer tell you that, you can also use "tasklist /svc" from the command line
From a contractual perspective, it’s kept around because it’s a parameter to main(), and by C convention main doesn’t return, putting its parameters out of scope. However, you can of course do whatever you want, including terminating the thread that main is in.
I come for the analysis, but I stay for the insults!
On a related note, if I remember correctly, some of djb’s daemontools (used with qmail on Unix) deliberately overwrite their command lines, so that they can report status and errors to someone using ‘top’ or ‘ps’.
Don’t even need to bother with Process Explorer – on Vista, the Task Manager can show the (possibly over-written) command line.
And what will the poor Vista-bashers do now that Windows 7 is out?
Yes it does, when the program finishes! Hence why it returns an int which the stub which called main then passes to exit().
I suppose this qualifies as an "application trying to fool you", but a specific use case is for applications that accepts passwords via command-line arguments.
@Timothy, here? I’m running Windows 7 and don’t see the command line.
Sorry, by "here" I meant "where"
Similarly, can you see the environment variables for a process?
@Alexandre: You should have used a capitol ‘V’ in "Vista", not that it deserves it.
Pre-emptive nit-pick: I know I spelled capital incorrectly.
Raymond, being this precise is hard. You do a great job!
Vista / Win7 Task manager:
"View" menu
"Select Columns"
Scroll down to "Command line"
@Boris: You can add a column for command-line in the "Processes" list.
Argh, why didn’t I know that! Thanks.
Random aside: Windows 7 (Pro x64) now allows you again to drag and drop directories from Explorer to the command prompt to save typing the full path. This useful feature was removed from Vista.
Yep, just tested the following dummy program:
#include <windows.h>
int main(int argc, char **argv)
{
wchar_t *x = GetCommandLineW();
while(*x) *x++ = L’x’;
Sleep(10000000);
return 0;
}
When I opened it up with Process Explorer, it reported the command line as "xxxxxxxxxx" instead of what it actually was.
"not keeping track of information you don’t need" is wrong. You do need it, for diagnostics.
While deliberately overwriting the command line for purposes of reporting status (as mentioned above) could be useful — how do you know how much room you have?
I guess you can be reasonably safe in assuming that if someone gave you 20 characters in actual parameters then the buffer is at least that big, but I don’t think there are any particular guarantees anywhere, especially when a program is being run with no parameters.
So it might end up just being another way to buffer-overrun yourself…
Also: is there a Win32 way of getting the (possibly overwritten) command line of a process, or is it only available through WMI? (I don’t regard WMI as part of Win32.)
"Windows 7 (Pro x64) now allows you again to drag and drop directories from Explorer to the command prompt to save typing the full path. This useful feature was removed from Vista."
As mentioned by Raymond before, that was a side effect of the introduction of UIPI to Vista. Restoring this feature to Windows 7 required moving console windows into a separate conhost.exe.
You mean like inject a DLL using CreateRemoteThread into another process? And that DLL then calls GetCommandLineW and sends you the result back?
And don’t forget that the command line doesn’t necessarily include the correct, or indeed any, executable. (WinDbg 6.6.0007.5 gets this wrong.)
That’s the only way I could think of, yes. I was wondering if there were anything more elegant.
Although then I noticed that Raymond <a href="">had covered this before</a>.
@Miral
I do not know if you consider this more elegant
but i guess ReadProcessMemory instead of injecting a thread has less risk of destroying something.
It still has the sampe problems with reading commandline from privileged processes and is surely equally unsupported…
AFAIK this is the way most tools implement reading commandline… | https://blogs.msdn.microsoft.com/oldnewthing/20091125-00/?p=15923/ | CC-MAIN-2016-30 | refinedweb | 911 | 62.58 |
got - A tool to make it easier to manage multiple code repositories using different VCSen
version 1.333
cd some/proj/in/a/vcs got add # answer prompts for various information # or run with '-D' to take all defaults # show managed repositories got list got ls # run a command in selected repositories got do --tag perl --command "ls t/" # show managed repositories sorted by path (default = sort by name) got ls -p # remove repo #1 from the list got remove 1 # remove repo named 'bar' from the list got remove bar # remove all repos tagged 'foo' without confirmation prompts got rm -f -t foo # remove repo #3 without confirmation prompts and be noisy about it got rm -f -v 3 # show status (up-to-date, dirty, etc.) for all repos got status # show status for repo #3 got st 3 # fetch upstream for all repositories got fetch # fetch upstream for repo #3 got fetch 3 # update all repos with configured remotes got update # update repo named 'bar' got up bar # Note: if a repo is in the list but doesn't have a local checkout, 'got # update' will create directories as needed and do the initial check out. # Run the 'git gc' command to garbage collect in git repos got gc # spawn a subshell with working directory set to 'path' of repo #1 got chdir 1 # spawn a subshell with working directory set to 'path' of repo foo got cd foo # or use 'tmux' subcommand to open a new tmux window instead got tmux 1 got tmux foo # N.b., 'tmux' will reuse an existing window if one is open # checkout a local working copy of a repo and add it to your list of repos. # will prompt for info on what to name repo, tags, etc. 
got clone <git/http/ssh url> # As above, but accept defaults for all options without prompting got clone -D <git/http/ssh url> # fork a github repo, add it to your list of repos, and check it out in # the current working directory got fork # note: the default path to a repo added via 'fork' is a directory # named 'repo_name' in the current working directory # if you just want to fork without checking out a working copy: got fork --noclone # finally, please note that you need a C<~/.github-identity> file set up # with your access token or your username and password in the following key-value # format: user username pass password # *OR* access_token token # note that if you specify both, the access_token value will be used # show version of got got version
got is a script to make it easier to manage all the version controlled repositories you have on all the computers you use. It can operate on all, some, or just one repo at a time, to both check the status of the repo (up to date, pending changes, dirty, etc.) and sync it with any upstream master.
got also supports forking a GitHub repo and adding it to the list of managed repositories.
In addition to the subcommand-specific options illustrated in the SYNOPSIS, all the subcommands accept the following options:
--verbose / -v
Be more verbose about what is happening behind the scenes
--quiet / -q
Be more quiet
--tags / -t
Select all repositories that have the given tag. May be given multiple times. Multiple args are (effectively) 'and'-ed together.
--skip-tags / -T
Skip all repositories that have the given tag. May be given multiple times. Multiple args are (effectively) 'or'-ed together. May be combined with -t to select all repos with the -t tag except for those with the -T tag.
--no-color / -C
Suppress colored output
--color-scheme / -c
Specify a color scheme. Defaults to 'dark'. People using light backgrounds may want to specify "-c light".
The name given to the option indicates a library to load. By default this library is assumed to be in the 'App::GitGot::Outputter::' namespace; the given scheme name will be appended to that namespace. You can load something from a different namespace by prefacing a '+'. (E.g., '-C +GitGot::pink' will attempt to load 'GitGot::pink'.)
If the requested module can't be loaded, the command will exit.
See COLOR SCHEMES for details on how to write your own custom color scheme.
Commands may be limited to a subset of repositories by giving a combination of additional arguments, consisting of either repository names, repository numbers (as reported by the '
list' subcommand), or number ranges (e.g.,
2-4 will operate on repository numbers 2, 3, and 4).
Note that if you have a repository whose name is an integer number, bad things are going probably going to happen. Don't do that.
Color scheme libraries should extend
App::GitGot::Outputter and need to define four required attributes:
color_error,
color_warning,
color_major_change, and
color_minor_change. Each attribute should be a read-only of type 'Str' with a default value that corresponds to a valid
Term::ANSIColor color string.
Seeing Ingy döt Net speak about AYCABTU at PPW2010 was a major factor in the development of this script -- earlier (unreleased) versions did not have any way to limit operations to a subset of managed repositories; they also didn't deal well managing output. After lifting his interface (virtually wholesale) I ended up with something that I thought was worth releasing.
drdrang prodded me about making the color configuration more friendly to those that weren't dark backrgound terminal people. The colors in
App::GitGot::Outputter::light are based on a couple of patches that drdrang sent me.
Currently git is the only supported VCS.
John SJ Anderson <genehack@genehack.org>
This software is copyright (c) 2015 by John SJ Anderson.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~genehack/App-GitGot-1.333/bin/got | CC-MAIN-2017-13 | refinedweb | 971 | 57 |
Screen Updating and Cursor Movement Optimization: A Library Package

Kenneth C. R. C. Arnold
Elan Amir

Computer Science Division
Department of Electrical Engineering and Computer Science
University of California, Berkeley
Berkeley, California 94720

ABSTRACT

This document describes a package of C library functions which allow the user to: 1) update a screen with reasonable optimization, 2) get input from the terminal in a screen-oriented fashion, and 3) independent from the above, move the cursor optimally from one point to another. These routines all use the termcap(5) database to describe the capabilities of the terminal.

Acknowledgements

This package would not exist without the work of Bill Joy, who, in writing his editor, created the capability to generally describe terminals, wrote the routines which read this database, and, most importantly, those which implement optimal cursor movement, which routines I have simply lifted nearly intact. Doug Merritt and Kurt Shoens also were extremely important, as were both willing to waste time listening to me rant and rave. The help and/or support of Ken Abrams, Alan Char, Mark Horton, and Joe Kalash was, and is, also greatly appreciated.

Ken Arnold
16 April 1986

The help and/or support of Kirk McKusick and Keith Bostic (public vi!) was invaluable in bringing the package ``into the 90's'', which now includes completely new data structures and screen refresh optimization routines.

Elan Amir
29 December 1992

Screen Package PS1:19-1

1. Overview

In making available the generalized terminal descriptions in termcap(5), much information was made available to the programmer, but little work was taken out of one's hands. The purpose of this package is to allow the C programmer to do the most common type of terminal-dependent functions, those of movement optimization and optimal screen updating, without doing any of the dirty work, and with nearly as much ease as is necessary to simply print or read things.

1.1.
Terminology

In this document, the following terminology is used:

window: An internal representation containing an image of what a section of the terminal screen may look like at some point in time. This subsection can either encompass the entire terminal screen, or any smaller portion down to a single character within that screen.

terminal: Sometimes called terminal screen. The package's idea of what the terminal's screen currently looks like, i.e., what the user sees now. This is a special screen:

screen: This is a subset of windows which are as large as the terminal screen, i.e., they start at the upper left hand corner and encompass the lower right hand corner. One of these, stdscr, is automatically provided for the programmer.

1.2. Compiling Applications

In order to use the library, it is necessary to have certain types and variables defined. Therefore, the programmer must have a line:

        #include <curses.h>

at the top of the program source. Compilations should have the following form:

        cc [ flags ] file ... -lcurses -ltermcap

1.3. Screen Updating

In order to update the screen optimally, it is necessary for the routines to know what the screen currently looks like and what the programmer wants it to look like next. For this purpose, a data type (structure) named WINDOW is defined which describes a window image to the routines, including its starting position on the screen (the (y, x) co-ordinates of its upper left hand corner) and its size. When the terminal is to be brought up to date, refresh() (or wrefresh() if the window is not stdscr) is called. Refresh() makes the terminal, in the area covered by the window, look like that window. Note, therefore, that changing something on a window does not change the terminal. Actual updates to the terminal screen are made only by calling refresh() or wrefresh(). This allows the programmer to maintain several different ideas of what a portion of the terminal screen should look like. Also, changes can be made to windows in any order, without regard to motion efficiency. Then, at will, the programmer can effectively say "make it look like this", and the package will execute the changes in an optimal way.

1.4. Naming Conventions

As hinted above, the routines can use several windows, but two are always available: curscr, which is the image of what the terminal looks like at present, and stdscr, which is the image of what the programmer wants the terminal to look like next. The user should not access curscr directly. Changes should be made to the appropriate screen, and then the routine refresh() (or wrefresh()) should be called.

Many functions are set up to deal with stdscr as a default screen. For example, to add a character to stdscr, one calls addch() with the desired character. If a different window is to be used, the routine waddch() (for window-specific addch()) is provided[1]. This convention of prepending function names with a "w" when they are to be applied to specific windows is consistent. The only routines which do not do this are those to which a window must always be specified.

In order to move the current (y, x) co-ordinates from one point to another, the routines move() and wmove() are provided. However, it is often desirable to first move and then perform some I/O operation. In order to avoid clumsiness, most I/O routines can be preceded by the prefix "mv" and the desired (y, x) co-ordinates can then be added to the front of the parameter list; mvaddch(y, x, ch), for example, moves to (y, x) and then adds the character ch. If a window pointer is needed, it is always the first parameter passed.

____________________
[1] Actually, addch() is really a "#define" macro with arguments, as are most of the "functions" which act upon stdscr.

2. Variables

Many variables which are used to describe the terminal environment are available to the programmer. They are:

type      name       description
----------------------------------------------------------------------
WINDOW *  curscr     current version of the screen (terminal screen).
WINDOW *  stdscr     standard screen. Most updates are usually done here.
char *    Def_term   default terminal type if type cannot be determined
bool      My_term    use the terminal specification in Def_term as the terminal, regardless of the real terminal type
char *    ttytype    full name of the current terminal.
int       LINES      number of lines on the terminal
int       COLS       number of columns on the terminal
int       ERR        error flag returned by routines on a fail.
int       OK         flag returned by routines upon success.

3. Usage

This is a description of how to actually use the screen package. For simplicity, we assume all updating, reading, etc. is applied to stdscr, although a different window can of course be specified.

3.1. Initialization

In order to use the screen package, the routines must know about terminal characteristics, and the space for curscr and stdscr must be allocated. These functions are performed by initscr(). Since it must allocate space for the windows, it can overflow core when attempting to do so. On this rather rare occasion, initscr() returns ERR. initscr() must always be called before any of the routines which affect windows are used. If it is not, the program will core dump as soon as either curscr or stdscr is referenced. However, it is usually best to wait to call it until after you are sure you will need it, like after checking for startup errors. Terminal status changing routines like nl() and cbreak() should be called after initscr().

After the initial window allocation done by initscr(), specific window characteristics can be set. Scrolling can be enabled by calling scrollok(). If you want the cursor to be left after the last change, use leaveok(). If this isn't done, refresh() will move the cursor to the window's current (y, x) co-ordinates after updating it. Additional windows can be created by using the functions newwin() and subwin(). delwin() allows you to delete an existing window. The variables LINES and COLS control the size of the terminal. They are initially implicitly set by initscr(), but can be altered explicitly by the user, followed by a call to initscr(). Note that any call to initscr() will always delete any existing stdscr and/or curscr before creating new ones, so this change is best done before the initial call to initscr().

3.2. Output

The basic functions used to change what will go on a window are addch() and move(). addch() adds a character at the current (y, x) co-ordinates, returning ERR if it would cause the window to illegally scroll, i.e., printing a character in the lower right-hand corner of a terminal which automatically scrolls if scrolling is not allowed. move() changes the current (y, x) co-ordinates to whatever you want them to be. It returns ERR if you try to move off the window. As mentioned above, you can combine the two into mvaddch() to do both things in one call.

The other output functions (such as addstr() and printw()) all call addch() to add characters to the window. After a change has been made to the window, you must call refresh() when you want the portion of the terminal covered by the window to reflect the change. In order to optimize finding changes, refresh() assumes that any part of the window not changed since the last refresh() of that window has not been changed on the terminal, i.e., that you have not refreshed a portion of the terminal with an overlapping window. If this is not the case, the routines touchwin(), touchline(), and touchoverlap() are provided to make it look like a desired part of the window has been changed, thus forcing refresh() to check that whole subsection of the terminal for changes. If you call wrefresh() with curscr, it will make the screen look like the image of curscr. This is useful for implementing a command which would redraw the screen in case it got messed up.

3.3. Input

Input is essentially a mirror image of output. The complementary function to addch() is getch(), which reads a character and (if echo is set) echoes it on the window; reading requires the terminal to be in cbreak, raw, or noecho mode. If it is not, getch() sets it to be cbreak, and then reads in the character.

3.4. Termination

In order to perform certain optimizations, and, on some terminals, to work at all, some things must be done before the screen routines start up. These functions are performed in gettmode() and setterm(), which are called by initscr(). In order to clean up after the routines, the routine endwin() is provided. It restores tty modes to what they were when initscr() was first called. The terminal state module uses the variable curses_termios to save the original terminal state, which is then restored upon a call to endwin(). Thus, anytime after the call to initscr(), endwin() should be called before exiting. Note, however, that endwin() should always be called before the final calls to delwin(), which free the storage of the windows.

4. Cursor Movement Optimizations

One of the most difficult things to do properly is motion optimization. After using gettmode() and setterm() to get the terminal descriptions, the function mvcur() deals with this task. Its usage is simple: simply tell it where you are now and where you want to go. For example

        mvcur(0, 0, LINES/2, COLS/2);

would move the cursor from the home position (0, 0) to the middle of the screen. If you wish to force absolute addressing, you can use the function tgoto() from the termlib(7) routines, or you can tell mvcur() that you are impossibly far away. For example, to absolutely address the lower left hand corner of the screen from anywhere, just claim that you are in the upper right hand corner:

        mvcur(0, COLS-1, LINES-1, 0);

5. Character Output and Scrolling

The character output policy deals with the following problems: first, where is the cursor located after a character is printed, and secondly, when does the screen scroll if scrolling is enabled. In the normal case the characters are output as expected, with the cursor occupying the position of the next character to be output.
However, when the cursor is on the last column of the line, the cursor will remain on that position after the last character on the line is output, and will only assume the position on the next line when the next character (the first on the next line) is output. Likewise, if scrolling is enabled, a scroll will be invoked only when the first character on the first line past the bottom line of the window is output. If scrolling is not enabled, the characters will be output to the bottom right corner of the window, which is the cursor location.

This policy allows consistent behavior of the cursor at the boundary conditions. Furthermore, it prevents a scroll from happening before it is actually needed (the old package used to scroll when a character was written to the rightmost position on the last line). As a precedent, it models the xterm character output conventions.

6. Terminal State Handling

The variable curses_termios contains the terminal state of the terminal. Certain historical routines return information: baudrate(), erasechar(), killchar(), and ospeed(). These routines are obsolete and exist only for backward compatibility. If you wish to use the information in the curses_termios structure, you should use the tcsetattr(3) routines.

7. Subwindows

Subwindows are windows which do not have an independent text structure, i.e., they are windows whose text is a subset of the text of a larger window: the parent window. One consequence of this is that changes to either the parent or the child window are destructive to the other, i.e., a change to the subwindow is also a change to the parent window, and a change to the parent window in the region defined by the subwindow is implicitly a change to the subwindow as well. Apart from this detail, subwindows function like any other window.

8. The Functions

In the following definitions, "[*]" means that the "function" is really a "#define" macro with arguments.
addch(char ch) [*]
     Add the character ch on the window at the current (y, x) co-ordinates. If the character is a newline ('\n') the line will be cleared to the end, and the current (y, x) co-ordinates will be changed to the beginning of the next line if newline mapping is on, or to the next line at the same x co-ordinate if it is off. A return ('\r') will move to the beginning of the line on the window. Tabs ('\t') will be expanded into spaces in the normal tabstop positions of every eight characters. This returns ERR if it would cause the screen to scroll illegally.

addstr(char *str) [*]
     Add the string pointed to by str on the window at the current (y, x) co-ordinates. This returns ERR if it would cause the screen to scroll illegally. In this case, it will put on as much as it can.

baudrate() [*]
     Returns the output baud rate of the terminal. This is a system dependent constant (defined in <sys/tty.h> on BSD systems, which is included by <curses.h>).

box(WINDOW *win, char vert, char hor)
     Draws a box around the window using vert as the character for drawing the vertical sides, and hor for drawing the horizontal lines. If scrolling is not allowed, and the window encompasses the lower right-hand corner of the terminal, the corners are left blank to avoid a scroll.

cbreak() [*]
     Set the terminal to cbreak mode.

clear() [*]
     Resets the entire window to blanks. If win is a screen, this sets the clear flag, which will cause a clear-screen sequence to be sent on the next refresh() call. This also moves the current (y, x) co-ordinates to (0, 0).

clearok(WINDOW *scr, int boolf) [*]
     Sets the clear flag for the screen scr. If boolf is non-zero, this will force a clear-screen to be printed on the next refresh(), or stop it from doing so if boolf is 0. This only works on screens, and, unlike clear(), does not alter the contents of the screen.
     If scr is curscr, the next refresh() call will cause a clear-screen, even if the window passed to refresh() is not a screen.

clrtobot() [*]
     Wipes the window clear from the current (y, x) co-ordinates to the bottom. This does not force a clear-screen sequence on the next refresh under any circumstances. This has no associated "mv" command.

clrtoeol() [*]
     Wipes the window clear from the current (y, x) co-ordinates to the end of the line. This has no associated "mv" command.

crmode() [*]
     Identical to cbreak(). The misnamed macros crmode() and nocrmode() are retained for backwards compatibility with earlier versions of the library.

delch()
     Delete the character at the current (y, x) co-ordinates. Each character after it on the line shifts to the left, and the last character becomes blank.

deleteln()
     Delete the current line. Every line below the current one will move up, and the bottom line will become blank. The current (y, x) co-ordinates will remain unchanged.

delwin(WINDOW *win)
     Deletes the window from existence. All resources are freed for future use by calloc(3). If a window has a subwin() allocated window inside of it, deleting the outer window does not affect the subwindow, even though this does invalidate it. Therefore, subwindows should be deleted before their outer windows are.

echo() [*]
     Sets the terminal to echo characters.

endwin()
     Finish up window routines before exit. This restores the terminal to the state it was in before initscr() (or gettmode() and setterm()) was called. It should always be called before exiting, and before the final calls to delwin(). It does not exit. This is especially useful for resetting tty stats when trapping rubouts via signal(2).

erase() [*]
     Erases the window to blanks without setting the clear flag. This is analogous to clear(), except that it never causes a clear-screen sequence to be generated on a refresh(). This has no associated "mv" command.
erasechar();- Returns the erase character for the terminal, i.e., the character used by the user to erase a single character Screen Package PS1:19-11 from the input. flushok(WINDOW *win, int boolf); Normally, refresh() fflush('s); stdout when it is fin- ished. flushok() allows you to control this. if boolf is non-zero (i.e., non-zero) it will do the fflush(), otherwise it will not. getch();- Gets a character from the terminal and (if necessary) echos it on the window. This returns ERR if it would cause the screen to scroll illegally. Otherwise, the character gotten is returned. If noecho has been set, then the window is left unaltered. In order to retain control of the terminal, it is necessary to have one of noecho, cbreak, or rawmode set. If you do not set one, whatever routine you call to read characters will set cbreak for you, and then reset to the original mode when finished. getstr(char *str);- Get a string through the window and put it in the loca- tion pointed to by str, which is assumed to be large enough to handle it. It sets tty modes if necessary, and then calls getch() (or wgetch()) to get the charac- ters needed to fill in the string until a newline or EOF is encountered. The newline stripped off the string. This returns ERR if it would cause the screen to scroll illegally. gettmode(); Get the tty stats. This is normally called by initscr(). getyx(WINDOW *win, int y, int x); Puts the current (y, x) co-ordinates of win in the variables y and x. Since it is a macro, not a function, you do not pass the address of y and x. idlok(WINDOW *win, int boolf); PS1:19-12 Screen Package Reserved for future use. This will eventually signal to refresh() that it is all right to use the insert and delete line sequences when updating the window. inch();- Returns the character at the current position on the given window. This does not make any changes to the window. initscr(); Initialize the screen routines. 
This must be called before any of the screen routines are used. It initial- izes the terminal-type data and such, and without it none of the routines can operate. If standard input is not a tty, it sets the specifications to the terminal whose name is pointed to by Def_term (initially "dumb"). If the boolean My_term is non-zero, Def_term is always used. If the system supports the TIOCGWINSZ ioctl(2) call, it is used to get the number of lines and columns for the terminal, otherwise it is taken from the termcap description. insch(char c); Insert c at the current (y, x) co-ordinates Each char- acter after it shifts to the right, and the last char- acter disappears. This returns ERR if it would cause the screen to scroll illegally. insertln(); Insert a line above the current one. Every line below the current line will be shifted down, and the bottom line will disappear. The current line will become blank, and the current (y, x) co-ordinates will remain unchanged. killchar();- Returns the line kill character for the terminal, i.e., the character used by the user to erase an entire line from the input. leaveok(WINDOW *win, int boolf);- Screen Package PS1:19-13 Sets the boolean flag for leaving the cursor after the last change. If boolf is non-zero, the cursor will be left after the last update on the terminal, and the current (y, x) co-ordinates for win will be changed accordingly. If boolf is 0 the cursor will be moved to the current (y, x) co-ordinates. This flag (initially 0) retains its value until changed by the user. move(int y, int x); Change the current (y, x) co-ordinates of the window to (y, x). This returns ERR if it would cause the screen to scroll illegally. mvcur(int lasty, int lastx, int newy, int newx); Moves the terminal's cursor from (lasty, lastx) to (newy, newx) in an approximation of optimal fashion. This routine uses the functions borrowed from ex ver- sion 2.6. It is possible to use this optimization without the benefit of the screen routines. 
With the screen routines, this should not be called by the user. move() and refresh() should be used to move the cursor position, so that the routines know what's going on. mvprintw(int y, int x, const char *fmt, ...); Equivalent to: move(y, x); printw(fmt, ...); mvscanw(int y, int x, const char *fmt, ...); Equivalent to: move(y, x); scanw(fmt, ...); mvwin(WINDOW *win, int y, int x); Move the home position of the window win from its current starting coordinates to (y, x). If that would put part or all of the window off the edge of the ter- minal screen, mvwin() returns ERR and does not change PS1:19-14 Screen Package anything. For subwindows, mvwin() also returns ERR if you attempt to move it off its main window. If you move a main window, all subwindows are moved along with it. mvwprintw(WINDOW *win, int y, int x, const char *fmt, ...); Equivalent to: wmove(win, y, x); printw(fmt, ...); mvwscanw(WINDOW *win, int y, int x, const char *fmt, ...); Equivalent to: wmove(win, y, x); scanw(fmt, ...); newwin(int lines, int cols, int begin_y, int begin_x); Create a new window with lines lines and cols columns starting at position (begin_y, begin_x). If either lines or cols is 0 (zero), that dimension will be set to (LINES - begin_y) or (COLS - begin_x) respectively. Thus, to get a new window of dimensions LINES x COLS, use newwin(0, 0, 0, 0). nl();- Set the terminal to nl mode, i.e., start/stop the sys- tem from mapping <RETURN> to <LINE-FEED>. If the map- ping is not done, refresh() can do more optimization, so it is recommended, but not required, to turn it off. nocbreak();- Unset the terminal from cbreak mode. nocrmode();- Identical to nocbreak(). The misnamed macro nocrmode() is retained for backwards compatibility with ealier versions of the library. Screen Package PS1:19-15 noecho();- Turn echoing of characters off. nonl();- Unset the terminal to from nl mode. See nl(). noraw();- Unset the terminal from raw mode. See raw(). 
overlay(WINDOW *win1, WINDOW *win2); Overlay win1 on win2. The contents of win1, insofar as they fit, are placed on win2 at their starting (y, x) co-ordinates. This is done non-destructively, i.e., blanks on win1 leave the contents of the space on win2 untouched. Note that all non-blank characters are overwritten destructively in the overlay. overwrite(WINDOW *win1, WINDOW *win2); Overwrite win1 on win2. The contents of win1, insofar as they fit, are placed on win2 at their starting (y, x) co-ordinates. This is done destructively, i.e., blanks on win1 become blank on win2. printw(char *fmt, ...); Performs a printf() on the window starting at the current (y, x) co-ordinates. It uses addstr() to add the string on the window. It is often advisable to use the field width options of printf() to avoid leaving things on the window from earlier calls. This returns ERR if it would cause the screen to scroll illegally. raw();- Set the terminal to raw mode. On version 7 UNIX[2] this also turns off newline mapping (see nl()). ____________________ [2] UNIX is a trademark of Unix System Laboratories. PS1:19-16 Screen Package refresh();- Synchronize the terminal screen with the desired win- dow. If the window is not a screen, only that part covered by it is updated. This returns ERR if it would cause the screen to scroll illegally. In this case, it will update whatever it can without causing the scroll. As a special case, if wrefresh() is called with the window curscr the screen is cleared and repainted as it is currently. This is very useful for allowing the redrawing of the screen when the user has garbage dumped on his terminal. resetty();- resetty() restores them to what savetty() stored. These functions are performed automatically by initscr() and endwin(). This function should not be used by the user. savetty();- savetty() saves the current tty characteristic flags. See resetty(). This function should not be used by the user. 
scanw(char *fmt, ...); Perform a scanf() through the window using fmt. It does this using consecutive calls to getch() (or wgetch()). This returns ERR if it would cause the screen to scroll illegally. scroll(WINDOW *win); Scroll the window upward one line. This is normally not used by the user. scrollok(WINDOW *win, int boolf);- Set the scroll flag for the given window. If boolf is 0, scrolling is not allowed. This is its default set- ting. standend();- Screen Package PS1:19-17 End standout mode initiated by standout(). standout();- Causes any characters added to the window to be put in standout mode on the terminal (if it has that capabil- ity). subwin(WINDOW *win, int lines, int cols, int begin_y, int begin_x); Create a new window with lines lines and cols columns starting at position (begin_y, begin_x) inside the win- dow win. This means that any change made to either win- dow in the area covered by the subwindow will be made on both windows. begin_y, begin_x are specified rela- tive to the overall screen, not the relative (0, 0) of win. If either lines or cols is 0 (zero), that dimen- sion will be set to (LINES - begin_y) or (COLS - begin_x) respectively. touchline(WINDOW *win, int y, int startx, int endx); This function performs a function similar to touchwin() on a single line. It marks the first change for the given line to be startx, if it is before the current first change mark, and the last change mark is set to be endx if it is currently less than endx. touchoverlap(WINDOW *win1, WINDOW *win2); Touch the window win2 in the area which overlaps with win1. If they do not overlap, no changes are made. touchwin(WINDOW *win); Make it appear that the every location on the window has been changed. This is usually only needed for refreshes with overlapping windows. tstp() This function will save the current tty state and then put the process to sleep. 
When the process gets res- tarted, it restores the saved tty state and then calls wrefresh(curscr); to redraw the screen. Initscr() sets the signal SIGTSTP to trap to this routine. PS1:19-18 Screen Package unctrl(char *ch);- Returns a string which is an ASCII representation of ch. Characters are 8 bits long. unctrllen(char *ch);- Returns the length of the ASCII representation of ch. vwprintw(WINDOW *win, const char *fmt, va_list ap); Identical to printw() except that it takes both a win- dow specification and a pointer to a variable length argument list. vwscanw(WINDOW *win, const char *fmt, va_list ap); Identical to scanw() except that it takes both a window specification and a pointer to a variable length argu- ment list. waddbytes(WINDOW *win, char *str, int len); This function is the low level character output func- tion. Len characters of the string str are output to the current (y, x) co-ordinates position of the window specified by win. The following functions differ from the standard func- tions only in their specification of a window, rather than the use of the default stdscr. waddch(WINDOW *win, char ch); waddstr(WINDOW *win, char *str); wclear(WINDOW *win); wclrtobot(WINDOW *win); wclrtoeol(WINDOW *win); wdelch(WINDOW *win); wdeleteln(WINDOW *win); werase(WINDOW *win); wgetch(WINDOW *win); wgetstr(WINDOW *win, char *str); winch(WINDOW *win);- winsch(WINDOW *win, char c); winsertln(WINDOW *win); wmove(WINDOW *win, int y, int, x"); Screen Package PS1:19-19 wprintw(WINDOW *win, char *fmt, ...); wrefresh(WINDOW *win); wscanw(WINDOW *win, char *fmt, ...); wstandend(WINDOW *win); wstandout(WINDOW *win); PS1:19-20 Screen Package 1. Examples Here we present a few examples of how to use the pack- age. They attempt to be representative, though not comprehensive. Further examples can be found in the games section of the source tree and in various utilities that use the screen such as systat(1). 
The following examples are intended to demonstrate the basic structure of a program using the package. An addi- tional, more comprehensive, program can be found in the source code in the examples subdirectory. 1.1. Simple Character Output This program demonstrates how to set up a window and output characters to it. Also, it demonstrates how one might control the output to the window. If you run this program, you will get a demonstration of the character output chrac- teristics discussed in the above Character Output section. #include <sys/types.h> #include <curses.h> #include <stdio.h> #include <signal.h> #define YSIZE 10 #define XSIZE 20 int quit(); main() { int i, j, c; size_t len; char id[100]; FILE *fp; char *s; initscr(); /* Always call initscr() first */ signal(SIGINT, quit); /* Make sure wou have a 'cleanup' fn */ crmode(); /* We want cbreak mode */ noecho(); /* We want to have control of chars */ delwin(stdscr); /* Create our own stdscr */ stdscr = newwin(YSIZE, XSIZE, 10, 35); flushok(stdscr, TRUE); /* Enable flushing of stdout */ scrollok(stdscr, TRUE); /* Enable scrolling */ erase(); /* Initially, clear the screen */ standout(); move(0,0); while (1) { Appendix A c = getchar(); switch(c) { case 'q': /* Quit on 'q' */ quit(); break; case 's': /* Go into standout mode on 's' */ standout(); break; case 'e': /* Exit standout mode on 'e' */ standend(); break; case 'r': /* Force a refresh on 'r' */ wrefresh(curscr); break; default: /* By default output the character */ addch(c); refresh(); } } } int quit() { erase(); /* Terminate by erasing the screen */ refresh(); endwin(); /* Always end with endwin() */ delwin(curscr); /* Return storage */ delwin(stdscr); putchar('\n'); exit(0); } 1.2. Twinkle This is a moderately simple program which prints pat- terns on the screen. It switches between patterns of aster- isks, putting them on one by one in random order, and then taking them off in the same fashion. 
It is more efficient to write this using only the motion optimization, as is demon- strated below. # include <curses.h> # include <signal.h> Appendix A /* * the idea for this program was a product of the imagination of * Kurt Schoens. Not responsible for minds lost or stolen. */ # define NCOLS 80 # define NLINES 24 # define MAXPATTERNS 4 typedef struct { int y, x; } LOCS; LOCS Layout[NCOLS * NLINES]; /* current board layout */ int Pattern, /* current pattern number */ Numstars; /* number of stars in pattern */ char *getenv(); int die(); main() { srand(getpid()); /* initialize random sequence */ initscr(); signal(SIGINT, die); noecho(); nonl(); leaveok(stdscr, TRUE); scrollok(stdscr, FALSE); for (;;) { makeboard(); /* make the board setup */ puton('*'); /* put on '*'s */ puton(' '); /* cover up with ' 's */ } } /* * On program exit, move the cursor to the lower left corner by * direct addressing, since current location is not guaranteed. * We lie and say we used to be at the upper right corner to guarantee * absolute addressing. */ die() { signal(SIGINT, SIG_IGN); mvcur(0, COLS - 1, LINES - 1, 0); endwin(); exit(0); } Appendix A /* * Make the current board setup. It picks a random pattern and * calls ison() to determine if the character is on that pattern * or not. */ makeboard() { reg int y, x; reg LOCS *lp; Pattern = rand() % MAXPATTERNS; lp = Layout; for (y = 0; y < NLINES; y++) for (x = 0; x < NCOLS; x++) if (ison(y, x)) { lp->y = y; lp->x = x; lp++; } Numstars = lp - Layout; } /* * Return TRUE if (y, x) is on the current pattern. */ ison(y, x) reg int y, x; { switch (Pattern) { case 0: /* alternating lines */ return !(y & 01); case 1: /* box */ if (x >= LINES && y >= NCOLS) return FALSE; if (y < 3 || y >= NLINES - 3) return TRUE; return (x < 3 || x >= NCOLS - 3); case 2: /* holy pattern! 
*/ return ((x + y) & 01); case 3: /* bar across center */ return (y >= 9 && y <= 15); } /* NOTREACHED */ } puton(ch) reg char ch; { reg LOCS *lp; reg int r; reg LOCS *end; LOCS temp; Appendix A end = &Layout[Numstars]; for (lp = Layout; lp < end; lp++) { r = rand() % Numstars; temp = *lp; *lp = Layout[r]; Layout[r] = temp; } for (lp = Layout; lp < end; lp++) { mvaddch(lp->y, lp->x, ch); refresh(); } } PSD:19-2 Screen Package Contents 1 Overview ............................................ 1 1.1 Terminology .................................... 1 1.2 Compiling Applications ......................... 1 1.3 Screen Updating ................................ 1 1.4 Naming Conventions ............................. 3 2 Variables ........................................... 4 3 Usage ............................................... 5 3.1 Initialization ................................. 5 3.2 Output ......................................... 5 3.3 Input .......................................... 6 3.4 Termination .................................... 6 4 Cursor Movement Optimizations ....................... 6 5 Character Output and Scrolling ...................... 7 6 Terminal State Handling ............................. 7 7 Subwindows .......................................... 8 8 The Functions ....................................... 8 Appendix A ............................................ 20 1 Examples ............................................ 20 1.1 Simple Character Output ........................ 20 1.2 Twinkle ........................................ 21. | http://www.mirbsd.org/htman/sparc/manPSD/19.curses.htm | CC-MAIN-2015-32 | refinedweb | 5,798 | 63.39 |
namespace The concept of
namespace yes Linux The way the kernel is used to isolate kernel resources . adopt namespace Some processes can only see some resources related to themselves , Other processes can only see resources related to themselves , These two processes can't feel the existence of each other at all . The specific implementation is to specify the related resources of one or more processes in the same namespace in .
Linux namespaces It is a kind of encapsulation and isolation of global system resources , Make a difference namespace Processes with independent global system resources , Change one namespace System resources in will only affect the current namespace The process in , For others namespace The process in does not affect .
namespace Use of
Maybe the vast majority of users are like me , It's using docker I didn't know until I got to know linux Of namespace technology . actually ,Linux Kernel Implementation namespace One of the main goals of is to achieve lightweight virtualization ( Containers ) service . In the same namespace The next process can perceive each other's changes , And I don't know anything about the progress of the outside world . In this way, the process in the container can have an illusion , Think you're in a separate system , So as to achieve the purpose of isolation . in other words linux kernel-provided namespace Technology for docker The emergence and development of container technology provide basic conditions .
We can docker From the perspective of implementers, how to implement a resource isolated container . For example, whether it can pass chroot Command to switch the mount point of the root directory , To isolate the file system . To communicate and locate in a distributed environment , The container must have a separate IP、 Routing and ports, etc , This requires isolation of the network . At the same time, the container also needs a separate host name to identify itself in the network . Next, you need to communicate between processes 、 Isolation of user rights, etc . Last , The application running in the container needs a process number (PID), Naturally, we also need to communicate with PID In isolation . In other words, these six isolation capabilities are the foundation of a container , Let's see linux Kernel namespace What kind of isolation capability does the feature provide for us :
The first six in the table above namespace It's the isolation technology necessary to implement the container , As for the latest Cgroup namespace It has not been docker use . I believe that in the near future, all kinds of containers will also be added to Cgroup namespace Support for .
namespace The history of development
Linux Some of them have been implemented in very early versions namespace, Like the kernel 2.4 And that's what happened mount namespace. Most of the namespace Support is in the kernel 2.6 Done in , such as IPC、Network、PID、 and UTS. There's something else namespace A special , such as User, From the kernel 2.6 It's starting to come true , But in the kernel 3.8 It's just announced that it's finished . meanwhile , With Linux The development of container technology and the demand brought by the continuous development of container technology , There will also be new namespace Be supported , For example, in the kernel 4.6 I added Cgroup namespace.
Linux Multiple API Used for operation namespace, They are clone()、setns() and unshare() function , In order to determine which one is isolated namespace, Using these API when , You usually need to specify some call parameters :CLONE_NEWIPC、CLONE_NEWNET、CLONE_NEWNS、CLONE_NEWPID、CLONE_NEWUSER、CLONE_NEWUTS and CLONE_NEWCGROUP. If you want to isolate multiple namespace, have access to | ( Press bit or ) Combine these parameters . And we can get through /proc Here are some files to operate namespace. Let's take a look at the brief usage of these interfaces .
View the... To which the process belongs namespace
From version number to 3.8 Start with the kernel of ,/proc/[pid]/ns The directory will contain the namespace Information , Use the following command to view the namespace Information :
$ ll /proc/$$/ns
First , these namespace Files are all linked files . The format of the content of the linked file is xxx:[inode number]. Among them xxx by namespace The type of ,inode number Is used to identify a namespace, We can also understand it as namespace Of ID. If one of the two processes namespace Files point to the same linked file , Explain that its related resources are in the same namespace in .
secondly , stay /proc/[pid]/ns Another function of placing these linked files in is , Once these linked files are opened , Just open the file descriptor (fd) There is , So even if it's time to namespace All processes under have ended , This namespace It will always be there , Subsequent processes can be added .
Except for the way you open the file , We can also prevent... By mounting files namespace Be deleted . For example, we can take the current process of uts Mount to ~/uts file :
$ touch ~/uts $ sudo mount --bind /proc/$$/ns/uts ~/uts
Use stat Order to check the results :
It's amazing ,~/uts Of inode And link to inode number It's the same , They're the same file .
clone() function
We can go through clone() Create... While creating a new process namespace.clone() stay C The declaration in the library is as follows :
/* Prototype for the glibc wrapper function */ #define _GNU_SOURCE #include <sched.h> int clone(int (*fn)(void *), void *child_stack, int flags, void *arg);
actually ,clone() Is in C An encapsulation defined in the language library (wrapper) function , It's responsible for building the stack of new processes and calling the clone() system call .Clone() It's actually linux system call fork() A more general implementation of , It can go through flags To control how many functions are used . Altogether 20 Varied CLONE_ At the beginning falg( Sign a ) Parameters are used to control clone All aspects of the process ( For example, whether to share virtual memory with the parent process ), Now we will only introduce namespace dependent 4 Parameters :
- fn: Specify a function to be executed by the new process . When this function returns , Child process termination . This function returns an integer , Indicates the exit code of the child process .
- child_stack: Pass in the stack space used by the subprocess , That is, the user mode stack pointer is assigned to the child process esp register . Calling process ( To call clone() The process of ) New stacks should always be allocated to child processes .
- flags: Indicates which CLONE_ The first flag bit , And namespace Relevant CLONE_NEWIPC、CLONE_NEWNET、CLONE_NEWNS、CLONE_NEWPID、CLONE_NEWUSER、CLONE_NEWUTS and CLONE_NEWCGROUP.
- arg: Point to pass on to fn() The parameters of the function .
In subsequent articles , We mainly pass clone() Function to create and demonstrate various types of namespace.
setns() function
adopt setns() Function to add the current process to an existing namespace in .setns() stay C The declaration in the library is as follows :
#define _GNU_SOURCE #include <sched.h> int setns(int fd, int nstype);
and clone() The function is the same ,C In the language library setns() Functions are also right setns() Encapsulation of system calls :
- fd: I want to join in namespace File descriptor for . It's a point /proc/[pid]/ns File descriptors for files in the directory , You can get it by directly opening the linked file in the directory or by opening a file that has the linked file in the directory .
- nstype: Parameters nstype Let the caller check fd Point to the namespace Whether the type meets the actual requirements . If this parameter is set to 0 Means not to check .
As we mentioned earlier : You can mount it namespace Keep it . Retain namespace The purpose is to add the process to this namespace To prepare for . stay docker in , Use docker exec To execute a new command in an already running container, you need to use setns() function . In order to bring the new namespace Use it , It also needs to be introduced execve() Series of functions ( The author is in 《Linux Create subprocesses to perform tasks 》 It was introduced in the article execve() Series of functions , Interested students can go to learn about ), This function can execute the user's command , A more common usage is to call /bin/bash And accept the parameters to run a shell.
unshare() function and unshare command
adopt unshare Functions can be performed on the original process namespace Isolation . That is to create and add new namespace .unshare() stay C The declaration in the library is as follows :
#define _GNU_SOURCE #include <sched.h> int unshare(int flags);
Just like the previous two functions ,C In the language library unshare() Functions are also right unshare() Encapsulation of system calls . call unshare() The main function of : You can isolate resources without starting a new process , It's equivalent to jumping out of the original namespace To operate .
By default, the system also provides a program called unshare The order of , It's actually calling unshare() system call . Below demo Use unshare Command to put the current process's user namespace Set up a root:
summary
namespace yes linux Features provided by the kernel , Born for virtualization . With docker It's the birth of container technology , I've also devoted my life behind the scenes for a long time namespace Technology is coming to you . The author tries to make a comparison between namespace Technology learning and understanding to deepen the understanding of container technology , So the next step is to learn from the articles namespace Every bit of , I hope I can make progress with my classmates . | https://javamana.com/2021/04/20210416123211621r.html | CC-MAIN-2022-40 | refinedweb | 1,605 | 57.91 |
Hi, I'd like to see what people think about making some changes to freedesktop.org to create a stronger "center of gravity" for X/Linux/UNIX desktop development shared between the desktop projects, toolkits, and applications such as Mozilla and OpenOffice.org. Concretely, I'm proposing the following steps: 1. We welcome desktop-related development projects on an indiscriminate basis; if it's desktop-related and open source, you can use freedesktop.org hosting. 2. We move freedesktop.org to better hosting facilities. 3. We investigate the idea of making a versioned "desktop platform release" that would be a distribution with multiple modules, much like a GNOME or KDE release. It would contain a snapshot of stable tarballs for various desktop platform components. More details on each of these follow. 1. More Projects === Given better hosting and the option to use ACLs, there's no reason not to be a wide open community where anyone who wants to do desktop-related work is welcome. I've already been handing out freedesktop.org accounts more or less to anyone who asks, but asking isn't all that attractive, since we lack important things such as bugzilla. One immediate need in this area is to host Keith Packard's work, including the set of font libraries (fontconfig, Xft), and other X-related work he would like to do. Another obvious thing to host is Carl Worth's Xr library, as GTK+ and I believe other toolkits are looking to use this as a vector graphics engine. Xr has a couple of dependency libraries as well. freedesktop.org already hosts D-BUS, CSL, pkg-config, and desktop-file-utils, among other pieces of software, in addition to specifications. Future areas of freedesktop.org work could also have an implementation component: - some shared library for the URI namespace (VFS/ioslave) - a sound server - my proposed hardware library () - a configuration system? - useful bits factored out of OpenOffice.org or Mozilla Undoubtedly people can think of more. 
A question is whether we're interested in hosting any of the applications themselves, or only shared underpinnings of applications/desktops. If for example projects such as XFCE or ROX wanted to use freedesktop.org CVS, I think that would be fine. But perhaps there's some line where we don't want to be quite as busy as sourceforge. 2. Hosting === Suggest the following: - a larger server (more mem/cpu/disk) to handle more traffic - dedicated list server rather than using listman.redhat.com - a bugzilla instance - cvs infrastructure to allow maintainers to put ACLs on their cvs modules - ssh rather than pserver for cvs access - some way to track real names and email addresses for each cvs account - a server where people other than me can have shell accounts and help with server maintenance Right now the cvs/web server is a fairly underpowered machine at Red Hat and is not dedicated exclusively to freedesktop.org, which means I'm the only one who can log into it. If we can host Keith Packard's work, we have an offer for a server on a large Internet connection at Portland State University. This is currently hosting xwin.org. We can also host a server at Red Hat's colocation site, along with gnome.org/vger.kernel.org/sources.redhat.com, but we would need to find a server to colocate there. 3. Platform Release === Counting only what freedesktop.org has so far, plus fontconfig, Xft, Xr + dependencies, there are already quite a few tarballs to download that are prerequisites to use GNOME, KDE, and the large apps. You can easily get bad combinations of these tarballs; fontconfig/freetype combinations that are no good, for example. Plus, it's annoying for people to track the version numbers and releases of all these sub-modules.. The goal of a coordinated release will obviously be more useful as we have more shared underpinnings in the platform. But I think it's worth starting to work toward. === So that's it. 
I believe these changes would make freedesktop.org more useful, and remove some of the ways that it bottlenecks on me. Comments are welcome. Please feel free to post "me too" or send me private mail, as otherwise it's hard to judge consensus. Havoc | https://listman.redhat.com/archives/xdg-list/2003-July/msg00101.html | CC-MAIN-2017-13 | refinedweb | 716 | 64 |
Introduction
What if we could help optimize data input as much as possible for our mobile users?
This was the thought I had when I decided to try creating a custom SAPUI5 control that could gather data from speech recognition and make it available for further processing.
I have found we can make this possible by using a Cordova plugin and by deploying our SAPUI5 application as a Hybrid application. In case we want to convert an already existing application and enable our control, we can use the Hybrid Mobile Project from Web App feature of the SAP Web IDE, although this feature is currently marked as experimental in the SAP Web IDE Hybrid App Toolkit Add-on documentation.
This control will behave as a normal sap.m.Input, with the speech recognition button disabled in case our application is not running in a Hybrid container.
Here is a short demo on how the final control looks:
So let’s move on and see how we can create and use this control.
Prerequisites
We are going to need the following tools:
- SAP Web IDE 1.15
- Hybrid Application Toolkit 1.8.2
- A mobile device (in my case Android) for testing.
- In this case, our Android device should be connected
- Have the proper drivers installed
- Have the Developer mode enabled
- Have Android 4.4 and USB debugging enabled, if you wish to debug the project with the chrome://inspect functionality.
- Getting the macdonst/SpeechRecognitionPlugin · GitHub plugin and integrating it with HAT (how to do this is explained later)
Creating a SAPUI5 project to build and test the control
Ok, so first things first: we will create a SAPUI5 application project in order to be able to write, build and test our control.
So, we should open the SAP Web IDE; once there, we should select File > New > Project and then specify that we want a SAPUI5 Application Project. We will make this project hybrid at a later stage, so that we can use the Cordova plugin for voice recognition.
We shall name it CustomControlApp and proceed.
Next we should enter our namespace (custom.controls.demo), select XML as the view type, and enter our view name (CustomControlView).
After selecting Finish, we will see that the following project structure will be created for us:
Adding the control to our project
Once we have our project generated, we will need to create a folder named control under the project root directory. Next we will create a new file inside this folder named SpeechRecognitionInputControl.js, where we will place our custom control.
Now, we can include the following code in our SpeechRecognitionInputControl.js control file:
We will not visit the whole process of creating a custom control on SAPUI5 by now, but here are a couple of blogs that may help you jump start on this process:
Basically, what we are doing in this control is first inheriting from the sap.m.Input control, so that we get its properties and methods. Next, we aggregate a sap.m.Button control to fire the speech recognition process. We also check that our application is running in a hybrid container, and create a new SpeechRecognition object to use the functionality of the Cordova plugin.
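Since the full control body is not reproduced here, a minimal sketch of the container check it performs may help. The helper name and the plain `window.cordova` test below are my own illustration, standing in for the control's actual checks; the idea is simply that the microphone button is only enabled when a Cordova bridge is present.

```javascript
// Hypothetical guard mirroring the control's behavior: the speech
// button is enabled only when the app runs inside a Cordova container.
function isHybridContainer(win) {
  return typeof win !== "undefined" &&
         win !== null &&
         typeof win.cordova !== "undefined";
}

// The control would drive the button's "enabled" property from this check;
// in a plain desktop browser this evaluates to false and the mic stays disabled.
var enabled = isHybridContainer(typeof window !== "undefined" ? window : undefined);
```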
Ok, so next, we will add the following code to our CustomControlView, so we can use our custom control on our sample application:
From this code we can see that we are registering our XML namespace xmlns:custom="control" and later using it to declare our <custom:SpeechRecognitionInputControl> control.
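As a sketch of what that looks like in the view file (the controller name follows the project's namespace from earlier; the other attribute values are illustrative):

```xml
<mvc:View xmlns:mvc="sap.ui.core.mvc" xmlns="sap.m"
          xmlns:custom="control"
          controllerName="custom.controls.demo.CustomControlView">
    <custom:SpeechRecognitionInputControl id="speechInput"
        placeholder="Tap the microphone and speak"/>
</mvc:View>
```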
Next on our controller we should add the following code:
The important part is that we make the location of our control known to SAPUI5. We do this with the sap.ui.localResources('control') directive. Next we import our custom control class with jQuery.sap.require("control.SpeechRecognitionInputControl").
Of course, we can add more instances of our control programmatically, and for this we include the onButtonPress event handler.
After doing these steps, we can test our project to verify that our custom control is added and that our project runs. To do so, we can right-click on the index.html file > Run > Run as > Web application. At this point we will not be able to use the speech recognition functionality yet, but we will enable it with some extra steps next.
Adding the Speech recognition plugin to Hybrid Application Toolkit
Now we should add our Custom plugin to our HAT installation. For this, I recommend we follow the steps shown on this blog from Simmaco Ferreiro: How to use custom Cordova plugins with SAP Web IDE and Hybrid Application Toolkit .
What we need to do is basically get the macdonst/SpeechRecognitionPlugin · GitHub plugin from git into the custom plugin folder of our HAT installation, by using the following command:
git clone
Then, when we proceed to install the HAT, we should specify the folder where we have our custom plugins and we should select our Speech recognition plugin next.
Once we have completed the HAT installation and the generation of the Companion App, we will proceed to convert our existing SAPUI5 application to a Hybrid Application project from within SAP Web IDE.
Making our SAPUI5 application an hybrid application
To complete this step, we should have enabled the Hybrid App Toolkit plugin on SAP Web IDE, and have our HAT installed and running. We can find steps to do this on the following document: How to use Hybrid Application Toolkit (HAT) and test the apps
Now, to convert our existing SAPUI5 application project to a Hybrid Mobile Project, we can right-click our main project folder and follow the path New > Hybrid Mobile Project from Web App.
This is a one step wizard where we should specify which project should be converted.
Next, we will configure the information of the application that we are going to deploy.
To do this, we should right click our project’s main folder and follow the path Project Settings > Device configuration.
Here we should specify the App Name: CustomControlApp, the App ID: com.customcontrol, and the Version: 1.0.0.
We have to specify the platform where our application will run under the Platform section. In my case I selected Android.
Next, with our HAT running, we should right click on the Custom button, under the Plugin settings and select our SpeechRecognition plugin.
Deploying our application
Now it is time to deploy our application and test it. For this, right-click our project's main folder and follow the path Deploy > Deploy to local Hybrid App Toolkit. This will deploy the application to HAT, and it will create and compile our hybrid app.
Now, we can run our project by right clicking our index.html > Run on > Android device.
This will install our project's application on our device. We can now click on the microphone button and see it in action (for the Cordova plugin to provide the full Android speech recognition, we will need an active internet connection).
Debugging our application
As a last step, we could debug our application by using the chrome://inspect functionality. For this, we can follow the steps included in the following document: Getting Started with Kapsel – Part 3 — AppUpdate (SP09+)
Conclusion
I hope you have enjoyed this blog. I see a lot of really interesting tools and concepts that SAP is providing us, which we can apply to build better experiences for our users. I hope you find this control useful too, and that you can take this idea and implement this functionality in your projects. Thanks a lot! Happy coding 🙂
Note: You can get the full source project here: eamarce/SAPUI5CustomControlApp · GitHub
Thanks!
Great tool and great explanation! Improvement at its finest!
Hi,
I was able to run the application on the web successfully, but when I tried it on the android device, it shows a blank page.
just for clarification, on the web, the microphone button is disabled, it must be disabled itself on the web application right?
Hello Sanjo,
That’s correct, on the web the mic button would be disabled due to validations to assure the application is a cordova app.
Regarding the blank page it may be the case that a component is not in place, were you able to add the SpeechRecognition plugin to the project?
Best Regards!
Edgar Martínez.
Hi Edgar,
Yes, I was able to add the SpeechRecognition plugin correctly. Can you help me how to debug the app that I have installed on the phone? I tried to inspect via chrome, but my app isn’t detected on the webview.
Thanks,
Sanjo Thomas
Hi Sanjo,
Sure, We can think about something to debug it… Perhaps you could send me your current .APK file and I could give it a try inside the Chrome Inspector.
If you wish we can continue this conversation by direct messaging.
Best Regards!
Edgar Martínez.
Hi Edgar, my issue was resolved. After debugging, i found line 29 in speechrecognitioninputcontrol.js wasn’t working. I just commented the code and it worked.
The issue was with this code: var isCompanionApp = sap.hybrid.getUrlParameterName(“companionbuster”);
Hello Sanjo,
Thanks for your input and your interest on this content. I will be working a bit on the code to make that part a bit more flexible and then release a new revision,
Thank you!
Best regards!
Edgar.
Good one. Thanks for sharing.
It’s very useful and helpful blog with detailed informatoin. Thank you very much!
I created the app in a newer environment, and it runs on my Samsung Tab. There is one issue that I'm trying to resolve. Could you please tell me who can currently help to find out the reason, or give me a hint? In chrome://inspect I see that the SpeechRecognition objects are in place, but the button still stays inactive.
Thank you very much in advance!
The actual reason was that the app was detected as "sap.hybrid == undefined".
Hi Albert,
I faced the same issue with yours, could you pls share your solution kindly? Thanks.
Thanks Edgar! It works in my Android device now.
I list some things which were different from the blog when I tried to build this app.
Just for your information.
Thank Edgar, this blog is so great, teaching me a lot of things.
Thanks & Best Regards,
Hao Dai
Hello Everyone,
Thank you for your comments, I’m glad to hear you have found this useful.
I will try to include some of these comments and considerations as changes for a re-edition of the blog.
Thanks a lot!
Best Regards!
Edgar Martinez. | https://blogs.sap.com/2015/10/01/sapui5-cordova-speech-recognition-input-control/ | CC-MAIN-2019-09 | refinedweb | 1,793 | 62.27 |
How to: Perform Custom Join Operations (C# Programming Guide)
This example shows how to perform join operations that are not possible with the join clause. In a query expression, the join clause is limited to, and optimized for, equijoins, which are by far the most common type of join operation. When performing an equijoin, you will probably always get the best performance by using the join clause.
However, the join clause cannot be used in the following cases:
When the join is predicated on an expression of inequality (a non-equijoin).
When the join is predicated on more than one expression of equality or inequality.
When you have to introduce a temporary range variable for the right side (inner) sequence before the join operation.
To perform joins that are not equijoins, you can use multiple from clauses to introduce each data source independently. You then apply a predicate expression in a where clause to the range variable for each source. The expression also can take the form of a method call.
The first method in the following example shows a simple cross join. Cross joins must be used with caution because they can produce very large result sets. However, they can be useful in some scenarios for creating source sequences against which additional queries are run.
The second method produces a sequence of all the products whose category ID is listed in the category list on the left side. Note the use of the let clause and the Contains method to create a temporary array. It also is possible to create the array before the query and eliminate the first from clause.
class CustomJoins
{
    #region Data
    class Product
    {
        public string Name { get; set; }
        public int CategoryID { get; set; }
    }

    class Category
    {
        public string Name { get; set; }
        public int ID { get; set; }
    }

    // Specify the first data source.
    List<Category> categories = new List<Category>()
    {
        new Category(){ Name="Beverages", ID=001 },
        new Category(){ Name="Condiments", ID=002 },
        new Category(){ Name="Vegetables", ID=003 },
    };

    // Specify the second data source.
    List<Product> products = new List<Product>()
    {
        new Product{ Name="Tea", CategoryID=001 },
        new Product{ Name="Mustard", CategoryID=002 },
        new Product{ Name="Pickles", CategoryID=002 },
        new Product{ Name="Carrots", CategoryID=003 },
        new Product{ Name="Bok Choy", CategoryID=003 },
        new Product{ Name="Peaches", CategoryID=005 },
        new Product{ Name="Melons", CategoryID=005 },
        new Product{ Name="Ice Cream", CategoryID=007 },
        new Product{ Name="Mackerel", CategoryID=012 },
    };
    #endregion

    static void Main()
    {
        CustomJoins app = new CustomJoins();
        app.CrossJoin();
        app.NonEquijoin();

        Console.WriteLine("Press any key to exit.");
        Console.ReadKey();
    }

    void CrossJoin()
    {
        var crossJoinQuery =
            from c in categories
            from p in products
            select new { c.ID, p.Name };

        Console.WriteLine("Cross Join Query:");
        foreach (var v in crossJoinQuery)
        {
            Console.WriteLine("{0,-5}{1}", v.ID, v.Name);
        }
    }

    void NonEquijoin()
    {
        var nonEquijoinQuery =
            from p in products
            let catIds = from c in categories
                         select c.ID
            where catIds.Contains(p.CategoryID) == true
            select new { Product = p.Name, CategoryID = p.CategoryID };

        Console.WriteLine("Non-equijoin query:");
        foreach (var v in nonEquijoinQuery)
        {
            Console.WriteLine("{0,-5}{1}", v.CategoryID, v.Product);
        }
    }
}
/* Output:
   Cross Join Query:
   1    Tea
   1    Mustard
   1    Pickles
   1    Carrots
   1    Bok Choy
   1    Peaches
   1    Melons
   1    Ice Cream
   1    Mackerel
   2    Tea
   2    Mustard
   2    Pickles
   2    Carrots
   2    Bok Choy
   2    Peaches
   2    Melons
   2    Ice Cream
   2    Mackerel
   3    Tea
   3    Mustard
   3    Pickles
   3    Carrots
   3    Bok Choy
   3    Peaches
   3    Melons
   3    Ice Cream
   3    Mackerel
   Non-equijoin query:
   1    Tea
   2    Mustard
   2    Pickles
   3    Carrots
   3    Bok Choy
   Press any key to exit.
*/
In the following example, the query must join two sequences based on matching keys that, in the case of the inner (right side) sequence, cannot be obtained prior to the join clause itself. If this join were performed with a join clause, then the Split method would have to be called for each element. The use of multiple from clauses enables the query to avoid the overhead of the repeated method call. However, since join is optimized, in this particular case it might still be faster than using multiple from clauses. The results will vary depending primarily on how expensive the method call is.
class MergeTwoCSVFiles
{
    static void Main()
    {
        // See section Compiling the Code for information about the data files.
        string[] names = System.IO.File.ReadAllLines(@"../../../names.csv");
        string[] scores = System.IO.File.ReadAllLines(@"../../../scores.csv");

        // Merge the data sources using a named type.
        // You could use var instead of an explicit type for the query.
        IEnumerable<Student> queryNamesScores =
            // Split each line in the data files into an array of strings.
            from name in names
            let x = name.Split(',')
            from score in scores
            let s = score.Split(',')
            // Look for matching IDs from the two data files.
            where x[2] == s[0]
            // If the IDs match, build a Student object.
            select new Student()
            {
                FirstName = x[0],
                LastName = x[1],
                ID = Convert.ToInt32(x[2]),
                ExamScores = (from scoreAsText in s.Skip(1)
                              select Convert.ToInt32(scoreAsText)).ToList()
            };

        // Optional. Store the newly created student objects in memory
        // for faster access in future queries.
        List<Student> students = queryNamesScores.ToList();

        foreach (var student in students)
        {
            Console.WriteLine("The average score of {0} {1} is {2}.",
                student.FirstName, student.LastName, student.ExamScores.Average());
        }

        // Keep console window open in debug mode.
        Console.WriteLine("Press any key to exit.");
        Console.ReadKey();
    }
}

class Student
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int ID { get; set; }
    public List<int> ExamScores { get; set; }
}
/* Output:
   The average score of Omelchenko Svetlana is 82.5.
   The average score of O'Donnell Claire is 72.25.
   The average score of Mortensen Sven is 84.5.
   The average score of Garcia Cesar is 88.25.
   The average score of Garcia Debra is 67.
   The average score of Fakhouri Fadi is 92.25.
   The average score of Feng Hanying is 88.
   The average score of Garcia Hugo is 85.75.
   The average score of Tucker Lance is 81.75.
   The average score of Adams Terry is 85.25.
   The average score of Zabokritski Eugene is 83.
   The average score of Tucker Michael is 92.
*/
Create a Visual Studio console application project that targets .NET Framework 3.5 or later. By default, the project has a reference to System.Core.dll and a using directive for the System.Linq namespace.
Replace the Program class with the code in the previous example.
Follow the instructions in How to: Join Content from Dissimilar Files (LINQ) to set up the data files, scores.csv and names.csv.
Press F5 to compile and run the program.
Press any key to exit the console window. | http://msdn.microsoft.com/en-us/library/bb882533(v=vs.100).aspx | CC-MAIN-2014-10 | refinedweb | 1,094 | 57.77 |
Namespace and Remapping
Hello all, I have been reading the documentation and running test examples on namespaces and remaps but I still cannot achieve what I am trying to do. My attempts have also spawned new questions which I could not find answers to.
First Problem: My node publishes topics in the following manner:
/mynode/[robot_name]/sim/twist
/mynode/[robot_name]/sim/odom
/mynode/[robot_name]/sim/base_scan
I would like to remap the sim/* commands to other commands:
/mynode/[robot_name]/sim/twist -> /[robot_name]/cmd_vel
/mynode/[robot_name]/sim/odom -> /[robot_name]/odom
/mynode/[robot_name]/sim/base_scan -> /[robot_name]/base_scan
Note, that [robot_name] is dynamic, based on an input file. (Just like stage's robot_0, robot_1, robot_2, etc).
Removing the node name was simple, but the sim/* -> new_name did not work. In the launch file:
<remap from="/mynode" to="/" /> --- This works
<remap from="sim/twist" to="cmd_vel" /> --- These don't
<remap from="sim/odom" to="odom" /> --- These don't
<remap from="sim/base_scan" to="base_scan" /> --- These don't
Could anyone offer any suggestions on a fix?
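One direction I am considering (an untested sketch; the launch-argument plumbing is my own guess, not something I have working) is passing the robot name in as an argument and remapping the fully-resolved names inside the node's scope, with the package and node names below as placeholders:

```xml
<launch>
  <arg name="robot" default="robot_0"/>
  <node pkg="mypkg" type="mynode" name="mynode">
    <!-- remap the fully-resolved topic names for one robot -->
    <remap from="/mynode/$(arg robot)/sim/twist"     to="/$(arg robot)/cmd_vel"/>
    <remap from="/mynode/$(arg robot)/sim/odom"      to="/$(arg robot)/odom"/>
    <remap from="/mynode/$(arg robot)/sim/base_scan" to="/$(arg robot)/base_scan"/>
  </node>
</launch>
```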
Second Problem: When I remap "/mynode" to "/", parameters I pass into my node aren't found anymore. NodeHandle.searchParam does not find them, but they still exist. For example:
namespace of node = "/" (this is because of the remap above)
parameters = "/mynode/PARAM1", "/mynode/PARAM2" (parameters are not remapped!!)
As stated, searchParams does not find them, but getParam("/mynode/Param1") works (and of course getParam("Param1") fails since the node's namespace changed. Does anyone have any thoughts? Thank you for your time! | https://answers.ros.org/question/11728/namespace-and-remapping/ | CC-MAIN-2018-05 | refinedweb | 253 | 53.41 |
Python Data Structures and Algorithms: Counting sort algorithm
Python Search and Sorting : Exercise-10 with Solution
Write a Python program for counting sort.
According to Wikipedia, "counting sort is an algorithm for sorting a collection of objects according to keys that are small integers; that is, it is an integer sorting algorithm. It operates by counting the number of objects that have each distinct key value, and using arithmetic on those counts to determine the positions of each key value in the output sequence".
Sample Solution:
Python Code:
def counting_sort(array1, max_val):
    m = max_val + 1
    count = [0] * m
    for a in array1:
        # count occurrences
        count[a] += 1
    i = 0
    for a in range(m):
        for c in range(count[a]):
            array1[i] = a
            i += 1
    return array1

print(counting_sort([1, 2, 7, 3, 2, 1, 4, 2, 3, 2, 1], 7))
Sample Output:
[1, 1, 1, 2, 2, 2, 2, 3, 3, 4, 7]
Facebook has just introduced several sweeping new changes to its platform at F8, the annual conference for Facebook developers. The web has been abuzz about the implications of Facebook's latest move towards making its platform available on as many web sites as possible.
As Bret Taylor, head of Facebook Platform products, noted on the Developer Blog, the next evolution of the platform is based on two fairly important themes.
Targeting these two themes, Facebook has released three new components: social plugins, the Open Graph protocol, and the Graph API.
Social plugins enable developers to easily add user interaction to web pages (e.g., a "Like" button) using an <iframe> or a combination of XFBML and Facebook's JavaScript SDK. These plugins essentially establish a distributed means for content sharing and interaction in a fairly seamless way.
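For the "Like" button case, the iframe variant amounts to embedding a snippet along these lines (the URL-encoded href and the sizing here are illustrative):

```html
<iframe src="http://www.facebook.com/plugins/like.php?href=http%3A%2F%2Fexample.com%2Fpage"
        scrolling="no" frameborder="0"
        style="border:none; width:450px; height:80px"></iframe>
```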
The Open Graph Protocol provides a way for developers to integrate content with Facebook's social graph. In essence, this means that content can be linked with one or more users across different parts of their profiles, including profile pages, activity streams, news feeds, and even search results. Currently the protocol is implemented by adding several tags to the <head> of a web page (based on the Open Graph Protocol namespace) and by including a "Like" button (social plugin). The specification for the Open Graph Protocol has also been published.
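In practice the protocol boils down to a handful of meta tags; a sketch with illustrative values (og:title, og:type, og:url, and og:image are the core required properties):

```html
<head>
  <meta property="og:title" content="Example Movie"/>
  <meta property="og:type" content="movie"/>
  <meta property="og:url" content="http://www.example.com/movie/"/>
  <meta property="og:image" content="http://www.example.com/poster.jpg"/>
</head>
```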
The Graph API is the next generation of the Facebook API, and it is aimed at providing access to various parts of Facebook's social graph data. The new API is completely RESTful, and by default results are returned in JSON. Access to various data objects has been streamlined, as is evident from the examples below:
- https://graph.facebook.com/ID (fetch any object by its ID)
- https://graph.facebook.com/ID/CONNECTION_TYPE (fetch an object's connections, such as friends, feed, or likes)
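Because results come back as JSON, consuming them from any language is straightforward. A sketch in Python, using a made-up sample payload rather than a live API call:

```python
import json

# Illustrative payload only: shaped like a Graph API object response,
# with made-up field values (a real call would hit graph.facebook.com).
sample_response = '{"id": "12345", "name": "Example Page", "category": "Product/Service"}'

# Parse the JSON body into a plain dictionary and pull fields out of it.
obj = json.loads(sample_response)
print(obj["name"])  # -> Example Page
```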
Documentation for the new API is available, including advanced parts of the API such as FQL and FBML as well as the now deprecated "Old REST API." Note that the new API utilizes OAuth 2.0 for authentication and it also includes integration with Insight, Facebook's analytics service.
And there was one policy change announced today that got a big round of applause from the developer audience: third-party applications may now retain Facebook user data for longer than 24 hours. According to Justin Smith at Inside Facebook, "the change was one primarily motivated by technical costs being created by the policy for developers, instead of any intended change in how developers should use user data." And Mark Zuckerberg said that
Zynga has had to download user information 100 million times per day because of our policy. Developers were having to architect entire systems just to do this. There aren’t any other changes in the policies on how developers can use the data.
Whatever the motivation, while this can indeed make developers lives easier, it also raises privacy issues as more and more Facebook data gets moved outside of Facebook.
Overall, this is certainly a bold move by Facebook in its effort to expand its reach both in terms of users and content across the web (or what Jeremiah Owyang refers to as a "Crusade of Colonization"). The new API, coupled with the Open Graph Protocol and social plugins, present developers with several ways of tapping into the Facebook platform without much heavy lifting.
I find it funny that Facebook is making things harder for their bread and butter (canvas apps that keep people looking at a Facebook page with Facebook ads) and making it easier for outside sites to drive traffic away from Facebook. Seems to me that all these new API changes and taking away useful things like notifications and requests are Facebook's way of sabotaging themselves from the inside. About the only positive change I see (for Facebook and third-party developers) out of all of this is allowing app developers to cache Facebook data for more than 24 hours and cut down on costs/traffic.
What's the difference with the old API in terms of features?
[...] Facebook announced the Graph API in April, it was generally accepted by the developer community as a huge step towards open social [...]
[...] were by far the most common type. We can’t discuss social without mentioning Facebook, which made sweeping changes to its platform, launching the Facebook Graph API. With proper user permissions, this API makes a user’s [...]
[...] to keep working. If, for example, you developed a Facebook application for a client, it is known that they change their API from time to time, and odds are that an application developed today will not work in half a [...]
[...] big changes started in April, with Facebook’s announcement of its new Facebook Graph API and Open Graph Protocol. At the heart of these moves was a shift in [...]
What do the autorhythmic cardiac muscle fibers do?
They generate action potentials that trigger heart contractions, automatically. They act as a pacemaker and form conduction systems.
Where is the sinoatrial (SA) node and what does it do?
-right wall of right atrium, medial to superior vena cava
-electrically signals both atria to begin contraction
Where is the atrioventricular (AV) node and what does it do?
-right atrium, medial to right AV valve
-electrically signals both ventricles to begin contraction
Where is the atrioventricular bundle and what does it do?
-extends from AV node, passes through fibrous skeleton and branching into interventricular septum towards the apex
-conducts action potentials
What are Purkinje fibers and what do they do?
-terminal branches of AV bundle branches extending up lateral walls of myocardium
-produce depolarization of myocardium
What happens after the SA node is innervated?
The charge travels to the atrioventricular (AV) node in interatrial septum.
What happens after the AV node is innervated?
The charge enters the atrioventricular (AV) bundle, also known as the Bundle of HIS.
What happens after the AV bundle is innervated?
The charge enters the right and left bundle branches which extends through interventricular septum, towards the apex.
What is the last step of charge innervation?
The Purkinje fibers conduct action potentials to remainder of ventricular myocardium.
What modifies the strength of each heartbeat?
Nerve impulses from the autonomic nervous system (ANS) and hormones modify timing and strength of each heartbeat (Do NOT establish a fundamental rhythm).
What is the process of action potential movement of the heart?
Action potential-->depolarization-->plateau-->repolarization.
What happens during depolarization?
Contractile fibers have stable resting membrane potential. Voltage-gated fast Na+ channels open- Na+ flows in. Then deactivate and Na+ inflow decreases.
What channels are open during depolarization?
Ca2+ channels (slow). Ca2+ moved from the interstitial fluid into cytosol. Triggers contraction.
How is depolarization maintained?
Through voltage-gated K+ channels balancing Ca2+ inflow with K+ outflow.
What happens during repolarization?
Additional voltage-gated K+ channels open, and the outflow of K+ restores negative resting membrane potential. Calcium channels close.
What is the refractory period?
The time interval during which second contraction cannot be triggered (takes longer than the contraction itself).
What is the P wave?
The excitation of the SA node (the beginning of the action potential). Atrial depolarization.
What is the QRS complex?
The action potential found in the AV bundle and our over the ventricles. Masks atrial repolarization.
What is the S-T segment?
It beings shortly after the QRS complex, and is the contraction of the ventricles, ventricular systole.
What happens during relaxation?
Both the atria and the ventricles are relaxed. In each cycle, atria and ventricles alternately contract and relax.
Where does the sound of the heartbeat come from?
The blood turbulence caused by the closing of heart valves.
What is Cardiac Output (CO?)
The volume of blood ejected from the left (or right) ventricle into the aorta (or pulmonary trunk) each minute.
What is the Frank-Starling law of the heart?
The more the heart fills with blood during diastole, the greater the force of contraction during systole.
What two factors influence EDV?
The duration of ventricular diastole, and the venuous return (volume of blood returning to the right ventricle).
What is contractility?
The strength of contraction at any given preload. Ability of heart to stretch, generate tension.
What increases contractility?
Positive inotropic agents (increase stroke volume, promote Ca2+ inflow, epinephrine, norepinephrine, digitalis.
What decreases contractility?
Negative inotropic agents (anoxia, acidosis, some anesthetics, and increased K+ in interstitial fluid)
What is afterload?
Pressure that must be overcome before a semilunar valve can open; how much pressure has to be generated after a contraction to get the blood out.
What does an increase in afterload lead to?
It causes stroke volume to decrease (blood remains in ventricle at the end of systole).
What aids in the regulation of the heartbeat?
Cardiac output depends on heart rate and stroke volume, adjustments in the heart rate are important short term, and the autonomic nervous system and epinephrine/norepinephrine are MOST important.
What part of the brain regulates autonomic regulation of the heart?
The medulla oblongata, in the cardiovascular center.
How does norepinephrine control heart rate?
In the SA and AV nodes it speeds the rate of spontaneous depolarization; in contractile fibers it enhances Ca2+ entry, increasing contractility.
How can the heart rate be slowed down?
By the release of acetylcholine, which slows the rate of spontaneous depolarization. Released by parasympathetic nerves.
How can the heart rate be changed by the ANS?
Increases or decreases in the frequency of nerve impulses in both the sympathetic and parasympathetic branches.
What hormone, besides epinephrine and norepinephrine, increase heart rate?
Thyroid hormones increase heart rate and contractility as well. | http://quizlet.com/4790442/ap-ch20-heart-conduction-questions-flash-cards/ | CC-MAIN-2014-49 | refinedweb | 791 | 52.15 |
15 November 2010 10:01 [Source: ICIS news]
SINGAPORE (ICIS)--
“The plant was completed on-schedule and on-budget and marks a major step forward in Neste Oil's clean traffic fuel strategy,” it said.
The plant cost around €550m ($753.4m) to build, the company said.
It would be mostly using palm oil as feedstock to produce its renewable diesel.
Neste Oil has a similar-sized facility under construction in
NExBTL renewable diesel is a premium fuel that is compatible with all diesel engines and existing fuel distribution systems, according to the company. The fuel could be blended with fossil diesel or used as such, the company said.
Neste Oil currently operates two renewable diesel plants that came onstream at Porvoo | http://www.icis.com/Articles/2010/11/15/9410238/neste-oil-starts-up-800000-tyear-biodiesel-plant-in-singapore.html | CC-MAIN-2014-23 | refinedweb | 124 | 56.76 |
Here you will see how to print the alphabet using each of Java's loops (for, while, and do-while). In these loops we simply keep incrementing a char value, so the characters advance one after another.
public class PrintAlphabets {
    public static void main(String[] args) {
        // By for loop
        char ch;
        for (ch = 'a'; ch <= 'z'; ch++) {
            System.out.print(ch + " ");
        }
        System.out.println();

        // By while loop
        char c = 'a';
        while (c <= 'z') {
            System.out.print(c + " ");
            c++;
        }
        System.out.println();

        // By do-while loop
        c = 'A';
        do {
            System.out.print(c + " ");
            c++;
        } while (c <= 'Z');
    }
}
More
For more algorithms and Java programming test questions and sample code, follow the links below.
The promise of using Docker during development is to deliver a consistent environment for testing across developer machines and the various environments (like QA and production) in use. The difficulty is that Docker containers introduce an extra layer of abstraction that developers must manage during coding.
Docker enables application code to be bundled with its system requirements definition in a cross-platform, runnable package. This is a graceful abstraction for solving a fundamental need in deploying and managing software runtimes, but it introduces an extra layer of indirection that must be dealt with when programmers are doing what they do: iteratively modifying and testing the internals of the software and its dependencies.
The last thing you want to do is slow down the dev cycle. A good discussion of these matters at a conceptual level is here.
Even if you or your team are not committed to using Docker across dev machines as a matter of process, there are several use cases for modifying and debugging code running inside a container. For example, a developer can use Docker to mimic a production environment to reproduce errors or other conditions. Also, the ability to remotely debug into a host running the Dockerized app can allow for hands-on troubleshooting of a running environment like QA.
We are going to stand up a simple Java app in a VM on Google Cloud Platform (GCP), Dockerize it, then remotely debug it and modify its code from Visual Studio Code running on a local host.
We’ll cover two essential needs: updating the running codebase without restarting the container and debugging into a running, containerized app. As an additional benefit, we’ll do this process on a remotely running container. This means you'll have an approach for remotely debugging a service like a QA server, as well as a local development host.
Set up Java and Spring Boot
Step one is to go to the GCP console (and sign up for a free account if you don't have one). Now go to the Compute Engine link, which will give you a list of VMs and click Create Instance.
If you select an N1 micro server, it will be in the free tier. However, Docker is a bit of a resource hog so I recommend using a general purpose E2 server (clocking in around $25 per month for 24/7 use). I named mine dev-1.
Go ahead and configure the network for this instance. Click the Network tab in the middle of the VM details and in the Network Tags field, add
port8080 and
port8000.
Now go to the left-hand menu and open VPC Networks -> Firewall. Create two new rules (click the Create Firewall Rule button) to allow all source IPs (0.0.0.0/0) to access TCP port 8080, with label
port8080, and TCP port 8000, with label
port8000. With these in place, the new VM instance will allow traffic to the app server you will create on 8080 and to the default Java debug port of 8000.
SSH to the new server by clicking back to Computer Engine -> VM instances, finding the new instance (dev-1), and clicking the SSH button.
Now let’s set up Java. Type
sudo apt-get update, followed by
sudo apt-get install default-jdk. When that is done,
java --version should return a value.
Next, install the Spring CLI via SDKMAN (an SDK manager) so we can use Initializr from the shell. Run the following commands:
sudo apt install zip
curl -s "" | bash
source "/home//.sdkman/bin/sdkman.sh"
Now
sdk version should work.
Next install the Spring CLI tool with
sdk install springboot. Now you can quickly create a new Spring Boot Java web app with the following command:
spring init --dependencies=web idg-java-docker
The new project will reside in /idg-java-docker. Go ahead and
cd into that directory.
The Spring Boot app includes the
mvnw script so you don’t need to install Maven manually. Spin up the app in dev mode by typing
sudo ./mvnw spring-boot:run.
If you navigate to http://<your instance IP>:8080 in the browser (you can find the IP address in the list on the GCP console), you should now receive the Spring White Label Error page, because there are no routes mapped.
Map a URL route
Just add a quick-and-dirty endpoint for testing. Use
vi src/main/java/com/example/javadocker/DemoApplication.java (or your editor of choice) to modify the main class to look like Listing 1.
Listing 1. Add an endpoint
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@SpringBootApplication
@RestController
public class DemoApplication {
@RequestMapping("/")
public String home() {
return "Hello InfoWorld!";
}
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
}
Now you can stop Tomcat with
Ctrl-c and rebuild/restart by typing
./mvnw spring-boot:run. If you navigate to the app in the browser, you’ll see the simple “Hello InfoWorld” response.
Dockerize the project
First install Docker as per the official Docker instructions for Debian; the sequence of commands there ends with sudo apt-get install docker-ce docker-ce-cli containerd.io
Create a Dockerfile
There are several ways to create a Dockerfile, including using a Maven plug-in. For this project we’ll build our simple Dockerfile by hand to get a look at it. For a nice intro to Java and Docker, check out this InfoWorld article.
Use an editor to create a file called
dockerfile and add the contents of Listing 2.
Listing 2. A basic Java/Spring Dockerfile
# syntax=docker/dockerfile:1
FROM openjdk:16-alpine3.13
WORKDIR /app
COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline
COPY src ./src
CMD ["./mvnw", "spring-boot:run"]
We are ignoring groups and users for simplicity here, but in a real-world situation, you would need to deal with that.
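As a rough sketch of what handling that could look like on this Alpine-based image (the spring user and group names below are invented for illustration, not part of the article's project):

```dockerfile
# Create an unprivileged system user and group (BusyBox/Alpine syntax)
RUN addgroup -S spring && adduser -S spring -G spring
# Drop root privileges for everything that follows
USER spring:spring
```

These lines would typically go near the end of the Dockerfile, after the COPY steps, so that the build steps themselves can still run as root.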
This Dockerfile uses OpenJDK as a base layer, then we move to a /app working directory. Next we bring in all the Maven files and run Maven in offline mode. (This allows us to avoid re-downloading the dependencies later.) The Dockerfile then copies the app sources over, and runs the
spring-boot:run command.
Note that we are driving towards a dev-enabled image, not a production one. You wouldn’t use
spring-boot:run for production.
Stop the running app if it’s still up.
Let’s build and run this now. First run the
docker build command:
sudo docker build --tag idg-java-docker .
Wait for the build, then follow with
docker run:
sudo docker run -d -p 8080:8080 idg-java-docker
This will build your Docker image and then start it in a new container. When you call the run command, it will spit back a UID, such as (in my case):
d98e4d19dab71fa69b2331485b70b5c87f20de864238e5798ad3aa8c5b576014
You can double check the app is running and available on port 8080 by visiting it with a browser again.
You can check the running containers with
sudo docker ps. You should see the same UID running. Stop it with
sudo docker kill <container-id>. Note that you only have to type enough of the UID to be unique (similar to a Git check-in ID), so in my case
sudo docker kill d98.
This Dockerfile is a reasonable beginning (users and layers would come next) to running the app, but pause for a moment and consider what you’d need to do to update the running application. To change even the simple greeting message you would need to make the code change, stop the running Docker container, build the image with
docker build, and start the container with
docker run.
How can we improve this situation?
Use Docker Compose
The answer is we’ll run Spring Boot with devtools with remote debug enabled, and expose the debug port in Docker. To manage this in a declarative way (instead of command-line arguments), we’ll use Docker Compose. Docker Compose is a powerful way to express how Docker runs, and it supports multiple targets (aka multi-stage builds) and external volume mounting.
The default config file is docker-compose.yml, which runs on top of the configuration found in the Dockerfile.
First install the Docker Compose binary:
sudo curl -L "(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Then run:
sudo chmod +x /usr/local/bin/docker-compose
Now you can run:
docker-compose --version
Tip: If you ever need to explore inside a running container you can run one of the following commands (depending on the underlying OS in the image):
sudo docker exec -it 646 /bin/sh
sudo docker exec -it 646 /bin/bash
sudo docker exec -it 646 powershell
Now that Docker Compose is available, let’s create a config file for it, docker-compose.yml, as seen in Listing 3.
Listing 3. docker-compose.yml
version: '3.3'
services:
idg-java-docker:
build:
context: .
ports:
- 8000:8000
- 8080:8080
environment:
- SERVER_PORT=8080
volumes:
- ./:/app
command: ./mvnw spring-boot:run -Dspring-boot.run.jvmArguments="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000"
The first key fact here is that both ports 8080 and 8000 are open. 8000 is the conventional Java debug port, and is referenced by the
command string. The
docker-compose command overrides the
CMD definition in the Dockerfile. To reiterate,
docker-compose runs atop the Dockerfile.
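For reference, the JDWP agent string in that command breaks down as follows. This is standard JVM remote-debugging syntax rather than anything specific to this project; the real flag is a single comma-separated string with no spaces or comments:

```
-agentlib:jdwp=
    transport=dt_socket,   # carry debug traffic over a TCP socket
    server=y,              # this JVM listens for a debugger to attach
    suspend=n,             # don't pause startup waiting for a debugger
    address=*:8000         # listen on all interfaces, port 8000
```

Because suspend=n, the app starts normally whether or not a debugger ever connects.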
Type
sudo docker-compose build --no-cache idg-java-docker to build the image.
Start the app with
sudo docker-compose up. Now kill it with
Ctrl-c.
Now run the container in the background with
sudo docker-compose up -d for detached mode. Then you can shut it down with
sudo docker-compose down.
Commit your new app with
git init,
git add .,
git commit -m "initial".
Now visit GitHub.com and create a new repository. Follow the instructions to push the project:
git remote add origin
git branch -M main
git push -u origin main
Now open Visual Studio Code on your local system. (Or any remote debug-enabled Java IDE; for more info on running VS Code and Java check here. All modern IDEs will clone a repo directly from the GitHub repo clone address.) Do that now.
Now open the Java debug configuration for your IDE. In VS Code, this is the launch.json file. Create a configuration entry like that seen in Listing 4. Other IDEs (Eclipse, IntelliJ, etc.) will have similar launch config dialogs with the same fields for entry.
Listing 4. Debug configuration for IDE client
{
"type": "java",
"name": "Attach to Remote Program",
"request": "attach",
"hostName": "<The host name or ip address of remote debuggee>",
"port": 8000
},
Plug in the IP address from your VM, then launch this config by going to Debug and run the “Attach to Remote Program” config.
Once the debugger is attached, you can modify the DemoApplication.java file (for instance, change the greeting message) and save it. Now click the "Hot module swap" button (the lightning bolt icon). VS Code will update the running program.
Browse to the app in the browser again, and you’ll see it will reflect the change.
Now for the last trick. Set a breakpoint by double-clicking at line 13 in the IDE. Now visit the app again. The breakpoint will hit, the IDE will come up, and full debugging capabilities will be available.
There are other approaches to Dockerizing a dev flow, but the manner described in this article gives you code updating and debugging for both localhost and remote systems in a relatively straightforward setup. | https://www.infoworld.com/article/3638548/how-to-use-docker-for-java-development.html | CC-MAIN-2022-05 | refinedweb | 1,948 | 64.91 |
At 02:47 PM 3/13/2007 -0500, Ian Bicking wrote:
>The open issues section has three issue. One is a matter of defining some
>naming convention, and as long as people *try* to match up their
>conventions it will work. The second has a proposed solution. The last
>is merely aesthetic.
>
>These are the "real issues" you are referring to?

No - I'm saying that the real issues are all (and always) specific to the particular data type being exchanged.

>That's not much easier, really. It would still be documented, still needs
>to be implemented and defined properly. The biggest difference is that it
>needs to be done again for each type of object.

It has to be anyway.

>I didn't understand what you were proposing, I think. I still don't
>really know what get_file_storage means.

It would return a cgi.file_storage encoding the request body.

>It's a nice idea, but as far as I know no one has actually used
>wsgi.file_wrapper.

I believe that the Jython WSGI implementation provides one, or something analogous that wraps certain types of Java stream objects.

>Except insofar as "type" is variable in my specification, I don't think it
>is substantially different.

That is indeed the substance of the difference - yours is a meta-specification, rather than a specification. As a result, it's more complicated to grasp than a pattern... and significantly more difficult to get *right*. And without examples, it's basically impossible to get right.

>If no one cares about this, then I guess I can just put it under the
>httpencode namespace where it was before, but I don't see any reason to
>make it less general.

It'll be worth making it general when there are more examples of the pattern to generalize from. As you pointed out yourself, there are very few at the moment.
Apache::SWIT - mod_perl based application server with integrated testing.
package MyHandler;
use base 'Apache::SWIT';

# overload render routine
sub swit_render {
    my ($class, $r) = @_;
    return ({ hello => 'world' }, 'my_template.tt');
    # or return { hello => 'world' }; to rely on swit.yaml
    # based generation
}

# overload update routine, usually result of POST
sub swit_update {
    my ($class, $r) = @_;
    # do some work ...
    # and redirect to another page
    return '/redirect/to/some/url';
}
This is pre-alpha quality software. Please use it on your own risk.
This module serves as yet another mod_perl based application server.
It tries to capture several often-occurring paradigms in mod_perl development. It provides the user with tools to bootstrap a new project, write tests easily, etc.
Sends HTTP default headers: session cookie and content type.
$r is the Apache request and $ct is an optional content type (defaults to text/html; charset=utf-8).
Dies with the first line of $msg using Carp::croak and dumps the request $r and @data_to_dump with Data::Dumper into a /tmp/swit_<time>.err file.
Maximum size of a POST request. The default is 1M. Overload it to return something else.
Entry point for an update handler. Calls the $class->swit_update($apr) function with an Apache2::Request parameter. The result of swit_update, henceforth called $to, is passed down.
If $to is a regular string, then a 302 status is produced with Location equal to $to.

If $to is an array reference and the first item is a number, then a response with status $to->[0] is produced and $to->[1] is returned as the response body. $to->[2] may be used as the content type.

I.e., [ 200, "Hello", "text/plain" ] will respond with a 200 OK status and Hello as the body, with text/plain as the content type.

The first item can also be the INTERNAL magic string. In that case an internal redirect to the second array item is produced.

Of the $to parameters, only $to->[0] is mandatory.
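Putting the conventions above together, a swit_update body could dispatch between the three documented return forms as in the following sketch (the parameter names and URLs are placeholders, and the code is not runnable outside a mod_perl/Apache::SWIT setup):

```perl
sub swit_update {
    my ($class, $r) = @_;    # $r is the Apache2::Request instance

    # Plain string: the client gets a 302 redirect to this location
    return '/thank/you' if $r->param('saved');

    # [ status, body, content type ]: respond directly
    return [ 200, "Hello", "text/plain" ] if $r->param('plain');

    # [ 'INTERNAL', url ]: produce an internal redirect instead
    return [ 'INTERNAL', '/other/page' ];
}
```

As noted above, only $to->[0] is mandatory in the array forms.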
Much-needed documentation is non-existent at the moment.
update: part II is now available
Weaving a thread throughout these numbers, one idea is pervasive;
the defect rate seems unaffected by the choice of programming language.
So 1 in every 10 lines of Scala will contain a defect, as will 1 in every 10 lines of assembly.
This is important when considering how many lines of code that each of
these languages need in order to implement the same feature.
Given that the rate of defect creation remains constant
and techniques for detection are slowing down,
it makes sense that defects should be tackled from the other direction;
instead of increasing the rate of detection we can reduce the rate of creation.
Specifically, by reducing the number of lines of code.
Modern languages have already made great progress in this area.
For example, studies comparing Scala and Java quote that the same feature requires anywhere from 3x to 10x fewer lines in the Scala implementation.
The extra lines needed in Java are just accidental complexity;
as well as providing more places for defects to appear they create more noise.
This is not good news for the developer who later has to come back and read and maintain the code and is exactly what we want to avoid... longer code with more bugs that's harder to maintain.
There are a number of features in Scala that help keep the daemon boilerplate under control.
Many of these have already been documented elsewhere, but they include:
This:
List(2,3,4).foreach(println)
is shorter than this:
for(x <- List(2,3,4)) {
println(x)
}
Instead of:
Map<Integer, String> theMap = new HashMap<Integer, String>();
a Scala developer can write:
val theMap = HashMap[Int, String]()
There's no need to duplicate the <Integer, String> construct.
Scala code also tends to produce better quality errors.
So less is more, right?
Well, sometimes... if you're reducing accidental complexity then less is definitely a good thing.
But not always; many design patterns recognised as good practice also tend to increase the line count of a system.
For example:
lots of Good Things(tm) will add code...
Of course, this isn't without benefit. Many patterns help to express intent more effectively and make the code more testable.
Good refactoring and splitting up large blocks of functionality will make the code easier to maintain,
and the newly-named fragements also help to document the code.
It might seem a contradiction that less lines are good and more lines are good,
but the increased line count here isn't just adding accidental complexity, it's adding structure and intent and documentation.
This all helps maintainers and testers to keep defects down, so the main goal is still being achieved!
With one exception... The use of forwarders in object composition, decorators, etc.
Take the following example (adapted from an article in Wikipedia):
abstract class I {
def foo()
def bar()
def baz()
}
class A extends I {
def foo() = println("a.foo")
def bar() = println("a.bar")
def baz() = println("a.baz")
}
class B(a : A) extends I {
def foo() = a.foo() // call foo() on the a-instance
def bar() = println("b.bar")
def baz() = a.baz()
}
val a = new A
val b = new B(a)
Here, class B implements the contract of I by delegating some of the work to an instance of A.
In a worst case scenario this could lead to class B containing tens of forwarder methods that do nothing
but call through an instance of A, with hundreds of lines of code just to state that:
For any functionality not implemented in this class, delegate to the member variable "a"
it's almost as bad as javabean properties...
If "A" doesn't need any additional logic to create an instance, and it's always constructed alongside an instance of B (or some other class)
for purposes of delegation, then it can be made a trait - and the problem is solved:
trait A {
def foo() = println("a.foo")
def bar() = println("a.bar")
def baz() = println("a.baz")
}
class B {
def bar() = println("b.bar")
}
val b = new B with A
The trait A contains both the contract and default implementation for the methods
(although it could also leave some definitions abstract if desired)
Multiple traits can be mixed-in like this when constructing the value "b",
which as shown is of type "B with A"
Mix-ins help, a lot! But if "A" has to be looked up via JNDI, or needs a factory method
to construct, or already exists at the time we need to use it, then mix-ins are powerless to help.
Autoproxy is a Scala compiler-plugin created to help with exactly this situation
By using a simple annotation, the compiler can be instructed to generate
delegates in situations where mix-ins just don't help.
Returning to the original example:
abstract class I {
def foo()
def bar()
def baz()
}
class A extends I {
def foo() = println("a.foo")
def bar() = println("a.bar")
def baz() = println("a.baz")
}
class B(@proxy a : A) extends I {
def bar() = println("b.bar")
}
val a = new A
val b = new B(a)
The @proxy annotation will generate the foo() and baz() methods in class B,
identical to the hand-written versions shown previously.
Using @proxy with a trait, things become even easier:
trait A {
def foo() = println("a.foo")
def bar() = println("a.bar")
def baz() = println("a.baz")
}
class B(@proxy a : A) {
def bar() = println("b.bar")
}
val a = new A
val b = new B(a)
Behind the scenes, traits are implemented as interfaces plus a separate class containing any concrete implementation.
This means that @proxy can add A (the interface) to superclasses of B, allowing B to be used as an instance of A.
There is no need to explicitly break out I as an inteface.
The wiki and source for the autoproxy plugin can be found on github.
In the next article I'll cover a few usage scenarios for the plugin.
After that, I'll look at some of the challenges involved in adding this to the Scala compiler,
a process that one commentator described as "bear wrestling".
On Wed, Aug 15, 2001 at 06:59:47PM -0400, Dan Sugalski wrote:
> At 11:50 PM 8/13/2001 -0700, Paul Prescod wrote:
> >Tim Peters wrote:
> >...
> > > IIRC, Greg's fabled free-threading version of Python took a speed hit of
> > > about a factor of 2 (for a program using only 1 thread, compared to that
> > > same program without the free-threading patches).

Yah. That's about right. The largest problem was dealing with dictionaries. Since you never knew whether a specific dictionary was shared (between threads) or not, you always had to lock the access. And since all namespaces use dictionaries...

> >The Perl guys considered this unacceptable and I can kind of see their
> >point. You have two processors but you get roughly the same performance
> >as one?

No. For definitional purposes, let's say that for normal Python on one processor, you get 1 Python Speed Unit (PSU). With my free-threading patches, that uniprocessor would get around 0.6 PSU.

Move to a multiprocessor with 2 CPUs, running a 2-thread program with no synchronization (e.g. each is simply doing their thing rather than potentially interfering with each other). Regular Python would get about 0.95 PSU because the GIL imposes some overhead (and you can't ever get more than 1 because of the GIL). With the free-threading, you would get about 1.2 PSU.

On a three processor system, regular Python still gets 0.95 PSU. The free-threading goes up to maybe 1.6 PSU.

We observed non-linear scaling with the processors under free threading. 2 processors was fine, but 3 or 4 didn't buy you much more than 2. The problem was lock contention. With that many things going, the contention around Python's internal structures simply killed further scaling performance.

>...
> I racked up a whole list of "Things to Not Do With Threads" when hacking
> the original perl thread model. (The first of which is "wedge them into an
> interpreter that wasn't written with threads in mind..." :) Battle scars
> are viewable on request.

I hear ya. Same here. But Python has since become ever worse re: free threading capability (more globals to arbitrate access to). Last time, I tried to optimize the memory associated with each list/dict. That slowed some things down. Atomic incr/decr wasn't really available under Linux (Win has InterlockedIncrement and friends), so the Linux incr/decr was a bit slower than it should have been.

There are a number of things that I'd do differently the next time around.

Cheers,

-g

--
Greg Stein,
In my previous article, Introduction to NIEM and IEPDs, I created an IEPD browser
to navigate a NIEM conformant schema. There are a variety of online tools available for navigating the NIEM model, building schema subsets and IEPDs,
and mapping the schema to a logical model created in UML. This article reviews the online tools available on the site.
Granted, this is not your usual CodeProject article. My purpose in publishing this article is to introduce CodeProject to some of the free tools
currently available for working with the NIEM, should anyone ever encounter a requirement to work with this model. There is also a lot of opportunity
to further develop tools that work with this model, which is something that I'm interested in pursuing, and perhaps a reader will be interested
in this as well. It is always important to understand the existing tools to better determine what the needs are for the NIEM community,
and that's the purpose of this article.
Although it might be possible to build something with just bare hands and ingenuity, it always helps to have a few tools in your toolbox to make
the job easier and more efficient. The same goes for working with the National Information Exchange Model ((NIEM). There are a number of tools available
to help with NIEM, many of which are included on the website. In this article, we will review the following tools,
and discuss what they do, why they are needed, and how you can use them:
Note: Bear with me in the listing of the tools, as they don't have catchy names or succinct titles. Many are named simply for what they do;
this can assist in immediate recognition of their value, but also make for awkward article subheadings.
Clicking on this link takes you to the NIEM Schema Subset Generation Tool (SSGT). I'll discuss the schema subset generation tool later; for now,
we'll look just at the search and navigation features. The default web page provides the basic search capability, allowing you to search by a number of component kinds.
The most common searches are for "Property" and "Type". An advanced search option brings up a user interface in which you can specify
advanced options, such as searching for the exact phrase, what you want to match on, and which domains you want to search.
During the IEPD creation process, you will most likely, at some point, create a UML diagram of your logical model. Once the logical model
is created, you will want to search the NIEM for the semantic equivalent types (classes) and elements (properties) of your UML model, so they
can be mapped to NIEM schema types and elements. Furthermore, you will discover NIEM types that are not semantic equivalent, but are suitable
to extend or substitute for a custom type (which goes into your extension schema) required in your exchange.
For example, to find if the NIEM contains a "Citation" type, we can change the "Search for a" to "Type", enter the word "Citation"
in the search box, and click on the Search button. This yields a single result, "j:CitationType".
At this point, we can drill into the elements of the CitationType by clicking on the plus-sign to the left of the "Add" button.
You can drill further into the sub-elements, and so forth, by navigating into the hierarchy. Similarly, to see what type j:CitationType inherits,
you can click on the text "show inheritance", which reveals that j:CitationType inherits from nc:ActivityType.
Lastly, next to each element is the text "details"; when you click on this, it shows the definition for that element.
This is very valuable in checking whether the NIEM type has semantic equivalence to the desired type in the exchange.
The SSGT allows you to select a subset of NIEM components to include in your exchange. Your exchange will never require all of the types
defined in the NIEM—instead, your exchange will include a subset of components that either have direct semantic equivalence or that you
will extend in the extension schema(s) that are included in your exchange. By selecting a subset of the NIEM components, you reduce the size
of the exchange and improve the parsing/validation performance. The SSGT is used to build this subset, which can then
be downloaded using the "Generate Documents" link. Each component that you add in your subset schema will be placed in the schema
file that corresponds to the namespace from which the component was selected. For example, if we add j:CitationType to our subset schema,
the j:CitationType will be found in the jxdm domain folder and the nc:ActivityType, from which
j:CitationType derives (or inherits), will be found in the niem-core folder.
After we have searched and found the desired component in the NIEM, there are several options for adding the component to our subset schema.
For example, searching for the type "Citation" finds "j:CitationType". We can now add just that type without any elements by clicking
on the "Add" button to the left of "j:CitationType". This adds just the type without any elements.
It is unlikely that you will want to leave out all elements of that type; however, you can always go back later and add elements, and the SSGT will include them correctly under the CitationType. Alternatively, you can click on the "Add" button to the left
of the text "Add All", and all the elements of that type will be added.
Note that this does not add child elements. For example, the elements under j:CitationIssuedLocation are not added, so if you wanted
specific elements related to the citation issued location (such as nc:LocationAddress), you will need to expand j:CitationIssuedLocation
and add the desired sub-elements. This is a "top-down" approach to generating the subset schema.
Conversely, if you are more comfortable with a "bottom-up" approach, you can drill into the types and start adding types from
the bottom. For example, you could drill into j:CitationType/j:CitationIssuedLocation and first add nc:LocationAddress,
then work your way up by adding j:CitationIssuedLocation.
Because j:CitationIssuedLocation is an element of j:CitationType, j:CitationType is automatically added to the subset schema:
Whereas, j:CitationIssuedLocation was not automatically added when adding just nc:LocationAddress.
This is because the type containing the element nc:LocationAddress is nc:AddressType, whereas the type containing the element j:CitationIssuedLocation
is j:CitationType. When generating a subset schema, the containing type is always included.
When adding elements, you can specify the cardinality right away by clicking on the down arrow next to the "Add" button.
This provides a list of cardinality options and will generate the appropriate "minOccurs" and "maxOccurs" attributes for that element.
The desired cardinality is usually determined by your model and should be documented in the UML model which you are using as a reference for mapping
to the NIEM. You can, however, defer selecting the cardinality until later. Once your subset schema is created, you can click
on the "Edit Cardinality" link to define the cardinality for all elements.
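For illustration, selecting a cardinality of 0 to 1 for the citation's issued location would produce an element reference in the generated subset schema roughly like the following. This is a hand-written approximation rather than actual SSGT output, using the namespace prefixes seen elsewhere in this article:

```xml
<xsd:complexType name="CitationType">
  <xsd:complexContent>
    <xsd:extension base="nc:ActivityType">
      <xsd:sequence>
        <xsd:element ref="j:CitationIssuedLocation"
                     minOccurs="0" maxOccurs="1"/>
      </xsd:sequence>
    </xsd:extension>
  </xsd:complexContent>
</xsd:complexType>
```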
You can also delete types and elements by clicking on the checkbox to the left of the type/element and then clicking on the "Delete" button.
Creating a large subset schema probably won't be accomplished in a single session, and may be the work of several people as well.
The SSGT lets you save the schema subset as a "wantlist", allowing you (or others) to return to your work later on.
The "Save current wantlist to a file" is available on the "Generate Documents" page. You can later load your wantlist from
the "Options" link on the main page of the SSGT, where you will find the section "Load Wantlist".
The mapping tool helps you create the exchange and extension schemas, and to map your UML model with the NIEM. It will also generate
the subset schema for your exchange. In order to use this tool, you will need to generate a UML 1.4 or UML 2.1 metamodel in XMI representation using a tool such as ArgoUML or Eclipse.
Once you have uploaded the XMI model, you can use this tool to map the classes and properties in your UML to the NIEM.
While this tool provides NIEM search capabilities, it is less sophisticated than the SSGT navigation and search, so you may want
to have a separate browser tab or window open to the SSGT. When you locate a type or element using the mapping tool, you have the option of mapping it to one of your UML classes or properties. To use this tool, create an exchange on the main web page, and then upload one or more XMI models. You can then begin mapping the exchange by clicking on the XMI model link.
Classes and their derivatives are presented as root level artifacts in a navigation tree. For example, if you load the XMI for the Citation UML,
the navigation tree displays "DocumentType" and "CitationBatch", and both can be expanded to show their properties. Classes that are part
of the UML model by association are not directly represented; instead, the properties of these associative classes can be mapped by navigating
the association described in the UML model. Therefore, it is important that you label your associations (some UML modelers allow for unnamed associations)
because the association names will appear in the navigation tree of the mapping tool.
If the association name is missing, then the mapping tool displays the associated class name. So, when working with the mapping tool,
remember that it is association-centric. Whereas the SSGT will explicitly show you the containing types (for example, adding j:CitationType
automatically brings in nc:ActivityType), the mapping tool will not explicitly show this to you. However, it behaves the same as the SSGT,
so when you map Citation to j:Citation, the niem-core subset schema will also include nc:ActivityType. This brings up another difference
between the mapping tool and the SSGT—with the mapping tool, you are mapping properties to elements.
Once you have mapped your UML classes and properties, you can generate the subset schema, extension schema, and exchange schemas,
by clicking on the "here" in the text: "To generate and download a mapping report, click here." This will take you to a page where,
near the bottom, you can click on the text "generate an exchange schema." From this page, you may also create a mapping report, mapping set, wantlist, subset schema, and constraint schema.
With this tool, you can upload or enter the schemas and documentation required for an IEPD. Along with the metadata, the tool assembles
everything in a package that conforms to the NIEM IEPD specification. With this tool, you can also validate that the minimum artifacts and
metadata are present for a NIEM-conformant IEPD. You can work with several IEPDs at once in various stages of completion. Once the IEPD has
been created, you can make the IEPD, or parts of the IEPD, publicly accessible. You may also download IEPDs that other people have shared.
Begin with signing in, as IEPDs are specific to your account. Once you have signed in, you can create a new IEPD by uploading an IEPD
zip file or by creating one using the tool. When creating a new IEPD, you will be asked to specify the root directory name and will be able
to upload various artifacts for that IEPD.
The artifact list is quite extensive, and includes artifact types such as business requirements and rules, constraint, subset, extension and exchange schemas,
various documentation types, and so forth.
After specifying the artifacts (this does not have to be a complete list), you will be asked to enter some basic information about the IEPD.
The last step offers you the option to validate the IEPD and/or upload the IEPD artifacts. The validation does not validate the artifacts—instead,
it validates the information in the metadata, such as version, URI, domain, authoritative source point of contact, etc. You can always add or edit
your artifacts by clicking on the "Edit" link, then selecting one of the options: Edit Metadata and Artifacts, Edit Visibility/Sharing, or Deleting the IEPD.
You may also download the IEPD. This includes all of your artifacts; in addition, the download includes the file "metadata.xml", which describes
your metadata entered on the tool's web pages, as well as a catalog.html file. The catalog file can be viewed in your web browser and provides links
to various artifacts. Most browsers support viewing the XML/XSD files directly in the browser. (Note that Google's Chrome shows only the comments—it
appears to strip off the XML tags in XSD and XML files.)
If you wish, you can make your IEPD publicly available by registering it with the IEPD Clearinghouse. You can also search public IEPDs by clicking
on the "Search" link in the "work with IEPDs" section of the tool's webpage.
This tool generates a code list (eye color, hair style, etc.) from an Excel spreadsheet. Code lists are commonly used as substitution
groups along with generic text entry, and represent a pick list of valid options for a particular element. To create your own code list,
download the template zip file. When you extract it and open the XLS document, note that there are two tabs with sample code lists.
You can replace/remove/add new tabs as appropriate. (The nice thing about the tool being tab-aware is that you can manage all of your code lists in a single spreadsheet.)
When you're ready to generate your code list, fill in the information presented on the tool's page and click the "Build Schema"
button (on my browser, the word "Schema" is truncated). After the code list schema is generated, your browser will open a window
to save the file locally. This will include your code list schema, the XLS file you uploaded, and the necessary NIEM artifacts to support the code list, such as the structures schema.
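As a rough sketch of what the tool produces (the type name and code values here are invented for illustration, and the real output also pulls in supporting NIEM artifacts such as the structures schema), a code list boils down to an XSD enumeration:

```xml
<!-- Hypothetical code list: a simple type restricting xsd:token to fixed codes. -->
<xsd:simpleType name="EyeColorCodeSimpleType">
  <xsd:restriction base="xsd:token">
    <xsd:enumeration value="BLU"/> <!-- blue  -->
    <xsd:enumeration value="BRO"/> <!-- brown -->
    <xsd:enumeration value="GRN"/> <!-- green -->
  </xsd:restriction>
</xsd:simpleType>
```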
The migration assistance tool can convert a GJXDM 3.0.3 wantlist to a NIEM 2.0 wantlist, or a NIEM 1.0 wantlist to a NIEM 2.0 wantlist.
The resulting output will be a NIEM 2.0 wantlist, NIEM 2.0 subset schema, and a migration report. The migration report contains actions
taken and choices made in migrating the wantlist, issues that cannot be resolved automatically, and statistics indicating the degree of migration
resolution. This tool cannot migrate extension or constraint schemas.
To use this tool, specify the GJXDM 3.0.3 or NIEM 1.0 wantlist file and the wantlist version, then click on the Migrate button. The generated
documents can then be saved locally for your review.
When creating an IEPD, the search and navigation tools are indispensable when mapping your logical model to the NIEM. Most IEPDs will usually
have code lists that are specific to the domain—creating these code lists with the code list generator tool is a simple process of editing
an Excel spreadsheet to generate the code list artifacts. The mapping tool itself is very useful in creating the subset, extension, and exchange
artifacts required by the IEPD. Supplementing these with other artifacts (such as documentation, examples, and metadata) through
the "Work with IEPDs" tool gives you a comprehensive way to manage and publish the IEPD.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | http://www.codeproject.com/Articles/34436/Introduction-to-NIEM-Tools | crawl-003 | refinedweb | 2,607 | 51.28 |
Name:
[14069] chyld
Member: 73 months
Authored: 2 videos
Description: how far down the rabbit hole would you like to go? ...
del.icio.us + python 3 [ID:1118] (2/2)
in series: Python 3
video tutorial by chyld, added 02/09
This video explains how to integrate the del.icio.us web service with Python 3.
import xml.dom.minidom
import urllib.request
import webbrowser

def getLinksByTag(tags, count):
    delicious = []
    url = "" + tags + "?count=" + count
    response = urllib.request.urlopen(url)
    dom = xml.dom.minidom.parse(response)
    items = dom.getElementsByTagName("item")
    for item in items:
        titles = item.getElementsByTagName("title")
        title = titles[0].childNodes[0].nodeValue
        links = item.getElementsByTagName("link")
        link = links[0].childNodes[0].nodeValue
        delicious.append((title, link))
    return delicious

def start():
    tags = input("tags (use + between tags) :: ")
    count = input("count (between 1 and 100) :: ")
    delicious = getLinksByTag(tags, count)
    while(len(delicious)):
        index = 0
        print("\n" * 100, "-" * 80)
        for title, link in delicious:
            print("[{0:02d}] :: {1}".format(index, str(title.encode('ascii', 'ignore').decode())))
            index += 1
        try:
            choice = int(input("open (100 to quit) :: "))
        except ValueError:
            choice = -1
        if(0 <= choice < len(delicious)):
            webbrowser.open(delicious[choice][1])
        if(choice == 100):
            break
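If you want to experiment with the DOM calls used above (getElementsByTagName, childNodes, nodeValue) without hitting the network, a minimal sketch with a made-up inline feed works just as well:

```python
import xml.dom.minidom

# A made-up, minimal RSS-like document standing in for the del.icio.us feed.
RSS = """<rss><channel>
<item><title>First bookmark</title><link>http://example.com/a</link></item>
<item><title>Second bookmark</title><link>http://example.com/b</link></item>
</channel></rss>"""

def extract_links(xml_text):
    dom = xml.dom.minidom.parseString(xml_text)
    results = []
    for item in dom.getElementsByTagName("item"):
        # Each <title>/<link> holds a single text node; nodeValue is its string.
        title = item.getElementsByTagName("title")[0].childNodes[0].nodeValue
        link = item.getElementsByTagName("link")[0].childNodes[0].nodeValue
        results.append((title, link))
    return results

print(extract_links(RSS))
```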
Video statistics:
- Video plays: 467

A very well explained video. I'm using Python to parse an XML document from the Google Books service, and your explanations were very helpful. Python is my first programming language, and I've only been at it a little while.
A few comments:
Since an empty list evaluates to false, len isn't needed; you can do:
while(delicious):
There's no need for a counter either, as enumerate yields the index along with each item. Here's how I would've written start:
def start():
    tags = input("tags (use + between tags) :: ")
    count = input("count (between 1 and 100) :: ")
    delicious = getLinksByTag(tags, count)
    choice = -1
    while delicious and choice != 100:
        print("\n" * 100, "-" * 80)
        for index, (title, link) in enumerate(delicious):
            print("[{0:02d}] :: {1}".format(index, str(title.encode('ascii', 'ignore').decode())))
        try:
            choice = int(input("open (100 to quit) :: "))
            webbrowser.open(delicious[choice][1], 2)
        except (ValueError, IndexError) as e:
            pass
You could also store the displayed options, preventing the need to loop over the list for each iteration's display.
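The two idioms this comment relies on (an empty list is falsy, and enumerate() supplies the index) can be checked in isolation; the sample data below is made up:

```python
delicious = [("First bookmark", "http://example.com/a"),
             ("Second bookmark", "http://example.com/b")]

# An empty list is falsy, so "while delicious:" needs no len() call.
print(bool([]), bool(delicious))  # False True

# enumerate() yields (index, item) pairs, so no manual counter is required.
for index, (title, link) in enumerate(delicious):
    print("[{0:02d}] :: {1}".format(index, title))
```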
Good stuff, well laid out. A short example to see how easy it is to mix and match stuff from the internet.
Terrific!
Good tutorial. Aside from some minor imprecisions, very good educational material. Keep them coming.
Yes, I think it's Python's power that allowed me to write something useful with a minimal amount of code.
Brilliant demonstration of Python's power and versatility! I've got almost 5000 delicious tags so this gives me a start on some serious datamining. I think many people will be blown away by the size of that delicious module, given what it achieves. It's great to see someone demonstrating one of their own projects, and such a cool one at that. Thanks.
The positive numbers 1, 2, 3... are known as natural numbers. The program below takes a positive integer from the user and calculates the sum up to the given number.
Visit this page to find the sum of natural numbers using a loop.
Sum of Natural Numbers Using Recursion
#include <stdio.h>

int addNumbers(int n);

int main() {
    int num;
    printf("Enter a positive integer: ");
    scanf("%d", &num);
    printf("Sum = %d", addNumbers(num));
    return 0;
}

int addNumbers(int n) {
    if (n != 0)
        return n + addNumbers(n - 1);
    else
        return n;
}
Output
Enter a positive integer: 20 Sum = 210
Suppose the user entered 20.
Initially, addNumbers() is called from main() with 20 passed as an argument. Each call adds n to the result of addNumbers(n - 1), so the argument decreases by 1 on every recursive call until n is equal to 0.

When n is equal to 0, there is no recursive call, and the accumulated sum of integers is ultimately returned to the main() function.