text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
A Hammier Javascript
Ham is another altJS language, similar to CoffeeScript. What makes Ham different is that it is written as a PEG, and does not have significant whitespace. Ham looks very similar to Javascript at first, but offers (hopefully) many useful features.
Ham was written using the Canopy PEG Parser Generator, and Javascript. I am currently working towards self-hosting Ham but it is not quite there yet.
Ham is written in an MVC style manner, where model is the AST, view is the javascript translations (using ejs templates), and the controller is the tree translators. This makes Ham extremely easy to hack on, and fun!
Since Ham is extremely similar to Javascript, you can get almost perfect syntax hilighting for free by using the Javascript hilighters, which is a pretty neat side effect.
Ham supports python style list ranges and slicing.
var range = [1..5];range === [1, 2, 3, 4, 5]; // truerange[1:] === [2, 3, 4, 5]; // truerange[:4] === [1, 2, 3, 4]; // truerange[::2] === [1, 3, 5]; // true
Ham supports list comprehensions, similar in style to Haskell.
var cross = [x*y | x <- range, y <- range[::-1]];
Ham makes it fun to use lambda's.
var sum = |x, y| { return x + y; }// If the body of the lambda is a single expression,// then the `return` statement and semicolon can be dropped.var sum = |x, y| { x + y }// Lambda's are an easy way to iterate a list:[1, 2, 3].each(|| { console.log('repeating'); });// If the lambda takes no parameters, the `||` can be dropped.[1, 2, 3].each({ console.log('repeating');});// When invoking a function with a lambda as the _only_ parameter, the parentheses can be dropped[1, 2, 3].each {console.log('repeating');};
Some people would prefer to use Classical Inheritence instead of Javascript's prototypical inheritence, that's fine:
class Hamburger extends MeatMeal {eat: { console.log('om nom nom'); }};// Ham just uses Backbone style .extend() for inheritence, so this translates easily to:// var Hamburger = MeatMeal.extend({ ... });
Stolen from Coffeescript, is the prototype shortcut:
String::startsWith = |str| { this.substr(0, str.length) === str };
Would be nice to have some inference at compile time, with contracts at runtime for what couldn't be inferred.
var x:string = 3; // TypeError -> typeof "x" is string.var sum = |x:num, y:num| { x + y }; // we could infer the return type easily herevar idk = ||:string { "hello" }; // I'm not sold on the return type syntax here
I like python style imports, but I think it might be hard/impossible to reconcile it with CommonJS style require. Another option is to rewrite a CommonJS style require for the browser, similar to browserify.
import Backbone, _ from 'vendor/backbone'; // would work great for browser, but hard for CommonJS
I also sometimes find myself with a need for python style Decorators, so Ham will have some form of them.
@watch(notify_change)var the_ghost_man = 3;
Yeah, I haven't gotten around to unary operators yet. I've been focussing on the cool stuff for now.
I haven't implemented while or for loops yet, as I am still experimenting with syntax for them. I've been getting by
largely with the combination of ranges and list comprehensions with
.each.
npm install -g harm
Then write some Ham.js code, and
harm <filename> to run it. | https://www.npmjs.com/package/ham-script | CC-MAIN-2016-07 | refinedweb | 549 | 64 |
Search: Search took 0.02 seconds.
- 31 Aug 2012 2:23 AM
- Replies
- 2
- Views
- 1,247
Try the workaround:
public class MyGridView<M> extends GridView<M> {
@Override
protected void afterRender() {
dataTable.getStyle().setProperty("tableLayout", "auto");
...
- 10 Jul 2012 1:35 AM
- Replies
- 2
- Views
- 879
No, "Selected Company" provides the details fields below and left-side with primary data. Cancel resets the editable fields to this data.
- 9 Jul 2012 1:08 PM
- Replies
- 2
- Views
- 879
Steps to reproduce for any 3.0.0* versions:
Change the field "Updated:" to any other date with the date picker, not with keyboard
Click...
- 20 Feb 2012 2:16 PM
- Replies
- 1
- Views
- 818
Reproduced in Beta-3.
- 27 Jan 2012 1:11 PM
- Replies
- 1
- Views
- 818
Keyboard input is impossible in Chrome 16.
- 12 Jan 2012 8:39 AM
- Replies
- 24
- Views
- 6,306
ComboBoxCell class is missing constructor which accepts custom appearance. Is there any workaround for this?
- 20 Nov 2011 1:04 PM
- Replies
- 3
- Views
- 1,846
Tracing my code in dev mode I got
java.lang.IllegalArgumentException: String is not complete HTML (ends in non-inner-HTML context): <div id='x-widget-68_[hnsf'U]' class='GJA1Q0MMGB'>
Since the...
- 14 Nov 2011 11:09 AM
- Replies
- 5
- Views
- 2,283
Try to bind a StoreFilterField with a Grid and
1. Load store data
2. Type filter query
3. Reload the store with another data set
4. Press backspace until StoreFilterField is empty
5. Reload the...
- 14 Nov 2011 2:31 AM
- Replies
- 5
- Views
- 2,283
My workaround for this:
new ListStore(props.id()) {
@Override
public void replaceAll(List newItems) {
super.replaceAll(newItems);
applyFilters();
...
- 13 Nov 2011 6:37 AM
- Replies
- 5
- Views
- 2,283
I think that the method com.sencha.gxt.data.shared.ListStore#replaceAll is missing "else" part like
if (isFiltered()) {
...
} else {
visibleItems.addAll(newItems);
}
I caught it...
- 28 Oct 2011 9:58 AM
I've found the what was caused the mistake:
public interface Binder extends UiBinder<Widget, MainView> {}
should be
public interface Binder extends UiBinder<Widget, MainViewImpl> {}
and...
- 23 Oct 2011 1:37 PM
Thank you for the quick answer. The code was copied to a blank GWTP view with provided=true. I think the cause of this issue is somewhere outside the pasted code fragments. But I don't know where...
- 22 Oct 2011 2:28 PM
I'm playing with the code borrowed from and I cannot resolve the following issue:
ERROR: com.sencha.gxt.data.shared.TreeStore...
Results 1 to 13 of 13 | http://www.sencha.com/forum/search.php?s=f6b23bed198697d904b6cc4cb9449d02&searchid=10269109 | CC-MAIN-2015-11 | refinedweb | 426 | 75.71 |
A working skeleton to get started with outside-in TDD
Test Driven Development is part of my daily work as a developer. I shared already a few thoughts about it here and strategies to test legacy applications as well.
Recently I read the book Growing Object Oriented Software Guided by Tests and had a few ideas on how to approach things in a different way, thinking on testing from end-to-end, from the start - I relate this approach to the London school of TDD, also known as outside-in TDD.
Even though the style is well known in the industry, getting a proper setup is not something standardized - at least, I couldn’t find any. On one hand, it vary for different programming languages, for example, java uses junit1 and selenium2 whereas in PHP it could be PHPUnit3 and Behat4, in javascript jest5 and cypress6 (or any combination between them). On the other hand, such setup is not take into account when deciding which style to choose.
In this blog post I am going to share, the setup I have used and how I apply outside-in TDD in my projects. I used the “skeleton” approach because this is what I feel comfortable when building something to get started with. I relate that with any bootstrap that any framework provides.
Common ground
To get start with the setup first, there is a bit of history to go over the TDD style we are aiming here. Outside-in is known to start by broad acceptance test (from the outside, no worries about the implementation) and as soon as it fails (for the right reason) we switch to the next test, but in this case, more specific, to start implementing the needed functionality to make the acceptance test to pass. This is also known as the double TDD loop depicted in the GOOS book [1].
The question is, how to have the minimum setup to get started with outside-in?
To start to answer this question the approach I chose was to think about what I need to start with outside-in? The minimum requirements I could think of are:
- Be able to do and intercept HTTP request of any kind (be it loading style, javascript, requests to third party apis and so on)
- Available documentation and widely adopted by the community
- It should allow writing test without relying too much in implementation details
An extra nice to have would be to avoid switching of testing framework. Allowing writing both acceptance tests and unit tests, therefore I found this one to be a bit tricky as some tread-off need to be taken into account. For the time being I decided to postpone this kind of decision.
I noticed that outside-in means different things depending who you ask. The common ground I found is that developers agree that outside means the part that is far away from the implementation. For example, asserting on text output, api responses, browser elements - all of those are what we expect, but without saying how.
Cypress and testing library
One of the first ecosystems I got to start with outside-in was javascript. In my opinion, one of the key aspects that made me chose cypress and testing library ones the fact that they were popular and I agree with the philosophy.
For example, cypress is made in nodejs and has integration with different browser venders, making it the project to go when the we are talking about browser automation.
On the other hand, testing library grew its popularity due the fact that it treats testing as it should: focusing on the code behavior, rather than implementation details. Which allows refactor and change on the code without coupling with tests.
The folder structure I chose has no particular reason, it’s one I felt more comfortable with (files like package.json, the folder public/ and others have been removed for readability):
├── cypress -------------------| │ ├── downloads | Under cypress is where the tests │ ├── fixtures | are far away from implementation and │ │ ├── 5kb.json | where I used the name acceptance to │ │ ├── bigUnformatted.json | depict that. │ │ ├── example.json | │ │ ├── formatted | │ │ │ └── 2kb.txt | │ │ └── unformatted | │ │ ├── 2kb.json | │ │ └── generated.json | │ ├── integration | │ │ └── acceptance.spec.ts | Here the file has the first loop of │ ├── plugins | outside-in. Writing this test failing │ │ └── index.ts | first and then moving to the inner │ ├── screenshots | loop. │ ├── support | │ │ ├── commands.js | │ │ └── index.js | │ ├── tsconfig.json | │ └── videos | │ └── acceptance.spec.ts.mp4 | ├── cypress.json -------------------| ├── src │ ├── App.test.tsx │ ├── App.tsx │ ├── components │ │ ├── Button.test.tsx <----- │ │ ├── Button.tsx <----- │ │ ├── JsonEditor.tsx <----- source code and test under the same │ │ ├── Label.test.tsx <----- folder. Here is where we care about │ │ ├── Label.tsx <----- the implementation, double loop TDD. │ │ └── tailwind.d.ts <----- │ ├── core │ │ ├── cleanUp.ts │ │ ├── formater.test.ts │ │ ├── formatter.ts │ │ └── __snapshots__ │ │ └── formater.test.ts.snap │ ├── index.scss │ ├── index.tsx │ ├── react-app-env.d.ts │ ├── reportWebVitals.ts │ ├── setupTests.ts │ ├── __snapshots__ │ │ └── App.test.tsx.snap
For cypress, the folder structure is the same as the default installation when setting up. Everything related is inside the folder cypress/.
For testing library, I used another approach which is having the test files in the same directory as the production code. Personally, I found easier to get going on the daily basis having those together, instead of a folder called tests, for two reasons:
- I don’t have to mirror the test structure with the source code (1-1 association)
- It makes easier to have a mental snapshot of the feature I am working has the tests along side the folder
I am using this specific setup for the json-tool, a tool that makes formatting json easy and combines privacy first. The following snippet was extracted from the piece of code in the acceptance.spec.ts, to start with the first loop in the outside-in mode:
describe('json tool', () => { const url = '/'; beforeEach(() => { cy.visit(url); }); describe('User interface information', () => { it('label to inform where to place the json', () => { cy.get('[data-testid="label-json"]').should('have.text', 'place your json here'); }); }); describe('Basic behavior', () => { it('format valid json string', () => { cy.get('[data-testid="json"]').type('{}'); cy.get('[data-testid="result"]').should('have.value', '{}'); }); it('shows an error message when json is invalid', () => { cy.get('[data-testid="json"]').type('this is not a json'); cy.get('[data-testid="result"]').should('have.value', 'this is not a json'); cy.get('[data-testid="error"]').should('have.text', 'invalid json'); }); }); });
The next example is the implementation for the details in which we need to build in other to make the acceptance test to pass, keep in mind that for the inner loop in the outside-in, we might have tests distributed in different files (this is exactly what happened with the json-tool). The file App.test.tsx holds the specific details in the test:
import { fireEvent, render, screen, act } from '@testing-library/react'; import App from './App'; import userEvent from '@testing-library/user-event'; import { Blob } from 'buffer'; import Formatter from './core/formatter'; describe('json utility', () => { test('renders place your json here label', () => { render(<App />); const placeJsonLabel = screen.getByTestId('label-json'); expect(placeJsonLabel).toBeInTheDocument(); }); test('error message is hidden by default', () => { render(<App />); const errorLabel = screen.queryByTestId(/error/); expect(errorLabel).toBeNull(); }); test('inform error when json is invalid', async () => { render(<App />); const editor = screen.getByTestId('json'); await act(async () => { fireEvent.change(editor, {target: { value: 'bla bla' }}); }); const result = screen.getByTestId('error'); expect(result.innerHTML).toEqual('invalid json'); }); test.each([ ['{"name" : "json from clipboard"}', '{"name":"json from clipboard"}'], [' {"name" : "json from clipboard"}', '{"name":"json from clipboard"}'], [' {"name" : "json from clipboard"}', '{"name":"json from clipboard"}'], [' { "a" : "a", "b" : "b" }', '{"a":"a","b":"b"}'], ['{ "a" : true, "b" : "b" }', '{"a":true,"b":"b"}'], ['{ "a" : true,"b" : 123 }', '{"a":true,"b":123}'], ['{"private_key" : "-----BEGIN PRIVATE KEY-----\nMIIEvgI\n-----END PRIVATE KEY-----\n" }', '{"private_key":"-----BEGIN PRIVATE KEY-----\nMIIEvgI\n-----END PRIVATE KEY-----\n"}'], [`{ ":"" }` ], ['{"key with spaces" : "json from clipboard"}', '{"key with spaces":"json from clipboard"}'], ])('should clean json white spaces', async (inputJson: string, desiredJson: string) => { render(<App />); const editor = screen.getByTestId('json'); await act(async () => { userEvent.paste(editor, inputJson); }); await act(async () => { userEvent.click(screen.getByTestId('clean-spaces')); }); const result = screen.getByTestId('result'); expect(editor).toHaveValue(inputJson); expect(result).toHaveValue(desiredJson); }); });
The key takeaway here is the difference between the outer loop and the inner loop when writing outside-in. Starting from more generic way and the going down into the details as I hope is depicted in the tests.
Footnotes
Table of contents
Got a question?
If you have question or feedback, don't think twice and click here to leave a comment. Just want to support me? Buy me a coffee! | https://marabesi.com/tdd/2021/12/16/a-working-skeleton-to-get-started-with-outside-in-tdd.html | CC-MAIN-2022-05 | refinedweb | 1,428 | 54.83 |
I got a compilation error in my ASP.NET MVC3 project that tested my sanity today. (As always, names are changed to protect the innocent)
The type or namespace name 'FishViewModel' does not exist in the namespace 'Company.Product.Application.Models' (are you missing an assembly reference?)
The type or namespace name 'FishViewModel' does not exist in the namespace 'Company.Product.Application.Models' (are you missing an assembly reference?)
Sure looks easy! There must be something in the project referring to a FishViewModel.
The first thing I noticed was the that error was occuring in a folder clearly not in my project and in files that I definitely had not created:
%SystemRoot%\Microsoft.NET\Framework\(versionNumber)\Temporary ASP.NET Files\
App_Web_mezpfjae.1.cs
I also ascertained these facts, each of which made me more confused than the last:
The problem stemmed from a file that was not included in the project but still present on the file system:
(By the way, if you don’t know this trick already, there is a toolbar button in the Solution Explorer window to “Show All Files” which allows you to see files all files in the file system)
In my situation, I was working on the mission-critical Fish view before abandoning the feature. Instead of deleting the file, I excluded it from the project.
However, this was a bad move. It caused the build failure, and in order to fix the error, this file must be deleted.
By the way, this file was not in source control, so the build server did not have it. This explains why my build server did not report a problem for me.
So, what’s going on? This file isn’t even a part of the project, so why is it failing the build?
This is a behavior of the ASP.NET Dynamic Compilation. This is the same process that occurs when deploying a webpage; ASP.NET compiles the web application’s code. When this occurs on a production server, it has to do so without the .csproj file (which isn’t usually deployed, if you’ve taken your time to do a deployment cleanly). This process has merely the file system available to identify what to compile.
So, back in the world of developing the webpage in visual studio on my developer box, I run into the situation because the same process is occuring there. This is true even though I have more files on my machine than will actually get deployed.
I can’t help but think that this error could be attributed back to the real culprit file (Fish.cshtml, rather than the temporary files) with some work, but at least the error had enough information in it to narrow it down.
I had previously been accustomed to the idea that for c# projects, the .csproj file always “defines” the build behavior. This investigation has taught me that I’ll need to shift my thinking a bit to remember that the file system has the final say when it comes to web applications, even on the developer’s machine! | http://gamecontest.geekswithblogs.net/jkauffman/archive/2012/03/15/development-quirk-from-asp.net-dynamic-compilation.aspx | CC-MAIN-2019-22 | refinedweb | 516 | 64.41 |
I'm interested in programming languages design and implementation and the Ring programming language (general-purpose multi-paradigm language released on January 25th, 2016) is my third project in this domain after Programming Without Coding Technology (2005-2015) and Supernova programming language (2009-2010).
In this article, I will try to introduce the language, why it's designed! and what you can do using it.
From the beginning, remember that this is an open source project that you can get, use and modify for free (MIT License). If you want to contribute or get the source code, just check the project source code (GitHub) and the language website for more resources like (documentation and support group).
Also, you can download Ring 1.0 for Windows (Binary Release) or Ring 1.0 for Ubuntu Linux (Binary Release).
Also we have Ring 1.0 for Mac OS X (Binary Release) and Ring 1.0 for Mobile App Development using Qt
In November, 2011, I started to think about creating a new version of the Programming Without Coding Technology (PWCT) software from scratch. I was interested in creating multi-platform edition of the software beside adding support for Web & Mobile development. Most of the PWCT source code was written in VFP (Microsoft Visual FoxPro 9.0 SP2) and the software comes with a simple scripting language for creating the components called (RPWI). The software contains components that support code generation in programming languages like Harbour, C, Supernova & Python.
What I was looking for is a programming language that can be used to build the development environment, provides multi-platform support, more productivity, better performance, can be used for components scripting & can be used for developing different kinds of applications. Instead of using a mix of programming languages, I decided to use one programming language for creating the development environment, for components scripting & for creating the applications.
I looked at many programming languages like C, C++, Java, C#, Lua, PHP, Python & Ruby. I avoided using C or C++ directly because I want high-level of productivity more than the level provided by these languages, also a language behind visual programming environment for novice programmers or professionals must be easy to use & productive. Java & C# are avoided for some reason too! I wanted to use a dynamic programming language and these languages are static typing, Java is multi-platform, also C# through Mono, but the use of huge number of classes and forcing the use of Object-Orientation, using a verbose language is not right for me. I need a small language, but fast and productive, also I need better control on the Garbage Collector (GC), I need a better one that is designed for fast applications.
Lua is small and fast, but it’s avoided because I need more powerful language for large applications. PHP is a Web programming language and it’s syntax is very similar to C, this leads to a language not general as I want and not simple as I need to have. Python & Ruby are more like what I need, but I need something more simple, smaller, faster & productive. Python and Ruby are Case-Sensitive, the list index start counting from 0, you have to define the function before calling it, Ruby usage of Object-Orientation and message passing is more than what I need and decrease performance, Python syntax (indentation, using self, :, pass & _) is not good for my goals.
All of these languages are successful languages, and very good for their domains, but what I need is a different language that comes with new ideas and intelligent implementation (Innovative, Ready, Simple, Small, Flexible and Fast).
The Ring.
The language is designed for a clear goal:
I have 10 samples to introduce, you can run all of these samples using the online version provided by the Ring website.
Sure I will start with the hello world program!
See "Hello, World!"
The Ring uses (See) and (Give) for printing output and getting input from the user.
See
Give
The language uses special keywords (different from other languages). These special words selected to be (small and fast to write and still with clear meaning). Anyway, if you are going to add the language to your project, you can hack the source code and modify anything!
This from the beginning demonstrates that Ring is not a clone from any other language but sure the language borrows some ideas from other languages like C,C++, C#, Java, PHP, Python, Ruby, Lua, Basic, Supernova and Harbour. Also the language coms with new ideas specially for abstraction and creating natural and declarative interfaces to be used as domain-specific languages. Also, the language implementation (transparent and visual) helps in using it in Compiler courses. Being a multiparadigm language that is fast and written in ANSI C also provides a good chance for embedding the language in C/C++ projects.
The language is not case sensitive, you can write "SEE", "see" or "See".
SEE
see
The Main function is optional and will be executed after the statements, and is useful for using the local scope.
The Ring Uses Dynamic Typing and Lexical scoping. No $ is required before the variable name!
You can use the '+' operator for string concatenation and the language is weakly typed and will convert automatically between numbers and strings based on the context.
$
+
string
The list index starts from 1. You can call functions before definition. The assignment operator uses Deep copy (no references in this operation). You can pass numbers and strings by value, but pass lists and objects by reference.
The for in loop can update the list items. We can use Lists during definition as in the next example.
1
aList = [ [1,2,3,4,5] , aList[1] , aList[1] ]
see aList # print 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5
We can easily exit from more than one loop.
for x = 1 to 10
for y = 1 to 10
see "x=" + x + " y=" + y + nl
if x = 3 and y = 5
exit 2 # exit from 2 loops
ok
The language encourages organization, forget bad days using languages where the programmer starts with function then class then function and a strange mix between things! Each source file follows the next structure:
This enables us to use Packages, Classes and Functions without the need to use a keyword to end these components.
We can write one line comments and multi-line comments
The comment starts with # or //.
Multi-line comments are written between /* and */.
#
//
/*
*/
Ring comes with transparent implementation. We can know what is happening in each compiler stage and what is going on during the run-time by the Virtual Machine Example: ring helloworld.ring -tokens -rules -ic
ring helloworld.ring -tokens -rules -ic
==================================================================
==================================================================
Hello, World!
The Ring programming language is designed using the PWCT visual programming tool and you will find the visual source of the language in the folder "visualsrc" - *.ssf files and the generated source code (In the C Language) in the src folder and the include folder. Fork me on GitHub.
The next screen shot demonstrates what I mean by visual implementation:
The language is not line sensitive, you don't need to write ; after statements, also you don't need to press ENTER or TAB, so we can write the next code:
;
See "The First Message" See " Another message in the same line! " + nl
The next code create a class called Point that contains three attributes X, Y and Z. No keywords are used to end the package/class/function definition. Also, we can write the attributes names directly below the class name.
Point
X
Y
Z
We can use classes and functions before their definition, In this example, we will create new object, set the object attributes, then print the object values..
{ }
After the object access using { } if the class contains a method called BraceEnd(), it will be executed!
BraceEnd()
TimeForFun = new journey
# The first surprise!
TimeForFun {
Hello it is me # What a beatiful programming world!
}
# Our Class
Class journey
hello=0 it=0 is=0 me=0
func GetHello
See "Hello" + nl
func braceEnd
See "Goodbye!" + nl
The next example presents how to create a class that defines two instructions:
Also keywords that can be ignored like the ‘the’ keyword
the
We learned how to use Natural statements to execute our code and using the same features, we can use nested structures to execute our code.
The next example from the Web library, generates() + "<a href='" +
Link + "'> "+ Title + " </a> " + nl
Class Div from ObjsBase
Func braceend
cOutput += nl+'<div'
addattributes()
AddStyle()
getobjsdata()
cOutput += nl+"</div>" + nl
cOutput = TabMLString(cOutput)
We can use the Ring language from C/C++ programs using the next functions:
RingState *ring_state_init();
ring_state_runcode(RingState *pState,const char *cCode);
ring_state_delete(RingState *pState);
The idea is to use the ring_state_init() to create new state for the Ring Interpreter, then call the ring_state_runcode() function to execute Ring code using the same state. When we are done, we call the ring_state_delete() to free the memory.
ring_state_init()
ring_state_runcode()
ring_state_delete()
#include "ring.h"
#include "stdlib.h"
int main(int argc, char *argv[])
{
RingState *pState = ring_state_init();
printf("welcome\n");
ring_state_runcode(pState,"see 'hello world from the ring programming language'+nl");
ring_state_delete(pState);
}
Sure the previous sample can't be tested using the online version of the language where you will need a C compiler + the Ring language header files and library (that you can build from the source code)
The Ring API comes with the next functions to create and delete the state. Also we have functions to create new variables and get variables values.
RingState * ring_state_init ( void ) ;
RingState * ring_state_delete ( RingState *pRingState ) ;
void ring_state_runcode ( RingState *pRingState,const char *cStr ) ;
List * ring_state_findvar ( RingState *pRingState,const char *cStr ) ;
List * ring_state_newvar ( RingState *pRingState,const char *cStr ) ;
void ring_state_main ( int argc, char *argv[] ) ;
void ring_state_runfile ( RingState *pRingState,const char *cFileName ) ;
We can create more than one ring state in the same program and we can create and modify variable values.
To get the variable list, we can use the ring_state_findvar() function.
ring_state_findvar()
To create new variable, we can use the ring_state_newvar() function.
ring_state_newvar()
Example:
#include "ring.h"
#include "stdlib.h"
int main(int argc, char *argv[])
{
List *pList;
RingState *pState = ring_state_init();
RingState *pState2 = ring_state_init();
printf("welcome\n");
ring_state_runcode
(pState,"see 'hello world from the ring programming language'+nl");
printf("Again from C we will call ring code\n");
ring_state_runcode(pState,"for x = 1 to 10 see x + nl next");
ring_state_runcode(pState2,"for x = 1 to 5 see x + nl next");
printf("Now we will display the x variable value from ring code\n");
ring_state_runcode(pState,"see 'x value : ' + x + nl ");
ring_state_runcode(pState2,"see 'x value : ' + x + nl ");
pList = ring_state_findvar(pState,"x");
printf("Printing Ring variable value from C , %.0f\n",
ring_list_getdouble(pList,RING_VAR_VALUE));
printf("now we will set the ring variable value from C\n");
ring_list_setdouble(pList,RING_VAR_VALUE,20);
ring_state_runcode(pState,"see 'x value after update : ' + x + nl ");
pList = ring_state_newvar(pState,"v1");
ring_list_setdouble(pList,RING_VAR_VALUE,10);
pList = ring_state_newvar(pState,"v2");
ring_list_setdouble(pList,RING_VAR_VALUE,20);
ring_state_runcode(pState,"see 'v1 + v2 = ' see v1+v2 see nl");
ring_state_runcode(pState,"see 'end of test' + nl");
ring_state_delete(pState);
ring_state_delete(pState2);
}
Output:
hello world from the ring programming language
Again from C we will call ring code
1
2
3
4
5
6
7
8
9
10
1
2
3
4
5
Now we will display the x variable value from ring code
x value : 11
x value : 6
Printing Ring variable value from C , 11
now we will set the ring variable value from C
x value after update : 20
v1 + v2 = 30
end of test
We can extend the Ring Virtual Machine (RingVM) by adding new functions written in the C programming language or C++. The RingVM comes with many functions written in C that we can call like any Ring function.
We can extend the language by writing new functions then rebuilding the RingVM again, or we can create shared library (DLL/So) file to extend the RingVM without the need to rebuild it.
Each module function may contain the next steps:
The structure is very similar to any function (Input - Process - Output). But here, we will use the Ring API for the steps 1, 2, 3 and 5.
The next code represents the sin() function implementation using the Ring API and the sin() C function.
sin()
void ring_vm_math_sin ( void *pPointer )
{
if ( RING_API_PARACOUNT != 1 ) {
RING_API_ERROR(RING_API_MISS1PARA);
return ;
}
if ( RING_API_ISNUMBER(1) ) {
RING_API_RETNUMBER(sin(RING_API_GETNUMBER(1)));
} else {
RING_API_ERROR(RING_API_BADPARATYPE);
}
}
You can read more about extension from this tutorial.
Also, you can use a code generator that comes with Ring to quickly generate wrappers for C/C++ functions/classes.
You can use the language with any editor, also the Ring team provides extensions for Notepad++, SubLime Text 2, Geany Editor, Visual Studio and Atom editor.
The next screen shot demonstrates using Ring in SubLime Text 2.
The next screen shot demonstrates using Ring in Atom (my favorite code editor).
You can learn about using the Allegro game programming library from the Ring language through this tutorial.
The next example displays and rotates an image:
Load "gamelib.ring"
al_init()
al_init_image_addon()
display = al_create_display(640,480)
al_set_target_bitmap(al_get_backbuffer(display))
al_clear_to_color(al_map_rgb(255,255,255))
image = al_load_bitmap("man2.jpg")
al_draw_rotated_bitmap(image,0,0,250,250,150,0)
al_draw_scaled_bitmap(image,0,0,250,250,20,20,400,400,0)
al_flip_display()
al_rest(2)
al_destroy_bitmap(image)
al_destroy_display(display)
The next screen shot demonstrates the application during the runtime:
For now, the Ring programming language comes with a simple CGI Library for creating web applications.
Hello World program:
#!b:\ring\ring.exe -cgi
Load "weblib.ring"
Import System.Web
WebPage()
{
Text("Hello World!")
}
The library is written in Ring (around 5K lines of code).
The next screen shot demonstrates the power of the library that uses Object-Oriented Programming and the MVC design pattern.
You can read more about the library from this tutorial.
The next example asks the user about his/her name, then says Hello!
Load "guilib.ring"
MyApp = New qApp {
win1 = new qWidget() {
setwindowtitle("Hello World")
setGeometry(100,100,400,130)
label1 = new qLabel(win1) {
settext("What is your name ?")
setGeometry(10,20,350,30)
setalignment(Qt_AlignHCenter)
}
btn1 = new qpushbutton(win1) {
setGeometry(10,200,100,30)
settext("Say Hello")
setclickevent("pHello()")
}
btn2 = new qpushbutton(win1) {
setGeometry(150,200,100,30)
settext("Close")
setclickevent("pClose()")
}
lineedit1 = new qlineedit(win1) {
setGeometry(10,100,350,30)
}
layout1 = new qVBoxLayout() {
addwidget(label1)
addwidget(lineedit1)
addwidget(btn1)
addwidget(btn2)
}
win1.setlayout(layout1)
show()
}
exec()
}
Func pHello
lineedit1.settext( "Hello " + lineedit1.text())
Func pClose
MyApp.quit()
The next screen shot demonstrates the application during the runtime.
You can read more about GUI development using Ring from this tutorial.
The tutorial comes with many samples including a simple Cards game developed using RingQt where each player get 5 cards, the cards are unknown to any one. Each time one player clicks on one card to see it, if the card is identical to another card, the play gets points for each card. If the card value is “5”, the player get points for all visible cards.
5
The next screen shot while running the game using a Mobile (Android):
The Ring is a multi-paradigm language that will give you the choice to select the right paradigm for your problem.
Try the language (It's FREE OPEN SOURCE), then think about creating new projects (frameworks) based on the language power (declarative programming and natural programming) and I'm sure that you will get success.
We need contributors so feel free to join and help us (improve our code, add features, report bugs, fix some bugs!, write libraries, provide extensions, write documentation,. | https://www.codeproject.com/Articles/1089887/The-Ring-Programming-Language?fid=1902936&df=90&mpp=25&prof=True&sort=Position&view=Normal&spc=Relaxed&fr=26 | CC-MAIN-2018-47 | refinedweb | 2,610 | 59.74 |
The StringWriter class is s subclass of Writer class and it writes the String to an output stream. To write a string, this character stream collects the string into a string buffer and then constructed a string. The buffer of StringWriter automatically grows according to data. The important methods of StringWriter class are write(), append(), getBuffer(), flush() and close().
public class StringWriter extends Writer
import java.io.*; public class StringWriterTest { public static void main(String args[]) { String str = "Welcome to Tutorials Point"; try { StringWriter sw = new StringWriter(); sw.write(str); StringBuffer sb = new StringBuffer(); sb = sw.getBuffer(); System.out.println("StringBuffer: "+ sb); System.out.println("String written by StringWriter: "+ sw); sw.close(); } catch (IOException ioe) { System.out.println(ioe); } } }
StringBuffer: Welcome to Tutorials Point String written by StringWriter: Welcome to Tutorials Point | https://www.tutorialspoint.com/what-is-the-importance-of-a-stringwriter-in-java | CC-MAIN-2019-51 | refinedweb | 132 | 52.26 |
Prepare your environment for cross-platform
C/C++ development with NetBeans, and put the
C/C++ Pack to work creating a native library for Java applications
When NetBeans 5.5 was released in late 2006, it radically changed its
own value proposition by offering first-class support for a language that
doesn’t run inside a JVM. The NetBeans C/C++ pack provided to C/C++ programmers
most features Java developers were already used to: advanced source editing
with syntax highlighting and code completion, built-in CVS support, hyperlinks
to navigate function declarations, a class hierarchy browser, an integrated
debugger, and integration with the make tool.
This article
focuses on how the C/C++ pack can help Java developers. Although I’m sure you
all would like to code the whole world in pure Java, reality frequently
challenges us to interface with native code, be it legacy systems, a device
vendor SDK or a high-performance math library. Also, sometimes we need to use
native code to improve the user experience, by means of tighter integration
with the underlying operating system. Wouldn’t it be better to do all this from
the same IDE we already use for Java development?
We’ll show how
to leverage NetBeans and the C/C++ Pack to develop portable native libraries
using C/C++, and how to integrate them with Java code in a way that eases
deployment in multiple platforms.
NetBeans C/C++
Pack is more than just C/C++ coding support for the Java developer. It also
suits many native code projects very well. The sidebar “Other open
source C/C++ IDEs” compares the Pack with some popular open-source
IDEs for C/C++.
Installing
NetBeans C/C++ Pack
Installing the
C/C++ Pack per se will be a no-brainer for most users. No matter if you’ve
installed the NetBeans IDE using the zip package or one of the native
installers, you only need to run C/C++ Pack’s installer and point it to your
NetBeans IDE installation directory. (Note that, although the C/C++ Pack is
mostly Java code with just one tiny native library, there’s no multiplatform
zip archive like the ones provided for the IDE.)
The installer
itself will work the same for all supported platforms: Windows, Linux and
Solaris. But configuring your environment for using C/C++ Pack may not be so
easy. Just like the core NetBeans IDE needs a compatible JDK installation, the
C/C++ Pack will require a C/C++ compiler and standard libraries and headers. So
you need to install and configure these in advance.
To meet the
Pack’s prerequisites, we’ll rely on the popular suite formed by the GNU C
Compiler (GCC), GNU Binutils, GNU Make and GNU Debugger (GDB). This is the
suite that received most of the QA effort of the C/C++ Pack developer team1, and it’s portable to Windows, Linux
and Solaris environments.
Using the same
compiler suite for all platforms greatly simplifies dealing with portable (and
even non-portable) C/C++ code, as you won’t need to spend time fighting
compiler directives, runtime library inconsistencies and language dialects.
Besides, you’ll find that in most cases the GNU toolset competes head-to-head
with other C compilers in both speed and optimization quality.
1 The only other compiler suite supported so far is the Sun Studio product for Solaris
and Linux.
Installing
the GNU toolset on Linux
Linux users
should have no problem obtaining the GNU toolset for their preferred platform.
Mine is Fedora Core 6, and as I installed a “development workstation” using
Anaconda I already had everything ready for NetBeans C/C++ Pack. Users who
didn’t install Linux development tools when configuring their systems should
have no problem using either yum, up2date, yast or apt to install the GNU toolset.
Stay clear of CD-bootable mini-distros
like Knoppix for real development work. Instead, install a full-featured distro
in a native Linux partition in your main hard disk. The few additional
gigabytes used will prove to be a small cost for all the hassle you’ll avoid.
Solaris users
will also find it easy to install the GNU toolset; there are detailed
instructions on the NetBeans Web site. But be warned: if you think you’d be
better served by the native platform C compiler (Sun Studio), think again. This
is because NetBeans C/C++ Pack’s debugger needs the GNU Debugger, and
GDB has some issues running code generated by Sun compilers. So you can use
Sun’s compiler to produce final code, but you’d better use the GNU toolchain
for development.
Installing
the GNU toolset on Windows
Windows users
won’t be able to use native C/C++ compilers from Microsoft, Borland or Intel,
and will have to stick with a Windows port of the GNU toolset. There are two
options: Cygwin
and MinGW.
The C/C++
Pack’s docs at netbeans.org provide detailed instructions for using
Cygwin, but I strongly advise you to use MinGW instead. The reason is that
Cygwin relies on a Unix emulation layer, while MinGW uses native Windows DLLs
for everything. Code compiled with Cygwin uses the standard GNU runtime library
(glibc) on an emulation of Unix system calls, and semantics like mount
points, pipes and path separators. But code compiled with MinGW will use
standard Microsoft runtime libraries such as MSVCRT.DLL.
Cygwin has its
uses, as many Linux and Unix software (specially open-source software) that has
not yet been ported to Windows is easy to run under Cygwin without
virtualization overhead. But I doubt you’d want to compromise stability and
compatibility with the native platform when developing native libraries for use
with Java applications. So MinGW is the way to go. The sidebar “Installing MinGW” provides detailed instructions.
Checking
prerequisites
Whatever your
platform of choice, you need access to the GNU toolset from your operating
system command prompt. It may be necessary to configure the system PATH before
using NetBeans C/C++ Pack. You can check that you have all prerequisites are
available before proceeding by using the commands
displayed in Figure 1. (Although this figure shows a Windows command
prompt, you’ll be able to run the same commands from either the Linux or
Solaris shells.) If you get software releases older than the ones shown,
consider upgrading your GNU toolset.
Figure 1. Verifying that the GNU toolset is installed and configured correctly, and is using compatible releases.
When pure
Java
is not enough
Now that you have NetBeans C/C++ installed
and its prerequisites configured, let’s present this article’s use case. You’re
developing a desktop Java application with cryptographic features, which saves
sensitive data such as key rings and private keys in a local file system
folder. You want to be sure that only the user who’s running the application
can read (and of course write) files to that folder.
The standard Java libraries provide methods
in the java.io.File class for checking if a
file can be read or written by the current user, but these methods don’t check
if other users can also read or write the same files. There are new methods in
Java SE 6 that deal with file permissions, and work in progress under JSR 293;
but if your application has to support Java 5 or 1.4, there’s no escaping from
native code. So our application will use native system calls to verify local
folder permissions during initialization, and refuse to start if it finds the
folder is not secure.
Java doesn’t provide an easy way to declare
external methods, like Free Pascal or Visual Basic, but it does of course
provide the Java Native Interface, a standard and portable way to call native
code from Java and vice versa. With the above use case in mind, we have to
design an abstraction that hides platform details and the corresponding native
code from the higher application layers. In the end, the apparent complexity of
dealing with JNI may actually be an advantage, because it forces us to design
the interface between Java and native code, instead of just going ahead and
invoking operating system APIs directly.
The Java wrapper code
Let’s get our feet wet. Start NetBeans,
create a Java Class Library Project, and name it “OSlib”. This project will
contain all interfaces between our hypothetical application and the native
operating system. Then create a new class named “FilePermissions”, with the
code shown in Listing 1.
Listing 1. FilePermissions.java – Utility class with a native method.);
}
The native keyword, you’ll remember, means that the method’s implementation will be
provided by a native dynamic library. That library in our code is loaded by a
static initializer in the class itself.
Following Test-Driven Development practices,
I’ll create unit tests instead of creating a test application for the OS
interface. Right click Test Packages in the Projects window and select New>Test
for Existing Class to generate a skeleton for testing the native method.
Then change this skeleton to make it look like Listing 2.
Listing 2. Unit tests for FilePermissions native methods
The unit tests use a properties file (shown
in the same listing) to get each test’s target filesystem path. This way, all
file paths can be easily changed to comply with native-platform naming
conventions, without needing to recompile the tests themselves. Also, don’t
forget to create the target files and give them appropriate permissions.
If everything is fine so far, running the
tests (by selecting the FilePermissionsTest class and pressing Shift+F6) should give the output shown in Figure 2.
The UnsatisfiedLinkError exception is thrown because we haven’t yet provided the native
method implementation.
Figure 2. Running JUnit tests for the unfinished native method.
C and C++ are of course much older than Java, and are still the
languages of choice for many high-profile open-source projects. Based on that,
on could guess there would be many other strong cross-platform and open-source
C/C++ IDEs. You’ll find that NetBeans C/C++ Pack may be the strongest one
around, however. Let’s look at some C/C++ Pack’s competitors.
DevCPP
DevCPP is very popular among Windows developers. It’s lightweight, well
supported, and, like NetBeans, relies on external make tools and C/C++
compilers. Additionally, it supports a wide variety of C/C++ compilers. Though
DevCPP is written using Borland Delphi, an attempt to port it to Linux (using
Kylix) failed. So DevCPP is not an option for cross-platform C/C++ development.
OpenWatcom
The Watcom C/C++ compiler is cross-platform but offers no Unix support;
it targets Windows and OS/2. Though not very user-friendly, it comes with an
integrated debugger and a help system. It was once the compiler of choice for
high-performance C/C++ applications, with its enhanced code optimizer and
support for all Intel processor variants. When Sybase bought Watcom, though,
the C/C++ compilers and IDEs fell into obscurity. Later the tools were released
as open-source software. Nowadays, it looks like the community project is going
well, but there’s still no support for Unix and Linux systems. This makes
OpenWatcom essentially a Windows-only IDE and not suitable for our purposes.
Anjuta
Anjuta is based on the complete GNU toolset for C/C++ development. In
addition to the tools supported by C/C++ Pack, it supports the GNU Autotools, a
set of scripts that simplifies generating Makefiles for multiple operating
systems and compilers. It’s also focused on GNOME development, so it provides
templates for GTK, Gnome and Glade applications.
While DevCPP and OpenWatcom are Windows-only, Anjuta and KDeveloper (see
next) are Unix-only. Some users have reported success running both under Cygwin,
but they are still far from providing robust support for compiling and
debugging native Windows applications.
For Unix developers, Anjuta provides integrated access to man pages and GNOME documentation. Its integrated debugger, like C/C++ Pack, relies
on GDB. The latest releases provide integration with Glade, the Gnome visual UI
builder.
KDevelop
Everything said before about Anjuta applies to KDevelop, if you just
replace GTK/Glade/GNOME with Qt/QtDesigner/KDE. Anjuta and KDevelop are strong
C/C++ IDEs for open-source desktops, but they don’t cut it as cross-platform
IDEs.
Eclipse CDT
C/C++ development support in Eclipse is almost as old as Eclipse IDE
itself, but it has not matured as fast as the support for Java. Although
currently labeled as release 4.0, Eclipse CDT doesn’t provide many features
beyond those in NetBeans C/C++ Pack (which is younger).
Also like NetBeans, Eclipse CDT doesn’t integrate yet with visual
development tools for Gnome, KDE or Windows. It has the advantage of supporting
compilers other than the GNU compilers, but this won’t be a real plus if your
goal is developing cross-platform C code.
Red Hat is developing GNU Autotools and RPM generation plug-ins which,
when they are released as production level, may become Eclipse CDT’s real
advantage over NetBeans C/C++ Pack (at least for Unix/Linux users). On the
other hand, NetBeans is the development IDE for Open Solaris, so don’t expect
it to fall short in enhancements for Unix developers.
Conclusion
The only flaw one would find in C/C++ Pack, comparing it to other
open-source alternatives for C/C++ development, is the lack of operating-system
and third-party library documentation support in the help system. That would be
also its main drawback when compared to proprietary C/C++ IDEs. But if you
evaluate alternatives for cross-platform C/C++ development, the strongest (and
only) competitor for NetBeans is also its main competitor in the Java space,
that is, Eclipse.
The native code project
Our unit tests are ready, but getting native
code working alongside Java code is not trivial. We’ll mock the native method
implementation so we can focus on how to build a native library that can be
called by Java code. Start by creating a C/C++ Dynamic Library Project, as
shown in Figure 3. Name the project “NativeOSlib” and clear the “Set as
main project” checkbox.
Figure 3. Creating a C/C++ Project in NetBeans.
New C/C++ projects are created empty, except
for a generated Makefile (see Figure 4), and are structured in
virtual folders organized by file type – not by package names like Java
projects. You’ll be pleased to know that NetBeans C/C++ Pack includes a
Makefile editor (even though there’s still no support for running arbitrary
Makefile targets as there is for Ant buildfiles).
Figure 4. The new C/C++ project in NetBeans’ Projects window.
Generating JNI Stubs
We’re ready to begin writing our C code.
First remember that all JNI-compliant native methods should use the declaration
generated by JDK’s javah tool. You could turn to the operating system
command prompt to generate the C JNI stubs, but there’s a better solution. It’s
the JNI Maker project, a plug-in module that adds a context menu for generating
JNI header files from Java classes. Just get the nbm package from jnimaker.dev.java.net and install it using NetBeans’s Update Center. After restarting the IDE, you
should see a new menu item as shown in Figure 5.
Figure 5. Generating a JNI stub using the JNI Maker plug-in module.
Before
generating JNI stubs, make sure you’ve built the Java project. JNI Maker uses
the distribution JARs.
Now select Generate JNI Stub from the FilePermissions class’s context menu. NetBeans shows a standard File Save dialog, where you can select a folder to save the generated FilePermissions.h header file. Move into the NativeOSlib project folder and create a new src folder (C/C++ Projects do not have a default file structure with separate
source and test folders like Java projects do). Save the header file there. The
output window will look like Figure 6 if the operation is successful.
Figure 6. Output from the Generate JNI Stub command.
JNI Maker Release 1.0 will only work
correctly under Windows, but the generated code will compile and run fine on
Unix/Linux. The project developers have been contacted about the module’s
cross-platform issues and by the time you read this there should be a new
release that will work on all platforms supported by NetBeans C/C++ Pack
Using the JNI Maker module has the same
effect as running the following command from the operating system prompt,
assuming the OSlib project folder is the current directory and NativeOSlib project folder is a sibling:
$ javah -classpath dist/OSlib.jar
-jni -o ../NativeOSlib/src/FilePermissions.h
org.netbeans.nbmag3.util.FilePermissions
org.netbeans.nbmag3.util.FilePermissions
The MinGW project provides a native port of the
GNU toolset for Windows platforms. Included in the base distribution are GNU C,
C++, Objective-C, Ada, Java and Fortran compilers, plus an assembler and a
linker; there’s also support for dynamic libraries and Windows resource files.
Additional packages provide useful tools like Red Hat Source Navigator, Insight
GUI debugger and a handful of Unix ports like the wget download manager.
MinGW stands for “Minimalist GNU for Windows”.
But it’s “minimalist” only when compared to the Cygwin environment. (Cygwin
tries to emulate a full Unix shell, complete with bash scripting, user commands
and a Unix-like view of the filesystem.)
In fact, MinGW is complete to the point of
providing Win32 API header files, and many popular open-source applications
like Firefox have their Windows releases compiled using it. (Recent Cygwin
releases include many MinGW enhancements as a cross-compiling feature, showing
how Windows development is “alien” to MinGW alternatives.)
If you check the project’s website, it looks
like MinGW development has been stalled for quite a few years; the problem is
that the site was automatically generated by a script that read the project’s
SourceForge area, and developers simply got tired of catching up with sf.net’s design changes. However, MinGW is a very healthy project with active mailing
lists and frequent file releases.
There is an installer for the base distribution
named mingw-x.x.exe that downloads selected packages from SourceForge
and installs them. The same installer can be used to update an existing MinGW
installation.
Individual packages are downloaded to the same
folder where the installer was started. This allows you to later copy the
entire folder to another workstation and install MinGW there without the need
of an Internet connection. Most extra packages provide their own installers or
can simply be unpacked over an existing MinGW installation.
To satisfy C/C++ Pack’s prerequisites, you’ll
need to download and install three MinGW packages: the base distribution
itself, the GDB debugger, and the MSys distribution.
Installing
MinGW
Download MinGW-5.1.3.exe (or newer) from
the project’s current file releases at sf.net/project/showfiles.php?group_id=2435,
then launch it to see a standard Windows installer.
On the third step of the wizard (the second
screen in Figure S1) you only need to select “MinGW base tools” and
optionally “g++ compiler”. Also, the Java Compiler may be interesting to play
with, because of its ability to generate native machine code from Java sources
and bytecodes, but it’s not supported by NetBeans yet. Interestingly, the g77
(Fortran) compiler will be officially supported very soon.
After downloading all selected packages, the
installer will ask for the destination directory and unpack all packages there.
It’s left to the user to configure environment variables so that MinGW tools
can be used from the Windows command prompt.
Installing
GDB
As we’ve seen, NetBeans C/C++ Pack needs GDB to
be able to debug C programs. The MinGW distribution packages GDB as a
stand-alone installer.
At the time of writing, the latest stable MinGW
package for GDB was release 5.2.1, which won’t refresh the NetBeans debugger’s
Local Variables window correctly. To solve this, download gdb-6.3-2.exe (or newer) from MinGW Snapshot Releases to a temporary folder and run it.
Though you don’t need to install GDB over MinGW, your life will be easier if
you do, as you won’t need to add another folder to your PATH system environment
variable.
Installing
MSys
The MinGW base distribution already includes a
make tool named mingw32-make.exe, but NetBeans C/C++ Pack won’t be happy
with it. MinGW’s make tool is patched to be more compatible with other native
Windows C compilers, and NetBeans expects a Unix-style make tool. NetBeans
generated Makefiles even expect to find standard Unix file utilities such as cp and rm.
The MinGW MSys package satisfies these dependencies. It is a “Minimal System” that provides a
Unix-style shell and file utilities, and allows open-source projects based on
GNU Autotools to be easily built using MinGW.
Download msys-1.0.10.exe or newer to a
temporary folder and start it. At the final installation step, a batch script
configures the integration between MSys and MinGW. You will still have to add
the MSys programs folder to the system PATH (in my case, E:\MSys\1.0\bin),
as you did for the MinGW base distribution.
That’s it. After running three installers and
downloading about 23 MB, we are ready to develop C/C++ applications and libraries
using the NetBeans IDE and C/C++ Pack on Windows.
Figure S1. Screens from MinGW’s base distribution installer.
(The command is broken to fit the column
width, but it should be typed in a single line, of course.)
Now add the generated C header file to the
NativeOSlib project. Right click Header Files inside the NativeOSlib project folder in NetBeans’ Projects window, and select Add Existing Item.
Then browse to the file src/FilePermissions.h and open it. The code will
look like Listing 3.
Listing 3. FilePermissions.h – JNI Stub for native methods in the FilePermissions class.
/*
Mocking native code
Due to space constraints, we won’t show you
the final C code for the FilePermissions.isPrivate() native method, but the sources available for download will provide
working implementations for both Windows and Unix (Posix) systems.
To create the C implementation file, right
click Source Files and select New>Empty C File, then type
“FilePermissions.c” as the file name and “src” as the folder name. A new node
named FilePermissions.c should be created under Source Files.
Copy the C stub function declaration from FilePermissions.h to FilePermissions.c and change it to include the header file. Also add
parameter names. The code should look like Listing 4. (Listing 3 highlights the declaration you have to copy, and Listing 4 highlights
the changes after copying.)
Listing 4. FilePermissions.h – JNI mock implementation for the FilePermissions native methods.
#include "FilePermissions.h"
JNIEXPORT jboolean JNICALL Java_org_netbeans_nbmag3_util_FilePermissions_isPrivate
(JNIEnv *env, jclass clazz, jstring path) {
return JNI_TRUE;
}
#include "FilePermissions.h"
JNIEXPORT jboolean JNICALL Java_org_netbeans_nbmag3_util_FilePermissions_isPrivate
(JNIEnv *env, jclass clazz, jstring path) {
return JNI_TRUE;
}
At this point, Unix and Linux users should
be ready to build the native code and run unit tests again2. But
Windows users first have to change a few project properties to make MinGW
generate Windows-compatible JNI DLLs. The sidebar “JNI and MinGW”
details these
configurations.
2 At least if you use JDK packages compatible with your distro package manager, like the IBM and BEA JDKs provided by RHEL and SuSE Enterprise, or the RPM Packages from jpackage.org. If not, you’ll have to add your JDK include folder to the GNU C compiler include directory. The configurations will be similar to the ones presented in the “JNI and MinGW” sidebar, but you won’t need to change either the linker output file name or additional compiler options.
Right click the NativeOSlib project
and select Clean and Build Project. If there are no errors, you should
see make’s output as in Figure 7.
Figure 7. Building the NativeOSlib project under Linux.
Running unit tests again
You need to set the OSlib project’s java.library.path system property before running it, or you’ll still get UnsatisfiedLinkError exceptions. Open the project’s Properties dialog, select the Run category and change VM Options to specify the full path to the NativeOSlib project’s platform-specific native-library folder, which is inside the dist folder (see Figure 8). In Linux, this will be PROJECT_HOME/dist/Debug/GNU-Linux-x86;
in Windows, PROJECT_HOME\dist\Debug\GNU-Windows.
Figure 8. Configuring the java.library.path property so unit tests can find the native code library on Linux.
Now run the unit tests again. The result
should be as shown in Figure 9. Since the mock native code always
returns true, some tests pass even if you have not created target test folders
or forgotten to setup their access permissions. Anyway, the first test should
fail because it takes an extra step to check if the target file path
actually exists.
Figure 9. Running unit tests using a mock native implementation.
Unix and Windows native C/C++ compilers use different conventions for
mangling function names1*,
exporting global symbols from libraries and setting up stack frames. JNI on
Windows uses Microsoft conventions for Windows DLLs, while GCC uses its own
conventions for dynamic libraries. This means that if you simply try to compile
and link a dynamic library, MinGW will stick to its Unix origins and produce a
DLL that is incompatible with native Windows C/C++ compilers. The JVM won’t be
able to get native method implementations from that library and will generate
more UnsatisfiedLinkExceptions.
The solution is to add a few command-line
options when compiling C/C++ sources: ‑D_JNI_IMPLEMENTATION
-Wl,--kill-at. Open the C/C++ Dynamic Library Project
properties and expand C/C++>Command Line, then type these
options in the Additional Options text field (see Figure S1).
You also need to add your JDK include folders (JAVA_HOME\include and JAVA_HOME\include\win32) to the project properties. Open
C/C++>GNU C Compiler>General and change the Include
Directories field as shown in Figure S2.
You need one last change in the C/C++ Dynamic
Library Project properties so you can generate a JNI-compatible DLL. By
default, NetBeans chooses a library name that corresponds to Cygwin
conventions, but we need to use native Windows conventions. So you need to
enter the Linker>General category and remove the “cyg” prefix
from the Output field (Figure S3).
1“Mangling” is the process used for generating public C++ function names in object files. It’s needed because the C language doesn’t support function overloading, and, to keep backward compatibility, C++ compilers generate a function name that encodes parameter types.
Figure S1. MinGW compiler options for generating JNI-compatible DLLs
Figure S2. Configuring JDK include folders for MinGW
Figure S3. Changing the output file name for compliance with Windows DLL naming conventions
Managing platform-specific compiler
settings
NetBeans C/C++ Pack puts object files in the build and dist folders, inside subdirectories named after the
target platform, for example GNU-Linux-x86 or GNU-Windows. But it
won’t save different compiler options for each target, forcing you to have a
different project for each platform if there’s a need for platform-specific
compiler settings.
You can solve this using NetBeans C/C++
Pack’s multiple configurations feature. Open NativeOSlib’s project
properties and notice the Configuration combo box on the top of the
window (Figure 10). The default configurations are meant to save
different compiler settings for Debug and Release builds, like keeping symbol
information for Debug builds and optimizing code for Release builds. So if you
want platform-specific configurations, you may need to create Release and Debug
variants for each platform.
Figure 10. Combo box for changing compiler configurations for a C/C++ project.
The Manage Configurations button to
the side of the combo box lets you create new configurations either from
scratch or as a copy of an existing configuration (see Figure 11).
You’ll notice I renamed the generated Debug configuration to Debug-Linux and copied it to a new configuration named Debug-Windows. Doing this
lets you change the Windows configuration to include all options needed by
MinGW for generating JNI-compatible DLLs, while keeping the default settings
for the Linux configuration.
Figure 11. Creating, renaming or copying configurations.
NetBeans-generated Makefiles provide many
extension points (like the Ant buildfiles generated by the IDE), and they can
be used outside the IDE. For example, for building the Debug-Windows configuration you’d type the following command at the operating system prompt:
make CONF=Debug-Windows
Thus, you could have Continuous Integration
servers for many platforms, all being fed by the same CVS or Subversion source
tree. And thanks to GNU C cross-compiler features it would be possible to have
a “compile farm” that generates native binaries for multiple platforms, without
the need for multiple OS installations. For example, a Linux server could
generate both Windows and Solaris
SPARC binaries.
Conclusions
NetBeans C/C++ Pack provides a rich
environment for developing C and C++ applications and libraries. It’s useful
for Java developers that need to interface with native code and, of course, for
developing fully-native applications. Compiler configuration may pose some
challenges for Windows developers if they never tried GNU compilers before, but
the effort will certainly pay off because of the increased portability of both
code and Makefiles.
<![if !vml]><![endif]>
Links
Fernando Lozano
(fernando@lozano.eti.br) is an independent consultant and has worked with information systems since 1991. He’s the Community Leader of the Linux Community at Java.net, webmaster for the Free Software Foundation and counselor to the Linux Professional Institute. Lozano helps many open-source projects and teaches at undergraduate and postgraduate college courses. He’s also a technical writer and book author, as well as Contributing Editor at Java Magazine (Brazil) and freelance writer for other leading IT publications. | https://netbeans.org/community/magazine/html/03/c++/index.html | CC-MAIN-2016-44 | refinedweb | 5,038 | 53.61 |
Let's start by describing some context. I have a table in my database called 'Area'.
It has the following fields (id, id of the chief of that area and the name of the area):
- ID
- IDJefe
- Nombre
I also have an Entity Framework .edmx file for my database which creates strongly-typed classes to represent the 'Area'. In my application each, Area object will be a row from the Area table.
I want you to create a new C# class called Area (the same name as the table) and make it a partial. This is so the generated Area class combines with our own custom Area class.
Remember to create the partial class in the same namespace as the .edmx file, that way they can combine!
Now inside of partial class write this in:
namespace <YourNamespaceGoesHere> { [Bind(Include = "IDJefe, Nombre")] [MetadataType(typeof(Area_Validation))] public partial class Area { } public class Area_Validation { [Required(ErrorMessage = "Jefe is required.")] public int IDJefe { get; set; } [Required(ErrorMessage = "Nombre del area is required.")] public string Nombre { get; set; } } }
Let's breakdown wants going on.
First, I bind certain fields so they are included in the validation, meaning that can be modified. Everything that is not on the include fields cannot be modified.
Next I tell it to validate using my custom validation class called Area_Validation.
Inside the Area_Validation class we must manually create the fields, and type in the datatype. Unfortunately this method doesn't allow us to have a strongly typed class. No intellisense help.
With the ErrorMessage we can type in what error we want to display when the field doesn't pass validation.
Now let's see the process for the Action called Create:
[HttpPost] public ActionResult Create(Area area, FormCollection values) { if (ModelState.IsValid) { repo.Add(area); repo.Save(); return RedirectToAction("Details", new { id = area.ID }); } return View(area); }
When a user tries to save this 'area' object, the ModelState will now verify using our own custom created validation and if failed will pass the invalid fields to the View.
Now let's see the Edit Action:
[HttpPost] public ActionResult Edit(int id, FormCollection values) { var area = repo.GetArea(id); if (TryUpdateModel(area)) { repo.Save(); return RedirectToAction("Details", new { id = area.ID }); } return View(area); }
Same deal, the TryUpdateModel is a built in helper method that tries to update the area and if failed will populate the model with information of invalid fields.
I hope you enjoyed this tutorial! | http://www.dreamincode.net/forums/topic/185179-aspnet-mvc2-how-to-use-basic-model-validation/ | CC-MAIN-2016-26 | refinedweb | 408 | 57.16 |
Opened 5 years ago
Last modified 5 years ago
#5452 enhancement new
Test modules in `twisted/test/` for which preferred replacements already exist must be marked as such
Description
For a while we've been moving away from having so many tests in twisted/test/. This started with the subproject split and proceeded from there. It's sensible to try to keep unit tests close to implementation. So tests for twisted/python/reflect.py make more sense in twisted/python/test/test_reflect.py than in twisted/test/test_reflect.py. Splitting things up also helps avoid ugly namespace collisions (or rather, the ugly results of having to resolve such collisions).
In some cases, we have started adding new tests in a new location while leaving many old tests in the old location, though. The old location is an attractive nuisance to new contributors who don't understand what's going on, and reasonably believe that it's a good idea to add new TCP-related unit tests to twisted/test/test_tcp.py.
We need to identify all test modules in twisted/test/ which are not supposed to have new tests added to them and mark them with information pointing to the preferred location for such tests.
Hopefully this will guide contributors to adding tests in the correct location.
Change History (1)
comment:1 Changed 5 years ago by thijs
- Cc thijs added | https://twistedmatrix.com/trac/ticket/5452 | CC-MAIN-2016-50 | refinedweb | 231 | 62.38 |
We love the Objective-C runtime for three main reasons: dynamic introspection, behavior changing, and the ability to analyze private APIs. In this talk, Boris Bügling asks, to what extent does this runtime functionality remain in Swift? Just enough, perhaps, to still do interesting things.
What is a
SwiftObject? (0:26)
We all love Swift, but very likely we still have lots of questions about it. For example, what is a Swift object? The answer to that depends. If you inherit from
NSObject, you will get an object which behaves like a regular
NSObject. All of its variables will be properties, and it is fully interoperable with Objective-C. For these objects, we can just import the
Objective-C.runtime and work with it.
If we don’t inherit from anything, we have an implicit superclass which is called
SwiftObject. In terms of the runtime, all of the instance variables are only
ivars without any type encoding. Thus, we cannot actually inspect their values. All the methods are not Objective-C methods — they are implemented completely differently.
SwiftObjects are not at all interoperable with Objective-C.
// Inherit from NSObject class MyObject : NSObject { } important ObjectiveC.runtime // No inheriting class MyObject { }
We can still inspect those types. If we look at a
SwiftObject, it looks a bit different from an
NSObject. It has one
ivar called
magic, an
isa pointer to the metaclass, and
refCount. It implements the
NSObject protocol. If you have an
NSObject, you have the
isa pointer for the metaclass, as well as the implemented
NSObject protocol.
How Does Bridging Work? (2:01)
We might ask ourselves, “How does bridging work if there are two completely different kinds of objects?” The answer is that it doesn’t. What does that mean?
Let’s use the example of an array. If we don’t import
Foundation, we cannot cast that array to
AnyObject. It will not compile. If we look at its class, we see that it is a Swift array, which is actually a struct. Once we import
Foundation into our code, we can then cast that array to an
AnyObject. This will give us an array of type
_NSSwiftArrayImpl that is a subclass of
NSArray.
Get more development news like this
let array = [0, 1, 2] // 'as AnyObject' => ! info(array) // is a Swift.Array import Foundation let objc_array: AnyObject = [0, 1, 2] as AnyObject info(objc_array) // is a Swift._NSSwiftArrayImpl // comparing different array types => compiler error as well //let equal = objc_array == array
This means that bridging relies on type inference. At some point, if you bridge to Objective-C, you have a different kind of array. That’s how bridging works for all of the Standard Library types, like Strings or Dictionaries. If you use these types in an Objective-C context, you will get a subclass of the Objective-C type.
Objective-C Runtime (3:16)
There were three things we loved about the Objective-C runtime that we can still do when we use Swift. The first thing is dynamic introspection. We could change behaviour as we wanted; as Rubyists would call it, “monkey patching”. Finally, we could analyze private APIs.
Dynamic Introspection (3:52)
Firstly, dynamic introspection. If we inherit from
NSObject, or have used any kind of Cocoa frameworks, we can still use the runtime to inspect properties. After we import
Objective-C.runtime, we can walk through all the properties to get their names.
var propertyCount : UInt32 = 0 var properties : UnsafeMutablePointer<objc_property_t> = class_copyPropertyList(myClass, &propertyCount) for i in 0 ..< propertyCount { println("Property: " + String.fromCString(property_getName(properties[Int(i)]))!) }
In pure Swift, not so much. But there is hope! If we look at the Swift Standard Library, we see code that is private, but not documented anywhere else. One example is
MirrorType, which is a reflection mechanism that Xcode uses to bring you support for Swift.
// Excerpt from the Swift Standard Library /// How children of this value should be presented in the IDE. enum MirrorDisposition { case Struct case Class case Enum [...] } /// A protocol that provides a reflection interface to an underlying value. protocol MirrorType { [...] }
With that, we can import something like KVO. We have a custom operator for that, which gets an object and a key. We use the
reflect method from the Standard Library to get this
mirror object for our instance. We can then walk through its children. If
childKey matches the key that we want, we return the value. If we try to use this technique on a struct which has two floats, we can get its values. So, there are still some ways to do introspection, albeit in a more private manner.
infix operator --> {} func --> (instance: Any, key: String) -> Any? { let mirror = reflect(instance) for index in 0 ..< mirror.count { let (childKey, childMirror) = mirror[index] if childKey == key { return childMirror.value } } return nil }
Change Behavior (5:38)
Again, if we use any kind of inheritance from
NSObject, we can still use the runtime as we did before. It’s a bit more cumbersome, because there is actually a difference between having a Swift closure and an Objective-C block. The attribute
@objc_block can be used to convert a Swift closure into an Objective-C block. However, the signature of
imp_implementationWithBlock in the runtime API takes an
AnyObject, so we also need to use
unsafeBitCast. Then we can simply set the implementation to our block. In the following example, we use this technique to override the description of a string. Once we do so, we get the string back instead of the actual description.
let myString = "foobar" as NSString println(myString.description) // foobar let myBlock : @objc_block (AnyObject!) -> String = { (sself : AnyObject!) -> (String) in "✋" } let myIMP = imp_implementationWithBlock(unsafeBitCast(myBlock, AnyObject.self)) let method = class_getInstanceMethod(myString.dynamicType, "description") method_setImplementation(method, myIMP) println(myString.description) // ✋
NSInvocation (6:40)
One thing that doesn’t work is
NSInvocation. That is completely off-limits, no matter what you try to do, whether you use objects directly from
NSObject or not.
What about pure Swift? The Swift library SWRoute is a proof of concept for function hooking in Swift. It uses
rd_route, a Mach specific injection library for C. This library allows you to swizzle C on platforms using Max OS X or iOS. To use it with Swift, the author essentially looked at the memory layout of the
swift_func_object and implemented a struct containing the function address of the function being called. Using that, you can write some C to get to the function address of the function object. Once you have that, you can also change where it points to.
#include <stdint.h> #define kObjectFieldOffset sizeof(uintptr_t) struct swift_func_object { uintptr_t *original_type_ptr; #if defined(__x86_64__) uintptr_t *unknown0; #else uintptr_t *unknown0, *unknown1; #endif uintptr_t function_address; uintptr_t *self; }; uintptr_t _rd_get_func_impl(void *func) { struct swift_func_object *obj = (struct swift_func_object *) *(uintptr_t *)(func + kObjectFieldOffset); return obj->function_address; }
Purely in Swift (7:59)
Can we do this without C? I actually wrote this before I found that library. Let’s take a step back — how do we find out about these things? Mike Ash wrote a memory dumper that allows you to dump the memory of any Swift object that you may have. Using that, I dumped a function object. It starts with an eight byte pointer to something called a “partial apply forwarder for reabstraction thunk helper”. This is basically a trampoline function that allows you to always have a level of indirection when you call the Swift function. It contains a pointer to a struct. There is also a pointer to
_TF6memory3addFTSiSi_Si, which is actually a function pointer that we need. We can define some structs in Swift to get to the function pointer.
struct f_trampoline { var trampoline_ptr: COpaquePointer var function_obj_ptr: UnsafeMutablePointer<function_obj> } struct function_obj { var some_ptr_0: COpaquePointer var some_ptr_1: COpaquePointer var function_ptr: COpaquePointer }
Now, we’re trying to dynamically load a function. We can do this statically with
@asmname and get the function from C without any kind of bridging header. After this attribute, you give the name of the function and the declaration in order to call it.
@asmname("floor") func my_floor(dbl: Double) -> Double println(my_floor(6.7)) let handle = dlopen(nil, RTLD_NOW) let pointer = COpaquePointer(dlsym(handle, "ceil")) typealias FunctionType = (Double) -> Double
We can also call it dynamically using
dlopen and
dlsym. We get to the
ceil function from
libm, and we also define a
FunctionType. After pulling in our structs from earlier, we can
unsafeBitCast our function object to this
f_trampoline structure. The trampoline struct also has an initializer that takes a struct, copies, and changes the function pointer within it. We use that to get a new function object pointing to the
ceil function. We can
unsafeBitCast it back to our
FunctionType, and finally, we can call it.
struct f_trampoline { [...] } struct function_obj { [...] } let orig = unsafeBitCast(my_floor, f_trampoline.self) let new = f_trampoline(prototype: orig, new_fp: pointer) let my_ceil = unsafeBitCast(new, FunctionType.self) println(my_ceil(6.7))
If we run this, we see where we can actually call both functions, one statically and the other dynamically. Swift is really keen on inlining when you optimize. Mangling with function pointers in the internal structures does not work well when you optimize, so this isn’t something you can really use in practice.
Go in Reverse (11:32)
However, we can use this technique the other way around! We can take a Swift function and pass it to some C code as a function pointer. That can be useful for calling Legacy APIs, or any kind of C APIs that deal with function pointers as callbacks.
In this example, we declare a C function taking a
FunctionPointer. We can pull this into our Swift program using
@asmname. Then, let’s define a function,
greeting, that prints to the screen. We can again use
unsafeBitCast to get a more accessible function object, and we can get to the
FunctionPointer. Cast that to a
CFunctionPointer type and pass it to
executeFunction. This actually works!
void executeFunction(void(*f)(void)) { f(); } @asmname("executeFunction") func executeFunction(fp: CFunctionPointer<()->()>) func greeting() { println("Hello from Swift") } let t = unsafeBitCast(greeting, f_trampoline.self) let fp = CFunctionPointer<()->()> (t.function_obj_ptr.memory.function_ptr) executeFunction(fp)
This doesn’t depend on optimization because the
FunctionPointer needs to actually work. So, this might be something that you can apply in practice if you interface with any C APIs.
Analyse Private API (13:03)
To analyze a private API, let’s look at this example
myClass and pretend that it is private. We have a variable
someVar and a function
someFuncWithAReallyLongNameLol. When I compile it, I have a proof of concept that I wrote called
swift-dump, which works like
class-dump does for Objective-C. It takes a binary and returns all the classes declared within. But
class-dump uses, of course, the Objective-C runtime to do so. How can we do it for Swift?
Well, if we look at the Swift binary, we can see a lot of symbols with mangled names. Swift uses name-mangling to generate unique identifiers for all the things you use inside your Swift programs, whether it be classes, methods, or variables. Inside Xcode, we have a tool called
swift-demangle. We can pass it a mangled name to get back a somewhat readable name. Using that, we can demangle the contents of a binary to reconstruct what exists within.
As many of you know, we can now have emoji identifiers, but how are they encoded in the binary? One interesting tidbit about name-mangling might be how emoji are actually encoded. If we define a class using the “thumbs-up” emoji, we can compile it using
nm to get to the global symbols. Once we demangle the name, we can see both the Swift and the Punycode representations for the Unicode.
Recap (16:49)
So with that, let’s recap what we can still do in Swift, in terms of runtime functionality. If we use Objective-C to derive things, we can just import objects to runtime. Introspection exists, somewhat. Changing behavior is really hard, mostly because of the optimization the Swift compiler does. It is not really feasible to poke around in the internal structures. However, reverse engineering is still fine. We can look at a binary and see exactly what is inside, whether that be functions or classes.
With that, thank you!
Resources (16:58)
- Memory dumper, written by Mike Ash
- Airspeed Velocity’s blog
- Apple’s Swift blog
- Swift: How did I do horrible things? by Russ Bishop
About the content
This talk was delivered live in March 2015 at Swift Summit London. The video was transcribed by Realm and is published here with the permission of the conference organizers. | https://academy.realm.io/posts/swift-summit-boris-bugling-runtime-funtime/ | CC-MAIN-2018-22 | refinedweb | 2,108 | 58.18 |
First, quickly: AWS Amplify has a new Admin UI. Amplify always had a CLI that helps you build projects by setting up stuff like auth, storage, and APIs. That’s super useful, but now, you can do those things with the new Admin UI. And more, like model your data (!!), right from a local UI. That’s great for people like… me (I like my GUIs).
Now, slower.
Let’s start with the idea of Jamstack. Static Hosting + Services, right? Amplify is that: static hosting is part of the offering. You connect an Amplify project with a Git repo (you don’t have to, you could upload a zip, but let’s be real here). When you push to the designated branch on that repo (probably
main or
master), it deploys. That’s part of the magic of development today that we all
know and love expect.
Static hosting might be all you need. Done.
But a lot of sites need more. Maybe your site is client-side rendered (for some of it), so the JavaScript hits an API for data and then renders. What data? What API? AWS has these things for you. For us front-enders, that’s probably AWS AppSync, which is like real-time GraphQL (cool). How do you set that up? Well, you can do it in the CLI, but it’s now way easier with the Amplify Admin UI.
Say you’re building a blog structure. Blogs have Posts, Posts have Comments. And so:
I’ll tell ya, coming from a WordPress upbringing and identifying mostly as a front-end developer, this feels doable to me. It’s not far from using Advanced Custom Fields in WordPress to model some data for a Custom Post Type. No wonder the line is so gray between front-end and back-end development.
Now that the Amplify Admin UI has this data modeled out, I can yank it down into my project and the whole schema is mocked out.
I’m bullish on GraphQL, but I can tell ya, all the setup of it is generally over my head. I’m generally very happy just being a consumer of a GraphQL API that is already set up, or doing minor tweaks. This, though, feels doable for me. The visual builder and the freebie scaffolding of the schema… yes please.
At this point then you have this project you can test and deploy. Once it’s deployed, there is a real data store in the cloud ready for data. How do you use it? It’s CRUD time! Create, Replicate, Update, and Delete, the core tenants of all good websites, right? Well, It’s Just JavaScript™. Here’s how you create a new blog, then a post in that blog:
import { DataStore } from '@aws-amplify/datastore'; import { Blog } from './models'; const newBlog = await DataStore.save( new Blog({ "name": "Name of Blog" }) ); await DataStore.save( new Post({ "title": "Blog Post Title", "blogID": newBlog.id }) );
That all works because the database exists and our app knows all about the data model. But what is
DataStore in AWS Amplify? That’s yet another thing that AWS Amplify helps with. They have libraries to make all this easier. You don’t have to manually write
fetch calls and do error handling and all that… Amplify libraries make life easier with all sorts of helpers (like you see above).
With all that setup, this slide I saw in their developer preview I got a peak at should make sense
Back to the Jamstack thing… now we’ve got Static Hosting going and we can deploy our website to it. By the way, that can be anything. A vanilla HTML/CSS/JavaScript thing. A React, Vue, or Angular app. Native apps too. Amplify doesn’t care, it just helps with the deployment and services.
Here’s a look at the Admin UI, where you can see the navigation with all the services you can set up, deployment activity, the ability to model (and edit) data, etc.:
What else is in there? With auth for one. If you’re storing data and managing it with API’s, it’s highly likely you’ll be dealing with authentication as well. Amplify has you covered. Need some to run some code server-side? You’ve got your functions right in there of course. Lambdas (serverless functions) are AWS bread and butter. Analytics? You bet.
Another thing you’ll surely be interested in is the different development stories. Like what is local development like? Well, it’s super good. Guess what?! Those screenshots above of the Admin UI… those aren’t some online dashboard in the AWS console, those are locally hosted on your own site. All this data modeling and storage and editing and such happens locally. Then you can push to live to any environment. Production, of course, but also whatever sort of staging environments you need.
When you need production data pulled down locally, you just… do that (with a command given to you right in the Admin UI).
You can join the Amplify team to find out more – they’ll be demoing on Twitch with Q&A this week:
Thursday, Dec. 3rd at 10-11am PST/ 7pm GMT
Friday, Dec. 4th at 1-3pm PST / 9pm GMT
I’m thinking this new Admin UI world is going to open up AWS Amplify to a lot more people. Having a UI to manage your site just feels better. For someone like me, it gives me a more complete understanding of what is going on with the backend and services, and more control over things. And yet, give me total freedom on the front end to do what I want to do, and also handle so many of the things I don’t (deployment, SSL, etc.) 👏
The post Amplify, Amplified appeared first on CSS-Tricks.
You can support CSS-Tricks by being an MVP Supporter. | http://design-lance.com/tag/amplified/ | CC-MAIN-2021-10 | refinedweb | 989 | 75.5 |
I.
Let me get some of the negatives out of the way up front. At first there was no usable build of ReSharper, and going back to life without it isn't fun. Sure, it doesn't yet support the new features of C#, but the refactoring, formatting and inspection tools that the product brings to VS really are essential. While VS has a lot of tools to isolate you from coding (as is expected of an IDE), ReSharper has a lot of tools that help you write better and cleaner code. I mean, Alt+Enter alone is worth the price of admission (adds an import statement for the namespace of the type you just tried to type automagically).
I was really hoping that the giant web.config would go away with the integration of the AJAX stuff, but it's actually bigger than ever. I expected that at least the various HttpHandler overrides would go away, but apparently that wasn't in the cards.
The new CSS stuff is surprisingly not as useful as I expected it might be. That's kind of a buzz kill. In fact, there are times when I experimented with the designer and was surprised to see that what I was composing was not how the page actually appeared in IE. It also seems to crash, a lot, when I'm editing a style sheet. That might be ReSharper causing that, which is a little weird since I don't think it touches CSS, but I'm sure the validation engine has something to do with it.
The killer feature is easily the client-side script debugging. If you really embrace the coding model that the AJAX framework uses, this feature is gold. If you've read one of the AJAX books and still can't wrap your head around it, read it again or read another, because once you get it and use it, it's amazing stuff. It's a little annoying to have to use IE, but I'm mostly over it.
I've only toyed with LINQ, and I guess I still haven't learned enough about it to make me say, "Wow, that's awesome stuff I must use as much as possible!" I'm sure that's just a matter of time. The general improvements to C#, like the automatic properties for example, are big time savers. I feel like I haven't explored those enough.
Other than that, it does feel a little more snappy and building seems to be much faster. It's not a huge upgrade, but definitely one you should consider if you haven't already, if for the script stuff alone (assuming you're an ASP.NET developer). Like I said, I feel a little underwhelmed, but it's familiar and fast and generally steady. I like that they're practically giving it away at various Microsoft events too. Giving it away to sell server product is a wise strategy.
Well it gets the big thumbs down from me. All the pros don't outweigh this one big Con. I still have a lot of legacy classic asp code mixed in with my asp.net pages. The shear volume of the classic asp pages means they won't go away soon. Removing the syntax coloring for classic asp that was there in Visual Studio 2005 makes the conversion even harder for me. Visual Studio 2008 does let me debug those pages using the stop keyword still, but it always crashes after I'm done debugging.
How about giving me a copy of Visual Studio 2008? I used to get Visual Studio from my employer's MSDN subscription but my current employer has not given me access to that. I suppose it does not matter because I'm still doing most of my work in Visual Studio 2003. :(
Hi Jeff,
I hope you understand the reason for the large web.config file. Obviously Microsoft wanted to make this an upgrade that could be installed on machine with ASP.NET 2.0 without breaking anything. So in other words, the web server considers (in IIS for example) you are still just running ASP.NET 2 and thus the extra features are introduced via web.config. Sure they could have introduced an entire new version, and had you select "ASP.NET 3.5" in IIS, but overall the current solution is probably the best and most incremental way of upgrading sites, and avoiding production errors.
The CSS features were crashing a lot of me with Beta 2 (and even crashed for ScottGu when he presented at Mix UK in September), but I have not played with it on RTM.
Overall I love the RTM product. Seriously - this is an amazing product, and I do not agree with your sentiment.
David
"It also seems to crash, a lot, when I'm editing a style sheet." -- I had this happen to me a few times, and most of the times the Resharper "catches" the problem and asks what you were doing. Of course it still restarts the IDE. I believe this is a Resharper issue.
I feel underwhelmed myself. Having CSS support, mainly the dropdown after class or CssClass is really cool except that it only works in certain cases -- when it knows where the css file is coming from. It's really an undercooked feature. I'm working with user controls and pages that have their style sheet themes set at runtime or with dynamic master pages. In all these cases it can't figure out where the css is going to come from and it underlines the css class for me, which I find rather annoying. It should allow me give it a hint or some sort of way to specify where the css is coming from and make this feature actually useful.
Another thing that I was looking forward is assigning event handlers without going into design mode. Well, you can, but you still have to have the designer loaded either in split view or after a full switch to the designer. In all other cases the properties window is just useless, so really nothing changed here.
The designer is quicker than the old one, not much quicker though, but now there's a first time access delay from the time you switched to the designer and you can actually see the rendered form and the time the properties window is updated. This delay can be a few seconds long so you're just sitting there wondering what's going on and then it just shows up. They kind of broke this one in my opinion.
Yes, it's somewhat faster and has all the cool new features like LINQ and the language improvements, but I feel the actual IDE hasn't changed that much. There are number of things that I feel should have improved that haven't.
Ohh well...
Greg....I agree 100% that lack of classic asp support is a huge huge let down. That alone will keep me in 2005 for a long time.
Wow, I'm really surprised anyone is still using old ASP.
@Jeff better believe it. I've got a mixed ASP/ASP.net website. The Classic ASP code is colorized in VS2008 and I can debug it too. I find that using VS2008 to debug Classic ASP all too often produces the COM "Switch to/Continue waiting" dialog which requires TaskMan to kill.
A final thought - it was easier to integrate a solid, secure HTML Sanitizer into Classic ASP than it was into ASP.Net (ie Caja JsHtmlSanitizer vs AntiSwamy). | http://weblogs.asp.net/jeff/archive/2007/12/07/impressions-of-vs2008-after-a-couple-of-weeks.aspx | crawl-002 | refinedweb | 1,273 | 71.24 |
Php freelancing work pune jobsr...
Looking for a candidate who can work for our application and do promotion work.
import Layouts from other website urgent a Junior Unity Developer from Pakistan for some easy tasks in 2D, 3D Unity Work I need at LOW budget you to fill in a spreadsheet with data.
I have ongoing work related to our previous project 'Package Designer'
cordova app fix. need someone who can work now
Signup here - [login to view URL] we will call you for discussion soon .first comer first select . and if you don`t have 2 minutes time to signup , then sorry we don`t have time to talk with you .
Hello, I...
I have some work, in an Excel spreadsheet. need an Android app. I would like it designed and built.
Need Servicenow and Salesforce trainer on part time or full time basis at Pune location.
I have some work, in an Excel spreadsheet. I need to insert some data to the excel spreadsheet. The project is about designing a table and storing the data
I have some work, in an Excel spreadsheet.... [login to view proposal by "Hi......
Desired Expertise of the freelancer: 1)designing aesthetically good websites 2) wallet design 3)escrow facility set up 4) follows any professional coding style guide.
It's a small work for an expert. The website is a video games database. I need the following option in the main search engine : A radio button called Pirate Games, when checked : it should include all games that have the term "#PIRATE#" in an already existing keyword field. (you don't have to create this field in the database, it's already presents). By default unchecked,...
only Russian country members only allowed..design for my website , site look like paypal. dont bid other country members.
Frameworks : Code Igniter/Laravel Description :- Hands-on experience with any E-commerce website. • Must have good analytical and problem-solving skills. • Good Knowledge of OOPS, PHP, MVC frameworks, MySql, CSS, AJAX, HTML5. • Should be able to work under tight deadlines
looking for freelancers for Our platform only skilled people
I have the 2 types of PDFs and I need to split and resize PDF page size. I will share more detail info during interview. Looking forward to hear from you. Thanks: [login to view URL]
SIMPLE Eview work needs to be done within 36 Hours.
I have ongoing work related to our previous project 'Brand development for financial tool'
This involves all types of data entry duties. | https://www.freelancer.com/job-search/php-freelancing-work-pune/2/ | CC-MAIN-2019-30 | refinedweb | 419 | 75.5 |
Quoting Oren Laadan (orenl@cs.columbia.edu):> > > Serge E. Hallyn wrote:> >Support checkpoint and restart of tasks in nested pid namespaces. At> >Oren's request here is an alternative to my previous implementation. In> >this one, we keep the original single pids_array to minimize memory> >allocations. The pids array entries are augmented with a pidns depth> > Thanks for adapting the patch.> > FWIW, not only minimize memory allocations, but also permit a more> regular structure of the image data (array of fixed size elements> followed by an array of vpids), which simplifies the code that needs> to read/write/access this data.> > >(relative to the container init's pidns, and an "rpid" which is the pid> >in the checkpointer's pidns (or 0 if no valid pid exists). The rpid> >will be used by userspace to gather more information (like> >/proc/$$/mountinfo) after the kernel sys_checkpoint. If any tasks are> >in nested pid namespace, another single array holds all of the vpids.> >At restart those are used by userspace to determine how to call> >eclone(). Kernel ignores them.> >> >All cr_tests including the new pid_ns testcase pass.> >> >Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>> >---> > [...]Thanks, Oren - all other input is taken into what I'm about to post,except:> >@@ -293,10 +295,15 @@ static int may_checkpoint_task(struct ckpt_ctx *ctx, struct task_struct *t)> > _ckpt_err(ctx, -EPERM, "%(T)Nested net_ns unsupported\n");> > ret = -EPERM;> > }> >- /* no support for >1 private pidns */> >- if (nsproxy->pid_ns != ctx->root_nsproxy->pid_ns) {> >- _ckpt_err(ctx, -EPERM, "%(T)Nested pid_ns unsupported\n");> >- ret = -EPERM;> >+ /* pidns must be descendent of root_nsproxy */> >+ pidns = nsproxy->pid_ns;> >+ while (pidns != ctx->root_nsproxy->pid_ns) {> >+ if (pidns == &init_pid_ns) {> >+ ret = -EPERM;> >+ _ckpt_err(ctx, ret, "%(T)stranger pid_ns\n");> >+ break;> >+ }> >+ pidns = pidns->parent;> > Currently we do this while() loop twice - once here and once when> we collect the vpids. While I doubt if this has any performance> impact, is there an advantage to doing it also here ? (a violation> will be observed there too).With the new logic (ripped verbatim from Louis' email) such a movewould make the checkpoint_vpids() code a bit uglier. I'm about toresend, please let me know if you still want the code moved....> >diff --git a/kernel/nsproxy.c b/kernel/nsproxy.c> >index 0da0d83..6d86240 100644> >--- a/kernel/nsproxy.c> >+++ b/kernel/nsproxy.c> >@@ -364,8 +364,13 @@ static struct nsproxy *do_restore_ns(struct ckpt_ctx *ctx)> > get_net(net_ns);> > nsproxy->net_ns = net_ns;> >- get_pid_ns(current->nsproxy->pid_ns);> >- nsproxy->pid_ns = current->nsproxy->pid_ns;> >+ /*> >+ * The pid_ns will get assigned the first time that we> >+ * assign the nsproxy to a task. The task had unshared> >+ * its pid_ns in userspace before calling restart, and> >+ * we want to keep using that pid_ns.> >+ */> >+ nsproxy->pid_ns = NULL;> > This doesn't look healthy.> > If it is (or will be) possible for another process to look at the> restarting process, not having a pid-ns may confuse other code in> the kernel ?No task will have this nproxy attached before we assign a validpid_ns. The NULL pid_ns is only while it is in the objhash butnot attached to a task.thanks,-serge | http://lkml.org/lkml/2010/3/23/8 | CC-MAIN-2016-50 | refinedweb | 508 | 65.01 |
Embedding AutoCAD 2009 in a standalone dialog
This post takes a look at another topic outlined in this overview of the new API features in AutoCAD 2009.
AutoCAD 2009 introduces the ability to embed the application in a standalone dialog or form via an ActiveX control. This capability has been around for a number of releases of AutoCAD OEM, but this feature has now been made available in the main AutoCAD product.
The way the control works is to launch an instance of AutoCAD in the background (it should go without saying that AutoCAD needs to be installed on the system, but I've said it, anyway :-) and it then pipes the graphics generated by AutoCAD into the area specified by the bounds of the control. It also then pipes back any mouse movements or keystrokes, to allow the embedded AutoCAD to be controlled. It's pretty neat: you'll see the standard cursor, be able to enter commands via dynamic input, and more-or-less do whatever can be done inside the full product.
The control is especially handy if you want to present a reduced user-interface to the people using the product (which is really what AutoCAD OEM is for, in a nutshell, although the development effort involved in creating a full AutoCAD OEM application makes it inappropriate for quick & easy UI streamlining).
Let's start our look at this control by creating a new C# Windows Application project in Visual Studio 2005 (you can use whatever ActiveX container you like, though - it should even work from a web-page or an Office document):
Once Visual Studio has created the new project, we need to add our control to the toolbox. If you right-click on the toolbox, you'll be able to select "Choose Items...".
From here, there should be an item "AcCtrl" in the list of COM Components. Otherwise you can browse to it in c:\Program Files\Common Files\Autodesk Shared\AcCtrl.dll.
Then you simply need to place the control on your form.
Once we've done that, we're going to add a few more controls - for the drawing path, and a text string for commands we want to try "posting" to the embedded AutoCAD application.
Here's the C# code we'll use to drive the embedded control from the form. You should be able to work out what the various controls have been called in the project by looking at the code.
using System;
using System.Windows.Forms;
namespace EmbedAutoCAD
{
public partial class MainForm : Form
{
public MainForm()
{
InitializeComponent();
}
private void browseButton_Click(
object sender, EventArgs e)
{
OpenFileDialog dlg =
new OpenFileDialog();
dlg.InitialDirectory =
System.Environment.CurrentDirectory;
dlg.Filter =
"DWG files (*.dwg)|*.dwg|All files (*.*)|*.*";
Cursor oc = Cursor;
String fn = "";
if (dlg.ShowDialog() ==
DialogResult.OK)
{
Cursor = Cursors.WaitCursor;
fn = dlg.FileName;
Refresh();
}
if (fn != "")
this.drawingPath.Text = fn;
Cursor = oc;
}
private void loadButton_Click(
object sender, EventArgs e)
{
if (System.IO.File.Exists(drawingPath.Text))
axAcCtrl1.Src = drawingPath.Text;
else
MessageBox.Show("File does not exist");
}
private void postButton_Click(
object sender, EventArgs e)
{
axAcCtrl1.PostCommand(cmdString.Text);
}
}
}
Finally, when we run the application and load a drawing via the browse/load buttons, the real fun starts. :-)
Try entering commands via dynamic input, or via the "Post a command" textbox. You might feel a little disorientated due to the lack of a command-line (I do love my command-line ;-), but dynamic input allows you to at least see what you're typing.
Here's the C# project for you to download.
Please tell me how to get xrefs to show up using the AcCtrl.dll control from 2009 TruEView via VB.
Posted by: tb | May 28, 2008 at 06:04 PM
Do your xrefs get loaded when launching the DWG TrueView executable directly?
I haven't yet looked into xref support when TrueView is hosted by the control, but I thought I'd start by asking this question.
Kean
Posted by: Kean | May 29, 2008 at 01:32 PM
Hi Kean,
nice tool. But on some occacions, the whole ACAD frame comes up, with all menus, icons, ribbons, commandline and so on.
This happens e.g. when a message on the commandline is printed or a dialgbox opens. And when you work with additional DBX/ARX loaded, this happen nearly every time. How to avoid this?
Greetings
Markus
Posted by: Markus Hannweber | August 07, 2008 at 05:05 PM
Hi Markus,
Please submit a reproducible case via the ADN site, and we'll look into it.
Regards,
Kean
Posted by: Kean | August 07, 2008 at 05:08 PM
Hi Kean,
While in a AutoCAD session, I wrote a command that displays the above as a (modal) form. When exiting i get an error from AcVBA.arx (access violation reading ...).
My questions:
- is there another way to display entities in a separate window/control, or is AcCtrl the way to go? (I need zoom/pan/selection functionality from the window)
- is there a way to display the control without starting another instance of AutoCAD?
Thanks,
Harrie
Posted by: harrie | October 09, 2008 at 05:33 PM
Hi Harrie,
I would recommend against using the AcCtrl control in-process to AutoCAD: this effectively fires up and drives a separate instance of AutoCAD behind the scenes.
You will hopefully find the BlockView control more interesting, as mentioned in this previous post.
Regards,
Kean
Posted by: Kean | October 10, 2008 at 09:03 AM
Hi Kean:
A DWG file with XREF displays well on the stand alone DWG TrueView 2009 but not when using DWG TrueView 2009 as an ActiveX control. Does this feature work when used as an ActiveX control?
(The layer manager dialog invoked from the activeX control still lists all the layers just like the stand alone does.)
Thanks,
Bala
Posted by: Bala Padmanabhan | February 25, 2009 at 08:34 PM
Hi Bala,
This issue has now been mentioned a couple of times in comments on this post...
I've just tried it myself, and can see the behaviour you're describing. Running SysInternals' ProcessMonitor shows that the xrefed DWG files are being accessed, but it's true that they aren't being displayed.
I recommend submitting the issue via the ADN site, if you're a member (this blog isn't a forum to get support... I unfortunately don't have time to investigate issues in depth unless they relate specifically to code I've posted).
Regards,
Kean
Posted by: Kean Walmsley | February 26, 2009 at 11:11 AM
Hi Kean
I've been trying this with AutoCAD 2010 and get the following error ...
"Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG))"
at the line ...
axAcCtrl1.Src = drawingPath.Text;
I've tried REGSVR32 AcCtrl.dll, with no effect
Have you any ideas/pointers ?
Thanks in advance,
Nick
Posted by: Nick Hall | July 01, 2009 at 09:29 AM
Hi Nick,
I'm not going to be able to look into this anytime soon - would you mind submitting it via ADN?
Thanks & regards,
Kean
Posted by: Kean Walmsley | July 01, 2009 at 05:15 PM | http://through-the-interface.typepad.com/through_the_interface/2008/03/embedding-autoc.html | crawl-002 | refinedweb | 1,180 | 61.77 |
#include <complex.h>
double complex catan(double complex z);
float complex catanf(float complex z);
long double complex catanl(long double complex z);
These functions shall compute the complex arc tangent of z, with branch cuts outside the interval [-i, +i] along the imaginary axis.
These functions shall return the complex arc tangent value, in the range of a strip mathematically unbounded along the imaginary axis and in the interval [-pi/2, +pi/2] along the real axis.
No errors are defined.
The following sections are informative.
None.
None.
None.
None.
ctan() , the Base Definitions volume of IEEE Std 1003.1-2001, <complex.h> | http://www.makelinux.net/man/3posix/C/catanf | CC-MAIN-2015-48 | refinedweb | 104 | 52.19 |
Dan Nicolaescu <address@hidden> writes: > Richard Stallman <address@hidden> writes: > > > If you look at the various .o files with size, I expect > > you can find out where the increase really is. > > > It looks like all the increase comes from alloc.o, specifically from > "pure". > For temacs from CVS: > (gdb) p sizeof (pure) > $1 = 1980000 > > for temacs from 21.3: > (gdb) p sizeof(pure) > $1 = 720000 > > changing the size of "pure" to be the same as in 21.3 makes temacs be > roughly the same size as the 21.3 temacs. But that does not help > because with the smaller size "emacs" cannot be created, it crashes > while loading loadup.el. Trying do decrease "pure" does not work, it > temacs crashes when trying to dump... > > Another interesting fact is that the current CVS uses > 993528 pure bytes and 21.3 uses 715540 pure bytes. > Yet the sizeof(pure) difference is so much bigger. In puresize.h we have this: #ifndef PURESIZE_RATIO #if VALBITS + GCTYPEBITS + 1 > 32 #define PURESIZE_RATIO 9/5 /* Don't surround with `()'. */ #else #define PURESIZE_RATIO 1 #endif #endif I think that the '+1' part (corresponding to the now defunct MARK bit) in "VALBITS + GCTYPEBITS + 1" should be removed. Since VALBITS in CVS is one bigger that in 21.3, we get the 9/5 ratio rather than the '1' ratio. The rest of the increase is simply that BASE_PURESIZE has been increased from 720000 to 1100000 -- that value seems too big, so we should probably lower it before final release of 21.4. BTW, what was the rationale for the following change? 2002-02-15 Andreas Schwab <address@hidden> * puresize.h (BASE_PURESIZE): Increase to 9/5. The commantary is wrong -- the change was to PURESIZE_RATIO, not BASE_PURESIZE (the ratio used to be 8/5)? -- Kim F. Storm | http://lists.gnu.org/archive/html/emacs-pretest-bug/2004-09/msg00057.html | CC-MAIN-2014-35 | refinedweb | 298 | 75.1 |
If you ask for the
System. property of a shell item, the timestamp that comes back is up to two seconds different from the file's actual timestamp. (Similarly for
System. and
System..) Why is that?
This is an artifact of a decision taken in 1993.
In general, shell namespace providers cache information in the ID list at the time the ID list is created so that querying basic properties from an item can be done without accessing the underlying medium.
In 1993, saving 4KB of memory had a measurable impact on system performance. Therefore, bytes were scrimped and saved, and one place where four whole bytes were squeezed out was in the encoding of file timestamps in ID lists. Instead of using the 8-byte
FILETIME structure, the shell used the 4-byte DOS date-time format. Since the shell created thousands of ID lists, a four-byte savings multiplied over thousands of items comes out to several kilobytes of data.
But one of the limitations of the DOS date-time format is that it records time in two-second increments, so any timestamp recorded in DOS date-time format can be up to two seconds away from its actual value. (The value is always truncated rather than rounded in order to avoid problems with timestamps from the future.) Since Windows 95 used FAT as its native file system, and FAT uses the DOS date-time format, this rounding never created any problems in practice, since all the file timestamps were already pre-truncated to 2-second intervals.
Of course, Windows NT uses NTFS as the native file system, and NTFS records file times to 100-nanosecond precision. (Though the accuracy is significantly less.) But too late. The ID list format has already been decided, and since ID lists can be saved to a file and transported to another computer (e.g. in the form of a shortcut file), the binary format cannot be tampered with. Hooray for compatibility.
Bonus chatter: In theory, the ID list format could be extended in a backward-compatible way, so that every ID list contained two timestamps, a compatible version (2-second precision) and a new version (100-nanosecond precision). So far, there has not been significant demand for more accurate timestamps inside of ID lists.
You'll have revamp the date structure in a few years anyways when 2107 rolls around. By that time, people will probably be complaining about the headaches caused by porting their Win256 apps to Win512 and wishing for the good old days of Win128 programming!
>wishing for the good old days of Win128 programming!
Win64 ought to be enough for anybody.
No, seriously.
"the binary format cannot be tampered with" There are already multiple versions of the ID list format used by the default FS IShellFolder. IIRC there are even some undocumented ID list functions that stuff hidden data into the ID list.
@ Andre, read "Inside the AS/400" by Frank Soltis. You will understand why 128bit is very important.
@ Raymond, thanks for this. I enjoy reading about how we got from Point A to Point B.
Is there any documentation on the Shell's ID format for files? I've always wondered ever since I first dumped a PIDL…
In practice of course the precision, as Mr Chen hints, is illusory. Most computers synchronise clocks only weekly, and most computer clocks can easily drift by two seconds or more within a week.
There is no such thing as "equality" for time or any other Real quantity. There is only "near enough".
@12BitSlab haven't read that book, but that seems incredibly farfetched. There's just no reason even accounting for extra security for ASLR to need more than 2^64 bytes of addressable memory space in the foreseeable future.
Now having larger arithmetic units is something else, but there we're already far over 128bit anyhow and nobody uses that to address the bitness of systems.
What's the use case you see for 128 bit address space?
@voo, More's law suggests an exponential expansion of computational power, whether measured by instructions per second or storage capacity – specifically a doubling every 18 months.
Since (if) that's true (and I think it is) one bit will have to be added on average every 18 months – or eight bits per twelve years. That being the supposition, the journey from 64 bits to 128 might take at most 100 years or so (and much less, maybe a quarter, for high-performance systems) but we will eventually arrive.
And even then, some systems will still be running COBOL.
I've clearly been reading this blog too long. As soon as you said "two seconds" I knew what the issue is!
@ voo,
The AS/400 was the successor of the S/38. It uses a concept of single level store. The OS views the world as a flat 128 bit memory space. It doesn't even know that disk drives are attached (except the WRKDSKSTS command). The hardware takes care of paging stuff to/from memory. A consequence of this is that there is no difference between a page fault for code and a page fault for data. They are, in fact, one in the same. There are no more records, files, etc. from an MI view (MI is roughly the equivalent of Assembler on a 400). There are only spaces. Cleanest architecture I have ever worked with in my career.
When the S/38 first shipped in 1981, it was at least 4-5 decades ahead of everything else — from an architectural view.
@12BitSlab Interesting concept, but even then: The total space of the www in 2013 according to Wikipedia was 4*10^21 byte so around 2^70 byte.
And just like we can access files with 64 bit offsets on 32 bit machines, the sane skulls work here too, although I can certainly see the elegance in the basic approach. But even then I wager we should be fine for another decade or three ;)
In WinDbg with symbols for shell32 type SHELL32!IL*Hidden* and press tab =) and this is in addition to the various versions of the normal FS PIDLs that I believe exists. Too bad MS cannot even document the layout of the PIDL created by SHSimpleIDListFromPath because these are actually passed across component boundaries, including 3rd-party components…
Some people probably wouldn't be happy unless Windows's precision was Planck time.
This post is (un?)surprisingly free of reactive snarky commments.
@Deanna: I thought I was having a deja-vu.
@12BitSlab
There were excellent features in the AS/400 architecture. Sadly, almost all the programs I encountered were written in RPG. I need mind bleach now just to forget about it.
>So far, there has not been significant demand for more accurate timestamps inside of ID lists.
How much demand do you need, and how do we demand it? There's no easy way to get in touch with Windows development.
This design decision also gives rise to the annoyance that Explorer details view doesn't show timestamps before 1/1/1970 – and yes, such timestamps are used by people doing photo archival for example. However, MS don't seem to think that anyone has any such files based on the response to an outstanding bug (still) in the MFC framework that prevents an application loading a file with a timestamp earlier than 1/1/1970.
DaveL: a photo _file_ created, modified or accessed earlier than 1/1/1970?
Surely that's a total misuse of the file property in question; you'd need something more custom than that.
@Boris. If the photo was shot in 1960, surely it's perfectly valid for someone to expect its created timestamp to say 1960? There are metadata properties for photo (and other media file types) timestamps, but if you investigate you'll find that most every mainstream application currently does it differently, so the file system timestamp is something that works (and it's what Explorer shows). Besides, if the file system supports the timestamp, the OS UI should show it correctly IMO). :)
PS. I just rewrote "Date taken" an iPhone photo to 1954. I also customized the Explorer folder view to show this property in one of the columns. If there is a problem, it hasn't been stated clearly.
PPS. I posted my second comment before I saw yours.
It's not valid because Explorer deals in files, and users understand the concept of a file. Explorer is obviously doing it correctly because it's using a custom property. So if there is a problem, it's in the applications that assume Date taken == File created. They should learn to work with special properties.
@Boris. As to getting applications to work (consistently), yes, I agree they should, but if you investigate you'll find they agree to disagree about the best way to do media metadata :)
@ Rob G
I agree with you re: RPG III. That stuff will rot your brain.
@DaveL: I switched my Photos folder to Details, right-clicked on column headers and checked "Date taken" from the right-click menu which appeared. The column appeared and now I can sort by it or whatever.
Do you have specific examples of which applications are causing problems? I've never really tried scanning old photos, editing their properties and exchanging them between say, iOS, Mac OS X and Windows 7, but the date taken is part of EXIF metadata and the major vendors need to support reading and editing of such metadata. I can't be sure, of course, but I get the feeling you're not talking about current, standardized applications from major vendors (Apple, Microsoft, etc.), but perhaps something freeware or years out of date.
@Anon: According to Wikipedia, DecTape[0] supports file creation timestamps[1], so it's possible that someone somewhere has a "hello world" program in PDP-10 assembly language created in the mid-late '60s, that they'd like to transfer/restore to a more modern system with full fidelity.
How they achieve such a transfer is, of course, an exercise for the reader :-)
[0] en.wikipedia.org/…/DECtape
[1] en.wikipedia.org/…/Comparison_of_file_systems
@Anon: Ohmygod. I just checked the "supporting operating systems" part of that Wikipedia comparison page, and just learned about AncientFS[0]
From the DecTape comments: "The tap(1) command in ancient Unix (Editions First through Third) was used to save and restore selected portions of the file system hierarchy on DECtape. Even though the on-tape format was the same across these editions, you need to specify a Unix edition because the epoch changed. (The epoch was not 00:00 GMT Jan 1 1970 until Fourth Edition Unix.)" That epoch fact is a neat bit of Unix trivia I didn't know before!
So, using Linux, one could legitimately conceivably try to copy a pre-1970 file from a Ver 1 Unix tape image, to an NTFS partition, while preserving the file-creation timestamp. If the filesystem supports it, I don't think it's unreasonable to expect the OS to display it!
[0] osxbook.com/…/ancientfs
> There's just no reason even accounting for extra security for ASLR to need more than 2^64 bytes of addressable memory space in the foreseeable future.
That assumes conventional addressing and memory structures.
In the past, I have worked on the design of a system that needed 128-bit virtual addresses.
One reason you might need it, and I think this is the AS/400 case too, is that objects have unique addresses. Once an object is deleted, no future object will ever reuse that same virtual address.
(This does not apply to Win128 of course, it's the general case I am talking about)
>Even OUTSIDE the Windows world, it is statistically a near-impossibility to have any file created prior to 1970, as only three filesystems existed then,
Maybe I'm lacking context for your remark, but I've stored files on four of those three file systems; here named by the OS:
George 3
Eldon 2
TOPS 10
E4 (CTL Modular One OS)
>What's the use case you see for 128 bit address space?
High resolution 3D printing. Just as program code still fits in small spaces but data – images and videos especially – require more memory (and more addressable memory), the higher the resolution of 3d printers, the more memory we'll need. I haven't done the math, but atomic scale printing of houses is just "a few" technical issues away from being a reality, I (optimistically) expect.
@Boris. "Date Taken" is a metadata property (System.Photo.DateTaken), and you're right, some views of Explorer do show that metadata property correctly, but the (widely used) details doesn't do that, and its view does not show file system dates prior to 1/1/1970. It's perfectly valid for *any* file on a Windows system to have any of its 3 timestamp values be a date prior to 1/1/1970.
@Quadko: You're either completely overestimating the size of whatever image or video (3d, 5d, no matter) or underestimating how enormous 2^64 a number is. To give an example: You can store 1/64th of the total WWW in that memory space if wikipedia is to be believed (or the total WWW around 2010).
Now the reasons that 12bitslab and dave brought up are more realistic – I mean it's still not going to be a problem for the next few decades, but I could see us running into such limitations for large clusters at the end of this century.
@DaveL
It isn't valid for any natively-supported Windows filesystem to display a *FILESYSTEM* date prior to 1/1/1970, because it is impossible for that file to have existed on the filesystem prior to 1970. There was no FAT until 1977, and FAT is the oldest natively-supported Windows filesystem.
*NON*-filesystem dates (such as "When the picture was taken") have no relationship with filesystem dates.
Even OUTSIDE the Windows world, it is statistically a near-impossibility to have any file created prior to 1970, as only three filesystems existed then, none of which are actually supported today in any meaningful way.
(To be more pedantic, and someone else will have to verify, I believe Creation Time wasn't supported until the 90s, so it would be technically impossible to have a file with a Creation Date prior to 199X on a FAT* or NTFS filesystem.)
@voo: Memory-mapping the entire Internet.
(And if you say that's impossible, or a silly idea… consider that people probably said that about memory-mapped files back when they only had 16 address bits)
@Quadko
You're talking about what more than 2^64 bytes of *memory* could be used for, not what more than 2^64 bytes of *address space* could be used for.
@dave
I didn't know the CTL had a file system. It isn't generally listed.
Also, did it support file timestamps? (Context is that the file system must support a creation timestamp.)
@Karellen
When were Creation timestamps implemented in DECTape? I thought they were a feature of DECTape II.
v1 Unix sprang into existence in 1969; While it is conceivable that there are files stored from that year, I can't find evidence that it was in 'use' until the 70s. I also don't see any documentation of the v1 FS capabilities.
The original "decision" was actually taken in 1973 by Gary Kildall, along with a few other things, like 8.3 :-)
Axel: CP/M didn't even have file sizes down to the byte. When it had timestamps, they were down to the minute. About the only thing the MS-DOS FAT filesystem format inherited from CP/M is the filename conventions.
@Anon – I was just going off the "Metadata" table on Wikipedia's "Comparison of File Systems" page (which I linked to) which has an unqualified "Yes" in the DECtape/Creation timestamps box. I'd be happy[0] to be corrected if you (or anyone else) has a citation or personal recollection which asserts otherwise.
[0] But also disappointed, because then my "moving a pre-1970 file from a DECtape image to an NTFS partition" thought experiment would no longer be viable. | https://blogs.msdn.microsoft.com/oldnewthing/20150408-00/?p=44283 | CC-MAIN-2018-09 | refinedweb | 2,735 | 61.46 |
9]:
%matplotlib inline import gluonbook as gb import mxnet as mx from mxnet import autograd, gluon, image, init, nd from mxnet.gluon import data as gdata, loss as gloss, utils as gutils import sys from time import time
9.1.1. Common Image Augmentation Method¶
In this experiment, we will use an image with a shape of \(400\times 500\) as an example.
In [2]:
gb.set_figsize() img = image.imread('../img/cat1.jpg') gb.plt.imshow(img.asnumpy())
Out[2]:
<matplotlib.image.AxesImage at 0x7f7ad86f41d0>
The drawing function
show_images is defined below.
In [3]:
# This function is saved in the gluonbook package for future use. def show_images(imgs, num_rows, num_cols, scale=2): figsize = (num_cols * scale, num_rows * scale) _, axes = g)
9)
9)
9)
9)
9.1.2.1. Using a Multi-GPU Training Model¶
We train the ResNet-18 model described in “ResNet” section on the CIFAR-10 data set. We will also apply the methods described in the “Gluon Implementation in Multi-GPU Computation” section, and use a multi-GPU training model.
First, we define the
try_all_gpus function to get all available
GPUs.
In [15]:
def try_all_gpus(): # This function is saved in the gluonbook package for future use. ctxes = [] try: for i in range(16): # Here, we assume the number of GPUs on a machine does not exceed is saved in the gluonbook package for future use. def evaluate_accuracy(data_iter, net, ctx=[mx.cpu()]): if isinstance(ctx, mx.Context): ctx = [ctx] acc = nd.array([0]) n = 0 for batch in data_iter: features, labels, _ = _get_batch(batch, ctx) for X, y in zip(features, labels): y = y.astype('float32') acc += (net(X).argmax(axis=1) == y).sum().copyto(mx.cpu()) n += y.size acc.wait_to_read() return acc.asscalar() / n
Next, we define the
train function to train and evaluate the model
using multiple GPUs.
In [18]:
# This function is saved in the gluonbook = 0.0, 0.0, 0.0, 0.0 start =() train_acc_sum += sum([(y_hat.argmax(axis=1) == y).sum().asscalar() for y_hat, y in zip(y_hats, ys)]) train_l_sum += sum([l.sum().asscalar() for l in ls]) trainer.step(batch_size) n += batch_size m += sum([y.size for y in ys]) test_acc = evaluate_accuracy(test_iter, net, ctx) print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, ' 'time %.1f sec' % (epoch + 1, train_l_sum / n, train_acc_sum / m, test(), g)
9.1.2.2. Comparative Image Augmentation Experiment¶
We first observe the results of using image augmentation.
In [20]:
train_with_data_aug(train_augs, test_augs)
training on [gpu(0), gpu(1)] epoch 1, loss 1.3641, train acc 0.516, test acc 0.579, time 37.9 sec epoch 2, loss 0.7995, train acc 0.717, test acc 0.732, time 34.7 sec epoch 3, loss 0.5870, train acc 0.794, test acc 0.759, time 34.7 sec epoch 4, loss 0.4704, train acc 0.837, test acc 0.765, time 34.7 sec epoch 5, loss 0.3922, train acc 0.864, test acc 0.836, time 34.7 sec epoch 6, loss 0.3258, train acc 0.888, test acc 0.816, time 34.7 sec epoch 7, loss 0.2715, train acc 0.905, test acc 0.837, time 34.7 sec epoch 8, loss 0.2334, train acc 0.919, test acc 0.851, time 34.6 sec epoch 9, loss 0.1923, train acc 0.933, test acc 0.823, time 34.6 sec epoch 10, loss 0.1664, train acc 0.943, test acc 0.851, time 34.6 sec
For comparison, we will try not to use image augmentation below.
In [21]:
train_with_data_aug(test_augs, test_augs)
training on [gpu(0), gpu(1)] epoch 1, loss 1.4358, train acc 0.490, test acc 0.610, time 34.9 sec epoch 2, loss 0.8370, train acc 0.704, test acc 0.698, time 34.7 sec epoch 3, loss 0.6001, train acc 0.791, test acc 0.756, time 34.7 sec epoch 4, loss 0.4469, train acc 0.843, test acc 0.772, time 34.7 sec epoch 5, loss 0.3285, train acc 0.885, test acc 0.797, time 35.0 sec epoch 6, loss 0.2308, train acc 0.919, test acc 0.766, time 34.8 sec epoch 7, loss 0.1636, train acc 0.942, test acc 0.814, time 34.7 sec epoch 8, loss 0.1206, train acc 0.957, test acc 0.806, time 34.7 sec epoch 9, loss 0.0873, train acc 0.969, test acc 0.791, time 34.7 sec epoch 10, loss 0.0785, train acc 0.972, test acc 0.822, time 34.7 sec
As you can see, even adding a simple random flip may have a certain impact on the training. Image augmentation usually results in lower training accuracy, but it can improve testing accuracy. It can be used to cope with overfitting.
9.
9.1.4. Problems¶
- Add different image augmentation methods in model training based on the CIFAR-10 data set. Observe the implementation results.
- With reference to the MXNet documentation, what other image augmentation methods are provided in Gluon’s
transformsmodule? | http://gluon.ai/chapter_computer-vision/image-augmentation.html | CC-MAIN-2019-04 | refinedweb | 858 | 82.51 |
A Dart interface to Firebase Cloud Messaging (FCM) for Android and iOS
You can use fcm-push either as commandline tool or you can use it as a library.
Activate fcm-push:
pub globale); }
Copyright 2017,
Follow me on Twitter or star this repo here on GitHub.
A Dart interface to Firebase Cloud Messaging (FCM) for Android and iOS
This CHANGELOG.md was generated with Changelog for Dart
Add this to your package's pubspec.yaml file:
dependencies: fcm_push: "^1.1.9"
You can install packages from the command line:
with pub:
$ pub get
with Flutter:
$ flutter packages get
Alternatively, your editor might support
pub get or
packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:fcm_push/fcm_push.dart';
We analyzed this package, and provided a score, details, and suggestions below.
Detected platforms: Flutter, web, other
No platform restriction found in primary library
package:fcm_push/fcm_push.dart.
Maintain an example.
Create a short demo in the
example/directory to show how to use this package. Common file name patterns include:
main.dart,
example.dartor you could also use
fcm_push.dart.
Use
analysis_options.yaml.
Rename old
.analysis_optionsfile to
analysis_options.yaml. | https://pub.dartlang.org/packages/fcm_push | CC-MAIN-2018-09 | refinedweb | 201 | 50.73 |
How can I use threading in Python?pool =) I/O):
multiprocessing.dummy replicates the API of multiprocessing, but is no more than a wrapper around the threading module.
import urllib2from multiprocessing.dummy import Pool as ThreadPoolurls = [ ' ' ' ' ' ' ' ' Make the Pool of workerspool = ThreadPool(4)# Open the URLs in their own threads# and return the resultsresults = pool.map(urllib2.urlopen, urls)# Close the pool and wait for the work to finishpool.)
Here's a simple example: you need to try a few alternative URLs and return the contents of the first one to respond.
import Queueimport threadingimport urllib2# Called by each threaddef get_url(q, url): q.put(urllib2.urlopen(url).read())theurls = [" " = Queue.Queue()for u in theurls: t = threading.Thread(target=get_url, args = (q,u)) t.daemon = True t.start()s = q.get()print s
This is a case where threading is used as a simple optimization: each subthread is waiting for a URL to resolve and respond, to put its contents on the queue; each thread is a daemon (won't keep the process up if the main thread ends -- that's more common than not); the main thread starts all subthreads, does a
get on the queue to wait until one of them has done a
put, then emits the results and terminates (which takes down any subthreads that might still be running, since they're daemon threads).
Proper use of threads in Python is invariably connected to I/O operations (since CPython doesn't use multiple cores to run CPU-bound tasks anyway, the only reason for threading is not blocking the process while there's a wait for some I/O). Queues are almost invariably the best way to farm out work to threads and/or collect the work's results, by the way, and they're intrinsically threadsafe, so they save you from worrying about locks, conditions, events, semaphores, and other inter-thread coordination/communication concepts.
NOTE: For actual parallelization in Python, you should use the multiprocessing module to fork multiple processes that execute in parallel (due to the global interpreter lock, Python threads provide interleaving, but they are in fact executed serially, not in parallel, and are only useful when interleaving I/O operations).
However, if you are merely looking for interleaving (or are doing I/O operations that can be parallelized despite the global interpreter lock), then the threading module is the place to start. As a really simple example, let's consider the problem of summing a large range by summing subranges in parallel:
import threadingclass SummingThread(threading.Thread): def __init__(self,low,high): super(SummingThread, self).__init__() self.low=low self.high=high self.total=0 def run(self): for i in range(self.low,self.high): self.total+=ithread1 = SummingThread(0,500000)thread2 = SummingThread(500000,1000000)thread1.start() # This actually causes the thread to runthread2.start()thread1.join() # This waits until the thread has completedthread2.join()# At this point, both threads have completedresult = thread1.total + thread2.totalprint result
Note that the above is a very stupid example, as it does absolutely no I/O and will be executed serially albeit interleaved (with the added overhead of context switching) in CPython due to the global interpreter lock. | https://codehunter.cc/a/python/how-can-i-use-threading-in-python | CC-MAIN-2022-21 | refinedweb | 535 | 52.09 |
accessible
Skip over Site Identifier
Skip over Generic Navigation
Skip over Search
Skip over Site Explorer
Site ExplorerSite Explorer
Posts: 19
Rating:
(3)
import from Process Automation User Connection, Talk to Other Process Users
Brian Anderson posted 9/28/05:I have one APS system for three process areas. Alarms for each area are put into one alarm group. I would now like to take advantage of built-in alarm functionality by having multiple alarm groups for each process area. Modifying the alarm filters using the existing method makes the filter expression too long.
One of our existing filter expressions is:
[<alarm>(severity)]>=1&&[<alarm>(enabled)]==1&&[<alarm>(dis)]==0&&([.ALMMASKS1(2,Member)]||[.PRI_ALMMASKS1(2,Member)]||[.SEC_ALMMASKS1(2,Member)])
This expression allows alarms from any point that has its limit alarm configured in alarm group 2. I am now pursuing an alarm management strategy that would have multiple alarm groups and only a deviation alarm for some points. The necessary alarm filter expression would include ALMMASKS1, ALMMASKS2, ALMMASKS3, PRI_ALMMASKS1, PRI_ALMMASKS2, PRI_ALMMASKS3, SEC_ALMMASKS1, SEC_ALMMASKS2, and SEC_ALMMASKS3 for up to ten alarm groups. This filter expression is too long and is not allowed by the system.
Is there a different way to express that an alarm is configured in a group without referring to all nine ALMMASKS tables? Has anyone configured an alarm filter for multiple alarm groups? What would you recommend? Also, I would like to display the current mode of each alarm group. Does anyone know where this infomation is located in the rtap database and how it might be displayed? Thanks very much for your help. | https://support.industry.siemens.com/tf/ww/en/posts/aps-alarming-for-multiple-alarm-groups/5701/?page=0&pageSize=10 | CC-MAIN-2019-26 | refinedweb | 267 | 54.83 |
I spent yesterday evening doing something I haven’t done in a while: tinkering. You may have seen the news that there’s a big change coming in Firefox. The short version is that later this year, the old extension model is going to be retired permanently, and extensions using it will no longer work. As someone with an extension on addons.mozilla.org, I’ve received more than a few emails warning me that they’re about to go dark. This isn’t the first time Mozilla has tried to entice folks to move on from XUL Overlays: Jetpack was a similar effort to allow extensions to play better within a sandbox. This time I think it’s going to stick: the performance benefits seems undeniable, and as a developer the prospect of writing a single extension to support multiple browsers is pretty appealing.
Over a year ago I took a stab at porting OpenAttribute to Browser (Web)Extensions. I read the Firefox code and basically understood it, but only because it was the third iteration of something I’d built. The Chrome code — which should be close to a proper WebExtension — was almost inscrutable to me. So naturally I wanted to start with tests. But a year ago I couldn’t quite make the connection for some reason. WebExtensions split your code between the page (content scripts) and the background process. Long running things belong in the background, and the two communicate via message passing. After reading about the coming XUL-pocalypse, I decided to take another run at it.
Last night, though, I focused on something far smaller: just understanding how to put together a WebExtension using the technologies I’m familiar with — react, redux — and the ones I’m interested in — TypeScript. The result is an extension that doesn’t do much, but it is written in TypeScript, and it does work in both Firefox and Chrome from a single code base.
The attribution extensions I’ve written have always had a data flow problem. There’s the question of what triggers parsing, where the extracted data is stored, and how you update the display. Not to mention how do you do that without slowing down overall browser performance. I’ve had good luck with React in other projects: it feels like it forces me to think of things more functionally, making it easier to write tests: does this component do the right thing with the right data? does this other thing send the right signals with the right input? Cool. But how to do that across the process boundary between background and content scripts?
webext-redux is a package that makes it easy to manage a Redux store in both processes and keep it in sync. The only real wrinkle is that the actions you fire on the content side have to be mapped to actions on the background process, which is where the mutations all take place.
So why TypeScript? I’ve been enjoying ES6 and the changes it brings to JavaScript. But I’ve still missed the types you get in Go with MyPy. TypeScript is interesting: it’s duck typed, but the ducks seem to quack louder than they do in Python.
I was particularly intrigued by ambient modules, which is how TypeScript provides type information for third party JavaScript libraries you may want to integrate. Luckily type definitions already exit for the web extension API, and it’s easy to write a (useless) one to quell the compiler warnings.
I think the biggest shift I’ve been trying to make is understanding imports.
import * as actions from './actions' feels weird to write, and to be honest I’m not sure how it differs from
import actions from './actions' when there’s not a default export.
I like TypeScript enough to try another experiment in the future. The compiler already pointed out a couple of errors that would have been hard to track down.
Up next: figuring out how to test web extensions and build a single code base that runs under Chrome, Firefox, Edge, and Opera. | https://www.yergler.net/category/mozcc/ | CC-MAIN-2018-43 | refinedweb | 685 | 70.84 |
Asked by:
Azure Function using unmanaged/native dll
Question
I have a very simple project built in Visual Studio 2017. It contains an Azure C# Function that uses an HttpTrigger to pass two values to be added together and return the result. Another project contains a mixed-mode C++ assembly. It creates an instance of an unmanaged C++ class and delegates the summation to it. This works fine in debug/release mode 32 bit on my development machine. When I publish to Azure it fails to load the mixed mode assembly. The assembly and native dll are located in a bin folder under the root of the function folder. This is the absolute simplest example I could think of to try and test out interop. Any input you could provide would be greatly appreciated.
I am trying to get an understanding of how to do this properly. We have a large legacy base of libraries that I need to leverage if this works.
Exception while executing function: UnAdder
Microsoft.Azure.WebJobs.Host.FunctionInvocationException : Exception while executing function: UnAdder ---> System.IO.FileNotFoundException : Could not load file or assembly 'UnAdderWrapper.DLL' or one of its dependencies.
UnAdder.cs
using System.Linq; using System.Net; using System.Net.Http; using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Extensions.Http; using Microsoft.Azure.WebJobs.Host; using UnAdderWrapper; namespace UnFunction { public static class UnAdder { [FunctionName("UnAdder")] public static HttpResponseMessage Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "UnAdder/val1/{val1}/val2/{val2}")]HttpRequestMessage req, int val1, int val2, TraceWriter log) { log.Info("C# HTTP trigger function processed a request."); // Adder aw = new Adder(); int sum = aw.Sum(val1, val2); return req.CreateResponse(HttpStatusCode.OK, $"Hello Sum={sum}"); } } }
UnAdderWrapper.h, .cpp
// UnAdderWrapper.h #pragma once using namespace System; #include "..\Adder\Adder.h" namespace UnAdderWrapper { public ref class Adder { public: Adder() : m_adder(new UnAdder) {} ~Adder() { delete m_adder; } int Sum(int Val1, int Val2); protected: !Adder() { delete m_adder; } private: UnAdder * m_adder; }; } // cpp #include "stdafx.h" #include "UnAdderWrapper.h" int UnAdderWrapper::Adder::Sum(int Val1, int Val2) { return m_adder->DoSum(Val1, Val2); }
Native Adder.h, .cpp
// Adder.h #pragma once #define EXPORT_API __declspec(dllexport) class EXPORT_API UnAdder { public: int DoSum(int val1, int val2); }; // cpp // Adder.cpp : Defines the exported functions for the DLL application. // #include "stdafx.h" #include "Adder.h" int UnAdder::DoSum(int val1, int val2) { return val1 + val2; }
All replies
- Loading native assemblies is currently not supported in Azure Functions. Kindly see,
- Edited by Ling Toh Saturday, March 31, 2018 12:21 AM
- Proposed as answer by Micah McKittrickMicrosoft employee, Moderator Thursday, April 12, 2018 9:54 PM
Thanks for your feedback. That issue and related threads, unless I missed it, nor anywhere else in all the docs of Azure Functions do I see a definitive statement that loading unmanaged dlls is not supported. I do see several questions about how to do it and MS techs trying to figure it out. And I see a feature request from 2017. I would suggest adding a clear statement in the sand box restrictions.
So, maybe look at it differently...what exactly does UnAdderWrapper.dll do?
FYI, I vaguely remember a question last year at a talk on something similar and someone from somewhere on some Azure them said "so long as it's managed code, sure". At this point, I would presume unmanaged code is not supported because they only say they support manage code.
If you want to run unmanaged code, a container is what you'd likely need. | https://social.msdn.microsoft.com/Forums/en-US/1eadcf95-e691-4932-a6f9-57c24b1bcc98/azure-function-using-unmanagednative-dll?forum=AzureFunctions | CC-MAIN-2020-10 | refinedweb | 582 | 51.65 |
I'm looking to add certain functionaly to PIL. I want to be doing this in C, with it being computationally expensive. (I want to play with curve adjustments and decomposing to HSV etc ala gimp). What's the best way to do this? Should I uninstall my PIL from my distro, get the source, compile and install, then make edits compile and install, or can I create a separate PIL namespace, like PIL.MyTest, which is compiled and installed separately? Any pointers to get going on this would be greatly appreciated. Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | https://mail.python.org/pipermail/image-sig/2009-February/005423.html | CC-MAIN-2017-04 | refinedweb | 101 | 75.91 |
something like this(note this is a sample to demonstrate the probelm, so try not to get caught up on the pointless-ness of the code):
public class abstract Foo{ public abstract int GetX(); }; public class Foo_A : Foo{ public override int GetX(){ return 1; } }; class class OtherClass<T> where T : Foo{ public void GetTX(){ return T.GetX(); } }basically, the result of GetX doesn't need anything inside of the class, it just provides additional information for my OtherClass to access. One way I know i can make this work is to create an instance of T, in my particular case I have an array of T already, so i just call the method by accessing the first member of the array, but this feels incredibly dirty.
basically i need some type of static abstract class or const, since i'm going to be accessing OtherClass like so:
OtherClass<Foo_A> = new OtherClass<Foo_A>();I feel their should be a way for me to access a static method in Foo_A through the OtherClass.
Edited by slicer4ever, 05 April 2013 - 08:18 PM. | http://www.gamedev.net/topic/641372-c-constrained-generics-and-derived-static-methods/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024 | CC-MAIN-2016-36 | refinedweb | 180 | 53.89 |
Partial Class contact_contact_form Inherits System.Web.UI.Page Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load If Not IsPostBack Then txtEmail.Text = "" txtName.Text = "" txtComment.Text = "" End If lblYear.Text = DatePart(DateInterval.Year, Date.Now) End Sub Protected Sub btnBook_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnBook.Click If Page.IsValid Then Call SendEmail() Call ClearFields() lblInfo.Visible = True lblInfo.Text = "<p style='color:red;'>Your email was sent successfully! Thank you for contacting Vintage Motor Club.</p>" End If End Sub Private Sub ClearFields() txtEmail.Text = "" txtName.Text = "" txtComment.Text = "" End Sub Private Sub SendEmail() Dim strBody As String = "You have received an email from the Vintage Motor Club website contact form. The details are below." & vbCrLf & vbCrLf strBody &= "Date of Contact: " & Date.Today.ToShortDateString & vbCrLf strBody &= "Name: " & txtName.Text.Trim & vbCrLf strBody &= "Email: " & txtEmail.Text.Trim & vbCrLf strBody &= "Question/Comment: " & txtComment.Text.Trim & vbCrLf & vbCrLf 'to Admin Dim msg As New MailMessage With msg .To.Add(New MailAddress("kroberts@keifersdesign.com", "Administrator")) .From = New MailAddress(txtEmail.Text.Trim, txtName.Text.Trim) .Subject = "Contact Form Submission - Vintage Motor Club" .Body = strBody End With Call AsyncMail.SendAsyncEmail(msg) paste the code from "AsyncMail.SendAsyncEmail"
i know nothing about this...so thanks for you help
Open in new window
Return "mail.yourhost.com"
to make sure that server is always used as an outgoing SMTP. Maybe that's the problem here. The "SendAsyncEmail" method calls the "GetMailServer" routine to get the SMTP server to use.
The server you return ("mail.yourhost.com") should be a server that can send emails without requiring quthentication. This may also be the problem. Some SMTP servers will work without authentication only if you the receipient of the message is in the same domain as the mail server, otherwise you have to authenticate to SMTP. Perhaps you can put a different mail server, some that you have access to from the server this code executes on, that doesn't need authentication and can send emails to anyone..
So for instance the site is hosted on their server? In the AsyncMail.vb file can i change all that info to use my server or will thier be some sort of conflict with coding?
So, once you are sure you have the latest code files, find a file that ends with .sln. This is the main solution file, you can open it in Visual Studio and it will load all other files a solution uses. Then you can compile the solution from menu Build -> Rebuild, or Rebuild All. If your configuration was "Debug" then your DLL-s will be in bin/Debug folder. If your configuration was "Release", you will find them in bin/release folder. In general, you should use the "Release" configuration when building for production. You can change the active configuration in project properties.
If you never used these tools before, it may be best to ask someone else to build it and make a test publish first, before leaving it in production. In any case, before any experiments, backup your IIS folders so you can revert in case something isn't right. If your app uses a database back-end + if you are not 100% sure that you have the latest sources, don't play with it, because things can easily get out of hand. Try to contact the company/person that produced, or made the last editions to the code, have them do the change and take notes :) | https://www.experts-exchange.com/questions/26079950/aspx-contact-email-form.html | CC-MAIN-2018-51 | refinedweb | 586 | 68.87 |
Today, everywhere we look, Machine Learning is around us in some form or the other. This subset of Artificial Intelligence has found diverse applications across all parallels of the industry, and rightly so. Even though Machine Learning is an emerging field, it has opened up a slew of possibilities to explore.
Now, the question is, which programming language to use for Machine Learning projects?
Python and C++ are two of the most popular programming languages. Both of these languages boast of an active community, dedicated tool support, an extensive ecosystem of libraries, and commendable runtime performance. However, the focus of today’s post is going to be Machine Learning in C++.
Why C++ for Machine Learning?
It is a well-established fact that Machine Learning requires heavy-duty CPU performance, and this is precisely what C++ guarantees. When it comes to speed and performance, C++ leaves behind Python, Java, and even C#. Another major advantage of using C++ for Machine Learning is that it has pointer support, a feature not available in many of the popular programming languages.
For the successful implementation of Machine Learning in C++, the foremost thing to do is to acquaint yourself with C++ libraries. Thankfully, C++ has some great libraries for Machine Learning, including Shark, MLPack, and GRT (Gesture Recognition Toolkit).
Now, let’s dive into the discussion of Machine Learning libraries in C++.
Machine Learing Libraries in C++
1. Shark
Shark is an open-source, modular library in C++. It is the perfect library for Machine Learning since it has extensive support for supervised learning algorithms like linear regression, k-means, neural networks, and clustering, to name a few.
Shark also includes numerous methods for linear and nonlinear optimization, kernel-based learning algorithms, numerical optimization, and a host of other ML techniques. It is the ideal tool for both research and building real-world applications. Shark has excellent documentation and is compatible with Linux, Windows, and macOS.
How to install Shark?
To install Shark, you have to get the source packages from the official downloads page. After this, you must build the library by writing the following code:
mkdir Shark/build/
cd Shark/build
cmake ../
make
You must know that Shark has two dependencies – Boost and CMake. While on Linux ad Windows, usually ATLAS is used, on macOS, Accelerate is the default linear algebra library. In macOS, you can use MacPorts to obtain the necessary packages, like so:
sudo port install boost cmake
However, under Ubuntu, you have to install the required packages by using the following statement:
sudo apt-get install cmake cmake-curses-gui libatlas-base-dev libboost-all-dev
Here are the steps for installing Shark:
- First, download the source packages from the downloads page and unpack them.
- Launch the CMake GUI
- Select “Where is the source code” to set the path to the unpacked Shark location.
- Select “Where to build the directory” to set the path where you want to store the Visual Studio project files.
- Choose the “Add Entry” option. Now, add an Entry BOOST_ROOT of type PATH and set it to your boost install directory.
- Again, add an Entry BOOST_LIBRARYDIR of type PATH and set it to your boost library directory.
- Finally, choose the apt Visual Studio compiler and double-click on the “Configure” option, followed by the “Generate” option.
2. mlpack
mlpack is a C++ library that is designed explicitly for performance. It promises to offer fast and extensible implementations of pioneering ML algorithms. The unique aspect of this C++ library is that it provides the ML algorithms as simple command-line programs, Python bindings, Julia bindings, and C++ classes, all of which you can integrate into larger-scale ML solutions.
How to install mlpack?
The installation process of MLPack varies from platform to platform.
For Python, you can get the source package through pip or conda, like so:
pip install mlpack
conda install -c conda-forge mlpack
You can refer to the mlpack in Python quickstart guide for more details.
For Julia, you can get the sources via Pkg, as follows:
import Pkg;
Pkg.add(“mlpack”)
For Ubuntu, Debian, Fedora, and Red Hat, you can install mlpack using a package manager. The mlpack command-line quickstart guide is a good place to start. You can also build it from source following the Linux build tutorial.
For Windows, you can download prebuilt binaries – Windows 64 bit – MSI Installer and Windows 64 bit – ZIP. You can also install it using a package manager like vcpkg, or build from source following the Windows build tutorial.
Coming to macOS, you can install the library via homebrew, like so:
brew install mlpack
Read: Highest Paying Machine Learning Jobs
3. GRT (Gesture Recognition Toolkit)
GRT or Gesture Recognition Toolkit is an open-source, cross-platform C++ library. It is specially designed for real-time gesture recognition. It includes a comprehensive C++ API that is further solidified by a neat and easy-to-use GUI (Graphical User Interface).
GRT is not only beginner-friendly, but it is also extremely easy to integrate into existing C++ projects. It is compatible with any sensor/data input, and you can train it with your unique gestures. Furthermore, GRT can adapt to your custom processing or feature extraction algorithms as and when needed.
How to install GRT?
The first thing you must do is to download the GRT package. After this, you must locate the GRT folder in the main gesture-recognition-toolkit folder and add the GRT folder (including all the subfolders) to the desired project.
You can start using the GRT by adding the complete code stored in the GRT folder to your C++ project. In case you use IDEs like VisualStudio or XCode, you can add the GRT folder files to your project following this path – “File -> Add Files to project.” You can also drag the GRT folder (from Finder or Windows Explorer) into the IDE to add all the files from the GRT folder to your project.
Once you’ve added the code contained in the GRT folder to your project, you can use all the GRT functions/classes. All you have to do is add the following two lines of code to the top of the header file in the project where you want to use the GRT code:
#include “GRT/GRT.h”
int main (int argc, const char * argv[])
{
//The main code for your project…
}
In this code, the first line adds the main GRT header file (GRT.h) to the project. The GRT.h file contains all of the GRT module header files, and hence, you don’t have to enter any other GRT header files manually. However, the second line declares that the GRT namespace is being used. This eliminates the need to write GRT:: WhatEverClass each time you want to use a GRT class – you can write WhatEverClass and be done with it.
However, remember that you have to specify the physical path where you stored the GRT folder on your hard drive, based on the IDE you use.
Also Read: Career in Machine Learning
Conclusion
These three C++libraries are perfect to handle almost all your ML needs. The key to mastering Machine Learning in C++ is first to learn these libraries, understand their specialties and functions, and then implement them to specific ML requirements.. | https://www.upgrad.com/blog/machine-learning-libraries-in-cplusplus/ | CC-MAIN-2021-04 | refinedweb | 1,218 | 62.07 |
As a long time fan and user of IFTTT.com we would see the Numerous Channel from time to time and wonder what it did, how it could be used, etc. The Numerous tagline is "Numerous follows the most important numbers in your life and keeps them all up to date, all in one place" and that certainly got our attention.
-----
Numerous runs on the iPhone and the app has a fantastic UI. The developer must be some rare artist/SWE hybrid type or something. The app flows well, takes advantage of new iOS features (TouchID, Apple Watch, etc.) and is easy to personalize. The graphs are clear and it even lets you customize when you get notifications (value to big/small, value change, change percentage exceeded, comments updated) and more. It short; it is a joy to use.
Also, there are dozens of pre-programmed "metrics" that you can follow. Check them out. They are interesting and useful, but could get boring and after a while you will want to roll your own metrics to track more personal data useful for your applications.
-----
That's where this blog post steps in. Numerous is really easy to get your personal data into it. After only a short time we had created "numerous" Numerous App metrics to:
- track temperature/humidity sensors scattered around the house
- track Internet UP/DOWN status
- track security system triggers
- track UP/DOWN status of LAN devices (security cams, Raspbery PIs, Imps, etc.)
- several other miscellaneous things....
------
How to get data into Numerous?
One way is to use the Maker Channel on IFTTT.com as a trigger to the Numerous Channel on IFTTT.com. This requires no coding and is an easy way to pass three variables in one Maker Channel event trigger. If you want to include the Maker Channel trigger URL into a Python or PHP script the applications are endless.
If you don't mind a little extra Python code, the task is almost as easy and offers more power and flexibility. To give credit where credit is due, it is made simple with the Python API App by outofmbufs.
Here's my simple example using Python and a Raspberry PI to get data to the iPhone. The example shows how to create a Numerous metric, write data and comments to it, and view a graph of the data.
-----
Run the Numerous app on the iPhone and create a new 'metric'. See the "+" sign in the upper right. Click that.
-----
Give your metric a name. Don't worry about what you call it, you can change it later. Remember, I told you the app was flexible....
-----
After you create the metric you can customize the way it looks. Change the title and description to suit your needs, add a meaningful background image, input data manually, set privacy and notification setting, etc. There is even a way to set the units. I put the units as "megaFonzies" in the example to denote how cool the Numerous App is.
-----
Making the appearance pretty is fine, but what we want is to feed the metric data from the Raspberry PI via a Python script. For that you need two numbers:
- your personal and secret API key: Get this by selecting "Settings/Developer Info" from the main screen of the Numerous app. Don't share this API key. It will start with "nmrs_" and is shown as "nmrs_xxxxxxxxxxx" in the source code example below.
- the METRIC ID that you are writing to: In the Numerous app click on the tile for the metric you just created and customized. In the upper right you will see the classic "menu bar" icon. Click that then "Developer Info" to find your METRIC ID.
Modify the Python script below with your API Key and METRIC ID values and run it. Your metric will start recording the values that you feed it:-----
Want a graph of the data? Turn the iPhone to landscape mode. You can pinch to zoom in and out for both the X and Y axis! How cool is that? At least 52 megaFonzies!
-----
That's pretty much it. The Python script example below will introduce you to all the basics. It writes the system clock seconds value to the metric, but that could be any value your RasPI can produce or scrape from the web. The Numerous metric in the example is 'Public' that anyone can view, update, and comment on. Feel free to use it for testing. Good luck!
-----
# WhiskeyTangoHotel.com [OCT2015]
# Super easy way to track/graph RasPI data to your mobile device using NumerousApp.
# This simple prog logs the "seconds" value of the localtime value to a Numerous metric.
# The Numerpus mobile app will log, track, and graph the data for you.
# Of course, this value could be a sensor reading, CPU usage, Core Temp, or any other numerical value.
# D/L the NumerousApp and install it on your mobile device.
# iPhone:
# Google Play:
#
# See for full APIs and other really cool applications.
#
# Also check out the Numerous integration to IFTTT.com for more awesomeness.
# Numerous can act as a trigger and receive triggers from most IFTTT.com channels
# Teaming the Maker channel on IFTTT.com with Numerous is VERY powerful!
# 1st, do a one time install from the terminal using "sudo pip install numerous"
# Must have pip installed to do this: "sudo apt-get install python-pip"
# Needed for the following import line.
from numerous import Numerous
import time # Needed to get the system clock settings
MyKey = "nmrs_xxxxxxxxxxx" # Your personal Numerous App API secret key.
MyMetric = "2884060434653363716" # Get this by clicking "Developer Info" from the numerous app
# "2884060434653363716" is set as an open/public metric
# Anyone can write to, so you just enter your personal API key to test.
# Link to load the metric to the app:
# Set up for calls
nr = Numerous(apiKey=MyKey)
metric = nr.metric(MyMetric)
# This will return the Numerous App Label string.
label = metric['label']
# This will return the current value of the metric
Current_value = str((metric.read()))
# Let's get the current seconds value of the system clock, convert that
# to a number, and write that seconds value to the Numerous metric
Timestamp_seconds = int(time.strftime("%S", time.localtime()))
metric.write(Timestamp_seconds)
# Uncomment the next line to add a like (a thumbs up) to the Numerous metric
#metric.like()
# This will add a comment to the Numerous metric
Comment_update = "System timestamp is: " + time.strftime("%a, %d %b %Y %H:%M:%S", time.localtime())
metric.comment(Comment_update)
#Print info to the screen
print " "
print time.strftime("%a, %d %b %Y %H:%M:%S", time.localtime())
print "Metric Name: " + label
print "----------------------"
print "Current value:" + Current_value # returns last current value of the Numerous metric
print "New value: " + str(Timestamp_seconds) # the value written to NumerousApp
----- | http://www.whiskeytangohotel.com/2015/ | CC-MAIN-2019-47 | refinedweb | 1,132 | 65.52 |
{"Failed to set surface for mouse cursor: CreateIconIndirect(): The parameter is incorrect.\r\n"} System.InvalidOperationException
Stack Trace: StackTrace " at Microsoft.Xna.Framework.Input.MouseCursor.PlatformFromTexture2D(Texture2D texture, Int32 originx, Int32 originy)\r\n" string
So I was messing around with Mouse.SetCursor(MouseCursor.FromTexture2D(sprite, 0, 0));
And by messing around I mean setting it several times per second (i.e. every frame). Windows 10, VS 2017.
This crash pops out after maybe half a minute of incessant clicking, but is 100% reproducible under those conditions.
Has anyone seen this before? Google has failed me.
I'd appreciate any pointers on how to prevent this crash.
It seems that I can avoid it by setting mouse cursor sprite less frequently, but that means I can't have an animated mouse cursor. Even then, that's not a guarantee if there's a bug somewhere in the pipeline.
Should I just put this inside a giant try{} block and forget about it?
Thanks for reading!
MouseCursor.FromTexture2D generates a new texture each time. It's possible you're generating too many textures that aren't being disposed based on how often you're calling it and using the textures.
MouseCursor.FromTexture2D
Instead, call FromTexture2D on each sprite you need in your animation and cache the MouseCursors you get back. Then, call Mouse.SetCursor and pass in the cached MouseCursor object when it's time to switch. Here's a quick, untested example:
FromTexture2D
MouseCursors
Mouse.SetCursor
MouseCursor
public class MouseCursorAnim
{
private MouseCursor[] Cursors = null;
public void CacheCursors(Texture2D[] mouseAnim)
{
Cursors = new MouseCursor[mouseAnim.Length];
for (int i = 0; i < Cursors.Length; i++)
{
Cursors[i] = MouseCursor.FromTexture2D(mouseAnim[i], 0, 0);
}
}
public void SetCursor(int frame)
{
Mouse.SetCursor(Cursors[frame]);
}
}
Hey, awesome. Going to stress test the hell out of this and report back.
I tested it, and your solution worked great. Memory is holding steady and as of 5 minutes running, there are no crashes while I'm pumping a new cursor in at every frame. Thank you!
I am missing something about Texture2D. I understand that generating new textures every frame is a bad idea, but I'm curious exactly what's happening under the hood. Is it creating a texture in video card memory or RAM? What should I read to avoid such snafus in the future?
Awesome; I'm glad to hear that did the trick!
To my understanding, creating a Texture2D (and many other graphics resources) via the constructor requires you to dispose the Texture2D when you're finished with it because it's a native resource, so the GC doesn't see it and won't free the memory associated with it. Failing to do so will result in memory leaks.
Texture2D
To help with your issue, I looked at the source for one of the MouseCursor platform implementations. If you'd like to delve more into how it all works, I'd recommend checking out the MonoGame source code since documentation is scarce on this subject. There's quite a lot to digest, but it's nice to know how some of this is working under the hood.
Texture data is stored in video memory only.
I remember shawn hargraves saying that he didn't like from file which is what this is basically because of just the scenario you gave. However FromFile is sort of requisite at times and you need to know a little bit about the dangers of it when using it.
Consider the following scenario you create a texture as you did or using from file or set data.
Texture2D t;
// firstt = MouseCursor.FromTexture2D(mouseAnim[i], 0, 0);
lets say you do this later on again but load in a different texture without calling dispose.
// secondt = MouseCursor.FromTexture2D(mouseAnimB[i], 0, 0);
.then you call dispose..
t.Dispose();
What allocated memory does t now refer to ?t refers to the second texture only now.So.How does the first texture get disposed now ? ...
Answer is ... it doesn't and you can't the reference is now dangling in limbo.
So what and were is the reference to the first texture now ?
This may not be spot on technically but practically this is how it is.
The reference is pinned memory basically and the gc can't reclaim it.The video card thinks you are using it so its not unloaded there either. Worse yet its got the reference to t for first and second memory areas.You no longer have acess to it.The gc does but it wont touch it.
It's essentially a semi dangling reference but since its not accessible and can't be called even by the gc it may not even crash the app or just might do so at a unexpected time or variable time depending on other things going on in the os (Like maybe a Alt Enter or Tab).
Video cards do their own thing the driver has more control over it then the os and (when the card starts to fill up say in the case were you have multiple apps running at once) they can if need be grab the reference they have and clear out memory temporarily then sort of reload it using that reference. Which also means if the second texture you loaded has a different size then the first this could be a app or card crashing event when the reference for the first and the second are the same when they shouldn't be and there could even be a buffer overflow or underflow into a protected memory area which would probably now days just be a app crashing event too.
All this applys to set data as well. It's not so bad if you just load a small texture and forget to dispose it before exit but if you did this a crap load then exit the app and reload it over and over it can add up and you can actually tax the card with no way to clear that memory till you restart your computer.
That's terrifying.
So what's the proper way of disposing of a texture that was created with a "new .FromTexture2D" call as opposed to Content.Load, and if there isn't, why is that in the codebase?
Call Dispose() on the Texture2D when you're finished with it. An example can be found here.
Dispose() | http://community.monogame.net/t/solved-crash-when-setting-a-hardware-mouse-cursor-from-a-sprite-any-ideas/11209/9 | CC-MAIN-2019-13 | refinedweb | 1,069 | 64.71 |
ScalaCheck is a tool for testing Scala and Java programs, based on property specifications and automatic test data generation. The basic idea is that you define a property that specifies the behaviour of a method or some unit of code, and ScalaCheck checks that the property holds. All test data are generated automatically in a random fashion, so you don't have to worry about any missed cases.
Fire up the Scala interpreter, with ScalaCheck in the classpath.
> scala -cp ScalaCheck-1.5.jar
Import the forAll method, which creates universally quantified properties. We will dig into the different property methods later on, but forAll is probably the one you will use the most. Note that it was called property in earlier versions (pre 1.5) of ScalaCheck. property is now a deprecated method, since forAll is a much better name.
scala> import org.scalacheck.Prop.forAll
Define a property.
scala> val propConcatLists = forAll { (l1: List[Int], l2: List[Int]) =>
l1.size + l2.size == (l1 ::: l2).size }
Check the property!
scala> propConcatLists.check
+ OK, passed 100 tests.
OK, that seemed alright. Now define another property.
scala> val propSqrt = forAll { (n: Int) => scala.Math.sqrt(n*n) == n }
Check it!
scala> propSqrt.check
! Falsified after 1 passed tests:
> -1
Not surprisingly, the property doesn't hold. The argument -1 falsifies it.
A property is the testable unit in ScalaCheck, and is represented by the org.scalacheck.Prop class. There are several ways to create properties in ScalaCheck, one of them is to use the org.scalacheck.Prop.forAll method like in the example above. That method creates universally quantified properties directly, but it is also possible to create new properties by combining other properties, or to use any of the specialised methods in the org.scalacheck.Prop object.
Universally quantified properties
As mentioned before, org.scalacheck.Prop.forAll creates universally quantified properties. forAll takes a function as parameter, and creates a property out of it that can be tested with the check method. The function should return Boolean or another property, and can take parameters of any types, as long as there exist implicit Arbitrary instances for the types. By default, ScalaCheck has instances for common types like Int, String, List, etc, but it is also possible to define your own Arbitrary instances. This will be described in a later section.
Here are some examples of properties defined with help of the org.scalacheck.Prop.forAll method.
import org.scalacheck.Prop.forAll
val propReverseList = forAll { l: List[String] => l.reverse.reverse == l }
val propConcatString = forAll { (s1: String, s2: String) =>
(s1 + s2).endsWith(s2)
}
When you run check on the properties, ScalaCheck generates random instances of the function parameters and evaluates the results, reporting any failing cases.
You can also give forAll a specific data generator. See the following example:
import org.scalacheck._
val smallInteger = Gen.choose(0,100)
val propSmallInteger = Prop.forAll(smallInteger)(n => n >= 0 && n <= 100)
smallInteger defines a generator that generates integers between 0 and 100, inclusively. Generators will be described closer in a later section. propSmallInteger simply specifies that each integer generated should be in the correct range. This way of using the forAll method is good to use when you want to control the data generation by specifying exactly which generator that should be used, and not rely on a default generator for the given type.
Conditional Properties
Sometimes, a specification takes the form of an implication. In ScalaCheck, you can use the implication operator ==>:
val propMakeList = forAll { n: Int =>
n >= 0 ==> (List.make(n, "").length == n)
}
Now ScalaCheck will only care for the cases when n is not negative.
If the implication operator is given a condition that is hard or impossible to fulfill, ScalaCheck might not find enough passing test cases to state that the property holds. In the following trivial example, all cases where n is non-zero will be thrown away:
scala> import org.scalacheck.Prop._
scala> val propTrivial = forAll( (n: Int) => (n == 0) ==> (n == 0) )
scala> propTrivial.check
! Gave up after only 4 passed tests. 500 tests were discarded.
It is possible to tell ScalaCheck to try harder when it generates test cases, but generally you should try to refactor your property specification instead of generating more test cases, if you get this scenario.
Using implications, we realise that a property might not just pass or fail, it could also be undecided if the implication condition doesn't get fulfilled. In the section about test execution, the different results of property evaluations will be described in more detail.
Combining Properties
A third way of creating properties, is to combine existing properties into new ones.
val p1 = forAll(...)
val p2 = forAll(...)
val p3 = p1 && p2
val p4 = p1 || p2
val p5 = p1 == p2
val p6 = all(p1, p2) // same as p1 && p2
val p7 = atLeastOne(p1, p2) // same as p1 || p2
Here, p3 will hold if and only if both p1 and p2 hold, p4 will hold if either p1 or p2 holds, and p5 will hold if p1 holds exactly when p2 holds and vice versa.
Grouping properties
Often you want to specify several related properties, perhaps for all methods in a class. ScalaCheck provides a simple way of doing this, through the Properties trait. Look at the following specifications of some of the methods in the java.util.String class:
import org.scalacheck._
object StringSpecification extends Properties("String") {
specify("startsWith", (a: String, b: String) => (a+b).startsWith(a))
specify("endsWith", (a: String, b: String) => (a+b).endsWith(b))
specify("substring", (a: String, b: String) =>
(a+b).substring(a.length) == b
)
specify("substring", (a: String, b: String, c: String) =>
(a+b+c).substring(a.length, a.length+b.length) == b
)
}
The Properties class contains a main method that can be used for simple execution of the property tests. Compile and run the tests in the following way:
$ scalac -cp ScalaCheck-1.5.jar StringSpecification.scala
$ scala -cp ScalaCheck-1.5.jar:. StringSpecification
+ String.startsWith: OK, passed 100 tests.
+ String.endsWith: OK, passed 100 tests.
+ String.substring: OK, passed 100 tests.
+ String.substring: OK, passed 100 tests.
You can also use the check method of the Properties class to check all specified properties, just like for simple Prop instances. In fact, Properties is a subtype of Prop, so you can use it just as if it was a single property. That single property holds if and only if all of the contained properties hold.
There is a Properties.include method you can use if you want to group several different property collections into a single one. You could for example create one property collection for your application that consists of all the property collections of your individual classes:
object MyAppSpecification extends Properties("MyApp") {
include(StringSpecification)
include(...)
include(...)
}
Labeling Properties
Sometimes it can be difficult to decide exactly what is wrong when a property fails, especially if the property is complex, with many conditions. In such cases, you can label the different parts of the property, so ScalaCheck can tell you exactly what part is failing. Look at the following example, where the different conditions of the property have been labeled differently:
val complexProp = forAll { (m: Int, n: Int) =>
val res = myMagicFunction(n, m)
(res >= m) :| "result > #1" &&
(res >= n) :| "result > #2" &&
(res < m + n) :| "result not sum"
}
We can see the label if we define myMagicFunction incorrectly and then check the property:
scala> def myMagicFunction(n: Int, m: Int) = n + m
myMagicFunction: (Int,Int)Int
scala> complexProp.check
! Falsified after 0 passed tests.
> Label of failing property: "result not sum"
> ARG_0: "0"
> ARG_1: "0"
It is also possible to write the label before the conditions like this:
val complexProp = forAll { (m: Int, n: Int) =>
val res = myMagicFunction(n, m)
("result > #1" |: res >= m) &&
("result > #2" |: res >= n) &&
("result not sum" |: res < m + n)
}
The labeling operator can also be used to inspect intermediate values used in the properties, which can be very useful when trying to understand why a property fails. ScalaCheck always presents the generated property arguments (ARG_0, ARG_1, etc), but sometimes you need to quickly see the value of an intermediate calculation. See the following example, which tries to specify multiplication in a somewhat naive way:
val propMul = forAll { (n: Int, m: Int) =>
val res = n*m
("evidence = " + res) |: all(
"div1" |: m != 0 ==> (res / m == n),
"div2" |: n != 0 ==> (res / n == m),
"lt1" |: res > m,
"lt2" |: res > n
)
}
Here we have four different conditions, each with its own label. Instead of using the && operator the conditions are combined in an equivalent way by using the Prop.all method. The implication operators are used to protect us from zero-divisions. A fifth label is added to the combined property to record the result of the multiplication. When we check the property, ScalaCheck tells us the following:
scala> propMul.check
! Falsified after 0 passed tests.
> Labels of failing property:
"lt1"
"evidence = 0"
> ARG_0: "0"
> ARG_1: "0"
As you can see, you can add as many labels as you want to your property, ScalaCheck will present them all if the property fails.
Generators are responsible for generating test data in ScalaCheck, and are represented by the org.scalacheck.Gen class. You need to know how to use this class if you want ScalaCheck to generate data of types that are not supported by default, or if you want to use the forAll method mentioned above, to state properties about a specific subset of a type. In the Gen object, there exists several methods for creating new and modifying existing generators. We will show how to use some of them in this sections. For a more complete reference of what is available, please see the API scaladoc.
A generator can be seen simply as a function that takes some generation parameters, and (maybe) returns a generated value. That is, the type Gen[T] may be thought of as a function of type Gen.Params => Option[T]. However, the Gen class contains additional methods to make it possible to map generators, use them in for-comprehensions and so on. Conceptually, though, you should think of generators simply as functions, and the combinators in the Gen object can be used to create or modify the behaviour of such generator functions.
Lets see how to create a new generator. The best way to do it is to use the generator combinators that exist in the org.scalacheck.Gen module. These can be combined using a for-comprehension. Suppose you need a generator which generates a tuple that contains two random integer values, one of them being at least twice as big as the other. The following definition does this:
val myGen = for {
n <- Gen.choose(10,20)
m <- Gen.choose(2*n, 500)
} yield (n,m)
You can create generators that picks one value out of a selection of values. The following generator generates a vowel:
val vowel = Gen.elements('A', 'E', 'I', 'O', 'U', 'Y')
The elements method creates a generator that randomly picks one of its parameters each time it generates a value. The distribution is uniform, but if you want to control it you can use the elementsFreq combinator:
val vowel = Gen.elementsFreq(
(3, 'A'),
(4, 'E'),
(2, 'I'),
(3, 'O'),
(1, 'U'),
(1, 'Y')
)
Now, the vowel generator will generate E:s more often than Y:s. Roughly, 4/14 of the values generated will be E:s, and 1/14 of them will be Y:s.
Generating Case Classes
It is very simple to generate random instances of case classes in ScalaCheck. Consider the following example where a binary integer tree is generated:
sealed abstract class Tree
case class Node(left: Tree, rigt: Tree, v: Int) extends Tree
case object Leaf extends Tree
import org.scalacheck._
import Gen._
import Arbitrary.arbitrary
val genLeaf = value(Leaf)
val genNode = for {
v <- arbitrary[Int]
left <- genTree
right <- genTree
} yield Node(left, right, v)
def genTree: Gen[Tree] = oneOf(genLeaf, genNode)
We can now generate a sample tree:
scala> genTree.sample
res0: Option[Tree] = Some(Node(Leaf,Node(Node(Node(Node(Node(Node(Leaf,Leaf,-71),Node(Leaf,Leaf,-49),17),Leaf,-20),Leaf,-7),Node(Node(Leaf,Leaf,26),Leaf,-3),49),Leaf,84),-29))
Sized Generators
When ScalaCheck uses a generator to generate a value, it feeds it with some parameters. One of the parameters the generator is given, is a size value, which some generators use to generate their values. For example the Gen.listOf generator generates lists of random length, bounded by the size value. If you want to use the size parameter in your own generator, you can use the Gen.sized method:
def matrix[T](g: Gen[T]): Gen[Seq[Seq[T]]] = Gen.sized { size =>
val side = scala.Math.sqrt(size)
Gen.vectorOf(side, Gen.vectorOf(side, g))
}
The matrix generator will use a given generator and create a matrix which side is based on the generator size parameter. It uses the Gen.vectorOf which creates a sequence of given length filled with values obtained from the given generator.
Conditional Generators
Conditional generators can be defined using Gen.suchThat in the following way:
val smallEvenInteger = Gen.choose(0,200) suchThat (_ % 2 == 0)
Conditional generators works just like conditional properties, in the sense that if the condition is too hard, ScalaCheck might not be able to generate enough values, and it might report a property test as undecided. The smallEvenInteger definition is probably OK, since it will only throw away half of the generated numbers, but one have to be careful when using the suchThat operator.
Generating Containers
There is a special generator, Gen.containerOf, the generates containers such as lists and arrays. You can use it in the following way:
val genIntList = Gen.containerOf[List,Int]
val genStringStream = Gen.containerOf[Stream,String]
val genBoolArray = Gen.containerOf[Array,Boolean]
By default, ScalaCheck supports generation of List, Stream, Set, Array, and ArrayList (from java.util). You can add support for additional containers by adding implicit Buildable instances. See Buildable.scala for examples.
There is also Gen.containerOf1 for generating non-empty containers, and Gen.containerOfN for generating containers of a given size.
The arbitrary Generator
There is a special generator, org.scalacheck.Arbitrary.arbitrary, which generates arbitrary values of any supported type.
val evenInteger = Arbitrary.arbitrary[Int] suchThat (_ % 2 == 0)
val squares = for {
xs <- Arbitrary.arbitrary[List[Int]]
} yield xs.map(x => x*x)
The arbitrary generator is the generator used by ScalaCheck when it generates values for property parameters. Most of the times, you have to supply the type of the value to arbitrary, like above, since Scala often can't infer the type automatically. You can use arbitrary for any type that has an implicit Arbitrary instance. As mentioned earlier, ScalaCheck has default support for common types, but it is also possible to define your own implicit Arbitrary instances for unsupported types. See the following implicit Arbitrary definition for integers, that comes from the ScalaCheck implementation.
implicit def arbInt: Arbitrary[Int] = Arbitrary(Gen.sized (s => Gen.choose(-s,s)))
To get support for your own type T you need to define an implicit method that returns an instance of Arbitrary[T]. Use the factory method Arbitrary(...) to create the Arbitrary instance. This method takes one parameter of type Gen[T] and returns an instance of Arbitrary[T].
Now, lets say you have a custom type Tree[T] that you want to use as a parameter in your properties:
abstract sealed class Tree[T] {
def merge(t: Tree[T]) = Internal(List(this, t))
def size: Int = this match {
case Leaf(_) => 1
case Internal(children) => (children :\ 0) (_.size + _)
}
}
case class Internal[T](children: Seq[Tree[T]]) extends Tree[T]
case class Leaf[T](elem: T) extends Tree[T]
When you specify an implicit generator for your type Tree[T], you also have to assume that there exists a implicit generator for the type T. You do this by specifying an implicit parameter of type Arbitrary[T], so you can use the generator arbitrary[T].
implicit def arbTree[T](implicit a: Arbitrary[T]): Arbitrary[Tree[T]] =
Arbitrary {
val genLeaf = for(e <- Arbitrary.arbitrary[T]) yield Leaf(e)
def genInternal(sz: Int): Gen[Tree[T]] = for {
n <- Gen.choose(sz/3, sz/2)
c <- Gen.vectorOf(n, sizedTree(sz/2))
} yield Internal(c)
def sizedTree(sz: Int) =
if(sz <= 0) genLeaf
else Gen.frequency((1, genLeaf), (3, genInternal(sz)))
Gen.sized(sz => sizedTree(sz))
}
As long as the implicit arbTree function is in scope, you can now write properties like this:
val propMergeTree = forAll( (t1: Tree[Int], t2: Tree[Int]) =>
t1.size + t2.size == t1.merge(t2).size
Collecting Generated Test Data
It is possible to collect statistics about what kind of test data that has been generated during property evaluation. This is useful if you want to inspect the test case distribution, and make sure your property tests all different kinds of cases, not just trivial ones.
For example, you might have a method that operates on lists, and which behaves behaves differently if the list is sorted or not. Then it is crucial to know if ScalaCheck tests the method with both sorted and unsorted lists. Let us first define an ordered method to help us state the property.
def ordered(l: List[Int]) = l == l.sort(_ > _)
Now state the property, using Prop.classify to collect interesting information on the generated data. The property itself is not very exciting in this example, we just state that a double reverse should return the original list.
import org.scalacheck.Prop._
val myProp = forAll { l: List[Int] =>
classify(ordered(l), "ordered") {
classify(l.length > 5, "large", "small") {
l.reverse.reverse == l
}
}
}
Check the property, and watch the statistics printed by ScalaCheck:
scala> myProp.check
+ OK, passed 100 tests.
> Collected test data:
78% large
16% small, ordered
6% small
Here ScalaCheck tells us that the property hasn't been tested with any large and ordered list (which is no surprise, since the lists are randomised). Maybe we need to use a special generator that generates also large ordered lists, if that is important for testing our method thoroughly. In this particular case it doesn't matter, since the implementation of reverse probably doesn't care about wether the list is sorted or not.
We can also collect data directly, using the Prop.collect method. In this dummy property we just want to see if ScalaCheck distributes the generated data evenly:
val dummyProp = forAll(Gen.choose(1,10)) { n =>
collect(n) {
n == n
}
}
scala> dummyProp.check
+ OK, passed 100 tests.
> Collected test data:
13% 7
13% 5
12% 1
12% 6
11% 2
9% 9
9% 3
8% 10
7% 8
6% 4
As we can see, the frequency for each number is around 10%, which seems reasonable.
As we've seen, we can test or properties or property collections by using the check method. In fact, the check method is just a convenient way of running org.scalacheck.Test.check (or Test.checkProperties, for property collections).
The Test module is responsible for all test execution in ScalaCheck. It will generate the arguments and evaluate the properties, repeatedly with larger and larger test data (by increasing the size parameter used by the generators). If it doesn't manage to find a failing test case after a certain number of tests, it reports a property as passed.
There are several overloaded versions of the Test.check and Test.checkProperties methods. If it is used without any arguments besides the property itself, ScalaCheck will use a default set of testing parameters, and print the results to the console. The following versions of the check method will not print anything to the console, however:
def check(prms: Test.Params, p: Prop): Test.Result
def check(prms: Test.Params, p: Prop, propCallback: Test.PropEvalCallback): Test.Result
Test.Params is a class that encapsulates testing parameters such as the number of times a property should be tested, the size bounds of the test data, and how many times ScalaCheck should try if it fails to generate arguments. Test.PropEvalCallback is a callback function that can be used if one wants to display one's own console or GUI test report.
All versions of check returns an instance of Test.Result which encapsulates the result and some statistics of the property test. Test.Result.status is of the type Test.Status and can have the following values:
/** ScalaCheck found enough cases for which the property holds, so the
* property is considered correct. (It is not proved correct, though). */
case object Passed extends Status
/** ScalaCheck managed to prove the property correct */
sealed case class Proved(args: List[Arg]) extends Status
/** The property was proved wrong with the given concrete arguments. */
sealed case class Failed(args: List[Arg], label: String) extends Status
/** The property test was exhausted, it wasn't possible to generate enough
* concrete arguments satisfying the preconditions to get enough passing
* property evaluations. */
case object Exhausted extends Status
/** An exception was raised when trying to evaluate the property with the
* given concrete arguments. */
sealed case class PropException(args: List[Arg], e: Throwable, label: String) extends Status
/** An exception was raised when trying to generate concrete arguments
* for evaluating the property. */
sealed case class GenException(e: Throwable) extends Status
The checkProperties returns test statistics for each property in the tested property collection, as a list. See the API documentation for more details.
One interesting feature of ScalaCheck is that if it finds an argument that falsifies a property, it tries to minimise that argument before it is reported. This is done automatically when you use the Prop.property and Prop.forAll methods to create properties, but not if you use Prop.forAllNoShrink. Let's look at the difference between these methods, by specifying a property that says that no list has duplicate elements in it. This is of course not true, but we want to see the test case minimisation in action!
import org.scalacheck.Arbitrary.arbitrary
import org.scalacheck.Prop.{forAll, forAllNoShrink}
val p1 = forAllNoShrink(arbitrary[List[Int]])(l => l == l.removeDuplicates)
val p2 = forAll(arbitrary[List[Int]])(l => l == l.removeDuplicates)
val p3 = forAll( (l: List[Int]) => l == l.removeDuplicates )
Now, run the tests:
scala> p1.check
! Falsified after 11 passed tests:
> ARG_0 = "List(8, 0, -1, -3, -8, 8, 2, -10, 9, 1, -8)"
scala> p2.check
! Falsified after 4 passed tests:
> ARG_0 = "List(-1, -1)" (2 shrinks)
scala> p3.check
! Falsified after 7 passed tests:
> ARG_0 = "List(-5, -5)" (3 shrinks)
In all cases, ScalaCheck found a list with duplicate elements that falsified the property. However, in the two second cases the list was shrunk into a list with just two identical elements in it, which is the minimal failing test case for the given property. Clearly, it's much easier to find a bug if you are given a simple test case that causes the failure.
Just as you can define implicit Arbitrary generators for your own types, you can also define default shrinking methods. This is done by defining an implicit method that returns a Shrink[T] instance. This is done by using the Shrink(...) factory method, which as its only parameter takes a function and returns an instance of Shrink[T]. In turn, the function should take a value of the given type T, and return a Stream of shrank variants of the given value. As an example, look at the implicit Shrink instance for a tuple as it is defined in ScalaCheck:
/** Shrink instance of 2-tuple */
implicit def shrinkTuple2[T1,T2](implicit s1: Shrink[T1], s2: Shrink[T2]
): Shrink[(T1,T2)] = Shrink { case (t1,t2) =>
(for(x1 <- shrink(t1)) yield (x1, t2)) append
(for(x2 <- shrink(t2)) yield (t1, x2))
}
When implementing a shrinking method, one has to be careful to only returned smaller variants of the value, since the shrinking algorithm otherwise could loop. ScalaCheck has implicit shrinking methods commons types such as integers and lists.
We have described how ScalaCheck can be used to state properties about isolated parts - units - of your code (usually methods), and how such properties can be tested in an automated fashion. However, sometimes you want to not only specify how a method should behave on its own, but also how a collection of methods should behave together, when used as an interface to a larger system. You want to specify how the methods - or commands - affect the system's state throughout time.
An example could be to specify the workflow of an ATM. You'd want to state requirements such as that the user has to enter the correct PIN code before any money could be withdrawn, or that entering an erronous PIN code three times would make the machine confiscate the credit card.
Formalising such command sequences using ScalaCheck's property combinators is a bit tricky. Instead, there is a small library in org.scalacheck.Commands for modelling commands and specifying conditions about them, which can then be used just as ordinary ScalaCheck properties, and tested with the org.scalacheck.Test module.
Let us now assume we want to test the following trivial counter class:
class Counter {
private var n = 0
def inc = n += 1
def dec = n -= 1
def get = n
def reset = n = 0
}
We specify the counter's commands by extending the org.scalacheck.Commands trait. See the comments in the code below for explanations on how Commands should be used:
object CounterSpecification extends Commands {
// This is our system under test. All commands run against this instance,
// and all postconditions are checked on it.
val counter = new Counter
// This is our state type that encodes the abstract state. The abstract state
// should model all the features we need from the real state, the system
// under test. We should leave out all details that aren't needed for
// specifying our pre- and postconditions. The state type must be called
// State and be immutable.
case class State(n: Int)
// initialState should reset the system under test to a well defined
// initial state, and return the abstract version of that state.
def initialState() = {
counter.reset
State(counter.get)
}
// We define our commands as subtypes of the traits Command or SetCommand.
// Each command must have a run method and a method that returns the new
// abstract state, as it should look after the command has been run.
// A command can also define a precondition that states how the current
// abstract state must look if the command should be allowed to run.
// Finally, we can also define a postcondition which verifies that the
// system under test is in a correct state after the command exectution.
case object Inc extends Command {
def run(s: State) = counter.inc
def nextState(s: State) = State(s.n + 1)
// if we want to define a precondition, we assign a function that
// takes the current abstract state as parameter and returns a boolean
// that says if the precondition is fulfilled or not. In this case, we
// have no precondition so we just let the function return true. We
// could also have skipped defining a precondition at all.
preCondition = s => true
// when we define a postcondition, we assign a function that
// takes two parameters, s and r. s is the abstract state before
// the command was run, and r is the result from the command's run
// method. The postcondition function should return a Boolean (or
// a Prop instance) that says if the condition holds or not.
postCondition = (s,r) => counter.get == s.n + 1
}
case object Dec extends Command {
def run(s: State) = counter.dec
def nextState(s: State) = State(s.n - 1)
postCondition = (s,r) => counter.get == s.n - 1
}
// This is our command generator. Given an abstract state, the generator
// should return a command that is allowed to run in that state. Note that
// it is still neccessary to define preconditions on the commands if there
// are any. The generator is just giving a hint of which commands that are
// suitable for a given state, the preconditions will still be checked before
// a command runs. Sometimes you maybe want to adjust the distribution of
// your command generator according to the state, or do other calculations
// based on the state.
def genCommand(s: State): Gen[Command] = Gen.elements(Inc, Dec)
}
Now we can test our Counter implementation. The Commands trait extends the Prop type, so we can use CounterSpecification just like a simple property.
scala> CounterSpecification.check
+ OK, passed 100 tests.
OK, our implementation seems to work. But let us introduce a bug:
class Counter(private var n: Int) {
def inc = n += 1
def dec = if(n > 10) n -= 2 else n -= 1 // Bug!
def get = n
def reset = n = 0
}
Lets test it again:
scala> CounterSpecification.check
! Falsified after 25 passed tests:
> COMMANDS = "Inc, Inc, Inc, Inc, Inc, Inc, Inc, Inc, Inc, Inc, Inc, Dec" (5 shrinks)
ScalaCheck found a failing command sequence (after testing 25 good ones), and then shrank it down. The resulting command sequence is indeed the minimal failing one! There is no other less complex command sequence that could have discovered the bug. This is a very powerful feature when testing complicated command sequences, where bugs may occur after a very specific sequence of commands that is hard to come up with when doing manual tests. | http://code.google.com/p/scalacheck/wiki/UserGuide | crawl-002 | refinedweb | 4,880 | 55.95 |
Currently when I want to wipe a USB disk with pseudorandom data in Linux I do the following:
dd if=/dev/urandom of=/dev/sdb conv=notrunc
urandom is very, very slow, it gets to the point where the bottleneck is not the device.
urandom
I know of another method -- the Mersenne twister. This is used in one instance by DBAN as a PRNG to securely erase data with, and it is easily 'random' enough for wiping drives -- and it is very fast. However, I'm not sure how I would use it in Linux. Is there a Mersenne twister program which I can then pipe into dd to wipe drives with?
The wipe utility uses a Mersenne Twister PRNG for the random passes.
wipe
Why are you using DD to wipe drives? shred is designed specifically to do that and is common to all modern distros.
shred
The Mersenne Twister is not cryptographically secure. After observing 624 outputs from the algorithm, it is possible to predict all past and future outputs. I suppose it's better than having all 0's and 1's in the sense that it will better mask the underlying magnetic signature, but that's less effective since your adversary will know the exact pattern that was written.
I'm no security expert, but I suppose my answer would be do NOT use the Mersenne Twister for this task due to that reason. But then again, it's all kind of a moot point, since writing the whole drive with anything will render its previous contents unrecoverable with current technology.
mersenne twister has a ginormous periodicity that's very hard to even get a value for on a machine without using pari/GP. my understanding is that mt19937 (nmersenne twister) from #include <random> in c++ is more like and
#include <random>
w=32 n=624 which states the periodicity (using ttcalc) is (2^((n-1)*w)-1)/32=(2^((624-1)*32)-1)/32=big huge huge number - cpu internals can't handle that size of a number without truncating it to some ridiculously small number. so let's find out the number of digits for that periodicity: ceil(log(abs((2^((624-1)*32)-1)/32)+1;10)/log(10;10)) = 6000 digits worth of number. even ttcalc only goes up to 99 digits. togiveyou an idea, 2^64=18,446,744,073,709,551,616 and is calculable with cpu instructions, but not even the IEEE754 FPU in your processor can handle this size of number. this is where you get into bignum math and number-theory calculators like pari/GP that can handle stuff to 10,000 digits easy - and print it too.
my understanding of /dev/urandom and /dev/random is that they are very poor for long runs like wiping disks and should not be used for steady streams of random numbers only for single-number-acquires like passwords and other crypto, etc.
for better performance on a disk wiping program, maybe nwipe or writing your own c++ utility would be a good idea that uses mt19937 (it's slow, but effective). you can truncate the 32 bits to just 8 or double your bandwidth by using 16 bits worth the 64-bit mt19937 should give you 32 bits worth (same - should write a bug report about the implementation problem in gcc). the upper bit is useless, I guess they figured everybody was going to use a signed number and never use negative numbers. the <random> c++ template library should accommodate both signed and unsigned data types, not just signed.
<random>
best thing would be to write a c++ program. I think wipe as an idea is flawed based on and under Usage where it says "It is designed for security, not speed, and is poorly suited to generating large amounts of random data."
if the only thing you were worrying about was periodicity (distance between repeated patterns), I don't think this is a problem. cryptography is not one of my specialties, but
my understanding of the NIST wipe is you should do something like 8 or 15 passes of MT, to get the magnetic ghost images down to a minimum. the more the merrier of course, but probably with diminishing returns the more you do.
not sure what this kind of stress does to the drive. be carefulwhat usb dock you use, startech SATDOCKU3SEF was the only one I found with a nice and usable fan on it (crank up to max) should this be necessary.
also, I like parallelizing jobs to save time. just stickan & on the end of the command and do as many jobs as you have channels. it willuse 1 thread per job. if your server has 120 threads(4cpux30T), then you can have up to 120-1 jobs, leaving 1 for the system. tocheckon the jobs, use the jobs command,and ITHINK towait forthejobs todoashutdown orwhatever use the wait command. like wait ; shutdown -r 0
wait ; shutdown -r 0
I am still trying to figure out how to make use of the foreach command with `ls /dev/disk/by-id there must be a bug in BASH because towipe everything from a livecd I should be able to wipe all disks with
foreach
I am developing a web page at
By posting your answer, you agree to the privacy policy and terms of service.
asked
5 years ago
viewed
816 times
active
2 years ago | http://superuser.com/questions/238880/using-mersenne-twister-to-wipe-disks | CC-MAIN-2016-22 | refinedweb | 912 | 65.86 |
Update my RTC value
I want to update the RTC of my Lopy. Reading the docs seems that I just have to write the following code:
from machine import RTC rtc = RTC() rtc.ntp_sync("es.pool.ntp.org") # I'm from Spain print(rtc.now(),'\n')
But the output is:
(1970, 1, 1, 0, 0, 0, 817, None).
Then I thought, ok, I need to be connected to internet, thus I write the following code:
from machine import RTC, idle from network import WLAN wlan = WLAN(mode=WLAN.STA) wlan.connect(ssid='vodafone55s6654', auth=(WLAN.WPA2, 'uh5a55ss8d4wed55')) while not wlan.isconnected(): idle() print("WiFi connected succesfully") rtc = RTC() rtc.ntp_sync("es.pool.ntp.org") print(rtc.now(),'\n')
But I have the same output,
(1970, 1, 1, 0, 0, 0, 817, None), yet.
If I put
print(rtc.synced())returns
Falseconfirming that it could not sync. I tried to init it with a random date but
it is useless.
(With
rtc.ntp_sync("pool.ntp.org")or
"0.europe.pool.ntp.org"instead of
rtc.ntp_sync("es.pool.ntp.org")the outputs are the same)
Thank you for all!
- Gijs Global Moderator last edited by
Hi,
It might take a couple of seconds / minutes before the device is synced to the ntp time, depending on the availability and how often you already tried (I think they have some sort of fair use policy). You might want to use a loop like this:
while not rtc.synced(): time.sleep(1)
Which will continue after syncing | https://forum.pycom.io/topic/6996/update-my-rtc-value/1?lang=en-US | CC-MAIN-2021-25 | refinedweb | 253 | 66.94 |
9669/using-salt-module-tomcat-for-deploying-war-file
tomcat.deploy_war is an execution module, not a state module. In general, execution modules like tomcat.deploy_war are always named imperatively.
You cannot use execution modules in states directly; instead, they are intended to be used in ad-hoc Salt commands.
On the other hand state modules are intended to be used in states and are named declaratively (by the desired end state). In many cases, an execution module has a corresponding state module -- in your case tomcat.deploy_war and tomcat.war_deployed
Yes and no, It's a bit complex ...READ MORE
Docker needs root access therefore maven commands ...READ MORE
To do git commit:
def getGitCommit() {
...READ MORE
I think your executor node cannot connect .. make a dev.eslintrc or similar ...READ MORE
Using passwords on instances is an absolute ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/9669/using-salt-module-tomcat-for-deploying-war-file | CC-MAIN-2019-47 | refinedweb | 149 | 53.98 |
What I'm trying to do is write a program that creates the HTML tags for an N by M table. Basically I prompt the user for the number of rows N and the number of columns M. Then the program prompts the user for the N times M items that will go into the table, basically a string, or number.
Then the program writes out the HTML for a table that contains the items. I figured what I'll do I'll have any array for the max number of
rows and columns. Then after the user enters the number of rows and columns, a while loop would prompt the user to enter something for that row + column, until they're all filled.
That was my original intention, but all I keep getting is just a whole bunch of words the repeats itself like an never ending loop. And I'm
at a complete loss trying to figure this out.
Code:
#include <stdio.h>
/**
*/
int
main( void )
{
int rows;
int columns;
char string;
int MaxRows[10];
int MaxColumns[10];
printf( "Enter the number of rows: " );
scanf( "%d" , &rows );
printf( "Enter the number of columns: " );
scanf( "%d" , &columns );
while( rows < MaxRows && columns < MaxColumns)
printf( "Enter the text that will go into this row " );
printf( "your table: %d" , rows , columns );
}
If anybody can get me on the right track, or at tell me where I'm royaly screwing up, it would sure be appreciated. | http://cboard.cprogramming.com/c-programming/71593-newbie-needs-help-creating-html-table-printable-thread.html | CC-MAIN-2016-44 | refinedweb | 243 | 73.1 |
- Cookie policy
- Advertise with us
© Future Publishing Limited, Quay House, The Ambury, Bath BA1 1UA. All rights reserved. England and Wales company registration number 2008885....
You see, a lot of our content is drawn from the archives of Linux Format magazine, a magazine that is produced using Mac OS X, Adobe InDesign and a dozen other proprietary tools. No, it's not ideal, but sadly we have little say in the matter. Besides, you don't think the folks behind Cross Stitcher magazine stitch their magazine together, do you?
Anyway, the problem with converting articles from the magazine world to the internet world is the need to replace special symbols with more simple equivalents. Magazines, for example, make extensive use of curly quotes, em- and en-dashes, ellipses (...), non-breaking spaces and more. And while these do have HTML equivalents, they are rarely used, so we strip all these out and replace them with simple ASCII characters.
Once that's done, we need to convert articles to HTML, and that's what takes the real time. Or at least it did - until we started using Snippets. You see, Snippets allows you to insert pre-defined blocks of text either by using a keystroke or tab completion. That's not an uncommon feature in a text editor, admittedly, but Snippets makes it particularly easy to create more advanced placeholders for text replacement, even allowing output from the shell and Python scripts.
Before we get started with some example snippets, you need to make sure you have the Snippets plugin enabled. Fire up Gedit and go to Edit > Preferences. From the Plugins tab, scroll down and make sure Snippets is checked. Once that's done you should see the Configure Plugin button become active - that's where we'll be doing most of our work.
Before you can start using Snippets you need to make sure it's enabled.
Copy and paste this into your Gedit window:
He thrusts his fists Against the posts And still insists He sees the ghosts.
That gives you four simple lines of text. Save that file as test.html - it doesn't really matter where. But what does matter is that Gedit now knows your file is HTML, which means Snippets knows too and will activate its list of HTML snippets. For example, if you select the whole first line and press Shift+Alt+W, it will automatically be wrapped in a <p> tag, like this:
<p>He thrusts his fists</p>
What's more, Gedit will automatically leave the first "p" highlighted so that you can replace it just by typing, and any changes you make will be copied across to the </p> tag so that it matches. To see how this works, go back to the Configure Plugin window for Snippets (you'll also find it under Tools > Manage Snippets) then look inside the HTML category for the snippet called "Wrap Selection in Open/Close Tag".
When you select that snippet, the code behind it will appear in the top-right of the window, and should look like this:
<${1:p}>$GEDIT_SELECTED_TEXT</${1}>
$1 is a placeholder, which means it's something you want to type into after the snippet has been inserted. In this case, there's only one placeholder, $1, but it appears twice. Because the same placeholder number has been used twice - known as a "mirror placeholder" - Gedit will ensure that whatever you type in the first $1 will be synchronised with the second $1, which is how you can change <p> to <h1> and it will be changed in the closing tag too.
The ":p" part of the snippet sets a default value, in this case the HTML paragraph tag, "p". If you leave off the default value, Gedit will just leave a blank spot for you to type.
Finally, the most important part: $GEDIT_SELECTED_TEXT. This is a special value inside snippets that will be automatically replaced with the contents of whatever text you had selected before the snippet was used. You'll be using this a lot!
That basic snippet had just one placeholder, which meant that it prompted you to edit only one piece of text. A slightly more complicated snippet is called "Wrap Selection as Link", and has two placeholders. To test it out, select the second of your text lines and press Shift+Alt+L. You should see this:
<a href="">Against the posts</a>
After the snippet has run, Gedit will leave the insertion cursor over "" with that text highlighted so you can replace it by typing. Type in there and then - this is the important bit - press Tab. When you do that, Gedit will move the caret to "Against the posts" and select it just as it had selected "". That's because this snippet has two placeholders defined: "$1:" and "$2:GEDIT_SELECTED_TEXT".
You can have as many of these placeholders as you need, and any number of them can have default values specified - just press Tab to move between them.
We're going to create a new snippet from scratch to show off some of the more advanced features of Snippets. Go to Tools > Manage Snippets to bring up the Snippets Manager, then scroll up to the top of the category list to where it says "Global".
The Snippets Manager lets you create and edit your own snippets across a number of different languages.
Select that, then click the Create New Snippet button (it's just above the Help button at the bottom of the window).
Call your new snippet "Insert file" and put the following inside it:
$(1:cat $GEDIT_SELECTED_TEXT)
Notice how that uses standard parentheses (ie "(" and ")") rather than braces ("{" and "}") - that's because the parts inside the parentheses will be sent to a shell, executed, then be replaced with whatever the shell command prints out.
What that code example does is run "cat" on a filename, with the filename being specified as $GEDIT_SELECTED_TEXT so all you have to do is type a filename into Gedit, select it, then run the snippet. Underneath the text area for entering snippets are some text boxes marked "Activation". Click inside the Shortcut Key box then press any shortcut you want to assign to this snippet and it will become active immediately.
You can run any shell commands you like, as long as they return their output back using stdout. For example, the "bc" mixed with Gedit Snippets means we can create a simple calculator. In the Snippets Manager, create a new snippet under the Global category called Calculator. Give it this text:
$(1:echo $GEDIT_SELECTED_TEXT | bc)
Give it the shortcut Ctrl+Shift+C, then click Close and type this into your Gedit file:
5 + 10 / 2 * 9
Select it all, then press Ctrl+Shift+C to see your text replaced with the answer to the calculation: 50.
Gedit detects the end of the shell command by looking for a closing parenthesis (")"), so if your command includes one you need to put a backslash before it, like this:
$(1:echo "This is a (test\)" > file.txt)
That will save the text "This is a test" to file.txt. As you can see, there is no need to escape the opening parenthesis ("(").
Here's where Snippets really starts to prove its worth: you can run Python code wherever and whenever you want to. Let's start with a simplified example of a snippet we use here on TuxRadar to kill off curly quotes:
$< return $GEDIT_SELECTED_TEXT.replace('“', '"') >
That is designed to convert this:
“Hello, world!”
...into this:
"Hello, world!"
The "$<" part signals the beginning of your Python code, and it ends with the corresponding ">" at the end of the snippet. In between is a call to the replace() method of a Python string - in this case, $GEDIT_SELECTED_TEXT - which replaces a curly quote with a simple ASCII one.
When you've finished making all your changes to the input, you hand any output back to Gedit using Python's "return" statement.
That initial snippet fixes the opening curly quote, but not the closing curly quote, which is easily fixed simply by making the snippet span more than one line, like this:
$< output = $GEDIT_SELECTED_TEXT.replace('“', '"') output = output.replace('”', '"') return output >
If you're obsessed with brevity, you could write that as a one-liner using something like this:
$<return $GEDIT_SELECTED_TEXT.replace('“', '"').replace('”', '"') >
As with shell placeholders, you need to be sure to escape the end symbol if needed, which in the case of Python placeholders is >. To give you a working example of this, consider our original text:
He thrusts his fists Against the posts And still insists He sees the ghosts.If you wanted to put <p> tags around each of those lines, you could select each line individually and use Shift+Alt+W. But with Python we can select the whole paragraph, split it by new lines, then add HTML tags for each line, like this:
$< lines = $GEDIT_SELECTED_TEXT.split("\n"); output = ""; for line in lines: output += "<p\>" + line + "</p\>\n"; return output >
What that does is split up the selected text by line breaks, then loop over each line and add it along with the <p> start and end tags to the output. However, because <p> and </p> both contain the > symbol, Gedit will think that means the end of your Python snippet. So, the solution is to put a \ before the > so that Gedit ignores it.
Note that we've used "\n" to add line breaks of our own into the output to keep things neat.
Copy and paste this into your Gedit document:
a apples b bananas c cherries d dates
That's a list of items, one per line, where the item on the left is separated from the item on the right by a tab character. This is a fairly common way to work with tables in plain-text, but with a bit of Gedit + Snippets + Python magic we can whip that up into a real HTML table.
What we need to do is simple:
...while remembering that any time we want to print > we need to put a \ before it. Here's how to do that in Python:
$< lines = $GEDIT_SELECTED_TEXT.split("\n"); output = '<table\>\n'; for line in lines: output += '<tr\>'; columns = line.split("\t"); for item in columns: output += '<td\>' + item + '</td\> ' output += '</tr\>\n'; output += '</table\>'; return output >
If you assign that to a shortcut, then select the four lines from before and run the snippet, you'll get this:
<table> <tr><td>a</td> <td>apples</td> </tr> <tr><td>b</td> <td>bananas</td> </tr> <tr><td>c</td> <td>cherries</td> </tr> <tr><td>d</td> <td>dates</td> </tr> </table>
If you wanted to be a bit more fancy and include some zebra stripes for your table rows, just drop in a counter variable and use modulo (%) to check whether it's odd or even, like this:
$< lines = $GEDIT_SELECTED_TEXT.split("\n") output = '<table\>\n' counter = 0 for line in lines: if counter % 2 == 0: output += '<tr class="even"\>' else: output += '<tr class="odd"\>' columns = line.split("\t") for item in columns: output += '<td\>' + item + '</td\> ' output += '</tr\>\n' counter += 1 output += '</table\>' return output >
There's a huge amount you can do with Snippets, particularly when you start taking advantage of shell and Python placeholders. But before we let you run free, here are three last pointers:
Have fun - and make sure you take a look through some of the awesome snippets that are included by default!
You should follow us on Identi.ca or Twitter
Your comments
Thank you
Anonymous Penguin (not verified) - March 29, 2010 @ 12:04am
Thanks so much for this post, I was wondering how to create a simple feature for converting HTML entities, thought I was going to have to write a plugin, this is much much easier!
One question though: My snippets will not work when placed in the "Global" section. They only work when I place them in the category appropriate to the document type. Any idea why?
Jason
Thanks a lot
marenostrum (not verified) - June 15, 2010 @ 9:14am
That's a very usefull article for me.
Great article.
Jeff in Calgary (not verified) - October 7, 2010 @ 8:15am
I find this so useful. Sometimes the smallest things save so much time and effort.
I love gedit!
Initially when I got on ubuntu I thought; where is a good editor and how slow was I overlooking gedit in favor of all the recommended ones from others.
Those people do not understand how great gedit is!
Thank you, cheers.. I owe you a beer
Jeff in Calgary
Thanx, I used this to make a
Anonymous Penguin (not verified) - October 21, 2010 @ 4:26pm
Thanx, I used this to make a comment snippet, // comment before each selected line. Now figuring out if I can make one to uncommnent also.
$<
lines = $GEDIT_SELECTED_TEXT.split("\n");
output = "";
for line in lines:
output += "//" + line + "\n";
return output
>
adapted your paragraph python snippet
techjacker (not verified) - December 1, 2010 @ 9:35pm
thanks for posting the great python paragraph snippet
I was getting annoyed with all the empty paragraph tags it was creating for blank lines though so I adapted it to remove those by adding this line:
new_output = output.replace("<p\></p\>", "");
Not the most elegant but then again I don't claim to know python! Full code below:
$<
lines = $GEDIT_SELECTED_TEXT.split("\n");
output = "";
for line in lines:
output += "<p\>" + line + "</p\>\n";
new_output = output.replace("<p\></p\>", "");
return new_output
>
snippet for removing in-paragraph line breaks
Sander Evers (not verified) - February 2, 2011 @ 12:33pm
$<
# Reverse word wrap: Replace single newline characters between lines by spaces.
import re
r = re.compile('(?<!\n)\n(?=.)')
return r.sub(' ',$GEDIT_SELECTED_TEXT)
>
thank
tux.think (not verified) - June 13, 2012 @ 12:49pm
hi thanks for the information, was very useful
Thanks
Raj kiran (not verified) - July 20, 2012 @ 7:39am
Thanks.
Thank You!
rnx (not verified) - October 7, 2012 @ 7:49am
Thank you very much indeed! This was very helpful. Very detailed and very clear.
Thanks! I love gedit
tuxaxut (not verified) - November 5, 2012 @ 1:51pm
Thanks for the quick info. Gedit is great for coding!
I think other site
Veronika.S (not verified) - November 9, 2012 @ 10:28am
I think other site proprietors should take tuxradar.com as an model, very clean and excellent user friendly style and design, let alone the content. You are an expert in this topic! Great!
Thanks for your thoughts.
Suk.T (not verified) - November 10, 2012 @ 2:05pm
Thanks for your thoughts. One thing I have noticed is that banks plus financial institutions know the dimensions and spending practices of consumers and also understand that many people max outside their real credit cards around the getaways. They correctly take advantage of this specific fact and then start flooding ones inbox and snail-mail box along with hundreds of 0 APR credit cards offers shortly after the holiday season closes. Knowing that if you're like 98% of all American general public, you'll leap at the possible opportunity to consolidate credit debt and switch balances for 0 annual percentage rates credit cards. Wonderful!
Currently it appears like
Delma.T (not verified) - November 10, 2012 @ 4:31pm
Currently it appears like Expression Engine is the preferred phorumging platform available right now. (from what I've read) Is that what you're using on your phorum? Great!
What do you wish to tell it?
airmaxs68134 (not verified) - May 6, 2013 @ 1:20pm.
Extremely useful
Pádraic (not verified) - August 27, 2013 @ 3:27pm
Brilliant, many thanks indeed for this! | http://www.tuxradar.com/content/save-time-gedit-snippets | CC-MAIN-2017-26 | refinedweb | 2,604 | 68.81 |
From Fedora Project Wiki
Description
This test case tests that the thermostat shell works correctly. This command provides a command line shell client for interacting with thermostat.
Setup
- Boot into the machine/VM you wish to test.
- If thermostat is not installed yet, install thermostat.
- Clear storage data:
rm -rf ~/.thermostat/data/db/*
- Start the thermostat storage:
thermostat storage --start
How to test
- Start the thermostat shell:
thermostat shell
- At the "Thermostat >" prompt type "help". This should show the list of all available commands. Feel free to use any of these after step 4 (otherwise these would be pretty boring ;-))
- Pressing the cursor up key should bring up the history. In this case "help". Other known shell keyboard shortcuts should work too: e.g. CTRL+L
- Next type "list-vms". You should be asked to provide username and password if you are using thermostat for the first time. In both cases typing return (i.e. empty string) should be sufficient.
- In another terminal, start a thermostat agent:
thermostat agent
- At the "Thermostat >" prompt type "list-vms" again.
- connect -d
- disconnect
- connect -d mongodb://127.0.0.1:27518
- Press CTRL+D
Expected Results
- After step 1, your terminal should look similar to the following (version should be 1.0.0 unlike the picture):
- After step 2 available commands should be:
help show help for a given command or help overview clean-data Drop all data related to all of the specified agents connect persistently connect to storage disconnect disconnect from the currently used storage dump-heap trigger a heap dump on the VM find-objects finds objects in a heapdump find-root finds the shortest path from an object to a GC root list-heap-dumps list all heap dumps list-vms lists all currently monitored VMs object-info prints information about an object in a heap dump ping using the Command Channel, send a ping to a running agent save-heap-dump-to-file saves a heap dump to a local file show-heap-histogram show the heap histogram validate validates a thermostat plug-in XML file against the schema vm-info shows basic information about a VM vm-stat show various statistics about a VM
- After step 3 the output should look like:
HOST_ID HOST VM_ID STATUS VM_NAME
- At step 5 the output of list-vms should show some JVMs and should not prompt for username/password.
- At step 6 step thermostat should report that it is already connected to storage. This is because some commands try to automatically connect to some pre-configured storage URL (which may fail in some cases):
Already connected to storage: URL = mongodb://127.0.0.1:27518 Please use disconnect command to disconnect.
- At step 7 no errors/exceptions are expected.
- At step 8 connect should have succeeded this time around
- The last step should exit the shell without errors/exceptions. | https://fedoraproject.org/wiki/QA:Testcase_thermostat_shell | CC-MAIN-2019-39 | refinedweb | 479 | 60.65 |
NAME
Number::Phone::Country - Lookup country of phone number
SYNOPSIS
use Number::Phone::Country; #returns 'CA' for Canada my $iso_country_code = phone2country("1 (604) 111-1111");
or
use Number::Phone::Country qw(noexport uk); my $iso_country_code = Number::Phone::Country::phone2country(...);
or
my ($iso_country_code, $idd) = Number::Phone::Country::phone2country_and_idd(...);
DESCRIPTION
This module looks up up the country based on a telephone number. It uses the International Direct Dialing (IDD) prefix, and lookups North American numbers using the Area Code, in accordance with the North America Numbering Plan (NANP). It can also, given a country, tell you the country code, and the prefixes you need to dial when in that country to call outside your local area or to call another country.
Note that by default, phone2country is exported into your namespace. This is deprecated and may be removed in a future version. You can turn that off by passing the 'noexport' constant when you use the module.
Also be aware that the ISO code for the United Kingdom is GB, not UK. If you would prefer UK, pass the 'uk' constant.
I have put in number ranges for Kosovo, which does not yet have an ISO country code. I have used KOS, as that is used by the UN Development Programme. This may change in the future.
FUNCTIONS
The following functions are available:
- country_code($country)
Returns the international dialing prefix for this country - eg, for the UK it returns 44, and for Canada it returns 1.
- idd_code($country)
Returns the International Direct Dialing prefix for the given country. This is the prefix needed to make a call from a country to another country. This is followed by the country code for the country you are calling. For example, when calling another country from the US, you must dial 011.
- ndd_code($country)
Returns the National Direct Dialing prefix for the given country..
- phone2country($phone)
Returns the ISO country code (or KOS
It has not been possible to maintain complete backwards compatibility with the original 0.01 release. To fix a bug, while still retaining the ability to look up plain un-adorned NANP numbers without the +1 prefix, all non-NANP numbers *must* have their leading + sign.
Another incompatibility - it was previously assumed that any number not assigned to some other country was in the US. This was incorrect for (eg) 800 numbers. These are now identified as being generic NANP numbers.
Will go out of date every time the NANP has one of its code splits/overlays. So that's about once a month then. I'll do my best to keep it up to date.
WARNING
The Yugoslavs keep changing their minds about what country they want to be and what their ISO 3166 code and IDD prefix should be. YU? CS? RS? ME? God knows. And then there's Kosovo ...
AUTHOR
now maintained by David Cantrell <david@cantrell.org.uk>
originally by TJ Mather, <tjmather@maxmind.com>
country/IDD/NDD contributions by Michael Schout, <mschout@gkg.net>
Thanks to Shraga Bor-Sood for the updates in version 1.4.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | https://metacpan.org/pod/release/DCANTRELL/Number-Phone-3.0009/lib/Number/Phone/Country.pm | CC-MAIN-2019-22 | refinedweb | 529 | 65.32 |
Adding your Task to the Dependency Plotting Tool¶
How to create a task dependency graph is described in Analysis Tools.
By default, it should pick up and plot the new task. However, you might want to
customize the plotting a bit, e.g. if you’re introducing a new ‘family’ of tasks
or you’d like them plotted as a cluster instead of individually. In both cases, we
need to modify the
tools/plot_task_dependencies.py script.
Colouring in the Task Nodes¶
First, decide on a colour. If you want to use the same colour as already existing
task types, find its key in the
task_colours dict, which is defined at the
top of the file. If you want to add a new colour, add it to the
task_colours
dict with a new key.
Then the script needs to identify the task types with which it is working.
To do so, it will check the task names, which are generated following the scheme
taskname_subtaskname, where
taskname is defined in
taskID_names and
subtasknbame is defined in
subtaskID_names in
task.c. In
tools/plot_task_dependencies.py, you’ll have to write a function that recognizes your
task by its name, like is done for example for gravity:
def task_is_gravity(name): """ Does the task concern the gravity? Parameters ---------- name: str Task name """ if "gpart" in name: return True if "grav" in name: return True return False
You’ll need to add the check to the function
get_task_colour():
if taskIsGravity(name): colour = task_colours["gravity"]
Feel free to pick out a nice color for it :)
Adding Clusters¶
In certain cases it makes sense to group some tasks together, for example the self
and pair tasks when computing hydro densities, gradients, or forces. To do this,
you’ll need to modify the function
task_get_group_name in
src/task.c. The group
is determined by the task subtype, e.g.
case task_subtype_grav: strcpy(cluster, "Gravity"); break;
But since the task type itself is also passed to the function, you could use that as well if you really really need to. And that’s it! | https://swift.dur.ac.uk/docs/Task/adding_to_dependency_plotting_tool.html | CC-MAIN-2022-27 | refinedweb | 344 | 67.79 |
Brian Warner wrote: > [my apologies if this gets duplicated, I've been having SMTP problems] It appears that this message was relayed twice, once today and once yesterday, and both times I only received the cc to me, and not the copy that was addressed to the twisted-python list. So I'm quoting the whole message below in case others haven't seen it. > Stephen Waterbury <waterbug at pangalactic.us> writes: > >> I am in the process of converting some of my code to use zope.schema >> instead of a similar thing that I independently invented. Since I am >> also planning on using Foolscap, and since Foolscap has a schema >> module, I compared zope.schema and foolscap.schema, and to me they >> seem to have much overlap in design intent. In particular, in the >> zope.schema use cases ... > >> So I was wondering -- would it make sense to use zope.schema >> in Foolscap? > > Yeah, that sounds quite intriguing. The Foolscap schemas serve a number of > similar purposes: > > #1: provide a clear place to document a program's remote interfaces. This > role is aimed at humans, not programs. > > #2: make it convenient to enforce those interfaces on the receiving end.. by > relying upon the guards, the code that actually does stuff is easier to > read and easier to analyze for other potential problems > > #3: enforce those interfaces on the sending end, to bring the discovery of > programming errors closer to their cause > > #4: enforce those interfaces at the receiving *wire*, to mitigate > resource-exhaustion attacks > > #1 is satisfied by pretty much anything, as it's more a set of documentation > #conventions than anything else. The existing zope.interface.Interface class > #does this pretty well. > > The rest require that the interfaces be machine-parseable. I enhanced z.i's > Interface (in foolscap.RemoteInterface) to allow the arguments and return > values to have meaning.. if zope.schema does something similar, I'd love to > take a look at it and see if it can capture the same expressiveness. Yes, that appears to be one of the things zope.schema does. > I'm undecided as to whether #4 is a good idea or not. It seemed like a good > idea when I first started, but I've had some smart people tell me it's not > the best place to attempt to solve DoS problems. Worse yet, the > implementation is so incomplete that I've personally had to disable schema > checking on things that would otherwise be useful (in particular I think a > "PolyConstraint" in which the two branches are both containers fails to match > the tokens on the wire correctly, even though such a thing would be quite > useful just at the post-serialization phase). So I'm tempted to drop #4 as a > design criterion and if z.s can represent enough to make that work, great, if > not, drop the feature. > >> A nice practical factor is that zope.schema has been packaged as >> a separate namespace module, similarly to zope.interface. > > I have to admit that I'm slightly hesitant to add a new dependency. But maybe > it isn't too big or too unwieldy. There doesn't seem to be a debian package > for it, though.. Right, and for both of those reasons (extra dependency and the debian packaging lag) I would like to see zope.schema merged into zope.interface, since zope.schema depends on zope.interface and I don't think it would be a big burden on zope.interface to carry along zope.schema even if zope.interface users don't all use it. I'll do a little lobbying for that on interface-dev (which has been extremely quiet lately ... probably a good sign). > But I'll definitely check it out. Thanks for the suggestion! > > -Brian > | http://twistedmatrix.com/pipermail/twisted-python/2007-July/015768.html | CC-MAIN-2014-42 | refinedweb | 629 | 67.35 |
Table of Contents
- 1 Python Seaborn Tutorial
- 2 Why Seaborn?
- 3 Getting Started with Seaborn
- 4 Conclusion
Python Seaborn Tutorial
Seaborn is a library for making statistical infographics in Python. It is built on top of matplotlib and also supports numpy and pandas data structures. It also supports statistical units from SciPy.
Visualization plays an important role when we try to explore and understand data, Seaborn is aimed to make it easier and the centre of the process. To put in perspective, if we say matplotlib makes things easier and hard things possible, seaborn tries to make that hard easy too, that too in a well-defined way. But seaborn is not an alternative to matplotlib, think of it as a complement to the previous.
As it is built on top of matplotlib, we will often invoke matplotlib functions directly for simple plots at matplotlib has already created highly efficient programs for it.
The high-level interface of seaborn and customizability and variety of backends for matplotlib combined together makes it easy to generate publication-quality figures.
Why Seaborn?
Seaborn offers a variety of functionality which makes it useful and easier than other frameworks. Some of these functionalities are:
- A function to plot statistical time series data with flexible estimation and representation of uncertainty around the estimate
- Functions for visualizing univariate and bivariate distributions or for comparing them between subsets of data
- Functions that visualize matrices of data and use clustering algorithms to discover structure in those matrices
- High-level abstractions for structuring grids of plots that let you easily build complex visualizations
- Several built-in themes for styling matplotlib graphics
- Tools for choosing color palettes to make beautiful plots that reveal patterns in your data
- Tools that fit and visualize linear regression models for different kinds of independent and dependent variables
Getting Started with Seaborn
To get started with Seaborn, we will install it on our machines.
Install Seaborn
Seaborn assumes you have a running Python 2.7 or above platform with NumPY (1.8.2 and above), SciPy(0.13.3 and above) and pandas packages on the devices.
Once we have these python packages installed we can proceed with the installation. For
pip installation, run the following command in the terminal:
pip install seaborn
If you like conda, you can also use conda for package installation, run the following command:
conda install seaborn
Alternatively, you can use pip to install the development version directly from GitHub:
pip install git+
Using Seaborn
Once you are done with the installation, you can use seaborn easily in your Python code by importing it:
import seaborn
Controlling figure aesthetics
When it comes to visualization drawing attractive figures is important.
Matplotlib is highly customizable, but it can be complicated at the same time as it is hard to know what settings to tweak to achieve a good looking plot. Seaborn comes with a number of themes and a high-level interface for controlling the look of matplotlib figures. Let’s see it working:
#matplotlib inline import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns np.random.seed(sum(map(ord, "aesthetics"))) #Define a simple plot function, to plot offset sine waves def sinplot(flip=1): x = np.linspace(0, 14, 100) for i in range(1, 7): plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip) sinplot()
This is what the plot looks like with matplotlib defaults:
If you want to switch to seaborn defaults, simply call ‘set’ function:
sns.set() sinplot()
This is how the plot look now:
Seaborn figure styles
Seaborn provides five preset themes: white grid, dark grid, white, dark, and ticks, each suited to different applications and also personal preferences.
Darkgrid is the default one. The White grid theme is similar but better suited to plots with heavy data elements, to switch to white grid:
sns.set_style("whitegrid") data = np.random.normal(size=(20, 6)) + np.arange(6) / 2 sns.boxplot(data=data)
The output will be:
For many plots, the grid is less necessary. Remove it by adding this code snippet:
sns.set_style("dark") sinplot()
The plot looks like:
Or try the white background:
sns.set_style("white") sinplot()
This time, the background looks like:
Sometimes you might want to give a little extra structure to the plots, which is where ticks come in handy:
sns.set_style("ticks") sinplot()
The plot looks like:
Removing axes spines
You can call
despine function to remove them:
sinplot() sns.despine()
The plot looks like:
Some plots benefit from offsetting the spines away from the data. When the ticks don’t cover the whole range of the axis, the trim parameter will limit the range of the surviving spines:
The plot looks like:
You can also control which spines are removed with additional arguments to despine:
sns.set_style("whitegrid") sns.boxplot(data=data, palette="deep") sns.despine(left=True)
The plot looks like:
Temporarily setting figure style
axes_style() comes to help when you need to set figure style, temporarily:
with sns.axes_style("darkgrid"): plt.subplot(211) sinplot() plt.subplot(212) sinplot(-1)
The plot looks like:
Overriding elements of the seaborn styles
A dictionary of parameters can be passed to the
rc argument of
axes_style() and
set_style() in order to customize figures.
Note: Only the parameters that are part of the style definition through this method can be overridden. For other purposes, you should use
set() as it takes all the parameters.
In case you want to see what parameters are included, just call the function without any arguments, an object is returned:()
The plot looks like:
Scaling plot elements
Let’s try to manipulate scale of the plot. We can reset the default parameters by calling set():
sns.set()
The four preset contexts are – paper, notebook, talk and poster. The notebook style is the default, and was used in the plots above:
sns.set_context("paper") sinplot()
The plot looks like:
sns.set_context("talk") sinplot()
The plot looks like:
Conclusion
In this lesson, we have seen that Seaborn makes it easy to manipulate different graph plots. We have seen examples of scaling and changing context.
Seaborn makes it easy to visualize data in an attractive manner and make it easier to read and understand. | https://www.journaldev.com/18583/python-seaborn-tutorial | CC-MAIN-2019-39 | refinedweb | 1,042 | 52.09 |
ukesmith123Members
Content count119
Joined
Last visited
Community Reputation153 Neutral
About lukesmith123
- RankMember
lukesmith123 posted a topic in 2D and 3D ArtI am converting some files to dds and I'm a little confused about which DXT versions of dds I should be using. For diffuse textures with no alpha should I use DXT1(no alpha) ? Also I have normal maps which contain specular maps in the alpha channel. Will DXT3 be poorer quality but smaller file size then using DXT5? Also should I create mip maps for normal map textures or not? thanks so much!
lukesmith123 posted a topic in General and Gameplay ProgrammingWhen doing additive blending of two separate animations that run at the same time, should any unused bones of one animation simply remain in the bindpose? For instance when blending a shooting animation with a running animation, should the shooting animations lower body remain in the bindpose?
lukesmith123 replied to lukesmith123's topic in Math and PhysicsThanks for the replies. The area of intersection sum is really useful. I found some good stuff in a book and I think the fastest method when using min/max is separating axis: [CODE] if (Max.X < B.Min.X || Min.X > B.Max.X) return false; if (Max.Y < B.Min.X || Min.Y > B.Max.Y) return false; return true; [/CODE]
lukesmith123 posted a topic in Math and PhysicsWhats the fastest way to test containment between two 2D AABBs? Also I'm testing for intersections like this: [CODE] return (Abs(Corners[0].X - B.GetCorners[0].X) * 2 < (Corners[1].X + B.GetCorners[1].X)) && (Math.Abs(Corners[0].Y - B.GetCorners[0].Y) * 2 < (Corners[1].Y + B.GetCorners[1].Y)); [/CODE] if I always need to check for intersections as well as containment is there a better method I could use to check both at the same time more cheaply?
lukesmith123 replied to lukesmith123's topic in General and Gameplay ProgrammingAh I see. Great answers thanks!
lukesmith123 replied to lukesmith123's topic in General and Gameplay ProgrammingOne thing, with the last point you made, would that not be a much slower method than just creating a frustum from the 4 corners in the first place? Also do you have any opinion on whether projecting to screen space vs creating frustums from the portals is faster?
lukesmith123 replied to lukesmith123's topic in General and Gameplay ProgrammingFantastic thanks so much!
lukesmith123 replied to lukesmith123's topic in General and Gameplay ProgrammingOk, so would you check the players bounds against all of the cell bounding boxes each frame? or would you need to keep track of the players movement through portals to speed this up? So when you project the corners of the portal into screen space and build a 2D bounding box from them how do you project the objects to screen space and get a 2D box from those? Would you take the corners of the objects bounding box and project them to screen space and then create the smallest 2D box from the 8 corners? Thanks so much for the tips!
lukesmith123 posted a topic in General and Gameplay ProgrammingI have a couple of questions about portal systems for visibility determination that I couldnt find any info on and I hope somebody here could answer. How do you keep track of which cell the player is currently in? Do you contain each cell in a bounding box? if so, what about cells that aren't box shaped? Also I have read that a good method is to project the portal into screen space and check which objects in the connecting cell are within the 2d bounds of the portal. Could anybody explain how to do this I dont really understand how to project the portal from world to screen space and test collisions with objects this way. thanks,
lukesmith123 replied to lukesmith123's topic in General and Gameplay ProgrammingWow very nice thanks!
lukesmith123 posted a topic in General and Gameplay ProgrammingIs there any good books on the subject of portals for visibility determination? I'm looking for something that is fairly in depth with example code. I have a few books that mention the subject for example the morgan kaufman collision book but it only skims the topic and doesnt go into detail. thanks
lukesmith123 replied to lukesmith123's topic in Graphics and GPU ProgrammingThey both look excellent thank you.
lukesmith123 posted a topic in Graphics and GPU ProgrammingCould anybody reccomend any books that cover the subject of calculating lightmaps aswell as radiosity calculation. I've read a few breif articles but I'm really hoping there are some more comprehensive books on the subject with code examples etc. thanks,
lukesmith123 posted a topic in Engines and MiddlewareHi, I followed a tutorial in the book 'programming game AI by example' which teaches an introduction to scripting with lua. I'm getting some unexpected memory leaks which I cant understand how they can happen. But as I am very new to lua and luabind I'm guessing that I might be missing something obvious here. I register functions from the statemachine and bot class in the bot class constructor (lua_close is called in the bot class destructor): [CODE] lua = luaL_newstate(); luabind::open(lua); luaL_openlibs(lua); luabind::module(lua) [ luabind::class_<StateMachine<Bot>>("StateMachine") .def("ChangeState", &StateMachine<Bot>::ChangeState) .def("CurrentState", &StateMachine<Bot>::CurrentState) .def("SetCurrentState", &StateMachine<Bot>::SetCurrentState) ]; luabind::module(lua) [ luabind::class_<Bot>("Bot") .def("one", &Bot::one) .def("two", &Bot::two) .def("GetStateMachine", &Bot::GetStateMachine) ]; luaL_dofile(lua,"lua.lua"); luabind::object states = luabind::globals(lua); if (luabind::type(states) == LUA_TTABLE) { stateMachine->SetCurrentState(states["State_one"]); } [/CODE] And the statemachine class looks like this: [CODE] template <class entity_type> class StateMachine { private: entity_type* Owner; luabind::object currentState; public: StateMachine(entity_type* owner):Owner(owner){} void SetCurrentState(const luabind::object& s){currentState = s;} void UpdateStateMachine() { if(currentState.is_valid()) { (currentState)["Execute"](Owner); } } void ChangeState(const luabind::object& new_state) { (currentState)["Exit"](Owner); currentState = new_state; (currentState)["Enter"](Owner); } const luabind::object& CurrentState()const{return currentState;} }; [/CODE] Everything works as expected but I am left with 6 memory leaks when I exit the program. Also when I remove the luabind::module code where the classes are registered then the memory leaks dissapear. Does anybody have any ideas? thanks,
lukesmith123 replied to lukesmith123's topic in Math and PhysicsEDIT: Ah I realised I had made a stupid mistake elsewhere and the code that I originally posted was fine. Sorry! | https://www.gamedev.net/profile/180866-lukesmith123/?tab=reputation | CC-MAIN-2017-30 | refinedweb | 1,074 | 52.39 |
Developing the program of demonstration of the queries at the LINQ language
In the topic is described the step by step process of solving the task, that uses a queries at the LINQ language. Using this example you will learn to solve the similar tasks. In the topic is shown:
- reading data from file by using the StreamReader class;
- data representation as a generalized dynamic array List<T>;
- examples some queries at the LINQ, which solve the task.
The given topic shows the step by step of process of performing laboratory work in one of the educational institutions of our planet.
The task
File “Workers.txt” consists from the following information about workers:
- identification code;
- name of worker;
- the kind of education;
- specialty;
- year of birth.
File “Salary.txt” contains:
- identification code;
- the salary for the first half;
- the salary for the second half.
You can download the file “Workers.txt” here. Download the file “Salary.txt” here.
Solve the following tasks:
- Print the names and initials of workers above 35 years old.
- Print the identification code of worker with maximum salary in second half-year.
- Print the name, initials and type of education of those workers whose salary is lower than the average salary for the year.
The tasks must be executed using LINQ queries.
Additional considerations
As can be seen from the file structures “Workers.txt” and “Salary.txt“, these files have the same field “identification code“. This means that when entering data into files, you must be careful. The identification codes must be unique in the whole file and they must be repeated in the different files.
The content of “Workers.txt” file:
1; Ivanov I.I.; Bachelor; Programmer Engineer; 1900 2; Petrov P.P.; Master; Team Lead; 1950 3; Sidorov S.S.; Super Master; Software Architect; 1990 4; Johnson J.J.; Super Bachelor; HTML-coder; 1997 5; Nicolson J.J.; Bachelor; DevOps engineer; 1992
The content of “Salary.txt” file:
1; 23550; 26580 2; 26800; 28390 3; 24660; 27777 4; 35880; 44444 5; 55555; 39938
Performing
1. Creating the project as Windows Forms Application.
Create the project as Windows Forms Application. The example of creating a new project as Windows Forms Application is described here.
Save the files of project in any folder. Copy the files “Workers.txt” and “Salary.txt” in the same folder where is located the execution file (*.exe) of program.
2. Create the main form of application.
Create the form as shown in Figure 1. The following controls are placed on the form:
- three controls of Label type. Automatically the three objects (class instance) are created with names label1, label2, label3;
- three controls of Button type. Automatically the three objects are created with names button1, button2, button3;
- three controls of ListBox type. Automatically the three objects are created with names listBox1, listBox2, listBox3.
Figure 1. The main form of program
3. Setting up of controls of the form.
Set up the following controls on the form:
- – select the control label1 (by using mouse). In the control label1 property Text = “Workers.txt“;
- – in the control label2 property Text = “Salary.txt“;
- – in the control label3 property Text = “Result“;
- – in the control button1 property Text = “Task 1“;
- – in the control button2 property Text = “Task 2“;
- – in the control button3 property Text = “Task 3“.
After setting, form has the view as shown in Figure 2.
Figure. 2. The form after setting the controls
4. Connecting of the namespace System.Linq.
To use the LINQ-queries, you need to include the namespace System.Linq. As a rule, the namespase System.Linq is included automatically, when application is created as Windows Forms Application.
In the file Form1.cs the connection string is following:
using System.Linq;
5. Developing the internal data structures, that correspond to the files “Workers.txt” and “Salary.txt“.
Data that correspond to the one string of file “Workers.txt” is advisable to represent as “Workers” structure:
struct Workers { public string code; // identification code public string name; // name of worker public string education; // type of eduction public string profession; // speciality public int year; // year of birth }
Also, data that correspond to the one string of file “Salary.txt” is advisable to represent as “Salary” structure:
struct Salary { public string code; // identification code public float salary1; // salary in the first half-year public float salary2; // salary in the second half-year }
Since strings in the files can be a lot, all the data can be placed in the form of generalized dynamic array of type List<T>.
For the “Workers” and “Salary” structures, the dynamic arrays has the following definition:
List<Workers> lw = null; // the list of structures of "Workers" type List<Salary> ls = null; // the list of structures of "Salary" type
After inputting the structures the Form1 clas has the following view:
... public partial class Form1 : Form { struct Workers { public string code; // identification code public string name; // name public string education; // type of education public string profession; // speciality public int year; // year of birth } struct Salary { public string code; // identification code public float salary1; // salary in the first half-year public float salary2; // salary in the second half-year } List<Workers> lw = null; // the list of structures of Workers type List<Salary> ls = null; // the list of structures of Salary type public Form1() { InitializeComponent(); } } ...
6. Connecting the System.IO namespace.
To read data from files, in the program are used the possibilities of StreamReader class, that included in the .NET Framework library. Therefore, to use the methods of these classes, you need to add the following string at the beginning of “Form1.cs” file:
using System.IO;
7. Creating the methods Read_Workers() and Read_Salary() to read data from files “Workers.txt” and “Salary.txt“.
To read data from files “Workers.txt” and “Salary.txt” you need to input the two methods in the class Form1:
- Read_Workers();
- Read_Salary().
Listing of method Read_Workers() is following:
// reading data from file "Workers.txt" public void Read_Workers() { // create the object of StreamReader class, that corresponds to the file "Workers.txt" StreamReader sr = File.OpenText("Workers.txt"); string[] fields; // the variable, that corresponds to the fields of Workers structure string line = null; Workers w; // reading the string line = sr.ReadLine(); while (line != null) { // split the string by substrings - delimiter is the symbol ';' fields = line.Split(';'); // creating the structure of Workers type w.code = fields[0]; w.name = fields[1]; w.education = fields[2]; w.profession = fields[3]; w.year = Int32.Parse(fields[4]); // adding the structure of Workers type in the list List<Workers> lw.Add(w); // adding the string in listBox1 listBox1.Items.Add(line); // read the next string line = sr.ReadLine(); } }
Method Read_Workers() reads data from file “Workers.txt” and writes them in:
- dynamic array lw of List<Workers> type;
- control listBox1 to display on the form.
Listing of method Read_Salary() is following:
// read the data from file "Salary.txt" public void Read_Salary() { // create the object of StreamReader class, that corresponds to file "Salary.txt" StreamReader sr = File.OpenText("Salary.txt"); string[] fields; // the variable, that corresponds to the fields of structure Workers string line = null; Salary s; // read the string line = sr.ReadLine(); while (line != null) { // split the string by substrings - delimiter is the symbol ';' fields = line.Split(';'); // creating the structure of Salary type s.code = fields[0]; s.salary1 = (float)Convert.ToDouble(fields[1]); s.salary2 = (float)Double.Parse(fields[2]); // adding the structure of Salary type in the list List<Salary> ls.Add(s); // adding the string in listBox2 listBox2.Items.Add(line); // read the next string line = sr.ReadLine(); } }
Method Read_Salary() reads data from file “Salary.txt” and writes them in:
- dynamic array ls of type List<Salary>;
- control listBox2 to display on the form.
8. Programming the Form1() constructor of main form. Reading data from files.
After running the program, data from files must be loaded automatically into the controls “listBox1” and “listBox2”.
Therefore, in the constructor Form1() of class you need add methods Read_Workers() and Read_Salary().
Also, in the constructor Form1() of class is added the memory allocation for dynamic arrays lw and ls. Listing of constructor of main form is as follows:
public Form1() { InitializeComponent(); // memory allocation for lists lw = new List<Workers>(); ls = new List<Salary>(); // clear the controls of ListBox type listBox1.Items.Clear(); listBox2.Items.Clear(); listBox3.Items.Clear(); // read data from file "Workers.txt" Read_Workers(); Read_Salary(); }
9. Programming the events of clicking on the button “Task 1“.
The paragraph 1 of task is solved, when user clicks on the button “Task 1“. As a result, the corresponding event handler will be called.
An example of programming the event of clicking on the button in the application is described here.
Listing of event handler of clicking on the button “Task 1“:
// Names and initials of workers older than 35 years private void button1_Click(object sender, EventArgs e) { // query named "names" var names = from nm in lw where nm.year < (2016-35) select nm.name; listBox3.Items.Clear(); // clear the list // add the result of query "names" in the listBox3 foreach (string s in names) listBox3.Items.Add(s); }
In the listing above, the LINQ-query is formed. This query is named as “names“:
var names = from nm in lw where nm.year < (2016-35) select nm.name;
The query is begun from the “from” operator. In the “from” operator is given the variable of range, that is named as “nm“. The dynamic array lw of type Workers is the data source in the operator from.
The next step is the “where” operator that is a condition. The element in data source should sutisfy this condition, so that it can be obtained on request.
The query ends by the “select” operator. The “select” operator sets, what must be outputed by a query. In the given example is selected the “name” field of “Workers” structure.
To execute the query, you must use the “foreach” loop.
foreach (string s in names) listBox3.Items.Add(s);
Since, the result of LINQ-query is the string type, then the variable s of type string is described in the loop.
10. Programming the event of clicking on the button “Task 2“.
The event handler of cliciking on the button “Task 2” is following:
// identification code of worker with maximum salary for second half-year. private void button2_Click(object sender, EventArgs e) { // query max_salary var max_salary = (from ms in ls select ms.salary2).Max(); // method Max() returns the maximum value listBox3.Items.Clear(); listBox3.Items.Add(max_salary); }
In this LINQ-query is used the method Max(), that returns a maximum value in the list. This list is formed into the query with name max_salary.
11. Programming the event of clicking on the button “Task 3“.
Listing of event handler of clicking on the button “Task 3” is following:
// The names and education kind those workers, // which have the salary less then average for year private void button3_Click(object sender, EventArgs e) { var result = from w in lw from sl in ls let avg = (from s in ls select (s.salary1 + s.salary2)).Average() where ((sl.salary1 + sl.salary2) < avg) && (w.code == sl.code) select w.name +" - " + w.profession; listBox3.Items.Clear(); foreach (string s in result) listBox3.Items.Add(s); }
In this handler is formed the query with name result.
In the query you need realize the data fetch from two data sources. In our case these are the following sources:
- the dynamical array lw of type List<Workers>. The name and kind of education are selected from this array;
- the dynamical array ls of type List<Salary>. The salary is selected from this array.
To select the data from two sources the query must consist of two nested “from” operators.
To select data that correspond to unique identifier code (in the “where” operator), the string (w.code==sl.code) is used:
where (...) && (w.code == sl.code)
To calculate the average, is used method Average() of .NET Framework environment. The average is saved in the variable avg, which is entered into query by using the “let” operator:
... let avg = (from s in ls select (s.salary1 + s.salary2)).Average() ...
12. Run the program.
Now you can run the program. | http://www.bestprog.net/en/2016/12/14/developing-the-program-of-demonstration-of-the-queries-at-the-linq-language/ | CC-MAIN-2017-39 | refinedweb | 2,023 | 67.76 |
Here's some functions that I wrote to help with collision detection in a retro arcade remake of Asteroids using Python and Pygame. Some simple assertion tests are included at the bottom.
Features:
The entry point for an intersect test is the getIntersectPoint function.
# geometry.py
#
# Geometry functions to find intersecting lines.
# Thes calc's use this formula for a straight line:-
# y = mx + b where m is the gradient and b is the y value when x=0
#
# See here for background
#
# Throughout the code the variable p is a point tuple representing (x division
from pygame import Rect
# Calc the gradient 'm' of a line between p1 and p2
def calculateGradient(p1, p2):
# Ensure that the line is not vertical
if (p1[0] != p2[0]):
m = (p1[1] - p2[1]) / (p1[0] - p2[0])
return m
else:
return None
# Calc the point 'b' where line crosses the Y axis
def calculateYAxisIntersect(p, m):
return p[1] - (m * p[0])
# Calc the point where two infinitely long lines (p1 to p2 and p3 to p4) intersect.
# Handle parallel lines and vertical lines (the later has infinate 'm').
# Returns a point tuple of points like this ((x,y),...) or None
# In non parallel cases the tuple will contain just one point.
# For parallel lines that lay on top of one another the tuple will contain
# all four points of the two lines
def getIntersectPoint(p1, p2, p3, p4):
m1 = calculateGradient(p1, p2)
m2 = calculateGradient(p3, p4)
# See if the the lines are parallel
if (m1 != m2):
# Not parallel
# See if either line is vertical
if (m1 is not None and m2 is not None):
# Neither line vertical
b1 = calculateYAxisIntersect(p1, m1)
b2 = calculateYAxisIntersect(p3, m2)
x = (b2 - b1) / (m1 - m2)
y = (m1 * x) + b1
else:
# Line 1 is vertical so use line 2's values
if (m1 is None):
b2 = calculateYAxisIntersect(p3, m2)
x = p1[0]
y = (m2 * x) + b2
# Line 2 is vertical so use line 1's values
elif (m2 is None):
b1 = calculateYAxisIntersect(p1, m1)
x = p3[0]
y = (m1 * x) + b1
else:
assert false
return ((x,y),)
else:
# Parallel lines with same 'b' value must be the same line so they intersect
# everywhere in this case we return the start and end points of both lines
# the calculateIntersectPoint method will sort out which of these points
# lays on both line segments
b1, b2 = None, None # vertical lines have no b value
if m1 is not None:
b1 = calculateYAxisIntersect(p1, m1)
if m2 is not None:
b2 = calculateYAxisIntersect(p3, m2)
# If these parallel lines lay on one another
if b1 == b2:
return p1,p2,p3,p4
else:
return None
# For line segments (ie not infinitely long lines) the intersect point
# may not lay on both lines.
#
# If the point where two lines intersect is inside both line's bounding
# rectangles then the lines intersect. Returns intersect point if the line
# intesect o None if not
def calculateIntersectPoint(p1, p2, p3, p4):
p = getIntersectPoint(p1, p2, p3, p4)
if p is not None:
width = p2[0] - p1[0]
height = p2[1] - p1[1]
r1 = Rect(p1, (width , height))
r1.normalize()
width = p4[0] - p3[0]
height = p4[1] - p3[1]
r2 = Rect(p3, (width, height))
r2.normalize()
# Ensure both rects have a width and height of at least 'tolerance' else the
# collidepoint check of the Rect class will fail as it doesn't include the bottom
# and right hand side 'pixels' of the rectangle
tolerance = 1
if r1.width < tolerance:
r1.width = tolerance
if r1.height < tolerance:
r1.height = tolerance
if r2.width < tolerance:
r2.width = tolerance
if r2.height < tolerance:
r2.height = tolerance
for point in p:
try:
res1 = r1.collidepoint(point)
res2 = r2.collidepoint(point)
if res1 and res2:
point = [int(pp) for pp in point]
return point
except:
# sometimes the value in a point are too large for PyGame's Rect class
str = "point was invalid ", point
print str
# This is the case where the infinately long lines crossed but
# the line segments didn't
return None
else:
return None
# Test script below...
if __name__ == "__main__":
# line 1 and 2 cross, 1 and 3 don't but would if extended, 2 and 3 are parallel
# line 5 is horizontal, line 4 is vertical
p1 = (1,5)
p2 = (4,7)
p3 = (4,5)
p4 = (3,7)
p5 = (4,1)
p6 = (3,3)
p7 = (3,1)
p8 = (3,10)
p9 = (0,6)
p10 = (5,6)
p11 = (472.0, 116.0)
p12 = (542.0, 116.0)
assert None != calculateIntersectPoint(p1, p2, p3, p4), "line 1 line 2 should intersect"
assert None != calculateIntersectPoint(p3, p4, p1, p2), "line 2 line 1 should intersect"
assert None == calculateIntersectPoint(p1, p2, p5, p6), "line 1 line 3 shouldn't intersect"
assert None == calculateIntersectPoint(p3, p4, p5, p6), "line 2 line 3 shouldn't intersect"
assert None != calculateIntersectPoint(p1, p2, p7, p8), "line 1 line 4 should intersect"
assert None != calculateIntersectPoint(p7, p8, p1, p2), "line 4 line 1 should intersect"
assert None != calculateIntersectPoint(p1, p2, p9, p10), "line 1 line 5 should intersect"
assert None != calculateIntersectPoint(p9, p10, p1, p2), "line 5 line 1 should intersect"
assert None != calculateIntersectPoint(p7, p8, p9, p10), "line 4 line 5 should intersect"
assert None != calculateIntersectPoint(p9, p10, p7, p8), "line 5 line 4 should intersect"
print "\nSUCCESS! All asserts passed for doLinesIntersect" | http://www.pygame.org/wiki/IntersectingLineDetection?parent=CookBook | CC-MAIN-2016-22 | refinedweb | 887 | 58.11 |
Image size of origin is 320*240.
Processing time is 30.96 second took.
The result of stitching
The result is pretty good. but, processing time is too much takes.
My computer spec is that.. (This is vmware system. The main system is mac book air 2013, i7 8bg)
The source code is very easy.
I think if we use stitching algorithm in realtime, we should be programing by GPU.
/////
#include < stdio.h > #include < opencv2\opencv.hpp > #include < opencv2\stitching\stitcher.hpp > #ifdef _DEBUG #pragma comment(lib, "opencv_core246d.lib") #pragma comment(lib, "opencv_imgproc246d.lib") //MAT processing #pragma comment(lib, "opencv_highgui246d.lib") #pragma comment(lib, "opencv_stitching246d.lib"); #else #pragma comment(lib, "opencv_core246.lib") #pragma comment(lib, "opencv_imgproc246.lib") #pragma comment(lib, "opencv_highgui246.lib") #pragma comment(lib, "opencv_stitching246.lib"); #endif using namespace cv; using namespace std; void main() { vector< Mat > vImg; Mat rImg; vImg.push_back( imread("./stitching_img/S1.jpg") ); vImg.push_back( imread("./stitching_img/S2.jpg") ); vImg.push_back( imread("./stitching_img/S3.jpg") ); vImg.push_back( imread("./stitching_img/S4.jpg") ); vImg.push_back( imread("./stitching_img/S5.jpg") ); vImg.push_back( imread("./stitching_img/S6.jpg") ); Stitcher stitcher = Stitcher::createDefault(); unsigned long AAtime=0, BBtime=0; //check processing time AAtime = getTickCount(); //check processing time Stitcher::Status status = stitcher.stitch(vImg, rImg); BBtime = getTickCount(); //check processing time printf("%.2lf sec \n", (BBtime - AAtime)/getTickFrequency() ); //check processing time if (Stitcher::OK == status) imshow("Stitching Result",rImg); else printf("Stitching fail."); waitKey(0); }/////
github
N image, realtime stitching.
source code:
how to work:
2 image stitching.
basic principal on vidoe(code and explanation):
basic principal on image(code and explanation):
Saludos muy bueno el artículo. Faltaría la imagen 4.
great blog mare! thank for your sharing
Thank you very much~!! ^^
The code work well when I stitched using only two of your images, but gave the following errors when used more than 2 image.
First-chance exception at 0x5baa677a in TEST_CV.exe: 0xC0000005: Access violation reading location 0x00000004.
First-chance exception at 0x756d9617 in TEST_CV.exe: Microsoft C++ exception: tbb::captured_exception at memory location 0x0012eaf4..
Unhandled exception at 0x756d9617 in TEST_CV.exe: Microsoft C++ exception: tbb::captured_exception at memory location 0x0012eaf4..
Please help me in fixing that.
Can you send me the image files? I will test on my computer. Thank you.
My email is feelmare@gmail.com
It's great code.... but I have problem with image that captured with fisheye lens.. how to stitch that images? any suggestions?
In my opinion, case of fisheye lens, you should do undistortion processing first.
refer to this page ->
This comment has been removed by the author.
it is great code ,,,, but can you help me how can i stitching video files together ???
thanks
Real-time video stitching is little bit different with image.
Big problem is proccesing time is late.
In order to implementation, Structure must be sperate of online and offline calculation part.
Please refer to the source code "stitching_detailed.cpp" of Opencv samples
I made real-time stitching program but I am difficult to share the code.
Because the source code is owned by my company.
refer to this video.
Thank you.
This comment has been removed by the author.
I have used the same code and it is running fine sometimes. But if I run it continuously more than two times, it is crashing.. giving the following error.
Access violation reading location 0x0000000000000008. and it the cursor is pointing to
pChore->m_pFunction(pChore); in Chores.cpp file.
When I ran using 640x480 size images, the application is generating panorama for 2 images and crashing some times for the same set of images. Moreover it is not able to run for more than 2 images.
Can you please let me know why it is happening like this?
This comment has been removed by the author.
hi , i faced same problem .... but when i using small images (320*240) for example , it will be correctly done .
it is very informative but is it possible to have this details using python language : my email: mandeepola@ymail.com
This comment has been removed by the author.
Hi, I have some problems when using the stitcher, maybe you can help me. I guess I'll remit to you personally by mail with the image. Thank you in advance.
I test your image
and upload code to github
and code
Thank you.
Thank you for your quick reply. It's strange your code is similar to mine and mine is not working properly. I´m using VS2013 as well, but OpenCV 3.1.0 x64 without Gpu, while you are using OpenCV 2.4.9 with Gpu. Do you know if the reason of my artifacts has to to with any of these three differences?
I think default option is different.
Default option looks similar (GraphCutSeamFinder(CostColor), BlocksGainCompensator and MultibandBlender after registration). Could it be due to any change in any of these blocks (probably multiband blender), that is messing up everything in the new version of OpenCV?
Hi, There was a run-time error on my PC. It says, "Unhandled exception at 0x00871BDA (opencv_imgproc2412d.dll) in T3.exe: 0xC000001D: Illegal Instruction." I use VS2012 and opencv2.4.12. Could you please help me?
I don't know about this error.
Please check build option 32bit, 64bit.
lib build condition. gpu include?
function usage different in another opencv version?
you can try in release type, it will be fine!
thank you.
i pasted this code in geany editor. compilation error ocured/home/aftab/Pictures/Screenshot from 2016-04-29 00:03:43.png
Hi Mare, thanks for the code. I used your exact code on my machine and fed the same set of input images, but my resulting image wasn't complete like yours. It looks like it did some stitching but not completely. Any ideas what may be wrong? Thanks!
Your code has 6 input images.
But there are only 5 pictures.
I tried running the program using your code.
Pictures 5 and 6 do not appear in the execution window.
And the input image was replaced with another image. But stitching failed.
It is example of Stitcher in OpenCV library.
It is very simple code, if you input overlap images.
If a common feature can not be found in the image, stitching may fail.
This comment has been removed by the author.
Hi you know any example of stitching but with the opossite perspective.
Something like this:
My intention to carry out an inspection of labels in metal cans.
Thanks in advance
Dear Mare! you're really doing nice job. Please let me know about your platform specification while executing your code for stitching images.
Hope to hearing from you soon.
Thanks,
Abdul Wahab
I am using window os, and visuals studio.
this code is standard code, so opencv version, platform(32/64), cuda version.. is not specification.
Thank you. | http://study.marearts.com/2013/11/opencv-stitching-example-stitcher-class.html | CC-MAIN-2017-26 | refinedweb | 1,136 | 70.9 |
07 April 2011 09:20 [Source: ICIS news]
By Liu Xin
?xml:namespace>
SINGAPORE
Spot prices were stagnant at $3,250-3,350/tonne (€2,275-2,345/tonne) CIF (cost, insurance and freight)
“We were earlier targeting moderate price increase of $30-50/tonne in April, but it is very difficult to implement any price increase and we are trying to maintain our prices,” said a regional producer.
Downstream demand in
“PC prices may have peaked following their recent hikes,” said a trader. “We are expecting some downward correction given ample supply and soft demand,” he added.
Spot PC prices had gained $150-200/tonne since mid-February, buoyed by rising feedstock bisphenol A (BPA) values and the expectations of tighter supply and firm raw material costs following the massive earthquake that hit
Mitsubishi Engineering Plastics shut its 110,000 tonne/year PC facility in Kashima, but is keeping its 60,000 tonne/year PC unit in Kurosaki running after the earthquake, sources said.
Idemitsu Kosan’s 47,000 tonne/year PC plant in
The power restrictions in
Meanwhile, the recent start-up of a Middle East-based 65,000 tonne/year optical-grade PC plant was expected to exert some downward pressure on spot values when cargoes arrive in
“PC prices look set to fall in April, but prevailing high BPA costs and lean margins would prevent any drastic declines,” said a regional trader.
Asian BPA spot prices were steady at $2,480-2,520/tonne CFR NE Asia on 25 March following the recent rally on the back of tight supply, according to ICIS data.
The spread between BPA and PC had narrowed to $600-800/tonnes, which was considered unhealthy by industry players. They prefer the spread to be over $800/tonne.
PC is a type of high specification engineering plastic, suitable for moulding and with a high resistance. Typical applications include automobiles, CDs and DVDs, electronic casings, returnable milk bottles, lighting and greenhouses.
($1 = €0.70)
Please visit the complete ICIS plants and projects database
For more information on PC, | http://www.icis.com/Articles/2011/04/07/9450620/asian-pc-set-to-fall-on-sluggish-demand-high-inventory.html | CC-MAIN-2015-11 | refinedweb | 345 | 53.75 |
MRML node to represent a 3D ROI. More...
#include <Libs/MRML/Core/vtkMRMLROINode.h>
MRML node to represent a 3D ROI.
Model nodes describe ROI data. They indicate where the ROI is located and the size of the ROI.
Definition at line 12 of file vtkMRMLROINode.h.
Definition at line 16 of file vtkMRMLROINode.h.
Transforms the node with the provided non-linear transform.
Reimplemented from vtkMRMLTransformableNode.
transform utility functions
Reimplemented from vtkMRMLTransformableNode.
Copy the node's attributes to this object
Reimplemented from vtkMRMLNode.
MRML methods.
Implements vtkMRMLTransformableNode.
Get node XML tag name (like Volume, Model)
Implements vtkMRMLTransformableNode.
Definition at line 40 of file vtkMRMLROINode.h.
Description get transformed planes for the ROI region
Indicates if the ROI is updated interactively
Propagate events generated in mrml.
Reimplemented from vtkMRMLNode.
Set node attributes
Reimplemented from vtkMRMLNode.
Get/Set for ROI Position in IJK cooridnates
Set/Get the InsideOut flag. This data member is used in conjunction with the GetPlanes() method. When off, the normals point out of the box. When on, the normals point into the hexahedron. InsideOut is off by default.
Get/Set for LabelText
Get/Set for radius of the ROI in IJK cooridnates
Get/Set for radius of the ROI in RAS cooridnates
Get/Set for ROI Position in RAS cooridnates Note: The ROI Position is the center of the ROI
update display node ids
Reimplemented from vtkMRMLNode.
Updates other nodes in the scene depending on this node or updates this node if it depends on other nodes when the scene is read in This method is called automatically by XML parser after all nodes are created
Reimplemented from vtkMRMLNode.
Indicates if the ROI is visible
Write this node's information to a MRML file in XML format.
Reimplemented from vtkMRMLNode.
The location of the ROI centroid in IJK space Note: The ROI Position is the center of the ROI
Definition at line 133 of file vtkMRMLROINode.h.
Control the orientation of the normals.
Definition at line 139 of file vtkMRMLROINode.h.
Definition at line 121 of file vtkMRMLROINode.h.
Definition at line 141 of file vtkMRMLROINode.h.
The radius of the ROI box in IJK space
Definition at line 136 of file vtkMRMLROINode.h.
The raidus of of the ROI box in RAS space
Definition at line 129 of file vtkMRMLROINode.h.
Definition at line 120 of file vtkMRMLROINode.h.
The ID of the volume associated with the ROI
Definition at line 145 of file vtkMRMLROINode.h.
The location of the ROI centroid in RAS space Note: The ROI Position is the center of the ROI
Definition at line 126 of file vtkMRMLROINode.h. | https://apidocs.slicer.org/master/classvtkMRMLROINode.html | CC-MAIN-2021-25 | refinedweb | 437 | 59.19 |
If you write the same object twice into the ObjectOutputStream using writeObject() method, typically you would expect that the size of the stream should increase approximately by the size of the object (and all the fields within that recursively). But it wouldn't happen so.
It is very critical to understand how writeObject() method works. It writes an object only once into a stream. The next time when the same object is written, it just notes down the fact that the object is already available in the same stream.
Let us take an example. We want to write 1000 student records into an ObjectOutputStream. We create only one record object, and plan to reuse the same record within a loop so that we save time on object creation. We will use setter methods to update the same object with next student's details. If we use writeObject() to carry out this task, changes made to all but the first student's records will be lost. (Go ahead and try the program given below)
To achieve the objective stated above, you must use writeUnshared() method call. (Change the writeObject() method to writeUnshared() method and convince yourself)
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.IOException;
import java.io.Serializable;
class StudentRecord implements Serializable {
public String name;
public String major;
}
public class ObjectStreamTest {
public static void main(String[] argv) throws IOException, java.lang.ClassNotFoundException {
// Open the Object stream.
ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream("objectfile.bin"));
// Create the record that will be reused.
StudentRecord rec = new StudentRecord();
// Write the records.
rec.name = "John"; rec.major = "Maths";
oos.writeObject(rec);
rec.name = "Ben"; rec.major = "Arts";
oos.writeObject(rec);
oos.close();
// Read the objects back to reconstruct them.
ObjectInputStream ois = new ObjectInputStream(new FileInputStream("objectfile.bin"));
rec = (StudentRecord)ois.readObject();
System.out.println("name: " + rec.name + ", major: " + rec.major);
rec = (StudentRecord)ois.readObject();
System.out.println("name: " + rec.name + ", major: " + rec.major);
ois.close();
}
}
5 comments:
Very nice example. Was stuck on it for a while and then I came across your post and cleared up a couple of misconceptions.
I'll soon write a little post about the same and a bit more on my blog.
Onkar Joshi.
Hi,
This blog entry helped me out a lot!
I'm working on a 3D FPS and I was using the writeObject method and wondering why I kept reading in the same #$^$%@ command over and over.
Now I realize why I should read the ENTIRE Java API spec for something like ObjectOutputStream (and not just scan for a method that looks like the one i want).
Thanks again for sharing your experiences with Serialization.
i spent 4 hours to discover this
thank you
greets from brazil
I spent hours trying to find the reason why I kept writing the same object.
Thanks for pointing this out.
yes, great note. The same happened to me I waste almost 3 hours wondering was wrong, just up until and came across this note. | http://royontechnology.blogspot.com/2007/01/notes-on-objectoutputstream.html | CC-MAIN-2017-47 | refinedweb | 507 | 69.18 |
End to End Scenario to Activate FIORI Analytical Apps in Front End from S/4 Hana.
With S/4 HANA we don’t need a separate server for FIORI unlike in transactional Apps . While my Functional Consultant was working in S/4 Hana Simple logistics he came out with an issue that the App :- Sales Order Fulfilment is not opening in Front end and is throwing an error :- Failed To Initialize.
From S/4 HANA on navigating to Trxn. /n/ IWFND/ERROR_LOG can read from error log :-
No service found for namespace ‘/SSB/’, name ‘SMART_BUSINESS_RUNTIME_SRV’, version ‘0001’
In Order to resolve the same We need to go to Trxn. /n/IWFND_MAINT_SERVICE and click on Add Service .
Add the OData service /SSB/SMART_BUSINESS_RUNTIME_SRV and post adding that we get the analytical app in the FIORI frontend.
Now navigate to the FIORI URL set :-
The Analytical HANA app is visible.
The above example is for Standard App :- Sales Order Fulfillment . In order to check all the apps it is known that we need to navigate to FIORI apps library :-(‘F0018’)/W13
Study the components and check it in S/4 HANA system . Not that every app contains OData Services or SICF for ex:- Create Sales Order as this varies from customer to customer.In that case we need to build App from Scratch and customize on the same. Wait out for the same in the series of my blogs.
Regards,
Somnath . | https://blogs.sap.com/2018/07/27/end-to-end-scenario-to-activate-fiori-analytical-apps-in-front-end-from-s4-hana./ | CC-MAIN-2021-49 | refinedweb | 239 | 70.63 |
Before you start
Learn what to expect from this tutorial and how to get the most out of it.
Scalable Vector Graphics (SVG) 1.1 is an XML language for describing two-dimensional vector graphics. Developed by the World Wide Web Consortium (W3C), it has the remarkable ambition of providing a practical and flexible graphics format in XML, despite the notorious verbosity of XML. SVG's feature set includes nested transformations, clipping paths, alpha masks, raster filter effects, template objects, and, of course, extensibility. SVG also supports animation, zooming and panning views, a wide variety of graphic primitives, grouping, scripting, hyperlinks, structured metadata, Cascading Style Sheets (CSS), a specialized Document Object Model (DOM) superset, and easy embedding in other XML documents. Overall, SVG has been one of the most widely and warmly embraced XML applications.
You can develop, process, and deploy SVG in many different environments, from mobile systems such as phones and Personal Digital Assistants (PDAs), to print environments..
Who should take this tutorial?'ll learn the basics of SVG in order to publish vector graphics on the Web using SVG. You'll learn how to render such images in a browser either stand-alone or embedded in XHTML.
This tutorial assumes knowledge of XML, XML namespaces, CSS, and basic XHTML. Even though this tutorial focuses on SVG on the Web, it requires no prior knowledge of SVG and starts with the basics of the language. If you aren't familiar with XML, take the tutorial Introduction to XML. If you need to learn about XML namespaces, read the article Plan to use XML namespaces, Part 1. If you're not familiar with CSS, especially as used with XML, take the tutorial Display XML with Cascading Stylesheets: Use Cascading Stylesheets to display XML, Part 1: Basic techniques to present XML in Web browsers. This tutorial introduces the use of CSS to style XML in browsers. If you aren't familiar with XHTML, a good place to start is XHTML, step-by-step. You should also understand the basic mathematics of the two-dimensional rectilinear coordinate system, also known as the Cartesian coordinate system. You might remember this best from high school mathematics as how to specify points along X and Y axes.
I highly recommend. Mac OS X users might want to try the Camino Web browser for SVG support. Microsoft® Internet Explorer users will require a plug-in such as the Adobe SVG Viewer. When showing browser output examples, I show screenshots of Firefox 1.5.0.2 on Ubuntu Linux®. Firefox is a popular Web browser available on Microsoft Windows®, Mac OS X, Linux, and other platforms. It is based on Mozilla's rendering engine.
About the examples in this tutorial
This tutorial features many examples of SVG files, either stand-alone or embedded in XHTML. All the files used in this tutorial are in the zip file, x-svggraphics-tutorial-files.zip. In this package, all files start with a prefix indicating what section they're covered in and what order of examples within the section. For example, the names of files from the first example in the third section start with eg_3_1.
Files that end with .svg are stand-alone SVG. Those that end with .xhtml are XHTML. A few files use other extensions such as .css for stand-alone CSS and .xsl for XSLT transform files.
I do take care to further list the example files in each panel and how each relates to the other, so if you follow along with the tutorial, you should be able to locate and experiment with the examples easily enough. | http://www.ibm.com/developerworks/xml/tutorials/x-svggraphics/ | crawl-003 | refinedweb | 603 | 56.45 |
Flutter is Google's UI toolkit for crafting beautiful, natively compiled applications for mobile, web, and desktop from a single codebase. In this post "Flutter Tutorial for Beginners", you'll learn: Exploring widgets, Building layouts, Responding to user input ...
Many people say that Flutter has a steep learning curve. If you have seen Flutter UI layout code like below (simplified from here), you may be inclined to agree with them.
Warning: Incomprehensible code block ahead. Scroll past and keep reading.
Widget build(BuildContext context) { ThemeData themeData = Theme.of(context); return new Scaffold( body: new Padding( padding: const EdgeInsets.all(10.0), child: new Column( mainAxisAlignment: MainAxisAlignment.spaceBetween, children: <Widget>[ new Expanded( child: new Align( alignment: FractionalOffset.center, child: new AspectRatio( aspectRatio: 1.0, child: new Stack( children: <Widget>[ new Positioned.fill( child: new AnimatedBuilder( animation: _controller, builder: (BuildContext context, Widget child) { return new CustomPaint( painter: new ProgressPainter( animation: _controller, color: themeData.indicatorColor, backgroundColor: Colors.white, ), ); } ), ), ], ), ), ), ), new Container( margin: new EdgeInsets.all(10.0), child: new Row( mainAxisAlignment: MainAxisAlignment.spaceEvenly, children: <Widget>[ new FloatingActionButton( child: new AnimatedBuilder( animation: _controller, builder: (BuildContext context, Widget child) { return new Icon( _controller.isAnimating ? Icons.pause : Icons.play_arrow ); }, ), ), ], ), ), ], ), ), ); } }
Mountains are only steep if you climb straight up. And the Flutter learning curve is only hard if you try to do too much at once. Just as hiking trails with switchbacks makes a mountain climb more manageable, in this tutorial I will give you an opportunity to take some easy first steps to mastering Flutter. You are going to discover that it's a lot easier than you thought.
One of the first concepts that you encounter in Flutter are widgets, so we will be looking at what they are and how to use them. Most importantly, there will be lots of examples that you will be able to experiment with yourself. I encourage you to actually run the examples and make changes to them as you go through the tutorial. This will greatly improve your rate of learning and help solidify your understanding of the topics.
I hear and I forget. I see and I remember. I do and I understand.
I don't expect you to know much. That's the point of this tutorial. However, you should have already set up your development environment. Some people prefer Android Studio. Others like Visual Studio Code because it's more lightweight. The fact is that both work fine. I'm writing the text of this tutorial using Visual Studio Code and running the code for the examples below in Android Studio with Flutter 1.0.
If you haven't set up the Flutter development environment yet, then I highly recommend following the directions in the Flutter documentation. Unlike a lot of documentation, the Flutter docs are very thorough and easy to follow. You should have finished at least the first three steps below (but I highly recommend Step 4 as well).
Also feel free to check out the previous Pusher Blog tutorials Getting started with Flutter Part 1 and Part 2.
Widgets are just pieces of your user interface. Text is a widget. Buttons are widgets. Check boxes are widgets. Images are widgets. And the list goes on. In fact, everything in the UI is a widget. Even the app itself is a widget!
If you are familiar with Android or iOS development (no problem if you aren't), then you will make the immediate connection to views (for Android) and UIViews (for iOS). This is a good comparison to make and you will do fine to start your journey with this mindset. A more accurate way to think, though, is that a widget is a blueprint. Flutter uses these blueprints to build the view elements under the hood and render them to the screen.
When.
This is your first step on the way to mastering Flutter. But if you think of widgets as simple blueprints, then this first step shouldn't be a hard one.
We are not going to go into how to make a layout in this lesson, but it's helpful to know that widgets are arranged into a tree of parent and child widgets. The entire widget tree is what forms the layout that you see on the screen. For example, here is the widget tree for the default demo app when you start a new project. The visible widgets are marked with red lines. (The other widgets in this tree are used for layout and adding functionality.)
Note: You can view any project's widget tree by using the Flutter Inspector tool. In Android Studio it's a vertical tab on the far right near the top. In Visual Studio Code you can find it by running the command Flutter: Inspect Widget when running the app in debugging mode.
Widgets are immutable. That is, they cannot be changed. Any properties that they contain are final and can only be set when when the widget is initialized. This keeps them lightweight so that it's inexpensive to recreate them when the widget tree changes.
There are two types of widgets: stateless and stateful. Stateless widgets are widgets that don't store any state. That is, they don't store values that might change. For example, an Icon is stateless; you set the icon image when you create it and then it doesn't change any more. A Text widget is also stateless. You might say, "But wait, you can change the text value." True, but if you want to change the text value, you just create a whole new widget with new text. The Text widget doesn't store a text property that can be changed.
The second type of widget is called a stateful widget. That means it can keep track of changes and update the UI based on those changes. Now you might say, "But you said that widgets are immutable! How can they keep track of changes?" Yes, the stateful widget itself is immutable, but it creates a State object that keeps track of the changes. When the values in the State object change, it creates a whole new widget with the updated values. So the lightweight widget (blueprint) gets recreated but the state persists across changes.
A stateful widget is useful for something like a checkbox. When a user clicks it, the check state is updated. Another example is an Image widget. The image asset may not be available when the widget is created (like if it is being downloaded), so a stateless widget isn't an option. Once the image is available, it can be set by updating the state.
If this section was too much for you, then don't worry about it. It isn't necessary at all for today's tutorial. But if you would like to learn more, then check out the Flutter widgets 101 YouTube videos from the Flutter team or read the core principles in the docs. If you want to do some deeper research then I recommend watching Flutter's Rendering Pipeline and Flutter's Layered Design.
Next we are going get our hands dirty with some easy examples of common widgets. Again, I highly recommend that you follow along and run the code in your editor.
Start a new Flutter application project. I called my project
flutter_widget_examples, but you can call yours whatever you want.
Open the
main.dart file. It's in the
lib folder in your project outline.
Delete all the text in this file and replace it with
void main() {}
If you hot reload your app now it should be a blank screen. The
main() function is the starting point for every Flutter app. Right now ours does nothing, but in each of the examples below we will be testing a different Flutter widget here.
The first widget we are going to play with is called a Container. As you might have guessed from the name, it's a holder for other widgets. But we aren't going to put anything else in it to start with. We will just play with its color property.
Replace all the code in
main.dart with the following:
// importing this package gives us the dart widgets // as well as the Material Theme widgets import 'package:flutter/material.dart'; // the main() function is the starting point for every Flutter project void main() { // calling this method (you guessed it) runs our app runApp( // runApp() takes any widget as an argument. // This widget will be used as the layout. // We will give it a Container widget this time. Container( color: Colors.green, // <-- change this ), ); }
Note: You may have noticed the commas (
,) at the ends of some lines in Dart (the programming language that we write Flutter apps in). These commas are used for formatting lines. You could remove them but then the text would be written on a single line when auto-formatted.
Restart the app and see what you get. Then replace
Colors.green with other values. You will notice that if you try to do a hot reload nothing happens. We will fix that soon. For now just restart the app between every change.
Colors.red
Colors.blueAccent
Colors.deepPurple
This step was pretty easy, wasn't it? Now you know how to change property values in Flutter widgets.
Probably every single app that you make will have text, so the Text widget is definitely one that we need to look at.
I added some boilerplate code with explanations. You don't have to pay too much attention to it, though. I was going to leave it out, but using the MaterialApp widget makes the app look nicer and makes the rest of the code simpler. Also, having the
build() method lets us use hot reload to update after changes.
Replace all the code in
main.dart with the following code. Pay special attention to the
myWidget() method at the bottom. We will use it to return the Text widget that we are playing with here. In following examples you will only need to replace this method.
import 'package:flutter/material.dart'; void main() { // runApp() is a builtin method that initializes the app layout // MyApp() (see below) is a widget that will be the root of our application. runApp(MyApp()); } // the root widget of our application class MyApp extends StatelessWidget { // The build method rebuilds the widget tree if there are any changes // and allows hot reload to work. @override Widget build(BuildContext context) { // This time instead of using a Container we are using the MaterialApp // widget, which is setup to make our app have the Material theme. return MaterialApp( // The Scaffold widget lays out our home page for us home: Scaffold( // We will pass an AppBar widget to the appBar property of Scaffold appBar: AppBar( // The AppBar property takes a Text widget for its title property title: Text("Exploring Widgets"), ), // The body property of the Scaffold widget is the main content of // our screen. Instead of directly giving it a widget we are going // to break it out into another method so that things don't get // too messy here. body: myWidget(), ), ); } } // This is where we will play with the Text widget Widget myWidget() { return Text( "Hello, World!", ); }
You should see the following:
Change the text from "Hello, World!" to "Hello, Flutter!" and then do a hot reload.
If you want to increase the font size, you can add a TextStyle widget to the
style property of
Text. Replace the
myWidget() method above with the following:
Widget myWidget() { return Text( "Hello, Flutter!", style: TextStyle( fontSize: 30.0 ), ); }
There are lots of other changes you can make with the TextStyle widget, like color, font, shadows, and spacing to name a few.
If you want to add padding, you don't change a property. Instead, you wrap the Text widget with a Padding widget. In Flutter lots of layout related tasks use widgets instead of setting properties. Remember, a widget is a blueprint that affects how the UI looks.
Replace the
myWidget() method with the following:
Widget myWidget() { return Padding( // Set the padding using the EdgeInsets widget. // The value 16.0 means 16 logical pixels. This is resolution // independent, so you don't need to worry about converting // to the density of the user's device. padding: EdgeInsets.all(16.0), // When wrapping one widget with another widget, // you use the child property. child: Text( "Hello, Flutter!", ), ); }
If you have been doing the code along with me, your confidence should be increasing. It really isn't that hard to make widgets, is it? Buttons are another common need and Flutter has several types of button widgets. Although we are not doing anything in response to the button click in this tutorial, you can see in the code below where you could do something.
Widget myWidget() { return RaisedButton( child: const Text('Button'), color: Colors.blue, elevation: 4.0, splashColor: Colors.yellow, onPressed: () { // do something }, ); }
We used a RaisedButton here. The elevation affects the shadow under the button. The splash color is what you see when the button is clicked.
You can use a FlatButton widget if you don't want the elevation.
Widget myWidget() { return FlatButton( child: const Text('Button'), splashColor: Colors.green, onPressed: () { // do something }, ); }
For accepting user text input you use a TextField widget. Now that you already have experience with the widgets above, this one is simple. You just click in the TextField and the system keyboard automatically pops up. (If it is not popping up on the iOS simulator press Command + Shift + K.)
Widget myWidget() { return TextField( decoration: InputDecoration( border: InputBorder.none, hintText: 'Write something here' ), ); }
Remove
border: InputBorder.none, and run it again. Now there is a blue input border at the bottom of the TextField.
The most common way to display lots of data is with a ListView. Now, I have done lists before with Android RecyclerViews and iOS TableViews and I have to say that Flutter is way too easy. The stories that you have heard about Flutter having a steep learning curve may have been overrated.
Widget myWidget() { return ListView.builder( padding: EdgeInsets.all(16.0), // spacing of the rows itemExtent: 30.0, // provides an infinite list itemBuilder: (BuildContext context, int index) { return Text('Row $index'); }, ); }
What if you want the rows to respond to user taps? Then fill the rows with a ListTile widget instead of a plain Text widget. This also adds nice spacing, so we can take out the extra padding and item extent from the code above.
Widget myWidget() { return ListView.builder( itemBuilder: (BuildContext context, int index) { return ListTile( title: Text('Row $index'), onTap: () { // do something }, ); }, ); }
If you have done native development on Android or iOS before, you can see how much easier this is. And if you haven't, take my word for it. It's easier. Pat yourself on the back for choosing Flutter. This is going to save you so much time.
Widgets are the basic building blocks of Flutter apps. You can think of them like blueprints for telling Flutter how you want the UI to look. In this lesson we looked at some of the most common structural widgets. You can see that making these widgets wasn't that hard when you take small steps one at a time. Everything I did here you can continue to do using the documentation. Find the widget that you want to study in the widget catalog, cut and paste a minimal example, and then start playing around with it.
You probably don't need it, but the code for this tutorial is available on GitHub.
Oh, and regarding that incomprehensible block of code at the beginning of the tutorial, the solution is to break complex layouts into smaller pieces by using variables, methods, or classes. I’ll talk about that more next time when we explore layouts. Before you know it programming in Flutter will be second nature to you! In the meantime check out the resources below..
I am using Android Studio with the Flutter 1.0 plugin to run the code in this tutorial. If you are using Visual Studio Code, though, you should be fine. { @override Widget build(BuildContext context) { return MaterialApp( home: Scaffold( appBar: AppBar( title: Text("Building layouts"), ), body: myLayoutWidget(), ), ); } } // replace this method with code in the examples below Widget myLayoutWidget() { return Text("Hello world!"); }.
The widgets above only took one child. When creating a layout, though, it is often necessary to arrange multiple widgets together. We will see how to do that using rows, columns, and stacks..)
We don't have space to cover all of the layout widgets here, but you have seen the most important ones already. Here are a few others that deserve mentioning:
main.dartfile and you'll see that we are using a Scaffold widget already..
All the indentation in the code above makes it hard to read. The solution to this is to break the large code block into smaller chunks. There are a few ways to do this.
In the abbreviated code below the rows have been extracted from the bigger widget into variables.
Widget myLayoutWidget() { Widget firstRow = Row( ... ); Widget secondRow = ... Widget thirdRow = ... return Container( ... child: Column( children: [ firstRow, secondRow, thirdRow, ], ), ); } @override.
Flutter has a few builtin tools for helping you debug layouts.
In Android Studio you can find the Flutter Inspector tab on the far right. Here we see our layout as a widget tree. @override { @override _MyWidgetState createState() => _MyWidgetState(); } // state class class _MyWidgetState extends State<MyWidget> { @override { @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter', home: Scaffold( appBar: AppBar( title: Text('Flutter'), ), body: MyWidget(), ), ); } } // widget class class MyWidget extends StatefulWidget { @override _MyWidgetState createState() => _MyWidgetState(); } // state class // We will replace this class in each of the examples below class _MyWidgetState extends State<MyWidget> { // state variable String _textString = 'Hello world'; // The State class must include this method, which builds the widget @override'; @override; @override> { @override; @override { @override { @override, @required.
flutter mobile-apps ios dart
Google has announced new flutter 1.20 stable with many improvements, and features, enabling flutter for Desktop and Web
This article covers everything about mobile app wireframe design: what to do and what not, tools used in designing a mobile or web app wireframe, and. | https://morioh.com/p/4de7f885e159 | CC-MAIN-2020-40 | refinedweb | 3,035 | 66.44 |
I have a program with 2 panels, each with it's own class.
LeftPanel and RightPanel.
Whenever I click a button that does some calculations (like calculating total assets amount) in one of the panels, it doesn't update that info for the other panel.
For example....I have this seperate class that both the panel classes call on
public class Money { public int money; public boolean first_run; public Money(int money){ this.money = money; } public int getMoney(){ if (first_run==false){ money = 100; first_run=true; } return money; } public void setMoney(int money){ this.money = money; } }
When I setMoney in the RightPanel, I need it to change the textarea in LeftPanel.
Is it something like repaint()? Or some other method that I'm unaware of? (plz show how it might work too since I'd be unfamiliar with it)
Much Gracias,
Slyvr | https://www.daniweb.com/programming/software-development/threads/337524/updating-panels-in-my-bizsim | CC-MAIN-2018-22 | refinedweb | 141 | 65.32 |
Chapter 1
Boolean Logic and Gates
Circuit design is based on t he mat hemat ical branch of Boolean Logic, dealing wit h various
manipulat ions of t he values of TRUE and FALSE. You can see t hat t hese values can easily be
represent ed by 0's and 1's inside t he co
mput er. Boolean logic uses t he basic st at ement s AND,
OR, and NOT. Using t hese and a series of Boolean expressions, t he f inal out put would be one
TRUE or FALSE st at ement. Let me t ry t o illust rat e t his:
If A is true AND B is true, then (A AND B) is true
If
A is true AND B is false, then (A AND B) is false
If A is true OR B is false, then (A OR B) is true
If A is false OR B is false, then (A OR B) is false
If A is t rue and B is f alse, t hen some ot her condit ion t akes place. The let t er A and B can
represent
anyt hing needed in t he program. In programming languages, we use t he IF
st at ement t ypically t o show t hat IF some t hing is t rue, THEN do t his.
Since t ransist ors are eit her ON or OFF, represent ing 0 and 1, ands t hus could represent TRUE
or FALSE, placement
of t hese t ransist ors in various locat ions of a circuit will yield result s t hat
are based on Boolean logic. These designs are called Gat es. A gat e is an elect ronic device t hat
t akes in some input and out put s a single binary out put. Gat es are used t o do all
sort s of
t hings, but f or Boolean logic, we are concerned wit h t he AND, OR, and NOT gat es. These gat es
are designed t o out put Boolean result s.
An AND gat e would be t wo t ransist ors in a series circuit as shown in t he diagram t o t he right.
In order t o get
a value of 1 as an out put (t he binary equivalent t o TRUE) t hen bot h Input 1
and Input 2 would have t o be 1. That is, bot h condit ions must be t rue in order f or t hose
t ransist ors t o swit ch t o ON, complet e t he circuit, and creat e an Out put.
An OR gat e is si
milar. We have t wo t ransist ors wit h t wo input s. But, t he t ransist ors are locat ed
in parallel. So, if eit her one of t he t ransist ors close, t he circuit will produce an out put. This
corresponds t o t he Boolean logic of OR, where if eit her is t rue, t hen t he f in
al result is TRUE. A
diagram of t he OR gat e is below.
There is also a NOT gat e. It is const ruct ed a bit dif f erent ly, but t he principle is t he same. The
placement of t he t ransist ors, and in t
his case a resist or, f orms t he NOT Boolean expression.
A circuit is designed around t hese principles. Using various combinat ions of t hese gat es, you
can design gat es f or almost any purpose: loops, t est
-
f or
-
equalit y, mat hemat ical f unct ions, et c.
As you ca
n imagine, even designing one circuit t o simply add t wo numbers can t ake rat her
complex designs of various combinat ions of t hese basic gat es. A comput er chip is made up of
millions of such circuit s. Various opt imizat ions t o t he designs help t o increase ove
rall speed of
t he circuit s.
An int erest ing not e: 1
-
bit ADD circuit requires 3 NOT gat es, 16 AND gat es, and 6 OR gat es, f or
a t ot al of 25 gat es. To creat e a 32
-
bit ADD circuit would t hen t ake 800 gat es using a t ot al of
1,504 t ransist ors. In t he old vacuum
t ube based comput ers, t his many vacuum t ubes would
t ake up a space about t he size of a ref rigerat or. Today, t he complet e ADD circuit t akes up a
space smaller t han a pixel on t his screen, or t he period at t he end of t his sent ence.
The Binary Number System
A comput er runs on binary code. But, what is t his? basically, it is a number syst em. Let's look
at it:
Most people use a number syst em based on 10. We use t he digit s 0,1,2,3,4,5,6,7,8 and 9 t o
f orm our numbers. All numbers can be represent ed by any num
ber t imes 10 t o some power. For
inst ance:
14,393 = 1.4393 x 10^4
Using t hese numbers we can f orm int egers, decimals, et c. We all know t his, so let me not
delve int o t his any more.
Now, t he reason we cannot use t his syst em of numbering in t he comput er is
pret t y simple. it
would make lif e easier f or us. But, as we know, a comput er circuit is made out of t ransist ors.
Transist ors have t wo posit ions
-
on and of f. The comput er uses t hese posit ions t o represent 0
and 1. Since we do not have any syst ems wit h 10 s
t able posit ions, and we do have t he
t ransist or wit h 2 st able posit ions, we t hus use t he binary syst em.
The binary syst em uses base 2 inst ead of base 10 like we are used t o. To compare it, let's
again look at base 10:
234 = (2 x 10^2) + (3 x 10^1) + (4 x
10^0)
You can see how any number can be represent ed by a base of 10. In binary we use base 2:
10111 = (1 x 2^4) + (0 x 2^3) + (1 x 2^2) + (1 x 2^1) + (1 x 2^0)
10111 = 16 + 0 + 4 + 2 + 1 = 23
So, 10111 in binary is equal t o a value of 23. To represent i
nt egers, which can be posit ive or
negat ive, comput ers t ypically use a sign not at ion on t he binary. 0 is posit ive and 1 is negat ive
and t his number precedes t he t he rest of t he number. 1 10111 would be
-
23 f or example. How
does t he comput er dif f erent iat e t h
is f rom 110111? Simply be cont ext in t he program.
ASCII code is t he t erm f or t ext charact ers represent ed by t he comput er. Since comput ers ONLY
t hink in binary, ASCII charact ers are represent ed by cert ain binary numbers. Again, t he only
way t he comput er di
f f erent iat es bet ween t he ASCII charact er and t he number it self is by
cont ext. There is a whole chart showing all t he ASCII charact ers and t heir binary equivalent s,
but who t he hell needs t o see t hat?
Von Neumann Architecture
All comput ers share t he sam
e basic archit ect ure, whet her it be a mult i
-
million dollar mainf rame
or a Palm Pilot. All have memory, an I/O syst em, and arit hmet ic/logic unit, and a cont rol unit.
This t ype of archit ect ure is named Von Neumann archit ect ure af t er t he mat hemat ician who
con
ceived of t he design.
Memory
Comput er Memory is t hat subsyst em t hat serves as t emporary st orage f or all program
inst ruct ions and dat a t hat are being execut ed by t he comput er. It is t ypically called RAM.
Memory is divided up int o cells, each cell having
a unique address so t hat t he dat a can be
f et ched.
Input / Out put
This is t he subsyst em t hat allows t he comput er t o int eract wit h ot her devices and communicat e
t o t he out side world. It also is responsible f or program st orage, such as hard drive cont rol.
Arit hmet ic/Logic Unit
This is t hat subsyst em t hat perf orms all arit hmet ic operat ions and comparisons f or equalit y. In
t he Von Neumann design, t his and t he Cont rol Unit are separat e component s, but in modern
syst ems t hey are int egrat ed int o t he processor
. The ALU has 3 sect ions, t he regist er, t he ALU
circuit ry, and t he pat hways in bet ween. The regist er is basically a st orage cell t hat works like
RAM and holds t he result s of t he calculat ions. It is much f ast er t han RAM and is addresses
dif f erent ly. The ALU
circuit ry is t hat act ually perf orms t he calculat ions, and it is designed f rom
AND, OR, and NOT gat es just as any chip. The pat hways in bet ween are self
-
explanat ory
-
pat hways f or elect rical current wit hin t he ALU.
Cont rol Unit
The cont rol unit has t he
responsibilit y of (1) f et ching f rom memory t he next program
inst ruct ion t o be run, (2) decode it t o det ermine what needs t o be done, t hen (3) issue t he
proper command t o t he ALU, memory and I/O cont rollers t o get t he job done. These st eps are
done cont inuo
usly unt il t he last line of a program is done, which is usually QUIT or STOP.
At t he machine level, t he inst ruct ions execut ed by t he comput er are expressed in machine
language. Machine Language is in binary code and is organized by op code and address f ie
lds.
Op codes are special binary codes t hat t ell t he comput er what operat ions t o carry out. The
address f ields are locat ions in memory on which t hat part icular op code will act. All machine
language inst ruct ions are organized wit h t he op code f irst, t hen t
he memory addresses
f ollowing. For example: Let's assume we want t o add t wo number t oget her t hat are in memory
locat ions 99 and 100. Let's assume t he decimal 9 is t he op code f or t he ADD f unct ion. The
f ormat, t hen, f or t he command would be 9
-
99
-
100. Of cou
rse, t his is in decimal f orm and not
t he way t he comput er sees it. Convert t hese t o binary t o get:
0000100100000000011000110000000001100100
That's a 9, a 99, and a 100 put t oget her wit h no dashes. Now, you get an idea of just how a
comput er t hinks.
The
set of all operat ions a processor can do is called it s inst ruct ion set.
Function of each Major Computer Component
Each dif f erent part in a comput er has a dif f erent t ask t o perf orm. Each part works dif f erent ly
in order t o get it s job done. There are ma
ny misconcept ions about what part s do what job, and
here, we will set out t o correct t hem. Knowing what f unct ion each part has is very rewarding.
If one knows what part does one, t hey can easily narrow down problems in a comput er.
The Processor(CPU)
The
processor is known as t he brain of t he comput er. The processor is just a really f ast
calculat or. It adds, subt ract s, mult iplies and divides a mult it ude of numbers. There are t wo
part s of t he Processor t hat do t he mat h. The f irst part is called t he Int eger
unit. It's job is t o
t ake care of t he "easy" numbers, like
-
5, 13, 1/2, et c. It's mainly used in business
applicat ions, like word processors, spreadsheet s, and t he Windows Deskt op. The ot her half is
called t he Float ing Point Unit. It's job is t o t ake care
of t he really hard numbers, like t he
square root of 3, pi, "e.", and logarit hms. This part of t he CPU is mainly used in 3D games, t o
calculat e t he posit ion of pixels, and images.
The Hard Disk Drive(HDD)
The Hard Drive is simply a mult it ude of met al dis
ks t hat spin around inside your comput er,
wit h heads t hat move around t hose disks. Those heads read and writ e dat a t o t he met allic
disks. The reason f or using a Hard drive is because t he hard drive is t he only part inside a
comput er t hat st ores dat a while
t he comput er is of f. Your Hard Drive is what st ores all of your
set t ings, programs, and t he operat ing syst em while your comput er is of f. The only draw back
t o t he hard drive, is t hat it is mechanical. That means it has a t endency t o break down every
once i
n a while f or no reason, and it is slower t han elect ronic means of dat a st orage.
Random Access Memory(RAM)
The RAM is a chip t hat holds dat a, only as elect ricit y f lows t hough it. It is very f ast compared
t o t he Hard Drive, but is also expensive, which is
why we don't use it f or our primary dat a
st orage. RAM is used as an int erf ace bet ween t he Hard Drive and t he Processor. If t he
Processor needs some dat a t hat's on t he hard drive, t he RAM chipset will ret rieve t he dat a
f rom t he hard drive and put it int o
memory, so t he processor can access it f ast er. If t he
comput er runs out of room in t he RAM, it will make a f ile on your hard drive, called "Virt ual
RAM." "Virt ual RAM" is just an ext ension of real RAM on your Hard Drive. As said above, t he
Hard drive is mu
ch slower t han t he RAM, so when t he comput er get s t he dat a st raight f rom t he
Hard Drive, your comput er will also seem like it f reezes, because it will be running so slowly.
Once you shut your comput er of f, t here is not hing st ored in t he RAM, because t here
is no
elect ricit y f lowing t hough it.
Cache (L1 & L2)
The Cache is high speed RAM. It st ores commonly used dat a and inst ruct ions f rom t he
processor so t hat it doesn't have t o go t o t he slower RAM t o get it. This is why t he modern day
comput er is so f ast
. Wit hout cache, most processors would be limit ed in speed by t he RAM.
Wit hout it, your comput er would be running t erribly slow. The Cache is split up int o 2 dif f erent
Levels. The f irst level, L1, ranges in size f rom 32KB t o 128KB. It is split in half and
resides
wit h in t he CPU core, next t o t he Int eger and Float ing Point Unit. The f irst half st ores
commonly used dat a, and t he second half st ores common inst ruct ions t hat t he processor
carries out on t he dat a. The second level of cache, called L2, is f or dat
a only. Some L2 Caches
are on t he mot herboard. Ot hers are on a special cart ridge wit h t he CPU. Newer L2 Caches are
in t he CPU core, wit h t he L1 cache.
The Chipset
The chipset.
General PC St ruct ure
Inside a comput er case must be jam
-
packed wit h wires. Act ually, a t ypical PC has a lot of
wast ed space inside t he case. It is also, a
s I said, a collect ion of part s. If a part breaks, you
just buy a new one and t hrow it in. A PC does not have many part s at all and it's f airly
st raight
-
f orward.
A case is a rect angular box which houses all t he PC innards. Taking t he cover of f t he case
r
eveals t he gut s. up in t he corner is a gray box wit h a bunch of wires coming out. That is t he
power supply, Swit ch Mode Power Supply (SMPS). That device t akes t he power f rom t he
elect rical out let in t he wall and dist ribut es it t o t he devices wit hin t he com
put er. At t he ends
of each wire bunch will be a whit e plug. This plug plugs int o each device wit hin t he PC.
The largest circuit board in t he PC is t he mot herboard. The mot herboard usually lies f lat or
st ands upright against t he side of t he case. All t he
part s in t he PC is connect ed t o t he
mot herboard. The mot herboard serves as t he communicat ion cent er of t he PC. All dat a moves
t hrough it. All of t hose lit t le elect rical et chings in t he board it self are t he lit t le roadways t hat
t he dat a and elect ricit y move
around on.
In a big slot on t he mot herboard is t he main processor, or CPU. The CPU will have a big f an
hanging of f it. This serves t o keep t he processor cool. Wit hout t he f an, it would boil it self. The
processor is t he "brain" of t he PC. Usually below t
he processor are a series of smaller circuit
boards, mount ed perpendicular t o t he mot herboard and each in it's own slot. Each of t hese
boards are your expansion cards. They are your modem, sound card, video card, net working
card, and any ot her various card
s you may have. Each card is modular and separat e, meaning
you can remove t he cards and replace t hem wit h ease.
You will also see the various storage devices of the computer. All but the hard drive will be mounted so as to stick out the
front of the
case
when the case cover is on. All drives have a power cord and a wide, gray cable going into it. These wide, gray cables are cal
led
IDE cables. "IDE" refers to the type of data transfer used on the PC. Data travels over these cables and this is how data mov
e
s from
the drives back to the processor in order to be manipulated.
Chapter 2
Processors
A CPU History
CPUs have gone t hrough many changes t hrough t he f ew years since Int el came out wit h t he
f irst one. IBM chose Int el's 8088 processor f or t he brains of
t he f irst PC. This choice by IBM
made Int el t he leader of t he CPU market. They usually come out wit h t he new ideas f irst. Then
companies such as AMD and Cyrix come in wit h t heir versions, usually wit h some minor
improvement s and slight ly f ast er.
Int el pr
ocessors have gone t hrough f ive generat ions. A sixt h is t aking hold. The f irst f our
generat ions t ook on t he "8" as t he series name, which is why t he t echnical t ypes ref er t o t his
f amily of chips as t he 8088, 8086, and 80186. This goes right on up t o t he 80
486, or just 486.
Then came along t he Pent ium. Int el went of f and changed t he name on t his one. Anyway, t he
higher t he chip number, t he more powerf ul t he chip is, and t he more cost ly.
The f ollowing chips are t he Leaders of t he comput er world.
Int el 808
6 (1978)
This chip was skipped over for the original PC, but was used in a few later computers that didn't
amount to much. It was a true 16
-
bit processor and talked with its cards via a 16 wire data connection.
Int el 8088 (1979)
This is the chip used in
the first PC. It was 16
-
bit, but it talked to the cards via a 8
-
bit connection. It ran
at a whopping 4 MHz and could address only 1 MB of RAM.
NEC V20 and V30 (1981)
Clones of t he 8088 and 8086. They are supposed t o be about 30% f ast er t han t he Int el
one
s.
Int el 80186
The 186 was a popular chip. Many versions have been developed in it s hist ory. Buyers
could choose f rom CHMOS or HMOS, 8
-
bit or 16
-
bit versions, depending on what t hey
needed. A CHMOS chip could run at t wice t he clock speed and at one f ourt
h t he power
of t he HMOS chip. In 1990, Int el came out wit h t he Enhanced 186 f amily. They all
shared a common core design. t hey had a 1
-
micron
core design and ran at about
25MHz at 3 volt s.
Int el 80
286 (1982)
A 16
-
bit processor capable of addressing up t o 16 MB of RAM. This chip is able t o work
wit h
virt ual memory
. The 286 was t he f irst "real" processor. It int roduced t he concept
of prot ect
ed mode. It has t he abilit y t o mult it ask, having dif f erent programs run
separat ely but at t he same t ime. This abilit y was not t aken advant age of by DOS, but
f ut ure
Operat ing Syst ems
, such as Windows, cou
ld play wit h t his new f eat ure. This
chip was used by IBM in it s Advanced Technology PC AT. It ran at 6 MHz, but lat er
edit ions of t he chip ran as high as 20 MHz.
Int el 386 (1988)
Wit h t his chip, PCs began t o be more usef ul. The 386 was t he f irst 32
-
bit p
rocessor f or
PCs. It could, as a result, crunch t wice as much dat a on each clock cycle and it could
play around wit h 32
-
bit cards. It can t alk t o as much as 4 GB of real memory and 64
TB of virt ual memory. This Processor could also t eam up wit h a
mat h coprocessor
,
called t he 80387. It could also use processor cache, all 16 byt es of it. The reduced
version of t his chip is t he 386SX. This is a low
-
f at chip, cheaper t o make. It t alked
wit h t he c
ards via a 16
-
bit pat h. 386s range in speed f rom 12.5MHz t o 33MHz. 386
chips were designed t o be user f riendly. All chips in t he f amily were pin
-
f or
-
pin
compat ible and t hey were binary compat ible wit h t he previous 186 chips, meaning t hat
users didn't have
t o get new sof t ware t o use it. Also, t he 386 of f ered power f riendly
f eat ures such as low volt age requirement s and Syst em Management Mode (SMM) which
could power down various component s t o save power. Overall, t his chip was a big st ep
f or chip development.
It set t he st andard t hat many lat er chips would f ollow. It of f ered
a simple design which developers could easily design f or.
Int el 486 (1991)
The 486 brought t he brains of a 386 t oget her wit h an int ernal mat h coprocessor. It
was much f ast er. This chip h
as been pushed t o 120 MHz and is st ill in use t oday, in
older syst ems.
The f irst member of t he 486 f amily was t he 486SX. It was very power, ef f icient and
perf ormed well f or t he t ime. The ef f icient design led t o new packaging innovat ions.
The 486SX came in
a 176 lead Thin Quad Flat Pack (TQFP) package and was about t he
t hickness of a quart er.
The next members of t he 486 f amily were t he DX2s and DX4s. Their speeds were
obt ained due t o t he speed
-
mult iplying t echnology which enabled t he chip t o operat e at
clo
ck cycles great er t han t hat of t he bus. They also int roduced t he concept of
RISC
.
Reduced inst ruct ion set chips (RISC) do just a f ew t hings, but really f ast. This made
t his chip more ef f icient and set
it apart f rom t he older x86 chips. The DX2 of f ered 8 KB
of writ e
-
t hrough cache and t he DX4 of f ered 16 KB. This cache helps t he chip maint ain
it s one clock cycle per inst ruct ion given t hrough t he use of RISC.
It was split int o SX and DX versions. Bot h were
complet ely 32
-
bit, but t he SX lacks t he
mat h coprocessor. Nevert heless, t he SX version is roughly t wice as f ast as t he 386.
Act ually, t he mat h coprocessor in t he SX is t here, just disabled f or f inancial purposes.
The Pent ium (1993)
Int el brought t he PC t
o t he 64
-
bit level wit h t he Pent ium Processor in 1993. It has 3.3
million t ransist ors and perf orms at 100 million inst ruct ions per second (MIPS).
The Pent ium f amily includes t he 75/90/100/120/133/150/166/200/233 clock speeds. It
is compat ible wit h all of
t he older OS's including DOS, Windows 3.1, Unix, and OS/2.
It s superscalar design can execut e t wo inst ruct ions per clock cycle. The separat e
caches and t he pipelined f loat ing point unit increase it s perf ormance beyond t he x86
chips. It has SL power managem
ent f eat ures and has t he abilit y t o work as a t eam wit h
anot her Pent ium. The chip t alks over a 64
-
bit bus t o it s cards. It has 273 pins t hat
connect it t o t he mot herboard. Int ernally, t hough, it is really t wo 32
-
bit chips chained
t oget her t hat split t he wo
rk. The chip comes wit h 16 K of built in cache.
This chip, although fast, gets really hot. So, the use of a CPU fan is required with them. Intel has released more efficient
versions
of the chip that operate at 3.3 volts, rather than the usual 5 volts. Thi
s has reduced the heat some.
The processor has a burst mode t hat loads 256
-
bit chunks of dat a int o t he dat a cache
in a single clock cycle. It can t ransf er dat a t o t he memory at up t o 528 MB/Sec. Also,
Int el t ook it upon t hemselves t o hardwire several, hea
vily used commands int o t he
chip. This bypasses t he t ypical microcode library of commands. It also has a built in
self t est t hat t est s it self upon reset t ing.
The Pent ium Pro
This is a RISC chip with a 486 hardware emulator on it, running at 200 MHz or b
elow. Several
techniques are used by this chip to produce more performance than its predecessors; speed is achieved
by dividing processing into more stages, and more work is done within each clock cycle; three
instructions can be decoded in each one, as op
posed to two for the Pentium.
In additions, instruction decoding and execution are decoupled, which means that instructions can still
be executed if one pipeline stops (such as when one instruction is waiting for data from memory; the
Pentium would stop a
ll processing at this point). Instructions are sometimes executed out of order, that
is, not necessarily as written down in the program, but rather when information is available, although
they won't be much out of sequence; just enough to make things run s
moother.
It has a 8K cache
f or programs and dat a, but it will be a t wo chip set, wit h t he processor and a 256K
L2 cache
in t he same package. It is opt imized f or 32
-
bit code, so will run 16
-
bit cod
e no f ast er t han
a Pent ium, which is a big drag down. It ’s st ill a great processor f or servers, being it can
be in mult iprocessor syst ems wit h 4 processors, unlike t he newer Pent ium II (see below),
which can only be in dual CPU syst ems. Anot her good t hing
about t he Pent ium Pro is t hat
wit h t he use of a Pent ium 2 overdrive processor, you have all t he perks of a normal
Pent ium II, but t he L2 cache is f ull speed, and you get t he mult iprocessor support of t he
original Pent ium Pro.
The Pent ium II
The Pentium
II is kind of like the child of a Pentium MMX mother and the Pentium Pro Father. But
like real life, it doesn’t combine the best of it’s parents. It has an onboard L2 cache, but it runs at ½
speed, not at full speed. It can be used in Multiprocessor enviro
nments, but only in dual CPU areas.
Instead of the usual square package design, it comes in a Single Edge Contact (S.E.C.) cartridge. This
design offers higher performance through higher bus speeds. The core and the L2 cache are enclosed in
a plastic and m
etal cartridge, and this connects to the motherboard via a single edge connector instead
of a bunch of little pins typical of previous processors.
The Celeron
During t he t ime of t he Pent ium II wit h t he 100MHz Front side Bus, Int el f ound t hat it
could spli
t it's current market and sell lower cost and lower perf ormance chips and st ill
make a prof it. Wit h t his in mind, Int el creat ed t he f irst "Value" processor, named t he
Celeron. It was basically a Pent ium II, but didn't have t he half speed cache, in f act,
t h
e f irst Celeron's had no cache at all. This was f ound t o be a horrible mist ake, and
short ly Int el decided t o add 128KB of onchip L2 cache. Because t he cache was no
longer a limit ing f act or in overclocking, t hese 66MHz f ront side bus chips easily made it
up
t o t he 100MHz bus, a 50% overclock. Many hardcore t echs rushed out t o purchase
one of t hese chips, and overclock t hem madly because t he price / perf ormance rat io
was out st anding. Sooner or lat er, Int el dit ched t he PCB t he Celeron was on and creat ed
a
socke
t ed Celeron, similar t o it's Pent ium MMX socket ed chips.
The Pent ium
iii
This is basically a Pentium II, but with a newer MMX, which has 70 new instructions that increased the 3D performance,
and it run faster. It also had an electronic chip ID, which w
as supposed to be good, but the underworld of the Internet is
rebelling because that doesn’t allow for overclocking anymore. The reason they do that is to save money. They don’t have
to change the core design, which will cost money, to make a slower chip t
han they have technology for.
Around t he t ime of 600Mhz Pent ium iii, Int el t ook t he 512KB of half speed cache, and
made 256KB of f ull speed onchip cache wit h it. Just as wit h t he f irst Celeron, Int el
f ound t hat t he PCB was unnecessary, and moved t he chip i
nt o t he Socket 370 f ormat.
While doing t his, it also shrunk t he chip down t o t he 0.18micron process, as well as
adding SSE2 and a f ew t weaks t o t he L2 Cache st ruct ure.
Celeron II
Just as t he Pent ium iii was a Pent ium II wit h SSE and a f ew added f eat ures,
t he
Celeron II is simply a Celeron wit h a SSE, SSE2, and a f ew added f eat ures. In f act, t he
Celeron II is just like a Socket 370 Pent ium iii, but wit h half t he L2 cache t hat runs at
a slight ly slower speed. To be very t echnical, t he Celeron II is a Pent ium
iii. Int el
simply programs t he chip t o disable half of t he cache and make it slower, in order f or
t hem t o sell it in t he "Value" market and st ill make a prof it in t hat sect or.
Cooling
Back in t he 386 days, t here wasn't a need f or a special cooling sys
t em because t he chip was
slow and did not have many t ransist ors, t heref ore t he air f low f rom t he power supply was
enough t o cool t he chip. Today t hough, cooling is a very import ant issue. There are several
ways t o cool t he processor. Wit h t he release of t h
e 486, cooling became an issue. Wit h t he
slower 486's, it wasn't really a big deal, but wit h t he 486DX
-
66, cooling was an issue. This
clock
-
doubled chip got pret t y hot. From t hen on, chips ran f ast er and hot t er. All chips used
t oday require a special cooli
ng syst em. How much cooling depends on t he processor, t he case,
and t he t ype of cooling syst em you are using.
The t ype of processor is t he biggest variable in t he amount and t ype of cooling needed. For
example, t he Cyrix 6x86 is a nice Pent ium alt ernat ive
, but runs much hot t er t han t he Pent ium.
Run
-
of
-
t he
-
mill f ans could not keep it cool enough, so Cyrix had t o design a special heat sink
and f an t o keep t he chip cool.
Cooling Problems
A processor that is not cooled enough will show some strange errors.
Every processor has a safe range of temperatures that it can
handle. Once the temperature gets above that point, one will usually see random error messages. Many times, one will not susp
ect that
cooling is the problem because the error will seem to be comi
ng from another part. Common errors are system crashes, lockups, and
surprise reboots. It can also have program errors and memory errors along with many other things. Most cooling hardware is de
signed
for the AT case. The AT design is very poor when proces
sor cooling is concerned. An independent cooling system is required. In the
AT design, the processor is far away from the power supply. Also the fan blows out of the system instead of in, so there is n
ot much of
an air flow inside the case.
With the ATX d
esign, the processor was placed near a power supply that blows air directly over the chip. This helps quite a bit in
cooling, placing extra air flow over the processor. A CPU fan, though, is still recommended. With ATX, the air flow is more
conducive to co
oling. A case fan in the front pulls air in through the bezel. As heat rises, the power supply blows the air out the top,
rear of the case. If your case supports it, I would also recommend a case fan in the rear toward the top. Being that heat ris
es, you w
ant
the maximum air being blown out from the top of the case.
Heat Sinks
Heat sinks are used in many electronic devices for cooling, but for our purposes here, they are placed on processors to cool
them. The
operation is simple. It is a piece of metal, u
sually aluminum, with large fins protruding upward. This is placed on the chip. The fins, in
effect, increase the surface area of the chip's top, therefore allowing the heat to spread to more area. This reduces the hea
t. Then the air
flow from the fan cool
s the heat sink down. The larger and more pronounced the fins, the better the cooling will be. Some super fast
processors have truly huge heat sinks. We are also now seeing heat sinks on the video processors of our best video cards.
There are two types of
heat sinks. One is the passive heat sink. This type just sits there and has no moving parts. The other type, most
often used on processors, is the active heat sink. It is called active because it has a moving part, the fan. The heat sink i
s attached to
th
e processor in two ways. Some chips are shipped with heat sinks already glued on. Others are alone, and the heat sink must be
clipped on. In this case, a special chemical called heat sink compound is sometimes placed between the two for better heat tr
ansfe
r.
This is a white paste that is spread onto the processor. Very little is needed, just enough to fill in the gap of air that wo
uld be there
without the compound.
Guide to Slots, Sockets and Slockets
Here is a quick rundown of all the different sockets an
d slots for processors:
Socket 1
This is an old slot. It s f ound on 486 mot herboards and support s 486 chips, plus t he DX2, DX4
Overdrive. It cont ains 169 pins and operat es at 5 volt s. The only overdrive it will support is
t he DX4 Overdrive.
Socket 2
Thi
s Int el socket is a minor upgrade f rom t he Socket 1. It has 238 pins and is st ill 5 volt.
Alt hough it is st ill a 486 socket and support s all t he chips Socket 1 does, it has t he minor
addit ion of being able t o support a Pent ium OverDrive.
Socket 3
Anot her
Int el socket, cont aining 237 pins. It operat es at 5 volt s, but has t he added capabilit y
of operat ing at 3.3 volt s, swit chable wit h a jumper set t ing on t he mot herboard. It support s all
of t he Socket 2 processor wit h t he addit ion of t he 5x86. It is consider
ed t he lat est of t he 486
socket s.
Socket 4
We move int o Pent ium class machines wit h t he Socket 4, by Int el. This socket has 273 pins. It
operat es at a whopping 5 volt s. Due t o t his volt age, t his socket basically had no where t o go
but t he hist ory books.
It only support s t he low
-
end Pent ium 60
-
66 and t he Overdrive because
t hese chips are t he only Pent iums operat ing at 5 volt s. Beginning wit h t he Pent ium
-
75, Int el
moved t o t he 3.3 volt chip.
Socket 5
This socket operat es at 3.3 volt s wit h 320 pins. It sup
port s Pent ium class chips f rom 75MHz t o
133MHz. Newer chips will not f it because t hey need an ext ra pin. Socket 5 has been replaced
by t he more advanced Socket 7.
Socket 6
It is meant f or 486's. It is only a slight ly more advanced Socket 3 wit h 235 pins
and 3.3 volt
operat ion. This socket is f orgot t en. The market never moved t o use it because it came out
when 486's were already going of out st yle and manuf act urers couldn't see pumping money
int o changing t heir designs f or a 486.
Socket 7
Socket 7 is t he
most popular and widely used socket. It cont ains 321 pins and operat es in t he
2.5
-
3.3 volt range. It support s all Pent ium class chips, f rom 75MHz on up, MMX processors, t he
K5, K6, K6
-
2, K6
-
3, 6x86, M2 and M3, and Pent ium MMX Overdrives. This socket is t h
e indust ry
st andard and is being used f or sixt h
-
generat ion chips by IDT, AMD and Cyrix. Int el, however,
decided t o abandon t he socket f or it's sixt h
-
generat ion lineup. Socket 7 boards incorporat e t he
volt age regulat or which makes volt ages lower t han t he na
t ive 3.3 volt possible.
Socket 8
This is a high
-
end socket used primarily f or t he Pent ium Pro. It has 387 pins and operat es at
3.1/3.3 volt s. This socket only handles t he Pent ium Pro. It is designed especially t o handle t he
dual
-
cavit y st ruct ure of t he
chip. Since Int el decided t o move on t o Slot 1, t he Socket 8 is a
sort of dead end.
Slot 1
Int el complet ely changed t he scene wit h t his slot. It, inst ead of accept ing t he usual square
chip wit h pins on t he bot t om, t akes t he processor on a daught ercard. T
he daught ercard allows
f ast communicat ion bet ween t he processor and t he L2 cache, which lies on t he card it self. The
slot it self has 242 pins and operat es at 2.8
-
3.3 volt s. The Slot 1 is used mainly f or t he P2,P3
and Celeron, but Pent ium Pro users can use
t he slot by mount ing t heir processors in a socket 8
on a daught ercard which is t hen insert ed int o t he Slot 1. This gave Pent ium Pro users t he
abilit y t o upgrade.
Slot 2
A chip packaging design used in Int el Pent ium II chipset s, st art ing wit h t he Xeon CPU.
While
t he Slot 1 int erf ace f eat ures a 242
-
cont act connect or, Slot 2 uses a somewhat wider 330
-
cont act connect or. The biggest dif f erence bet ween Slot 1 and Slot 2, t hough, is t hat t he Slot 2
design allows t he CPU t o communicat e wit h t he L2 cache at t he CPU
's f ull clock speed. In
cont rast, Slot 1 only support s communicat ion bet ween t he L2 cache and CPU at half t he CPU's
clock speed.
Socket 370
Socket 370 is named f or t he number of pins t his cert ain socket has. Af t er Int el f ound a way t o
cheaply put t he cac
he of a CPU on t he die, it f ound t hat a separat e PCB f or t he processor was
cost ly and useless. Int el t hen t ook t he Chip of f of t he PCB, and creat ed Socket 370. It's
basically Socket 7 wit h an ext ra row of pins on all f our sides. The f irst processors t o use
it
were t he PPGA Celeron, t hen quickly f ollowing were t he FC
-
PGA Pent ium iii processors along
wit h t he Celeron II line. Socket 370 chips can be placed on a daughercard just like Socket 8
chips in order t o f it int o a Slot 1 Int erf ace. Socket 370 is also ma
de t o use previous Socket 7
heat sinks, alt hough most of t hem are t oo small t o cool t hese modern processors. This Socket is
used f or Pent ium iii, Celeron, and Celeron II processors.
Slot A
This is a new propriet ary slot design AMD decided t o use wit h t he
At hlon processor. Design
wise, it is similar t o t he Slot 1. But, Slot A uses a dif f erent prot ocol, called EV6. Using t his bus
prot ocol, which was creat ed by Digit al, AMD can increase t he RAM t o CPU dat a t ransf er t o
200MHz, giving us a 200MHz f ront side bus.
AMD had t o use t heir own Slot design since Int el
had ef f ect ively pat ent ed t he Slot 1 design so t hat AMD could not use it. Now, wit h t he At hlon
becoming more popular, more and more Slot A boards are coming out so t hat syst ems based on
t he At hlon are becomi
ng more common.
Socket A
Just as Int el f ound it's cheaper t o leave t he PCB of f of it's processors, AMD did t he same. It's
At hlon and Duron processors using t he .18 micron process bot h use Socket A. It support s t he
200MHz EV6 bus, as well as t he new 266MHz
EV6 bus. Unlike Socket 370, it requires a slight
modif icat ion a Socket 7 heat sink in order t o be used properly. Also, unlike Socket 370, t here is
no daughercard t hat provides Socket A chips t o be plugged int o Slot A int erf aces. Socket A also
of f ers many m
ore pins t han Socket 370, 462 in t ot al. Socket 370 chips can not plug int o Socket
A, and vise versa. This Socket is used f or bot h At hlon and Duron Processors.
Slockets
The slocket is a weird lit t le cont rapt ion. It s basically Slot 1 t o Socket 370 adapt er.
It comes in
ot her f lavors t oo. By doing a bunch of elect rical work
-
arounds, it is able t o successf ully
rerout e t he current s and make t he dif f erent int erf aces adapt. Some of t hem even have cut e
lit t le elect rical t ricks t hat allow t hings such as dual
-
proces
sor or overclocking despit e t he
clock
-
locking.
Overclocking Your Processor
Overclocking is going mainst ream, it seems, among end users. Almost all hardware web sit es
discuss t he subject, and most make it seem like it's easy, and t hat everyone does it. O
f
course, manuf act urers don't want you t o do it t o t heir processors. Some even clock
-
lock t heir
processors. But, on t he int ernet, it runs rampant. In many cases, t hough, you don't get t he
real st ory behind it. Of course, it can speed up t he syst em some, bu
t it has t he pot ent ial t o do
damage t o t he syst em.
So, yes, overclocking is a viable opt ion, if you know what you're doing, and you t hink t he
processor can handle it, but it should not be done by most people. It should not be done
especially on syst ems t h
at are very import ant in your daily operat ion. If you use your comput er
f or work or have import ant dat a on it, do not overclock.
Processor Voltage
In t oday's processors, volt age is a major concern. In t he pre
-
486DX66 days, everyt hing was 5
volt, so t here
was no quest ion on volt age, and nobody cared. Today, volt age concerns plague
every CPU buyer.
The more power t he CPU t akes, t he more heat it creat es. Heat is not good on t he
processor and can be a barrier in creat ing f ast processors. So, lowering t he vo
lt age in
f ast chips reduces t he heat.
Since lapt ops run on bat t eries, t he amount of power consumed by t he processor is a
large f act or. The lower t he volt age, t hen, t he longer t he bat t ery will last.
Many t imes, large companies are running hundreds of PC's
at one t ime, t heref ore t he
amount of power used is a concern.
Dual Volt age
Older CPU's t ypically ran at one volt age, 5 volt s. There was not hing else. As chips got f ast er
and power became a concern, designers dropped t he chip volt age down t o 3.3 volt s.
Then, as
chips got even f ast er and more powerf ul, t here was a need t o go below 3.3 volt s. So,
designers began t o incorporat e a split volt age design, or dual volt age. In t hese chips, t he
ext ernal I/O volt age is 3.3 volt s so t hat it is compat ible wit h ot her
mot herboards and t heir
component s. The core volt age, t hen, is lower, maybe 2.9 or 2.5 volt s. So, t he chip operat es at
2.5 volt s while it t alks t o t he mot herboard at 3.3 volt s. This keeps mot herboard designs t he
same. The only component s t hat changes is t h
e volt age regulat or t hat supplies t he correct
volt age t o t he CPU socket.
The core volt age is always changing wit h new processor designs. The new chips, f or t he most
part, use core volt ages below 3 volt. The Int el Pent iumMMX, Cyrix 6x86, and t he K6 use cor
e
volt ages of 2.8 or 2.9 volt s. The K6
-
233 is t he silly chip t hat operat es at a 3.2 volt core, way
beyond t he usual. The volt age regulat or convert s t he power t o t he correct core volt age f or t he
processor in t he socket. This is a reason f or t he list of proc
essors a mot herboard support s. The
volt age regulat or is designed t o supply only cert ain volt ages.
Volt ages of Specific Processors
Here is a table with the voltages of all common CPU's:
Processor
External Voltage
Core
Voltage
8088
5
5
8086
5
5
80286
5
5
80386S X (DX)
5
5
80486DX (S X)
5
5
I nt el 486DX2
5
5
A MD, C y ri x 486DX2
3.3
3.3
486DX4 (A l l mak es)
3.3
3.3
5x86 (A MD, C y ri x)
3.45
3.45
P ent i um 60, 66
5
5
P ent i um 75
-
200
3.3/3.52
3.3/3.52
P ent i um MMX
3.3
2.8
6x86
3.3
3.3
6x86L
3.3
2.8
A MD K5
3.52
3.52
P ent i um P ro
-
150
3.1
3.1
P ent i um P ro
-
166+
3.3
3.3
P ent i um I I (Kl amat h)
3.3
2.8
P ent i um I I Deschut es/ C el eron Mendoci no
3.3
2.0
P ent i um i i i (Kat mai )
3.3
2.0
P ent i um i i i (C oppermi ne)
3.3
1.65
-
1.75*
C el eron I I (C oppermi ne 128)
3.3
1.5
-
1.65*
A MD A t hl on (
S ock et A )
3.3
1.6
-
1.8*
A MD A t hl on (S l ot A )
3.3
1.6
-
1.8*
A MD Duron (S ock et A )
3.3
1.6
A MD K6
-
2+ / K6
-
I I I +
3.3
1.8
-
2.0*
A MD K6
-
I I I
3.3
2.2
A MD K6
-
2 w/ 3DN ow!
3.3
2.2
A MD K6 266/ 300
3.3
2.2
A MD K6
-
(166,200)
3.3
2.9
A MD K6
-
233
3.3
3.2
6x86L
3.3
2.8
6x86MX
3.
3
2.9
An Ast erisk (*) denot es t hat t he volt age of t he processor depends on t he int ernal reversion
and t he clock speed of t he chip. Please check t he chip it self in order t o f ind t he correct
volt age t o run it at.
Chapter 3
Motherboards
The motherboard is
the most important part of your computer. It is also one of the most compared, critiqued, and reviewed pieces of
hardware. Often, on the internet, you'll find reviews and debates over which board is best or which chipset is best. Sometime
s the
average rea
der gets left in the dust. We will try to explain what's going on and what it all means.
Chipsets
The mot herboard is generally t hought t o be t he most import ant part of a comput er. And yes, it
is. However, t he chipset on t he mot herboard is t he most impor
t ant part of t he board it self as it
def ines almost everyt hing about t he syst em. We have said t hat t he CPU is t he brain, t he BIOS
is t he nervous syst em. Well, t he chipset is like t he heart.
The chipset cont rols t he syst em and it s capabilit ies. It is t he hu
b of all dat a t ransf er. It is a
series of chips on t he mot herboard, easily ident if ied as t he largest chips on t he board wit h t he
except ion of t he CPU. Chip set s are int egrat ed, meaning t hey are soldered ont o t he board and
are not upgradable wit hout buying
a whole new mot herboard.
All dat a must go t hrough t he chipset. All component s t alk t o t he CPU t hrough t he chipset. To
make order out of all t his dat a, t he chipset makes use of t he DMA cont roller and t he bus
cont roller.
Since chipset s are so import ant and
have t o know how t o communicat e wit h all component s,
t hey must be designed f or your conf igurat ion and CPU. The chipset maker needs t o keep up
wit h BIOS and memory makers, since all of t hese part s work t oget her and t he chipset is t he
hub of it all.
A chi
pset is designed by t he manuf act urer t o work wit h a specif ic set of processors. Most
chipset s only support one generat ion of processors: most chipset s are geared specif ically f or
486 t ype syst ems, Pent ium class syst ems, or Pent ium Pro / Pent ium II syst ems.
Why make it
complicat ed like t hat? Well, t he reason is simple. The design of t he cont rol circuit ry must be
dif f erent f or each processor generat ion due t o t he dif f erent ways t hey employ cache, access
memory, et c. For example, t he Pent ium Pro and Pent ium II
have level 2 cache wit hin t he CPU
it self, so obviously t hey would need a dif f erent circuit ry design t han t he Pent ium, which has
level 2 cache on t he mot herboard.
Most mot herboards t hat support Int el Pent ium processors also support t heir equivalent s f rom
AMD and Cyrix. In f act, usually, t hese chips inst all just as an Int el chip, ot her t han t he f act
t hat you may need t o set dif f erent jumper set t ings f or bus speed or volt ages. I must not e
here, t hough, t hat t he dif f erent volt ages of t he CPU's and whet her t he
board will support it is
not a f unct ion of t he chipset, but of t he volt age regulat or. But, since Int el is t he largest
manuf act urer of Pent ium
-
class and higher chipset s, AMD and Cyrix are at a disadvant age. AMD
has evened out t he f ield a t ad wit h t heir AMD
-
640 chipset, aimed at opt imizing t he
perf ormance of AMD's K6. But also, companies such as Via and ALI are producing Super 7
chipset s aimed at non
-
Int el processors.
Processor Speed Support
Fast er processors require chipset s capable of handling t hem. The
specif icat ion of t he processor
speed is done using t wo paramet ers: t he memory bus speed and t he processor mult iplier. The
memory bus speed is t he processor's "ext ernal" speed, t he speed it t alks t o t he rest of t he
comput er at. The memory bus speed also (n
ormally) dict at es t he speed of t he PCI local bus,
which in most mot herboards runs at half t he memory bus speed. Typical modern bus speeds
are 50, 60, 66 and 75 MHz. Fast er syst ems use 83MHz or even 100MHz bus speeds. The
mult iplier represent s t he number by
which t he memory bus speed must be mult iplied t o obt ain
t he processor speed. Mult ipliers on modern PCs are normally 1.5x, 2x, 2.5x, 3x, 3.5x, or 4x,
t hough f ast er processors will event ually increase t his.
The chipset runs at t he speed of t he mot herboard
bus, usually 66MHz in most syst ems. Wit h
chipset s such as Int el's 440BX, Via's MVP3, and ALI's Aladdin V, many newer PC's are pushing
100MHz bus speeds. This part icularly helps t he perf ormance of Super 7 syst ems because t he L2
cache runs at t he speed of t h
e mot herboard. This doubles L2 cache speeds. Wit h Pent ium II's,
t he L2 cache is already running at 1/2 t he speed of t he processor, so increasing t he bus t o
100MHz won't help out as much.
The range of t he processor speeds support ed by t he chipset is indica
t ed, generally, by looking
at t he range of support ed memory bus speeds and mult ipliers. A t ypical Pent ium chipset will
support bus speeds of 50 t o 66 MHz wit h a mult iplier range of 1.5x t o 3.0x. This yields speeds
of 75, 90, 100, 120, 133, 150, 166 and 200
MHz.
Mult iple Processor Support
Some chipset s support t he abilit y t o make mot herboards t hat support t wo or f our processors.
The chipset circuit ry coordinat es t he act ivit ies of t he processors so t hat t hey don't int erf ere
wit h each ot her, and works wit h
t he operat ing syst em sof t ware t o share t he load bet ween
t hem. The st andard f or mult iprocessing in Pent ium and Pent ium Pro PCs is Int el's SMP
(symmet ric mult iprocessing). It only works wit h Int el processors. Of course, I should make
not e t hat, in order t o s
uccessf ully have a mult i
-
processor syst em, t hat much more t han a
support ing chipset is needed. You must have compat ible CPU's and a support ing OS.
Most modern comput ers use t hree bus t ypes: t he ISA bus f or slower, older peripherals, t he PCI
bus, and t he
AGP Bus.
The chipset cont rols t hese buses. It t ransf ers inf ormat ion t o and f rom t hem and t he processor
and memory. The chipset's capabilit ies in t his area det ermine what kinds of buses t he syst em
support s and how f ast t hey can get. For t his reason, Int el
calls it s chipset s "PCIset s". Most
modern PCs support t he ISA and PCI buses, but older chipset s support t he VESA Local Bus
inst ead of PCI.
Bus Bridges
A "bridge" is a net working t erm t hat ref ers t o a piece of hardware t hat connect s t wo dissimilar
net wor
ks and passes inf ormat ion f rom t he comput ers on one net work t o t hose on t he ot her,
and vice
-
versa. In t his way, t he chipset must use bus bridges t o connect t oget her t he dif f erent
bus t ypes it cont rols. The most common of t hese is t he PCI
-
ISA bridge, which
is used t o
connect t oget her devices on t hese t wo dif f erent buses.
IDE/ATA Hard Disk Cont roller
Almost all mot herboards now have support f or f our IDE (ATA) hard disks int egrat ed int o t hem,
t wo on each of t wo channels. Int egrat ing t his support makes sense
f or a number of reasons,
among t hem t he f act t hat t hese drives are on t he PCI bus, so t his saves an expansion slot and
reduces cost. The dat a t ransf er rat e of IDE drives is based on t heir using programmed I/O
(PIO) modes, and use of t he f ast est of t hese m
odes depends on support f rom t he PCI bus and
chipset. The abilit y t o set a dif f erent PIO mode f or each of t he t wo devices on a single IDE
channel, called
independent device timing
, is also a f unct ion of t he chipset. Wit hout t his
f eat ure, bot h devices must
run at t he speed of t he slowest drive.
More recent ly, ATA
-
33 drives have become t he t hing t o have. These enhanced IDE drives are
appealing mainly because of t heir at t ract ive price. Earlier chipset s only support ed PIO modes,
which required CPU involvement i
n every hard drive access. This isn't good when t rying t o
mult i
-
t ask. ATA
-
33 drives use DMA t o work wit hout CPU int ervent ion. This allows speeds of up
t o 33MBps. The concept of DMA is descibed below.
DMA Mode Support and Bus Mast ering
Direct memory acce
ss (DMA) provides a way f or devices t o t ransf er inf ormat ion direct ly t o and
f rom memory, wit hout t he processor's int ervent ion. It is st ill used by many devices, alt hough
newer t ransf er modes are now used f or high
-
perf ormance devices like hard disks. DMA is
cont rolled by t he chipset's DMA cont roller, and t he newer t he cont roller, t he more DMA modes
it s support s.
Bus mastering
is an enhancement of DMA whereby t he remot e device not only can send dat a t o
t he memory direct ly, it act ually t akes cont rol of t he bu
s, and perf orms t he t ransf er it self
inst ead of using t he DMA cont roller. This cut s down on t he overhead of having t he slow DMA
cont roller t alk t o t he device doing t he t ransf er, f urt her improving perf ormance. Bus mast ering
support is provided by t he chipset
.
USB & AGP Support
USB (Universal Serial Bus) is a new t echnology int ended t o replace t he current port s used f or
keyboards and mice. It is st ill unclear as t o whet her t his st andard will cat ch on and become
popular. USB has been around f or a while now,
alt hough it is st ill rat her rare t o see in act ion.
Despit e t his, most modern chipset s support USB.
AGP is anot her high
-
speed bus used f or graphics cards. This bus must be support ed by t he
chipset. The Int el 440LX used t o be t he only chipset t he support ed
it, but since t hen, many
more have emerged, including many not made by Int el.
Plug and Play
Plug and Play (PnP) is a specif icat ion t hat uses t echnology enhancement s in hardware, BIOSes
and operat ing syst ems, t o enable support ed devices t o have t heir sys
t em resource usage set
aut omat ically. Int ended t o help make inst allat ion easier by eliminat ing some of t he problems
wit h get t ing peripheral devices t o work t oget her, PnP requires support f rom t he chipset as
well.
Chipset s of f er support f or power manageme
nt on t he comput er. Most recent chipset s support a
group of f eat ures t hat reduce t he amount of power used by t he PC during idle periods. These
t ypes of f eat ures are deemed import ant f or a f ew reasons. First, many get concerned over t he
amount of power cons
umed by PC's when t hey are lef t on f or long periods of t ime. Secondly,
wit h t he use of lapt ops, many are concerned about t he lif e of t heir bat t ery.
Power management works t hrough a number of BIOS set t ings t hat t ell t he comput er when t o
shut down various p
ieces of hardware when it becomes idle. While, in t heory, t his is a good
idea, it does somet imes get in t he way. One example is t hat all
-
t oo
-
common wait t ime when
ret urning t o t he comput er t o wait f or t he hard drive t o power up. Somet imes, t he hard drive
w
ill power down t oo soon, and when you come back, you have t o wait a f ew seconds f or t he
drive t o power up again.
There are a number of t erms commonly heard in relat ion t o t hese power management f eat ures.
Energy Star
is a program st art ed by t he EPA t o bran
d PCs t hat are considered energy ef f icient
and incorporat e power management. Most modern PCs are Energy St ar compliant, and display
it s logo on t he t op of t he screen when t he BIOS boot s up.
Advanced Power Management
or APM
is t he name given t o t he componen
t in some operat ing syst ems (such as Windows 95) t hat
works wit h t he BIOS t o cont rol t he power management f eat ures of t he PC. APM allows you t o
set paramet ers in t he operat ing syst em t o cont rol when various power management f eat ures
will be act ivat ed.
Syst
em Management Mode
or SMM is a power
-
reduct ion st andard f or
processors. This allows t hem t o aut omat ically and great ly reduce power consumpt ion.
One of t he biggest issues wit h chipset s is what t ypes of memory t hey will support as well as
how much.
When p
urchasing a chipset, make sure you get one wit h support f or SDRAM. Wit h t his, 66MHz
is f ine f or most applicat ions, but wit h t he prices f or 100MHz chipset s coming down so much,
opt f or a chipset support ing t he 100MHz f ront side bus. You'll see t he dif f erence
in many
aspect s of t he comput er's use, especially t he more involved AGP
-
enabled applicat ions.
One needs t o pay at t ent ion t o how t he memory is support ed. A chipset can support a cert ain
amount of memory as well as is able t o cache a cert ain port ion of it.
This means t hat a cert ain
amount of t he main syst em memory will be cached by t he L2 cache, increasing perf ormance.
One of t he more f amous horror st ories is t he 430TX chipset by Int el. Alt hough it could support
up t o 256MB of SDRAM, it could only cache t he
f irst 64MB of it. This meant t hat wit h memory
amount s over 64MB, you were probably degrading t he syst em's perf ormance by quit e a bit.
Because Windows 95 loads it self int o t he higher memory areas, leaving t he lower areas f ree f or
DOS compat ibilit y, t his me
ant t hat t he OS and all syst em
-
crit ical applicat ions were being
hampered by t he crappy cache support.
When purchasing a chipset, make sure it can address 1MB or 2MB of L2 cache. Some come wit h
512K, which is adequat e, but don't consider 256K or lower. The
higher t he L2 cache, t he more
memory t he chipset is likely t o be able t o cache.
The chipset market will have t o evolve along wit h memory enhancement s. Minor t weaks t o
SDRAM, such as Double Dat e Rat e(DDR) SDRAM will ext end t he lif e of SDRAM, but will incl
ude
some t weaks t o t he chipset support. Int el's event ual move t o Rambus DRAM, or
RDRAM
, will
change everyt hing. While SDRAM delivers dat a st eadily at 66MHz of 100MHz, RDRAM will use
an 8
-
bit int erf ace and f ire dat a of f at 800MHz. Because of t he close int eg
rat ion of t he chipset
t o t he memory subsyst em, t he move t o RDRAM will require drast ic changes t o chipset design.
CMOS Backup Battery
The bat t ery in a PC is of t en one of t he most f orgot t en part s of t he comput er. It is quit e
import ant, t oo. It is what ho
lds all of your CMOS set t ings while your comput er is of f. Wit hout
it, you would have t o re
-
program your CMOS each and every t ime you t urned on your PC.
Expansion Cards
Expansion cards are t he small print ed circuit boards t hat you plug int o o
ne of t he slot s on your
mot herboard t hat make your comput er do "neat" t hings. They are video cards, sound cards,
modems, image capt ure cards, et c.
Expansion cards are one of t he simplest pieces of comput er hardware. You purchase your card
of choice, st ick
it in t he slot, and load t he new drivers.
Motherboard Slots
In order t o inst all an expansion card, you must st ick it int o one of t hose slot s on your
mot herboard. t here are dif f erent t ypes of slot s, alt hough only a f ew are st ill used t oday. Let's
look t hem
over.
Industry Standard Architecture (ISA):
This t ype of slot is t he oldest st ill in use
t oday. If you open up an old 286, you'll see a f ew of t hese. An 8
-
bit ISA slot is capable
of 0.625MB/sec t ransf er rat e bet ween t he card and t he mot herboard. Lat er v
ersions of
t his slot were 16
-
bit, capable of 2MB/sec. This is st ill slow compared t o t oday's
st andards, but cards such as modems do not require anyt hing f ast er t han t his. If you
look at your mot herboard's slot s, t he longer black ones are t he ISAs. If t hey
are all
one size, t hey are all ISAs. Modern boards are no boast ing any more t han maybe t wo
of t hese bad
-
boys, only because people only use t hem f or t heir modems or older cards
t hat haven't yet replaced.
Enhanced Industry Standard Architecture (EISA):
Thi
s t ype of slot is not used
very of t en in deskt op machines. It is used mainly in
servers
, or comput ers t hat host
net works. Wit h such a comput er, t he demands placed on it s component s are t oo big f or
ISA t o handle. Also, t he EISA bus is capable of
bus masteri
ng
, which allows
component s at t ached t o t he bus t o t alk t o each ot her wit hout bot hering t he CPU. This
f eat ure is much like SCSI and speeds up t he comput er quit e well.
Micro Channel Architecture (MCA):
Not t oo common eit her, t his bus was creat ed by
IBM. I
t is 32
-
bit, like EISA, but you can't st ick ISA cards int o it. MCA was capable of
bus mast ering, plus it could look at ot her devices plugged int o it and ident if y t hem,
leading t o aut omat ic conf igurat ion. MCA also produced less elect rical int erf erence,
redu
cing errors. MCA is hist ory. Don't get it. Nobody uses it.
Video Electronics Standard Association (VESA):
This is a very f ast int erf ace made
up mainly f or f ast new video cards. All of t hose f ancy videos and graphics require
much speed. The VESA
-
Local Bus
, or VL
-
Bus, is connect ed st raight t o t he CPU's own
int ernal bus, hence t he name "local". This bus can t ransf er dat a at 132MB/sec. VESA
buses are basically an ISA slot wit h an ext ra slot on t he end. The whole t hing is about
4 inches longer t han an ISA slot
. Again, you don't see t hese much anymore.
Peripheral Component Interconnect (PCI):
This is t he ot her very f ast bus
developed by Int el. It is dif f erent t han t he VL
-
Bus except t hat it runs at t he same
speed. There is a f ast int erf ace unit bet ween t he card
and t he CPU t hat does t he
t alking. This unit made t he bus independent of t he CPU, a drawback on t he VL
-
Bus,
which was limit ed t o t he 486. Also, you can plug cards int o it wit hout any conf iguring.
The bus is self
-
conf iguring, leading t o t he
plug
-
n
-
play
con
cept in which each add
-
on
card cont ains inf ormat ion about it self t hat t he processor can use t o aut omat ically
conf igure t he card. This slot is most popular t oday wit h Pent ium and lat er machines,
alt hough occasionally you will see one on a 486. If you're any
t hing like me, you never
have enough PCI slot s.
Personal Computer Memory Card International Association (PCMCIA):
This is a
special socket in which you can plug removable credit
-
card size devices. These circuit
cards can cont ain ext ra memory, hard drives
, modems, net work adapt ers, sound cards,
et c. Most ly, PCMCIA cards are used f or lapt ops, but many PC vendors have added
PCMCIA socket s t o t heir deskt op machines. The socket uses a 68 pin int erf ace t o
connect t o t he mot herboard or t o t he syst em's expansion
bus.
There are t hree t ypes of PC cards: Type 1 slot s are 3.3mm t hick and hold it ems such
as RAM and f lash memory. Type 1 slot s are most of t en seen in palmt op machines or
ot her handheld devices. Type 2 is 5mm t hick and I/O capable. These are used f or I/O
de
vices such as modems and net work adapt ers. Type 3 is 10.5mm t hick and used
mainly f or add
-
on hard drives. When buying PC Card equipment, you must consider t he
size of t he slot. In most cases, Type 3 can handle Type 2 and Type 1.
Accelerated Graphics Port
(AGP):
The newest t ype of bus slot creat ed f or t he high demands
of 3D graphical sof t ware. Since AGP is a hot t opic and t here is indeed much t o know.
Chapter 4
Memory
Your comput er's memory is necessary f or it s operat ion. It is f ully t ied in t o t he proc
essor,
chipset, mot herboard, and cache.
Memory Types
There are several dif f erent t echnologies available t oday when it comes t o memory. No longer
can you just buy a SIMM and st ick it in. There are many t ypes available. Let's discuss t hem
here. bec
ause the
user cannot disrupt the information.
There are dif f erent t ypes of ROM, t oo:
Programmable ROM(PROM)
. This is basically
a blank ROM chip t hat can be writ t en
t o, but only once. It is much like a CD
-
R drive t hat burns t he dat a int o t he CD. Some
companies use special machinery t o writ e PROMs f or special purposes.
Erasable Programmable ROM (EPROM)
. This is just like PROM, exc
ept t hat you can
erase t he ROM by shining a special ult ra
-
violet light int o a sensor at op t he ROM chip
f or a cert ain amount of t ime. Doing t his wipes t he dat a out, allowing it t o be rewrit t en.
Electrically Erasable Programmable ROM (EEPROM)
. Also called f
lash BIOS. This
ROM can be rewrit t en t hrough t he use of a special sof t ware program. Flash BIOS
operat es t his way, allowing users t o upgrade t heir BIOS.
ROM is slower t han RAM, which is why some t ry t o shadow it t o increase speed.
RAM
Random Access Memor
y (RAM) is what most of us t hink of when we hear t he word memory
associat ed wit h comput ers. It is volat ile memory, meaning all dat a is lost when power is t urned
of f. The RAM is used f or t emporary st orage of program dat a, allowing perf ormance t o be
opt imum.
Like ROM, t here are dif f erent t ypes of RAM:
Static RAM (SRAM)
. This RAM will maint ain it's dat a as long as power is provided t o
t he memory chips. It does not need t o be re
-
writ t en periodically. In f act, t he only t ime
t he dat a on t he memory is ref reshed o
r changed is when an act ual writ e command is
execut ed. SRAM is very f ast, but is much more expensive t han DRAM. SRAM is of t en
used as cache memory due t o it s speed.
There are a f ew t ypes of SRAM:
Async SRAM
. An older t ype of SRAM used in many PC's f or
L2
cache
. It is
asynchronous, meaning t hat it works independent ly of t he syst em clock. This
means t hat t he CPU f ound it self wait ing f or inf o f rom t he L2 cache.
Sync SRAM
. This t ype of SRAM is synchronous, meaning it is synchronized
wit h t he syst em clock. Wh
ile t his speeds it up, it makes it rat her expensive at
t he same t ime.
Pipeline Burst SRAM
. Commonly used. SRAM request s are
pipelined
, meaning
larger packet s of dat a re sent t o t he memory at once, and act ed on very
quickly. This breed of SRAM can operat e
at
bus
speeds higher t han 66MHz, so
is of t en used.
Dynamic RAM (DRAM)
. DRAM, unlike SRAM, must be cont inually re
-
writ t en in order
f or it t o maint ain it s dat a. This is done by placing t he memory on a ref resh circuit t hat
re
-
writ es t he dat a several hundred
t ime per second. DRAM is used f or most syst em
memory because it is cheap and small.
There are several t ypes of DRAM, complicat ing t he memory scene even more:
Fast Page Mode DRAM (FPM DRAM)
. FPM DRAM is only slight ly f ast er t han
regular DRAM. Bef ore t here
was EDO RAM, FPM RAM was t he main t ype used in
PC's. It is pret t y slow st uf f, wit h an access t ime of 120 ns. It was event ually
t weaked t o 60 ns, but FPM was st ill t oo slow t o work on t he 66MHz syst em bus.
For t his reason, FPM RAM was replaced by EDO RAM.
FPM RAM is not much
used t oday due t o it s slow speed, but is almost universally support ed.
Extended Data Out DRAM (EDO DRAM)
. EDO memory incorporat es yet
anot her t weak in t he met hod of access. It allows one access t o begin while
anot her is being complet ed
. While t his might sound ingenious, t he perf ormance
increase over FPM DRAM is only around 30%. EDO DRAM must be properly
support ed by t he chipset. EDO RAM comes on a SIMM. EDO RAM cannot operat e
on a bus speed f ast er t han 66MHz, so, wit h t he increasing use
of higher bus
speeds, EDO RAM has t aken t he pat h of FPM RAM.
Burst EDO DRAM (BEDO DRAM)
. Original EDO RAM was t oo slow f or t he
newer syst ems coming out at t he t ime. Theref ore, a new met hod of memory
access had t o be developed t o speed up t he memory. Burs
t ing was t he met hod
devised. This means t hat larger blocks of dat a were sent t o t he memory at a
t ime, and each "block" of dat a not only carried t he memory address of t he
immediat e page, but inf o on t he next several pages. Theref ore, t he next f ew
accesses w
ould not experience any delays due t o t he preceding memory
request s. This t echnology increases EDO RAM speed up t o around 10 ns, but it
did not give it t he abilit y t o operat e st ably at bus speeds over 66MHz. BEDO
RAM was an ef f ort t o make EDO RAM compet e w
it h SDRAM.
Synchronous DRAM (SDRAM)
. SDRAM is really t he new st andard f or PC
memory. It s speed is synchronous, meaning t hat it is direct ly dependent on t he
clock speed of t he ent ire syst em. St andard SDRAM can handle higher bus
speeds. In t heory, it can op
erat e at up t o 100MHz, alt hough it has been f ound
t hat higher qualit y
DIMM
s must be used f or st able operat ion at such speeds.
Hence
PC100 SDRAM
. Alt hough SDRAM is f ast er, t he speed dif f erence isn't
not iced by many users due t o t he f act t hat t he syst em cach
e masks it. Also,
many users are working on a relat ively slow 66MHz bus speed, which doesn't
use t he SDRAM t o is f ull capacit y. Using 100MHz chipset s, like t he BX and
ot her more modern chipset s, you can easily run your PC100 SDRAM at f ull
speed. Wit h some
newer chipset s by Via and ot hers, we now have PC
-
133 as
well.
RAMBus DRAM (RDRAM)
. Developed by Rambus, Inc. and endorsed by Int el
as t he chosen successor t o SDRAM. RDRAM narrows t he memory bus t o 16
-
bit
and runs at up t o 800 MHz. Since t his narrow bus t a
kes up less space on t he
board, syst ems can get more speed by running mult iple channels in parallel.
Despit e t he speed, RDRAM has had a t ough t ime t aking of f in t he market
because of compat ibilit y and t iming issues. Heat is also an issue, but RDRAM
has hea
t sinks t o dissipat e t his. Cost is a major issue wit h RDRAM, wit h
manuf act urers needing t o make major f acilit y changes t o make it and t he
product cost t o consumers being t oo high f or people t o swallow.
DDR
-
SDRAM
. This t ype of memory is t he nat ural evolut io
n f rom SDRAM and most
manuf act urers pref er t his t o Rambus because not much needs t o be changes t o make it. Also,
memory makers are f ree t o manuf act ure it because it is an open st andard, whereas t hey would
have t o pay license f ees t o Rambus, Inc. in order m
ake RDRAM. DDR st ands f or Double Dat a
Rat e. PC
-
100 and PC
-
200 DDR
-
SDRAM bot h use t he 100 MHz bus speed, but DDR shuf f les dat a
over t he bus over bot h t he rise and f all of t he clock cycle, ef f ect ively doubling t he speed. Of
course, chipset support is necessa
ry, but Via, ALi, and Micron have already decided t hey will
support DDR
-
SDRAM in t heir chipset s rat her t han RDRAM.
SDRAM Considerations
SDRAM is t he new st andard in PC memory, t he next st ep beyond t he now ancient EDO RAM.
But, in buying SDRAM f or you sys
t em, t here is some inf ormat ion you must consider.
Speed
SDRAM chips are generally named in t wo dif f erent ways. The most common way is t he
nanosecond rat ing. It is said t o have a "10 nanosecond" rat ing, which is t he common speed f or
SDRAM. The second me
t hod is t he MHz rat ing, like "100 Mhz".
SDRAM is synchronous, meaning it is t ied int o t he bus speed of t he syst em. This means t hat
t he memory must be f ast enough t o work on t he syst em you int end t o put it on. Unlike older
memory
t hat used
wait states
t o c
ompensat e f or slowness, SDRAM does not use wait st at es. The
memory, t hen, must be f ast enough f or t he syst em, t aking slack int o account.
It is really t his reason why SDRAM was creat ed in t he f irst place: t o make memory t hat could
keep up wit h t he syst em.
For older syst ems, EDO RAM does just f ine. At t he 66MHz speed, EDO
is a dream, as t hat is what it was really designed f or. It was soon f ound t hat EDO RAM worked
just f ine at even higher speeds, such as 75MHz or 83MHz. SDRAM was designed mainly t o
operat e w
it h st abilit y at bus speeds such as 100MHz. The problem wit h t his is t hat, unt il more
recent ly, we really had very f ew mot herboards t hat could make it t o 100MHz. Theref ore, how
are we t o know t hat t hat expensive SDRAM will really do it?
In modern syst ems,
f ast er bus speeds are t he norm. EDO RAM will not work wit h st abilit y, if at
all, in t hese syst ems. SDRAM is t hus used. PC
-
100 SDRAM is used in 100MHz syst ems. Newer
PC
-
133 is used on 133MHz syst ems.
2
-
clock vs. 4
-
Clock
Two types of SDRAMs are the 2
-
clo
ck t hat holds inf ormat ion about t he
SDRAM module, such as speed set t ings.
The mot herboard t hen queries t his chip f or inf o and
makes changes in t he set t ings t o work wit h t he SDRAM. Basically, t his allows t he SDRAM
module and t he chipset t o communicat e, making t he SDRAM more reliable on a larger number
of mot herboards. Some mot her
boards require t his f eat ure. You will have t o look at t he manual,
once again. If your board requires it, make sure you have it, because SDRAM wit hout t his
won't work.
When choosing SDRAM f or your comput er, you need t o know your mot herboard and get exact ly
t he t ype it requires
SDRAM, PC100, PC133, and DDR
PC
-
100
We all know t hat, when it comes t o memory, t hat SDRAM is t he way t o go. It is f ast er t han
EDO RAM, and support s higher bus speeds. EDO RAM is moving int o t he older syst ems, mainly,
while even t he
bargain PC's make t he move t o SDRAM.
But, t he world of SDRAM is not cut and dry. St andard SDRAM is great f or "older" boards. Now,
wit h t he release of BX mot herboards, and t he Super 7 boards, st andard SDRAM begins t o cause
problems. Why? Because even t houg
h it was originally said t hat SDRAM could go up t o 100MHz,
it really couldn't. In f act, some SDRAM even got unst able at t he 83MHz bus speed.
Ent er PC100. Basically, PC100 is SDRAM which meet s a cert ain specif icat ion t o work wit h
st abilit y at 100MHz. This
SDRAM usually operat es at 10ns, alt hough some is creat ed t hat is
f ast er. Since t he only qualif icat ion f or PC100 is t he abilit y t o operat e at 100MHz, t here is no
rule as t o t he access t ime. 10ns is t he minimum speed f or st abilit y at 100MHz. some companies
a
dvert ise PC100 f ast er t han t his, say 6ns, but, a lot of t imes you will f ind t his t o be
inaccurat e.
All PC100 is not equal. While it all operat es at 100MHz, when you get int o higher bus speeds
t han t hat, t he high
-
qualit y st uf f st art s t o st and out. The reas
on is t hat t he lat ency rat ing of
t he higher qualit y st uf f is lower. The lat ency is a measurement of how long it t akes f or ot her
hardware t o ret urn dat a t o t he RAM. The lower t he lat ency rat ing, t he bet t er t he chip, and t he
f ast er it will operat e.
The most
common, and cheaper, t ype of SDRAM chip uses GL or G8 chips. The "GL" or "G8" will
be seen on t he act ual SDRAM chips on t he memory circuit board, so you will know what you're
looking at. The GL's use a CAS lat ency of 3, which is pret t y st andard. The bet t e
r st uf f uses
"GH" chips, which uses a CAS lat ency of 2.
To operat e at 100MHz or 112MHz bus speeds, almost any of t his PC100 will work. But, bump it
up t o 133MHz, and you'll need t o get t he bet t er GH SDRAM wit h a CAS lat ency of 2. Only wit h
t his will you g
et st able operat ion at such high
front side bus
speeds.
Along wit h high qualit y PC100, one must t ake not ice of t he print ed circuit board on which t he
chips are mount ed. The qualit y of t hese boards, f or t he most part, is measured in t he amount
of layers. Y
ou can equat e t his t o t hickness. Obviously, t he t hicker t he mat erial of t he board,
t he less chance you have of damaging it, t he longer it will last, and t he less elect rical noise
you will get. so, t he more layers t he bet t er. Run
-
of
-
t he
-
mill, cheap SDRAM of
t en used good
qualit y chips, but t he manuf act urer would cut corners by using lower qualit y PCB's(Print ed
Circuit Boards). Of t en t hey would use 4
-
layer PCB's. Well, part of t he PC100 spec is a minimum
of 6
-
layer PCB. t his ensures a higher level of qualit y.
But, some manuf act urers use even bet t er
PCB's, such as 8
-
layer. Pay at t ent ion t o t his. The more layers t he bet t er.
So, if you f ind yourself buying a Super 7 or BX mot herboard, you should pick up some PC100
SDRAM wit h it. The older st uf f will work, but, wi
t hout PC100, you are st uck wit h your new
board's slower bus speeds.
PC
-
133
Basically, PC133 SDRAM is anot her implement at ion of t he same old SDRAM. It's basically t he
same SDRAM f rom t he days of t he LX Chipset, t he Pent ium II 333MHz processor, and t he
66M
Hz bus. The only dif f erence bet ween PC133 SDRAM and t he ot hers, is t hat t he PC133 has a
lower
latency
t han PC100 and PC66 SDRAM, which means it can run on a f ast er bus.
If you don't already know, PC133 SDRAM can run st ably on a 133MHz bus, just as PC100 r
an
st able on t he 100MHz bus, and PC66 ran st able on t he 66MHz bus. PC133 SDRAM increases t he
t ot al bandwidt h available t o t he processor f rom t he memory, because it runs f ast er. That is
because it raises t he speed limit, so t o speak, on t he road bet ween t he
Processor and t he RAM.
Somet imes it easy t o t hink of t he lines dat a moves bet ween t wo comput er component s as
roads. The road bet ween t he SDRAM and a current processor, like t he Pent ium III, is 64bit,
which can be t hought of as a 64 lane highway. Wit h old
er PC100 SDRAM, t he speed limit on
t hat road was 100MHz, which means t hat during a second, 100 million bit s moved t hough each
lane on t he highway. That's 6.4 Billion bit s, and as we all know, 8 bit s = 1 byt e. That means,
t hat wit h older PC100 SDRAM, t he pr
ocessor could get a maximum of 800MB per second. Wit h
PC133 SDRAM, t he speed of t hat road is increased t o 133.33 million bit s on each lane per
second. That t ranslat es int o 8.533 bill bit s. Using t he same mat h above, t hat means t he
processor could get a max
imum of 1060 MB per second (1.06GB) f rom t he SDRAM.
More dat a, of course, means bet t er perf ormance. Your games will run f ast er, business
applicat ions load f ast er, and even Windows boot s f ast er. Only problem is t hat t he perf ormance
increase isn't all t hat
much, and most of t he t ime it's hardly not iceable. Possibly wit h new
t ypes of SDRAM which are t rying t o compet e wit h
RamBus RAM
, users will see a much higher
perf ormance increase.
Double Data Rate
Well, t here really is not much t o say on t his t opic, becau
se t he t opic is rat her cut and dried.
DDR RAM is Normal SDRAM t hat sends dat a bot h on t he rising of t he clock cycle, and t he
f alling of t he clock cycle.
Twice t he sending of t he dat a, t wice t he dat a sent. Where st andard 100 MHz SDRAM has an
est imat ed 800
MB/sec dat a t ransf er rat e f or a t heoret ical maximum, DDR is, not surprisingly,
t wice t hat. No, we don’t act ually see t hat much bandwidt h, but t hat is t heir t heoret ical
maximum (64 bit X 100 MHz = 800 MB/s). DDR SDRAM would be 1600 MB/s. It s just f ast er, an
d
it will be cheaper t han Rambus RAM, and it s current ly support ed by quit e a f ew mot herboard
manuf act urers.
Chapter 5
Video Cards
The video card cont rols t he qualit y of what you see on your monit or. It cont ains all t he
circuit ry necessary f or displaying
graphics. It usually is a separat e card t hat f it s int o one of
your mot herboard's slot s, but somet imes t his circuit ry is incorporat ed int o t he mot herboard
it self.
When buying a new video card it is very import ant t o "mat ch" it t o your monit or. If you've g
ot
a f ancy new video card and a puny monit or, you just won't be able t o view t he wonders of your
card t hat you paid good money f or. It is a good idea t o buy t he video card f irst, t hen buy a
suit able monit or. Bet t er yet, buy t he monit or and t he card as a se
t so t hey will be perf ect ly
mat ched.
Accelerated Graphics Port
Today's sof t ware is increasing in graphics int ensit y. Even "mundane" business sof t ware uses
icons, chart s, animat ions, et c. When you add 3D games and educat ional sof t ware t o t he
equat ion, one
can see t hat t here is a crunch in bandwidt h f or graphical inf ormat ion. Wit h
newer sof t ware and games get t ing much more graphics int ensive, t he PCI bus is maxed out. In
f act, t he PCI bus, once considered very f ast, can now be considered a bot t leneck.
Int e
l knew t his. In response, t hey designed t he Accelerat ed Graphics Port, or AGP. Int el
def ines AGP as a "high perf ormance, component level int erconnect t arget ed at 3D graphical
display applicat ions and is based on a set of perf ormance ext ensions or enhanceme
nt s t o PCI."
In short, AGP uses t he main PC memory t o hold 3D images. In ef f ect, t his gives t he AGP video
card an unlimit ed amount of video memory. To speed up t he dat a t ransf er, Int el designed t he
port as a direct pat h t o t he PC's main memory.
AGP sounds
groundbreaking, and it is, no doubt, t he lat est craze in t he need f or graphical
speed. One reason it is f ast er t han PCI is t hat, while PCI runs at 33MHz, t he AGP bus runs
much f ast er. A 4X AGP bus runs at 4 t imes 33MHz, or 133MHz! Also, a normally clocked
PCI bus
can achieve a t hroughput of 132MB/s. Yes, t his is f ast, but when compared t o t he t hroughput s
of 3D games, one f inds t hat it is not enough. AGP, running in 2x mode (2 x 33 = 66MHz), can
achieve a t hroughput of 528MB/s! AGP pulls t his of f by const an
t ly t ransf erring dat a on bot h
t he rises and f alls of t he 66MHz clock cycle. Also, AGP makes use of sideband t ransf ers and
pipelining
so it can const ant ly t ransf er dat a wit hout depending on ot her component s in t he PC.
The
pipelining
abilit y of t he AGP bus
is a key point t hat explains why it provides a perf ormance
advant age. Since AGP pipelines operat ions it can process quicker and more ef f icient ly t han PCI
bus can. AGP uses a special organizat ion process f or all pending and processing request s. In
ef f ect, t
he bus can process one inst ruct ion while st ill recieving t he next inst ruct ions. This
allows much more t o be accomplished in a short er amount of t ime.
For a diagram of how t he AGP bus is st ruct ured, see
this diagram
provided by
Intel Corporation
.
One can
easily see why t he need f or a new graphical int erf ace is needed. While PCI served us
well, and st ill cont inues t o do so, it is bogged down by t he demand of f ull screen 3D graphics.
It works great f or 2D business sof t ware and most games, but int ense 3D ch | https://www.techylib.com/el/view/spotpull/chapter_1_boolean_logic_and_gates | CC-MAIN-2019-09 | refinedweb | 18,211 | 85.32 |
As part of our ongoing commitment to help build an interoperable Web that “just works,” we are changing the way Top Level Domains (TLD) names are parsed to use the Public Suffix List. This change can be previewed using Internet Explorer in the Windows 10 Technical Preview.
In the past, IE used a custom algorithm and kept a private list of domain name parsing exceptions. Owners of domain names that needed exception handling by our algorithm had to notify Microsoft that exception parsing was required.
Going forward, to increase interoperability we are switching our parsing to use the algorithms and domain list found at, which is a cross-vendor initiative also used by other browsers. Starting with the Windows 10 Technical Preview, IE will parse domain names in a more interoperable manner. After this change has been released in a product release you will no longer need to notify Microsoft of special domain names; we will automatically pick up and include the changes made at publicsuffix.org on a regular cadence. We are also evaluating bringing this change downlevel to accelerate the transition.
Join the Windows Insider Program to try the new top level domain name parsing in IE and let us know if you have feedback @IEDevChat or on Connect.
— David Walp, Senior Program Manager, Internet Explorer
What is the TLD parsing needed for?
Wow…. The fact that this list has to exist makes me want to cry….
But thanks for at least using the same list as everyone else now guys!
I thought with the recent influx of new TLDs manual lists are going to get phased out since they don't scale anymore.
@Joseph: Same-origin policy, e.g. what domain level a cookie can be set for.
Brenno: there is as yet no feasible replacement for publicsuffix.org. As long as Microsoft implement the algorithm correctly (i.e. if they see a domain that's not in the list, treat it like .com – i.e. flat namespace) then it's not a massive disaster if new gTLDs take a little while to percolate into the system, because most of them are flat like that anyway.
Gerv (PSL maintainer)
blogs.msdn.com/…/private-domain-names-and-public-suffixes-in-internet-explorer.aspx explains how Internet Explorer uses domain information.
Thanks Gerv. Yeah, from a security standpoint the possibly slow percolating (smartphones/tablets?) is certainly not going to be a problem. It might be a bit of a challenge to those companies getting their own TLD but maybe it serves them right. 😀 | https://blogs.msdn.microsoft.com/ie/2014/10/06/interoperable-top-level-domain-name-parsing-comes-to-ie/ | CC-MAIN-2018-22 | refinedweb | 424 | 55.24 |
[SOLVED]Is it possible to pass on password to process requiring root privileges
Suppose i start a process p1 using QProcess. The process p1 requires root privileges, so can we pass on password to process p1 without human intervention, ofcourse password being read from some location
- Benjamin Kloster
I can think of two ways, depending on what your process supports:
Pass the password as a command line argument. This obviously requires the target program to have such an option. See the documentation of QProcess::start on how to pass arguments.
If you start the target program from a terminal, does it ask for the root password? If yes, start the QProcess and wait until it's done with QProcess::waitForStarted (waitForReadyRead may work even better). Then you can pass it the password by using QProcess::write.
I hope one of those works for you.
- tobias.hunger Moderators
Note: If you pass the password via the command line, then it might show up in the process list.
Use OS specific methods. sudo or better setuid for linux etc... ShellExecute for windows.
-On Windows, I don't think it is possible, and for good reason: it is a security breach. You want the user to know that the process just elevated it's rights and now runs with root privileges.-
I stand corrected.
Of cause it is possible on Windows. There is an list of WINAPI functions to do that:
ShellExecute (maybe not the best use for this, but it works from win 2000 to win7)
CreateProcessWithLogon (simple to use)
and another two with more flexible options:
CreateProcessAsUser
CreateProcessWithToken
I use Ubuntu (Linux), actually my application does not need root privileges. But it has one module which requires root privileges. That module needs to be run everytime i start my application. i wish to ask for root password only the first time user starts the application and store the root password internally. Next time when user runs my application, i want to run the module using password stored on first usage, without troubling the user to type in password every time.
I still maintain that that is a security risk. How are you securely going to store that root password?
Is this module an executable file?
If so, you can simply set sticky bit during installation and use setuid() to gain root privilege in your module.
@
su - root
chmod +s <your executable binary>
@
@
#include <sys/types.h>
#include <unistd.h>
int main(...)
{
....
qDebug() << "Current user ID: " << getuid() << " user group: " << getgid();
if (setuid(0) != 0)
{
qCritical() << "Can't get root access";
return;
}
// we have root access now
qDebug() << "Current user ID: " << getuid() << " user group: " << getgid();
}
@
You then don't even need to know the root password.....
I was looking for a working setuid() method. Thanks a ton for making it look so easy.
Edit 1:
It didn't work!
@Current user ID: 1000 user group: 1000
Can't get root access@
Edit 2:
@Cannot connect creator comm socket /tmp/qt_temp.Vr2940/stub-socket: No such file or directory@
I get this error.
Moderator Edit: Instead of replying to yourself, please just edit your last post. I have merged your three posts into one; Andre
I have forgot to tell:
You should install your module as root user. Or chown it to be root.
- su
- chown root:root <yourbin>
- chmod +s <yourbin>
- exit to normal user
- check if "s" bit is set: ls -ahl <yourbin>
It should look like that:
@
-rwsr-sr-x 1 root root 7,2K Sep 27 11:19 <yourbin>
@
- ./<yourbin>
It works always!
- tobias.hunger Moderators
Making your binary suid means that anybody that can start it will be able to run it as root. That may or may not be what you want.
You could also consider moving the root-part out into a D-Bus service and then using "polkit": for the authentication. I never used it, but it seems to be what the cool kids do nowadays:-)
I will be using policykit (pkexec) but only first time for setting setuid. Besides, i would be setting a password lock inside the binary to prevent its unauthorised execution
Thanks! can you name some Linux Distros which don't support setuid.
[quote author="zester" date="1348839627"
[/quote]
What? setuid & getuid is implemented in kernel since 2.4(2?).* it belongs to each linux with this kernel version or above...
PAM is just package/ 3rd software.. it must be compiled/installed and configured. And for example is not by default on LFS, OpenELEC, etc...
I don't know and I agree with you but the last time I had this issue (2011?) both ubuntu and fedora had them disabled, meaning they wouldn't work. After doing alot of research I was informed that I should defiantly not be using those fucntions and that most linux distros had them disabled do to security concerns.
Maybe things have changed "I have no idea" I was just pointing out my past experiences and what I was told to use, policykit or pam. Maybe it has something to do with SELinux?
If setuid and getuid is working for you then use them.
Trust me I would much rather use functions that are already provided verses installing a thirdparty package like policykit or pam.
Here you go maybe this was the issue I was having back then.
SELinux is preventing dhcpd setgid/setuid access
Maybe ubuntu had the same bug?
Or see here
As far as being told not to use them..... What can I say, maybe it was an opinion made by someone with
more experance than I. I will look into it, If I can get rid of one more package that duplicates functionality then good ;)
right i have forgot about SELinux and grsecurity... they can prevent execution of setuid...
They also need more complex PAM configuration...
But didn't know what ubuntu or fedora have ever used SELinux in Desktop versions. SLED(S) and RHEL uses SELinux by default...
Anyway...
The right way for desktop endusers will be: using PAM
Standard way for linux will be: using kernels setuid
I am facing the same problem. You have come cross the problem.
what you did to solve...
I want user to enter the password only once. | https://forum.qt.io/topic/20169/solved-is-it-possible-to-pass-on-password-to-process-requiring-root-privileges | CC-MAIN-2018-13 | refinedweb | 1,042 | 65.83 |
We’ve previously seen
the basic implementation and motivation for
scalaz.Leibniz.
But there’s still quite a bit more to this traditionally esoteric
member of the Scalaz collection of well-typed stuff.
The word “witness” implies that
Leibniz is a passive bystander in
your function; sitting back and telling you that some type is equal to
another type, otherwise content to let the real code do the real
work. The fact that
Leibniz lifts into functions (which are a
member of the everything set, you’ll agree) might reinforce the
notion that
Leibniz is spooky action at a distance.
But one of the nice things about
Leibniz is that there’s really no
cheating: the value with its shiny new type is dependent on the
Leibniz actually existing, and its
subst, however much a glorified
identity function it might be, completing successfully.
To see this in action, let’s check in with the bastion of not evaluating stuff, Haskell.
{-# LANGUAGE RankNTypes, PolyKinds #-} module Leib ( Leib() , subst , lift , symm , compose ) where import Data.Functor data Leib a b = Leib { subst :: forall f. f a -> f b } refl :: Leib a a refl = Leib id lift :: Leib a b -> Leib (f a) (f b) lift ab = runOn . subst ab . On $ refl newtype On c f a b = On { runOn :: c (f a) (f b) } symm :: Leib a b -> Leib b a symm ab = runDual . subst ab . Dual $ refl newtype Dual c a b = Dual { runDual :: c b a } compose :: Leib b c -> Leib a b -> Leib a c compose = subst
We use newtypes in place of type lambdas, and a value instead of a method, but the implementation is otherwise identical.
OK. Let’s try to make a fake
Leib.
badForce :: Leib a b badForce = Leib $ \_ -> error "sorry for fibbing"
The following code will signal an error only if forcing the head
cons of the
substed list signals such an error. We never give
Haskell the chance to force anything else.
λ> subst (badForce :: Leib Int String) [42] `seq` 33 *** Exception: sorry for fibbing
Oh well, let’s try to bury it behind combinators.
λ> subst (symm . symm $ badForce :: Leib Int String) [42] `seq` 33 *** Exception: sorry for fibbing λ> subst (compose refl $ badForce :: Leib Int String) [42] `seq` 33 *** Exception: sorry for fibbing
Hmm. We have two properties:
idfrom
refl? The type-substituted data actually goes through that function. The same goes for the
substmethod in Scala.
Leibnizcombinators, the strictness forms a chain to all underlying
Leibnizevidence. If there are any missing values, the transform will also fail.
Leibniz
Let’s try a variant on
Leib.
sealed abstract class LeibF[G[_], H[_]] { def subst[F[_[_]]](fa: F[G]): F[H] }
This reads “
LeibF[G, H] can replace
G with
H in any type
function”. But, whereas the
kind
of the types that Leib discusses is
*, for
LeibF it’s
*->*. So,
LeibF[List, List] exhibits that the type constructors
List and
List are equal.
implicit def refl[G[_]]: LeibF[G, G] = new LeibF[G, G] { override def subst[F[_[_]]](fa: F[G]): F[G] = fa }
Interestingly, except for the kinds of type parameters, these
definitions are exactly the same as for
Leib. Does that hold for
lift?
def lift[F[_[_], _], A[_] , B[_]](ab: LeibF[A, B]): LeibF[F[A, ?], F[B, ?]] = ab.subst[Lambda[x[_] => LeibF[F[A, ?], F[x, ?]]]](LeibF.refl[F[A, ?]])
Despite that we are positively buried in type lambdas (yet moderated by Kind Projector) now, absolutely!
As an exercise, adapt your
symm and
compose methods from the last
part for
LeibF, by only changing type parameters and switching any
refl references.
def symm[A[_], B[_]](ab: LeibF[A, B]): LeibF[B, A] def compose[A[_], B[_], C[_]](ab: LeibF[A, B], bc: LeibF[B, C]): LeibF[A, C]
You can write a
Leibniz and associated combinators for types of
any kind; the principles and implementation techniques outlined
above for types of kind
*->* apply to all kinds.
PolyKinds?
You have to define a new
Leib variant and set of combinators for
each kind you wish to support. There is no need to do this in
Haskell, though.
λ> :k Leib [] Leib [] :: (* -> *) -> * λ> :t refl :: Leib [] [] refl :: Leib [] [] :: Leib [] [] λ> :t lift (refl :: Leib [] []) lift (refl :: Leib [] []) :: Leib (f []) (f []) λ> :t compose (refl :: Leib [] []) compose (refl :: Leib [] []) :: Leib a [] -> Leib a []
In Haskell, we can take advantage of the fact that the actual
implementations are kind-agnostic, by having those definitions be
applicable to all kinds via
the
PolyKinds language extension,
mentioned at the top of the Haskell code above. No such luck in
Scala.
In a post from a couple months ago,
Kenji Yoshida outlines an interesting way to simulate the missing
type-evidence features of Scala’s GADT support with
Leibniz. This
works in Haskell, too, in case you are comfortable with turning on
RankNTypes
but not
GADTs
somehow.
Let’s examine Kenji’s GADT.
sealed abstract class Foo[A, B] final case class X[A]() extends Foo[A, A] final case class Y[A, B](a: A, b: B) extends Foo[A, B]
For completeness, let’s also see the Haskell version, including the function that demands so much hoop-jumping in Scala, but just works in Haskell.
{-# LANGUAGE GADTs #-} module FooXY where data Foo a b where X :: Foo a a Y :: a -> b -> Foo a b hoge :: Foo a b -> f a c -> f b c hoge X bar = bar
Note that the Haskell type system understands that when
hoge’s first
argument’s data constructor is
X, the type variables
a and
b
must be the same type, and therefore by implication the argument of
type
f a c must also be of type
f b c. This is what we’re trying
to get Scala to understand.
def hoge1[F[_, _], A, B, C](foo: Foo[A, B], bar: F[A, C]): F[B, C] = foo match { case X() => bar }
This transliteration of the above Haskell
hoge function fails to
compile, as Kenji notes, with the following:
…/LeibnizArticle.scala:39: type mismatch; found : bar.type (with underlying type F[A,C]) required: F[B,C] case X() => bar ^
catamethod
Kenji introduces a
cata method on
Foo to constrain use of the
Leibniz.force hack, while still providing external code with usable
Leibniz evidence that can be lifted to implement
hoge. However,
by implementing the method in a slightly different way, we can use
refl instead.
sealed abstract class Foo[A, B] { def cata[Z](x: (A Leib B) => Z, y: (A, B) => Z): Z } final case class X[A]() extends Foo[A, A] { def cata[Z](x: (A Leib A) => Z, y: (A, A) => Z) = x(Leib.refl) } final case class Y[A, B](a: A, b: B) extends Foo[A, B] { def cata[Z](x: (A Leib B) => Z, y: (A, B) => Z) = y(a, b) }
Now we can replace the pattern match (and all other such pattern
matches) with an equivalent
cata invocation.
def hoge2[F[_, _], A, B, C](foo: Foo[A, B], bar: F[A, C]): F[B, C] = foo.cata(x => x.subst[F[?, C]](bar), (_, _) => sys error "nonexhaustive")
So why can we get away with
Leib.refl, whereas the function version
Kenji presents cannot? Compare the
cata signature in
Foo versus
X:
def cata[Z](x: (A Leib B) => Z, y: (A, B) => Z): Z def cata[Z](x: (A Leib A) => Z, y: (A, A) => Z): Z
We supplied
A for both the
A and
B type parameters in our
extends clause, so that substitution also applies in all methods
from
Foo that we’re implementing, including
cata. At that point
it’s obvious to the compiler that
refl implements the requested
Leib.
Incidentally, a similar style of substitution underlies the definition
of
refl.
Leibmember
What if we don’t want to write or maintain an overriding-style
cata?
After all, that’s an n² commitment. Instead, we can incorporate a
Leib value in the GADT. First, let’s see what the equivalent
Haskell is, without the
GADTs extension:
data Foo a b = X (Leib a b) | Y a b hoge :: Foo a b -> f a c -> f b c hoge (X leib) bar = runDual . subst leib . Dual $ bar
We needed
RankNTypes to implement
Leib, of course, but perhaps
that’s acceptable. It’s useful in
Ermine, which supports rank-N
types but not GADTs as of this writing.
The above is simple enough to port to Scala, though.
sealed abstract class Foo[A, B] final case class X[A, B](leib: Leib[A, B]) extends Foo[A, B] final case class Y[A, B](a: A, b: B) extends Foo[A, B] def hoge3[F[_, _], A, B, C](foo: Foo[A, B], bar: F[A, C]): F[B, C] = foo match { case X(leib) => leib.subst[F[?, C]](bar) }
It feels a little weird that
X now must retain
Foo’s
type-system-level separation of the two type parameters. But this
style may more naturally integrate in your ADTs, and it is much closer
to the original non-working
hoge1 implementation.
It also feels a little weird that you have to waste a slot carting around this evidence of type equality. As demonstrated in section “It’s really there” above, though, it matters that the instance exists.
You can play games with this definition to make it easier to supply
the wholly mechanical
leib argument to
X, e.g. adding it as an
implicit val in the second parameter list so it can be imported and
implicitly supplied on
X construction. The basic technique is
exactly the same as above, though.
Leibnizmastery
This time we talked about
substalways executes to use a type equality,
Leibnizes,
Leibnizmembers of data constructors.
This article was tested with Scala 2.11.2, Kind Projector 0.5.2, and GHC 7.8.3.
Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.Back to blog | https://typelevel.org/blog/2014/09/20/higher_leibniz.html | CC-MAIN-2019-13 | refinedweb | 1,689 | 68.4 |
Hi Alex and Alan,
we already moved entire logic to org.apache namespace. We're keeping classes in com.cloudera
in place only for compatibility with tools that are based on sqoop (for example various connectors).
However those classes do not contain any logic, they are just inheriting from org.apache namespace
and do nothing. Let me show you what I mean on following example:
All other files in com.cloudera namespace have similar structure. They are just skeleton code
that is in place for compatibility without any logic (additional code).
Do we really need to remove entirely com.cloudera namespace or is this state acceptable for
graduating?
Jarcec
On Tue, Feb 28, 2012 at 10:39:35AM +0200, Alex Karasulu wrote:
> On Mon, Feb 27, 2012 at 10:10 PM, Alan Gates <gates@hortonworks.com> wrote:
>
> > The source code in Sqoop still exists in both com.cloudera.sqoop and
> > org.apache.sqoop packages and most of the code appears to include the
> > com.cloudera packages and not the org.apache packages. While in the
> > incubator this seems fine. Are we ok with this in a TLP? I couldn't find
> > any policy statements on it in the Apache pages.
> >
>
> Good catch Alan. You are right we are not OK with this situation. It needs
> to be corrected then another vote can be taken.
>
> Thanks,
> Alex
>
>
> >
> > On Feb 24, 2012, at 1:34 PM, Arvind Prabhakar wrote:
> >
> > > This is a call for vote to graduate Sqoop podling from Apache Incubator.
> > >
> > > Sqoop entered Incubator in June of 2011. Since then it has added three
> > > new committers from diverse organizations, added two new PPMC members,
> > > and made two releases following the ASF policies and guidelines. The
> > > community of Sqoop is active, healthy and growing and has demonstrated
> > > the ability to self-govern using accepted Apache practices. Sqoop
> > > community has voted to proceed with graduation [1] and the result can
> > > be found at [2].
> > >
> > > Please cast your votes:
> > >
> > > [ ] +1 Graduate Sqoop podling from Apache Incubator
> > > [ ] +0 Indifferent to the graduation status of Sqoop podling
> > > [ ] -1 Reject graduation of Sqoop podling from Apache Incubator
> > >
> > > This vote will be open for 72 hours. Please find the proposed board
> > > resolution below.
> > >
> > > [1]
> > > [2]
> > >
> > > Thanks,
> > > Arvind Prabhakar
> > >
> > > X. Establish the Apache Sqoop Project
> > >
> > > WHEREAS, the Board of Directors deems it to be in the best
> > > interests of the Foundation and consistent with the
> > > Foundation's purpose to establish a Project Management
> > > Committee charged with the creation and maintenance of
> > > open-source software related to efficiently transferring
> > > bulk data between Apache Hadoop and structured datastores
> > > for distribution at no charge to the public.
> > >
> > > NOW, THEREFORE, BE IT RESOLVED, that a Project Management
> > > Committee (PMC), to be known as the "Apache Sqoop Project",
> > > be and hereby is established pursuant to Bylaws of the
> > > Foundation; and be it further
> > >
> > > RESOLVED, that the Apache Sqoop Project be and hereby is
> > > responsible for the creation and maintenance of software
> > > related to efficiently transferring bulk data between Apache
> > > Hadoop and structured datastores; and be it further
> > >
> > > RESOLVED, that the office of "Vice President, Apache Sqoop" be
> > > and hereby is created, the person holding such office to
> > > serve at the direction of the Board of Directors as the chair
> > > of the Apache Sqoop Project, and to have primary responsibility
> > > for management of the projects within the scope of
> > > responsibility of the Apache Sqoop Project; and be it further
> > >
> > > RESOLVED, that the persons listed immediately below be and
> > > hereby are appointed to serve as the initial members of the
> > > Apache Sqoop Project:
> > >
> > > * Aaron Kimball kimballa@apache.org
> > > * Andrew Bayer abayer@apache.org
> > > * Ahmed Radwan ahmed@apache.org
> > > * Arvind Prabhakar arvind@apache.org
> > > * Bilung Lee blee@apache.org
> > > * Greg Cottman gcottman@apache.org
> > > * Guy le Mar guylemar@apache.org
> > > * Jaroslav Cecho jarcec@apache.org
> > > * Jonathan Hsieh jmhsieh@apache.org
> > > * Olivier Lamy olamy@apache.org
> > > * Paul Zimdars pzimdars@apache.org
> > > * Roman Shaposhnik rvs@apache.org
> > >
> > > NOW, THEREFORE, BE IT FURTHER RESOLVED, that Arvind Prabhakar
> > > be appointed to the office of Vice President, Apache Sqoop, to
> > > serve in accordance with and subject to the direction of the
> > > Board of Directors and the Bylaws of the Foundation until
> > > death, resignation, retirement, removal or disqualification,
> > > or until a successor is appointed; and be it further
> > >
> > > RESOLVED, that the initial Apache Sqoop PMC be and hereby is
> > > tasked with the creation of a set of bylaws intended to
> > > encourage open development and increased participation in the
> > > Apache Sqoop Project; and be it further
> > >
> > > RESOLVED, that the Apache Sqoop Project be and hereby
> > > is tasked with the migration and rationalization of the Apache
> > > Incubator Sqoop podling; and be it further
> > >
> > > RESOLVED, that all responsibilities pertaining to the Apache
> > > Incubator Sqoop | http://mail-archives.apache.org/mod_mbox/incubator-general/201202.mbox/%3C20120228085246.GM3186@garfield%3E | CC-MAIN-2015-06 | refinedweb | 778 | 54.22 |
PhpStorm 10.0.3 is now available
Posted on by
We are glad to announce that PhpStorm 10.0.3 build 143.1770 is available for download.
This build includes new features, bug fixes and improvements from the PHP, web and IntelliJ platform sides. From the PHP side, this build delivers:
- Fix of scalar types in namespaced classes for PHP 7 (WI-28283)
- Symfony 3 support for Command Line Tools
Please see our issue tracker for the full list of PHP-related issues fixed and release notes.
Download PhpStorm 10.0.3 build 143.1770 for your platform and please report any bugs or feature request to our Issue Tracker.
The Drive to Develop!
-JetBrains PhpStorm Team
44 Responses to PhpStorm 10.0.3 is now available
Aurimas says:January 8, 2016
Still not fully supports anonymous classes…
mnapoli says:January 8, 2016
> Fix of scalar types
Awesome!
AB says:January 8, 2016
Did you fix empty $_POST when using localhost?
NBCODING says:January 10, 2016
No, they did not. 🙁
Vladislav Soprun says:January 8, 2016
After update an error occurred with the addition of artisan!
—
Problem
Failed to determine version.
Command
C:…php.exe C:…artisan -V
Output
Laravel Framework version 5.2.7
Даша says:January 9, 2016
Don’t fix tools based on symfony console, i.e. laravel. Error
Problem
Failed to determine version.
Command
php.exe artisan -V
Output
Laravel Framework version 5.2.7
Kostya Zolo says:January 9, 2016
same in here.
Laravel 5.2.7
“Problem
Failed to determine version.”
BJ says:January 9, 2016
maybe use a real framework
Kamran Ahmed says:January 10, 2016
Ahan, and what that might be? Codeigniter?! lol
BJ says:January 10, 2016
zend.. symfony.. something with some balls
Кирилл Несмеянов says:January 10, 2016
Add “if (isset($argv[1]) && $argv[1] === ‘-V’) { die(‘Symfony version 2.7.8’); }” in artisan at start for L5.1 and “Symfony version 3.0.0” for L5.2
Kostya Zolo says:January 10, 2016
You’re the saver. Thank you.
Maxim Kolmakov says:January 11, 2016
Please vote for
Gildas Niyigena says:January 11, 2016
You really saved my day. I got this 2 days ago, with no chance to resolve it!
ErnestV says:February 6, 2016
Genius! Thanks a lot!
Maxim Kolmakov says:January 11, 2016
Please, vote for
Кирилл Несмеянов says:February 5, 2016
Lol
Maxim Kolmakov says:January 11, 2016
Please vote for
EspadaV8 says:January 10, 2016
Still importing ‘string’ in a namespace when adding a traits methods (possibly also does it when implementing methods that an interface/abstract class define.
EspadaV8 says:January 10, 2016
Yeah. If you have namespaced traits/classes and you use them in another namespaced class then scalar types are still imported as though they were part of the originating namespace.
Example code:
Video of PHPStorm importing scalars:
Maxim Kolmakov says:January 11, 2016
Thank you for reporting! Please vote for
EspadaV8 says:January 12, 2016
Thanks for creating that.
sprain says:January 10, 2016
Anything about the blurry text in external monitors?
Maxim Kolmakov says:January 11, 2016
Please vote for
Martin Janeček says:January 11, 2016
Will there be patch for 143.1480?
Maxim Kolmakov says:January 11, 2016
No, there is a patch only for 10.0.2.
CyncialOne says:January 12, 2016
Where can I learn more about what the “Copyright Plugin” is?
Maxim Kolmakov says:January 12, 2016
We will publish blog post really soon. Please stay tuned. Right now you can read about the functionality at:
CyncialOne says:January 17, 2016
Thanks!
Half Crazed says:January 12, 2016
I wish this application would update itself instead of forcing me to redownload the whole application and reinstall it, causing my path to application to change. Very annoying.
Kenny Silanskas says:January 12, 2016
Seriously. It’s absolutely maddening and causes me to skip most minor updates.
Maxim Kolmakov says:January 12, 2016
We’re always trying to provide patches for EAP->EAP and Release->Release versions. I’m sorry that we couldn’t do this for 10.0.1->10.0.2 but patch should work for 10.0.2->10.0.3.
Kenny Silanskas says:January 14, 2016
Thanks for the reply Maxim. I appreciate your team trying to resolve this issue. The patch did not work for 10.0.2 to 10.0.3 unfortunately and still shows the “Release Notes/Download” button. Just FYI. (Running EAP so not sure if that has any impact)
Edit: I misspoke. Apologies. The patch that prompts the download is between builds. (143.1480 – 143.1770)
Sp4cecat says:January 18, 2016
143.1770 is the straight major release; you’re still loading up EAP which is the older version. Had the same issue, but I am deliberately using older EAP because of Java issues with consoles on 2 screens – console keeps popping back to main screen
Kenny Silanskas says:January 18, 2016
Ah. Ok then. Yeah that’s why I had to stick to EAP as well. Java as usual ruining the day. 🙂 Thanks for the clarification.
Maxim Kolmakov says:January 12, 2016
Usually we’re trying to provide patches for EAP->EAP and Release->Release versions. I’m sorry that we couldn’t do this for 10.0.1->10.0.2 but patch should work for 10.0.2->10.0.3.
Andrew Beveridge says:January 14, 2016
Agreed, we have no better option than a dreadfully slow connection to our office and 160MB every other week is surprisingly painful (kills our whole office connection for 20 minutes).
I’d probably be less annoyed by this if there was a reasonable explanation provided for why a patch isn’t possible.
Could you perhaps add some more text to the “Platform and Plugin Updates” dialog whenever a patch isn’t available, to explain why?
Is there a better place I could post this request, e.g. feature request for core?
Drew says:April 14, 2016
I agree with Half Crazed, it’s 2016 this should be automated already.
Sp4cecat says:January 14, 2016
Any way I can get the ‘non-optimised’ java OSX version? The latest one forces me to use the ‘optimised’ version – I have two screens and keep the debug window on the left, the version that’s optimised for OSX keeps moving the debug window back to the main screen ..
James Mehorter says:January 28, 2016
I figured this out the other day! Hope it helps you too 🙂-
Sp4cecat says:January 31, 2016
Thanks James. It worked for 10.03, however looks like version 11 EAP comes with it bundled by default. Looking at workarounds for that; there’s a provision for specifying JDK version but the only one THAT will work with is Java 8, which seems to be the source of the problem .. thanks anyway!
Stephen says:January 25, 2016
I’ve tried few times to download 143.1770 and install (attempts were distributed within a few days) and each time after install (os x) the app I checked version and see current version is 143.1480 instead of 143.1770. Maybe some issue with CDN?
$ shasum -a 256 ~/Downloads/PhpStorm-10.0.3-custom-jdk-bundled.dmg
bd7d28974ef5587524389659dd27516c1067c35aebeee040821c638a18439e52 /Users/User/Downloads/PhpStorm-10.0.3-custom-jdk-bundled.dmg
Seems is all ok with checksum.
Stephen says:January 25, 2016
oh ok my bad. I’ve replaced the file with overwrote, but didn’t pay attention this is not EAP version.
Obaid Mukhtar Khan says:February 8, 2016
Greath | https://blog.jetbrains.com/phpstorm/2016/01/phpstorm-10-0-3/ | CC-MAIN-2020-29 | refinedweb | 1,258 | 67.35 |
Raspberry Pi Cluster Node – 02 Packaging common functionality
This post builds on the first step to create a Raspberry Pi Cluster node to package the common functionality that will be shared between master and clients.
Packaging common functionality
With the Raspberry Pi Cluster project there will be a number of things all nodes will do. This can lead to a large amount of duplicated code. To resolve the issue of copying the same code into both master and client code I will make a python package to hold the data.
Python packages are a way to collect common code together in easy to use modules. This allows multiple files to import the same code, reducing code duplication.
One of the major problems with copying code is that when improvements are made, some places that it is used can be missed. This means that a piece of code that was initially copied into a number of places may now differ. This shouldn’t be an issue if the improvement did not affect the overall functionality of the code. However if the change affected the behaviour you may now have multiple pieces of code doing slightly different things.
When programming you always try and avoid to have situations where this may occur and python packages provide a way to help resolve this problem.
As the amount of code we have increases I will refactor the code into different modules in the package. By keeping the code modular it will be easy to build much more complex node behaviour.
Creating a python package
To create our package we need to create a folder for it to live in. The package I will create will be called
RpiCluster. Therefore I have created a folder in my project directory called
RpiCluster.
To finish creating the package you need to add an
__init__.pyfile to the folder. This file tells python that the folder is a python package. If this is not included python will not treat your directory as a package and will not allow importing files from it. This does not need to contain anything but we will use it at a later date to configure some details about the package.
The
__init__.py file is safety feature so that folders which are named the same as python base modules do not hide these important modules. For example if your code has a
stringfolder you do not want it overriding the python
string module.
Now we have our package I can look at moving some of our code into it.
Moving our logging into the package
I am going to move the logging functionality into our newly created package. I am doing this because both the master and slave scripts will be logging their processes.
To start with I have copied all the logging code to a new file called
MainLogger.py. One of the improvements I am going to make is to allow each script to customize the name of the file it logs to.
This has the improvement in that each of our scripts can configure a different logger name and location to store the file. For now, while we only have a single master script this doesn’t matter too much but it will be helpful in the future.
The first section of the logger script reproduced below was originally in the main script.
import logging import time # First lets set up the formatting of the logger logFormatter = logging.Formatter("%(asctime)s [%(threadName)-12.12s] [%(levelname)-5.5s] %(message)s") # Get the base logger and set the default level logger = logging.getLogger() logger.setLevel(logging.DEBUG) # Create a console handler to output the logging to the screen consoleHandler = logging.StreamHandler() consoleHandler.setFormatter(logFormatter) consoleHandler.setLevel(logging.INFO) logger.addHandler(consoleHandler)
Here again we set up the logger with our custom formatter and set up a console handler. I have initially removed the section that adds a file handler to log the files to.
Instead of setting up the file handler by default I have moved this into a method. This allows any script calling the method to set up its own file handler using the function below.
def add_file_logger(filename): # Create a handler to store the logs to a file fileHandler = logging.FileHandler(filename) fileHandler.setFormatter(logFormatter) logger.addHandler(fileHandler)
Here I allow the script to pass in their filename which will be used to create a file handler. Now I have the basics of my logging code set up in my new
RpiCluster package.
Using our RpiCluster package in our main script
The only modifications made so far to the main script has been to remove all the logging setup. Now I can import the new package using the following import statement
from RpiCluster.MainLogger import add_file_logger, logger
The first part of the import statement tells python where the symbols we are going to import exist. Here we use the dot format to say that we are looking for symbols inside the
RpiCluster package and inside the
MainLogger.py file.
The import statement following that tells it to import our function to add the handler and the logger variable created. When the MainLogger file is loaded the code runs creating the logger object. Here we make that variable available in our main script.
Now I have imported the new package I can use the logger as before. The rest of the code in the script has not changed.
The full code is available on Github, any comments or questions can be raised there as issues or posted below. | https://chewett.co.uk/blog/881/raspberry-pi-cluster-node-02-packaging-common-functionality/ | CC-MAIN-2020-05 | refinedweb | 928 | 62.17 |
Are you sure?
Do you want to delete “Recursion: fractal tree” permanently? You will not be able to undo this action.
import math; # let's start with a nice background: Rectangle(color=['#acf', '#cef', '#fff']) Circle(x=50, y=100, width=200, height=40, color='#350') # "tree" is a function -- a reusable piece of code # In this case it's a _recursive_ function; that is, it calls itself. # With recursion, it is possible to build complex behavior from a # relatively simple base. def tree(x, y, angle, length): # calculate the end point with trigonometry: end_x = int(x + length * math.cos(angle)) end_y = int(y + length * math.sin(angle)) # draw one line: Line([(x, y), (end_x, end_y)], color="tan") # we need to stop at some point; only draw further if line length is over 5. if length > 5: # this call will draw one branch off our line tree(end_x, end_y, angle+0.3, length*2/3) # and this will draw the other line tree(end_x, end_y, angle-0.65, length*2/3) if length < 10: # we need some green at the top: Circle(x=end_x, y=end_y, width=10, height=10, color=["green", "darkgreen"]) # this single function call will draw the whole tree tree(60, 100, math.radians(270), 30) | https://shrew.app/show/snild/recursion-fractal-tree | CC-MAIN-2022-21 | refinedweb | 209 | 72.66 |
ec_spool_ctx_enum_messages
NameName
ec_spool_ctx_enum_messages — Enumerate messages in a spool
SynopsisSynopsis
#include "spool.h"
|
int **ec_spool_ctx_enum_messages** ( | ctx, | |
| | ht, | |
| | on_insert, | |
| | closure
); | |
ec_spool_ctx * <var class="pdparam">ctx</var>;
ec_hash_table * <var class="pdparam">ht</var>;
ec_spool_ctx_insertion_func <var class="pdparam">on_insert</var>;
void * <var class="pdparam">closure<.
Enumerate messages in a spool.
This routine must be called after the spool lock has been obtained. It will migrate older spool formats to the current version and perform an initial summary of the contents of the spool, which are exposed in the provided hashtable.
on_insert is called just prior to inserting into ht; its return value will be used as the dataptr inserted to the hash. You may also perform other tasks here, such as queuing up jobs.. | https://www.sparkpost.com/momentum/3/3-api/apis-ec-spool-ctx-enum-messages/ | CC-MAIN-2021-49 | refinedweb | 120 | 55.84 |
Centralized task scheduler implementation. More...
#include <mitsuba/core/sched.h>
Centralized task scheduler implementation.
Accepts parallelizable jobs and distributes their computational load both locally and remotely. This is done by associating different types of Worker instances with the scheduler. These try to acquire work units from the scheduler, which are then executed on the current machine or sent to remote nodes over a network connection.
Protected constructor.
Virtual destructor.
Acquire a piece of work from the scheduler – internally used by the different worker implementations.
Cancel the execution of a parallelizable process.
Upon return, no more work from this process is running. Returns false if the process does not exist (anymore).
Cancel the execution of a parallelizable process. Upon return, no more work from this process is running. When the second parameter is set to true, the number of in-flight work units for this process is reduced by one.
Return the total number of cores exposed through this scheduler.
Return a pointer to the scheduler of this process.
Get the number of local workers.
Look up a resource by ID & core index.
Return the ID of a registered resource.
Throws an exception if the resource cannot be found.
Return a resource in the form of a binary data stream.
Retrieve one of the workers by index.
Get the number of workers.
Does the scheduler have one or more local workers?
Does the scheduler have one or more remote workers?
Is the scheduler currently executing work?
Test whether this is a multi-resource, i.e. different for every core.
Has the scheduler been started?
Register a multiple resource with the scheduler.
Multi means that in comparison to the previous method, a separate instance is provided for every core. An example where this is useful is to distribute random generator state when performing parallel Monte Carlo simulations.
resources must be a vector whose length is equal to getCoreCount().
Register a serializable resource with the scheduler.
A resource should be thought of as a constant state that is shared amongst all processing nodes. Resources can be reused by subsequent parallel processes, and consequently do not have to be re-transmitted over the network. Returns a resource ID, which can be used to reference the associated data.
Register a worker with the scheduler.
Release the main scheduler lock – internally used by the remote worker.
Increase the reference count of a previously registered resource.
The resource must be unregistered an additional time after calling this function.
Schedule a parallelizable process for execution.
If the scheduler is currently running and idle, its execution will begin immediately. Returns
false if the process is already scheduled and has not yet terminated and
true in any other case.
Internally used to prepare a Scheduler::Item structure when only the process ID is known.
Announces the termination of a process.
Start all workers and begin execution of any scheduled processes.
Initialize the scheduler of this process – called once in main()
Free the memory taken by staticInitialization()
Cancel all running processes and free memory used by resources.
Unregister a resource from the scheduler.
Note that the resource's won't be removed until all processes using it have terminated)
falseif the resource could not be found
Unregister a worker from the scheduler. | http://mitsuba-renderer.org/api/classmitsuba_1_1_scheduler.html | CC-MAIN-2019-51 | refinedweb | 541 | 60.21 |
wcswcs - find a wide substring
#include <wchar.h> wchar_t *wcswcs(const wchar_t *ws1, const wchar_t *ws2);
The wcswcs() function locates the first occurrence in the wide-character string pointed to by ws1 of the sequence of wide-character codes (excluding the terminating null wide-character code) in the wide-character string pointed to by ws2.
Upon successful completion, wcswcs() returns a pointer to the located wide-character string or a null pointer if the wide-character string is not found.
If ws2 points to a wide-character string with zero length, the function returns ws1.
No errors are defined.
None.
This function was not included in the final ISO/IEC 9899:1990/Amendment 1:1994 (E). Application developers are strongly encouraged to use the wcsstr() function instead.
None.
wcschr(), wcsstr(), <wchar.h>.
Derived from the MSE working draft. | http://pubs.opengroup.org/onlinepubs/007908775/xsh/wcswcs.html | CC-MAIN-2018-09 | refinedweb | 138 | 57.47 |
Introduction
Frozen graphs are commonly used for inference in TensorFlow and are stepping stones for inference for other frameworks. TensorFlow 1.x provided an interface to freeze models via
tf.Session, and I previously had a blog on how to use frozen models for inference in TensorFlow 1.x. However, since TensorFlow 2.x removed
tf.Session, freezing models in TensorFlow 2.x had been a problem for most of the users.
In this blog post, I am going to show how to save, load, and run inference for frozen graphs in TensorFlow 2.x.
Materials
This sample code was available on my GitHub. It was modified from the official TensorFlow 2.x Fashion MNIST Classification example.
Train Model and Export to Frozen Graph
We would train a simple fully connected neural network to classify the Fashion MNIST data. The model would be saved as
SavedModel in the
models directory for completeness. In addition, the model would also be frozen and saved as
frozen_graph.pb in the
frozen_models directory.
To train and export the model, please run the following command in the terminal.
$ python train.py
We would also have a reference value for the sample inference from TensorFlow 2.x using the conventional inference protocol in the printouts.
Example prediction reference: [3.9113933e-05 1.1972898e-07 5.2244545e-06 5.4371812e-06 6.1125693e-06 1.1335548e-01 3.0090479e-05 2.8483599e-01 9.5160649e-04 6.0077089e-01]
The key to exporting the frozen graph is to convert the model to concrete function, extract and freeze graphs from the concrete function, and serialize to hard drive.
#() layers = [op.name for op in frozen_func.graph.get_operations()] print("-" * 50) print("Frozen model layers: ") for layer in layers: print(layer) print("-" * 50) print("Frozen model inputs: ") print(frozen_func.inputs) print("Frozen model outputs: ") print(frozen_func.outputs) # Save frozen graph from frozen ConcreteFunction to hard drive tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir="./frozen_models", name="frozen_graph.pb", as_text=False)
Run Inference Using Frozen Graph
To run inference using the frozen graph in TensorFlow 2.x, please run the following command in the terminal.
$ python test.py
We also got the value for the sample inference using frozen graph. It is (almost) exactly the same as the reference value we got using the conventional inference protocol.
Example prediction reference: [3.9113860e-05 1.1972921e-07 5.2244545e-06 5.4371812e-06 6.1125752e-06 1.1335552e-01 3.0090479e-05 2.8483596e-01 9.5160597e-04 6.0077089e-01]
Because frozen graph has been sort of being deprecated by TensorFlow, and
SavedModel format is encouraged to use, we would have to use the TensorFlow 1.x function to load the frozen graph from hard drive.
# Load frozen graph using TensorFlow 1.x functions with tf.io.gfile.GFile("./frozen_models/frozen_graph.pb", "rb") as f: graph_def = tf.compat.v1.GraphDef() loaded = graph_def.ParseFromString(f.read()) # Wrap frozen graph to ConcreteFunctions frozen_func = wrap_frozen_graph(graph_def=graph_def, inputs=["x:0"], outputs=["Identity:0"], print_graph=True)
Once the frozen graph is loaded, we convert the frozen graph to concrete function and run inference.
def wrap_frozen_graph(graph_def, inputs, outputs, print_graph=False): def _imports_graph_def(): tf.compat.v1.import_graph_def(graph_def, name="") wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, []) import_graph = wrapped_import.graph print("-" * 50) print("Frozen model layers: ") layers = [op.name for op in import_graph.get_operations()] if print_graph == True: for layer in layers: print(layer) print("-" * 50) return wrapped_import.prune( tf.nest.map_structure(import_graph.as_graph_element, inputs), tf.nest.map_structure(import_graph.as_graph_element, outputs))
Convert Frozen Graph to ONNX
If TensorFlow 1.x and
tf2onnx have been installed, the frozen graph could be converted to ONNX model using the following command.
$ python -m tf2onnx.convert --input ./frozen_models/frozen_graph.pb --output model.onnx --outputs Identity:0 --inputs x:0
Convert Frozen Graph to UFF
The frozen graph could also be converted to UFF model for TensorRT using the following command.
$ convert-to-uff frozen_graph.pb -t -O Identity -o frozen_graph.uff
TensorRT 6.0 Docker image could be pulled from NVIDIA NGC.
$ docker pull nvcr.io/nvidia/tensorrt:19.12-py3
Conclusions
TensorFlow 2.x could also save, load, and run inference for frozen graphs. The frozen graphs from TensorFlow 2.x should be equivalent to the frozen graphs from TensorFlow 1.x. | https://leimao.github.io/blog/Save-Load-Inference-From-TF2-Frozen-Graph/ | CC-MAIN-2020-40 | refinedweb | 709 | 52.66 |
Opened 9 years ago
Closed 9 years ago
Last modified 8 years ago
#1663 closed defect (fixed)
problem(?) in vote() function of views.py in tutorial04.txt of magic-removal /docs
Description
I'm heeding Adrian's call for m-r doc nitpicks.
My Python is pretty basic, but in the section of the m-r tutorial04.txt where the vote() function is defined, the code has this:
def vote(request, poll_id): ... selected_choice = p.choice_set.filter(pk=request.POST['choice']) ... selected_choice.votes += 1 ...
The pk= lookup returns a selected_choice that is a <class 'django.db.models.query.QuerySet'>, but that particular creature has no attribute votes to be incremented by the code a few lines further down, so it throws an AttributeError exception. I'm sure it's just a minor tweak of how you talk to the QuerySet class, but that is black magic that's way beyond me.
FWIW, I was able to poke around and find the votes attibute in the list returned by doing a ...
dir(selected_choice.model.objects.all()[0])
... but that's too ugly to be believed. So, anyways, I think there may be a problem here (but I could be wrong).
Change History (2)
comment:1 Changed 9 years ago by Dave St.Germain <dcs@…>
comment:2 Changed 9 years ago by anonymous
- Resolution set to fixed
- Status changed from new to closed
it should be p.choice_set.get(pk=request.POST['choice']) | https://code.djangoproject.com/ticket/1663 | CC-MAIN-2015-14 | refinedweb | 240 | 65.32 |
0
I wrote the below code and when I try to compile it I get the following error:
error C2109: subscript requires array or pointer type.
Can anyone tell me what I did wrong and how I can fix it? Thanks a lot! :)
// This program writes the bombsweeper game. It uses a two-dimensional array that is 5x5 that is initialized // at 0. 5 different pairs of random #s are generated (x,y) where a 1 at the location indicates a bomb. User // will input pairs to try to guess locations of the bombs for up to 7 guesses and at the end of the game // the location of the bombs will be displayed. User can repeat as many times as they want. #include "stdafx.h" #include <ctime> #include <cstdlib> #include <iostream> using namespace std; void userGuess(int bs[]); int main() { srand((unsigned)time(NULL)); // seed random # generator int bs[5][5] = {0}; // declare & initialize array // loop for 5 random sets of #s for (int j = 0; j < 5; j++) { int rn1 = rand() % 5; // random#1 int rn2 = rand() % 5; // random#2 bs[rn1][rn2] = bs[rn1][rn2] + 1; // put bomb in random location } // for void userGuess(int bs[]); cout << bs[5][5]; return 0; } void userGuess(int bs[]) { // loop for 7 guesses for (int i = 0; i < 7; i++) { // get user location guess int x, y; cout << "Enter your guess of the location of a bomb: i.e. 3 4): "; cin >> x >> y; if (bs[x][y] == 1) cout << "Good Job, you got one!" << endl; else cout << "Sorry, no bomb there. Try Again." << endl; } // for } // userGuess
Edited 7 Years Ago by mz_rigel: n/a | https://www.daniweb.com/programming/software-development/threads/227157/array-function-help | CC-MAIN-2016-50 | refinedweb | 275 | 75.54 |
Details
- Type:
Bug
- Status: Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: 0.23.0
- Fix Version/s: None
- Component/s: None
- Labels:None
Description.
Activity
- All
- Work Log
- History
- Activity
- Transitions
I have a more elegant work-around which doesn't involve deleting the data folders: edit the <hadoop-data-root>/dfs/data/current/VERSION file, changing the namespaceID to match the current namenode:
[jstehler@server19 ~]$ cat /lv_main/hadoop/dfs/data/current/VERSION
#Fri Aug 01 18:40:43 UTC 2008
namespaceID=292609117
storageID=DS-1525930547-66.135.42.149-50010-1217002151282
cTime=0
storageType=DATA_NODE
layoutVersion=-11
This allowed me to bring up the slave datanode and have it recognized by the namenode in the DFS UI. ?
Have had the same problem with version 0.19.0. On initial stage solved it deleting dfs.data.dir folders on the problematic datanodes and reformatting the namenode.
I saw this issue on our small 6-node cluster too. It took a while to identify the root cause of the problem. Symptoms were same as described here. In our case we have both 18 and 20 installed in our cluster, but we only run 20. A user saw the HDFS exception for their job, so they stopped 20 and thought of going back to 18 and tried to start it. And then they switched back to 20 again. In doing all this, version files of datanode and namenode got messed up and DNs n NN had different set of information in their version files. Apart from this peculiar usecase, as things are currently in hdfs, I think even one small misstep in upgrading the cluster can result in this bug, as is reported in previous comments. I think at the cluster startup time namenode and datanode should also exchange information contained in version file and in case of mismatch, they should reconcile the differences, potentially asking users input in case choices are not safe to make.
There are few workarounds suggested in previous comments. Which one of these is recommended one?
The second approach looks fine to me.
I feel the datanode losing blocks when it connects to empty namenode mistakenly is not a drawaback at all.
In the current scenario, even if a datanode mistakenly connects to another namenode, the probability of the namenode having the same blocks(of this datanode) in its blocksmap is very less. The namenode most times will invalidate the blocks...
When the datanode starts(after the namenode is formatted and started), can we override the namespace ID of the datanode with with the new namespace ID of the namenode instead of throwing exception?
The second approach looks fine to me.
Second approach is way too scary for me. -1.
In the current scenario data-nodes cannot mistakenly connect to another name-node as they will have different namespaceIds, and therefore blocks cannot be invalidated. Approach (2) will break this, but only in case of cTime=0.
Can we provide a config parameter saying
'datanode.format.required'
If this value is set to true, whenever the DN starts we can update the DN namespace id
with the NN namespace id.
If the value is set to false then we can continue with the existing behvaiour.
Kindly provide your comments.
If the value is set to false then we can continue with the existing behavior.
If it's configurable, I take back my -1.
However, please understand my worry. It's ops/support nightmare when datanodes report to incorrect namenode and lose millions of blocks at once. We had one case like that when one of our ops followed Jared's 'elegant approach' comment...
Why not the first approach if the second approach may cause data loss?
Though the second approach has a drawback.
The user has the option to configure if he wants the datanode to be formatted when namenode is formatted.
If the property is not configured then the behaviour will be in the normal way.
-1 overall. Here are the results of testing the latest attachment
against trunk revision 1134170.
StartupVersions
+1 contrib tests. The patch passed contrib unit tests.
+1 system test framework. The patch passed system test framework compile.
Test results:
Findbugs warnings:
Console output:
This message is automatically generated.
I think adding another config here is unnecessary. What's the downside of adding a "-format" flag to the datanode, and having "start-dfs -format" pass it along?
I agree with Todd and others. Option (1) seems to be the way to go.
If you add the config parameter, you will need to distribute new hdfs-site.xml to all data-nodes before formatting. Instead you could have just removed the storage directories.
I agree with you, But Removing complete storage directories will take good amount of time when huge number of blocks present. Instead we can just sync the namespeceIDs in DataNode startup based on the flag passed and let the blocks will be deleted asynchronously.
Uma, your approach doesn't work, if I understand it correctly. Block IDs are unique only within one cluster. If you change namespaceID on a DataNode the NN will treat that blocks as belonging to this cluster and can mix them up with those that were really created under the namespaceID.
Why would you optimize the format operation anyways? People actually don't format large clusters. I've never heard of such thing. Data is too important. So the format operation is mostly useful for small test clusters.
Option (1) gives an appropriate automation of manual removal of storage directories..
I agree with Konstantin. Allowing auto format may be dangerous.
I am ok with Option(1) as part of this JIRA, it gives an appropriate automation of manual removal of storage directories.. | https://issues.apache.org/jira/browse/HDFS-107?focusedCommentId=12977747&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-35 | refinedweb | 957 | 66.33 |
Results 1 to 2 of 2
- Join Date
- Oct 2011
- 27
- Thanks
- 2
- Thanked 0 Times in 0 Posts
Reversing the elements of an array please help?
Reversing the elements of an array involves swapping the corresponding elements of the array: the first with the last, the second with the next to the last, and so on, all the way to the middle of the array.
This is what i got but its not working. It compiles but wont change the contents of the array
using namespace std;
int main()
{
int score[10] = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
for(int k = 0; k < 10; k++)
{
int temp = score[k]; // store the first element value in temp
score[k] = score[9-k]; // store the last element value in the first element
score[9-k] = temp; // store temp in the last element
}
return 0;
}
- Join Date
- Sep 2002
- Location
- Saskatoon, Saskatchewan
- 17,026
- Thanks
- 4
- Thanked 2,668 Times in 2,637 Posts
Looks to me that it will. You just need to stop halfway through.
You should ditch the magic numbers and base it off of .length instead. | http://www.codingforums.com/java-and-jsp/277145-reversing-elements-array-please-help.html | CC-MAIN-2017-09 | refinedweb | 192 | 66.2 |
Opened 6 years ago
Closed 6 years ago
#18640 closed Bug (fixed)
django.contrib.gis.gdal.DataSource fields give gibberish or segfault when accessed directly
Description
if you hold a feature in a variable, and access a field, it will give the correct value.
however, if you access a feature's field by layer[<index>][<field_name>], it gives gibberish, or a segfault.
this might be related to. however, given that the ticket was closed, it might be a different issue.
script to reproduce issue:
import os import django.contrib.gis from django.contrib.gis.gdal import DataSource GIS_PATH = os.path.dirname(django.contrib.gis.__file__) CITIES_PATH = os.path.join(GIS_PATH, 'tests/data/cities/cities.shp') ds = DataSource(CITIES_PATH) layer = ds[0] feature = layer[0] field = feature['Name'] print "this is valid: %r (%r)" % (field.value, list(field.value)) field = layer[0]['Name'] print "but this isn't: %r (%r)" % (field.value, list(field.value))
in python, results in segfault.
in ipython:
this is valid: 'Pueblo' (['P', 'u', 'e', 'b', 'l', 'o'])
but this isn't: ':\xd5\xe2=\xb1\xb4\xb9\xb4Q' ([':', '\xd5', '\xe2', '=', '\xb1', '\xb4', '\xb9', '\xb4', 'Q'])
saved as script, results in:
this is valid: 'Pueblo' (['P', 'u', 'e', 'b', 'l', 'o'])
but this isn't: ([])
this is on an ubuntu 11.04, with django 1.4, and gdal 1.7
Attachments (2)
Change History (6)
Changed 6 years ago by
comment:1 Changed 6 years ago by
Changed 6 years ago by
comment:2 Changed 6 years ago by
Yeah this is a problem. I've updated the implementation, but will need to review more when I get home from the sprints.
Add parent references to GDAL Feature/Field | https://code.djangoproject.com/ticket/18640 | CC-MAIN-2018-47 | refinedweb | 281 | 67.04 |
Tips
Tips and tricks #TIL and stuff I learned with Vue.js and JavaScript in general.
8 - Switch Case Components
Instead of having multiple
v-ifand
v-else-ifdirectives to display a specific component:
<First v- <!-- Some Children --> </First> <Second v- <!-- Some Children --> </Second> <Third v-else <!-- Some Children --> </Third>
Create a functional component and have it act as a
switchfor those components:
export default { name: 'SwitchComp', functional: true, props: ['condition', 'another'], render (h, { props, slots }) { if (props.condition) { return h('First', { props: { foo: 'x' } }, slots().first); } if (props.another) { return h('Second', { props: { bar: 'y' } }, slots().second); } return h('Third', { props: { baz: 'z' } }, slots().third); } };
Then your template would be much cleaner, we could then give each component its own children with named slots like this:
<SwitchComp : <template v-slot:first> </template> <template v-slot:second> </template> <template v-slot:third> </template> </SwitchComp>
7 - Computed Properties as Watchers
Some times you need to do something like this:
export default { computed: { someProp () { // ... this.someValue = 'something'; // ... } } };
While this isn't recommended it will do the job fine, the interesting thing here is that the
someValueprop will only be set when the computed property is re-evaluated whenever one of its reactive dependencies changes. However this is very dangerous as you may easily end up with an infinite loop.
A safer alternative is to use the
$watchAPI like this:
vm.$watch(function () { // Same code as the computed property. }, function (value) { // this will run when the evaluated value in the above function changes. });
This works as an alternative only when the computed property sole purpose is to set the
someValueprop. However if you absolutely need your computed property to set values, my only recommendation that if you cannot avoid it is to do that only with non-reactive properties.
6 - Injecting env in Nuxt apps
Nuxt offers multiple ways to define your env, but when you need more control over what gets injected I found this snippet very useful.
// in store/index.js export const state = () => ({ env: { API_URL: '', SOME_KEY: '' } }); export const mutations = { FILL_ENV(state) { // Fills the env only with the predefined keys in the state. // because we don't want everything. Object.keys(state.env).forEach(key => { state.env[key] = process.env[key]; }); } }; export const actions = { nuxtServerInit({ commit }) { commit('FILL_ENV'); // populates the env. } };
Then your env will be available to both server-side and client-side with:
this.$store.state.env;
This works very well for us and we don't have to define any ENV in the build step. This is very cool especially if your env is dynamically fetched.
5 - Use better names before you decide to use some fancy abstraction
I recently saw this:
// fetches a document by id. function fetchById() { //... } // fetches data by multiple ids. function fetchByIds() { // ... } // Fetches data with additional sub-documents. function _fetchById() { // ... }
And it was decided that to fix this, we need to use a service class/provider and dependency injection.
To me, This is just a case of very bad names, we can do this instead:
// fetches a document by id. function _fetchById() { //... } // fetches data by multiple ids. function _fetchMultipleIds() { // ... } // Simple overloading function fetchById(arg) { if (Array.isArray(arg)) return _fetchMultipleIds(arg); return _fetchById(arg); } // Just Call it what it is! function fetchWithSubdoc() { //... }
Abstractions are a tool. Don't use an orbital cannon if a hammer would do, Even if you would need an orbital cannon down the line. Just cross that bridge when you come to it.
4 - Mini state storesYou don't have to pull in Vuex just for a single global state object.
Keep it simple, if you have a simple state for your small app. You could use a Vue instance to store your state in and make it reactive for other components that imports it.
// in store.js export const state = new Vue({ data: () => ({ // your state }) });
Even Better yet, you could use
Vue.observablein Vue 2.6 releases:
// in store.js export const state = Vue.observable({ // Your state });
3 - Nullable !== OptionalCareful when marking a field in your GraphQL schema as nullable.
In GraphQL schemas, a nullable field is often labeled as optional. This is true in many cases, but always remember than the client-side can always send in null as well.
That means you could be working with either
undefiendor
nulldepending if the client provided it or not.
input PostInput { title: String! body: String! summary: String # This can be null! }
In mathematical terms, that means that the set of nullable fields includes both missing fields and existing fields with a null value which may not what you expect in your API. Always validate in your Backends!
2 - Specific Bindings vs v-bind allBe careful when debugging a 'v-bind' as specific bindings take priority.
<template> <div :</div> </template> <script> export default { data: () => ({ attrs: { id: 3 } }) }; </script>
This will always bind the
idprop to
17, and it doesn't matter if you switch the order.
1 - Vue Component with Private Properties in TSThis comes in handy if you want to add some private properties to a Vue component when working with TypeScript.
import Vue, { VueConstructor } from 'vue'; type withStuff = VueConstructor< Vue & { // Your private properties. $privateProp: number; $otherProp: string; } >; export const MyComponent = (Vue as withStuff).extend({ // component opts }); | https://logaretm.com/tips/ | CC-MAIN-2020-34 | refinedweb | 867 | 65.93 |
By Joe
Pranevich "Sleeping in the
Flowers. The names of nearly all files
in the /dev directory have been changed, but compatibility names
are provided. Most applications should not even notice the changes.
(Low level applications, such as the ppp daemon, or other programs
which rely on an intimate connection with the kernel will most
likely not be 100% compatible between major kernel revisions.) If
you are the type to update your distribution manually, please be
sure to read the CHANGES file and update any necessary packages
before submitting bug reports.
jpranevich@linuxtoday.com
jpranevich@lycos.com
(Work)
< This work has absolutely nothing to do with Lycos, my
employer. The views here are all mine and this article does not
constitute an endorsement from Lycos or anything of the sort. I do
enjoy working for them however. Reproduction or translation of this
article is fine, with permission. Email me. >
-- dependent on a particular architecture. For
example, the ADB (Apple Desktop Bus) mouse driver isn't really
applicable on the i386 port and so isn't supported. Linux kernel
developers strive to make drivers as general as possible, so as to
allow a driver to be reused with relatively little effort on a
different platform if a device because
I lack the time and the knowledge, it should be mentioned that
Linux 2.4 adds support for three new architectures: ia64 (Itanium),
S/390, and SuperH. I have no experience with these platforms so am
unsure as to their level of hardware support, etc. (Or even what
classes high bandwidth devices. While Linux 2.2 included support for the
IO-APIC (Advanced Programmable Interrupt Controller) on
multi-processor systems, Linux 2.4 will support these on
uni-processor systems and also support machines with multiple
IO-APICs. The support for multiple IO-APICs will allow Linux 2.4 to
scale much better than previous incarnations of Linux on high-end
hardware. embedded and older processors,
including processors without MMUs. The work presently is based
around the Linux 2.0 kernel and has largely not been integrated
into the master tree.
Linux 2.2 was a major improvement over Linux 2.0 and the Linux
1.x series. It supported many new filesystems, a new system of file
caching, and it was much more scalable. (If you want a list of
features new to Linux 2.2, you can read my article about it.) drive your
disks, read your files, and do all of the obvious and physical
things. Linux 2.4 is however much more than just these components.
These assorted drivers and APIs all revolve around a common center
of the Linux kernel. This center includes such fundamental features
as the scheduler, the memory manager, the virtual filesystem, and
the resource allocator.
Linux 2.4 is the first release of the Linux kernel which will
include a full-featured resource management subsystem. Previous
incarnations of Linux included some vestiges of support, but it was
considered kludgy and did not provide the functionality needed for
the "Plug and Play" world. Unlike many of the other internal
changes, many users will be able to directly experience this change
as it impacts the way resources are allocated and reported in the
kernel. As part of this change, the PCI card database deprecated in
Linux 2.2 has been un-deprecated so that all resources can have an
associated device name, rather than just an associated driver.
The new release of the Linux kernel also fixes some problems
with the way the VFS (virtual filesystem) layer and the file caches
were handled. In older versions of Linux, file caching was
dependent on a dual-buffer system which simplified some issues, but
caused many headaches for kernel developers who had to make sure
that it was not possible for these buffers to be out of synch.
Additionally, the presence of the redundant buffer increased memory
use and slowed down the system as the kernel would have to do extra
work to keep things in synch. Linux 2.4 solves these problems by
moving to a simpler single-buffer system.
A number of changes in Linux 2.4 can be described as "enterprise
level." That is, they may not be immediately useful to many desktop
users by work to strengthen Linux as a while. 4 gigabytes of RAM on Intel hardware, up
to 16 ethernet cards, 10 IDE controlletrs, your distribution when
they become ready for Linux 2.4.
Linux 2.4 also includes a much larger assortment of device
drivers and supported hardware than any other Linux revision and
any particular device you care to name has a decent shot at working
under Linux 2.4. (Of course, you should consult the documentation
before you go out and buy any new hardware, just in case. New
hardware especially may not be supported yet.)
One frequently asked question about Linux 2.4 is how much memory
it will require. Many operating systems seem to require more and
more memory and resources as they mature, but Linux 2.4 will
largely buck that trend by actually requiring less memory in
certain situations. Of course, Linux 2.4 includes much more
functionality than does Linux 2.2 and many of these features do
take up space so your mileage may vary. (Remember that most kernel
components can be disabled at compile-time, unlike many other
operating systems.)/..." (I'm not
sure exactly what the new naming convention will be.) This modified
scheme increases the available namespace for devices and allows for
USB and other "modern" device systems to be more easily integrated
into the UNIX/Linux the older names could be. (What, for
instance, would happen if you had more than 26 harddisks?) (USB), an external bus that is coming into prominence
for devices such as keyboards, mice, sound systems, scanners, and
printers. USB is a popular option on many new pieces of hardware,
including non-Intel hardware. Linux's support for these devices is
still in early stages but a large percentage of common USB hardware
(including keyboards, mice, speakers, etc.) is already supported in
the kernel.
More recently, Firewire support has been added into the Linux
kernel. Firewire is a popular option for many high-bandwidth
devices. Not many drivers (or devices) exist for this hardware
architecture yet, but this support is likely to improve over time,
as the architecture matures.
In its simplest form, a block device is a device.
In Linux 2.4, all the block device drivers have been rewritten
somewhat as the block device API has been changed to remove legacy
garbage from the interface and to completely separate the block API
from the file API at the kernel level. The changes required for
this API rewrite have not been major. However, module maintainers
who have modules outside the main tree may need to update their
code. (One should never assume full API compatibility for kernel
modules between major revisions.) are shipped with a maximum of two, this is not likely
to impact many desktop users. Secondly, there have been changes in
the IDE driver which will improve Linux 2.4's support for PCI and
PnP IDE controllers, IDE floppies and tapes, DVDs and CD-ROM
changers. And finally, Linux 2.4 includes driver updates which
should work around bugs present in some IDE chipsets and better
support the advanced features of others, such as ATA66.
While it would seem that the SCSI subsystem has not changed as
much as the IDE subsystem, the SCSI subsystem has been largely
rewritten. Additionally, a number of new SCSI controllers are
supported in this release. A further SCSI cleanup is expected
sometime during the 2.5 development cycle.
One completely new feature in the Linux 2.4 kernel is the
implementation of a "raw" I/O device. A raw device is one whose
accesses are not handled through the caching layer, instead going
right to the low-level device itself. A raw device could be used in
cases where a sophisticated application wants complete control over
how it does data caching and the expense of the usual cache is not
wanted. Alternatively, a raw device could be used in data critical
situations where we want to ensure that the data gets written to
the disk immediately so that, in the event of a system failure, no
data will be lost. Previous incarnations of this support were not
fit for inclusion as they required literally doubling the number of
device nodes so that every block device would also have a raw
device node. (This is the implementation that many commercial
UNIXes use.) The current implementation uses a pool of device nodes
which can be associated with any arbitrary block device.
One huge area of improvement for Linux 2.4 has been the
inclusion of the LVM (Logical Volume Manager) subsystem into the
mainstream kernel. This is a system, standard in Enterprise-class
UNIXes such as HP-UX and Tru64 UNIX (formerly Digital UNIX), that
completely rethinks the way filesystems and volumes are managed.
Without going into too many details, the LVM allows filesystems (defacto) standards-compliant
manner and in a way that will be at least somewhat familiar to
users of commercial UNIXes.
In addition to many of the other block device changes, Linux 2.4
also features updated loopback and ramdisk drivers which fix some
bugs in certain situations.
Block devices can be used in a number of ways. The most common
way to use a block device
XFS (aka Linux supports an extension to the UFS filesystem that
NextStep uses. It should be noted that HFS+, the new Macintosh
filesystem, is not yet supported by Linux. have,
for example, an external SCSI drive from a Macintosh and you want
to use it on your Linux PC. A number of new partition table types
have been added, including the format for IRIX machines.
Not all filesystems are mounted over block devices. bug fixes on an as-needed basis. This will vastly improve
Linux's ability to operate in networks with multiple versions of
Windows.
In the UNIX world, the Network Filesystem (NFS) protocol is the
method of choice for sharing files. Linux 2.4 includes for the
first the.
One of the largest improvements in this area is in regards to
Linux 2.4's support for keyboards and mice. Previous incarnations
of Linux included support for serial and PS/2 mice and keyboards
(and ADB, for instance, on the Macintosh.) Linux 2.4 also supports
using keyboards and mice attached to the USB ports. Additionally,
Linux 2.4 also supports keyboards on some systems where the
keyboard is not initialized by the BIOS and systems that have
trouble determining whether a keyboard is attached or not. And
finally, Linux 2.4 includes expanded support for digitizer pads and
features an emulation option to allow them to be used as normal
mice, even when this is not directly supported in hardware.
Linux's support for serial ports has not changed much since the
days of Linux 2.2. Linux 2.4 (and some later versions of Linux 2.2)
supports sharing IRQs on PCI serial boards; previously, this
feature was limited to ISA and on-board serial ports. Additionally,
Linux 2.4 has added a number of new drivers for multi-port serial
cards. It is hoped that these changes and others will make using
your serial ports under Linux 2.4 easier than before.
In a separate department, there has been some work since 2.2 on
supporting so-called "WinModems" (or "soft modems"). These are
modems which exist largely in software and whose drivers are often
only provided by the manufacturer for Windows. (Often the DSP or
other parts of the hardware must be implemented in software rather
than on the board.) While no code has been submitted to Linus for
the support of these beasts, several independent driver projects
have been working to get some support for these beasts in and the
first fruits of these labors are becoming usable outside the main
tree. While it will be a long time before we see most of these
devices supported under Linux, for the first time it actually
appears that the Open Source snowball is beginning to roll in this
direction.
Linux 2.4 also includes a largely rewritten parallel port
subsystem. One of the major changes in this area is support for
so-called "generic" parallel devices. This functionality can be
used by programs which access the parallel ports in unusual ways
or, more likely, just want to probe the port for PnP information.
Additionally, this rewrite allows Linux 2.4 users to access all the
enhanced modes of their parallel ports, including using UDMA (for
faster I/O) if supported by the hardware. Under the new Linux
kernel, it is also possible to direct all console messages to a
parallel port device such as a printer. This allows Linux to match
the functionality of many commercial UNIXes by being able to put
kernel and debug messages on a line printer..
Linux 2.4 includes a number of new drivers and improvements to
old drivers. Especially important here is Linux's support for many
more "standard" VGA cards and configurations, at least in some
modes. (Even if the mode is only 16 colors-- at least it works.)
support for the X Window System. (SVGAlib and other libraries allow
you to do direct video manipulation on supported hardware, however
the use of these libraries must be done carefully as there are some
security concerns and race conditions.)
One of the biggest changes in this respect is the addition of
the Direct Rendering Manager to the Linux kernel. The DRM cleans up
access to the graphics hardware and eliminates many ways in which
multiple processes which write to your video cards at once could
cause a crash. This should improve stability in many situations.
The DRM also works as an entry point for DMA accesses for video
cards. In total, these changes will allow Linux 2.4 (in conjunction
with Xfree4.x and other compatible programs) to be more stable and
more secure when doing some types of graphics-intensive work. These
changes should also make some kinds of television tuner cards more
workable under Linux. correctable deficiencies. Under Linux 2.2 and previous
versions, if you have a number of processes all waiting on an event
from a network socket (a web server, for instance), they will all
be woken up when activity which will allow us
to completely remove the "stampede effect" of multiple processes. as unserialized as possible so that it
will scale far better than any previous version of Linux.
Additionally, the entire subsystem has been redesigned to be as
stable as possible on multiprocessor systems and many races have
been eliminated. In addition, it contains many optimizations to
allow it to work with the particular quirks of the networking
stacks in use in many common operating systems, including Windows.
It should also be mentioned at this point that Linux is still the
only operating system completely compatible with the letter of the
IPv4 specification (Yes, IPv4; the one we've been using all this
time) and Linux 2.4 boasts an IPv4 implementation that is much more
scalable than its predecessor.
As routing through any Linux box. Previously,
this kind of functionality was largely only available with
dedicated and proprietary routing hardware. Unfortunately, this
major rewrite also includes a.
For Enterprise-level users, there are a number of features that
will better enable Linux to integrate into older and newer
components of existing network infrastructures. One important
addition in this respect is Linux 2.4's new support for the DECNet
and ARCNet protocols and hardware. This allows for better
interoperation with specialized systems, including older
Digital/Compaq ones. Also of special interest to this class of
users, Linux 2.4 will include support for ATM network adapters for
high-speed networking. serial
device PPP layer, such as for dial-up connections with modems. In
addition to the modularity, ISDN has been updated to support many
new cards. The PLIP (PPP over parallel ports) layer has also been
improved and uses the new parallel port abstraction layer. And
finally, PPP over Ethernet (PPPoE, used by some DSL providers)
support has been added to the kernel.
Although not present in Linux 2.4, there is work now on
supporting the NetBEUI protocol used by MS operating systems. While
Microsoft will be moving away from this protocol in its products
and towards TCP/IP, this protocol is still important for a number
of Windows-based network environments. (Previously, kernel
developers had commented that the protocol is too convoluted and
buggy to be supported in the kernel. Now that an implementation has
surfaced, it remains to be seen whether it will be stable enough to
ever be in an official kernel.)
kernel.
Linux 2.2 and Linux 2.0 included built-in support for starting a
Java interpreter (if present) whenever a Java application was
executed. (It was one of the first OSes to do this at the kernel
level.) Linux 2.4 still includes support for loading Java
interpreters as necessary, but the specific Java driver has been
removed and users will need to upgrade their configurations to use
the "Misc." driver. cryptography in
the main distribution. Import and export regulations for
cryptography are different around the world and many Linux
developers are loth.
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. | http://www.linuxtoday.com/developer/2000041100304NWLF | CC-MAIN-2017-26 | refinedweb | 2,929 | 55.44 |
I'm having a problem with connecting two tables in FluentAPI. It's in fact a mix of FluentAPI and Data Annotations. I Looked at this question but It didn't help me. I tried with
Index, composed unique keys.
Basically
Foo is the main table.
Bar table is optional. The are connected via two columns.
key1 and
key2 pairs are unique. Think of it as a parent-child relationship with a restriction that 1 parent can have only 1 child:
Data Entities looks like this:
[Table("foo")] public class Foo { [Key] [Column("pk", TypeName = "int")] [DatabaseGenerated(DatabaseGeneratedOption.Identity)] public int FooId { get; set; } [Column("key1", TypeName = "int")] public int Key1 { get; set; } [Column("key2", TypeName = "int")] public int Key2 { get; set; } public Bar Bar { get; set; } } [Table("bar")] public class Bar { [Key] [Column("key1", TypeName = "int", Order = 1)] public int Key1 { get; set; } [Key] [Column("key2", TypeName = "int", Order = 2)] public int Key2 { get; set; } public Foo Foo { get; set; } }
Here's how I was trying to connect them:
modelBuilder.Entity<Bar>().HasRequired(p => p.Foo).WithOptional(p => p.Bar);
What is wrong?
Bar DOES require
Foo.
Foo DOES have optional
Bar. <--- this should be totally enough, because
Foo has columns named exactly like primary keys in
Bar. But it doesn't work.
So I tried specifying foreign keys:
modelBuilder.Entity<Bar>().HasRequired(p => p.Foo).WithOptional(p => p.Bar).Map(p => p.MapKey(new[] { "key1", "key2" }));
It says:
"The number of columns specified must match the number of primary key columns"
Whaaaat? how? how come? Uhh..
I also tried:
modelBuilder.Entity<Bar>().HasIndex(table => new { table.Key1, table.Key2 });
So my questions are:
Why my solution doesn't work? I do have complex key specified
How can I slove it?
This is going to be a little complicated, and I might be completely wrong here, but from my experience EntityFramework relations don't work that way. It seems to me that if Foo is required and Bar is optional, then each Bar should have a way to join back to Foo uniquely based upon Foo's pk value.
That is to say that Bar should be defined as:
[Table("bar")] public class Bar { [Key] [Column("key1", TypeName = "int", Order = 1)] public int Key1 { get; set; } [Key] [Column("key2", TypeName = "int", Order = 2)] public int Key2 { get; set; } public int FooId { get; set; } public Foo Foo { get; set; } }
You would then need to use this FooId in your description of the relationship, not the composite key contained in Bar. EntityFramework has always required me to join on the entire primary key of the parent POCO, which must be a foreign key of the child POCO. You may still be able to join through the child's key in LINQ queries. | https://entityframeworkcore.com/knowledge-base/50161664/entity-framework-relationship-with-two-foreign-keys | CC-MAIN-2020-40 | refinedweb | 461 | 72.26 |
I am trying to write a program that will give me the volume of a cylinder when I run it. I already have a Circle class that computes the area of a circle. I am also using a seperate program to handle the input and output. Could someone tell me what I am doing wrong with my program?
public class Cylinder { // calls up and creates an object from the Circle class Private Circle base; Private double Height; //Constructor Public Cylinder(double h, double r) { // creates object from cirle class Circle base = new circle(r); height = h; } public double getVolume() { return CircleBase.area * height; } } | https://www.daniweb.com/programming/software-development/threads/229188/help-i-am-a-newbie | CC-MAIN-2017-34 | refinedweb | 103 | 59.64 |
In this post I’m going to explain the use of the ASP.Net MVC Framework’s BindingHelperExtensions class and how to use it as what I call a UI mapper (map user input fields to an object). The BindingHelperExtensions has the following five methods:
T ReadFromRequest<T>(this Controller controller, String key)
string ReadFromRequest(this Controller controller, String key)
void UpdateFrom(Object value, NameValueCollection values)
void UpdateFrom(Object value, NameValueCollection values, string objectPrefix)
void UpdateFrom(Object value, NameValueCollection values, string[] keys)
I will not go deep into the ReadFromRequest method, but they can be used to get a value out from the Request object, such as a QueryString or Form field. They are also extensions method to the Controller class. In this post I will write about the UpdateFrom methods which can be used to easy map a View’s user input fields to an object.
Note: The UpdateFrom methods aren’t extension methods.
The UpdateFrom methods can be used inside of a Controller’s action method to update an object in the Model based on input values. For example if you have a Customer object and want to fill it with values out from the Request.From, you can do the following in your Controller:
Binding.UpdateFrom(customer, Request.Form);
In the examples in this post I will use an alias:
using Binding = System.Web.Mvc.BindingHelperExtensions;
I only use it to avoid using the name of the BindingHelperExtension class to get access to the UpdateFrom methods.
The UpdateFrom will see if there is an input field in the Request.Form collection with the same name as the property of the object passed as an argument. If there is a match, the value from input filed will be set to the value of the object’s property with the same name. For example if you have an input filed like this:
<input type="text" name="CompanyName" .../>
And if the Customer object you pass to the UpdateFrom method has a property with the same name; the value from the CompanyName field will be set to the Customer’s CompanyName property. If you have a View like this:
<input type="text" name="CompanyName" />
<input type="submit" name="City" value="Create"/>
The value of the submit button will be set to the Customer object if it has a property with the name City. So be careful what name you set to the input elements. This can of course be handled by using one of the other two UpdateFrom methods. For example the UpdateFrom that takes a objectPrefix as an argument, can be used to pass in a prefix of the input field names which should be collected from the values argument and be set to the value argument.
void UpdateFrom(Object value, NameValueCollection values, string objectPrefix)
For example if you have input fields that you want to belong to a specific object in the Model, you can use an object prefix like this:
<input type="text" name="Customer.CompanyName" />
<input type="text" name="Customer.ContactName" />
<input type="text" name="Product.Description" />
Note: You can also use "_" to sepereate the prefix from the property name "Customer_CompanyName".When you use the UdateFrom method and you only want to map the values from the input fields with the prefix Customer to a Customer object passed to the UpdateFrom, you simply pass the “Customer” as an objectPrefix to the UpdateFrom method:
Binding.UpdateFrom(customer, Request.Form, "Customer");
The UpdateFrom will now only collect fields with the prefix “Customer”, and the names after the prefix will be mapped to properties of the Customer object with the same name. By using object prefix you can simply categorize input fields to be mapped to different objects. The last UpdateFrom method you can use is the one that takes an array of keys as an argument.
void UpdateFrom(Object value, NameValueCollection values, string[] keys)
By passing a string array with keys, only the specified keys will be collected from the values argument collection and be set to the value object’s properties. By using keys and pass a Request.Form to the values argument you can decide which input fields you want to collect.
Note: The values argument of the UpdateFrom takes an object of type NameValueCollection, so you pass for example pass the Request.Params collection if you want to get values from both QueryStrings and Forms. You can also pass your own NameValueCollection as you probably already have figured out ;) To end this post I will give you a simple example how to use the UpdateFrom and with LINQ to SQL to create a new Customer to the Northwind database.
public class CustomerController : Controller
{
private NorthwindDataContext dbc = new NorthwindDataContext();
public void Index()
{
RenderView("Customer");
}
public void New()
{
Customer customer = new Customer();
Binding.UpdateFrom(customer, Request.Form, "Customer");
dbc.Customers.InsertOnSubmit(customer);
dbc.SubmitChanges();
}
}
Customer.aspx
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Customer.aspx.cs" Inherits="MvcUpdateInsertExample.Views.Customer.Customer" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="" >
<head runat="server">
<title></title>
</head>
<body>
<div>
<form action="Customer/New" method="post">
Company name: <input type="text" name="Customer.CompanyName" /><br />
Contact name: <input type="text" name="Customer.ContactName" /><br />
<input type="submit" value="Create"/>
</form>
</div>
</body>
</html>
I hope this post gave you some brief understanding about how you can use the BindingHelperExtensions class to easy handle user input and map it to an object in the Model.
Hello Fredrik,
I cannot understand why the second parameter is the NameValueCollection type.
Fields in Html/ASPX page should have an unique ID, like properties in a Type (unique Name).
The NameValueCollection collection stores multiple string values under a single key. IMHO I believe that a StringDictionary is more correct for this scenario, because you cannot have multiple values for a specific type/property.
What's your opinion?
Best Regards,
Hi Fredrik,
What about the reverse of this..Is there a way to set all the form field values without DataBinding each individual input control?
Israel Ace:
Actaully several HTML input fields can have the same name, one example are checkboxes. This is from the W3C:
"Several checkboxes in a form may share the same control name. Thus, for example, checkboxes allow users to select several values for the same property. The INPUT element is used to create a checkbox control."
If you have several fields with the same name and use Request.Form to request a value, all the values of the fields with the requested name will be returned as a comma seperated string. It's exactly how the NameValueCollection also work.
Jake Scott:
If you use standard INPUT elements in the View, you can't access them from the server-side. So you need to add server-side script block into the field to set the values, for example:
<input type="test" value="<%= ViewData.CompanyName%>" .../>
Too many <% %> and <%= %> will clutter up the page and might make debugging and maintainence not easy.
Server controls simplifies development a lot in postback mode. I think a set of mvc controls can make databinding much easier.
thanks,
bill xie
Pingback from ASP.NET MVC - Daten vom View zum Controller übermitteln | Code-Inside Blog
This post is different from my others, this post is a step by step guide to create a ASP.NET MVC Web
Yesterday I posted a step by step guide by using the Preview 2 of the ASP.Net MVC Framework, the following
Ok, very good, however what about type checking in the binder. Currently it looks like the binder simply ignores type errors and returns no indication that a problem has occured. Which means that before you can do any binding at all you would have to check the form fields to ensure the correct TYPE of data was submitted. Ok, only int's , dates and other would need checking.
How do you handle situations where the form is inside a content page?
I am getting "ctl00$MainContent$Length" as the key for the length input.
Thanks,
Chris
I have a question about dynamically generated forms. Hypothetical situation: You have a form where a registered member of your website has a list of friends, and each friend can have a phone number stored with them. You have a page where all friends are listed and their phone numbers are in text boxes next to their names. The number of friends can vary from member to member... How would you go about using the binding helper to get values from an unknown number of fields? Is there a way to handle it if the text boxes have unique names? Do you have to name them all the same and parse the list yourself on the server side?
Very useful indeed, thanks alot Scott.
I have a question, how can you take data from the a view into a field without having declared a parameter to the method.
e.g.
the view is set to recognise all product objects (this is in the view code)
public partial class Add : ViewPage<ProductControllerViewData>
View html:
<label for="ProductNameLabel">Product Name:</label>
<%= Html.TextBox("ProductName")%>
Controller:
public void SaveProduct()
{
try
{
String ProductName = ...
here how can i get this productName value that is entered into the textbox from the form?
Sorry i meant Fredrik not Scott
An interesting issue was brought up on the S#arp Architecture discussion today (it was actually brought
how i bind dropdownlist from mvc ? | http://weblogs.asp.net/fredriknormen/archive/2008/03/13/asp-net-mvc-framework-2-the-bindinghelper-class-ui-mapper.aspx | crawl-002 | refinedweb | 1,573 | 61.77 |
This topic introduces the P2P acceleration feature, including its basic concepts, configurations, how to use it, and how to troubleshoot errors.
What is P2P acceleration
When an ECS instance pulls an image, all image data comes from the server. When dozens of ECS instances pull the same image at the same time, the server maintained by Alibaba Cloud ensures a smooth download experience. However, if your cluster consists of hundreds or even thousands of ECS instances, the server bandwidth limit can throttle your image distribution speed.
Container Registry Enterprise Edition supports P2P acceleration, which significantly improves the image download speed when a large number of cluster nodes are pulling the same image. This helps speed up application deployment.
- P2P acceleration performs better when the cluster contains more than 300 nodes.
- We recommend that you deploy cluster nodes across multiple zones and VSwitches.
- We recommend that you use ECS instances that support local SSD or have large memory.
- P2P acceleration may not be effective when the cluster contains a small number of nodes or idle memory is insufficient.
Configure the P2P acceleration plug-in
Currently, the P2P acceleration plug-in supports the following cluster types: Kubernetes cluster, multi-zone Kubernetes cluster, and managed Kubernetes cluster. Serverless Kubernetes cluster is not supported.
To install the plug-in, log on to a Linux or Windows server and use kubectl to connect
to your Kubernetes cluster. Run the
kubectl get pod command. If the output indicates that the cluster is running normally, you can then
install the plug-in.
We recommend that you use SSH to log on to a random node in the cluster. To log on to a worker node, see Connect to Kubernetes clusters through kubectl.
We recommend that you modify the
max-concurrent-downloads parameter in dockerd to speed up image download. Default is 3. You can change it
to a value between 5 and 20. For more information, see this Docker official document.
The installation script differs depending on the instance. Log on to the Container Registry console and check the P2P Acceleration page for detailed instructions.
Pull an image through P2P acceleration
To pull an image through P2P acceleration, you need to use a domain whose name contains
word
distributed. For example,
hello-df-registry-vpc.distributed.cn-hangzhou.cr.aliyuncs.com:65002. For more information, see the Install the P2P acceleration plug-in through a script
section on the P2P Acceleration page.
By default, port 65002 is used. If you want to use port 443, you can specify
export PORT="443" when you install the plug-in. Note that this will occupy port 443 on all nodes.
Before you pull an image, you need to log on to the corresponding image repository.
For example,
docker login hello-df-registry-vpc.distributed.cn-hangzhou.cr.aliyuncs.com:65002. To pull an image, use the docker pull command or specify the image when you create
an application in the console. For example, to pull from image repository bar under
namespace foo, run the following command:
docker pull hello-df-registry-vpc.distributed.cn-hangzhou.cr.aliyuncs.com:65002/foo/bar.
When you pull an image through P2P acceleration, image layer data is pre-downloaded in the background and then transmitted to Docker Engine. This is the reason that the download progress bar remains stuck at the beginning and reaches 100% within a short time later.
Performance
In testing, 300 ecs.i2.xlarge nodes, each of which uses a local SSD and has a specification of 4-core 8 GB, are used to concurrently pull an image, which consists of 4 image layers and each layer is 512 MB in size. The download time is shortened by 80% compared with when P2P acceleration is not used.
Troubleshooting
To list the pods that have installed the P2P acceleration plug-in, run the following command:
kubectl get pod -n cr-dfagent -o wide
- If the number of pods is not the same as that of worker nodes:
- Check whether the nodes where no pod is deployed have taints, which affect the scheduling of DaemonSet.
- Install the P2P acceleration plug-in again.
- If some of the pods are in the CrashLoopBackOff state, run the
kubectl logscommand to view logs about these pods.
kubectl -n cr-dfagent logs -f POD_NAME df-agent kubectl -n cr-dfagent logs -f POD_NAME df-nginx
If the issue persists, submit a ticket. | https://www.alibabacloud.com/help/doc-detail/120603.htm | CC-MAIN-2021-04 | refinedweb | 734 | 55.03 |
I would really advice you to learn about for-loops. They would cut down your code tremendously.
I would really advice you to learn about for-loops. They would cut down your code tremendously.
Thats what the Setters are for.
If you write:
private Rectangle(int width, int length) {
...
}
Then the constructor has been declared "private" and can not be used by any other classes other then the Rectangle class.
You can...
Your code is missing closing curly brackets. Whenever you open them you also have to close them later on.
Besides that the error message is pretty much self-explanatory, is it not?
Or instead just a custom class extending JDialog if JOptionPane does not give you the flexibility you want.
We would just be repeating what all the other tutorials and textbooks say because the errors you make are so basic there is not much more to say about them. I dont feel like repeating textbooks.
You should really go over the basics once again before trying this program, there are countless errors in the program. Starting with misplaced semicolon and variables that have no type defined. You...
I dont even know why java allows stupid things like that. Sometimes one has to wonder what the people were thinking when they wrote the ruleset.
Setters and Getters are created for private member variables. You could google them to read more information on how these methods usually look like. A simple example would be:
public class Coord {...
But the error message says something different. The IDE might highlight that because there might be a compile-time problem. However, your application crashed because of a runtime-problem, namely an...
The system cannot find the file specified. Is there anything unclear?
What do you want to use them for?
I am not going to read 50+ lines of code just to find that out. If the key is a string array, why are you not able to save it in a variable?
But what is it? A number? A string? A complex object with several attributes?
What exactly is this "key" you are talking about? Perhaps you should create a class for that or an interface.
If you want to save something use a variable.
As far as I know groovy code is translated into java byte-code before being interpreted, so everything possible with groovy should be possible with java too. Its just the different way to use it.
Read the API for the JTree class perhaps.
The error message is quite clear, it says it can not find a class called StateController.
Do not use the "--" operator when calling methods like that. The results will not be what you expect.
Instead you should just use "n - 1" or "k - 1". The problem with the "--" operator is that it...
This is impossible. If you are dealing with students you have no way of making this happen. You can not control their computers, you can not monitor their houses. These little buggers will always...
Yes exactly.
You have 5 items in your array. You have your loop run from 0 to 3 which is 4 indices. But then you only check for the fourth item in your if-condition. This happens because the...
Because it isnt.
You have a value on the left hand side of the assignment operator, and a value on the right hand side.
It is as if you would write "5 = 2 + 2". Its not an assignment, its not...
The curly brackets { and } have a meaning in java. The way you use them is the source of your problem. You have misplaced the brackets for your loop and thus your program does not do what you would...
Could you try to explain to us what you are trying to do in the erroneous line?
Try to explain it step by step:
Math.pow(hypotenuse,2)=Math.pow(side_1,2)+Math.pow (side_2,2); | http://www.javaprogrammingforums.com/search.php?s=2c5ba491ec12171dbd4ada630053c3de&searchid=1074050 | CC-MAIN-2014-41 | refinedweb | 656 | 77.23 |
From: John Femiani (JOHN.FEMIANI_at_[hidden])
Date: 2008-05-10 06:57:04
Bruno wrote:
> >>1. Prefer metafunctions in the point concepts requirements
> >> over traits classes, or I'm afraid the traits will get huge.
> >
> > If the traits are huge the abstraction is being made in the
> wrong place.
> > A good abstraction is a min-cut in a graph of dependencies. A huge
> > number of metafunctions seems equally bad to me. Instead the goal
> > should be to minimize the number of traits/metafunctions
> required to
> > be understood/specialize by the user.
>
> I know the principle of avoiding blob traits, as exposed in
> Abrahams and Gurtovoy's book. But I think it doesn't apply
> here just because the traits in question is *way* short. A
> type, a value, an accessor.
> And most algorithms need all of them. Does it really make
> sense to scatter them into several metafunctions??
>
Well, I wrote / suggested that because I have in mind a very generic set
of concepts associated with points that would be compatible with
libraries like CGAL.
I am worried that the traits will explode becuase there are so many uses
for a point class that have subtly different requirements. The number of
associated types etc. in the CGAL Kernel seems to indicate that in a
sturdy geometry library that might be the case.
eg, it looks a little bit like a point concept will require a 'space'
concept that will end up involving tons of associated types for
compatible geometric primatives (as in the CGAL Kernel).
> >>2. Put your concept mapping functions in a namespace rather
> >> than a class (do it like like Fusion does).
> >> Namespaces seem to provide a lot more flexibility, and
> >> the syntax is a LOT cleaner.
> >
> > I am considering the implications of your suggestion. It could be
> > that it can be made to work, but I'm not clear on how overloading
> > functions in a namespace is preferable to specializing a
> struct. It
> > seems that it is more prone to compiler errors related to
> overloading
> > ambiguity caused by bad user code and unfortunate
> combinations of user
> > code. With the traits there is no chance for ambiguity.
>
> I agree with Luke on this point, I'm afraid about nightmares
> that overloading ambiguities could bring to the user.
> However, I will consider doing a few tests to see the actual
> consequences of what you propose.
>
I am not talking about requiring user code to depend on ADL, I mean make
a special 'adapted' namespace like fusion does. I foresee less problems
with
::point::adapted::at_c<0>(point);
than I do with
point_traits<MyPoint>::template at_c<0>(point)
This involves 2 parts:
1. _if_ the traits get to be huge, it is possible to split namespaces
accross header files.
2. The annoying 'template' keyword can be a source of problems, since it
has been my experience that some compilers require it and others dont. I
am also concerned about the 'typename' keyword (for the same reason).
Some traits will also probably apply to multiple concepts, and since you
can't partially implement a traits class, you will have to mix them by
inheritance (I think) if you want to share some traits. Then you end up
with an issue about dependant names inherited from template base classes
that happens on g++ and not microsoft compilers.
> >>3. Provide separate headers for each metafunction.
> >> This allows users to fucus on metafunctions that matter.
> >
> > I am doing this already, though there is only one metafunction that
> > matters right now; the one that maps the user type to its related
> > concept.
>
> Same remark as above: one metafunction and one separate
> header of each of the 3 properties needed, I wonder if it's
> not a bit overkill...
>
I have only seen what look like the very beginnings of the development
of these concepts, and I made those comments anticipating an explosion
of traits.
>From earlier disccusion I really think that compatibility with CGAL will
be important for these concepts, and I worry when I look at the number
of typedefs required by the CGAL Kernel concept. It is a bit bigger than
two or three typedefs.
> >>4. Make all requirements explicit in the concept class. This
> >> way I can look at the concept code and see what they are.
>
> Aren't the requirements explicit enough in the concept class
> I've shown? If not, could you be more precise on what you'd
> like to see more clearly specified?
>
> Bruno
>
I was worried because somebody was talking about using the traits class
to add additional constraints. In your posted code, I dont see the
actual definition of a traits class (I see a 'dimension_checker', and I
see a point_traits template being used...)
-- John
Boost list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | http://lists.boost.org/Archives/boost/2008/05/137233.php | crawl-001 | refinedweb | 816 | 64.1 |
[
]
Sébastien Brisard commented on MATH-581:
----------------------------------------
Thank you for this in-depth review.
# Regarding interfaces: OK for getting rid of LinearOperator, I just wanted to keep this one
to maintain some similarity with matrices in the existing code. However, I agree, this one
is not really useful. By the way, could you point to interesting docs on design recommendations
regarding the choice between interfaces and abstract classes. I often tend to follow the model
implemented in Swing
{code}
public interface Foo{}
public abstract class AbstractFoo{}
public class BasicFoo{}
{code}
which means that very often, the interface is not really needed. But you seem to indicate
in you previous comments that it might also turn into a *bad* design, which I would like to
avoid...
# Regarding formatting:
** not sure about what you mean by misalignment, but I guess that it has to do with checkstyle,
will look into it
** meta-comments: OK (sorry about that),
** I see your point with Javadoc comments, but I tend to think the contrary : I personally
always have the javadoc opened in a nearby browser, and then really appreciate the rich formatting
it offers, especially for mathematical formulas (fortunately, I didn't embedd MathML in this
proposal, otherwise I would have had you scream...). Nevermind, I'll get rid of these.
# Regarding the exceptions
** I agree with you, it would be highly desirable to keep a reference to the real offending
vector, instead of storing a deep-copy. However, the exceptions can be raised with either
{{double[]}} or {{RealVector}}, which makes it difficult to handle it in a consistent way
with references. That's the reason why I work with copies. From this point of view, the methods
{{copyOffendingVector}} and the likes can be used as "accessors".
** I am not aware of {{ExceptionContext}}, but I will look into it. Sounds interesting indeed
# Regarding the links between {{RealMatrix}} and {{RealLinearOperator}}
** What you suggest is great (have {{RealMatrix}} inherit from {{RealLinearOperator}})! Didn't
think about that...
** What I mean about unit testing is that I used simple matrices (Hilbert) to test the conjugate
gradient, and I then had to reimplement matrix-vector product in a {{RealLinearOperator}},
while it is already implemented in {{RealMatrix}}. Thinking about it, composition would have
done just as well, with a little less sweat. Oh well...
Thank you for proposing to take care of the design of the exceptions. Meanwhile, can you also
correct the formatting of these two classes? I'll take care of the rest. How should I submit
the new, corrected patch? As a new attachment to this ticket (say {{MATH-581-02.patch}}),
or as a new JIRA ticket?
> Support for iterative linear solvers
> ------------------------------------
>
> Key: MATH-581
> URL:
> Project: Commons Math
> Issue Type: New Feature
> Affects Versions: 3.0, Nightly Builds
> Reporter: Sébastien Brisard
> Labels: iterative, linear, solver
> Attachments: MATH-581-01: | http://mail-archives.apache.org/mod_mbox/commons-issues/201106.mbox/%3C1121614122.35886.1308899567490.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2019-26 | refinedweb | 472 | 53 |
Scatterplots are one of many crucial forms of visualization in statistics. With scatterplots, you can examine the relationship between two variables. This can lead to insights in terms of decision making or additional analysis.
We will be using the “Prestige” dataset form the pydataset module to look at scatterplot use. Below is some initial code.
from pydataset import data import matplotlib.pyplot as plt import pandas as pd import seaborn as sns df=data('Prestige')
We will begin by making a correlation matrix. this will help us to determine which pairs of variables have strong relationships with each other. This will be done with the .corr() function. below is the code
You can see that there are several strong relationships. For our purposes, we will look at the relationship between education and income.
The seaborn library is rather easy to use for making visuals. To make a plot you can use the .lmplot() function. Below is a basic scatterplot of our data.
The code should be self-explanatory. THe only thing that might be unknown is the fit_reg argument. This is set to False so that the function does not make a regression line. Below is the same visual but this time with the regression line.
facet = sns.lmplot(data=df, x='education', y='income',fit_reg=True)
It is also possible to add a third variable to our plot. One of the more common ways is through including a categorical variable. Therefore, we will look at job type and see what the relationship is. To do this we use the same .lmplot.() function but include several additional arguments. These include the hue and the indication of a legend. Below is the code and output.
You can clearly see that type separates education and income. A look at the boxplots for these variables confirms this.
As you can see, we can conclude that job type influences both education and income in this example.
Conclusion
This post focused primarily on making scatterplots with the seaborn package. Scatterplots are a tool that all data analyst should be familiar with as it can be used to communicate information to people who must make decisions. | https://educationalresearchtechniques.com/2018/11/19/scatter-plots-in-python/ | CC-MAIN-2019-18 | refinedweb | 360 | 68.16 |
Hi guys. I'm aware of the requirement to specify std:: as the namespace for string in some way ( whether it be using namespace std etc) - but the following error has stumped me as to its cause.
$ g++ Conf.cpp Conf.cpp:33: error: ‘string’ in namespace ‘std’ does not name a type
This is the only error - and here is a (truncated) Conf.cpp.
#ifndef Conf_H #include "Conf.h" #endif #include <string.h> #include "iniparser/iniparser.h" std::string Conf::getString(const char* name) {/* This is the line causing the error */ std::string rtn; rtn = string(iniparser_getstring(this->file, name)); return rtn; }
Line 33 as shown in the code sample is the definition of the function returning a string.
I've tried combinations of using std::string, but each time it states something similar "string does not name a type", "std::string has not been declared" etc..
Any suggestions would be most helpful.
Thanks,
PC_Nerd | https://www.daniweb.com/programming/software-development/threads/203540/string-in-namespace-std-does-not-name-a-type | CC-MAIN-2017-26 | refinedweb | 156 | 62.58 |
ecto_state_machine alternatives and similar packages
Based on the "State Machines" category.
Alternatively, view ecto_state_machine alternatives based on common mentions on social networks and blogs.
Machinery8.7 1.1 ecto_state_machine VS MachineryState machine thin layer for structs (+ GUI for Phoenix apps)
gen_state_machineAn idiomatic Elixir wrapper for gen_statem in OTP 19 (and above).
Reducer MachineSimple reducer-based state machine
state_mc1.8 0.0 ecto_state_machine VS state_mcState Machine for Ecto
Scout APM: A developer's best friend. Try free for 14-days
Do you think we are missing an alternative of ecto_state_machine or a related project?
README
Ecto state machine
This package allows to use finite state machine pattern in Ecto. Specify:
and go:
defmodule User do use Web, :model use EctoStateMachine, states: [:unconfirmed, :confirmed, :blocked, :admin], events: [ [ name: :confirm, from: [:unconfirmed], to: :confirmed, callback: fn(model) -> Ecto.Changeset.change(model, confirmed_at: Ecto.DateTime.utc) end # yeah you can bring your own code to these functions. ], [ name: :block, from: [:confirmed, :admin], to: :blocked ], [ name: :make_admin, from: [:confirmed], to: :admin ] ] schema "users" do field :state, :string, default: "unconfirmed" end end
now you can do:
user = Repo.get_by(User, id: 1) # Create changeset transition user state to "confirmed". We can make him admin! confirmed_user = User.confirm(user) # => # We can validate ability to change user's state User.can_confirm?(confirmed_user) # => false User.can_make_admin?(confirmed_user) # => true # Create changeset transition user state to "admin" admin = User.make_admin(confirmed_user) # Store changeset to the database Repo.update(admin) # List all possible states # If column isn't `:state`, function name will be prefixed. IE, # for column `:rules` function name will be `rules_states` User.states # => [:unconfirmed, :confirmed, :blocked, :admin] # List all possible events # If column isn't `:state`, function name will be prefixed. IE, # for column `:rules` function name will be `rules_events` User.events # => [:confirm, :block, :make_admin]
You can check out whole
test/dummy directory to inspect how to organize sample app.
Installation
If available in Hex, the package can be installed as:
Add ecto_state_machine to your list of dependencies in
mix.exs:
def deps do [{:ecto_state_machine, "~> 0.1.0"}] end
Custom column name
ecto_state_machine uses
state database column by default. You can specify
column option to change it. Like this:
defmodule Dummy.User do use Dummy.Web, :model use EctoStateMachine, column: :rules, # bla-bla-bla end
Now your state will be stored into
rules column.
Contributions
- Install dependencies
mix deps.get
- Setup your
config/test.exs&
config/dev.exs
- Run migrations
mix ecto.migrate&
MIX_ENV=test mix ecto.migrate
- Develop new feature
- Write new tests
- Test it:
mix test
- Open new PR!
Roadmap to 1.0
- [x] Cover by tests
- [x] Custom db column name
- [x] Validation method for changeset indicates its value in the correct range
- [x] Initial value
- [x] CI
- [x] Add status? methods
- [ ] Introduce it at elixir-radar and my blog
- [ ] Custom error messages for changeset (with translations by gettext ability)
- [x] Rely on last versions of ecto & elixir
- [ ] Write dedicated module instead of requiring everything into the model
- [ ] Write bang! methods which are raising exception instead of returning invalid changeset
- [ ] Rewrite spaghetti description in README | https://elixir.libhunt.com/ecto_state_machine-alternatives | CC-MAIN-2021-43 | refinedweb | 508 | 50.73 |
Set up JDK Mission Control with Red Hat Build of OpenJDK.
Installing Mission Control
For Microsoft Windows
For Microsoft Windows, the OpenJDK zip available via the Red Hat Customer Portal now contains JDK Mission Control and JDK Flight Recorder. Once un-archived, the JMC binary can be found in the
bin directory.
For Red Hat Enterprise Linux
You can add or remove software repositories from the command line using the subscription-manager tool as the root user. Use the
--list option to view the available software repositories and verify that you have access to RHSCL:
$ su - # subscription-manager repos --list | egrep rhscl
Depending which variant is used (e.g., server or workstation), you can enable the repo with the following command:
# subscription-manager repos --enable rhel-variant-rhscl-7-rpms
Install JMC with the following command:
$ yum install rh-jmc
We have now installed JMC. You can launch it by typing
JMC or heading off to the applications menu.
If you are running multiple versions of Java, as I do, and want to launch JMC from the command line, use the following options to launch JMC with the path to the Red Hat Build of OpenJDK.
$ scl enable rh-jmc bash $ jmc -vm /usr/lib/jvm/java-11-openjdk-11.0.2.7-0.el7_6.i386/bin
Real-time Monitoring
JMC allows you to perform real-time monitoring of JVMs. To do this, create a new connection from the File Menu, choose your JVM, and start JMX console. The result should give you an overview page with Processors, Memory consumption, Java heap use, JVM CPU usage, etc.
Now that we have JMC set up, let’s try to run an example and see how it works.
The following is a simple example of reading a couple of files. Indeed, there can be issues that we have not taken into consideration. In the example below, I have two files: a simple HTML file and a text file, which is about 1 GB.
import java.io.BufferedReader; import java.io.File; import java.io.FileReader; import java.io.IOException; public class TextFileReader { private File textFilePath = null; public TextFileReader(String textFilePath) { if (textFilePath == null) throw new IllegalArgumentException(); this.textFilePath = new File(textFilePath); } public void readFile() throws IOException { FileReader fileReader = new FileReader(textFilePath); BufferedReader bufferedreader = new BufferedReader(fileReader); StringBuffer sb = new StringBuffer(); String strLine; while ((strLine = bufferedreader.readLine()) != null) { sb.append(strLine); sb.append("\n"); } fileReader.close(); System.out.println(sb.toString()); } public static void main(String[] args) throws IOException{ new TextFileReader("index.html").readFile(); new TextFileReader("test.txt").readFile(); } }
Let’s execute the following commands to compile and run this example.
$ javac TextFileReader.java $ java -XX:+FlightRecorder -XX:StartFlightRecording=dumponexit=true,filename=filereader.jfr TextFileReader
In the above Java command, the parameter
-XX:StartFlightRecording will dump the results into
filereader.jfr.
Let’s look at the results by opening this file in JMC.
JMC reports in-depth details on the entire run; for example, JVM internals shows that GC is Stalling. Moreover, with a large file with not much memory, that is a problem, so we can fix the issue either by lowering the value of
-XX:InitiatingHeapOccupancyPercent and even more so ensuring that there is enough memory (e.g., Xms1024m -Xmx4096m).
Another great example can be found here by Jie Kang, Software Engineer at Red Hat, where he shows how the method profiling works, aiding in optimization of the original code.
JMC is very useful for understanding application behavior such as memory leaks, deadlock, and much more. Give it a try with the Red Hat Build of OpenJDK. | https://developers.redhat.com/blog/2019/03/15/jdk-mission-control-red-hat-build-openjdk/ | CC-MAIN-2019-13 | refinedweb | 596 | 54.83 |
Opened 3 years ago
Last modified 2 years ago
This function might fall into dead loop if there's a loop in the Foreign Key Relationships.
I find this happens to me (without fail) using this very simple Model:
from django.db import models
class A(models.Model):
B = models.ForeignKey('B')
class Admin:
list_display = ('B',)
class B(models.Model):
A = models.ForeignKey('A')
class Admin:
pass.
Ouch! My model was more like A->B, B->C, C->A. Same result.
I've opened this 4 month ago, but it's still open!
So I decided I will solve it myself!! I'll use this weekend to make a patch. I've made one 4 month ago but I can't find it now.
Hey Mr. Anon -
Waiting for that patch :)
Milestone Version 1.0 deleted
#3288 is a specific case of this but has a patch fixing this issue for both SVN and 0.96. Adding regression tests too.
By Edgewall Software. | http://code.djangoproject.com/ticket/2549 | crawl-002 | refinedweb | 163 | 87.82 |
TPK algorithm
The TPK algorithm is a program introduced by Donald Knuth and Luis Trabb Pardo to illustrate the evolution of computer programming languages. In their 1977 work "The Early Development of Programming Languages", Trabb Pardo and Knuth introduced a small program that involved arrays, indexing, mathematical functions, subroutines, I/O, conditionals and iteration. They then wrote implementations of the algorithm in several early programming languages to show how such concepts were expressed.
To explain the name "TPK", the authors referred to Grimm's law (which concerns the consonants 't', 'p', and 'k'), the sounds in the word "typical", and their own initials (Trabb Pardo and Knuth).[1] In a talk based on the paper, Knuth said:[2]
You can only appreciate how deep the subject is by seeing how good people struggled with it and how the ideas emerged one at a time. In order to study this—Luis I think was the main instigator of this idea—we take one program—one algorithm—and we write it in every language. And that way from one example we can quickly psych out the flavor of that particular language. We call this the TPK program, and well, the fact that it has the initials of Trabb Pardo and Knuth is just a funny coincidence.
In the paper, the authors implement this algorithm in Konrad Zuse's Plankalkül, in Goldstine and von Neumann's flow diagrams, in Haskell Curry's proposed notation, in Short Code of John Mauchly and others, in the Intermediate Program Language of Arthur Burks, in the notation of Heinz Rutishauser, in the language and compiler by Corrado Böhm in 1951–52, in Autocode of Alick Glennie, in the A-2 system of Grace Hopper, in the Laning and Zierler system, in the earliest proposed Fortran (1954) of John Backus, in the Autocode for Mark 1 by Tony Brooker, in ПП-2 of Andrey Ershov, in BACAIC of Mandalay Grems and R. E. Porter, in Kompiler 2 of A. Kenton Elsworth and others, in ADES of E. K. Blum, the Internal Translator of Alan Perlis, in Fortran of John Backus, in ARITH-MATIC and MATH-MATIC from Grace Hopper's lab, in the system of Bauer and Samelson, and (in addenda in 2003 and 2009) PACT I and TRANSCODE. They then describe what kind of arithmetic was available, and provide a subjective rating of these languages on parameters of "implementation", "readability", "control structures", "data structures", "machine independence" and "impact", besides mentioning what each was the first to do.
The algorithm[edit]
ask for 11 numbers to be read into a sequence S reverse sequence S for each item in sequence S call a function to do an operation if result overflows alert user else print result
The algorithm reads eleven numbers from an input device, stores them in an array, and then processes them in reverse order, applying a user-defined function to each value and reporting either the value of the function or a message to the effect that the value has exceeded some threshold.
ALGOL 60 implementation[edit]
begin integer i; real y; real array a[0:10]; real procedure f(t); real t; value t; f := sqrt(abs(t)) + 5 * t ^ 3; for i := 0 step 1 until 10 do read(a[i]); for i := 10 step -1 until 0 do begin y := f(a[i]); if y > 400 then write(i, "TOO LARGE") else write(i, y); end end.
The problem with the usually specified function is that the term
5 * t ^ 3 gives overflows in almost all languages for very large negative values.
C implementation[edit]
This shows a C implementation equivalent to the above ALGOL 60.
#include <math.h> #include <stdio.h> double f(double t) { return sqrt(fabs(t)) + 5 * pow(t, 3); } int main(void) { double a[11] = {0}, y; for (int i = 0; i < 11; i++) scanf("%lf", &a[i]); for (int i = 10; i >= 0; i--) { y = f(a[i]); if (y > 400) printf("%d TOO LARGE\n", i); else printf("%d %.16g\n", i, y); } }
Python implementation[edit]
This shows a Python implementation.
from math import sqrt def f(t): return sqrt(abs(t)) + 5 * t ** 3 a = [float(input()) for _ in range(11)] for i, t in reversed(list(enumerate(a))): y = f(t) if y > 400: print(i, "TOO LARGE") else: print(i, y)
Rust implementation[edit]
This shows a Rust implementation.
use std::io::{self, prelude::*}; fn f(t: f64) -> f64 { t.abs().sqrt() + 5.0 * t.powi(3) } fn main() { let mut a = [0f64; 11]; for (t, input) in a.iter_mut().zip(io::stdin().lock().lines()) { *t = input.unwrap().parse().unwrap(); } a.iter().enumerate().rev().for_each(|(i, &t)| match f(t) { y if y > 400.0 => println!("{} TOO LARGE", i), y => println!("{} {}", i, y), }); }
References[edit]
- ^ Luis Trabb Pardo and Donald E. Knuth, "The Early Development of Programming Languages".
- First published August 1976 in typewritten draft form, as Stanford CS Report STAN-CS-76-562
- Published in Encyclopedia of Computer Science and Technology, Jack Belzer, Albert G. Holzman, and Allen Kent (eds.), Vol. 6, pp. 419-493. Dekker, New York, 1977.
- Reprinted (doi:10.1016/B978-0-12-491650-0.50019-8) in A History of Computing in the Twentieth Century, N. Metropolis, J. Howlett, and G.-C. Rota (eds.), New York, Academic Press, 1980. ISBN 0-12-491650-3
- Reprinted with amendments as Chapter 1 of Selected Papers on Computer Languages, Donald Knuth, Stanford, CA, CSLI, 2003. ISBN 1-57586-382-0)
- ^ "A Dozen Precursors of Fortran", lecture by Donald Knuth, 2003-12-03 at the Computer History Museum: Abstract, video | https://www.technetiumbo542.site/wiki/Category:Articles_needing_additional_references_from_December_2009 | CC-MAIN-2021-31 | refinedweb | 940 | 56.49 |
Swift, here I come!
It’s time to start another version of the MarvelBrowser project. As I did with the Objective-C version, I begin the Swift version with a spike solution. But the first time was to see if I could satisfy Marvel’s authentication requirements. Basically, I needed to get the incantation correct. This time, I know the steps to take, but I will repeat them in Swift.
railroad spike by Tom Gill, used under CC BY-NC-ND 2.0
I have two goals:
Could you give me feedback on the Swiftiness of my code?
[This post is part of the series TDD Sample App: The Complete Collection …So Far][This post is part of the series TDD Sample App: The Complete Collection …So Far]
In Objective-C, I put the definitions of my public and private API keys into NSStrings:
static NSString *const MarvelPublicKey = @"my-public-key"; static NSString *const MarvelPrivateKey = @"my-private-key";
A big drawback of Objective-C is its lack of namespaces. Actually, there is a namespace: the single, global namespace. The real problem is that we can’t create more. To avoid clashes, we use long, verbose names.
So here’s how I decided to do it in Swift:
struct MarvelKeys { static let publicKey = "my-public-key" static let privateKey = "my-private-key" }
This struct is never instantiated. Its sole purpose is to add semantic organization.
In the viewDidLoad method of ViewController, I begin by concatenating a timestamp, the private key, and the public key:
override func viewDidLoad() { super.viewDidLoad() // Concatenate keys per let timeStamp = "1" // Hard-coded for spike let keys = timeStamp + MarvelKeys.privateKey + MarvelKeys.publicKey // Confirm manually: print(keys) }
Here are some things that strike me about Swift:
overridemethod provides important feedback. It asks the question, “Don’t you need to call
super?”
+. Ah well, purity sometimes gives way to pragmatism.)
This is where Swift first got hard for me. How do I call the plain-C function CC_MD5? Because Objective-C is a strict superset of C, everything in C is available. This is a strength of Objective-C that kept it going for 30 years. It’s also a disadvantage to have a language that is a two-headed beast, bringing along all of C’s lack of safety.
To access CC_MD5 from Swift, I had to create a bridging header MarvelBrowser-Swift-Bridging-Header.h:
#import <CommonCrypto/CommonCrypto.h>
How do I convert the concatenated keys to a UTF8 string, pass it in, and get data back? This was frustrating, but eventually I figured out something that works. (It can be made cleaner. But in a spike solution, clean isn’t the goal. Quick learning is the goal, so the code can be dirty.)
// Create MD5 hash: var digest = [UInt8](repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH)) CC_MD5(keys, CC_LONG(keys.utf8.count), &digest) var hash = "" for (_, byte) in digest.enumerated() { hash += String(format: "%02x", byte) } // Manually confirm that it's 32 hex digits: print(hash)
Question: Is the first argument to CC_MD5 automatically converted to UTF8? Or am I just getting lucky, because my string happens to have nothing but ASCII-expressible characters?
To convert each byte to hex, the original Objective-C code used a classic for-loop. I thought about
for i in 0 ..< CC_MD5_DIGEST_LENGTH
But the only purpose of the index is to access elements of the
hash array. Using for-in over an enumeration seems like a more Swifty approach.
I see that stringWithFormat gets a lot less use in Swift.
// Manually confirm URL string: let urlString = "\(timeStamp)&apikey=\(MarvelKeys.publicKey)&hash=\(hash)" print(urlString)
Swift’s string interpolation makes it so you no longer have to worry about variable order. Again, this is so 2 years ago. Still, I pause to appreciate it.
I did find one thing to complain about: Isn’t there a way in Swift to wrap a long string literal across multiple lines?
Creating the data task, and logging the results, was pretty straightforward. In Swift 3, the hardest part was continuing to type “NS” by habit, and finding the right way to express UTF8 encoding.
// Create data task: let session = URLSession.shared let url = URL(string: urlString)! let dataTask = session.dataTask(with: url) { data, response, error in print("error: ", error); print("response: ", response); let str = String(data: data!, encoding: String.Encoding.utf8) print("data: ", str); } dataTask.resume()
I’m aware of the two exclamation marks in the code above. Force-unwrapping is generally something to avoid. But in this case, it’s just a spike solution. The code will be kept off to the side in the spike-networking branch. I’ll exercise more care in the master branch.
Ahh, that closure. There are two things I like about it. First is the name “closure” over the more generic “block”. It better expresses what happens already happens to variables in Objective-C blocks, so I like the more precise name.
Finally, the ability to express a closure argument as a trailing closure. Again, let me pause to appreciate this language.
I have to say, so far Swift is pretty cool. My appreciation may diminish when I get into stubbing and mocking. But for today, I will sit back and smile. And I was glad when I saw JSON results in the console.
How did I do as far as making my code Swifty? Got any tips for unwrapping nullables? Leave a comment below to add your observations!.
How can I tell whether this does what I want (create a single literal) rather than perform operations on several literals? | https://qualitycoding.org/my-first-swift/ | CC-MAIN-2018-30 | refinedweb | 931 | 67.76 |
Hello, I import the new library richfaces 4.0, but in my project when compiled show the error
package org.ajax4jsf.component.html does not exist,
import org.ajax4jsf.component.html.HtmlAjaxCommandLink;
who is the problem?, before uses de richfaces 3.3 in my spring proyect by now when update library to richfaces 4.0 show error.
Exists problems in librarys? or How I can resolve the problem.
Thank You.
Hi,
There are few changes for Richfaces4.
You need to see the migration guide from Richfaces3.3.3 to Richfaces4 from the following link:
Now to create a4j:commandLink dynamically, we need to use
UICommandLink component class instead of HtmlAjaxCommandLink
Please follow the component reference guide:
All the best
Thank you for your help, I resolved my problem. | https://developer.jboss.org/message/610716 | CC-MAIN-2015-18 | refinedweb | 128 | 61.73 |
Chapter 11. Classes in C++
Class definitions
Before we can use classes, we have to define them. Assume the following simple class:
In C++ there are two parts to every class:
Class definition
Class implementation
Important
- Class definition
The class definition defines the structure of the class. It defines the class name, member variables and member functions.
- Class implementation
The class implementation specifies the behavior of the class. It gives an implentation for its member functions.
Warning to all tha have used Java before: In Java they are both together, in C++ they have to be separated!
Sidenote: As you should be able to guess, pure virtual classes (interfaces) have no implementation.
But back to the class. The first step is redefining it for the actual implementation (we learned some of that earlier).
Use getters / setters instead of public attributes
Use Vector<> for multiplicity
In an actual project we would probably do this step in our head. But it doesn't hurt do to on paper:
Although it is technically possible and perfectly legal to declare public attributes in C++, it is not legal in this class! For all projects, designs, implementations, etc. you do in this class you have to use private attributes!
The example here is intentionally simple. In reality we would probably declare a default value for formal, but we need to know about constructors first (in one of the next classes).
Given the definition in UML we can now translate it into C++.
Example for a class definition:
class Hello
{ private:{ private:
bool formal;bool formal;
public:public:
void greeting();void greeting();
bool getFormal(); void setFormal(bool f);bool getFormal(); void setFormal(bool f);
};};
Lets look at the different parts:
Other notes
- Order
As in UML, in C++ it is convention to declare all variables first, then all member functions.
- Indentation
Usually the class definition starts with no indentatation. Visibility modifiers and the closing brace are on the same level as the class definition. Member attributes and operations are indented.
As you can see a C++ class definition shows the exact same thing as a UML class definition. This is not a coincidence. There are even programs that can produce one from the other. However, these are very expensive.
Practice:
Write a C++ class definition for this class:
Hints: You will not need any getters / setters here. The corrent type for "points" would be "Vector<Location>".
We have now defined a class and its interface. But now we have to give actual implementations for the defined methods.
Lets look at the Hello class again:
We have defined the class
And its attributes.
What is missing is the methods.
Fortunately, implementing the methods is much like we've seen in implementations before:
void Hello::greeting() { if (formal) cout << "Hello, nice to meet you!" << endl; else cout << "What's up?" << endl; } ...
The "Hello::" is borrowed from namespaces. In this case, a class is somewhat like a namespace (although a different thing)
The header line of a method implementation is:
Return data type (void if no return value), class name, colon-colon (::), method name, parameters
The body of a method is exactly the same as we learned in earlier.
In the method, we can make use of all attributes of the same class as if they were global variables. In the example given we can use "formal" because it is defined inside the class "Hello" and our greeting method is for the class "Hello".
Practice: Assume the following class definition:
class Point { private: float x; float y; public: float distanceFromOrigin(); };
Give an implementation for the distanceFromOrigin function. Note: the formula is root(x^2+y^2) (our course this is NOT C++ notation). | https://max.berger.name/teaching/s06/script/ch11.html | CC-MAIN-2021-21 | refinedweb | 614 | 55.54 |
Structs - Video
Will the following code compile?
using System;"
Can a struct have a default constructor (a constructor without parameters) or a destructor in C#?
No
Can you instantiate a struct without using a new operator in C#?
Yes, you can instantiate a struct without using a new operator
Can a struct inherit from another struct or class in C#?
No, a struct cannot inherit from another struct or class, and it cannot be the base of a class.
Can a struct inherit from an interface in C#?
Yes
Are structs value types or reference types?
Structs are value types.
What is the base type from which all structs inherit directly?
All structs inherit directly from System.ValueType, which inherits from System.Object.
Structure have default constructor which initialize the field member to zero.
"No, a struct cannot inherit from another struct or class, and it cannot be the base of a class."
"All structs inherit directly from System.ValueType, which inherits from System.Object."
Don't those two statements contradict one another?
System.ValueType does not derive from System.Object.
// Summary:
// Provides the base class for value types.
[Serializable]
[ComVisible(true)]
public abstract class ValueType
{
// Summary:
// Initializes a new instance of the System.ValueType class.
protected ValueType();
// Summary:
// Indicates whether this instance and a specified object are equal.
//
// Parameters:
// obj:
// Another object to compare to.
//
// Returns:
// true if obj and this instance are the same type and represent the same value;
// otherwise, false.
public override bool Equals(object obj);
//
// Summary:
// Returns the hash code for this instance.
//
// Returns:
// A 32-bit signed integer that is the hash code for this instance.
public override int GetHashCode();
//
// Summary:
// Returns the fully qualified type name of this instance.
//
// Returns:
// A System.String containing a fully qualified type name.
public override string ToString();
}
}
I disagree. Everything derives from System.Object in C#. Even the value types.
You don't see it in the definition because it is optional to have a class explicitly inhereting from System.Object.
why struct dont have destructor? Can any one explain me?
becoz struct is value type so dont need to destruct it | http://venkatcsharpinterview.blogspot.com/2009/02/c-interview-questions-on-structs.html | CC-MAIN-2018-26 | refinedweb | 355 | 61.63 |
I used the example of change flight mode, in order to change to STABILIZE mode, but the terminal printed ‘Command executed, but failed’.
Even though I had changed the firmware to ardusub, I printed MAV_TYPE and found that it was ‘fixed wing’, not ‘submarine’.
from pymavlink import mavutil import time # Create the connection # Need to provide the serial port and baudrate master = mavutil.mavlink_connection( '/dev/ttyACM0', baud=115200) mav_type=master.field('HEARTBEAT', 'type', master.mav_type) print(mav_type)
The above program prints 1, and the following figure shows that this is a fixed wing.
How do I solve this problem?Thank you. | https://discuss.bluerobotics.com/t/change-mode-failed/3682 | CC-MAIN-2018-47 | refinedweb | 101 | 58.18 |
Vol 9 ,Issue 12
1| ,
mujahid.riceplus@gmail.com
Work on gene-edited babies blatant violation of the law, says
China
Vice-minister condemns work of He Jiankui, but Chinese regulations are vague
Thu 29 Nov 2018 13.00 GMTLast modified on Fri 30 Nov 2018 01.00 GMT
He Jiankui speaking in Hong Kong on Wednesday, where he defended his work. Photograph:
Alex Hofford/EPA
2| ,
mujahid.riceplus@gmail.com
Xu called for the suspension of any scientific or technological activities by those involved in He’s
work.
The scientist has said his project was approved by an ethics committee at Harmonicare
Shenzhen women and children’s hospital, which has also denied any involvement.
He shocked the global scientific community when he claimed this week to have edited the
genes of embryos that resulted in the birth of twin girls named Lulu and Nana.
However, his work – a byproduct of personal ambition and a vague regulatory environment in a
country that has been pushing ahead in the field of gene editing for years – did not come as a
total surprise to everyone.
William Hurlbut, a bioethicist told local
media.
3| ,
mujahid.riceplus@gmail.com international scientific community continues to reel from He’s claims, which he defended
at a global summit on the topic in Hong Kong on Wednesday. The organising committee of the
conference, the International Summit on Human Genome Editing, called the scientist’s
statements “unexpected and deeply disturbing” and recommended an independent
assessment.
4| ,
mujahid.riceplus@gmail.com
,” the committee said in a statement on Thursday.
Officials from China’s national health commission promised on Thursday to “investigate and
deal with any unlawful behaviour” by He..”.
5| ,
mujahid.riceplus@gmail.com
If everyone who reads our reporting, who likes it, helps to support it, our future would be much
more secure. For as little as £1, you can support the Guardian – and it only takes a minute.
Thank you.-
of-the-law-says-china\\
“Guillemar” heat temperature stress tolerant rice contributes towards food security in Cuba. (Photo: F.
Sarsu/IAEA)
New rice and green bean plants are now being rolled out to help farmers grow more
of these staple foods despite higher temperatures caused by climate change. These
new ‘climate proof’ crop varieties were developed as part of a five-year project
aimed at helping countries to improve food security and adapt to changing climate
conditions. The project specifically addressed the improvement of tolerance of rice
and bean plants to high temperatures in drought-prone areas.
“Climate change is forcing food producers and farmers to change how they approach
agriculture,” said María Caridad González Cepero, a scientist at the National Institute
of Agricultural Science in Cuba. “New plant varieties, such as these ‘climate proof’
6| ,
mujahid.riceplus@gmail.com
rice and bean plants, offer a sustainable option for adapting to some of the negative
effects of climate change, which is important for ensuring food security today and in
the future.”
One of the major consequences of climate change has been the extreme fluctuation
in global temperatures. Higher temperatures have a direct and damaging effect on
plant development and yields. In many agricultural locations worldwide,
temperature extremes are causing plants to suffer, including staple crops such as rice
and green beans, also known as the common bean, which are essential to the diets
of millions of people worldwide..
One of these new rice varieties called ‘Guillemar’, which is drought tolerant, is now
being used in Cuba and has boosted crop yields by 10 per cent. Other countries such
as India, Pakistan, the Philippines, Tanzania and Senegal, are also preparing to
release new, high-yielding rice varieties suited to each countries’ temperature
conditions, while experts in Colombia and Cuba have had success with new varieties
of heat-tolerant, higher yielding common bean and tepary bean plants, which they
expect to release to farmers by 2020-2021.
7| ,
mujahid.riceplus@gmail.com
More food, more knowledge
Developing new plant varieties can help farmers grow more food and adapt to
climate change, but they also help scientists learn more about how plants are
affected by climate change and ways to refine and improve the plant breeding
process.ver the course of this five-year project, the team created methods for
screening the physiological, genetic and molecular components of plants as well as
for accurately assessing the plants’ genetic makeup to identify, select and breed
plants with desired traits.
A pre-field screening technique, for example, was refined to help plant breeders
accelerate the evaluation of plant varieties in controlled conditions such as a
greenhouse or growth chamber. This approach allows them to effectively narrow
down the number of possible plants for further field tests from a few thousand to
less than 100. By slimming down the options, it can reduce research and
development time from around three to five years to one year, which means new
plant varieties can reach farmers more quickly to help them stay ahead of climate
change and prevent food insecurity.
Many of the team’s methods and techniques are now being made accessible to other
researchers to research further. They are being made available through IAEA
coordinated research and technical cooperation projects with other teams of
scientists, as well as through more than 40 publications, including a recently
published open-access guidebook on Pre-Field Screening Protocols for Heat Tolerant
Mutants in Rice.
“Climate change is identified as one of the major challenges faced by the planet, and
crop adaptation to variations in climate is critical to ensure food and nutrition
security,” said Fatma Sarsu, an IAEA scientist and the lead officer of the project.
“Interdisciplinary research involving plant breeders, physiologists and molecular
biologists is key to the development of new varieties adapted to extreme
environments such as drought and high temperatures. Our collaborative research is
taking a major step towards addressing crop adaptation to climate change through
the development of these rice and bean varieties.“-
nuclear-technology
8| ,
mujahid.riceplus@gmail.com
Politicians create problems, scientists solve them
J MULRAJT+ T-
The economic system of the world measures GDP growth as a measure of success. GDP growth is,
however, an extractive process; it extracts natural resources which God has created over millions of
years. In so extracting, or over-extracting, it creates another set of problems, namely, climate
change, which are, and will continue, to have, natural, social and economic consequences.
In their must-read book, ‘Natural Capitalism’, authors Paul Hawkin, Amory Lowins and Hunter
Lowins, suggest that instead of GDP growth as a measure of success, we should concentrate on
‘natural’ capitalism.
For example, the P&L of, say, a coal mining company would, under its expense side, debit the
‘royalty’ paid and not the replacement cost of the extracted coal.
This results in wastage. In order to boost GDP growth, China overbuilt homes. The construction of
these resulted in a demand for steel, cement, glass and other things, all extracted. GDP growth
touched double-digits. Was that cause for celebration?
What consequences?
Greenpeace, for example, points to disappearing glaciers in western China’s provinces of Qinghai
and Gansu. These glaciers are the source of rivers that supply drinking water to 180 crore people. A
fifth are already gone.
The problems created by politicians’ penchant for GDP growth, irrespective of the cost to future
generations, are solved by scientists. Israel leads in water management. The country recycles 80 per
cent of its sewage waste water, using the recycled water for agriculture and public works. This
compares to only 30 per cent in India.
9| ,
mujahid.riceplus@gmail.com
Looming food crisis
Shortage of water will, obviously, affect food production, and thanks to myopic economic policies,
humankind is also hurtling towards a food crisis.
Here, too, science is coming to the rescue. Some 350 crore people use rice as a staple food. Rice
production cannot keep pace. Now agri scientists have made breakthroughs in ‘scuba rice’, which
survives long periods of flooding, and alkaline resistant ‘sea rice’ already growing on China’s
northern coast. China plans to plant ‘sea rice’ on 20 million hectares in its northern Shandon
province, which is alkaline, after which it can feed 80 million people.
Natural gas is now a fuel of choice, for environmental reasons. ONGC and OIL can increase
production of natural gas by a third, from 90 mmscmd if they get a higher price for it. Achieving the
increase would require a capex of $10 billion. However, the government has arm-twisted OIL to
spend ₹1,085 crore to buy back 4.45 per cent of its stock, in order to reduce the government’s fiscal
deficit.
The global stock market will get a boost from the statement of Fed Chairman Jeremy Powell, that
the US Fed may make fewer than the expected three rate hikes in 2019. The Indian stock markets
would be influenced by the election results of five States, due on December 11. Investors should lie
low till then, unless they can read electoral crystal balls.
(The writer is India Head — Finance Asia/Haymarket. The views are personal.)-
them/article25634356.ece
Le Quoc Doanh, deputy minister of Agriculture and Rural Development, made this statement at
a workshop on research for the development of a climate-resilient Southeast Asia held in Hanoi
on Wednesday.
“Vietnam has implemented a programme to reduce greenhouse gases by 2020,” he said.
10 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com.
11 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.comnam.nam.-
resilience.html
12 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
Critics of the bill argue that with restrictions on rice importation being lifted under the bill,
there will be an oversupply of imported rice in local markets and the ones to suffer the burden
first would be the local farmers.
But Villar allayed the farmers’ fears on the matter, saying the government has allotted a P10
billion fund to help them.
“Magbibigay po ang gobyerno ng sampung bilyong piso every year sa susunod na anim na taon
para kayo ay maging competitive against import,” Villar assured.
On November 22, the bicameral committee approved the allocation of P10 billion to the Rice
Competitiveness Enhancement Fund or Rice Fund.
This will be utilized for the improvement of farm machinery and equipment, seed production,
training for rice farming, and loan programs among other means to help the local farmers.
Several farmers expressed gratitude to the efforts of government to help them sustain their
livelihood.
“Iyon po ang pinakagusto kong natalakay ni Ma’am Senator Cythia Villar na magkakaroon ng
farm school dahil karamihan ng mga farmers ay wala pang sapat na edukasyon tungkol sa crop
production, inter cropping, multicrop production,” said Association of San Felipe Farmers
president Joely Reguidon.
“Malaking bagay, dahil nagbibigay sila ng mga traktora, hindi masyadong mahal ang upa, pero
mas maganda kung gagabayan ng gobyerno yung upa ng traktora. Kasi sa amin wala na kaming
kalabaw, hindi namin naaaffordan yung traktora,” added Rolan, also a member of the farmers’
group.
As of press time, the Rice Tariffication Bill is awaiting the signature of President Rodrigo Duterte
before it fully becomes a law. – Marje Pelayo (with reports from Leslie Huidem)
PASAY CITY, Philippines – Losing bidders PT&T and Sear Telecom have brought their complaints to the
Senate against the country’s winning third telco Mislatel Consortium.
Sear Telecom’s chairman and president Chavit Singson argued that Mislatel still has an existing contract
with them which bars Mislatel from entering into another deal.
Sear Telecom together with PT&T wants the government to disqualify Chinese-owned Mislatel from
being awarded as the country’s third telco.
13 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
“From the beginning disqualified ang Mislatel dahil may kontrata sa amin ang Mislatel,” Sison said
during Tuesday’s (November 27) inquiry on the matter.
On the other hand, the Department of Information and Communications Technology (DICT) said the
process of awarding the contract to the winning bidder will continue unless the court issues its decision
on the complaints of the losing bidder.
However still, the agency admitted that the legal battle will surely affect or delay the entry of the third
telco.
Nevertheless, DICT Undersecretary Eliseo Rio said they will still “abide by the decision of the court.”
On issues of security, Senator Grace Poe questioned National Security Adviser Secretary Hermogenes
Esperon on the possible implication to national security of the reported “hijacking” incident of China
Telcom on internet traffic that affects even powerful countries.
The incident is up for investigation by the National Intelligence and Coordinating Agency (NICA).
“We have read about that report. It is subject to validation on our part. We have the 90 days period to
do that. The winning provisional NMP will have to be undergo a background check by the National
Intelligence and Coordinating Agency,” said Esperon.
The DICT maintained that they see no threat to national security with the entry of the Chinese telecom
firm in the country.
“So the threat of Chinese product, Chinese people, really operating our telecommunication are really
here,” Rio said,
For its part, Mislatel assured that the rights of the Filipino people will be protected.
“We are a Filipino company. We will not allow national interest and national security to be
undermined,” assured Atty. Adel Tamano, Mislatel’s spokesperson.
But Poe insisted that a thorough scrutiny on the background of the third telco should still be conducted.
“Pero iba pa rin siyempre yung mismong korporasyon ang pagmamayari ay taga ibang bansa,” said the
chairperson of the Senate Committee on Public Services, Senator Poe.
Meanwhile, Senator Antonio Trillanes IV requested the presence of Mislatel’s owner Dennis Uy in the
next inquiry.
Uy was a no-show in today’s Senate inquiry. – Marje Pelayo (with reports from Nel
Maribojoc)
14 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
BY: Severious Kale-Dery
Dr Owusu Afriyie Akoto (3rd right) being taken through some certified seeds by Dr Agnes Kalibata (3rd left), President of AGRA, while
Mr Christoph Retzlaff and other officials look on
The Ministry of Food and Agriculture (MoFA) has entered into a 2.5 million euro
Public Private Partnership (PPP) agreement to boost rice production in the
country.
The agreement, known as “Ghana Rice Initiative”, is expected to last 36 months beginning this
month, November 2018.
It is being championed by the German Government and implemented by AGRA and other
partners.
Project
Dubbed “Public private partnership for competitive and inclusive rice value chain development:
Planting for Food and Jobs – Rice Chapter,” the project is aimed at increasing rice production,
strengthening and expanding access to output markets among others.Useful links Ghana news
| Ghana Business News | News in Ghana
It also intends to adopt a two-tier approach on short, medium and long-term solutions to
enable the government achieve its sub-sector goal of becoming self-sufficient in rice production
to improve the livelihoods of 128,763 farmers by 2020.
The project will be implemented in the Ashanti, Brong Ahafo, Northern, Central and Volta
regions.
15 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
Already, about 130,000 farmers from 110 districts in the beneficiary regions have been supplied
with subsidised certified seeds under the project.
Launch
The sector minister, Dr Owusu Afriyie Akoto, said the project was in line with the government’s
flagship ‘Planting for Food and Jobs’ programme.He stated that a recent tour of some regions
revealed that the Central Region had the capacity to produce double the rice requirements of
the country.
Dr Akoto said the rice that would be produced would be of high quality comparable to
international standards.
According to him, the rice currently produced in the country was good. “It is just that the milling
capacity is low and cannot cope with the increasing rice production in the country,” he added.
The minister also stated that the Planting for Food and Jobs programme was yielding results,
and added that besides rice, millet, soya beans, maize and vegetables, the programme also
extended support to groundnut and cassava farmers this year.
He further announced that his outfit was to receive a $220 million facility from Brazil and India
for the importation of machinery from those countries to “take away the drudgery of farmers”.
German Ambassador
For his part, the German Ambassador to Ghana, Mr Christoph Retzlaff, expressed the hope that
their partnership with AGRA on the project would be fruitful.
He said the initiative could pave the way for more domestic rice production and less
dependence on imports to help achieve the “Ghana beyond aid” agenda.
16 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
Mr Retzlaff also said the initiative would offer an opportunity to align and leverage resources
on existing programmes such as Competitive Rice Initiative (CARI), Green Innovation Centre
(GIC) and existing bilateral agricultural activities in the country.-
production.html.
Romero explained that the proposed liberalization of the rice supply in the country was designed to bring
local rice prices down by removing all rice import quotas and allowing more entities to import.
17 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
.
Koko defends Imelda’s right to post bail: Let’s be fair to the Marcoses
On Dec 1, 2018
Senator Aquilino “Koko” Pimentel III has asked the public to be “fair” to the Marcoses after the
Sandiganbayan allowed former First Lady and now Ilocos Norte Rep. Imelda Marcos to post bail while
pursuing legal remedies to her graft conviction.
18 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
In an interview aired over dwIZ radio Saturday (December 1), Pimentel said the Constitution allows all
accused to post bail for their charges except if the supposed violation is so severe as to merit life
imprisonment.
“Kung naging unfair sa atin ‘yung mga Marcoses, maging fair tayo sa kanila para makita naman nila na
ang laban natin ngayon ay patas unlike sa panahon nila na hindi patas and laban,” he said.
The Sandiganbayan found Marcos guilty of seven counts of graft in connection with her financial
interests in foundations based in Switzerland while her husband, the late dictator Ferdinand Marcos,
was President.
While the court sentenced Marcos to prison for her conviction, it recently released a resolution allowing
her to post a P300,000 bail and enjoy temporary liberty while she appeals the ruling.
Marcos recently informed the Sandiganbayan she has decided to take her case to the Supreme Court
instead of waiting for the anti-graft court to decide on her appeal
The move by China to lift the import ban on Niigata rice from Japan has more political than
economic significance, and it's a sign of warming bilateral ties between two major economies in
Asia amid worsening trade wars, experts said.
The General Administration of Customs of China said it would lift the import ban on rice grown
in Niigata, a production center of agricultural goods in Japan, according to a document released
on Wednesday. Imports should be in line with China's food safety, plant hygiene laws and
regulations, customs noticed.
In 2011, China's quality supervision agency banned imports of farm products from several
prefectures including Niigata following the Fukushima nuclear disaster.
Niigata prefecture is one of Japan's major rice production areas. In particular, the Uonuma
region in the prefecture is nationally famous for its koshihikari rice, according to a local travel
website.
19 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
Ryuichi Yoneyama, the former governor of Niigata, visited China to discuss the ban on imports
of farm products from the prefecture with China's quality supervision bureau and exchanged
opinions, he said in a Tweet posted on April 2.
"It's a great opportunity for Japanese farmers and traders once China lifts the import ban. As
Abenomics encourages exports, Japanese rice products will tap into a large consumption base,"
said Zhang Jifeng, a research fellow with the Institute of Japanese Studies at the Chinese
Academy of Social Sciences, told the Global Times.
Exports of Japanese rice, including sake and other processed products, stood at some 11,800
tons in 2017, far less than the government's target of 100,000 tons a year, according to the
Japan Times.
Considering that the agricultural sector accounts for only about 1.4 percent of Japan's GDP, the
move by China to allow more of these imports has a lot of political significance but little
economic benefit, Zhang noted.
"As support for the Liberal Democratic Party of Japan lies mainly in rural areas, it's important to
make farmers happy," he said.
The move also reflects a warming of bilateral ties after Japanese Prime Minister Shinzo Abe
paid a visit to China in October. These ties will play an important role amid heightened U.S.-
China trade tensions, the expert added.
China's further opening-up to Japanese rice products is also likely to benefit the whole supply
chain, including products such as rice cake and sake, said Chen Yan, executive director of the
Japanese Corporations (China) Research Institute.
By Sachin Murdeshwar
Mumbai:..
20 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com’m.De
21 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.commachining equipment, mechanical processing equipment, industrial robotand
automation equipment.-
india-china/
RESEARCH and development (R&D) in the agriculture sector must be prioritized in order to fast-
track mechanization, Senator Cynthia Villar said.
Senate Committee on Agriculture and Food chair Villar underscored the importance of
strengthening mechanization through research in order to increase productivity among Filipino
farmers.ARTICLE_MOBILE_AD_CODE
Villar said the National Economic Development Authority (Neda) has acknowledged the need to
invest in R&D to fast-track the growth and development of the agriculture sector.
The senator added mechanization is crucial in gearing Filipino farmers to be competitive in the
agriculture sector especially in the Asean Economic Community (AEC).
Villar cited the low production cost of rice in Vietnam with only P6 per kilograms (kg) compared
to the Philippines’ P12 production cost per kg.
With the passage of the Rice Tarrification Act, which is yet to be signed by President Rodrigo
Duterte, a huge budget is allocated for mechanization.
“Under the bill, the excess rice tariff revenues and the P10 billion fixed appropriation for the
Rice Competitiveness Enhancement Fund shall be used for providing direct financial assistance
to rice farmers. The bill is very specific as to where the funds will be spent, in securing that the
intended beneficiaries of the program can receive the funding,” Villar said Thursday, November
29, during her speech at the 1st Manufacturers’ Forum by Mindanao Agricultural Machinery
Industry Association at Apo View Hotel, Davao City.
22 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
The P10-billion rice fund will be allocated to the following: half of the amount will be for
providing rice farm machineries under the Philippine Center for Post-Harvest Development and
Modernization (PhilMech); 30 percent of the budget will be given to Philippine Rice Research
Institute (PhilRice) to be used for the development, propagation and promotion of inbred rice
seeds to rice farmers; 10 percent for credit facilities with minimal interest rates; and another 10
percent to fund extension services by PhilMech, Agricultural Training Institute, and the
Technical Education and Skills Development Authority-
mechanization.
23 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
Romero explained that the proposed liberalization of the rice supply in the country was
designed to bring local rice prices down by removing all rice import quotas and allowing more
entities to import.
.
File photo
24 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com.
National Food Authority warehouse in Quezon City -- MICHAEL VARCASMEMBERS of the National Food
Authority (NFA) Council had reservations about pursuing a government-to-government (G2G)
25 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
Basmati export stakeholders need to ensure the produce doesn’t have traces of pesticides and retains
its world-class quality
The EU had cut the maximum residue limit (MRL) for Tricyclazole, a fungicide used in India to protect the paddy crop from a
disease called ‘blast’, from 1 PPM to 0.01 PPM from December 31, 2017.
Since the beginning of the year, exporters of India’s aromatic and long-grained basmati rice and
officials from the commerce ministry have been deliberating on the issue arising out of stringent
import norms imposed by the European Union (EU), which sharply slashed the level of a commonly
used fungicide, Tricyclazole, in the rice imported into the continent.
The EU had cut the maximum residue limit (MRL) for Tricyclazole, a fungicide used in India to protect
the paddy crop from a disease called ‘blast’, from 1 PPM to 0.01 PPM from December 31, 2017. This
has put basmati rice exporters in a tough and challenging position.
27 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
Despite Indian authorities’ several communications to the EU stating that the fungicide can be phased
out gradually over the next three years, the EU has stuck to its stand. This is expected to slow down
India’s basmati rice exports to the EU in the current fiscal. It is likely to help India’s main competitor
Pakistan, as it exports aromatic long-grained rice to the EU and its farmers do not use Tricyclazole. It
has to be noted that farmers in Spain and Italy also use Tricyclazole on their paddy crop.
Officials from the All India Rice Exporters’ Association (AIREA) have stated that EU’s stringent MRL
norms are unrealistic to meet. “At least two crop cycles are required to effect the desired change.
Moreover, there is no scientific evidence that it is harmful to human health,” Vijay Setia, president,
AIREA, said. The EU and the US are high-value markets for basmati rice exporters, although a major
chunk of aromatic and long-grained rice is shipped to mostly Gulf countries including Iran, Saudi
Arabia, Kuwait and the United Arab Emirates.
Commerce ministry officials said, on an average, India annually exports 3.4-4 lakh tonnes of basmati
rice to the EU (mainly to the UK, the Netherlands, Italy, Belgium and France). The volume of annual
basmati rice exports to the EU is around 10% of the country’s annual aromatic rice shipment. In
anticipation of enforcement of stringent pesticides norms, in the last fiscal (2017-18), India exported
more than 4.5 lakh tonnes of basmati rice to the EU, and in the current fiscal the shipment would be
lower.
28 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com.
Despite the absence of uniformity in global pesticide standards concerning food products, officials
from the commerce and agriculture ministries, and rice industry, acknowledge the fact that India has
to put its house in order as far as the use of pesticide usage is concerned. “We have started to screen
our consignments (for possible pesticide traces) before being exported. We need regulatory support
in terms of judicious use of pesticides by farmers. Educating farmers does take time,” a commerce
ministry official said.
Pre-shipment testing of pesticide residues for export of basmati rice to the EU from the Basmati
Export Development Foundation (BEDF) laboratory in Modipuram, Uttar Pradesh, or other National
Accreditation Board for Testing and Calibration Laboratories (NABL), have been made mandatory by
the Agricultural and Processed Food Products Export Development Authority (APEDA). There has to
be greater thrust on promoting farming practices that reduce pesticide application by farmers and
ensure that farmers have adequate knowledge about proper usage of pesticides.
Official data says that there are 16 lakh farmers, mostly in Punjab, Haryana, western Uttar Pradesh
and few pockets of Uttarakhand, Himachal Pradesh and Jammu & Kashmir, engaged in basmati rice
cultivation. It is grown in around 16 lakh hectares. India exports key varieties such as Pusa Basmati 1
and Pusa Basmati 6 to the EU, and these are cultivated by around 6 lakh farmers.
In the just-concluded kharif season (2018), to curb the use of fungicides, AIREA, in association with
APEDA, conducted campaigns among basmati rice growers in many districts of Punjab, including
Amritsar, Tarn Taran, Gurdaspur, Ferozepur and Pathankot. Punjab’s agriculture department has
recruited volunteers to reach out to farmers about the negative impact of pesticides. The thrust of the
campaign has also been to educate farmers against using pesticides especially four weeks prior to
harvesting.
Agriculture ministry officials said farmers should use those pesticides that are recommended by state
agricultural universities. “There has to be prescription for all kinds of chemical pesticides to be used
for dealing with specific pests. Chemical shops must display the list of banned chemicals so that
farmers make informed choices,” an official said.
As all the pesticides sold in the market are registered with the Central Insecticide Board and
29 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
Registration Committee, under the Directorate of Plant Protection, Quarantine and Storage of the
agriculture ministry, the central government should be proactive, ensuring that banned pesticides are
not in circulation.
All key players in the basmati rice value chain need to work together with farmers for ensuring good
agricultural practices and exporters have to create a backward-linkage programme especially with
farmers to ensure that traces of pesticides are eliminated in the production process itself. Minimum
or judicious use of pesticides would improve and expand export potential of not only basmati rice, but
also of all other agricultural products.
Senior consultant, ICRIER. Views are personal-
stakeholders-need-to-ensure/1398588/
30 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
DATA DELIVERY On November 28, researcher Jiankui He gave scientists their first glimpse of data from the
creation of two gene-edited babies. Many in the scientific community have decried the work.
SPONSOR MESSAGE
A Chinese researcher who helped create the world’s first gene-edited babies publicly disclosed
details of the work for the first time to an international audience of scientists and ethicists, and
revealed that another gene-edited baby is due next year.
Lulu and Nana, twin girls whose DNA was edited with CRISPR/Cas9 to disable the CCR5 gene
involved in HIV infections, may soon be joined by another child, Jiankui He said on November
28. Another woman participating in a gene-editing trial to make children resistant to HIV
infection is in the early stages of pregnancy, He noted in a presentation at the second
International Summit on Human Genome Editing, held in Hong Kong.
He performed the experiments largely in secret — not even the Southern University of Science
and Technology in Shenzhen, China, where He worked until taking an unpaid leave in February
was aware of the study. He apologized that information about his work “leaked unexpectedly,”
a puzzling claim because He had granted interviews to the Associated Press and had recorded
several online videos. A manuscript describing the work is under review at a scientific journal,
He said.
Contentious experiments
In the presentation, He claimed that his experiments to disable the CCR5 gene might help
susceptible children, especially in the developing world, avoid HIV infection. “I truly believe this
is not only just for this case, but for millions of children that need this protection since an HIV
vaccine is not available … I feel proud.”
But He’s first public explanation failed to quell the controversy over his actions (SN Online:
11/27/18).
Producing babies from gene-edited embryos is “irresponsible,” and runs counter to a consensus
researchers reached in 2015 after the first international human gene-editing summit, said David
Baltimore after He’s presentation. “I personally don’t think it was medically necessary,” said
Baltimore, a Nobel laureate who has been influential in setting policy on DNA research and is
chair of the summit’s organizing committee.
There are lots of ways to avoid to HIV infection that don’t require risky tinkering with DNA. And
scientists aren’t convinced that editing human embryos with CRISPR/Cas9 is safe or ethical.
31 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
Scientists in the audience lined up to question He about how he recruited patients for the
study, informed them of the risk and consequences of the research and why he did the work in
the first place.
“I assume you’re well aware of this redline,” said Wensheng Wei of Peking University in Beijing,
echoing more broadly the sentiment of many in the scientific community. “Why did you choose
to cross it? And hypothetically if you didn’t know, why did you do all these clinical studies in
secret?” He did not answer the question.
He said he and his colleagues began experimenting with mice, monkeys and nonviable human
embryos to hone the editing technique. In that preliminary work, CRISPR editing of
the CCR5 gene didn’t produce any unwanted changes to other genes, which scientists call “off-
target” edits. Of 50 human embryos edited in one experiment, only one had a potential off-
target edit. Researchers can’t tell if that off-target edit was caused by CRISPR/Cas9 or is a
genetic tweak inherited from one of the embryo’s parents.
Lulu and Nana’s parents were one of seven couples recruited from an HIV patient group to take
part in He’s study. A consent form posted to his website bills the research as an HIV vaccine
development project. The baby’s father has HIV, but the virus is at undetectable levels in his
blood. The mother is not infected.
He and colleagues performed in vitro fertilization after washing the sperm to remove any
remaining traces of the virus. CRISPR/Cas9 protein and an RNA that guides the protein to
the CCR5 gene were injected into the egg along with the sperm. When the resulting embryos
had developed into a blastocyst, a stage just before implantation in the womb when the
embryo is a ball of about 200 cells, researchers removed several cells. The team examined, or
sequenced, three to five of those cells’ DNA for evidence of editing. In total, 31 embryos from
the seven couples reached the blastocyst stage. Of those, about 70 percent had edits of
the CCR5 gene, He said.
The embryo that developed into Lulu contained an edit that mimics a naturally occurring
mutation that helps protect some people from HIV. Initial testing also revealed evidence of an
off-target edit far from any genes in that embryo, He said. The embryo that developed into
Nana had a small deletion in the CCR5gene that would remove five of 352 amino acids from the
protein produced by the gene. Scientists don’t know whether that change would prevent HIV
from getting into cells. Nana’s embryo had no discernible off-target edits, He said.
He left it up to the parents to decide whether to implant the edited embryos, knowing that one
may have extra edits and the other may not be resistant to HIV. The couple decided to implant
both embryos.
32 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
After the girls were born, He and colleagues sequenced DNA from cells from the babies’
umbilical cord blood and determined that Lulu doesn’t have any off-target edits after all.
Unanswered questions
But researchers who saw He’s presentation aren’t convinced that he has presented enough
evidence to verify that the editing was successful and didn’t damage other genes. Previous
research has indicated that some cells in embryos may be incompletely edited or escape editing
entirely, creating a “mosaic” embryo (SN: 9/2/17, p. 6).
There would be no way to determine if every cell in an embryo is edited equally without
examining each cell’s DNA separately, says molecular geneticist Dennis Eastburn, who was not
at the summit. Additionally, traditional sequencing methods can’t detect all the possible off-
target changes CRISPR/Cas9 editing might produce in an embryo’s DNA, says Eastburn,
cofounder and chief science officer of Mission Bio in South San Francisco. To find
rearrangements of DNA, for example, researchers would need to do what’s called long-read
sequencing that could span large portions of a chromosome.
Far more troubling is that He chose to implant the embryos to establish pregnancies, all without
consulting scientific experts, ethicists and government regulators, says chemical biologist David
Liu.
The moment He decided to implant an edited embryo to create a human pregnancy was “the
critical juncture when his study went from being an eyebrow-raising, but not unprecedented
human embryo study similar to other ones done in China and other countries, to a deplorable
calamity,” says Liu, a Howard Hughes Medical Institute investigator at Harvard University and
the Broad Institute of MIT and Harvard.
He claims he consulted with several other experts, including some in the United States, before
moving ahead with his study. He’s university and Chinese authorities have launched
investigations of his work. Rice University in Houston is investigating the role one of its
researchers, Michael Deem, may have played in the research.
33 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
During their summit in October, Japanese Prime Minister Shinzo Abe urged Chinese President Xi
Jinping to lift the import restrictions on Japanese agricultural and other products.
China apparently examined the distances and wind directions from the crippled Fukushima No.
1 nuclear plant and decided to remove the ban on Niigata rice.
Japanese private companies have long hoped to resume rice exports to China, which accounts
for about 30 percent of the world market for the staple food.
The Japanese government plans to ask the Chinese government to further ease restrictions on
other food products.
The Abe administration has been promoting overseas sales of Japanese food products. It has
set a goal of 1 trillion yen ($8.8 billion) as the annual export amount of agricultural, forestry and
fishery products, as well as processed food.
But after the triple meltdown at the Fukushima No. 1 nuclear plant, 54 countries and regions
imposed restrictions on food imports from Japan.
Although the restrictions have been gradually eased, eight countries and regions--China, the
United States, South Korea, Singapore, the Philippines, Taiwan, Hong Kong and Macau--still ban
imports of certain products from certain areas of Japan, according to the agricultural ministry.
(This article was compiled from reports by Ayumi Shintaku and Takashi Funakoshi in Beijing and
Tetsushi Yamamura in Tokyo.)
| James Kon |
RICE imports to Brunei Darussalam were consistent from 2013 to 2018, with the exception of
2017, during which there was a drop in imports due to a shortage in supply, according to
statistics from the Treasury Department.
34 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
This has shown that it is a risk for Brunei Darussalam to subsist mainly on rice supplies from
overseas.
The need to boost self-sufficiency for food security was raised yesterday at the Knowledge
Convention 2018 by Haji Yusop bin Haji Mahmud, the Acting Accountant General at the
Treasury Department of the Ministry of Finance and Economy (MoFE), in his working paper,
‘Food Security and Consumer Safety Guarantee’.
“The Treasury Department plays a vital role in supporting assuring sufficient
food security for the nation, especially with rice as our basic food,” he said.
He also revealed that the Treasury Department was given the mandate to import the country’s
rice needs, with an import target of as much as 34,000 metric tonnes per year, from Thailand
and Cambodia.
Acting Accountant General at the Treasury Department of the MoFE Haji Yusop bin Haji
MahmudAttendees at the event. – PHOTOS: RAHWANI ZAHARI
This amount includes an additional six months of safety stock, together with the country’s actual
needs.
Meanwhile, the Department of Agriculture and Agrifood under the Ministry of Primary Resources
and Tourism (MPRT) has also come up with strategies to achieve self-sufficiency in stages, to
address and reduce the reliance on rice imports.
Haji Yusop said, “To make sure that the imported rice is safe for consumption, all rice imports into
the country must have a Phytosanitary Certificate from the Ministry of Agriculture, Forestry and
Fisheries of Cambodia and the Ministry of Agriculture and Cooperative in Bangkok. The certification
is a mandatory requirement. Without it, the rice would not be transported to Brunei.
35 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
“In addition, a certificate of quality is also needed from the import country, to meet the quality
standards. All the requirements are stipulated within the contract.
“To further ensure its quality and safeness for consumption, the department will send samples of
the imported rice to the scientific lab services under the Ministry of Health for an analysis.”
Haji Yusop iterated that food security and consumer safety is the responsibility of all. “The
development plans in the agriculture sector must be sustainable to ensure the survivability of the
people and the nation,” he said
Soyabean on NCDEX settled down by 1.29% at 3355 as buying by crushers was limited in the physical
market in the absence of any significant demand for soyoil and soymeal. Reports of higher arrivals and
bumper crop hope weighed on prices.
India's 2018-19 soybean production is projected at 13.46 million tons up 22.5% over previous year,
agriculture ministry data showed. According to senior government officials, China is likely to open its
doors to soybean from India after allowing the import of no basmati rice and raw sugar.
As per SOPA, India's soymeal exports in 2018/19 could jump as much as 70% from a year ago, buoyed by
expected purchases from the world's biggest soybean buyer China. Moreover, govt. plans to procure 44
lakh tonnes of oilseeds and pulses from farmers at MSPs in the ongoing kharif marketing season.
India's 2018-19 soybean production is projected at 13.46 million tons up 22.5% over previous year,
agriculture ministry data showed. Further, Soybean Processors Association of India (SOPA) has projected
an increase of 70% in soymeal exports during 2081-19 on expectations of fresh demand from China.
Arrivals of soybean in Madhya Pradesh are lower than expected and any significant increase in supplies
is unlikely till the assembly polls.
36 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
USDA indicated that 1.056 mt of soybeans were inspected for export in the week that ended on
November 15, down 22.13% from last week and less than half of the same week in 2017. At the Indore
spot market in top producer MP, soybean gained 17 Rupees to 3431 Rupees per 100 kgs.
Trading Ideas:
--Soyabean prices dropped as buying by crushers was limited in the physical market in the absence of
any significant demand for soyoil and soymeal.
--India's 2018-19 soybean production is projected at 13.46 million tons up 22.5% over previous year,
agriculture ministry data showed.
--NCDEX accredited warehouses soyabean stocks gained by 1364 tonnes to 124797 tonnes.
--At the Indore spot market in top producer MP, soybean gained 17 Rupees to 3431 Rupees per 100 kgs.-
3451/23502
“The renegotiated trade deal is a step towards securing future of Indonesian palm oil
exports to Pakistan,” Ehsan Malik, chief executive officer of a pan-industry advocacy
group Pakistan Business Council (PBC) told The News.
37 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
The Indonesian government has long been mulling import duty concessions for 20
tariff lines, including rice, mangoes and value-added textiles, under the revised
preferential trade agreement (PTA) originally signed with Pakistan in February 2012.
“The latest move is a preemptive move to shift of Pakistani palm oil buying orders to
Malaysia,” Malik said.
Indonesia is the dominant palm oil exporter to Pakistan, exporting worth $1.5 billion
of commodity in a year.
“(But) Indonesia is losing market share to the world’s No. 2 producer, Malaysia,”
Abdul Rasheed Jan Mohammed, chairman of the Pakistan Edible Oil Refiners
Association told Jakarta Globe.
The PBC estimates an incremental export potential arising from the duty relaxation of
$320 million, subject to capacity and quality competitiveness of the offerings.
38 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com
In 2017, Pakistan imported goods, mainly vegetable oil, worth $2.4 billion from
Indonesia. The country’s exports, however, stood at $241 million, leaving a trade
deficit of $2.2 billion.
“On an annualised basis the relaxation of tariffs will enhance exports from Pakistan to
Indonesia by 133 percent and reduce the deficit by 15 percent,” the PBC said in a
statement.
In January, Indonesia and Pakistan finalised the review process for the PTA and the
former agreed to grant tariff concessions on major exports from the latter, including
zero percent tariff on tobacco, textile fabric, rice, ethanol, citrus (kinnow), woven
fabric, t-shirts, apparel and mangoes. Indonesia’s global imports under these tariff
lines are around $600 million.
Previously, Pakistan provides preferential tariffs to 313 imports from Indonesia that
allows 232 imports from Pakistan under reduced duty structure.
Analysts said Pakistani exporters have been unable to benefit from around 200 tariff
lines, reflecting in decline in exports..
39 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com.This was stated by Adviser to Prime Minister on Commerce, Textiles and
40 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.-
for-pakistan/
41 | w w w . r i c e p l u s m a g a z i n e . b l o g s p o t . c o m ,
mujahid.riceplus@gmail.com | https://www.scribd.com/document/394619648/1st-December-2018-Daily-Global-Regional-Local-Rice-E-Newlsetter | CC-MAIN-2019-35 | refinedweb | 8,333 | 53 |
basic_oauth 0.1.3
Implements the "Resource Owner Password Credentials Grant" from Oauth v2.
==============
What is it?
-----------
The Oauth v2 spec defines several authorization grant. This library implements
the "Resource Owner Password Credentials Grant" as described in
<http: tools.ietf.
Requirements:
* [Flask]
* [Redis]
Why using it?
-------------
The goal of this Grant is to replace the classic "HTTP Basic over SSL" widely
used. With Oauth, you exchange your crendentials against a token.
This mechanism has several advantages:
* The client does not pass the full credentials for each request.
* The server does not check the username and password each time, it will
only check the access token, this will reduce the database lookups.
Basic Oauth uses Redis to store the sessions.
Is it secure?
-------------
__It would be stupid to use this mechanism without SSL__. Even if the token is
passed instead of the credentials, the credentials needs to be passed in clear
text during the Authentication phase. Also, it can be problematic to lose the
token.
To limit the risk of losing the token, every single token generated is signed
using the User-Agent and the client IP address. If an attacker tries to re-use
a stolen token, he will have to connect to the same IP and using the same
User-Agent (browser version, OS, architecture) to get access. A wrong try will
result in destroying the session.
How to use it?
--------------
Install basic_oauth from PYPI:
```
pip install basic_oauth
```
Create a sample WSGI app with [Flask]):
```python
import flask
import basic_oauth
app = flask.Flask(__name__)
oauth = basic_oauth.BasicOauth(app)
oauth.mount_endpoint('login', '/login')
oauth.mount_endpoint('script', '/js/oauth_client.js')
oauth.credentials.append(('johndoe', 'foobar42'))
# You can declare "oauth.authenticate_handler" to plug your own
# database instead of using the in-memory credentials
@app.route('/')
@oauth.require
def hello(user_id):
return 'Hello World!'
if __name__ == '__main__':
app.debug = True
app.run()
```
- Downloads (All Versions):
- 5 downloads in the last day
- 79 downloads in the last week
- 333 downloads in the last month
- Author: Sam Alba
- License:
Copyright (c) 2012 Samuel Alba <sam.alba: shad
- DOAP record: basic_oauth-0.1.3.xml | https://pypi.python.org/pypi/basic_oauth/0.1.3 | CC-MAIN-2015-14 | refinedweb | 349 | 59.9 |
#103 – XAML 2009
October 23, 2010 Leave a comment
.NET 4.0 introduced an update to the supported XAML vocabulary–the latest version supported is now XAML 2009. WPF and Silverlight do not yet support XAML 2009 (.NET 4 / Visual Studio 2010), but still support XAML 2006.
With respect to Visual Studio 2010, therefore, the features introduced in XAML 2009 can only be used in loose XAML files.
XAML 2009 introduces the following new features, beyond what was present in XAML 2006:
- x:Arguments allows calling non-default constructor (one with parameters)
- x:FactoryMethod allows calling a static method to construct an object rather than using a constructor
- x:Reference markup extension makes it easier to set a property value to point to an instance of another object
- x:TypeArguments allows use of generics
- Built-in support in x: namespace for standard CLR primitive data types (e.g. string, int, float, etc). Avoids adding a separate XML namespace.
See also: XAML 2009 Language Features | http://wpf.2000things.com/2010/10/23/103-xaml-2009/ | CC-MAIN-2013-48 | refinedweb | 164 | 54.02 |
CA1400: P/Invoke entry points should exist
A public or protected method is marked with the System.Runtime.InteropServices.DllImportAttribute. Either the unmanaged library could not be located or the method could not be matched to a function in the library. If the rule cannot find the method name exactly as it is specified, it looks for ANSI or wide-character versions of the method by suffixing the method name with 'A' or 'W'. If no match is found, the rule attempts to locate a function by using the __stdcall name format (_MyMethod@12, where 12 represents the length of the arguments). If no match is found, and the method name starts with '#', the rule searches for the function as an ordinal reference instead of a name reference.
No compile-time check is available to make sure that methods that are marked with DllImportAttribute are located in the referenced unmanaged DLL. If no function that has the specified name is in the library, or the arguments to the method do not match the function arguments, the common language runtime throws an exception.
To fix a violation of this rule, correct the method that has the DllImportAttribute attribute. Make sure that the unmanaged library exists and is in the same directory as the assembly that contains the method. If the library is present and correctly referenced, verify that the method name, return type, and argument signature match the library function.
The following example shows a type that violates the rule. No function that is named DoSomethingUnmanaged occurs in kernel32.dll.
using System.Runtime.InteropServices; namespace InteroperabilityLibrary { public class NativeMethods { // If DoSomethingUnmanaged does not exist, or has // a different signature or return type, the following // code violates rule PInvokeEntryPointsShouldExist. [DllImport("kernel32.dll")] public static extern void DoSomethingUnmanaged(); } } | http://msdn.microsoft.com/en-us/library/ms182208.aspx | CC-MAIN-2014-15 | refinedweb | 295 | 50.97 |
0.
The lowest layer of memory profiling involves looking at a single object in memory. You can do this by opening up a shell and doing something like the following:
1 2 3 4 5 6 7
>>> import sys >>> sys.getsizeof({}) 136 >>> sys.getsizeof([]) 32 >>> sys.getsizeof(set()) 112
The above snippet illustrates the overhead associated with a list object. A list is 32 bytes (on a 32-bit machine running Python 2.7.3). This style of profiling is useful when determining what type of data type to use.
The easiest way to profile a single method or function is the open source memory-profiler package. It's similar to line_profiler which I've written about before.
You can use it by putting the
@profile decorator around any function or
method and running
python -m memory_profiler myscript. You'll see
line-by-line memory usage once your script exits.
This is extremely useful if you're wanting to profile a section of memory-intensive code, but it won't help much if you have no idea where the biggest memory usage is. In that case, a higher-level approach of profiling is needed first.
There are a number of ways to profile an entire Python application. You can use the standard unix tools top and ps. A more Python specific way is guppy.
To use guppy you drop something like the following in your code:
1 2 3
from guppy import hpy h = hpy() print h.heap()
This will print you a nice table of usage grouped by object type. Here's an example of an PyQt4 application I've been working on:
1 2 3 4 5 6 7 8 9 10 11 12
Partition of a set of 235760 objects. Total size = 19909080 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 97264 41 8370996 42 8370996 42 str 1 47430 20 1916788 10 10287784 52 tuple 2 937 0 1106440 6 11394224 57 dict of PyQt4.QtCore.pyqtWrapperType 3 646 0 1033648 5 12427872 62 dict of module 4 11683 5 841176 4 13269048 67 types.CodeType 5 11684 5 654304 3 13923352 70 function 6 1200 1 583872 3 14507224 73 dict of type 7 782 0 566768 3 15073992 76 dict (no owner) 8 1201 1 536512 3 15610504 78 type 9 1019 0 499124 3 16109628 81 unicode
This type of profiling can be difficult if you have a large application using a relatively small number of object types.
Finally, I recently discovered memory-profiler comes with a script called mprof, which can show you memory usage over the lifetime of your application. This can be useful if you want to see if your memory is getting cleaned up and released periodically.
Using mprof is easy, just run
mprof run script script_args in your shell
of choice. mprof will automatically create a graph of your script's memory
usage over time, which you can view by running
mprof plot. Be aware that
plotting requires matplotlib.
I'm sure there are other approaches to profiling memory usage in Python. So let me know your recommendations in the comments.
0
Test your skills. Learn something new. Get help. Repeat.Start a FREE 10-day trial | https://www.pluralsight.com/guides/profiling-memory-usage-in-python | CC-MAIN-2018-47 | refinedweb | 542 | 72.56 |
webbrowser.open( )
Shouldn't the following code open a web browser?
import webbrowser url = '' webbrowser.open(url)
The URL needs to start with a protocol (usually
https://), otherwise nothing happens (although
Trueis returned).
Note that "http", "https" and "file" open in the built-in browser, which seems fairly useless if you are building a stand-alone app. You have to use "safari-" at the start of the protocol to open in Safari.
Wish there was an option for webbrowser.open to leave it for the OS to open, regardless of the URL.
@mikael - thank you for the additional information regarding the Safari URL scheme. I understand your opinion about the use of the built in browser and I agree. I normally would create my own browser or launch in Safari as you mentioned.
However, actually I have found it quite nice to have the webbrowser module if your stand alone app needs to display documentation, certain media, etc. | https://forum.omz-software.com/topic/2468/webbrowser-open | CC-MAIN-2018-51 | refinedweb | 158 | 74.49 |
Searching not any left. After that “procedure”, you backtrack until there is another choice to pick a node, if there isn’t, then simply select another unvisited node.
Implementation using Stack
The order of the visited nodes for the picture above is: 5 10 25 30 35 40 15 20
Implementing DFS using the Stack data structure
Node.java represents each “ball” or “circle” on the graph above. It has a val which represents the “value” of each ball. It also has a boolean variable called visited which as the name suggests, it represents whether a Node has been visited by the traversal or not. The third instance variable Node class has is an ArrayList which represents all the adjacents (or neighbours) to the current node called adjacents. (If you want to know more about ArrayList, you can view this tutorial.)
In terms of methods in this class, there is a simple constructor that takes in a value and creates an empty ArrayList, and Setter and Getter methods and also a method that allows adding an adjacent Node.
Node.java
import java.util.*; public class Node{ private int val; private boolean visited; private List<Node> adjecents; public Node(int val) { this.val = val; this.adjecents = new ArrayList<>(); } public void addAdjecents(Node n) { this.adjecents.add(n); } public List<Node> getAdjacenets() { return adjecents; } public int getVal() { return this.val; } public boolean isVisited() { return this.visited; } public void setVal(int v) { this.val = v; } public void setVisited(boolean visited) { this.visited = visited; } }
DFS.java
This class has only 1 method: the solution.
It uses Stack data structure and it takes Nodes as elements. It adds the specified element to the node and then marks it as visited. After that, there is a while loop that that keeps checking whether the stack is empty or not. If it isn’t, then remove one element from the stack, get the neighbours of the element that is being removed. Then, there is another loop, which purpose is to mark each neighbour node as visited and also it adds that neighbour node to the stack.
import java.util.*; public class DFS { public void stackSolution(Node node) { Stack<Node> DFS_stack = new Stack<Node>(); DFS_stack.add(node); node.setVisited(true); while (!DFS_stack.isEmpty()) { Node nodeRemove = DFS_stack.pop(); System.out.print(nodeRemove.getVal() + " "); List<Node> adjs = nodeRemove.getAdjacenets(); for (int i = 0; i < adjs.size(); i++) { Node currentNode = adjs.get(i); if(currentNode != null && !currentNode.isVisited()) { DFS_stack.add(currentNode); currentNode.setVisited(true); } } } } }
Main.java
In this class is the main method which creates 8 instances of the Node class and passes some values. (Keep in mind that this example below uses the graph above (the image). We are adding different nodes as neighbours to different nodes. After that, we start from node5 and traverse it.
import java.util.*; public class Main { public static void main(String [] args) { Node node5 = new Node(5); Node node10 = new Node(10); Node node15 = new Node(15); Node node20 = new Node(20); Node node25 = new Node(25); Node node30 = new Node(30); Node node35 = new Node(35); Node node40 = new Node(40); node5.addAdjecents(node10); node10.addAdjecents(node15); node15.addAdjecents(node20); node10.addAdjecents(node25); node25.addAdjecents(node35); node35.addAdjecents(node40); node25.addAdjecents(node30); DFS demo = new DFS(); System.out.println("DFS traversal of above graph: "); demo.stackSolution(node5); } }
Output:
DFS traversal of above graph: 5 10 25 30 35 40 15 20 | https://javatutorial.net/depth-first-search-example-java | CC-MAIN-2019-43 | refinedweb | 568 | 57.98 |
Hey guys! I'm back with another problem :(
I'm supposed to make a for loop program that will print only EVEN numbers from 77 down to 11. I also need to get their sum and average.
By the way here's the code I did:
public class Loop_For2 { public static void main (String[] args) { double sum = 0f; int ctr = 1, ctr2 = 0; for (;ctr<=77; ctr++, ctr2--) { System.out.println(ctr); sum += ctr; ctr++; } double ave = (double) sum/ctr2; System.out.println ("The sum is: " +sum); System.out.println ("The average is: " +ave); } }
I'm not supposed to use 'if'. Thanks again guys! | https://www.daniweb.com/programming/software-development/threads/324007/loops-again | CC-MAIN-2017-43 | refinedweb | 104 | 75.2 |
Canvas: Designing Workflows.
Signatures are often nicknamed “subtasks” because they describe a task to be called within a task.
subtaskmethod:
>>> add.subtask((2, 2), countdown=10) tasks.add(2, 2)
There is also a shortcut using star arguments:
>>> add.s(2, 2) tasks.add(2, 2)
Keyword arguments are also supported:
>>> add.s(2, 2, debug=True) tasks.add(2, 2, debug=True)
From any signature instance you can inspect the different fields:
>>> s = add.subtask((2, 2), {'debug': True}, countdown=10) >>> s.args (2, 2) >>> s.kwargs {'debug': True} >>> s.options {'countdown': 10}
It supports the “Calling API” which means it supports
delayand
apply_asyncor being called directly..subtask(args, kwargs, **options).apply_async() >>> add.apply_async((2, 2), countdown=1) >>> add.subtask() # 2 + 4 >>>.subtask((2, 2), countdown=10) >>> s.apply_async(countdown=1) # countdown is now 1
You can also clone signatures to create derivates:
>>> s = add.s(2) proj.tasks.add(2)
>>> s.clone(args=(4, ), kwargs={'debug': True}) proj.tasks.add(2, 4, debug=True)
Immutability¶
New in version 3.0.
Partials are meant to be used with callbacks, any tasks linked or chord callbacks will be applied with the result of the parent task. Sometimes you want to specify a callback that does not take additional arguments, and in that case you can set the signature to be immutable:
>>> add.apply_async((2, 2), link=reset_buffers.subtask. E.g., e.g the operation:
>>> items = zip(xrange(1000), xrangeflows..subtask((2, 2), immutable=True)
There’s also an
.sishortcut for this:
>>> add.si(2, 2)
Now you can create a chain of independent tasks instead:
>>> res = (add.si(2, 2) | add.si(4, 4) | add.s(8, 8))() >>> res.get() 16 >>> res.parent.get() 8 >>> res.parent.parent.get() 4
Simple group
You can easily create a group of tasks to execute in parallel:
>>> from celery import group >>> res = group(add.s(i, i) for i in xrange(10))() >>> res.get(timeout=1) [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
Simple chord
The chord primitive enables us to add callback to be called when all of the tasks in a group have finished executing, which is often required for algorithms that aren’t embarrassingly parallel:
>>> from celery import chord >>> res = chord((add.s(i, i) for i in xrange is not passed on to the callback:
>>> chord((import_contact.s(c) for c in contacts), ... notify_complete.si(import_id)).apply_async()
Note the use of
.siabove which creates an immutable signature.
Blow your mind by combining
Chains can be partial too:
>>> c1 = (add.s(4) | mul.s(8)) # (16 + 4) * 8 >>> res = c1(16) >>> res.get() 160
Which xrange xrange, which in practice means adding a callback task:
>>> res = add.apply_async((2, 2), link=mul.s(16)) >>> res.get() 4
The linked task will be applied with the result of its parent
task as the first argument, which in the above case will result
in
mul(4, 16) since the result is 4.
You can also add error callbacks using the
link_error argument:
>>> add.apply_async((2, 2), link_error=log_error.s()) >>> add.subtask((2, 2), link_error=log_error.s())
Since exceptions can only be serialized when pickle is used the error callbacks take the id of the parent task as argument instead:
from __future__ import print_function import os from proj.celery import app @app.task def log_error(task_id): result = app.AsyncResult(task_id) result.get(propagate=False) # make sure result written. with open(os.path.join('/var/errors', task_id), 'a') as fh: print('--\n\n{0} {1} {2}'.format( task_id, result.result, result.traceback), file=fh)
To make it even easier to link tasks together there()
Note
It’s not possible to synchronize on groups, so a group chained to another signature is automatically upgraded to a chord:
# will actually be a chord when finally evaluated res = (group(add.s(i, i) for i in range(10)) | xsum.s()).delay()
Trails¶
Tasks will keep track of what subtasks a task calls in the
result backend (unless disabled using
Task.trail) is not fully
formed (one of the tasks has not completed yet),
but you can get an intermediate representation of the graph
too:
>>> for result, value in res.collect(intermediate=True)): .... one in the current process, and a
GroupResult
instance is returned which can be used to keep track of the results,
or tell how many tasks are ready and so on:
>>> g = group(add.s(2, 2), add.s(4, 4)) >>> res = g() >>> res.get() [4, 8]
Group also supports iterators:
>>> group(add.s(i, i) for i in xrange(100))()
A group is a signature object, so it can be used in combination with other signatures.. did not raise an exception).
failed()
Return
Trueif any of the subtasks failed.
waiting()
Return
Trueif any of the subtasks is not ready yet.
ready()
Return
Trueif all of the subtasks are ready.
completed_count()
Return the number of completed subtasks.
revoke()
Revoke all of the subtasks.
join()
Gather the results for all of the subtasks and return a list with them ordered by the order of which they were called.
Chords¶
New in version 2.3.
Note
Tasks used within a chord must not ignore their results. If the result backend is disabled for any task (header or body) in your chord you should read “Important Notes”. xrange(100))(tsum.s()).s() >>> header = [add.s(i, i) for i in range(100)] >>> result = chord(header)(callback) >>> result.get() 9900
Remember, the callback can only be executed after all of the tasks in the
header)
Error handling¶
So what happens if one of the tasks raises an exception?
This was not documented for some time and before version 3.1 the exception value will be forwarded to the chord callback.
From 3.1 errors will propagate to the callback, so the callback will not be executed
instead the callback changes',)
If you’re running 3.0.14 or later you can enable the new behavior via
the
CELERY_CHORD_PROPAGATES setting:
CELERY_CHORD_PROPAGATES = True
While the traceback may be different depending on which result backend is
being used, you can see does not respect the ordering of the header group.
Important Notes¶
Tasks used within a chord must not ignore their results. In practice this
means that you must enable a
CELERY_RESULT_BACKEND in order to use
chords. Additionally, if
CELERY_IGNORE_RESULT is set to
True
in your configuration, be sure that the individual tasks to be used within
the chord are defined with
ignore_result=False. This applies to both
Task subclasses and decorated tasks.
Example Task subclass:
class MyTask(Task): abstract = True)
Map & Starmap¶
map and
starmap are built-in tasks
that calls the hundred thousand objects each.
Some may worry that chunking your tasks results in a degradation of parallelism, but this is rarely true for a busy cluster and in practice since you)()
which means that the first task will have a countdown of 1, the second a countdown of 2 and so on. | http://celery.readthedocs.org/en/latest/userguide/canvas.html | CC-MAIN-2015-27 | refinedweb | 1,156 | 68.06 |
07-07-2016 03:05 PM - edited 07-07-2016 03:46 PM
I am trying to call the uf.facet.AskSurfaceDataForFace method in Python. What is the tag for "facetface" argument? I looked through the documentation and I cannot find a reference to this.
In the AskFaceIdofFacet method, I have used the model( tag) and the facet Id successfully. Below is my code...
In the documentation below, "facet face" is reference only once...
for facet_body in workPart.FacetedBodies: self.theLw.WriteLine("Facet Journal Identifier is " + str(facet_body.JournalIdentifier)) #Get the facet Tag facet_tag=facet_body.Tag #this method does not work for some reason #faces_list=facet_body.GetFaces() ########end of loop###############3 #create instance of your facet uf_facet=NXOpen.UF.Facet() #Get number of facets, use tag from NXOpen.Facet.FacetedBody attribute num_facets=uf_facet.AskNFacetsInModel(facet_tag) #loop through all the facets, 0 to num_facets for i in range(0, num_facets): #returns id the of the face, an int #arguments are model id for facet_tag and facet id for i face_id=uf_facet.AskFaceIdOfFacet(facet_tag, i) #what is the right argument to pass? tried facet_tag, face_id and the iterator i surface_data=uf_facet.AskSurfaceDataForFace(facetface?????)
07-07-2016 04:34 PM
The GetFaces method in the first for loop should return an array/list of face tags but I get an "error return without exception set" error...
07-08-2016 09:35 AM
I have attached a small .VB test case that worked for me. I made a block and blended four edges. When I ran the attached, the ouput looked like this:
Found 1 FacetedBody objects
Faces for this body: 10
Radius: 0
Radius: 10
Radius: 10
Radius: 10
Radius: 0
Radius: 0
Radius: 0
Radius: 0
Radius: 0
Radius: 10
I saw an interesting note in the Open C docs in the UF_FACET chapter that said "note that face tags are stored only if "store_face_tags" is set to "true" when faceting a solid".
07-08-2016 01:58 PM - edited 07-08-2016 01:59 PM
Thanks @SteveLabout, I will try it out. Would you be able to post a link to that specific documentation? The python documentation doesn't have that note.
07-08-2016 02:15 PM
I'm not sure how to get a link that will go right to it, but you can navigate to it in the Programming Tools docs.
Open up the Open C Reference Guide. Select the uf_facet chapter from the long list on the left.
That note appears under both UF_FACET_ask_face_id_of_solid_face() and UF_FACET_ask_solid_face_of_facet().
The code I posted isn't calling either of those, but it seems like it might be an essential tidbit.
By the way, any time you are using a wrapper function - anything you call from the UFSession - we always suggest checking the Open C doc for the original function. The majority of the info related to the function appears only in this guide - it is not carried over to the .Net, Python of Java docs.
07-08-2016 02:40 PM - edited 07-11-2016 11:14 AM
thanks again,
i think i found the note that you are referring to. the "facet_face" parameter in the ask_surface_data_for_face function is only referenced once so not sure what it could be.......
07-11-2016 11:33 AM - edited 07-11-2016 11:43 AM
@SteveLabout Thank you again for posting that code. I tried to recreate it in Python but I get an "error return without exception set" message when trying to call the thisFctBody.GetFaces() function
It should return an array/list.
import NXOpen import NXOpen.BlockStyler import NXOpen.Features import NXOpen.UF import NXOpen.GeometricAnalysis import NXOpen.Facet class facet gather_data(self): self.theLw.Open() self.theLw.WriteLine("Bring up console") global facet_tag, dispPart, workPart, faces_list, num_facets dispPart= self.theSession.Parts.Display workPart = self.theSession.Parts.Work for thisFctBody in dispPart.FacetedBodies:
#Tag
facet_tag=thisFctBody.Tag
#this does not work, gives a "error return without exception set"
fctFaces=thisFctBody.GetFaces()
#this works and it is in the same method class thisFctBody.GetParameters() #from NXOpen.Facet.FacetedBody class for facet_body in workPart.FacetedBodies: #Tag facet_tag=facet_body.Tag self.theLw.WriteLine("Facet tag is " + str(facet_tag)) self.theLw.WriteLine("Number of Facets is " + str(facet_body.NumberOfFaces))
#this does not work, should return a list faces_list=facet_body.GetFaces() self.theLw.WriteLine("") self.theLw.WriteLine("") def main(): try: #create instance of class myfacet = facet_data() myfacet.gather_data() except Exception as ex: # ---- Enter your exception handling code here ----- NXOpen.UI.GetUI().NXMessageBox.Show("Block Styler", NXOpen.NXMessageBox.DialogType.Error, str(ex)) if __name__ == "__main__": main() | https://community.plm.automation.siemens.com/t5/NX-Programming-Customization-Forum/parameter-for-facet-surface-data-method/td-p/354705 | CC-MAIN-2017-51 | refinedweb | 758 | 60.72 |
I have a project and i did some of them but other i didn't do it
can any one help me please
in calcualte the average, sort and randomly
1. The program acceps
input for one student name and the student’s GPA . Maximum number of students and GPAs that can be entered are 20.2. The program calculates the average GPA for all the students that were entered.3. The program shows all student names with their GPA’s in a listbox.4. The program groups the student names in goups depending on the required group size and shows the student names in a listbox.5. The program can select a student from the student list (array) randomly and displays the name.
6. BONUS: The program sorts all GPAs and shows the student names with the GPAs in a sorted list
and this is my code
Code:namespace Final_Project { publicpartialclassformStudent:Form { // Declare and Initialize double[] gpas = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; string[]names = { " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " "}; int counter = 0; public formStudent() { InitializeComponent(); } privatevoid btnEnter_Click(object sender, EventArgs e) { // This method is called when we click the btn Enter buttom. // It will enter the student name & GPA. // The maximum number of student & GPA that can be entered are 20. // Declare and Initialize double gpa; string name = txtName.Text; // convert gpa = Convert.ToDouble(txtGPA.Text); // Move gpa to array // only if counter is lees than 21 if (counter < 20) { gpas[counter] = gpa; names[counter] = name; counter++; MessageBox.Show("The name was entered"); } else { MessageBox.Show("You cannot enter more names"); } } privatevoid btnList_Click(object sender, EventArgs e) { // This method is called when we click the btn List buttom. // It will list the student name & GPA in the list box. for (int i = 0; i < counter; i++) { lbStudentGPA.Items.Add(names[i] + " " + gpas[i]); } } } } | http://cboard.cprogramming.com/csharp-programming/148956-average-random-sort.html | CC-MAIN-2015-35 | refinedweb | 314 | 71.24 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.