text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
It is possible to use the serial port to receive commands directly in the Arduino code. We can for example control the GPIO from the serial monitor of a code editor such as the Arduino IDE or PlatformIO.
It is also possible to make several development boards or micro-controllers communicate with each other (STM32, ESP32, ESP8266) via the serial port.
Open the serial port in the Arduino code
Before being able to receive messages);
How to receive commands from the serial port?
Using the serial port as a command input is not much more complicated than the serial output.
We already know the Serial.print() command (and associated functions) which allows you to send characters to the serial port as well as the other derived commands presented in detail in this article.
To receive characters on the serial port, we have several commands (official documentation)
which allows to listen to everything that arrives on the serial port and to trigger a treatment.
allows to know the number of bytes (characters) available for writing to the serial buffer without blocking the write operation.
The methods to read in the buffer memory of the serial port
read whatever comes in on the serial port.
4 other more specialized functions
reads data from the serial buffer until the search string is found. The function returns true if the string is found.
reads data from the serial buffer until a target string of a given length or termination string is found. The function returns true if the target string is found.
is used to modify the Timeout, the waiting time before the execution of blocking Serial functions are aborted. The Timeout is 1 second by default.is used to modify the Timeout, the waiting time before the execution of blocking Serial functions are aborted. The Timeout is 1 second by default.
The wait time is in milliseconds. 1 second = 1000ms
The following functions are blocking: find(), findUntil(), parseInt(), parseFloat(), readBytes(), readBytesUntil(), readString(), readStringUntil().
Upload the project to drive an LED by sending commands to the serial port
Create a new sketch on the Arduino IDE or a new PlatformIO project and paste the following code.
The following orders are accepted
- to turn on the LED
- to turn off the LED
- to know the state of the LED (ON or OFF) which is stored in the variable led_status
- Any other command returns the error Invalid command
Before uploading the code, modify the constant LED_PIN which indicates the pin on which the LED is connected
#include <Arduino.h> #define LED_PIN 32 bool led_status = false; String command; void setup() { Serial.begin(115200); pinMode(LED_PIN, OUTPUT); } void send_led_status(){ if ( led_status ) { Serial.println("LED is ON"); } else { Serial.println("LED is OFF"); } } void loop() { if(Serial.available()){ command = Serial.readStringUntil('\n'); Serial.printf("Command received %s \n", command); if(command.equals("led=on")){ digitalWrite(LED_PIN, HIGH); led_status = 1; } else if(command.equals("led=off")){ digitalWrite(LED_PIN, LOW); led_status = 0; } else if(command.equals("ledstatus")){ send_led_status(); } else{ Serial.println("Invalid command"); } } }
Project circuit
For this project, we will simply connect an LED to a digital output of an Arduino board. The code is compatible with any ESP32 or ESP8266 board. Just change the pin that the LED is connected to.
Here the LED is plugged into output 32 of an ESP32 development board.
Explanation of the code
The Serial.available() command changes to true whenever the serial port buffer contains new characters. By placing the function in the main thead of the program, the loop(), we can immediately receive incoming commands
if( Serial.available() ){ ... processing of incoming commands on the serial port }
As we saw in the introduction, there are several functions available to read from the serial port buffer.
The serial monitor finishes sending it with the control character corresponding to the end of the line .
You can use the Serial.readUntil(“\n”) method which allows you to read up to the string passed in parameter. We store the string in the command variable of type String.
command = Serial.readStringUntil('\n');
All that remains is to use the string processing functions to identify the command and the parameter.
All the String processing functions are detailed in this article
Here, we will simply test if we have just received one of the three commands led=on, led=off , ledstatus .
The state of the LED is stored in the variable led_status
if(command.equals("led=on")){ digitalWrite(LED_PIN, HIGH); led_status = 1; } else if(command.equals("led=off")){ digitalWrite(LED_PIN, LOW); led_status = 0; } else if(command.equals("ledstatus")){ send_led_status(); } else{ Serial.println("Invalid command"); }
The send_led_status function prints the status of the LED as a string to the serial port using the println() function. To learn more about all the methods to write to serial port, you can continue by reading this article
void send_led_status(){ if ( led_status ) { Serial.println("LED is ON"); } else { Serial.println("LED is OFF"); } }
Test the project from the Arduino IDE serial monitor
Open the serial monitor from the Tools -> Serial monitor menu or using the icon
An input field is located above the serial monitor window. Press the Send button or the Enter key on your keyboard to send the string to the Arduino.
Now you can test the operation of the three commands led=on , led=off , ledstatus .
Video demonstration
Test the project from the PlatformIO serial monitor
By default, the PlatformIO serial monitor does not allow sending commands from the Terminal.
To be able to send commands (character strings) on the serial port, filters must be added using the monitor_filters option detailed here in the platformio.ini file.
Here is a sample configuration that you can use in your projects. The command is sent by pressing the enter key. The exchange log is written to a file at the root of the project.
monitor_filters = debug, send_on_enter, log2file
Save the configuration file then restart the serial monitor by clicking on the trash can.
Place the cursor (not visible) by clicking in the Terminal then enter the command. Send by pressing the enter key on the keyboard.
[TX:'l'] [TX:'e'] [TX:'d'] [TX:'='] [TX:'o'] [TX:'n'] [TX:'\r\n'] [RX:'C'] C [RX:'ommand received led=on \n'] ommand received led=on [TX:'l'] [TX:'e'] [TX:'d'] [TX:'='] [TX:'o'] [TX:'f'] [TX:'f'] [TX:'\r\n'] [RX:'C'] C [RX:'ommand received led=off \n'] ommand received led=off [TX:'l'] [TX:'e'] [TX:'d'] [TX:'s'] [TX:'t'] [TX:'a'] [TX:'t'] [TX:'u'] [TX:'s'] [TX:'\r\n'] [RX:'C'] C [RX:'ommand received ledstatus \nLED '] ommand received ledstatus LED [RX:'i'] i [RX:'s OFF\r\n'] s OFF
Use CoolTerm for Windows, macOS or Linux
A final practical alternative is the CoolTerm open source software developed by Roger Meier which you can download here .
Open the Connection menu then Settings . Select the COM port of the Arduino board, ESP32, ESP8266, STM32 as well as the speed , here 115200 bauds.
Save the connection parameters and click on the Connect icon
Open the Connection -> Send String… menu.
Enter the desired command then click on Send to send
Attention, the Enter key on the keyboard returns the cursor to the line, which will pose a problem with the Arduino code.
Updates
23/10/2020 Publication of the article
-
- Get started with the I2C bus on Arduino ESP8266 ESP32. Wire.h library | https://diyprojects.io/getting-started-arduino-receive-commands-from-the-serial-port-esp32-esp8266-compatible/?amp | CC-MAIN-2022-40 | en | refinedweb |
Software architects and programmers love low coupling. What is coupling? Why is coupling important? Let’s get started.
You will learn
- What is coupling?
- What are common examples of coupling in software?
- How can we reduce the amount of coupling between classes, between components, and the like?
Example of Couping
Let’s start with an example.
Let’s say a friend of mine is developing a Java component, in the form of a JAR file, for me to use. However, there is an implicit constraint imposed here - I need to use Java (or a JVM Based language) to use the utility JAR file! In other words, developing the component as a Java JAR has coupled me to using Java as well. To break free, I need to decouple.
How to Decouple? An Example:
Instead of providing me with a JAR, I could ask my friend to create a web service interface for me to access the same functionality. The concept would look something like this:
The web service is created around the JAR, and can be accessed from a Java, a PHP, or a .NET application. We can use any kind of application to invoke the web service.
This implies we have effectively decoupled from the underlying technology of the component. We are no longer affected by the fact that the component was developed in Java.
Another Example for Low Coupling - Spring Framework
Spring Framework has a highly modular structure:
Suppose we want to use a specific module from this framework; for instance, the Spring JDBC module.
What would happen if Spring tells you that you can only use Spring JDBC if you also use the Beans and Context modules?
You would probably not use it at all, because it introduces additional dependencies such as configuration etc.
Spring does get this right; it does not force you to use Beans or Context along with JDBC. In other words, Spring modules are not coupled with the other ones.
Class-level coupling - The
Order class
Let’s now go one level deeper. Let’s look at Coupling at the level of classes.
Let’s look at a simple shopping cart example:
class ShoppingCartEntry { public float price; public int quantity; } class ShoppingCart { public ShoppingCartEntry[] items; } class Order { private ShoppingCart cart; private float salesTax; public Order(ShoppingCart cart, float salesTax) { this.cart = cart; this.salesTax = salexTax; } public float orderTotalPrice() { float cartTotalPrice = 0; for(int i=0; i<cart.items.length; i++) { cartTotalPrice += cart.items[i].price * cart.items[i].quantity; } cartTotalPrice += cartTotalPrice * salesTax; return cartTotalPrice; } }
You would observe that
orderTotalPrice() knows the internal details of the
ShoppingCart and
ShoppingCartEntry classes:
- It accesses the
itemsfield of
ShoppingCartdirectly
- It accesses the
priceand
quantityfields of
ShoppingCartEntry, also directly
Scenario : Try and imagine a situation where we change the name of the
price field of
ShoppingCartEntry, to something else.
Approach : Code within
orderTotalPrice() would also need to change.
If you change the type of the
items array (possibly to a list) within
ShoppingCart, that would also lead to a change within
orderTotalPrice().
Order class is tightly coupled to the
ShoppingCart and
ShoppingCartEntry classes.
How do we decouple them?
Decoupling the
Order class Example
Here is one way of achieving this:
class ShoppingCartEntry { float price; int quantity; public float getTotalPrice() { return price*quantity; } } class CartContents { ShoppingCartEntry[] items; public float getTotalPrice() { float totalPrice = 0; for(ShoppingCartEntry item:items) { totalPrice += item.getTotalPrice(); } return totalPrice; } } class Order { CartContents cart; float salesTax; public Order(CartContents cart, float salesTax) { this.cart = cart; this.salesTax = salesTax; } public float totalPrice() { return cart.getTotalPrice * (1.0f + salesTax); } }
Note the following points:
- Instead of making the
priceand
quantityfields accessible,
ShoppingCartEntrynow makes a method named
getTotalPrice()available to
CartContents.
- The
CartContentsclass does something very similar, by also providing a
getTotalPrice()method for
Orderto use.
- The
Orderclass now only invokes the
getTotalPrice()methhod exposed by
CartContents, to compute the total cart value in
totalPrice().
Now,
- If the
pricefield in
ShoppingCartEntryhas its name changed, only
getTotalPrice()within the same class would be affected
- If the type of
itemswithin
CartContentsis changed from an array to a list, again only the
CartContents
getTotalPrice()method needs to be altered.
- The code within
Orderis not affected by either of these changes at all.
We have now completely decoupled
Order from both
ShoppingCartEntry and
CartContents.
Another Coupling Example with Spring Framework
Consider the following body of code:
public class BinarySearchImpl { public int binarySearch(int[] numbers, int numberToSearchFor) { BubbleSortAlgorithm bubbleSortAlgorthm = new BubbleSortAlgorithm(); int[] sortedNumbers = bubbleSortAlgorithm.sort(numbers); //... } }
You would notice that the
BinarySearchImpl class is directly dependent on the
BubbleSortAlgorithm class. If we need to change the actual sort algorithm, to use quicksort for instance, then a lot of code within
BinarySearchImpl needs to change.
We can solve this issue by making use of interfaces. Here is how our modified code would look like:
public intrface SortAlgorithm { public int[] sort(int[] numbers); }
If you use the Spring framework, you could use the
@Autowired annotation with the
BinarySearchImpl class, to automatically fetch an implementation of an available sort algorithm:
); //... } }
What we have achieved here is reduce the coupling between
BinarySeacrhImpl and a specific sort algorithm.
A Practical Viewpoint of Coupling
A good way of thinking about coupling, is if you change the inner details of a class or a component, do you need to make changes elsewhere as well?
Such dependencies are not desirable. If you intend to reuse code from one place to another, the related dependencies should be as few as possible.
Coupling can occur at multiple levels in an application:
- Class-level
- API-level
- Component-level
Let’s look at an example of coupling at component level:
Component-Level Coupling
Consider the following organization of an enterprise web application:
If the Security component were coupled with the Logging component, then wherever we need Security, we would also need to access Logging. That’s not good.
Coupling With Layers
Let’s look at an example of a layered web application:
It is organized into these three layers. Also assume that from the Web layer Controller, I need to call multiple methods from the Business layer. Let’s say five different methods need to be called for a single such requirement. This is a clear case of coupling.
A very effective way to avoid such layer-to-layer coupling is to use the Facade Pattern. You can create a Facade component on top of the Business layer, that manages calls to these five methods. The web layer can then make do with calling a single method from the Facade component.
All in all, decoupling makes the code more reusable and testable.
Summary
In this article, we looked at the concept of coupling at multiple levels: at the class, the component, the API and the layer level. We also looked at how to get around this factor through decoupling, at each of these levels. We identified the key question to ask about coupling: “If something changes at a particular place, would other things need to change as well? And If I use something, would I be forced to use something else as | https://girishgodage.in/blog/software-design-coupling-with-examples | CC-MAIN-2022-40 | en | refinedweb |
Originally published on NG-Conf
Programming is fun, especially when you love the technology you’re working on. We at Modus Create love the web and web technologies. One of the frameworks that we work with is Angular.
When you work with Angular on large scale apps, there comes a set of different challenges and problems that require diving deep into Angular. In this article, we’ll go through one such challenge: implementing keyboard navigation to a list component. An example use-case can be an autocomplete dropdown list which may have keyboard navigation.
To implement keyboard navigation entirely from scratch, we could develop a custom implementation, but that would take a bit of time and perhaps would be re-inventing the wheel – if something is already out there that does the job.
Angular Material has a CDK (Component Dev Kit) which provides a lot of cool components and services for creating custom components & services that we can ship as libraries or use in our applications. Angular Material itself is built on top of the Angular CDK. Angular Material’s
`a11y` package provides us a number of services to improve accessibility. One of those services is the
`ListKeyManager` service. Which we will be using to implement keyboard navigation into our app.
Let’s dive into the code:
First, if you don’t have the
`@angular/cdk` package installed in your app, run a quick
`npm install @angular/cdk --save`.
We have a repo created here which you can clone/fork and work locally on our example app as we go along. We’ll keep things simple for now but there can be more complex use cases when using
`ListKeyManager`. We’ll show you how we’ve implemented keyboard navigation in our demo app and you can implement this in a similar way.
Let’s go through what our demo app looks like. First, we’re loading some random users from randomuser.me api. We load them in our
`app.component.ts` and then we use our
`ListItemComponent` to display each user individually. We also have a search input which will filter the users based on their names.
See the code below for
`AppComponent`:
import { Component, OnInit } from "@angular/core"; import { UsersService } from "./core/services/users.service"; import { first } from "rxjs/operators"; @Component({ selector: "app-root", templateUrl: "./app.component.html", styleUrls: ["./app.component.scss"] }) export class AppComponent implements OnInit { users: any; isLoadingUsers: boolean; searchQuery: string; constructor(private usersService: UsersService) {} ngOnInit() { this.isLoadingUsers = true; this.usersService .getUsers() .pipe(first()) .subscribe(users => { this.users = users; this.isLoadingUsers = false; }); } }
In our view
(`app.component.html`), we have:
<div class="users-page"> <div class="users-page__loading"> Loading Users </div> <div class="users-age__main"> <h3 class="users-page__main__heading"> Users List </h3> <div class="users-page__main__search"> </div> <div class="users-page__main__list"> </div> </div> </div>
Notice that we’re looping over
`users` using
`*ngFor` and passing each user as an
`item` in the
`app-list-item` component. We’re also filtering the list using the
'filterByName' pipe which uses the value from the input above.
The
`app-list-item` component just displays the image, name and email of each user. Here is what the view code looks like:
<div class="item"> <div class="item__img"> <img> </div> <div class="item__content"> <div class="item__content__name">{{item.name.first}} {{item.name.last}}</div> <div class="item__content__email">{{item.email}}</div> </div> </div>
Notice that the
`div` with the class
`item` has a conditional class being applied i.e.
`item--active`. This would make sure that the active item looks different from the rest since we’re applying different styles on this class. The class
`item--active` would be applied when the
`isActive` property of the item is
`true`. We will use this later.
Moving forward, we’ll now include
`ListKeyManager` from the
`@angular/cdk/a11y` package in our
`app.component.ts` as:
import { ListKeyManager } from '@angular/cdk/a11y';
Then, we have to create a
`KeyEventsManager` instance in our
`app.component.ts` that we would use to subscribe to keyboard events. We will do it by creating a property in the
`AppComponent` class as:
export class AppComponent implements OnInit { users: any; isLoadingUsers: boolean; keyboardEventsManager: ListKeyManager; // <- add this constructor(private usersService: UsersService) { } ... }
We have declared the property
`keyboardEventsManager` but haven’t initialized it with anything. To do that, we would have to pass a
`QueryList` to
`ListKeyManager` constructor as it expects a
`QueryList` as an argument. The question is, what would this
`QueryList` be? The
`QueryList` should comprise of the elements on which the navigation would be applied. I.e. the
`ListItem` components. So we will first use
`@ViewChildren` to create a
`QueryList` and access the
`ListItem` components which are in
`AppComponent`‘s view. Then we will pass that
`QueryList` to the
`ListKeyManager`. Our AppComponent should look like this now:
import { Component, OnInit, QueryList, ViewChildren } from "@angular/core"; import { UsersService } from "./core/services/users.service"; import { first } from "rxjs/operators"; import { ListKeyManager } from "@angular/cdk/a11y"; import { ListItemComponent } from "./core/components/list-item/list-item.component"; // importing so we can use with `@ViewChildren and QueryList @Component({ selector: "app-root", templateUrl: "./app.component.html", styleUrls: ["./app.component.scss"] }) export class AppComponent implements OnInit { users: any; isLoadingUsers: boolean; keyboardEventsManager: ListKeyManager; @ViewChildren(ListItemComponent) listItems: QueryList; // accessing the ListItemComponent(s) here constructor(private usersService: UsersService) {} ngOnInit() { this.isLoadingUsers = true; this.usersService .getUsers() .pipe(first()) .subscribe(users => { this.users = users; this.isLoadingUsers = false; this.keyboardEventsManager = new ListKeyManager(this.listItems); // initializing the event manager here }); }
Now that we have created
`keyboardEventsManager`, we can initiate the keyboard events handler using a method named
`handleKeyUp` on the search input as we would press
`Up` and
`Down` arrow keys and navigate through the list observing the active item.
... import { ListKeyManager } from '@angular/cdk/a11y'; import { ListItemComponent } from './core/components/list-item/list-item.component'; import { UP_ARROW, DOWN_ARROW, ENTER } from '@angular/cdk/keycodes'; ... export class AppComponent implements OnInit { ... keyboardEventsManager: ListKeyManager; searchQuery: string; @ViewChildren(ListItemComponent) listItems: QueryList; constructor(private usersService: UsersService) { } ... /** * @author Ahsan Ayaz * @desc Triggered when a key is pressed while the input is focused */ handleKeyUp(event: KeyboardEvent) { event.stopImmediatePropagation(); if (this.keyboardEventsManager) { if (event.keyCode === DOWN_ARROW || event.keyCode === UP_ARROW) { // passing the event to key manager so we get a change fired this.keyboardEventsManager.onKeydown(event); return false; } else if (event.keyCode === ENTER) { // when we hit enter, the keyboardManager should call the selectItem method of the `ListItemComponent` this.keyboardEventsManager.activeItem.selectItem(); return false; } } } }
We will connect the
`handleKeyUp` method to the search input in
`app.component.html` on the
`(keyup)` event as:
If you debug the functions now, they would be triggered when up/down or enter key is pressed. But this doesn’t do anything right now. The reason is that as we discussed, the active item is distinguished when the
`isActive` property inside the
`ListItemComponent` is
`true` and the
`item--active` class is therefore applied. To do that, we will keep track of the active item in the
`KeyboardEventsManager` by subscribing to
`keyboardEventsManager.change`. We will get the active index of the current item in navigation each time the active item changes. We just have to set the
`isActive` of our
`ListItemComponent` to reflect those changes in view. To do that, we will create a method
`initKeyManagerHandlers` and will call it right after we initialize the
`keyboardEventsManager`.
Let’s see what our app looks like now:
Keyboard Nav using UP & DOWN arrow keys
BOOM💥! Our list now has keyboard navigation enabled and works with the
`UP` and
`DOWN` arrow keys. The only thing remaining is to show the selected item on
`ENTER` key press.
Notice that in our
`app.component.html`, the
`app-list-item` has an
`@Output` emitter as:
(itemSelected)="showUserInfo($event)"
This is what the
`ListItemComponent` looks like:
import { Component, OnInit, Input, EventEmitter, Output } from '@angular/core'; @Component({ selector: 'app-list-item', templateUrl: './list-item.component.html', styleUrls: ['./list-item.component.scss'] }) export class ListItemComponent implements OnInit { @Input() item; @Output() itemSelected = new EventEmitter(); isActive: boolean; constructor() { } ngOnInit() { this.isActive = false; } setActive(val) { this.isActive = val; } selectItem() { this.itemSelected.emit(this.item); } }
If you recall, in our
`handleKeyUp` method inside
`AppComponent`, we execute the below statement on
`ENTER` key press:
this.keyboardEventsManager.activeItem.selectItem();
The above statement is calling
`ListItemComponent`‘s
`selectItem` method which emits
`itemSelected` to the parent component. The emitted event in the parent calls the
`showUserInfo($event)` which finally alerts the message with the user name.
Let’s see how the completed app looks now:
Selecting active item using ENTER key
Conclusion
Angular CDK provides a lot of tools and as we’re working on complex projects, we’re continuously finding out great ways to create intuitive experiences that are easy to write and maintain. If you’re interested in building your own component libraries like Angular Material, do dive into Angular CDK and paste in the comments whatever cool stuff you come up with.
Happy coding – check out our Github repo for more.
Ahsan Ayaz
Related Posts
- Angular CDK brings Drag & Drop in Beta 7
In this article we'll go through one of the most anticipated features that all Angular…
- Angular + React Become ReAngular
Angular and React developers have revealed this morning that they are planning to merge to… | https://moduscreate.com/blog/adding-keyboard-navigation-to-angular-lists-using-angular-cdk-listkeymanager/ | CC-MAIN-2022-40 | en | refinedweb |
- 05 Aug, 2019 6 commits
- Heinrich Lee Yu authored
- Thong Kuah authored
Upgrade PostgreSQL versions in CI See merge request gitlab-org/gitlab-ce!31446
-
- Marcel Amirault authored
MD002 - First header should be level 1 MD006 - Start bullets at beginning of line MD019 - No multiple spaces after header style MD022 - Headers surrounded by blank lines MD025 - Only 1 level 1 header MD028 - No blank lines within blockquote MD038 - Spaces inside code span elements
-
- Marcia Ramos authored
- Link new doc to and from other docs - Add tbs section
- 04 Aug, 2019 1 commit
* 9.6.11 -> 9.6.14 * 10.7 -> 10.9 This is done in preparation for upgrading PostgreSQL in Omnibus:
- 03 Aug, 2019 4 commits
Clarify that we now use group based teams See merge request gitlab-org/gitlab-ce!31381
-
- Dustin Hemard authored
- 02 Aug, 2019 29 commits
- Drew Blessing authored
Update requirements wording in docs See merge request gitlab-org/gitlab-ce!31428
- Fatih Acet authored
Merge branch '50130-cluster-cluster-details-update-automatically-after-cluster-is-created' into 'master' Resolve "Cluster > Cluster details > Update automatically after cluster is created" Closes #65151 and #50130 See merge request gitlab-org/gitlab-ce!27189
- Mike Greiling authored
Only display the details of the cluster page when the cluster exists. If it is in "creating" state, show a message and a spinner
-
- João Cunha authored
Don't run danger on stable branches See merge request gitlab-org/gitlab-ce!31430
- Mike Greiling authored
Update dependency @gitlab/ui to ^5.12 See merge request gitlab-org/gitlab-ce!31242
- Enrique Alcántara authored
- Also, include pikaday styles through gitlab-ui
Uninstall Helm/Tiller See merge request gitlab-org/gitlab-ce!27359
- Dylan Griffith authored
Also creates specs Only allow Helm to be uninstalled if it's the only app - Remove Tiller leftovers after reser command - Fixes specs and offenses Adds changelog file Fix reset_command specs
-
- Jeffrey Cafferata authored
Make `needs:` to require a strong reference Closes #65512 See merge request gitlab-org/gitlab-ce!31419
- Paul Slaughter authored
Add md files to .prettierignore See merge request gitlab-org/gitlab-ce!31426
- Justin Boyson authored
This is to prevent prettier from auto formatting doc files.
- Lukas Eipert authored
-
- Bob Van Landuyt authored
Removes update_statistics_namespace feature flag See merge request gitlab-org/gitlab-ce!31392
- Kamil Trzciński authored
This changes `needs:` from weak reference to have a strong reference. This means that job will not be created unless all needs are present as part of a pipeline.
Resolve "Breakage in displaying SVG in the same repository" See merge request gitlab-org/gitlab-ce!31352
Support X_if_ee methods for QA tests See merge request gitlab-org/gitlab-ce!31379
- Drew Blessing authored
Update HA resource descriptions Closes #61192 and #27833 See merge request gitlab-org/gitlab-ce!31064
Respect needs on artifacts Closes #65466 See merge request gitlab-org/gitlab-ce!31413
- Fatih Acet authored
Improve job log rendering performance See merge request gitlab-org/gitlab-ce!31262
- Lukas '+ alert('Eipi') + ' Eipert authored
Currently we write out empty CSS classes (`class=""`) every time we create a new tag. This adds 9 unnecessary bytes per span element. In a recent trace, I have counted 11950 span elements. So we transported 105 unnecessary kilobytes!
- Mayra Cabrera authored
After measuring the impact of the namespace storage on. It was decided that it's performant enough. So we can freely remove the feature flag Related to
- Bob Van Landuyt authored
Resolve docker in docker problems See merge request gitlab-org/gitlab-ce!31417
- Paul Slaughter authored
Syncs the vue test utils helpers See merge request gitlab-org/gitlab-ce!31349
- Sam Beckham authored | https://foss.heptapod.net/heptapod/heptapod/-/commits/8debde4bb0edc87e269d23fd6aae4d592c8dca8c | CC-MAIN-2022-40 | en | refinedweb |
table of contents
NAME¶
XmSimpleSpinBox — a simple SpinBox widget class
SYNOPSIS¶
#include <Xm/SSpinB.h>
DESCRIPTION¶.
Classes¶
The XmSimpleSpinBox widget inherits behavior and resources from the Core, Composite and XmManager classes.
The class pointer is XmSimpleSpinBoxWidgetClass.
The class name is XmSimpleSpinBoxWidget.
New Resources¶).
- XmNarrowLayout
- Specifies the style and position of the SpinBox arrows. The following values are supported:
- XmARROWS_FLAT_BEGINNING
- The arrows are placed side by side to the right of the TextField.
- XmARROWS_FLAT_END
- The arrows are placed side by side to the left of the TextField.
- XmARROWS_SPLIT
- The down arrow is on the left and the up arrow is on the right of the TextField.
- XmARROWS_BEGINNING
- The arrows are stacked and placed on the left of the TextField.
- XmARROWS_END
- The arrows are stacked and placed on the right of the TextField.
- XmNarrowSensitivity
- Specifies the sensitivity of the arrows in the XmSimpleSpinBox. The following values are supported:
- XmARROWS_SENSITIVE
- Both arrows are active to user selection.
- XmARROWS_DECREMENT_SENSITIVE
- The down arrow is active and the up arrow is inactive to user selection.
- XmARROWS_INCREMENT_SENSITIVE
- The up arrow is active and the down arrow is inactive to user selection.
- XmARROWS_INSENSITIVE
- Both arrows are inactive to user selection.
- XmNcolumns
- Specifies the number of columns of the text field.
- XmNdecimalPoints
- Specifies the position of the radix character within the numeric value when XmNspinBoxChildType is XmNUMERIC. This resource is used to allow for floating point values in the XmSimpleSpinBox widget.
- XmNeditable
- Specifies whether the text field can take input.
-
- When XmNeditable is used on a widget it sets the dropsite to XmDROP_SITE_ACTIVE.
- XmNincrementValue
- Specifies the amount to increment or decrement the XmNposition when the XmNspinBoxChildType is XmNUMERIC. When the Up action is activated, the XmNincrementValue is added to the XmNposition value; when the Down action is activated, the XmNincrementValue is subtracted from the XmNposition value. When XmNspinBoxChildType is XmSTRING, this resource is ignored.
- XmNinitialDelay
- Specifies the amount of time in milliseconds before the Arrow buttons will begin to spin continuously.
- XmNnumValues
- Specifies the number of items in the XmNvalues list when the XmNspinBoxChildType resource is XmSTRING. The value of this resource must be a positive integer. The XmNnumValues is maintained by the XmSimpleSpinBox widget when items are added or deleted from the XmNvalues list. When XmNspinBoxChildType is not XmSTRING, this resource is ignored.
- XmNvalues
- Supplies the list of strings to cycle through when the XmNspinButtonChildType resource is XmSTRING. When XmNspinBoxChildType is not XmSTRING, this resource is ignored.
- XmNmaximumValue
- Specifies the upper bound on the XmSimpleSpinBox's range when XmNspinBoxChildType is XmNUMERIC.
- XmNminimumValue
- Specifies the lower bound on the XmSimpleSpinBox's range when XmNspinBoxChildType is XmNUMERIC.
- XmNmodifyVerifyCallback
- Specifies the callback to be invoked just before the XmSimpleSpinBox position changes. The application can use this callback to implement new application-related logic (including setting new position spinning to, or canceling the impending action). For example, this callback can be used to stop the spinning just before wrapping at the upper and lower position boundaries. If the application sets the doit member of the XmSimpleSpinBoxCallbackStruct to False, nothing happens. Otherwise, the position changes. Reasons sent by the callback are XmCR_SPIN_NEXT, or XmCR_SPIN_PRIOR.
- XmNposition
- The XmNposition resource has a different value based on the XmNspinBoxChildType resource. When XmNspinBoxChildType is XmSTRING, the XmNposition is the index into the XmNvalues list for the current item. When the XmNspinBoxChildType resource is XmNUMERIC, the XmNposition is the integer value of the XmSimpleSpinBox that falls within the range of XmNmaximumValue and XmNminimumValue.
- XmNrepeatDelay
- Specifies the number of milliseconds between repeated calls to the XmNvalueChangedCallback while the user is spinning the XmSimpleSpinBox.
- XmNspinBoxChildType
- Specifies the style of the XmSimpleSpinBox. The following values are supported:
- XmSTRING
- The child is a string value that is specified through the XmNvalues resource and incremented and decremented by changing the XmNposition resource.
- XmNUMERIC
- The child is a numeric value that is specified through the XmNposition resource and incremented according to the XmNincrementValue resource.
- XmtextField
- Specifies the textfield widget.
- XmNvalueChangedCallback
- Specifies the callback to be invoked whenever the value of the XmNposition resource is changed through the use of the spinner arrows. The XmNvalueChangedCallback passes the XmSimpleSpinBoxCallbackStruct call_data structure.
Inherited Resources¶
The XmSimpleSpinBox widget inherits behavior and resources from the following named superclasses. For a complete description of each resource, see the man page for that superclass.
Callback Information¶.
ERRORS/WARNINGS¶
The toolkit will display a warning if the application tries to set the value of the XmNtextField resource, which is read-only (marked G in the resource table).
SEE ALSO¶). | https://manpages.debian.org/bullseye/libmotif-dev/XmSimpleSpinBox.3.en.html | CC-MAIN-2022-40 | en | refinedweb |
My last post "C++20: The Core Language" presented the new features of the C++20 core language. Today, I continue my journey with an overview of the C++20 library.
The image shows you my plan for chrono library from C++11/14 was extended with a calendar and time-zone facility. If you don't know the Chrono library, read my posts to time.
Calendar: consists of types, which represent a year, a month, a day of a weekday, and an n-th weekday of a month. These elementary types can be combined to complex types such for example year_month, year_month_day, year_month_day_last, years_month_weekday, and year_month_weekday_last. The operator "/" is overloaded for the convenient specification of time points. Additionally, we will get with C++20 new literals: d for a day and y for a year.
Time-points can be displayed in various specific time zones.
Due to the extended chrono library, the following use-cases are easy to implement:
auto d1 = 2019y/oct/28;
auto d2 = 28d/oct/2019;
auto d3 = oct/28/2019;
If you want to play with these features, use Howard Hinnards implementation on GitHub. Howard Hinnard, the author for the calendar and time-zone proposal, also created a playground for it on Wandbox.
#include "date.h"
#include <iostream>
int
main()
{
using namespace date;
using namespace std::chrono;
auto now = system_clock::now();
std::cout << "The current time is " << now << " UTC\n";
auto current_year = year_month_day{floor<days>(now)}.year();
std::cout << "The current year is " << current_year << '\n';
auto h = floor<hours>(now) - sys_days{jan/1/current_year};
std::cout << "It has been " << h << " since New Years!\n";
}
Of course, C++20 uses the std::chrono namespace instead of the date namespace. Here is the output of the program:
A std::span stands for an object that can refer to a contiguous sequence of objects. A std::span, sometimes also called a view is never an owner. This contiguous memory can be an array, a pointer with a size, or a std::vector. A typical implementation needs a pointer to its first element and a size. The main reason for having a std::span<T> is that a plain array will be decay to a pointer if passed to a function; therefore, the size is lost. std::span<T> automatically deduces the size of the plain array or the std::vector. If you use a pointer to initialise a std::span<T>, you have to provide the size for the constructor.
template <typename T>
void copy_n(const T* p, T* q, int n){}
template <typename T>
void copy(std::span<const T> src, std::span<T> des){}
int main(){
int arr1[] = {1, 2, 3};
int arr2[] = {3, 4, 5};
copy_n(arr1, arr2, 3); // (1)
copy(arr1, arr2); // (2)
}
In contrast to the function copy_n (1), copy (2) don't need the number of elements. Hence, a common cause of errors is gone with std::span<T>.
C++ becomes more and more constexpr. For example, many algorithms of the Standard Template Library get with C++20 a constexpr overload. constexpr for a function or function template means that it could potentially be performed at compile time. The question is now, which containers can be used at compile time? With C++20, the answer is std::string and std::vector.
Before C++20, both can not be used in a constexpr evaluation, because there were three limiting aspects.
These limiting aspects are now solved.
Point 3 talks about placement-new, which is quite unknown. Placement-new is often used to instantiate an object in a pre-reserved memory area. Besides, you can overload placement-new globally or for your data types.
char* memory = new char[sizeof(Account)]; // allocate memory
Account* account = new(memory) Account; // construct in-place
account->~Account(); // destruct
delete [] memory; // free memory
Here are the steps to use placement-new. The first line allocates memory for an Account, which is used in the second line to construct an account in-place. Admittedly, the expression account->~Account() looks strange. This expression is one of these rare cases, in which you have to call the destructor explicitly. Finally, the last line frees the memory.
I will not go further into the details to constexpr Containers. If you are curious, read proposal 784R1.
cppreference.com/ has a concise description of the new formatting library: "The text formatting library offers a safe and extensible alternative to the printf family of functions. It is intended to complement the existing C++ I/O streams library and reuse some of its infrastructure such as overloaded insertion operators for user-defined types.". This concise description includes a straightforward example:
std::string message = std::format("The answer is {}.", 42);
Maybe, this reminds you of Pythons format string. You are right. There is already an implementation of the std::format on GitHub available: fmt. Here are a few examples from the mentioned implementation. Instead of std, it uses the namespace fmt.
std::string s = fmt::format("I'd rather be {1} than {0}.", "right", "happy");
// s == "I'd rather be happy than right."
fmt::memory_buffer buf;
format_to(buf, "{}", 42); // replaces itoa(42, buffer, 10)
format_to(buf, "{:x}", 42); // replaces itoa(42, buffer, 16)
// access the string with to_string(buf) or buf.data()"
As promised, I will dive deeper with a future post into the library. But first, I have to finish my high-level overview of C++20. My next post is about the concurrency149
Yesterday 6041
Week 36005
Month 149895
All 10307657
Currently are 155 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
which helps the function to detect the context in which the function is invoked.
in the case of template "copy" function (with the current signature), the compilers are not capable to find the function for int[3] types. the implicit conversion from int[3] to spawn is not applied.
the call of copy method needs to employ the explicit conversion from array to span.
copy(std::span(arr1), std::span(arr2)); | https://www.modernescpp.com/index.php/c-20-the-library | CC-MAIN-2022-40 | en | refinedweb |
SAP Web IDE: App to App Navigation in FLP Sandbox
Hi,
In this blog I will show how to preview 2 different applications in one FLP sandbox while developing in SAP Web IDE. I will also show an example of how to navigate from one app to another.
- Make sure you have 2 (or more) apps in your SAP Web IDE workspace.
- In each of the apps, make sure its .project.json file has an entry called “hcpdeploy” with the following parameters:
"hcpdeploy": { "account": “<your account name. For example, 'myaccount123' in http:/webide-myaccount123.dispatcher.hanatrial.ondemand.com/>”, "name": “<a unique name for the app, usually it is the name of the app in the workspace in lowercase without dots>”, "entryPath": “<the relative location of the component.js file to the root folder of the app. For example: webapp >” }
If you do not have this entry, please create it in each file.
- Create a new empty project in your workspace for the FLP sandbox by selecting Workspace > New > Folder.
For simplicity, let’s call this project FLPsandbox.
- Create a new folder called appconfig under the FLPsandbox’s root folder.
- In this folder, create a file called fioriSandboxConfig.json with the following content:
{ "applications" : { } }
- In the fioriSandboxConfig.jsonfile, under “applications” add the following for each of the projects you configured in step 1:
"FioriObject-action" : { "additionalInformation" : "SAPUI5.Component=<application namespace> “, "applicationType" : "URL", "url" : "<path to MyFioriApplication . MyFioriApplication name as it is in the Web Ide workspace >", "description" : "<The description for the application >", "title" : "<the title of the tile>" }
- Right-click on the FLPsandbox project and choose New > HTML5 Application Descriptor. The neo-app.json file is added to the project. Add the following at the beginning of the neo-app.json file under “routes” for each of the applications you configured in step 1:
{ "path": "/test-resources/sap/ushell/<application name in Web IDE workspace> ", "target": { "type": "application", "name": "<the unique name for the app, as it was defined in its .project.json file of the application>" } }, { "path": "/<application name in Web IDE workspace > ", "target": { "type": "application", "name": "<the unique name for the app, as it was defined in its .project.json file of the application>" } },
- Run the sandbox.The sandbox is launched with the configured apps as tiles. You can now navigate from the FLPSandbox to the apps.
- Select the FLPsandbox project and open the run configurations.
- Create a new “SAP Fiori Component on Sandbox” configuration and make the following configurations:
- Give it meaningful name
- In the General tab, the file name should be the fioriSandboxConfig.json. The field will be populated with the full path.
- In the Advanced Setting tab, under the Application Resources section, select Use my workspace first.
- Save and run the configuration.
- Add navigation between the apps.In the origin app, add the following code as a callback to the navigation event:
var oCrossAppNav = sap.ushell && sap.ushell.Container && sap.ushell.Container.getService("CrossApplicationNavigation"); var href_For_Product_display = (oCrossAppNav && oCrossAppNav.toExternal({ target: { shellHash : "<the target app key configuration defined in step 4, e.g. FioriObject-action>" }, params: a map of parameters to be forwarded to the target app })) || "";
- Re-run the sandbox.
That’s it!
Thanks for this post Elina! Very helpful!
Hello Elina,
This is an interesting blog! I tried this in local SAP Web IDE but the app is unable to find that the root folder is my workspace and I get 404 error for each component. Is there anyway possible that "advanced setting" feature is available in local sap web ide version too? Would love to know! Thanks.
Hi Prerana,
As this feature is heavily depends on SAP Cloud Platform, it is currently not part of personal edition. | https://blogs.sap.com/2016/08/14/sap-web-ide-app-to-app-navigation-in-flp-sandbox/ | CC-MAIN-2022-40 | en | refinedweb |
Integration of Ceph and Kubernetes
This blog is about the integration of ceph with kubernetes. Kubernetes needs persistent storage for application to to maintain the data even after pod fails or deleted. This storage may be storage devices attached to cluster shared by NFS, storage solutions provided by cloud providers. We are considering Ceph here. Before dive into actual process of integration, lets see some basic like what ceph, kubernetes is.
What is Kubernetes
Kubernetes is an..
Prerequisites
How to Do it ?
On Ceph admin
Lets see our ceph cluster up and running by ceph -s comamnd. Now we need to create a pool and image in it for kubernetes storage where we share it with kubernetes client to use.
$ ceph -s
$ ceph osd pool create kubernetes 100
$ ceph osd lspools
Now we need image to create in kubernetes pool we just created. We are naming image “kube”. And see the information of image we created.
For kubernetes to access the pool and images we created here on ceph cluster, needs some permission. So we provide it by copying ceph.conf and admin keyring. (Although it not advised to copy the admin keyring. For better practice keyring with some permission for desired pool is need share).
$ scp /etc/ceph/ceph.conf root@master:~
$ scp /etc/ceph/ceph.client.admin.keyring root@master:~
To make kubernetes master as ceph client we need to follow some steps:
- Install ceph-common packages
- Move ceph.conf and admin keyring to the /etc/ceph/ directory Now we can run ceph commands on kubernetes master. Moving further, we need to map our rbd image here. First enable the rbdmap service
$ systemctl enable rbdmap
$ rbd map kube -p kubernetes $ rbd showmapped
Note: If the rbdmap enabling fails, try removing rbdmap file from /etc/ceph directory. Depending upon which ceph version and OS you are using, this mapping is not done simply running these commands. You need to load rbd modules into kernel by modeprobe command. Also disable some feature of rbd by rbd feature disable command.
Connecting Ceph and kubernetes
Kubernetes to use ceph storage as backend, kubernetes needs to talk with ceph(not in general) via external storage plugin as this provision is not include in official kube-control-manager image. So we need either to customize the kube-control-manager image or use plugin- rbd-provisioner. In this blog, we are using external plugin. Lets create rbd-provisioner in kube-system namespace with RBAC. Before creating deployment check if rbd-provisioner has same image of ceph version as yours by running following command:
$ docker pull quay.io/external_storage/rbd-provisioner:latest $ docker history quay.io/external_storage/rbd-provisioner:latest | grep CEPH_VERSION
Now create deployment, save following in rbd-provisioner.yml.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rbd-provisioner"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rbd-provisioner
subjects:
- kind: ServiceAccount
name: rbd-provisioner
namespace: kube-system
roleRef:
kind: ClusterRole
name: rbd-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: rbd-provisioner
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: rbd-provisioner
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: rbd-provisioner
subjects:
- kind: ServiceAccount
name: rbd-provisioner
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: rbd-provisioner
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rbd-provisioner
spec:
replicas: 1
selector:
matchLabels:
app: rbd-provisioner
strategy:
type: Recreate
template:
metadata:
labels:
app: rbd-provisioner
spec:
containers:
- name: rbd-provisioner
image: "quay.io/external_storage/rbd-provisioner:latest"
env:
- name: PROVISIONER_NAME
value: ceph.com/rbd
serviceAccount: rbd-provisioner
Run following command
$ kubectl create -f rbd-provisioner.yml
This will create service account RBAC, Role biding etc. Check if rbd-provisioner pod is running after few minutes by following command:
$ kubectl get pods -l app=rbd-provisioner -n kube-system
RBD volume provisioner needs admin key from Ceph to provision storage. To get the admin key from Ceph cluster use this command:
$ ceph --cluster ceph auth get-key client.admin
Now create Kubernetes secret with this key
$ kubectl create secret generic ceph-secret \
--type="kubernetes.io/rbd" \
--from-literal=key='COPY-YOUR-ADMIN-KEY-HERE' \
--namespace=kube-system
Note: If keyring other than admin also created for pool permissions, then we also need to create secret for the same. We are going to use admin keyring and admin user for creating kubernetes resources like storage class, pvc etc.
$ ceph mon dump
Now Lets create storage-class named storage-rbd:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: storage-rbd
provisioner: ceph.com/rbd
parameters:
monitors: <MONITOR_IP_1>:6789, <MONITOR_IP_2>:6789
adminId: admin
adminSecretName: ceph-secret
adminSecretNamespace: kube-system
pool: kubernetes
userId: admin
userSecretName: ceph-secret
userSecretNamespace: kube-system
imageFormat: "2"
imageFeatures: layering
Save this in storage-class.yml file and run following command:
$ kubectl create -f storage-class.yml
$ kubctl get sc
And the last step is to create a simple PVC. Save following in file pvc.yml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: storage-rbd
and run
$ kubectl create -f pvc.yml
Summary
That’s it !! We created the persistent storage for kubernetes from ceph. Use this pvc to create pv and pod whatever you like. Suggestions are most welcomed.
Hope you enjoyed this blog. Happy Ceph-ing!! | https://rdwaykos.medium.com/integration-of-ceph-and-kubernetes-87713c4f8921 | CC-MAIN-2022-40 | en | refinedweb |
Introduction: The Pi Must Go On! Pi-powered RFID Musical Box.
An idea is born!
This Christmas, my wife and I entered a Hallmark store to let our 1 year old daughter look at the ornaments and trinkets. My wife and daughter were instantly drawn towards a glowing Disney princess castle. In the center of the castle was a platform, and as my wife took one of the five princesses from the front of the castle and placed her on the platform, lights started flashing and music started playing that corresponded with the selected princess. If Belle was placed on the platform, the castle played "Beauty and the Beast". Ariel sang "Part of your World" and so on.
Both my wife and daughter were enthralled by this castle! Finding out that these castles were sold out was a devastating blow. The closest castle was in a store hundreds of miles away. When we got home, my wife instantly searched online for sellers. The listed store price was a hefty $100 dollars or so, however, the online listings were upwards of $130-$160 as shown in these links:
I jokingly said, "I could probably make something like it for cheaper."
Knowing that I am a crafting novice and have the coding skills of a rock, she lovingly replied "I don't think so."
And the challenge was on!
DISCLAIMER: In the video there is a light buzzing sound. We apologize, that is our heater in the background and is NOT coming from the Pi or USB speaker.
Supplies
A Long and Winding Road:
Skip to the next heading if you only care about a shopping list.
I'm going to give you a brief tour of my thought process.
"LeFou, I'm afraid I've been thinking" - Gaston (singing)
"A dangerous pastime" - LeFou
"I know" - Gaston
My wife and I grew up loving Disney. We had some Disney figurines laying around just waiting to let their voice be heard. This project really got its kick-start when I went down the rabbit hole on this very website to try and find similar projects. My goals were:
- Create or find a way to distinguish one character from another (without this step, the project is dead in the water.)
- Integrate whatever solution I found into a fairly durable physical "user interface".
- Make the project completely self-sustaining (As poor as my coding skills are, my one year old doesn't know the difference between a Secure Shell and Winnie the Pooh). Thus, I needed a simple on/off switch with a code that basically runs itself.
For the cost and computing power, you would be hard-pressed to find a better solution for a project like mine than a Raspberry Pi. I arrived at what I thought was the best solution to everything when I happened upon PyImageSearch. This Adrian guy is simply incredible, and his tutorials/projects seemed tailor-made for my application.
Picture what I had in my mind as the perfect solution:
My daughter holds up a figurine and the computer runs a facial recognition algorithm on the figure. After recognition, the correct song would play. This would require almost no hardware (just the Pi, a camera and a speaker).
After several attempts, I came to the realization that this just wasn't working right for me. I could get the Pi to distinguish my face from my wife's face, but the figurines were another story.
I then thought that I could train my own neural network using TensorFlow to recognize the princesses. This guy also does some amazing work in Computer Vision, and even uses TensorFlow Lite for the Raspberry Pi.
After some success, but mostly failure and "OOM" (Out of Memory) errors on my somewhat outdated laptop, I realized that I was back to the drawing board. While looking for hardware options, I happened upon the RC522 RFID reader and I knew I had found the final answer. Below, I will list the supplies with links of what I absolutely needed, and then I will list the materials that we used for the stage. Most of these links are to Amazon. I could have bought cheaper from elsewhere, but the free shipping and quick delivery was too good to pass up.
Must Have's:
- Raspberry Pi 3 - I got this in a kit that came with a little case and power cable. I think a Pi Zero could handle this project easily. I got the Pi 3 when I thought I was going to do Object Detection/Neural Networks.
- Micro-SD Card - I got the 32 GB Sandisk Ultra Card, but something smaller/cheaper would have worked just fine.
- RFID RC522 Reader - Some people are getting these for less than $2 on Ebay. I chose to go with this one so I had a backup in case I messed something up.
- Momentary Push Button with LED - You could use any momentary button for this. I chose this one for the look and LED.
- MIFARE 1k tags - I was surprised at how affordable these are! Buy as many as you feel you need.
- USB Speaker - I got a new speaker because we don't have an extra one laying around. You could really use any speaker for this project.
- F-F Jumper Wires for GPIO Connections - If I wasn't a novice, I would probably have these laying around. You can also use M-F wires, and just solder to the RC522 leads. Your call.
- Solder
- Soldering Iron
- Figurines - We chose Disney, but you could really use anything (i.e. Harry Potter, Mario, P.J. Masks, etc)
- Song/Sound List - We have a bunch of Disney CD's, and we bought the few extra songs we needed. Make sure and keep the project legal.
I ended up spending about as much on this as we would have if we'd ordered one of the castles from Ebay, but if I'd been smarter initially, I probably could have cut the cost to about $50-$60 total (using Pi Zero, RC522 from Ebay, cheap push button, less tags, and speaker that I had laying around.) For the same price as the Disney castle, we got more figures to sing as well as a sense of accomplishment!
Optional:
- White box from local craft store
- Odds and ends pieces of balsa wood from local craft store
- Small piece of red velvet fabric
- Red thread
- Free Home Depot floor laminate samples (we got the ones with sticky bottoms)
- Drill
- Hot glue gun
- Sewing machine
- White paint
- Permanent Markers
- X-Acto knife
- Ruler
This is all very optional. The beauty of a project like this is that you get to shape it to your vision. We chose a stage setting for the characters, but you could do anything with it. You could just use a Pi case and paint it at no extra cost to you.
A Word of Encouragement
This was my very first project on a Raspberry Pi. Take it from someone who had never coded in Python or even heard of a GPIO Pin, this is doable! The amount of resources and help online is tremendous. If you have a question, I guarantee it has already been asked and answered! In a matter of weeks I had the thing running with the card reader, LED push button, and speaker. Did I mention that my coding skills are sad to say the least? If this idea of a singing RFID Reader appeals to you, then dig in and have some fun!
Step 1: Set Up Your Raspberry Pi
Getting Started
If you are a beginner to the Raspberry Pi (like I was), then you should certainly give this section a glance. I would love to take the credit for getting my Pi set up just how I wanted, but the truth is that I relied heavily on the blogs/videos/Instructables of others.
The links I have included below is just the tip of the proverbial iceberg. If I included all of the helpful resources that I skimmed, this would cease to be an Instructable and quickly become an online library of sorts. Our PC has a Windows 10 OS, so I will include links that I found helpful for my particular case, but I imagine setting up the Pi with a Unix-based OS would actually be easier. So don't dismay if you have a Mac OS or Linux, as you will catch right back up on the following steps.
Setting up SSH for a "headless Pi" environment
Check out this tutorial for how to "flash" Raspbian Buster OS onto your Mini SD Card. I recommend performing what he says from the top all the way to Step #2 exactly how he explained it. It worked well for me! The OpenCV stuff isn't very applicable to this project, so once you've expanded your filesystem, the tutorial isn't as useful for this project. Before you can "expand your filesystem", you need to either connect a monitor/keyboard to the Pi or work through a SSH or VNC connection.
I only have a laptop at home. As such, I don't have keyboards and computer mice just laying around, so needless to say I was looking for a "headless (no extra monitor)" solution to talking to my Pi. I found that the program Putty was very user friendly.
This guy helped me figure out the basics of working "headless" in a Windows 10 environment, and this guy was also helpful in getting a Secure Shell (SSH) enabled on my RPi. Finding the RPi's IP Address proved more difficult than I initially suspected, but by using raspberrypi.local in Putty, I was able to get things going. I also had to switch my Network settings from "Private" to "Public". I recommend enabling X11 forwarding as shown in the image above.
Like I mentioned above, Adrian at Pyimagesearch.com has a ton of helpful tutorials. He doesn't "officially support Windows", but he'll throw the occasional bone that will help even a novice like me figure things out. This link especially taught me how to perform "remote development" on the Pi.
The blog will talk you through handy things like SCP (I use WinSCP as shown in the image above), Secure Shell File Transfer Protocol (SFTP) in Sublime Text or Pycharm, and many other useful things. I also used VNC Viewer to work on my Pi (see the snapshot above), and I highly recommend using a combination of all the available tools to make your life as simple as possible.
WinSCP made file transfer from my laptop to the Pi much easier, and SFTP made it so I could develop in a proper IDE like Sublime Text on my PC and have the scripts automatically update in the RPi folders. Super handy!!
I used Sublime Text to for this project since it is free to try out with the option of purchasing a license. After seeing how useful and user-friendly this IDE is, I am most likely going to dish out the money to buy the license.
I'm in.... Now what?
Just like everyone else, I recommend performing the appropriate updates and package installations you need to perform. After you see the magical green "pi@raspberrypi:~ $" as shown in the first image above, you are ready to go! Type the following lines into the command prompt and be patient while the Pi does its thing.
sudo apt-get update sudo apt-get upgrade
Step 2: Prepping the RC522 Reader
Take a quick screen break
If you are reading this, then I will assume it means that you are communicating with your Pi and that things went smoothly in the previous steps. Now it's time to give your eyes a break from the monitor and put your hardware hat on! Do some jumping jacks too, because things are about to get really exciting.
Note: At this point, you really could plow on with the rest of the software and coding steps and come back to this step later, so I'll let you decide how this plays out. Sometimes, when you get on a roll, it is hard to stop!
Setting up the RFID-RC522 Reader
By the time you are finished reading this Instructable, you should be fairly familiar with this blog post from "Pi My Life Up". This tutorial was perhaps the keystone to my project. I used the GitHub Repository source code associated with the blog post, and used the wiring diagrams with slight modifications (more on that later).
Most of the RC522 readers come without the header pins soldered on, and you will have the option to solder the the straight or bent pins into the "plated through-hole's (PTH's)". I chose to use the bent pins since they gave me the best profile for my project. You can use a breadboard to check connections, but since we will be using the F/F electronic jumper wires, you won't need to solder anything on the RC522 or the Pi after you have the header pins soldered in place.
I used 0.062" (1.6 mm) lead-free, silver bearing electrical solder with a rosin core. My soldering iron is a medium duty 40-watt Weller. You can take a look here. I have only soldered a little bit in my lifetime, and I am pretty poor at it, which is why I only included a tiny photo of the connections I made. I should just own it, but I guess you could say that I'm afraid of the soldering police. Once you have your header pins on, you are ready to start on your button.
Step 3: Getting the Button Ready
Button, button, who's got the button?
Like I mentioned, I needed this project to be completely self-contained. I needed to make it so that my children could use it without any knowledge of how to program, and so I decided that a simple Momentary On/Off button would suffice. Pulling the power to the Raspberry pi without executing:
sudo shutdown -h now
has the potential to ruin your SD Card. Don't do it! Therefore, I needed a button that could safely turn the Raspberry Pi on and off again. Once again, you will need to take out your soldering iron and use F/F jumper cables. On your wire, snip off an inch or two and strip back enough of the jumper wire to solder to the leads on the button. The button I used had a built in 12v LED that didn't require a resistor, so I soldered the jumper wires onto the correct leads for the button. It was a bit tricky for me to figure out exactly which button leads to use, so I recommend checking out 3:53 to about 5:12 of this video.
I don't have the helping hands with the little alligator clips, so I kindly asked my wife to use her "helping hands" by holding the wires in the correct position while I made the connection.
Later, I will discuss the software and scripts behind using the on/off button.
Step 4: Putting It All Together
Wiring it up
The first image above shows a Fritzing diagram of how to wire all of the components. The Fritzing diagram shows the button and LED as separate components. Mine was together, but the wiring is the same.
I only used a soldered connection on the button leads. The rest of the connections were purely mechanical connections made by the F/F jumper wires. The diagram is the same as the one on the "Pi My Life Up" blog post, except the GND pin that they placed on RPi GPIO Pin 6 was moved to Pin 30, since Pin 6 needed to be reserved for the button. I will give a quick summary of which header pin on the RC522 connects to the Pi:
- SDA goes to Pin 24
- SCK goes to Pin 23
- MOSI goes to Pin 19
- MISO goes to Pin 21
- GND goes to pin 30
- RST goes to Pin 22
- 3.3v goes to Pin 1
The image above that shows the GPIO Pin labels is from the Raspberry Pi website. I referred back to this image to help me know which pin is which. To attach the LED Button to the Pi, you will need to make the following connections:
- LED anode (+) goes to Pin 8
- LED cathode (-) goes any GND (I used Pin 39)
- Momentary Button lead 1 goes to Pin 5
- Momentary Button lead 2 goes to Pin 6
Note that the Button is simply closing a circuit, so it does not matter which lead goes to which pin. Just place one on pin 5 and one on pin 6, and you should be good to go. Your button won't be working yet, but you should see the LED light up once you plug your RPi in.
It's time we give some life to the project!
Step 5: Setting Up the Software - RC522
Back to the Computer
At this point, you should have the following things completed:
- Set up some method of talking with the Pi. I recommend VNC or SSH, but you can also use a monitor and keyboard.
- Solder RC522 header pins.
- Solder jumper leads to LED Button.
- Connect all of the F/F jumper wires to the RC522, button, and Pi according to the diagram in the previous step.
This section is really going to be the meat of this project, so hang on tight. Please note that the scripts we will be using for this section have been added above.
Getting the RFID Reader working
The RFID reader that I ordered came with a MIFARE 1k key fob and a card as well, so before I even ordered the extra fobs, I was ready to test it out. I also ordered some of the NTAG203 micro RFID Transponders from adafruit (as in the picture above), but we decided that for the same price, we could get a whole bunch more of the Classic MIFARE 1k tags. I am excited to try using the NTAG203 transponders on some other project though.
Like I mentioned before, this blog post is what I followed to get things set up on my Raspberry Pi. I will give you a rundown of his steps, but I recommend taking a look at his article and video as well in case I miss something.
In the Raspberry Pi terminal, configure your RPi to be able to use the Serial Peripheral Interface (SPI) by first running this command:
sudo raspi-config
Using the arrow keys, select "5 Interfacing Options" and press Enter.
Then, arrow down to "P4 SPI" and hit Enter. When it asks "Would you like the SPI interface to be enabled?" select yes and hit enter. At this point reboot your Pi:
sudo reboot
Install the packages you will need to interface with the RFID reader:
sudo apt-get install python3-dev python3-pip
In order to handle the signals from the RC522, we will need to use the spidev python library:
sudo pip3 install spidev
The MFRC522 library comes from a Github. This library is what we will use to read and write data to the RFID tags. To download the library, enter the following command:
sudo pip3 install mfrc522
Create a folder in your Pi directory where you will keep all of the scripts and music files for this project. I called it "pi-rfid" just like the blog we are following along with:
mkdir ~/pi-rfid
Now we are going to create a script that makes use of the MFRC522 library to write data to 13.56 MHz RFID tags. To get to our new folder, we will change directory and then create a Python script called "Write.py".
cd ~/pi-rfid sudo nano Write.py
Inside of the nano editor, enter the following code:
#!()
The SimpleMFRC522() library is the way that we will talk with the reader. The try-finally block will handle exceptions and make sure that the last thing we do is cleanup the GPIO pins. To save the code in the nano editor, we will first press CTRL + X and then press Y and hit Enter.
Inside of the pi-rfid folder, run the Write.py script:
sudo python3 Write.py
Type the text you'd like to store in the RFID Tag, hit Enter, and place the tag on top of the RC522 reader. It should say "Written" if it was able to write correctly. As an example in the images above, I wrote Aladdin to the tag.
What good is an RFID reader though if it can only write? Let's create a Read.py file inside of the same /pi-rfid folder:
cd ~/pi-rfid sudo nano Read.py
If you want to check and make sure that your RC522 reader is working correctly, enter the following into the nano editor:
#!/usr/bin/env python import RPi.GPIO as GPIO from mfrc522 import SimpleMFRC522 reader = SimpleMFRC522() try: id, text = reader.read() print(id) print(text) finally: GPIO.cleanup()
Press CTRL + X, then press Y, and hit Enter to save the file. We will obviously need to modify this code to associate the text that has been written to the tag with a song, but for now let's run the script as is to make sure it is working. In the /pi-rfid folder, type:
sudo python3 Read.py
You will see that as you place your tag on the RC522 reader the text and id associated with the tag will be output. This is pretty amazing!
Getting the USB Speaker Ready
Using the USB Speaker is fairly straightforward. Even so, I had a few issues getting the RPi to default to the USB speaker. What I had to do is:
- Plug in the speaker.
- Make the Raspberry Pi look at "card #1" for the audio. In other words, change the default audio from the built-in jack and HDMI ports to the USB.
To make this happen, run:
sudo nano /usr/share/alsa/alsa.conf
Arrow down for a while, and then change
defaults.ctl.card 0 defaults.pcm.card 0
to
defaults.ctl.card 1 defaults.pcm.card 1
Hit CTRL + X, Press Y, and hit Enter. Now when you open up VNC Viewer, if you right click on the audio symbol in the upper right, you should see that the USB audio is selected by default as shown in the image above.
Creating the Playlist
For our song list, we only used songs that we had purchased on CD's. We made sure we were only using very small portions of each song. I tried to e-mail Disney about my specific project and whether it would be allowed or not, but I never got a response. Since this is not a commercial product and we are using songs that we purchased for our kid's enjoyment, we figured we were okay.
We used Audacity to snip the songs to the length we wanted and exported the songs as .wav files with the characters' names. For example, Jasmine's song clip was named Jasmine.wav and placed into the pi-rfid folder using WinSCP. This means that the filepath to this example file would be /home/pi/pi-rfid/Jasmine.wav. As soon as you have your list of songs and have written the correct text to all of the tags using the Write.py script, you only have one more step to go until your tags are singing!
Editing the Read.py Script
Before, the Read.py script could only recognize the tag. Now we are going to make it so that it recognizes the tag, loads the song associated with the tag, and plays it on the speaker.
I had some criteria that I used to design the code:
- I didn't want an exception to end the script, so I added some lines to skip misreads and only exit upon a "keyboard interrupt."
- I wanted the same character to be able to finish their song without endlessly restarting if my daughter chose to leave it on the reader.
- Any time a new character is placed on the reader, the song changes.
I'll walk you through the script line by line. At this point, I would love to also ask for advice. This is my first Instructable, first time ever coding in Python, first time setting up RFID tech, and first time using a Raspberry Pi. If any of you can think of a slicker way to do this, I would love the feedback as a comment. I'm new to the concept of try-except loops, so give me a holler if I am off my rocker with this code.
I would recommend using SFTP in Sublime Text to edit the script if you are comfortable with it.
First, open a Raspberry Pi terminal and type:
cd ~/pi-rfid sudo nano Read.py
If you are using SFTP, simply open up the Read.py file in the Raspberry Pi "server", and all changes you save in Sublime Text will automatically be made on the Pi.
Once you are in (either SFTP or nano editor in the terminal), type:
#!/usr/bin/env python
This helps the Pi know to read this file in Python.
Import all of the necessary packages:
import time from time import sleep import pygame import sys import RPi.GPIO as GPIO from mfrc522 import SimpleMFRC522 import re from subprocess import Popen
These libraries and functions will allow us to perform all the tasks required. Now type:
reader = SimpleMFRC522()
This is the same command as before. Now print some welcome messages:
print("Looking for cards") print("Press Ctrl-C to stop.")
Start a variable that you can use to check for repeat readings:
textloop = "Start"
The rest of the code will look like the following:
try: while True: print("Hold a tag near the reader") id, text = reader.read() print("ID: %s\nText: %s" % (id, text)) if text == textloop and pygame.mixer.music.get_busy() == True: pass else: character = " ".join(re.findall("[a-zA-Z]+", text)) filepath = ("/home/pi/pi-rfid/%s.wav"%character) # Just added the following three lines to prevent errors. # Code should skip the bad reading and continue to look # for tags if filepath == "/home/pi/pi-rfid/.wav": continue else: print("filepath is: %s"%filepath) pygame.mixer.init() pygame.mixer.music.load(filepath) # pygame.mixer.music.load("%s.wav"%(character)) pygame.mixer.music.set_volume(1.0) pygame.mixer.music.play() textloop = text sleep(2) except Exception: GPIO.cleanup() pass except KeyboardInterrupt: GPIO.cleanup() raise
The try-except loop is used to catch the exceptions and skip all of them except for a "keyboard interrupt". Inside of the try block, we are creating a while loop that is constantly reading. If a tag is read, the text is converted to a character name only (the text had a bunch of blanks after the name, so I took only the letters from the name).
This character name is loaded into pygame.mixer.music and the song is played. The textloop function is then updated to the current text, and the loop continues after sleeping for 2 seconds.
Note that the next time through, the program will essentially ignore the readings if two things are true: 1) text = textloop, and 2) there is a song playing. This makes sure that a song for the same character doesn't get read over and over and start the song repeatedly without letting it finish.
Also note that if there is a misread, and the text is blank, the code will skip over that case. This means that unless the program is interrupted with either the power putton or "keyboard interrupt", it will just keep on going.
There are multiple ways to do this task. I read one Instructable accomplishing a very similar task that was helpful, but this way made more sense to me. I have attached the Write.py and Read.py files I am using in this project, so feel free to download them and place them in your /home/pi/pi-rfid folder on your Raspberry Pi along with all of your songs.
One Last Quick Note
For whatever reason, I was having some trouble at one point with the MFRC522 code, and so I actually had to edit lines 140-143 of the MFRC522.py file you pip installed from:
if gpioMode is None: GPIO.setmode(pin_mode) else: pin_mode = gpioMode
to simply:
GPIO.setmode(GPIO.BOARD)
I only had to do this after I started using the button though. Maybe it has something to do with the way that the pins are shared. This worked for me. To get to the MFRC522.py file, you need to sudo nano /usr/local/lib/python3.7/dist-packages/mfrc522/MFRC522.py. Or you can find this file and edit it through SFTP.
Step 6: Setting Up the LED Button
A Timely Word of Caution
Whenever you fiddle with scripts at boot, you need to be very careful. Best case, everything works out great. Worst case, you accidentally mess up a critical bootup protocol or add a script that will cripple your Pi. So to make sure you aren't headed back to Step 1 to flash Raspbian anew on your Micro SD, let's take a look at a safe on/off protocol you can use. I didn't have any issues using the process described below.
Check out these three videos that I watched to help me get started on the button:
- Video 1 - Adding a button
- Video 2 - Adding the LED to the button
- Video 3 - Mounting the button inside of a RPi case - I only watched this video because it shows the exact wiring for the same button I bought.
These videos and the text file he provides were critical to getting my button working right. Although we are not using RetroPie, all of the startup protocols and procedures will still work just fine with your Raspbian OS.
Since this step is more like a to-do list, just follow step-by-step, and everything should run smoothly for you.
"And at Last I See the Light"
To keep your LED glowing when the Pi is on, first, you need to make sure the serial can access the login shell. As best as I can tell (proficient Pi users correct me), this makes it so power is provided to the pin that your LED is attached to.
In a Raspberry Pi terminal, enter:
sudo raspi-config
Once you are in, arrow to "5 Interfacing Options" and press Enter. Next, arrow to "P6 Serial" and once again hit Enter.Arrow to "Yes" and hit Enter one more time. Now reboot your Pi:
sudo reboot
And when your Pi reboots, you should notice that your LED will light up and remain lit until it is once again shut off.
On Again Off Again
The text file that I followed is in the comments section of the first of the three YouTube Videos that I mentioned above. I will outline the procedure here as well.
First, even though we have some of the packages we need, let's just make sure everything was installed properly once again. If you already have the package installed, the Pi will just tell you. Open a terminal and type:
sudo apt-get install python-dev sudo apt-get install python3-dev sudo apt-get install gcc sudo apt-get install python-pip
Now, install the GPIO Package that you will need to use to get the button working correctly:
wget
To extract all of the files within the .tar.gz package execute:
sudo tar -zxvf RPi.GPIO-0.5.11.tar.gz
These last few steps should have created a new folder titled "RPI.GPIO-0.5.11". Enter into this folder:
cd RPi.GPIO-0.5.11
Install the scripts inside the folder:
sudo python setup.py install sudo python3 setup.py install
Make a directory called "scripts" in your Pi path to keep your shutdown script:
mkdir /home/pi/scripts
Use either SFTP or nano editor to create a new Python script called "shutdown.py" inside of the "/home/pi/scripts/" folder. Using nano in your terminal would look like this:
sudo nano /home/pi/scripts/shutdown.py
Paste the following scripts inside of the nano editor by copying and then right clicking inside of nano. The scripts has also been provided above, so feel free to snag it.
#!/usr/bin/python import RPi.GPIO as GPIO import time import subprocess # we will use the pin numbering to match the pins on the Pi, instead of the # GPIO pin outs (makes it easier to keep track of things) GPIO.setmode(GPIO.BOARD) # use the same pin that is used for the reset button (one button to rule them all!) GPIO.setup(5, GPIO.IN, pull_up_down = GPIO.PUD_UP) oldButtonState1 = True while True: #grab the current button state buttonState1 = GPIO.input(5) # check to see if button has been pushed if buttonState1 != oldButtonState1 and buttonState1 == False: subprocess.call("shutdown -h now", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) oldButtonState1 = buttonState1 time.sleep(.1)
Press CTRL + X and then Y to save this script. Connecting Pins 5 and 6 on the Pi enables the bootup, then this program watches and waits for a new button press on those same pins. Once connected, the "sudo shutdown -h now" command is executed, and a safe shutdown follows.
Reboot the Pi:
sudo reboot
This last step will ensure that the shutdown.py script runs at startup and continues running until the button is pressed. We will use the edit the /etc/rc.local to ensure the script runs at bootup:
sudo nano /etc/rc.local
Use your arrow keys to go down to right before the end where it says "Exit 0". Add the following command right before "Exit 0" and make sure you don't miss the ampersand (&) at the end of the command. This "&" will make sure that even if the program ends up in some wicked, endless loop, the Pi will still continue its bootup protocols.
sudo python /home/pi/scripts/shutdown.py &
Press CTRL + X and then Y to save the file. Issue one last:
sudo shutdown -h now
As soon as your Pi is completely shutdown (the green light is done blinking), go ahead and try out your button.
Make sure everyone using the button understands that even though the Pi boots and turns off very quickly, it is still a computer, and takes some time to go from on to off. We usually wait 10 seconds or so between button pushes to let it do its thing.
You should be able to taste the victory at this point! One last tech step, and your child's dreams will come true!
Attachments
Step 7: Enabling RC522 at Startup
The very last step we need to take is to make the RC522 "Read.py" script run on startup. I tried all sorts of different ways, but I finally had success using a Crontab. The simplest method of changing the rc.local file like we did in the last step just didn't work for me, and I'm not sure why.
Maybe some Python/Pi whiz out there can tell me what I did wrong, but Systemd, init.d tab, bashrc, and several other methods gave me some grief. Once I got it working with Crontab, I didn't dare go back to see if I could make the other methods work, but maybe on my next project, I will try them again.
I recommend reading this article, which describes the process of using a crontab to run a script at bootup.
We are going to need the Pi's id later, so let's find out what your id is now. In a terminal, type:
id [user_name]
My username is Pi, so I typed the following as shown in the image above:
id Pi
Your Pi's id is listed as uid. Mine was 1000, so I noted that down for later.
Next, you need to open crontab. In a terminal, execute:
sudo crontab -e
I wish that I had written down what I saw after issuing this command, but I want to say that I was presented with three options (one of them is default I believe). I just went with the default. If you aren't seeing those three options, then just arrow to the bottom of the file as shown in the image above.
For some reason, my USB audio had a bit of trouble running through a crontab, so I had to find a workaround. I'm not sure if this is the best way to get it working, but once again it worked great for me.
At the very bottom of the crontab file enter:
XDG_RUNTIME_DIR=/run/user/user_id
Replace "user_id" with your id. You can see my example in the images above.
Finally, enable your script in the Crontab by placing the following line at the very bottom of the file:
@reboot sudo python3 /home/pi/pi-rfid/Read.py > /home/pi/pi-rfid/log.txt
This will not only ensure that your file runs at bootup, but will also write all of the outputs to a .txt file inside of the "pi-rifd" folder. This is a great way to troubleshoot your program. Press CTRL + X and then Y to save the crontab, and you should be finished with all of the programming required for this project.
Use your button to turn off your Pi and then after waiting a sufficient amount of time, go ahead and hit the button one more time to turn it on. When you place the tag onto the reader, you should hear your saved sound files. If you get to this point and nothing happens Do Not Panic!! I ran into several errors while working on this project. I tried to include all of the workarounds that helped me get things going, but there is not way that I could catch every speaker/tag configuration that you could imagine. I have found that with a little patience and a lot of troubleshooting, miracles can happen.
One tool that I found helpful to see if my scripts were indeed running at startup was by opening a terminal and typing:
sudo ps -ax | grep python
This command searches for processes including "Python" in the name. You can look in the image above to see what scripts should show up. It also includes their Process id (pid), so you can end them using:
sudo kill <PID>
For example, if you wanted to end a process with a pid of 437, you would type:
sudo kill 437
I have found that the last process that includes "grep --color=auto python" is always running regardless of what we've done, so don't kill that process. Maybe someone could tell me what it is/does? The CTRL + C keyboard interrupt doesn't work on these background processes sent from the rc.local file or the crontab, so get familiar with killing the scripts in the terminal.
I found that the easiest way to troubleshoot was to kill the scripts and manually run the "Write.py" and "Read.py" scripts inside of the terminal. This allows you to use SFTP or nano to tweak the files and run them at a faster pace than switching the Pi on and off over and over again.
Regardless, if you do run into an issue or two, don't give up. The internet is a wonderful thing, and as I have said, if you have a question, it has probably already been asked.
I will also do my best to help with those of you interested in recreating a project like mine, so feel free to comment problems you run into, and I will do my best to help you figure out a solution.
Recapping:
We now have an RFID reader hooked up to a Pi along with an on/off button. We have written data to the tags and our handy-dandy "Read.py" script is constantly looking for those tags and playing the song corresponding to each tag as it is pressed on the RC522. Both the on/off button and the RFID reader scripts are running on startup, and as the on/off button is pressed, it shuts down and turns on the Pi.
As I mentioned, if you have tech questions or suggestions, please put them below. I am a complete beginner and appreciate criticism. I hope by the number of links that I dropped you can see that I needed about every website on the internet to figure this out. Don't hesitate to let me know what you would change or do differently. Since I received two RFID readers with my purchase, I will most likely try a variation of this project later. If you catch errors in any of the steps, let me know as well so I can correct them.
Passing the Baton:
I will now let my awesome wife explain the rest of this project. I was happy to make it this far, but my wife is amazing at decorating and embellishing. She will talk you through mounting the Pi, speaker, button, and RC522 reader in a safe location that is kid-friendly. Everything from here on out is optional for the functionality of the device, but we hope that by explaining our process, you might be able to replicate it should you so choose.
Step 8: Set the Stage!
For the stage, we found a white photo box at Michael's that worked perfectly. Whatever stage you decide, make sure it passes these qualifications:
- It's big enough to hold/cover/protect your equipment
- It's strong enough to be cut into and glued on
- It isn't too thick for the RFID tags to be read through it
We took the box with us to Home Depot and grabbed our favorite laminate stick-on tiles from their free samples. You can fit those squares to your stage, cut a circle a little bit larger than the RFID tags and call it good! My husband wanted it to look a bit more natural so we sliced the tiles into 1 inch wide sections and arranged the rows slightly different from each other, adding smaller squares in the gaps to get that "wood floor effect". Align them, but don't glue or stick anything permanently yet.
Step 9: Get Everything Into Frame...
Now you build the frame for your stage curtains.
We used some balsa wood dowels we found at Michael's. We used one small 24" square dowel, one small (smaller than the square ones) 24" round dowel and one thin 24" flat board.
- Cut the square dowel into two 12" rods
- Cut the round dowel into one 12" rod
- Drill one hole the size of the round dowel into both square dowels 1/2" down each rod
- Drill two holes into the stage floor on either side of the front of your stage for the square dowels to fit through- use an exact-o knife to cut the hole into a square shape
- Insert each end of the round dowel into the two square dowels
- Insert the square dowels into the holes in the stage floor and check to make sure all dowels are straight and fit properly into each hole
- Take the flat balsa board and cut it to your preferred size (we wanted it to cover the top of the curtains)
- Paint the dowels and board in your preferred color
- After the paint has dried, check for proper fitting, sand down any areas needed
- Insert your frame into the stage, glue it to the inside of the box lid, securely glue down the faux wood flooring, set the flat board to the side for later
Step 10: Draw the Curtains!
For the curtains, I found a half a yard of red velvet fabric at Walmart for $2 that was MORE THAN ENOUGH.
- Measure the width and length you want for your curtain panels and cut the fabric, adding an extra inch to your measurements (I had 7 x 9 in panels so I cut two 8 x 10 inch sections)
- Use a needle and thread or a sewing machine to hem up the two sides and bottom edge of each panel. The top edge should have AT LEAST a 1/2" hem with the sides left open or your round dowel won't fit and you will have done it all for naught...
- Cut two thinner pieces of fabric (about 1" by 4") and hem the sides to make ties to hold the curtains back
- Insert the curtain panels onto the rod, check to make sure it fits the way you want to, and glue the dowel into place
- Take the flat board and glue it to the two square dowels on either side, covering the dowel holding the curtains
Voila! You now have beautiful over-the-top curtains to fit your classy stage!
Step 11: Making the Cut
Now we're getting to installing the hardware inside the box. You'll need to cut out and reinforce (with glue) openings for these things:
- The speaker
- The Pi plug
- The LED on/off button
This depends highly on the shape/size of your stage so make sure to measure appropriate heights and spacing* needed to insert items.
*The Pi needs to be able to plug into the speaker, which needs to reach the outside of the box. The Pi also needs to stay connected to the on/off button, which needs to be part-way out of the box and reach the RFID tag reader, which needs to reach the bottom side of the center of the stage.
Then, it's still just a cardboard box with a weak structure so you've got to toughen it up a bit. We took our extra balsa wood pieces and cut them to reinforce certain areas:
- The LED button needs something to push against to work. Add a big clump of glue over the soldered connections. Place balsa wood directly behind it with a notch cut so the cords could get through. Finally, reinforce that piece of wood with two pieces of wood perpendicular to it
- The Pi needs wood behind it to be able to plug in without being pushed further into the box. Glue a piece of wood directly behind it, with reinforcing pieces perpendicular to that piece
The idea of reinforcing the box is to be sure that you won't completely destroy your stage trying to push the buttons or plug the Pi in. Do what you need to ensure that when you push, the button/plug are able to be pressed/plugged in and it's not bending the box to do so.
The RFID tag reader will remain loose until you have everything secure, then you can use a few dots of hot glue to stick it to the bottom side of your center stage circle.
Step 12: The Final Scene
Take the RFID's and attach them to the bottom your figurines (we used hot glue).
Add some LED lights, if desired (we just attached some $3 battery operated ones from Walmart).
Give those characters their moment in the spotlight and enjoy!
Our daughter has spent hours playing with this stage (she's 16 months old) and we have had friends from ages 2-26 also play with and enjoy it!
As beginners, we understand how much help and research was required to make this project possible. If you have any questions, leave a comment and we'll do our best to help you! We'd love to hear feedback about improving the project and success stories. Thanks for taking this journey with us!
THIS PROJECT WILL BRING JOY TO ANY AGE AT ANY STAGE!
Runner Up in the
Raspberry Pi Contest 2020
Be the First to Share
Recommendations
5 Comments
2 years ago
This is so cool, I can think of a lot of kids that would love this!
2 years ago
Wow, this is very funny. I can imagine how kids would react to this :)
Reply 2 years ago
Agreed! Haha! Thank you. Our daughter loves it! It made the time tinkering worth it! :)
2 years ago
Fun idea! Looks like kids would have a lot of fun playing with this :)
Reply 2 years ago
Thank you! | https://www.instructables.com/The-Pi-Must-Go-On-Pi-powered-RFID-Musical-Box/ | CC-MAIN-2022-40 | en | refinedweb |
13. Solvers¶
A constraint-based reconstruction and analysis model for biological systems is actually just an application of a class of discrete optimization problems typically solved with linear, mixed integer or quadratic programming techniques. Cobrapy does not implement any algorithm to find solutions to such problems but rather creates.
[1]:
from cobra.io import load_model model = load_model('textbook')
[2]:
model.solver = 'glpk' # or if you have cplex installed model.solver = 'cplex'
For information on how to configure and tune the solver, please see the documentation for optlang project and note that
model.solver is simply an optlang object of class
Model.
[3]:
type(model.solver)
[3]:
optlang.cplex_interface.Model
13.1. Internal solver interfaces¶
Cobrapy also contains its own solver interfaces but these are now deprecated and will be removed completely in the near future. For documentation of how to use these, please refer to older documentation. | https://cobrapy.readthedocs.io/en/latest/solvers.html | CC-MAIN-2022-40 | en | refinedweb |
Introduction:
Let's assume that you need to share files from your AWS S3 bucket(private) without providing AWS access to a user. How would you do that? Well, we have pre-signed URLs that are shortly lived, which can be shared and used to access the content shared.
Pre-signed URLs:
What is a pre-signed URL?
A pre-signed URL gives you **temporary *. Same applies for download as well.
We will see how to generate pre-signed URLs for S3 bucket programmatically using python and boto3.
When we say, the creator of the presigned URL should have access what does it mean?
It means, the URL generator should have a aws access with right credentials(may be in a lambda)and to achieve this, we could expose a REST API to the customer to request for a URL based on the upload/download operation. This ensures the user need not be provided with the AWS credentials.
The pre-signed URL will expire based on the expiry value configured while generating it. We shall look at it shortly.
A high-level design:
In the above design, a user requests the URL from the UI(could be a web portal) via a REST API based on the operation required. This hits the API gateway which triggers a lambda. The lambda executes the code to generate the pre-signed URL for the requested S3 bucket and key location.
The most prevalent operations are but not limited to upload/download objects to and from S3 buckets which are performed using
put_object
get_ object.
Let’s look at the code which goes in the lambda
1. Generating pre-signed URL for download
import boto3 from botocore.exceptions import ClientError from botocore.config import Config import requests def generate_presigned_url(bucket_name, object_key, expiry=3600): client = boto3.client("s3",region_name=REGION_NAME, aws_access_key_id=ACCESS_KEY, aws_secret_access_key="SECRET_KEY", aws_session_token="SESSION_TOKEN") try: response = client.generate_presigned_url('get_object', Params={'Bucket': bucket_name,'Key': object_key}, ExpiresIn=expiry) print(response) except ClientError as e: print(e)
Please note that the awssession token is an optional parameter. This may be required if your organization is providing credentials that expire. If you are using your personal account and do not have any configuration for session expiry they may not be required.
The ‘get_object’ specifies the URL is being generated for a download operation. The bucket name and object should be passed as part of the params dictionary.
2. Generating pre-signed URL for upload
We use the same create presigned url with put_object method to create a presigned URL for uploading a file.
def create_presigned_upload_url('put_object', Params= {'Bucket': "BUCKET_NAME", "Key":"OBJECT_KEY"}, ExpiresIn=3600) except ClientError as e: return None # The response contains the presigned URL and required fields return response
Output: []()
This is a sample pre-signed URL output.
Try accessing the presigned URL either through browser or programmatically.
Signature Invalid
Oops:
We have hit a roadblock. The URL throws a signature does not match error.
Message: The request signature we calculated does not match the signature you provided. Check your key and signing method.
Photo by Jonathan Farber on Unsplash
Why?
The reason for this is, it’s not recommended to use generate_presigned_url with put_object parameter to generate pre-signed URLs for uploading files though it wouldn’t throw any error while generating.
I had deliberately used it here because I had run into this issue and wanted to share this learning.
Please refer to this github link for more information about this.
Let’s move to the recommended solution. While generating URLs for upload, it’s always better to use generate_presigned_post method as this includes the proper header information and other parameters required for the URL.
3. Pre-signed URL post
def create_presigned_post(bucket_name=None, object_name=None, fields=None, conditions=None,_post(Bucket="BUCKET_NAME", Key="OBJECT_PATH", ExpiresIn=3600) except ClientError as e: return None # The response contains the presigned URL and required fields return response resp = create_presigned_post() # Extract the URL and other fields from the response post_url = resp['url'] data = resp['fields'] key = data['key'] # Upload the file using requests module response = requests.post(url=post_url, data=data, files={'file': open(r'C:\Users\212757215\Desktop\Dockerfile.txt', 'rb')}) print(response)
Output: A sample presigned URL []()
Caveats:
If the server-side encryption of S3 is set to KMS, you may need to set the signature version to v4 while creating the boto3 object.
Boto3 by default supports signature v4. However for S3, the objects should explicitly set the signature version to v4 in case of KMS.
Not setting the signature to v4 may result in 403 error while trying to access the URL though you have the right permissions.
Setting signature version explicitly:
s3_client=boto3.client("s3",config=Config(signature_version='s3v4'))
Summary:
Pre-signed URLs could be used to provide temporary access to users without providing aws access to users
URLs could be generated to upload and download files | https://plainenglish.io/blog/access-files-from-aws-s3-using-pre-signed-urls-in-python | CC-MAIN-2022-40 | en | refinedweb |
The GY-302 from CJMCU is an I2C board that allows you to measure the amount of light using the BH1750 photodetector. We will use the measured brightness to construct an ambient lighting quality indicator based on European Standard EN 12464-1. It is very easy to integrate the sensor GY-302 in an Arduino project or ESP8266 using the library developed by Christopher Laws. It is available on this GitHub page. The GY-302 costs less than one euro.
Add the BH1750 library to the Arduino IDE
Download the ZIP archive of the library from GitHub without decompressing it.
From the Arduino IDE, go to the menu and then Add the .ZIP library
Circuit
The GY-302 module communicates via the I2C bus with the Arduino or ESP8266. The wiring is very simple.
On an Arduino, connect the SDA pin to pin A4 and SCL on pin A5. On the ESP8266 Wemos d1 mini, SDA is in D2 and SCL in D1. For other ESP8266 boards, read this article.
It is possible to manually assign the I2C bus pins using the Wire.h library. At the beginning of the program, the library is declared
#include <Wire.h>
Then in the setup ()
Wire.begin (SDA pin, SCL pin);
Here, all I2C devices are diverted to the new pins.
How to measure the quality of lighting?
In Europe, EN 12464-1 (summary in English and French) defines the minimum lighting levels according to the occupied workplace.
Source: LUX Lighting Review No. 228 May / June 2004 available online.
In a dwelling, there is no specific standard (to my knowledge). Keria, a lighting specialist, has published some common light intensities on his site. Here are excerpts of the recommendations for some rooms of the house (or to obtain a certain atmosphere: intimate, convivial, game, work).
Source :
On the basis of these different data, I constructed a 5-level indicator (too low, low, ideal, high, too high). You can adjust the values according to your habits and needs.
#define _TOOLOW 25 #define _LOW 50 #define _HIGH 500 #define _TOOHIGH 750 #define LEVEL_TOOLOW "Too low" #define LEVEL_LOW "Low" #define LEVEL_OPTIMAL "Ideal" #define LEVEL_HIGH "High" #define LEVEL_TOOHIGH "Too high"
How to use the BH1750 library
The BH1750 library is used very similar to the BME280 library (or BMP180). At the beginning of the program, the library is called and the lightMeter object is initialized by indicating the address of the BH1750 on the I2C bus. By default the BH1750 is located at address 0x23. If you have a conflict with another component, you can assign the address 0x5C by feeding the addr pin to 3.3V.
#include <BH1750.h> BH1750 lightMeter (0x23);
The library supports the 6 modes of operation of the sensor. The sensor can measure continuous brightness
- BH1750_CONTINUOUS_LOW_RES_MODE: Fast measurement (16ms) at low resolution (4 lux of precision)
- BH1750_CONTINUOUS_HIGH_RES_MODE (default mode): High resolution (1 lux accuracy). The measurement time is 120ms
- BH1750_CONTINUOUS_HIGH_RES_MODE_2: Very high resolution (0.5 lux accuracy). Measurement time 120ms
These three other modes allow to realize a single measurement (One_Time) and then to put the sensor in energy saving. Accuracy and measurement time are identical.
- BH1750_ONE_TIME_LOW_RES_MODE
- BH1750_ONE_TIME_HIGH_RES_MODE
- BH1750_ONE_TIME_HIGH_RES_MODE_2
In the setup, the lightMeter object is started by using the function begin (uint8_t mode) by passing it as parameter the measurement mode. The configure (uint8_t mode) function is (called by begin) is also exposed.
void setup(){ lightMeter.begin(BH1750_CONTINUOUS_HIGH_RES_MODE); }
The readLightLevel method reads the light intensity measured by the BH1750 at any time. The function returns the measurement directly to Lux.
uint16_t lux = lightMeter.readLightLevel();
Arduino Code compatible ESP8266
Here is the complete code of the application you just need to upload. It works either on Arduino, ESP8266 or ESP32.
</co/* Mesurer la qualité d'éclairage de votre habitation à l'aide d'un capteur GY-302 (BH1750) Measure the lighting quality of your home with a GY-30 (BH1750) sensor Code basé sur la librairie Arduino de Christopher Laws disponible sur GitHub Based on the Arduino library of Christopher Laws abailable on GitHub Connection: VCC -> 5V (3V3 on Arduino Due, Zero, MKR1000, etc) GND -> GND SCL -> SCL (A5 on Arduino Uno, Leonardo, etc or 21 on Mega and Due) SDA -> SDA (A4 on Arduino Uno, Leonardo, etc or 20 on Mega and Due) ADD -> GND or VCC (see below) ADD pin uses to set sensor I2C address. If it has voltage greater or equal to 0.7VCC voltage (as example, you've connected it to VCC) - sensor address will be 0x5C. In other case (if ADD voltage less than 0.7 * VCC) - sensor address will be 0x23 (by default). - */ #include <Wire.h> #include <BH1750.h> /* * Niveau d'éclairage définit à partir de la norme EN 12464-1 * Lighting level defined from the standard EN 12464-1 * */ #define _TOOLOW 25 #define _LOW 50 #define _HIGH 500 #define _TOOHIGH 750 #define LEVEL_TOOLOW "Trop bas" // Too low #define LEVEL_LOW "Bas" // Low #define LEVEL_OPTIMAL "Idéal" // Ideal #define LEVEL_HIGH "Elevé" // High #define LEVEL_TOOHIGH "Trop élevé" // Too High uint16_t lux = 250; int luxLevel = 3; String luxMessage = LEVEL_OPTIMAL; /*. */ BH1750 lightMeter(0x23); void setup(){ Serial.begin(115200); /* Each mode, has three different precisions: - Low Resolution Mode - (4 lx precision, 16ms measurment time) - High Resolution Mode - (1 lx precision, 120ms measurment time) - High Resolution Mode 2 - (0.5 lx precision, 120ms measurment time) Full mode list: */ lightMeter.begin(BH1750_CONTINUOUS_HIGH_RES_MODE); Serial.println(F("BH1750 Test")); } void loop() { lux = lightMeter.readLightLevel(); if ( lux <= _TOOLOW ) { luxLevel = 1; luxMessage = LEVEL_TOOLOW; } else if ( lux > _TOOLOW && lux <= _LOW ) { luxLevel = 2; luxMessage = LEVEL_LOW; } else if ( lux > _LOW && lux <= _HIGH ) { luxLevel = 3; luxMessage = LEVEL_OPTIMAL; } else if ( lux > _HIGH && lux < _TOOHIGH ) { luxLevel = 4; luxMessage = LEVEL_HIGH; } else { luxLevel = 5; luxMessage = LEVEL_TOOHIGH; } Serial.print("Light: "); Serial.print(lux); Serial.print(" lx, level: "); Serial.print(luxLevel); Serial.print(" , quality: "); Serial.println(luxMessage); delay(1000); }
Here. One more brick finished for our station is monitoring the air quality (and well being).
- | https://diyprojects.io/bh1750-gy-302-measure-lighting-quality-home-arduino-esp8266-esp32/?amp | CC-MAIN-2022-40 | en | refinedweb |
Full Text Search (FTS) Using the Go SDK with Couchbase Server
You can use the Full Text Search service (FTS) to create queryable full-text indexes in Couchbase Server.
Couchbase offers Full-text search support, allowing you to search for documents that contain certain words or phrases.
In the Go SDK you can search full-text indexes by using the
SearchQuery object and executing it with the
Bucket.ExecuteSearchQuery() API.
The following example shows how to send a simple Search query (note that in this example the cbft package also has to be imported independently):
import ( "github.com/couchbase/gocb" "github.com/couchbase/gocb/cbft" "fmt" ) // ... query := gocb.NewSearchQuery("travel-search", cbft.NewTermQuery("office")) res, _ := bucket.ExecuteSearchQuery(query) for _, hit := range res.Hits() { fmt.Printf("%s\n", hit.Id) }
The
Bucket.ExecuteSearchQuery() method returns a
SearchResultsinterface which provides access to all of the information returned by the query.
Other search result data may be accessed using the various other available methods on the
SearchResults interface.
res, _ := bucket.ExecuteSearchQuery(query) for _, facet := range res.Facets() { // ... } fmt.Printf("Total Hits: %d", res.Status().Total)
Query Types
Query types may be found inside the
gocb/cbft package.
The package contains query classes corresponding to those enumerated in Types of Queries.
Query object should be instantiated by using the associated
New*Query functions, passing the search term (usually a string) as the first argument, followed by some query modifiers. through methods on the
SearchQuery object itself.
Query Facets
Query facets may also be added to the general search parameters by using the
AddFacet method.
The
AddFacet method accepts a facet name as well as a facet definiton You can create facets by instantiating a
Facet object found in the
gocb.cbft package.
query := gocb.NewSearchQuery("travel-search", cbft.NewTermQuery("office")) query.AddFacet("countries", cbft.NewTermFacet("country", 5)) res, _ := bucket.ExecuteSearchQuery(query) fmt.Printf("Total Countries: %d", res.Facets()["countries"].Total) | https://docs.couchbase.com/go-sdk/1.1/full-text-searching-with-sdk.html | CC-MAIN-2019-39 | en | refinedweb |
Your One-Stop Shop For Everything React Boston 2018
at last year’s inaugural conference: GraphQL was a big player, popping up in a handful of talks throughout the weekend, as was performance (including discussions on ways to increase both raw speed and overall perceived performance). ReasonML, the subject of an enthusiastic presentation by Marcel Cutts last year, starred in Ken Wheeler’s Saturday morning keynote. In addition to the usual suspects, React Boston also offered a stunning array of talks showcasing how people are using React in creative, boundary-pushing, useful, and whimsical ways. Keep reading for some of the highlights.
Component libraries make life easier for everyone
In a gorgeously illustrated talk on Saturday morning, Samantha Bretous emphasized how a component kit—a shared library of reusable, well tested, cleanly encapsulated components—can improve efficiency, help isolate bugs, reduce the overall size of your codebase, and ensure that users have a consistent experience across your application.
I love working with a component library @WayfairTech but had little idea what it takes to develop and maintain one. Thanks @samanthabretous for breaking the key concepts down for us! #ReactBoston2018
— 𝚂𝚞𝚣𝚒 🧙🏼♀️ (@suzicurran) September 29, 2018
Bretous laid out a clear guide to building a component kit, involving strong communication pathways between designers and engineers, and a suite of tools—including Zeplin and StorybookJS—that her team has found helpful for organizing and documentation. To avoid the complications—inconsistent UIs, developer decision-making overhead, code proliferation—that result from custom CSS, Wayfair’s Artem Sapegin recommended building a series of primitive components that can serve as building blocks for a larger design system. Proud of the kit component library you’ve built, or looking to augment it? Jason Clark’s lightning talk introduced Bit, a cloud-based tool for hosting, sharing, and collaborating on components.
GraphQL is going strong
Chris Toomey highlighted how using GraphQL—in particular, GraphQL fragments—helps with code organization, letting each component specify its own data needs alongside its JS, CSS, and markup. Individual fragments—like individual components—can be composed to make up larger, more complex queries. Combining React, GraphQL, TypeScript, and the Apollo CLI means each component can structure its type definition around these fragments, making the development process “simple and correct.” In her talk on learning the React Native ecosystem as a junior engineer, Erin Fox agreed, pointing to GraphQL as a “booster seat” that helped her jump start her career. Shawn Swyx Wang followed up with a demo of babel-blade, which solves the “double declaration” problem—where the structure of a data object is specified both in a GraphQL query string and in the component that uses the data—by intelligently building queries for you based on how data is used.
Oh man @swyx just solved the double declaration problem for GraphQL clients using Babel transforms 😱😍🔥 pic.twitter.com/0oZViiAIEh
— Tejas Kumar (@TejasKumar_) September 29, 2018
React is all around us
React is all around us. @_matthamil discusses making cool VR with React360. #ReactBoston2018 pic.twitter.com/CdW1DpQIKA
— Ray Deck (@ray_deck) September 30, 2018
Audience anticipation for the Saturday afternoon demo by Vladimir Novick—who blew minds last year by flying a drone inside of Wayfair’s offices—was high. Novick’s talk on building augmented reality applications with React did not disappoint. After a quick timeline of AR (first introduced by L. Frank Baum—as in, The Wizard of Oz—in 1901), Novick dropped a six-foot-tall monster next to the podium and then took us through a portal into Wakanda.
Nice demo of React Portals by @VladimirNovick 😉 – pic.twitter.com/p2khAgo0tN
— Jeff Winkler (@winkler1) September 30, 2018
Novick’s detailed slides included a deep dive into the code and tooling behind the demo. Following on Sunday, Matt Hamil blended cutting-edge React360 virtual reality technology with old-school nostalgia, demoing an entirely React-powered, 3D version of Frogger. Hamil also noted the new UX challenges and accessibility considerations that come with 3D applications and developing for different input devices, urging engineers to take responsibility for accessibility as they begin building in a third dimension. This point arose several times throughout the weekend, including in Josh Comeau’s Saturday morning talk, a call for whimsy that blended a discussion of how web animations and delightful UX surprises can improve user experience (and companies’ bottom line), a practical walkthrough of several examples, and an emphasis on the importance of accessibility over “wow” factor when implementing these kinds of features.
People are thinking about performance
While Comeau’s talk focused on how we can improve perceived performance, a number of other presentations covered how to assess and speed up load times. Christina Keelan Cottrell’s lightning talk on Lighthouse, a Google open source tool that offers performance and accessibility audits, used Wayfair as a sample use case to show how addressing a few common pain points can dramatically speed up performance. Houssein Djirdeh shared several additional tools—including the Profiler in React 16.5—to measure performance metrics and make addressing performance bottlenecks easier.
.@hdjirdeh on stage at @ReactBoston about website performance with React! 🚗💨
Mentioning the now indispensable React profiler 🔥 pic.twitter.com/hb2cU0xIKV
— Florian Rival (@Florianrival) September 30, 2018
Tejas Kumar gave a Pokemon-themed demo of suspense, a new React API (still a work in progress) that can help prioritize rendering of different components based on when data becomes available. First announced at JSConf Iceland in March, suspense lets you “pause any state update until the data is ready, and… add async loading to any component deep in the tree without plumbing all the props and state through your app and hoisting the logic.” Cole Turner turned the focus from upping speed to slimming down content, sharing tricks for how to render less in a puppy-filled presentation on performant layout.
My mind is completely blown by @coleturner’s #reactboston2018 lightning talk. Render only what’s in the viewport. Remove what the user has already seen. Seems obvious, but has seemed non-trivial until now. @netflix pic.twitter.com/yWMhr7U7cs
— Brent Danley (@brentdanley) September 30, 2018
Everyone loves a challenge
One hundred and eleven of React Boston’s attendees had their interest piqued and their knowledge tested in Wayfair’s React quiz. However, zero of them were able to achieve a perfect score.
The effort involved in setting up a React-flavored challenge was a fascinating and fun learning experience. Watching people furrow their brow trying to remember how comparison operators resolve was entertaining, even after experiencing the heartbreak of looking into someone’s face as you reveal their 1 out of 11 score. It was great to listen to the chatter around some of the questions as groups began to crowdsource their problem solving.
While Wayfair enjoyed generating a little buzz around an impossible quiz, many of the team expected participants to google their way to 100%, or quit halfway through taking it.
What we didn’t expect was the sheer number of submissions. Overall, a big shout out is in order for everyone’s good spirit and participation. React Boston is about bringing our community closer together and raising our collective bar. Watching participants throw yourselves at a challenge like this made the Wayfair quiz custodians smile.
Now for some answers and breakdowns.
Hardest questions:
Answer Key:
Q: What are the results of these two statements?
1 < 2 < 3; 3 > 2 > 1;
A: True, false.
Q: What is logged?
const c = 'constructor'; console.log(c[c][c]);
A: “ƒ Function() { [native code] }”
Q: What is logged to the console?
var z, z, z = 1, b, b, z = 2, z, z, z; console.log(z);
A: 2.
Q: Select all that are true:
[ ] Math.max() > Number.MAX_VALUE [ ] Infinity > Number.MAX_VALUE [ ] Math.min() < Math.max() [ ] Math.max() > Infinity
A: Infinity > Number.MAX_VALUE.
Q: What are the contents of body after executing this in the browser?
document.body.innerHTML = ((y --> y <!--h1 > Hello World!</h1>)()); )());
A: Undefined.
Q: Which of the following is true about React keys?
[ ] They must be locally unique [ ] They must be globally unique [ ] They do not need to be unique
A: They must be locally unique.
Q: Which of the following is true about React keys?
[ ] Returns a JavaScript function [ ] Returns an HTML element [ ] Returns a React element [ ] Returns multiple elements
A: Returns a JavaScript function and Returns a React element (functionally equivalent).
Q: Which of these were ever React APIs?
[ ] React.unstable_createReturn [ ] React.unstable_Yield [ ] React.unstable_deferredUpdates
A: React.unstable_createReturn (some of these were methods on related libraries, though).
Q: During the diffing phase, when comparing the below two React DOM elements of the same type, React will:
<div className="AmazingUI" title="stuff" /> <div className="MoreAmazingUI" title="stuff" />
A: Modify only the className on the underlying DOM node.
Q: What does
const X = <div /> look like when defined in a React environment?
A:
var X = React.createElement("div", null);
Q: What of the following statements are true, regarding to the `ref` usage in the snippet below?
class MyApp extends React.Component { static propTypes = { title: PropTypes.string.isRequired }; render() { return ( <h2 ref={e => this.title = e}> {this.props.title} </h2> ) } }
A:
[x] The ref function will be run each time the component renders. [x] It will create a new instance of the function for ref each time it renders. [x] this.title will be the DOM element of h2 after the component is mounted.
Hardest questions (deep dive)
In the interest of furthering all of our understanding, we want to break down two of the hardest questions in terms of incorrect submissions.
Hard Question One:
What of the following statements are true, regarding the `ref` usage in the snippet below?
class MyApp extends React.Component { static propTypes = { title: PropTypes.string.isRequired }; render() { return ( <h2 ref={e => this.title = e}> {this.props.title} </h2> ) } }
From the author of the question, Wayfair software engineer Yinlin Zhou:
The ref function will be run each time the component renders.
“If we use an inline function for
ref, the
ref function will be run each time the component renders. A new instance of the function will be created each time the component renders, so
ref needs to clear the old one and set the new one, so it’ll be run each time the component renders. More precisely, it’ll be called twice during updates (for reference:).”
It will create a new instance of the function for ref each time it renders.
“Because of the anonymous, in-line nature of the function, it will create a new instance of the
ref function each time it renders.”
this.title will be the DOM element of h2 after the component is mounted.
The reference will be the DOM element after the component is mounted, and will go back to null when it unmounts.
Hard Question Two:
What are the contents of body after executing this in the browser?
document.body.innerHTML = ((y --> y <!--h1 > Hello World!</h1>)()); )());
From the author of the question, Wayfair software engineer Dan Uhl:
“Believe it or not, way back in the days of Netscape 1.x some browsers didn’t understand the <script> tag! This meant that we needed to use HTML comments in script tags so that the browser wouldn’t render our JavaScript directly on the page. While this is no longer relevant, HTML-like comments are still in the spec:. In this example we’re using a single line HTML-like comment to ignore all code after the `<!–`. This means we have an IIFE with no input, so the return value will be
undefined.
Wrapping up an amazing weekend
Covering all of the amazing conversations had and resources shared this during such an event is a challenge—React Boston also featured demos of new tools to help mock GraphQL schemas, manage React state more simply, and make large scale code migrations more straightforward; talks on unit testing best practices, improving the code review process from both ends, and easter eggs across multiple projects; and a tricky quiz that tested the boundaries of participants’ JavaScript knowledge.
For those who weren’t able to be there, videos of all talks are available now; head over to the Wayfair Tech YouTube channel to start watching.
Looking for a few other great recaps? Some fellow presenters shared their own thoughts and experiences, too.
- Shawn Swyx Wang’s “I Went to React Boston and Saw the Future”
- Mark Erikson’s “React Boston 2018 Presentation: The Status of Redux”
Wayfair is incredibly humbled to be actively contributing to and learning from the global React community. React Boston’s schedule featured many who’ll still be on the conference circuit through to the end of the year; React Day Berlin is one such event. Until next year, we’ll keep sharing and participating on all fronts around ReactJS. Bring on React Boston 2019! | https://tech.wayfair.com/2018/11/everything-react-boston-2018/ | CC-MAIN-2019-39 | en | refinedweb |
![if !IE]> <![endif]>
Form Processing and Business Logic
XHTML forms allow users to enter data to be sent to a Web server for processing. Once the server receives the form, a server program processes the data. Such a program could help people purchase products, send and receive Web-based e-mail, complete a survey, etc. These types of Web applications allow users to interact with the server. Figure 28.17 uses an XHTML form to allow users to input personal information for a mailing list. This type of registration might be used to store user information in a database.
<!DOCTYPE html PUBLIC
"-//W3C//DTD XHTML 1.0 Transitional//EN"
"DTD/xhtml1-transitional.dtd">
<!-- Fig. 28.17: fig28_17.html -->
<html xmlns = "" xml:
<head>
<title>Sample FORM to take user input in HTML</title>
</head>
<body style = "font-family: Arial, sans-serif; font-size: 11pt">
<div style = "font-size: 15pt; font-weight: bold">
This is a sample registration form.
</div>
Please fill in all fields and click Register.
<form method = "post" action = "/cgi-bin/fig28_18.py">
<img src = "images/user.gif" alt = "user" /><br />
<div style = "color: blue">
Please fill out the fields below.<br />
</div>
<img src = "images/fname.gif" alt = "firstname" />
<input type = "text" name = "firstname" /><br />
<img src = "images/lname.gif" alt = "lastname" />
<input type = "text" name = "lastname" /><br />
<img src = "images/email.gif" alt = "email" />
<input type = "text" name = "email" /><br />
<img src = "images/phone.gif" alt = "phone" />
<input type = "text" name = "phone" /><br />
<div style = "font-size: 8pt">
Must be in the form (555)555-5555<br/><br/>
</div>
<img src = "images/downloads.gif" alt = "downloads" /><br />
<div style = "color: blue">
Which book would you like information about?<br />
</div>
<select name = "book">
<option>XML How to Program</option>
<option>Python How to Program</option>
<option>E-business and E-commerce How to Program</option>
<option>Internet and WWW How to Program 2e</option>
<option>C++ How to Program 3e</option>
<option>Java How to Program 4e</option>
<option>Visual Basic How to Program</option>
<br /><br />
<img src = "images/os.gif" alt = "os" /><br />
<div style = "color: blue">
Which operating system are you
currently using?<br />
</div>
<input type = "radio" name = "os" value = "Windows NT"
checked = "checked" />
Windows NT
<input type = "radio" name = "os" value = "Windows 2000" />
Windows 2000
<input type = "radio" name = "os" value = "Windows 95_98" />
Windows 95/98/ME<br />
<input type = "radio" name = "os" value = "Linux" />
Linux
<input type = "radio" name = "os" value = "Other" />
Other<br />
<input type = "submit" value = "Register" />
</form>
</body>
</html>
Fig. 28.17 XHTML form to collect information from user
The form element (line 19) specifies how the information enclosed by tags <form> and </form> should be handled. The first attribute, method = "post", directs the browser to send the form’s information to the server. The second attribute, action = "/ cgi-bin/fig28_18.py", directs the server to execute the fig28_18.py Python script, located in the cgi-bin directory. The names given to the input items (e.g., firstname) in the Web page are important when the Python script is executed on the server. These names allow the script to refer to the individual pieces of data the user sub-mits. When the user clicks the button labeled Register, both the input items and the names given to the items are sent to the fig28_18.py Python script.
Figure 28.18 takes user information from fig28_17.html and sends a Web page to the client indicating that the information was received. Line 6 imports the cgi module, which provides functionality for writing CGI scripts in Python, including access to XHTML form values.
#!c:\Python\python.exe
# Fig. 28.18: fig28_18.py
# Program to read information sent to the server from the
# form in the form.html document.
import cgi
import re
# the regular expression for matching most US phone numbers
telephoneExpression = \
re.compile( r'^\(\d{3}\)\d{3}-\d{4}$' )
def printContent():
print "Content-type: text/html"
print """
<html xmlns = "" xml:
<head><title>Registration results</title></head>
<body>"""
def printReply():
print """
Hi <span style = "color: blue; font-weight: bold">
%(firstName)s</span>.
Thank you for completing the survey.<br />
You have been added to the <span style = "color: blue;
font-weight: bold">%(book)s </span> mailing list.<br /><br />
<span style = "font-weight: bold">
The following information has been saved in our database:
</span><br />
<table style = "border: 0; border-width: 0;
border-spacing: 10">
<tr><td style = "background-color: yellow">Name </td>
<td style = "background-color: yellow">Email</td>
<td style = "background-color: yellow">Phone</td>
<td style = "background-color: yellow">OS</td></tr>
<tr><td>%(firstName)s %(lastName)s</td><td>%(email)s</td>
<td>%(phone)s</td><td>%(os)s</td></tr>
</table>
<br /><br /><br />
<div style = "text-align: center; font-size: 8pt">
This is only a sample form.
You have not been added to a mailing list.
</div></center>
""" % personInfo
def printPhoneError():
print """<span style = "color: red; font-size 15pt">
INVALID PHONE NUMBER</span><br />
A valid phone number must be in the form
<span style = "font-weight: bold">(555)555-5555</span>
<span style = "color: blue"> Click the Back button,
enter a valid phone number and resubmit.</span><br /><br />
Thank You."""
def printFormError():
print """<span style = "color: red; font-size 15pt">
FORM ERROR</span><br />
You have not filled in all fields.
<span style = "color: blue"> Click the Back button,
fill out the form and resubmit.</span><br /><br />
Thank You."""
printContent()
form = cgi.FieldStorage()
try:
personInfo = { 'firstName' : form[ "firstname" ].value,
'lastName' : form[ "lastname" ].value,
'email' : form[ "email" ].value,
'phone' : form[ "phone" ].value,
'book' : form[ "book" ].value,
'os' : form[ "os" ].value }
except KeyError:
printFormError()
if telephoneExpression.match( personInfo[ 'phone' ] ):
printReply()
else:
printPhoneError()
Fig. 28.18 XHTML form to get cookie values from user
Line 72 begins the main portion of the script and calls function printContent to print the proper HTTP header and XHTML DOCTYPE string. Line 74 creates an instance of class FieldStorage and assigns the instance to variable form. This class contains information about any posted forms. The try block (lines 76–82) creates a dictionary that contains the appropriate values from each defined element in form. Each value is accessed via the value data member of a particular form element. For example, line 78 assigns the value of the lastName field of form to the dictionary key 'lastName'.
If the value of any element in form is None, the try block raises a KeyError exception, and we call function printFormError. This function (lines 63–70) prints a message in the browser that tells the user the form has not been completed properly and instructs the user to click the Back button to fill out the form and resubmit it.
Line 86 tests the user-submitted phone number against the specified format. We com-pile the regular expression telephoneExpression in lines 10–11. If the expression’s match method does not return None, we call the printReply function (discussed momentarily). If the match method does return None (i.e., the phone number is not in the proper format), we call function printPhoneError. This function (lines 53–61) dis-plays a message in the browser that informs the user that the phone number is in improper format and instructs the user to click the Back button to change the phone number and resubmit the form.
If the user has filled out the form correctly, we call function printReply (lines 22– 51). This function thanks the user and displays an XHTML table with the information gathered from the form. Notice that we format the output with values from the person-Info dictionary. For example, the beginning of line 25
%(firstName)s
inserts the value of the string variable firstName into the string after the percent sign (%). Line 51 informs Python that the string variable firstName is a key in the dictionary personInfo. Thus, the text at the beginning of line 25 is replaced with the value stored in personInfo[ 'firstName' ].
Related Topics
Copyright © 2018-2020 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai. | https://www.brainkart.com/article/Form-Processing-and-Business-Logic---Python_11131/ | CC-MAIN-2019-39 | en | refinedweb |
Abstract:What do we do when data exceeds the capacity but has to be stored on disks? How can we encapsulate KVStore and integrate it into Redis? How is Redis encoding implemented?
This article aims to address these questions by using the Ardb protocol, specifically at the encoding/decoding layer during the integration between Redis and KVStore.
Redis is currently a hot property in the NoSQL circle. It is multipurpose and practical, and especially suitable for cracking some challenges that fall beyond the capability of traditional relational databases. Redis, as a memory database, stores all the data in memory.
Ardb is a NoSQL storage service fully compatible with the Redis protocol. Its storage is implemented based on the existing mature KVStore engine. Theoretically, any KVStore implementations similar to B-Tree/LSM Tree can be used as Ardb's underlying storage. Ardb currently supports LevelDB/RocksDB/LMDB.
The encoding/decoding layer is a very important part of the Redis and KVStore integration solution. Through this layer, we can remove the differences between various KVStore implementations. You can encapsulate and implement complicated data structures in Redis such as string, hash, list, set, and sorted set with any simple KVStore engine.
For strings, it is clear that it can be mapped to a KV pair in a one-to-one manner in KVStore. For other container types, we need to do the following:
• One KV stores the metadata of the entire key (such as the number of members of the list and their expiration time).
• Each member needs a KV to save the member's name and value.
For sorted set, each member has two attributes: score and rank, so we need to do the following:
• One KV stores the metadata of the entire key.
• Each member needs a KV to store the score information.
• Each member needs a KV to store the rank information of each member.
All keys contain the same prefix, and the encoding format is defined as follows:
[<namespace>] <key> <type> <element...>
The "namespace" is used to support something similar to database in Redis. It can be any string, and is not limited to a number.
The "key" is a varbinary string.
The "type" is used to define a simple key-value container. This type implicitly indicates the data structure type of the key. It is one byte long.
The key type in the "meta" information is fixed as KEY_META. Specific types will be defined in the value (refer to the next section).
In addition to the above three parts, different types of keys may have additional fields. For example, hash's key may need an additional "field" field.
Internal values are complex, but their encoding all starts with "type", as defined in the previous section.
<type> <element...>
Subsequent formats vary based on various type definitions.
The encoding of each type of data is shown as follows: "ns" stands for namespace.
KeyObject ValueObject String [<ns>] <key> KEY_META KEY_STRING <MetaObject> Hash [<ns>] <key> KEY_META KEY_HASH <MetaObject> [<ns>] <key> KEY_HASH_FIELD <field> KEY_HASH_FIELD <field-value> Set [<ns>] <key> KEY_META KEY_SET <MetaObject> [<ns>] <key> KEY_SET_MEMBER <member> KEY_SET_MEMBER List [<ns>] <key> KEY_META KEY_LIST <MetaObject> [<ns>] <key> KEY_LIST_ELEMENT <index> KEY_LIST_ELEMENT <element-value> Sorted Set [<ns>] <key> KEY_META KEY_ZSET <MetaObject> [<ns>] <key> KEY_ZSET_SCORE <member> KEY_ZSET_SCORE <score> [<ns>] <key> KEY_ZSET_SORT <score> <member> KEY_ZSET_SORT
Here we use the most complex type, sorted set, as an example. Suppose that there is a Sorted Set A: {member = first, score = 1}, {member = second, score = 2}. Its storage mode in Ardb is as follows:
The storage encoding of Key A is:
// // The "|" in the pseudo-code represents the division of the domain and does not mean to store the data as "|". During actual serialization, each field is serialized at a specific location. The key is: ns|1|A (1 stands for KEY_META metadata type). The value is: metadata encoding (redis data type/zset, expiration time, number of members, maximum and minimum score among others).
The core information storage encoding of Member "first" is:
The key is: ns|11|A|first (11 stands for the KEY_ZSET_SCORE type). The value is: 11|1 (11 stands for the KEY_ZSET_SCORE type. 1 stands for the score of the Member "first").
The rank information storage encoding of Member "first" is:
The key is: ns|10|A|1|first (10 stands for the KEY_ZSET_SORT type and 1 stands for the score). The value is: 10 (representing the KEY_ZSET_SORT type, insignificant. RocksDB automatically sorts the values by key, so it is easy to calculate the rank, requiring no storage and updating).
The score information storage encoding of Member "second" is skipped.
When you use the zcard A command, you can access namespace_1_A directly to get the number of ordered collections in the metadata.
When you use zscore A first, you can directly access namespace_A_first to get the score of Member "first".
When you use zrank A first, first you run zscore to get the score, and then find the serial number of namespace_10_A_1_first.
The specific storage code is as follows:
KeyObject meta_key(ctx.ns, KEY_META, key); ValueObject meta_value; for (each_member) { // KEY_ZSET_SORT stores the rank information. KeyObject zsort(ctx.ns, KEY_ZSET_SORT, key); zsort.SetZSetMember(str); zsort.SetZSetScore(score); ValueObject zsort_value; zsort_value.SetType(KEY_ZSET_SORT); GetDBWriter().Put(ctx, zsort, zsort_value); // Store the score information. KeyObject zscore(ctx.ns, KEY_ZSET_SCORE, key); zscore.SetZSetMember(str); ValueObject zscore_value; zscore_value.SetType(KEY_ZSET_SCORE); zscore_value.SetZSetScore(score); GetDBWriter().Put(ctx, zscore, zscore_value); } if (expiretime > 0) { meta_value.SetTTL(expiretime); } // Metadata. GetDBWriter().Put(ctx, meta_key, meta_value);
All data structures store a key-value of the metadata using a uniform encoding format, so it is impossible to have the same name for different data structures. (That is why in the KV pairs storing the key, the K is fixed to the KEY_META type, and the corresponding type of information exists in the Value field of the Metadata type in Redis.)
When implementing Del, the system will first query the metadata key-value to get the specific data structure type, and then perform the corresponding deletion, following steps similar to the following:
• Query the meta information of the specified key to get the data structure type;
• Perform deletion according to the specific type;
• One "del" will require at least one read + subsequent write operation for the deletion.
The specific code is as follows:
int Ardb::DelKey(Context& ctx, const KeyObject& meta_key, Iterator*& iter) { ValueObject meta_obj; if (0 == m_engine->Get(ctx, meta_key, meta_obj)) { // Delete directly if the data is of the string type. if (meta_obj.GetType() == KEY_STRING) { int err = RemoveKey(ctx, meta_key); return err == 0 ? 1 : 0; } } else { return 0; } if (NULL == iter) { // If the data is of a complicated type, the database will be traversed based on the namespace, key and type prefix. // Search all the members with the prefix of namespace|type|Key. iter = m_engine->Find(ctx, meta_key); } else { iter->Jump(meta_key); } while (NULL != iter && iter->Valid()) { KeyObject& k = iter->Key(); ... iter->Del(); iter->Next(); } }
The prefix search code is as follows:
Iterator* RocksDBEngine::Find(Context& ctx, const KeyObject& key) { ... opt.prefix_same_as_start = true; if (!ctx.flags.iterate_no_upperbound) { KeyObject& upperbound_key = iter->IterateUpperBoundKey(); upperbound_key.SetNameSpace(key.GetNameSpace()); if (key.GetType() == KEY_META) { upperbound_key.SetType(KEY_END); } else { upperbound_key.SetType(key.GetType() + 1); } upperbound_key.SetKey(key.GetKey()); upperbound_key.CloneStringPart(); } ... }
It is relatively difficult to support the data expiration mechanism of complicated data structures on a key-value storage engine. Ardb, however, uses special methods to support the expiration mechanism for all data structures.
The specific implementation is as follows:
• The expiration information is stored in the meta value field in the absolute Unix format (ms).
• Based on the above design, TTL/PTTL and other TTL queries only require one meta read.
• Based on the above design, any reads of the metadata will trigger an expiration decision. Since read operations on the metadata are a requisite step, no extra read operations are required here (it is triggered on access like in Redis).
• Create a namespace TTL_DB to specifically store TTL sorting information.
• When the expiration time setting is stored int the metadata and is not zero, an additional key-value will be stored as KEY_TTL_SORT. The key encoding format is [TTL_DB] "" KEY_TTL_SORT, and the value is empty. Therefore, operations similar to expire settings in Ardb will require an additional write operation for implementation.
• In the custom comparator, the key comparison rule for the KEY_TTL_SORT type is to compare first, so that the KEY_TTL_SORT data will be saved in the order of the expiration time.
• A thread is started independently in Ardb to scan the KEY_TTL_SORT data in order at regular intervals (100 ms). When the expiration time is earlier than the current time, the delete operation will be triggered. When the expiration time is later than the current time, the scan will be terminated (equivalent to the timed serverCron task in Redis for processing expired keys).
Through the conversion on the encoding layer, we can well encapsulate KVStore for integration with Redis. All operations on Redis data, after the conversion on the encoding layer, will eventually be converted to n reads and writes (N> = 1) to the KVStore. On the premise of conforming to the Redis command semantics, we tried to minimize the number of "n"s in encoding.
The most important point is that the integration of Redis and KVStore is not meant to replace Redis, but to enable the single machine to support a data size that far exceeds the memory capacity while maintaining an acceptable level of performance. In a particular situation, it can also serve as a cold data storage solution to achieve interconnection with the hotspot data in Redis.
WordPress with LEMP on Alibaba Cloud – Part 1 Provision and Secure an Ubuntu 16.04 Server
Database Recovery in GitLab: Implementing Database Disaster Tolerance
1,435 posts | 227 followersFollow
Alibaba Clouder - November 10, 2017
Alibaba Cloud Product Launch - December 11, 2018
Michelle - August 1, 2018
Alibaba Clouder - July 25, 2018
Alibaba Cloud Storage - April 25, 2019
Alibaba Cloud Product Launch - December 11, 2018
1,435 posts | 227 followersFollow
A key value database service that offers in-memory caching and high-speed access to applications hosted on the cloudLearn More
Raja_KT February 12, 2019 at 4:23 am
Interesting one. But if shown with real sample values, it will be more comprehensible. | https://www.alibabacloud.com/blog/247782 | CC-MAIN-2019-39 | en | refinedweb |
I'm trying to figure out how to generate a competition chart list from users (players) in database.
I'v got a random sorting:
def shotokanRandPlayers(request, tournament_id): tournament = Tournament.objects.get(id = tournament_id) categories = Category.objects.filter(tournament_id = tournament) for category in categories: if category.type=="KM" and category.playerT_id.all().count()>0: playersT = list(category.playerT_id.all()) random.shuffle(playersT) i = 0 while i<len(playersT): first = FirstPlayer.objects.create(player = playersT[i].player_id) i=i+1 if i<len(playersT): second = SecondPlayer.objects.create(player = playersT[i].player_id) else: second = None Fight.objects.create(category_id = category, firstplayer = first, secondplayer = second, round = 0) i=i+1 return redirect('tournament', tournament_id = tournament.id)
By now on I would like go generate a ready list of these players to have it looked like this:
Are there any django-ready-extensions that would make it ? Or anybody has an idea how to make it ? Thanks! | http://www.howtobuildsoftware.com/index.php/how-do/bCQs/python-django-database-charts-how-to-generate-chart-of-players-from-database-in-django | CC-MAIN-2019-39 | en | refinedweb |
After we have created a grid in the previous example, we now show how to define degrees of freedom on this mesh. For this example, we will use the lowest order ( \(Q_1\)) finite elements, for which the degrees of freedom are associated with the vertices of the mesh. Later examples will demonstrate higher order elements where degrees of freedom are not necessarily associated with vertices any more, but can be associated with edges, faces, or cells.
The term "degree of freedom" is commonly used in the finite element community to indicate two slightly different, but related things. The first is that we'd like to represent the finite element solution as a linear combination of shape function,.
Defining degrees of freedom ("DoF"s in short) on a mesh is a rather simple task, since the library does all the work for you. Essentially, all you have to do is create a finite element object (from one of the many finite element classes deal.II already has, see for example the Finite element space descriptions documentation) and give it to a DoFHandler object through the DoFHandler::distribute_dofs function ("distributing DoFs" is the term we use to describe the process of enumerating the basis functions as discussed above). The DoFHandler is a class that manages which degrees of freedom live where, i.e., it can answer questions like "how many degrees of freedom are there globally" and "on this cell, give me the global indices of the shape functions that live here". This is the sort of information you need when determining how big your system matrix should be, and when copying the contributions of a single cell into the global matrix.
The next step would then be to compute a matrix and right hand side corresponding to a particular differential equation using this finite element and mesh. We will keep this step for the step-3 program and rather talk about one practical aspect of a finite element program, namely that finite element matrices are almost always very sparse, i.e. almost all entries in these matrices are zero. (To be more precise, we say a discretization leads to a sparse matrix if the number of nonzero entries per row in the matrix is bounded by a number that is independent of the overall number of degrees of freedom. For example, the simple 5-point stencil of a finite difference approximation of the Laplace equation leads to a sparse matrix since the number of nonzero entries per row is five, and therefore independent of the total size of the matrix.) Sparsity is one of the distinguishing feature of the finite element method compared to, say, approximating the solution of a partial differential equation using a Taylor expansion and matching coefficients, or using a Fourier basis.
In practical terms, it is the sparsity of matrices that enables us to solve problems with millions or billions of unknowns. To understand this, note that a matrix with \(N\) rows, each with a fixed upper bound for the number of nonzero entries, requires \({\cal O}(N)\) memory locations for storage, and a matrix-vector multiplication also requires only \({\cal O}(N)\) operations. Consequently, if we had a linear solver that requires only a fixed number of matrix-vector multiplications to come up with the solution of a linear system with this matrix, then we would have a solver that can find the values of all \(N\) unknowns with optimal complexity, i.e., with a total of \({\cal O}(N)\) operations. It is clear that this wouldn't be possible if the matrix were not sparse, but it also requires very specialized solvers such as multigrid methods to satisfy the requirement that the solution requires only a fixed number of matrix-vector multiplications. We will frequently look at the question of what solver to use in the remaining programs of this tutorial.
The sparsity is generated by the fact that finite element shape functions are defined locally on individual cells, rather than globally, and that the local differential operators in the bilinear form only couple shape functions that have some overlap. By default, the DoFHandler class enumerates degrees of freedom on a mesh in a rather random way; consequently, the sparsity pattern is also not optimized for any particular purpose. However, for some algorithms, especially for some linear solvers and preconditioners, it is advantageous to have the degrees of freedom numbered in a certain order, and we will use the algorithm of Cuthill and McKee to do so. This can be thought of as choosing a different, permuted basis of the finite element space. The results are written to a file and visualized using a simple visualization program; you get to see the outcome in the results section below.
The first few includes are just like in the previous program, so do not require additional comments:
However, the next file is new. We need this include file for the association of degrees of freedom ("DoF"s) to vertices, lines, and cells:
The following include contains the description of the bilinear finite element, including the facts that it has one degree of freedom on each vertex of the triangulation, but none on faces and none in the interior of the cells.
(In fact, the file contains the description of Lagrange elements in general, i.e. also the quadratic, cubic, etc versions, and not only for 2d but also 1d and 3d.)
In the following file, several tools for manipulating degrees of freedom can be found:
We will use a sparse matrix to visualize the pattern of nonzero entries resulting from the distribution of degrees of freedom on the grid. That class can be found here:
We will also need to use an intermediate sparsity pattern structure, which is found in this file :
We will want to use a special algorithm to renumber degrees of freedom. It is declared here:
And this is again needed for C++ output:
Finally, as in step-1, we import the deal.II namespace into the global scope:
This is the function that produced the circular grid in the previous step-1 example program with fewer refinements steps. The sole difference is that it returns the grid it produces via its argument.
The details of what the function does are explained in step-1. The only thing we would like to comment on is this:
Since we want to export the triangulation through this function's parameter, we need to make sure that the manifold object lives at least as long as the triangulation does. However, in step-1, the manifold object is a local variable, and it would be deleted at the end of the function, which is too early. We avoid the problem by declaring it 'static' which makes sure that the object is initialized the first time control the program passes this point, but at the same time assures that it lives until the end of the program.
Up to now, we only have a grid, i.e. some geometrical (the position of the vertices) and some topological information (how vertices are connected to lines, and lines to cells, as well as which cells neighbor which other cells). To use numerical algorithms, one needs some logic information in addition to that: we would like to associate degree of freedom numbers to each vertex (or line, or cell, in case we were using higher order elements) to later generate matrices and vectors which describe a finite element field on the triangulation.
This function shows how to do this. The object to consider is the
DoFHandler class template. Before we do so, however, we first need something that describes how many degrees of freedom are to be associated to each of these objects. Since this is one aspect of the definition of a finite element space, the finite element base class stores this information. In the present context, we therefore create an object of the derived class
FE_Q that describes Lagrange elements. Its constructor takes one argument that states the polynomial degree of the element, which here is one (indicating a bi-linear element); this then corresponds to one degree of freedom for each vertex, while there are none on lines and inside the quadrilateral. A value of, say, three given to the constructor would instead give us a bi-cubic element with one degree of freedom per vertex, two per line, and four inside the cell. In general,
FE_Q denotes the family of continuous elements with complete polynomials (i.e. tensor-product polynomials) up to the specified order.
We first need to create an object of this class and then pass it on to the
DoFHandler object to allocate storage for the degrees of freedom (in deal.II lingo: we
distribute degrees of freedom). Note that the DoFHandler object will store a reference to this finite element object, so we have to make sure its lifetime is at least as long as that of the
DoFHandler; one way to make sure this is so is to make it static as well, in order to prevent its preemptive destruction. (However, the library would warn us if we forgot about this and abort the program if that occurred. You can check this, if you want, by removing the 'static' declaration.)
As described above, let us first create a finite element object, and then use it to allocate degrees of freedom on the triangulation with which the dof_handler object is associated:
Now that we have associated a degree of freedom with a global number to each vertex, we wonder how to visualize this? There is no simple way to directly visualize the DoF number associated with each vertex. However, such information would hardly ever be truly important, since the numbering itself is more or less arbitrary. There are more important factors, of which we will demonstrate one in the following.
Associated with each vertex of the triangulation is a shape function. Assume we want to solve something like Laplace's equation, then the different matrix entries will be the integrals over the gradient of each pair of such shape functions. Obviously, since the shape functions are nonzero only on the cells adjacent to the vertex they are associated with, matrix entries will be nonzero only if the supports of the shape functions associated to that column and row numbers intersect. This is only the case for adjacent shape functions, and therefore only for adjacent vertices. Now, since the vertices are numbered more or less randomly by the above function (DoFHandler::distribute_dofs), the pattern of nonzero entries in the matrix will be somewhat ragged, and we will take a look at it now.
First we have to create a structure which we use to store the places of nonzero elements. This can then later be used by one or more sparse matrix objects that store the values of the entries in the locations stored by this sparsity pattern. The class that stores the locations is the SparsityPattern class. As it turns out, however, this class has some drawbacks when we try to fill it right away: its data structures are set up in such a way that we need to have an estimate for the maximal number of entries we may wish to have in each row. In two space dimensions, reasonable values for this estimate are available through the DoFHandler::max_couplings_between_dofs() function, but in three dimensions the function almost always severely overestimates the true number, leading to a lot of wasted memory, sometimes too much for the machine used, even if the unused memory can be released immediately after computing the sparsity pattern. In order to avoid this, we use an intermediate object of type DynamicSparsityPattern that uses a different internal data structure and that we can later copy into the SparsityPattern object without much overhead. (Some more information on these data structures can be found in the Sparsity patterns module.) In order to initialize this intermediate data structure, we have to give it the size of the matrix, which in our case will be square with as many rows and columns as there are degrees of freedom on the grid:
We then fill this object with the places where nonzero elements will be located given the present numbering of degrees of freedom:
Now we are ready to create the actual sparsity pattern that we could later use for our matrix. It will just contain the data already assembled in the DynamicSparsityPattern.
With this, we can now write the results to a file :
The result is stored in an
.svg file, where each nonzero entry in the matrix corresponds with a red square in the image. The output will be shown below.
If you look at it, you will note that the sparsity pattern is symmetric. This should not come as a surprise, since we have not given the
DoFTools::make_sparsity_pattern any information that would indicate that our bilinear form may couple shape functions in a non-symmetric way. You will also note that it has several distinct region, which stem from the fact that the numbering starts from the coarsest cells and moves on to the finer ones; since they are all distributed symmetrically around the origin, this shows up again in the sparsity pattern.
In the sparsity pattern produced above, the nonzero entries extended quite far off from the diagonal. For some algorithms, for example for incomplete LU decompositions or Gauss-Seidel preconditioners, this is unfavorable, and we will show a simple way how to improve this situation.
Remember that for an entry \((i,j)\) in the matrix to be nonzero, the supports of the shape functions i and j needed to intersect (otherwise in the integral, the integrand would be zero everywhere since either the one or the other shape function is zero at some point). However, the supports of shape functions intersected only if they were adjacent to each other, so in order to have the nonzero entries clustered around the diagonal (where \(i\) equals \(j\)), we would like to have adjacent shape functions to be numbered with indices (DoF numbers) that differ not too much.
This can be accomplished by a simple front marching algorithm, where one starts at a given vertex and gives it the index zero. Then, its neighbors are numbered successively, making their indices close to the original one. Then, their neighbors, if not yet numbered, are numbered, and so on.
One algorithm that adds a little bit of sophistication along these lines is the one by Cuthill and McKee. We will use it in the following function to renumber the degrees of freedom such that the resulting sparsity pattern is more localized around the diagonal. The only interesting part of the function is the first call to
DoFRenumbering::Cuthill_McKee, the rest is essentially as before:
Again, the output is shown below. Note that the nonzero entries are clustered far better around the diagonal than before. This effect is even more distinguished for larger matrices (the present one has 1260 rows and columns, but large matrices often have several 100,000s).
It is worth noting that the
DoFRenumbering class offers a number of other algorithms as well to renumber degrees of freedom. For example, it would of course be ideal if all couplings were in the lower or upper triangular part of a matrix, since then solving the linear system would among to only forward or backward substitution. This is of course unachievable for symmetric sparsity patterns, but in some special situations involving transport equations, this is possible by enumerating degrees of freedom from the inflow boundary along streamlines to the outflow boundary. Not surprisingly,
DoFRenumbering also has algorithms for this.
Finally, this is the main program. The only thing it does is to allocate and create the triangulation, then create a
DoFHandler object and associate it to the triangulation, and finally call above two functions on it:
The program has, after having been run, produced two sparsity patterns. We can visualize them by opening the
.svg files in a web browser.
The results then look like this (every point denotes an entry which might be nonzero; of course the fact whether the entry actually is zero or not depends on the equation under consideration, but the indicated positions in the matrix tell us which shape functions can and which can't couple when discretizing a local, i.e. differential, equation):
The different regions in the left picture, indicated by kinks in the lines and single dots on the left and top, represent the degrees of freedom on the different refinement levels of the triangulation. As can be seen in the right picture, the sparsity pattern is much better clustered around the main diagonal of the matrix after renumbering. Although this might not be apparent, the number of nonzero entries is the same in both pictures, of course.
Just as with step-1, you may want to play with the program a bit to familiarize yourself with deal.II. For example, in the
distribute_dofs function, we use linear finite elements (that's what the argument "1" to the FE_Q object is). Explore how the sparsity pattern changes if you use higher order elements, for example cubic or quintic ones (by using 3 and 5 as the respective arguments).
Or, you could see how the sparsity pattern changes with more refinements. You will see that not only the size of the matrix changes, but also its bandwidth (the distance from the diagonal of those nonzero elements of the matrix that are farthest away from the diagonal), though the ratio of bandwidth to size typically shrinks, i.e. the matrix clusters more around the diagonal.
Another idea of experiments would be to try other renumbering strategies than Cuthill-McKee from the DoFRenumbering namespace and see how they affect the sparsity pattern.
You can also visualize the output using GNUPLOT (one of the simpler visualization programs; maybe not the easiest to use since it is command line driven, but also universally available on all Linux and other Unix-like systems) by changing from
print_svg() to
print_gnuplot() in
distribute_dofs() and
renumber_dofs(): | https://www.dealii.org/8.4.1/doxygen/deal.II/step_2.html | CC-MAIN-2019-39 | en | refinedweb |
Some 🙂:
- GetModelsWithMakes: returns a list of car Models with their respective associated Makes
- GetMakes: returns the full list of car Makes
- UpdateModels: takes a list of car Models and uses ApplyChanges and SaveChanges to save changes in the database
And the typical operation of the application goes likes this:
- Your client application invokes GetModelsWithMakes and uses it to populate a grid in the UI.
- Then, the app invokes GetMakes and uses the results to populate items in a drop down field in the grid.
- When a Make “A” is selected for a car Model, there is some piece of code that assigns the instance of Make “A” to the Model.Make navigation property.
- When changes are saved, the UpdateModels operation is called on the server with the graph resulting from the steps above.:
1. Only use Foreign Key values to manipulate associations:
You can use foreign key properties to set associations between objects without really connecting the two graphs. Every time you would do something like this:
model.Make = make;
… replace it with this:
model.MakeId = make.Id;
This is the simplest solution I can think of and should work well unless you have many-to-many associations or other “independent associations” in your graph, which don’t expose foreign key properties in the entities.
2. Use a “graph container” object and have a single “Get” service operation for each “Update” operation:.
// type shared between client and server
public class CarsCatalog
{
public Model[] Models {get; set;}
public Make[] Makes {get; set;}
}
// server side code
public CarsCatalog GetCarsCatalog()
{
using (var db = new AutoEntities())
{
return new CarsCatalog
{
Models = context.Models.ToArray(),
Makes = context.Makes.ToArray()
};
}
}
// client side code
var catalog = service.GetCarsCatalog();
var model = catalog.Models.First();
var make = catalog.Makes.First();
model.Make = make;
This approach should work well even if you have associations without FKs. If you have many-to-many associations, it will be necessary to use the Include method in some queries, so that the data about the association itself is loaded from the database.
3. Perform identity resolution on the client::
// returns an instance from the graph with the same key or the original entity
public static class Extensions
{
public static TEntity MergeWith<TEntity, TGraph>(this TEntity entity, TGraph graph,
Func<TEntity, TEntity, bool> keyComparer)
where TEntity : class, IObjectWithChangeTracker
where TGraph: class, IObjectWithChangeTracker
{
return AutoEntitiesIterator.Create(graph).OfType<TEntity>()
.SingleOrDefault(e => keyComparer(entity,e)) ?? entity;
}
}
// usage
model.Make = make.MergeWith(model, (j1, j2) => j1.Id == j2.Id);.
Summary).
Hope this helps,
Diego
Hi Diego, I´d like to use DDD aproach on ASP.NET WebForms App, so I have my Poco Classes in separate assembli of the .edmx Model, I´m serializing the My POCO classes in ViewState and it works fine to Insert and Retrive data from my Repository, also the LazyLoading with the Virtual in my List<T> properties works fine. But, to persist in Data Base the updated object graph is another long story. I´ve been looking for one easy answer to that but I did´t find yet, belive me I´ve been reading a lot about EF. This Aproach with STE looks to be more apropriate to use with WCF stuff I´m not using services on my App. I was trying to use the STE generator but my first problem was It´s generated in the same assemblie of .edmx Model and if I move that to my Domain assemblie to try to use partial Classes in My domain I lost the Interfaces IChangeble…… ITracklebel….
Can you give me any tip just to Save back my object Graph with the modifications, the Funny part is The modifications are there in the object But the modifications (add or delete ) of my lists are ignored by the EF4.
Thanks,
Edmilson
Thank's Diego for writing this. When is the new version of STE, which solves this issue, going to be available??
Hi Diego,
I am running into the problem described in the post and was hoping to come up with a pure server side solution because it is very difficult to control the all the use cases on the client. The solution I had at first was to override Equals and GetHashCode to compare on entity keys, and while it worked great when one entity was persisted at any given time, the solution broke down when ApplyChanges was called on multiple entities sequentially.
So, is there a server side solution that you would recommend?
And is there a fix on the horizon that we can expect?
Thanks,
Alex.
abesidski@hotmail.com
Very nice article. This made things a lot clearer. But I have still issues with many-to-many relations.
Option 1 will work in most scenarios, but as you said, not in many-to-many relations.
Option 2 will work, but will work, but then we need to fetch a lot of data we do not need, to add just one object.
Option 3, we would rather not use 🙂
Is it any other options when working with many-to-many relations?
Regards
Magnus Rekkedal
Hello,
I'm facing this problem and never received any response to my numerous posts on the web. I do not understand why having 2 entities with the same EntityKey is a problem if those entities are in an unchanged state. STE is unusable in a real world app. I've a lot of many-to-many tables and thus, cannot bind to ID as stated in your first point. The second one is a performance-killer, i do not want to download a complete list of Make with each Model, just to be able to select one. Concerning the third point, it's not an example for a many-to-many table. Waht about a merge for TrackableCollection<T>?
I'm completely stuck on a project using Silverlight / WCF and EF 4 STE. Can you please tell me how to proceed to save the entity back to the database?
Thanks in advance
David
Diego —
This is a GREAT post and so very helpful.
I have a follow-up question.
Regarding this…
Only use Foreign Key values to manipulate associations:
Every time you would do something like this:
model.Make = make; //*1
…replace it with this:
model.MakeId = make.Id //*2
…am I correct in assuming that the default behaviour of STEs would be that if one sets the FK property (as noted in *2) then at that point the corresponding object property (which is model.Make in this case) would be invalid, etiher null or pointing to some object that is not necessarily correct AND, I further assume, that once that modified "model" entity does get back to the Context and ApplyChanges() is called and SaveChanges() is called then (and only then) will the object property (model.Make) be set properly and ready to consume and use?
What do you think?
Please advise.
Thank you.
— Mark Kamoski
Thanks, this article provides some good insights.
We use kind of the same technique as in work-around 3.
Except we do it on the server-side, when a graph arrives at the server, right before calling, AcceptChanges. At this point, we search for duplicates and throw duplicates out.
This only works when the duplicates don't have changes them selves.
But this works out fine in our scenarios, where like in this example,
the duplicates usually come from a selection to link them to a Parent entity.
This took us quite some time to figure out though.
Some guidelines on what (and especially what not) is possibly with STE's would be nice.
Without such documentation, we keep running into "unexpected" issues like this, which take
a lot of time to figure out.
Still struggling with other issues, like re-linking detached entities.
(see social.msdn.microsoft.com/…/4fadd41f-3157-43cb-b5e1-7def59aacdb5)
We feel like STE's were/are not mature enough to go live with.
Thanks for your time,
Koen
Hi, I have found an issue with the Iterator solution to this problem. In almost all cases it works but in this one case its still an issue and the iterator solution is the only solution we can use.
Lets say I have two Person properties and a find person screen.
I find a person and set it to person 1 using the merge. I find a person and set it to person 2 using the merge and save. This works correctly.
I go back into it and i find the same person set on person 2 and set it to person 1, I find the same person that was previously in person 1 and set it to person 2.
What this causes is two copies of the person that was originally in person 1. The reason for this is when i set person 1 to person 2 it properly finds the merge and uses it, but then what was in person 1 originally is now in OriginalValues. When i set person 2 to what was originally in person 1 the iterator does not find it because it exists in OriginalValues as an old value.
When apply changes is called it still hits this issue because two copies exist of the same object, one directly in a navigation property and another in original values.
Is there any way to fix this? This issue overall is causing some major headaches. I dont see why it is so hard to have the concept of finding a record and bringing it into an object graph to save the key. We have to have the actual objects for modification purposes and display purposes. there are other ways we can solve this in the UI layer but we really want to solve this in the model layer.
Thanks
This is a great fault by Microsoft in something that must be an easy task. Java allows works without problem, doing that i'm trying to do with STE… without the need to do any workaround.
I ran into this problem a lot and found that it could also be resolved by overriding the Equals method of each entity class to compare the key values. Reading the above, are you saying that this should not have worked?
Hi @Martin,
Could you show how you override the Equals method?
Thank you.
Just an additional note on only using the foreign key ID: I needed to be able to attach the same Product to PurchaseDetailLines and was getting the error, but my grid view uses the Product.Name property. Since everthing related to products was read-only I was able to do a foreach loop through the detail lines and set the Product entity equal to null. The foreign key kept and I was still able to use the referenced objects before the save.
Hi guys This is very interesting… However, I'm getting that exception too but in a different scenario I guess..
I have already a recond In DB/DataContext.
I deleted the record from DB and detach it from EntityContext.
Then a second user comes, executes same logic, detects that the record no longer exist but needs it, so it is created again..
All this happens in two separated threads and both start at the same time (A user clicking OK to same Form for example).
When I accept all changes for the record that is being created I get that Exception. Seems like the one I already deleted somehow still remains in memory/cache.. so, when the second thread tries to created again that record and acept the changes it conflicts with the other one. | https://blogs.msdn.microsoft.com/diego/2010/10/05/self-tracking-entities-applychanges-and-duplicate-entities/ | CC-MAIN-2019-39 | en | refinedweb |
Directory Structure
The Processing Unit Jar File
Much like a JEE web application or an OSGi bundle, The Processing Unit is packaged as a .jar file and follows a certain directory structure which enables the GigaSpaces runtime environment to easily locate the deployment descriptor and load its classes and the libraries it depends on. A typical processing unit looks as follows:
|----META-INF |--------spring |------------pu.xml |------------pu.properties |------------sla.xml |--------MANIFEST.MF |----com |--------mycompany |------------myproject |----------------MyClass1.class |----------------MyClass2.class |----lib |--------hibernate3.jar |--------.... |--------commons-math.jar
The processing unit jar file is composed of several key elements:
META-INF/spring/pu.xml(mandatory): This is the processing unit’s deployment descriptor, which is in fact a Spring context XML configuration with a number of GigaSpaces-specific namespace bindings. These bindings include GigaSpaces specific components (such as the space for example). The
pu.xmlfile typically contains definitions of GigaSpaces components (space, event containers, remote service exporters) and user defined beans which would typically interact with those components (e.g. an event handler to which the event containers delegate the events, or a service beans which is exposed to remote clients by a remote service exporter).
META-INF/spring/sla.xml(not mandatory): This file contains SLA definitions for the processing unit (i.e. number of instances, number of backup and deployment requirements). Note that this is optional, and can be replaced with an
<os:sla>definition in the
pu.xmlfile. If neither is present, the default SLA will be applied. Note, the
sla.xmlcan also be placed at the root of the processing unit. SLA definitions can be also specified at the deploy time via the deploy CLI or deploy API.
SLA definitions are only enforced when deploying the processing unit to the GigaSpaces service grid, since this environment actively manages and controls the deployment using the GSM. When running within your IDE or in standalone mode these definitions are ignored.
META-INF/spring/pu.properties(not mandatory): Enables you to externalize properties included in the
pu.xmlfile (e.g. database connection username and password), and also set system-level deployment properties and overrides, such as JEE related deployment properties (see this page for more details) or space properties (when defining a space inside your processing unit). Note, the
pu.propertiescan also be placed at the root of the processing unit.
User class files: Your processing unit’s classes (here under the com.mycompany.myproject package)
lib: Other jars on which your processing unit depends, e.g. commons-math.jar or jars that contain common classes across many processing units.
META-INF/MANIFEST.MF(not mandatory): This file could be used for adding additional jars to the processing unit classpath, using the standard
MANIFEST.MF
Class-Pathproperty. (see Manifest Based Classpath for more details)
You may add your own jars into the runtime (GSC) classpath by using the
PRE_CLASSPATH and
POST_CLASSPATH variables. These should point to your application jars.
Sharing Libraries Between Multiple Processing Units
In some cases, multiple Processing Units use the same JAR files. In such cases it makes sense to place these JAR files in a central location accessible by all the Processing Units rather than packaging them individually with each of the Processing Units. Note that this is also useful for decreasing the deployment time in case your Processing Units contain a lot of 3rd party jars files, since it saves a lot of the network overhead associated with downloading these JARs to each of the GSCs. There are three options to achieve this:
lib/optional/pu-common directory
JAR files placed in the
<XAP root>/lib/optional/pu-common directory will be loaded by each Processing Unit instance in its own separate classloader (called the Service Classloader, see the section below).
This means they are not shared between Processing Units on the same JVM, which provides an isolation quality often required for JARs containing the application’s proprietary business-logic. On the other hand this option consumes more PermGen memory (due to potentially multiple instances per JVM).
You can place these JARs in each XAP installation in your network, but it is more common to share this folder on your network and point the
pu-common directory to the shared location by specifying this location in the
com.gs.pu-common system property in each of the GSCs on your network.
When a new JAR needs to be loaded, just place the new JAR in
pu-common directory and restart the Processing Unit.
Note: if different Processing Units use different versions of the same JAR (under same JAR file name) then
pu-common should not be used.
META-INF/MANIFEST.MF descriptor
JAR files specified in the Processing Unit’s
META-INF/MANIFEST.MF descriptor file will be loaded by each Processing Unit instance in its own separate classloader (called the Service Classloader, see the Class Loaders section below.
This option achieves similar behavior to the
lib/optional/pu-common option above, but allows a more fine-grained control by enabling to specify specific JAR files (each in its own location) rather than an entire folder (and only a single folder).
For more information see Manifest Based Classpath section below.
lib/platform/ext directory
JAR files placed in the
<XAP root>/lib/platform/ext directory will be loaded once by the GSC-wide classloader and not separately by each Processing Unit instance (this classloader is called the Common Classloader, see the Class Loaders section below).
This means they are shared between Processing Units on the same JVM and thereby offer no isolation. On the other hand this option consumes less PermGen memory (one instance per JVM).
This method is recommended for 3rd party libraries that have no requirement for isolation or different versions for different Processing Units, and are upgraded rather infrequently, such as JDBC driver.
You can place these jars in each XAP installation in your network, but it is more common to share this folder on your network and point the
platform/lib/ext directory to the shared location on your network by specifying this location in the
com.gigaspaces.lib.platform.ext system property in each of the GSCs on your network.
When a new JAR needs to be loaded, place the new JAR in
lib/platform/ext directory and restart the relevant GSCs (on which an instance of the PU was running).
Considerations
When it comes to choosing the right option for your system, the following should be considered:
- Size of loaded classes in memory (PermGen)
- Size of Processing Unit JAR file and Processing Unit deployment time
- Isolation (sharing classes between Processing Units)
- Frequency of updating the library JAR
- In addition special attention is required to xml parsing related jars that have parllels in jdk itself,If your pu requires use of one of those jar,you should place ALL related jars in lib/platform/ext starting with 10.1 version the product dosn’t include xml parsing jars under lib/platform/xml and use default jdk jars.
Runtime Modes
The processing unit can run in multiple modes.
When deployed on to the GigaSpaces runtime environment or when running in standalone mode, all the jars under the
lib directory of your processing unit jar, will be automatically added to the processing unit’s classpath.
When running within your IDE, it is similar to any other Java application, i.e. you should make sure all the dependent jars are part of your project classpath.
Deploying the Processing Unit to the Service Grid
When deploying the processing unit to GigaSpaces Service Grid, the processing unit jar file is uploaded to the XAP Manager (GSM) and extracted to the
deploy directory of the local GigaSpaces installation (located by default under
<XAP Root>/deploy).
Once extracted, the GSM processes the deployment descriptor and based on that provisions processing unit instances to the running XAP containers.
Each GSC to which a certain instance was provisioned, downloads the processing unit jar file from the GSM, extracts it to its local work directory (located by default under
<XAP Root>/work/deployed-processing-units) and starts the processing unit instance.
Deploying Data Only Processing Units
In some cases, your processing unit contains only a Space and no custom code.
One way to package such processing unit is to use the standard processing unit packaging described above, and create a processing unit jar file which only includes a deployment descriptor with the required space definitions and SLA.
GigaSpaces also provides a simpler option via its built-in data-only processing unit templates (located under
<XAP Root>/deploy/templates/datagrid. Using these templates you can deploy and run data only processing unit without creating a dedicated jar for them.
For more information please refer to Deploying and running the processing unit
Class Loaders
In general, classloaders are created dynamically when deploying a PU into a GSC. You should not add your classes into the GSC CLASSPATH. Classes are loaded dynamically into the generated classloader in the following cases:
- When the GSM sending classes into the GSC when the application deployed and when GSC is restarted.
- When the GSM sending classes into the GSC when the application scales.
- When a Task class or Distributed Task class and its dependencies are executed (space execute operation).
- When space domain classes and their dependencies (Data model) are used (space write/read operations)
Here is the structure of the class loaders when several processing units are deployed on the Service Grid (GSC):
Bootstrap (Java) | System (Java) | Common (Service Grid) / \ Service CL1 Service CL2
The following table shows which user controlled locations end up in which class loader, and the important JAR files that exist within each one:
In terms of class loader delegation model, the service (PU instance) class loader uses a parent last delegation mode. This means that the processing unit instance class loader will first try and load classes from its own class loader, and only if they are not found, will delegate up to the parent class loader.
Native Library Usage
When deploying applications using native libraries you should place the Java libraries (jar files) loading the native libraries under the
GSRoot/lib/platform/ext folder. This will load the native libraries once into the common class loader.
Permanent Generation Space
For applications that are using relatively large amount of third party libraries (PU using large amount of jars) the default permanent generation space size may not be adequate. In such a case, you should increase the permanent generation space size. Here are suggested values:
-XX:PermSize=512m -XX:MaxPermSize=512m
Manifest Based Classpath
You may add additional jars to the processing unit classpath by having a manifest file located at
META-INF/MANIFEST.MF and defining the property
Class-Path as shown in the following example (using a simple
MANIFEST.MF file):
Manifest-Version: 1.0 Class-Path: /home/user1/java/libs/user-lib.jar lib/platform/jdbc/hsqldb.jar ${MY_LIBS_DIRECTORY}/user-lib2.jar file:/home/user2/libs/lib.jar [REQUIRED EMPTY NEW LINE AT EOF]
In the previous example, the
Class-Path property contains 4 different entries:
/home/user1/java/libs/user-lib.jar- This entry uses an absolute path and will be resolved as such.
lib/optional/jdbc/hsqldb-2.3.2.jar- This entry uses a relative path and as such its path is resolved in relative to the gigaspaces home directory.
${MY_LIBS_DIRECTORY}/user-lib2.jar- In this entry the
${MY_LIBS_DIRECTORY}will be resolved if an environment variable named
MY_LIBS_DIRECTORYexists, and will be expanded appropriately.
file:/home/user2/libs/lib.jar- This entry uses URL syntax
The
pu-common Directory
The
pu-common directory may contain a jar file with a manifest file as described above located at
META-INF/MANIFEST.MF. The classpath defined in this manifest will be shared by all processing units as described in Sharing libraries.
Further details
- If an entry points to a non existing location, it will be ignored.
- If an entry included the
${SOME_ENV_VAL}placeholder and there is no environment variable named
SOME_ENV_VAL, it will be ignored.
- Only file URLs are supported. (i.e http, etc… will be ignored) | https://docs.gigaspaces.com/xap/12.0/dev-java/the-processing-unit-structure-and-configuration.html | CC-MAIN-2019-39 | en | refinedweb |
Provided by: libpcp3-dev_4.3.1-1_amd64
NAME
pmFreeResult - release storage allocated for performance metrics values
C SYNOPSIS
#include <pcp/pmapi.h> void pmFreeResult(pmResult *result); cc ... -lpcp
DESCRIPTION).
SEE ALSO
malloc(3), PMAPI(3) and pmFetch(3). | http://manpages.ubuntu.com/manpages/disco/man3/pmFreeResult.3.html | CC-MAIN-2019-39 | en | refinedweb |
import "go.uber.org/cadence/internal/common/cache"
var ( // ErrCacheFull is returned if Put fails due to cache being filled with pinned elements ErrCacheFull = errors.New("Cache capacity is fully occupied with pinned elements") )
type Cache interface { // Exist checks if a given key exists in the cache Exist(key string) bool // Get retrieves an element based on a key, returning nil if the element // does not exist Get(key string) interface{} // Put adds an element to the cache, returning the previous element Put(key string, value interface{}) interface{} // PutIfNotExist puts a value associated with a given key if it does not exist PutIfNotExist(key string, value interface{}) (interface{}, error) // Delete deletes an element in the cache Delete(key string) // Release decrements the ref count of a pinned element. If the ref count // drops to 0, the element can be evicted from the cache. Release(key string) // Size returns the number of entries currently stored in the Cache Size() int }
A Cache is a generalized interface to a cache. See cache.LRU for a specific implementation (bounded cache with LRU eviction)
New creates a new cache with the given options
NewLRU creates a new LRU cache of the given size, setting initial capacity to the max size
NewLRUWithInitialCapacity creates a new LRU cache with an initial capacity and a max size
type Options struct { // TTL controls the time-to-live for a given cache entry. Cache entries that // are older than the TTL will not be returned TTL time.Duration // InitialCapacity controls the initial capacity of the cache InitialCapacity int // Pin prevents in-use objects from getting evicted Pin bool // RemovedFunc is an optional function called when an element // is scheduled for deletion RemovedFunc RemovedFunc }
Options control the behavior of the cache
RemovedFunc is a type for notifying applications when an item is scheduled for removal from the Cache. If f is a function with the appropriate signature and i is the interface{} scheduled for deletion, Cache calls go f(i)
Package cache imports 4 packages (graph) and is imported by 3 packages. Updated 2019-05-28. Refresh now. Tools for package owners. | https://godoc.org/go.uber.org/cadence/internal/common/cache | CC-MAIN-2019-39 | en | refinedweb |
Jupyter doesn't use the built-in MathJax
I have two copies of MathJax. One in /usr/lib/sagemath/local/share/mathjax, another in /usr/share/javascript/mathjax.
I open up the jupyter notebook:
sage -n jupyter. I open a notebook and what I get is
Math/LaTeX rendering will be disabled.
If you have administrative access to the notebook server and a working internet connection, you can install a local copy of MathJax for offline use with the following command on the server at a Python or Jupyter prompt:
from Jupyter.external import mathjax; mathjax.install_mathjax()
But I don't want a third (sic!) copy of MathJax on my computer! How can I make Jupyter use the existing ones?
I'm using Linux Mint 17.2, used the sagemath-upstream-binary package from the PPA. | https://ask.sagemath.org/question/31542/jupyter-doesnt-use-the-built-in-mathjax/?answer=31551 | CC-MAIN-2019-39 | en | refinedweb |
I have a large database and I am looking to read only the last week for my python code.
My first problem is that the column with the received date and time is not in the format for datetime in pandas. My input (Column 15) looks like this:
recvd_dttm 1/1/2015 5:18:32 AM 1/1/2015 6:48:23 AM 1/1/2015 13:49:12 PM
From the Time Series / Date functionality in the pandas library I am looking at basing my code off of the "Week()" function shown in the example below:
In [87]: d Out[87]: datetime.datetime(2008, 8, 18, 9, 0) In [88]: d - Week() Out[88]: Timestamp('2008-08-11 09:00:00')
I have tried ordering the date this way:
df =pd.read_csv('MYDATA.csv') orderdate = datetime.datetime.strptime(df['recvd_dttm'], '%m/%d/%Y').strftime('%Y %m %d')
however I am getting this error
TypeError: must be string, not Series
Does anyone know a simpler way to do this, or how to fix this error?
Edit: The dates are not necessarily in order. AND sometimes there is a faulty error in the database like a date that is 9/03/2015 (in the future) someone mistyped. I need to be able to ignore those.
import datetime as dt # convert strings to datetimes df['recvd_dttm'] = pd.to_datetime(df['recvd_dttm']) # get first and last datetime for final week of data range_max = df['recvd_dttm'].max() range_min = range_max - dt.timedelta(days=7) # take slice with final week of data sliced_df = df[(df['recvd_dttm'] >= range_min) & (df['recvd_dttm'] <= range_max)] | http://databasefaq.com/index.php/answer/983/python-datetime-pandas-format-dataframes-selecting-data-from-last-week-in-python | CC-MAIN-2017-43 | en | refinedweb |
Retrieve an extension identified by a key (string) from the specified list of video modes.
#include <wfdqnx/wfdcfg.h>
struct wfdcfg_keyval* wfdcfg_mode_list_get_extension(const struct wfdcfg_mode_list *list, const char *key)
A handle to the list whose extension(s) you are retrieving
Identifier of the extension to retrieve
The extension is valid between the time you create and destroy the list.
Pointer to wfdcfg_keyval if the extension was found; NULL if the extension was not found. It's considered acceptable for a list to have no extensions. | http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.screen.wfdcfg/topic/wfdcfg_mode_list_get_extension.html | CC-MAIN-2017-43 | en | refinedweb |
all final(optimal) values of for loop
I have written a program to check which matrices in a given class maximizes the determinant. But my program gives me only one such matrix, I need all those matrices which attains the maximum determinant.
def xyz(n): m=0 A=matrix(QQ,n) for a in myfunction(n): if a.det()>m: m=a.det() A=a print A | https://ask.sagemath.org/question/36860/all-finaloptimal-values-of-for-loop/?answer=36869 | CC-MAIN-2017-43 | en | refinedweb |
Hi all,
I am using jdev12c.
I tried to create the following class
package view; import java.awt.Dimension; import java.util.ListResourceBundle; public class Resource extends ListResourceBundle { protected Object[][] getContents() { return new Object[][] = { // }; } }
The code is copied from java documentation ListResourceBundle (Java Platform SE 7 )
Looks like a documentation bug where "=" has to be removed
The code is copied from java documentation ListResourceBundle (Java Platform SE 7 )
No - that code is not a copy of that Java documentation.
The code at that link compiles just fine. You have modified the code.
Looks like a documentation bug where "=" has to be removed
No - looks like you have added "=" and caused the error you are probably complaining about.
The code in the API is a constructor; it is constructing a new object. You don't use "=" in a constructor; you use "=" when you are making an assignment.
Review the 'Anonymous Classes trail in The Java Tutorials
The following example,
HelloWorldAnonymousClasses, uses anonymous classes in the initialization statements of the local variables
frenchGreetingand
spanishGreeting,
. . .
HelloWorld frenchGreeting = new HelloWorld() {
String name = "tout le monde";
public void greet() {
greetSomeone("tout le monde");
}
public void greetSomeone(String someone) {
name = someone;
System.out.println("Salut " + name);
}
};
That 'new HelloWorld() {' is constructing a new instance of an anonymous class. Note that the ONLY time "=" is used in that example is for the assignments.
Message was edited by: Moderator. No SHOUTING please | https://community.oracle.com/message/11329475 | CC-MAIN-2017-43 | en | refinedweb |
EXECL(3) Library Routines EXECL(3)
execl, execlp, execv, execvp - execute a file
#include <unistd.h> extern char **environ; int execl(const char *path, const char *arg, ...); int execle(const char *path, const char *rg, ...); int execlp(const char *file, const char *arg, ...); int execv(const char *path, char * const *argv); int execvp(const char *file, char * const *argv); arg and subsequent ellipses in the execl, execle, and execlp func- tionsle function expects a final argument, envp, of type 'char * const *' to follow the trailing NULL pointer. This is an array of environment strings, similar to that used by execve(2). This array must be NULL-terminated. The execv and execvp functions provide an array of pointers to NULL- terminated strings that represent the argument list available to the new program. The first argument, by convention, should point to the file name associated with the file begin executed. The array of point- ers must be terminated by a NULL pointer. Some of these functions have special semantics. The functions execlp and execvp will duplicate the actions of the shell in searching for an executable file if the specified file name does not contain a slash (/) character or a colon (:). The search path is the path specified in the environment by PATH variable. If this variable isn't specified, the default path /bin /usr/bin (or /usr/bin /bin for gsh(1)) is used.
If any of the exec functions returns, an error will have occurred. The return value is SYSERR (-1), and the global variable errno will be set to indicate the error.
These routines may fail and set errno for any of the errors specified for the library functions execve(2), _execve(2), and malloc(3).
When parsing the PATH environment variable, execvp and execlp assume that if there is no colon (:) within PATH then the pathname delimiter is a slash (/). This is to facilitate use of GS/OS pathname delim- iters. The current version of the gsh shell searches PATH from back to front. In most other shells, it is done front to back. In order to provide consistency with gsh, PATH is currently scanned back to front. With this backwards scanning, the default PATH used is /usr/bin /bin. If gsh gets fixed, the scan order will be quickly changed.
Implemented from the BSD specification by Devin Reade.
execve(2), fork(2), exec(3).
The GNO implementation of these routines first appeared in the lenviron library. They became part of the GNO distribution as of v2.0.6 GNO 19 January 1997 EXECL(3) | http://www.gno.org/gno/man/man3/execl.3.html | CC-MAIN-2017-43 | en | refinedweb |
in PersistentPermissionResolver.filterSetByActionAlim Abdulkhairov Apr 17, 2012 11:19 AM
Hello community.
I use seam 3 and faced with suc problem:
I didn't define identityPermissionClass in my seam configuration beans.xml. So JpaPermissionStore.identityPermissionClass property is null and during it init() method JpaPermissionStore.enabled is defined as false (JpaPermissionStore.enabled = false). And when PersistentPermissionResolver.filterSetByAction is called permissions variable is assigned to null.
The cause lies in this code in JpaPermissionStore:
protected List<Permission> listPermissions(Object resource, Set<Object> targets, String action) { if (identityPermissionClass == null) return null; ...... }
So i get NullPointerException in PersistentPermissionResolver.filterSetByAction on this line:
for (Permission permission : permissions) { ... }
Why this check is not used in PersistentPermissionResolver.filterSetByAction like it does in PersistentPermissionResolver.hasPermission?
public void filterSetByAction(Set<Object> targets, String action) { if (permissionStore == null) return; if (!identity.isLoggedIn()) return; if (!permissionStore.isEnabled()) return; // to check if JpaPermissionStore is enabled
1. Re: in PersistentPermissionResolver.filterSetByActionRichard Barabe Apr 17, 2012 12:31 PM (in response to Alim Abdulkhairov)1 of 1 people found this helpful
I think you should read this :
Unfortunately the permission stuff does not work in seam 3. Except for Rule based permissions, these works well.
ACL permissions where asked by many people but it doesn't seem to be planned at all (hope I'm mistaking on this, though).
Is that because all the efforts are for the DeltaSpike project ?
I should probably ask in another thread
2. Re: in PersistentPermissionResolver.filterSetByActionRichard Barabe Apr 17, 2012 12:34 PM (in response to Richard Barabe)
Sorry for double posting, but indeed most of the efforts are on delta spike :
3. Re: in PersistentPermissionResolver.filterSetByActionAlim Abdulkhairov Apr 17, 2012 12:51 PM (in response to Alim Abdulkhairov)
Thank you for link, Richard.
But can I disable PersistentPermissionResolver.filterSetByAction from resolver chain? It fails with NullPointerException and my custom PermissionResolver implementation isn't called. This is the problem.
4. Re: in PersistentPermissionResolver.filterSetByActionRichard Barabe Apr 17, 2012 2:16 PM (in response to Alim Abdulkhairov)
I think not. But you could provide a dummy identityPermissionClass :
{code}
package foo.bar;
import javax.persistence.GenerationType;
import javax.persistence.Table;
import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.validation.constraints.NotNull;
import org.jboss.seam.security.annotations.permission.PermissionProperty;
import static org.jboss.seam.security.annotations.permission.PermissionPropertyType.*;
/**
* This entity stores ACL permissions
*
* @author Shane Bryzak
*/
@Entity
@Table(name="IdentityPermission")
public class IdentityPermission implements Serializable {
private static final long serialVersionUID = -5366058398015495583L;
private Long id;
private IdentityObject identityObject;
private IdentityObjectRelationshipType relationshipType;
private String relationshipName;
private String resource;
private String permission;
/**
* Surrogate primary key value for the permission.
*
* @return
*/
@Id
@GeneratedValue(strategy= GenerationType.IDENTITY)
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
/**
* Either the specific identity object for which this permission is granted,
* or in the case of a permission granted against a group, this property
* then represents the "to" side of the group relationship. Required field.
*
* @return
*/
@NotNull
@ManyToOne
@PermissionProperty(IDENTITY)
public IdentityObject getIdentityObject() {
return identityObject;
}
public void setIdentityObject(IdentityObject identityObject) {
this.identityObject = identityObject;
}
/**
* If this permission is granted to a group of identities, then this property may
* be used to indicate the relationship type of the group membership. For example,
* a group or role relationship. It is possible that the permission may also be
* granted to identities that have *any* sort of membership within a group, in
* which case this property would be null.
*
* @return
*/
@ManyToOne
@PermissionProperty(RELATIONSHIP_TYPE)
public IdentityObjectRelationshipType getRelationshipType() {
return relationshipType;
}
public void setRelationshipType(IdentityObjectRelationshipType relationshipType) {
this.relationshipType = relationshipType;
}
/**
* If this permission is granted to a group of identities, then this property
* may be used to indicate the name for named relationships, such as role
* memberships.
*
* @return
*/
@PermissionProperty(RELATIONSHIP_NAME)
public String getRelationshipName() {
return relationshipName;
}
public void setRelationshipName(String relationshipName) {
this.relationshipName = relationshipName;
}
/**
* The unique identifier for the resource for which permission is granted
*
* @return
*/
@PermissionProperty(RESOURCE)
public String getResource() {
return resource;
}
public void setResource(String resource) {
this.resource = resource;
}
/**
* The permission(s) granted for the resource. May either be a comma-separated
* list of permission names (such as create, delete, etc) or a bit-masked
* integer value, in which each bit represents a different permission.
*
* @return
*/
@PermissionProperty(PERMISSION)
public String getPermission() {
return permission;
}
public void setPermission(String permission) {
this.permission = permission;
}
}
{code}
And configure it :
{code:xml}
<beans xmlns="
"
xmlns:xsi=""
xmlns:s="urn:java:ee"
xmlns:security="urn:java:org.jboss.seam.security"
xmlns:permission="urn:java:org.jboss.seam.security.permission"
xsi:
<security:JpaPermissionStore>
<s:modifies/>
<security:identityPermissionClass>foo.bar.IdentityPermission</security:identityPermissionClass>
</security:JpaPermissionStore>
</beans>
{code}
That should work around the error. Let me know if you make it work
5. Re: in PersistentPermissionResolver.filterSetByActionAlim Abdulkhairov Apr 18, 2012 9:40 AM (in response to Richard Barabe)
Thanks a lot, Richard.
But
identityPermissionClassis not injected in JpaPermissionStore anyway
I have copied your
IdentityPermission realization and configured seam. Now seam-beans.xml contents look this way:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <security:JpaPermissionStore> <s:replaces/> <security:identityPermissionClass>com.foo.bar.security.IdentityPermission</security:identityPermissionClass> </security:JpaPermissionStore> </beans>
seam-beans.xml is in src/main/resources/META-INF folder. I tried to use beans.xml with the same contents but it isn't work too. And it seems container doesn't attempt to load IdentityPermission class. There isn't any logs relatated to it, only "No identityPermissionClass set, JpaPermissionStore will be unavailable."
6. Re: in PersistentPermissionResolver.filterSetByActionRichard Barabe Apr 18, 2012 4:47 PM (in response to Alim Abdulkhairov)
I just tested it on my side, and it works for me.
I mean, as soon as I provide and configure the IdentityPermission as in my previous post, JpaPermissionStore.enabled becomes true.
Commenting the configuration in seam-beans make JpaPermissionStore.enabled false.
By the way, I'm using seam 3.1.0.Final with glassfish 3.1.2
7. Re: in PersistentPermissionResolver.filterSetByActionRoland Olsson May 8, 2012 10:40 AM (in response to Richard Barabe)
I'm also struggling with enabling the JpaPermissionStore. No matter how I put things into beans.xml or seam-beans.xml it just doesn't work. I have spent hours debugging Seam Security, Seam Solder and Weld but to no avail. From what I can see the configuration files aren't even read, or at least not the portion that should configure the Seam beans. The deployment structure: EAR containing a WAR (with beans) which in turn contains a JAR (with beans). The classes related to JpaPermissionStore is located in the JAR file. I try to deploy this to an JBoss AS 7.1 server.
8. Re: in PersistentPermissionResolver.filterSetByActionJason Porter May 8, 2012 10:54 AM (in response to Roland Olsson)
Try the annotations, they're easier to use anyway.
9. Re: in PersistentPermissionResolver.filterSetByActionAlim Abdulkhairov May 8, 2012 10:56 AM (in response to Alim Abdulkhairov).
10. Re: in PersistentPermissionResolver.filterSetByActionRoland Olsson May 11, 2012 7:25 AM (in response to Jason Porter)
Which annotation? I already use the IdentityEntity annotation for the other identity entity classes. This annotation, however, lacks support for an identity permission entity type.
Jason Porter wrote:
Try the annotations, they're easier to use anyway.
11. Re: in PersistentPermissionResolver.filterSetByActionRoland Olsson May 11, 2012 7:34 AM (in response to Alim Abdulkhairov)
If I add the seam-config-xml module as a dependency it doesn't deploy at all. It doesn't allow my to use this module in parallel with the solder-impl module. From what I understand the functionality of the seam-config-xml module has now completely moved into Solder?
Alim Abdulkhairov wrote:.
12. Re: in PersistentPermissionResolver.filterSetByActionRoland Olsson May 11, 2012 7:55 AM (in response to Roland Olsson)
Debugging the process of retreiving the bean configuration files I end up in the org.jboss.solder.servlet.resource.WebResourceLocator and its getWebResourceUrl(path) method. This gets called with e g "WEB-INF/beans.xml". But something seems to go wrong in here. The ServiceLoader.load method doesn't find any WebResourceLocationProvider service and returns an iterator to an empty collection. This results in the method returning a null resource URL.
Have I missed anything configuration wise or maybe is this a bug in Seam Solder? I deploy my application to a JBoss AS 7.1 Final server.
package org.jboss.solder.servlet.resource;
...
public class WebResourceLocator {
...
public URL getWebResourceUrl(final String path) {
// build sorted list of provider implementations
List<WebResourceLocationProvider> providers = new ArrayList<WebResourceLocationProvider>();
Iterator<WebResourceLocationProvider> iterator = ServiceLoader.load(WebResourceLocationProvider.class).iterator();
while (iterator.hasNext()) {
providers.add(iterator.next());
}
Collections.sort(providers, new Sortable.Comparator());
// prefer the context classloader
ClassLoader classLoader = WebResourceLocator.class.getClassLoader();
// process each provider one by one
for (WebResourceLocationProvider provider : providers) {
// execute the SPI implementation
final URL resourceLocation = provider.getWebResource(path, classLoader);
if (resourceLocation != null) {
return resourceLocation;
}
}
return null; | https://developer.jboss.org/thread/198490 | CC-MAIN-2017-43 | en | refinedweb |
You want to run some setup code one time and then run several tests. You only want to run your cleanup code after all of the tests are finished.
Use the junit.extensions.TestSetup class..extensions.TestSetup class supports this requirement. Example 4-4 shows how to use this technique.
package com.oreilly.javaxp.junit; import com.oreilly.javaxp.common.Person; import junit.extensions.TestSetup; import junit.framework.Test; import junit.framework.TestCase; import junit.framework.TestSuite; public class TestPerson extends TestCase { public void testGetFullName( ) { ... } public void testNullsInName( ) { ... } public static Test suite( ) { TestSetup setup = new TestSetup(new TestSuite(TestPerson.class)) { protected void setUp( ) throws Exception { // do your one-time setup here! } protected void tearDown( ) throws Exception { // do your one-time tear down here! } }; return setup; } }
TestSetup is a subclass of junit.extensions.TestDecorator, which is a base class for defining custom tests. The main reason for extending TestDecorator is to gain the ability to execute code before or after a test is run.[4] The setUp( ) and tearDown( ) methods of TestSetup are called before and after whatever Test is passed to its constructor. In our example we pass a TestSuite to the TestSetup constructor:
[4] JUnit includes source code. Check out the code for TestSetup to learn how to create your own extension of TestDecorator.
TestSetup setup = new TestSetup(new TestSuite(TestPerson.class)) {
This means that TestSetup's setUp( ) method is called once before the entire suite, and tearDown( ) is called once afterwards. It is important to note that the setUp( ) and tearDown( ) methods within TestPerson are still executed before and after each individual unit test method within TestPerson.
Recipe 4.6 describes setUp( ) and tearDown( ). | http://etutorials.org/Programming/Java+extreme+programming/Chapter+4.+JUnit/4.7+One-Time+Set+Up+and+Tear+Down/ | CC-MAIN-2015-40 | en | refinedweb |
This database solution has a number of benefits including:
· Store C# objects (and objects of any .NET-language) without the need for any interface
or adapter.
· Store Dynamic objects with any fields/properties, and mapping them into objects
of any type.
· SQL-like queries. If you can write SQL, you can use this database right away.
· LINQ support
· Returns objects in their original state (as enumeration or as a single object)
· Arrays and lists – parameters support. Arrays could be either jagged or multidimensional
– query syntax remains the same.
· Functions and expressions in the queries, they can also be used for ORDER BY.
· Standard and non-standard ORDER BY.
· Regular expressions in the queries.
· Indexing and query optimizer to speed up queries.
· Bulk insert and update for objects.
· Generic objects support.
· Restoring read-only (init-only) fields and properties.
· Inheritance in queries (SELECT ParentClass would return both ParentClass and ChildClass).
· Queries for a required type only (SELECT ONLY ParentClass will return only ParentClass,
but will not return ChildClass)
· Capable of partial object restoration (for example if the required object is ForumTopic,
then it is not necessary to drag all the referenced TopicMessages)
· Restoration depth specification – return only A or A and A.B.C, or all referenced
objects?
· Client-server architecture - very important if you use it for a web site. Connect
to server from anywhere.
· Simultaneous support of multiple users (queries executed in parallel and independently).
· Authentication via Windows accounts (or uses the account of the active user) or
using the Eloquera DB integrated authentication.
· x86 and x64 builds available.
· Unique identifiers for each object— convenient for working in stateless environments
and ASP.NET.
· Culture support (for example, WHERE dates BETWEEN['en-US'] @d1 and @d2
— will be interpreted in US format, even if the current Windows Language is French)
· Approximate search using ALMOST
· Commercial use is FREE. Of course, there are paid support options.
There are numerous additional features including backup, fixing corrupt database files and much more. If you have an interest, study the documentation.
I've written a number of times about various NoSQL databases and stores, including
MongoDb and BPlusTree. Eloquera Database is simply a logical extension of that
quest. One thing I found was that the level of support is exceptionally good,
even for the "free" user. When I installed the Eloquera Database, the
service started, but there was nothing listening on the default port of 43962.
Consequently I was only able to use the database in embedded Desktop mode. After
exhausting all possible search results, I went on to the Eloquera forums and
posted my problem. Dmytro of Eloquera responded within 10 minutes, and walked
me through a very structured set of troubleshooting steps, and within an hour,
we got the issue resolved. This is quality customer service! You can look at
the conversation here. One important lesson I learned from this exchange is that if you start a service
from the command line with NET START <ServiceName> you can get a lot more
information in the Application Event Log than if you start it from the Services
Service Control Manager Control Panel applet.
Let's take a look at some typical usage:
Connect:
DB db = new DB("server=localhost;options=none;");
To connect to a remote machine running Eloquera as a service, replace localhost with
it's IP address.
Create a new database:
db.CreateDatabase("MyDatabaseName");
Note that CreateDatabase does not open the database.
Open existing database:
db.OpenDatabase("MyDatabaseName");
Store (or update) object:
db.Store(new Book());
Sample Query Syntax:
SELECT [SKIP count][TOP count] TypeName
[WHERE {Expression [AND|OR] }[..n]]
[ORDER BY {Expression}[,] [ASC|DESC]}[..n]]
var books = db.ExecuteQuery("SELECT UserGroup WHERE Name = 'Swimmers'");
foreach(var book in books)
{
…
}
Delete object:
var book = db.ExecuteScalar("SELECT UserGroup WHERE Name = 'Swimmers'");
db.Delete(book);
INSERT:
// Create the object we would like to work with.
Cinema cinema = new Cinema() {
Location = "Sydney",
OpenDates = new DateTime[] { new DateTime(2003, 12, 10), new DateTime(2003,
10, 3) }
};
// Store the object - no conversion required.
db.Store(cinema);
To UPDATE the existing object we simply call the same db.Store function.
This method stores an object in the database. If the object is present in the database,
it will be updated, otherwise it will be inserted.
For example:
//Get the object we would like to work with.
Parameters param = db.CreateParameters();
param["location"] = "Sydney";
Cinema cinema = (Cinema)db.ExecuteScalar("SELECT Cinema WHERE Location = @location",
0, param);
//Change that object
cinema.Location = "Melbourne";
//Store it back
//Object will be updated
db.Store(cinema);
To DELETE the object we can use db.Delete or db.DeleteAll functions
Delete - deletes an object that was retrieved from the database.
DeleteAll - deletes the object and its dependent objects are involved
Example of delete:
IEnumerable listOfMovies = db.ExecuteQuery("SELECT Movie WHERE Genre >= 5");
foreach (Movie mov in listOfMovies)
{
// Delete - deletes an object that was retrieved from the database.
db.Delete(mov);
}
A common query may look like this:
//Create parameters
Parameters param = db.CreateParameters();
//Simple type parameter
param["genre"] = 5;
Movie m = (Movie)db.ExecuteScalar("SELECT Movie WHERE Genre >= @genre",
param);
Or more sophisticated queries:
db.ExecuteQuery("SELECT Movie WHERE Genre = 1");
db.ExecuteQuery("SELECT Cinema WHERE Movies.Title = 'Die Hard 4'");
db.ExecuteQuery("SELECT Cinema WHERE Movies.Studios.Titles CONTAINS '20th
Century Fox'");
db.ExecuteQuery("SELECT Cinema WHERE ALL Movies.Studios.Titles CONTAINS '20th
Century Fox'");
db.ExecuteQuery("SELECT Cinema WHERE '20th Century Fox' IN Movies.Studios.Titles");
db.ExecuteQuery("SELECT Cinema WHERE OpenDates BETWEEN['en-US'] '10/1/2006'
AND '9/17/2009'");
db.ExecuteQuery("SELECT SKIP 6 JoinClassA, JoinClassB FROM JoinClassA JOIN JoinClassB
ON JoinClassA.id = JoinClassB.id INNER JOIN JoinClassC ON JoinClassA.id = JoinClassC.id");
COMPLEX OBJECT
A Complex object contains other objects or arrays.
For example:
public class BasicUser
{
public Location HomeLocation;
public Location CurrentLocation;
public string Name;
public DateTime BirthDate;
public string Interests;
public BasicUser[] Friends;
public WorkPlace UserWorkPlace;
public School UserSchool;
public string[] Emails;
}
To access the field during the query, you should write it after the ‘.’
For example, to access a field in BasicUser, a complex object would use:
var res = db.ExecuteQuery("SELECT BasicUser WHERE School.Name = 'Thornlegh
School'");
In this query, a search is performed in the database for any BasicUser that has School
with the name ‘Thornleigh School.’
You can also access other fields, as follows:
var res = db.ExecuteQuery("SELECT BasicUser WHERE Interests = 'reading'");
LINQ:
Traditional LINQ example:
var res = from Movie m in db where m.Year > x select m;
LINQ can also be used like this:
var movies = from m in db.Query<Movie>() where m.Title == "Joe" &&
m.Year > 1950 orderby -m.Year select m;
From version 4.1 Eloquera supports native type evolution.
Eloquera.config contains TypeUpdateAllowed tag that determines whether type changes
shall be auto detected and applied.
<SmartRuntime Smart="true" TypeUpdateAllowed ="true" />
If TypeUpdateAllowed is turned on, while storing a new object, database will auto-detect
· Changes in field’s type. Already stored objects' fields will be converted
to a new type.
· Newly added fields. Old objects will be converted to a new type, with new
fields assigned a default value.
· Removed fields.
A function to rename type is also available:
void RenameType(Type type, string oldTypeName);
Of course, performance is always an issue. Here I was pleasantly surprised:
OPERATION ELAPSED MILLISECONDS
10,000 Connect-Open-Close 3053
10,000 inserts 5158
10,000 Bulk Insert 1878
SELECT 2
10,000 Updates 10,338
10,000 Select 347 Ticks
The above are for a connection to the same machine. If connecting to a remote instance,
times will be longer.
With the options=inmemory,persist; in the connection string, times will be even faster. "inmemory,persist"
saves everything to the database file when db.Close() is called. Also, notice
in the source code for my demo, I've placed the {Index] attribute on the
ID of the Person Class. This speeds up updates and selects by a large factor.
You can place [Index] on more than one property.
I've included a Console App test harness illustrating the above. Of course you'll
need to download and install the Eloquera Database first.
Here is the sample code:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using Eloquera.Client;
namespace EloqueraTest
{
class Program
{
static void Main(string[] args)
{
// Connect
DB db = new DB("server=localhost;options=none;");
// Clear away any old work
db.DeleteDatabase("Test", true);
//Create the database
db.CreateDatabase("Test");
//Open existing database
db.OpenDatabase("Test");
Stopwatch sw= Stopwatch.StartNew();
//INSERT
for (int i = 0; i < 10000; i++)
{
List<Address> addresses = new List<Address>();
Address addr = new Address(i, i.ToString() + " My Street", "Nanuet", "NY", "10931", "Business");
Address addr2 = new Address(i, i.ToString() + " My Street", "Nanuet", "NY", "10931", "Personal");
addresses.Add(addr);
addresses.Add(addr2);
Person p = new Person(i, "Mr.", "Joe", "Blow" + i.ToString(), "joe@blow.com", "3867435454", addresses);
db.Store(p);
}
sw.Stop();
Console.WriteLine("INSERT: "+sw.ElapsedMilliseconds.ToString());
// BULK INSERT
Stopwatch sw4 = Stopwatch.StartNew();
List<Address> addresses2 = new List<Address>();
List<Person> persons2 = new List<Person>();
for (int i = 9999; i < 20000; i++)
{
Address addr = new Address(i, i.ToString() + " My Street", "Nanuet", "NY", "10931", "Business");
Address addr2 = new Address(i, i.ToString() + " My Street", "Nanuet", "NY", "10931", "Personal");
addresses2.Add(addr);
addresses2.Add(addr2);
Person p = new Person(i, "Mr.", "Joe", "Blow" + i.ToString(), "joe@blow.com", "3867435454", addresses2);
persons2.Add(p);
}
db.Store(persons2);
sw4.Stop();
Console.WriteLine("BULK INSERT: " + sw4.ElapsedMilliseconds.ToString());
// SELECT VIA LINQ
Stopwatch sw3 = Stopwatch.StartNew();
var persons = from p in db.Query<Person>() where p.LastName == "Blow2300" select p;
sw3.Stop();
Console.WriteLine("SELECT: " + sw3.ElapsedMilliseconds.ToString() + ": " + persons.FirstOrDefault().LastName);
// SELECT VIA SQL(10,000)
Stopwatch sw5 = Stopwatch.StartNew();
var people = db.ExecuteQuery("SELECT TOP 10000 Person WHERE Id >5000");
sw5.Stop();
Console.WriteLine("10000 SELECT: " + sw5.ElapsedTicks.ToString() + " Ticks.");
// UPDATE 10,000
Stopwatch sw2 = Stopwatch.StartNew();
for(int j=0;j<10000;j++)
{
Person p = (Person) db.ExecuteScalar("SELECT TOP 1000 Person WHERE Id=" + j.ToString());
p.LastName += "OK";
db.Store(p);
}
sw2.Stop();
Console.WriteLine("10000 UPDATES: " + sw2.ElapsedMilliseconds.ToString());
db.Close();
Console.WriteLine("Any Key to Quit.");
Console.ReadKey();
}
}
}
You can download the Visual Studio 2010 Solution here. | http://www.nullskull.com/a/1708/eloquera-nosql-object-database.aspx | CC-MAIN-2015-40 | en | refinedweb |
If you already have the R code (especially if you are going to use it often) it can be written to file stored through known adopath with your do- or ado-file using file with write option each time to use the ado/do-file. Linking this with Roger Newson's rsource will allow you to call R, run you code and output within Stata.
Example:
tempname binla
file open `binla' using "c:\ado\plus\m\midas.R", write replace text
file write `binla' `"library(INLA)"' _n
file write `binla' `"inla.upgrade()"' _n
file write `binla' `"library(foreign)"' _n
file write `binla' `"anti.logit = function(x) return (exp(x)/(1+exp(x)))"' _n
file write `binla' `"quantiles = c(0.025, 0.5, 0.975)"' _n
file write `binla' `"filename = commandArgs(trailingOnly = TRUE)"' _n
file write `binla' `"midas <- read.csv(filename, header = TRUE)"' _n
file write `binla' `"data.frame(midas)"' _n
file write `binla' `"formula <- Y~f(diid,model= "2diidwishart", param=c(4,1,2,0.1)) + tp + tn - 1 "' _n
file write `binla' `"midas = inla(formula,family="binomial", data=midas, Ntrials=N,"' _n
file write `binla' `"quantiles = quantiles, control.inla = list(strategy = "laplace", int.strategy = "grid", npoints=50),"' _n
file write `binla' `"control.compute = list(dic=T, cpo=T, mlik=T))"' _n
file write `binla' `"hyper = inla.hyperpar(midas)"' _n
file write `binla' `"nq = length(quantiles)"' _n
file write `binla' `"R = matrix(NA, 7, nq)"' _n
file write `binla' `"colnames(R) = as.character(quantiles)"' _n
file write `binla' `"rownames(R) = c("summary.sens", "summary.spec", "logit.sens", "logit.spec", "sigma.sens", "sigma.spec", "rho")"' _n
file write `binla' `"tp = midas\$marginals.fixed\$tp"' _n
file write `binla' `"R[1,] = anti.logit(inla.qmarginal(quantiles, tp))"' _n
file write `binla' `"tn = midas\$marginals.fixed\$tn"' _n
file write `binla' `"R[2,] = anti.logit(inla.qmarginal(quantiles, tn))"' _n
file write `binla' `"tpl = midas\$marginals.fixed\$tp"' _n
file write `binla' `"R[3,] = inla.qmarginal(quantiles, tpl)"' _n
file write `binla' `"tnl = midas\$marginals.fixed\$tn"' _n
file write `binla' `"R[4,] = inla.qmarginal(quantiles, tnl)"' _n
file write `binla' `"tau.1 = hyper\$marginals\$\`Precision for diid (first component)\` "' _n
file write `binla' `"R[5,] = 1/sqrt(inla.qmarginal(quantiles, tau.1))"' _n
file write `binla' `"tau.2 = hyper\$marginals\$\`Precision for diid (second component)\` "' _n
file write `binla' `"R[6,] = 1/sqrt(inla.qmarginal(quantiles, tau.2))"' _n
file write `binla' `"rho = hyper\$marginals\$\`Rho for diid\` "' _n
file write `binla' `"R[7,] = inla.qmarginal(quantiles, rho)"' _n
file write `binla' `"print(R)"' _n
file write `binla' `"midares <- data.frame(R)"' _n
file write `binla' `"write.foreign(midares,"' _n
file write `binla' `"datafile="midares.dta","' _n
file write `binla' `"codefile="midares.do","' _n
file write `binla' `"package="Stata")"' _n
file write `binla' `"fitdic<- data.frame(midas\$dic)"' _n
file write `binla' `"print(fitdic)"' _n
file write `binla' `"write.foreign(fitdic,"' _n
file write `binla' `"datafile="midadic.dta","' _n
file write `binla' `"codefile="midadic.do","' _n
file write `binla' `"package="Stata")"' _n
file write `binla' `"like<- data.frame(midas\$mlik)"' _n
file write `binla' `"print(like)"' _n
file write `binla' `"write.foreign(like,"' _n
file write `binla' `"datafile="midalik.dta","' _n
file write `binla' `"codefile="midalik.do","' _n
file write `binla' `"package="Stata")"' _n
if "`fixed'" != "" {
file write `binla' `"marg.fix.tp <- data.frame(midas\$marginals.fixed\$tp)"' _n
file write `binla' `"write.foreign(marg.fix.tp,"' _n
file write `binla' `"datafile="mfixtp.dta","' _n
file write `binla' `"codefile="mfixtp.do","' _n
file write `binla' `"package="Stata")"' _n
file write `binla' `"marg.fix.tn <- data.frame(midas\$marginals.fixed\$tn)"' _n
file write `binla' `"write.foreign(marg.fix.tn,"' _n
file write `binla' `"datafile="mfixtn.dta","' _n
file write `binla' `"codefile="mfixtn.do","' _n
file write `binla' `"package="Stata")"' _n
}
if "`covplot'" != "" {
file write `binla' `"marg.rho <- data.frame(midas\$marginals.hyper$\`Rho for diid\`) "' _n
file write `binla' `"write.foreign(marg.rho,"' _n
file write `binla' `"datafile="mrho.dta","' _n
file write `binla' `"codefile="mrho.do","' _n
file write `binla' `"package="Stata")"' _n
file write `binla' `"marg.hyper.tp <- data.frame(midas\$marginals.hyper\$\`Precision for diid (first component)\`) "' _n
file write `binla' `"write.foreign(marg.hyper.tp,"' _n
file write `binla' `"datafile="mhypertp.dta","' _n
file write `binla' `"codefile="mhypertp.do","' _n
file write `binla' `"package="Stata")"' _n
file write `binla' `"marg.hyper.tn <- data.frame(midas\$marginals.hyper\$\`Precision for diid (second component)\`) "' _n
file write `binla' `"write.foreign(marg.hyper.tn,"' _n
file write `binla' `"datafile="mhypertn.dta","' _n
file write `binla' `"codefile="mhypertn.do","' _n
file write `binla' `"package="Stata")"' _n
}
if "`fitted'" != "" {
file write `binla' `"linpred <- data.frame(midas\$summary.linear.predictor)"' _n
file write `binla' `"write.foreign(linpred,"' _n
file write `binla' `"datafile="lpred.dta","' _n
file write `binla' `"codefile="lpred.do","' _n
file write `binla' `"package="Stata")"' _n
file write `binla' `"sumfit <- data.frame(midas\$summary.fitted.values)"' _n
file write `binla' `"write.foreign(sumfit,"' _n
file write `binla' `"datafile="sumfit.dta","' _n
file write `binla' `"codefile="sumfit.do","' _n
file write `binla' `"package="Stata")"' _n
}
if "`predplot'" != "" {
file write `binla' `"cpo <- data.frame(midas\$cpo)"' _n
file write `binla' `"write.foreign(cpo,"' _n
file write `binla' `"datafile="cpo.dta","' _n
file write `binla' `"codefile="cpo.do","' _n
file write `binla' `"package="Stata")"' _n
file write `binla' `"pit <- data.frame(midas\$pit)"' _n
file write `binla' `"write.foreign(pit,"' _n
file write `binla' `"datafile="pit.dta","' _n
file write `binla' `"codefile="pit.do","' _n
file write `binla' `"package="Stata")"'
}
file close `binla'
tokenize "`varlist'", parse(" ")
gen Y1 = int(`1')
gen N1 = int(`1'+`3')
gen N2 = int(`2'+`4')
gen Y2 = int(`4')
gen id=_n
reshape long Y N, i(id)j(dis)
tab dis, gen(T)
rename tp abcd1
rename tn abcd2
rename T1 tp
rename T2 tn
gen diid =_n
tempfile datafile
capture outsheet Y N tp tn diid using `datafile', replace comma /*nolabel*/
capture findfile "midas.ado"
local midasloc `"`r(fn)'"'
_getfilename "`midasloc'"
local xfile "`r(filename)'"
local foundit : subinstr local midasloc `"`xfile'"' ""
local foundit : subinstr local midasloc `"`xfile'"' ""
rsource using "`foundit'midas.R", lsource roptions("--slave --args `datafile'") rpath("C:\Program Files\R\R-2.10.1\bin\rterm.exe")
nois di ""
nois di ""
nois di ""
infile lower estimate upper using "midares.dta" , automatic clear
gen str parameter="Sensitivity"
replace parameter="Specificity" in 2/2
replace parameter="logit(sens)" in 3/3
replace parameter="Logit(spec)" in 4/4
replace parameter="Var(logitsens)" in 5/5
replace parameter="Var(logitspec)" in 6/6
replace parameter="Corr(logits)" in 7/7
format parameter %-15s
format estimate lower upper %3.2f
nois list parameter estimate lower upper, sep(0) noobs table
erase midares.dta
erase midares.do
nois di ""
>>> Fred Wolfe <fwolfe@arthritis-research.org> 2/26/2010 1:21 PM >>>
I:
*
*
*
**********************************************************
Electronic Mail is not secure, may not be read every day, and should not be used for urgent or sensitive issues
*
* For searches and help try:
*
*
*
**********************************************************
Electronic Mail is not secure, may not be read every day, and should not be used for urgent or sensitive issues
*
* For searches and help try:
*
*
* | http://www.stata.com/statalist/archive/2010-02/msg01333.html | CC-MAIN-2015-40 | en | refinedweb |
Difference between revisions of "TTML/changeProposal015"
Revision as of 17:32, 21 March 2014
Style.CSS - OPEN
- Owner: Glenn Adams.
- Started: 14/06/13
Contents
- 1 Style.CSS - OPEN
- 2 Issues Addressed
- 3 Summary and Change details
- 3.1 margin
- 3.2 padding
- 3.3 box-decoration-break
- 3.4 border
- 3.5 line stacking strategy
- 3.6 region anchor points
- 3.7 text outline
- 3.8 text shadow
- 3.9 shrink fit
- 3.10 font face rule
- 3.11 multiple row alignment (flex box in CSS mapping)
- 4 Dependencies on other packages
- 5 Edits to be applied
- 6 Edits applied
- 7 Impact
- 8 References
Issues Addressed
- ISSUE-168
- ISSUE-176
- ISSUE-193
- ISSUE-20
- ISSUE-209
- ISSUE-21
- ISSUE-213
- ISSUE-234
- ISSUE-235
- ISSUE-285
- ISSUE-273
- ISSUE-284
- ISSUE-286.
Style attribute
tts:linePadding applies to block-level elements
tt:body,
tt:div and
tt:p. Has no effect when declared solely on a
tt:span.
Value permitted is a single
<length>. Percentages are relative to the width of the region. Length values must be zero or positive.
Animatable: discrete.
Every line area created to contain the text contained in a paragraph with a non-zero
linePadding will be inset at the start and end by the specified length. The background color at the start and end of each line will be extended into this inset space.
PAL: Are TTML 1 processors expected to ignore TTML 2 attributes that are in the same namespace at TTML 1? Is this the case in practice?
TTML2 example
The
tts:linePadding style is illustrated by the following example:
<p tts: <span tts: Left and right padding broken across a line </span> </p>
CSS mapping
Use of linePadding results in the properties
padding-left,
padding-right and
box-decoration-break: clone being set on a span around the text, with an anonymous span being introduced if necessary. The previous example thus results in the following example code when mapped to HTML5/CSS:
<p style="color: white;"> <span style="background-color: black; padding-left: 0.5em; padding-right: 0.5em; box-decoration-break: clone;"> Left and right padding broken across a line </span> </p>
This produces an output similar to:
Left and right padding
broken across a line
NB the wiki source for this example differs from the mapping provided above because browser support for
box-decoration-break: clone is not yet available.> </styling>> ...
- Need to define mappings for other combinations of textAlign and multiRowAlign to get desired behaviour. | http://www.w3.org/wiki/index.php?title=TTML/changeProposal015&curid=7078&diff=72505&oldid=72474 | CC-MAIN-2015-40 | en | refinedweb |
IRC log of dawg on 2005-07-05
Timestamps are in UTC.
14:28:22 [RRSAgent]
RRSAgent has joined #dawg
14:28:22 [RRSAgent]
logging to
14:28:31 [Zakim]
SW_DAWG()10:30AM has now started
14:28:32 [Zakim]
+DanC
14:29:13 [Zakim]
+HowardK
14:29:15 [Zakim]
+??P4
14:29:19 [Zakim]
-??P4
14:29:33 [stoni]
stoni has joined #dawg
14:29:48 [Zakim]
+??P4
14:29:51 [AndyS]
zakim, ??P4 is AndyS
14:29:51 [Zakim]
+AndyS; got it
14:30:10 [Zakim]
+EricP
14:30:11 [Zakim]
+Kendall_Clark
14:30:21 [kendall]
zakim, mute me
14:30:21 [Zakim]
Kendall_Clark should now be muted
14:30:28 [Zakim]
+[IBMCambridge]
14:30:56 [DanC]
Zakim, [IBMCambridge] is temporarily LeeF
14:30:56 [Zakim]
+LeeF; got it
14:31:22 [Zakim]
+[IPcaller]
14:31:28 [SteveH]
Zakim, IPcaller is SteveH
14:31:28 [Zakim]
+SteveH; got it
14:31:35 [DanC]
Zakim, take up item 1
14:31:35 [Zakim]
agendum 1. "Convene, take roll, review records and agenda
" taken up [from DanC]
14:31:46 [kendall]
zakim, unmute me
14:31:46 [Zakim]
Kendall_Clark should no longer be muted
14:33:34 [ericP]
warning about identical-looking IRIs -->
14:33:42 [DanC]
regrets: Jeen Broekstra, Dave Beckett
14:33:46 [Zakim]
+[IBMCambridge]
14:33:51 [Zakim]
+??P24
14:34:10 [DanC]
regrets+ Yoshio FUKUSHIGE
14:35:02 [DanC]
Zakim, list attendees
14:35:02 [Zakim]
As of this point the attendees have been DanC, HowardK, AndyS, EricP, Kendall_Clark, LeeF, SteveH, [IBMCambridge]
14:35:28 [DanC]
->
26 Jun minutes
14:35:30 [AndyS]
Eric your 2 resume examples render differently in FF :-)
14:36:23 [DanC]
RESOLVED to accepts 26 Jun minutes
14:36:28 [DanC]
Zakim, pick a scribe
14:36:28 [Zakim]
Not knowing who is chairing or who scribed recently, I propose DanC
14:36:32 [DanC]
Zakim, pick a scribe
14:36:32 [Zakim]
Not knowing who is chairing or who scribed recently, I propose SteveH
14:36:38 [ericP]
PROPOSED accept
as a true record
14:36:41 [ericP]
RESOLVE
14:36:43 [ericP]
RESOLVED
14:37:06 [ericP]
Next meeting: 12-July
14:37:09 [ericP]
Scribe: SteveH
14:37:53 [DanC]
Zakim, next agendum
14:37:54 [Zakim]
agendum 2. "SPARQL QL publication" taken up [from DanC]
14:38:24 [EliasT]
EliasT has joined #dawg
14:39:47 [DanC]
. ACTION EricP: defns extraction
14:39:54 [AndyS]
"resume" example works in IE :-)
14:40:07 [DanC]
ACTION EricP: defns extraction
14:40:15 [DanC]
ACTION: EricP clarify which regex lang, new section ericp; have AndyS check it.
14:40:23 [DanC]
ACTION: PatH to review new optionals defintions, if any
14:40:32 [Zakim]
+ +1.323.444.aaaa
14:40:46 [DanC]
Zakim, aaaa is JosD
14:40:46 [Zakim]
+JosD; got it
14:40:56 [DanC]
ACTION DanC: write SOTD; work with EricP to publish
14:41:25 [Zakim]
+PatH
14:41:26 [ericP]
zakim, who is here?
14:41:26 [Zakim]
On the phone I see DanC, HowardK, AndyS, EricP, Kendall_Clark, LeeF, SteveH, [IBMCambridge], ??P24, JosD, PatH
14:41:28 [Zakim]
On IRC I see EliasT, stoni, RRSAgent, kendall, howardk, Zakim, AndyS, LeeF, SteveH, afs, ericP, DanC
14:41:37 [ericP]
zakim, ??P24 is Souri
14:41:37 [Zakim]
+Souri; got it
14:41:57 [JosD]
JosD has joined #dawg
14:42:29 [patH]
patH has joined #dawg
14:42:33 [kendall]
pedantic web, yes! :>
14:43:46 [kendall]
(hmm, I have to send real regrets for next week -- dr's appointment during our call)
14:45:08 [ericP]
Andy believes he can produce OPTIONALs text this week in time for PatH to review it
14:45:43 [ericP]
KendallC: we care about bNodes issue, but don't care that it is before last call
14:47:11 [DanC]
->
example from DanC in bnodes
14:47:27 [ericP]
DanC: at first, I thought bNodes was an overspecification problem. Now see that a test case reveals that they want a different definition of matching.
14:48:46 [DanC]
(bnode rich stuff can be hard to deal with, yes.
advises "don't do that". hmm.)
14:49:20 [ericP]
(multiple identities for everything causes FOAF database bloat)
14:50:00 [kendall]
yes, i agree about "don't do that", but FOAF is kinda a big deal!
14:50:18 [ericP]
[KendallC describes UM's interest in stable bNode identifiers]
14:52:56 [ericP]
KendallC: I don't want to make a decision here that will make FOAF and OWL/DL queries harder in the future
14:54:13 [AndyS]
Answer is
14:54:13 [ericP]
PatH: we might be able to allow bNodes to pin down a match without requiring it
14:54:57 [DanC]
ron's reply regarding the test case
14:55:00 [ericP]
KendallC: I believe that is what we want. This affects using SPARQL between portals
14:55:55 [DanC]
input data: _:l23c14 foaf:mbox <
mailto:connolly@w3.org
>.
14:57:05 [ericP]
PatH: in the past, we tried bNodes not treated as variabls
14:57:17 [ericP]
... requires that you put a variable there instead.
14:57:42 [ericP]
... removes bNodes from the QL entirely
14:58:07 [ericP]
... backing off, you can send a query with a bNode but it might not match
14:59:51 [ericP]
Proplems with removing BNodes from SPARQL:
15:00:07 [ericP]
1. SELECT * gives more bindings
15:00:09 [DanC]
in _:l55c33 , it's an ell, not a one. line 55 character 33.
15:00:37 [AndyS]
3 ways round it :
15:00:52 [ericP]
2. need to rename named bNodes and []s when translating turtle to SPARQL
15:02:05 [ericP]
PatH: how about some syntax for "marked" bNodes?
15:02:20 [SteveH]
yes
15:03:09 [ericP]
DanC notes that this appears to match Andy's _!:xyz proposal in
15:04:32 [ericP]
AndyS: protocol solution is also interesting 'cause you're in a session context
15:05:20 [AndyS]
We also need to relax XML results format (and RDF/XML??)
15:05:32 [SteveH]
_:a -> _!:a to mark?
15:05:39 [SteveH]
thats my understanding
15:07:36 [DanC]
agenda + blank node handling... new requirements?
15:07:36 [kendall]
but that's *one* implementation strategy among others; I don't see any reason to privilege it.
15:07:38 [ericP]
Elias, if we already have the capability, with OPTIONALs, is inventing this stuff necessary?
15:07:41 [DanC]
Zakim, take up item 8
15:07:41 [Zakim]
agendum 8. "blank node handling... new requirements?" taken up [from DanC]
15:08:01 [ericP]
Zakim, [IBMCambridge] is Elias
15:08:01 [Zakim]
+Elias; got it
15:08:36 [kendall]
what else is going on outside our group? Uh, FOAF, OWL DL. Little things like that! :>
15:09:07 [EliasT]
I meant: what else is going on outside our group that deals with bNodes across RDF documents...
15:09:22 [kendall]
Elias: and my answer is foaf & owl dl :>
15:09:35 [ericP]
AndyS: I don't find the FOAF example so compelling because you can always use mbox or mbox_sha1 [FOAF IFPs]
15:09:55 [ericP]
... I find update more compelling
15:10:14 .
15:11:02 [DanC]
EliasT, there are a number of such toolkits. cwm has a "smushing" mode, for example.
15:11:12 [ericP]
PatH: I find asking the server to create URIs which it is willing to author more appealing
15:11:46 [ericP]
LeeF: that puts a slightly larger burden on the servers that *do* offer bNode stability
15:12:00 [ericP]
AndyS: can be done with a simple map
15:13:34 [ericP]
KendallC: URI label space solution makes me nervous as I don't know the implications on OWL/DL
15:13:45 [ericP]
... protocol solution is interesting. want to think about it.
15:14:14 [ericP]
... glad we have a record of discussing this.
15:16:41 [AndyS]
Within rq23: FILTER ext:bnodeLabel(?x , "label") - it's mildly cheating
15:17:16 [kendall]
it's "h-u-f-f-i-t-y", huffity :>
15:17:32 [DanC]
DanC suggests we go with the design we have, with some flexibility about new information later.
15:19:01 [kendall]
it's all well and good (seriously) to suggest that FOAF allows URIs instead of bnodes, as well as definining mbox as IFP, but it doesn't seem like OWL DL has that flexibility.
15:19:32 [SteveH]
that doesnt mesh well with CONSTRUCT
15:19:38 [kendall]
It should be possible to query RDF vocabularies that require heavy use of bnodes in a user-friendly manner. ?
15:19:42 [DanC]
right, kendall, I think OWL DL does not (though I'm never quite sure without looking it up)
15:19:44 [kendall]
eh, that sucks, but ??
15:20:54 [ericP]
PatH: my intuition is that mapping to URIs has the same implications as re-using bNode labels.
15:20:55 [patH]
90 mins OK me
15:21:01 [ericP]
... but have to think about that hard
15:24:11 [ericP]
DanC: the effect of adopting Kendall's requirement is that the WG will spend weeks considering a technical solution
15:24:42 [DanC]
perhaps: it must be possible for a client to refer to a bnode provided by a server
15:25:21 [kendall]
i think that's better, dan
15:25:29 [ericP]
JosD, in my experience, I query billions of bNodes and lists and I haven't seen this problem come up.
15:25:46 [SteveH]
variable length lists are tricky
15:25:51 [kendall]
jos: i'd welcome you writing an email explaining yr experience in this regard
15:26:52 [LeeF]
DanC, does that wording place a requirement on servers to support this, or only on the QL to allow clients to ask queries hoping that the server supports it?
15:26:55 [kendall]
eric: i thought we found language in the present spec that does *not* allow that presently
15:27:21 [DanC]
perhaps: it must be possible for a client to refer to a bnode provided by a server
15:27:53 [DanC]
Zakim, who's on the phone?
15:27:53 [Zakim]
On the phone I see DanC, HowardK, AndyS, EricP, Kendall_Clark, LeeF, SteveH, Elias, Souri, JosD, PatH
15:29:41 [kendall]
oops, sorry, he is on irc. my bad. ;>
15:30:48 [ericP]
+.5, +.5, -1, +1
15:31:03 [kendall]
eh the weakly = .5 thing should be non-canonical, IMO :>
15:31:24 ?
15:32:07 [DanC]
do the same query, pat, and add more to it
15:32:32 [patH]
I think the problem we ahve here is that several folk do not see that there is a real problem. Suggestion: if non-idiotic users (Maryland) say thery have a problem, there really is a problem.
15:33:00 [Zakim]
-HowardK
15:33:13 [kendall]
fwiw, i don't know what design instantiates that distinction! :>
15:33:18 [DanC]
howardk is excused
15:33:20 [patH]
Add what, Dan? Do I have to put the RDF list syntax into my query? (Yech)
15:33:30 [DanC]
yes, yech, but it works, pat
15:33:54 [patH]
OK, sorry, I shuld know better than to say "yech" in an RDF context.
15:35:55 [SteveH]
always mapping bnodes into uris is messy
15:36:03 [SteveH]
having it be an option would be more acceptable IMHO
15:36:22 [kendall]
steveh: yes, i understand this as "you may do" instead of "you must do"
15:36:28 [DanC]
ok, our decision to go to last call is vacated, and we've got a new issue on our issues list.
15:36:36 [AndyS]
Would _!:xyz cover that?
15:36:56 [JosD]
Pat, it is pretty convenient with the ( list ) notation
15:37:00 [kendall]
Andy: splitting the bnode label space seems a variant of bnode->uri
15:37:25 [AndyS]
Sort of - but there are not forced to be URIs by design
15:37:27 [SteveH]
kendall, I meant optional at runtime, sorry wasnt clear
15:37:40 [ericP]
ACTION: PatH to consider implications of answering bNode bindings with created URIs
15:38:29 [ericP]
ACTION: KendallC to ask Bijan to consider implications of answering bNode bindings with created URIs
15:39:11 [DanC]
Zakim, agenda?
15:39:11 [Zakim]
I see 7 items remaining on the agenda:
15:39:12 [Zakim]
2. SPARQL QL publication [from DanC]
15:39:13 [Zakim]
3. punctuationSyntax [from DanC]
15:39:15 [Zakim]
4. Comments [from DanC]
15:39:16 [Zakim]
5. SPARQL protocol publication [from DanC]
15:39:17 [Zakim]
6. SPARQL results format publication [from DanC]
15:39:17 [AndyS]
EricP : What about the other designs?
15:39:18 [Zakim]
7. tests [from DanC]
15:39:19 [SteveH]
JosD, you cant use the ( list ) notation in that case because of the :nil URI
15:39:19 [Zakim]
8. blank node handling... new requirements? [from DanC]
15:39:26 [DanC]
Zakim, close item 8
15:39:26 [Zakim]
agendum 8 closed
15:39:27 [Zakim]
I see 6 items remaining on the agenda; the next one is
15:39:28 [Zakim]
2. SPARQL QL publication [from DanC]
15:40:03 [ericP]
AndyS, I only pushed on one of them, the one that I saw as most immediate
15:40:07 [DanC]
editors still working on optionals
15:40:12 [DanC]
Zakim, close item 2
15:40:12 [Zakim]
agendum 2 closed
15:40:13 [Zakim]
I see 5 items remaining on the agenda; the next one is
15:40:14 [Zakim]
3. punctuationSyntax [from DanC]
15:40:16 [kendall]
zakim, mute me
15:40:17 [Zakim]
Kendall_Clark should now be muted
15:40:36 [AndyS]
I'd like protocol considered because the requirment was for session usage
15:41:03 [ericP]
ACTION: JosD to fix up the relevant tests [recorded in
]
15:41:06 [ericP]
DONE
15:41:10 [ericP]
action -7
15:41:14 [JosD]
SreveH, what is meant was e.g. query ... { ?X owl:interscetionOf (:a :b :c) ...}
15:41:48 [DanC]
syntax-qname-08-rq and syntax-qname-14-rq
15:42:00 [SteveH]
JosD, ah, sorry, I thought you were talking about the unknown list length case
15:42:36 [SteveH]
JosD, eg. find all the classes that this class is the intersection of, but you dont knwo how many there are
15:42:41 [DanC]
WHERE { :a. x.: : . } <- old or new bits?
15:42:49 [DanC]
old
15:43:25 [ericP]
ACTION: JosD to fix up the relevant tests [recorded in
]
15:43:28 [ericP]
CONTINUED
15:43:34 [DanC]
Zakim, close this agendum
15:43:34 [Zakim]
I do not know what agendum had been taken up, DanC
15:44:03 [DanC]
Zakim, take up item protocol
15:44:03 [Zakim]
agendum 5. "SPARQL protocol publication" taken up [from DanC]
15:44:21 [kendall]
zakim, unmute me
15:44:21 [Zakim]
Kendall_Clark should no longer be muted
15:44:51 [ericP]
KendallC: still have a short todo. spending time on query lang.
15:45:03 [ericP]
... no complaints apart from Mark Baker
15:45:45 [ericP]
DanC: what's standing in the way on Results Format?
15:46:15 [ericP]
EricP: we needed a namespace document. done. don't know what else is critical path.
15:48:04 [kendall]
I don't. :>
15:49:33 [DanC]
re results format, EricP notes an outstanding comment about xsi:type
15:49:40 [AndyS]
Dave recommedned I remove xsi:schemaLocation=... from example in rq23
15:49:47 [SteveH]
I'm happy to add some results test, but it will be atl east a week
15:49:53 [SteveH]
before I can do it
15:50:09 [DanC]
(ericp, in the minutes, please continue my 2 actions under the Comments item)
15:50:15 [ericP]
xsi:type on sparql:literal elements -->
15:51:50 [ericP]
DanC: is it ok if one puts spurious extra attributes without changing the meaning?
15:52:39 [DanC]
ADJOURN.
15:52:40 [Zakim]
-Souri
15:52:48 [Zakim]
-SteveH
15:52:50 [Zakim]
-DanC
15:52:55 [Zakim]
-JosD
15:54:22 [ericP]
DanC, can you do the scribe script magic?
15:55:07 [DanC]
RRSAgent, draft minutes
15:55:07 [RRSAgent]
I have made the request to generate
DanC
15:55:20 [DanC]
RRSAgent, make logs world-access
15:55:47 [DanC]
re "... vacated..."; the LC decision stands, pending outcome of the actions re blank nodes and the optionals action
16:01:28 [kendall]
oops, my lunch plans kick in. ciao
16:01:32 [Zakim]
-Kendall_Clark
16:02:48 [EliasT]
EliasT has left #dawg
16:03:03 [Zakim]
-Elias
16:03:48 [DanC]
<LeeF> DanC, does that wording place a requirement on servers to support this, or only on the QL to allow clients to ask queries hoping that the server supports it?
16:04:20 [DanC]
good question... the question before the WG was really: is this worth some schedule to slip to study?
16:05:20 [LeeF]
Right.
16:06:28 [patH]
Suggest that we cut out too much stufy by using the permissive alterntaive when possible.
16:06:36 [patH]
stufy/study
16:07:07 [LeeF]
Is there a good resource(s) available to learn the various w3 tools that I could use to learn to contribute more to the wokrings of the WG? (how to scribe, add tests, that sort of thing)?
16:07:38 [Zakim]
-LeeF
16:12:08 [DanC]
LeeF, the WG home page (
) is intended to serve in that way
16:12:38 [DanC]
e.g. it has a link to "scribe tips"
16:12:52 [DanC]
as far as I know, "now to add a test case" is lightly, if at all, documented.
16:13:03 [DanC]
how to...
16:13:23 [DanC]
there are several WG members that know how to do it; SteveH is the test editor/coordinator
16:14:21 [Zakim]
-PatH
16:19:41 [Zakim]
-EricP
16:20:11 [Zakim]
-AndyS
16:20:12 [Zakim]
SW_DAWG()10:30AM has ended
16:20:13 [Zakim]
Attendees were DanC, HowardK, AndyS, EricP, Kendall_Clark, LeeF, SteveH, +1.323.444.aaaa, JosD, PatH, Souri, Elias
16:53:19 [LeeF]
Thanks, DanC.
18:07:47 [Zakim]
Zakim has left #dawg | http://www.w3.org/2005/07/05-dawg-irc | CC-MAIN-2015-40 | en | refinedweb |
Per HTML5, the @lang attribute can be used in both XHTML and HTML:
"The lang attribute in no namespace may be used on any HTML element." <>
"The term "HTML elements", when used in this specification, refers to any element in that namespace, and thus refers to both HTML and XHTML elements." <>
Therefore, Polyglot Markup should specify that xml:lang is optional.
Aslo see: 16166#c2
(In reply to comment #0)
> Aslo see: 16166#c2
See Bug 16166#c2
I don't believe this is appropriate. See my rationale in comment 7 at
(In reply to comment #2)
> I don't believe this is appropriate. See my rationale in comment 7 at
>
I don't think your justification there is good enough.
(1) The point is that if one is to make an application/xhtml+xml document, then one isn't required to use xml:lang, if all one cares about is XML-capable HTML5-parsers.
And so, the question begs to be asked: Why must I, suddenly, use xml:lang, it the document
is supposed to be 'polyglot'?
I agree that you, in bug 16166, 7th comment, have pointed out some reasons why an author might want the document to contain xml:lang. But I see no *must* in there - it all depends on how "naked" you expect the XML parser to be.
(2) Meanwhile, in 16166, you suggest that text/html parsers should start to handle xml:lang. So, if that proposal were to be accepted, how would this impact on Polyglot Markup? My presumption is that you would like to be able to produce polyglot markup which contained xml:lang, without any requirement that @lang is present.
(3) HTML+RDFa 1.1 is agnostic about xml:lang versus lang
(4) RDFa Core likewise says: "In XHTML+RDFa [XHTML-RDFA], for example, the XML language attribute @xml:lang or the attribute @lang is used to add this information, whether the plain literal is designated by @content, or by the inline text of the element:"
And so, I maintain that xml:lang should be optional. It would also be good to point out when/for what xml:lang is useful.
This ended abruptly and without conclusive proposed text. The same seems true for bug 16166, in which much relevant discussion took place. Am I to take it that Leif's final thoughts in comment 3 contain the resolution, that xml:lang should be optional?
I tend toward keeping the text as it is, that polyglot markup uses both xml:lang and @lang. If you have come to consensus otherwise, however, I am willing to incorporate that.
(In reply to comment #4)
Hi Eliot,
consider that I have changed my mind: xml:lang should be considered <del>polyglot</del> <ins>robust</ins>. Thus, no change, basically.
The thing is that xml:lang is *not* necessary in XHTML since a conforming XHTML parser will understand the @lang attribute.
However, from a *robust* viewpoint, it can be defended that xml:lang should be in.
Cheers! | https://www.w3.org/Bugs/Public/show_bug.cgi?id=16190 | CC-MAIN-2015-40 | en | refinedweb |
)
Issue Links
- is related to
KARAF-1033 Set the Features validation optional
- Resolved
Activity
- All
- Work Log
- History
- Activity
- Transitions
Due to itests broken (Pax Exam "embeds" Karaf, so force the "old" feature namespace usage), I let the features namespace in v1.0.0 (on trunk).
On karaf-2.2.x branch, add a warning message if the features repository doesn't have a name: revision 1200616.
On trunk, I change the XSD (and so the namespace version).
On karaf-2.2.x branch, I just raise a warning during parsing.
Revision 1201204. | https://issues.apache.org/jira/browse/KARAF-968?page=com.atlassian.jira.plugin.ext.subversion:subversion-commits-tabpanel | CC-MAIN-2015-40 | en | refinedweb |
On Mon, May 20, 2002 at 01:56:05AM +0200, Marcus Brinkmann wrote: > >. Using an analogy to Linux, it's not so much that it contains programs (in the /usr/bin sense), it's more that it contains "filesystem definitions". That those happen to be programs is interesting, but not particularly important. So, where you'd type: mount -t msdos /dev/hda3 /mnt # in Linux you type settrans -a /mnt /hurd/msdos /dev/hda3 # in Hurd (assume there's a "/dev" translator setup if you have to) "/hurd" isn't needed simply as somewhere to put these translators, it's needed as a namespace to reference them. The only alternative that I can see, would be having settrans and kin have a default path to use (either a colon-separated $TRANSPATH, or a single directory configurable at compile-time), in which case /hurd could be moved under /usr/lib/fs-translators or similar (and the settrans command would look more like `settrans -a /mnt msdos /dev/hda3'). > > In the current FHS, there is documentation about /lib/modules. >. > /lib/modules is completely wrong. Hurd servers are neither a library nor > modules, nor non-executable architecture specific data or anything of that > sort. "lib" isn't just for "libraries", it's "object files, libraries, and internal binaries that are not intended to be executed directly by users or shell scripts" (well, /usr/lib, anyway). The latter seems a pretty accurate description, so "/lib/servers" would be a fairly reasonable place to put it, if you didn't have to type it out every time you wanted to do a mount. Anyway, this is all pretty irrelevant. If policy's not meant to be a stick, then people shouldn't be trying to change however many years of existing practice in the Hurd just for fun. Cheers, aj -- Anthony Towns <aj@humbug.org.au> <> I don't speak for anyone save myself. GPG signed mail preferred. ``BAM! Science triumphs again!'' --
Attachment:
pgp8mpJJFMfzh.pgp
Description: PGP signature | https://lists.debian.org/debian-hurd/2002/05/msg00502.html | CC-MAIN-2015-40 | en | refinedweb |
Hey guys,
I'm having a very weird problem with a project I'm working on. I have a multi-scene project and on one of the scenes there are three buttons which the user can interact with. Each button takes the user to a difference scene that starts with music, text fade ins, and pretty simple stuff. Two of the buttons worked fine, and I duplicated the scene I used for the second button when I started on the third since they have the same general format. What is really weird is that when I have the third button link to the second scene (via actionscript), everything works fine...but when I use actionscript to link the third button to an EXACT duplicate of the second scene, there is a brief burst of sound before the main track that is supposed to begin starts...almost as if it tried to play a sound file for a brief second and then abruptly stopped - any ideas what could be going on? It just doesn't make sense to me that the annoying 1 second sound would only be there when I make a copy of the scene, but not when I link to the original....
Also, since my project contains many scenes, is there a way for me to preview only one scene that will still let me click on a button to link me to another scene - seems like that only works when I compile the entire movie, when I preview a scene in isolation it gives me a compiler error saying the scene I'm linking to can't be found (likely because when you preview a scene in isolation it doesn't compile any of the button-linked scenes)...
Hope this made sense!
Thanks!
-Ricky
Using scenes is always problematic. You really shouldn't be using scenes, that's a methodology that was abandoned at about Flash 2. The central problem is that when a scene based Flash file is saved as an .swf, Flash concatenates all of the scene timelines end to end to make one, long, timeline. One of the resulting problems is that you may end up with duplicate frame labels, and function and variable names.
While Flash will warn you about duplicate frame labels, it may have difficulty resolving the duplicates, even if your code refers to the scene and the frame label.
Another problem with using scenes is that you end up with one large .swf file that may not download quickly or efficiently and can result in unpredictable playback. At runtime, when Flash will always use the first named thing that it finds, so, if there are two frame labels with the same name, Flash will always go to the first one in the timeline. If it gets completely confused, Flash will always go back to the first frame of the movie and start over, this is the looping that you may see when there is a problem in playback.
A better method for producing a movie made of many parts is to produce each section as a unique Flash movie. Then, at runtime, load each of these movies separately. This may seem more complex, but it will yield a more predictable user experience, a simpler production process, and a simpler testing process.
Hey Rob,
Thanks for the tip. I've gone about breaking everything into one scene movies, and I've got code that, upon the click of a button, loads the new .swf thusly:
import flash.display.Loader;
import flash.net.URLRequest;
stop();
var singleLoader:Loader = new Loader();
btnCont_scene1.addEventListener(MouseEvent.CLICK, btnContClick);
function btnContClick(event:MouseEvent):void
{
removeChild(singleLoader);
var request:URLRequest = new URLRequest("DT-Session1-Male_AnxietyIntr2.swf");
singleLoader.load(request);
addChild(singleLoader);
}
However, I'm unsure how to go about clearing the screen first?? The code above just tries to copy right over it without clearing the stage first...
Thanks!
-Ricky
Oops the removeChild actually shouldn't be there, that was just something I was playing around with to try to get the screen to clear.
Yes, when you add a child object to the display list that new object will be placed on top of whatever is already on the stage. If you don't position this new object, it will be positioned with its upper left corner at the upper left corner of the stage.
Working with the display list and loading new objects is a complex subject, but simply put, loading in a new .swf works pretty much the same as opening a new .swf in a browser window. When you call for that new .swf it will begin to stream into your existing movie. It will begin to run or not depending on how it was coded. One difference between loading an .swf into a browser and loading an additional .swf into an existing .swf is that the background for that additional .swf does not load. The background, the color layer that you define in the movie's properties, is only used for the first movie that is loaded into the browser. This means that if you create a movie with a blank first frame and a stop(); directive at that first frame, and then load that movie, the user will not see anything on the stage as that movie loads.
Alternatively, you can set an x and/or y position for the newly loaded movie that is off the visible area of the stage. I often load in new movies with a y position of -1000. Then, when I want to actually use that movie, I change its y position property value so that it shows on the stage.
You can then either move that movie back off the visible area or you can use removeChild() to get rid of the movie if you no longer need it.
When you load new content, how you make the new content appear on the stage and how you remove that content are all part of the design of your website. You can load in all of the individual parts at the start and then show and hide each piece as needed, or you can wait until the user asks for a particular part and then load that part as it is asked for.
Does that help?
I've been trying to use the method of making one with a blank first frame and a stop() directive and it loads the movie fine, but it still doesn't solve the issue because I'm linking multiple flash files so that one scene leads to the next which leads to the next etc. And each scene needs to wipe out everything that was there on the previous scene - by the time im going from the second to the third scene, the third is trying to overwrite the second scene again - I tried using a transition one with just the blank stage and a stop; but it didn't work, and I really don't want to have to create a unique transition .swf file each time - it just seems like there should be a less convoluted way to do this - it doesn't seem like it should be too fancy, is there no equivalent to a clear screen command or a way to unload the current stage before loading up whatever is in the next .swf file?
I found a way to do it but I want to check to see if there's anything wrong with it - basically I just have the button skip ahead to a frame where I deleted everything that was on the stage, and then on that frame I load the .swf file...
cuts out the separate transition .swf middleman....
Yes, that is one way to load in new content and display it. Without seeing your movie's content it's difficult to know what the best method might be. The way that you display your content relates to the design of that content. You need to consider what parts are constant, if any, what parts change, how the user interacts with the individual elements, the randomness of the interaction, the pacing, etc.
Say, for instance, that you have three sections to your movie and you want to move from one section to another when the user clicks on a button. Further, you want to show a transition between each of these sections.
You can create each section as a different flash movie. Likewise you can create the transition as another unique flash movie. Finally, you can create a container movie that does nothing except to load in these content movies.
Let's call these movies "container", "content1", "content2", "content3" and "transition". Have the movie container, load in each of these movies with transition as the last to be loaded. So now you have four .swf movies stacked on top of your container movie. Content1 can play when it loads. When the user calls for the movie content2, you can play the movie transition and when it finishes, play the movie content2. When the transition movie begins, you can remove the movie content1. If you no longer need the movie content1, then you can use removeChild() to get rid of it. If you think that you will need it again, then just rewind it back to frame 1.
Hey Rob,
I'm pretty confused. My application generally proceeds in a linear fashion, and content is not reused so I really just need to wipe out the stage each time and load in a new .swf file. There is one part where I used a sharedObject to store what the user has already clicked on so I can adjust which options are later presented. I've been using the gotoAndStop() method of loading new movies, but things are getting all mixed up and I don't really understand what's going on. Sometimes things don't load, sometimes they load multiple times and loop, and other times I get a compile error - for instance, when the user clicks on a button, I have the actionscript direct it to a certain frame and load a new movie - but then when that movie loads, it crashes at frame 466 and when I look at it in the debugger, its because its still traversing the timeline of the parent caller and referring to a button that doesn't exist on the movie I've loaded -- I'm confused as to how to do something that I thought would be pretty straightforward - all I want is that each time the new movie loads for flash to use that timeline and that timeline only -- I'm not sure why it's going through the timeline of the calling movie - I have a stop command so I thought it would just go straight to the new movie's first frame and go from there....I've been trying to read help documentation on how the loading stuff works but it's confusing and I'm unable to find examples of the sort of simple thing I need to be do...
-Ricky
It sounds like you may have a couple of problems. The first is that you are addressing the wrong movie and the second is that you may be addressing a movie before its ready to be used. There may be other problems. Can you send me a copy of the movies so that I can see what's going on?
Thanks for all the help Rob. Below are links to download the movies. Here's a brief explanation of what I'm trying to do.
DTM1-Start.swf is the starter file, and at the end is a continue button that should load DTM1-Intr1.swf. DTM1-Intr1.swf does load successfully, but it loops instead of following the stop() command at the end to allow the user to click on one of the three emotion buttons. If, however, I run DTM1-Intr1.swf alone, it does not loop and the stop() command is executed as it should be -- this is a common theme throughout, i.e., the files work as they should when I run them directly, it's the loading and linking between .swf files that is problematic. Anyway, if I run DTM1-Intr1.swf directly, I am able to click on one of three buttons. No matter what button I choose (anger- DTM1-Ang1.swf, anxiety - DTM1-Anx1.swf, or sadness - DTM1-Sad1.swf), the movie only gets a bit through the first block of text before crashing with a null reference error (I alluded to this problem in my above post). Again, if I run any of the emotion movies directly, there is no crash. When running the emotion movies directly, the continue button is supposed to lead the user to DTM1-EmotTrans.swf. It does do this, but the movie loops repeatedly instead of following the stop() and goto commands that are supposed to display the buttons the user has not yet clicked (i.e., if they first clicked the sad button, only the anxiety and anger buttons remain on the screen in DTM1-EmotTrans.swf) --- again, if I open DTM1-EmotTrans.swf directly, it displays the correct buttons. Finally, when clicking on one of the emotion buttons from the DTM1-EmotTrans movie, where it should be taking me back to the emotion movies, it instead crashes with a null reference error. So basically none of my linking is working and I'm not sure what I'm doing wrong :/ ..... Anyway, below are the .swf files -- I really appreciate your help, I'm pretty desperate.
DTM1-Start.swf -
DTM1-Intr1.swf -
DTM1-Sad1.swf -
DTM1-Anx1.swf -
DTM1-Ang1.swf -
DTM1-EmotTrans.swf -
With gratitude,
Ricky
Also, I'm not sure if you need the project files as well to figure out what might be going on - I didn't post them since they're significantly bigger, but if you need them I'll put them up.
-Ricky
Yes, the .fla files will be the useful things for me to look at. Send me a private message through this forum and I'll send you an address to my file uploader. That way I'll get the files directly. | http://forums.adobe.com/message/4576293 | CC-MAIN-2013-48 | en | refinedweb |
Hi Ivan, Omari, Following up on this question, is it possible to have the log automatically set the name of the workspace to the programs running in it? Currently I use the dynamic log with xmonad-log-applet (), but it would be so much more useful if rather than *me* making sure I place each window in the right workspace the system would name the workspaces after the windows each of them contains... I don't understand Haskell, but I believe this is the part of my xmonad.hs that sets the names of the workspaces as they are shown on the taskbar: logHook = dynamicLogWithPP $ defaultPP { ppOutput = \ str -> do let str' = "<span font=\"Terminus 9 Bold\">" ++ str ++ "</span>" str'' = sanitize str' msg <- newSignal "/org/xmonad/Log" "org.xmonad.Log" "Update" addArgs msg [String str''] -- If the send fails, ignore it. send dbus msg 0 `catchDyn` (\ (DBus.Error _name _msg) -> return 0) return () , ppTitle = pangoColor "#003366" . shorten 50 , ppCurrent = pangoColor "#006666" . wrap "[" "]" , ppVisible = pangoColor "#663366" . wrap "(" ")" , ppHidden = wrap " " " " , ppUrgent = pangoColor "red" } Any ideas would be most welcome : ) best, lara --- On Sat, 1/30/10, Ivan Miljenovic <ivan.miljenovic at gmail.com> wrote: > From: Ivan Miljenovic <ivan.miljenovic at gmail.com> > Subject: Re: [xmonad] make DynamicLog prefix workspace names with number > To: xmonad at haskell.org > Date: Saturday, January 30, 2010, 2:23 AM > 2010/1/30 Omari Norman <omari at smileystation.com>: > > Is there any easy way to get DynamicLog to prefix the > names of > > workspaces with numbers? For instance, I have > workspaces named "web", > > "mail" and "local"; I want these to show up in my > xmobar as "1:web", > > "2:mail", and "3:remote". > > I do this, but I'm not at my own machine at the > moment. I'll email > that part of my config later on when I _am_ at my own > computer. > > > -- > Ivan Lazar Miljenovic > Ivan.Miljenovic at gmail.com > IvanMiljenovic.wordpress.com > Marie von Ebner-Eschenbach - "Even a stopped clock is > right twice a > day." - > _______________________________________________ > xmonad mailing list > xmonad at haskell.org > > | http://www.haskell.org/pipermail/xmonad/2010-January/009685.html | CC-MAIN-2013-48 | en | refinedweb |
24 June 2009 05:03 [Source: ICIS news]
SINGAPORE (ICIS news)--Fujian Refining and Petrochemical Co (FREP) has delayed the start up of a new 800,000 tonne/year steam cracker to first half of August due to ongoing construction works, a source close to the company said on Wednesday.
“The cracker is not ready,” he said in Mandarin, adding that it was initially slated to start up in June.
Commercial production at derivative polyethylene (PE) and polypropylene (PP) lines at the same site was also pushed back to second half July or August due to technical issues, company sources had said earlier.
The petrochemical complex in Quanzhou, southern ?xml:namespace>
FREP is a joint venture of ExxonMobil (25%), Saudi Aramco (25%) and Fujian Petrochemical (50%).
Fujian Petrochemical is a 50:50 joint venture between the | http://www.icis.com/Articles/2009/06/24/9227043/chinas-frep-delays-cracker-start-up-to-h1-aug-source.html | CC-MAIN-2013-48 | en | refinedweb |
29 August 2008 12:47 [Source: ICIS news]
(adds background from paragraph 3)
LONDON (ICIS news)--A major European monoethylene glycol (MEG) producer said on Friday it had agreed a September contract at €875/tonne ($1,287/tonne), down €75/tonne from August, with a major consumer.
The source said it has also followed the initial August contract at €950/tonne FD (free delivered) NWE (northwest ?xml:namespace>
Immediate confirmation from the consumer was not available.
“MEG is a global commodity and this is why we adjusted the price down, with Asian prices lower," said the seller. "However, it means we see a September contract price in Europe which is well below spot levels."
There has been an unusual delay in the further confirmation of Europe's August contract, with most buyers feeling that an increase was not a fair reflection of market fundamentals as Asian prices fell sharply and suppliers nominated decreases for September.
The market was split as sellers were focused on the record upstream ethylene contract price for the third quarter.
More reaction was expected to follow from other market sources.
($1 = €0.68) | http://www.icis.com/Articles/2008/08/29/9152809/Europe-Sept-MEG-settles-75t-down.html | CC-MAIN-2013-48 | en | refinedweb |
Archived:How to create menu with callback function that receives parameters
Tested with
Devices(s): Nokia N96
Compatibility
Platform(s): S60 2nd Edition, S60 3rd Edition
Article
Keywords: appuifw, menu
Created: diegodobelo (14 Jan 2009)
Last edited: hamishwillee (31 May 2013)
Overview
The appuifw.app.menu receives a list of tuples.
Each tuple contains two elements.
- The first element is a unicode that is the option name.
- The second element is a function name that will be called when the option be selected.
The problem is that when passing parameters to a function, necessarily the function will be called and will return something (like None). In this case, when selecting an option, what will happen is this: something().
To solve the problem we need to create a function that receives parameters, creates an internal function (which uses the received parameters) and returns the internal function name. This is showed by the PySymbian code example below.
Code Snippet
#import modules
import appuifw, e32
#initial ball size
ball_size = 10
#define the menu callback function
def size(value):
#create an internal function which be returned
def internal_function():
global ball_size
ball_size += value
print ball_size
#return internal function name
return internal_function
#create the menu
appuifw.app.menu = []
#create menu items
option1 = (u"size up", size(2))
option2 = (u"size down", size(-2))
#append menu items to menu
appuifw.app.menu.append(option1)
appuifw.app.menu.append(option2)
#set up application exit
lock = e32.Ao_lock()
appuifw.app.exit_key_handler = lock.signal
lock.wait()
Postconditions
Following images show the outcome of above code. | http://developer.nokia.com/Community/Wiki/How_to_create_menu_with_callback_function_that_receives_parameters | CC-MAIN-2013-48 | en | refinedweb |
14 August 2012 18:50 [Source: ICIS news]
LONDON (ICIS)--Indorama Ventures’ second-quarter net profit fell 28%, to $39m (€32m), from the first quarter, mainly because of inventory losses and markdowns in the wake of the global commodity downturn, the Thailand-based international polyester major said on Tuesday.
However, Indorma Venture’s "core net profit" - before exceptional items and inventory gains and losses - was $54m for the three months ended 30 June, up from $17m in the first quarter, partially driven by a 13% quarter-on-quarter increase in volumes.
Sales were $1.74bn, compared with $1.69bn in the first quarter.
Indorama said that its segment earnings before interest, tax, depreciation and amortisation (EBITDA) increased quarter-on-quarter across all its platforms with the addition of new businesses.
"The last 12 months have witnessed extreme volatility that closely shadows what we experienced in the second half of 2008,” said CEO Aloke Lohia.
“There was a considerable squeeze in inventory across the industry then, which led to a total collapse of commodity prices, and we have seen it happen again in fourth quarter of 2011 and yet again in second quarter of 2012,” Lohia said.
“This time, though, we are also experiencing a slowdown in growth in Asia, particularly ?xml:namespace>
Purified terephthalic acid (PTA) spreads in June 2012 reached their historical bottom of $50/tonne, “a stark decline from the record high of $400 in March 2011”, he added.“We will have to be more patient and wait for a rebound in growth, something we expect to have a positive impact on the company, just as in 2009 and 2010,” | http://www.icis.com/Articles/2012/08/14/9586981/thai-indorama-q2-profit-falls-28-from-q1-on-global-commodity-woes.html | CC-MAIN-2013-48 | en | refinedweb |
In this blog we will examine the process to “Consume” a web service located on the web.
To begin, create the default web site in Visual Web Developer Express 2010, the process is basically the same as with any of the Visual Studio web tools. The installation is faster, about 10 minutes from start of download to completion. To get started select the ASP.NET Web Site, and I will be using the Visual C# language, mainly because of its built in support of refactoring. For the next few figures, I will be comparing Visual Web Developer express 2010 (VWD 2010) to Visual Web Developer express 2008 (VWD 2008)
Figure 1: Visual Web Developer Express 2010 New Web Site Dialog Box
Figure 2: Visual Web Developer Express 2008 New Web Site Dialog Box
Figure 3: VWD 2010, Solution Explorer has more files in it then VWD 2008, many for security purposes
Figure 4: VWD 2008 doesn't have the security items or the site master
From now on in this blog entry I will be using VWD 2010 express and refer to it as VWD, you will be able to use VWD 2008 Express, the pictures will be slightly different.
You will need to add a “Standard” button to the Default.aspx and a Label to use the following code, here is what the code will look like when you are done, but don’t enter the code yet:
protected void Button1_Click(object sender, EventArgs e)
{
us.z25.Service ws = new us.z25.Service();
Label1.Text = ws.HelloWorld();
}
To create the Button1_Click event, that you see in the introduction to this section, you will need to:
· Add a button to Default.aspx and double click on it in the Visual Studio or Visual Web Dev form (the VSD 2008 Express default is different than the VSD 2010 default web page).
· Add a label, you can change the text, but that isn’t required.
· You DO NOT need to actually type in the code for the event, you will need to type a little code to consume the web service.
Figure 5: How your form might look in VWD 2010 Express
Figure 6: Right click the project at the top of the Solution Explorer and then select Add Web Reference
Here is the Add Web Reference dialog box, for the URL, make sure to use your own URL after 1/1/2010, you can check to see if I have anything at z25, you will need to navigate to the site to see what I have. If you get an error, it could mean that I have taken service.asmx off the site or moved it.
Make user to note the Web Reference name out of the textbox just above the Add Reference button.
Figure 7: Add Web Reference, type in the URL plus the service.asmx. Note that a change needs to be made to our hello world application
First off let’s talk about the properties of the button and label, for this example, don’t change any of the properties or IDs. Make sure that they look like the following:
This is fairly simple coding exercise. Double click the button on Default.aspx, then using the Web Reference name, in this case: us.z25.Service (because Service was the name of the Web Method in the Web service created by VWD by default)
Figure 8: When you double click the button you will see this screen, or a similar screen.
Figure 9: Now construct the Service Class exposed by the Web Service
Figure 10: Now add the Service class, do it on both sides
Figure 11: For reference purposes I have included the default web service from the previous blog entry here, note that there is a public class Service that exposes the method HelloWorld:
Press the F5 key or ctrl+F5, after a few seconds you will see the following in VWD 2010
Click the button on your new webpage. After a few seconds the web service at will return the phrase: Hello World and look like the following:
Conclusion:
Great work! You have successfully consumed a simple web service!
Great post. Thank you for this.
Cool post. Thanks, very helpful :)
What F***8 Cr**** images
RObert P, thanks for the feedback on those first three images, I will see if I can improve them.
Awesome feedback, got my attention.
Now a little feedback for you: No need for the asterik cursing, personally I view all comments and take a look at what I think you are trying to say.
For many bloggers this type of language is ignored; makes the blogger upset; etc. In balance an equal number bloggers like any feedback.
In the future, if you could focus your comments a little on the problem that would make the discourse on the web more polite in general.
However, feel free to post as you like here at devschool. Communications is a part of learning.
Instead of: "What F***8 Cr*** images"
You might consider writing: "Images in this blog are blurred, it makes your blog look stupid."
See the latter makes your point, adds the important insult we all like to throw out (I have done my share) and allows me to quickly get to the point that you want to make.
F***8 is short for one of the "seven" words and lends an unneccessary weight to the feedback. For example you might want to save that for one of the various forums that are talk about the POS Ghostbusters 3 video game or the latest government/college/corporate screw-up.
Cr*** images: Now this was appropriate weight, and I think you could use the full word without violating the "seven" word rule.
Great stuff, thanks. Would you be able to extend or convert this example, so that a batch job could use this code instead of invoking via ASP.NET?
lo que pasa es que tengo una aplicación en visual studio 2010 y la compilo desde visual y funciona bien (la aplicación es un servicio web), y apenas la publico el iis 8 (windows 8 ) el servicio no me deja entrar ala base de datos que esta creada en sqlserver 2008 r2, no se si es que local mente el iis tenga una configuración especifica sobre la entrada a consumir el servicio
Pido perdón, pero no tengo una respuesta para usted en esto. He cambiado de enfoque.
Legal Note:
Restrictions: | http://blogs.msdn.com/b/devschool/archive/2010/01/01/consuming-a-hosted-web-service-using-visual-studio-specifically-vwd-2010.aspx | CC-MAIN-2013-48 | en | refinedweb |
05 November 2009 12:58 [Source: ICIS news]
LONDON (ICIS news)--Artenius’ polyethylene terephthalate (PET) plant at El Prat de Llobregat, Spain, could be restarted by the end of the year to accommodate demand, a source from the Spanish company said on Thursday.
“There is a possibility that El Prat could restart by the end of December. It seems the company is financially much healthier...We are sold out and need to prepare for the new season ahead,” the source said.
The 150,000 tonne/year unit was shut down in September because of poor demand.
There was still no final agreement for the future of Artenius’ parent company, La Seda de Barcelona.
In October, La Seda was given a further four-week reprieve from banks as it tried to resolve issues with its finances.
The company’s facility at ?xml:namespace>
La Seda has put up for sale four of the company’s other plants, which are located in
The European PET market was looking firm in November, despite this being the end of the bottling season, sources agreed.
“Now, in the off season, all our customers are purchasing our material,” a reseller noted.
Producers were seeking increases of up to €50/tonne ($75/tonne) this month because of higher feedstock costs and more expensive imports, they said.
October prices for European material spread from €850-920/tonne FD (free delivered)
($1 = €0.67)
For more | http://www.icis.com/Articles/2009/11/05/9261303/artenius-to-resume-pet-output-at-el-prat-de-llobregat.html | CC-MAIN-2013-48 | en | refinedweb |
This article introduces a small and simple library to help logging and
instrumentation in your applications. Having recently reviewed the area of
logging and instrumentation, I discovered some very complete libraries, such as
the Enterprise Instrumention Framework components and the Microsoft Logging
Block. Although these libraries were vast and feature complete, they were
overkill for the relativly simple logging that I needed. I created the Slogger
library for my own use and have put it here as it may be useful to others.
The Slogger is a two part library containg the core library, and a library
containing the Sink objects. The supplied solution also includes a sample
application that demonstrates writing Errors and Events to the Event Log. To run
the sample application, simply build the supplied solution with Visual Studio
.NET 2003.
The library is driven by an XML file that describes two major parts,
Events and Sinks. Events are categorised er, events. To give an
example, an event can be called ApplicationError or perhaps just Error
or Information. Sinks are where an event is consumed. Examples of
Sinks could be where the information in an event is written to the disk or event
log.
Event are are linked to one or more Sinks. In the sample file, we have an
event named Error. This event is wired to two Sinks, EventLogInformation
and FileLog. This means that when an Error event is generated, it
will be consumed by the EventLogInformation and FileLog sinks
which both in turn do something with the event. Here is an excerpt from the
sample file that describes an event and how it is linked to two sinks: <
<Event Type="Information">
<Sink Name="EventLogInformation" />
<Sink Name="FileLog" />
</Event>
Below is an excerpt from the sample file that describes the sinks:
<Sink Type="EventLogInformation">
<Class Name="Slogger.Sinks.EventLogSink, Slogger.Sinks">
<InitialiserParameter Index="0" Name="EventLogSourceName">
Slogger Samples</InitialiserParameter>
<InitialiserParameter Index="1" Name="EventLogEntryTypeEnumeration">
Information</InitialiserParameter>
</Class>
</Sink>
The Sink described above is called EventLogInformation and has a class
name of EventLogSink in the Slogger.Sinks assembly. It
takes two paramaters that are passed to the Initialise method of
the class. Below is the sample code that initialises the library with the
settings file and fires two events.
EventLogSink
Slogger.Sinks
Initialise
Trace.Listeners.Add( new SloggerListener(
@"..\..\samples\cs\EventLogAndFileLog\settings.xml" ) );
That's it. The library is now initialized. Of course, there's a few
house-keeping things that must be done, like adding a reference to the
System.Diagnostics namespace and setting up event log sources:
System.Diagnostics
using System.Diagnostics ;
if ( !EventLog.SourceExists("Slogger Samples" ))
EventLog.CreateEventSource( "Slogger Samples", "Application") ;
Run the sample application. It puts two entries in the event log. Not much of
a sample, but remember apart from house-keeping code, the entries got the event
log with just two actual lines of code and a bunch of settings in an XML file.
The Slogger library is extensible. It can load your custom Sink classes and
pass event information to them. Below is the interface that your custom classes
must implement in order for the Slogger library to be able to load an initialize
them:
public interface ISink
{
void Initialise( SinkManager sinkManager, params object[] arguments) ;
void Write( string message ) ;
void WriteLine( string message ) ;
}
Each of your classes must implement the above methods. The Write and
WriteLine have been implemented. Other overrides from the Trace
class have not yet been implemented, but should be simple to add should
you need them. The Slogger library will create your class and pass any
constructor parameters (if any are specified in the XML). Once constructed, it
will call Initialize on your class, again, with any parameters you may have
specified in the XML. The XML below is used in the sample application. This XML
describes the FileLog sink and specifies the parameters to pass to the
constructor of the FileLogSink class:
Trace
FileLogSink
<Sink Type="FileLog">
<Class Name="Slogger.Sinks.FileLogSink, Slogger.Sinks">
<ConstructorParameter Index="0" Name="Filename">
c:\out.txt</ConstructorParameter >
</Class>
</Sink>
Whilst this library is only small (and simple), it does solve a problem many
will face. I learnt that the Microsoft Logging Block is reliant on EIF and Web
Services Enhancements 2.0 Technology Preview. This library is ideal for those
who want simple and light logging without the overhead of involved set-up
procedures and reliance (if only a build reliance) on technology preview code.
This is the first release of this library. I may update the core ISink
interface in the future to support all of the overrides available to the
Trace.WriteLine method. It would be nice for those who create their
own custom Sink classes to share them with the community.
ISink
Trace | http://www.codeproject.com/Articles/5698/Slogger-The-simple-extensible-logger?fid=30603&df=90&mpp=25&noise=1&prof=True&sort=Position&view=None&spc=None | CC-MAIN-2013-48 | en | refinedweb |
This article has been dead for over three months: Start a new discussion instead
Reputation Points: 10 [?]
Q&As Helped to Solve: 0 [?]
Skill Endorsements: 0 [?]
0
Hello,
Reputation Points: 1,687 [?]
Q&As Helped to Solve: 1,756 [?]
Skill Endorsements: 56 [?]
•Moderator
0
This is an example on how to run Python code externally ...
# pipe the output of calendar.prmonth() to a string: import subprocess code_str = \ """ import calendar calendar.prmonth(2006, 3) """ # save the code filename = "mar2006.py" fout = open(filename, "w") fout.write(code_str) fout.close() # execute the .py code and pipe the result to a string test = "python " + filename process = subprocess.Popen(test, shell=True, stdout=subprocess.PIPE) # important, wait for external program to finish process.wait() print process.returncode # 0 = success, optional check # read the result to a string month_str = process.stdout.read() print month_str
More general ...
import subprocess # put in the corresponding strings for # "mycmd" --> external program name # "myarg" --> arg for external program # several args can be in the list ["mycmd", "myarg1", "myarg2"] # might need to change bufsize p = subprocess.Popen(["mycmd", "myarg"], bufsize=2048, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, close_fds=True) # write a command to the external program p.stdin.write("some command string here") # allow external program to work p.wait() # read the result to a string result_str = p.stdout.read()
You
Related Articles | http://www.daniweb.com/software-development/python/threads/198759/control-external-program-from-python-script | CC-MAIN-2013-48 | en | refinedweb |
I need to implement fast two-level loops, and I am learning using seq to make calls tail-recursive. I write programs to compute main = print $ sum [i*j|i::Int<-[1..20000],j::Int<-[1..20000]] This program (compiled with -O2) runs twenty times slower than the unoptimized (otherwise the loop gets optimized out) C version. But it seems to run in constant memory, so I assume that it has been turned into loops. #include <stdio.h> int main(){ int s=0; for(int i=1;i<=20000;++i){ for(int j=1;j<=20000;++j){ s+=i*j; } } printf("%d\n",s); return 0; } Then I write main = print $ f 1 where f i = let x = g 1 in x `seq` (x + if i<20000 then f (i+1) else 0) :: Int where g j = let x = i*j in x `seq` (x + if j<20000 then g (j+1) else 0) :: Int This version runs out of memory. When I scale the numbers down to 10000, the program does run correctly, and takes lots of memory. Even if I change the seqs into deepseqs, or use BangPatterns (f !i =... ; g !j = ...), the situation doesn't change. A monadic version import Control.Monad.ST.Strict import Control.Monad import Data.STRef.Strict main = print $ runST $ do s <- newSTRef (0::Int) let g !i !j = if (j<=10000) then modifySTRef s (+1)>>(g i (j+1)) else return () let f !i = if (i<=10000) then g i 1>>(f $ i+1) else return () f 1 readSTRef s also runs out of memory. So how can I write a program that executes nested loops efficiently? | http://www.haskell.org/pipermail/haskell-cafe/2012-September/103541.html | CC-MAIN-2013-48 | en | refinedweb |
One note: Windows only makes it look like you have the name Web-inf.
Do a "Properties" on it (right click, choose properties), and you might
see it is actualy "WEB-INF". So, you might be ok. Windows doesn't
change the folder, it just displays it innacurately.
- Jeff Tulley
>>> aprw00@dsl.pipex.com 8/29/03 5:36:36 PM >>>
Tomcat can usually find JSPs very easily. Is welcome.jsp also in
CATALINA_HOME/webapps/onjava/ ? Try the following:
private String target = "welcome.jsp";
If 'onjava' is your namespace then this might work.
-----Original Message-----
From: Jim Si [mailto:Jim.Si@CalgaryHealthRegion.ca]
Sent: 29 August 2003 19:04
To: Tomcat Users List
Cc: Stuart MacPherson
Subject: Re: First Servlet 404 error
<snip />
Windows somehow change WEB-INF to Web-inf automatically. I could not
use
WEB-INF.
Jim | http://mail-archives.apache.org/mod_mbox/tomcat-users/200308.mbox/%3Csf4f9056.010@prv-mail20.provo.novell.com%3E | CC-MAIN-2013-48 | en | refinedweb |
ClojureQL is now coming dangerously close to version 1.0. Despite its young age, its already been adopted several interesting places, among others in the Danish Health Care industry. Before we ship version 1.0, I want to walk through some of the features and design decisions as well as encourage comments, criticisms and patches.
(logo: SQL in parenthesis)
Why do we need another DSL is a good question, to which there is a good answer. If you look through a PHP project for instance, how much of the code is actually PHP? 85%? 65% perhaps? It varies from project to project of course, but its not a big number if 100% is to be our goal and the reason is, PHP has no one-size-fits-all abstraction over the common SQL commands.
Thats a big deal, because at a bare minimum the quality assurance team backing a PHP project has to support 2 languages in detail and although SQL can look simple on the cover don't let that fool you, there are many pitfalls. Then add shell-script for the OS integration and Ruby perhaps for some multithreading tasks and suddenly PHP becomes no more than glue.
ClojureQL enables the Clojurian to stay within his Clojure-domain even when interfacing with his SQL database, greatly simplifying the code-base. Its simpler partly because now you only have Clojure code and not Clojure + SQL, but also because SQL doesn't really exist. If I ask you to write out an SQL statement which creates a table with a single column containing an auto incrementing int, what would you do ?
If you were targeting a MySQL database, you could write
CREATE TABLE table1 (id int NOT NULL AUTO_INCREMENT, PRIMARY KEY ('col'));
But then half-way through the project, the Customer wants to target PostgreSQL instead, so you have to write
CREATE TABLE table1 (id int PRIMARY KEY DEFAULT nextval('serial'));
Or what if one service has to run against Oracle instead, or Derby, or Sqlite ? For all of the servers there are various smaller or larger differences. Some use double quotes ", others single. Sometimes its the entire construction of the statement which needs to be altered. Whatever the issue great or small, it means that you have to review your code and modify it accordingly.
(create-table table1 [id int] :non-nulls id, :primary-key id, :auto-inc id)
The code above is simple, flexible and will run on all supported backends. For version 1.0 we want to support 3 backends: MySql, Postgresql and Derby, but its not a complex matter to extend CQL to more than these. By having a modular backend, we've pretty much guaranteed that you can keep your team working in the same domain throughout your databasing needs, which is a big win. If the customer for whatever reason needs to change target or incorporate an entirely new target, you don't need to rewrite your code, ClojureQL will handle that:
Code stays in the same domain, and the liquid concept of 'SQL code' becomes concrete: Lisp.
Any change in the target destination rarely warrants change in the user-code!
Currently we have a nice and uniform frontend, which is what the developers will be working with on a daily basis. Its here you'll find all your SQL-like functions: query (SELECT), insert-into, update, create-table, group-by, etc. The front-end only has 1 job and thats to categorize your input in a format which the backend understands (call it AST).
Lets say I want to pick out the user Frank from my accounts table:
>> (query accounts [username passwords] (= username "frank")) {:columns [username passwords], :tables [accounts], :predicates [= username "?"], :column-aliases {}, :table-aliases {}, :env ["frank"]}
As you can see, this doesn't actually do any work. All the information is seperated into keys in a hash-map for later compilation by the backend. In order to compile the statement you need an open connection to a database, because only through that can we determine exactly what the final SQL Statement should look like. You can test it like so
>> (with-connection [c (make-connection-info "mysql" "//localhost/cql" "cql" "cql")] (compile-sql (query accounts [username passwords] (= username "frank")) c)) "SELECT username,passwords FROM accounts WHERE (username = ?)"
That opens and closes a connection in order to compile the statement. ClojureQL always uses parameterized SQL Statements, meaning that variables appear as question marks in the text and are passed as parameters to the actual execution call. If you examine the AST output from Query, you'll see the parameters lined up sequentially in the :env field, in this case just "frank".
Query is pretty flexible accepting symbols like *, custom predicates, column aliases etc etc. Since everything is rolled in macros you have to explicitly tell CQL what you want evaluated:
(let [frank "john"] (with-connection [c (make-connection-info "mysql" "//localhost/cql" "cql" "cql")] (prn (compile-sql (query accounts [username passwords] (= username frank)) c)) (prn (compile-sql (query accounts [username passwords] (= username ~frank)) c)))) "SELECT username,passwords FROM accounts WHERE (username = frank)" ; finds frank "SELECT username,passwords FROM accounts WHERE (username = ?)" ; finds john
All the macros call drivers, so everything is evaluated at run-time instead of compile-time. Oh and by the way: Pulling out and watching a compiled statement is more verbose than just running the code:
>> (run :mysql (query accounts * (= status "unpaid"))) {:id 22 :name "John the Debtor" :balance -225.25} ...
Since all of our statements initially are just AST representations, the way to modify their core behavior with functions such as Group-By or Order-By is to pass the AST around to these functions
(let [select-statement (query accounts *)] (with-connection [c (make-connection-info "mysql" "//localhost/cql" "cql" "cql")] (compile-sql (group-by select-statement username) c))) "SELECT * FROM accounts GROUP BY username"
That hopefully quickly becomes second nature to you. I've considered rearranging the argument order, to make it easier to read but nothing is decided and I'm open to suggestions.
In broad-strokes we have implemented the following sql-statement-types in the frontend
query
join
ordered-query
grouped-query
having-query
distinct-query
union
intersect
difference
let-query
update
delete
create-table
create-view
drop-table
drop-view
alter-table
batch-statement
raw-statement
If you're already comfortable with SQL you should recognize most of those. I'll explain the exceptions:
Let Query: LetQuery is a way of binding a local to the result of a query, very similar to Clojure's 'let'.
(let-query [password (query accounts [username password] (= username "frank"))] (println "Your password is: " password))
Batch Statement: Batch statements are (as you may have guessed) a cluster of statements which will execute sequentially with a single call to run. Initially we used a batch of create-table + alter-table to be db-agnostic in regards to table-creation, but we later sacrificed this approach by implementing backend modules.
Raw Statement: A Raw Statement is what you use, when you've read all the documentation and frontend.clj, without finding a function that does exactly what you need. It'ins a brute and I hope you won't use it:
(raw "SELECT * FROM accounts USING SQL_NINJAv9x(0x333);")
If you end up using Raw for something which isn't specific to your setup, please drop an email so I can schedule it for assimilation.
And finally, I put alter-table in italics because its not done yet, but it will be for 1.0. No two SQL implementations even remotely agree on how to use alter and yet I want to expose it to you in a uniform way, so thats a challenge - Input is encouraged.
Documentation is...under way, but generally we try to put all significant functions into the the demos, so that you can see them in action. To get started with something like Derby takes no effort, so I'll show MySql instead:
From your shell:
$ mysql -u root -p Enter password: ********* Welcome to the MySQL monitor. Commands end with ; or \g. mysql> CREATE DATABASE cql; Query OK, 1 row affected (0,00 sec) mysql> GRANT ALL PRIVILEGES ON cql.* TO "cql"@"localhost" IDENTIFIED BY 'cql'; Query OK, 0 rows affected (0,29 sec)
Now we have a database called cql, a user called cql and his password is cql. Then fire up your Clojure REPL
user> (use :reload 'clojureql.demos.mysql) nil user> (load-file ".../clojureql/src/clojureql/demos/mysql.clj") nil user> (clojureql.demos.mysql/-main) SELECT StoreName FROM StoreInformation {:storename "Los Angeles"} ....
The demos all do the same thing, so first you might want to read through demos/mysql.clj to see how we load the driver, intialize the namespace and so on. MySql is special because it shows off a global persistent connection, which means it stays open for as long as your program runs or until you close it. The other demos open/close connections on every call, depending on your project you will prefer one of the two.
Once the driver is loaded and the connection-string defined, nothing seperates the demos for Derby, Mysql or Postgres and so they all load demos/common.clj. Once you've read and understood everything that goes on there, you'll have a good grasp on how to handle most situations with ClojureQL.
For Version 1.0 I want ClojureQL running on a quality build-system and have spent the past couple of days researching what that might be. Currently we are interfacing directly with Ant & Ivy using some complicated XML configurations. This setup goes against every principle I have because of its inflexibility, lack of elegance and complexity.
The 2 systems which I found most interesting, were Gradle and Leiningen. I get the impression that Leiningen is widely adopted in the Clojure Community already and rightfully so. It's a DSL which lets you configure your build using Clojure code and its easy to pick up and build with. On the down-side its very fresh off the press so I would have great reservations using it in a complex scenario of several projects and in case you need some Ruby, Python, whatever code Leiningen can not support you. So I opted for Gradle.
Current users: Please notice that during the move we removed 'dk.bestinclass' from the namespace declarations.
Gradle is Groovy because it lets you write build-scripts in Groovy, but supports a multitude of languages. Its been around for a while and is now coming into maturity. It gracefully lets you handle multiple projects, projects written in multiple languages, dependencies, distribution, etc. Best of all, my co-pilot on ClojureQL Mr. Meikel Brandmeyer has written a plugin for Gradle called Clojuresque. Clojuresque enables Gradle to read Clojure-code well enough to identify namespaces etc, letting us AOT compile the project neatly into a Jar which is what we need. He's also added support for distributing Jars to the newly started Clojars site (Leiningen does this too). I don't have much to say about the Groovy build-scripts except they beat XML, and if you don't know Groovy is much like Java without much of the boilerplate.
To avoid any confusion, I'll quickly run you through how to install Gradle + Clojuresque. If you don't need this at the moment, feel free to bookmark the page and skip past it.
First pick a directory where you want to setup. Grab the latest Gradle which depends on nothing, it comes with a small Groovy installation:
wget
unzip gradle-0.8-all.zip && rm gradle-0.8-all.zip
Gradle really only needs your PATH variable to point to its /bin directory, but build-scripts sometimes ask for GRADLE_HOME, so set them up
export PATH=$PATH:/PATH/TO/gradle-0.8/bin
export GRADLE_HOME=/PATH/TO/gradle-0.8
Then with Gradle set up, you need to get Clojuresque and compile it
wget
unzip v1.1.0.zip && rm v1.1.0.zip
cd clojuresque
gradle build
-- Massive output, BUILD SUCCESSFUL, jar in: build/libs/clojuresque-1.1.0.jar
clojuresque-1.1.0.jar is all Gradle needs in order to understand your Clojure projects. If all you need Clojuresque for is building ClojureQL then don't bother fetching it, Gradle will handle that automatically once you build ClojureQL.
Now you've seen a little bit of ClojureQL and I hope it has caught your interest. We're dedicated to making this as stable, elegant and featureful as possible in order to get you talented Lispniks to stop writing SQL - That said, contributions (even if its just ideas) are most welcome. These are the facts you should know
ClojureQL is.... | http://www.bestinclass.dk/index.clj/2009/12/clojureql-where-are-we-going.html | CC-MAIN-2013-48 | en | refinedweb |
Windows Azure has introduced a nice set of services on top of the Azure platform. These include the Mobile Services, the Service Bus, Media Services among a host of others.
Media Services primarily offers on demand streaming, variable bit rate or smooth streaming, encoding to various formats including smooth streaming and storage capabilities. Under, but under the covers, it uses Azure App Fabric for compute and Blob Storage for hosting data. Thus it’s a Platform as a Service(PaaS) offering additional capabilities over infrastructure provided Azure.
Today we’ll see how to leverage Media Services to build an ASP.NET MVC application that allows users to upload and encode their videos in a Web Portal and playback the content on demand.
The diagram above gives us an overall view of how we can use Azure Media Services. The major steps involved are:
1. User logs into the ASP.NET MVC Web Application
2. Uploads their Video ‘assets’ to Azure Blob storage. The fact that it is going to Blob storage is transparent to the user, it’s a normal file upload for them.
3. Once uploaded, we can optionally encode the Video for smooth streaming.
4. Finally when the video (either smooth streaming version or the progressive download version) is requested by the user, it is streamed back to them.
5. On the client side, a Browser plugin is used to serve up the content.
With the basic premise out of the way, let’s get started with our application.
- We’ll need an active Azure account, if you don’t have one, you can avail a 90 day free trial at. Keep an eye out for outgoing data and encoding charges if used.
- Login to the Azure Management Portal and click on the [+ New] button in the bottom toolbar and add a new Media Service by navigating as follows
App Service > Media Service > Quick Create
- In the Quick Create panel, provide the Name of your media service, the Region where you want it to be hosted and the Azure Storage account to use. If you don’t have a Storage Account already, you’ve to create a new one, else you can pick one of your existing Storage Accounts.
- Finally click on the ‘Create Media Service’ button to initiate service creation. Once the service is created, we’ll see a new Media Service and new Storage Service (if you opted for a new one) created.
Note: I am using an existing media service and storage account hence the names are different from the ‘Create New’ panel.
- Final step is to obtain the Service Keys. Click on the Media Service from the ‘All Items’ list.
Here you have multiple options to retrieve the Keys, you can download the sample project or click on the Manage Keys button to just view the keys.
Note, the keys and save them securely for easy access later in the app. We are now set on the Azure side. Let’s setup our MVC Application.
To setup the application, we start off with the ASP.NET MVC 4 template and use the ‘Internet’ project type. This gives us the forms authentication module out of the box. Once the project is setup, we add the following Nuget package for the media services dependencies
PM> install-package WindowsAzure.MediaServices
This installs all packages necessary to use Azure Media Services APIs
Our data model will be simple for this example. We’ll save the UserId, Title, FileUrl and a Boolean indicating if the file is visible to others, for every media asset uploaded.
public class MediaElement{ public int Id { get; set; } public string UserId { get; set; } public string Title { get; set; } public string AssetId { get; set; } public bool IsPublic { get; set; }}
We’ll generate the CRUD pages using the default ASP.NET MVC Scaffolding as follows
- Build the application
- Right click on Controller folder and select ‘Add’ > ’New Controller’
- Update the Template to use Entity Framework and the Model to use the MediaElement entity. Provide a new Data Context Class name and click Add to complete the codegen.
Once the code is generated open the MediaController class add the Authorize attribute so that all actions can executed only when a user is logged in.
[Authorize]public class MediaController : Controller{…}
Open the _Layout.cshtml file and add an Action Link to navigate to the Media Browser
<li>@Html.ActionLink("My Media", "Index", "Media")</li>
You can also update the Title, remove the About and Contact links etc.
Update the Index.html for the Home Controller (the landing page) by replacing the generic markup with something indicating that we have a ‘super special’ Media hosting Application.
This wraps up the basics, let’s dive in and see how we can implement the Media management part.
As per our architecture diagram, user login has been implemented thanks to ASP.NET project template. Next thing to implement is the uploading of Media.
Media files, especially video files, can be large, spanning hundreds of MBs. As a result, it is not possible to upload these files in one go due to limitation of Request size imposed by IIS. So we will upload the media files in chunks. The topic of uploading files in chunk has been discussed in details in our article Uploading Big files to Azure Storage from ASP.NET MVC. I will use the same technique, so I won’t be reproducing the code here. If you are new to Azure Cloud Storage, I suggest you go through the previous article.
We will modify the default Application flow by changing the ‘Create’ link to ‘Upload’ in the Index.cshtml and navigating to a New page called Upload.
<p> @Html.ActionLink("Upload New Media", "Upload")</p>
Views/Media/Index.cshtml
[HttpGet]public ActionResult Upload(){ return View();}
Controllers/MediaController.cs
@{ ViewBag.Title = "Upload";}<h2>Upload New Media</h2>@using (Html.BeginForm()){ <fieldset> <legend>Media Element</legend>
<div class="editor-label"> Select Media File to Upload: </div> <div class="editor-field"> <input type="file" id="selectFile" value=" " /> <input type="button" id="fileUpload" value="Upload" /> </div> <div id="progressBar" style="width: 50%; height: 20px; background-color: grey"></div> <br /> <label id="statusMessage"></label> </fieldset>}<div> @Html.ActionLink("Back to List", "Index")</div>@section Scripts { <script src="~/Scripts/media-upload.js"></script> @Scripts.Render("~/bundles/jqueryui") @Scripts.Render("~/bundles/jqueryval")}
Views/Media/Upload.cshtml
Add a JavaScript file media-upload.js and copy the script over from the Chunked Upload sample. Ensure the POST is going to the correct URL i.e. /Media/SetMetaData and /Media/UploadChunk.
Add reference to this script in the Upload.cshtml (as shown above). For the progress bar to show up properly, update the _Layout.cshtml to add the following css bundle
@Styles.Render("~/Content/themes/base/css")
Before we continue we need to build the Blob Storage’s connection string. This is easy once you know the format:
DefaultEndpointsProtocol=<connectionType>;AccountName=<blobStorageAccountName>;AccountKey=<blobStorageAccessKey>
1. ConnectionType is either http or https
2. AccountName is the name of the storage account associated with your Media Service. You can go to your Azure Portal, Select Media Services (1) tab on the left, go to Linked Resources tab (2) and pick the name of the Storage account (3)
3. Account Key: In the above screen, click on the Storage account name to navigate to Storage Dashboard, from the bottom toolbar, click on ‘Manage Keys’ button to bring up the Key’s dialog, Copy the Primary Access Key.
4. Now that we have got all the three components, add a key to the Web.config’s appSettings section.
<add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=mediasvcc5hww8r75gwc0;AccountKey=o+oXVH9PEVQ3AFC6xWBQHL9diuJ7jecU10oaGyw5wRhMbdLlA9f+lfoeGOsXgYQyaxrgFq8SFSj6nfFJa96cnA==" />
Now that we’ve got the Storage connection string, add the following keys as well
<add key="StorageContainerReference" value ="temporary-media" /> <add key="MediaAccountName" value="mediaservicedemo"/> <add key="MediaAccountKey" value="hSpRS8OJuhJIktmSX9HhAeZKD+paOt05W+uSZC6Y2W8=" />
The StorageContainerReference is name of the temporary container to which media will be uploaded. The MediaAccountName is the name of the MediaService that we gave when we created the Service
MediaAccountKey is the access key, you can go to the Media dashboard and use the Manage Keys button like you did from the Storage dashboard, to copy the MediaAccountKey.
Next we add the SetMetadata, UploadCurrentChunk and CommitAllChunks methods into our MediaController.cs. Wherever we are Update the configuration strings appropriately.
If you run the app and try to upload a file, it will get uploaded to the temporary-media storage container. However our work isn’t done yet. We’ve simply uploaded it to blob storage, our Media service still doesn’t know about the video and can’t serve it up, or encode it. Add the following method in the MediaController and add call it from CommitChunks method:
private void CreateMediaAsset(CloudFile model){…}
This code fetches the media in the blob storage and create a new MediaService Asset out of it and copies it to a container controlled by Media Services. I have broken down the code for the above method in the following steps:
Step 1: Retrieve account keys and names
string mediaAccountName = ConfigurationManager.AppSettings["MediaAccountName"];string mediaAccountKey = ConfigurationManager.AppSettings["MediaAccountKey"];string storageAccountName = ConfigurationManager.AppSettings["StorageAccountName"];string storageAccountKey = ConfigurationManager.AppSettings["StorageAccountKey"];
Step 2: Create the media service context.
CloudMediaContext context = new CloudMediaContext(MediaServicesAccountName, MediaServicesAccountKey);
Step 3: Create instance of the CloudStorageAccount, this is the storage account associated with the Media Service.
CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(MediaServicesStorageAccountName, MediaServicesStorageAccountKey), true);
Step 4: Create a Storage Client instance from where we need to copy the file
var cloudBlobClient = storageAccount.CreateCloudBlobClient();var mediaBlobContainer = cloudBlobClient.GetContainerReference(cloudBlobClient.BaseUri + "temporary-media");mediaBlobContainer.CreateIfNotExists();
Step 5: Create a new Media Asset and a Write Policy.
IAsset asset = context.Assets.Create("NewAsset_" + Guid.NewGuid(), AssetCreationOptions.None);IAccessPolicy writePolicy = context.AccessPolicies.Create("writePolicy", TimeSpan.FromMinutes(120), AccessPermissions.Write);
Step 6: Create a Destination Location in the Media Service and get the blob handle of the destination file (blob).
ILocator destinationLocator = context.Locators.CreateLocator(LocatorType.Sas, asset, writePolicy);// Get the asset container URI and copy blobs from mediaContainer to assetContainer.Uri uploadUri = new Uri(destinationLocator.Path);string assetContainerName = uploadUri.Segments[1];CloudBlobContainer assetContainer = cloudBlobClient.GetContainerReference(assetContainerName);
Step 7: Get Blob handle of the Source File
string fileName = HttpUtility.UrlDecode(Path.GetFileName(model.BlockBlob.Uri.AbsoluteUri));var sourceCloudBlob = mediaBlobContainer.GetBlockBlobReference(fileName); sourceCloudBlob.FetchAttributes();
Step 8: Check for rudimentary properties to ensure the source file is valid and then create the file in the designation. Initiate copy from Blob.
Note: This is actually a job and takes a few seconds to reflect on the server if you are hitting refresh continuously.
if (sourceCloudBlob.Properties.Length > 0){ IAssetFile assetFile = asset.AssetFiles.Create(fileName); var destinationBlob = assetContainer.GetBlockBlobReference(fileName); destinationBlob.DeleteIfExists(); destinationBlob.StartCopyFromBlob(sourceCloudBlob); destinationBlob.FetchAttributes(); if (sourceCloudBlob.Properties.Length != destinationBlob.Properties.Length) Console.WriteLine("Failed to copy");}
Step 9: Once the copy is done delete the destination locator and the write policy.
destinationLocator.Delete();writePolicy.Delete();
Step 10: Refresh the asset by retrieving it from the context
asset = context.Assets.Where(a => a.Id == asset.Id).FirstOrDefault();var ismAssetFiles = asset.AssetFiles.ToList() . Where(f => f.Name.EndsWith(".mp4", StringComparison.OrdinalIgnoreCase)) .ToArray();
if (ismAssetFiles.Count() != 1) throw new ArgumentException("The asset should have only one, .mp4 file");
ismAssetFiles.First().IsPrimary = true;ismAssetFiles.First().Update();model.UploadStatusMessage += " Created Media Asset '" + asset.Name + "' successfully.";model.AssetId = asset.Id;}
Next we save the new MediaElement details for the current User.
Once we know that the file has been saved to the Media Service, we can save the Title and Asset details to the database. To do this, we first update the CommitAllChunks method to send back the AssetId in the Json that we were returning.
return Json(new{ error = errorInOperation, isLastBlock = model.IsUploadCompleted, message = model.UploadStatusMessage, assetId = model.AssetId});
Next we update the Update.cshtml to add a panel with the Title and Save button. This panel becomes visible once the Upload is complete and file saved to the Media Service.
<div id="detailsPanel"> <input type="hidden" id="assetId" /> <label id="statusMessage"></label> <br /> <div> Title <input type="text" id="title" /> </div> <button id="saveDetails">Save</button></div>
To toggle it’s visibility, we update the media-upload.js to hide it on document load and show it once the last chunk upload has returned successfully.
Next we add a Save method to the media-upload.js to post the AssetId and Title.
var saveDetails = function (){ var dataPost = { "Title": $("#title").val(), "AssetId": $("#assetId").val() } $.ajax({ type: "POST", async: false, contentType: "application/json", data: JSON.stringify(dataPost), url: "/Media/Save" }).done(function (state) { if (state.Saved == true) { displayStatusMessage("Saved Successfully"); $("#detailsPanel").hide(); } else { displayStatusMessage("Saved Failed"); } });}
This posts the data to a Save action method in our MediaController. We don’t have a Save method so far, so we add one to save the data to the database as follows:
[HttpPost]public JsonResult Save(MediaElement mediaelement){ try { mediaelement.UserId = User.Identity.Name; mediaelement.FileUrl = GetStreamingUrl(mediaelement.AssetId); db.MediaElements.Add(mediaelement); db.SaveChanges(); return Json(new { Saved = true, StreamingUrl = mediaelement.FileUrl}); } catch (Exception ex) { return Json(new { Saved = false }); }}
Before we save the Data to the Server, we call the GetStreamingUrl method. This method does the equivalent of ‘Publishing’ data from the Web Portal. It creates an access policy that’s valid for a year and generates an appropriate URL for the uploaded media.
private string GetStreamingUrl(string assetId){CloudMediaContext context = new CloudMediaContext(ConfigurationManager.AppSettings["MediaAccountName"],ConfigurationManager.AppSettings["MediaAccountKey"]);var streamingAssetId = assetId; // "YOUR ASSET ID";var daysForWhichStreamingUrlIsActive = 365;var streamingAsset = context.Assets.Where(a => a.Id == streamingAssetId).FirstOrDefault();
IAccessPolicy accessPolicy = context.AccessPolicies.Create(streamingAsset.Name, TimeSpan.FromDays(daysForWhichStreamingUrlIsActive), AccessPermissions.Read | AccessPermissions.List); string streamingUrl = string.Empty;var assetFiles = streamingAsset.AssetFiles.ToList();var streamingAssetFile = assetFiles.Where(f => f.Name.ToLower().EndsWith("m3u8-aapl.ism")).FirstOrDefault();if (streamingAssetFile != null){ var locator = context.Locators.CreateLocator(LocatorType.OnDemandOrigin, streamingAsset, accessPolicy); Uri hlsUri = new Uri(locator.Path + streamingAssetFile.Name + "/manifest(format=m3u8- aapl)"); streamingUrl = hlsUri.ToString();}streamingAssetFile = assetFiles.Where(f => f.Name.ToLower().EndsWith(".ism")).FirstOrDefault();if (string.IsNullOrEmpty(streamingUrl) && streamingAssetFile != null){ var locator = context.Locators.CreateLocator(LocatorType.OnDemandOrigin, streamingAsset, accessPolicy); Uri smoothUri = new Uri(locator.Path + streamingAssetFile.Name + "/manifest"); streamingUrl = smoothUri.ToString();}streamingAssetFile = assetFiles.Where(f => f.Name.ToLower().EndsWith(".mp4")).FirstOrDefault();if (string.IsNullOrEmpty(streamingUrl) && streamingAssetFile != null){ var locator = context.Locators.CreateLocator(LocatorType.Sas, streamingAsset, accessPolicy); var mp4Uri = new UriBuilder(locator.Path); mp4Uri.Path += "/" + streamingAssetFile.Name; streamingUrl = mp4Uri.ToString();}return streamingUrl;}
With that we have all the data we need to keep track of files uploaded by each user. Next up the media player.
We will leverage the excellent Player Framework project from Microsoft Media Service team. This is an OSS project on Codeplex and provides a set of clients to serve up Media along with other features like Playlist, Ad insertion and so on.
You have a variety of clients to choose, on the Web you can use HTML5 player and/or the Silverlight player. Today we’ll use the HTML5 player only.
Step 1: Download the Player Framework for HTML5 client from here. This consists of the playerframework.js and the playerframework.css both in their minified form.
Step 2: Add the style reference to _Layout.css (Bundle it as a best practice).
Step 3: Add a new JavaScript file media-player.js. It has only one function, that is to initialize the player framework client and depends on playerframework.js.
var mediaPlayer = {initFunction : function (window, sourceUrl){ var myPlayer = new PlayerFramework.Player(window, { mediaPluginFallbackOrder: ["VideoElementMediaPlugin", "SilverlightMediaPlugin"], width: "480px", height: "320px", sources: [ { src: sourceUrl, type: 'video/mp4;' } ] });}}
Step 4: Adding the player in the ‘Edit’ page (Edit.cshtml). Update the markup to hide the UserId, AssetId and FileUrl. These are not directly updatable by the user.
@Html.HiddenFor(model => model.Id)@Html.HiddenFor(model => model.AssetId)@Html.HiddenFor(model => model.FileUrl, new { id = "fileUrl" })@Html.HiddenFor(model => model.UserId)
Step 5: Add a <div> that will serve as the container and then use the media-player script to tie the div to the PlayerFramework client. The FileUrl value is passed to the videoPlayer as well.
<div id="videoPlayer"></div>
<div> @Html.ActionLink("Back to List", "Index")</div>
@section Scripts { <script src="~/Scripts/playerframework.min.js"></script> <script src="~/Scripts/media-player.js"></script> @Scripts.Render("~/bundles/jqueryval") <script type="text/javascript"> mediaPlayer.initFunction("videoPlayer", $("#fileUrl").val());
</script>}
That’s all that needs to be done to play the video.
Step 6: We can cleanup the Index.cshtml as well to show the Title and Delete options only, with the Title hyperlinked to the Edit page.
@model IEnumerable<AzureMediaPortal.Models.MediaElement>
@{ ViewBag.Title = "My Media Index";}<h2>My Media Index</h2><p>@Html.ActionLink("Upload New Media", "Upload")</p><table><tr> <th> @Html.DisplayNameFor(model => model.Title) </th> <th> @Html.DisplayNameFor(model => model.IsPublic) </th> <th></th></tr>@foreach (var item in Model) { <tr> <td> @Html.ActionLink(@item.Title, "Edit", new { id=item.Id }) </td> <td> @Html.DisplayFor(modelItem => item.IsPublic) </td> <td> @Html.ActionLink("Delete", "Delete", new { id=item.Id }) </td> </tr>}</table>
Finally we’ll update the Delete method in the controller to delete Assets from server as well. To do this, we again use the AssetId to create a context and delete the asset. Once the asset is deleted we delete the record from our database as well.
private void DeleteMedia(string assetId){ string mediaAccountName = ConfigurationManager.AppSettings["MediaAccountName"]; string mediaAccountKey = ConfigurationManager.AppSettings["MediaAccountKey"]; CloudMediaContext context = new CloudMediaContext(mediaAccountName, mediaAccountKey); var streamingAsset = context.Assets.Where(a => a.Id == assetId).FirstOrDefault(); if (streamingAsset != null) { streamingAsset.Delete(); }}
[HttpPost, ActionName("Delete")][ValidateAntiForgeryToken]public ActionResult DeleteConfirmed(int id){ MediaElement mediaelement = db.MediaElements.Find(id); DeleteMedia(mediaelement.AssetId); db.MediaElements.Remove(mediaelement); db.SaveChanges(); return RedirectToAction("Index");}
With that we are ready to run our app, Demo Time!
Step 1: Run the application, Register yourself the first time and Login.
Step 2: Navigate to the My Media page and click on Upload New Media
Step 3: Browse and select a media file (only mp4 for this sample). Click upload to begin upload. You’ll notice the upload button gets hidden as the progress shows progress.
Step 4: After the file is uploaded 100%, you’ll notice a pause as the media is registered with Media Service and AssetID obtained.
Step 5: Once Media is registered with Media Service, you’ll see a Save button and an Input box to Save the Title for the uploaded file. Provide the title and hit Save. Once save completes, you will see a Video panel and be able to preview the uploaded video. Click on ‘Back to List’ to go back to the Index page.
Step 6: After saving you can navigate back to the index page.
Step 7: From the index page, you can click on the title to navigate to the Edit page.
Step 8: Here you can view the media asset as well as change the Title if you want. Save will navigate back to the Index page, from which we can go to the delete page to Delete the asset if no longer required.
So far we have seen how to upload asset to Azure Storage and move it to Media service, then generate policies and access the media over a web client (for Modern browsers only, tested in IE10).
We have not tried out encoding to various formats, this we will try in a future article.
The Windows Azure Media Services along with the Player Framework provide us with a big leg up while building Media oriented services. Today we saw how we could build our personal video library with not-too-much effort.
Download the entire source code of this article (Github) | http://www.dotnetcurry.com/showarticle.aspx?ID=924 | CC-MAIN-2013-48 | en | refinedweb |
04 November 2011 15:22 [Source: ICIS news]
TORONTO (ICIS)--EcoSynthetix has commissioned an 80m lb/year production line at ?xml:namespace>
EcoSynthetix said the expansion raises its overall capacity to 155m lb/year (70,308 tonnes/year).
The company did not disclose how much it invested in the project. An official at EcoSynthetix's headquarters in
According to information on its website, EcoSynthetix uses cornstarch as its main feedstock. The company plans to build up capacities for its products at contract manufacturing sites in Europe and
EcoSynthetix, which recently completed an initial public offering, reported sales of $5.6m (€4.0m) for the three months ended 30 June 2011, up 58% year on year from the same period a year ago.
($1 = €0 | http://www.icis.com/Articles/2011/11/04/9505671/canadian-renewable-chem-firm-commissions-production-at-dutch.html | CC-MAIN-2013-48 | en | refinedweb |
Lazy<T> Constructor (Boolean)
Assembly: mscorlib (in mscorlib.dll)> or Lazy<T> constructor.
A Lazy<T> instance that is created with this constructor does not cache exceptions. For more information, see theLazy<T> class or the System.Threading.LazyThreadSafetyMode enumeration.
The following example demonstrates the use of this constructor to create a lazy initializer that is not thread safe, for scenarios where all access to the lazily initialized object occurs on the same thread. It also demonstrates the use of the Lazy<T> constructor (specifying LazyThreadSafetyMode.None for mode. To switch to a different constructor, just change which constructor is commented out.
The example defines a LargeObject class that will be initialized lazily. In the Main method, the example creates a Lazy<T> instance and then pauses. When you press the Enter key, the example accesses the Value property of the Lazy<T> instance, which causes initialization to occur. The constructor of the LargeObject class displays a console message.
using System; using System.Threading; class Program { static Lazy<LargeObject> lazyLargeObject = null; static void Main() { // The lazy initializer is created here. LargeObject is not created until the // ThreadProc method executes. lazyLargeObject = new Lazy<LargeObject>(false); // The following lines show how to use other constructors to achieve exactly the // same result as the previous line: //lazyLargeObject = new Lazy<LargeObject>(LazyThreadSafetyMode.None); Console.WriteLine( "\r\nLargeObject is not created until you access the Value property of the lazy" + "\r\ninitializer. Press Enter to create LargeObject."); Console.ReadLine(); LargeObject large = lazyLargeObject.Value; large.Data[11] = 89; Console.WriteLine("\r\nPress Enter to end the program"); Console.ReadLine(); } } class LargeObject { public LargeObject() { Console.WriteLine("LargeObject was created on thread id {0}.", Thread.CurrentThread.ManagedThreadId); } public long[] Data = new long[100000000]; } /* This example produces output similar to the following: LargeObject is not created until you access the Value property of the lazy initializer. Press Enter to create LargeObject. LargeObject was created on thread id 1. Press Enter to end the. | http://msdn.microsoft.com/en-us/library/dd989799(v=vs.100).aspx | CC-MAIN-2013-48 | en | refinedweb |
Summary
Compares two feature classes or layers and returns the comparison results.. The first field is sorted, then the second field, and so on, in ascending order. Sorting by a common field in both the Input Base Features and the Input Test Features ensures that you are comparing the same row from each input dataset.
By default, the compare type is set to All (ALL in Python). This means all properties of the features being compared will be checked, including such things as spatial reference, field properties, attributes, and geometry. However, you may choose a different compare type to check only specific properties of the features being compared.
The Ignore Options provide the flexibility to omit properties such as measure attributes, z attributes, point ID attributes, and extension properties. Two feature classes may be identical, yet one has measures and z coordinates and the other does not. You can choose to ignore these properties. The Ignore extension properties (IGNORE_EXTENSION_PROPERTIES in Python) option refers to additional information added to a feature class or table. For example, the features of two annotation feature classes can be identical but the feature classes may have different extension properties, such as different symbols in the symbol collection and different editing behavior.
The default XY Tolerance is determined by the default XY Tolerance of the Input Base Features. To minimize error, the value you choose for the compare tolerance should be as small as possible. If zero is entered for the XY Tolerance, an exact match is performed.
The default M Tolerance and the default Z Tolerance is determined by the default M Tolerance and Z Tolerance of the Input Base Features. The units are the same as those of the Input Base Features. If zero is entered for the M Tolerance and Z Tolerance, an exact match is performed.
When comparing Geometry only (GEOMETRY_ONLY in Python), the spatial references must match. If the spatial references are different, a miscompare will be reported. If the coordinate system is different for either input, the features will miscompare. This tool does not do projection on the fly.
The Omit Fields parameter is a list of fields that are not included in the field count comparison—their field definitions and tabular values are ignored.
Attribute tolerances can only be specified for numeric field types.
The Output Compare File will contain all similarities and differences between the Input Base Features and the Input Test Features. This file is a comma-delimited text file which can be viewed and used as a table in ArcGIS. For example, this table can be queried to obtain all the ObjectID values for all the rows that are different. The has_error field indicates that the record contains an error. True indicates there is a difference.
One of the first comparisons performed is a feature count. If the feature count is reported as being different and the Continue Compare parameter is True, the subsequent comparison messages may not accurately reflect additional differences between the Input Base Features and Input Test Features. This is due to the Feature Compare tool's inability to figure out where features have been added or removed in the Input Test Features and simply moves to the next row in each attribute table. At the location in the attribute table where a feature has been added or deleted, the tool will simply move to the next row and begin comparing the base feature with the wrong test feature because the correct one in the Input Test Data was deleted or a feature was added before it.
When using this tool in Python, you can get the status of this tool using result.getOutput(1). The value will be 'true' when no differences are found and 'false' when differences are detected.
Learn more about using tools in Python
Parameters
arcpy.management.FeatureCompare(in_base_features, in_test_features, sort_field, {compare_type}, {ignore_options}, {xy_tolerance}, {m_tolerance}, {z_tolerance}, {attribute_tolerances}, {omit_field}, {continue_compare}, {out_compare_file})
Derived Output
Code sample
The following Python window script demonstrates how to use the FeatureCompare function in immediate mode.
import arcpy arcpy.FeatureCompare_management( r'C:/Workspace/baseroads.shp', r'C:/Workspace/newroads.shp', 'ROAD_ID', 'ALL', 'IGNORE_M;IGNORE_Z', '0.001 METERS', 0, 0, 'Shape_Length 0.001', '#', 'CONTINUE_COMPARE', r'C:/Workspace/roadcompare.txt')
Example of how to use the FeatureCompare tool in a stand-alone script.
# Name: FeatureCompare.py # Description: Compare two feature classes and return comparison result. # import system modules import arcpy # Set local variables base_features = "C:/Workspace/baseroads.shp" test_features = "C:/Workspace/newroads.shp" sort_field = "ROAD_ID" compare_type = "ALL" ignore_option = "IGNORE_M;IGNORE_Z" xy_tolerance = "0.001 METERS" m_tolerance = 0 z_tolerance = 0 attribute_tolerance = "Shape_Length 0.001" omit_field = "#" continue_compare = "CONTINUE_COMPARE" compare_file = "C:/Workspace/roadcompare.txt" # Process: FeatureCompare compare_result = arcpy.FeatureCompare_management( base_features, test_features, sort_field, compare_type, ignore_option, xy_tolerance, m_tolerance, z_tolerance, attribute_tolerance, omit_field, continue_compare, compare_file) print(compare_result[1]) print(arcpy.GetMessages())
Environments
Licensing information
- Basic: Yes
- Standard: Yes
- Advanced: Yes | https://pro.arcgis.com/en/pro-app/latest/tool-reference/data-management/feature-compare.htm | CC-MAIN-2022-05 | en | refinedweb |
Taquito v10.2.0-beta
Summary
New features
- @taquito/contract-library - [Performance] Embed popular contracts into your application using the new ContractAbstraction instantiation #1049
- @taquito/rpc - [Performance] Enable RPC caching in your application using the RpcClient cache implementation #924
- @taquito/taquito - [DevExp] Taquito Entrypoint methods now accept javascript object format for contract method calls (parametric calls are unchanged!) #915
Enhancements
- Compatibility support for Hangzhounet
- Allow to set HttpBackend on IpfsHttpHandler #1092
@taquito/contract-library - Ability to bundle smart-contract scripts and entrypoints for ContractAbstration instantiation
A new package named
@taquito/contract-library has been added to the Taquito library.
To improve (d)App performance, we aim to provide ways to reduce the number of calls made by Taquito to the RPC. The
@taquito/contracts-library package allows developers to embed the smart-contract scripts into the application, preventing Taquito from loading this data from the RPC for every user.
The ContractsLibrary class is populated by at project compile time, using contract addresses and their corresponding script and entry points. The
ContractsLibrary is then injected into a
TezosToolkit as an extension using the toolkits
addExtension method.
When creating a ContractAbstraction instance using the
at method of the Contract or the Wallet API, if a
ContractsLibrary is present on the TezosToolkit instance, the script and entry points of matching contracts will be loaded from the ContractsLibrary. Otherwise, the values will be fetched from the RPC as usual.
Example of use:
import { ContractsLibrary } from '@taquito/contracts-library';import { TezosToolkit } from '@taquito/taquito';const contractsLibrary = new ContractsLibrary();const Tezos = new TezosToolkit('rpc');contractsLibrary.addContract({'contractAddress1': {script: script1, // script should be obtained from Tezos.rpc.getScript('contractAddress1')entrypoints: entrypoints1 // entrypoints should be obtained from Tezos.rpc.getEntrypoints('contractAddress1')},'contractAddress2': {script: script2,entrypoints: entrypoints2},//...})Tezos.addExtension(contractsLibrary);// The script and entrypoints are loaded from the contractsLibrary instead of the RPCconst contract = await Tezos.contract.at('contractAddress1');
@taquito/RPC - New RpcClient implementation that caches RPC data
Similar to the new
ContractsLibrary feature, Taquito provides an additional way to increase dApp performance by caching some RPC data. To do so, we offer a new
RpcClient implementation named
RpcClientCache
The constructor of the
RpcClientCache class takes an
RpcClient instance as a parameter and an optional TTL (time to live). By default, the TTL is of 1000 milliseconds. The
RpcClientCache acts as a decorator over the
RpcClient instance. The
RpcClient responses are cached for the period defined by the TTL.
Example of use:
import { TezosToolkit } from '@taquito/taquito';import { RpcClient, RpcClientCache } from '@taquito/rpc';const rpcClient = new RpcClient('replace_with_RPC_URL');const tezos = new TezosToolkit(new RpcClientCache(rpcClient));
@taquito/taquito - New Taquito Entrypoint methods accept javascript object format for contract method calls
The ContractAbstraction class has a new member called
methodsObject, which serves the same purpose as the
methods member. The format expected by the smart contract method differs:
methods expects flattened arguments while
methodsObject expects an object.
It is to the user's discretion to use their preferred representation. We wanted to provide Taquito users with a way to pass an object when calling a contract entry point using a format similar to the storage parameter used when deploying a contract.
A comparison between both methods is available here:
Compatibility support for Hangzhounet
This version ships with basic compatibility support for the new Hangzhou protocol. New features, such as support for new Michelson instructions, types and constants, will follow in future Taquito releases.
What's coming next for Taquito?
We started preliminary work on integrating Hangzhounet, the next Tezos protocol update proposal. We plan to deliver a final version of Taquito v11 early, giving teams a longer runway to upgrade their projects before protocol transition.
If you have feature or issue requests, please create an issue on or join us on the Taquito community support channel on Telegram
Taquito v10.1.3-beta
Bug fix - Key ordering
Fixed key sorting in literal sets and maps when these collections have mixed key types.
Upgrade beacon-sdk to version 2.3.3
This beacon-sdk release includes:
- updated Kukai logo
- hangzhounet support
- fix for #269 Pairing with Kukai blocked (from -beta.0)
Taquito v10.1.2-beta
Bug fix - Unhandled operation confirmation error #1040 & #1024
Taquito v10.1.1-beta
Bug fix where the custom polling interval values for the confirmation methods were overridden with the default ones.
Taquito v10.1.0-beta
Breaking change
In version 9.2.0-beta of Taquito, the ability to send more than one operation in the same block was added to Taquito. This ability relied on a workaround solution. The
helpers/preapply/operations and
helpers/scripts/run_operation RPC methods do not accept a counter higher than the head
counter + 1 as described in issue tezos/tezos#376. Despite the limitation of these RPC's, the Tezos protocol itself does allow the inclusion of more than one operation from the same implicit account. In version 9.2.0-beta of Taquito, we introduced an internal counter and simulated the operation using a counter value that the
preapply &
run_operation will accept. This allowed Taquito to send many operations in a single block. However, we found that the workaround used may lead to inconsistent states, and results that violate the principle of least astonishment. We decided to remove this feature temporarily. We aim to reintroduce this feature when Tezos RPC issue tezos/tezos#376 is addressed and considers the transaction in the mempool when checking the account counter value or otherwise by providing a separate and adapted new interface to support this use case properly.
Summary
Enhancements
- @taquito/taquito - Made PollingSubscribeProvider's polling interval configurable #943
- @taquito/taquito - Possibility to withdraw delegate
Bug Fixes
- @taquito/taquito - Added a status method for batched transactions using the wallet API #962
- @taquito/michelson-encoder - Fixed the Schema.ExecuteOnBigMapValue() for ticket token #970
- @taquito/taquito - Fixed a "Memory leak" in the PollingSubscribeProvider #963
Documentation
- Updated Taquito website live examples to use Granadanet #993
- Documentation for FA2 functionality #715
- Documentation for confirmation event stream for wallet API #159
@taquito/taquito - Made PollingSubscribeProvider's polling interval configurable
The default streamer set on the
TezosToolkit used a hardcoded polling interval of 20 seconds, and there was no easy way to change this. To reduce the probability of missing blocks, it is now possible to configure the interval as follow:
const tezos = new TezosToolkit('')tezos.setProvider({ config: { streamerPollingIntervalMilliseconds: 15000 } });const sub = tezos.stream.subscribeOperation(filter)
@taquito/taquito - Possibility to withdraw delegate
It is now possible to
undelegate by executing a new setDelegate operation and not specifying the delegate property.
// const Tezos = new TezosToolkit('');await Tezos.contract.setDelegate({ source: 'tz1_source'});
@taquito/taquito - Property status doesn't exist on a batched transaction for the wallet API
When multiple operations were batched together using the
batch method of the wallet API, the
send() method returned a value of type
WalletOperation where the status was missing.
BatchWalletOperation, which extends the
WalletOperation class and contains a
status method, is now returned.
@taquito/michelson-encoder - Fixed the Schema.ExecuteOnBigMapValue() for ticket token
The
Execute and
ExecuteOnBigMapValue methods of the
Schema class could not deserialize Michelson when ticket values were not in the optimized (Edo) notation. Both representations are now supported.
@taquito/taquito - Fixed a "Memory leak" in the PollingSubscribeProvider
A fix has been made to change the behavior of the
PollingSubscribeProvider, which was keeping all blocks in memory.
Taquito v10.0.0-beta
Summary
Remaining support for Granadanet
- @taquito/rpc - Support
depositsfield in
frozen_balance#919
- @taquito/rpc - Support new fields introduced by Granada in block metadata #918
Bug Fixes
- @taquito/taquito - Drain an unrevealed account #975
- @taquito/rpc - Type
ContractBigMapDiffItemhas BigNumber's but values are string's #946
Documentation
- Document usage of Taquito with TezosDomain #912
- Document storage and fee passing from wallet to dapp #926
- Add integration tests for Permit contracts (TZIP-17) #661
Enhancements
- Breaking changes - @taquito/michelson-encoder - Improvement to the
Schema.ExtractSchema()method #960 and #933
@taquito/rpc - Support deposits field in frozen_balance
In Granada, when fetching delegate information from the RPC, the
deposit property in
frozen_balance_by_cycle has been replaced by
deposits. The RpcClient supports the new and the old notation.
@taquito/rpc - Support new fields introduced by Granada in block metadata
The
balance_updates property in block metadata now includes the new origin
subsidy, besides the existing ones:
block and
migration.
The support for the new
liquidity_baking_escape_ema and
implicit_operations_results properties in block metadata has been added in the
RpcClient class.
@taquito/taquito - Drain an unrevealed account
Since v9.1.0-beta, the fees associated with a reveal operation are estimated using the RPC instead of using the old 1420 default value. When draining an unrevealed account, the fees associated with the reveal operation needs to be subtracted from the initial balance (as well as the fees related to the actual transaction operation). The reveal fee has changed from 1420 to 374 (based on the simulation using the RPC). However, the constants file was still using the 1420 value, leading to a remaining amount of 1046 in the account when trying to empty it. The default value has been adjusted on the constants file to match this change.
@taquito/rpc - Type ContractBigMapDiffItem has BigNumber's but values are string's
The type of the
big_map,
source_big_map, and
destination_big_map properties of
ContractBigMapDiffItem was set as
BigNumber, but they were not cast to it. The RPC returns these properties in a string format. The type has been changed from
BigNumber to
string for them.
Add integration tests for Permit contracts (TZIP-17)
Examples have been added to the integration tests showing how to manipulate permit contracts using the new data packing feature:
@taquito/michelson-encoder - Improvement to the Schema.ExtractSchema() method
The
ExtractSchema method of the
Schema class indicates how to structure contract storage in javascript given its storage type in Michelson JSON representation. This method can be helpful to find out how the storage object needs to be written when deploying a contract.
Return the type of element(s) that compose a "list"
Before version 10.0.0-beta, when calling the
Schema.ExtractSchema method, the Michelson type
list was represented only by the keyword
list. This behavior has been changed to return an object where the key is
list and the value indicates the list's composition.
Example:
const storageType = {prim: 'list',args: [{prim: 'pair',args: [{ prim: 'address', annots: ['%from'] },{ prim: 'address', annots: ['%to'] },],},],};const storageSchema = new Schema(storageType);const extractSchema = storageSchema.ExtractSchema();println(JSON.stringify(extractSchema, null, 2));
before version 10.0.0-beta, the returned value was:
'list'
in version 10.0.0-beta the returned value is:
{list: {"from": "address","to": "address"}}
Based on the information returned by the
ExtractSchema method, the storage can be writen as follow:
Tezos.contract.originate({code: contractCode,storage: [{from: "tz1...",to: "tz1..."}],})
Breaking changes - Change in the representation of big_map type
The representation of the
big_map type returned by the
Schema.ExtractSchema method has changed to increase consistency with the
map representation. Similar to the
map type, an object is now returned where its key is
big_map, and its value is another object having a
key and a
value property, indicating the type of the key and value of the big map. At the same time, this change fixed an issue where the key of a big map as pair was not represented properly and returned "[object Object]" instead.
Example:
const storageType = {prim: 'big_map',annots: [ '%balances' ],args: [{prim: 'address'},{prim: 'pair',args: [ { prim: 'address' }, { prim: 'nat' } ]}]};const storageSchema = new Schema(storageType);const extractSchema = storageSchema.ExtractSchema();println(JSON.stringify(extractSchema, null, 2));
before version 10.0.0-beta the returned value was:
{"address": {"0": "address","1": "nat"}}
in version 10.0.0-beta the returned value is:
{"big_map": {"key": "address","value": {"0": "address","1": "nat"}}}
Based on the information returned by the
ExtractSchema method, the storage can be writen as follow:
const bigMap = new MichelsonMap();bigMap.set('tz1...', { // address0: 'tz1...', // address1:10 // nat});Tezos.contract.originate({code: contractCode,storage: bigMap})
What's coming next for Taquito?
Taquito team is committed to creating the best experience for Taquito users, and we need your feedback! Please help us improve Taquito further by filling out this 2-minute survey by EOD August 1 (PST). Thank you for your time and support!
If you have feature or issue requests, please create an issue on or join us on the Taquito community support channel on Telegram
Taquito v9.2.0-beta
Summary
New features
- Compatibility support for Granadanet
- @taquito/michelson-encoder - Accept bytes in Uint8Array #375
- @taquito/michelson-encoder - Added Bls12-381 tokens #888
- @taquito/michelson-encoder - Added sapling_state and sapling_transaction tokens #586
- @taquito/rpc - Added sapling RPC #586
- @taquito/taquito - sapling_state abstraction on storage read #602
- @taquito/taquito - Possibility to send more than one operation in the same block #955
Documentation
- @taquito/http-utils - Cancel http requests
Enhancements
- Updated various dependencies and switched from tslint to eslint
@taquito/michelson-encoder - Accept bytes in Uint8Array
The only format accepted in the Michelson-encoder for the type bytes was the hexadecimal string. We added support for the type Uint8Array. It is now possible to call an entry point or originate a contract using a Uint8Array or a hexadecimal string.
@taquito/http-utils - Make http requests cancelable
We received requests from users to use the abort signal to allow making requests cancelable. This implementation would require changes in the high-level API that we will consider in an incoming issue where we envisage providing a new API. Meanwhile, it is possible to customize the HttpBackend and RpcClient classes to support cancelable requests. Here is an example where a custom HttpBackend class is used to be able to cancel all requests: The example, as not specific, might not be ideal for all use cases, so we plan to provide better support for this feature in the future.
@taquito/michelson-encoder - Added Bls12-381 tokens
The
bls12_381_fr,
bls12_381_g1, and
bls12_381_g2 tokens were missing in the Michelson-Encoder since the Edo protocol and have been added. As for the bytes token, their supported format is the hexadecimal string or the Uint8Array.
@taquito/michelson-encoder - Added sapling_state and sapling_transaction tokens
The
sapling_state and
sapling_transaction tokens were missing in the Michelson-Encoder since the Edo protocol and have been added.
Note that no additional abstractions or ability to decrypt Sapling transactions have been implemented so far.
@taquito/rpc - Added sapling RPC
The RPC endpoints related to sapling have been added to the RpcClient:
- the
getSaplingDiffByIdmethod takes a sapling state ID as a parameter and returns its associated values.
- the
getSaplingDiffByContracttakes the address of a contract as a parameter and returns its sapling state.
@taquito/taquito - sapling_state abstraction on storage read
When accessing a
sapling_state in the storage with the RPC, only the sapling state's ID is returned.
When fetching the storage of a contract containing a
sapling_state, Taquito will provide an instance of
SaplingStateAbstraction. The
SaplingStateAbstraction class has a
getId and a
getSaplingDiff methods.
The
getSaplingDiff method returns an object of the following type:
{root: SaplingTransactionCommitmentHash,commitments_and_ciphertexts: CommitmentsAndCiphertexts[];nullifiers: string[];}
@taquito/taquito - Possibility to send several operations in the same block
Unless using the batch API, a specific account was limited to only sending one operation per block. If trying to send a second operation without awaiting confirmation on the first one, a counter exception was thrown by the RPC.
The node accepts the injection of more than one operation from the same account in the same block; the counter needs to be incremented by one for each of them. A limitation comes from the chains/main/blocks/head/helpers/scripts/run_operation and /chains/main/blocks/head/helpers/preapply/operations RPC APIs as they do not take into account the transaction in the mempool when checking the account counter value.
We added a counter property (a map of an account and its counter) on the TezosToolkit instance as a workaround. The counter is incremented when sending more than one operation in a row and used to inject the operation. However, the counter used in the prevalidation or the estimation is the head counter + 1. Note that if you send multiple operations in a block to a contract, the estimate will not take into account the impact of the previous operation on the storage of the contract. Consider using the batch API to send many operations at the same time. The solution presented in this issue is a workaround; the operations will need to be sent from the same TezosToolkit instance as it will hold the counter state. plan to improve performance issues by implementing some caching. Please have a look at these open discussions:. Any feedback or suggestions are appreciated.
If you have feature or issue requests, please create an issue on or join us on the Taquito community support channel on Telegram
Taquito v9.1.1-beta
@taquito/beacon-wallet - Updated beacon-sdk to version 2.2.9 @taquito/michelson-encoder - Fix for unexpected MapTypecheckError when loading contract storage - for cases where a map contains a big map as value #925
Taquito v9.1.0-beta
Summary
New features
- @taquito/taquito - Added reveal operation on the RpcContractProvider and RPCEstimateProvider classes #772
- @taquito/taquito & @taquito/beacon-wallet - Ability to specify the fee, storageLimit and gasLimit parameters using the wallet API #866
Enhancements
- @taquito/taquito - Include estimate for reveal operation on batch estimate #772
- @taquito/taquito - Export return types of public API methods (BatchOperation, Operation, OperationBatch, TransferParams, ParamsWithKind) #583
- @taquito/michelson-encoder - Types chain_id, key, option, or, signature, and unit made comparable #603
- @taquito/rpc - Added big_map_diff, lazy_storage_diff properties and failing_noop operation to RPC types #870
- @taquito/beacon-wallet - Updated beacon-sdk to version 2.2.8
Bug fixes
- @taquito/signer - Fixed a public key derivation bug in InMemorySigner class #848
- @taquito/michelson-encoder - Fixed a bug in the
Executemethod of the
OrTokenclass
@taquito/taquito - Added reveal operation on the
RpcContractProvider and the
RPCEstimateProvider classes & Include estimate for reveal operation on batch estimate
When sending an operation using the contract API, Taquito takes care of adding a reveal operation when the account needs to be revealed. This has not changed, but we added a
reveal method on the
RpcContractProvider class, allowing to reveal the current account using the contract API without the need to do another operation. The method takes an object as a parameter with optional fee, gasLimit and StorageLimit properties:
await Tezos.contract.reveal({});
We also added a reveal method on the
RPCEstimateProvider class, allowing an estimation of the fees, storage and gas related to the operation:
await Tezos.estimate.reveal();
Moreover, when estimating a batch operation where a reveal operation is needed, an
Estimate object representing the reveal operation will now be returned as the first element of the returned array.
@taquito/signer - Fixed a public key derivation bug in InMemorySigner class
There was an issue in the derivation of public keys by the InMemorySigner with the
p256 and
secp256k1 curves having a
y coordinate shorter than 32 bytes. For these specific cases, the returned public key was erroneous. Please remember that this signer implementation is for development workflows.
@taquito/michelson-encoder - Types
chain_id,
key,
option,
or,
signature, and
unit made comparable
Taquito ensures that
map keys and
set values of comparable types are sorted in strictly increasing order as requested by the RPC.
@taquito/michelson-encoder - Fixed a bug in the
Execute method of the
OrToken class
The execute method allows converting Michelson data into familiar-looking javascript data. This is used in Taquito to provide a well-formatted JSON object of contract storage. This release includes a bug fix for the OrToken where the right values were not formatted correctly.
@taquito/taquito & @taquito/beacon-wallet - Ability to specify the fee, storageLimit and gasLimit parameters using the wallet API
We are currently seeing a high number of transactions being backtracked with "storage exhausted" errors in high-traffic dapps in the ecosystem. To mitigate this issue and knowing that dapps are in a better position to assess reasonable values than the wallet, we now provide the ability to specify the storage, gas limit, and fee via the wallet API. As the
beacon-sdk, which @taquito/beacon-wallet package is built on, accepts those parameters, dapp developers will now have the ability to specify those parameters. One important note is that at the end, it is the wallet that has control over what is actually used when injecting the operation. started some preliminary work on integrating Granadanet, the next Tezos protocol update proposal. We plan to deliver a final version of Taquito v10 early, giving teams a longer runway to upgrade their projects before protocol transition.
We plan to improve the
michelson-encoder implementation to open the door for Type generation from contracts and to provide easier discoverability of what parameters endpoints and initial storage take. We opened a discussion on this subject on GitHub where any feedback or suggestions are appreciated:.
If you have feature or issue requests, please create an issue on or join us on the Taquito community support channel on Telegram.
Taquito minified build published to unpkg.com CDN … | https://tezostaquito.io/docs/11.0.0/version/ | CC-MAIN-2022-05 | en | refinedweb |
On Wed, Jun 14, 2017 at 08:27:40AM -0400, Stefan Berger wrote:?
Otherwise it would be good if the value was wrapped in a data
structure use by all xattrs, but that doesn't seem to be the case,
either. So I guess we have to go into each type of value structure
and add a uid field there.
namespace any security.* xattrs. Wouldn't be automatically enabled
for anything but ima and capabilities, but we could make the infrastructure
generic and re-usable. | http://lkml.iu.edu/hypermail/linux/kernel/1706.2/00757.html | CC-MAIN-2022-05 | en | refinedweb |
Barcode Software
barcode generator in vb.net free download
ActivatedServiceTypeEntry in C#.net
Integrated UPC-13 in C#.net ActivatedServiceTypeEntry
Estimated lesson time: 15 minutes
using barcode generating for excel control to generate, create bar code image in excel applications. royalty
BusinessRefinery.com/ barcodes
using plugin .net for windows forms to draw barcodes in asp.net web,windows application
BusinessRefinery.com/ bar code
Objective 1.4 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-39
generate, create barcode recogniton none with .net projects
BusinessRefinery.com/ barcodes
using barcode encoder for rdlc control to generate, create barcodes image in rdlc applications. zipcode
BusinessRefinery.com/ bar code
Description The icon that is shown in the system tray. The text that is shown when the user s mouse rests on the icon in the system tray. Indicates whether the icon is visible in the system tray.
Using Barcode scanner for result Visual Studio .NET Control to read, scan read, scan image in Visual Studio .NET applications.
BusinessRefinery.com/ bar code
using barcode encoder for jasper control to generate, create barcode image in jasper applications. capture
BusinessRefinery.com/barcode
WindowsPrincipal currentPrincipal = (WindowsPrincipal)Thread.CurrentPrincipal;
to compose qr-codes and qr code iso/iec18004 data, size, image with visual basic barcode sdk generators
BusinessRefinery.com/QRCode
to draw denso qr bar code and qrcode data, size, image with visual basic barcode sdk help
BusinessRefinery.com/qr bidimensional barcode
After this lesson, you will be able to
to insert qr code and qr barcode data, size, image with office excel barcode sdk fixed
BusinessRefinery.com/Denso QR Bar Code
qr code size backcolor with .net
BusinessRefinery.com/QR Code 2d barcode
Performing Restore to Confirm Backup Validity
to paint qrcode and qr bidimensional barcode data, size, image with .net barcode sdk fill
BusinessRefinery.com/Quick Response Code
qr codes data application for visual basic.net
BusinessRefinery.com/qr barcode
What data is needed for the ongoing operations of the organization What data is needed for the reporting of all styles, including both historical and business intelligence What data do the individual departments need to make the decisions that are expected of them What types of transformations does the data need to go through Who makes the decisions about data transformation, data storage, and enterprise data flow
use word documents code128 integrated to incoporate barcode 128 for word documents clarity,
BusinessRefinery.com/barcode code 128
rdlc data matrix
using pattern rdlc reports to use 2d data matrix barcode with asp.net web,windows application
BusinessRefinery.com/barcode data matrix
For more information about the Open Geospatial Consortium (OGC) and the associated standards, see.
vb.net data matrix generator
using builder visual studio .net to develop barcode data matrix on asp.net web,windows application
BusinessRefinery.com/data matrix barcodes
c# pdf417 open source
using thermal .net to print pdf 417 with asp.net web,windows application
BusinessRefinery.com/pdf417 2d barcode
Tell Me Where I Am: Location-Aware Applications
using tiff excel to display data matrix ecc200 in asp.net web,windows application
BusinessRefinery.com/2d Data Matrix barcode
use excel pdf417 generator to access pdf417 for excel action
BusinessRefinery.com/barcode pdf417
4
use excel 39 barcode development to access code 39 extended with excel foundation
BusinessRefinery.com/barcode 39
winforms data matrix
using lowercase .net windows forms to print data matrix with asp.net web,windows application
BusinessRefinery.com/barcode data matrix
Online responders
The domain namespace is hierarchical in structure.
Pre-production
([Measures].[Reseller Sales Amount] ([Order Date].[Calendar].PrevMember, [Measures].[Reseller Sales Amount]))/ [Measures].[Reseller Sales Amount]
Lesson 1: Designing an Image Creation Strategy ChAPTER 3 77
Lesson 3: Administering Software Licenses. . . . . . . . . . . . . . . . . . . . . . . . . . . 9-25
3-16
In the following case scenarios, you will apply what you ve learned about fine-grained password policies and RODCs. You can find answers to these questions in the Answers section at the end of this book.
Performing Nonauthoritative or Authoritative Restores
c. Incorrect: The No Auto-Restart With Logged On Users For Scheduled Automatic Updates
Microsoft Deployment Toolkit 2010 requires that you install the Windows 7 AIK. Microsoft provides the Windows 7 AIK free of charge from the Microsoft Downloads Center.
D. By default, the Everyone group will be granted the Read (Allow) share permission. E. By default, members of the Engineering global group are not assigned share level permissions.
More EAN-13 on C#
java qr code scanner library: Creating a Remoting Server Application in C# Incoporate EAN-13 Supplement 5 in C# Creating a Remoting Server Application
java qr code scanner download: //Look at what args were passed in to determine what kind in .net C# Draw ean13+2 in .net C# //Look at what args were passed in to determine what kind
barcode generator in vb.net free download: objecturi="MyRemotingApp.rem" in .net C# Integrating UPC-13 in .net C# objecturi="MyRemotingApp.rem"
java qr code scanner download: Creating a Remoting Client Application in .net C# Assign EAN13 in .net C# Creating a Remoting Client Application
java qr code scanner download: Lesson 1: Creating a Client Application to Access a Remote Object in visual C#.net Generate EAN-13 Supplement 2 in visual C#.net Lesson 1: Creating a Client Application to Access a Remote Object
java qr code scanner download: End Function in visual C# Access EAN-13 Supplement 2 in visual C# End Function
barcode generator c# wpf: Identifies the type name and the assembly that con tains the type. This attribute is required. in .net C# Creation GS1-13 in .net C# Identifies the type name and the assembly that con tains the type. This attribute is required.
barcode generator c# wpf: NOTE in C# Integration European Article Number 13 in C# NOTE
barcode generator c# wpf: Debug and deploy a remoting application. in C# Add European Article Number 13 in C# Debug and deploy a remoting application.
vb.net barcode generator: Deploying a Remote Object to the Server in C#.net Access UPC-13 in C#.net Deploying a Remote Object to the Server
barcode generator c# code: Lesson 1: How to Deploy a Remoting Application in C#.net Generate EAN/UCC-13 in C#.net Lesson 1: How to Deploy a Remoting Application
ean 13 check digit calculator c#: The Visual Studio 2005 Attach To Process dialog box in C# Include ean13+5 in C# The Visual Studio 2005 Attach To Process dialog box
vb.net barcode generator: Lesson Review in .net C# Develop UPC-13 in .net C# Lesson Review
ean 13 check digit calculator c#: Review in visual C#.net Integration EAN13 in visual C#.net Review
barcode generator c# code: Method Invocations and Event Management with .NET Web Services in .net C# Writer EAN/UCC-13 in .net C# Method Invocations and Event Management with .NET Web Services
ean 13 generator c#: Exercise: Calling a Web Method in visual C#.net Generating EAN13 in visual C#.net Exercise: Calling a Web Method
vb.net ean 13: Method Invocations and Event Management with .NET Web Services in c sharp Encoder EAN-13 Supplement 5 in c sharp Method Invocations and Event Management with .NET Web Services
barcode generator c# wpf: Interviews in c sharp Generator European Article Number 13 in c sharp Interviews
vb.net barcode generator: Interfaces.IPerson.LastName in C#.net Print UPC-13 in C#.net Interfaces.IPerson.LastName
ean 13 generator c#: Method Invocations and Event Management with .NET Remoting in visual C# Generator EAN-13 Supplement 5 in visual C# Method Invocations and Event Management with .NET Remoting
Articles you may be interested
generate barcode in c# asp.net: Overview of Management Tools in visual C#.net Integrating ECC200 in visual C#.net Overview of Management Tools
vb.net barcode freeware: Part 1: Part Title in .NET Creator Code 128 Code Set A in .NET Part 1: Part Title
qr code generator for c#: The IRR Function in visual C#.net Generation qrcode in visual C#.net The IRR Function
generate qr code in c#.net: Real World Integrating Windows Server 2003 into Existing Domains in C#.net Receive QRCode in C#.net Real World Integrating Windows Server 2003 into Existing Domains
how to create barcode in c#.net: Designing Fully Trusted Applications in c sharp Integrating Code 39 in c sharp Designing Fully Trusted Applications
vb.net 2d barcode free: Part 2: Building a Microsoft Access Desktop Application in .NET Integrated code 128 barcode in .NET Part 2: Building a Microsoft Access Desktop Application
c# barcode scanner api: Getting Connected in C#.net Render Code 3 of 9 in C#.net Getting Connected
use barcode scanner in asp.net: plan in C# Writer PDF-417 2d barcode in C# plan
asp.net mvc barcode reader: Specifying How Archived Items Are Handled in .net C# Creation PDF-417 2d barcode in .net C# Specifying How Archived Items Are Handled
barcode generator c# open source: After this lesson, you will be able to in c sharp Printer qr barcode in c sharp After this lesson, you will be able to
use barcode reader in asp.net: Designing Virtualization for Exchange 2010 Servers in C# Implementation pdf417 2d barcode in C# Designing Virtualization for Exchange 2010 Servers
vb.net barcode library dll: On the Home tab, in the Styles group, click Cell Styles, and then click New Cell Style. in .NET Creation Data Matrix ECC200 in .NET On the Home tab, in the Styles group, click Cell Styles, and then click New Cell Style.
c# library for qr code: After this lesson, you will be able to in C#.net Print qr codes in C#.net After this lesson, you will be able to
free visual basic qr code generator: Suggested Practices in visual basic Maker QRCode in visual basic Suggested Practices
vb.net barcode freeware: F22QQ15.bmp in .NET Paint USS Code 128 in .NET F22QQ15.bmp
qr code generator vb.net open source: Part 2: Messaging in visual basic.net Drawer QR Code JIS X 0510 in visual basic.net Part 2: Messaging
free visual basic qr code generator: Configuring Assembly Version Binding and Codebases in visual basic Writer QR Code ISO/IEC18004 in visual basic Configuring Assembly Version Binding and Codebases
barcode generator c# code: Lesson 1: Collecting Data Items in c sharp Encoding qr bidimensional barcode in c sharp Lesson 1: Collecting Data Items
printing barcode vb.net: start /w ocsetup DirectoryServices-ADAM-ServerCore in .NET Printing barcode 128a in .NET start /w ocsetup DirectoryServices-ADAM-ServerCore
print barcode labels using vb.net: Part 1: Understanding Microsoft Access in .NET Receive code 128 code set c in .NET Part 1: Understanding Microsoft Access | http://www.businessrefinery.com/yc2/299/72/ | CC-MAIN-2022-05 | en | refinedweb |
Configure Push and In-app notifications.
An easy way to setup push notifications is to uncomment relevant code in the
HsUnityAppController.mm file. The class
HsUnityAppController implements the
AppDelegateListener and provides relevant delegate methods needed to get started with push.
true/
false
true
If you do not want the in-app notifications support provided by the
Helpshift SDK, please set this flag to
false. The default value of this
flag is
true i.e in-app notifications will be enabled.
Read more about in-app notifications in the Notifications section.
Example:
using Helpshift; private HelpshiftSdk help; this.help = HelpshiftSdk.GetInstance(); Dictionary<string, object> configMap = new Dictionary<string, object>(); configMap.Add(HelpshiftSdk.ENABLE_INAPP_NOTIFICATION, true); help.Install(platformId, domainName, configMap);
If you have enabled in-app notifications, use the API
PauseDisplayOfInAppNotification() to pause/resume the notifications. When
true is passed to this method display of in-app notifications is paused even if they arrive. When you pass a
false, the in-app notifications start displaying.
Example:
using Helpshift; // Install call private HelpshiftSdk help; this.help = HelpshiftSdk.GetInstance(); Dictionary<string, object> configMap = new Dictionary<string, object>(); configMap.Add(HelpshiftSdk.ENABLE_INAPP_NOTIFICATION, true); help.Install(platformId, domainName, configMap); // To temporarily pause in-app notifications help.PauseDisplayOfInAppNotification(true); // To resume showing the in-app notifications help.PauseDisplayOfInAppNotification(false);. | https://developers.helpshift.com/sdkx-unity/notifications-ios/ | CC-MAIN-2022-05 | en | refinedweb |
Hi,
I want to use the eeprom in my code and I downloaded the EEPROM file from Audrino website and added it to the Library folder. restarted the IDE. I added the following line
#include <EEPROM.h>. I keep getting the following error:
EEPROM.h: No such file or directory
Is there a different EEPROM file that I would need to download.On the Audrino website they mention that your IDE already has EEPROM.h file and it is just adding the library file and getting started with it. But then I am having the above problem. Even before I downloaded and added the EEPROM file I was having the same issue | https://forums.adafruit.com/viewtopic.php?f=65&t=180466&p=878043&sid=94923412a5dd97712456f5939e9f10dd | CC-MAIN-2022-05 | en | refinedweb |
I've searched the web for an answer to this question but I can't find anything, I'm not even sure it's possible to do in C#.
What I want to do is, at class declaration, use a for or a while (or something else that gets the same effect) to declare some variables. I want to do this so I don't have to keep track of the variable names as they are stored in Enum classes. So if I modify the Enum lists I don't have to go changing this other script accordingly.
I hope I'm being clear enough, I know this is a complicated thing. To give you an idea here's the code that I have (which doesn't work as monodevelop says I can't use a for in class struct or declaration).
[Serializable ()]
public class SaveData : ISerializable
{
public string profHeader;
public string playerName;
for(int cnt = 0; cnt < Enum.GetValues(typeof(AttributeName)).Length; cnt++) {
public string (AttributeName)cnt.ToString();
}
for(int cnt = 0; cnt < Enum.GetValues(typeof(SkillName)).Length; cnt++) {
public string (SkillName)cnt.ToString();
}
The idea is for me not to have to modify this script if I modify the Enums.
People with requirements like this write programs that generate source code(e.g. *.cs files). Once you have written it to disk it can be compiled like any other. Unity allows for scripts to run in the editor so it is an ideal environment for this approach.
*.cs
Another simpler approach is to use a:
Dictionary<AttributeName, string>
@Thebardstale, you should do as $$anonymous$$elly$$anonymous$$ says, there is no problem with creating a Dictionary and filling it on the fly, also you can still acces each attribute by its name, so it´s very similar to having an instance. Vote him up!
Dictionary
The Dictionary sounds like what I need, thanks a lot, I'll.
Varying number of variables depending on power type?
0
Answers
Passing an instance of a class to different game objects
2
Answers
Convert SerializedProperty to Custom Class
7
Answers
Runtime Error IOS AOT Only
0
Answers
Arrays and Classes javascript
1
Answer
EnterpriseSocial Q&A | https://answers.unity.com/questions/628505/how-to-declare-multiple-variables-using-a-for-loop.html | CC-MAIN-2022-05 | en | refinedweb |
Microsoft Teams JavaScript client SDK
The Microsoft Teams JavaScript client SDK is part of the Microsoft Teams developer platform. It makes it easy to integrate your own services with Teams, whether you develop custom apps for your enterprise or SaaS applications for teams around the world. See The Microsoft Teams developer platform for full documentation on the platform and on the SDK.
Finding the SDK
The Teams client SDK is distributed as an npm package. The latest version can be found here:.
Installing the SDK
You can install the package using npm or yarn:
npm install --save @microsoft/teams-js
yarn add @microsoft/teams-js
Using the SDK
If you are using any dependency loader or module bundler such as RequireJS, SystemJS, browserify, or webpack, you can use
import syntax to import specific modules. For example:
import * as microsoftTeams from "@microsoft/teams-js";
You can also reference the entire library in html pages using a script tag. There are three ways to do this:
Important
Do not copy/paste these
<script src=... URLs from this page; they refer to a specific version of the SDK. To get the
<script src=...></script> markup for the latest version, always go to.
<!-- Microsoft Teams JavaScript API (via CDN) --> <script src="" crossorigin="anonymous"></script> <!-- Microsoft Teams JavaScript API (via npm) --> <script src="node_modules/@microsoft/teams-js@1.5.2/dist/MicrosoftTeams.min.js"></script> <!-- Microsoft Teams JavaScript API (copied local) --> <script src="MicrosoftTeams.min.js"></script>
The final option, using a local copy on your servers, eliminates that dependency but requires hosting and updating a local copy of the SDK.
Tip
If you are a TypeScript developer it is helpful to install the NPM package as described above, even if you don't link to the copy of
MicrosoftTeams.min.js in
node_modules from your HTML, because IDEs such as Visual Studio Code will use it for Intellisense and type checking.
Reference
The following sections contain reference pages for all the elements of the Teams client API. These pages are auto-generated from the source found in the npm module on. The source code for the SDK is located at.
And remember that The Microsoft Teams developer platform has full documentation on using the platform and the SDK. | https://docs.microsoft.com/de-de/javascript/api/overview/msteams-client?view=msteams-client-js-latest&preserve-view=true | CC-MAIN-2022-05 | en | refinedweb |
Important: Please read the Qt Code of Conduct -
What is the real effect of function: QGraphicsScene::setSceneRect?
#include <QApplication>
#include <QGraphicsScene>
#include <QGraphicsView>
int main(int argc, char *argv[]) {
QApplication a(argc, argv);
QGraphicsScene scene;
QGraphicsView view(&scene);
view.setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
view.setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
view.setFixedSize(800, 600);
view.show();
scene.setSceneRect(100, 100, 300, 300);
// scene.setSceneRect(0, 0, 300, 300);
//scene.setSceneRect(-100, -100, 300, 300);
scene.addRect(0, 0, 300, 300, QPen(Qt::blue));
scene.addRect(0, 0, 1, 1, QPen(Qt::red));
return a.exec();
}
Changing the arguments of setSceneRect, I see the red point and the blue rectangle is positioned on different points in the window, why? Can you help me? I read the Qt documents about "Graphics View Framework" and QGraphicsScene, QGraphicsView, QGraphicsItem, Coordinates, ... but I cannot imagine why changing the positions make those items changing the positions in the window/view.
- SGaist Lifetime Qt Champion last edited by
Hi,
From the doc, I'd say that the items are still put at the same global places but your scene is looking at a different place in the global scene.
I think I've understood the issue. The center of the view will be the same as the center of the scene's rect at begin.
Thanks | https://forum.qt.io/topic/58699/what-is-the-real-effect-of-function-qgraphicsscene-setscenerect | CC-MAIN-2022-05 | en | refinedweb |
Important: Please read the Qt Code of Conduct -
Minimal QSystemTrayIcon example in Ubuntu 12.04
Hello. I am studying how to build and deploy a Qt application with Ubuntu 12.04. I am working with Qt 5.4.2 which I downloaded here. I installed it in the default location
~/Qt5.4.2.
Currently I want to set a tray icon. My code is just the following:
#include <QApplication> #include <QDebug> #include <QIcon> #include <QSystemTrayIcon> int main(int argc, char **argv) { QApplication app(argc, argv); QSystemTrayIcon *trayIcon = new QSystemTrayIcon(); qDebug() << trayIcon->isSystemTrayAvailable(); trayIcon->setIcon(QIcon("heart.png")); trayIcon->show(); return app.exec(); }
heart.pngis from PyQt5's QSystemTrayIcon example, I got it from here. I put it in the same directory as the source file.
I built the executable with the following commands.
~/Qt5.4.2/5.4/gcc_64/bin/qmake -config release make
Running the created file shows the tray icon.
For deploying the app, I copied the following files into the same directory. I got them from
~/Qt5.4.2/5.4/gcc_64/lib:
libicudata.so.53 libicui18n.so.53 libicuuc.so.53 libQt5Core.so.5 libQt5DBus.so.5 libQt5Gui.so.5 libQt5Widgets.so.5
I also copied
~/Qt5.4.2/5.4/gcc_64/plugins/platforms/libqxcb.soand placed it under a directory named
platforms.
I tested adding a
qt.conffile, but it doesn't seem to have an effect. Its contents are
[Paths] Prefix = . Binaries = .
I copied over this whole directory to a VM running an Ubuntu 12.04 live CD. Before running the binary I exported
LD_LIBRARY_PATH=.so that it will find the included Qt libraries.
Unfortunately the tray icon is not shown when the program is run in the VM. The
qDebugstatement shows that the system tray is available though.
Thanks in advance.
- A Former User last edited by
Hi! Your Ubuntu uses the Unity desktop environment, right? IIRC Unity doesn't support "normal" tray icons but Canonical decides which applications are allowed to show a "tray icon".
@Wieland Hello. It worked in the machine where I compiled the source though, it only didn't work when I tried to deploy it to another machine.
- A Former User last edited by
This is what I mean: Don't know if it is still true.
@Wieland Hmm, that worked but there should be another solution. For example Dropbox works without editing the systray whitelist.
I'll mark this as solved as editing the systray whitelist worked but I'll try looking for other solutions.
I tested in Ubuntu 14.04 up to 15.10 and the tray icon is still not shown. The systray whitelist is not available any more in those distributions though, and I'd rather not have to install packages that bring back the tray icon like. I thought Qt 5.4.2 already supported app indicators? | https://forum.qt.io/topic/63785/minimal-qsystemtrayicon-example-in-ubuntu-12-04 | CC-MAIN-2022-05 | en | refinedweb |
The typical way to create an object in .NET/C# is to use the
new keyword. However it's also possible to create a new instance of an object using reflection. In this post I compare 4 different methods, and benchmark them to see which is fastest.
Creating objects using reflection—why bother?
In the Datadog APM tracer library we automatically instrument multiple libraries in your application. This allows us to add distributed tracing to your application without you having to change your code at all.
To achieve this, we often have to interact with library types without having a direct reference to them. For example, I was recently working on our Kafka integration to automatically instrument Confluent's .NET Client for Kafka. I needed to be able to create an instance of the
Headers class.
In an application, the solution would be simple—reference the Confluent.Kafka NuGet package, and call
new Headers(), but we can't do that in the Tracer. If we reference a NuGet package, then we would need to reference that package when we instrument your application. But what if you already reference some version of Confluent.Kafka? On .NET Framework we would likely run into binding redirect problems, while on .NET Core/.NET 5 we could cause
MissingMethodExceptions or any number of other problems.
To avoid all this, we use reflection to interact with the Confluent.Kafka types, using whichever version of the library you have referenced in your application. This avoids the multiple-references issue (though obviously means we need to be careful about ensuring we test with many different versions of the library).
Reflection is hugely flexible, but the big downside is that it's slow compared to directly interacting with objects. For a performance-critical application like the APM tracer, we need to make sure we're being as performant as possible, so we ensure that the reflection we do is highly optimised.
Which brings me back to the case in point. I needed a way to call
new Headers() without referencing the
Headers type directly. There are actually multiple ways to do this using reflection, which I will walk through here. At the end of the post, I run a benchmark to compare them, to see which is fastest.
4 ways to create an object using reflection
There are quite possibly more, but I've come up with 4 ways to create an object using reflection:
- Calling
Invokeon a
ConstructorInfoinstance.
- Using
Activator.CreateInstance().
- Creating a Compiled
Expression.
- Using
DynamicMethodand
Reflection.Emit.
These are arranged in roughly ascending order of complexity. I'll walk through each approach in the following sections before we get to the benchmarks themselves.
1. Standard Reflection using Invoke
The first approach is the "traditional" reflection approach:
Type typeToCreate = typeof(Headers); ConstructorInfor ctor = typeToCreate.GetConstructor(System.Type.EmptyTypes); object headers = ctor.Invoke(null);
The first step is to obtain a
Type representing the type you want to create. In this case (and in all the other examples) I've used
typeof(Headers) for simplicity, but this relies on having a direct reference to the
Headers type, so in practice, you'd obtain the same
Type through some other mechanism.
From this
Type you can get a reference to the
ConstructorInfo, which describes the parameterless constructor for the
Headers type. The
EmptyTypes helper is a cached empty array (
new Type[0]), which is used to indicate you want the parameterless constructor.
Finally, you can execute the constructor by calling
Invoke() on the
ConstructorInfo and passing in
null (as the constructor does not take any arguments). This returns an
object which is actually an instance of
Headers.
This is one of the simplest and most flexible ways to use reflection, as you can use similar approaches to invoke methods on an object, access fields, interfaces and attributes etc. However, as you'll see later, it's also one of the slowest. The next approach is a slightly optimised approach designed specifically for our scenario - construction.
2. Activator.CreateInstance
In this post I'm looking at a single scenario - creating an instance of an object. There happens to be a helper class designed specifically for this available in both .NET Framework and .NET Core called
Activator. You can use this class to easily create an instance of a type using the following:
Type typeToCreate = typeof(Headers); object headers = Activator.CreateInstance(typeToCreate);
as before, we need a reference to the
Headers type, but then we can simply call
Activator.CreateInstance(type). No need to mess around with
ConstructorInfo or anything. Very handy! As with the previous method, you can call parameterised constructors too if you need to.
There's not much more to say about this one, so we'll move on to the next one, where things start to get more interesting.
3. Compiled expressions
Expressions have been around for a long time (since C# 3.0) and are integral to various other features and libraries such as LINQ and ORMs such as EF Core. In many ways they are similar to reflection, in that they allow manipulation of code at runtime,
Expressions offer a high-ish level language for declaring code, which can subsequently be converted into an executable
Func<> by calling
Compile. We can create an expression that creates an instance of the
Headers type, compile it into a
Func<object>, and then invoke it as follows:
NewExpression constructorExpression = Expression.New(HeadersType); Expression<Func<object>> lambdaExpression = Expression.Lambda<Func<object>>(constructoeExpression); Func<object> createHeadersFunc = lambdaExpression.Compile(); object Headers = createHeadersFunc();
The first two lines of this snippet create an expression that is equivalent to
() => new Headers(). The third line converts the
Expression<> into a
Func<> that we can execute. The final line invokes our newly created
Func<object> to create the
Headers object.
For this simple example of calling a constructor, the syntax is pretty easy to understand, especially as you've likely worked with
Expressions in the past. In contrast, the final approach in this post, using Reflection.Emit, may not be something you've used before (I hadn't!).
4. Reflection.Emit
Reflection.Emit refers to the System.Reflection.Emit namespace, which contains various methods for creating new intermediate language (IL) in your application. IL instructions are the "assembly code" that the compiler outputs when you compile your application. The JIT in the .NET runtime converts these IL instructions into real assembly code when your application runs.
The approach in this section uses a class called
DynamicMethod to create a new method in the assembly at runtime, with the IL method body that creates an instance of the
Headers object.
Effectively, we're dynamically creating a method that looks like this:
Headers KafkaDynamicMethodHeaders() { return new Headers(); }
The following snippet creates a method signature similar to this using
DynamicMethod, though it doesn't have a method body yet;
Type headersType = typeof(Headers); DynamicMethod createHeadersMethod = new DynamicMethod( name: $"KafkaDynamicMethodHeaders", returnType: headersType, parametertypes: null, module: typeof(ConstructorBenchmarks).Module, skipVisibility: false);
We create a
DynamicMethod providing the
name of the method, the parameter and return
Types, and the
Module in which the method should be defined. Finally we specify whether the JIT visibility checks should be skipped for types and members accessed by the IL of the dynamic method. I'm only accessing
public types here, so chose to not skip it.
Types are defined in
Modules. This is very similar to an
Assembly, but a single
Assemblycan contain multiple
Modules.
We have the method signature, now we need to create the method body's IL:
ConstructorInfor ctor = typeToCreate.GetConstructor(System.Type.EmptyTypes); ILGenerator il = createHeadersMethod.GetILGenerator(); il.Emit(OpCodes.Newobj, Ctor); il.Emit(OpCodes.Ret);
We obtain an
ILGenerator for the method by calling
GetILGenerator(). We can then
Emit() IL operation codes. This is essentially hand-crafting the "raw" IL codes that the original
KafkaDynamicMethodHeaders would create.
One way to work out what IL you need is to write the C# you need, and then look at the generated IL.
IL_0000: newobj instance void Headers::.ctor() IL_0005: ret
The final step to create the dynamic method is to create a delegate that we can use to execute the function. We'll use a
Func<object> again, which we can invoke to create the
Headers object:
Func<object> headersDelegate = createHeadersMethod.CreateDelegate(typeof(Func<object>)); object headers = headersDelegate();
That covers the 4 approaches to calling a constructor using reflection, now on to the benchmarks!
Benchmarking the approaches
To run the benchmarks, I used the excellent BenchmarkDotNet library and created the following benchmark. This uses the
new Header() case as the "baseline", and compares each of the other approaches. All of the one-off work of loading
Types, compiling expressions, and generating
DynamicMethods is done in the constructor, so it's not included in the startup time. We're measuring the "steady state" time to create the
Headers instance.
using System; using System.Linq.Expressions; using System.Reflection; using System.Reflection.Emit; using BenchmarkDotNet.Attributes; using Confluent.Kafka; public class ConstructorBenchmarks { private static readonly Type HeadersType = typeof(Headers); private static readonly ConstructorInfo Ctor = HeadersType.GetConstructor(System.Type.EmptyTypes); private readonly Func<object> _dynamicMethodActivator; private readonly Func<object> _dynamicMethodActivator2; private readonly Func<object> _expression; public ConstructorBenchmarks() { DynamicMethod createHeadersMethod = new DynamicMethod( $"KafkaDynamicMethodHeaders", HeadersType, null, typeof(ConstructorBenchmarks).Module, false); ILGenerator il = createHeadersMethod.GetILGenerator(); il.Emit(OpCodes.Newobj, Ctor); il.Emit(OpCodes.Ret); _dynamicMethodActivator = (Func<object>)createHeadersMethod.CreateDelegate(typeof(Func<object>)); _expression = Expression.Lambda<Func<object>>(Expression.New(HeadersType)).Compile(); } [Benchmark(Baseline = true)] public object Direct() => new Headers(); [Benchmark] public object Reflection() => Ctor.Invoke(null); [Benchmark] public object ActivatorCreateInstance() => Activator.CreateInstance(HeadersType); [Benchmark] public object CompiledExpression() => _expression(); [Benchmark] public object ReflectionEmit() => _dynamicMethodActivator(); }
Using C# 9.0's top-level statements, the Program.cs file is as simple as:
using BenchmarkDotNet.Running; BenchmarkRunner.Run<ConstructorBenchmarks>();
For completeness, the project file is shown below. I decided to test .NET Framework and .NET 5 separately.
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFrameworks>net461;net5.0</TargetFrameworks> <LangVersion>9.0</LangVersion> </PropertyGroup> <ItemGroup> <PackageReference Include="BenchmarkDotNet" Version="0.12.1" /> <PackageReference Include="Confluent.Kafka" Version="1.6.3" /> </ItemGroup> </Project>
To run the benchmarks I used
dotnet run -c Release --framework net5.0 for example.
Finally…1500 words later, what are the results!?
The results
Obviously, don't get too hung up on the real numbers here. I'm using a middle-of-the-road laptop from several years ago, but the absolute numbers are not what I'm interested in. I'm more interested in the relative performance of each approach.
First off, lets look at the .NET 5 numbers: DefaultJob : .NET Core 5.0.5 (CoreCLR 5.0.521.16609, CoreFX 5.0.521.16609), X64 RyuJIT
And the .NET Framework results:
BenchmarkDotNet=v0.12.1, OS=Windows 10.0.19042 Intel Core i7-7500U CPU 2.70GHz (Kaby Lake), 1 CPU, 4 logical and 2 physical cores [Host] : .NET Framework 4.8 (4.8.4300.0), X64 RyuJIT DefaultJob : .NET Framework 4.8 (4.8.4300.0), X64 RyuJIT
Running the benchmark several times, there's a fair amount of variation in the numbers. Being a laptop, I'd imagine it's possible there was some thermal-throttling at play but the general pattern seems quite stable:
- Standard reflection using
ConstructorInfo.Invoke()is roughly 10× slower than calling
new Headers()
Activator.CreateInstanceis 2× faster, i.e. roughly 5× slower than calling
new Headers()
- The compiled expression and dynamic method approaches are roughly the same as calling
new Headers(). For .NET 5, we're talking a couple of nanoseconds difference. That's very impressive!
- On .NET Framework the DynamicMethod approach is faster than using compiled expressions, while on .NET 5, compiled expressions appear to be slightly faster than using
DynamicMethod
Given these differences, it seems clear that for performance-sensitive applications, it may well be worth the effort of using compiled expressions or
DynamicMethod, even for this simple case.
For completeness, I decided to also benchmark the startup time for each approach. This is a one-off cost that you pay when first calling a specific dynamic method, but it seems like information worth having. This post is already pretty long, so I'll leave out the code for now, but these are the results I get for .NET 5 when the setup costs are included: Job-ONVXJR : .NET Core 5.0.5 (CoreCLR 5.0.521.16609, CoreFX 5.0.521.16609), X64 RyuJIT LaunchCount=50 RunStrategy=ColdStart
As you can see, compiled expressions and Reflection.Emit have a considerable setup cost. This is important to bear in mind: you'll only start to see the benefits of Reflection.Emit's speedup over using
Activator.CreateInstance() after you've called the delegate about 3,500 times! Depending on where you're using this code, that trade off may or may not be worth it!
Summary
In this post I showed 4 different ways to call the constructor of a type using reflection. I then ran benchmarks comparing each approach using BenchmarkDotNet. This shows that using the naïve approach to reflection is about 10× slower than calling the constructor directly, and that using
Activator.CreateInstance() is about 5× slower. Both compiled expressions and
DynamicMethod with Reflection.Emit approach the speed of calling
new Headers() directly, with compiled expressions slightly faster in .NET 5 and ReflectionEmit faster on .NET Framework. | https://andrewlock.net/benchmarking-4-reflection-methods-for-calling-a-constructor-in-dotnet/ | CC-MAIN-2022-05 | en | refinedweb |
Vuex is the solution for state management in Vue applications. The next version —. However, even as Vuex 4 is just getting out the door, Kia King Ishii (a Vue core team member) is talking about his plans for Vuex 5, and I’m so excited by what I saw that I had to share it with you all. Note that Vuex 5 plans are not finalized, so some things may change before Vuex 5 is released, but if it ends up mostly similar to what you see in this article, it should be a big improvement to the developer experience.
With the advent of Vue 3 and it’s composition API, people have been looking into hand-built simple alternatives. For example, demonstrates a relatively simple, yet flexible and robust pattern for using the composition API along with
provide/inject to create shared state stores. As Gábor states in his article, though, this (and other alternatives) should only be used in smaller applications because they lack all those things that aren’t directly about the code: community support, documentation, conventions, good Nuxt integrations, and developer tools.
That last one has always been one of the biggest issues for me. The Vue devtools browser extension has always been an amazing tool for debugging and developing Vue apps, and losing the Vuex inspector with “time travel” would be a pretty big loss for debugging any non-trivial applications.
Thankfully, with Vuex 5 we’ll be able to have our cake and eat it too. It will work more like these composition API alternatives but keep all the benefits of using an official state management library. Now let’s take a look at what will be changing.
Defining A Store. Actions also cannot mutate the state on their own; they must use a mutator. So what does Vuex 5 look like?
import { defineStore } from 'vuex' export const counterStore = defineStore({ name: 'counter', state() { return { count: 0 } }, getters: { double () { return this.count * 2 } }, actions: { increment () { this.count++ } } })
There are a few changes to note. This name is used by the Vuex registry, which we’ll talk about later.. Kia noted that too often, mutations just became simple setters, making them pointlessly verbose, so they removed them. He didn’t mention whether it was “ok” to mutate the state directly from outside the store, but we are definitely allowed and encouraged to mutate state directly from an action and the Flux pattern frowns on the direct mutation of state.
Note: For those who prefer the composition API over the options API for creating components, you’ll be happy to learn there is also a way to create stores in a similar fashion to using the composition API.
import { ref, computed } from 'vue' import { defineStore } from 'vuex' export const counterStore = defineStore('counter', { const count = ref(0) const double = computed(() => count.value * 2) function increment () { count.value++ } return { count, double, increment } })
As shown above, the name gets passed in as the first argument for
defineStore. The rest looks just like a composition function for components. This will yield exactly the same result as the previous example that used the options API.
Getting The Store Instantiated } } })
There are pros and cons to importing the store directly into the component and instantiating it there. It allows you to code split and lazily loads the store only where it’s needed, but now it’s a direct dependency instead of being injected by a parent (not to mention you need to import it every time you want to use it). If you want to use dependency injection to provide it throughout the app, especially if you know it’ll be used at the root of the app where code splitting won’t help, then you can just use
provide:
import { createApp } from 'vue' import { createVuex } from 'vuex' import App from './App.vue' import store from './store' const app = createApp(App) const vuex = createVuex() app.use(vuex) app.provide('store', store) // provide the store to all components app.mount('#app')
And you can just inject it in any component where you’re going to use it:
import { defineComponent } from 'vue' export default defineComponent({ name: 'App', inject: ['store'] }) // Or with Composition API import { defineComponent, inject } from 'vue' export default defineComponent({ setup () { const store = inject('store') return { store } } })
I’m not excited about this extra verbosity, but it is more explicit and more flexible, which I am a fan of. This type of code is generally written once right away at the beginning of the project and then it doesn’t bother you again, though now you’ll either need to provide each new store or import it every time you wish to use it, but importing or injecting code modules is how we generally have to work with anything else, so it’s just making Vuex work more along the lines of how people already tend to work.
Using A Store
Apart from being a fan of the flexibility and the new way of defining stores the same way as a component using the composition API, there’s one more thing that makes me more excited than everything else: how stores are used.. And this API only gets more difficult to use when you are using namespaced modules. By comparison, Vuex 5 looks to work exactly how you would normally hope:
store.count // Access State store.double // Access Getters (transparent) store.increment() // Run actions // No Mutators
Everything — the state, getters and actions — is available directly at the root of the store, making it simple to use with a lot less verbosity and practically removes all need for using
mapState,
mapGetters,
mapActions and
mapMutations for the options API or for writing statements or simple functions for composition API. This simply makes a Vuex store look and act just like a normal store that you would build yourself, but it gets all the benefits of plugins, debugging tools, official documentation, etc.
Composing Stores
The final aspect of Vuex 5 we’ll look at today is composability. Vuex 5 doesn’t have namespaced modules that are all accessible from the single store. Each of those modules would be split into a completely separate store. That’s simple enough to deal with for components: they just import whichever stores they need and fire them up and use them. But what if one store wants to interact with another
usethe store use () { return { greeter: greeterStore } }, state () { return { count: 0 } }, getters: { greetingCount () { return `${this.greeter.greeting} ${this.count}' // access it from this.greeter } } })
With v5, we import the store we wish to use, then register it with
use and now it’s accessible all over the store at whatever property name you gave it. Things are even simpler if you’re using the composition API variation of the store definition:
// store/counter.js import { ref, computed } from 'vue' import { defineStore } from 'vuex' import greeterStore from './greeter' // Import the store you want to interact with export default defineStore('counter', ({use}) => { //
useis passed in to function const greeter = use(greeterStore) // use
useand now you have full access const count = 0 const greetingCount = computed(() => { return
${greeter.greeting} ${this.count}// access it like any other variable }) return { count, greetingCount } })
No more namespaced modules. Each store is separate and is used separately. You can use
use to make a store available inside another store to compose them. In both examples,
use is basically just the same mechanism as
vuex.store from earlier and they ensure that we instantiating the stores with the correct instance of Vuex.
TypeScript Support
For TypeScript users, one of the greatest aspects of Vuex 5 is that the simplification made it simpler to add types to everything. The layers of abstraction that older versions of Vuex had made it nearly impossible and right now, with Vuex 4, they increased our ability to use types, but there is still too much manual work to get a decent amount of type support, whereas in v5, you can put your types inline, just as you would hope and expect.
Conclusion
Vuex 5 looks to be almost exactly what I — and likely many others — hoped it would be, and I feel it can’t come soon enough. It simplifies most of Vuex, removing some of the mental overhead involved, and only gets more complicated or verbose where it adds flexibility. Leave comments below about what you think of these changes and what changes you might make instead or in addition. Or go straight to the source and add an RFC (Request for Comments) to the list to see what the core team thinks. | https://kerbco.com/whats-coming-to-vuex/ | CC-MAIN-2022-05 | en | refinedweb |
Data fetching patterns are a very important part of every web framework. That is why this part of every web technology has constantly seen improvements and innovations.
Seeing that modern web development paradigms rely a lot on data-fetching functionalities to support features like SSR and CSR, it makes sense to stay up to date with changes in this area of the web.
In this post, I’ll introduce you to the
useSWR hook that was just recently introduced to Next.js to help make data fetching easier. To do that, we’ll build a random user generation site. Not that you need yet another one of these random generation sites, but it has proven to be effective in showing developers how things work. My goal is to ensure that at the end of this post, you’ll know more about the
useSWR hook and consequently improve your Next.js authoring experience with it. Before we dive in, here’s a brief about it.
useSWR
SWR in this context stands for “stale-while-revalidate,” which is a term I imagine Next.js developers are already familiar with. The Next.js team built it to give developers more ways to fetch remote data when working with Next. It is basically a set of React Hooks that provide features like revalidation, mutation, caching, etc. out of the box.
I like to think of it this way: the problem
useSWR solves for me is that it gives me the opportunity to show something to users immediately, and a convenient way to manage their experience while the actual content gets loaded behind the scenes. And it does it likes this:
- When a request is made, it first returns a cached value. Probably something generated in the
getStaticProps()function
- Next, the server will start a revalidation process and fetch the actual data for the page
- Finally, when revalidation is successful, SWR will update the page with actual data
When this is the case, users won’t be stuck looking at loading screens and your site remains fast and performant. Amongst other benefits of SWR, you can use it to retrieve data from any HTTP-supported servers and it has full TypeScript support. Let’s dive in and set up our random user generator app with Next.js and the
useSWR hook.
Setting up a Next.js application
To quickly set up a Next.js application, open a terminal window and run the
create-next-app command like so:
npx create-next-app useswr-user-generator
Follow the prompts to complete the setup process and you should have a
useswr-user-generator app locally. Navigate into the application directory and install SWR with this command:
cd useswr-user-generator # navigate into the project directory npm install swr axios # install swr and axios npm run dev # run the dev server
The commands above will install both the SWR and Axios packages and open the project on your browser at
localhost:3000. If you check, we should have the project live on that port like so:
Great, we’ve successfully set up a Next.js application. Let’s go ahead and build this random generator thing, shall we?
Data fetching with SWR
In the project’s root, create a
components folder. Inside this folder, add a
Users.js file and update it with the snippet below:
// components/Users.js import axios from "axios"; import useSWR from "swr"; import Image from "next/image"; export default function Users() { const address = ``; const fetcher = async (url) => await axios.get(url).then((res) => res.data); const { data, error } = useSWR(address, fetcher); if (error) <p>Loading failed...</p>; if (!data) <h1>Loading...</h1>; return ( <div> <div className="container"> {data && data.results.map((item) => ( <div key={item.cell} className={`user-card ${item.gender}`}> <div> <Image width={100} height={100} src={item.picture.large} <h3>{`${item.name.first} ${item.name.last}`}</h3> </div> <div className="details"> <p>Country: {item.location.country}</p> <p>State: {item.location.state}</p> <p>Email: {item.email}</p> <p>Phone: {item.phone}</p> <p>Age: {item.dob.age}</p> </div> </div> ))} </div> </div> ); }
Let’s walk through this code. In the snippet above, we are using the
useSWR() hook to fetch the data for six users as specified in our API
address variable.
The
useSWR hook accepts two arguments and returns two values (based on the status of the request). It accepts:
- A
key— a string that serves as the unique identifier for the data we are fetching. This is usually the API URL we are calling
- A
fetcher— any asynchronous function that returns the fetched data
It returns:
data— the result of the request (if it was successful)
error— the error that occurred (if there was an error)
At its simplest state, this is how to fetch data with
useSWR:
import useSWR from 'swr' export defaul function Users() { const { data, error } = useSWR('/api/users', fetcher) if (error) return <div>failed to load</div> if (!data) return <div>loading...</div> return <div>hello {data.name}!</div> }
In our case, we saved our API URL in the
address variable and defined our
fetcher function to process the request and return the response.
N.B., the
useSWR docs suggest that using the Next.js native fetch function would work with SWR, but that wasn’t the case for me. I tried it but it didn’t quite work, hence why I’m using Axios instead.
Let’s check back on the browser and see if we get our users. To show the users page, let’s import the
<Users /> component into the
pages/index.js file and update it like so:
// pages/index import Head from "next/head"; import Users from "../components/Users"; export default function Home() { return ( <div> <Head> <title>Create Next App</title> <meta name="description" content="Random user generator" /> <link rel="icon" href="/favicon.ico" /> </Head> <Users /> </div> ); }
You might have noticed that I’m using the Next.js image component to show the user avatars in the
Users.js file. As a result, we need to specify the image domains in the
next.config.js file like this:
module.exports = { reactStrictMode: true, images: { domains: ["randomuser.me"], }, };
Now when you check back on the browser, we should see our users showing up as expected:
There! We have our users. So, our
useSWR hook is working! But is that it? How is this any different from other data fetching methods? Read on…
Why
useSWR?
It’s a likely question to ask at this point. Other than being very declarative, I wouldn’t say this is a huge improvement for me yet. Which probably makes now a good time to talk about a feature of
useSWR I really like (among others):
Pagination
Pagination with
useSWR is a breeze. Let’s exemplify it: imagine that instead of loading just six users, we want the ability to generate more users and add them to this page on demand. This is particularly helpful when you’re building an application where users need to navigate through multiple pages of the same content. We can demonstrate this by adding a Load More button at the end of our users page to generate more users when clicked. Let’s update the index page to pass a count prop to our
<Users /> component:
// pages/index.js import Head from "next/head"; import Users from "../components/Users"; export default function Home() { const [count, setCount ] = useState(0); return ( <div> <Users count={count} setCount={setCount} /> </div> ); }
Next, let’s add a button in the
components/Users.js file to update the count and load more users to the page from our API:
import axios from "axios"; import useSWR from "swr"; import Image from "next/image"; export default function Users({ count, setCount }) { const address = `{count}`; const fetcher = async (url) => await axios.get(url).then((res) => res.data); const { data, error } = useSWR(address, fetcher); if (error) <p>Loading failed...</p>; if (!data) <h1>Loading...</h1>; return ( <div> <div className="container"> // show users </div> <center> <div className="btn"> <button onClick={() => setCount(count + 3)}>Load More Users</button> </div> </center> </div> ); }
Let’s check back on the browser and see if our button is working as expected. When clicked, it should load three more users and render them on the page along with our existing users:
Works like a charm. However, this introduced another problem for us. When the button is clicked, Next.js will make a fresh request to our API to fetch those three more users, which will trigger our loading state. That’s why you notice a flicker on the screen when the button is clicked. Luckily, we have a simple solution for this, much thanks to the SWR cache.
We can pre-generate those users (or a next page in a different context) and render them in a hidden
<div> element. This way, we’ll just display that element when the button is clicked. As a result, the request will be happening before we click the button, not when.
See? No more flickers.
Amongst others, here are some amazing features you get out of the box from SWR:
- Reusable data fetching
- Inbuilt cache
- SSR/ISR/SSG support
- TypeScript support
- Mutations and revalidations
And a lot more performance benefits that we can’t cover within the scope of this post. As a next step, I’d like you to check out the SWR docs for more details on usage and benefits.
Conclusion
In this post, we’ve gone through the basics of the
useSWR hook. We also built a mini random user generator app with Next.js to demonstrate the SWR functionalities. I hope this post has provided you some insight into fetching data in Next.js applications with
useSWR. The live demo of the site we built is hosted here on Netlify. Feel free to play around with it and also fork the codebase from GitHub if you want to tweak things to your taste.. | https://blog.logrocket.com/handling-data-fetching-next-js-useswr/ | CC-MAIN-2022-05 | en | refinedweb |
#include <CGAL/Direction_2.h>
An object
d of the class
Direction_2 is a vector in the two-dimensional vector space \( \mathbb{R}^2\) where we forget about its length.
They can be viewed as unit vectors, although there is no normalization internally, since this is error prone. Directions are used whenever the length of a vector does not matter. They also characterize a set of parallel oriented lines that have the same orientations. For example, you can ask for the direction orthogonal to an oriented plane, or the direction of an oriented line. Further, they can be used to indicate angles. The slope of a direction is
dy()/
dx().
Kernel::Direction_2
returns true, iff
d is not equal to
d1, and while rotating counterclockwise starting at
d1,
d is reached strictly before
d2 is reached.
Note that true is returned if
d1 ==
d2, unless also
d ==
d1.
returns values, such that
d
== Direction_2<Kernel>(delta(0),delta(1)). | https://doc.cgal.org/4.12.1/Kernel_23/classCGAL_1_1Direction__2.html | CC-MAIN-2022-05 | en | refinedweb |
S2T2-Wav2Vec2-CoVoST2-EN-AR-ST
s2t-wav2vec2-large-en-ar is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in Large-Scale Self- and Semi-Supervised Learning for Speech Translation and officially released in
Fairseq.
Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech Translation (ST). It uses a pretrained Wav2Vec2 as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
Intended uses & limitations
This model can be used for end-to-end English speech to Arabic text translation. See the model hub to look for other S2T2 checkpoints.
How to use
As this a standard sequence to sequence transformer model, you can use the
generate method to generate the
transcripts by passing the speech features to the model.
You can use the model directly via the ASR pipeline
from datasets import load_dataset from transformers import pipeline librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-ar", feature_extractor="facebook/s2t-wav2vec2-large-en-ar") translation = asr(librispeech_en[0]["file"])
or step-by-step as follows:
import torch from transformers import Speech2Text2Processor, SpeechEncoderDecoder from datasets import load_dataset import soundfile as sf model = SpeechEncoderDecoder.from_pretrained("facebook/s2t-wav2vec2-large-en-ar") processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-ar") def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") ds = ds.map(map_to_array) inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt") generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"]) transcription = processor.batch_decode(generated_ids)
Evaluation results
CoVoST-V2 test results for en-ar (BLEU score): 20.2
For more information, please have a look at the official paper - especially row 10 of Table 2.
BibTeX entry and citation info
@article{DBLP:journals/corr/abs-2104-06678, author = {Changhan Wang and Anne Wu and Juan Miguel Pino and Alexei Baevski and Michael Auli and Alexis Conneau}, title = {Large-Scale Self- and Semi-Supervised Learning for Speech Translation}, journal = {CoRR}, volume = {abs/2104.06678}, year = {2021}, url = {}, archivePrefix = {arXiv}, eprint = {2104.06678}, timestamp = {Thu, 12 Aug 2021 15:37:06 +0200}, biburl = {}, bibsource = {dblp computer science bibliography,} }
- Downloads last month
- 131 | https://huggingface.co/facebook/s2t-wav2vec2-large-en-ar | CC-MAIN-2022-05 | en | refinedweb |
High-Velocity Engineering with Virtual Kubernetes Clusters
by Vishnu Chilamakuru
Kubernetes, an open-source container-orchestration system for automating your application deployment, scaling, and management, has matured so much recently that it’s expanded beyond its original operations usage and will likely continue to do so.
While this fast growth is impressive, it also means that the ecosystem needs to quickly evolve to solve the challenges of using Kubernetes for other scenarios, such as development or testing.
As your organization grows and you integrate Kubernetes more fully into your daily workflow, your needs may grow more complex. You probably began with a single cluster for everything, but now, you need multiple clusters, such as one for testing and one for specific workloads. As the number of clusters increases, your work also increases to address factors like isolation, access, admin effort, cost-efficiency, or management of more environments. Having virtual Kubernetes clusters, which can be created and disposed of in seconds, is one solution to this issue.
In this post, you will learn what vclusters are and how to use them to enable high-velocity engineering, efficiently addressing key challenges like the following:
- Creating and disposing of new environments
- Launching and managing environments with minimal admin effort
- Utilizing resources in a cost-efficient manner
Treat Resources like Cattle
A popular mindset is to treat the cloud as “cattle, not pets”-meaning, infrastructure resources should be cared for but replaced when things go wrong. This phrase was coined by Microsoft engineer Bill Baker in 2012 during his presentation on scaling up versus scaling out. The phrase explains how server treatment has changed over time. Gavin McCance later popularized this when he talked about the OpenStack cloud at CERN.
Each server is given a name in the pets service model, like
zeus,
ares,
hades,
poseidon, and
athena. They are unique and lovingly cared for, and when they get sick, you nurse them back to health. You scale them up by making them bigger, and when they are unavailable, everyone notices.
Examples of pet servers include mainframes, solitary servers, load balancers and firewalls, and database systems.
Cattle Model
In the cattle service model, the servers are given identification numbers, like
web01,
web02,
web03,
web04, and
web05, just as cattle are tagged. Each server is almost identical to the others, and when one gets sick, you replace it with another one. You scale them by creating more of them, and when one is unavailable, no one notices.
Examples of cattle servers include web server arrays; NoSQL clusters; queuing clusters; search clusters; caching reverse proxy clusters; multimaster data stores, like Cassandra and MongoDB; and big-data cluster solutions.
Evolution of the Cattle Model
The cattle service model has evolved from the Iron Age (bare-metal rack-mounted servers) to the Cloud Age (virtualized servers that are programmable through a web interface).
- Iron Age of computing: there was no concept of hardware virtualization. Robust change configuration tools, like Puppet or Chef, allowed operations to configure systems using automation.
- First Cloud Age: virtualization was extended to offer Infrastructure as a Service (IaaS) that virtualized the entire infrastructure (networks, storage, memory, and CPU) into programmable resources. Popular platforms offering IaaS are Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
- Second Cloud Age: automation was built to virtualize aspects of the infrastructure. This allows applications to be segregated into isolated environments without the need to virtualize hardware, which in turn duplicates the operating system per application. Examples of this are Linux Containers and Docker.
Kubernetes and similar technologies have now evolved to allocate resources for containers and schedule these containers across a cluster of servers. These tools give rise to immutable production, where disposable containers are configured at deployment.
Introducing vclusters
“Virtual clusters are fully working Kubernetes clusters that run on top of other Kubernetes clusters. Compared to ‘real’ clusters, virtual clusters do not have their own node pools. Instead, they are scheduling workloads inside the underlying cluster while using a separate control plane.”- “What are Virtual Kubernetes Clusters?”
High-Velocity Engineering with vclusters
In the software development and release cycle, the software moves through multiple environments, like local, dev, test, and preproduction, before it’s released to production. These environments should be close to the production environment to avoid any libraries or dependency version conflicts.
With local Kubernetes environments, such as minikube or K3s, developers can create Kubernetes clusters on their local computers. The upside of this approach is that you have complete control over your environment. But this can frequently leave you struggling with management and setup. It also does not resemble cloud-based environments closely enough when there is a dependency on multiple downstream or upstream services.
A cloud-based environment setup can address this issue by removing the setup pain points. But it also slows down the deployment and testing of multiple versions of your software. For example, your team is simultaneously working on three new features. You can deploy and test only one version at a time, making this a sequential process. Launching three parallel environments is resource-intensive and costly.
Using vcluster, you can quickly launch and delete these environments, thus speeding up the development and testing process.
How vcluster Works
Each developer gets an individual virtual cluster with full admin access to use however they please. They can change all available configurations, even the Kubernetes version, independently from other users working on the same physical cluster.
Since every development or QA environment is running on the same physical cluster, only one or two clusters are needed for all engineers, which significantly reduces the workload on the sysadmins. The isolation helps prevent the underlying cluster from breaking due to developer misconfigurations, and the cluster won’t require additional installations and add-ons.
Developers and testers can create virtual clusters as needed, as long as they stay within their resource limits, instead of waiting for IT to provide infrastructure access.
The virtual cluster consists of core Kubernetes components, like an API server, controller manager, and storage backend (such as etcd, SQLite, or MySQL). To reduce virtual cluster overhead, vcluster builds on K3s, a fully working, certified, and lightweight Kubernetes distribution that compiles the components into a single binary and disables all unneeded features.
That lightweight configuration makes vcluster fast, and it uses few resources because it’s bundled in a single pod. Because pods are scheduled in the underlying host cluster, there is no performance degradation.
vcluster splits up large multi-tenant clusters into smaller vclusters to reduce overhead and increase scalability. This dramatically decreases pressure on the underlying Kubernetes cluster since most vcluster API requests and objects will not reach the host cluster.
Virtual clusters also save on cloud computing costs, as the underlying environment is shared. Additional features, such as automatic sleep mode, make this approach even more cost-efficient since the idle times seen in other cloud-based approaches can be nearly eliminated.
Quick Start Guide
vcluster works with any Kubernetes clusters, like Amazon Elastic Kubernetes Service (EKS), Google Cloud, Microsoft Azure, and DigitalOcean. To install the vcluster, here are the prerequisites:
kubectl: check via
kubectl versioncommand
helmv3: check via
helm versioncommand
- a working kube-context with access to a Kubernetes cluster: check via `kubectl get namespaces’ command
1. Download vcluster CLI
vcluster can be downloaded using one of the following commands based on your operating system:
- Mac (Intel/AMD)
curl -s -L "" | sed -nE 's!.*"([^"]*vcluster-darwin-amd64)".*!\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
- Linux (AMD)
curl -s -L "" | sed -nE 's!.*"([^"]*vcluster-linux-amd64)".*!\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
- Windows (PowerShell)
md -Force "$Env:APPDATA\vcluster"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
Invoke-WebRequest -UseBasicParsing ((Invoke-WebRequest -URI "" -UseBasicParsing).Content -replace "(?ms).*`"([^`"]*vcluster-windows-amd64.exe)`".*","`$1") -o $Env:APPDATA\vcluster\vcluster.exe;
$env:Path += ";" + $Env:APPDATA + "\vcluster";
[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);
To install in other operating systems, refer to these steps. Alternatively, you can download the binary for your platform from the GitHub Releases page and add it to your PATH.
2. Verify Installation
To confirm that the vcluster CLI is successfully installed, test using this command:
vcluster --version
3. Create a vcluster
Create a virtual cluster
vcluster-1 in namespace
host-namespace-1:
# By default vcluster will connect via port-forwarding
vcluster create vcluster-1 -n host-namespace-1 --connect
# OR: Use --expose to create a vcluster with an externally accessible LoadBalancer
vcluster create vcluster-1 -n host-namespace-1 --connect --expose
Check the vcluster docs to find out how to deploy a vcluster using Helm or Kubectl instead.
4. Use the vcluster
Run this in a separate terminal:
export KUBECONFIG=./kubeconfig.yaml
# Run any kubectl, helm, etc. command in your vcluster
kubectl get namespace
kubectl get pods -n kube-system
kubectl create namespace demo-nginx
kubectl create deployment nginx-deployment -n demo-nginx --image=nginx
kubectl get pods -n demo-nginx
5. Clean up resources
vcluster delete vcluster-1 -n host-namespace-1
Conclusion
Using vclusters can help you tackle challenges, like environment setup, configurations, and dependency management, when you use Kubernetes. Virtual clusters give developers secure, flexible, and cost-efficient Kubernetes access without consuming too many resources, thus increasing your organization’s engineering capability.
To learn more or get started with virtual clusters, check out the vcluster site.
Photo by Universal Eye on Unsplash
Originally published at. | https://loft-sh.medium.com/high-velocity-engineering-with-virtual-kubernetes-clusters-7df929ac6d0a | CC-MAIN-2022-05 | en | refinedweb |
By John Hanley, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community..
In Part 1, we discussed the concepts related to SSL Certificates and Let's Encrypt in detail. In this part, we will explain how to create your Account Key, Certificate Key and Certificate Signing Request (CSR).
The following source code examples do not have error checking. These code snips are designed to demonstrate how to interface with ACME. For more complete examples, review the source code in the examples package that you can download: ACME Examples in Python (Zip - 20 KB).
I could not find documentation on the size of the private key. I have been testing with a key size of 4096 bits and this works just fine.
There are numerous methods to create the Account Key. Let's look at two methods: writing a Python program and using OpenSSL from the command line. Included are examples showing how to work with private keys.
This example does not use the openssl python libraries. This example uses the crypto libraries which makes creating a private key very simple. Following this example is one using openssl which is more complicated but has more options.
make_account_key.py
""" Let's Encrypt ACME Version 2 Examples - Create Account Key """ from Crypto.PublicKey import RSA filename = 'account.key' key = RSA.generate(4096) with open(filename,'w') as f: f.write(key.exportKey().decode('utf-8'))
make_account_key2.py
import sys import OpenSSL from OpenSSL import crypto filename = 'account.key' key = crypto.PKey() key.generate_key(crypto.TYPE_RSA, 4096) key_material = crypto.dump_privatekey(crypto.FILETYPE_PEM, key) val = key_material.decode('utf-8') with open("account.key", "wt") as f: f.write(val)
OpenSSL Command Line Example:
openssl genrsa -out account.key 4096
OpenSSL command line options:
View details and verify the new account key:
openssl rsa -in account.key -text -check -noout
Extract the public key from the private key:
openssl rsa -pubout -in account.key -out account.pub.
We will repeat the above examples to create the certificate key. The difference is that the filename will be the name of our domain name that we will be issuing the certificate for. Change "domain.com" to your domain name.
make_certificate_key.py
""" Let's Encrypt ACME Version 2 Examples - Create Certificate Key """ from Crypto.PublicKey import RSA domainname = "example.com" filename = domainname + '.key' key = RSA.generate(4096) with open(filename,'w') as f: f.write(key.exportKey().decode('utf-8'))
OpenSSL Command Line Example:
openssl genrsa -out example.com.key 4096
OpenSSL command line options:.
Generating a CSR is easy with OpenSSL. All that is required is the domain name and optionally an email address. In the following example, replace domainName with your domain name and emailAddress with your email address.
This example also removes all the subject fields that Let's Encrypt does not process such as C, ST, L, O and OU and does add the subjectAltName extension that Chrome requires.
make_csr.py
""" Let's Encrypt ACME Version 2 Examples - Create CSR (Certificate Signing Request) """ importOpenSSL KEY_FILE = "certificate.key" CSR_FILE = "certificate.csr" domainName = 'api.neoprime.xyz' emailAddress = 'support@neoprime.xyz' def create_csr(pkey, domain_name, email_address): """ Generate a certificate signing request """ # create certificate request cert = OpenSSL.crypto.X509Req() # Add the email address cert.get_subject().emailAddress = email_address # Add the domain name cert.get_subject().CN = domain_name san_list = ["DNS:" + domain_name] cert.add_extensions([ OpenSSL.crypto.X509Extension( b"subjectAltName", False, ", ".join(san_list).encode("utf-8")) ]) cert.set_pubkey(pkey) cert.sign(pkey, 'sha256') return cert # Load the Certicate Key data = open(KEY_FILE, 'rt').read() # Load the private key from the certificate.key file pkey = OpenSSL.crypto.load_privatekey(OpenSSL.crypto.FILETYPE_PEM, data) # Create the CSR cert = create_csr(pkey, domainName, emailAddress) # Write the CSR to a file in PEM format with open(CSR_FILE,'wt') as f: data = OpenSSL.crypto.dump_certificate_request(OpenSSL.crypto.FILETYPE_PEM, cert) f.write(data.decode('utf-8'))
In Part 3 we will begin going through each Let's Encrypt ACME API using the account.key, certificate.key and certificate.csr files to generate and install SSL certificates for Alibaba Cloud API Gateway and CDN.
Let's Encrypt ACME on Alibaba Cloud – Part 1
Creating an Ecosystem for Redevelopment with Alibaba Cloud DataWorks
2,630 posts | 656 followersFollow
Alibaba Clouder - June 25, 2018
Alibaba Clouder - June 28, 2018
Alibaba Clouder - June 27, 2018
Alibaba Clouder - July 1, 2020
Alibaba Clouder - February 21, 2019
Alibaba Clouder - February 18, 2019
2,630 posts | 656 | https://www.alibabacloud.com/blog/lets-encrypt-acme-with-alibaba-cloud-api-gateway-and-cdn-part-2_593778 | CC-MAIN-2022-05 | en | refinedweb |
Message-ID: <1882777231.64149.1555592865187.JavaMail.confluence@cwiki-vm5.apache.org> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_64148_417318868.1555592865187" ------=_Part_64148_417318868.1555592865187 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
This section is about regular Camel. The examples presented here in this= section is much more in common of all the examples we have in the Camel do= cumentation.=20
If you have been reading the previous 3 parts then, this quote applies:<= /p>=20
=20=20
you must unlearn what you have learned
Master Yoda, Star Wars IV=
So we start all over again!
Camel is particular strong as a light-weight and agile routing=
strong> and mediation framework. In this part we will intr=
oduce the routing concept and how we can introduce this in=
to our solution.
Looking back at the figure from the Introduction page w= e want to implement this routing. Camel has support for expressing this routing logic using Java as a= DSL (Domain Specific Language). In fact Camel also has DSL for XML and Sca= la. In this part we use the Java DSL as its the most powerful and all devel= opers know Java. Later we will introduce the XML version that is very well = integrated with Spring.
Before we jump into it, we want to state that this tutorial is about
So the starting point is:=20
/** * The webservice we have implemented. */ public class ReportIncidentEndpointImpl implements ReportIncidentEndpoint { /** * This is the last solution displayed that is the most simple */ public OutputReportIncident reportIncident(InputReportIncident paramete= rs) { // WE ARE HERE !!! return null; } }=20
Yes we have a simple plain Java class where we have the implementation o= f the webservice. The cursor is blinking at the WE ARE HERE block and this = is where we feel home. More or less any Java Developers have implemented we= bservices using a stack such as: Apache AXIS, Apache CXF or some other quit= e popular framework. They all allow the developer to be in control and impl= ement the code logic as plain Java code. Camel of course doesn't enforce th= is to be any different. Okay the boss told us to implement the solution fro= m the figure in the Introduction page and we are now ready to code.=20
RouteBuilder is the hearth in Camel of the Java DSL rou= ting. This class does all the heavy lifting of supporting EIP verbs for end= -users to express the routing. It does take a little while to get settled a= nd used to, but when you have worked with it for a while you will enjoy its= power and realize it is in fact a little language inside Java itself. Came= l is the only integration framework we are aware of that h= as Java DSL, all the others are usually only XML based. = p>=20
As an end-user you usually use the RouteBuilder as of f= ollows:=20
So we create a new class ReportIncidentRoutes and implement the first pa= rt of the routing:=20
import org.apache.camel.builder.RouteBuilder; public class ReportIncidentRoutes extends RouteBuilder { public void configure() throws Exception { // direct:start is a internal queue to kick-start the routing in ou= r example // we use this as the starting point where you can send messages to= direct:start from("direct:start") // to is the destination we send the message to our velocity en= dpoint // where we transform the mail body .to("velocity:MailBody.vm"); } }=20
What to notice here is the configure method. Here is wh= ere all the action is. Here we have the Java DSL langauge, that is expresse= d using the fluent builder syntax that is also known from = Hibernate when you build the dynamic queries etc. What you do is that you c= an stack methods separating with the dot.=20
In the example above we have a very common routing, that can be distille= d from pseudo verbs to actual code with:=20
from("direct:start") is the consumer that is kick-start=
ing our routing flow. It will wait for messages to arrive on the direct queue and then dispatch the m=
essage.
to("velocity:MailBody.vm") is the producer tha= t will receive a message and let Velocity generate the mail body response.<= /p>=20
So what we have implemented so far with our ReportIncidentRoutes RouteBu=
ilder is this part of the picture:
Now we have our RouteBuilder we need to add/connect it to our CamelConte= xt that is the hearth of Camel. So turning back to our webservice implement= ation class ReportIncidentEndpointImpl we add this constructor to the code,= to create the CamelContext and add the routes from our route builder and f= inally to start it.=20
private CamelContext context; public ReportIncidentEndpointImpl() throws Exception { // create the context context =3D new DefaultCamelContext(); // append the routes to the context context.addRoutes(new ReportIncidentRoutes()); // at the end start the camel context context.start(); }=20
Okay how do you use the routes then? Well its just as before we use a Pr=
oducerTemplate to send messages to Endpoints, so we just send to the
So we = implement the logic in our webservice operation:
/** * This is the last solution displayed that is the most simple */ public OutputReportIncident reportIncident(InputReportIncident paramete= rs) { Object mailBody =3D context.createProducerTemplate().sendBody("dire= ct:start", parameters); System.out.println("Body:" + mailBody); // return an OK reply OutputReportIncident out =3D new OutputReportIncident(); out.setCode("OK"); return out; }=20
Notice that we get the producer template using the createProduce=
rTemplate method on the CamelContext. Then we send the input param=
eters to the direct:start endpoint and it will route it
to the velocity endpoint that will generate the mail body. S=
ince we use direct as the consumer endpoint (=3Dfrom) and =
its a synchronous exchange we will get the response back f=
rom the route. And the response is of course the output from the velocity e=
ndpoint.
About creating ProducerTemplate
We have now completed this part of the picture:
Now is the time we would like to unit test what we got now. So we call f= or camel and its great test kit. For this to work we need to add it to the = pom.xml=20
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-core</artifactId> <version>1.4.0</version> <scope>test</scope> <type>test-jar</type> </dependency>=20
After adding it to the pom.xml you should refresh your Java Editor so it=
pickups the new jar. Then we are ready to create out unit test class.
= We create this unit test skeleton, where we extend this cl= ass
ContextTestSupport
package org.apache.camel.example.reportincident; import org.apache.camel.ContextTestSupport; import org.apache.camel.builder.RouteBuilder; /** * Unit test of our routes */ public class ReportIncidentRoutesTest extends ContextTestSupport { }=20
=20
ContextTestSupport is a supporting unit test class for much=
easier unit testing with Apache Camel. The class is extending JUnit TestCa=
se itself so you get all its glory. What we need to do now is to somehow te=
ll this unit test class that it should use our route builder as this is the=
one we gonna test. So we do this by implementing the
createRouteBuil=
der method.
@Override protected RouteBuilder createRouteBuilder() throws Exception { return new ReportIncidentRoutes(); }=20
That is easy just return an instance of our route builder and this unit = test will use our routes.=20
It is quite common in Camel itself to unit test using routes defined as = an anonymous inner class, such as illustrated below:=20
protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { public void configure() throws Exception { // TODO: Add your routes here, such as: from("jms:queue:inbox").to(""); } }; }=20
The same technique is of course also possible for end-users of Camel to = create parts of your routes and test them separately in many test classes.<= br>.= p>=20
public void testTransformMailBody() throws Exception { // create a dummy input with some input data InputReportIncident parameters =3D createInput(); // send the message (using the sendBody method that takes a paramet= ers as the input body) // to "direct:start" that kick-starts the route // the response is returned as the out object, and its also the bod= y of the response Object out =3D context.createProducerTemplate().sendBody("direct:st= art", parameters); // convert the response to a string using camel converters. However= we could also have casted it to // a string directly but using the type converters ensure that Came= l can convert it if it wasn't a string // in the first place. The type converters in Camel is really power= ful and you will later learn to // appreciate them and wonder why its not build in Java out-of-the-= box String body =3D context.getTypeConverter().convertTo(String.class, = out); // do some simple assertions of the mail body assertTrue(body.startsWith("Incident 123 has been reported on the 2= 008-07-16 by Claus Ibsen.")); } /** * Creates a dummy request to be used for input */ protected InputReportIncident createInput() { InputReportIncident input =3D; }=20
The next piece of puzzle that is missing is to store the mail body as a = backup file. So we turn back to our route and the EIP patterns. We use the = Pipes and Filters= a> pattern here to chain the routing as:=20
public void configure() throws Exception { from("direct:start") .to("velocity:MailBody.vm") // using pipes-and-filters we send the output from the previous= to the next .to(""); }=20
Notice that we just add a 2nd .to on the newline. Camel= will default use the Pipes and Filters pattern here when there are multi endpoints chaine= d liked this. We could have used the pipeline verb to let = out stand out that its the Pipes and Filters pattern such as:=20
from("direct:start") // using pipes-and-filters we send the output from the previous= to the next .pipeline("velocity:MailBody.vm", "");=20
But most people are using the multi .to style instead.<= /p>=20
We re-run out unit test and verifies that it still passes:=20
Running org.apache.camel.example.reportincident.ReportIncidentRoutesTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.157 sec=20
But hey we have added the file producer endpoint and thus a fil=
e should also be created as the backup file. If we look in the
target=
/subfolder we can see that something happened.
On my humble lapt= op it created this folder: target\subfolder\ID-claus-acer.= So the file producer create a sub folder named
ID-claus-acer =
what is this? Well Camel auto generates an unique filename based on the uni=
que message id if not given instructions to use a fixed filename. In fact i=
t creates another sub folder and name the file as: target\subfolder\ID-clau=
s-acer\3750-1219148558921\1-0 where 1-0 is the file with the mail body. Wha=
t we want is to use our own filename instead of this auto generated filenam=
e. This is archived by adding a header to the message with the filename to =
use. So we need to add this to our route and compute the filename based on =
the message content.
For starters we show the simple solution and build from there. We start =
by setting a constant filename, just to verify that we are on the right pat=
h, to instruct the file producer what filename to use. The file producer us=
es paramete= rs) { // create the producer template to use for sending messages ProducerTemplate producer =3D context.createProducerTemplate(); // send the body and the filename defined with the special header k= ey=20 Object mailBody =3D producer.sendBodyAndHeader("direct:start", para= meters, FileComponent.HEADER_FILE_NAME, "incident.txt"); System.out.println("Body:" + mailBody); // return an OK reply OutputReportIncident out =3D new OutputReportIncident(); out.setCode("OK"); return out; }=20
However we could also have used the route builder itself to configure th= e constant filename as shown below:=20
public void configure() throws Exception { from("direct:start") .to("velocity:MailBody.vm") // set the filename to a constant before the file producer rece= ives the message .setHeader(FileComponent.HEADER_FILE_NAME, constant("incident.t= xt")) .to(""); }=20
But Camel can be smarter and we want to dynamic set the filename based o=
n some of the input parameters, how can we do this?
Well the obvious so= lution is to compute and set the filename from the webservice implementatio= n, but then the webservice implementation has such logic and we want this d= ec.=20
/** * Plain java class to be used for filename generation based on the reporte= d incident */ public class FilenameGenerator { public String generateFilename(InputReportIncident input) { // compute the filename return "incident-" + input.getIncidentId() + ".txt"; } }=20
The class is very simple and we could easily create unit tests for it to= verify that it works as expected. So what we want now is to let Camel invo= ke this class and its generateFilename with the input parameters and use th= e output as the filename. Pheeeww is this really possible out-of-the-box in= Camel? Yes it is. So lets get on with the show. We have the code that comp= utes the filename, we just need to call it from our route using the Bean Language:=20
public void configure() throws Exception { from("direct:start") // set the filename using the bean language and call the Filena= me(Fi= lenameGenerator.class, null)) .to("velocity:MailBody.vm") .to(""); }=20
Notice that we use the bean language where we supply th= e class with our bean to invoke. Camel will instantiate an instance of the = class and invoke the suited method. For completeness and ease of code reada= bility we add the method name as the 2nd parameter=20
.setHeader(FileComponent.HEADER_FILE_NAME, BeanLanguage.bean(Fi= lenameGenerator.class, "generateFilename"))=20
Then other developers can understand what the parameter is, instead of <= code>null.=20
Now we have a nice solution, but as a sidetrack I want to demonstrate th= e Camel has other languages out-of-the-box, and that scripting language is = a first class citizen in Camel where it etc. can be used in content based r= outing. However we want it to be used for the filename generation.=20
We could do as in the previous parts where we send the computed filename= as a message header when we "kick-start" the route. But we want to learn n= ew stuff so we look for a different solution using some of Camels many Languages. As OGNL is a favorite language of mine (used= by WebWork) so we pick this baby for a Camel ride. For starters we must ad= d it to our pom.xml:=20
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ognl</artifactId> <version>${camel-version}</version> </dependency>=20
And remember to refresh your editor so you got the new .jars.
We wa= nt to construct the filename based on this syntax:
mail-incident-#ID#=
.txt where #ID# is the incident id from the input parameters. As OGNL is a language that can inv=
oke methods on bean we can invoke the
getIncidentId() on the m=
essage body and then concat it with the fixed pre and postfix strings.
In OGNL glory this is don= e as:=20
"'mail-incident-' + request.body.incidentId + '.txt'"=20
where
request.body.incidentId computes to:
getIncidentId()method on the body.
Now we got the expression to dynamic compute the filename on the fly we = need to set it on our route so we turn back to our route, where we can add = the OGNL expression:=20", ""); }=20
And since we are on Java 1.5 we can use the static import of ogn= l so we have:=20
import static org.apache.camel.language.ognl.OgnlExpression.ognl; ... .setHeader(FileComponent.HEADER_FILE_NAME, ognl("'mail-incident-' + req= uest.body.incidentId + '.txt'"))=20
Notice the import static also applies for all the other languages, such = as the Bean Language we used previously.
Whatever worked for you we have now implemented the backup of the data f=
iles:
What we need to do before the solution is completed is to actually send = the email with the mail body we generated and stored as a file. In the prev= ious part we did this with a Fi= le consumer, that we manually added to the CamelContext. We can do this= quite easily with the routing.=20at= or.class, "generateFilename")) .to("velocity:MailBody.vm") .to(""); // second part from the file backup -> send email from("") // set the subject of the email .setHeader("subject", constant("new incident reported")) // send the email .to("smtp://someone@localhost?password=3Dsecret&to=3Dincide= nt@mycompany.com"); } }=20
The last 3 lines of code does all this. It adds a file consumer = from(""), sets the mail subject, and finall= y send it as an email.=20
The DSL is really powerful where you can express your routing integratio=
n logic.
So we completed the last piece in the picture puzzle with just= 3 lines of code.
We have now completed the integration:
We have just briefly touched the routing in Camel and s= hown how to implement them using the fluent builder syntax= in Java. There is much more to the routing in Camel than shown here, but w= e are learning step by step. We continue in part 5. See you there.=20 | https://cwiki.apache.org/confluence/exportword?pageId=93043 | CC-MAIN-2019-18 | en | refinedweb |
Table of Contents
One of the most frequently asked questions that any Java developer needs to have the answer to is to how to write a prime number program in Java. It is one of the basic concepts concerning the leading high-level, general purpose programming language.
There are several ways of writing a program in Java that checks whether a number is prime on not. However, the basic logic remains the same i.e. you need to check whether the entered number (or already defined in the program) has some divisor other than 1 and itself or not.
The prime number program is an indispensable part of learning Java. Hence, most of the great books on Java covers it. Before moving forward to discuss the prime number program in Java, let’s first understand the concept of prime numbers and their importance.
Prime Numbers – The Definition and Importance
Any number that is only divisible by 1 other than itself is known as a primary number. 3, 5, 23, 47, 241, 1009 are all examples of prime numbers. While 0 and 1 can’t qualify for being a prime number, 2 is the only even prime number in the entire infinitely long set of prime numbers.
Prime numbers exhibit a number of odd mathematical properties that make them desirable for a wide variety of applications, many of which belongs to the world of information technology. For example, primes find use in pseudorandom number generators and computer hash tables.
There are several instances in the history of using encryption for hiding information in plain sight. Amazingly, it is the process of using prime numbers to encode information.
With the introduction of computers, modern cryptography was introduced. It became feasible to generate complex and longer codes that were much, much difficult to crack.
Most of the modern computer cryptography is dependent on making use of prime factors of large numbers. As prime numbers are the building blocks of whole numbers, they are of the highest importance to number theorists as well.
Check out more about the importance of prime number in IT security.
Prime Number Program in Java
As already mentioned, there are several ways of implementing a prime number program in Java. In this section, we’ll look at three separate ways of doing so as well as 2 additional programs for printing primes.
Type 1 – A Simple Program With No Provision for Input
This is one of the simplest ways of implementing a program for checking whether a number is a prime number or not in Java. It doesn’t require any input and simply tells whether the defined number (by the integer variable n) is a prime number or not. Here goes the code:
public class PrimeCheck{ public static void main(String args[]){ int i,m=0,flag=0; int n."); } } } }
Output:
3 is a prime number.
Type 2 – A Program in Java Using Method (No User Input Required)
This Java code demonstrates implementation of a prime number program that uses a method. Like the program mentioned before, it doesn’t ask for any user input and works only on the numbers entered to the defined method (named checkPrime) in the program. Here is the code:
public class PrimeCheckUsingMethod{ static void checkPrime(int n){ int i,m=0,flag."); } } } public static void main(String args[]){ checkPrime(1); checkPrime(3); checkPrime(17); checkPrime(20); } }
Output:
1 is not a prime number.
3 is a prime number.
17 is a prime number.
20 is not a prime number.
Type 3 – A Program in Java Using Method (Requires User Input)
This Java program is similar to the aforementioned program. However, this program prompts for user input. Here goes the code:
import java.util.Scanner; import java.util.Scanner; public class PrimeCheckUsingMethod2 { public static void main(String[] args) { Scanner s = new Scanner(System.in); System.out.print("Enter a number: "); int n = s.nextInt(); if (isPrime(n)) { System.out.println(n + " is a prime number."); } else { System.out.println(n + " is not a prime number."); } } public static boolean isPrime(int n) { if (n <= 1) { return false; } for (int i = 2; i < Math.sqrt(n); i++) { if (n % i == 0) { return false; } } return true; } } )
Sample Output:
Enter a number: 22
22 is not a prime number.
[Bonus Program] Type 4 – A Program in Java to Print Prime Numbers from 1 to 100
This code will demonstrate a Java program capable of printing all the prime numbers existing between 1 and 100. The code for the program is:
class PrimeNumbers { public static void main (String[] args) { int i =0; int num =0; String primeNumbers = ""; for (i = 1; i <= 100; i++) { int counter=0; for(num =i; num>=1; num--) { if(i%num==0) { counter = counter + 1; } } if (counter ==2) { primeNumbers = primeNumbers + i + " "; } } System.out.println("Prime numbers between 1 and 100 are :"\n); System.out.println(primeNumbers); } }
Output:
Prime numbers between 1 and 100 are:
2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97
[Bonus Program] Type 5 – A Program in Java to Print Prime Numbers from 1 to n (User Input)
This Java program prints all the prime numbers existing between 1 and n, where n is the number entered by the user. Here is the code:
import java.util.Scanner; class PrimeNumbers2 { public static void main (String[] args) { Scanner scanner = new Scanner(System.in); int i =0; int num =0; String primeNumbers = ""; System.out.println("Enter a number:"); int n = scanner.nextInt(); scanner.close(); for (i = 1; i <= n; i++) { int counter=0; for(num =i; num>=1; num--) { if(i%num==0) { counter = counter + 1; } } if (counter ==2) { primeNumbers = primeNumbers + i + " "; } } System.out.println("Prime numbers between 1 and n are:"/n); System.out.println(primeNumbers); } }
Sample Output:
Enter a number: 22
Prime numbers between 1 and 22 are:
2 3 5 7 11 13 17 19
It’s Done!
That was all about the prime number program in Java. No matter at what skill level a Java developer is, it is very important to be able to write a program concerning prime numbers, at least for checking whether a given number (or set of numbers) is a prime or not.
Continued learning is very important for advancing in coding. A program that you are able to write now might be better when you write it after gaining new knowledge. If you’re looking to enhance your Java skill further, consider checking out some of the best Java tutorials.
Do you know some fascinating way of implementing a prime number program in Java? Care to share with us? Then you can do so by the comment window below. Thanks in advance. | https://hackr.io/blog/prime-number-program-in-java | CC-MAIN-2019-18 | en | refinedweb |
IPython magic functions
One of the cool features of IPython are magic functions - helper functions built into IPython. They can help you easily start an interactive debugger, create a macro, run a statement through a code profiler or measure its’ execution time and do many more common things.
Don't mistake IPython magic functions with Python magic functions (functions with leading and trailing double underscore, for example
__init__ or
__eq__) - those are completely different things! In this and next parts of the article, whenever you see a magic function - it's an IPython magic function.
Moreover, you can create your own magic functions. There are 2 different types of magic functions.
The first type - called line magics - are prefixed with
% and work like a command typed in your terminal. You start with the name of the function and then pass some arguments, for example:
In [1]: %timeit range(1000) 255 ns ± 10.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
My favorite one is the %debug function. Imagine you run some code and it throws an exception. But given you weren’t prepared for the exception, you didn’t run it through a debugger. Now, to be able to debug it, you would usually have to go back, put some breakpoints and rerun the same code. Fortunately, if you are using IPython there is a better way! You can run
%debug right after the exception happened and IPython will start an interactive debugger for that exception. It’s called post-mortem debugging and I absolutely love it!
The second type of magic functions are cell magics and they work on a block of code, not on a single line. They are prefixed with
%%. To close a block of code, when you are inside a cell magic function, hit
Enter twice. Here is an example of
timeit function working on a block of code:
In [2]: %%timeit elements = range(1000) ...: x = min(elements) ...: y = max(elements) ...: ...: 52.8 µs ± 4.37 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Both the line magic and the cell magic can be created by simply decorating a Python function. Another way is to write a class that inherits from the
IPython.core.magic.Magics. I will cover this second method in a different article.
Creating line magic function
That’s all the theory. Now, let’s write our first magic function. We will start with a
line magic and in the second part of this tutorial, we will make a
cell magic.
What kind of magic function are we going to create? Well, let’s make something useful. I’m from Poland and in Poland we are use Polish notation for writing down mathematical operations. So instead of writing
2 + 3, we write
+ 2 3. And instead of writing
(5 − 6) * 7 we write
* − 5 6 71.
Let’s write a simple Polish notation interpreter. It will take an expression in Polish notation as input, and output the correct answer. To keep this example short, I will limit it to only the basic arithmetic operations:
+,
-,
*, and
/.
Here is the code that interprets the Polish notation:
def interpret(tokens): token = tokens.popleft() if token == "+": return interpret(tokens) + interpret(tokens) elif token == "-": return interpret(tokens) - interpret(tokens) elif token == "*": return interpret(tokens) * interpret(tokens) elif token == "/": return interpret(tokens) / interpret(tokens) else: return int(token)
Next, we will create a
%pn magic function that will use the above code to interpret Polish notation.
from collections import deque from IPython.core.magic import register_line_magic @register_line_magic def pn(line): """Polish Notation interpreter Usage: >>> %pn + 2 2 4 """ return interpret(deque(line.split()))
And that’s it. The
@register_line_magic decorator turns our
pn function into a
%pn magic function. The
line parameter contains whatever is passed to the magic function. If we call it in the following way:
%pn + 2 2,
line will contain
+ 2 2.
To make sure that IPython loads our magic function on startup, copy all the code that we just wrote (you can find the whole file on GitHub) to a file inside IPython startup directory. You can read more about this directory in the IPython startup files post. In my case, I’m saving it in a file called:
~/.ipython/profile_default/startup/magic_functions.py
(name of the file doesn’t matter, but the directory where you put it is important).
Ok, it’s time to test it. Start IPython and let’s do some Polish math:
In [1]: %pn + 2 2 Out[1]: 4 In [2]: %pn * - 5 6 7 Out[2]: -7 In [3]: %pn * + 5 6 + 7 8 Out[3]: 165
Perfect, it works! Of course, it’s quite rudimentary - it only supports 4 operators, it doesn’t handle exceptions very well, and given that it’s using recursion, it might fail for very long expressions. Also, the
queue module and the
interpret function will now be available in your IPython sessions, since whatever code you put in the
magic_function.py file will be run on IPython startup.
But, you just wrote your first magic function! And it wasn’t so difficult!
At this point, you are probably wondering - Why didn’t we just write a standard Python function? That’s a good question - in this case, we could simply run the following code:
In [1]: pn('+ 2 2') Out[1]: 4
or even:
In [1]: interpret(deque('+ 2 2'.split())) Out[1]: 4
As I said in the beginning, magic functions are usually helper functions. Their main advantage is that when someone sees functions with the
% prefix, it’s clear that it’s a magic function from IPython, not a function defined somewhere in the code or a built-in. Also, there is no risk that their names collide with functions from Python modules.
Conclusion
I hope you enjoyed this short tutorial and if you have questions or if you have a cool magic function that you would like to share - drop me an email or ping me on Twitter!
Stay tuned for the next parts. We still need to cover the cell magic functions, line AND cell magic functions and Magic classes.
Footnotes | https://switowski.com/python/ipython/2019/02/01/creating-magic-functions-part1.html | CC-MAIN-2019-18 | en | refinedweb |
IDependencyRegistrar Interface
[namespace: Serenity.Abstractions, assembly: Serenity.Core]
Dependency resolvers should implement this interface (IDependencyRegistrar) to register dependencies:
public interface IDependencyRegistrar { object RegisterInstance<TType>(TType instance) where TType : class; object RegisterInstance<TType>(string name, TType instance) where TType : class; object Register<TType, TImpl>() where TType : class where TImpl : class, TType; object Register<TType, TImpl>(string name) where TType : class where TImpl : class, TType; void Remove(object registration); }
MunqContainer and other IoC containers are also dependency registrars (they implement IDependencyRegistrar interface), so you just have to query for it:
var registrar = Dependency.Resolve<IDependencyRegistrar>(); registrar.RegisterInstance<ILocalTextRegistry>(new LocalTextRegistry()); registrar.RegisterInstance<IAuthenticationService>(...)
IDependencyRegistrar.RegisterInstance Method
Registers a singleton instance of a type (TType, usually an interface) as provider of that type.
object RegisterInstance<TType>(TType instance) where TType : class;
When you register an object instance with this overload, whenever an implementation of
TType is requested, the instance that you registered is returned from dependency resolver. This is similar to Singleton Pattern.
var registrar = Dependency.Resolve<IDependencyRegistrar>(); registrar.RegisterInstance<ILocalTextRegistry>(new LocalTextRegistry());
If there was already a registration for TType, it is overridden.
This overload is the most used method of registering dependencies.
Make sure the provider which you registered is thread safe, as all threads will be using your instance at same time!
RegisterInstance has a less commonly used overload with a name parameter:
object RegisterInstance<TType>(string name, TType instance) where TType : class;
Using this overload, you can register different providers for the same interface, differentiated by a string key. register a IConfigurationRepository provider for each of these scopes, you would call the method like:
var registrar = Dependency.Resolve<IDependencyRegistrar>(); registrar.RegisterInstance<IConfigurationRepository>( "Application", new MyApplicationConfigurationRepository()); registrar.RegisterInstance<IConfigurationRepository>( "Server", new MyServerConfigurationRepository());
And when querying for these dependencies:
var appConfig = Dependency.Resolve<IConfigurationRepository>("Application"); // ... var srvConfig = Dependency.Resolve<IConfigurationRepository>("Server"); // ...
IDependencyRegistrar.Register Method
Unlike RegisterInstance, when a type is registered with this method, every time a provider for that type is requested, a new instance will be returned (so each requestor gets a unique instance).
var registrar = Dependency.Resolve<IDependencyRegistrar>(); registrar.Register<ILocalTextRegistry, LocalTextRegistry>();
IDependencyRegistrar.Remove Method
All registration methods of IDependencyRegistrar interface returns an object which you can later use to remove that registration.
Avoid using this method in ordinary applications as all registrations should be done from a central location and once per lifetime of the application. But this can be useful for unit test purposes.
var registrar = Dependency.Resolve<IDependencyRegistrar>(); var registration = registrar.Register<ILocalTextRegistry, LocalTextRegistry>(); //... registrar.Remove(registration);
This is not an undo operation. If you register type C for interface A, while type B was already registered for the same interface, prior registration is overriden and lost. You can't get back to prior state by removing registration of C. | https://volkanceylan.gitbooks.io/serenity-guide/service_locator/idependencyregistrar_interface.html | CC-MAIN-2019-18 | en | refinedweb |
Dynamic Model Introduction
Dynamic models are essential for understanding the system dynamics in open-loop (manual mode) or for closed-loop (automatic) control. These models are either derived from data (empirical) or from more fundamental relationships (first principles, physics-based) that rely on knowledge of the process. A combination of the two approaches is often used in practice where the form of the equations are developed from fundamental balance equations and unknown or uncertain parameters are adjusted to fit process data.
In engineering, there are 4 common balance equations from conservation principles including mass, momentum, energy, and species (see Balance Equations). An alternative to physics-based models is to use input-output data to develop empirical dynamic models such as first-order or second-order systems.
Steps in Dynamic Modeling
The following are general guidelines for developing a dynamic model. The process is iterative as simulation results help inform modeling assumptions or correct errors in the dynamic balance equations.
- Identify objective for the simulation
- Draw a schematic diagram, labeling process variables
- List all assumptions
- Determine spatial dependence
- yes = Partial Differential Equation (PDE)
- no = Ordinary Differential Equation (ODE)
- Write dynamic balances (mass, species, energy)
- Other relations (thermo, reactions, geometry, etc.)
- Degrees of freedom, does number of equations = number of unknowns?
- Classify inputs as
- Fixed values
- Disturbances
- Manipulated variables
- Classify outputs as
- States
- Controlled variables
- Simplify balance equations based on assumptions
- Simulate steady state conditions (if possible)
- Simulate the output with an input step
A Beginning Example: Filling a Water Tank
Consider a cylindrical tank with no outlet flow and an adjustable inlet flow. The inlet flow rate is not measured but there is a level measurement that shows how much fluid has been added to the tank. The objective of this exercise is to develop a model that can maintain a certain water level by automatically adjusting the inlet flow rate. See the subsequent section on P-only control for the tank level controller design.
Diagram of a tank with an inlet and no outlet. The symbol LT is an abbreviation for Level Transmitter.
A first step is to develop a dynamic model of how the inlet flow rate affects the level in the tank. A starting point for this model is a balance equation.
$$\frac{dm}{dt} = \dot m_{in} - \dot m_{out}$$
The accumulation term is a differential variable such as dm/dt for mass. In this case, the accumulation of mass is equal to only an inlet flow and no outlet, generation, or consumption terms.
Assumptions
The next objective is to simplify the expression and transform it into a relationship between height h and the valve opening u (0-100%). For liquid water, density is nearly constant even over wide temperature ranges and the mass is equal to the density multiplied by the volume V. Assuming a constant cross-sectional area gives V = h A and a linear correlation between valve opening and inlet flow gives the following relationship.
$$ \rho \; A \; \frac{dh}{dt} = c \; u \quad \mathrm{with} \quad \dot m_{in} = c \; u$$
where c is a constant that relates valve opening to inlet flow.
Problem
Simulate the height of the tank by integrating the mass balance equation for a period of 10 seconds. The valve opens to 100% at time=2 and shuts at time=7. Use a value of 1000 kg/m3 for density and 1.0 m2 for the cross-sectional area of the tank. For the valve, assume a valve coefficient of c=50.0 (kg/s / %open).
Solution
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# define tank model
def tank(Level,time,c,valve):
rho = 1000.0 # water density (kg/m^3)
A = 1.0 # tank area (m^2)
# calculate derivative of the Level
dLevel_dt = (c/(rho*A)) * valve
return dLevel_dt
# time span for the simulation for 10 sec, every 0.1 sec
ts = np.linspace(0,10,101)
# valve operation
c = 50.0 # valve coefficient (kg/s / %open)
u = np.zeros(101) # u = valve % open
u[21:70] = 100.0 # open valve between 2 and 7 seconds
# level initial condition
Level0 = 0
# for storing the results
z = np.zeros(101)
# simulate with ODEINT
for i in range(100):
valve = u[i+1]
y = odeint(tank,Level0,[0,0.1],args=(c,valve))
Level0 = y[-1] # take the last point
z[i+1] = Level0 # store the level for plotting
# plot results
plt.figure()
plt.subplot(2,1,1)
plt.plot(ts,z,'b-',linewidth=3)
plt.ylabel('Tank Level')
plt.subplot(2,1,2)
plt.plot(ts,u,'r--',linewidth=3)
plt.ylabel('Valve')
plt.xlabel('Time (sec)')
plt.show() | http://apmonitor.com/pdc/index.php/Main/DynamicModeling | CC-MAIN-2019-18 | en | refinedweb |
Linearization of Differential Equations
Linearization is the process of taking the gradient of a nonlinear function with respect to all variables and creating a linear representation at that point. It is required for certain types of analysis such as stability analysis, solution with a Laplace transform, and to put the model into linear state-space form. Consider a nonlinear differential equation model that is derived from balance equations with input u and output y.
$$\frac{dy}{dt} = f(y,u)$$
The right hand side of the equation is linearized by a Taylor series expansion, using only the first two terms.
$$\frac{dy}{dt} = f(y,u) \approx f \left(\bar y, \bar u\right) + \frac{\partial f}{\partial y}\bigg|_{\bar y,\bar u} \left(y-\bar y\right) + \frac{\partial f}{\partial u}\bigg|_{\bar y,\bar u} \left(u-\bar u\right)$$
If the values of `\bar u` and `\bar y` are chosen at steady state conditions then `f(\bar y, \bar u)=0` because the derivative term `{dy}/{du}=0` at steady state. To simplify the final linearized expression, deviation variables are defined as `y' = y-\bar y` and `u' = u - \bar u`. A deviation variable is a change from the nominal steady state conditions. The derivatives of the deviation variable is defined as `{dy'}/{dt} = {dy}/{dt}` because `{d\bar y}/{dt} = 0` in `{dy'}/{dt} = {d(y-\bar y)}/{dt} = {dy}/{dt} - \cancel{{d\bar y}/{dt}}`. If there are additional variables such as a disturbance variable `d` then it is added as another term in deviation variable form `d' = d - \bar d`.
$$\frac{dy'}{dt} = \alpha y' + \beta u' + \gamma d'$$
The values of the constants `\alpha`, `\beta`, and `\gamma` are the partial derivatives of `f(y,u,d)` evaluated at steady state conditions.
$$\alpha = \frac{\partial f}{\partial y}\bigg|_{\bar y,\bar u,\bar d} \quad \quad \beta = \frac{\partial f}{\partial u}\bigg|_{\bar y,\bar u,\bar d} \quad \quad \gamma = \frac{\partial f}{\partial d}\bigg|_{\bar y,\bar u,\bar d}$$
Example
Part A: Linearize the following differential equation with an input value of u=16.
$$\frac{dx}{dt} = -x^2 + \sqrt{u}$$
Part B: Determine the steady state value of x from the input value and simplify the linearized differential equation.
Part C: Simulate a doublet test with the nonlinear and linear models and comment on the suitability of the linear model to represent the original nonlinear equation solution.
Part A Solution: The equation is linearized by taking the partial derivative of the right hand side of the equation for both x and u.
$$\frac{\partial \left(-x^2 + \sqrt{u}\right)}{\partial x} = \alpha = -2 \, x$$
$$\frac{\partial \left(-x^2 + \sqrt{u}\right)}{\partial u} = \beta = \frac{1}{2} \frac{1}{\sqrt{u}}$$
The linearized differential equation that approximates `\frac{dx}{dt}=f(x,u)` is the following:
$$\frac{dx}{dt} = f \left(x_{ss}, u_{ss}\right) + \frac{\partial f}{\partial x}\bigg|_{x_{ss},u_{ss}} \left(x-x_{ss}\right) + \frac{\partial f}{\partial u}\bigg|_{x_{ss},u_{ss}} \left(u-u_{ss}\right)$$
Substituting in the partial derivatives results in the following differential equation:
$$\frac{dx}{dt} = 0 + \left(-2 x_{ss}\right) \left(x-x_{ss}\right) + \left(\frac{1}{2} \frac{1}{\sqrt{u_{ss}}}\right) \left(u-u_{ss}\right)$$
This is further simplified by defining new deviation variables as `x' = x - x_{ss}` and `u' = u - u_{ss}`.
$$\frac{dx'}{dt} = \alpha x' + \beta u'$$
Part B Solution: The steady state values are determined by setting `\frac{dx}{dt}=0` and solving for x.
$$0 = -x_{ss}^2 + \sqrt{u_{ss}}$$
$$x_{ss}^2 = \sqrt{16}$$
$$x_{ss} = 2$$
At steady state conditions, `frac{dx}{dt}=0` so `f (x_{ss}, u_{ss})=0` as well. Plugging in numeric values gives the simplified linear differential equation:
$$\frac{dx}{dt} = -4 \left(x-2\right) + \frac{1}{8} \left(u-16\right)$$
The partial derivatives can also be obtained from Python, either symbolically with SymPy or else numerically with SciPy.
import sympy as sp
sp.init_printing()
# define symbols
x,u = sp.symbols(['x','u'])
# define equation
dxdt = -x**2 + sp.sqrt(u)
print(sp.diff(dxdt,x))
print(sp.diff(dxdt,u))
# numeric solution with Python
import numpy as np
from scipy.misc import derivative
u = 16.0
x = 2.0
def pd_x(x):
dxdt = -x**2 + np.sqrt(u)
return dxdt
def pd_u(u):
dxdt = -x**2 + np.sqrt(u)
return dxdt
print('Approximate Partial Derivatives')
print(derivative(pd_x,x,dx=1e-4))
print(derivative(pd_u,u,dx=1e-4))
print('Exact Partial Derivatives')
print(-2.0*x) # exact d(f(x,u))/dx
print(0.5 / np.sqrt(u)) # exact d(f(x,u))/du
Python program results
-2*x 1/(2*sqrt(u)) Approximate Partial Derivatives -4.0 0.125 Exact Partial Derivatives -4.0 0.125
The nonlinear function for `\frac{dx}{dt}` can also be visualized with a 3D contour map. The choice of steady state conditions `x_{ss}` and `u_{ss}` produces a planar linear model that represents the nonlinear model only at a certain point. The linear model can deviate from the nonlinear model if used further away from the conditions at which the linear model is derived.
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
fig = plt.figure()
ax = fig.gca(projection='3d')
# Make data.
X = np.arange(0, 4, 0.25)
U = np.arange(0, 20, 0.25)
X, U = np.meshgrid(X, U)
DXDT = -X**2 + np.sqrt(U)
LIN = -4.0 * (X-2.0) + 1.0/8.0 * (U-16.0)
# Plot the surface.
surf = ax.plot_wireframe(X, U, LIN)
surf = ax.plot_surface(X, U, DXDT, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
# Customize the z axis.
ax.set_zlim(-10.0, 5.0)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
# Add a color bar which maps values to colors.
fig.colorbar(surf, shrink=0.5, aspect=5)
# Add labels
plt.xlabel('x')
plt.ylabel('u')
plt.show()
Part C Solution: The final step is to simulate a doublet test with the nonlinear and linear models.
Small step changes (+/-1): Small step changes in u lead to nearly identical responses for the linear and nonlinear solutions. The linearized model is locally accurate.
Large step changes (+/-8): As the magnitude of the doublet steps increase, the linear model deviates further from the original nonlinear equation solution.
from scipy.integrate import odeint
import matplotlib.pyplot as plt
# function that returns dz/dt
def model(z,t,u):
x1 = z[0]
x2 = z[1]
dx1dt = -x1**2 + np.sqrt(u)
dx2dt = -4.0*(x2-2.0) + (1.0/8.0)*(u-16.0)
dzdt = [dx1dt,dx2dt]
return dzdt
# steady state conditions
x_ss = 2.0
u_ss = 16.0
# initial condition
z0 = [x_ss,x_ss]
# final time
tf = 10
# number of time points
n = tf * 10 + 1
# time points
t = np.linspace(0,tf,n)
# step input
u = np.ones(n) * u_ss
# magnitude of step
m = 8.0
# change up m at time = 1.0
u[11:] = u[11:] + m
# change down 2*m at time = 4.0
u[41:] = u[41:] - 2.0 * m
# change up m at time = 7.0
u[71:] = u[71:] + m
# store solution
x1 = np.empty_like(t)
x2 = np.empty_like(t)
# record initial conditions
x1[0] = z0[0]
x2[0] = z0[1]
# solve ODE
for i in range(1,n):
# span for next time step
tspan = [t[i-1],t[i]]
# solve for next step
z = odeint(model,z0,tspan,args=(u[i],))
# store solution for plotting
x1[i] = z[1][0]
x2[i] = z[1][1]
# next initial condition
z0 = z[1]
# plot results
plt.figure(1)
plt.subplot(2,1,1)
plt.plot(t,u,'g-',linewidth=3,label='u(t) Doublet Test')
plt.grid()
plt.legend(loc='best')
plt.subplot(2,1,2)
plt.plot(t,x1,'b-',linewidth=3,label='x(t) Nonlinear')
plt.plot(t,x2,'r--',linewidth=3,label='x(t) Linear')
plt.xlabel('time')
plt.grid()
plt.legend(loc='best')
plt.show()
Assignment
See Linearization Exercises | http://apmonitor.com/pdc/index.php/Main/ModelLinearization | CC-MAIN-2019-18 | en | refinedweb |
You can pass a JavaBean that has a property that is a Vector of another
JavaBean, but you must supply a mapping for the other class. Basically, any
type you want to serialize that is not a Java primitive (e.g. float, int),
wrapper (e.g. Float, Integer) or one of the few others with built-in
serializer mappings (e.g. Date), you have to provide a serializer mapping.
For working with interfaces, you would typically write your own serializer,
specifically because the deserializer must create an instance of a concrete
class. This is done for collection interfaces. Serializing interfaces, on
the other hand, is no different than serializing any class. If you have an
interface that follows the JavaBean pattern, for example, I think you could
serialize it using the BeanSerializer.
Scott Nichol
----- Original Message -----
From: "Alexandros Panaretos" <apanar@essex.ac.uk>
To: "'Scott Nichol'" <snicholnews@scottnichol.com>;
<soap-dev@xml.apache.org>
Cc: <apanar@essex.ac.uk>
Sent: Tuesday, September 10, 2002 5:27 AM
Subject: RE: SOAP Serialization problem
Hi Scott,
Thanks for your e-mail. The problems I had before I managed to solve. I
discovered that when you are passing JavaBeans through SOAP using the
Bean Serializer despite the setters & getters you should and the empty
constructor if you have any other methods that start with a get or set
you should provide an equivalent set or get (empty) in order to work.
Anyway, I have another problem now: Is it possible to pass a JavaBean
through SOAP that one of its properties is Vector of another user
defined objects which are JavaBeans as well ?
Moreover I would like to ask if you can pass an Interface as a parameter
of a SOAP method and then inside the SOAP Client specify the
implementation class of the interface you want the Serializer and
Deserializer to use.
Thank you in advance.
Alex
-----Original Message-----
From: Scott Nichol [mailto:snicholnews@scottnichol.com]
Sent: Monday, September 09, 2002 9:08 PM
To: soap-dev@xml.apache.org
Cc: apanar@essex.ac.uk
Subject: Re: SOAP Serialization problem
Can you describe the problems you are having? Assuming you have getters
and setters for all properties you wish to serialize, and the properties
have data types that have serializers, BeanSerializer should work. As
you have experienced, there are no serializers for URL or
DataInputStream.
Scott Nichol
----- Original Message -----
From: "Alexandros Panaretos" <apanar@essex.ac.uk>
To: <soap-dev@xml.apache.org>
Sent: Monday, September 02, 2002 11:03 AM
Subject: SOAP Serialization problem
Hi there,
I am quite new to SOAP and would like some help with the following if
possible:
Well, I am trying to pass my user defined object through SOAP. Now, I
have read various tutorials about passing user defined objects through
SOAP. From what I have understood there two ways of doing so:
A. Code your object class as a JavaBean and the use the provided
beanSerializer or B. Create your own custom serializer & deserializer.
I am trying to do the first and I have a lot problems serializing the
following Java class:
<--------------- Start of Java Code ---------------->
import java.io.InputStream;
import java.io.DataInputStream;
import java.net.URL;
import java.util.Vector;
public class MNistDataset implements ImageDataset {
//InputStream is;
//DataInputStream dis;
//URL labelSetUrl;
//URL imageSetUrl;
String labelSetUrl;
String imageSetUrl;
byte[] labels;
byte[] buf;
Vector images;
int maxPatterns = 1000000;
int nImages;
int nRows;
int nCols;
int nBytes;
int nClasses;
public MNistDataset(){
}
public MNistDataset(String labelSetUrl, String imageSetUrl) throws
Exception {
this.labelSetUrl = labelSetUrl;
this.imageSetUrl = imageSetUrl;
readLabelSet(labelSetUrl);
readImageSet(imageSetUrl);
}
public MNistDataset(String labelSetUrl, String imageSetUrl, int
maxPatterns) throws Exception {
this.labelSetUrl = labelSetUrl;
this.imageSetUrl = imageSetUrl;
this.maxPatterns = maxPatterns;
readLabelSet(labelSetUrl);
readImageSet(imageSetUrl);
}
public void readImageSet(String imageSetUrl) throws Exception {
InputStream is;
DataInputStream dis;
is = new URL(imageSetUrl).openStream();
dis = new DataInputStream(is);
int magic = dis.readInt();
//System.out.println("magic number is equal: " + magic);
int n = dis.readInt();
n = Math.min( maxPatterns , n );
if (n != nImages) {
throw new Exception(nImages + " <> " + n);
}
nRows = dis.readInt();
nCols = dis.readInt();
images = new Vector();
for (int i = 0; i < nImages; i++) {
images.addElement(readImage(dis, nImages, nRows, nCols));
}
}
public byte[][] readImage(DataInputStream dis, int nImages, int
nRows, int nCols) throws Exception {
byte[][] imageByteArray = new byte[nRows][nCols];
nBytes = nRows * nCols;
buf = new byte[nBytes];
dis.read(buf, 0, nBytes);
for (int x = 0; x < nRows; x++) {
for (int y = 0; y < nCols; y++) {
imageByteArray[x][y] = buf[ x + y * nRows];
}
}
return imageByteArray;
}
public byte[][] getImage(int i) {
return (byte[][]) images.elementAt(i);
}
public Object getPattern(int i) {
return images.elementAt(i);
}
public int getClass(int i) {
return labels[i];
}
public int nClasses() {
return nClasses + 1;
}
public int nPatterns() {
return nImages;
}
public ClassifiedDataset emptyCopy() {
// buggy - but probably not used!
return this;
}
/**
I have ommitted the setters and getters of the class
variables in order to keep it short
**/
}
<--------------- End of Java Code ---------------->
I already had a problem with the URLs and DataInputStream and
DataOutputStream variables so I declared within the methods rather than
global and I am not sure but I think SOAP can not handle the byte arrays
as properties of the JavaBean. Do you think there is a way of passing
this object through SOAP or is very complex and it can not serialize it?
Your help would be very much appreciated because this has been very
frustrating for me.
Thank you very much in advance for patience in reading this post and
your help.
Alexandros Panaretos
--
To unsubscribe, e-mail: <mailto:soap-dev-unsubscribe@xml.apache.org>
For additional commands, e-mail: <mailto:soap-dev-help@xml.apache.org> | http://mail-archives.us.apache.org/mod_mbox/ws-soap-dev/200209.mbox/%3C00e801c258e9$0c04f170$c900a8c0@fastdata%3E | CC-MAIN-2019-18 | en | refinedweb |
Hide Forgot
Hi there!
Description of problem:
There may be some problem with the backwards compatiblity between g++-4.1 (Red
Hat 4.1.0-18) and libg++ 3.4.3. When using -fstrict-aliasing optimization flag,
i get warnings about broken strict-aliasing rules. The warnings do not apper
when using g++ 3.4.6.
Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux AS release 4 (Nahant Update 4)
g++4 (GCC) 4.1.0 20060515 (Red Hat 4.1.0-18)
How reproducible:
always, just compile the following code on an x86_64 machine
#include <string>
int main() {
std::string sausage("huhu");
}
using g++4 -Wall -fstrict-aliasing -c test.cc
then leave -fstrict-aliasing or use g++
should I take that warning message serious?
Thank you very much!
Best regards
ratko
The -Wstrict-aliasing warning support was added only in 4.1+ and libstdc++-v3
4.1+ headers have been tweaked to shut the warning in this case down by
additional cast through void *, but as in RHEL4 libstdc++-v3 3.4.6 is used, you
get that harmless warning. The code is believed to be ok, because
_S_empty_rep_storage is never written into at runtime (only initialized to 0 by
the linker). | https://bugzilla.redhat.com/show_bug.cgi?id=230701 | CC-MAIN-2019-18 | en | refinedweb |
Two ways…
OneUndo decorater by Cesar Saez
from functools import wraps def OneUndo(function): @wraps(function) def _inner(*args, **kwargs): try: Application.BeginUndo() f = function(*args, **kwargs) finally: Application.EndUndo() return f return _inner # # And a basic example... # @OneUndo def CreateNulls(p_iCounter=100): lNulls = [] for i in range(p_iCounter): lNulls.append( Application.ActiveSceneRoot.AddNull() ) return lNulls CreateNulls()
Undo with statement by ethivierge
from win32com.client import constants as c from win32com.client.dynamic import Dispatch as d xsi = Application log = xsi.LogMessage collSel = xsi.Selection class xsiUndo(): def __enter__(self): xsi.BeginUndo() def __exit__(self, type, value, traceback): xsi.EndUndo() def testFunc(): log("running test") with xsiUndo(): testFunc()
Sorry to be such a pain, but two more “"” sneaked (snuck?) into the second listing… 😉
And now WordPress automagically changes my post: I meant the " token
No pain at all, thanks for the copy edit.
So do any of this methods have advantages over the other? Or is it pretty much the same thing?
Exercise left to the reader… 🙂
But I think…a decorator is associated with a specific function, whereas with with you can use with with any function that you want to pair with the undo bracket.
Sorry about all the alliteration. haha | https://xsisupport.com/2012/11/17/saturday-snippet-wrapping-an-undo/ | CC-MAIN-2019-18 | en | refinedweb |
tag:blogger.com,1999:blog-78786315571409715942018-08-30T22:07:23.871-04:00The Softer Side of Software DevelopmentSoftware development is often seen as a "man vs machine" endeavour. In reality, it's still people who do the work, a work that often has a clear artistic component. It is work we do with others, for others. And here are some of my thoughts on thatNancy Deschênes's Game of Life in SQL, Part 2I showed <a href="" target="_blank">my implementation</a> of <a href="'s_Game_of_Life" target="_blank">Conway's Game of Life</a> in SQL to a few people, a co-worker pondered if it could be done without the stored procedure. I gave it some thought, and came up with an implementation that relies on views. I used views for the simple reason that it made things easier to see and understand. If a query can be written using views, the it can be written as straight SQL, but the results are generally uglier, messier, and can often be unreadable. Since I'm using MySQL, I realize how bad views can be when it comes to performance, but since this is a proof of concept, and a toy anyway, I'm not too worried. I honestly doubt any real-life application would ever require me to implement the Game of Life in SQL!<br /><div><br /></div><div>I started as I had earlier, by defining the table as holding all the live cells for each generation.</div><pre class="brush:sql">create table cell (<br /> generation int,<br /> x int, <br /> y int,<br /> primary key (generation, x, y));<br /></pre><div>Then, I created a view to represent all the candidate position on the grids where we might find live cells on the next generation. We can only find cells where there are cells for the current generation, and in all 8 neighbour positions of those cells:</div><pre class="brush:sql">create view candidate (generation, x, y) as <br /> select generation, x-1, y-1 from cell union<br /> select generation, x, y-1 from cell union<br /> select generation, x+1, y-1 from cell union<br /> select generation, x-1, y from cell union<br /> select generation, x, y from cell union<br /> select generation, x+1, y from cell union<br /> select generation, x-1, y+1 from cell union<br /> select generation, x, y+1 from cell union<br /> select generation, x+1, y+1 from cell;<br /></pre><div>Next, I counted the number of live cells in the 3x3 square around each candidate. This isn't quite the same as counting the neighbours, since it includes the current cell in the count. We will need to take that into account in a bit.</div><pre class="brush: sql">create view neighbour_count (generation, x, y, count) as <br /> select generation, x, y, <br /> (select count(*) <br /> from cell <br /> where cell.x in (candidate.x-1, candidate.x, candidate.x+1)<br /> and cell.y in (candidate.y-1, candidate.y, candidate.y+1)<br /> and not (cell.x = candidate.x and cell.y = candidate.y)<br /> and cell.generation = candidate.generation)<br /> from candidate;</pre><div>From the previous post, the rules of the game can be reduced to: a cell is alive in the next generation if<br /><br /><ol><li>it has 3 neighbours</li><li>it has 2 neighbours and was alive in the current generation</li></ol><div>Let's write this as a;">Neighbours</th> <th style="padding: 0px 14px;">Alive in the next generation</th></tr></thead><tbody><tr> <td style="border-right: 1px black solid; padding: 0 14px;">yes</td> <td style="border-right: 1px black solid; padding: 0 14px;">less than 2</td> <td style="padding: 0 14px;">no</td></tr><tr> <td style="border-right: 1px black solid; padding: 0 14px;">yes</td> <td style="border-right: 1px black solid; padding: 0 14px;">2</td> <td style="padding: 0 14px;">YES<;">more than>If we count the cell in the neighbour count, we change our;"># cells in 3x3 square</th> <th style="padding: 0px 14px;">Alive in the next generation</th></tr></thead><tbody><tr> <td style="border-right: 1px black solid; padding: 0 14px;">yes</td> <td style="border-right: 1px black solid; padding: 0 14px;">less than 3</td> <td style="padding: 0 14px;">no<;">4</td> <td style="padding: 0 14px;">YES</td></tr><tr> <td style="border-right: 1px black solid; padding: 0 14px;">yes</td> <td style="border-right: 1px black solid; padding: 0 14px;">more than>A cell will be alive in the next generation if it's alive and the number of live cells in the square is 3 or 4, or if the cell is dead and there are 3 cells in the square. Let's rewrite the rules... again.<br /><br />A cell will be alive in the next generation if<br /><ol><li>The 3x3 square has exactly 3 cells</li><li>The 3x3 square has exactly 4 cells and the current cell is alive.</li></ol>So, we'll include a cell in the next generation according to these rules. We can tell if a cell is alive by doing a left outer join on the cell table:</div><pre class="brush:sql">insert into cell (<br /> select neighbour_count.generation+1, neighbour_count.x, neighbour_count.y <br /> from neighbour_count <br /> left outer join cell <br /> on neighbour_count.generation = cell.generation<br /> and neighbour_count.x = cell.x<br /> and neighbour_count.y = cell.y<br /> where neighbour_count.generation = (select max(generation) from cell)<br /> and (neighbour_count.count = 3 <br /> or (neighbour_count.count = 4 and cell.x is not null)));<br /></pre><div>Repeat the insert statement for each following generation.</div></div><img src="" height="1" width="1" alt=""/>Nancy Deschênes Code RetreatOn December 8th, I attended the Montréal edition of the 2012 Code Retreat. It was organized by Mathieu Bérubé, to whom I'm very thankful. He asked the attendees for comments on the event, so here I go.<br /><div><br /></div><div>The format of the code retreat is that during the day, there are 6 coding session, using pairs programming. In each session, we try to implement Conways' Game of Life, using TDD principles. We can use any language or framework we want. Sometimes, only one of the partners knows the language used, and the other learns as we go. For some of the session, the leader also adds a challenge or a new guideline. After the session, we delete the code, and we all get together to discuss the session and the challenge. Mathieu had some questions for us to guide the discussion and to make us think.</div><div><br /></div><div>The goal of the even, however, is not to implement the game of life, but to learn something in the process. I think we all learned a lot, technically. Some of us learned new languages, some learned new ways to approach the problem. I learned that the Game of Life is easier to solve in functional or functional-type languages, rather than with object oriented or procedural languages. I learned the problem cannot be solved by storing and infinite 2D array. I learned that MySQL has special optimisation for "IN" that it doesn't have for "BETWEEN". I learned a smattering of Haskell.<br /><br />But that's the sort of things one can learn in a book. What else did I learn, that's less likely to be in technology books? In all fairness, some of the following, I already knew, and is covered in management or psychology books, but those book tend to be less popular with developers.<br /><h4>The clock is an evil but useful master</h4></div><div>By having only 45 minutes to complete the task, we can't afford to explore various algorithms and data structures to solve the problem - as soon as we see something that looks promising, we grab it, and run with it, until we notice a problem and try down a different path. In some cases, we even started coding without knowing where we were headed. I would normally think of this as a problem, because creativity is a good thing, and stifling it must be bad. However, having too much time usually means I'll over-design, think of all possible ways, but in the end, I may still rush through the implementation. Moreover, I assume that I'll have thought about all the possible snags that can come along, but some problems only rear their ugly heads after some code is written and the next step proves impossible. Sometimes, it's better to start moving down the wrong path, than to just stay in one spot and do nothing at all.</div><div><br /></div><div>This is somewhat related the the Lean Startup methodology - test your assumptions before you make too many.</div><div><br /></div><div>At one point, I asked the organizer to put up, on the projector, how much time remained in the session. He pointed out that it was a bad idea, because when we look at the clock, we're more likely to cut corners just to get something working, rather than focus on the quality of the code. This was a hard habit to ignore!</div><h4>Let go, and smell the flowers</h4><div>Since my first partner and I solved the problem in the first session, I was really eager to solve it the second (and third, ...) time around, in part to prove that he didn't do all the work, and I was just there to fix the syntax. But from the second session on, Mathieu gave us particular challenges to include in our coding, from "think about change-proofing your solution" to "naive ping-pong (one partner writes the test, the other does the least amount of work possible to pass the test)". As it turns out, it was really hard to completely implement the solution, writing the tests first, and factoring in the challenge. Something had to give. It was really hard to let go "complete the solution" in favor of TDD and the challenges. Recognizing that this is what I was doing certainly helped me let go, but the drive to finish the job kept nagging me. So much so that when I got home, I just <i>HAD</i> to implement the Game of Life in Scala, because I was so upset I didn't finish during the event.</div><div><br /></div><div>Another aspect of this is how I interacted with my partners. When I thought I had the solution and they didn't, I just pushed on with my idea, so that we'd get the job done as fast as possible. Rushing means I didn't listen as well as I could have to what the other person had to say. You can't speak and listen at the same time. The point of Code Retreat is not to write the Game of Life, that's been done thousands of times. The point was to play with writing code, try different things, and learn something in the process.</div><h4>The tools make a difference on productivity</h4><div>Session after session, it became clear that for this particular problem, a functional approach makes more sense than a procedural or object-oriented one. While we can use functional programming concepts in most languages, functional languages make the task much simpler. Even though we struggled setting up the test environment for Scala, we got further than with procedural language, because once started, we just added a test, added an implementation - everything seemed to work almost on the first try.</div><div><br /></div><div>Another tool that affected productivity is the editor. I'm not advocating for any particular tool, but in all the sessions, we used one participant's set up. Whoever's setup that was would edit so much faster. When using someone else's computer, I quickly got frustrated because I would type some editor command, only to realize that it's the wrong way to do what I want on <i>that</i> editor, or I had to slow down to avoid such mistakes. This happened even though I knew how to use the other person's editor. It may have worked better if we had used something like <a href="" target="_blank">Dropbox</a> so we could each use our own computer, but given that IDEs tend to organize the source/project differently, it may not have worked either.</div><h4>I change depending on the personality of my partner</h4><div>Each team I was on had a different dynamic. This was due in part to the challenge proposed - ping-pong tends to do that, and in part to the personality of my partner. It is likely that they also took cues from my personality as well, particularly since I was generally outspoken between the sessions, so I did not necessarily get to know them well in the process. I tend to avoid being the leader of a group, but I'm also impatient. At the beginning of a session, the my partner didn't suggest something immediately, an approach, a language, etc, I'd propose something. This means that more often than not, I imposed my will on the other. I should have listened more to my partners, and I might have, if I hadn't been so intent on finishing the problem in the 45 minutes!</div><h4>Deleting the code</h4><div>One of the instructions in the Code Retreat is that after each session, you delete the code. This was hardest to do, the closer we got to finishing the solution. It was particularly difficult in the session when we wrote in Scala, because after the initial fumbling with the test environment, the progress was steady and fast. If <i>only</i> I had another 10 minutes, I'm <i>positive</i> we could finish! Surprisingly, the working code in MySQL was very easy to delete, possibly because it felt complete, done, over with.</div><div><br /></div><div><br /></div><div>All in all, it was a useful, interesting and fun experience. I highly recommend it to any programmer!</div><div><br /></div><img src="" height="1" width="1" alt=""/>Nancy Deschênes's Game of Life in... SQL?Yesterday, I attended the <a href="">Code Retreat</a> hosted at Notman House and organized by Mathieu Bérubé. It was a most enjoyable experience. On the first session, I paired with Christian Lavoie, and we tried to implement <a href="'s_Game_of_Life">Conway's Game of Life</a> using... mysql. Some people thought this was crazy, and I can't disagree. At the same time, we were able to complete the solution in under 45 minutes, not a small feat. <br /><br />Part. <br /><br /).<br /><br /. <br /><br />Then, for each cell within that rectangle, we decide if it should be alive in the next generation. We simply ignore cells that die, by not adding them to the next generation. The 4 rules of the game can be rewritten as two simpler rules:<br /><br /><ol><li>If the current cell is alive, it stays alive if it as exactly 2 or 3 neighbours who are also alive</li><li>If the current cell is dead, it becomes alive if it has exactly 3 neighbours.</li></ol><div>Using a little logic processing, we can further rewrite this as the following 2 conditions to be alive in the next generation:</div><div><ol><li>If a cells has 3 neighbours</li><li>If a cell is alive and has 2 neighbours</li></ol><div>So, we only add the cell to the next generation (insert it in the table) if it fits either of those rules.</div></div><div><br /></div><div.</div><div><br /></div><div>Here is code close to what we came up with, with some modifications that came to me after the fact.</div><br /><br /><pre class="brush: java">drop table if exists cells;<br />create table cells (<br /> generation int,<br /> x int,<br /> y int,<br /> primary key (generation, x, y));<br /><br />drop procedure if exists iterate_generation;<br />delimiter $$<br />create procedure iterate_generation ()<br /> begin<br /> declare current_gen int;<br /> declare current_x int;<br /> declare current_y int;<br /> declare min_x int;<br /> declare max_x int;<br /> declare min_y int;<br /> declare max_y int;<br /> declare live int;<br /> declare neighbour_count int;<br /><br /> select max(generation) into current_gen from cells;<br /> select min(x)-1, min(y)-1, max(x)+1, max(y)+1<br /> into min_x, min_y, max_x, max_y from cells<br /> where generation = current_gen;<br /><br /><br /> set current_x := min_x;<br /> while current_x <= max_x do<br /> set current_y := min_y;<br /> while current_y <= max_y do<br /> select count(1) into live from cells<br /> where generation = current_gen<br /> and x = current_x<br /> and y = current_y;<br /> select count(1) - live into neighbour_count from cells<br /> where generation = current_gen<br /> and x in (current_x - 1, current_x, current_x + 1)<br /> and y in (current_y - 1, current_y, current_y + 1);<br /><br /><br /> if neighbour_count = 3 || live = 1 and neighbour_count = 2<br /> then<br /> insert into cells (generation, x, y)<br /> values (current_gen+1, current_x, current_y);<br /> end if;<br /> set current_y := current_y + 1;<br /> end while;<br /> set current_x := current_x + 1;<br /> end while;<br />end<br />$$<br />delimiter ;<br /></pre><br />Crazy? well, yes. And yet, surprisingly easy. And absolutely a whole lot of fun. <img src="" height="1" width="1" alt=""/>Nancy Deschênes, and seemingly dirty command objects<div>I love Grails, but sometimes, the magic is a little too much.</div><br /><div.</div><div>Here is the simplest form of the problem I have been able to put together.</div><br /><div>The controller:</div><pre class="brush: java">def debug = { DebugCommand d -><br />render new JSON(d)<br />}</pre><br /><div>The command objects: I have nested commands, with DebugCommand being the outer command (used by the controller) and DebugMapCommand, a map holding some values. I'm using a LazyMap since that's why I used in my real-life problem.</div><br /><pre class="brush: java">public class DebugCommand {<br /> int someNum<br /> DebugMapCommand debugMapCommand = new DebugMapCommand()<br />}<br /></pre><br /><pre class="brush: java">@Validateable<br />public class DebugMapCommand {<br /> Map things = MapUtils.lazyMap([:], FactoryUtils.instantiateFactory(String))<br />}<br /></pre><br /><div>What happens here is that multiple calls to the controller/action result data being accumulated in the LazyMap between calls:</div><br /><pre class="brush: jscript">nancyd $ curl '\&debugMapCommand.things\[2\]=5'<br />{"class":"com.example.DebugCommand",<br /> "debugMapCommand":{"class":"com.example.DebugMapCommand",<br /> "things":{"2":"5"}},<br /> "someNum":5}<br /><br />nancyd $ curl '\&debugMapCommand.things\[1\]=5'<br />{"class":"com.example.DebugCommand",<br /> "debugMapCommand":{"class":"com.example.DebugMapCommand",<br /> "things":{"2":"5","1":"5"}},<br /> "someNum":5}<br /><br />nancyd $ curl '\&debugMapCommand.things\[elephants\]=5'<br />{"class":"com.example.DebugCommand",<br /> "debugMapCommand":{"class":"com.example.DebugMapCommand",<br /> "things":{"2":"5","1":"5","elephants":"5"}},<br /> "someNum":5}<br /></pre><br /><div>If I gave a new value for a map entry, the new value was used.</div><br /><div>So, what's happening? The DebugCommand's reference to the DebugMapCommand is called debugMapCommand, and Grails thinks I want a DebugMapCommand injected, so it created a singleton, and passed it to all my DebugCommand instances. Oops.</div><br /><div>Trying to prove this wasn't too easy. It would seem that a number of factors are necessary for this particular issue to manifest:</div><br /><ol><li>The field name must be the same as the class name with the first letter lowercased</li><li>The inner/sub command, DebugMapCommand, must be annotated with @Validateable</li><li>The inner/sub command must be in a package that is searched for "validateable" classes (in Config.groovy, you have to have <span class="Apple-style-span" style="font-family: Monaco; font-size: 11px;">grails.validateable.packages = ['com.example.yourpackage', ...])</span></li></ol><br /><div>So, what's the lesson here?</div><br /><div>Don't name your fields after their class.</div><img src="" height="1" width="1" alt=""/>Nancy Deschênes is reading code so hard?We've all been there. Faced with lines and lines of code, written by an intern, a programmer long gone, or a vendor with whom you do not have a support contract.<br /><br />Code can be quite hard to read. Why is that?<br /><br /.<br /><br /><b>1. Reading code usually involves two actions: identifying what the code does, and what it was actually meant to do.</b><br /><br /.<br /><br /:<br /><br /><pre class="brush: java">if (someCondition == true) {<br /> myVar = false;<br />} else {<br /> myVar = false;<br />}<br /></pre><br /><b>2. We're talking too loud, and not listening enough</b><br /><br />When we see code like the one above, we laugh, we point, and our opinion of the code (and the programmer(s)) goes down. We start generalizing and thinking the worse. How can code with <i>that</i> in it be any good? And <i>what</i> were they thinking? who would wrote such poor code?<br /><br /!"<br />. <br /><br /><b>3. It's not written how we think</b><br />.<br /><br /.<br /><br /><b>4. We lose track</b><br /><br /<br /><br /.<br /><br /><br /><b>What can we do about it?</b><br /><ul><li>Try to find out what the programmer wanted to do</li><li>Then see if the code does it</li><li>Keep an open mind - don't let your prejudice get in the way</li><li>Learn new (to you) techniques and patterns</li><li>When you code, try to see more than one way to solve the problem; maybe next time you try to read someone's code, they'll have used your second or third choice, and you'll recognize it more easily</li><li.</li></ul><div><br /></div><div>Why do you think reading code is so hard? what helps you?</div><div><br /></div><img src="" height="1" width="1" alt=""/>Nancy Deschênes building blocks part 2: imports and packagesIf you're used to Java, Scala imports and packages look both familiar and all wrong and weird.<br /><br /><b>Imports</b><br /><br />The first thing that you may notice in some code you're reading is the use of <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">_</span> in imports. The best way I've found to deal with <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">_</span> is to replace it in my head with "whatever". It works in imports, but it also works in other cases where you encounter <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">_</span>.<br /><br />So, when you see<br /><br /><pre class="brush: java">import com.nancydeschenes.mosaique.model._<br /></pre><br />it means "import whatever you find under <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">com.nancydeschenes.mosaique.model</span>"<br /><br />You can lump together multiple imports from the same package:<br /><br /><pre class="brush: java">import scala.xml.{ NodeSeq, Group }<br /></pre><br /><br />Imports are relative:<br /><br /><pre class="brush: java">import net.liftweb._<br />import http._<br /></pre><br />imports whatever's needed in net.liftweb, and whatever's needed from net.liftweb.http.<br /><br />But what if I have a package called http, and don't want net.liftweb.http? use the _root_ package:<br /><br /><pre class="brush: java">import net.liftweb._<br />import _root_.http._<br /></pre><br /><b>Packages</b><br /><br /.<br /><br />You can use the same package statement you would with Java:<br /><br /><pre class="brush: java">package com.nancydeschenes.mosaique.snippet<br /></pre><br />Or, you can use the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">package</span> block:<br /><br /><pre class="brush: java">package com.nancydeschenes.mosaique {<br /> // Some stuff<br /><br /> package snippet {<br /> // stuff that ends up in com.nancydeschenes.mosaique.snippet<br /> }<br />}<br /></pre><img src="" height="1" width="1" alt=""/>Nancy Deschênes building blocksWhen I first started reading about Scala, I saw a lot of very interesting looking code, but I felt lost. It looked as if I was trying to learn a foreign language with a phrasebook. I saw a number of examples, but I felt I was missing the building blocks of the language.<br /><br />I.<br /><br /><br /><b><span class="Apple-style-span" style="font-size: large;">Classes, traits, objects, companions</span></b><br /><br />Everything in Scala is an object, and every object is of a particular class. Just like Java (if you ignore the pesky primitives). So, generally speaking, classes work the way you expect them. Except that there's no "static". No static method, no static values. <br /><br />Traits are like interfaces, but they can include fields and method implementations. <br /><br />Objects can be declared and are then known throughout the program, in a way that's reminiscent of dependency injection, particularly the Grails flavour. Object declarations can even include implementation, and new fields and methods.<br /><br /.<br /><br />So you may encounter code such as<br /><br /><pre class="brush: java">class Car extends Vehicle with PassengerAble {<br /> val numWheels = 4<br /> val numPassengerSeats = 6<br />}<br /><br />object Car {<br /> def findByLicensePlate(String plate, String emitter) : Car = {<br /> Authorities.findEmittingAuthority(emitter).findByLicensePlate(plate);<br /> }<br />}<br /><br />object MyOwnCar extends Car with PassengerSideAirbags, RemoteStarter {<br /> val numChildSeats = 2;<br /> def honk : Unit = {<br /> HonkSounds.dixie.play;<br /> }<br />}<br /></pre><br />In that example, the first <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">Car</span> is a class. The second <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">Car</span> is an object. <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">MyOwnCar</span> is an object that can be addressed anywhere (same package rules apply as java), but <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">MyOwnCar</span> has extra stuff in it: <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">PassengerSideAirbags</span> and <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">RemoteStarter</span> are trait (you can guess that because of the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">with</span> keyword). It even defines a new method so that honking it <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">MyOwnCar</span> should remind you of the <i>Dukes of Hazzard</i>.<br /><br /><br /><b><span class="Apple-style-span" style="font-size: large;">Types</span></b><br /><br />Unlike Java, in Scala, everything is an object. There is no such thing as a primitive.<br /><br /><br /><b>Basic types</b><br /><br />At the top of the object hierarchy, we have <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">Any</span>. Every object, either what you think of an object in a Java sense, or the types that are primitives in Java, every thing inherits from <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">Any</span>.<br /><br />The hierarchy then splits into two: <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyVal</span> and <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyRef</span>. Primitive-like types are under <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyVal</span> and <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyRef</span> is essentially the equivalent of <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">java.lang.Object</span>. All Java and Scala classes that you define will be under <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyRef</span>.<br /><br />Strings are just like Java Strings. Double-quoted. But Scala defines a few additional methods on them, and treats them as collections, too, so your String is also a list of characters. A particularly convenient method, I've found, is <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">toInt.</span><span class="Apple-style-span" style="font-family: inherit;"> There's </span><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">toDouble</span><span class="Apple-style-span" style="font-family: inherit;"> and </span><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">toBoolean</span><span class="Apple-style-span" style="font-family: inherit;"> too.</span><br /><br /><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">Unit</span><span class="Apple-style-span" style="font-family: inherit;"> is what a method returns when it doesn't return anything. You can think of it as "void".</span><br /><br /><span class="Apple-style-span" style="font-family: inherit;">Null as you know it is a value, but in Scala, every value has type, so it's of type Null. And Null extends every single class, so that null (the one and only instance of Null) can be assigned to any </span><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyRef</span><span class="Apple-style-span" style="font-family: inherit;"> object. It sounds crazy, but if you let it sink in, it will make sense. Null is actually a trait, not a class.</span><br /><br /><span class="Apple-style-span" style="font-family: inherit;">Nothing is the absolute bottom of the hierarchy. Nothing doesn't have any instances.</span><br /><br /><br /><b>Numeric Types</b><br /><br />Integers are of type Int.<br /><br />Doubles are Doubles, floats are Float. <br /><br />A litteral in the code is treated an object of the appropriate type. Things just work, without "autoboxing" or other convolutions.<br /><br />Strings and numeric types are immutable.<br /><br /><br /><b>Collections</b><br /><br />Collections come in mutable and immutable variations.<br /><br /.<br /><br />A special kind of collection is the Tuple. It's an ordered group of N items that can be of the same or of different types. They are defined for N=2 to N=22. You can access the <i>j</i>-th element of the tuple with ._<i>j</i> (ex: the first is myTuple._1, the third is myTuple._3)<br /><br /><b>Options</b><br /><br />The first thing you have to realize about options is that they're everywhere, so you better get used to the idea. Lift uses Box instead, but it serves the same purpose.<br /><br />The second thing you have to realize is that Options (or Box) is the right way to handle multiple scenarios, but you've spent your programming life working around that fact.<br /><br />Let's take a simple example. You want to implement a method that computes the square root of the parameter it receives. What should that method return? a Double. So you start:<br /><br /><pre class="brush: java">def sqrt (x: Double) : Double = {<br /> // ...<br />}<br /></pre><br /)<br /><br /.<br /><br /.<br /><br />Or, you can use matching to decide what to do:<br /><br /><pre class="brush: java">MathHelper.sqrt(x) match {<br /> case Some(y) => "The square root of " + x + " is " + y<br /> case None => "The value " + x + " does not have a square root"<br />}<br /></pre><img src="" height="1" width="1" alt=""/>Nancy Deschênes on Rejection.<br /><br /?<br /><br /. <br /><br /.<br /><br /?"<br /><br /?<br /><br />And then, the decision came. The list of speakers was published, and I couldn't find my name on it. I looked and looked and.. nope! nothing. Not for the Scala talk, not for the Grails talk, not for the SQL talk.<br /><br /.<br /><br /><b>So, why was I so disappointed?</b><br /><br /.<br /><br /><b>What now?</b><br />. <br /><br />I know that whenever we do something, we "practice" doing it - we get better at it, it becomes easier. So it's time for me to start practicing risk taking. At least a little.<br /><br />Isn't it convenient that this is happening at the turning of the year?<br /><br />Here's to 2011!<img src="" height="1" width="1" alt=""/>Nancy Deschênes the configuration for the mail plugin for grails from the databaseI have a grails application that sometimes needs to email users. I am using the <a href="">mail plugin</a>, but that expects the configuration (SMTP server, etc) configuration to be in the application's Config.groovy file. I wanted to make it possible for the site administrator to change the configuration. This way, they could create a specific email account with Google just to send email for the application.<br /><div><br /></div><div>After some research, I discovered that the configuration could be altered at runtime by getting the mailSender injected into my code, and setting its properties as needed.</div><div><br /></div><div>Next, I needed a domain object to represent my configuration. I tried using the @Singleton annotation, but that does not play well with domain objects. I ended up writing getInstance() myself:</div><div><br /></div><div><div></div></div><br /><pre class="brush: java">static long instanceId = 1l;<br />static EmailServerConfig instance;<br /><br />static synchronized getInstance() {<br />if (!instance) {<br />instance = EmailServerConfig.get(instanceId);<br />if (!instance) {<br />instance = new EmailServerConfig()<br />instance.id = instanceId<br />instance.save()<br />}<br />}<br />return instance;<br />}<br /></pre><div>This can only work with </div><pre class="brush: java">static mapping = {<br />id generator:'assigned'<br />}<br /></pre><br /><div><br /></div><div>and even with that, I was unable pass the id parameter to the constructor, that's why I set it separately after the EmailServerConfig object is created.</div><div><br /></div><div>Then all I need is to define the GORM event handlers afterLoad, afterInsert and afterUpdate to apply the values from the database to the mailSender.</div><div><br /></div><div>Finally, I made sure to encrypt the SMTP authentication password on its way into the database, and to decrypt it when retreiving it. Thanks to Geoff Lane for this <a href="">blog post</a> on how to handle that with codecs:</div><pre class="brush: java">class EmailServerConfig {<br />String password<br />String passwordEncoded<br />//...<br />def afterLoad = {<br />password = passwordEncoded?.decodeSecure()<br />updateMailSender()<br />}<br />def beforeUpdate = {<br />passwordEncoded = password?.encodeAsSecure()<br />}<br />//...<br />}<br /></pre><br /><div>Then all I had to do was write the controller to let site admins edit those values. The controller uses EmailServerConfig.getInstance(), since we're simulating a singleton.</div><div><br /></div><div>Things to remember:</div><div><ul><li>onLoad is called before the values are loaded into the object, so the values are null; use afterLoad instead</li><li>beforeInsert, beforeUpdate and beforeDelete are called only once the object is validated, so the constraints have to allow a null value for passwordEncoded</li><li>Grails tries to instantiate domain classes at startup, and @Singleton prevents that, that's why you can't have @Singleton on a domain class</li><li>If you don't create an EmailServerConfig in the Bootstrap, and you do not provide any configuration in Config.groovy, the mailSender will default to trying to send mail through localhost:25. This may work, depending on your setup.</li></ul></div><img src="" height="1" width="1" alt=""/>Nancy Deschênes 2010 in Montreal - Recap and Impressions<div><div>The Recent Changes Camp (also called RoCoCo) was held last weekend (June 25-17) in Montreal. I'm not sure how I heard about it. Possibly someone I follow on twitter commented on tikiwiki, and I checked it out and saw that they were sponsoring a conference near me. I do not "wiki". We use one at work, but I have felt no drive to get involved in the open ones such as wikipedia or wikiHow. But I decided not too long ago to be move involved in what's happening near me, and to actually meet people face to face once in a while. So I went. Their website made it look open enough.</div><div><div><br /></div><div>It turns out, that was a good idea.</div><div><br /></div><div>The first few people I met were welcoming and friendly, so I stayed. "I don't really have anything to do with wikis" was met with "that's cool", and "whoever ends up being there is whoever was meant to be there". I learned first-hand about the openspace technology (well, I'd call it more a philosophy or concept, but it seems that it's usually referred to as a technology). When we got together to set the agenda, I expected people to act like I've always seen people act, shy: "you go first" "no you do". But they couldn't wait for the presenter to give them the floor to introduce the sessions they wanted to host. I suppose I shouldn't have been surprised; many are active wiki or open software contributors, so they do not wait for others to do what they want done - they are doers. Still, for me that set the tone do the day - a very proactive sort of tone. Which eventually led me to co-host a talk (at Mark Dillon's invitation). I hadn't planned on coming back on the Saturday, but I was learning, I was talking to people, I felt part of the group, so I came back.</div></div><div><br /></div><div><b>Why wiki, why not wiki, and why don't more people contribute?</b></div><div><br /></div><div>The first session I attended was led by <a href="">Mark Dillon</a>, and asked "<a href="">Why wiki?</a>". Many issues were discussed, as you can see from the notes. We discussed how things are, how people use wikis, and how it can make things better. We also looked at some reasons why people do not wiki. The technical aspect came up (wiki syntax can be cumbersome), but I felt they were leaving the "people aspect" out of it a bit- being shy, or... something. We did cover a lot of the features and how they can act to invite users to contribute, or add to the "conversation". After the talk, Mark asked if I would co-host another one on <a href="">barriers to entry</a>. I don't know if it was his strategy to gently push a newcomer, of it was just the action of a natural leader - either way, it worked, and I accepted. The session happened the next day. I think we covered a lot of "personal" aspects. I personally feel that while the technical complexity may keep some people away, many just don't want to write, either because they are shy, or do not feel the drive to do so. I wanted to explore these in more details, and we certainly did. What came out of it for me, why I think many of us don't contribute, is that we feel we need the permission to edit someone else's text. Even when we know how the system works, we may want to make sure we're not going to hurt the other person's feelings, or possibly miss an important point they were making, or an angle they were trying to give. Another major obstacle is the lack of feedback. If we don't believe that others will review our work and improve on it, there is little to motivate us. What I didn't know is how much of a community the contributors end up creating, and that it is an important factor to current contributors. A very interesting and enlightening set of discussions.</div><div><br /></div><div>Since then, I have read an <a href="">article</a> (thanks to <a href="">Seb Paquet</a> for pointing it out) showing some empirical measurement of the benefits of methods used to encourage people to contribute. I have also encountered the concept of "diffusion of responsibility" and "the bystander effect". The feeling that we don't have to do something; someone else will. I was somewhat familiar with the concept, but now it has a name. And I'm sure it plays a role in whether one contributes to wikis, as well as pretty much any open venture.</div><div><br /></div><div><b>Structured wikis, semantic wikis, Semantic web</b></div><div><br /></div><div>Here, I mostly took notes, trying furiously to make sense of all the information provided. Structured wikis are those where contributors are asked to fill in fields, rather than (or as well as) come up with the text of the article. It may feel more like entering data into a database. For example, for an entry on an author, the user may be asked for a date of birth, a date of death, schools attended, genre, and a list of titles.</div><div><br /></div><div>A semantic wiki is one where the occurrence of information with a particular meaning ("this is a city", "this is a person", "this is a date") is annotated as such. This then allows the software to make links between other things that also have that information, such as other authors who were born in the same city, or events that occurred on the same date.</div><div><br /></div><div>The power of both these wiki styles is greatly improved when they are combined - structured semantic wikis create a wealth of information that can be interrelated by a machine to create a rich set of information.</div><div><br /></div><div>This is also where I learned about the existence RDF, and the RDF triple: a relation is expressed as 3 components: the subject, the predicate, and the object. I was taken back to 6th grade grammar, but of course it makes sense. If you want to describe a relation you need the the origin of the relation, the type of relation, and the target of the relation: "Bob knows Peter", "Steven King wrote The Stand". I believe someone brought up microformats, which can encode the same information, but differently, in a way that's more accessible on the web. There is so much here to try to make sense of, I'll be spending some serious time puzzling it out. But I think the potential there is obvious. Calling it "Web 3.0" may be an exaggeration, but I don't think it's far off.</div><div><br /></div><div><b>Open companies and reputation systems</b></div><div><br /></div><div><a href="">Bayle Shanks</a> presented the concept of open companies where employees are those who want to work on the company's project, and they get paid according to a rating system by their peers. When I first sat in that session, I expected it to be about open companies in the sense of information flow, not a true open source parallel. Before the talk, I would have thought the whole concept simply crazy, but the culture of open spaces (or this particular one?) is such that we all listened, commented, suggested, brought up issues, without anyone being offensive or dismissive. I'm still not sure it can work in a general setting, but I applaud Bayle for wanting to give it a shot, and I can see some applications where it has some definite potential.</div><div><br /></div><div>The reputation system he wants to use is quite intriguing. The system sounds simple enough, but I'm not following all the repercussions of money distribution scheme. The interesting bits is that the people who contribute the most also end up being the ones who have the bigger influence on the distribution of rewards (money, reputation, gold stars). This can be useful in a number of non-money applications, too, as a way to recognize the contribution of volunteers, for example. Maybe all wikis should have that!</div><div><br /></div><div>And here is why keeping an open mind pays. If I had chosen not to listen to the open company part of the session I would never have heard about the reputation system. And boy am I glad I did!</div><div><br /></div><div><b>Multilingual wikis</b></div><div><br /></div><div>The resources on the net are still mostly in English, but the web as a whole is getting a lot better. At least now, a number of HTTP requests include an Accept-Language header that makes sense for the context. Still, users do not always want to browse a particular site in the language configured in their browsers.</div><div><br /></div><div>The question was, "how do you handle multilingual wikis"? This can be particularly hard to do when original content can come from any of the supported languages. If someone updates the French version, how do you make sure the changes are included in the English version? <a href="">Wiki translation</a> shows some promise, but I think there are still more questions than answers on that topic, both for wikis and for websites and social networks in general. I still haven't figured out how to Facebook or Twitter bilingually.</div><div><br /></div><div><b>Intent Map</b></div><div><br /></div><div>In another session, Seb Paquet introduced the idea of a way to find people who want to work on the same sort of things you want to work on. We discussed this, and by the end of the session, we had agreed to turn this into a <a href="">project</a>. Others in the group were already familiar with RDF (see section above on semantic web). Someone else brought up <a href="">FOAF</a>, a specification for identifying people and relations between people. Someone else brought up <a href="">DOAP</a>. Microformats came up again. I try write down everything I'm going to have to learn about. And yet, I volunteered to code it all. Code what? well, that's the question! I'm not sure what I got myself into. During the session, I mentioned that one way to get widespread visibility would be with a Facebook application, so I have started playing with writing a Facebook application. Since I don't want to worry about hosting the application (and the scraper, and the database that will be needed to support it all) I also started learning about Google App Engine. I even wrote a very simple application that displays inside Facebook, but I'm still sorting out authentication and permissions. It is quite a challenge, but I need to get out of my comfort zone, so this is perfect. I get to play with new (to me) technologies, learn new specifications, and tons of new concepts. I hope I don't disappoint my teammates. I better get coding.</div><div><br /></div><div><b>What now?</b></div><div><br /></div><div>I met tons of new people. I hosted a session. I am now an official contributor to the RoCoCo2010.org wiki (I wrote up some session notes). I volunteered to implement something around a spec that doesn't even exist, with tools and APIs I don't know. This is not the same-old, same-old, but I'm okay with that. And I'm going to do it again, when the opportunity arises again. Thanks to everyone who was there for making it so energizing.</div><div><br /></div></div><img src="" height="1" width="1" alt=""/>Nancy Deschênes notes from the RoCoCo unconference in Montreal this weekendI decided to attend the <a href="">Recent Changes Camp 2010: Montreal</a><br />Here are a few notes I made for myself, and that I'm sharing now, at least until I get around to writing a proper entry:<br /><br />Wikis (well, wiki software) could be a way to implement <a href="">addventure</a> (a "you are the hero" collaborative story telling game originally written by Allen Firstenberg). Wiki red links are very much like "this episode has not been written yet".<br /><br />Wikis synthesize (focus), where forums divide (or disperse in their focus).<br /><div><div><br /></div><div>Ontology vs folksonomy.<br /><br /><b>Look into:<br /></b>HTLit</div><div>Inform 7<br /><a href="">Universal Edit Button</a><br />Etherpad<br />Semantic web; the semantic triple: [subject, predicate, object]</div><div>Resource Definition Framework (RDF), SPARQL (query language for RDF)<br /><a href="">confoo</a><br /><a href="">appropedia</a></div><div><a href="">dbpedia</a></div><div>Google wheel</div><div>microformats</div><div><br /></div><div><br /><b>To read:</b><br />The wisdom of crowds</div><div><div><a href="">The Delphi Method</a></div><div><br /></div><br />I also got to lead a <a href="">session</a> (which is a lot easier than you might expect since all the participants are interested in the topic anyway - or they leave, and because they participate willingly). And we started a new <a href="">project</a>!</div></div><img src="" height="1" width="1" alt=""/>Nancy Deschênes to transform one type of object to another in Groovy/Grails (as long as it's not a domain object)<div>I've been working on a system that will be using remote calls to communicate between a client (browser, mobile phone, possitbly a GWT client) and the server. The client sends a request, and a grails controller returns a grails domain object encoded using JSON. Relatively straight-forward stuff, but I hit a few snags. I was thankful when I discovered <a href=""></a>/ which goes into some details into how to make it happen. Detailed post are <a href="">here</a>, <a href="">here</a>, and <a href="">here</a>.</div><br /><div>I debated using the ObjectMarshaller to restrict the data sent (afterall, the client doesn't need to know the class name of my objects), but in the end, I decided to use Data Transfer Objects. I can see a future development where these objects will be used as commands, for example.</div><div><br /></div><div>The problem that's been keeping me awake tonight, tho, is in the translation from domain object to DTO. Based on my reading, it looked like I could transform any kind of object into any other kind of object, as long as the initial object knew what to do.</div><pre class="brush: java">class User {<br />// grails will contribute fields for id and version<br />String lastName<br />String firstName<br />Address workAddress //<br />Address homeAddress // The client does not need that info and SHOULD NOT ever see it<br />static hasMany [roles: Role, groups: Groups] // etc<br /><br />doThis() {<br />//..<br />}<br /><br />doThat() {<br />//...<br />}<br />}<br /><br />class UserDTO {<br />String lastName<br />String firstName<br />}<br /></pre><div>How do you take a User object and make a UserDTO out of it? Well, you should certainly have a look at Peter Ledbrook's <a href="">DTO plugin</a>. But for my needs, I thought I'd stick with something simpler. Just use the groovy "as" operator.</div><div></div><div>All you need to do something like</div><pre class="brush: java">def dto = User as DTO<br /></pre><div>is to have User implement asType(Class clazz) and to handle (by hand) the case where clazz is DTO:</div><pre class="brush: java">class User {<br />// same fields as before, etc<br />Object asType(Class clazz) {<br />if (clazz.isAssignableFrom(UserDTO)) {<br />return new UserDTO(lastName: lastName, firstName:firstName)<br />} else {<br />return super.asType(clazz)<br />}<br />}<br />}<br /></pre><div>All works well. Unit tests confirm, there's nothing to it.</div><pre class="brush: java">void testUserAsUserDTO() {<br />String lastName = 'Lovelace'<br />String firstName = 'Ada'<br />User u = new User(lastName: lastName, firstName: firstName);<br />UserDTO dto = u as UserDTO;<br />assertEquals(UserDTO.class, dto.class)<br />assertEquals(lastName, dto.lastName);<br />assertEquals(firstName, dto.firstName);<br />}<br /></pre><div>Integration test. I want to make sure my controller sends the right data</div><div>The controller:</div><pre class="brush: java">def whoAmI = {<br />def me = authenticateService.userDomain() // acegi plugin; this returns a User<br />if (me) {<br />def dto = me as UserDTO<br />render dto as JSON<br />} else {<br />render [error: "You are not logged in"] as JSON<br />}<br />}<br /></pre><div>The test:</div><pre class="brush: java">class RpcWhoAmITest extends ControllerUnitTestCase {<br />void testWhoAmI() {<br />String lastName = 'Lovelace';<br />String firstName = 'Ada';<br />User u = new User(lastName: lastName, firstName: firstName)<br />mockDomain(User.class,[u])<br />mockLoginAs(u)<br />controller.whoAmI()<br />def returnedUser = JSON.parse(controller.response.contentAsString)<br />assertNotNull(returnedUser)<br />assertEquals(lastName, returnedUser.lastName)<br />assertEquals(firstName, returnedUser.firstName)<br />}<br />}<br /></pre><div>And that... <span style="font-weight: bold;">fails!</span> The message is</div><pre>org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'my.package.User : 1' with class 'my.package.User' to class 'my.package.rpc.UserDTO'<br />at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.castToType(DefaultTypeTransformation.java:348)<br />at ...<br /></pre><div><br />What went wrong? The call to mock my domain object is what went wrong. It replaces my asType(Class clazz) with its own. Fortunately, that's relatively easy to fix. I needed to override the method addConverters in grails.test.GrailsUnitTestCase to replace asType(Class) only if it didn't already exist (in my test class):</div><pre class="brush: java">@Override<br />protected void addConverters(Class clazz) {<br />registerMetaClass(clazz)<br />if (!clazz.metaClass.asType) {<br />clazz.metaClass.asType = {Class asClass -><br />if (ConverterUtil.isConverterClass(asClass)) {<br />return ConverterUtil.createConverter(asClass, delegate, applicationContext)<br />}<br />else {<br />return ConverterUtil.invokeOriginalAsTypeMethod(delegate, asClass)<br />}<br />}<br />}<br />}</pre><br /><br /><div>Sadly, after all this work, I deploy, launch, and still get GroovyCastExceptions. It turns out that the instrumentation of domain class objects essentially throws out my "asType()" method. In the end, I switched to the DTO plugin (which post-instruments the domain object to do it's own stuff, something I considered doing, but at some point, the "quick, home-made solution" just isn't.</div><img src="" height="1" width="1" alt=""/>Nancy Deschênes Part 3The third speaker in the rapid-fire keynotes was Adele <span class="blsp-spelling-error" id="SPELLING_ERROR_0">McAlear</span> with Death and <i>Digital Legacy</i>.<br /><div><br /></div><div>That presentation was definitely nowhere near as much fun as any of the other ones, simply due top the morbid topic. It is a brand new domain - what do you with all the electronic assets when their owner (producer, creator) dies? Who has the rights to decide what happens, and how are the service providers handling it?</div><div><br /></div><div>First, Adele showed us the breadth of our "online footprint". From email accounts to <a href=""><span class="blsp-spelling-error" id="SPELLING_ERROR_1">flickr</span></a> uploads, blogs, tweets, WOW characters... Each service provider may have different policies for dealing with a deceased person's account, but in reality, only <a href=""><span class="blsp-spelling-error" id="SPELLING_ERROR_2">Facebook</span></a> has a stated policy. And what about paid subscription services? When a person dies, the credit card they used to maintain that service is cancelled, and unless a survivor has access to the account, they can't change the credit card on file (I wonder what happens in the cases where the service have "gift subscriptions", such as "buy Person a pro account"). The survivor can't gain access to the account because it is tied to an email account, to which, in all likelihood, they do not have access.</div><div><br /></div><div>So what solutions do we have, if we want to leave something behind, if not for ourselves, for our friends, fans, followers?</div><div><br /></div><div>First, we should make a list of our digital assets. What accounts we have, and how we'd like them to be preserved after death. For example, we may want blogs to be preserved or archived, but we may think that our twitter account history is just not worth the effort. We should make sure that our family knows about our online accounts, too, so they know what to expect.</div><div><br /></div><div>Then we appoint a digital executor. This is someone with whom we will have discussed the matter, and who will be responsible for our digital legacy. This does not have to be same person who will execute our will. Then, we create an email account exclusively for this purpose, and set it up as a "backup email account" for our regular <span class="blsp-spelling-error" id="SPELLING_ERROR_3">emai</span>l account. This way, when the executor wants to take over our main email account, they only have to request a password reset be sent to the backup email account. The password to the backup email account should be kept with our will. From there, the executor will be able to gain access to other accounts by requesting a password reset.</div><div><br /></div><img src="" height="1" width="1" alt=""/>Nancy Deschênes Part 2Sean Power, <i>Communilytics</i><div><br /></div><div>Communilytics, is the analysis of how specific information flow through online communities. In his presentation, Sean encouraged us to look a the numbers at a deeper level than simple page-views or the number of followers. But before we can mount a successful campaign, we have to decide what we want the accomplish with that campain: make more money, gain attention/recognition, or improve our reputation. How deeply we want to get involved in the community we're targeting (whether we want to search, join, moderate or run it) will affect which tools and technologies to use. There are 8 social platforms(group/mailing list, forum, real-time chat, social network, blog, wiki, ), with different dynamics, and each can be addressed at different levels of implication. The metrics, then, will depend on the tools used. Each business is different, we know best what's right for our business.</div><div><br /></div><div>He also brought up the AARRR model by Dave McClure: Acquisition, Activation, Retention, Referral, Revenue. </div><div><br /></div><div>He then gave some examples of how information flows through communities. When one person posts a tweet, their followers see it, But if it's a tweet of interest, the recipients may want to let <i>their</i> followers know about it, and re-tweet it. The reach of a message, then is the followers, those who receive the message as a retweet, and those who receive the retweet retweeted, etc. But some people may cross cross social platform - they may put a tweet on Facebook, or send it through email. </div><div><br /></div><div>This presentation is where I also found out the format definition of "going viral". That's when the average number a person tells the message to is greater than one. (So on average, everyone who gets this message will forward it).</div><div><br /></div><div>To get a wide reach, sometimes the best way is to find a few seeds who, because of their respectability and following, will insure a wide, receptive audience for the message. The sites <a href="">Twinfluence</a>, <a href="">tweetreach</a> and <a href="">TwitterAnalyer</a> can help find out the reach of tweeter user and messages.</div><div><br /></div><div><br /></div><img src="" height="1" width="1" alt=""/>Nancy Deschênes Montreal<div>I was lucky enough to find out about this event a few days before the fact. I only participated in the free portion: 5 presentations of 15-20 minutes about different aspects of Web 2.0 and social networks.</div><div><div><br /></div><div>Here are some notes and thoughts about the presentations:</div><div><br /></div><div>Chris Heuer, <i>Serve the Market</i></div><div><br /></div><div.</div><div><br /></div><div style="text-align: center;"><b>Serving the market is leadership, not management</b></div><div><br /></div></div><div>and the parting quesiton, </div><div><br /></div><div style="text-align: center;"><b>is profit the only purpose of business? or are we able to transcend our current thinking?</b></div><div style="text-align: center;"><br /></div><div><br /></div><div.</div><div><br /></div><div>Next: Sean Power, <i>Applied Communilytics In a Nutshell</i></div><img src="" height="1" width="1" alt=""/>Nancy Deschênes most software developers, programming is more than just a way to earn money. What puts a smile on our faces as we think about upcoming tasks? What makes us think about the current problem when we wait in line, while we're driving, or as we fall asleep? Sure, we do it because it has to be done, but often there are much more personal reasons. Where does the satisfaction come from? We are all different, and different things make us tic, but all the cases fit in 3 categories: personal, interpersonal, and global. The fun is in figure out who we are based on what's important to us<br /><br /><span class="Apple-style-span" style="font-size: large;">Personal<br /></span><br /><div><ul><li>Feeling smart: This is a very good feeling, whether it's because we've beat the machine into submission, or because we've found a whole new way to use an old tool. It doesn't mean outsmarting another person, but lining up ideas in a productive way</li><li>Aesthetics: Coming up with a beautiful, elegant design. Simplifying something complicated. Making all the parts fit nicely</li><li>Achievement: Going beyond our abilities, taking it one step further</li><li>Learning: New tools, new ways of thinking. We prefer learning things that change how we thinking about the problem. The more ways we can shape our minds around an issue, the more likely we can come up with an elegant solution; or a solution at all.</li><li>The big "DONE" stamp: when we get to say something is done, out the door, complete. Sometimes we have to compromise - we'd like to tweak this a bit more, or refactor that, but like an artist, talent isn't only knowing how a piece can be made better, but also knowing when to stop.</li><li>Glory: well, okay, that's pushing it a bit, but the recognition of others, whether our peers, customers, or the whole wide world is a very powerful motivation.</li></ul><span class="Apple-style-span" style="font-size: large;">Interpersonal</span></div><div><br /></div><div>We are often seen as solitary workers, but in truth, our job cannot be done totally alone. We value good relations with our coworkers, mentors, employer, and clients.</div><div><br /></div><div><ul><li>Coworkers: we often need to rely on others, whether to show us what they've done, to discuss an idea, or do share the workload. Good relations with them come from listening to ideas, asking question, and pointing out issues in a respectful manner. It works even better when we can be friends with our coworkers, and spend time together outside the work environment, but that's not necessary.</li><li>Relations with the employer can be very productive when we know what's expected of us, what the boundaries are, and when we know we can meet our targets. We much prefer having some freedom in how we attain these objectives, and when we have input in setting the targets. When employees are asked to set the target themselves, they tend to shoot for higher accomplishment, and they are more likely to reach them. This is true not only for software developers, but in pretty much any field.</li><li>Clients (in a consulting or customization setting) are also part of the interpersonal aspect of a developer's life. Some of us hate talking to the client, but for others, knowing the person who will use the product, knowing how they will use it and why, can help us propose solutions they may not have thought of. It may require us to think differently, to use a different language so we can truly connect with the client in terms they understand, but that just keeps our minds nimble.</li></ul></div><div><span class="Apple-style-span" style="font-size: large;">Global<br /></span></div><div><br /></div><div>For some of us, the greater good is what drives our actions. We can make a difference by the work we do, whether it's through software that favors sustainability, the people we're helping, teaching/education we support, the time our software will save thousands of people, the list goes on. We all have causes we support, some more ardently than others, and when our work allows us to promote them, or help them along, we derive even more satisfaction from our efforts. We build a legacy, even if it's all too often anonymously.</div><div><br /></div><div>Clearly, money is not the only reason we develop software. If it were, there would be no Open Source movement.</div><div><br /></div><div>We all feel the pull of these motivations differently. For some, doing good for goodness's sake is plenty; others want recognition. I'm generally motivated mostly by the personal and interpersonal aspects of the work. I value the recognition of my peers more than that of the population at large.</div><div><br /></div><div>What motivates you?</div><div><br /></div><img src="" height="1" width="1" alt=""/>Nancy Deschênes | http://feeds.feedburner.com/TheSofterSideOfSoftwareDevelopment | CC-MAIN-2019-18 | en | refinedweb |
Question 1 :
If the following structure is written to a file using fwrite(), can fread() read it back successfully?
struct emp { char *n; int age; }; struct emp e={"IndiaParinam", 15}; FILE *fp; fwrite(&e, sizeof(e), 1, fp);
Since the structure contain a char pointer while writing the structure to the disk using fwrite() only the value stored in pointer n will get written. so fread() fails to read.
Question 2 :
size of union is size of the longest element in the union
Question 3 :
The elements of union are always accessed using & operator
Question 4 :
Will the following code work?
#include < stdio.h > #include < malloc.h > struct emp { int len; char name[1]; }; int main() { char newname[] = "Rahul"; struct emp *p = (struct emp *) malloc(sizeof(struct emp) -1 + strlen(newname)+1); p->len = strlen(newname); strcpy(p -> name, newname); printf("%d %s\n", p->len, p->name); return 0; }
The program allocates space for the structure with the size adjusted so that the name field can hold the requested name.
Question 5 :
A pointer union CANNOT be created | http://www.indiaparinam.com/c-programming-language-question-answer-structures-unions-enums/yes-no-questions | CC-MAIN-2019-18 | en | refinedweb |
.
Archive-patcher is an open-source project that allows space-efficient patching of zip archives. Many common distribution formats (such as jar and apk) are valid zip archives; archive-patcher works with all of them.
Because the patching process examines each individual file within the input archives, we refer to the process as File-by-File patching and an individual patch generated by that process as a File-by-File patch. Archive-patcher processes almost all zip files, but it is most efficient for zip files created with “standard” tools like PKWARE‘s ‘zip’, Oracle’s ‘jar’, and Google's ‘aapt’.
By design, File-by-File patches are uncompressed. This allows freedom in choosing the best compression algorithms for a given use case. It is usually best to compress the patches for storage or transport.
Note: Archive-patcher does not currently handle ‘zip64’ archives (archives supporting more than 65,535 files or containing files larger than 4GB in size).
Archive-patcher transforms archives into a delta-friendly space to generate and apply a delta. This transformation involves uncompressing the compressed content that has changed, while leaving everything else alone. The patch applier then recompresses the content that has changed to create a perfect binary copy of the original input file. In v1, bsdiff is the delta algorithm used within the delta-friendly space. Much more information on this subject is available in the Appendix.
Diagrams and examples follow. In these examples we will use an old archive and a new archive, each containing 3 files: foo.txt, bar.xml, and baz.lib:
File-by-File v1: Patch Generation Overview Delta-Friendly Delta-Friendly Old Archive Old Blob New Blob New Archive ---------------- ---------------- ---------------- ---------------- | foo.txt | | foo.txt | | foo.txt | | foo.txt | | version 1 | | version 1 | | version 2 | | version 2 | | (compressed) | |(uncompressed)| |(uncompressed)| | (compressed) | |--------------| | | | | |--------------| | bar.xml | | | | | | bar.xml | | version 1 | |--------------| |--------------| | version 2 | |(uncompressed)|--->| bar.xml |--┬--| bar.xml |<---|(uncompressed)| |--------------| | version 1 | | | version 2 | |--------------| | baz.lib | |(uncompressed)| | |(uncompressed)| | baz.lib | | version 1 | |--------------| | |--------------| | version 1 | | (compressed) | | baz.lib | | | baz.lib | | (compressed) | ---------------- | version 1 | | | version 1 | ---------------- | | (compressed) | | | (compressed) | | | ---------------- | ---------------- | v v v ---------------- ---------- ---------------- |Uncompression | | delta | |Recompression | | metadata | ---------- | metadata | ---------------- | ---------------- | v | | ---------------------- | └------------------>| File-by-File v1 |<-------------------┘ | Patch | ----------------------
File-by-File v1: Patch Application Overview Delta-Friendly Delta-Friendly Old Archive Old Blob New Blob New Archive ---------------- ---------------- --------------- ---------------- | foo.txt | | foo.txt | | foo.txt | | foo.txt | | version 1 | | version 1 | | version 2 | | version 2 | | (compressed) | |(uncompressed)| |(uncompressed)| | (compressed) | |--------------| | | | | |--------------| | bar.xml | | | | | | bar.xml | | version 1 | |--------------| |--------------| | version 2 | |(uncompressed)|-┬->| bar.xml | | bar.xml |-┬->|(uncompressed)| |--------------| | | version 1 | | version 2 | | |--------------| | baz.lib | | |(uncompressed)| |(uncompressed)| | | baz.lib | | version 1 | | |--------------| |--------------| | | version 1 | | (compressed) | | | baz.lib | | baz.lib | | | (compressed) | ---------------- | | version 1 | | version 1 | | ---------------- | | (compressed) | | (compressed) | | | ---------------- ---------------- | | | ^ | ---------------- | | ---------- | | ---------------- |Uncompression |-┘ └---->| delta |-----┘ └--|Recompression | | metadata | ---------- | metadata | ---------------- ^ ---------------- ^ | ^ | ---------------------- | └-------------------| File-by-File v1 |--------------------┘ | Patch | ----------------------
The examples above used two simple archives with 3 common files to help explain the process, but there is significantly more nuance in the implementation. The implementation searches for and handles changes of many types, including some trickier edge cases such as a file that changes compression level, becomes compressed or becomes uncompressed, or is renamed without changes.
Files that are only in the new archive are always left alone, and the delta usually encodes them as a literal copy. Files that are only in the old archive are similarly left alone, and the delta usually just discards their bytes completely. And of course, files whose deflate settings cannot be inferred are left alone, since they cannot be recompressed and are therefore required to remain in their existing compressed form.
Note: The v1 implementation does not detect files that are renamed and changed at the same time. This is the domain of similar-file detection, a feature deemed desirable - but not critical - for v1.
The following code snippet illustrates how to generate a patch and compress it with deflate compression. The example in the subsequent section shows how to apply such a patch.
import com.google.archivepatcher.generator.FileByFileV1DeltaGenerator; import com.google.archivepatcher.shared.DefaultDeflateCompatibilityWindow; import java.io.File; import java.io.FileOutputStream; import java.util.zip.Deflater; import java.util.zip.DeflaterOutputStream; /** Generate a patch; args are old file path, new file path, and patch file path. */ public class SamplePatchGenerator { public static void main(String... args) throws Exception { if (!new DefaultDeflateCompatibilityWindow().isCompatible()) { System.err.println("zlib not compatible on this system"); System.exit(-1); } File oldFile = new File(args[0]); // must be a zip archive File newFile = new File(args[1]); // must be a zip archive Deflater compressor = new Deflater(9, true); // to compress the patch try (FileOutputStream patchOut = new FileOutputStream(args[2]); DeflaterOutputStream compressedPatchOut = new DeflaterOutputStream(patchOut, compressor, 32768)) { new FileByFileV1DeltaGenerator().generateDelta(oldFile, newFile, compressedPatchOut); compressedPatchOut.finish(); compressedPatchOut.flush(); } finally { compressor.end(); } } }
The following code snippet illustrates how to apply a patch that was compressed with deflate compression, as in the previous example.
import com.google.archivepatcher.applier.FileByFileV1DeltaApplier; import com.google.archivepatcher.shared.DefaultDeflateCompatibilityWindow; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.util.zip.Inflater; import java.util.zip.InflaterInputStream; /** Apply a patch; args are old file path, patch file path, and new file path. */ public class SamplePatchApplier { public static void main(String... args) throws Exception { if (!new DefaultDeflateCompatibilityWindow().isCompatible()) { System.err.println("zlib not compatible on this system"); System.exit(-1); } File oldFile = new File(args[0]); // must be a zip archive Inflater uncompressor = new Inflater(true); // to uncompress the patch try (FileInputStream compressedPatchIn = new FileInputStream(args[1]); InflaterInputStream patchIn = new InflaterInputStream(compressedPatchIn, uncompressor, 32768); FileOutputStream newFileOut = new FileOutputStream(args[2])) { new FileByFileV1DeltaApplier().applyDelta(oldFile, patchIn, newFileOut); } finally { uncompressor.end(); } } }
Patching software exists primarily to make updating software or data files spatially efficient. This is accomplished by figuring out what has changed between the inputs (usually an old version and a new version of a given file) and transmitting only the changes instead of transmitting the entire file. For example, if we wanted to update a dictionary with one new definition, it's much more efficient to send just the one updated definition than to send along a brand new dictionary! A number of excellent algorithms exist to do just this - diff, bsdiff, xdelta and many more.
In order to generate spatially efficient patches for zip archives, the content within the zip archives needs to be uncompressed. This necessitates recompressing after applying a patch, and this in turn requires knowing the settings that were originally used to compress the data within the zip archive and being able to reproduce them exactly. These three problems are what make patching zip archives a unique challenge, and their solutions are what make archive-patcher interesting. If you'd like to read more about this now, skip down to Interesting Obstacles to Patching Archives.
The v1 patch format is a sequence of bytes described below. Care has been taken to make the format friendly to streaming, so the order of fields in the patch is intended to reflect the order of operations needed to apply the patch. Unless otherwise noted, the following constraints apply:
|------------------------------------------------------| | Versioned Identifier (8 bytes) (UTF-8 text) | Literal: "GFbFv1_0" |------------------------------------------------------| | Flags (4 bytes) (currently unused, but reserved) | |------------------------------------------------------| | Delta-friendly old archive size (8 bytes) (uint64) | |------------------------------------------------------| | Num old archive uncompression ops (4 bytes) (uint32) | |------------------------------------------------------| | Old archive uncompression op 1...n (variable length) | (see definition below) |------------------------------------------------------| | Num new archive recompression ops (4 bytes) (uint32) | |------------------------------------------------------| | New archive recompression op 1...n (variable length) | (see definition below) |------------------------------------------------------| | Num delta descriptor records (4 bytes) (uint32) | |------------------------------------------------------| | Delta descriptor record 1...n (variable legth) | (see definition below) |------------------------------------------------------| | Delta 1...n (variable length) | (see definition below) |------------------------------------------------------|
The number of these entries is determined by the “Num old archive uncompression ops” field previously defined. Each entry consists of an offset (from the beginning of the file) and a number of bytes to uncompress. Important notes:
|------------------------------------------------------| | Offset of first byte to uncompress (8 bytes) (uint64)| |------------------------------------------------------| | Number of bytes to uncompress (8 bytes) (uint64) | |------------------------------------------------------|
The number of these entries is determined by the “Num new archive recompression ops” field previously defined. Like an old archive uncompression op, each entry consists of an offset - but this time from the beginning of the delta-friendly new blob. This is followed by the number of bytes to compress, and finally a compression settings field. Important notes:
|------------------------------------------------------| | Offset of first byte to compress (8 bytes) (uint64) | |------------------------------------------------------| | Number of bytes to compress (8 bytes) (uint64) | |------------------------------------------------------| | Compression settings (4 bytes) | (see definition below) |------------------------------------------------------|
The compression settings define the deflate level (in the range 1 to 9, inclusive), the deflate strategy (in the range 0 to 2, inclusive) and the wrapping mode (wrap or nowrap). The settings are specific to a compatibility window, discussed in the next section in more detail.
In practice almost all entries in zip archives have strategy 0 (the default) and wrapping mode ‘nowrap’. The other strategies are primarily used in-situ, e.g., the compression used within the PNG format; wrapping, on the other hand, is almost exclusively used in gzip operations.
|------------------------------------------------------| | Compatibility window ID (1 byte) (uint8) | (see definition below) |------------------------------------------------------| | Deflate level (1 byte) (uint8) (range: [1,9]) | |------------------------------------------------------| | Deflate strategy (1 bytes) (uint8) (range: [0,2] | |------------------------------------------------------| | Wrap mode (1 bytes) (uint8) (0=wrap, 1=nowrap) | |------------------------------------------------------|
A compatibility window specifies a compression algorithm along with a range of versions and platforms upon which it is known to produce predictable and consistent output. That is, all implementations within a given compatibility window must produce identical output for any identical inputs consisting of bytes to be compressed along with the compression settings (level, strategy, wrapping mode).
In File-by-File v1, there is only one compatibility window defined. It is the default deflate compatibility window, having ID=0 (all other values reserved for future expansion), and it specifies the following configuration:
The default compatibility window is compatible with the following runtime environments based on empirical testing. Other environments may be compatible, but the ones in this table are known to be.
Delta descriptor records are grouped together before any of the actual deltas. In File-by-File v1 there is always exactly one delta, so there is exactly one delta descriptor record followed immediately by the delta data. Conceptually, the descriptor defines input and output regions of the archives along with a delta to be applied to those regions (reading from one, and writing to the other).
In subsequent versions there may be arbitrarily many deltas. When there is more than one delta, all the descriptors are listed in a contiguous block followed by all of the deltas themselves, also in a contiguous block. This allows the patch applier to preprocess the list of all deltas that are going to be applied and allocate resources accordingly. As with the other descriptors, these must be ordered by ascending offset and overlaps are not allowed.
|------------------------------------------------------| | Delta format ID (1 byte) (uint8) | |------------------------------------------------------| | Old delta-friendly region start (8 bytes) (uint64) | |------------------------------------------------------| | Old delta-friendly region length (8 bytes) (uint64) | |------------------------------------------------------| | New delta-friendly region start (8 bytes) (uint64) | |------------------------------------------------------| | New delta-friendly region length (8 bytes) (uint64) | |------------------------------------------------------| | Delta length (8 bytes) (uint64) | |------------------------------------------------------|
Description of the fields within this record are a little more complex than in the other parts of the patch:
Problem: Zip files make patching hard because compression obscures the changes. Deflate, the compression algorithm used most widely in zip archives, uses a 32k “sliding window” to compress, carrying state with it as it goes. Because state is carried along, even small changes to the data that is being compressed can result in drastic changes to the bytes that are output - even if the size remains similar. If you change the definition of ‘aardvark’ in our imaginary dictionary (from back in the Background section) and zip both the old and new copies, the resulting zip files will be about the same size, but will have very different bytes. If you try to generate a patch between the two zip files with the same algorithm you used before (e.g., bsdiff) you'll find that the resulting patch file is much, much larger - probably about the same size of one of the zip files. This is because the files are too dissimilar to express any changes succinctly, so the patching algorithm ends up having to just embed a copy of almost the entire file.
Solution: Archive-patcher transforms the input archives into what we refer to as delta-friendly space where changed files are stored uncompressed, allowing diffing algorithms like bsdiff to function far more effectively.
Note: There are techniques that can be applied to deflate to isolate changes and stop them from causing the entire output to be different, such those used in rsync-friendly gzip. However, zip archives created with such techniques are uncommon - and tend to be slightly larger in size.
Problem: In order for the generated patch to be correct, we need to know the original deflate settings that were used for any changed content that we plan to uncompress during the transformation to the delta-friendly space. This is necessary so that the patch applier can recompress that changed content after applying the delta, such that the resulting archive is exactly the same as the input to the patch generator. The deflate settings we care about are the level, strategy, and wrap mode.
Solution: Archive-patcher iteratively recompresses each piece of changed content with different deflate settings, looking for a perfect match. The search is ordered based on empirical data and one of the first 3 guesses is extremely likely to succeed. Because deflate has a stateful and small sliding window, mismatches are quickly identified and discarded. If a match is found, the corresponding settings are added to the patch stream and the content is uncompressed in-place as previously described; if a match is not found then the content is left compressed (because we lack any way to tell the patch applier how to recompress it later).
Note: While it is possible to change other settings for deflate (like the size of its sliding window), in practice this is almost never done. Content that has been compressed with other settings changed will be left compressed during patch generation.
Problem: The patch applier needs to know that it can reproduce deflate output in exactly the same way as the patch generator did. If this is not possible, patching will fail. The biggest risk is that the patch applier's implementation of deflate differs in some way from that of the patch generator that detected the deflate settings. Any deviation will cause the output to diverge from the original input to the patch generator. Archive-patcher relies on the java.util.zip package which in turn wraps a copy of zlib that ships with the JRE. It is this version of zlib that provides the implementation of deflate.
Solution: Archive-patcher contains a ~9000 byte corpus of text that produces a unique output for every possible combination of deflate settings that are exposed through the java.util.zip interface (level, strategy, and wrapping mode). These outputs are digested to produce “fingerprints” for each combination of deflate settings on a given version of the zlib library; these fingerprints are then hard-coded into the application. The patch applier checks the local zlib implementation's suitability by repeating the process, deflating the corpus with each combination of java.util.zip settings and digesting the results, then checks that the resulting fingerprints match the hard-coded values.
Note: At the time of this writing (September, 2016), all zlib versions since 1.2.0.4 (dated 10 August 2003) have identical fingerprints. This includes every version of Sun/Oracle Java from 1.6.0 onwards on x86 and x86_64 as well as all versions of the Android Open Source Project from 4.0 onward on x86, arm32 and arm64. Other platforms may also be compatible but have not been tested.
Note: This solution is somewhat brittle, but is demonstrably suitable to cover 13 years of zlib updates. Compatibility may be extended in a future version by bundling specific versions of zlib with the application to avoid a dependency upon the zlib in the JRE as necessary.
The File-by-File v1 patching process dramatically improves the spatial efficiency of patches for zip archives, but there are many improvements that can still be made. Here are a few of the more obvious ones that did not make it into v1, but are good candidates for inclusion into later versions:
Major software contributions, in alphabetical order:
Additionally, we wish to acknowledge the following, also in alphabetical order: | https://android.googlesource.com/platform/external/archive-patcher/+/a555f6b53d6e50635c790257e18a0da8e504405a/ | CC-MAIN-2019-18 | en | refinedweb |
Report which keyed scalars fail to accumulate due to running out of keys
RESOLVED FIXED in Firefox 65
Status
()
P3
normal
People
(Reporter: chutten, Assigned: smurfd, Mentored)
Tracking
Firefox Tracking Flags
(firefox65 fixed)
Details
(Whiteboard: [good next bug][lang=c++])
Attachments
(2 attachments, 10 obsolete attachments)
It is possible that a keyed scalar could hit a limit we imposed in the number of keys allowed (defaults to 100[1]). This limit might not be hit in development or test, and could affect the integrity of the data we collect. We should report which scalars hit those limits and how often. I'm thinking a keyed uint scalar, telemetry.keyed_scalars_exceed_limit, where keys are scalar names. This is sorta-kinda a Telemetry Health measure in the category of Client Integrity. Since Scalar names are public already[2] this shouldn't have disclosure or publication problems... but that'll be covered more in the Data Collection Review. [1]: [2]:
To help Mozilla out with this bug, here's the steps: 0) Comment here on the bug that you want to volunteer to help. I (or someone else) will assign it to you. 1) Download and build the Firefox source code: - If you have any problems, please ask on IRC () in the #introduction channel. They're there to help you get started. - You can also read the Developer Guide, which has answers to most development questions: 2) Start working on this bug. You will be adding a new Keyed uint Scalar, so you'll want to read and probably also. - If you have any problems with this bug, please comment on this bug and set the needinfo flag for me. Also, you can find me and my teammates on the #telemetry channel on IRC () most hours of most days. 3) Build your change with `mach build` and test your change with `mach test toolkit/components/telemetry/tests/`. Also check your changes for adherence to our style guidelines by using `mach lint` 4) Submit the patch (including an automated test, check the Adding a New Probe doc for how to write one) for review. Mark me as a reviewer so I'll get an email to come look at your code. - Here's the guide: - We will also need Data Collection Review as we'll be adding a measurement to Firefox:.
Mentor: chutten
Priority: -- → P3
Whiteboard: [good next bug][lang=c++]
I want to contribute.Please give me the references related to this bug.
I'm not sure what references you mean?
Flags: needinfo?(akshay2gud)
Sir it will be my first contribution to an organization. So I don't know how to start on this bug. Sir if any kind of documentation is available please provide me the link. It will be a great help.
Flags: needinfo?(akshay2gud)
Work on this bug will involve adding a keyed uint scalar, so you'll probably want to read the "adding a new probe" guide[1] to learn about adding new data measurements to Firefox, and the "scalars" documentation[2] so you know what a scalar is and what I mean by "keyed uint". I've provided a lot of additional information in Comment#1 that might be helpful to read through. [1]: [2]:
Thank You Sir. I will start working on it.
Assignee: nobody → joberts.ff
Status: NEW → ASSIGNED
jason, do you need any additional help getting started with this one?
Flags: needinfo?(joberts.ff)
Unassigning due to activity. Jason, if you'd like to pick this back up just let me know.
Flags: needinfo?(joberts.ff)
Assignee: joberts.ff → nobody
Status: ASSIGNED → NEW
I have a few questions: For the Scalars.yaml entry, when should it expire, and what process(es) would it fall under? For the implementation, how do I get the string of the name of the scalar, should I call it everywhere a TooManyKeys error might occur, and what value should I use (a counter)? I guess the test(s) would involve making sure that telemetry notification fires for every method I add it to. Is there a method for registering them (like xpcshell) so the test suite knows it exists? Thanks in advance
Flags: needinfo?(chutten)
These are all excellent questions. It shouldn't expire, since keyed scalars will continue to exist forever. It should only accumulate on the "main" process as it's the one that actually accumulates to scalars (the other processes just send arrays of instructions over IPC). It checks the limit here: [1] Be sure to list my email in the alert_emails field, as it'll need an ongoing monitor. I think you'll be implementing this in C++, so you'll be able to use the ScalarID enum instead of the string (like so: [2]). You can call "ScalarAdd" on it so you don't have to worry about storage (we'll take care of it) The tests will involve accumulating too many keys to a test keyed scalar and then checking that your new scalar counts the right number of overflows. You can probably add your checks to the test_keyed_max_keys[3] test. [1]: [2]: [3]:
Flags: needinfo?(chutten)
Thanks! I'll take this.
Assignee: nobody → me
How's it going? Anything I can help with?
Flags: needinfo?(me)
(In reply to Chris H-C :chutten from comment #12) > How's it going? Anything I can help with? My bad, I did most of the work a few days after taking the bug, but I've been busy with work and am currently on vacation and never got around to submitting the patch. I'll try and push it around next weekend.
Flags: needinfo?(me)
I don't have the time to see this through, so I'll let it go (and maybe come back to it another time)
Assignee: me → nobody
No problem!
I'd like to give this a go...
Assignee: nobody → smurfd
Mentor: jrediger
Hey Nicklas, Chris is currently out of office. I'll cover for him. I assigned you the bug. If you have any questions ping me here or on IRC (janerik).
Thanks Jan-erik So i have edited the Scalar.yaml file and added an entry under telemetry: following the same structure like the rest with the data ive seen mentioned in the comments above... like kind: uint, expires: "never" and so on. added a ScalarAdd in this if-statement, in TelemetryScalar.cpp if (mScalarKeys.Count() >= mMaximumNumberOfKeys) { ScalarAdd(mozilla::Telemetry::ScalarID::TELEMETRY_KEYED_SCALARS_EXCEED_LIMIT, 1); Then when i got it to build i thought id try to run the tests, just to see what the output would be. It started looping it seemed, complaining about the process still active.. not sure though if it has todo with somehting i did now or before :) Anywho, guess you can do something like subtracting mMaximumNumberOfKeys - mScalarKeys.Count() in the above if-statement, when you know it has exceeded the max number to get the number of scalarkeys over the max. Then what? i feel like im missing some major part of it :)
Flags: needinfo?(jrediger)
You're already on the right track there. The current unmodified tests should run fine, especially if you haven't touched them at all. How did you run the tests? The subtraction will not be needed at all, all we want to record is that an overflow happened (actually, |mMaximumNumberOfKeys - mScalarKeys.Count| should always be 0 if the above if condition holds, as we don't store more if the limit is reached once). Unfortunately the solution will be a bit more involved, as we want to record the overflow per scalar name, but we currently don't have a way to actually get to the scalar name from inside the |KeyedScalar| object. I'll free some time to come up with a rough solution there and get back to you ASAP.
Flags: needinfo?(jrediger)
So I took another look and this indeed needs a bit more design first to decide where and how we get the scalar name back so we can record the overflow. I'll hand this over to Chris again, he'll be back next week and can take it from there. Unfortunately that means you're currently blocked on this, but maybe we can find something else for you to work on.
Thanks for having a look i ran my test like : mach test toolkit/components/telemetry/tests/ okey its not a problem, i guess once he have a look at this he will comment and we take it from there...
Flags: needinfo?(chutten)
Sounds like we need the name in the KeyedScalar object. If we replace the KeyedScalar's mScalarKind with a reference to the BaseScalarInfo we can avoid storage costs -and- get a reference to the scalar name.
Flags: needinfo?(chutten)
Thanks Chris, think i follow. Looking into it.
How long does it usually take to run: mach test toolkit/components/telemetry/tests/ ? It takes over 1h for me and ends in basicly saying some of the tests fail due to timeout. Thats even if i have not added my code... Anywho i Think i have been able to get the name of the scalar, that breaches mMaximumNumberOfKeys in this if-statement. Can i somehow trigger above if-statement without running the test-suites? (like from running ./mach run, and maby via BrowserConsole do some magic?) or should i create a specific test to trigger it? just to see that my printf-debugging gets hit and shows me the name ;)
Flags: needinfo?(chutten)
mach test toolkit/components/telemetry/tests/ runs a lot of tests, some of them are mochitests[1] which require focus be set on the window under test, which is irritating and may lead to timeouts/failures if you try and use the machine while running them. It's really not the greatest, but it works well in continuous integration :S I recommend trying `mach test toolkit/components/telemetry/tests/unit/` which will run only the unit tests. It'll run them in parallel, and needs only the terminal window. On my machine (which is decently nice, but some years old now) they take 3 minutes or so. Once you are writing your own test, say in test_TelemetryScalars.js, you can specify just that file to mach test and it'll run nice and quickly (and give you more detailed output). As for TelemetryScalar.cpp#871, that indeed is the condition we're interested in. You might not have luck recording the failure in that function, though, as I'm pretty sure we hold the lock when that function is called. You might have to detect that the return code is ScalarResult::TooManyKeys further out after we release the lock (maybe in [2] where we log it to the console) because we can't acquire the lock to accumulate while we hold the lock trying to accumulate :) Or maybe it'll be fine there, I'm not sure. I say write a test to find out :) [1]: [2]:
Flags: needinfo?(chutten)
Hey Chris hm still issue with running `mach test toolkit/components/telemetry/tests/unit/` it comes to toolkit/components/telemetry/tests/unit/test_PingSender.js but then it seems to go into a loop. First i thought it was because i had added these two lines to my .mozconfig file. MOZ_TELEMETRY_REPORTING=1 MOZILLA_OFFICIAL=1 Removed them and rebuilt, still ... i also have the same issue with a freshly grabbed Repo mozilla-central from hg.mozilla.org i CAN run a test on the single test_TelemetryScalars.js and i see my function being run! In the other bug i am looking at i was asked to submit a Progress patch, for the Mentor to more easily be able to help. Hope that's allright here aswell!?
Please see my previous comment
Flags: needinfo?(chutten)
Comment on attachment 9000068 [details] [diff] [review] progress-patch-bug1451813.patch Review of attachment 9000068 [details] [diff] [review]: ----------------------------------------------------------------- This is good work so far! I have some comments, most of which are about whitespace that crept into this WIP patch and probably would be cleaned up before review anyway so I feel a little weird even mentioning them :S test_PingSender.js runs fine in automation, so I wonder what could be different for you locally that's causing this oddity. For now proceed by just running the necessary test file and we can look into why test_PingSender.js loops as a problem separate from developing this change. Though maybe it has to do with the lock. The lock comment's the most relevant one here. I think we need to be careful here with threading, and we can do that by moving our check just a couple of levels up. What do you think about that approach? ::: toolkit/components/telemetry/Scalars.yaml @@ +1362,5 @@ > + keyed_scalars_exceed_limit: > + bug_numbers: > + - 1451813 > + description: > > + Checking if the max number of Scalars is reached. For keyed measures like this one it is helpful if the description mentions what it is keyed by. So this could be "The number of times keyed scalars exceeded the number of keys limit, keyed by scalar name" @@ +1366,5 @@ > + Checking if the max number of Scalars is reached. > + expires: "never" > + kind: uint > + keyed: true > + notification_emails: You should remove the end-of-line whitespace. @@ +1367,5 @@ > + expires: "never" > + kind: uint > + keyed: true > + notification_emails: > + - smurfd@gmail.com Use my email address (chutten@mozilla.com) for the notification_emails here. I'll take care of questions and monitoring. @@ +1934,4 @@ > record_in_processes: > - 'content' > > + This extra whitespace appears to have been added accidentally? ::: toolkit/components/telemetry/TelemetryScalar.cpp @@ +728,5 @@ > ScalarKeysMapType mScalarKeys; > const uint32_t mScalarKind; > uint32_t mMaximumNumberOfKeys; > + const BaseScalarInfo *info; > + Excess whitespace we can omit. @@ +887,4 @@ > } > > if (mScalarKeys.Count() >= mMaximumNumberOfKeys) { > + static StaticMutex gTelemetryScalarsMutex; We might already hold the lock at this point (see for instance [1] which calls [2] which calls here). We might need to catch this case further out (maybe store the return values around [1] and then check to see if it was TooManyKeys), or to pass the lock down this far so we can use it (we'd want to add proof of locking to all the KeyedScalar methods then, which would make this a larger patch). [1]: [2]: @@ +890,5 @@ > + static StaticMutex gTelemetryScalarsMutex; > + StaticMutexAutoLock locker(gTelemetryScalarsMutex); > + nsAutoCString sName = GetScalarNameByEnum(locker, mozilla::Telemetry::ScalarID::TELEMETRY_KEYED_SCALARS_EXCEED_LIMIT); > + > +// printf("sName = %s\n", ToNewCString(sName)); You could use sName.get() to get a pointer to the underlying buffer. ::: toolkit/components/telemetry/TelemetryScalar.h @@ +9,4 @@ > #include "nsTArray.h" > #include "mozilla/TelemetryScalarEnums.h" > #include "mozilla/TelemetryProcessEnums.h" > +#include "mozilla/Telemetry.h" Not necessary to include this here
Flags: needinfo?(chutten)
I have addressed your white space points, and no no, please mention them :) Changed to your email and changed description in Scalars.yaml Went ahead and modified the Scalars functions in TelemetryScalar.cpp to have the lock argument. Feels like that is the way to go, since its just within TelemetryScalar.cpp they are used. That way i can reuse the lock from earlier stages in the code.. Hopefully i have not generalized it too much ... Jup one of the reasons why atleast the line-break white space was in there, was because i was looking on another bug at the same time. So i had to cut and paste some after generating the patch. Tried to get around that by grabbing a fresh m-c mercurial repo. Add bookmarks for each thing im looking at and commit to each bookmark during my progress, then as an example : 'hg update --check bookmark1' to switch to one bookmark, do work, commit, hg update --check bookmark2 ... do work, commit, 'hg update --check main' as is my untouched bookmark ...That seems to work quite well. Until i was now going to grab this patch. Sorry, not sure where to ask this, have gone through a ton of sites and MDN without any luck finding the answer have like 7 commits to one bookmark and would like to have that in one patch. How?! Found this as an example Sadly no mention on howto generate a patch after your commits :( hg diff > file.patch gives me only the uncommitted data hg export -o file.patch gives me my last commit I could ofcourse go to each changeset and 'hg export -o file1.patch' to get 7 patces... There Must be some better way :)
Flags: needinfo?(chutten)
New contributors are encouraged to use Phabricator for code review. You can find the user guide here: There should be people on the IRC channels #introduction and #developers who will be able to help you with further questions. We have an IRC guide here:
Flags: needinfo?(chutten)
Sorted out my Mercurial issues via help from #developers. Sorry for bothering you with that kindof stuff... Adding a progress patch where i have added the StaticMutexAutoLock to the KeyedScalar functions. Not sure about the name we get though, if its right or not. Intentionally left a printf, to see the name it gets. telemetry.keyed_scalars_exceed_limit , Is that right?!
Attachment #9000068 - Attachment is obsolete: true
Flags: needinfo?(chutten)
Comment on attachment 9004111 [details] [diff] [review] Bug1451813_180827_0043.patch Review of attachment 9004111 [details] [diff] [review]: ----------------------------------------------------------------- Looking good. The scalar name is very descriptive. I have two notes:. 2) We can save ourselves some effort by storing the ScalarInfo inside the KeyedScalar. We construct KeyedScalars here[1], passing in the scalar's kind. Instead of passing just the kind, we could pass a pointer to the info and then we'd still have the kind (mInfo->kind) but we'd also have easy access to the name (mInfo->name()) whenever we wanted. This then makes later changes to KeyedScalars easy as they'll always have a pointer to their info handy. (and it won't take much more space than storing the kind). That being said, your existing solution is perfectly fine. It's not wasteful and it'll be very quick once we get the string types correct. If you'd prefer to keep it the way it is, I'm okay with that, too. But then we should probably move the utility function out of KeyedScalar:: and into internal_ next to internal_GetEnumByScalarName [1]: ::: toolkit/components/telemetry/TelemetryScalar.cpp @@ +696,4 @@ > > // Set, Add and SetMaximum functions as described in the Telemetry IDL. > // These methods implicitly instantiate a Scalar[*] for each key. > + ScalarResult SetValue(const nsAString& aKey, nsIVariant* aValue, const StaticMutexAutoLock& locker); Convention in this file is to have the locker be the first parameter. Could you reorder these to match? @@ +731,3 @@ > }; > > +nsAutoCString Returning nsAutoCStrings is not really something done in the codebase. (I apologize. Strings in mozilla-central are hard. I have to look them up every time.) In this case the pointer returned from ScalarInfo::name is a pointer to a buffer that will outlive the string we want to return from this function. Since the string can _depend_ on the buffer, what we likely want is a const nsDependentCString. `return nsDependentCString(mInfo.name());` should be fine in this case. @@ +732,5 @@ > > +nsAutoCString > +KeyedScalar::GetScalarNameByEnum(const StaticMutexAutoLock& lock, mozilla::Telemetry::ScalarID aId) > +{ > + nsAutoCString kName; `k` is the prefix for static constants. This can just be `name` @@ +735,5 @@ > +{ > + nsAutoCString kName; > + > + ScalarKey uniqueId{static_cast<uint32_t>(aId), false}; > + const BaseScalarInfo& mInfo = internal_GetScalarInfo(lock, uniqueId); 'm' is the prefix for object members. Local variables have no prefix, so this should be something closer to `info`.
Oh, and no worries about mercurial stuff! I only wish I could be more helpful with those (I use git through an adaptor, so my mercurial knowledge is next-to-none)
Flags: needinfo?(chutten)
(In reply to Chris H-C :chutten from comment #32) >. Have addressed everything you mentioned, accept the above. Do you mean that, If we reach mMaximumNumberOfKeys [1] We should get the scalar name, and add values to this new scalar. so like scalar->AddValue , but if we add more than 100 that would itself hit [1] and we are looping. So we check if we reach maximum number of scalars for this new scalar.. then we send a Warning and dont add it or? [1]
Flags: needinfo?(chutten)
Correct. If we have more than 100 keyed histograms with more than 100keys, our new "telemetry.keyed_scalars_exceed_limit" scalar will also exceed its key limit... which will call into the code to handle keys exceeding the limit which will try to add the key which may loop forever. (or at least it'll be confusing). You can warn if you'd like, or we can just let the fact that "telemetry.keyed_scalars_exceed_limit" has 100 keys act as an indicator that we've reached the limit and that other keyed histograms may have exceeded the limit but we were unable to record them.
Flags: needinfo?(chutten)
hehe not sure if i got more sure or more confused. After this : if (mScalarKeys.Count() >= mMaximumNumberOfKeys) { nsDependentCString sName = nsDependentCString(GetScalarNameByEnum(locker, mozilla::Telemetry::ScalarID::TELEMETRY_KEYED_SCALARS_EXCEED_LIMIT)); we then do something like : ScalarKeysMapType mExceedScalarKeys; ScalarBase* scalarExceed = nullptr; mExceedScalarKeys.Get(sName, &scalarExceed); scalarExceed->AddValue() // How Do we get the value to add? if (mExceedScalarKeys.Count() >= mMaximumNumberOfKeys) { return ScalarResult::TooManyKeys; }
Flags: needinfo?(chutten)
Well, we want if(mScalarKeys.Count() >= mMaximumNumberOfKeys && {nameOfKeyedScalar != "telemetry.keyed_scalars_exceed_limit"})
Flags: needinfo?(chutten)
Just fyi. Have the last couple of days figured out why, when i tried to run 'mach test toolkit/components/telemetry/tests/unit/' it hanged for me. It has something todo with that i am running a Linux distribution not supported by Bootstrapping, so im guessing im missing some package installed or configuration made by the bootstrap. That causes me not to be able to create a fake webserver
Hm, that's odd. Well, luckily I don't think the Scalar tests need pingserver, and we can run your changes on try[1] before we land them to make sure the tests that -do- need it are still running okay. Are you okay to keep going with this? [1]:
I have a Ubuntu VM i can run it in, so its allright. Yeah i think i can keep going. If we do something like? if (mScalarKeys.Count() >= mMaximumNumberOfKeys && nsDependentCString(GetScalarNameByEnum(locker, mozilla::Telemetry::ScalarID::TELEMETRY_KEYED_SCALARS_EXCEED_LIMIT)) != "telemetry.keyed_scalar_exceed_limit") { // here we add data to the scalar something like i mentioned in Comment 36? } Or is it enough to just return ScalarResult::TooManyKeys?
I think there's a `.EqualsLiteral()` method on ns*String types that would be useful here, but yes.
if (mScalarKeys.Count() >= mMaximumNumberOfKeys && !nsDependentCString(GetScalarNameByEnum(locker, mozilla::Telemetry::ScalarID::TELEMETRY_KEYED_SCALARS_EXCEED_LIMIT)).EqualsLiteral("telemetry.keyed_scalar_exceed_limit")) { ScalarBase* scalarExceed = nullptr; mScalarKeys.Get(nsDependentCString(GetScalarNameByEnum(locker, mozilla::Telemetry::ScalarID::TELEMETRY_KEYED_SCALARS_EXCEED_LIMIT)), &scalarExceed); // Get the scalar value. nsCOMPtr<nsIVariant> scalarExceedValue = nullptr; scalar->GetValue(scalarExceedValue); scalarExceed->AddValue(scalarExceedValue); return ScalarResult::TooManyKeys; } ------- I did like that and it compiles all fine. Yet a unmodified test_TelemetryScalars.js fails for some reason -- 0:09.12 INFO (xpcshell/head.js) | test test_keyed_max_keys pending (2) 0:09.15 pid:29044 ExceptionHandler::GenerateDump cloned child 29065 0:09.15 pid:29044 ExceptionHandler::SendContinueSignalToChild sent continue signal to child 0:09.15 pid:29044 ExceptionHandler::WaitForContinueSignal waiting for continue signal... 0:09.31 TEST_END: Test FAIL, expected PASS. Subtests passed 71/71. Unexpected 0 - xpcshell return code: -4 -- My thinking is that we need to get the value from 'scalar' to add to our new Exceeded scalar. Right?
Flags: needinfo?(chutten)
ExceptionHandler means the test crashed. My guess is that either scalarExceedValue or scalar is null... or, since scalarExceed is a keyed scalar you need to provide a key as part of the AddValue call. Oh, and !nsDependentCString(GetScalarNameByEnum(locker, mozilla::Telemetry::ScalarID::TELEMETRY_KEYED_SCALARS_EXCEED_LIMIT)).EqualsLiteral("telemetry.keyed_scalar_exceed_limit") is an interesting condition. If I'm not mistaken you might be checking that telemetry.keyed_scalar_exceed_limit != telemetry.keyed_scalar_exceed_limit I think you want to check if the name of the keyed scalar who is being asked to add a key (`this`) happens to EqualsLiteral(...) I might be able to help more if you post your in-progress patch. We'll get this!
Flags: needinfo?(chutten)
Ofcourse. I agree it must have to do with 'scalar' being empty or something similar... Tried to address both your points. also tried after i had exported this patch to add 'scalar = internal_ScalarAllocate(mScalarKind);' above the if-statement, and comment away the scalar = *aRet; yes i have poked some at the test, but just tried to re-test with the original test_TelemetryScalars.js But no luck.. Please have a look
Attachment #9004111 - Attachment is obsolete: true
Flags: needinfo?(chutten)
Comment on attachment 9011580 [details] [diff] [review] Bug1451813_180924_2120.patch Review of attachment 9011580 [details] [diff] [review]: ----------------------------------------------------------------- ::: toolkit/components/telemetry/TelemetryScalar.cpp @@ +883,4 @@ > return ScalarResult::Ok; > } > > + if (mScalarKeys.Count() >= mMaximumNumberOfKeys && !aKey.EqualsLiteral("telemetry.keyed_scalar_exceed_limit")) { There's a typo here, I think it's "keyed_scalars_exceed_limit" (notice the plural "scalars") @@ +885,5 @@ > > + if (mScalarKeys.Count() >= mMaximumNumberOfKeys && !aKey.EqualsLiteral("telemetry.keyed_scalar_exceed_limit")) { > + ScalarBase* scalarExceed = nullptr; > + nsDependentCString sName = nsDependentCString(GetScalarNameByEnum(locker, mozilla::Telemetry::ScalarID::TELEMETRY_KEYED_SCALARS_EXCEED_LIMIT)); > + mScalarKeys.Get(sName, &scalarExceed); Instead of getting the name and using that to get scalarExceed, could you use internal_GetKeyedScalarByEnum directly? Then with the KeyedScalar for telemetry.keyed_scalars_exceed_limit in scalarExceed we can do a simpler scalarExceed->AddValue(mScalarInfo.name(), 1); And that'll make it so telemetry.keyed_scalars_exceed_limit will have a key whose name is the keyed scalar who exceeded the limit, and whose value is 1 more than it was just a moment ago. Is this making sense?
Flags: needinfo?(chutten)
Yeah that makes sense. Sadly though, i cant seem to use the internal_GetKeyedScalarByEnum from within KeyedScalar::GetScalarForKey error: use of undeclared identifier 'internal_GetKeyedScalarByEnum' nsresult rv = internal_GetKeyedScalarByEnum(lock, uniqueId, aProcessOverride, &scalar); I only can see it being used in either internal_ and TelemetryScalar:: functions. Seems however like we suspected that it is the 'scalar' that is null causing testfail. ill keep digging though...
It's probably because it's further down the file, so the compiler doesn't know it exists at that point. We can put a forward declaration[1] of the function just above the function where we want to use it to promise to the compiler that there is a function that exists like that, it just needs to keep reading to find it. [1]:
Ahh that i should have figured out. Progress! It became like below. Hm realize that we dont need the GetScalarNameByEnum function. Should i keep it or remove?; nsresult rv = internal_GetKeyedScalarByEnum(locker, uniqueId, aProcessOverride, &scalarExceed); if (NS_FAILED(rv)) { return ScalarResult::InvalidType; //There are probably some better return value to use? } scalarExceed->AddValue(locker, NS_ConvertUTF8toUTF16(mScalarInfo.name()), 1); ... On to the testing. in test_TelemetryScalars.js i tried to do the following First i tried to use KEYED_UINT_SCALAR and do the below, then it would say about 100 times that: Keyed scalars can not have more than 100 keys. With the below it would only say that once. And i get that(i think), the KEYED_UINT_SCALAR has already added 99 keys. So when asked to add 99 more, it will complain (I mean, the test passes but it would write 99 INFO rows) Why it would only say it once with KEYED_EXCEED_SCALAR is because we brake it off, right? // Generate the names for the exceeded keys let keyNamesSet2 = new Set(); for (let k = 0; k < 100; k++) { keyNamesSet2.add("key2_" + k); } // Add 100 keys to an histogram and set their initial value. valueToSet = 100; keyNamesSet2.forEach(keyName => { Telemetry.keyedScalarSet(KEYED_EXCEED_SCALAR, keyName, valueToSet++); });
Flags: needinfo?(chutten)
It probably depends on precisely how you implemented the code. It should still print the warning about too many keys the same number of times with and without your changes. If it did anything different that could be confusing... or it might even be an indication that there's something wrong. Maybe throw your work up in another patch so I can take a look at the whole thing?
Flags: needinfo?(chutten)
QA Contact: gfritzsche
ofcourse
Attachment #9011580 - Attachment is obsolete: true
Comment on attachment 9014888 [details] [diff] [review] Bug1451813_181004_2350.patch Review of attachment 9014888 [details] [diff] [review]: ----------------------------------------------------------------- Seems as though we only have one thing to work out in the main code and then it's just cleanup and tests. ::: toolkit/components/telemetry/Scalars.yaml @@ +1577,5 @@ > + release_channel_collection: opt-out > + record_in_processes: > + - 'main' > + > + keyed_scalars_exceed_limit: We only need the one definition, I think. These appear to be duplicated. ::: toolkit/components/telemetry/TelemetryScalar.cpp @@ +689,5 @@ > public: > typedef mozilla::Pair<nsCString, nsCOMPtr<nsIVariant>> KeyValuePair; > > + explicit KeyedScalar(const BaseScalarInfo& info) > + : mScalarKind(info.kind) Now that we're passing in info, we can store it as mScalarInfo (instead of storing just the kind). That'll make mScalarInfo already have the name of the Scalar. (( We'll have to tidy up anyone who uses mScalarKind to instead use mScalarInfo.kind )) @@ +723,5 @@ > > private: > typedef nsClassHashtable<nsCStringHashKey, ScalarBase> ScalarKeysMapType; > + const nsDependentCString > + GetScalarNameByEnum(const StaticMutexAutoLock& lock, mozilla::Telemetry::ScalarID aId); Yup, it seems as though this is no longer used in the patch and can be omitted. @@ +893,5 @@ > + if (mScalarKeys.Count() >= mMaximumNumberOfKeys && !aKey.EqualsLiteral("telemetry.keyed_scalars_exceed_limit")) { > +); We'll be able to remove this. @@ +895,5 @@ > + > +; Can just call it `process`. I don't think we're overriding the process here, just specifying it. @@ +902,5 @@ > + if (NS_FAILED(rv)) { > + return ScalarResult::InvalidType; > + } > + > + scalarExceed->AddValue(locker, NS_ConvertUTF8toUTF16(mScalarInfo.name()), 1); This will be the name "telemetry.keyed_scalars_exceed_limit" instead of being the name of the KeyedScalar our users are trying to accumulate too many keys to. ::: toolkit/components/telemetry/tests/unit/test_TelemetryScalars.js @@ +535,5 @@ > Assert.equal(keyedScalars[KEYED_UINT_SCALAR][keyName], expectedValue++, > "The key must contain the expected value."); > }); > + > + // Generate the names for the exceeded keys First thing we should do is check that (KEYED_UINT_SCALAR in keyedScalars[KEYED_EXCEED_SCALAR]) (because we accumulated 101 keys to it). @@ +546,5 @@ > + valueToSet = 0; > + keyNamesSet2.forEach(keyName => { > + Telemetry.keyedScalarSet(KEYED_EXCEED_SCALAR, keyName, valueToSet++); > + }); > + After this we should assert that KEYED_EXCEED_SCALAR has exactly 100 keys. (it should be missing the last of keyNamesSet2's keys because it already has KEYED_UINT_SCALAR when we started). This will test that we don't infinitely loop when we run out of space on KEYED_EXCEED_SCALAR.
QA Contact: gfritzsche
With some luck this should do the trick, and hopefully no excessive white spaces :)
Flags: needinfo?(chutten)
Attachment #9014888 - Attachment is obsolete: true
Flags: needinfo?(chutten)
Comment on attachment 9014963 [details] [diff] [review] Bug1451813_181005_2358.patch Review of attachment 9014963 [details] [diff] [review]: ----------------------------------------------------------------- Really close, for sure. ::: toolkit/components/telemetry/TelemetryScalar.cpp @@ +688,4 @@ > public: > typedef mozilla::Pair<nsCString, nsCOMPtr<nsIVariant>> KeyValuePair; > > + explicit KeyedScalar(const BaseScalarInfo& info) Alas, there's a bit of whitespace here. @@ +883,5 @@ > + ProcessID process = ProcessID::Parent; > + nsresult rv = internal_GetKeyedScalarByEnum(locker, uniqueId, process, &scalarExceed); > + > + if (NS_FAILED(rv)) { > + return ScalarResult::InvalidType; Let's go with ScalarResult::TooManyKeys for this one, too. @@ +886,5 @@ > + if (NS_FAILED(rv)) { > + return ScalarResult::InvalidType; > + } > + > + scalarExceed->AddValue(locker, NS_ConvertUTF8toUTF16("telemetry.keyed_scalars_exceed_limit"), 1); Oh, we don't want it this way. We want the key to be mScalarInfo.name(). We want to record the name of the keyed scalar that wants to have too many keys. ::: toolkit/components/telemetry/tests/unit/test_TelemetryScalars.js @@ +538,5 @@ > + > + // Check that KEYED_EXCEED_SCALAR is in keyedScalars > + Assert.ok((KEYED_EXCEED_SCALAR in keyedScalars), > + "We have exceeded maximum number of Keys."); > + We also want to ensure the KEYED_UINT_SCALAR is in keyedScalars[KEYED_EXCEED_SCALAR] and that its value is 1 to make sure we recorded that KEYED_UINT_SCALAR tried to record 1 too many keys.
Whitespaces should be called Ghostspaces(TM pending Haha) they keep appearing ;). Fixed the code part. For the test to work, like i think we want, i had to remove these two lines // Telemetry.keyedScalarSet(KEYED_UINT_SCALAR, LAST_KEY_NAME, 1); // Telemetry.keyedScalarSetMaximum(KEYED_UINT_SCALAR, LAST_KEY_NAME, 10) Otherwise keyedScalars[KEYED_EXCEED_SCALAR][KEYED_UINT_SCALAR] would equal 3. This should then be correct, right? // Check that KEYED_UINT_SCALAR is in keyedScalars and its value equals 1 Assert.ok((KEYED_UINT_SCALAR in keyedScalars[KEYED_EXCEED_SCALAR]), "The keyed Scalar is in the keyed exceeded scalar"); Assert.equal(keyedScalars[KEYED_EXCEED_SCALAR][KEYED_UINT_SCALAR], 1, "We have exactly 1 key over the limit");
We can assert it's 3, then. Even better!
Noticed a while ago that the folder structure for telemetry/ had changed some, that TelemetryScalar.cpp moved into a core/ folder. So because of that and hm that i somehow screwed up my mercurial here is a new patch that hopefully does not reintroduce some old issues. Does not Look like that, and hopefully fixes all issues. Please have a look
Attachment #9014963 - Attachment is obsolete: true
Flags: needinfo?(chutten)
Comment on attachment 9016848 [details] [diff] [review] Bug1451813_181012_2317.patch Review of attachment 9016848 [details] [diff] [review]: ----------------------------------------------------------------- You're right, this does address more or less all of the concerns. Well done! I did just notice a small bug that we'll need to address, and two really small things. ::: toolkit/components/telemetry/core/TelemetryScalar.cpp @@ +874,4 @@ > return ScalarResult::Ok; > } > > + if (mScalarKeys.Count() >= mMaximumNumberOfKeys && !aKey.EqualsLiteral("telemetry.keyed_scalars_exceed_limit")) { Oh, I just noticed that this condition will create a new key with string "telemetry.keyed_scalars_exceed_limit" on any keyed scalar regardless of whether it hit the maximum number of keys or not. We need to break this condition up so that every time mScalarKeys.Count() > mMaximumNumberOfKeys we return TooManyKeys. So maybe something like if (mScalarKeys.Count() >= mMaximumNumberOfKeys) { if (aKey.EqualsLiteral("telemetry.keyed_scalars_exceed_limit")) { return ScalarResult::ToManyKeys; } ...the rest of the block } ::: toolkit/components/telemetry/tests/unit/test_TelemetryScalars.js @@ +516,4 @@ > const LAST_KEY_NAME = "overflowing_key"; > Telemetry.keyedScalarAdd(KEYED_UINT_SCALAR, LAST_KEY_NAME, 10); > Telemetry.keyedScalarSet(KEYED_UINT_SCALAR, LAST_KEY_NAME, 1); > + Telemetry.keyedScalarSetMaximum(KEYED_UINT_SCALAR, LAST_KEY_NAME, 10) Lost a semicolon on this line @@ +545,5 @@ > + for (let k = 0; k < 100; k++) { > + keyNamesSet2.add("key2_" + k); > + } > + > + // Add 100 keys to an histogram and set their initial value. replace "an histogram" with "the keyed exceed scalar"
Now, since we're adding new data collection to Firefox we need to request Data Collection Review[1]. Since I'm the one asking for this collection, I'll take care of filling out the form and stuff. Just letting you know ahead of time so you aren't confused about what I'm doing :) [1]:
Flags: needinfo?(chutten)
Allright. Closing in. This should fix these pointers :) It was good that you noticed the missing semicolon row. It should not have been there. I had removed the two lines in the test to get keyedScalars[KEYED_EXCEED_SCALAR][KEYED_UINT_SCALAR] equal 1, committed. Then submitted progress-patch of that, but then when it was okey that it could be 3 aswell, then i re-added them, made a new commit and those lines where added to the patch :/ Ahh yeah remember reading about those forms when i started to look at this bug...
Attachment #9016848 - Attachment is obsolete: true
Comment on attachment 9017182 [details] data review request?** Yes, :chutten. 4) Using the **[category system of data types]()** on the Mozilla wiki, what collection type of data do the requested measurements fall under? ** Category 1. 5) Is the data collection request for default-on or default-off? Default ON. #9017182 - Flags: review?(francois) → review+
Comment on attachment 9017264 [details] [diff] [review] Bug1451813_181015_1940.patch Review of attachment 9017264 [details] [diff] [review]: ----------------------------------------------------------------- Patch looks good to me (though the first hunk of the patch file is corrupt. It says 34 affected lines but it should have 20 affected lines). I've sent it up to our "try server" architecture to make sure it builds and passes the tests. We'll follow along here: If it comes out green, I'll mark this checkin-needed and we'll be done! It might be a good time to start thinking about what you'd like to work on next. Have you any ideas?
Attachment #9017264 - Flags: review+
Hussah! :) Hm see alot of green, but some orange, so hmm ... Anywho one thing i will have to look into is getting a solid mercuiral workflow. Feels abit like black magic and voodo mixed together at this point ;) I Do have looked into Phabricator, but im not sure it would help me, but its a discussion for #developers though. I have another patch im gonna start looking at soon, 1370224 and supplied a progress patch for a bug that causes openSUSE linux to not have bootstrap functionallity... but hey, if you had something in mind, please let me know :)
Seems as though we're getting crashes when test_TelemetryScalars.js is run on a debug build. That usually means we're violating an assert someplace. If we look at the log we can get the stack trace of the crashing thread. Go here [1] and search for "application crashed" and you should find something that looks like > [task 2018-10-16T14:57:22.534Z] 14:57:22 INFO - 0 libxul.so!(anonymous namespace)::KeyedScalar::AddValue(mozilla::BaseAutoLock<mozilla::AnyStaticMutex> const&, nsTSubstring<char16_t> const&, unsigned int) [TelemetryScalar.cpp:dc35f71fa8ebab2f6ebbd894b7ada9b097fa491e : 799 + 0x0]. So. What is the correct way to proceed. I'm not sure. Let's ask Alessio who implemented this about the choice to assert here and not assert on the JS-facing ones, and what the intent is. [1]:
Flags: needinfo?(alessio.placitelli)
okidoki, gonna try to build a local debug build and see if i can get the test to crash aswell... But could it not be so that, since we have been poking at what happens when a scalar gets too many keys, this assertion is not correct anymore?!
(In reply to Chris H-C :chutten from comment #64) >. This was intentional (see bug 1277806 comment 17 for context): when using the JS API, we most probably have access to the browser console so we can notify devs through it if something goes wrong. With the C++ API is a bit different, as it might be a bit more complicated to access the browser console. We decided to assert on errors in c++ debug builds (so only asserting to devs) and communicate the error through the console with the JS API. Both are meant as ways for developers to catch this error early during development rather than capturing weird data and re-iterate on the code again. This doesn't mean it can't be changed: since we're about to report which keyed scalars fail using telemetry, I think it should be ok to get rid of this (and related) asserts.
Flags: needinfo?(alessio.placitelli)
Let's go with not asserting on TooManyKeys. Something well commented like: // Bug 1451813 - We now report which scalars exceed the key limit in telemetry.keyed_scalars_exceed_limit. if (sr != ScalarResult::Ok && sr != ScalarResult::TooManyKeys) { We do still want to assert on other non-Ok cases because telemetry.keyed_scalars_exceed_limit doesn't cover those cases. This'll be needed for all the KeyedScalar:: calls that have a void return. Sound like a plan, Nicklas?
Yeah that sounds like a plan. Attaching patch to fix that. BUT, for some reason now the test fails locally. It fails on Telemetry.keyedScalarSet, unless i change the test to have 99 (like i did below, not in the patch) instead of 100, then it passes...What would be the reason for that? It had earlier taken one of the Assertion-if-statements that we changed now, so now it doesnt find that one, and times out? The error it shows is : 0:07.50 pid:13279 ExceptionHandler::GenerateDump cloned child 13297 0:07.50 pid:13279 ExceptionHandler::SendContinueSignalToChild sent continue signal to child 0:07.50 pid:13279 ExceptionHandler::WaitForContinueSignal waiting for continue signal... ---- // Generate the names for the exceeded keys let keyNamesSet2 = new Set(); for (let k = 0; k < 99; k++) { keyNamesSet2.add("key2_" + k); } // Add 99 keys to the keyed exceed scalar and set their initial value. valueToSet = 0; keyNamesSet2.forEach(keyName2 => { Telemetry.keyedScalarSet(KEYED_EXCEED_SCALAR, keyName2, valueToSet++); }); // Check that there are exactly 99 keys in KEYED_EXCEED_SCALAR Assert.equal(valueToSet, 99, "The keyed scalar must contain all the 99 keys."); ----
Attachment #9017264 - Attachment is obsolete: true
Flags: needinfo?(chutten)
Comment on attachment 9018039 [details] [diff] [review] Bug1451813_181017_2155.patch Review of attachment 9018039 [details] [diff] [review]: ----------------------------------------------------------------- This is why we have tests, so they can catch things like these :D ::: toolkit/components/telemetry/core/TelemetryScalar.cpp @@ +770,5 @@ > ScalarBase* scalar = nullptr; > + ScalarResult sr = GetScalarForKey(locker, aKey, &scalar); > + > + // Bug 1451813 - We now report which scalars exceed the key limit in telemetry.keyed_scalars_exceed_limit. > + if (sr != ScalarResult::Ok && sr != ScalarResult::TooManyKeys) { I foresee a problem here. In the event the ScalarResult is TooManyKeys we will try to call SetValue on scalar. Scalar is initialized to nullptr, and since we early-return from GetScalarForKey it is likely still nullptr. We're probably dereferencing a null below here. This means we need to make these int 2-level if blocks if (sr != ScalarResult::Ok) { // Bug 1451813 - We now report which scalars exceed the key limit in telemetry.keyed_scalars_exceed_limit. if (sr != ScalarResult::TooManyKeys) { MOZ_ASSERT(...); } return; } @@ +784,4 @@ > { > ScalarBase* scalar = nullptr; > + ScalarResult sr = GetScalarForKey(locker, aKey, &scalar); > + whitespace ghost
Flags: needinfo?(chutten)
Yeah that fixed things, now it passes the test locally. haha whitespaces. gonna need to increase the color of my whitespace characters :)
Attachment #9018039 - Attachment is obsolete: true
Comment on attachment 9018703 [details] [diff] [review] Bug1451813_181019_2145.patch Review of attachment 9018703 [details] [diff] [review]: ----------------------------------------------------------------- It looks good to me, but I've now been looking at it enough times that I might be missing something. Hey :janerik, what do you think about this?
Attachment #9018703 - Flags: review?(jrediger)
I've put it up for try over here:
Hmm that was a scary movie ;) I see errors, but, there are bug numbers assigned to them, so im guessing that has not todo with this, right? Time to cheer perhaps?
Comment on attachment 9018703 [details] [diff] [review] Bug1451813_181019_2145.patch Review of attachment 9018703 [details] [diff] [review]: ----------------------------------------------------------------- The code looks good to me. Some minor nits regarding the now-incorrect assert messages and the test needs to be adjusted to test the correct thing. :::."); The message is not fully correct anymore. It won't trigger for too many keys anymore. @@ +7900724. ::: toolkit/components/telemetry/tests/unit/test_TelemetryScalars.js @@ +552,5 @@ > + Telemetry.keyedScalarSet(KEYED_EXCEED_SCALAR, keyName2, valueToSet++); > + }); > + > + // Check that there are exactly 100 keys in KEYED_EXCEED_SCALAR > + Assert.equal(valueToSet, 100, This assert is only checking that we correctly increased `valueToSet` 100 times. It's not testing the keyed scalar at all. To test that it contains the right number of keys you need to take a snapshot, something along these lines: let snapshot = Telemetry.snapshotKeyedScalars(Ci.nsITelemetry.DATASET_RELEASE_CHANNEL_OPTIN, false); Assert.equal(100, Object.keys(snapshot.parent[KEYED_UINT_SCALAR]).length, "The keyed scalar must contain all the 100 keys.");
Attachment #9018703 - Flags: review?(jrediger) → review+
(In reply to Nicklas Boman [:smurfd] from comment #73) > Hmm that was a scary movie ;) > > I see errors, but, there are bug numbers assigned to them, so im guessing > that has not todo with this, right? > Time to cheer perhaps? Time to cheer indeed, and to make the adjustments Jan-Erik found :)
I changed the message to be : "Key too long is recorded in the scalar." Not sure if that is allright? Also added that snapshot. Built okey and passed running the xpcshell test.
Attachment #9018703 - Attachment is obsolete: true
Flags: needinfo?(jrediger)
Comment on attachment 9019438 [details] [diff] [review] Bug1451813_181023_1935.patch Review of attachment 9019438 [details] [diff] [review]: ----------------------------------------------------------------- ::: is recorded in the scalar."); Better grammar would be "Key too long to be recorded in the scalar."
You are right, fixed!
Attachment #9019438 - Attachment is obsolete: true
Comment on attachment 9019464 [details] [diff] [review] Bug1451813_181023_2120.patch Clearing the ni? for Jan-Erik as he provided r+ on the earlier patch. Carrying forward the r+. Next step, marking this for checkin.
Flags: needinfo?(jrediger)
Attachment #9019464 - Flags: review+
Pushed by cbrindusan@mozilla.com: Report which keyed scalars fail to accumulate due to running out of keys. r=chutten
Status: NEW → RESOLVED
Last Resolved: 6 months ago
status-firefox65: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla65 | https://bugzilla.mozilla.org/show_bug.cgi?id=1451813 | CC-MAIN-2019-18 | en | refinedweb |
Flamingo Tutorial
Flamingo Tutorial
Join the DZone community and get the full member experience.Join For Free
In this article, I will provide you with the documentation to easily use the Flamingo framework and more precisely, its ribbon widget.
Introduction
Never say that Microsoft never innovates: in Office, it introduced an interesting concept, the ribbon band.
The ribbon band is a toolbar of sort. But whereas toolbars are fixed, ribbons layout can change according to the width they display. If you have such an application, just play with it for a few seconds and you will see the magic happens.
Recent versions of Swing do not have such widgets. However, I found the Flamingo project on java.net. Examples made with Flamingo look awfully similar to Office.
Trying to use Flamingo for the first time is no small feat since there’s no documentation on the Web, apart from Javadocs and the source for a test application. The following is what I understood since I began my trial and error journey.
The basics
Semantics
- the ribbon is the large bar on the screenshot above. There can be only a single ribbon for a frame
- a task is a tabbed group of one or more band. On the screenshot, tasks are Page Layout, Write, Animations and so on
- a band is a group of one or more widgets. On the screenshot, bands are Clipboard, Quick Styles, Font and so on
Underlying concepts
The core difference between buttons in a toolbar and band in a ribbon bar is that bands are resizable. For examples, these are the steps for displaying the Document band, in regard to both its relative width and the ribbon width.
The last step is known as the iconified state. When you click on the button, it displays the entire band as a popup.
Your first ribbon
Setup
In order to use the Flamingo framework, the first step is to download it. If you’re using Maven, tough luck! I didn’t find Flamingo in central nor java.net repositories. So download it anyway and install it manually in your local (or enterprise) repository. For information, I choosed the net.java.dev.flamingo:flamingo location.
The frame
If you are starting from scratch, you’re lucky. Just inherit from JRibbonFrame: the method getRibbon() will provide you a reference to the ribbon instance. From there, you will be able to add tasks to it.
However, chances are you probably already have your own frame hierachy. In this case, you have to instantiate a JRibbon and add it on the NORTH location of your BorderLayout-ed frame.
In both cases, the result should be something akin to that:
Adding a task
Tasks represent logical band grouping. They look like tabs and act the part too. Let’s add two such tasks aptly named “One” and “Two”.
public class MainFrame extends JRibbonFrame { public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { @Override public void run() { MainFrame frame = new MainFrame(); frame.setDefaultCloseOperation(EXIT_ON_CLOSE); frame.pack(); frame.setVisible(true); RibbonTask task1 = new RibbonTask("One"); RibbonTask task2 = new RibbonTask("Two"); frame.getRibbon().addTask(task1); frame.getRibbon().addTask(task2); } }); }}
Notice the getRibbon() method on the JRibbonFrame. It is the reference on the ribbon bar.
Also notice that the addTask() method accepts a task but also a varargs of JRibbonBand. And if you launch the above code, it will fail miserably with the following)
Adding bands
To satisfy our Flamingo friend, let’s add a ribbon band to each task. The constructor of JRibbonBand takes two argument, the label and an instance of a previously unknown class, ResizableIcon. It will be seen in detail in the next section.
As for now, if you just create the RibbonTask with a reference to the JRibbonBand and launch the application, you will get such an)
Remember that bands are resizable? Flamingo needs information on how to do it. Before initial display, it will check that those policies are consistent. By default, they are not and this is the reason why it complains: Flamingo requires you to have at least the iconified policy that must be in the last place. In most cases, however, you’ll want to have at least a normal display in the policies list.
Let’s modify the code to do it:
JRibbonBand band1 = new JRibbonBand("Hello", null);JRibbonBand band2 = new JRibbonBand("world!", null);band1.setResizePolicies((List) Arrays.asList(new IconRibbonBandResizePolicy(band1.getControlPanel())));band2.setResizePolicies((List) Arrays.asList(new IconRibbonBandResizePolicy(band1.getControlPanel())));RibbonTask task1 = new RibbonTask("One", band1);RibbonTask task2 = new RibbonTask("Two", band2);
The previous code let us at least see something:
Adding buttons (at last!)
Even if the previous compiles and runs, it still holds no interest. Now is the time to add some buttons!
JCommandButton button1 = new JCommandButton("Square", null);JCommandButton button2 = new JCommandButton("Circle", null);JCommandButton button3 = new JCommandButton("Triangle", null);JCommandButton button4 = new JCommandButton("Star", null);band1.addCommandButton(button1, TOP);band1.addCommandButton(button2, MEDIUM);band1.addCommandButton(button3, MEDIUM);band1.addCommandButton(button4, MEDIUM);
Too bad there’s no result! Where are our buttons? Well, they are well hidden. Remember the resize policies? There’s only one, the iconified one and its goal is only to display the iconified state. Just update the policies line with the code:
band1.setResizePolicies((List) Arrays.asList(new CoreRibbonResizePolicies.None(band1.getControlPanel()), new IconRibbonBandResizePolicy(band1.getControlPanel())));
The result looks the same at first, but when you resize the frame, it looks like this:
Even if it’s visually not very attractive, it looks much better than before. We see the taks, the name of the band and the labels on our four buttons.
Resizable icons
The JCommandButton‘s constructor has 2 parameters: one for the label, the other for a special Flamingo class, the ResizableIcon. Since Flamingo is all about displaying the same button in different sizes, that’s no surprise. Resizable icons can be constructed from Image, ico resources or even SVG.
Let’s add an utility method to our frame, and spice up our UI:
public static ResizableIcon getResizableIconFromResource(String resource) { return ImageWrapperResizableIcon.getIcon(MainFrame.class.getClassLoader().getResource(resource), new Dimension(48, 48));}...JCommandButton button1 = new JCommandButton("Square", getResizableIconFromResource("path"));JCommandButton button2 = new JCommandButton("Circle", getResizableIconFromResource("to"));JCommandButton button3 = new JCommandButton("Triangle", getResizableIconFromResource("the"));JCommandButton button4 = new JCommandButton("Star", getResizableIconFromResource("resource"));band1.addCommandButton(button1, TOP);band1.addCommandButton(button2, MEDIUM);band1.addCommandButton(button3, MEDIUM);band1.addCommandButton(button4, MEDIUM);
This is somewhat more satisfying:
Choosing policies
Now we’re ready to tackle Flamingo’s core business, resizing management. If you have Office, and played with it, you saw that the resizing policies are very rich. And we also saw previously that with only two meager policies, we can either see the iconified display or the full display.
Let’s see how we could go further. You probably noticed that the addCommandButton() of JRibbonBand has 2 parameters: the button to add and a priority. It is this priority and the policy that Flamingo use to choose how to display the band.
Priorities are the following: TOP, MEDIUM and LOW.
Policies are:
Now, you have all elements to let you decide which policies to apply. There’s one rule though: when setting policies, the width of the band must get lower and lower the higher the index of the policy (and it must end with the IconRibbonBandResizePolicy) let you’ll get a nasty IllegalStateException: Inconsistent preferred widths (see above).
Let’s apply some policies to our band:
band1.setResizePolicies((List) Arrays.asList( new CoreRibbonResizePolicies.None(band1.getControlPanel()), new CoreRibbonResizePolicies.Mirror(band1.getControlPanel()), new CoreRibbonResizePolicies.Mid2Low(band1.getControlPanel()), new CoreRibbonResizePolicies.High2Low(band1.getControlPanel()), new IconRibbonBandResizePolicy(band1.getControlPanel())));
This will get us the following result:
Note: there won’t be any iconified state in my example since the band does not compete for space with another one.
More features
Flamingo’s ribbon feature let you also:
- add standard Swing components to the ribbon
- add a menu on the top left corner
- integration with standard Look and Feels
Those are also undocumented but are much easier to understand on your own.
It also has other features:
- Breadcrumb bar
- Command button strips and panels
Conclusion
Flamingo is a nice and powerful product, hindered by a big lack of documentation. I hope this article will go one step toward documenting it.
Here are the sources for this article in Eclipse/Maven format.
To go further:
- The Flamingo site
- Some demo applications
- The latest release (4.2) download page
- Kirill Grouchnivkov (Flamingo’s father) site, where he blogs about Flamingo and other products
From
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/flamingo-tutorial | CC-MAIN-2019-18 | en | refinedweb |
Sounds as SAP.Connector.dll (and SAP.Connector.Rfc.dll for 2.0) is not present in GAC or is not copied to the Bin-folder of the project. In 1.x version the assemblies are NOT installed to GAC, thus the "Copy flag" must be turned on. In 2.0 the assemblies are installed to GAC, thus the flag is usually off.
I´ve had the same problem after deploying my application.
Just check, as Reiner Hille-Doering wrote, if you set the properties of SAP.Connector "Lokale Kopie" -> "True" (German VS2003). Only then the copy of SAP.connector.dll in your bin folder is used. Otherwise GAC is used.
Gerhard Rausch
[WebServiceBinding(Name="", Namespace="urn:sap-com:document:sap:rfc:functions")]
public class SAPProxy1 : SAPClient
{
// Constructors
--->public SAPProxy1(){}
public SAPProxy1(string ConnectionString) : base(ConnectionString){}
that's the point where the webapp crashes....
in my browserwindow I receive among other things theese informations where 0z1tvv_5 differs every time I run my project:
=== Pre-bind state information ===
LOG: Where-ref bind. Location = C:\WINNT\TEMP\0z1tvv_5.dll
LOG: Appbase =
LOG: Initial PrivatePath = bin
Calling assembly : (Unknown).
===
LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
LOG: Attempting download of new URL.
I just figured it out: The process did not have any write-permissions in c:\winnt\temp though it could not create the temporarely DLLs
Add comment | https://answers.sap.com/questions/818935/index.html | CC-MAIN-2019-18 | en | refinedweb |
send (3p)
PROLOGThis manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
NAMEsend — send a message on a socket
SYNOPSIS
#include <sys/socket.h>
ssize_t send(int socket, const void *buffer, size_t length, int flags);
DESCRIPTIONThe.
RETURN VALUEUpon successful completion, send() shall return the number of bytes sent. Otherwise, −1 shall be returned and errno set to indicate the error.
ERRORSThe send() function shall fail if:
- EAGAIN or EWOULDBLOCK
- EBADF
- The socket argument is not a valid file descriptor.
- ECONNRESET
- A connection was forcibly closed by a peer.
- EDESTADDRREQ
-ACCES
- The calling process does not have appropriate privileges.
- EIO
- An I/O error occurred while reading from or writing to the file system.
- ENETDOWN
- The local network interface used to reach the destination is down.
- ENETUNREACH
- ENOBUFS
- Insufficient resources were available in the system to perform the operation. | https://readtheman.io/pages/3p/send | CC-MAIN-2019-18 | en | refinedweb |
unlockpt (3p)
PROLOGThis manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
NAMEunlockpt — unlock a pseudo-terminal master/slave pair
SYNOPSIS
#include <stdlib.h>
int unlockpt(int fildes);
DESCRIPTIONThe unlockpt() function shall unlock the slave pseudo-terminal device associated with the master to which fildes refers. Conforming applications shall ensure that they call unlockpt() before opening the slave side of a pseudo-terminal device.
RETURN VALUEUpon successful completion, unlockpt() shall return 0. Otherwise, it shall return −1 and set errno to indicate the error.
ERRORSThe unlockpt() function may fail if:
- EBADF
- The fildes argument is not a file descriptor open for writing.
- EINVAL
- The fildes argument is not associated with a master pseudo-terminal device. | https://readtheman.io/pages/3p/unlockpt | CC-MAIN-2019-18 | en | refinedweb |
Recompose patterns
Recompose is a toolbelt for working with React components in a reusable, functional way. The workflow is similar to libraries like Underscore or Lodash, which can help you avoid re-implementing common patterns and keep your code DRY. Check out the Recompose API for all the details.
A common use-case when using the
graphql HOC is to display a "loading" screen while your data is being fetched. We often end up with something like this:
const Component = props => { if (props.data.loading) { return <LoadingPlaceholder> } return ( <div>Our component</div> ) }
Recompose has a utility function
branch() which lets us compose different HOCs based on the results of a test function. We can combine it with another Recompose method,
renderComponent(). So we can say "If we are loading, render
import { propType } from 'graphql-anywhere' const renderWhileLoading = (component, propName = 'data') => branch( props => props[propName] && props[propName].loading, renderComponent(component), ); const Component = props => (<div>Our component for {props.user.name}</div>) Component.propTypes = { // autogenerated proptypes should be in place (if no error) user: propType(getUser).isRequired, } const enhancedComponent = compose( graphql(getUser, { name: "user" }), renderWhileLoading(LoadingPlaceholder, "user") )(Component); export default enhancedComponent;
This way, our wrapped component is only rendered outside of the loading state. That means we only need to take care of 2 states: error or successful load.
Note:
trueduring the first fetch for a particular query. But if you enable options.notifyOnNetworkStatusChange you can keep track of other loading status using the data.networkStatus field. You can use a similar pattern to the above.
Error handling
Similar to the loading state above, we might want to display a different component in the case of an error, or let the user
refetch(). We will use
withProps() to include the refetch method directly in the props. This way our universal error handler can always expect it to be there and is more decoupled.
const renderForError = (component, propName = "data") => branch( props => props[propName] && props[propName].error, renderComponent(component), ); const ErrorComponent = props =>( <span> Something went wrong, you can try to <button onClick={props.refetch}>refetch</button> </span> ) const setRefetchProp = (propName = "data") => withProps(props => ({refetch: props[propName] && props[propName].refetch})) const enhancedComponent = compose( graphql(getUser, { name: "user" }), renderWhileLoading(LoadingPlaceholder, "user"), setRefetchProp("user"), renderForError(ErrorComponent, "user"), )(Component); export default enhancedComponent;
Now we can count on results being available for our default component and don't have to manually check for loading state or errors inside the
render function.
Query lifecycle
There are some use-cases when we need to execute code after a query finishes fetching. From the example above, we would render our default component only when there is no error and loading is finished.
But it is just a stateless component; it has no lifecycle hooks. If we need extra lifecycle functionality, Recompose's
lifecycle() comes to the rescue:
const execAtMount = lifecycle({ componentWillMount() { executeSomething(); }, }) const enhancedComponent = compose( graphql(getUser, { name: "user" }), renderWhileLoading(LoadingPlaceholder, "user"), setRefetchProp("user"), renderForError(ErrorComponent, "user"), execAtMount, )(Component);
The above works well if we just want something to happen at component mount time.
Let's define another more advanced use-case, for example, using
react-select to let a user pick an option from the results of a query. I want to always display the react-select, which has its own loading state indicator. Then, I want to automatically select the predefined option after the query finishes fetching.
There is one caveat: we need to be aware that the query can skip the loading state when data is already in the cache. That would mean we need to handle
networkStatus === 7 on mount.
We will also use recompose's
withState() to keep value for our option picker. For this example we will assume the default
data prop name is unchanged.
const DEFAULT_PICK = "orange"; const withPickerValue = withState("pickerValue", "setPickerValue", null); // find matching option const findOption = (options, ourLabel) => lodashFind(options, option => option.label.toLowerCase() === ourLabel.toLowerCase()); const withAutoPicking = lifecycle({ componentWillReceiveProps(nextProps) { // when value was already picked if (nextProps.pickerValue) { return; } // networkStatus changed from 1 to 7, meaning initial load finished successfully if (this.props.data.networkStatus === 1 && nextProps.data.networkStatus === 7) { const match = findOption(nextProps.data.options) if (match) { nextProps.setPickerValue(match); } } }, componentWillMount() { const { pickerValue, setPickerValue, data } = this.props; if (pickerValue) { return; } // when Apollo query is resolved from cache, // it already has networkStatus 7 at mount time if (data.networkStatus === 7 && !data.error) { const match = findOption(data.options); if (match) { setPickerValue(match); } } }, }); const Component = props => ( <Select loading={props.data.loading} value={props.pickerValue && props.pickerValue.value || null} onChange={props.setPickerValue} options={props.data.options || undefined} /> ); const enhancedComponent = compose( graphql(getOptions), withPickerValue, withAutoPicking, )(Component);
Controlling pollInterval
This case is borrowed from David Glasser's post on the Apollo blog about the Meteor's Galaxy UI migrations panel implementation. In the post, he says:
We’re not usually running any migrations, so a nice, slow polling interval like 30 seconds seemed reasonable. But in the rare case where a migration is running, I wanted to be able to see much faster updates on its progress.
The key to this is knowing that the
optionsparameter to react-apollo’s main graphql function can itself be a function that depends on its incoming React props. (The
optionsparameter describes the options for the query itself, as opposed to React-specific details like what property name to use for data.) We can then use recompose's
withState()to set the poll interval from a prop passed in to the graphql component, and use the
componentWillReceivePropsReact lifecycle event (added via the recompose lifecycle helper) to look at the fetched GraphQL data and adjust if necessary.
Let's look at the code:
import { graphql } from "react-apollo"; import gql from "graphql-tag"; import { compose, withState, lifecycle } from "recompose"; const DEFAULT_INTERVAL = 30 * 1000; const ACTIVE_INTERVAL = 500; const withData = compose( // Pass down two props to the nested component: `pollInterval`, // which defaults to our normal slow poll, and `setPollInterval`, // which lets the nested components modify `pollInterval`. withState("pollInterval", "setPollInterval", DEFAULT_INTERVAL), graphql( gql` query GetMigrationStatus { activeMigration { name version progress } } `, { // If you think it's clear enough, you can abbreviate this as: // options: ({pollInterval}) => ({pollInterval}), options: props => { return { pollInterval: props.pollInterval }; } } ), lifecycle({ componentWillReceiveProps({ data: { loading, activeMigration }, pollInterval, setPollInterval }) { if (loading) { return; } if (activeMigration && pollInterval !== ACTIVE_INTERVAL) { setPollInterval(ACTIVE_INTERVAL); } else if ( !activeMigration && pollInterval !== DEFAULT_INTERVAL ) { setPollInterval(DEFAULT_INTERVAL); } } }) ); const MigrationPanelWithData = withData(MigrationPanel);
Note that we check the current value of
pollInterval before changing it because, by default in React, nested components will get re-rendered any time we change state, even if you change it to the same value. You can deal with this using
shouldComponentUpdate or
React.PureComponent, but in this case it’s straightforward just to only set the state when it’s actually changing.
Other use-cases
Recompose is a powerful tool and can be applied to all sorts of other cases. Here are a few final examples.
Normally, if you wanted to add side effects to the
mutate function, you would manage them in the
graphql HOC's
props option by doing something like
{ mutate: () => mutate().then(sideEffectHandler) }. But that's not very reusable. Using recompose's
withHandlers() you can compose the same prop manipulation in any number of components. You can see a more detailed example here.
Mutations can also be tracked using recompose's
withState, since it has no effect on your query's
loading state. For example, you could use it to disable buttons while submitting form data.
See the full Recompose docs here. | https://www.apollographql.com/docs/react/recipes/recompose/ | CC-MAIN-2019-18 | en | refinedweb |
8 min read
Knowing.
In this article, we are going to build a simple Android application to determine the user’s latitude and longitude using Android’s Google Location Services API. When developing Android applications, there are a couple of ways to get the user’s location.
Package “android.location”
The package “android.location” has been available since Android was first introduced, and it gives us access to location services. These services allow applications to obtain periodic updates of the device’s geographical location.
The package provides two means of acquiring location data:
LocationManager.GPS_PROVIDER: Determines location using satellites. Depending on the conditions, this provider may take a while to return a location fix.
LocationManager.NETWORK_PROVIDER: Determines location based on availability of nearby cell towers and WiFi access points. This is faster than GPS_PROVIDER.
When you are looking for user location you have to play with these providers and their availability. Ideally you obtain the first location using NETWORK_PROVIDER, which might not be as accurate, but is much faster. You might then make an attempt to increase accuracy by listening for a better location fix using the GPS_PROVIDER.
The APIs provided by this package are fairly low-level, and require the developer of the application to handle the finer details of determining when to request location data and schedule calls to the API in an optimized way. To improve developer experience with location based system services and ease the process of developing location-aware applications, Google introduced a new way of requesting a user’s location using Google Play Services. It offers a simpler API with higher accuracy, low-power geofencing, and much more.
Google Location Services API.
Let us build a location-based Android application using this API. For this, we will use Google’s suggested IDE for Android application development - Android Studio. Getting started with Android Studio is pretty straight forward. Their website describes the procedure involving the installation and configuration of Android Studio in great detail, including how to bootstrap your first Android application for development.
Android Studio should make things super-easy for us. However, we will need to begin by configuring the build script and adding Google Play Services as a dependency for this application. This can be done by modifying the “build.gradle” file as follows:
dependencies { compile 'com.android.support:appcompat-v7:21.0.3' compile 'com.google.android.gms:play-services:6.5.87' // Add this line }
At the time I am writing this article, the latest version of Google Play Services available is 6.5.87. Make sure you always check for the latest version available before you start. In case newer versions comes out later down the road and you decide to update it for your own projects, test all location related features against all versions of Android you are supporting.
At this point, we should be able to start doing the actual work for our application.
Requesting Permission, Configuring AndroidManifest.xml
Androids have specific security features that would prevent any arbitrary application from requesting a precise user location. To solve this, we need to edit “AndroidManifest.xml” and add the permission we require for this application:
<uses-permission android:
While we are at it, we should also define the version of Google Play Services we are using for this application:
<meta-data android:
Checking for Google Play Services Availability
Before accessing features provided by Google Play Services, we must check if the device has Google Play Services installed, and that the version is the one we intend to use (6.5.87).
private boolean checkGooglePlayServices(){ int checkGooglePlayServices = GooglePlayServicesUtil .isGooglePlayServicesAvailable(mContext); if (checkGooglePlayServices != ConnectionResult.SUCCESS) { /* * Google Play Services is missing or update is required * return code could be * SUCCESS, * SERVICE_MISSING, SERVICE_VERSION_UPDATE_REQUIRED, * SERVICE_DISABLED, SERVICE_INVALID. */ GooglePlayServicesUtil.getErrorDialog(checkGooglePlayServices, mContext, REQUEST_CODE_RECOVER_PLAY_SERVICES).show(); return false; } return true; }
This method will check for Google Play Services, and in case the device doesn’t have it installed (it’s rare, but I’ve seen such cases), it will open a dialog with the corresponding error and invite the user to install/update Google Play Services from the Google Play Store.
After the user completes the resolution provided by “GooglePlayServicesUtil.getErrorDialog()”, a callback method “onActivityResult()” is fired, so we have to implement some logic to handle that call:
(mContext, "Google Play Services must be installed.", Toast.LENGTH_SHORT).show(); finish(); } } }
Accessing Google APIs
To access Google APIs, we just need to perform one more step: create an instance of GoogleApiClient. The Google API Client provides a common entry point to all the Google Play services, and manages the network connection between the user’s device and each Google service. Our first step here is to initiate the connection. I usually call this code from “onCreate” method of the activity:
protected synchronized void buildGoogleApiClient() { mGoogleApiClient = new GoogleApiClient.Builder(this) .addConnectionCallbacks(this) .addOnConnectionFailedListener(this) .addApi(LocationServices.API) .build(); }
By chaining a series of method calls, we are specifying the callback interface implementation and the Location Service API that we want to use. The interface implementation, in this case “this”, will receive response to the asynchronous “connect()” method when the connection to Google Play Services succeed, fail, or become suspended. After adding this code, our “MainActivity” should look like this:
package com.bitwoo.userlocation; import android.content.Intent; import android.location.Location; import android.os.Bundle; import android.support.v7.app.ActionBarActivity; import android.view.Menu; import android.view.MenuItem; import android.widget.Toast; import com.google.android.gms.common.ConnectionResult; import com.google.android.gms.common.GooglePlayServicesUtil; import com.google.android.gms.common.api.GoogleApiClient; import com.google.android.gms.location.LocationServices; public class MainActivity extends ActionBarActivity implements GoogleApiClient.ConnectionCallbacks, GoogleApiClient.OnConnectionFailedListener { private static int REQUEST_CODE_RECOVER_PLAY_SERVICES = 200; private GoogleApiClient mGoogleApiClient; private Location mLastLocation; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); if (checkGooglePlayServices()) { buildGoogleApiClient(); } } ); } private boolean checkGooglePlayServices() { int checkGooglePlayServices = GooglePlayServicesUtil .isGooglePlayServicesAvailable(this); if (checkGooglePlayServices != ConnectionResult.SUCCESS) { /* * google play services is missing or update is required * return code could be * SUCCESS, * SERVICE_MISSING, SERVICE_VERSION_UPDATE_REQUIRED, * SERVICE_DISABLED, SERVICE_INVALID. */ GooglePlayServicesUtil.getErrorDialog(checkGooglePlayServices, this, REQUEST_CODE_RECOVER_PLAY_SERVICES).show(); return false; } return true; } (this, "Google Play Services must be installed.", Toast.LENGTH_SHORT).show(); finish(); } } } protected synchronized void buildGoogleApiClient() { mGoogleApiClient = new GoogleApiClient.Builder(this) .addConnectionCallbacks(this) .addOnConnectionFailedListener(this) .addApi(LocationServices.API) .build(); } @Override public void onConnected(Bundle bundle) { } @Override public void onConnectionSuspended(int i) { } @Override public void onConnectionFailed(ConnectionResult connectionResult) { } }
Then in our “onStart” method we call the “connect” method and wait for “onConnected” callback method be invoked:
@Override protected void onStart() { super.onStart(); if (mGoogleApiClient != null) { mGoogleApiClient.connect(); } }
The “onConnected” method will look like this:
@Override public void onConnected(Bundle bundle) { mLastLocation = LocationServices.FusedLocationApi.getLastLocation( mGoogleApiClient); if (mLastLocation != null) { Toast.makeText(this, "Latitude:" + mLastLocation.getLatitude()+", Longitude:"+mLastLocation.getLongitude(),Toast.LENGTH_LONG).show(); } }
This callback is fired when Google Play Services is connected, which means by then we should have the last known location. However, this location can be null (it’s rare but not impossible). In that case, what I recommend is to listen for location updates which will be covered next.
Listening for Location Updates
After you invoke “getLastLocation”, you might want to request periodic updates from the Fused Location Provider. Depending on your application, this period could be short or long. For instance, if you are building an application that tracks a user’s location while he drives, you will need to listen for updates on short intervals. On the other hand, if your application is about sharing user location with his friend, you maybe just need to request the location once in a while.
Creating a request is pretty easy - you can call this method inside the “onCreate” method:
protected void createLocationRequest() { mLocationRequest = new LocationRequest(); mLocationRequest.setInterval(20000); mLocationRequest.setFastestInterval(5000); mLocationRequest.setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY); }
We instantiate a new LocationRequest object. Set the interval to 20 seconds (20000 milliseconds). Furthermore, we set a throttled update rate to 5 seconds. This tells the API to provide updates every 20 seconds (preferably), but if there is a change available within a 5 second period, it should provide that too. Finally, we set the priority to “PRIORITY_HIGH_ACCURACY”, among the other available priority options: PRIORITY_BALANCED_POWER_ACCURACY, PRIORITY_LOW_POWER, PRIORITY_NO_POWER.
Once you have built the request, you are ready to start listening on location updates after “onConnected()” method has been fired:
protected void startLocationUpdates() { LocationServices.FusedLocationApi.requestLocationUpdates( mGoogleApiClient, mLocationRequest, this); }
All that remains now is to implement the callback method to satisfy the LocationListener interface:
public class MainActivity extends ActionBarActivity implements ConnectionCallbacks, OnConnectionFailedListener, LocationListener { // ... @Override public void onLocationChanged(Location location) { mLastLocation = location; Toast.makeText(this, "Latitude:" + mLastLocation.getLatitude()+", Longitude:"+mLastLocation.getLongitude(),Toast.LENGTH_LONG).show(); } }
Stop Listening for Updates
It is important to explicitly stop listening for updates when you don’t need them anymore, or if the user leaves your application. The following method should be invoked from within “onPause” callback:
protected void stopLocationUpdates() { if (mGoogleApiClient != null) { LocationServices.FusedLocationApi.removeLocationUpdates( mGoogleApiClient, this); } }
… and disconnecting Google API:
@Override protected void onStop() { super.onStop(); if (mGoogleApiClient != null) { mGoogleApiClient.disconnect(); } }
Wrapping Up
As you can see, the fundamental ideas behind implementing location aware applications in Android is very simple. Moreover, with the available APIs that are both simple to use and easy to understand, it should be a no-brainer to build basic location-based applications for Android. The small sample application we have built here is meant to demonstrate exactly that. You can find the complete source code for this on GitHub. Please note that to keep things simple, the application is not handling the “onConnectionFailed” callback method.
Hopefully this tutorial will help you get started with using the Google Location Services API. | https://www.toptal.com/android/android-developers-guide-to-google-location-services-api | CC-MAIN-2019-18 | en | refinedweb |
Angular 2 with Rails and Webpacker
Angular.
In this blog post I will explain how to create a SPA application with Rails and Angular 2+. I will do it with the new Webpacker gem.
Because I use bleeding edge technologies, it may improve and become more smooth in the future, so stay tuned.
First some history
The firsts Rails versions didn’t have any unique feature for running JavaScript in the browser. When JavaScript had become a key player in web development, Rails introduced the “Asset pipeline”.
The asset pipeline through the ‘sprockets-rails’ gem provides a framework to concatenate and minify or compress JavaScript and CSS assets. It enabled using a lot of JavaScript files much easier. For instance, to creating an Rails + Angular1 SPA.
But the JavaScript world had evolved and minifying and concentrate JavaScript files is not enough.
In order to use features of ES6+ or TypeScript we need to use a compiler (or transpiler). The same goes for features like hot reloading and more. The Asset Pipeline could not provide it (although there are efforts to enable it).
There are few ways that I can use Rails with modern JavaScript library (React, Angular2+, Vue):
- Run Rails as API, and call it from JavaScript files that is serving from different place. The biggest disadvantage is that it is not served from the same server. The deployment is harder. I cannot use Rails session for CSRF, I cannot use Devise out of the box, I cannot add Rails variables to my page and so on.
- The second option is to build the JavaScript artifacts (using Angular-cli, or webpack) and put it in the Rails public folder. This way I can serve the JavaScript through the same server. It can work but it is not convenient, because I lose features like hot reloading,
- Luckily there is a third option. Use Railes official gem Webpacker.
Webpacker
Webpacker makes it easy to use the JavaScript pre-processor and bundler webpack 3 word about webpack
Webpack is a module bundler for modern JavaScript applications. Webpack builds a dependency graph that includes every module your application needs, then packages all of those modules into one or more bundles.
Webpack allows use of loaders and plugins for processing and building the files.
Webpack is the most popular utility today for this purpose.
Rails with Angular 2+
Although webpacker let you use several JavaScript libraries, I decided to demonstrate Angular 2+ because there is not a lot of material on this subject. React has some proven solutions (such as react-Rails gem).
After the release of Angular 2+, there was a lot of disappointment in the Angular community (due to the major change) and many migrated to React. I feel that lately there is a drifting back to angular, and I find it myself quite attractive.
A step by step tutorial for your first Rails-Angular-Webpacker application
We will start by creating a new Rails application with Webpacker and angular. You can do it for React/Vue/Elm as well, and you can add it also to an existing application.
There are few prerequisites that needs to be installed before:
- Ruby 2.2.6+
- Rails 5+
- Node
- Yarn
- Webpack
Rails new webpacker-angular-app --webpack=angular
Let's enter the created code and go over the created files and folders:
The angular code is placed in app/javascript which is a new subfolder in the app folder (in addition to app/asset/javascript).
In app/javascript there are two subfolders:
- packs - contains the modules entry points (this folder can be configured). Webpack will treat these files as entry point, and the result will be bundling the modules.
- Hello_angular - an example module (or angular app). Contains the angular code.
The webpacker configuration is placed in the config folder:
- Webpacker.yml - a config file for webpacker
- config/webpack - webpack configuration files ## Improve the hello_angular app The generated code comes with a sample application called hello_angular, I'll expand and explain how to work with it. The common scenario will be creating one or more apps like this for every application. I will start by creating a page that contains the angular application. I will create a controller and a view and place the angular inside:
Rails g controller hello_angular index
Now I will add hello_angular to the view app/views/hello_angular/index.html.erb
<div> <hello-angular></hello-angular> </div> <%= javascript_pack_tag 'hello_angular' %>
hello-angular is the component name.
javascript_pack_tag will pull in the compiled hello_angular module script and reference it in the application.
I will make this page the root of the application, and check if it works:
config/routes.rb :
Rails.application.routes.draw do root 'hello_angular#index' get 'hello_angular/index' end
In order to run the server we have to run the server:
Rails s
And run webpack (in a different tab - I will show how to run them together later)
./bin/webpack-dev-server
Oops, it is not working…
We need to hack the configuration a bit for it to work. We need to tell webpack what to do with the “@angular/core” symbol. In order to do it we will need to use ContextReplacementPlugin. The way to add plugins or loaders to webpacker is to use a custom configuration file.
We will create a new file config/webpack/custom.js
const webpack = require('webpack') const path = require('path') module.exports = { plugins: [ new webpack.ContextReplacementPlugin( /angular(\\|\/)core/, root('../../app/javascript/hello_angular'), // location of your src { } ) ] } function root(__path) { return path.join(__dirname, __path); }
We can read more about it in here
Then we will add it to the environment (for example to config/webpack/development.js)
const environment = require('./environment') const merge = require('webpack-merge') const customConfig = require('./custom') module.exports = merge(environment.toWebpackConfig(), customConfig)
You can read more about it here
In addition we need to install the ‘webpack-merge’ library
npm i -D webpack-merge
Let’s try again, now it is working!
Navigate to And you will see the hello_angular app.
Using a different file for html
One of the things that I like in angular 2+ components is the division of code (ts file), html and style (scss in our example) to different files.
I will start with taking out the template from the app.component.ts, into an html file.
First we will write our html file app/javascript/hello_angular/app/app.component.html
<h1>Hello {{name}}</h1>
There are couple of things that we need to do in order to allow it. The first is to add html loader to webpack so it will know what to do with the html file. I will do it in config/webpack/environment.js :
const { environment } = require('@Rails/webpacker') environment.loaders.set('html', { test: /\.html$/, exclude: /node_modules/, loaders: ['html-loader'] }) module.exports = environment
And install the loader:
npm i -D html-loader
As you can see, webpacker lets you add loaders to the configuration without defining a custom module and merge. More details can be found here.
To complete this I will add html extension to webpacker.yml:
- .html
Second, we need to require this file in order that we can use it. It is not so simple in TypeScript. First we need to declare it as module (of type ‘html’) and then import it and use it.
I will add a declaration file app/javascript/hello_angular/html.d.ts :
declare module "*.html" { const content: string export default content }
And then I will change app/javascript/hello_angular/app/app.component.ts:
import { Component } thing in order from '@angular/core'; import templateString from './app.component.html' @Component({ selector: 'hello-angular', template: templateString, }) export class AppComponent { name = 'Angular'; }
You can read more about it here.
Notice that unlike the Angular-cli, here I’m using “template” instead of “templateUrl”, and serve it as a string.
Using a different file for style
I will do a pretty similar things for the style form. I’ll start by creating a scss file,
app.component.scss:
h1 { color: red; }
I’ll add a module declaration for scss and webpack loders:
app/javascript/hello_angular/scss.d.ts:
declare module "*.scss" { const content: string export default content }
Add the loaders to config/webpack/environment.js:
const { environment } = require('@Rails/webpacker') environment.loaders.set('html', { test: /\.html$/, exclude: /node_modules/, loaders: ['html-loader'] }) environment.loaders.set('style', { test: /\.(scss|sass|css)$/, use: [{ loader: "to-string-loader" }, { loader: "css-loader" }, { loader: "resolve-url-loader" }, { loader: "sass-loader" }] }) module.exports = environment
Install them:
npm i -D to-string-loader css-loader resolve-url-loader sass-loader
Import the scss file and use it in app.component.ts:
import { Component } from '@angular/core'; import templateString from './app.component.html' import styleString from './app.component.scss'; @Component({ selector: 'hello-angular', template: templateString, styles: [ styleString ] }) export class AppComponent { name = 'Angular'; }
Again, I use “styles” instead of “styleUrl”.
And we have a style!
Adding a server call
Now I’ll add a server call so we will see tht there is no need for url specification, angular will call its server.
I’ll start by adding an endpoint to my Rails controller that returns a new name:
app/controllers/hello_angular_controller.rb:
class HelloAngularController < ApplicationController def index; end def name name = %w[Jack Smith Sara Linda Josh Amitai].sample render json: { name: name } end end
Add to routes.rb:
Rails.application.routes.draw do root 'hello_angular#index' get 'hello_angular/index' get 'hello_angular/name' end
Then I’ll add HttpClient to angular, call it from a button, and replace the name. { }
App.component.html:
<h1>Hello {{name}}</h1> <button (click)="changeName()">Change Name!</button>
App.component.ts:
import { Component } from '@angular/core'; import {HttpClient} from '@angular/common/http'; import templateString from './app.component.html' import styleString from './app.component.scss'; @Component({ selector: 'hello-angular', template: templateString, styles: [ styleString ] }) export class AppComponent { name = 'Angular'; constructor(private http: HttpClient){} changeName() { this.http.get('/hello_angular/name').subscribe(data => { this.name = data['name']; }); } }
That’s all!
Running all together in one command:
Create a Procfile.dev file:
web: bundle exec Rails s webpacker: ./bin/webpack-dev-server
Add forman to Gemfile:
gem 'foreman'
And then you can run the command:
bundle exec foreman start -f Procfile.dev
The server address is
Deploying to heroku
remove 'sqlite3' gem and add 'pg' gem in the Gemfile:
gem ‘pg’
Create a new app in Heroku, provide a postgresql and push. Heroku will build webpack and run it.
Conclusion
The JavaScript development has changed in the last few years. We need utilities like webpack for using modern framework like React and Angular.
Until the introduction of webpacker, Rails didn’t have a clear way of how to combine them. Now we can use them together and enjoy developing in Rails and modern JavaScript framework.
It is not smooth yet, and there are still some wiring and configuration that need to be done in order to make it work. I hope that it will be fixed, so it would not be necessary in the future.
You can find all the code in
Happy coding! | https://www.spectory.com/blog/Angular%202%20with%20Rails%20and%20Webpacker | CC-MAIN-2019-18 | en | refinedweb |
Details
- Type:
Improvement
- Status: Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: 0.19.0
-
- Component/s: benchmarks
- Labels:None
Description
The goal of this issue is to measure how the name-node performance depends on where the edits log is written to.
Three types of the journal storage should be evaluated:
- local hard drive;
- remote drive mounted via nfs;
- nfs filer.
Issue Links
- is related to
HADOOP-4029 NameNode should report status and performance for each replica of image and log
- Closed
Activity
- All
- Work Log
- History
- Activity
- Transitions
I benchmarked three operations: create, rename, and delete using NNThroughputBenchmark, which is a pure name-node benchmark. It calls the name-node methods directly without using the rpc protocol. So the rpc overhead is not included in these results, and should be measured separately say with synthetic load generator.
In a sense these benchmarks determine an upper bound for the HDFS operations, namely the maximum throughput the name-node can sustain under heavy load.
Each run starts with an empty files system and performs 1 million operations handled by 256 threads on the name-node. The output is the throughput that is the number of operation per second, which is calculated as 1,000,000/(tE-tB), where tB is when the first thread starts, and tE is when all threads stop. The threads run in parallel.
Creates create empty files and do not close them. Renames change file names, but do not move them.
All test results are consistent except for one distortion in deletes on a remote drive, which is way out of the expected range. Don't know what that is, one day they were good the other not.
Each test consists of 1,000,000 operations performed using 256 threads.
Result is in ops/sec.
Some conclusions:
- Local drive is faster than nfs, and
- nfs filer is faster than a remote drive;
- but the difference between nfs storage and local drives is very slim, only 2-3%.
- Using 4 local drives instead of 1 degrades the performance by only 9%, even though we write onto the drives sequentially (one after another).
It would be fair to say that there is some parallelism in writing, since current code batches writes first and then synchs them at once in larges chunks. So while the writes are sequential the synchs are parallel.
- Opens (getBlockLocation()) are 22 times faster than creates,
- which means journaling is the real bottleneck for the name-node operations,
- and the lack of fine-grained locking in the namespace data-structures is not a problem so far. Otherwise, the throughputs for opens and other operations would be characterized by the same or at least close numbers.
- Further optimization of the name-node performance imo should be focused around efficient journaling.
Another set of statistical data, which characterizes the actual load on the name-node on some of our clusters. Unfortunately, the statistics for open is broken, and we do not collect stats for renames. So I can only present creates and deletes. Please contribute if somebody has more data.
- These numbers show that the actual peak load for creates is about 40 times lower than the name-node can handle, and 3 times lower for deletes. On average the picture is even more drastic.
The name-node processing capability is 400-500 times higher than the actual average load on it.
+1 overall. Here are the results of testing the latest attachment
against trunk revision 6808.
NFS is a black art: when doing benchmarks such as these, implementation matters. Are we using NFSv2? v3? v4? UDP or TCP? What is the rwsize set to? What is the server side and what is the client side? What about TCP/IP tuning?
You probably know that better than I do. But the point of the benchmarking was to compare nfs vs local drives.
There was a suspicion that ios to nfs are substantially slower then to local drives, and it turned out to be pretty much the same.
It would even better of course if we could fine tune nfs.
I edited a typo in the formula explaining throughput:
– 1,000,000/(tE-tE)
+ 1,000,000/(tE-tB)
It looks as though it is not just the number of mutations but something else matters as well(may be amount of data written to edits log per mutation, cpu, or locking). That could explain large disparity between number creates, renames, and deletes though each of these is single mutation.
I just committed this.
Please feel free to comment, discuss the benchmark results.
Hi Konstantin,
Great analysis. I completely agree with you that coarse-grain locking for the namenode should not be impacting scalability of opens and creates. It is the disk sync times that really matters. BTW, when you ran the test on a single disk on local drive, did you see the disk max-out on IO? You said that 5710 creates occured, the limitation being CPU on the machine or disk IO contention?
Also, I had a patch
HADOOP-2330 that pre-allocated transaction log. If I had seen this JIRA earlier, i would have requested you to see if you could repeat the exact same test on the same hardware with this patch. This patch pre-allocates the transaction log in large chunks.
Dhruba,
For creates we definitely have disk IO contention not the cpu.
About H2330, Hairong tested it with her new synthetic load generator - very encouraging results.
Integrated in Hadoop-trunk #581 (See)
I am attaching a patch that was used for the benchmarks.
It extends NNThroughputBenchmark with new operations rename and delete as well as introduces additional command line options,
which control what the benchmarks do with generated files before and after the execution. | https://issues.apache.org/jira/browse/HADOOP-3860 | CC-MAIN-2017-26 | en | refinedweb |
Details
Description
The current QPID client addressing syntax provides a way to create and delete queue/topic resource on the qpidd broker "in band". For example:
$ QPID_LOAD_MODULE=amqpc.so ./spout --connection-options "{protocol:amqp1.0}
" "TestQ;{create:always,node:{type:queue}}"
$ qpid-stat -q
Queues
queue dur autoDel excl msg msgIn msgOut bytes bytesIn bytesOut cons bind
============================================================================
<...>
TestQ 1 1 0 65 65 0 0 1
This capability is not available when using the Messenger API.
Issue Links
- duplicates
PROTON-439 Support for dynamic reply-to address in Messenger
- Closed
Activity
- All
- Work Log
- History
- Activity
- Transitions
Just for completeness, the 1.0 protocol does define a mechanism by which the broker can be asked to create a queue (the 'dynamic' flag on source/target), however in that case the queue is named by the broker and not the application. This works well for 'temporary queues' e.g. as used in request-response patterns. However I share the view that the more general solution of on-demand creation is indeed better handled through broker configuration and have raised to add that to qpidd.
This is impossible to fix inside messenger because there is no way using the 1.0 protocol to ask the broker to create a queue. This particular scenario will be more generally handled by qpidd functionality to dynamically create nodes within certain configured namespaces. | https://issues.apache.org/jira/browse/PROTON-426 | CC-MAIN-2017-26 | en | refinedweb |
Hello, guys! Recently I read the book "Effective STL" written by S. Meyers and found a mention about rope data structure, which is implemented in some STL's versions. Briefly speaking this data structure can fastly insert arbitrary blocks of array to any position and erase them. So it's very similar to implicit cartesian tree (you can find some details in the wiki article mentioned above). It's used sometimes to handle a very long strings.
As it turned out, the rope is actually implemented in some versions of STL, for example, in SGI STL, and I should say that it's the most complete documentation of this class I've ever seen. And now let's find the rope in GNU C++. I used this task for testing. Here you should quickly move the block [l,r] to the beginning of the array 105 times and besides the size of array is not greater than 105:
#include <iostream> #include <cstdio> #include <ext/rope> //header with rope using namespace std; using namespace __gnu_cxx; //namespace with rope and some additional stuff int main() { ios_base::sync_with_stdio(false); rope <int> v; //use as usual STL container int n, m; cin >> n >> m; for(int i = 1; i <= n; ++i) v.push_back(i); //initialization int l, r; for(int i = 0; i < m; ++i) { cin >> l >> r; --l, --r; rope <int> cur = v.substr(l, r - l + 1); v.erase(l, r - l + 1); v.insert(v.mutable_begin(), cur); } for(rope <int>::iterator it = v.mutable_begin(); it != v.mutable_end(); ++it) cout << *it << " "; return 0; }
It works perfectly, but 2 times slower than the handwritten implicit cartesian tree, but uses less memory. As far as I can see, GNU C++ has the same implementation of rope as SGI STL. Visual C++ doesn't support the rope class.
There are several points, that you should know about rope in C++. From SGI STL's documentation it's known that the rope doesn't cope well with modifications of single elements (that's why begin() and end() return const_iterator. If you want to use classic iterators, you should call mutable_begin() and mutable_end().). But my tests show that it's pretty fast (about
). Simultaneously operator += performes for O(1) (Of course, if we don't consider the time needed to construct the object on the right side).
You can see some features with [ ] operator in the following code:. Since developers want to maintain the rope in permanent state, the operator [ ] returns const reference, but there is a special method to overcome it. This solution works with the same speed. Futhermore, I forgot to mention that all iterators are RandomAccess.
If you test this container, please, tell about your experience in comments. In my opinion, we got a pretty fast array with complexity
for all operations :) | http://codeforces.com/blog/entry/10355?locale=en | CC-MAIN-2017-26 | en | refinedweb |
An Introduction to Swift Playgrounds
Before introducing the Swift programming language in the chapters that follow, it is first worth learning about a feature of Xcode known as Playgrounds. Playgrounds are a feature introduced in Xcode 6 that make learning Swift and experimenting with the iOS SDK much easier. The concepts covered in this chapter can be put to use when experimenting with many of the introductory Swift code examples contained in the chapters that follow and will be of continued use in future when experimenting with many of the features of UIKit framework when designing dynamic user interfaces.
What is a Playground?
A playground is an interactive environment where Swift code can be entered and executed with the results appearing in real-time. This makes an ideal environment in which to learn the syntax of Swift and the visual aspects of iOS app development without the need to work continuously through the edit/compile/run/debug cycle that would ordinarily accompany a standard Xcode iOS project. With support for rich text comments, playgrounds are also a good way to document code as a teaching environment.
Creating a New Playground
To create a new Playground, start Xcode and select the Get started with a playground option from the welcome screen or select the File -> New -> Playground menu option. On the resulting options screen, name the playground LearnSwift and set the Platform menu to iOS. Click Next and choose a suitable file system location into which the playground should be saved.
Once the playground has been created, the following screen will appear ready for Swift code to be entered:
Figure 5-1
The panel on the left-hand side of the window (marked A in Figure 5 1) is the playground editor where the lines of Swift code are entered. The right-hand panel (marked B) is referred to as the results panel and is where the results of each Swift expression entered into the playground editor panel are displayed.
The cluster of three buttons at the right-hand side of the toolbar (marked C) are used to hide and display other panels within the playground window. The left most button displays the Navigator panel which provides access to the folders and files that make up the playground (marked A in Figure 5 2 below). The middle button, on the other hand, displays the Debug view (B) which displays code output and information about coding or runtime errors. The right most button displays the Utilities panel (C) where a variety of properties relating to the playground may be configured.
Figure 5-2
By far the quickest way to gain familiarity with the playground environment is to work through some simple examples.
A Basic Swift Playground Example
Perhaps the simplest of examples in any programming language (that at least does something tangible) is to write some code to output a single line of text. Swift is no exception to this rule so, within the playground window, begin by deleting the current Swift expression from the editor panel:
var str = “Hello, playground”
Next, enter a line of Swift code that reads as follows:
print("Welcome to Swift")
All that the code does is make a call to the built-in Swift print function which takes as a parameter a string of characters to be displayed on the console. Those familiar with other programming languages will note the absence of a semi-colon at the end of the line of code. In Swift, semi-colons are optional and generally only used as a separator when multiple statements occupy the same line of code.
Note that after entering the line of code, the results panel to the right of the editing panel is now showing the output from the print call as highlighted in Figure 5-3:
Figure 5-3
Viewing Results
Playgrounds are particularly useful when working and experimenting with Swift algorithms. This can be useful when combined with the Quick Look feature. Remaining within the playground editor, enter the following lines of code beneath the existing print statement:
var x = 10 for index in 1...20 { let y = index * x x -= 1 }
This expression repeats a loop 20 times, performing an arithmetic expression on each iteration of the loop. Once the code has been entered into the editor, the playground will execute the loop and display in the results panel the number of times the loop was performed. More interesting information, however, may be obtained by hovering the mouse pointer over the results line so that two additional buttons appear as shown in Figure 5-4:
Figure 5-4The left most of the two buttons is the Quick Look button which, when selected, will show a popup panel displaying the results as shown in Figure 5-5:
Figure 5-5
The right-most button is the Show Result button which, when selected, displays the results in-line with the code:
Figure 5-6
Enabling the Timeline SliderA useful tool when inspecting the results of a code sequence is the timeline slider. Switched off by default, the slider can be enabled by displaying the Utilities panel (Marked C in Figure 5-2) and enabling the Show Timeline check box as illustrated in Figure 5-7:
Figure 5-7
Once enabled, the timeline appears as a slider located along the bottom edge of the playground panel and can be moved to view the prevailing results at different points in the value history. Sliding it to the left, for example, will highlight and display the different values in the graph:
Figure 5-8
Clicking on the blue run button located to the left of the timeline slider will re-run the code within the playground.
Adding Rich Text Comments
Rich text comments allow the code within a playground to be documented in a way that is easy to format and read. A single line of text can be marked as being rich text by preceding it with a //: marker. For example:
//: This is a single line of documentation text
Blocks of text can be added by wrapping the text in /*: and */ comment markers:
/*: This is a block of documentation text that is intended to span multiple lines */
The rich text uses the Markdown markup language and allows text to be formatted using a lightweight and easy to use syntax. A heading, for example, can be declared by prefixing the line with a ‘#’ character while text is displayed in italics when wrapped in ‘*’ characters. Bold text, on the other hand, involves wrapping the text in ‘**’ character sequences. It is also possible to configure bullet points by prefixing each line with a single ‘*’. Among the many other features of Markdown are the ability to embed images and hyperlinks into the content of a rich text comment.
To see rich text comments in action, enter the following markdown content into the playground editor immediately after the print(“Welcome to Swift”) line of code:
/*: # Welcome to Playgrounds This is your *first* playground which is intented to demonstrate: * The use of **Quick Look** * Placing results **in-line** with the code */
As the comment content is added it is said to be displayed in raw markup format. To display in rendered markup format, select the Editor -> Show Rendered Markup menu option. Once rendered, the above rich text should appear as illustrated in Figure 5-9:
Figure 5-9
Detailed information about the Markdown syntax can be found online at the following URL:
Working with Playground Pages
A playground can consist of multiple pages, with each page containing its own code, resources and rich text comments. So far, the playground used in this chapter contains a single page. Add an additional page to the playground now by selecting the File -> New -> Playground Page menu option. Once added, click on the left most of the three view buttons (marked C in Figure 5 1) to display the Navigator panel. Note that two pages are now listed in the Navigator named “Untitled Page” and “Untitled Page 2”. Select and then click a second time on the “Untitled Page 2” entry so that the name becomes editable and change the name to UIKit Examples as outlined in Figure 5-10:
Figure 5-10
Note that the newly added page has Markdown links which, when clicked, navigate to the previous or next page in the playground.
Working with UIKit in Playgrounds
The playground environment is not restricted to simple Swift code statements. Much of the power of the iOS 10 SDK is also available for experimentation within a playground.
When working with UIKit within a playground page it is necessary to import the iOS UIKit Framework. The UIKit Framework contains most of the classes necessary to implement user interfaces for iOS applications and is an area which will be covered in significant detail throughout the book. An extremely powerful feature of playgrounds is that it is also possible to work with UIKit along with many of the other frameworks that comprise the iOS 10 SDK.
The following code, for example, imports the UIKit framework, creates a UILabel instance and sets color, text and font properties on it:
import UIKit let myLabel = UILabel(frame: CGRect(x: 0, y: 0, width: 200, height: 50)) myLabel.backgroundColor = UIColor.red myLabel.text = "Hello Swift" myLabel.textAlignment = .center myLabel.font = UIFont(name: "Georgia", size: 24) myLabel
Enter this code into the playground editor on the UIKit Examples page (the line importing the Foundation framework can be removed) and note that this is a good example of how the Quick Look feature can be useful. Each line of the example Swift code configures a different aspect of the appearance of the UILabel instance. Clicking on the Quick Look button for the first line of code will display an empty view (since the label exists but has yet to be given any visual attributes). Clicking on the Quick Look button in the line of code which sets the background color, on the other hand, will show the red label:
Figure 5-11
Similarly, the quick look view for the line where the text property is set will show the red label with the “Hello Swift” text left aligned:
Figure 5-12
The font setting quick look view on the other hand displays the UILabel with centered text and the larger Georgia font:
Figure 5-13
Adding Resources to a Playground
Another feature of playgrounds is the ability to bundle and access resources such as image files in a playground. Within the Navigator panel, click on the right facing arrow to the left of the UIKit Examples page entry to unfold the page contents (Figure 5-14) and note the presence of a folder named Resources:
Figure 5-14
If you have not already done so, download and unpack the code samples archive from the following URL:
Open a Finder window, navigate to the playground_images folder within the code samples folder and drag and drop the image file named waterfall.png onto the Resources folder beneath the UIKit Examples page in the Playground Navigator panel:
Figure 5-15
With the image added to the resources, add code to the page to create an image object and display the waterfall image on it:
let image = UIImage(named: "waterfall")
With the code added, use the Quick Look option to view the results of the code:
Figure 5-16
Working with Enhanced Live Views
So far in this chapter, all of the UIKit examples have involved presenting static user interface elements using the Quick Look and in-line features. It is, however, also possible to test dynamic user interface behavior within a playground using the Xcode Enhanced Live Views feature. To demonstrate live views in action, create a new page within the playground named Live View Example. Within the newly added page, remove the existing lines of Swift code before adding import statements for the UIKit framework and an additional playground module named PlaygroundSupport:
import UIKit import PlaygroundSupport
The XCPlayground module provides a number of useful features for playgrounds including the ability to present a live view within the playground timeline. Beneath the import statements, add the following code:
import UIKit import PlaygroundSupport let container = UIView(frame: CGRect(x: 0,y: 0,width: 200,height: 200)) })
The code creates a UIView object to act as a container view and assigns it a white background color. A smaller view is then drawn positioned in the center of the container view and colored red. The second view is then added as a child of the container view. An animation is then used to change the color of the smaller view to blue and to rotate it through 360 degrees. If you are new to iOS programming rest assured that these areas will be covered in detail in later chapters. At this point the code is simply provided to highlight the capabilities of live views.
Clicking on any of the Quick Look buttons will show a snapshot of the views at each stage in the code sequence. None of the quick look views, however, show the dynamic animation. To see how the animation code works it will be necessary to use the live view playground feature.
The PlaygroundSupport module includes a class named PlaygroundPage that allows playground code to interact with the pages that make up a playground. This is achieved through a range of methods and properties of the class, one of which is the current property. This property, in turn, provides access to the current playground page. In order to execute the live view within the playground timeline, the liveView property of the current page needs to be set to our new container. To display the timeline, click on the toolbar button containing the interlocking circles as highlighted in Figure 5-17:
Figure 5-17When clicked, this button displays the Assistant Editor panel containing the timeline. Once the timeline is visible, add the code to assign the container to the live view of the current page as follows:
import UIKit import PlaygroundSupport let container = UIView(frame: CGRect(x: 0,y: 0,width: 200,height: 200)) PlaygroundPage.current.liveView = container })
Once the call has been added, the views should appear in the timeline (Figure 5-18). During the 5 second animation duration, the red square should rotate through 360 degrees while gradually changing color to blue:
Figure 5-18
To repeat the execution of the code in the playground page, select the Editor -> Execute Playground menu option or click on the blue run button located next to the timeline slider. If the square stop button is currently displayed in place of the run button, click on it to stop execution and redisplay the run button. The different stages of the animation may also be viewed by moving the timeline slider located along the bottom edge of the playground window. Since the animation only lasts 5 seconds the length of time covered by the slider may also be reduced to 5 seconds using the control located at the end of the slider:
Figure 5-19
When to Use Playgrounds
Clearly Swift Playgrounds provide an ideal environment for learning to program using the Swift programming language and the use of playgrounds in the Swift introductory chapters that follow is recommended.
It is also important to keep in mind that playgrounds will remain useful long after the basics of Swift have been learned and will become increasingly useful when moving on to more advanced areas of iOS development.
The iOS 10 SDK is a vast collection of frameworks and classes and it is not unusual for even experienced developers to need to experiment with unfamiliar aspects of iOS development before adding code to a project. Historically this has involved creating a temporary iOS Xcode project and then repeatedly looping through the somewhat cumbersome edit, compile, run cycle to arrive at a programming solution. Rather than fall into this habit, consider having a playground on standby to carry out experiments during your project development work.
Summary
This chapter has introduced the concept of playgrounds. Playgrounds provide an environment in which Swift code can be entered and the results of that code viewed dynamically. This provides an excellent environment both for learning the Swift programming language and for experimenting with many of the classes and APIs included in the iOS 10 SDK without the need to create Xcode projects and repeatedly edit, compile and run code. | http://www.techotopia.com/index.php/An_Introduction_to_Swift_Playgrounds | CC-MAIN-2017-26 | en | refinedweb |
Recently I programmed my new project - homemade ambilight. Ambilight is a backlight behind television. The light is the average of some pixels in the screen.
In order to get the colors for the ambilight, I needed fast screen capture. After some search, I heard about the front buffer. DirectX devices have this cool
property. It contains the actual screen image. It is faster than GDI. DirectX puts an image in the surface object and it's faster for processing than GDI's bitmap.
Your project must be STAThread. First you need the DirectX SDK. It contains all the libraries that you need. When you
have it downloaded and installed, add to your project the following references:
STAThread
If you can't find these in the list of references, then look for these libraries in "C:\Windows\Microsoft.NET\DirectX
for Managed Code" and "C:\Windows\Microsoft.NET".
In order to get Direct3D working, we have to add these lines into the app.config (if it doesn't exist, add a new file):
<startup useLegacyV2RuntimeActivationPolicy="true">
<supportedRuntime version="v4.0"
sku=".NETFramework,Version=v4.0,Profile=Client"/>
</startup>
In the supportedRuntime tag, change the version to the .NET version that you use.
supportedRuntime
Our new class DxScreenCapture will have functions for capturing screen. In order to get the front buffer, we need a DirectX device. It can be created from Form
or another Control. So our class must inherit from Form. Next, declare the device (of course, add using statements for DirectX too).
DxScreenCapture
Form
Control
using
public class DxScreenCapture : Form
{
Device d;
}
Next, let's setup the device.
public DxScreenCapture()
{
PresentParameters present_params = new PresentParameters();
present_params.Windowed = true;
present_params.SwapEffect = SwapEffect.Discard;
d = new Device(0, DeviceType.Hardware, this,
CreateFlags.SoftwareVertexProcessing, present_params);
}
The device renders images with hardware and processes vertexes by software. It's irrelevant for our project. Now we can access the front buffer!
This is the method for getting the print screen:
public Surface CaptureScreen()
{
Surface s = d.CreateOffscreenPlainSurface(Screen.PrimaryScreen.Bounds.Width,
Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
d.GetFrontBufferData(0, s);
return s;
}
Surface is a DirectX type of image. We can convert this to Bitmap, but it takes up much time. Locking the pixels of the surface is fast, so for processing it is OK.
Surface
Bitmap
First, the method creates a new surface. Next, we get the front buffer from the device to the surface and then return the surface. The class for capturing is ready. Isn't this easy?
This method is fast, but if you save images to the hard drive, it takes much time. It isn't the best way to record the video of the screen. If you want to get a screen
capture, you can use normal print screen from Graphics.
Graphics
Saving and viewing are slow, but capturing is fast. So if you have an application that needs some pixels or average, this is the best way: simple and fast.
This solution I found when I wrote my homemade ambilight driver. This project needed the average of colors of the screen's edges and it had to refresh a minimum of ten times
per second. Maybe, GDI would have sufficed, but it charged the system.
The example "Colors average" is a part of my ambilight. It works very fast. First, I calculate the positions of pixels in the locked pixels stream:
Collection<long> tlPos = new Collection<long>();
Collection<long> tPos = new Collection<long>();
Collection<long> trPos = new Collection<long>();
Collection<long> lPos = new Collection<long>();
Collection<long> rPos = new Collection<long>();
Collection<long> blPos = new Collection<long>();
Collection<long> bPos = new Collection<long>();
Collection<long> brPos = new Collection<long>();
int o = 20;
int m = 8;
int sx = Screen.PrimaryScreen.Bounds.Width - m;
int sy = Screen.PrimaryScreen.Bounds.Height - m;
int bx = (sx - m) / 3 + m;
int by = (sy - m) / 3 + m;
int bx2 = (sx - m) * 2 / 3 + m;
int by2 = (sy - m) * 2 / 3 + m;
long x, y;
long pos;
y = m;
for (x = m; x < sx; x += o)
{
pos = (y * Screen.PrimaryScreen.Bounds.Width + x) * Bpp;
if (x < bx)
tlPos.Add(pos);
else if (x > bx && x < bx2)
tPos.Add(pos);
else if (x > bx2)
trPos.Add(pos);
}
y = sy;
for (x = m; x < sx; x += o)
{
pos = (y * Screen.PrimaryScreen.Bounds.Width + x) * Bpp;
if (x < bx)
blPos.Add(pos);
else if (x > bx && x < bx2)
bPos.Add(pos);
else if (x > bx2)
brPos.Add(pos);
}
x = m;
for (y = m + 1; y < sy - 1; y += o)
{
pos = (y * Screen.PrimaryScreen.Bounds.Width + x) * Bpp;
if (y < by)
tlPos.Add(pos);
else if (y > by && y < by2)
lPos.Add(pos);
else if (y > by2)
blPos.Add(pos);
}
x = sx;
for (y = m + 1; y < sy - 1; y += o)
{
pos = (y * Screen.PrimaryScreen.Bounds.Width + x) * Bpp;
if (y < by)
trPos.Add(pos);
else if (y > by && y < by2)
rPos.Add(pos);
else if (y > by2)
brPos.Add(pos);
}
I created a Calculate method and I raise it with each timer tick. I capture the screen and lock pixels. Locking pixels is converting the surface or
bitmap to a stream with pure pixel values. To read the stream, you must know its width and in which format it is saved. In DirectX, the format is specified when the
surface is created. In the CaptureScreen method, there is:
Calculate
CaptureScreen
Surface s = d.CreateOffscreenPlainSurface(Screen.PrimaryScreen.Bounds.Width,
Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
A8R8G8B8 is a 32-bit RGB format, where each pixel has one byte for alpha, one for red, one for green, and one byte for blue. In the stream, the first four bytes are the first pixel,
next 4 bytes are the second pixel, and so on. So in the Calculate method, I wrote:
Surface s = sc.CaptureScreen();
GraphicsStream gs = s.LockRectangle(LockFlags.None);
Next, I wrote the avcs method that reads the pixels specified in the table containing their locations and returns their average.
avcs
Color avcs(GraphicsStream gs, Collection<long> positions)
{
byte[] bu = new byte[4];
int r = 0;
int g = 0;
int b = 0;
int i = 0;
foreach (long pos in positions)
{
gs.Position = pos;
gs.Read(bu, 0, 4);
r += bu[2];
g += bu[1];
b += bu[0];
i++;
}
return Color.FromArgb(r / i, g / i, b / i);
}</long>
Finally, I set the colors to preview and dispose the objects:
topLeft.BackColor = avcs(gs, tlPos);
topRight.BackColor = avcs(gs, trPos);
bottomLeft.BackColor = avcs(gs, blPos);
bottomRight.BackColor = avcs(gs, brPos);
top.BackColor = avcs(gs, tPos);
bottom.BackColor = avcs(gs, bPos);
left.BackColor = avcs(gs, lPos);
right.BackColor = avcs(gs, rPos);
gs.Close();
gs.Dispose();
s.UnlockRectangle();
s.ReleaseGraphics();
s.Dispose();
If you run some video behind the example's window, you can see how fast this method is.
You can use the DirectX solution for all screen processing problems, if you don't view or save full image.
The Print Screen button captures the screen by GDI. This slow method is wrapped in System.Drawing. In order to get fast print screens for processing, not for
saving or viewing, DirectX is a better solution than GDI. A DirectX device has a front buffer which contains the rendered screen.
System. | http://www.codeproject.com/Articles/274461/Very-fast-screen-capture-using-DirectX-in-Csharp?msg=4064228 | CC-MAIN-2014-52 | en | refinedweb |
display transparent image on canvas
Image class of Python will load a transparent gif/png image just like a non-transparent one. But its blit() method accept a mask parameter ! The missing link is to create a mask automatically from the Image. It's typically the top-left pixel. You can use Image's getpixel() and combine them all as shown below.
Code
The automask function
def automask(im):
width, height = im.size # get image
Example usage
# import modules
from graphics import Image
import appuifw, e32
# define quit function
def quit():
a.signal()
appuifw.app.exit_key_handler = quit
# open image
img = Image.open('E:\\Images\\img.gif')
# create mask
mask = automask(img)
# create canvas on application body
appuifw.app.body = canvas = appuifw.Canvas()
# clear canvas with yellow color
canvas.clear(0xDBDB70)
# show transparent image
canvas.blit(img, mask=mask)
a = e32.Ao_lock()
a.wait() # wait for exit
Postconditions
Following screenshots show the result of above code.
30 Sep
2009
30 Sep
2009 | http://developer.nokia.com/community/wiki/Archived:How_to_display_transparent_image_on_canvas | CC-MAIN-2014-52 | en | refinedweb |
12 August 2010 21:47 [Source: ICIS news]
By Joseph Chang
?xml:namespace>
“While volume trends in the second half will not be as strong as in the first, we think year-over-year earnings comparisons will fare well,” said Hassan Ahmed, partner and head of research at US-based Alembic Global Advisors.
“The consensus forecast of drops of around 30% in second-half earnings from the first half is overly cynical,” he said. “Estimates will likely have to rise.”
Lower expected
Analysts already had a pessimistic view of second-half chemical sector earnings because of expected large amounts of olefins capacity coming on in the Middle East and
“But Q2 saw companies posting very strong numbers, and most management teams said they were not seeing much of a slowdown,” said Hassan.
“While there has been a steep correction in pricing in some areas, the prospects for healthy demand and margin expansion are good,” he added.
The analyst said he expects to see continuing strength in electronic materials, coatings and engineering plastics. | http://www.icis.com/Articles/2010/08/12/9384747/wall-street-too-cynical-in-h2-chem-profit-forecast-analyst.html | CC-MAIN-2014-52 | en | refinedweb |
Hi,
I am studying this peice of code used in bluej, it is part of the world of zuul game example but there is this part of it that I want to know what it does. what does the public static etc do,...
Hi,
I am studying this peice of code used in bluej, it is part of the world of zuul game example but there is this part of it that I want to know what it does. what does the public static etc do,...
I want to comment them like this //comment
but I do not know what to put, could you suggest for each one
Hi,
now it prints out this line which is what I want
******************************
** **
** Ticket **
** **...
hi,
again there is something wrong with my code it comes it with and error and highlights lottoticket.add(new Numbers());
no suitable method found for add(numbers)
import java.util.ArrayList;...
are you talking about does this peice of code executes
public Ticket(int numOfLines)
{
lottoticket = new ArrayList<Numbers>();
//needs a loop
for(int i=0; i<qty;...
when I create an instance of the ticket class and call the method print ticket, it comes up with this,
1699
this is my code
import java.util.ArrayList;
/**
* lucky dip lottery ticket...
I commented the code but lost it a long time ago and now forgot what comments to put in the code now
my tutor was like you should start to be able to comment on your code and I did when I started this but I have lost some of it
basically it is a Numbers class should provide the following...
hi a while back I started to put comments on the code and now I have lost some of them, could anyone help me comment this unfinshed peice of code
//access to the java.utill.random libary.
...
I was just asking what name suits most for my mock app development business, I have an idea, target market but i need to pitch the idea to people at the business fair
Hi
I have been asked for a project to think of a business idea and pitch at an event
the idea is an outsourcing app development which isn't new but I just need a name
the ones I have came up...
for our coursework we had to make a lottery random number ticket generator
The Numbers class should provide the following functionality:
Generates 6 random numbers in a range 1 to 49.
Write...
public class Month
{
private int [] months = {31,28,31,30,31,30,30,31,30,31,30,31,99};
private int[] fib = new int[10];
public Month()
{
}
hi,
I did but I do not know if some information is correct for what I need
Hi, I am new and only just found out this forum
im doing computing and business in uni and the way the lecture is teach us java allows me to work hard in the lesson but for all techniques and... | http://www.javaprogrammingforums.com/search.php?s=6f61e26d9d869ac3e049fab4ddea5b1b&searchid=1273251 | CC-MAIN-2014-52 | en | refinedweb |
why use typecast on NULL?
Discussion in 'C Programming' started by Xiaofeng Ye,
Why use "return (null);" instead of "return null;" ?Carl, Aug 21, 2006, in forum: Java
- Replies:
- 21
- Views:
- 1,025
- Patricia Shanahan
- Aug 24, typecast to void when call a function with no return valuesu, Jan 23, 2009, in forum: C Programming
- Replies:
- 3
- Views:
- 797
- CBFalconer
- Jan 23, 2009 | http://www.thecodingforums.com/threads/why-use-typecast-on-null.436161/ | CC-MAIN-2014-52 | en | refinedweb |
In this post I will describe how to create virtual views for ASP.NET MVC. These are views that do not exist as files in the regular place in the file system (~/Views/xxx/yyy.aspx), instead they are stored somewhere. Elsewhere in this case can be a database or perhaps a .ZIP file. This is all made possible by the VirtualPathProvider class (and some supporting classes) in ASP.NET. While the example will use MVC framework and views, the class provides much more. With it you can virtualize any web content (css, gif/jpg, js, aspx, etc) and it can be used in any ASP.NET application, nut just MVC.
Use cases
When would you need such a virtual view system?
If you want users to be able to customize Views, but without them having access to the Web solution or source code. In this case the user could upload the View into the database, where it is stored. When the application wants to display the view, instead of reading the view source from the file system it will be read from the database. MVC will not know the difference and believes it is found under the regular ~/Views/... path.
Another possibility would be to use a ZIP file based virtual path provider. Here the user could create a custom skin by creating or customizing views (.ascx, .aspx) and adding new content (css, images). The user would then pack it into a ZIP, upload into the server, and server would serve the custom skin directly from the .ZIP file. This way with multiple skins installed the file system would not be polluted by a vast amount of files.
There are of course many more possibilities I could think of, but I hope you get the point
Getting ready
Before we begin, lets create a simple database table that will store the data for our example. It is very badly designed I have to admit
I called the table Pages and the idea is that it will serve dynamic pages into my MVC application. Kind of like a very simple CMS. However, I want to customize the view layout (.aspx), and not just the data itself. So I ended up with a table like this:
The table contains the data for the page (Body, Title) and the name/virtual path of the view (ViewName) and of course the view file itself (ViewData, uploaded as binary). Here is an example row. I uploaded a simple .ASPX view into the row.
In my example solution I added a LINQ to SQL .dbml to the solution (named MyMvcVp), dragged and dropped my table into the designer. I could then use the generated data context to access the database.
Controller
Next is my PagesController.cs that will serve our dynamic content pages. To display such a page I decided to use default route already available in the MVC template: /controller/action/id. I added a single Display action to the controller which will get the id parameter as a string value. This will have to correspond to the PageId field in the database. For example: /Pages/Display/d9b07a02-7c47-41d9-8c21-bf546841bb6c. The resulting code follows:
public class PagesController : Controller { public ActionResult Display ( string id ) { Guid guid = new Guid ( id ); MyMvcVpDataContext context = new MyMvcVpDataContext (); var res = (from p in context.Pages where p.PageId == guid select p).SingleOrDefault (); if ( res == null ) { return RedirectToAction ( "Index", "Home" ); } ViewData["Title"] = res.Title; ViewData["Body"] = res.Body; return View ( System.IO.Path.GetFileNameWithoutExtension ( res.ViewName ) ); } }
The code is very simple: it uses LINQ to look for the page, and if found, we extract the data from it, and instruct MVC to show it using our ViewName. To make the example very simple I included the virtual path in the database column, so here I use GetFileNameWithoutExtension() to get just the View name.
Because I wanted to keep the example simple I decided not to support virtual folders (which is also possible). So the folder where the virtual files seem to reside needed to be created also. I created a new Pages folder under /Views in the solution. If you implement virtual folders than this step could be left out.
At this point we still miss the actual VirtualPathProvider implementation. Without that you will get an error message when accessing this action, because MVC will not find the View.
VirtualPathProvider
So I added a new file MyVirtualPathProvider.cs to the solution. This contains the MyVirtualPathProvider class, that derives from VirtualPathProvider. I overrided two methods, FileExists() and GetFile(). Both get the virtual path (application relative). The first checks if a file exists, and the second returns the file from the file system - in this case the database.
Here is the code I used. Note that Page in this case refers to the LINQ class that was generated for the Pages table, and not the ASP.NET Page class.
public class MyVirtualPathProvider : VirtualPathProvider { public override bool FileExists ( string virtualPath ) { var page = FindPage ( virtualPath ); if ( page == null ) { return base.FileExists ( virtualPath ); } else { return true; } } public override VirtualFile GetFile ( string virtualPath ) { var page = FindPage ( virtualPath ); if ( page == null ) { return base.GetFile ( virtualPath ); } else { return new MyVirtualFile ( virtualPath, page.ViewData.ToArray () ); } } private Page FindPage ( string virtualPath ) { MyMvcVpDataContext context = new MyMvcVpDataContext (); var page = (from p in context.Pages where p.ViewName == virtualPath select p).SingleOrDefault (); return page; } }
As you can see if I don't find the virtual path in question from the database, the base implementation is called. That will just try to look up the file from the regular file system location.
You should notice that I used a class that I did not yet talk about: MyVirtualFile. This is just a simple implementation of the VirtualFile abstract class. Its purpose is to return a Stream where the file can be read. In my example I will just return a MemoryStream using the data from the database table. But this is where you would read the file from the actual place and pass it back to the framework.
public class MyVirtualFile : VirtualFile { private byte[] data; public MyVirtualFile ( string virtualPath, byte[] data ) : base ( virtualPath ) { this.data = data; } public override System.IO.Stream Open () { return new MemoryStream ( data ); } }
If you wanted to support virtual directories you would also override the DirectoryExists() and GetDirectory() methods from the VirtualPathProvider. You would also need a custom VirtualDirectory derived class.
Registering the custom VirtualPathProvider
As a last step the MyVirtualPathProvider must be registered with ASP.NET. To accomplish this you need to add the following call to the application:
HostingEnvironment.RegisterVirtualPathProvider ( new MyVirtualPathProvider () );
The best place is probably in the Init() override of the application class or perhaps the Application_Start event handler (global.asax, unless you coded a custom application class somewhere in which case that would be the ideal place).
Conclusion
Virtual files and directories can be used to extend an existing MVC application (or a Web Forms application) by allowing you to store files in a place that is different from the regular file system. Be it a database, ZIP file or a remote network location, by using VirtualPathProvider the rest of the application will not have to know anything about the actual storage strategy.
@ Wednesday, 24 September 2014 21:00 | http://blog.rebuildall.net/2009/11/17/ASP_NET_MVC_and_virtual_views | CC-MAIN-2014-52 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.