#include <gromacs/utility/allocator.h>
Inherits AllocationPolicy.
Policy-based memory allocator.
This class can be used for the optional allocator template parameter in standard library containers. It must be configured with both the type of object to allocate, and an AllocationPolicy which effectively wraps a matching pair of malloc and free functions. This permits implementing a family of related allocators e.g. with SIMD alignment, GPU host-side page locking, or perhaps both, in a way that preserves a common programming interface and duplicates minimal code.
AllocationPolicy is used as a base class, so that if AllocationPolicy is stateless, the empty base optimization will ensure that the Allocator is also stateless, and objects made with the Allocator will incur no size penalty. (Embedding an AllocationPolicy object as a member would always incur a size penalty, even if the object is empty.) Normally a stateless allocator will be used.
However, an AllocationPolicy with state might be desirable to simplify writing code that needs to allocate memory suitable for a transfer to a GPU. Such code needs to specify an Allocator that can do the right job, which could be stateless. However, if the code will not know until run time whether a GPU transfer will occur, then the allocator needs to carry that state, which increases the size of any container that uses the stateful allocator.
Constructor.
The default constructor is not auto-generated in the presence of any user-defined constructor, but we still want the default constructor.
Constructor to accept an AllocationPolicy.
This is useful for AllocationPolicies with state.
Do the actual memory allocation.
Release memory.
Return true if two allocators are different.
This is a member function of the left-hand-side allocator.
Return true if two allocators are identical.
This is a member function of the left-hand-side allocator. Always true for stateless policies. Has to be defined in the policy for stateful policies. FUTURE: Can be removed with C++17 (is_always_equal)
dct 0.0.4
package runner for dart
Use this package as a library
Depend on it
Run this command:
With Dart:
$ dart pub add dct
With Flutter:
$ flutter pub add dct
This will add a line like this to your package's pubspec.yaml (and run an implicit
dart pub get):
dependencies: dct: ^0.0.4
Alternatively, your editor might support
dart pub get or
flutter pub get.
Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:dct/download.dart';
This Tutorial Explains What is Stack in Java, Java Stack Class, Stack API Methods, Stack Implementation using Array & Linked List with the help of Examples:
A stack is an ordered data structure belonging to the Java Collection Framework. In this collection, the elements are added and removed from one end only. The end at which the elements are added and removed is called “Top of the Stack”.
As addition and deletion are done only at one end, the first element added to the stack happens to be the last element removed from it. Thus, the stack is called a LIFO (Last-In, First-Out) data structure.
=> Take A Look At The Java Beginners Guide Here
What You Will Learn:
- Java Stack Collection
- Stack Class In Java
- Create A Stack In Java
- Stack API Methods In Java
- Stack Size
- Print / Iterate Stack Elements
- Stack Using Java 8
- Stack Implementation In Java
- Frequently Asked Questions
- Conclusion
Java Stack Collection
A pictorial representation of the stack is given below.
As shown in the above sequence of representation, initially the stack is empty and the top of the stack is set to -1. Then we initiate a “push” operation that is used to add an element to the stack.
So in the second representation, we push element 10. At this point, the top is incremented. We then push element 20 onto the stack, incrementing the top further.
In the last representation, we initiate a “pop” operation. This operation is used to remove an element from the stack. An element currently pointed to ‘Top’ is removed by the pop operation.
A stack data structure supports the following operations:
- Push: Adds an element to the stack. As a result, the value of the top is incremented.
- Pop: An element is removed from the stack. After the pop operation, the value of the top is decremented.
- Peek: This operation is used to look up or search for an element. The value of the top is not modified.
The top of the stack (the end used to add and remove elements) takes different values depending on the state of the stack. If the capacity of the stack is N, the top will have the following values:

- Top = -1: the stack is empty.
- Top = 0: the stack contains a single element.
- Top = N-1: the stack is full.
- Pushing when Top = N-1 causes a stack overflow; popping when Top = -1 causes a stack underflow.
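To make the LIFO behavior concrete, here is a minimal sketch using the java.util.Stack class introduced in the next section; elements come back out in the reverse of their insertion order:

```java
import java.util.Stack;

public class LifoDemo {
    public static void main(String[] args) {
        Stack<Integer> stack = new Stack<>();
        // Push 10, 20, 30; the top moves up with each push
        stack.push(10);
        stack.push(20);
        stack.push(30);
        // Pop until empty; the last element pushed is the first popped
        while (!stack.isEmpty()) {
            System.out.print(stack.pop() + " "); // prints: 30 20 10
        }
    }
}
```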
Stack Class In Java
Java Collection Framework provides a class named “Stack”. This Stack class extends the Vector class and implements the functionality of the Stack data structure.
The below diagram shows the hierarchy of the Stack class.
As shown in the above diagram, the Stack class inherits the Vector class which in turn implements the List Interface of Collection interface.
The Stack class is a part of java.util package. To include Stack class in the program, we can use the import statement as follows.
import java.util.*;
or
import java.util.Stack;
Create A Stack In Java
Once we import the Stack class, we can create a Stack object as shown below:
Stack mystack = new Stack();
We can also create a generic type of Stack class object as follows:
Stack<data_type> myStack = new Stack<data_type>;
Here data_type can be any valid data type in Java.
For example, we can create the following Stack class objects.
Stack<Integer> stack_obj = new Stack<>(); Stack<String> str_stack = new Stack<>();
Stack API Methods In Java
The Stack class provides methods to add, remove, and search data in the Stack. It also provides a method to check if the stack is empty. We will discuss these methods in the below section.
Stack Push Operation
The push operation is used to push or add elements into the stack. Once we create a stack instance, we can use the push operation to add the elements of the stack object type to the stack.
The following piece of code is used to initialize an integer stack with the values.
Stack<Integer> myStack = new Stack<>(); myStack.push(10); myStack.push(15); myStack.push(20);
The initial stack obtained as a result of the above piece of code execution is shown below:
If we perform another push() operation as shown below,
push(25);
The resultant stack will be:
Stack Pop Operation
We can remove the element from the stack using the “pop” operation. The element pointed by the Top at present is popped off the stack.
The following piece of code achieves this.
Stack<Integer> intStack = new Stack<>(); intStack.push(100); intStack.push(200); int val = intStack.pop();
The variable val will contain the value 200 as it was the last element pushed into the stack.
The stack representation for push and pop operation is as follows:
Stack Peek Operation
The peek operation returns the Top of the stack without removing the element. In the above stack example, “intStack.peek ()” will return 200.
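For completeness, a small runnable sketch of peek(), mirroring the pop example above; note that the stack contents are unchanged afterwards:

```java
import java.util.Stack;

public class PeekDemo {
    public static void main(String[] args) {
        Stack<Integer> intStack = new Stack<>();
        intStack.push(100);
        intStack.push(200);
        System.out.println(intStack.peek()); // prints 200, element not removed
        System.out.println(intStack.size()); // prints 2, stack is unchanged
    }
}
```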
Stack isEmpty Operation
The isEmpty () operation of the Stack class checks if the stack object is empty. It returns true if the Stack has no elements in it; otherwise, it returns false.
Stack Search Operation
We can search for an element on the stack using the search () operation. It returns the 1-based position of the element being searched for, counted from the top of the stack, or -1 if the element is not present.
Stack<Integer> intStack = new Stack<> (); intStack.push (100); intStack.push (200); int index = intStack.search(100); //index will have the value 2.
Stack Size
The size of the Stack object is given by the java.util.Stack.size () method. It returns the total number of elements in the stack.
The following example prints the stack size.
Stack<Integer> myStack = new Stack<Integer>(); myStack.push(100); myStack.push(200); myStack.push(300); System.out.println("Stack size:" + myStack.size()); //Stack size: 3
Print / Iterate Stack Elements
We can declare an iterator for the Stack and then traverse through the entire Stack using this iterator. This way we can visit and print each stack element one by one.
The following program shows the way to iterate Stack using an iterator.
import java.util.*; public class Main { public static void main(String[] args) { //declare and initialize a stack object Stack<String> stack = new Stack<String>(); stack.push("PUNE"); stack.push("MUMBAI"); stack.push("NASHIK"); System.out.println("Stack elements:"); //get an iterator for the stack Iterator iterator = stack.iterator(); //traverse the stack using iterator in a loop and print each element while(iterator.hasNext()){ System.out.print(iterator.next() + " "); } } }
Output:
Stack elements:
PUNE MUMBAI NASHIK
Stack Using Java 8
We can also print or traverse the stack elements using Java 8 features like Stream APIs, forEach, and forEachRemaining constructs.
The following program demonstrates the usage of Java 8 constructs to traverse through the stack.
import java.util.*; import java.util.stream.*; public class Main { public static void main(String[] args) { //declare and initialize a stack object Stack<String> stack = new Stack<String>(); stack.push("PUNE"); stack.push("MUMBAI"); stack.push("NASHIK"); System.out.println("Stack elements using Java 8 forEach:"); //get a stream for the stack Stream stream = stack.stream(); //traverse though each stream object using forEach construct of Java 8 stream.forEach((element) -> { System.out.print(element + " "); // print element }); System.out.println("\nStack elements using Java 8 forEachRemaining:"); //define an iterator for the stack Iterator<String> stackIterator = stack.iterator(); //use forEachRemaining construct to print each stack element stackIterator.forEachRemaining(val -> { System.out.print(val + " "); }); } }
Output:
Stack elements using Java 8 forEach:
PUNE MUMBAI NASHIK
Stack elements using Java 8 forEachRemaining:
PUNE MUMBAI NASHIK
Stack Implementation In Java
The following program implements the detailed stack demonstrating the various stack operations.
import java.util.Stack; public class Main { public static void main(String a[]){ //declare a stack object Stack<Integer> stack = new Stack<>(); //print initial stack System.out.println("Initial stack : " + stack); //isEmpty () System.out.println("Is stack Empty? : " + stack.isEmpty()); //push () operation stack.push(10); stack.push(20); stack.push(30); stack.push(40); //print non-empty stack System.out.println("Stack after push operation: " + stack); //pop () operation System.out.println("Element popped out:" + stack.pop()); System.out.println("Stack after Pop Operation : " + stack); //search () operation System.out.println("Element 10 found at position: " + stack.search(10)); System.out.println("Is Stack empty? : " + stack.isEmpty()); } }
Output:
Initial stack : []
Is stack Empty? : true
Stack after push operation: [10, 20, 30, 40]
Element popped out:40
Stack after Pop Operation : [10, 20, 30]
Element 10 found at position: 3
Is Stack empty? : false
Stack To Array In Java
The stack data structure can be converted to an Array using ‘toArray()’ method of the Stack class.
The following program demonstrates this conversion.
import java.util.*; import java.util.stream.*; public class Main { public static void main(String[] args) { //declare and initialize a stack object Stack<String> stack = new Stack<String>(); stack.push("PUNE"); stack.push("MUMBAI"); stack.push("NASHIK"); //print the stack System.out.println("The Stack contents: " + stack); // Create the array and use toArray() method to convert stack to array Object[] strArray = stack.toArray(); //print the array System.out.println("The Array contents:"); for (int j = 0; j < strArray.length; j++) System.out.print(strArray[j]+ " "); } }
Output:
The Stack contents: [PUNE, MUMBAI, NASHIK]
The Array contents:
PUNE MUMBAI NASHIK
Stack Implementation In Java Using Array
The stack can be implemented using an Array. All the stack operations are carried out using an array.
The below program demonstrates the Stack implementation using an array.
import java.util.*; //Stack class class Stack { int top; //define top of stack int maxsize = 5; //max size of the stack int[] stack_arry = new int[maxsize]; //define array that will hold stack elements Stack(){ //stack constructor; initially top = -1 top = -1; } boolean isEmpty(){ //isEmpty () method return (top < 0); } boolean push (int val){ //push () method if(top == maxsize-1) { System.out.println("Stack Overflow !!"); return false; } else { top++; stack_arry[top]=val; return true; } } boolean pop () { //pop () method if (top == -1) { System.out.println("Stack Underflow !!"); return false; } else { System.out.println("\nItem popped: " + stack_arry[top--]); return true; } } void display () { //print the stack elements System.out.println("Printing stack elements ....."); for(int i = top; i>=0;i--) { System.out.print(stack_arry[i] + " "); } } } public class Main { public static void main(String[] args) { //define a stack object Stack stck = new Stack(); System.out.println("Initial Stack Empty : " + stck.isEmpty()); //push elements stck.push(10); stck.push(20); stck.push(30); stck.push(40); System.out.println("After Push Operation..."); //print the elements stck.display(); //pop two elements from stack stck.pop(); stck.pop(); System.out.println("After Pop Operation..."); //print the stack again stck.display(); } }
Output:
Initial Stack Empty : true
After Push Operation…
Printing stack elements …..
40 30 20 10
Item popped: 40
Item popped: 30
After Pop Operation…
Printing stack elements …..
20 10
Stack Implementation Using Linked List
The stack can also be implemented using a linked list just like how we have done using arrays. One advantage of using a linked list for implementing stack is that it can grow or shrink dynamically. We need not have a maximum size restriction like in arrays.
The following program implements a linked list to perform stack operations.
import static java.lang.System.exit; // Stack class using LinkedList class Stack_Linkedlist { // Define Node of LinkedList private class Node { int data; // node data Node nlink; // Node link } // top of the stack Node top; // stack class Constructor Stack_Linkedlist() { this.top = null; } // push () operation public void push(int val) { // create a new node Node temp = new Node(); // checks if the stack is full if (temp == null) { System.out.print("\nStack Overflow"); return; } // assign val to node temp.data = val; // set top of the stack to node link temp.nlink = top; // update top top = temp; } // isEmpty () operation public boolean isEmpty() { return top == null; } // peek () operation public int peek() { // check if the stack is empty if (!isEmpty()) { return top.data; } else { System.out.println("Stack is empty!"); return -1; } } // pop () operation public void pop() { // check if stack is out of elements if (top == null) { System.out.print("\nStack Underflow!!"); return; } // set top to point to next node top = (top).nlink; } //print stack contents public void display() { // check for stack underflow if (top == null) { System.out.printf("\nStack Underflow!!"); exit(1); } else { Node temp = top; System.out.println("Stack elements:"); while (temp != null) { // print node data System.out.print(temp.data + "->"); // assign temp link to temp temp = temp.nlink; } } } } public class Main { public static void main(String[] args) { // Create a stack class object Stack_Linkedlist stack_obj = new Stack_Linkedlist(); // push values into the stack stack_obj.push(9); stack_obj.push(7); stack_obj.push(5); stack_obj.push(3); stack_obj.push(1); // print Stack elements stack_obj.display(); // print current stack top System.out.println("\nStack top : " + stack_obj.peek()); // Pop elements twice System.out.println("Pop two elements"); stack_obj.pop(); stack_obj.pop(); // print Stack elements stack_obj.display(); // print new stack top System.out.println("\nNew Stack top:" + stack_obj.peek()); } }
Output:
Stack elements:
1->3->5->7->9->
Stack top : 1
Pop two elements
Stack elements:
5->7->9->
New Stack top:5
Frequently Asked Questions
Q #1) What are Stacks in Java?
Answer: A stack is a LIFO (Last in, First out) data structure for storing elements. The stack elements are added or removed from the stack from one end called Top of the stack.
The addition of an element to the stack is done using the Push operation. The deletion of elements is done using pop operation. In Java, a stack is implemented using the Stack class.
Q #2) Is Stack a Collection in Java?
Answer: Yes. The stack is a legacy collection in Java that has been available from the Collection API since Java 1.0. Stack extends the Vector class, which in turn implements the List interface.
Q #3) Is Stack an Interface?
Answer: No. In Java, Stack is a concrete class that extends Vector; it is not an interface. Abstractly, a stack is simply a last-in, first-out structure (often used for storing the state of recursive problems), and modern Java code typically programs against the Deque<E> interface, which the Javadoc for Stack itself recommends in preference to the Stack class.
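Whatever the abstraction, the Deque interface is the recommended way to get LIFO behavior in current Java; a minimal sketch (this example is mine, not part of the original tutorial):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeAsStack {
    public static void main(String[] args) {
        // ArrayDeque used in LIFO mode behaves like a stack
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        System.out.println(stack.pop());  // prints 2
        System.out.println(stack.peek()); // prints 1
    }
}
```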
Q #4) What are Stacks used for?
Answer: Following are the main applications of the stack:
- Expression evaluation and conversions: Stack is used for converting expressions into postfix, infix, and prefix. It is also used to evaluate these expressions.
- The stack is also used for parsing syntax trees.
- The stack is used to check parentheses in an expression.
- The stack is used for solving backtracking problems.
- Function calls are evaluated using stacks.
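As an illustration of the parentheses-checking application mentioned above, here is a sketch (class and method names are my own) using the Stack class from this tutorial:

```java
import java.util.Stack;

public class ParenChecker {
    // Returns true if every (, [, { in the expression is matched
    // by the corresponding closing bracket in the right order.
    public static boolean isBalanced(String expr) {
        Stack<Character> stack = new Stack<>();
        for (char c : expr.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);
            } else if (c == ')' || c == ']' || c == '}') {
                if (stack.isEmpty()) return false; // closer with no opener
                char open = stack.pop();
                if ((c == ')' && open != '(') ||
                    (c == ']' && open != '[') ||
                    (c == '}' && open != '{')) {
                    return false; // mismatched bracket types
                }
            }
        }
        return stack.isEmpty(); // any leftover opener means imbalance
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("(a + b) * [c - d]")); // true
        System.out.println(isBalanced("(a + b]"));           // false
    }
}
```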
Q #5) What are the Advantages of the Stack?
Answer: Variables stored on the stack are destroyed automatically when the function returns. Stacks are a better choice when memory is allocated and deallocated frequently, and they clean up memory automatically. Apart from that, stacks can be used effectively to evaluate and parse expressions.
Conclusion
This completes our tutorial on Stacks in Java. Stack class is a part of the collection API and supports push, pop, peek, and search operations. The elements are added or removed to/from the stack at one end only. This end is called the top of the stack.
In this tutorial, we have seen all the methods supported by the stack class. We have also implemented the stack using arrays and linked lists.
We will proceed with other collection classes in our subsequent tutorials.
=> Read Through The Easy Java Training Series
A Quick introduction to Hadoop Hive on Azure and Querying Hive using LINQ in C#
Join the DZone community and get the full member experience.Join For Free
Earlier, in a couple of posts related to Hadoop on Azure - Analyzing some ‘Big Data’ using C# and Extracting Top 500 MSDN Links from Stack Overflow – I showed how to use C# Map Reduce Jobs with Hadoop Streaming to do some meaningful analytics.
Now, a preview version of the .NET SDK for Hadoop is available, making it easier to work with Hadoop from .NET – with more types for supporting Map Reduce jobs, for creating LINQ to Hive queries, etc. You can experiment with Hadoop and C# either by creating a cluster in Hadoop on Azure, or by obtaining Hadoop on your machine by installing Microsoft HDInsight using WebPI.
In case you are new to Hadoop on Azure, I suggest you read the introductory concepts here before you start. This post is just a quick example that shows how to use LINQ to Hive.
Hive is a data warehouse system for Hadoop that facilitates easy data summarization, ad-hoc queries, and the analysis of large datasets stored in Hadoop compatible file systems
Installing the libraries
To start with, you can fire up Visual Studio, Create a console project, and install Microsoft.Hadoop.Hive libraries via Nuget.
install-package Microsoft.Hadoop.Hive -pre
Also, head over to Hadoop on Azure and create a new cluster. And you are now set.
Creating the typed wrappers
To access Hive, you need to create a strongly typed wrapper – as of now, you need to roll your own, as there is no automated generation support. When you provision a Hadoop cluster, Hive will be pre-populated with a sample table (hivesampletable), and I'm using the same in the below example for brevity. You can connect to the Hive via ODBC and see the hive tables in Excel.
So, let us go ahead and create a hive connection (much like an EF data context) and a typed representation for a row in the table. HiveConnection and HiveTable types are in the Microsoft.Hadoop.Hive namespace.
//Our concrete hive connection public class SampleHiveConnection : HiveConnection { public SampleHiveConnection(string hostName, int port) : base(hostName, port, null, null) { } public SampleHiveConnection(string hostName, int port, string username, string password) : base(hostName, port, username, password) { } public HiveTable<DeviceInfo> DeviceInfoTable { get { return this.GetTable<DeviceInfo>("hivesampletable"); } } } //A typed row. Property names based on field names hivesampletable public class DeviceInfo : HiveRow { public string DevicePlatform { get; set; } public string DeviceMake { get; set; } public int ClientId { get; set; } }
Querying the Hive using LINQ
Now, you may perform LINQ queries against your Hive context, thanks to the Hadoop SDK we installed via Nuget. Just make sure to substitute the connection string, username and password with your own.
class Program { static void Main(string[] args) { //Create a hive connection //I've my cluster in var hive = new SampleHiveConnection( "saintcluster.cloudapp.net", //your connection string 10000, //port "user", //your username "yourpass"); //your password //Get the results //Make sure you goto the dashboard and turn on the ODBC port var res = from d in hive.DeviceInfoTable where d.ClientId < 100 select d; //Dump it to the console if you like var list = res.ToList(); } }

That is cool. Your LINQ query will be submitted to the Azure cluster via the ODBC driver, and will be compiled and executed in the Hive.
Published at DZone with permission of Anoop Madhusudanan, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
I was wondering if perhaps there was already a work-around for this, but: Can a mesh be substituted for the 'points array' in a GraphUpdateScene script?
I'm finding that when creating area for 'complex' geometry area (like a terrain contour), a change in that terrain means I have to completely re-create the GraphUpdateScene point array (adding or subtracting 1 point basically means I have to re-create the whole thing), so I was wondering if, like a NavmeshCut script, I could instead use pre-made mesh (which is also much more accurate / I could re-use the mesh I already made the the Cut script)?
I'm interested in using a NavmeshCut script with "IsDual" along with a GraphUpdateScene that modifies the node penalty, but since I have to re-make all the point arrays each time and area / agent changes shape, for the moment I'm just using 'cut' and hoping my agents don't wander out of their safe areas.
Hi
Sorry, there is no support for that at the moment. Note however that you can change the positions of the points after you have placed them the first time by simply selecting them in the scene view. You can also open the "Points" list in the inspector and duplicate any element to get a new point (you can duplicate array elements by right clicking on them and selecting duplicate).
You can if you want make a script that uses the navmesh cut contour by calling the NavmeshCut.GetCountour method. It will return a list of contours (navmesh cuts can contain multiple, you likely just want to use the first one) with the type ClipperLib.IntPoint, you can convert IntPoint to Vector3 by calling the NavmeshCut.IntPointToV3 function.
Did not realize the options with Right Click, great to know. Also, I'm sure I'm oversimplifying this (because I don't know all the considerations the GraphUpdateScene makes), but if you took a mesh that is already 'safe' in terms of its construction, could you not 'feed' those vertex points into the array? I don't know if the order would be preserved (I guess that would be a huge problem if there was no way to order them before insertion), just me thinking out loud.

Looking at your second message: interesting, I'll keep pursuing that approach as well.
Yeah, you cannot just feed a mesh's vertices to it, you would need to extract the contours like the navmesh cut script does.
Made a basic test version using ExecuteInEditMode, seems to work (not sure if I need the Claim and Release methods...do I?). Thank you very much for the help (I'm going to bed).
using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using Pathfinding;
using Pathfinding.ClipperLib;
[ExecuteInEditMode]
public class Pathfinding_GUSExtender : MonoBehaviour {
//extender to get NMC contours into the GUS point array
//
[Header ("Update: ")]
public bool updateThisScript = false;
[Header ("Refs: ")]
public NavmeshCut nmc;
public GraphUpdateScene gus;
private List<List<IntPoint>> ip = new List<List<IntPoint>>();
private Vector3[] tempArray;
void Update()
{
updateThisScript = false;
Debug.Log("update is updating 1...");
if(nmc != null && gus != null)
{
Debug.Log("update is updating 2...");
//don't seem to be needed, ask if needed:
//ip = Pathfinding.Util.ListPool<List<Pathfinding.ClipperLib.IntPoint>>.Claim();
nmc.GetContour(ip);
//List<Pathfinding.ClipperLib.IntPoint> cont = ip[0];
tempArray = new Vector3[ip[0].Count];
//according to him, we probably only want the 1st countour
for(int counter = 0; counter < ip[0].Count; counter++)
{
Debug.Log("working");
tempArray[counter] = NavmeshCut.IntPointToV3(ip[0][counter]);
}
gus.points = tempArray;
//don't seem to be needed, ask if needed:
//Pathfinding.Util.ListPool<List<Pathfinding.ClipperLib.IntPoint>>.Release(ip);
}
}
}
Nice that it's working.
If you don't clear the list the GetContour method will just keep adding contours to the "ip" list every time you call the method. ListPool is used to reuse lists to avoid excessive allocations, you can remove it if you want (see documentation page on pooling if you want to know more).
Qt Remote Objects
Remote Object Concepts
Qt Remote Objects (QtRO) is an inter-process communication module for Qt. Interactions with a proxy object (called a Replica in QtRO) are forwarded to the true object (called a Source in QtRO) for handling. Updates to the Source (either property changes or emitted Signals) are forwarded to every Replica.
A Replica is a light-weight proxy for the Source object, but one that supports the same connections and behavior of QObjects, which makes them as easy to use as any other QObject provided by Qt. Everything needed for the Replica to look like the Source object is handled behind the scenes by QtRO.
Note that Remote Objects behave differently from traditional remote procedure call (RPC) implementations. In RPC, the client makes a request and waits for the response. In RPC, the server does not push anything to the client unless it is in response to a request. The design of RPC is often such that different clients are independent of each other (for instance, two clients can ask a mapping service for directions and get different results). While it is possible to implement this in QtRO (as Source without properties, and Slots that have return values), it is designed more to hide the fact that the processing is really remote. You let a node give you the Replica instead of creating it yourself, possibly use the status signals (isReplicaValid()), but then interact with the object like you would with any other QObject-based type.
Related Information
Getting Started
To enable Qt Remote Objects in a project, add this directive into the C++ files:
#include <QtRemoteObjects>
To link against the Qt Remote Objects module, add this line to the project file:
QT += remoteobjects
Guides
- Overview Qt Remote Objects
- Qt Remote Objects C++ Classes
- Qt Remote Objects Nodes
- Qt Remote Objects Source Objects
- Qt Remote Objects Replica Objects
- Qt Remote Objects Registry
- Qt Remote Objects Compiler
- Remote Object Interaction
- Using Qt Remote Objects
- Troubleshooting Qt Remote Objects
Reference
Qt Remote Objects.
dotP = dotProduct(x , y, z, px, py, pz);
double hDistance = magnitude(x, y, z);
double pDistance = magnitude(px, py, pz);
cDistance = dotP / hDistance;
double theta = acos(dotP / (Distance * pDistance));
return ((theta <= 0.5 * ang) && (cDistance <= height)) && cDistance > 0;
What is Distance?
Thanks
With acos(dotP / (hDistance * pDistance)), the lit vertices do correspond to points that exist within the cone. But not all the vertices inside the cone, as seen from the viewpoint, are lit.
Are you sure my math is correct?
Should all my vectors be normalised? Or none at all if any?
Thanks
jamie
It shouldn't matter whether or not you normalise as long as you are consistent
is px, py, pz the point to be tested, and is the axis of the cone x , y, z with the apex of the cone at 0, 0, 0 ?
vec is a structure with .x, .y, and .z members.
apex is the location of the apex of the cone, the pointy bit.
axis is a unit vector (normalized so that magnitude == 1) pointing from the apex to the center of the base disc.
halfcossq is (cos(alpha/2))^2. If you precalculate this then you don't need to do trig for every single point tested.
p is the point we're testing to see if it's inside the cone.
boolean pointInCone(vec apex, vec axis, double height, double halfcossq,
vec p) {
vec pdiff = vec(p.x - apex.x, p.y - apex.y, p.z - apex.z);
double d= axis.x * pdiff.x + axis.y *pdiff.y + axis.z * pdiff.z;
if (d < 0 || d > height) return false; // above the apex or below the base
// we want to know if acos(d / magnitude(pdiff)) < alpha/2.
// applying cos reverses the sign of the comparison. No real improvement,
// since it trades a cos for an acos. However, it sets up other tradeoffs.
// d/magnitude(pdiff) > cos(alpha/2)
// this trades a multiply for a divide, a good deal
// d > cos(alpha/2) * magnitude(pdiff)
// this trades a multiply for a sqrt, another good deal
// d^2 > cos(alpha/2)^2 * magnitude(pdiff)^2
return (d*d > halfcossq * (pdiff.x*pdiff.x + pdiff.y*pdiff.y + pdiff.z*pdiff.z));
}
Given
e="eye position"
d="eye direction" (not necessarily normalized)
t="test point"
The first test is
(t-e) . (t-e)<=l^2 (note that "." is dot-product)
So your math is correct but your cdistance is, actually, the square of the distance (you might have to compare it with square of "height" - but I'm not sure what "height" is)
The second test is
(t-e) . d >= length(t-e) * cos(alpha/2)
You can divide this last test by length(t-e) and take the acos of both sides to get it in the same form you used (acos(A.B / |A||B|) < 0.5 * alpha), which is correct. However, I'd avoid using acos and prefer my form: with my form you can precompute cos(alpha/2) and reuse it for testing multiple points (with your form, you have to compute an acos for each test point).
So your "acos(A.B / |A||B|) < 0.5 * alpha" is correct (just take the acos of both sides). If d is not normalized, the second test becomes:

(t-e) . d >= length(t-e) * length(d) * cos(alpha/2)
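A sketch of those two tests in Java (my own naming; halfAngle is alpha/2 in radians and maxDist is the torch reach; drop the distance check if the cone is unbounded):

```java
public class ConeTest {
    // True if point t lies inside the cone with apex e, axis d (need not
    // be normalized), half-angle halfAngle, within maxDist of the apex.
    static boolean insideCone(double[] e, double[] d, double[] t,
                              double halfAngle, double maxDist) {
        double[] v = { t[0] - e[0], t[1] - e[1], t[2] - e[2] };
        double distSq = v[0]*v[0] + v[1]*v[1] + v[2]*v[2];
        // First test: (t-e).(t-e) <= l^2
        if (distSq > maxDist * maxDist) return false;
        double dot = v[0]*d[0] + v[1]*d[1] + v[2]*d[2];
        double dLen = Math.sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
        // Second test: (t-e).d >= |t-e| * |d| * cos(alpha/2)
        return dot >= Math.sqrt(distSq) * dLen * Math.cos(halfAngle);
    }

    public static void main(String[] args) {
        double[] eye  = { 0, 0, 0 };
        double[] look = { 0, 0, 1 };
        System.out.println(insideCone(eye, look, new double[]{0, 0, 5},
                Math.toRadians(30), 10)); // true: straight ahead
        System.out.println(insideCone(eye, look, new double[]{5, 0, 0},
                Math.toRadians(30), 10)); // false: 90 degrees off axis
    }
}
```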
Ozo: The picture was supposed to demonstrate a circular or jagged shaped conical shape on an area of terrain.
If my code were working right, it would check whether points in my view (indicated by the blue crosshair) are within distance; those points should then be illuminated, so it would look like a dark room where someone turns on a torch.
I was doing this with my code.
if (coneLighting && insideLightCone(vec[0], vec[1], vec[2], p1.x-eyex, p1.y-eyey, p1.z-eyez, ANG, coneLength))
Where vec was a normalised array of floats corresponding to a normalised lookat vector, p1 would be the test point, coneLength (or height, as I described it) would be the length of the cone from the apex to the center of the base, and ANG would be the angle between the two diagonal sides of the cone.
Somehow I think I incorrectly encoded what you said, I'll try again in a second.
Also I'm searching for a flashlight effect; if a sphere sector could produce a similar effect I would be happy to hear about it.
NovaDenizen
I tried your method and got a similar effect to Ozo's, so I am trying to track down errors and see if my code is buggering something up somewhere. But yes, I agree your form, although probably harder to read, is a lot more efficient than calling acos() for 10000 points.
Ibertacco
Yes, I basically just want to have a cone at my eye position and determine whether a point at a distance d away is inside the cone. Obviously this cone can be translated or rotated about the y and x axes to show movement, as if I look down at the floor or up to the skies.
You speak of two tests, does this refer to the similar math I describe?
Thanks for all your input, I'll have another look at what I have done and see if it's a stupid mistake I've made somewhere.
Neutrinohunter
Actually I'm not sure why you want to test for the distance if what you are looking for is a "torch"/spotlight effect (maybe the idea is that the torch can't get too far? - in this case the distance check I proposed is more appropriate than the finite cone idea, but doesn't really change much).
Still, in your code, next to last line (the "return" line), you should compare cdistance with square of height as in:
return ((theta <= 0.5 * ang) && (cDistance <= height*height));
Anyhow, you have all helped and I will give out the well earned points.
Try with this code (note that I'm assuming that (x,y,z) is normalized, as it seems from your code, correct?):
int insideLightCone(double x, double y, double z, double px, double py, double pz, double ang, double height) {
    double hsquared = height*height;
    double cosangle = cos(0.5*ang);
    cDistance = (eyex - px) * (eyex - px) + (eyey - py) * (eyey - py) + (eyez - pz) * (eyez - pz);
    // second test: projection of (testpoint - eye) onto the axis vs distance * cos(ang/2)
    return (cDistance <= hsquared
            && (x*(px - eyex) + y*(py - eyey) + z*(pz - eyez)) >= sqrt(cDistance)*cosangle);
}
As you can see, hsquared and cosangle don't change unless you change the cone size/angle, so you could avoid computing them for each point...
Also in your code, cDistance seems to be a global var? (otherwise it would be better to add a "double" in front of "cDistance ="). If you use cDistance in other parts of your code, be careful that it still is not really the distance but the square of it
Ozo: Basically I was doing
/* Ncenter# is the normalised look at vector
P1 is the TestPoint
eye# is the position of the observer
*/
if (insideLightCone(ncenterx, ncentery, ncenterz, p1.x-eyex, p1.y-eyey, p1.z-eyez, ANG, coneLength))
set colour of the test point to white i.e glColor3f(1.0,1.0,1.0);
else
set colour to black i.e glColor3f(0.0,0.0,0.0);
Ibertacco: I just get a black screen. Though I'm wondering why you are doing:
eyex - px, since px = testp.x - eyex, so basically I am getting back testp.x. If this was supposed to be the parameter x rather than eyex then I get an effect which is better. But the shape is only very small and doesn't change on movement of my lookat vector, which it is programmed to do.
So, now, my understanding is that:
x,y,z is the cone axis direction, normalized
px,py,pz is testpoint - eyeposition
ang, is the cone aperture angle
height is the light maximum distance
and the revised code is:
int insideLightCone(double x, double y, double z, double px, double py, double pz, double ang, double height) {
    double hsquared = height*height;
    double cosangle = cos(0.5*ang);
    cDistance = px*px + py*py + pz*pz;
    // second test: projection of the point onto the axis vs distance * cos(ang/2)
    return (cDistance <= hsquared
            && (x*px + y*py + z*pz) >= sqrt(cDistance)*cosangle);
}
I'm not sure how familiar you are with OpenGL, but the idea was to set the colour of a point to fully illuminated if it was inside the cone, and it seems that no point I have falls within the cone; however, that shouldn't be the case. My angle is around 60 degrees and my coneLength is around 30 (since my whole scene is of size (100, 20, 100) in the appropriate dimensions).
Thanks
neutrinohunter | https://www.experts-exchange.com/questions/22709639/Point-inside-a-Cone-Detection.html | CC-MAIN-2018-26 | en | refinedweb |
WinAPI + OpenGL: Change window style and/or resolution
Erik Rufelt replied to csisy's topic in General and Gameplay Programming
Is that FRAPS display always active? If so, try disabling it.. it probably overrides a bunch of GL calls to put its overlay there.
WinAPI + OpenGL: Change window style and/or resolution
Erik Rufelt replied to csisy's topic in General and Gameplay Programming
Try updating your graphics drivers if you don't have the newest stable one already.. From your images it seems clear that the OpenGL part stays right where it was in respect to the top-left corner of the window (not just the client-area).. whereas I guess the top left part that was the border before gets the background-color.. Could be a difference in what theme is used in Windows or if there are any window-helper plugins or something that causes the difference in behavior on different computers.. Try dragging another window on top of the window with the error and see if the OpenGL drawn part is properly updated.. I guess you call PumpMessages from a loop and always do glClear and Swap after that? You could try only calling DefWindowProc always and skip any viewport and resize handling etc as glClear and Swap don't care about the viewport anyway, and maybe add a Sleep(10) or something in the loop to avoid insane update-rates..
WinAPI + OpenGL: Change window style and/or resolution
Erik Rufelt replied to csisy's topic in General and Gameplay Programming
I don't get it.. does it not work? I ran your program and see nothing strange.. maybe a difference between Windows versions or something.. You should post a single-file source code we can run instead of an exe if possible. Anyway, I think your window styles look weird. Especially setting the style when going back to windowed mode.. save the style when you decide to switch to fullscreen and restore it to the saved style when going back to windowed.. removing the fullscreen style bits will break if the windowed mode has any of the same bits set. Many window-styles are combinations of several bits. Also it looks really complicated, too much code to combine the styles, very difficult to follow exactly what bits will remain in the end.. easy to miss things. I don't get making the window area the same either.. if you want a particular window-area in client-mode then handle that.. but what is the point of a borderless window that doesn't cover the screen? Also getting multiple WM_SIZE shouldn't matter, you could get that anyway if the window is resized, I don't think those messages are supposed to be relied on to send an exact size exactly once or things like that. I'm not sure a resize or especially a style-change is guaranteed to be in any way atomic with respect to the message queue.
WinAPI + OpenGL: Change window style and/or resolution
Erik Rufelt replied to csisy's topic in General and Gameplay Programming
Show your game loop with PeekMessage as well as the WndProc.
WinAPI + OpenGL: Change window style and/or resolution
Erik Rufelt replied to csisy's topic in General and Gameplay Programming
Depends, it can have WS_VISIBLE and others added.. also many styles commonly used when creating a window are actually combinations of several sub-styles.
WinAPI + OpenGL: Change window style and/or resolution
Erik Rufelt replied to csisy's topic in General and Gameplay Programming
I use this one, has some comments and TODOs that could be double-checked.. It has a check to (supposedly) only go to fullscreen on monitors connected to the primary GPU in case you have more than one, if not that part is unnecessary..

// Get device for monitor
static bool getDeviceForMonitor(const MONITORINFOEX &monitorInfo, LPDISPLAY_DEVICE pOutDevice) {
    DISPLAY_DEVICE displayDevice = {0};
    displayDevice.cb = sizeof(displayDevice);
    DWORD dwDevNum = 0;
    BOOL bRet = EnumDisplayDevices(NULL, dwDevNum, &displayDevice, EDD_GET_DEVICE_INTERFACE_NAME);
    while(bRet != 0) {
        if(wcscmp(monitorInfo.szDevice, displayDevice.DeviceName) == 0) {
            *pOutDevice = displayDevice;
            return true;
        }
        ++dwDevNum;
        memset(&displayDevice, 0, sizeof(displayDevice));
        displayDevice.cb = sizeof(displayDevice);
        bRet = EnumDisplayDevices(NULL, dwDevNum, &displayDevice, EDD_GET_DEVICE_INTERFACE_NAME);
    }
    return false;
}

// Toggle fullscreen
// TODO try SetWindowPlacement instead?
// TODO use maximized WS_POPUP to fill screen instead of manually setting rect? check methods...
static bool toggleFillscreen(HWND hWnd, bool setFillscreen) {
    bool resultIsFullscreen = setFillscreen;
    static LONG_PTR savedWindowStyle = 0;
    static LONG_PTR savedWindowExStyle = 0;
    static RECT savedWindowRect;
    if(setFillscreen) {
        resultIsFullscreen = false;
        int x = 0;
        int y = 0;
        int w = GetSystemMetrics(SM_CXSCREEN);
        int h = GetSystemMetrics(SM_CYSCREEN);
        HMONITOR hMonitor = MonitorFromWindow(hWnd, MONITOR_DEFAULTTONEAREST);
        HMONITOR hPrimaryMonitor = MonitorFromWindow(NULL, MONITOR_DEFAULTTOPRIMARY);
        if(hMonitor != NULL && hPrimaryMonitor != NULL) {
            BOOL bRet;
            bool switchToFullscreen = false;
            bool gotPrimary = false;
            bool gotTarget = false;
            MONITORINFOEX primaryInfo;
            ZeroMemory(&primaryInfo, sizeof(primaryInfo));
            primaryInfo.cbSize = sizeof(primaryInfo);
            bRet = GetMonitorInfo(hPrimaryMonitor, &primaryInfo);
            if(bRet != 0) {
                gotPrimary = true;
            }
            MONITORINFOEX monitorInfo;
            ZeroMemory(&monitorInfo, sizeof(monitorInfo));
            monitorInfo.cbSize = sizeof(monitorInfo);
            bRet = GetMonitorInfo(hMonitor, &monitorInfo);
            if(bRet != 0) {
                x = monitorInfo.rcMonitor.left;
                y = monitorInfo.rcMonitor.top;
                w = monitorInfo.rcMonitor.right - monitorInfo.rcMonitor.left;
                h = monitorInfo.rcMonitor.bottom - monitorInfo.rcMonitor.top;
                gotTarget = true;
            }
            if(gotTarget && gotPrimary) {
                if(wcscmp(primaryInfo.szDevice, monitorInfo.szDevice) == 0) {
                    switchToFullscreen = true;
                }
                else {
                    DISPLAY_DEVICE primaryDevice;
                    bool gotPrimaryDevice = getDeviceForMonitor(primaryInfo, &primaryDevice);
                    DISPLAY_DEVICE targetDevice;
                    bool gotTargetDevice = getDeviceForMonitor(monitorInfo, &targetDevice);
                    if(gotPrimaryDevice && gotTargetDevice) {
                        // always false for secondary monitors, even when connected the same GPU as the primary monitor..
                        //bool isPrimaryDevice = ((primaryDevice.StateFlags & DISPLAY_DEVICE_PRIMARY_DEVICE) != 0);
                        // according to docs for EDD_GET_DEVICE_INTERFACE_NAME DeviceID is supposed to contain 'something'..
                        // seems to always be empty
                        //if(wcscmp(primaryDevice.DeviceID, targetDevice.DeviceID) == 0)) {
                        //    switchToFullscreen = true;
                        //}
                        // seems these registry key paths are per physical GPU apart from the last component which is an index
                        // so this should probably match monitors on the same physical GPU
                        for(size_t i = wcslen(primaryDevice.DeviceKey); i > 0; --i) {
                            if(primaryDevice.DeviceKey[i - 1] == L'\\') break;
                            primaryDevice.DeviceKey[i - 1] = 0;
                        }
                        for(size_t i = wcslen(targetDevice.DeviceKey); i > 0; --i) {
                            if(targetDevice.DeviceKey[i - 1] == L'\\') break;
                            targetDevice.DeviceKey[i - 1] = 0;
                        }
                        if(wcscmp(primaryDevice.DeviceKey, targetDevice.DeviceKey) == 0) {
                            switchToFullscreen = true;
                        }
                        // this only compares the name of the graphics card, identical for 2 GPUs of the same type..
                        //if(wcscmp(primaryDevice.DeviceString, targetDevice.DeviceString) == 0) {
                        //    switchToFullscreen = true;
                        //}
                    }
                }
            }
            if(switchToFullscreen) {
                GetWindowRect(hWnd, &savedWindowRect);
                if(GetWindowLongPtr(hWnd, GWL_STYLE) & WS_MAXIMIZE) {
                    savedWindowStyle = SetWindowLongPtr(hWnd, GWL_STYLE, WS_CLIPCHILDREN | WS_CLIPSIBLINGS | WS_OVERLAPPED | WS_VISIBLE | WS_MAXIMIZE);
                    savedWindowExStyle = SetWindowLongPtr(hWnd, GWL_EXSTYLE, 0);
                }
                else {
                    savedWindowStyle = SetWindowLongPtr(hWnd, GWL_STYLE, WS_CLIPCHILDREN | WS_CLIPSIBLINGS | WS_OVERLAPPED | WS_VISIBLE);
                    savedWindowExStyle = SetWindowLongPtr(hWnd, GWL_EXSTYLE, 0);
                }
                SetWindowPos(hWnd, HWND_TOPMOST, x, y, w, h, SWP_FRAMECHANGED | SWP_DRAWFRAME);
                resultIsFullscreen = true;
            }
            else MessageBeep(MB_OK);
        }
    }
    else {
        SetWindowLongPtr(hWnd, GWL_STYLE, savedWindowStyle | WS_VISIBLE);
        SetWindowLongPtr(hWnd, GWL_EXSTYLE, savedWindowExStyle);
        HWND hWndInsertAfter = HWND_NOTOPMOST;
        if((savedWindowExStyle & WS_EX_TOPMOST) == WS_EX_TOPMOST)
            hWndInsertAfter = HWND_TOPMOST;
        SetWindowPos(
            hWnd, hWndInsertAfter,
            savedWindowRect.left, savedWindowRect.top,
            savedWindowRect.right - savedWindowRect.left,
            savedWindowRect.bottom - savedWindowRect.top,
            SWP_FRAMECHANGED | SWP_DRAWFRAME
        );
    }
    return resultIsFullscreen;
}

And use like this: (Won't work for more than one window in its current form as the toggleFillscreen function has statics to save the window placement..)

bool currentFullscreen = false;
...
if(toggleKeyPressed)
    currentFullscreen = toggleFillscreen(hWnd, !currentFullscreen);

EDIT: added a retain topmost style
responsiveness of main game loop designs
Erik Rufelt replied to Norman Barrows's topic in General and Gameplay Programming
In extreme cases you could improve perceived latency by drawing some things after the main game/simulation graphics, like if your game displays a mouse cursor. Even if your normal simulation and rendering was done as usual you could delay your swap/present and draw the cursor with an updated position right before the image is sent to the display, possibly with input data received several ms after the simulation was last updated. You could even buffer a few rendered frames, then go back and draw the cursor on top right before getting the image ready for presentation. For OpenGL there are ways to achieve minimal latency.
Antialiasing in 3D with Core OpenGL (Windows)
Erik Rufelt replied to Foxito Foxeh's topic in Graphics and GPU Programming
You should be able to smooth edges with alpha-blended lines or similar after all your regular geometry is drawn (using depth-testing but not depth-writes). I would imagine GL_LINE_SMOOTH could work fine there with depth-testing.. Your question is tagged with OpenGL ES but the title says Windows, do you do OpenGL ES on desktop or regular OpenGL?
Screen tearing when using OpenGL on Windows
Erik Rufelt replied to pseudomarvin's topic in Graphics and GPU Programming
VSync is often unreliable in windowed mode. If you create a window filling the entire screen with its client area and nothing obscuring it (WS_POPUP / WS_EX_TOPMOST) the Nvidia driver should switch to exclusive fullscreen mode. For perfect VSync you actually want pre-rendered frames, so it should not be set to 1, that would break VSync if even a single frame happened to lag behind because of some background process or similar that is out of application control.
Manually loading OpenGL functions on Windows
Erik Rufelt replied to pseudomarvin's topic in Graphics and GPU Programming
I also use the fallback method to get pointers for the old functions.. I think that's what common extension libraries do as well, though I haven't actually checked.. If including a recent glcorearb.h header without the prototypes-define it won't define the old prototypes either, but will define the function pointer types for the 1.0 functions etc. as well. And with the prototypes-define it will add prototypes for all functions including newer ones. On Linux all function pointers can be obtained with glXGetProcAddress so no fallback is required there. If still linking to opengl32.lib one could of course do another fallback, something like mynamespace::glClear = ::glClear instead of GetProcAddress...
bezier curve character
Erik Rufelt replied to JohnnyCode's topic in Math and Physics
No.
Meshes rendered with the aid of shaders corrupted in windows 64 bit
Erik Rufelt replied to anders211's topic in Graphics and GPU Programming
Could be something with "index" as an integer in the shader. Try using a simpler shader that doesn't use array indexing and doesn't use integers to make sure. Did you enable the debug runtime properly? Go into the DirectX control panel and make sure you set the highest debug level under D3D9 and enable all the debug options, then run your app with the debugger and see what output you get.
Weird Lesson 43 Problem
Erik Rufelt replied to Leroy1981's topic in NeHe Productions
I tried the code without modifications other than adding Qgjqp to the printed text and they look fine here.. try a different font or text-size and change the string you draw to make sure if it's really those characters that are missing and not certain positions in the string that disappear or similar.
how good is rand() ?
Erik Rufelt replied to Norman Barrows's topic in General and Gameplay Programming
Correct.
how good is rand() ?
Erik Rufelt replied to Norman Barrows's topic in General and Gameplay Programming
Not too good.. If you Google for its implementation you get a few results. Something like next = (current * A) + B with overflow wrapping around. EDIT: And it does return the value RAND_MAX at times, not just RAND_MAX-1.
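To make that concrete, here is a minimal linear congruential generator of the next = (current * A) + B form. The constants used are the ones commonly attributed to the MSVC runtime (treat that attribution as informal; TinyRand is a made-up name, not any library's API):

```cpp
#include <cstdint>

// Minimal LCG: next = current * A + B.
// The state is a 32-bit unsigned, so the multiply wraps around (mod 2^32)
// exactly as described above. The high bits are returned because the low
// bits of an LCG cycle with very short periods.
struct TinyRand {
    std::uint32_t state;
    explicit TinyRand(std::uint32_t seed) : state(seed) {}
    int next() {
        state = state * 214013u + 2531011u;               // wraps on overflow
        return static_cast<int>((state >> 16) & 0x7FFF);  // 0..32767, like RAND_MAX
    }
};
```

Two generators seeded identically produce identical sequences, which is exactly why seeding rand() with a constant gives reproducible runs.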
Red Hat Bugzilla – Bug 1284519
[abrt] ktp-contact-list: SignOn::Identity::storeCredentials(): ktp-contactlist killed by SIGSEGV
Last modified: 2016-12-20 11:12:04 EST
Version-Release number of selected component:
ktp-contact-list-15.08.2-1.fc23
Additional info:
reporter: libreport-2.6.3
backtrace_rating: 4
cmdline: /usr/bin/ktp-contactlist
crash_function: SignOn::Identity::storeCredentials
executable: /usr/bin/ktp-contactlist
global_pid: 12862
kernel: 4.2.6-300.fc23.x86_64
runlevel: N 5
type: CCpp
uid: 1000
Truncated backtrace:
Thread no. 1 (10 frames)
#0 SignOn::Identity::storeCredentials at identity.cpp:93
#1 KAccountsUiProvider::storePasswordInSso at ../../../plugins/kaccounts/kaccounts-ui-provider.cpp:449
#2 KAccountsUiProvider::<lambda(Tp::PendingOperation*)>::operator()(Tp::PendingOperation *) const at ../../../plugins/kaccounts/kaccounts-ui-provider.cpp:402
#3 QtPrivate::FunctorCall<QtPrivate::IndexesList<0>, QtPrivate::List<Tp::PendingOperation*>, void, KAccountsUiProvider::onConfigureAccountDialogAccepted()::<lambda(Tp::PendingOperation*)> >::call at /usr/include/qt5/QtCore/qobjectdefs_impl.h:495
#4 QtPrivate::Functor<KAccountsUiProvider::onConfigureAccountDialogAccepted()::<lambda(Tp::PendingOperation*)>, 1>::call<QtPrivate::List<Tp::PendingOperation*>, void> at /usr/include/qt5/QtCore/qobjectdefs_impl.h:552
#5 QtPrivate::QFunctorSlotObject<KAccountsUiProvider::onConfigureAccountDialogAccepted()::<lambda(Tp::PendingOperation*)>, 1, QtPrivate::List<Tp::PendingOperation*>, void>::impl(int, QtPrivate::QSlotObjectBase *, QObject *, void **, bool *) at /usr/include/qt5/QtCore/qobject_impl.h:192
#6 QtPrivate::QSlotObjectBase::call at ../../src/corelib/kernel/qobject_impl.h:124
#7 QMetaObject::activate at kernel/qobject.cpp:3698
#9 Tp::PendingOperation::finished at /usr/src/debug/telepathy-qt-0.9.6.1/x86_64-redhat-linux-gnu-qt5/TelepathyQt/_gen/pending-operation.moc.hpp:161
#10 Tp::PendingOperation::emitFinished at /usr/src/debug/telepathy-qt-0.9.6.1/TelepathyQt/pending-operation.cpp:123
Created attachment 1097657 [details]
File: backtrace
Created attachment 1097658 [details]
File: cgroup
Created attachment 1097659 [details]
File: core_backtrace
Created attachment 1097660 [details]
File: dso_list
Created attachment 1097661 [details]
File: environ
Created attachment 1097662 [details]
File: exploitable
Created attachment 1097663 [details]
File: limits
Created attachment 1097664 [details]
File: maps
Created attachment 1097665 [details]
File: mountinfo
Created attachment 1097666 [details]
File: namespaces
Created attachment 1097667 [details]
File: open_fds
Created attachment 1097668 [details]
File: proc_pid_status
Created attachment 1097669 . | https://bugzilla.redhat.com/show_bug.cgi?id=1284519 | CC-MAIN-2018-26 | en | refinedweb |
I am continuously running into performance issues with Navmesh Cut on recast graph. Each time it takes about 290ms.
Cell Size 0.25 (400x400)
Tile size 24
I use Recast Mesh Obj for one plane with all include options off.
Thread Count: Automatic High Load
WorkItemProcessor.ProcessWorkItems() 99.1% total
My game is top down and when a building is constructed I create Navmesh Cut for each wall. I cannot find a better way to do this. I was thinking about using point graph with a recast, but I still need to do navmesh cutting.
Any help deeply appreciated.
Hi
How many navmesh cuts are you using?How large are they? (i.e how many tiles are they overlapping roughly).Do you think you could send me a screenshot?Which version are you using?
290ms does seem very high. In the example scene "Example3_Recast_Navmesh1" cutting the graph using 35 navmesh cuts takes only about 4.5 ms on my computer.
Note that deep profiling slows things down a lot. It is only useful for checking what percentage is spent on some task, not for checking the absolute time taken.
I turned off deep profiling and yes, the time decreased to about 50 ms. But that does not resolve my problem (normally, fps stays at about 180, while cutting drops it down to 4).
I use four navmesh cuts per building. The building is modular so I tested different sizes. A building (four cuts) entirely in one tile shows 53 ms in the profiler. A building across 17 tiles took 127 ms.
My version is 4.0.10 (2017-05-01).
Buildings will not move, so except for deleting them, there is no change to their position.
EDIT: I created a new scene with a camera, a plane, and a gameobject with a simple script that creates a new gameobject with a navmesh cut. I still get about 50 ms.
EDIT2: I opened Example3_Recast_Navmesh1, created an empty gameobject and added my test script. It had the same bad performance (about 80 ms).
here is my test script:
using UnityEngine;
using Pathfinding;

public class Test : MonoBehaviour {

    public bool create;

    // Update is called once per frame
    void Update () {
        if (create)
        {
            var obj = new GameObject();
            obj.transform.localEulerAngles = new Vector3(0, 0, 0);
            var col = obj.AddComponent<NavmeshCut>();
            obj.transform.position = new Vector3(10, 0, 10);
            obj.transform.parent = transform;
            col.useRotationAndScale = true;
            col.isDual = true;
            create = false;
        }
    }
}
what am I doing wrong?
Screenshot of pressing P and then object hitting ground in Example3_Recast_Navmesh1
Make sure you do not measure just the first time this happens. The first time any cut is done, the JIT-compiler will have to compile all the cutting code which will take some extra time.The first time a cut happens on my computer it takes 20 ms, but subsequent times it only takes around 2.5 ms.
Awesome! I tried and it takes much less time to do it the second time. Thank you very much! | http://forum.arongranberg.com/t/navmesh-cut-performance/4217 | CC-MAIN-2018-26 | en | refinedweb |
Mapping from color to depth using the RealSense SDK 2.0
tk_eab Mar 8, 2018 11:17 PM
Hello,
I have been using the RealSense SDK for Windows (i.e., the one having "pxcsensemanager.h") but now I'm switching over to the SDK 2.0 (i.e., the one having "librealsense2/rs.hpp"). With the SDK for Windows, I use the "MapColorToDepth" function (Intel® RealSense™ SDK 2016 R2 Documentation ) for mapping a single pixel on color frame to its corresponding pixel on depth frame. What is an equivalent function in the SDK 2.0?
I know how to align a whole depth frame to a whole color frame with the SDK 2.0 but this method slows down FPS from 60 to 30 or less on my computer. So I just want to do the mapping for a single pixel.
I also know how to manually transform a depth pixel to a world point and to a color pixel using camera intrinsics and extrinsics. But this website (Projection in RealSense SDK 2.0 · IntelRealSense/librealsense Wiki · GitHub ) shows that different RealSense models use different distortion models. And so I don't think the equation I use is applicable to all RealSense models (I use SR300 and D435). This is why I'd like to use a built-in function that accommodates the difference among the RealSense models.
Thanks.
1. Re: Mapping from color to depth using the RealSense SDK 2.0
MartyG Mar 9, 2018 12:53 AM (in response to tk_eab)
If different distortion models is a problem for you then perhaps a solution would be to set the camera to use the 'none' setting for distortion.
From the Projection page (which you have seen):(...).
2. Re: Mapping from color to depth using the RealSense SDK 2.0
tk_eab Mar 9, 2018 2:21 AM (in response to MartyG)
MartyG
Thanks for replying to this question as well. As you suggested, I will try the "none" method and see what happens.
By the way, although I have basic understanding of how mapping from depth frame to color frame is accomplished, I don't understand how the "MapColorToDepth" function (Intel® RealSense™ SDK 2016 R2 Documentation ) in the previous SDK accomplishes mapping a single pixel from color to depth frame in a quick way. It seems to me that all depth-frame pixels need to be projected onto a color frame first and then pick a depth-frame pixel that corresponds to a particular color-frame pixel. But this would the same thing as aligning a whole depth frame to a whole color frame, which reduces FPS. I'd like to maintain around 60 FPS during the mapping. Is there any way to take a look at what's actually going on inside the "MapColorToDepth" function? I looked for it and couldn't find it.
Thanks.
3. Re: Mapping from color to depth using the RealSense SDK 2.0
MartyG Mar 9, 2018 2:30 AM (in response to tk_eab)
This discussion may be of use to you:
How to find equivalence of a pixel in the color image to the depth image?
4. Re: Mapping from color to depth using the RealSense SDK 2.0
jb455 Mar 9, 2018 3:03 AM (in response to tk_eab)
If you look at the source for the Align procedure (librealsense/align.cpp at master · IntelRealSense/librealsense · GitHub), you can see how it does it. Essentially, you'd want to copy align_images but modified so you can pass a single point instead of it using the loop. Though that method starts from depth points and maps to colour, so you'd need to do the reverse if you're starting from a colour point. The project/transform/deproject methods it uses (source available here: librealsense/rsutil.h at master · IntelRealSense/librealsense · GitHub) deal with distortion so you won't need to worry about that.
5. Re: Mapping from color to depth using the RealSense SDK 2.0
tk_eab Mar 9, 2018 5:04 AM (in response to MartyG)
Thanks MartyG. I took a look. I may ask further questions.
6. Re: Mapping from color to depth using the RealSense SDK 2.0
tk_eab Mar 9, 2018 5:38 AM (in response to jb455)
jb455
Thanks for replying to my question. Actually I have tried to do the reverse for a single pixel before but I couldn't make it by myself. So please help me on that.
I understand how the depth-to-color mapping is accomplished:
#1) With a depth-frame pixel as a starting point, I specify the x and y on the pixel coordinate. Then I get depth in meter at (x, y).
#2) Using the x, y, and depth along with depth camera intrinsics, I can get a point in the 3D world coordinate for the depth-frame pixel.
--> This should correspond to "rs2_deproject_pixel_to_point".
#3) Using camera extrinsics, I transform the point for the depth-frame pixel to the corresponding point for the color-frame pixel.
--> This should correspond to "rs2_transform_point_to_point".
#4) Using color camera intrinsics, I transform the latter point to the color-frame pixel.
--> This should correspond to "rs2_project_point_to_pixel".
Now I have a problem in the color-to-depth mapping: I want to transform the color-frame pixel to a point in the 3D world coordinate. But the pixel is missing depth unlike a depth-frame pixel. Thus, I cannot take similar steps to #2-4 above.
How can I do the reverse of the align_images?
Thanks.
7. Re: Mapping from color to depth using the RealSense SDK 2.0
jb455 Mar 9, 2018 7:28 AM (in response to tk_eab)
Ah right, I see what you mean. You need the depth value to be able to do the mapping, but you need the mapping to get the depth value.
I'm not sure how you'd do this without using align then (I'm not an expert, only started looking at stuff like this a month or so ago).
Actually, maybe this thread would be of use to you: How to project points to pixel? Librealsense SDK 2.0. But then if you're generating the pointcloud and its uv map you may get the same performance problems that you've had with align.
You could also try compiling the library with OpenMP turned off. This reduces the CPU usage when streaming so you may get a better framerate.
8. Re: Mapping from color to depth using the RealSense SDK 2.0
tk_eab Mar 9, 2018 6:27 PM (in response to jb455)
jb455
Yes, that's exactly the problem I encountered when trying the color-to-depth mapping. But there is a hint for accomplishing it without reducing FPS much. In the old version of SDK, the color-to-depth mapping (MapColorToDepth) function (Intel® RealSense™ SDK 2016 R2 Documentation ) requires a depth frame in the PXCImage format unlike the depth-to-color mapping (MapDepthToColor) function (Intel® RealSense™ SDK 2016 R2 Documentation ). This suggests that the MapColorToDepth function relies on the align method and then somehow achieves the mapping for a single pixel (but not for many pixels) in a quick way. This is why I want to see what's going on inside the MapColorToDepth function. Is that possible? In C++, I could reach the header file where the function was declared but not the actual code for the function.
This is my first time hearing the term "OpenMP", but that seems to correspond to "#pragma omp parallel for..." in align.cpp. If so, do I just need to omit that part of the code to turn off OpenMP?
Thanks.
9. Re: Mapping from color to depth using the RealSense SDK 2.0
jb455 Mar 12, 2018 3:58 AM (in response to tk_eab)
Unfortunately the source for the old SDK was never shared so we can't see how any of it worked. You could try asking on GitHub (Issues · IntelRealSense/librealsense · GitHub), a few of the RealSense developers answer questions on there so maybe one of them will know.
To build with OpenMP off, you need to:
- Clone the librealsense source locally
- Install CMake
- Point CMake at the librealsense source folder
- Click Configure. Make sure you choose the correct generator for which platform you want to build for (eg, "Visual studio 2017" for x86, "Visual Studio 2017 Win64" for x64)
- Untick "BUILD_WITH_OPENMP"
- Click Generate, then Open Project
- In Visual Studio, press ctrl+shift+b to build the library
Then you can sub in this dll for the one you're currently using - all usage will be the same (in terms of code), but you should see a difference in CPU utilisation while running.
10. Re: Mapping from color to depth using the RealSense SDK 2.0
tk_eab Mar 12, 2018 4:44 AM (in response to jb455)
jb455
Thanks for telling me step for turning off OpenMP. I will do that.
By the way, I've just figured out how to do something equivalent to the MapColorToDepth function in the old SDK. Having tested my programs many times, I noticed that the reduction in FPS results from two factors: 1) aligning images; and 2) using a point cloud to get 3D coordinates of each pixel. What I actually needed was the 3D coordinate for a single pixel and so I modified the second part.
I will leave a note here for those who have the same issue as mine (Just like the MapColorToDepth function, this method only works for a few pixels without reducing FPS).
Prep)
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>
rs2::config cfg;
rs2::pipeline pipe;
cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 60);
cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 60);
rs2::pipeline_profile prf = pipe.start(cfg);
// note: after aligning depth to color (Step 1), the depth frame lives in the color
// camera's viewport, so the color stream's intrinsics are the ones to deproject with
auto stream = prf.get_stream(RS2_STREAM_COLOR).as<rs2::video_stream_profile>();
struct rs2_intrinsics intrin = stream.get_intrinsics();
Step 1) Align a whole depth frame to the corresponding color frame:
rs2::frameset frames = pipe.wait_for_frames();
rs2::align align(rs2_stream::RS2_STREAM_COLOR);
rs2::frameset aligned_frame = align.process(frames);
rs2::frame color_frame = aligned_frame.get_color_frame();
rs2::frame depth_frame = aligned_frame.get_depth_frame();
Step 2) Transform a pixel on depth frame to a point on 3D coordinates
rs2::depth_frame df = depth_frame.as<rs2::depth_frame>();
float d_pt[3] = { 0 };
float d_px[2] = { x, y }; // where x and y are 2D coordinates for a pixel on depth frame
float depth = df.get_distance(x, y);
rs2_deproject_pixel_to_point(d_pt, &intrin, d_px, depth);
// d_pt[0], d_pt[1], and d_pt[2] respectively are X, Y, and Z on 3D coordinates in meter
Note: There may be some typos.
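For a single pixel, the deprojection in step 2 is just pinhole-camera arithmetic. Here is a sketch of what rs2_deproject_pixel_to_point computes in the distortion-free case; the struct below is a stand-in for the few fields the math needs, not the real rs2_intrinsics:

```cpp
#include <cassert>

// Stand-in for the intrinsics fields the math needs (not rs2_intrinsics).
struct PinholeIntrinsics {
    float ppx, ppy; // principal point (pixels)
    float fx, fy;   // focal lengths (pixels)
};

// Back-project pixel (u, v) at the given depth (metres) to a 3D point,
// ignoring lens distortion.
void deproject(float point[3], const PinholeIntrinsics& in,
               const float pixel[2], float depth) {
    point[0] = (pixel[0] - in.ppx) / in.fx * depth; // X, metres
    point[1] = (pixel[1] - in.ppy) / in.fy * depth; // Y, metres
    point[2] = depth;                               // Z, metres
}
```

This is why doing it for one or two pixels is cheap, while building a full point cloud does this work for every pixel of the frame.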
Code I have tried:
def last_test_date = Date.parse("yyyy-MM-dd hh:mm:ss", "2014-04-03 1:23:45")
return last_test_date
return "12-13-2017"
return "13-12-2017"
return "12/13/2017"
return "12/Dec/2017"
Nothing seems to work.
Hi Mahek,
Which searcher are you using for this script field? Your first example (where you return a Date object) should work correctly, so long as you make sure that the searcher is set to 'Date Time Range picker'.
Yours,
Jake
I don't get "Date Time Range Picker" as an option.
I am using "Date Time Picker" - Same result
Hi Mahek,
'Date Time Picker' is one of the available templates for a script field, but you also need to select the right searcher, which in this case is 'Date Time Range picker'. You can select the searcher as follows:
Image 1
Image 2
Perfect!
That is the issue, the problem is fixed.
Thanks Jake. Appreciate the effort.
Hi Mahek,
I'm glad to hear that the problem is resolved. If you find this answer to be useful, please consider accepting it so that other users of Community who may be having the same issue as you will be able to see that there is a solution available.
Many thanks,
J.
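For reference, here is the plain-JVM equivalent of the Groovy Date.parse call from the question (a sketch; the class name is mine). Note the pattern letters: HH is the 24-hour clock, while the lowercase hh used in the question means 1-12 and would silently mis-read afternoon timestamps. Returning a real Date object, rather than a formatted String, is what lets the 'Date Time Range picker' searcher index the field:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Equivalent of Groovy's Date.parse("yyyy-MM-dd HH:mm:ss", ...).
public class LastTestDate {
    public static Date parse(String timestamp) throws Exception {
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(timestamp);
    }
}
```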
CallType
Since: BlackBerry 10.0.0
#include <bb/system/phone/CallType>
To link against this class, add the following line to your .pro file: LIBS += -lbbsystem
Values describing the type of the call.
You must also specify the access_phone permission in your bar-descriptor.xml file.
Overview
Public Types Index
Public Types
The type of the call.
Since: BlackBerry 10.0.0
- Invalid -1
The call type is invalid.
- Incoming 0
The call is incoming.
Since: BlackBerry 10.0.0
- Outgoing 1
The call is outgoing.
Since: BlackBerry 10.0.0
- MultiParty 2
The call is a multi-party call.
Since: BlackBerry 10.0.0
- Missed 3
The call is missed.
Since: BlackBerry 10.0.0
- Command 4
The call is a command call.
Since: BlackBerry 10.0.0
- Emergency 5
The call is an emergency call.
Since: BlackBerry 10.0.0
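The numeric values above can be mirrored in client code, for example when persisting call-log entries. The sketch below transcribes the constants from the table; in a real application you would instead include <bb/system/phone/CallType> and use bb::system::phone::CallType::Type, and the isMissed helper is purely illustrative:

```cpp
#include <cassert>

// Mirror of the documented CallType values (see table above).
enum class CallType : int {
    Invalid    = -1, // the call type is invalid
    Incoming   =  0,
    Outgoing   =  1,
    MultiParty =  2,
    Missed     =  3,
    Command    =  4,
    Emergency  =  5
};

// Example predicate an app might use when filtering a call log.
inline bool isMissed(CallType t) { return t == CallType::Missed; }
```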
I wanted to know how to make a minimap. I started to think maybe a second camera held directly above the character? But how would I make it appear in the top right corner of the screen? Would it be a GUI or something of some sort? Please consider helping me...
Answer by spinaljack · Jun 26, 2010 at 10:42 AM
You set the minimap camera to a higher depth than the main camera to get it to render over it.
You set the culling on the minimap cam to depth only so it doesn't render a sky box or anything to cover the rest of the screen.
You then set the viewport normal on the minimap to a corner of the screen.
The normalised rect is from 0-1 where 1 is the width of the screen so 0.5 would be half the screen.
You can also make the map extra special by creating a map icons layer and disabling it on the main camera and setting the map cam to only render the terrain layer and map icons layer so you'll see map icons in the minimap and not in the main view.
Complete and succinct - I wish more answers were like this.
Thanks! This helps.
@spinaljack Maybe you mean set the "Clear Flags" to "Depth Only"?
Absolutely what I needed, even years later.
absolutely awesome answer... worked perfectly, even in unity 5
Answer by Ashkan_gc · Jun 26, 2010 at 12:21 PM
There are two ways to do this:
1. You can use a normalized viewport, as described by spinaljack in another answer.
2. You can render the camera to a render texture and use that texture on any 3D plane/GUI you want. This requires Unity Pro.
Answer by BakuJake13 · May 06, 2012 at 02:44 AM
You make a camera, go into the "Game" tab (next to "Scene") and set the position of the camera (I put it in the upper right corner). Next, put it up really high on top of your player and set the projection to orthographic. Now make a C# script that will make the camera follow your player. Here's the script I used:
using UnityEngine;

public class CameraFollow : MonoBehaviour {

    public Transform Target; // assign your player here in the Inspector

    void LateUpdate()
    {
        // Follow the target on X/Z while keeping the camera's own height.
        transform.position = new Vector3(Target.position.x, transform.position.y, Target.position.z);
    }
}
Then set your player as the target. If you see any problems, reply back to me. Hope it helps!
Answer by kolmich · Aug 15, 2012 at 11:28 AM
Hi there,
Making a minimap is not as simple as it appears when you first think about it. We spent about 3 months implementing a really good, working minimap/map system. It's fully customizable, well documented and super easy to integrate. Maybe this would save you some time?
KGFMapSystem Homepage checkout the screenshots!
KGFMapSystem Assetstore
Answer by Treasureman · Sep 07, 2014 at 07:13 PM
Set the camera depth to 1 and change the x and y until it's where you want it.
Data Structures for Drivers
usb_client_dev_data - Device configuration information
#include <sys/usb/usba.h>
Solaris DDI specific (Solaris DDI)
The usb_client_dev_data_t structure carries all device configuration information. It is provided to a USB client driver through a call to usb_get_dev_data(9F). Most USBA functions require information which comes from this structure.
The usb_client_dev_data_t structure fields are:
usb_pipe_handle_t   dev_default_ph;     /* default ctrl pipe handle */
ddi_iblock_cookie_t dev_iblock_cookie;  /* for calls to mutex_init, for */
                                        /* mutexes used by intr context */
                                        /* callbacks */
usb_dev_descr_t     *dev_descr;         /* parsed* dev. descriptor */
char                *dev_mfg;           /* manufacturer's ID string */
char                *dev_product;       /* product ID string */
char                *dev_serial;        /* serial num. string */
usb_reg_parse_lvl_t dev_parse_level;    /* parse level reflecting the */
                                        /* tree (if any) returned */
                                        /* through the dev_cfg array */
usb_cfg_data_t      *dev_cfg;           /* parsed* descr tree */
uint_t              dev_n_cfg;          /* num cfgs in parsed descr */
                                        /* tree, dev_cfg array below */
usb_cfg_data_t      *dev_curr_cfg;      /* pointer to the tree config */
                                        /* corresponding to the cfg */
                                        /* active at the time of the */
                                        /* usb_get_dev_data() call */
int                 dev_curr_if;        /* first active interface in */
                                        /* tree under driver's control; */
                                        /* always zero when driver */
                                        /* controls whole device */

* A parsed descriptor is in a struct whose fields have been adjusted to the host processor. This may include endianness adjustment (the USB standard defines that devices report in little-endian byte order) or structure padding as necessary.
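The endianness adjustment mentioned in the footnote is the usual little-endian-to-host conversion. The helper below is illustrative only (it is not part of the DDI); it sketches what usb_parse_data(9F) effectively does for a 16-bit descriptor field:

```c
#include <assert.h>   /* for the self-test below */
#include <stdint.h>
#include <stddef.h>

/* Read a 16-bit little-endian field (e.g. wTotalLength) from raw
 * descriptor bytes, independent of host byte order. */
static uint16_t usb_le16(const uint8_t *buf, size_t off) {
    return (uint16_t)(buf[off] | ((uint16_t)buf[off + 1] << 8));
}
```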
dev_parse_level represents the extent of the device represented by the tree returned by the dev_cfg field and has the following possible values:
Build no tree. dev_n_cfg returns 0, dev_cfg and dev_curr_cfg are returned NULL, the dev_curr_xxx fields are invalid.
Parse configured interface only, if configuration# and interface properties are set (as when different interfaces are viewed by the OS as different device instances). If an OS device instance is set up to represent an entire physical device, this works like USB_PARSE_LVL_ALL.
Parse entire configuration of configured interface only. This is like USB_PARSE_LVL_IF except entire configuration is returned.
Parse entire device (all configurations), even when driver is bound to a single interface of a single configuration.
The default control pipe handle is used mainly for control commands and device setup.
The dev_iblock_cookie is used to initialize client driver mutexes which are used in interrupt-context callback handlers. (All callback handlers called with USB_CB_INTR_CONTEXT in their usb_cb_flags_t arg execute in interrupt context.) This cookie is used in lieu of one returned by ddi_get_iblock_cookie(9F). Mutexes used in other handlers or under other conditions should be initialized as described in mutex_init(9F).

dev_cfg makes a device's parsed standard USB descriptors available to the driver. The tree is designed to be easily traversed to get any or all standard USB 2.0 descriptors. (See the "Tree Structure" section of this manpage below.) dev_n_cfg returns the number of configurations in the tree. Note that this value may differ from the number of configurations returned in the device descriptor.
A returned parse_level field of USB_PARSE_LVL_ALL indicates that all configurations are represented in the tree. This results when USB_PARSE_LVL_ALL is explicitly requested by the caller in the flags argument to usb_get_dev_data(), or when the whole device is seen by the system for the current OS device node (as opposed to only a single configuration for that OS device node). USB_PARSE_LVL_CFG is returned when one entire configuration is returned in the tree. USB_PARSE_LVL_IF is returned when one interface of one configuration is returned in the tree. In the latter two cases, the returned configuration is at dev_cfg[USB_DEV_DEFAULT_CONFIG_INDEX]. USB_PARSE_LVL_NONE is returned when no tree is returned. Note that the value of this field can differ from the parse_level requested as an argument to usb_get_dev_data().
The root of the tree is dev_cfg, an array of usb_cfg_data_t configuration nodes, each representing one device configuration. The array index does not correspond to a configuration's value; use the bConfigurationValue field of the configuration descriptor within to find out the proper number for a given configuration.
The size of the array is returned in dev_n_cfg. The array itself is not NULL terminated.
When USB_PARSE_LVL_ALL is returned in dev_parse_level, index 0 pertains to the first valid configuration. This pertains to device configuration 1 as USB configuration 0 is not defined. When dev_parse_level returns USB_PARSE_LVL_CFG or USB_PARSE_LVL_IF, index 0 pertains to the device's one configuration recognized by the system. (Note that the configuration level is the only descriptor level in the tree where the index value does not correspond to the descriptor's value.)
Each usb_cfg_data_t configuration node contains a parsed usb configuration descriptor (usb_cfg_descr_t cfg_descr) a pointer to its string description (char *cfg_str) and string size (cfg_strsize), a pointer to an array of interface nodes (usb_if_data_t *cfg_if), and a pointer to an array of class/vendor (cv) descriptor nodes (usb_cvs_data_t *cfg_cvs). The interface node array size is kept in cfg_n_if, and the cv node array size is kept in cfg_n_cvs; neither array is NULL terminated. When USB_PARSE_LVL_IF is returned in dev_parse_level, the only interface (or alternate group) included in the tree is that which is recognized by the system for the current OS device node.
Each interface can present itself potentially in one of several alternate ways. An alternate tree node (usb_alt_if_data_t) represents an alternate representation. Each usb_if_data_t interface node points to an array of alternate nodes (usb_alt_if_data_t *if_alt) and contains the size of the array (if_n_alt).
Each interface alternate node holds an interface descriptor (usb_if_descr_t altif_descr), a pointer to its string description (char *altif_str), and has its own set of endpoints and bound cv descriptors. The pointer to the array of endpoints is usb_ep_data_t *altif_ep); the endpoint array size is altif_n_ep. The pointer to the array of cv descriptors is usb_cvs_data_t *altif_cvs; the cv descriptor array size is altif_n_cvs.
Each endpoint node holds an endpoint descriptor (usb_ep_descr_t ep_descr), a pointer to an array of cv descriptors for that endpoint (usb_cvs_data_t *ep_cvs), and the size of that array (ep_n_cvs). An endpoint descriptor may be passed to usb_pipe_open(9F) to establish a logical connection for data transfer.
Class and vendor descriptors (cv descriptors) are grouped with the configuration, interface or endpoint descriptors they immediately follow in the raw data returned by the device. Tree nodes representing such descriptors (usb_cvs_data_t) contain a pointer to the raw data (uchar_t *cvs_buf) and the size of the data (uint_t cvs_buf_len).
Configuration and interface alternate nodes return string descriptions. Note that all string descriptions returned have a maximum length of USB_MAXSTRINGLEN bytes and are in English ASCII.
In the following example, a device's configuration data is retrieved by usb_get_dev_data(9F) into usb_client_dev_data_t *reg_data. Suppose the class/vendor descriptor of interest parses into the following structure:

typedef struct cv_data {
    char  char1;
    short short1;
    char  char2;
} cv_data_t;

Parse the data of C/V descriptor 0 of the second configuration (index 1), interface 1, alternate 2, endpoint 0:

usb_client_dev_data_t *reg_data;
usb_cvs_data_t *cv_node;
cv_data_t parsed_data;

cv_node = &reg_data->dev_cfg[1].cfg_if[1].if_alt[2].altif_ep[0].ep_cvs[0];
(void) usb_parse_data("csc", cv_node->cvs_buf, cv_node->cvs_buf_len,
    &parsed_data, sizeof (cv_data_t));
See attributes(5) for descriptions of the following attributes:
usb_get_alt_if(9F), usb_get_cfg(9F), usb_get_dev_data(9F), usb_get_string_descr(9F), usb_lookup_ep_data(9F), usb_parse_data(9F), usb_pipe_open(9F), usb_cfg_descr(9S), usb_if_descr(9S), usb_ep_descr(9S), usb_string_descr(9S)
This article is about writing Java Smart Card based applications. This tutorial will help beginners understand the concepts and the communication between a Java Smart Card and a host application. I have seen beginners of Java Smart Card technology ask simple questions, so I decided to provide them with a complete example to get them started.
In this article/tutorial I will explain a sample application: a calculator which performs the four basic operations, i.e., +, -, * and /.
In order to understand this tutorial you must know J2SE and should have a basic understanding of (Java) Smart Cards. To learn what Java Card is, please visit the official Oracle site.
Moreover, you might need a basic understanding of the related smart card standards.
I am assuming that you have a smart card and a smart card reader, and that you are able to load and install the .cap file provided with this tutorial/article.
An application which resides on the smart card is called a Smart Card Applet. It is written on a computer and then installed on the smart card.
The host application is the application which resides on the computer and interacts with the smart card via APDUs. This application can be written in any programming language.
APDU stands for Application Protocol Data Unit. It is the communication medium between the applet and the host application; all communication between the host application and the applet is done via APDUs.
There are two types of APDU: the command APDU, which is sent by the host application to the applet, and the response APDU, which is sent back to the host application in response to a command APDU.
An APDU consists of the following fields, in this sequence: CLA, INS, P1, P2, LC, Data, LE.
A Java Card application is a sort of client-server application in which the smart card always remains idle and responds to the commands that the host application sends to it. There is always a response APDU for each command APDU.

In any smart card application we need to detect the reader(s) attached to the computer, make a connection with a reader, and then connect to the card inside that reader.
In the calculator application I am using a combo-box to display all the available readers and a button called 'Refresh' which, when clicked, populates the combo-box with the attached terminals/readers. You then have to choose a terminal and click the 'Connect' button to make a connection with the smart card.
I am using the SmartCardIO API, which ships officially with JDK 1.6+; that means you don't need to download it, just import it and use it. In the calculator application the following SmartCardIO classes are used:
import javax.smartcardio.Card;
import javax.smartcardio.CardChannel;
import javax.smartcardio.CardException;
import javax.smartcardio.CardTerminal;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;
import javax.smartcardio.TerminalFactory;
To start communication with the card we need to get the reader/terminal first. Java provides the class TerminalFactory for this, which is used to get all the terminals attached to the computer.
// factory and terminals are fields of the enclosing class
public List<CardTerminal> getTerminals() throws Exception {
    factory = TerminalFactory.getDefault();
    terminals = factory.terminals().list();
    return terminals;
}
The above function returns a List of readers that we can display in the combo-box.
After selecting a terminal from the combo-box, the user has to click the Connect button. If a card is present and its ATR is read successfully, the text 'Connected' will be displayed at the bottom-right of the frame; otherwise a corresponding error message will be displayed.
In order to connect with the smart card, we use the connect() method of the CardTerminal class. To connect via T=0 you use connect("T=0"), and for T=1 you use connect("T=1"). If you are not sure, you can pass "*" and SmartCardIO will detect the communication protocol automatically.
The following method performs the connect operation:
protected void connectToCard(CardTerminal terminalSource) throws CardException {
    terminal = terminalSource;
    card = terminal.connect("*"); // "*" lets SmartCardIO pick T=0 or T=1
}
After a successful connection with the card, you input digits in the input fields and press one of the calculation buttons. I am going to explain the (+) operation here; the rest all work the same way.
private void add_buttonActionPerformed(java.awt.event.ActionEvent evt) {
String command = "00A404000E63616C63756C61746F722E61707000";
byte[] apdu = JavaSmartcard.hexStringToByteArray(command);
if (!selectApplet(apdu))
{
return;
}
byte[] data_LC;
try
{
data_LC = getLCData(this.digit1_TextField.getText(), this.digit2_TextField.getText());
}
catch (Exception ex)
{
JOptionPane.showMessageDialog(this, "Only digits are allowed to input in the fields\n"+
ex.getMessage(), "Type Error", JOptionPane.ERROR_MESSAGE);
return;
}
command = "A000000002";
String LC_Hex = JavaSmartcard.byteArrayToHexString(data_LC);
command = command.concat(LC_Hex);
apdu = JavaSmartcard.hexStringToByteArray(command);
System.out.println(""+ JavaSmartcard.htos(apdu));
try
{
javaCard.sendApdu(apdu);
byte[] data = javaCard.getData();
this.status_Label.setText(""+Integer.toHexString(javaCard.getStatusWords()).toUpperCase());
this.result_Label.setText(new BigInteger(data)+"");
}
catch (CardException | IllegalArgumentException ex)
{
JOptionPane.showMessageDialog(this, "Error while tried to send command APDU\n"+
ex.getMessage()+"", "APDU sending fail", JOptionPane.ERROR_MESSAGE);
}
}
In the above function I first select the applet, and on successful selection I prepare the APDU that instructs the applet what to do and carries the data it needs.
command = "00A404000E63616C63756C61746F722E61707000";
Above is the SELECT applet APDU; we need to select our calculator applet first in order to do a calculation, otherwise the currently selected applet might not accept our subsequent command APDUs.
byte[] apdu = JavaSmartcard.hexStringToByteArray(command);
After preparing the APDU, I convert it into a byte array to transmit to the card. hexStringToByteArray is a utility function that converts a hex string into a byte array.
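The article does not show the body of hexStringToByteArray; a possible implementation of such a utility looks like this (the class name HexUtil is mine, the article keeps it in a JavaSmartcard helper class):

```java
// Possible implementation of the hexStringToByteArray utility used
// throughout the article: every two hex characters become one byte.
public class HexUtil {
    public static byte[] hexStringToByteArray(String s) {
        int len = s.length();
        byte[] out = new byte[len / 2];
        for (int i = 0; i < len; i += 2) {
            out[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4)
                                + Character.digit(s.charAt(i + 1), 16));
        }
        return out;
    }
}
```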
command = "A000000002";
The APDU A000000002 tells the applet to add the two given digits. Per the field order given earlier, it breaks down as CLA = A0, INS = 00, P1 = 00, P2 = 00 and LC = 02.
The data part is calculated as below:
private byte[] getLCData(String byte1Str, String byte2Str) throws Exception
{
byte[] data_LC = new byte[2];
byte byte1 = Byte.parseByte(byte1Str );
byte byte2 = Byte.parseByte(byte2Str);
data_LC[0] = byte1;
data_LC[1] = byte2;
return data_LC;
}
I convert the input of both text fields into bytes and copy those bytes into a byte array to transmit to the card.
The final APDU will look like the one below. Suppose the user enters 5 and 5:
A0 00 00 00 02 05 05
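Assembling that byte sequence from the header and the data can be sketched as follows (ApduBuilder is an illustrative helper of mine, not code from the article, which concatenates hex strings instead):

```java
// Build a command APDU: CLA INS P1 P2 Lc Data, matching the layout
// described earlier in the article.
public class ApduBuilder {
    public static byte[] build(byte cla, byte ins, byte p1, byte p2,
                               byte[] data) {
        byte[] apdu = new byte[5 + data.length];
        apdu[0] = cla;
        apdu[1] = ins;
        apdu[2] = p1;
        apdu[3] = p2;
        apdu[4] = (byte) data.length; // Lc
        System.arraycopy(data, 0, apdu, 5, data.length);
        return apdu;
    }
}
```

For the addition example, build((byte) 0xA0, (byte) 0, (byte) 0, (byte) 0, new byte[]{5, 5}) yields exactly the seven bytes A0 00 00 00 02 05 05 shown above.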
When the smart card gets that APDU it interprets it, reads the INS field to know what the host application wants to do, gets the digits from the data part, adds them, and returns the result to the host application in the form of a response APDU.
When the response APDU is received, we can examine the status word to learn what happened during the calculation. If the card returns a status word of 0x9000, everything went fine; otherwise there was an error, or further action is needed to get the actual response data.
public int getStatusWords() {
return rAPDU.getSW();
}
The above function gets the status word returned by the card, and the function below gets the data part.
public byte[] getData() {
if (rAPDU!=null) {
return rAPDU.getData();
}
else {
return null;
}
}
The rest of the code is simple and self-explanatory. I will try to add another tutorial on writing the calculator applet, which I am attaching with this article.
In this example no special or third-party DLL, library or API is used; everything is done with the SDK Java provides.
As smart cards are devices with limited resources and computation, this calculator works within the range 1-127 inclusive. The reason is that I convert the input into a byte, and a signed Java byte can hold values only up to 127.
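That limit comes straight from Byte.parseByte, which rejects anything outside the signed 8-bit range. A small sketch (the helper class is mine, for illustration):

```java
// Why the calculator is limited to 127: inputs go through
// Byte.parseByte, and Java's byte is a signed 8-bit type (-128..127).
public class ByteRange {
    public static boolean fitsInByte(String s) {
        try {
            Byte.parseByte(s);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```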
Red Hat Bugzilla – Bug 893751
audit breaks containers
Last modified: 2015-09-02 15:28:42 EDT
Description of problem:
While working my way through Lennart Poettering's series of articles on systemd, I had some problems spawning a namespace container. From Example 1 on:
# yum --releasever=17 --nogpgcheck --installroot ~/fedora-tree/ install yum passwd vim-minimal rootfiles systemd
raises two SELinux alerts:
1. The source process: /usr/sbin/groupadd
Attempted this access: write
On this file: /dev/null
2. The source process: /usr/sbin/useradd
Attempted this access: write
On this file: /dev/null
# systemd-nspawn -D ~/fedora-tree /usr/lib/systemd/systemd
causes an endless sequence of the following messages:
Starting D-Bus System Message Bus...
[FAILED] Failed to start D-Bus System Message Bus.
See 'systemctl status dbus.service' for details.
Version-Release number of selected component (if applicable):
[root@server ~]# yum list installed yum
Loaded plugins: langpacks, presto, refresh-packagekit
Installed Packages
yum.noarch 3.4.3-47.fc18 @fedora
[root@server ~]# yum list installed systemd*
Loaded plugins: langpacks, presto, refresh-packagekit
Installed Packages
systemd.x86_64 195-15.fc18 @updates-testing
systemd-libs.x86_64 195-15.fc18 @updates-testing
systemd-sysv.x86_64 195-15.fc18 @updates-testing
[root@server ~]#
How reproducible:
consistent
Currently, the Linux audit layer is broken when it comes to containers. If CAP_AUDIT_WRITE/CAP_AUDIT_CONTROL is lacking all kinds of software will abort, including dbus and PAM.
Auditing is not virtualized properly, hence granting CAP_AUDIT_WRITE/_CONTROL to a container is not a good idea, but due to the broken audit layer this will then cause dbus and PAM fail. If writing to audit fails with EPERM the audit code should just skip over it, not abort.
This is only broken on Fedora, not on Debian. Hence it works fine to run a Debian container on a Fedora host.
A hack around this is passing --capability=cap_audit_write,cap_audit_control to nspawn, which will allow the container to boot but the audit data it generates is useless.
Reassigning to audit, since there's nothing to fix here in systemd.
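The "just skip over it" behaviour asked for above would start with userspace noticing that it does not hold the audit capabilities before treating a send failure as fatal. A hypothetical sketch (the capability numbers are the ones from <linux/capability.h>; real audit userspace would probe the NETLINK_AUDIT socket rather than test a bitmask like this):

```c
#include <assert.h>   /* for the self-test below */
#include <stdint.h>

/* Capability numbers from <linux/capability.h>. */
#define CAP_AUDIT_WRITE   29
#define CAP_AUDIT_CONTROL 30

/* Given an effective-capability bitmask (as in the CapEff line of
 * /proc/self/status), decide whether audit records can be emitted.
 * A process in a container lacking this bit could then degrade
 * gracefully instead of aborting. */
static int can_write_audit(uint64_t cap_eff) {
    return (int)((cap_eff >> CAP_AUDIT_WRITE) & 1);
}
```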
I just encountered this problem too.
It would have been helpful if there had been a link to this ticket on the 0pointer man page for nspawn.
Also worth noting that on Fedora 17, systemd-nspawn doesn't have the --capability option, so there's no workaround.
The problem here is one of container design. There are two ways to look at this.
1) If containers are viewed as process hardening, then do not put anything into them that requires auditing. The goal was separating the process from others.
2) If the containers are viewed as a light VM, then they are required to also have an audit daemon, collect certain properties of the VM, collect certain internal events, and forward events out of the VM to a permanent collector which the audit daemon can provide. Libvirt is also plumbed for all these requirements and it should be used for this purpose.
The audit code cannot skip over an EPERM return code. The requirements of the audit system are that if an event cannot be recorded, then the event must be stopped from occurring.
Well, putting Fedora in an OS container means the normal codepaths for Fedora userspace audit are used, and since the current userspace audit code is not capable of understanding that the lack of CAP_AUDIT_WRITE/CAP_AUDIT_CONTROL means that audit is not available, you currently cannot boot up a container with Fedora -- unless you grant it CAP_AUDIT_WRITE/CAP_AUDIT_CONTROL, at which time the container suddenly can muck around with the host's audit controls, which is a huge security problem...
Anyway, since this is all so broken and I don't really care about audit, I have now begun to document everywhere that people who want to use OS containers should just turn off audit with audit=0.
Of course that means that Fedora/RHEL won't support OS containers without altering the kernel command line, but I guess that's not really my problem...
Why wouldn't we allow CAP_AUDIT_WRITE/CAP_AUDIT_CONTROL and have the kernel add something to the audit record to indicate that this came from a different namespace. Then people could filter messages within the audit.log or even do stuff like having and audit dispatcher that would forward audit messages to a log which exists within the container.
Turning off audit in order to run containers, seems nuts.
Dan, you are right. Turning off audit when its _required_ is not the solution. As to the other statement, there is work on-going with kernel people to add a field to the event. But we also _have_ to be able to correctly identify the creation of the container and how it differs from its parent process.
Well, auditing is still broken in four ways in containers:
a) audit messages from the container are not distinguishable from the host's messages
b) the container can muck with audit rules of the host.
c) When I open a new PID namespace the sessionid/loginuid is not reset even though I just opened an entirely new container. The loginuid leaked from the host makes no sense at all of course in the container, and confuses the hell out of pam_loginuid, and the audit tools.
d) There's no way we could turn off auditing in a container but leave it on on the host.
A: This is being worked on with the kernel. Including initpid in the audit message, any initpid != 1 would have come from a container.
Also if there is proper audit messages on start and stop (As libvirt) is doing, you should be able to gather all the audit messages for a particular container based on the initpid.
B: is currently being blocked if you use SELinux.
c: I believe this should be fixed. Changing the pid namespace should reset loginuid and sessionid to -1.
d: Don't know the answer for this, or if it is something a user would want.
Any update on this one? Any chance we get A and C fixed for F19 at least?
We still need to hear from Steve Grubb of the requirements. Without comment #3
answered, we can't implement anything. I sent a draft to allow having a single
auditd and recently Gao Feng submitted a patchset to implement an auditd per
container tied to userns.
See also this mail thread on the problems with PAM + LXC
a) there is an upstream patch posting from aris, but needs more work
b) you shouldn't need to give CAP_AUDIT_CONTROL inside the container
c) not gunna happen. If you launch the container by hand, you are going to have to change pam to remove pam_loginuid. but things should 'just work' if you launch from systemd or libvirt from systemd etc...
d) not something we want
e) the kernel still rejects with EPERM any message if pid_ns != init_pid_ns. It's a bug we need to fix in kernel.
(In reply to comment #12)
> a) there is an upstream patch posting from aris, but needs more work
>
> b) you shouldn't need to give CAP_AUDIT_CONTROL inside the container
Well, iirc writing to loginuid requires this...
>
> c) not gunna happen. If you launch the container by hand, you are going to
> have to change pam to remove pam_loginuid. but things should 'just work' if
> you launch from systemd or libvirt from systemd etc...
Wow, that's just sad. I guess I'll then
make it more prominent in the nspawn docs that auditing needs to be disabled for nspawn to work correctly.
nspawn is almost always started from a shell, so you basically break nspawn entirely with this. Heck, for testing purposes Daniel also runs libvirt-lxc from a shell, so you make his life really hard too.
I was trying to get management to make running RHEL in containers cleanly, without any manual steps, a release goal. I guess I can forget this now if audit stays broken like this.
But well, if this is how it is then I'll instead just document everywhere that auditing breaks things...
You know, before the sealing off was enabled in the kernel we could still reset loginuid right after setting up the namespace, but now even that's gone...
I have committed this now to systemd:.
(In reply to comment #12)
> c) not gunna happen. If you launch the container by hand, you are going to
> have to change pam to remove pam_loginuid. but things should 'just work' if
> you launch from systemd or libvirt from systemd etc...
BTW, to make this very clear: with systemd we support booting the same OS image on bare metal, on VMs and in containers without *any* alteration, and it needs to work the same way in all three cases. That's why we are not OK with asking the user to patch around in PAM files or anything like that. That's simply not an option for us.
Is it not possible to write the utility to start the container via systemd? Is starting it directly by clone the only possible way to write this software?
To be very clear, the loginuid has to be as tamper-proof as possible. If we open a hole for containers to reset the loginuid, then we also open a hole for abuse by people avoiding detection.
Btw, it seems like an incongruity to start daemons from a clean environment like systemd but not doing the same thing for a container. :-)
Well, Steve, in the container the loginuid of the host makes no sense at all.
If I read /proc/self/loginuid of any of the processes in the container it returns the UID of the host, which doesn't even make any sense... This is so broken, it hurts.
That's a good reason why the container should be started via systemd even if it looks like a shell command. You would also have the container inheriting environmental variables, process group, session membership, supplementary group IDs, alarms, umask, process signal mask, pending signals, rlimits, etc.
Starting from systemd would probably give you a more reliable startup. It would also prevent the problem of loginuid bleeding over.
(In reply to comment #15)
> I have committed this now to systemd:
>
>
> ?id=7ecec4705c0cacb1446af0eb7a4aee66c00d058f
Wrong commit id? Seems unrelated.
>.
Lennart, think about what is going on. Audit lives outside the container. That might change, fujitsu is actually working on that, but for now, auditd is a host thing and is only usable accessible to the real host. So we need to look at this as a global security thingie. Globally, if the namespaces and cgroups are set up by a logged in user, the loginuid of all of those children processes IS that admin which started things. You shouldn't be allowed to change it. Who the hell knows what that admin set up? We have no way to track or record that information.
Now if nsspawn actually was just a wrapper to kick back through systemd, it would be systemd which launched the 'container.' Now we have systemd which can track, record, and make sure things were set up intelligently. The loginuid wouldn't be set. So now you don't need CAP_AUDIT_CONTROL inside the 'container'.
If you launch a 'container' using an intelligent tool, like systemd or libvirt, we are in violent agreement. Things should just work. If you hack shit up by hand you get to hack shit up by hand until it works. How is that a usability nightmare? audit=0 being needed is absolutely wrong. Please lets fix the workflow and remove such silliness as you describe about disabling audit...
As to your point that /proc/self/loginuid not making sense inside the container, I agree. I'll look into making that output local to the readers namespace. Should be a pretty easy patch. Can someone open a BZ?
(In reply to comment #21)
> Wrong commit id? Seems unrelated.
Yes. This is the commit Lennart meant:
Thanks. Yeah, those comments are just absolutely wrong. If audit and systemd containers don't just work something is wron.g It audit + containers by hand don't just work, you get to hold the pieces. Lennart, can we please fix the comments? If you'd like to discuss what I'm thinking and why let me know, I feel like there must be some misunderstanding between us right now...
nspawn is primarily this tool for admins and developers that just allows you to quickly boot-up my machine fromn the command line, that's it. What you are asking me to do means basically turning nspawn into another libvirtd (which makes no sense, we already have that, in libvirtd...).
So nspawn is precisely about being able to quickly run from a command line, and you guys actively make that impossible via audit.
You know, I first tried to get PAM fixed to simply skip audit stuff when the audit caps are missing, so that we can simply drop these caps from a container and everything would work fine. But no, this got blocked by Steve. Steve's just too married to the idea that audit should actively break things, if possible...
Then, my next attempt was to get loginuid to be reset for containers by the kernel, implicitly (what this bug is about). But this was blocked.
My next idea was to reset it manually, but that's blocked too, since the loginuid is sealed now in the kernel...
This has been going on for a year now or so. With the systemd commit I made I simply tried to improve things for the user, since it's documented now why these things fail, and how to work around them. Given that nspawn is a tool for admins/developers, I think it's the best thing to do for now.
You know, I don't really care whether the kernel resets loginuid entirely when a container is opened, or whether it only hides the field from userspace in the container, or whether the userspace audit code simply ignores this data if it runs from a container, but the status quo of simply exposing loginuid in the container, and having the audit userspace naively believe that data is just broken, and that's a fact. And I wished you guys could see that...
Some other wonderful hacks I came up with while trying to work-around the broken audit stack with containers are these:
- Add another PAM module that runs before pam_loginuid, and when it detects that it is run in a container simply mounts a file from /tmp to /proc/self/loginuid. That should be enough to trick pam_loginuid to not fail. This one gets extra points for being ugly...
- Use seccomp to actually make socket(AF_NETLINK, SOCK_RAW, NETLINK_AUDIT) return -ENOTSUP or so (or whatever the right error code is) in the container, so that pam_loginuid is tricked into believing audit is off in userspace... Also ugly, but less so...
A combination of both more or less trick all audit userspace into thinking no audit kernel support was available, and should make things work...
Note that kernel auditing is broken when used with systemd's
container code. When using systemd in conjunction with
containers please make sure to either turn off auditing at
runtime using the kernel command line option "audit=0", or
turn it off at kernel compile time using:
****
This is just wrong. audit + systemd + container should work just fine. It is audit + container WITHOUT systemd which is not working.
****
maybe a fix would be to allow nsspawn, with CAP_AUDIT_CONTROL, to unset the loginuid. It also means that we don't have to leak CAP_AUDIT_CONTROL into the container. Only the setup programs need it.
steve, do you think we can craft security goals around that solution? it would mean that a container, launched by nsspawn (or maybe virt-sandbox) would be tracked as if they were launched by systemd or libvirt...
I have been thinking about this. Steve wants loginuid to be immutable in order to make sure no user could change his loginuid to do something bad, that the audit system can not track. But Steve you acknowledge that turning off audit does the same thing, but you can not stop an evil admin. And to go along with this the audit subsystem would be able to load the fact that said evil admin has disabled audit. Take him out to the woodshed and beat him.
Why not just make changing an existing (non -1) loginuid an auditable event. Then if evil admin changes his loginuid you can also take him to the woodshed.
The current setup with pam_loginuid refusing to allow users to login if the loginuid is set will break anyone debugging a login program like sshd.
gdb /usr/bin/sssd
> r
Switch to another window attempt ssh localhost; Fail loginuid is set.
steve and I discussed this some more. I have now written a patch to allow a untility to UNSET its loginuid if it has cap_audit_control AND the audit failure mechanism is not set to panic. In an environment where people want the system to panic rather than lose and audit log, the absolutely immutable nature of things might have some use. In normal environments we have the attack dan is describing and so unsetting the loginuid BEFORE the actual authentication comes along trying to set it to something else makes sense.
I haven't tested, but nsspawn should be able to just echo "-1" > /proc/self/loginuid before it launches the container and everyone should be happy(ish)
That's close to what I was thinking but a detail needs clarifying. Right now we have a compile time choice between the old way of requiring CAP_AUDIT_CONTROL and the new way that does not require capabilities, but its immutable. What I was trying to say in that discussion is how about we make it a runtime choice instead. We could coopt the audit= boot parameter so that we can choose which method at boot.
I was thinking that we can make it audit=2, which means audit enabled and loginuid are immutable. But Eric reminded me that we can already set -e 2 via auditctl and that means AUDIT_LOCKED. However, at boot it does not make sense to say make the rules immutable.
So there are 2 approaches...either map audit=2 into a new variable during audit_init so that we have a flag to see if loginuid is immutable. Or, maybe it could be bit mapped where 1 = enabled, 2 = rules immutable, 4 - loginuid immutable....except we current make 2 imply that 1 is set. I really don't care which way. The main point is making a runtime choice instead of compile time choice between the two currently existing methods. But it should not be tied to the audit failure mechanism. Thanks.
And one last thought, the runtime choice should be immutable, meaning it can only be chosen at boot and never switched for any reason without a reboot.
kernel command line flags are getting more and more highly discouraged in the community, especially something as difficult to locate/google/find as a bitmap. I'd be agreeable to a new auditctl command/message...
maybe a SETOPTIONS bitmap? with bits for setting and locking? we can make loginauid setable and lockable?
Doing it by auditctl is too late - not to mention I don't want to make people think that they can change it anytime they want. It really needs to be at boot time.
Wait, if we don't trust auditctl during bootup (aka, this is audit policy and should go in audit.rules) we are in a WHOLE lot more trouble than loginuids.
I suggest we follow the prctl SECBIT_* and SECBIT_*_LOCKED as discussed in the capabilities man page. In the CC config you'd set both the loginuid_immutable and the loginuid_immutable_locked bits.
It is similar to -e2, but you can have only 1 bit of a flags/features register locked at a time. I'd think this entire register should be locked by -e2
Its not a question of trust. It can simply be too late. Prctl is just wrong. That means a process can have a different security policy than the system. We don't want that kind of inconsistency in user/subj binding. It should be all one way or all another way. Doing audit=2 will be easy to test for via SCAP. This will be something that needs to be tested for in security check lists.
You haven't described why setting this while parsing audit.rule is too late. From my point of view this is audit configuration. It belongs in audit.rules. Please can you describe the problem, not a solution.
I wasn't saying to use prctl. sorry for the misunderstanding. It was just an example of the design pattern I was hoping to explain. Please ignore prctl and any notions you have about it.
I believe the best way to handle this would be a new interface. The option to turn this on should live in audit.rules. Just like every other audit option. It also means that your scanner should be able to look for it. If the scanner can't handle the audit.rules you are already in a world of additional pain. agreed?
It's possible we could use an AUDIT_SET message with a couple of new mask values
#define AUDIT_STATUS_LOGINUID_IMMUTABLE 0x0020
#define AUDIT_STATUS_LOGINUID_IMMUTABLE_LOCKED 0x0040
Although it seems that the kernel code around struct audit_status is so poorly written it's difficult to grow it to support more bits. Not impossible, but the second time I see binary structures in the kernel<->userspace audit code that weren't well designed.
So maybe a whole new message type AUDIT_SET_FEATURES or AUDIT_GET_FEATURES makes moreoo, any update to this? Can we get a fix for this, maybe as a christmas present?
As far as I understand, the following commits in 3.13 will be part of the solution:
They allow processes with CAP_AUDIT_CONTROL to unset the loginuid at will. Thus systemd-nspawn/lxc/... could get this capability and unset the loginuid right before launching the container.
This got us 99% of the way there. I think at least. Small brain. The remaining problem was that one of the pam modules, heck if I remember which one, sent an userspace audit message. Since the message came from a non-init pid namespace, the kernel was rejecting the message (actually it was rejecting opening the socket as I recall)
The 3.15 kernel will allow userspace message from the non-init pid namespace. So that last part should be taken care of in 3.15...
Can we get these fixes into RHEL7? We have had occurrences where people are trying to run sshd within docker and failing.
+1 it would be great to get this into RHEL7
Let's bring this back to life for a moment in an attempt to reach some sort of conclusion ...
Upstream is fine starting with 3.15, yes?
RHEL7 is broken and likely needs the 3.13 commits (see comment #38) and a few from 3.15, yes?
I believe this was inadvertantly fixed by a backported patch to fix 1010455 (947530 needed an extra step):
It appears that upstream/Fedora are resolved and RHEL7 has already been fixed via other BZs; let's close this out as CLOSED/UPSTREAM. | https://bugzilla.redhat.com/show_bug.cgi?id=893751 | CC-MAIN-2016-26 | en | refinedweb |
Hi --:
----
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
void main() {
char *filename = "missing file";
struct stat finfo;
finfo.st_mode = 0;
fprintf(stderr, "stat returned: %d\n", stat(filename, &finfo));
fprintf(stderr, "finfo.st_mode = %d\n", finfo.st_mode);
}
----
And sample output on RedHat 5.1:
-----
stat returned: -1
38416
-----
Should mod_rewrite really be copying the URI to r->filename in
hook_uri2file? Here's the comment which made me wonder:
329 /*
330 * Are we dealing with a file? If not, we can (hopefuly) safely assume we
331 * have a handler that doesn't require one, but for safety's sake, and so
332 * we have something find_types() can get something out of, fake one. But
333 * don't run through the directory entries.
334 */
335
336 if (r->filename == NULL) {
337 r->filename = ap_pstrdup(r->pool, r->uri);
338 r->finfo.st_mode = 0; /* Not really a file... */
339 r->per_dir_config = per_dir_defaults;
340
341 return OK;
342 }
Anyway, the following patch does solve the problem I'm seeing; but I'm not
sure it's very reasonable for non-linux servers. It might make sense to
wrap it in an appropriate preprocessor define ... but I'm not sure which
versions of linux display this problem.
----
--- http_request.c 1999/04/20 23:38:44 1.147
+++ http_request.c 1999/05/12 01:59:42
@@ -267,6 +267,9 @@
}
#if defined(ENOENT) && defined(ENOTDIR)
else if (errno == ENOENT || errno == ENOTDIR) {
+ if (errno == ENOENT)
+ r->finfo.st_mode = 0;
+
last_cp = cp;
while (--cp > path && *cp != '/')
-----
thanks --
Ed | http://mail-archives.apache.org/mod_mbox/httpd-dev/199905.mbox/%3CPine.LNX.3.96.990511171921.17623C-100000@crankshaft%3E | CC-MAIN-2016-26 | en | refinedweb |
#include <string.h> char *strcpy(char *dest, const char *src); char *strncpy(char *dest, const char *src, size_t n);; }
If there is no terminating null byte in the first n characters of src, strncpy() produces an unterminated string in dest. Programmers often prevent this mistake by forcing termination as follows:
strncpy(buf, str, n); if (n > 0) buf[n - 1]= '\0'; | http://www.makelinux.net/man/3/S/strncpy | CC-MAIN-2016-26 | en | refinedweb |
Contents
This section is informative.:
XML language definitions, regardless of their text representation, contain at least three types of data structures. When combined into a coherent and consistent whole, they form a complete language definition. These three components are:
Additional abstract data structures may be defined for use in the language definition, such as common content models or attribute groups, whose use is shared by other data structures within the language definition. The definition of these structures is the primary task of language development, and the core of the modularization framework.
This schema modularization framework consists of two parts:
In XHTML-MOD, every object in the DTDs is represented by an XML entity. These entities are then composed into larger sets of entities and so on, resulting in a set of data abstractions that can be generalized and used modularly. These multiple levels of abstraction are tied together by the use of a specific naming convention and a set of abstract modules.
Generic classes of entities (composed of sub- and sub-sub-entities) are used to create definitions of the three components listed above. Content models, attribute lists and elements are defined separately, sometimes in separate modules, and the ordering of the modules in the DTD structure is strictly defined (due to document order dependence). They are then combined to form the resulting document type. Extensibility is accomplished through the extensive use of INCLUDE/IGNORE sections in the DTD modules. How each of these structures relates to its Schema-based counterpart is summarized in Table 1 below.
Both the DTD and schema-based modularization frameworks implement a set of formalized data structures, often in a conceptually similar way. The modularization framework described here is designed around the use of similar data structures, which can be represented (more or less) equally well in either representation. This is accomplished through the use of a straightforward mapping of data structures defined in the DTD modules onto equivalent data structures in the XML Schema language.
In XHTML-MOD, content models for elements are defined using three classes of entities, identified through the naming conventions by the suffixes ".content", ".class", and ".mix". Each of these classes of entities is mapped onto a corresponding Schema counterpart in the following way:
".content" models - these models are used to define the contents of individual elements. For each element there is a corresponding ".content" object. IN XML Schema, ".content" entities are mapped directly onto groups:
The contents of ".content" groups are often classes or mixes.
".class" models - these models are used to define abstract classes of content models made up of either ".content" entities or other ".class" entities (or elements). In XML Schema they correspond to groups that may also contain substitution groups:
".mix" models - these models correspond to content models that are mixed groupings of ".class", ".content", and ".mix" entities and serve as abstract content models often used in common by many elements in the DTD. They correspond to groups in XML Schema:
In addition to these three content model groupings, XHTML-MOD includes an additional grouping ".extra". These are currently omitted from the schema modules. (If needed, a developer could add them to the schema modules in a conformant way.)
Attributes and Attribute lists in DTDs correspond directly to attribute and attributeGroup elements in XML Schema. The translation from one to the other is relatively simple and straightforward. Here is an example:
Complex attribute groups that are used by many different elements are grouped in the DTDs using entities suffixed with ".attrib". These attribute entities map directly onto attributeGroup elements in XML Schema as shown above.
The XML Schema specification allows elements as well as attribute values to be strongly typed. In defining elements in the modularized schema, an element type is created for each element that is a complex type composed of the content model (element.content) and the attribute list (element.attlist) as shown below:
Elements are then declared to be of the type element.type:
This allows the author the greatest degree of flexibility while retaining strict type checking via XML Schema. It also allows for extension of the element via type substitution.
Note that in the case of an element with a mixed content model, a complexType is necessary.
In summary, each element is composed of a content model and an attribute list, which are composed into a type for that element.
XML Schema allows inheritance and redefinition of elements, groups, attributes and attributeGroups. In several cases modules require modification of previously declared attribute lists. This is done by using the <xsd:redefine> element to redefine the attributeGroup that needs to be modified
In this example, we redefine the attribute list for the caption element in the tables module to add the align attribute defined in align.legacy.attlist.
The modularized DTDs contain support mechanisms for XHTML. Some of these are DTD-specific and are not fully supported in XML Schema.
This modularization framework attempts to recreate these support structures to the greatest extent possible.
Notations are an SGML feature that allows non-SGML data within documents to be interpreted locally [CATALOG]. Notations for XHTML are preserved in the Schema modules using the notation element in a straightforward way.
The strong typing mechanism in XML Schema, along with the large set of intrinsic types and the ability to create user-defined types, provides for a high level of type safety in instance documents. This feature can be used to express more strict data type constraints, such as those of attribute values, when using XML Schema for validation.
XML Schema provides no means of duplicating XHTML's named character entity mechanism. In most cases data abstraction through entities can be dispensed with in schemas. However, in the case of named character references, no replacement method is available.
Character entities are used to represent characters that occur in document data that may not be processed natively on the user's machine, for instance the copyright symbol. XHTML makes use of 3 sets of named character entities: the ISO Latin 1, Symbols, and Special.
A general solution for the resolution of language-specific named character entities is outside the scope of this document.
Entities are currently referenced in this framework as using a DTD reference to three individual DTD modules that define them.
The following table summarizes the mapping of DTD data structures onto XML Schema structures.
One further issue of note in the conversion of DTDs to XML Schema is that it is absolutely necessary to define all elements globally. Otherwise they are not considered to be in the XHTML namespace but only "associated" [XMLSCHEMA_COMPOSITION] with it. This document does not make use of this association feature in XML Schema.
This section is normative.
This modularization framework consists of a complete set of XHTML schema modules and a set of framework conventions that describe how to use them. The use of the framework conventions is required for conformance.
The modularized XHTML schema uses three types of modules, which when combined comprise the entire XHTML definition.
The Schema hub document is the base document for the schema. It contains only annotations and modules, which in turn contain <xsd:include> statements referencing other modules. The hub document corresponds to the DTD "driver" module in XHTML-MOD, but is much simpler. The hub document allows the author to modify the schema's contents by the simple expedient of commenting out modules that are not used. Note that some modules are always required in order to ensure conformance.
The (non-normative) example hub document described here contains <include> elements for two modules, named "required" and "optional". Each of these included modules is itself a container module.
Module containers, reasonably enough, include other modules. Modules and their containers are organized according to function. Including the hub document, which is a special case of a module container, there are ten included module containers.
In addition to the module containers listed above, there are around forty schema modules which contain only element definitions and their associated attribute and content model definitions. By convention, Schema modularizations may contain either <include> statements or element definitions but not both.
In order to easily identify the contents of any particular schema module, it is useful to provide here a module naming convention syntax. This syntax also provides a simple means of distinguishing modules based on their language version, which may improve maintainability of the modules themselves.
The module naming convention adopted here is the same in almost all respects as that used in XHTML-MOD.
Schema modules for XHTML should have names that:
Modules used in this modularization framework must have names that conform to the following syntax:
Exceptions to this rule are made for the Schema hub modules whose names are the same as above but may omit the content description syllable for brevity.
Version numbers of hub modules may omit the leading zero in the version number, but should include the minor version number.
Example: xhtml-1.1.xsd
In the case where a hub module contains elements or attributes from external namespaces, the name(s) of the external module(s) should be appended to the base language name using the "+" character.
Example: xhtml+fml-1.0.xsd
This module naming convention is intended also to comply with the required use of the media type in [XHTMLMIME].
In order to establish a physical structure for the composition of the Schema modules that corresponds to the abstract modules in XHTML, a module hierarchy structure has been used to organize the physical modules. The hierarchy structure looks like this:
These correspond to the divisions of XHTML into abstract modules described in detail in Section 3.2. The hierarchy structure is intended to match the abstract module structure as closely as possible. This feature is not present in DTD modularization, and is not required for Schema modularization. It does, however, allow the developer to organize the modules in accordance with their hierarchical structure. The directories listed in Table 2 also correspond exactly to the module container modules in this framework.
The consistent use of naming conventions is important for the maintenance and development of complex software applications.
Adhering to these conventions provides numerous benefits to developers:
With few exceptions, the naming conventions used in XHTML-MOD are preserved in this framework.
The naming convention in XHTML-MOD uses suffixing of object names to indicate functionality, as described below.
Abstract attribute groups and attribute lists are suffixed with the ".attrib" and ".attlist" suffixes respectively.
Three different suffixes are used in content model names. They are ".content" for element content models, and ".class" or ".mix" for abstract content models.
Element names are not suffixed in XHTML-MOD. This document uses the notion of element types, which are complexTypes used to define elements and are suffixed with ".type". The ".type" suffix was used in XHTML-MOD for attribute data types. This is superfluous in XML Schema (since attribute types are arguments to the "type" attribute) and so the suffix is used in a different way in this framework.
This document establishes a convention for the internal structure of XHTML Schema modules. This convention provides a consistent and predictable way of organizing schema modules internally. This convention applies also to the hub document, which is itself simply a module of modules, albeit a somewhat specialized one.
Each schema module is composed of several components, some of which are required for functional reasons and some of which provide metadata as a convenience to the author. Not every component is included in every module.
Each module begins with a <xsd:schema> root element (after the optional xml declaration and DOCTYPE).
In the XHTML schema modules, the version number for the specific language being defined (e.g. "1.1") is used as the default value of the version attribute on the schema element.
This framework uses the value of "unqualified" for the value of the elementFormDefault attribute on the schema root element. Elements within the XHTML namespace do not need to use a namespace prefix.
After the root element each module contains an annotation element containing several documentation sections briefly describing the purpose of the module.
This is an annotation element that contains a short description of the module and its purpose.
An annotation element containing authoring and versioning information for the module should always be included.
The standard W3C copyright statement is included in each module through the use of an include element. An exception is the hub document, which contains the full copyright text.
This is a module specific documentation element providing detailed information about the module's contents, its organization, and any noteworthy items of interest to developers.
Module elements contain include statements, import statements, or other modules (or comments). They must precede any other definitions in the module.
These include groups with names ending in ".content", ".class", or ".mix".
These are suffixed with either ".attrib" or ".attlist".
These are complexType elements defining each element's type.
These define individual elements in the module.
Additional constraints on the internal structure of schema modules are:
Each module must contain include statements for other modules or data structure definitions, but not both.
Each module must include at least sections 1 and 2 above, as well either section 3 or some combination of sections 4-7.
The handling of namespaces in XML Schema is entirely different from that in XHTML-MOD. Namespaces are integral to XML Schema and their use in modularization arises naturally from the schema syntax.
One convention chosen for this framework is that the names of elements and attributes in the modules are unqualified i.e. no namespace prefix is required for XHTML elements.
This is set by using the value of "unqualified" on the elementFormDefault attribute of the xsd:schema element.
A consistent commenting convention has been imposed on the modules described here. The purpose of a commenting convention is to allow for generating documentation from the comments (as well as general comprehension). Documentation elements containing Annotation-level comments are assumed to be of the highest importance and should be used to denote information about the module itself, and for important notes for developers.
ModuleF-level comments are denoted as usual with SGML comment delimiters "<!--" and "-->". By means of this convention, modules can become self-documenting. Tools for extracting these comments and formatting them suitably may (hopefully) be developed in the future. | http://www.w3.org/TR/2002/WD-xhtml-m12n-schema-20021209/schema-framework.html | CC-MAIN-2016-26 | en | refinedweb |
But vim 7 is still in development, isn't it? Which spellchecker will it use, or can you choose?
It uses its own engine, based on OOo I think.
btw.: Is there a reason to set nocompatible?
When a ".vimrc" file is found while Vim is starting up, this option is switched off, [...].
Never knew that, thanks.
'compatible' 'cp'    boolean (default on, off when a .vimrc file is found)
                     global
                     {not in Vi}
    This option has the effect of making Vim either more Vi-compatible, or
    make Vim behave in a more useful way. [...] When a ".vimrc" file is
    found while Vim is starting up, this option is switched off, [...].
vim 7.0 will have spell checking internally.
set spelllang=en_US
set spell
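For reference, a minimal sketch of driving the built-in spell checker in Vim 7 (option and key names as documented in the Vim 7 help; the language value is just an example):

```vim
" enable the built-in spell checker for the current buffer
setlocal spell spelllang=en_us
" turn it off again
setlocal nospell
" normal-mode keys once spell is on:
"   ]s  jump to the next misspelled word
"   z=  show suggestions for the word under the cursor
"   zg  mark the word as good (add to the spellfile)
"   zw  mark the word as wrong
```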
syntax on
set nomodeline
set ww=<,>,[,]
filetype on
filetype indent on
filetype plugin on
set tabstop=4
set softtabstop=4
set shiftwidth=4
set expandtab
set autoindent
map ,s :w !~/.vim/pipe.sh bash <CR> <CR>
map ,p :w !~/.vim/pipe.sh python <CR> <CR>
nmap > :bn<CR>
nmap < :bp<CR>
au FileType python source ~/.vim/python.vim
:command -nargs=+ Pyhelp :call ShowPydoc("<args>")
function ShowPydoc(module, ...)
    let " . fPath
    :execute ":sp ".fPath
endfunction
let spell_auto_type = "html,tex,txt,none"
let spell_executable = "aspell"
let spell_insert_mode = 0
let spell_language_list = "de_DE,en_US"
let psc_fontface = "plain"
colorscheme ps_color
I still have to figure out, how to make vimspell auto check the file types mentioned. I still have to type :SpellAutoEnable at the beginning of each session.]]>
ah, ok - it seems "ttyfast" is automatically set depending on $TERM - it's on for xterm/rxvt/etc]]>
I haven't really noticed a difference. I'm still experimenting - just going through the reference manual and trying out random new things to see if they help.]]>
hmmm does ttyfast make a difference? I don't have it set... never seen it before.]]>
Thanks for that - very useful]]>
(mez): I've always wanted to mess with line number formatting, but can't seem to ever change it...
also, if you're a python coder (looking at your vimrc), you should check out python_calltips - great script for python]]>
Some very cool ideas here; seeing this thread inspired me to overhaul my vimrc. Here's what I've got so far:
" my ~/.vimrc file, 27.04.05 """""""""" general options """""""""" set nocompatible " turn off vi quirks filetype plugin indent on " filetype dependent indenting and plugins syntax on " turn on syntax highlighting set backspace=indent,eol,start " proper backspace behaviour set nobackup " don't create annoying backups set nostartofline " keep cursor in same column when moving set number " turn line numbers on set showmode " show whether in insert, visual mode etc set showmatch " indicate matching parentheses, braces etc set tabstop=4 " sets up 4 space tabs set shiftwidth=4 " use 4 spaces when text is indented set expandtab " insert spaces instead of tabs set softtabstop=4 " do the right thing when deleting indents set autoindent " indents line to the line above it set guifont=Monospace 12 " font to use in gvim set history=100 " remember this many commands set hlsearch " highlight search results set incsearch " incremental search while typing set mouse=a " enable mouse in all modes set ruler " show line and column in status line set showcmd " show partial command in status line set ignorecase " ignore case in search patterns set smartcase " override ignorecase if search has uppercase set whichwrap=<,>,[,] " cursor keys can wrap to next/previous line set textwidth=79 " 80 column page for ease of reading set ttyfast " for fast terminals - smoother (apparently) set hidden " don't have to save when switching buffers set guioptions-=T " no toolbar colors nedit " modified so bg is only slightly off-white """""""""" autocommand stuff """""""""" if has("autocmd") " return to last known cursor position when opening file autocmd BufReadPost * if line("'"") > 0 && line("'"") <= line("$") | exe "normal g`"" | endif endif """""""""" abbreviations and remaps """""""""" :abbreviate #! #!/usr/bin/env python """""""""" other stuff """""""""" " vim.org tip 867: get help on python in vim, eg :Pyhelp os :command -nargs=+ Pyhelp :call ShowPydoc("<args>") function ShowPydoc(module, ...) 
let " . fPath :execute ":sp ".fPath endfunction
A couple of things:
Does anyone know how to increase the space between the line numbers and the start of the line? I surely can't be the only person who wants this, but I've searched and I can't find any mention of it.
There are loads of new colour schemes available in one zip file here: … ipt_id=625]]>
another goody - recent vim tip:
"windows style - ctrl+shift enters visual mode nmap <c-s-left> vbge<space> nmap <c-s-right> vew<bs> nmap <c-s-down> v<down> nmap <c-s-up> v<up> imap <c-s-left> _<esc>mz"_xv`z<bs>obge<space> imap <c-s-right> _<esc>my"_xi<s-right><c-o><bs>_<esc>mz"_xv`yo`z imap <c-s-down> _<esc>mz"_xv`zo`z<down><right><bs><bs> imap <c-s-up> _<esc>mz"_xv`z<up>o`z<bs>o vmap <c-s-left> bge<space> vmap <c-s-right> ew<bs> vmap <c-s-down> <down> vmap <c-s-up> <up>
this is because using "behave=mswin" forces shift+<direction> to enter "select" mode (can't perform operations as in visual mode)]]>
oh, a new goodie:
vnoremap <BS> d
allows backspace to delete a selection of text in visual mode....]]>
thanks... I'll try it out]]>
two things:
a) you can use "<cr>" instead of "^M" to insert a carraige return... it's slightly more portable...
b) with your mappings, you can add <esc> in front of them, so they work in insert mode as well
ok let me see... played with this a bit last night
.vimrc
colorscheme elflord filetype plugin on set shellslash set grepprg=grep -nH $* filetype indent on set sw=2 set iskeyword+=: map <F2> :w<C-M> map <F11> :wq<C-M> map <F12> :q!<C-M> :abbreviate sig Martin Lefebvre^Memail: dadexter@gmail.com^Mweb: source /home/dadexter/.viabbrv
.viabbrv
:ab cc #include <stdio.h>^M^Mint main(int argc, char **argv) {^M^M return 0;^M} :ab php_sqlq $query = "SELECT * FROM ";^M$results = mysql_query($query) or die(mysql_error());^M :ab pkgbuild pkgname=^Mpkgver=^Mpkgrel=1^Mpkgdesc=""^Murl=""^Mlicense=""^Mdepends=()^Mmakedepends=()^Mconflicts=()^Mreplaces=()^Mbackup=()^Minstall=^Msource=($pkgname-$pkgver.tar.gz)^Mmd5sums=()^M^Mbuild() {^Mcd $startdir/src/$pkgname-$pkgver^M./configure --prefix=/usr^Mmake^Mmake DESTDIR=$startdir/pkg install^M}^M
I just had to make one for a PKGBUILD template | https://bbs.archlinux.org/extern.php?action=feed&tid=10908&type=atom | CC-MAIN-2016-26 | en | refinedweb |
These days hardware is hidden beneath a thick blanket of operating system code. Yet for me, and judging by Google searches for many other folks, the possibility to reach and experiment with hardware directly is a very exciting opportunity. I have created an exploratory platform consisting of two Intel-based computers connected via an RS232 interface. One computer, the Master, runs Windows and is used to control the second computer. The second one, the NakedCPU, runs essentially without any operating system and is therefore available for truly low-level experiments. My platform was featured on the cover of Circuit Cellar Magazine, issues 259 and 260. The current article is an adaptation of the original publication, which is available on my web site.
It seems that experimentation with a PC is limited to developing high-level software with the aid of numerous libraries and technologies that hide the hardware beneath layers and layers of code. Rarely, limited experimentation with PC hardware is possible, but one has to install drivers allowing some access to the hardware, because the OS naturally does not permit any low-level activity. The sad part is that such drivers are mysterious themselves. It is safe to say that hardware programming was well known to many computer professionals and enthusiasts in the 80s. Later, people forgot about it, while the technology leaped tremendously ahead. In this article, I try to bridge the gap in time and revive interest in hardware programming based on state-of-the-art technologies and concepts. There is a Russian saying: "Everything new is actually well-forgotten old".
This article is a result of my interest in the Intel CPU, chipset, I/O controller and other essential PC devices from the perspective of low-level hardware programming unobscured by an operating system and drivers. The motivation for this project was to reach out to people with inquisitive minds who would appreciate the possibility to directly experiment with the CPU, chipset and other hardware. Here I present NakedCPU: a facility providing full access to the hardware and CPU without any restrictions imposed by an operating system. Importantly, the processor will not be obscured by Linux, DOS or Windows, and it will be operating in its most interesting and powerful regime - the protected mode. In the text, the users are referred to as inquirers, because NakedCPU is made for researchers, i.e. devoted geeks, rather than regular users.
The other goal of this article is to provide the inquirers with a roadmap into navigating hardware documentation, which is confusing and difficult to find otherwise. I did not want to retell the documentation, because many computer concepts and technologies quickly become obsolete. With the roadmap, however, it will be easier to follow the newer technologies and documentation.
Let us think for a moment about one of the modern Intel CPU varieties, for instance the Intel Core 2 Duo. Impressively, this processor is capable of consuming up to 75A of current [1]! Also, it is not a simple processor: its documentation consists of 5 volumes totaling approximately 4200 pages [2]. The Intel CPU does not operate alone: it interfaces with a chipset, i.e. a Graphics and Memory Controller Hub (GMCH). The chipset on the other side is connected to an I/O Controller Hub (ICH). Interestingly, this arrangement is analogous to our nervous system with its brain, brain stem and spinal cord. The GMCH and ICH are processors themselves, containing hundreds of configuration and control registers. The documentation on the GMCH and ICH spans over 1400 pages [3, 4]. No wonder operating systems hide the actual hardware under a thick blanket of intermediate code!
NakedCPU is an experimental platform exposing the hardware internals of a PC. Experimentation with NakedCPU requires two computers (Figure 1). One is the master computer, having Windows and Visual Studio software, whose job is interacting with us and the second computer. The second computer, i.e. the NakedCPU, is connected to the master via an RS232 interface. The NakedCPU computer is booted up with a small startup code (provided here), which enables it to communicate via RS232 with the master. Upon startup, the NakedCPU expects two separate packages of bytes: one is a stream of Intel CPU opcodes to be executed (i.e. the executable) and the other one is the data to be processed. The executable can modify any part of memory, chipset registers, etc., and even overwrite the startup code. In other words, the freedom is yours.
NakedCPU will not come alive without some sort of startup code. At startup we have to accomplish two tasks: switch the CPU into the Protected Mode and begin listening on the serial port for two packets of bytes: executable and data. The easiest way to supply the NakedCPU with startup code is to prepare a bootable floppy disk with our own code. Certainly, one can also put this code onto a hard drive. The startup code (up to 512 bytes) is written in assembly language and must be stored in sector 0, i.e. the Master Boot Record (MBR) of the disk. It is difficult to use assembly language compilers and linkers such as MASM, since they tailor the executable to a particular OS. There is, however, a binary editor, HexIt [5], which among other things allows direct conversion of assembly commands into binary code. Using this editor, a binary file of the future MBR was created. The content of this file can be seen in the Appendix. "Anatomy of MBR" provides a detailed dissection of the content.
A small utility, "Firstsectwrite.exe" (see Appendix), was written to transfer this file into sector 0 of the disk. Although the code of this utility is quite simple, it deserves some attention. A Windows API call
CreateFile(TEXT("\\\\.\\A:")...)
opens raw communication with a disk, in this case floppy drive A, to allow writes into sector 0. It is important to note that this call will only be successful under an administrator account.
In this paper, a Dell Optiplex 760 computer was used to conduct the experiments. It had a floppy drive attached via USB and BIOS startup options allowed to boot up the computer from such a drive.
It may sound contradictory to the OS-free spirit of the article, but NakedCPU is booted up with a tiny (262-byte) 32-bit "operating system", NakedOS, which makes the NakedCPU capable of communicating with the outer world via the serial port. In fact, we did not compromise our principles of truly free exploration, because NakedOS is absolutely transparent and its code is completely presented in the Appendix. NakedOS defines several memory segments (Table 1), which are useful as an initial environment for the inquirer's executable. The Intel documentation [2] explains protected mode memory segments, the Global Descriptor Table (GDT) and the Interrupt Descriptor Table (IDT). In addition, NakedOS defines two software interrupts and a base vector for hardware interrupts. Note: the INT interrupts have nothing to do with DOS or BIOS; they are solely defined by our code.
Immediately after start, NakedOS is expecting two transactions: one for the executable code and another for data. Each transaction is a stream of bytes sent over the RS232 (see Figure 2). The first transaction is written into the memory segment "Target executable", while the second transaction goes into the "Extended memory" segment. After completion of the second transaction, NakedOS transfers control to the executable by a long jump:
jmp 00030:000000000
From that moment, in principle, any memory occupied by NakedOS can be overwritten by the activities of the inquirer's executable. The hardware interrupts are normally masked when NakedOS is running; however, the 8259 interrupt controller is set up (see the additional file MBRListingNakedOS.doc in the download archive) to handle the interrupts if the inquirer decides to unmask them. Detailed instructions on programming the interrupt controller are provided in the documentation for the I/O Controller Hub (ICH) [4].
The important issue remains how to send executable code to the NakedCPU to conduct experiments. Recall the beginning of the article, where it says that two computers are involved. The master has a Visual C++ project, NakedCPU Explorer, which acts as a "shell" allowing inspection and modification of chipset registers and memory. The code defines a class having a constructor, which provides __asm{} brackets to be filled by the inquirer with executable code:
class Ports : public NakedCPUcode
{
public:
Ports()
{
DWORD pe, ps;
__asm
{
mov pe, offset end //save end and start of the code
mov ps, offset start //to be sent to the NakedCPU
jmp end //master jumps over the NakedCPU code
///////////////////////////////////////////
start: mov ax, 0x28 //loading
mov es, ax //data
mov ds, ax //and stack
mov ax, 0x18 //segment registers
mov ss, ax //initializing
mov esp, 0x3fe //stack pointer
xor edi, edi
mov eax, 'OLEH' //NakedCPU Explorer says HELO
stosd
xor esi, esi
mov ecx, 4
INT 0x21
... //here goes rest of the code
_emit 0xEA
_emit 0x00
_emit 0x00
_emit 0x00
_emit 0x00
_emit 0x10
_emit 0x00
end: nop
}
if(!PrepareCode(ps, pe)) delete this;
}
};
Since Microsoft Visual C++ is running on the master PC, which has an Intel CPU, the compiler will translate the assembly code into appropriate opcodes, which are naturally suitable for the NakedCPU! Specifically, this class is derived from another class, NakedCPUcode, which performs the preparatory work by extracting the opcodes produced from the code in the __asm{} brackets and making them available for sending over to the NakedCPU. Note, the NakedCPU only receives the code between the "start" and "end" labels. It is important to understand that the master computer will not execute the code in the __asm{} brackets; it simply jumps over it. The strange keyword "_emit" allows placing opcodes directly by their hexadecimal values - for some reason a long jump is not permitted by the Visual Studio compiler.
Any other executable code, besides the NakedCPU Explorer, can be prepared and sent to the NakedCPU computer. The actual sending of the NakedCPU Explorer code is accomplished in the following way:
Ports ncd;
SerialComm Com1;
if(!ncd.UploadEx(&Com1))
throw 1;
The project also defines a class SerialComm and a function SendNakedCPUdataRecvResponse to send and receive data. It is worthwhile to examine the straightforward code of the project to understand the details of communication with the NakedCPU. Besides serving as an example, NakedCPU Explorer sends an executable to the NakedCPU which permits interactive examination and modification of various chipset and I/O controller registers. NakedCPU Explorer offers eight commands: "write", "write32", "read", "read32", "pci", "memread", "memwrite" and "quit". The first four commands will ask for a port address, i.e. an address in the CPU I/O space. With these commands, the NakedCPU will write to and read from a GMCH or ICH register, one or four bytes at a time. The fifth command will ask for Bus (decimal), Device (decimal), Function (decimal) and Register (hexadecimal) values. The values will be packed into the port 0xCF8 to open a "window" into the PCI configuration space, accessible via port 0xCFC. Details on addressing PCI devices are provided in the chipset documentation [3]. Memread and memwrite allow reading and writing double words from and to memory, respectively.
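The packing of the Bus/Device/Function/Register values into port 0xCF8 follows the standard Type-1 PCI configuration address format. A minimal sketch (the helper name is mine, not from the NakedCPU Explorer source):

```cpp
#include <cstdint>

// Pack bus/device/function/register into the PCI configuration
// address written to port 0xCF8: enable bit 31 set, register
// aligned to a 4-byte boundary, then read the data via port 0xCFC.
uint32_t pciConfigAddress(uint32_t bus, uint32_t dev,
                          uint32_t func, uint32_t reg)
{
    return 0x80000000u        // enable bit
         | (bus  << 16)       // bus number, bits 23:16
         | (dev  << 11)       // device number, bits 15:11
         | (func << 8)        // function number, bits 10:8
         | (reg  & 0xFCu);    // register, bits 7:2, dword aligned
}
```

For the MAC at B0:D25:F0, register 0x18 (used later in the article), this yields 0x8000C818 as the value written to 0xCF8.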
Note: NakedCPU Explorer does not use any hidden "helper" drivers or libraries. The code is small and entirely transparent for the inquirer's perusal.
The following sections describe experiments with direct access to the hardware and CPU.
Although it may sound trivial, making a PC speaker produce sound involves understanding of timers and some low-level work. Ironically, there seems to be no way to make the speaker beep using the Windows API on Vista and the XP 64-bit version, because Microsoft decided that the speaker hardware is obsolete [7]. Certainly, in the past, DOS programmers must have known how to do it, but now it seems to be forgotten. Reading the ICH documentation [4] and conducting a few experiments resulted in a working protocol built around the 8254 timer and the speaker gate bits in port 0x61.
The parallel port is becoming more and more obsolete; nevertheless, it offers a possibility to read and send data over 8 lines. Strangely, the ICH documentation does not say anything about programming the parallel port. Browsing the Internet reveals that there is still some interest in the parallel port, and programming information is available. Connect an LED to the D2 port line via a 470 Ohm resistor and follow the protocol below, which demonstrates writing to and reading from the parallel port.
Which instruction does the processor execute first after power-on? The documentation [2] says that the processor reads its first instruction from the address 0xFFFFFFF0, i.e. 16 bytes below 4GB. Attempting to examine this address with a debugger is fruitless (tested, did not work). In order to reach this high address, which is in the range of high BIOS, a small executable for the NakedCPU was prepared. The executable defined a segment of memory addressing high BIOS and sent the content of the 16 bytes below 4GB back to the master computer. Quite as expected, there was a short jump, to approximately 30KB below. The executable was modified to download the entire chunk of memory from 30KB below 4GB up to the top. Especially curious inquirers are welcome to investigate the content, which was saved and is available for download. As a general impression, one can see many accesses to the PCI bus and calls to the CPUID instruction. It certainly makes sense, because various devices have to be set up and the BIOS is attempting to determine which processor is being used.
Communication via the network is accomplished using the Media Access Controller (MAC). Documentation is available on the Intel web pages. It is also helpful to first read the first three chapters of the IEEE 802.3-2008 standard to get an idea of the low-level network lingo as well as the packet format sent over the wires [8].
Once again we will use NakedCPU Explorer to investigate the internals of the Media Access Controller and conduct some experiments. The MAC requires data structures in memory and configuration transactions via the I/O address space. First, we must determine the I/O address of the MAC, which is called Base Address 2 (BAR2). The address is stored in the PCI configuration space at bus 0, device 25, function 0 (B0:D25:F0), register 0x18. By the way, there is some confusion in the documentation referring to this register: the ICH documentation calls it MBARC [4], while the MAC documentation [6] calls it BAR2. Conduct the PCI transaction with the NakedCPU Explorer "pci" command, entering B0:D25:F0 and register 0x18.
The number 0xECC1 means that the I/O address is actually 0xECC0, with the 0th bit hardwired to 1 to indicate that the address is indeed in the I/O space as opposed to being memory-mapped [6]. The latter indication is important because all configuration and communication with the MAC can also be done using memory-mapped registers, which is faster; however, for our experiments it is sufficient to use the I/O space, because it is simpler and accomplishes the same results as memory-mapped operations.
In order to interact with the MAC, the inquirer writes the address of a register within the MAC into the BAR2 I/O address (0xECC0). After that, BAR2+0x4 (0xECC4) becomes a window to the value of that MAC register. It is important to mention that BAR2 and BAR2+0x4 accept only 32-bit double word read / write operations. MAC registers have plenty of bits to deal with, and some bits are dependent on one another. It is very difficult to understand the settings just by looking at the hexadecimal value of a register. An Excel worksheet, InterpretRegister.xls (available for download), features a macro that greatly helps in this situation. Specifically, a table containing bit descriptions should be copy-pasted from the documentation PDF, and a hexadecimal value of a register will be converted into binary 1s and 0s in the appropriate cells right next to the description text, see Figure 4.
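The same kind of decoding can be done in code. The sketch below pulls the named fields out of the MAC STATUS register; the bit positions (FD in bit 0, LU in bit 1, SPEED in bits 7:6) follow the Intel GbE controller datasheets, but verify them against the documentation for your specific controller:

```cpp
#include <cstdint>

struct MacStatus {
    bool fullDuplex;   // bit 0 (FD)
    bool linkUp;       // bit 1 (LU)
    int  speedMbps;    // bits 7:6 (00=10, 01=100, 10 or 11=1000)
};

MacStatus decodeStatus(uint32_t s)
{
    static const int speeds[4] = {10, 100, 1000, 1000};
    MacStatus r;
    r.fullDuplex = (s & 1u) != 0;
    r.linkUp     = ((s >> 1) & 1u) != 0;
    r.speedMbps  = speeds[(s >> 6) & 3u];
    return r;
}
```

Decoding the value 0x80080683 observed later in the experiments reproduces exactly what the text reports: full duplex, link up, 1 Gbps.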
Let us examine the control CTRL(0x0) and status STATUS(0x8) registers; the address of each register is given in parentheses. After power-on with the network cable unplugged:
This particular bit constellation indicates, among other things, that automatic configuration of speed and full/half duplex is enabled.
These bits indicate that no link is established but initialization is complete.
After connecting the master computer with the NakedCPU, the CTRL register stays the same, as expected, while the status register changes to 0x80080683. The new value means full duplex communication, an established link and 1Gbps speed. The master computer running Windows XP reported the same communication parameters, which indicates that the NakedCPU network interface was able to negotiate with the master computer's interface on the hardware level.
In this section, we will experiment with reading network packets originating from the master computer. It was found that when the master computer detected a live NakedCPU via the network cable, Windows began generating DHCP requests. These requests are attempts to obtain an IP address and other high-level network settings, because Windows assumes (erroneously) that the NakedCPU is a router or a network server. Although Windows is mistaken, this is perfectly fine for our experiments, because we can catch these packets and examine them.
The MAC uses direct memory access to store the received data. We have to create several descriptors, which will tell the MAC where to write the data. Thus, two memory ranges are required: one for the descriptors and the other for the packets. Referring to Table 1, one can see that there is an area of memory above the address 0x100000 available for the inquirer's data. Bearing in mind that NakedCPU Explorer uses a tiny bit of that memory to store incoming commands, we can safely use addresses above 0x100500. It is sufficient to create two descriptors for the initial experiments. A descriptor is a data structure of four double words (16 bytes). The first two double words are the 64-bit physical address of the location where the packet is to be stored. With the ability of the NakedCPU Explorer to write into memory locations, let us create the descriptors at the address 0x100500, pointing to two 512-byte long buffers located at the addresses 0x101000 and 0x101200. A collection of descriptors is called a "queue". After the MAC finishes storing packets, it will update the descriptors to indicate the received packet size, errors and several other parameters.
The command "memwrite" asks for the address and the number of double words to be written. Note, writing below 0x100000 will cause a general protection fault and a reboot of the NakedCPU.
The MAC has to know the location of the descriptors, the size of the receive buffers and the type of packets to receive. This information has to be stored in several MAC registers. RDBAL0(0x2800) and RDBAH0(0x2804) - low and high portions, respectively, of the 64-bit physical address of the base of the queue. RDLEN0(0x2808) - length of the memory buffer allocated for the queue. RDH0(0x2810) and RDT0(0x2818) - head and tail pointers, respectively. RFCTL(0x5008) - receive filter control register. RXCSUM(0x5000) - receive checksum control register. Before setting up these registers, bit 26 of the CTRL(0x0) register has to be set to 1, which will cause a MAC reset. After setting up all the registers, data reception is initiated by writing into RCTL(0x100) to set the "enable" bit, the size of the receive buffers, the reception mode and the type of the descriptors.
For the CTRL, RCTL, RFCTL and RXCSUM registers, the worksheet InterpretRegister.xls shows the values (and bit states) that are going to be used in our experiments. The meaning of the value for RDLEN is somewhat confusing. According to the documentation, the length of the queue buffer must be a multiple of 128, which means at least eight descriptors (128 / 16) must be in the queue. We have only two descriptors, however. I have determined experimentally that it is not a problem to tell the MAC that the queue buffer is larger than it needs to be, as long as the RDT0 register is pointing to the end of the actual queue. Hence, for our particular experiment we must set RDLEN = 0x80, RDT0 = 0x2.
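The RDLEN arithmetic is easy to check in code: the declared queue length must round up to a multiple of 128 bytes, so even a two-descriptor (32-byte) queue declares 0x80. A tiny sketch (the helper name is mine):

```cpp
#include <cstdint>

// Smallest legal RDLEN value for a queue of 16-byte descriptors:
// round the byte count up to the next multiple of 128.
uint32_t rdlenFor(uint32_t nDescriptors)
{
    uint32_t bytes = nDescriptors * 16;
    return (bytes + 127) / 128 * 128;
}
```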
Before the packet is received by the NakedCPU, the Master computer should not be sending DHCP packets. To suppress the DHCP packets originating from the Master computer, the following should be entered in the Windows command-line tool under the Administrator account:
ipconfig /release
Next, copy and paste the columns of Table 2 into the NakedCPU Explorer in the order I - V. Note that at the end of the 4th step, the MAC is ready for enabling reception. That step ends with reading the status register, which should result in the value 0x80683. This value is similar to the previously described 0x80080683, with the difference that bit 31 is cleared, which indicates that the DMA clock cannot be lowered to ¼ of its value. The reason why the MAC changed its "mind" concerning the DMA clock is not known, but this is not relevant for our experiments.
To initiate the Master sending packets, type the following in the command-line window under the Administrator account:
After reading from the network, the MAC updates the two descriptors. With the memread command, observe the new values by reading 8 double words beginning from the address 0x100500. You will see that the descriptor's address field has changed and two additional values have appeared. Figure 5 shows the new values that my computer produced. Detailed information about the fields is provided in the MAC documentation [6]. Briefly, the lengths of the packets are 342 bytes (0x156); no errors occurred, the descriptors are marked as "done" and the entire packet was able to fit in the buffer (0x20073).
The MAC stored the two actual packets at the addresses 0x101000 and 0x101200, respectively; their contents are present in InterpretRegister.xls, "Packets" worksheet. Figure 6 shows the beginning of the stored data array and the order of transmission. The first six bytes, marked red, are the destination (broadcast) address, which are followed by the source address and the two-byte length / type field. The latter field is transmitted with the most significant byte first [8], which makes it 0x0800. If the value of the length / type field is less than or equal to 0x05DC, then it indicates the length of the packet, otherwise its type (Ethertype). Web pages at standards.ieee.org in theory provide specific Ethertype values; however, it is virtually impossible to find the actual list of values. Luckily, Wikipedia points to an exact URL in the innards of the ieee.org site [9]. According to the standard, the Internet Protocol (IP) is designated with the Ethertype 0x0800. Apparently, the next step is to investigate the format of the Internet Protocol, which is described in RFC894, which in turn points to RFC791 [10].
The fields of the IP header are transmitted in a similar way to the previously described length / type field, i.e. most significant bit and most significant byte first. In contrast to the intuitive notation, where bit 0 is the least significant bit, RFC791 uses the opposite, with bit 0 being the most significant bit. Table 3 shows the continuation of the received packet and the assignment of its specific values to the fields of the first 32 bits of the IP header. The important fields are IHL and Total Length. IHL indicates the number of 32-bit words in the header. In our case it is 5, which means that according to RFC791 the Options and Padding fields are omitted. It is interesting to note that the MAC reported receiving 342 bytes while the IP header indicated 328 (0x0148) bytes. The difference of 14 bytes makes perfect sense, because it comprises the two bytes of the length / type field plus 2*6 bytes of destination and source hardware addresses.
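Parsing the start of such a frame can be sketched in a few lines; the 14-byte Ethernet header and the big-endian fields reproduce the arithmetic above. The sample bytes are illustrative, modeled on the values reported in the text (broadcast destination, Ethertype 0x0800, IHL 5, total length 0x0148):

```cpp
#include <cstdint>

// Read a big-endian 16-bit field from a byte stream.
uint16_t be16(const uint8_t *p) { return (uint16_t)((p[0] << 8) | p[1]); }

struct EthIpInfo {
    uint16_t ethertype;   // Ethernet length/type field, offset 12
    uint8_t  ihl;         // IP header length in 32-bit words
    uint16_t totalLength; // IP datagram length in bytes
};

EthIpInfo parse(const uint8_t *frame)
{
    EthIpInfo r;
    r.ethertype   = be16(frame + 12);  // after the two 6-byte MAC addresses
    r.ihl         = frame[14] & 0x0F;  // low nibble of the version/IHL byte
    r.totalLength = be16(frame + 16);  // IP Total Length field
    return r;
}

static const uint8_t sample[20] = {
    0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,   // destination: broadcast
    0x00,0x11,0x22,0x33,0x44,0x55,   // source MAC (illustrative)
    0x08,0x00,                       // Ethertype: IP
    0x45,0x00, 0x01,0x48, 0x0F,0xE8  // version/IHL, TOS, total length, ident
};
```

Running parse over the sample gives Ethertype 0x0800, IHL 5 and total length 0x148 (328), and 328 + 14 indeed equals the 342 bytes the MAC reported.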
All other fields of the IP header are easy to map following this example. Specifically, the field values are: Identification - 0x0fe8 (packet 1), 0x0fe9 (packet 2); Flags and Fragment Offset - 0; Time to Live - 0x80; Protocol - 0x11; Header Checksum - 0x29be (packet 1), 0x29bd (packet 2); Source IP address - 0.0.0.0; Destination IP address - 255.255.255.255. It is understandable that the master PC is asking for an IP address while doing the dynamic host configuration process, therefore the source IP address is all zeroes. The destination address is 255.255.255.255, which is a broadcast address, similar to the hardware broadcast address (6 bytes, all 255).
The value of the Protocol field is 0x11 (17); according to RFC790 it refers to the User Datagram Protocol (UDP) - the next level of data encapsulation - which is described in RFC768. According to that document, a UDP header contains four 16-bit words. In the order of transmission, these words are source port, destination port, length and checksum. The values received by the NakedCPU in my experiment were: source port - 0x0044, destination port - 0x0043, length - 0x0134, checksum - 0x5c1a (packet 1) and 0x581a (packet 2). The length of 308 (0x134) bytes makes sense, since the IP header reported the total datagram length of 328 bytes minus the 20 bytes occupied by the IP header. The source and destination port numbers should have been explained in RFC790; however, it turned out that a long chain of other documents obsoletes this one. At the end of the chain, it is suggested to look at the website of the Internet Assigned Numbers Authority (IANA), where, unfortunately, the information is not well organized. It was possible, however, to find among the obsolete RFCs that port numbers 67 (0x43) and 68 (0x44) correspond to the Bootstrap Protocol, server and client ports respectively. The Bootstrap Protocol is described in RFC1542, which points to the DHCP protocol, see RFC2131. A summary of all fields discussed in this section is provided in Figure 7.
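The UDP header arithmetic can be verified the same way: four big-endian 16-bit words. The sample bytes below are the values reported above for packet 1:

```cpp
#include <cstdint>

// Read a big-endian 16-bit field from a byte stream.
uint16_t be16(const uint8_t *p) { return (uint16_t)((p[0] << 8) | p[1]); }

struct UdpHeader { uint16_t srcPort, dstPort, length, checksum; };

// p points at the first byte of the UDP header (after the IP header).
UdpHeader parseUdp(const uint8_t *p)
{
    UdpHeader h;
    h.srcPort  = be16(p);
    h.dstPort  = be16(p + 2);
    h.length   = be16(p + 4);
    h.checksum = be16(p + 6);
    return h;
}

static const uint8_t udpSample[8] = {
    0x00,0x44,  // source port 68: BOOTP/DHCP client
    0x00,0x43,  // destination port 67: BOOTP/DHCP server
    0x01,0x34,  // length 308 = 328-byte IP datagram minus 20-byte IP header
    0x5C,0x1A   // checksum (packet 1)
};
```

Parsing the sample reproduces the numbers in the text: ports 68 and 67, and a UDP length of 308 bytes, which is exactly 328 - 20.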
It is possible to directly experiment with the Intel CPU and other PC hardware without any layers of unknown intermediate code intended to make our life "easier". As of yet, the most comprehensive documentation exists for the processor itself [2]. The other hardware is not well documented, which is why it takes a significant amount of effort to gather pieces of information from the Internet and conduct experiments directly. The old books that dealt with hardware are DOS oriented and seriously obsolete. The new hardware is hidden behind layers of unknown code.
In theory, the NakedCPU platform enables developers to create task-specific applications using only the needed components. For instance, if we are talking about a large-scale database, there is no need to support a GUI, USB plug and play, audio cards, .NET and many other things. An additional bonus of the NakedCPU platform is immunity to viruses. Like in biology, where flexible viruses attack well-evolved organisms, computer viruses attack well-developed operating systems. With the NakedCPU, on the other hand, a particular task-specific solution can be very unique; hence the virus creators will simply not have enough information to explore potential security holes.
The original publication is available on my web site.
Address Opcode Mnemonic
00000000 EB3C jmp short 00000003E
.............some remnants from FAT...............................
0000003E FA cli
0000003F 33C0 xor ax,ax
00000041 8ED0 mov ss,ax
00000043 BC007C mov sp,07C00
00000046 16 push ss
00000047 07 pop es
00000048 0E push cs
00000049 1F pop ds
0000004A E80000 call 00000004D
0000004D 89E5 mov bp,sp
0000004F 8B5E00 mov bx,[bp+000]
00000052 0F01976D01 lgdt q.[bx+0016D]
00000057 89DE mov si,bx
00000059 81C67301 add si,0173
0000005D B94000 mov cx,040
00000060 31FF xor di,di
00000062 FC cld
00000063 F3 repe
00000064 A4 movsb
00000065 BF0008 mov di,0800
00000068 89DE mov si,bx
0000006A 83C633 add si,033
0000006D B93B01 mov cx,013B
00000070 F3 repe
00000071 A4 movsb
00000072 0F01E0 smsw ax
00000075 0D0100 or ax,01
00000078 0F01F0 lmsw ax
0000007B EA00001000 jmp 00010:00000
-----from here 32-bit code segment ------------------------------
00000080 66B81800 mov ax,018
00000084 8ED0 mov ss,ax
00000086 BCFE030000 mov esp,03FE
0000008B 66B80800 mov ax,08
0000008F 8EC0 mov es,ax
00000091 E492 in al,092
00000093 0C02 or al,02
00000095 E692 out 092,al
00000097 66BAFB03 mov dx,03FB
0000009B B083 mov al,083
0000009D EE out dx,al
0000009E 66BAF803 mov dx,03F8
000000A2 66B80600 mov ax,06
000000A6 66EF out dx,ax
000000A8 66BAFB03 mov dx,03FB
000000AC B003 mov al,03
000000AE EE out dx,al
000000AF 31C0 xor eax,eax
000000B1 BF00020000 mov edi,0200
000000B6 B900020000 mov ecx,0200
000000BB F3 repe
000000BC AA stosb
000000BD BF00030000 mov edi,0300
000000C2 BE14010000 mov esi,0114
000000C7 0E push cs
000000C8 1F pop ds
000000C9 0F011E lidt q.[esi]
000000CC 83C606 add esi,06
000000CF B904000000 mov ecx,04
000000D4 F3 repe
000000D5 A5 movsd
000000D6 66B83800 mov ax,038
000000DA 8EC0 mov es,ax
000000DC 8ED8 mov ds,ax
000000DE 31FF xor edi,edi
000000E0 CD20 int 020
000000E2 66B82800 mov ax,028
000000E6 8EC0 mov es,ax
000000E8 8ED8 mov ds,ax
000000EA 31FF xor edi,edi
000000EC CD20 int 020
000000EE 6631C0 xor ax,ax
000000F1 8EC0 mov es,ax
000000F3 8ED8 mov ds,ax
000000F5 66BA2000 mov dx,020
000000F9 B011 mov al,011 //ICW1
000000FB EE out dx,al
000000FC B028 mov al,028 //ICW2
000000FE 42 inc edx //IRQ0 base addr 0x28
000000FF EE out dx,al
00000100 B004 mov al,04 //ICW3: Slave
00000102 EE out dx,al //connected to pin 2
00000103 B001 mov al,01 //ICW4
00000105 EE out dx,al
00000106 EA000000003000 jmp 00030:000000000
.............random bytes......................................
00000140 66BAFD03 mov dx,03FD
00000144 EC in al,dx
00000145 A801 test al,01
00000147 74FB jz short 000000144
00000149 66BAF803 mov dx,03F8
0000014D EC in al,dx
0000014E AA stosb
0000014F 50 push eax
00000150 66BAFD03 mov dx,03FD
00000154 EC in al,dx
00000155 A820 test al,020
00000157 74FB jz short 000000154
00000159 58 pop eax
0000015A 66BAF803 mov dx,03F8
0000015E EE out dx,al
0000015F C3 ret
00000160 57 push edi
00000161 B904000000 mov ecx,04
00000166 E8D5FFFFFF call 000000140
0000016B E2F9 loop 000000166
0000016D 5F pop edi
0000016E 8B0F mov ecx,[edi]
00000170 85C9 test ecx,ecx
00000172 7409 jz short 00000017D
00000174 51 push ecx
00000175 E8C6FFFFFF call 000000140
0000017A E2F9 loop 000000175
0000017C 59 pop ecx
0000017D CF iretd
0000017E AC lodsb
0000017F E8CBFFFFFF call 00000014F
00000184 E2F8 loop 00000017E
00000186 CF iretd
.............random bytes.........................................
00000194 FF01
00000196 0002
00000198 0000
0000019A E000
0000019C 1000
0000019E 008E0000
000001A2 FE00
000001A4 1000
000001A6 008E0000
000001AA 0000
000001AC 0000
000001AE 0000
000001B0 0000
000001B2 0000
000001B4 0000
000001B6 0000
000001B8 0000
000001BA FF01
000001BC 0000
000001BE 0000
000001C0 0000
000001C2 0000
000001C4 0000
000001C6 0000
000001C8 FF03
000001CA 0000
000001CC 00920000
000001D0 0A13
000001D2 0008
000001D4 009A4000
000001D8 FF03
000001DA 0004
000001DC 00924000
000001E0 FF0F
000001E2 00800B92
000001E6 0000
000001E8 007800
000001EB 0010
000001ED 92
000001EE 8000FF
000001F1 FF3B
000001F3 0900
000001F5 98
000001F6 40
000001F7 00FF
000001F9 FF3B
000001FB 0900
000001FD 92
000001FE 0000
#include "stdafx.h" // precompiled header (assumed to pull in <windows.h>, <tchar.h>, <iostream> and <algorithm>)
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
TCHAR destDisk[] = TEXT("\\\\.\\X:");
cerr << "\n firstsectwrite.exe E: nakedos_x_y.bin";
if (argc != 3)
{
cerr << "\nInput parameters error";
return -1;
}
if(_tcslen(argv[1]) != 2)
{
cerr << "\nInput parameters error";
return -1;
}
_tcscpy(&destDisk[4], argv[1]);
HANDLE hD=CreateFile(destDisk,GENERIC_WRITE, FILE_SHARE_WRITE, NULL,
OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL | FILE_FLAG_NO_BUFFERING, NULL);
HANDLE hS=CreateFile(argv[2],GENERIC_READ, NULL, NULL,
OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
if((INVALID_HANDLE_VALUE == hS) || (INVALID_HANDLE_VALUE == hD))
{
cerr << "error";
return -1;
}
BYTE buf[512];       // sector-sized buffer for the boot image
BYTE bufproof[512];  // read-back buffer used to verify the write
DWORD dwCopied;
ReadFile(hS, buf, 512, &dwCopied, NULL);   // read the first 512 bytes of the image file
WriteFile(hD, buf, 512, &dwCopied, NULL);  // write them to the first sector of the volume
CloseHandle(hD);
CloseHandle(hS);
hD=CreateFile(destDisk,GENERIC_READ, FILE_SHARE_WRITE, NULL,
OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL | FILE_FLAG_NO_BUFFERING, NULL);
if(INVALID_HANDLE_VALUE == hD)
{
cerr << "error";
return -1;
}
ReadFile(hD, bufproof, 512, &dwCopied, NULL);
if(!equal(buf, buf + 512, bufproof))
{
cerr << "write error";
return -1;
}
return 0;
}
| http://www.codeproject.com/Articles/437968/A-Platform-for-Unrestricted-Low-level-Hardware-Pro | CC-MAIN-2016-26 | en | refinedweb |
paramesh (Evolution Platform Developer Build 5.6.50428.7875, updated 2005-06-12T23:15:00Z)

Hyderabad Happenings

So, our commitments to TechEd 2005, India and Europe. [...]

Both VJ# and the Java Language Conversion Assistant are pretty much done, save for the occasional corner case bug that is reported. The team is putting its final touches on the products. As you may have read on MSDN, in VS 2005, VJ# includes [...]

And while all of this is going on, I have developed a new interest: solving Su Doku puzzles. Actually, I can better characterize myself and call out that this has become a craze for me. Su Doku has become a phenomenon in India, what with almost every newspaper publishing a puzzle every day. I try to solve the puzzles in the Times of UK, as well. They have some good ones, there. I believe that the minimum number of populated entries for a 9*9 puzzle, to be able to solve it, is 19. Is this true?

-- Paramesh.V

TechEd, 2005, Bangalore and VSTS

[...] The overall feedback and feeling seems to be that the productivity of software teams will be significantly positively impacted by VSTS. This is exciting for our customers, Microsoft and, of course, the teams that are working on VSTS. Exciting times ahead!!

-- Paramesh.V

Developer Tools, India

Hello, there! I work out of Microsoft's India Development Center at Hyderabad, India. I am the Director of the Developer Tools group, which is part of the Developer Division, home to Visual Studio and the .NET framework. I moved back to India after a long stint in Redmond, for family reasons, in 2001. Although most of my career at Microsoft has been on operating systems, I took the plunge into the developer tools arena in late 2002. I should confide that I was nervous about a large change like this, but I have been having a ton of fun.

My original responsibility in the Developer world was to manage the Visual J# effort. You may know that Visual J#, which ships as one of the 4 languages with Visual Studio, was built from scratch out of Microsoft's India Development Center. Visual J# is a tool that Java-language programmers can use to build applications and services to run on the .NET Framework. Visual J# targets the common language runtime and can be used to develop .NET applications, including XML Web services and Web applications, making full use of the .NET Framework. In addition to Visual J#, I also own the Java Language Conversion Assistant tool, which also ships as part of Visual Studio. This tool helps convert Java applications to C# and .NET.

A big part of my charter now includes components that ship as part of the new Visual Studio Team System (VSTS). VSTS will make its debut with Visual Studio 2005 later this year. My team is building some key technologies that ship with the server components of VSTS (called the Team Foundation Server). Team Build, which is basically a "Build in a Box", is one of the significant pieces that we are creating out of my team. The intent of Team Build is to help customers establish a build lab without going through the process of writing a bunch of custom scripts. A lot of information gets generated as part of the build process that touches all the different tools we are providing. The intent is to unite all the components to add value to the suite. We are also building conversion tools to migrate existing source code and work item tracking software to the new-generation source code control system and work item tracking software that ship with VSTS.

Overall, I am super excited to be part of the effort to build cool technologies that reach out and benefit millions of developers, testers, project managers and architects, worldwide.

-- Paramesh.V
| http://blogs.msdn.com/b/paramesh/atom.aspx | CC-MAIN-2016-26 | en | refinedweb |
note PodMaster.

I'd really like to stress the importance of reading the README/INSTALL that
come with modules. They'll often have common problems (and usually their
solutions) others have had trying to compile said extension on win32.

A common portability mistake is hardcoding object file names, as in

    'OBJECT' => '$(O_FILES) ' . " Foo.o Bar.o Baz.o "

which won't fly on windows. The right way to write it would've been as

    "Foo$Config{obj_ext} Bar$Config{obj_ext} Baz$Config{obj_ext}"

Another common portability issue is #include <unistd.h>. Lots of extensions
include it, but windows has no such beast, and it belongs in an #ifndef.
Simply comment it out, and there'll be a good chance the extension will
compile.

Also, if a required library will not build on windows, all hope is not lost.
You can always get MinGW (aka cygwin), compile the required library, and link
with it. Read "How can an MSVC program call a MinGW DLL, and vice versa?" on
how to do it.

I recently did that with Math::GMP (it's up on my repository), cause it's a
requirement for Net::SSH::Perl. If you can live with running GMP via cygwin,
you can have Math::GMP on windows.

It is also important to make sure, when compiling libraries required for a
module to work (like in the case of pure-db, node 249588), that your compiler
options match those of your perl binary. CL /? will reveal the following
possible options:

    -LINKING-
    /MD link with MSVCRT.LIB
    /MDd link with MSVCRTD.LIB debug lib
    /ML link with LIBC.LIB
    /MLd link with LIBCD.LIB debug lib
    /MT link with LIBCMT.LIB
    /MTd link with LIBCMTD.LIB debug lib
    /LD Create .DLL
    /F<num> set stack size
    /LDd Create .DLL debug libary
    /link [linker options and libraries]

What you want is what your perl has:

    perl -V:ccflags
    ccflags='-nologo -O1 -MD -DNDEBUG -DWIN32 -D_CONSOLE -DNO_STRICT -DHAVE_DES_FCRYPT -DPERL_IMPLICIT_CONTEXT -DPERL_IMPLICIT_SYS -DPERL_MSVCRT_READFIX';

So you'd wanna make sure the -MD option is present.

A tell-tale sign that the library you're trying to link to was not compiled
with the -MD option is an "unresolved external symbol _pctype".

If you're faced with an error, google it, check [...], check [...] (the list
archives as well), because chances are somebody has already encountered it and
there is a workaround available, and if there isn't, simply report it to the
author, cause he'll usually be able to help you.

Update: If you get unresolved external symbol _snprintf, you'll need (this is
not like the _pctype issue):

    #ifdef WIN32
    #define snprintf _snprintf
    #endif

| http://www.perlmonks.org/index.pl?displaytype=xml;node_id=249818 | CC-MAIN-2016-26 | en | refinedweb |
On Thu, Jun 30, 2011 at 11:52:17AM -0700, Junio C Hamano wrote:

> I would have to say that it would boil down to "re-do the merge" whichever
> way we implement it, and it is not necessarily a bad thing.
>
> There are ideas to implement a mode of "git merge" that works entirely
> in-core without touching the working tree (it may have to write temporary
> blobs and possibly trees to the object store, though). It would let sites
> like github to let its users accept a trivial pull request that can merge
> cleanly on site in the browser without necessarily having to have a local
> checkout used for conflict resolution.
>
> If such an "in-core merge" feature is implemented cleanly in a reusable
> way, it would be just the matter of comparing the output from it with the
> actual committed result.

Below is my unpolished, probably-buggy-as-hell patch to do the in-core
content merge. But there are still two sticking points:

  1. This is a dirt-simple 3-way content merge. The actual merge would
     likely have used some more complex strategy. So you're going to see
     discrepancies between a real merge, even a correct one, and what this
     produces (e.g., in the face of renames detected by merge-recursive).

  2. This just makes read-tree do the content merge where it doesn't
     conflict, and leaves the conflicted cases unmerged in the index. Which
     is of course the only sane thing to put in the index. But what do you
     want to do about comparing entries with conflicts, which are the
     really interesting bits? Compare the result to the version of the file
     with conflict markers? If so, where do you want to store the file with
     conflict markers? I guess we could generate an in-core index with the
     conflict markers that we are just going to throw away. That seems
     pretty hack-ish.

-Peff

-- >8 --
Subject: [PATCH] teach read-tree to do content-level merges

Read-tree will resolve simple 3-way merges, such as a path
touched on one branch but not on the other. With
--aggressive, it will also do some more complex merges, like
both sides adding the same content. But it always stops
short of actually merging content, leaving the unmerged
paths in the index.

One can always use "git merge-index git-merge-one-file -a"
to do a content-level merge of these paths. However, that
has two disadvantages:

  1. It's slower, as we actually invoke merge-one-file for
     each unmerged path, which in turn writes temporary
     files to the filesystem.

  2. It requires a working directory to store the merged
     result. When working in a bare repository, this can be
     inconvenient.

Instead, let's have read-tree perform the content-level
merge in core. If it results in conflicts, read-tree can
simply punt and leave the unmerged entries in the index.

Signed-off-by: Jeff King <peff@peff.net>
---
 builtin/read-tree.c |    2 +
 unpack-trees.c      |   69 +++++++++++++++++++++++++++++++++++++++++++++++++++
 unpack-trees.h      |    1 +
 3 files changed, 72 insertions(+), 0 deletions(-)

diff --git a/builtin/read-tree.c b/builtin/read-tree.c
index df6c4c8..392c378 100644
--- a/builtin/read-tree.c
+++ b/builtin/read-tree.c
@@ -117,6 +117,8 @@ int cmd_read_tree(int argc, const char **argv, const char *unused_prefix)
 			"3-way merge if no file level merging required", 1),
 		OPT_SET_INT(0, "aggressive", &opts.aggressive,
 			    "3-way merge in presence of adds and removes", 1),
+		OPT_SET_INT(0, "merge-content", &opts.file_level_merge,
+			    "3-way merge of non-conflicting file content", 1),
 		OPT_SET_INT(0, "reset", &opts.reset,
 			    "same as -m, but discard unmerged entries", 1),
 		{ OPTION_STRING, 0, "prefix", &opts.prefix, "<subdirectory>/",
diff --git a/unpack-trees.c b/unpack-trees.c
index 3a61d82..0443fcf 100644
--- a/unpack-trees.c
+++ b/unpack-trees.c
@@ -8,6 +8,8 @@
 #include "progress.h"
 #include "refs.h"
 #include "attr.h"
+#include "xdiff-interface.h"
+#include "blob.h"
 
 /*
  * Error messages expected by scripts out of plumbing commands such as
@@ -1515,6 +1517,45 @@ static void show_stage_entry(FILE *o,
 }
 #endif
 
+static int file_level_merge(unsigned char sha1[20],
+			    struct cache_entry *old,
+			    struct cache_entry *head,
+			    struct cache_entry *remote)
+{
+	mmfile_t old_data = {0}, head_data = {0}, remote_data = {0};
+	mmbuffer_t resolved = {0};
+	xmparam_t xmp = {{0}};
+	int ret = -1;
+
+	if (remote->ce_mode != head->ce_mode &&
+	    remote->ce_mode != old->ce_mode)
+		goto out;
+
+	read_mmblob(&old_data, old->sha1);
+	if (buffer_is_binary(old_data.ptr, old_data.size))
+		goto out;
+	read_mmblob(&head_data, head->sha1);
+	if (buffer_is_binary(head_data.ptr, head_data.size))
+		goto out;
+	read_mmblob(&remote_data, remote->sha1);
+	if (buffer_is_binary(remote_data.ptr, remote_data.size))
+		goto out;
+
+	xmp.level = XDL_MERGE_ZEALOUS_ALNUM;
+	if (xdl_merge(&old_data, &head_data, &remote_data, &xmp, &resolved))
+		goto out;
+	if (write_sha1_file(resolved.ptr, resolved.size, blob_type, sha1) < 0)
+		die("unable to write resolved blob object");
+	ret = 0;
+
+out:
+	free(old_data.ptr);
+	free(head_data.ptr);
+	free(remote_data.ptr);
+	free(resolved.ptr);
+	return ret;
+}
+
 int threeway_merge(struct cache_entry **stages, struct unpack_trees_options *o)
 {
 	struct cache_entry *index;
@@ -1653,6 +1694,34 @@ int threeway_merge(struct cache_entry **stages, struct unpack_trees_options *o)
 			return -1;
 	}
 
+	if (o->file_level_merge &&
+	    !no_anc_exists && head && remote && !head_match && !remote_match) {
+		int i;
+		struct cache_entry *old = NULL;
+		unsigned char sha1[20];
+
+		for (i = 1; i < o->head_idx; i++) {
+			if (stages[i] && stages[i] != o->df_conflict_entry) {
+				old = stages[i];
+				break;
+			}
+		}
+		if (!old)
+			die("BUG: file-level merge couldn't find ancestor");
+
+		if (file_level_merge(sha1, old, head, remote) == 0) {
+			/* ugh */
+			unsigned char tmp[20];
+			int r;
+
+			hashcpy(tmp, head->sha1);
+			hashcpy(head->sha1, sha1);
+			r = merged_entry(head, index, o);
+			hashcpy(head->sha1, tmp);
+			return r;
+		}
+	}
+
 	o->nontrivial_merge = 1; /* #2, #3, #4, #6, #7, #9, #10, #11. */
diff --git a/unpack-trees.h b/unpack-trees.h
index 7998948..516c2f1 100644
--- a/unpack-trees.h
+++ b/unpack-trees.h
@@ -40,6 +40,7 @@ struct unpack_trees_options {
 		     trivial_merges_only,
 		     verbose_update,
 		     aggressive,
+		     file_level_merge,
 		     skip_unmerged,
 		     initial_checkout,
 		     diff_index_cached,
-- 
1.7.6.15.ga6419
| https://lkml.org/lkml/2011/6/30/359 |
This is the first article of two about ETW events. The first article is about how to use them; the second looks at how an EtwDataViewer can display the events in a hierarchical tree and analyze them to reveal context and support searchability.
When we have a problem with an application, we always wish we had more logs, or even logs at all.
Writing a lot of log data to files using printfs or some other technology slows performance and fills the disk.
As a consequence, the debug build often contains more log capability than the release build,
where it has been removed by hand or by the compiler in order to increase performance.
However, when a problem arises in a production environment and you need to troubleshoot, you just wish you still had those logs.
Wouldn't it be great to be able to turn on logs when needed on a production system, with close to zero impact?
One drawback with debug builds is that some developers let an alternative implementation run in the debug build.
I can understand the reasons for doing that (run-time checks and safer handling of memory),
but in a worst-case scenario, a problem is then not reproducible in a debug build.
But when handling Big Data, the same tools are of limited use.
Let's say that batch processing of a 10000 invoices performs really bad.
If I single step through 10 or 20 invoices, I might not detect any obvious problems.
If we use a database, the cpu will hardly be saturated which make it harder to detect bottlenecks.
The problem is usually on a higher-level.
Are we doing the right things?
Are we doing redundant queries to the database?
Are the queries that we do, really cached internally as efficient as we think?
Without high-level data points constructed from within the code itself, it might be very hard to answer those questions.
Event Tracing for Windows (ETW)
can be used for inserting permanent, close to zero impact data points.
These data points can be activated and deactivated in production environments, and later analyzed on a completly different machine.
We will see how we can insert these data points and produce a nice report.
Logging and diagnostics can be done with various tools.
Process Monitor, from Sysinternals, collects system-wide events.
Perfect for finding missing registry keys, locked files, and interfering processes.
Process Monitor uses both hooks and ETW to collect information.
Then we have xperf.exe and Windows Performance Recorder (wpr.exe and wprui.exe),
from the Windows Performance Toolkit,
which is also found in the Windows SDK 8.0.
xperf.exe uses only ETW, but with a much higher level of detail.
You can compare a graph of disk accesses to a graph of thread scheduling, or see how your software affects paging.
Really powerful, and best of all, it is all free.
I use both tools, but for showing application-specific events, I have relied on debugview.exe (also from Sysinternals).
Unfortunately, log prints don't usually reveal context, and freeform prints tend to be harder to analyze, since every developer tends to write logs their own way.
A standardised way would be to write logs in the ETW format.
Let's go back to the order-system example. In an order system, one often talks about orders and order lines.
Typically, one order can contain one to a thousand different products, and each product type is put on a separate order line.
When the order is taken, the products are reserved in the warehouse, and when it is packed and shipped, the stock is decreased.
Additionally, one invoice may contain many orders, and sometimes there is a need to process thousands of invoices, for example at the end of the month.
The amount of data that needs to be processed is enormous, and there are many sensitive steps that need data locking and refetching of data.
The more data we have, the less feasible it becomes to use debugging to step through the execution of the program.
Profiling also has its weaknesses. When the number of order lines in an order is non-uniform, I cannot easily use the statistics the profiler gives.
I can of course optimize some low-level calls to ToString or some other method, which initially gives a good performance increase,
but in mature software, much of that optimization has already been done. One has to optimize on a higher level:
question whether we are really doing the right thing, rather than doing what we do as fast as we can.
What we need are quality data points.
It is possible to use a SQL profiler to get the queries that are executed,
but it is harder to correlate them to what is actually taking place in the software at the same time.
In the best of cases, you have built a tool yourself that is able to merge these logs with your own logs.
Adding ETW data points has a steep learning curve.
There are few complete instructions and tutorials on the subject.
I had to search for resources on the internet, do some trial and error, and figure some things out myself.
My aim is to show you what can be done with ETW, to plant a seed of interest.
Eventually, I hope I will also write a tutorial on this.
The producer of ETW events is called a provider; sessions are started and stopped by a controller, and the recorded events are read by a consumer.
You have two options when writing a provider:
either implementing a classic provider, which also works for older operating systems like Windows XP,
or a manifest-based provider, which supports up to 8 simultaneous sessions.
I went for the manifest-based one, which is the recommended choice.
You will need Visual Studio 2012 for opening the solution. I use the Express version myself. In addition, you need the Windows SDK 8.0 and the Windows Performance Toolkit, which is an optional package of the SDK.
I didn't find the sample code for writing an ETW provider easy to understand at all, or at least it wasn't obvious how I could write my own.
What I didn't know is that if you use manifests, you don't have to write any code at all.
The code can be generated from the manifest itself, and the manifest is done rather quickly in a graphical tool called ecmangen.exe.
The tool has a short but very helpful "help" documentation.
I followed the steps for the first example, and after that I was able to extend it and adapt the manifest to my own needs.
When you are done with the manifest, you should run it through the message compiler (mc.exe),
and then generate .cs files from the manifest, also using the same tool.
What you end up with is some static classes that work out of the box.
No initialization is needed.
mc FirstETW.man
mc -css MyProvider.Eventing FirstETW.man
The -css switch makes it generate .cs files which you add to your project.
Below is skeleton of the generated class. I removed the body of the functions, but what you see is that the class is static and ready for use.
namespace MyProvider.Eventing
{
public static class FunctionTraceProvider
{
public static bool EventWriteFunctionEntry() { }
public static bool EventWriteFuntionExit(string FunctionName) { }
public static bool EventWriteCreateDbConnection() { }
public static bool EventWriteSqlQuery(string query, int rowcount) { }
public static bool EventWriteNetException(string message) { }
}
}
Putting it into context: I can, for example, log the queries that my software does.
public List<OrderSummary> GetOrderSummary()
{
const string sql = "select [OrderId], [CustomerId], [OrderlineId], [ProductId], [ProductDescription] from [OrderSummary];";
var ds = new DataSet();
var da = new Sql.SQLiteDataAdapter(sql, m_connection);
da.Fill(ds);
var list = new List<OrderSummary>();
// Code for filling the list removed
// Logging query and the number of rows returned
MyProvider.Eventing.FunctionTraceProvider.EventWriteSqlQuery(sql, list.Count);
return list;
}
In the example above, I log the SQL query together with the number of rows returned.
For efficiency reasons, consider using SQL string constants mapped to query numbers instead, and just logging the query number.
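Once each query (or query number) and its row count are emitted as events, the earlier question "are we doing redundant queries?" becomes a counting exercise over the trace. A hypothetical post-processing sketch in Python (the tuples stand in for whatever your consumer extracts from the EventWriteSqlQuery events; the query strings are made up):

```python
from collections import Counter

# Hypothetical (query, rowcount) pairs pulled from SqlQuery events.
events = [
    ("select * from [OrderSummary];", 6),
    ("select * from [OrderSummary];", 6),
    ("select * from [OrderSummary];", 6),
    ("select [Price] from [Products] where [ProductId] = ?;", 1),
]

# Count how often each query shape was executed.
counts = Counter(query for query, _ in events)
for query, n in counts.most_common():
    if n > 1:
        # The same query executed repeatedly: a caching candidate.
        print(f"{n}x {query}")
# Prints: 3x select * from [OrderSummary];
```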
I also had to run the resource compiler, converting the FirstETW.rc to FirstETW.res.
rc FirstETW.rc
Then I added it as a resource file under the properties for the project. I haven't figured out how I can use the .rc file directly.
A manifest-based provider has to be registered on each computer where you want to collect the data,
but before you do this, you must update two file paths in the manifest so that they point to the correct file.
After you have saved the manifest, you register the provider with the wevtutil.exe tool.
wevtutil.exe im FirstETW.man //installs the provider
wevtutil.exe um FirstETW.man //uninstalls the provider
One of the ways ETW events can be recorded is through xperf.exe. This starts logging of your provider.
xperf.exe -start <SomeName> -on <NameOfYourRegisteredProvider>
xperf.exe -start FirstETW -on Function-Entry-Exit-Provider
You can interact with the WCF app through the tool WcfTestClient.exe, which is part of Visual Studio.
Register the address and invoke methods from the tool.
The logs can be saved with the following command
xperf.exe -stop <SomeName> -d <FilePath>
xperf.exe -stop FirstETW -d "c:/temp/myevents.etl"
After that you can view the logs in xperfview.exe
xperfview.exe myevents.etl
The result can be seen below
The above graph just shows the log events from our provider during a period of time.
If we select a start and end period and right-click on any of the Summary Tables, we will get a detailed list of all the events and the data they carry.
So far, there isn't much difference: all the events are just presented in a long list,
just as they would be with normal traces or printfs. They are thread-safe, and they are probably a lot faster than prints, but apart from that, the advantage is minimal.
Let's turn on more ETW providers. The whole kernel is full of them.
You can log disk accesses, network activity, context switches, thread scheduling, the amount of paging; literally the tiniest move the system makes.
ETW is so efficient that it has almost zero impact on your system.
Use the -on DiagEasy switch to turn on the most commonly used providers.
xperf.exe -on DiagEasy -start FirstEtw -on Function-Entry-Exit-Provider
// Do your stuff with your app
// Then stop the logging
xperf.exe -stop FirstETW -d "c:/temp/myevents1.etl" // stop user defined provider
xperf.exe -stop -d "c:/temp/myevents2.etl" // stop kernel providers
xperfview.exe myevents1.etl myevents2.etl // Merge and show everything
If you zoom in to the graphs, you are able to see how a specific event affects CPU, disk, and network activity.
This is amazing!!!
We can turn the logged events into a real summary report.
Why not do some data analysis in Excel with pivot tables?
First we need to export the events to a csv file. Right-click anywhere on the page, and select "Export full table".
Then we open the file with Excel and create a Pivot table
In the pivot, I can get a count of all logged events containing specific data.
I can see that a particular SQL query was executed 6 times, we had 1 .NET exception, and we created a database connection 7 times.
We can also see which types of queries are executed the most.
A pivot can show patterns and give structure to a report. Rotating the pivot can give a report with a totally different view.
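The same counting that Excel does in the pivot can also be scripted directly against the exported csv file. A sketch assuming a "Task Name" column like the one in the xperfview summary table (the actual column names depend on your provider; the sample rows are made up):

```python
import csv
import io
from collections import Counter

# Stand-in for the file produced by "Export full table".
exported = io.StringIO(
    "Line,Task Name,Field 1\n"
    "1,SqlQuery,select ...\n"
    "2,CreateDbConnection,\n"
    "3,SqlQuery,select ...\n"
    "4,NetException,boom\n"
    "5,SqlQuery,select ...\n"
)

# One Counter entry per distinct event type, like a one-axis pivot.
counts = Counter(row["Task Name"] for row in csv.DictReader(exported))
print(counts.most_common())
# [('SqlQuery', 3), ('CreateDbConnection', 1), ('NetException', 1)]
```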
The full etw trace with additional providers and working with Pivots may prove useful.
But let's go back to my original order system software.
To really be able to pinpoint problem areas. I need to build up an event hierarchy.
To do this, one must plan ahead to make this possible. Since this article was written I have written a follow-up article on a EtwDataViewer, which looks how we can create a log viewer that supports an event hierarchy.
In my sample wcf app. I have created a wcf service with a published interface.
I log when a published method is entered and exited. Additionally I log some other data points.
Having the Enter and Exit, I can assume that the events happening between those two points belong to the same task.
This cannot be guaranteed, but maybe you can temporarily turn off multiprocessing when logging, or also tag events with a task id.
This makes it possible to create a context and, by extension, to calculate the impact of an isolated task.
If you are really ambitious, you can take into consideration the csv logs from the kernel etw data that we obtained from using the -on DiagEasy switch.
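The Enter/Exit pairing described above can be sketched in a few lines. This is pure Python with invented sample data; a real implementation would also key the stack by thread or task id:

```python
# Invented sample log: (timestamp_ms, event) pairs from a single thread.
log = [
    (0,  "Enter PlaceOrder"),
    (5,  "SqlQueryExecuted"),
    (12, "Exit PlaceOrder"),
    (20, "Enter GetOrder"),
    (21, "Exit GetOrder"),
]

durations = {}
stack = []
for ts, event in log:
    if event.startswith("Enter "):
        # Open a task context; events until the matching Exit belong to it.
        stack.append((event.split(" ", 1)[1], ts))
    elif event.startswith("Exit "):
        name, start = stack.pop()
        durations[name] = ts - start

print(durations)  # {'PlaceOrder': 12, 'GetOrder': 1}
```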
The reader Andre Ziegler was kind enough to inform me about the new .NET 4.5 class, System.Diagnostics.Tracing.EventSource[^], which simplifies ETW writing and doesn't need a manifest. However, since the ETW provider isn't registered system-wide, xperf can no longer be used; instead PerfView[^] is recommended. Here follows a small tutorial:
Introduction Tutorial: Logging ETW events in C#: System.Diagnostics.Tracing.EventSource[^]. In my opinion, the .NET class seems easier to use, but the manifest approach is more complete. In addition, if you are using a mixed-mode application, the manifest approach has the advantage that you can generate code for both C++ and C#.
1st of April 2013 - Initial release
3rd of April 2013 - Added Alternative Approach with Net class.
6th of August 2013 - Added link to follow-up article
This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-PL)
using System.Diagnostics.Tracing;

// Minimal .NET 4.5 event source; no manifest is required.
sealed class MinimalEventSource : EventSource
{
    public static MinimalEventSource Log = new MinimalEventSource();
    public void Load(long ImageBase, string Name) { WriteEvent(1, ImageBase, Name); }
}

// Usage:
MinimalEventSource.Log.Load(10, "MyFile");
csTextProgressMeter Class Reference
The csTextProgressMeter class displays a simple percentage-style textual progress meter. More...
#include <csutil/cspmeter.h>
Inherits scfImplementation1< csTextProgressMeter, iProgressMeter >.
Detailed Description
The csTextProgressMeter class displays a simple percentage-style textual progress meter.
By default, the meter is presented to the user by passing CS_MSG_INITIALIZATION to the system print function. This setting may be changed with the SetMessageType() method. After constructing a progress meter, call SetTotal() to set the total number of steps represented by the meter. The default is 100. To animate the meter, call the Step() method each time a unit of work has been completed. At most Step() should be called 'total' times. Calling Step() more times than this will not break anything, but if you do so, then the meter will not accurately reflect the progress being made. Calling Reset() will reset the meter to zero, but will not update the display. Reset() is provided so that the meter can be re-used, but it is the client's responsibility to ensure that the display is in a meaningful state. For instance, the client should probably ensure that a newline '\n' has been printed before re-using a meter which has been reset. The complementary method Restart() both resets the meter and prints the initial tick mark ("0%"). The meter does not print a newline after 100% has been reached, on the assumption that the client may wish to print some text on the same line on which the meter appeared. If the client needs a newline printed after 100% has been reached, then it is the client's responsibility to print it.
Definition at line 55 of file cspmeter.h.
Constructor & Destructor Documentation
Constructs a new progress meter.
Destroys the progress meter.
Member Function Documentation
Abort the meter.
Finalize the meter (i.e. we completed the task sooner than expected).
Get the current value of the meter (<= total).
Definition at line 108 of file cspmeter.h.
Get the refresh granularity.
Definition at line 118 of file cspmeter.h.
Get the tick scale.
Definition at line 80 of file cspmeter.h.
Get the total element count represented by the meter.
Definition at line 106 of file cspmeter.h.
Reset the meter to 0%.
Definition at line 95 of file cspmeter.h.
Reset the meter and print the initial tick mark ("0%").
Set the refresh granularity.
Valid values are 1-100, inclusive. Default is 10. The meter is only refreshed after each "granularity" number of units have passed. For instance, if granularity is 20, then the meter will only be updated at most 5 times, or every 20%.
Set the id and description of what we are currently monitoring.
An id can be something like "crystalspace.engine.lighting.calculation".
Definition at line 88 of file cspmeter.h.
Set the tick scale.
Valid values are 1-100, inclusive. Default is 2. A value of 1 means that each printed tick represents one unit, thus a total of 100 ticks will be printed. A value of 2 means that each tick represents two units, thus a total of 50 ticks will be printed, etc.
Set the total element count represented by the meter and perform a reset.
Definition at line 104 of file cspmeter.h.
Increment the meter by n units (default 1) and print a tick mark.
The documentation for this class was generated from the following file:
- csutil/cspmeter.h
Generated for Crystal Space 1.4.1 by doxygen 1.7.1 | http://www.crystalspace3d.org/docs/online/api-1.4/classcsTextProgressMeter.html | CC-MAIN-2016-26 | en | refinedweb |
There is a similar problem with overconstraining terms in an RDF vocabulary. RDFS includes predicates to indicate domain and range constraints for the applicability of a property to certain classes. This approach is undoubtedly helpful for production vocabularies, but spending the time on this endeavor in the early stages of the development of a vocabulary is possibly wasted effort and is almost guaranteed to slow you down. Get the terms right, get some examples of using them under your belt, consider any feedback from external parties, and only then go about the effort of constraining your vocabularies. By then you will likely understand the constraints sufficiently well enough to make good choices.
Use Metadata to Describe Your Metadata
While you certainly want to avoid any "turtles all the way down" meta trips, it is a great idea to add metadata to your metadata. RDF vocabularies are themselves information resources that deserve suitable annotations.
Vocabularies will not always be consumed directly from the files in which they are created. Services like Swoogle parse known vocabularies to make their terms and concepts accessible through search. This parsing can be enabled by applying the <rdfs:isDefinedBy> predicate. The prior Dublin Core example demonstrates this link back to the source:
<rdfs:isDefinedBy rdf:
Additionally, as vocabularies evolve, it is helpful to indicate the stability of specific terms, which gives consumers either confidence or a warning that dependence on a term might not be the best idea. The World Wide Web Consortium (W3C) has a set of terms that is useful for this very purpose.
There are three terms defined: <vs:term_status>, <vs:moreinfo>, and <vs:userdocs>. The metadata on this property tells you that it is itself an unstable term, although it should be safe enough to use:
<rdf:Property rdf:
<rdfs:label>term status</rdfs:label>
<rdfs:comment>the status of a vocabulary term, one of
'stable','unstable','testing'.</rdfs:comment>
<vs:term_status>unstable</vs:term_status>
</rdf:Property>
<dcterms:issued>1999-07-02</dcterms:issued>
<dcterms:modified>2006-12-04</dcterms:modified>
<dcterms:hasVersion rdf:
As an example, Edd Dumbill, noted columnist, author, and creator of the DOAP vocabulary, chose to reuse <foaf:Person> in DOAP to refer to the maintainers of a project:
<maintainer>
<foaf:Person>
<foaf:name>Edd Dumbill</foaf:name>
<foaf:homepage rdf:
</foaf:Person>
</maintainer>
He certainly could have created a new notion of a person in this role, but there was simply no need to. RDF quite ably supports this mixing and matching of terms from different vocabularies and namespaces; it is one of its chief charms.
Even if it is necessary to introduce a new term, it is a reasonable approach to tie it back into an existing vocabulary. You might want to extend <dc:creator> through an <rdfs:subPropertyOf> relationship for <askew:illustrator> and <askew:inker> (or <askew:tracer>) to model the world of comic book authors.
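Such an extension might look like the following RDF/XML; the "askew" namespace URI below is hypothetical:

```xml
<rdf:Property rdf:
  xmlns:
  xmlns:
  <rdfs:label>illustrator</rdfs:label>
  <rdfs:comment>The person who illustrated the work.</rdfs:comment>
  <rdfs:subPropertyOf rdf:
</rdf:Property>
```

Any consumer that understands only Dublin Core can still treat an askew:illustrator as a dc:creator.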
While defining these files with nothing more than a good text editor is convenient, most people will want better tool support for creating and managing RDF vocabularies and their attendant metadata. There are several tools available to assist you with this process (see the sidebar, "Vocabulary Management Tools").
This discussion covered some good strategies for deciding whether to create your own vocabularies or to seek consensus with others from your domains of interest. The W3C's semantic web technologies are designed to help keep it relatively easy to start with an approach that makes sense to you and your organization and consider external vocabularies at some future date.
TreeTableView in J2SE Application
By Geertjan-Oracle on Oct 06, 2007
I've found thus far that it is much easier to learn explorer views via J2SE projects than to do so via NetBeans modules. (More than likely this is also true for most other NetBeans APIs.) Firstly, because compilation and deployment is much faster. (I imagine that debugging will be a piece of cake too.) Secondly, because you're able to focus on a very specific API, instead of needing to first create a scenario in which the API makes sense. Also, this approach forces you to think very explicitly about which APIs you need, because you can't rely on the search functionality in the NetBeans module project's Project Properties dialog box. (Although, you could use another module project for this purpose, but you still need to explicitly add the required JAR to your J2SE project, which entails a more explicit thought process than simply working though a dialog.) As a result, you're more aware of the location of the classes you need. That can't be a bad thing, can it? So, here's my entire source structure:
Note: I used the Library Manager to create a library called 'ExplorerViews'. Then I added the five JARs you see above, from the NetBeans IDE distribution. Then I attached that library to my project's Libraries node.
And here is all my code:
package demo;

import java.io.File;
import java.lang.reflect.InvocationTargetException;
import org.openide.nodes.AbstractNode;
import org.openide.nodes.Children;
import org.openide.nodes.Node;
import org.openide.nodes.PropertySupport;
import org.openide.nodes.Sheet;

public final class FileNode extends AbstractNode {

    static String PROP_FULL_PATH = "space";
    static String IS_HIDDEN = "is hidden";
    private File file;

    private FileNode(File f) {
        super(new FileKids(f));
        file = f;
        setName(f.getName());
    }

    public static Node files() {
        AbstractNode n = new AbstractNode(new FileKids(null));
        n.setName("Root");
        return n;
    }

    public static class FileKids extends Children.Keys<File> {

        private File file;

        public FileKids(File f) {
            file = f;
        }

        // Populate the keys when the children are first needed; a null file
        // means we are at the root, so list the filesystem roots.
        @Override
        protected void addNotify() {
            File[] kids = (file == null) ? File.listRoots() : file.listFiles();
            setKeys(kids == null ? new File[0] : kids);
        }

        @Override
        protected Node[] createNodes(File f) {
            FileNode n = new FileNode(f);
            return new Node[]{n};
        }

        @Override
        public Node[] getNodes(boolean arg0) {
            return super.getNodes(arg0);
        }
    }

    @Override
    protected Sheet createSheet() {
        Sheet s = super.createSheet();
        Sheet.Set ss = s.get(Sheet.PROPERTIES);
        if (ss == null) {
            ss = Sheet.createPropertiesSet();
            s.put(ss);
        }
        ss.put(new FullPathProperty(file));
        ss.put(new IsHiddenProperty(file));
        return s;
    }

    private class FullPathProperty extends PropertySupport.ReadOnly<String> {

        File file;

        public FullPathProperty(File file) {
            super(FileNode.PROP_FULL_PATH, String.class, "Full path", "Complete path is shown");
            this.file = file;
        }

        public String getValue() throws IllegalAccessException, InvocationTargetException {
            return file.getAbsolutePath();
        }
    }

    // Declared as a Boolean property so that file.isHidden() type-checks.
    private class IsHiddenProperty extends PropertySupport.ReadOnly<Boolean> {

        File file;

        public IsHiddenProperty(File file) {
            super(FileNode.IS_HIDDEN, Boolean.class, "Is hidden", "Is hidden status is shown");
            this.file = file;
        }

        public Boolean getValue() throws IllegalAccessException, InvocationTargetException {
            return file.isHidden();
        }
    }
}
By the way, before showing the TreeTableView code, it is worth pointing out that, as described yesterday, I can drag and drop the TreeTableView from the Palette, after adding the org.openide.explorer.ExplorerManager JAR to it. By doing so, I was also able to set some properties on the TreeTableView via the IDE's Properties sheet, instead of doing so via code. That was a pretty handy approach to working with TreeTableView.
package demo;

import org.openide.explorer.ExplorerManager;
import org.openide.explorer.view.NodeTableModel;
import org.openide.nodes.Node;

public class TreeTableView extends javax.swing.JFrame implements ExplorerManager.Provider {

    private ExplorerManager manager;
    private NodeTableModel nodeTableModel;
    private FileNode f;
    private Node.Property[] props;

    /** Creates new form NewJFrame */
    public TreeTableView() {
        manager = new ExplorerManager();
        manager.setRootContext(FileNode.files());
        nodeTableModel = new NodeTableModel();
        nodeTableModel.setNodes(new Node[]{FileNode.files()});
        Node[] nodes = FileNode.files().getChildren().getNodes();
        props = nodes[0].getPropertySets()[0].getProperties();
        props[0].setValue("ComparableColumnTTV", Boolean.TRUE); //NOI18N
        props[0].setValue("SortingColumnTTV", Boolean.TRUE);
        // Second property column is sortable, but not initially sorted,
        // so initially will have no arrow icon:
        props[1].setValue("ComparableColumnTTV", Boolean.TRUE); //NOI18N
        initComponents();
    }

    /** This method is called from within the constructor to
     * initialize the form.
     * WARNING: Do NOT modify this code. The content of this method is
     * always regenerated by the Form Editor.
     */
    private void initComponents() {
        treeTableView1 = new org.openide.explorer.view.TreeTableView(nodeTableModel);
        setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
        treeTableView1.setBorder(javax.swing.BorderFactory.createTitledBorder("File System Browser"));
        treeTableView1.setRootVisible(false);
        javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane());
        getContentPane().setLayout(layout);
        layout.setHorizontalGroup(
            layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
            .addComponent(treeTableView1, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)
        );
        layout.setVerticalGroup(
            layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
            .addComponent(treeTableView1, javax.swing.GroupLayout.DEFAULT_SIZE, 358, Short.MAX_VALUE)
        );
        treeTableView1.setProperties(props);
        pack();
    }

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        java.awt.EventQueue.invokeLater(new Runnable() {
            public void run() {
                new TreeTableView().setVisible(true);
            }
        });
    }

    // Variables declaration - do not modify
    private org.openide.explorer.view.TreeTableView treeTableView1;
    // End of variables declaration

    public ExplorerManager getExplorerManager() {
        return manager;
    }
}
Finally, if you want to learn about the properties that relate to TreeTableView, see the following:
Also, I found, after googling, that I have quite a lot of relatively useful information in this blog on this subject already. I learned quite a lot from searching in my blog and found myself being surprised at my own insights.
Too bad not all Platform APIs are so JAR-friendly as to be usable in a standard j2se project.
Trust me, once you need to use some API that loves for no particular reason the layer filesystem for example, you're in a world of unneeded workarounds.
Heh, I liked the stuff about searching your own blog. I also use it as a part-time note-taking app because I know I'll need it later. So I just put it there and when I run into an issue again I'll just know it's on the blog (this probably decreases the quality of the blog posts, but oh well...)
Posted by Emilian Bold on October 06, 2007 at 07:26 AM PDT #
I had to change :
@Override
public Node[] createNodes(File f)
into:
@Override
public Node[] createNodes(Object f)
Note: I don't build with netbeans but with my own build.xml. (I tried the jars from netbeans 5.5 and netbeans 6.0 with the same result)
Kees
Posted by guest on October 06, 2007 at 06:22 PM PDT #
You need to change
public static class FileKids extends Children.Keys {
to
public static class FileKids extends Children.Keys<File> {
for NetBeans6 to get @Override working.
Posted by Sven Reimers on October 06, 2007 at 07:16 PM PDT #
Did you ever tried to change this into a table view and hide the TreeTableColumn?
Posted by Sven Reimers on October 06, 2007 at 07:21 PM PDT #
Sorry, Kees, I didn't add the HTML tags as indicated by Sven above, so you couldn't see the <File> part in the signature. I've fixed it in the code listing above. That will fix your problem. Sven interesting idea, haven't tried it, but will look at it.
Posted by Geertjan on October 06, 2007 at 07:29 PM PDT #
I want to try to replace the swingx treetable for the one in netbeans for my open source project 'jmeld'.
Is it possible to change the background color of a row? Swingx has RowHighLighters. I want to even and uneven rows to have a different color.
Is it possible to change the disabled text color? The color now is lightgray (on my system) and is hardly readable.
Posted by Kees Kuip on October 07, 2007 at 02:03 AM PDT #
Kees, see the next blog entry for some pointers to help you.
Posted by Geertjan on October 07, 2007 at 05:11 AM PDT #
Geertjan - Really love and appreciate your tutorials. However I think it would be great if you added something to the top of this pointing to OutlineView as a better, newer alternative to TTV.
Posted by John on August 31, 2010 at 06:09 AM PDT # | https://blogs.oracle.com/geertjan/entry/treetableview_in_j2se_application | CC-MAIN-2016-26 | en | refinedweb |
NAME
XML::Parser::EasyTree - Easier tree style for XML::Parser
SYNOPSIS
use XML::Parser;
use XML::Parser::EasyTree;
$XML::Parser::EasyTree::Noempty=1;
my $p=new XML::Parser(Style=>'EasyTree');
my $tree=$p->parsefile('something.xml');
DESCRIPTION
XML::Parser::EasyTree adds a new "built-in" style called "EasyTree" to XML::Parser. Like XML::Parser's "Tree" style, setting this style causes the parser to build a lightweight tree structure representing the XML document. This structure is, at least in this author's opinion, easier to work with than the one created by the built-in style.
When the parser is invoked with the EasyTree style, it returns a reference to an array of tree nodes, each of which is a hash reference. All nodes have a 'type' key whose value is the type of the node: 'e' for element nodes, 't' for text nodes, and 'p' for processing instruction nodes. All nodes also have a 'content' key; for element nodes its value is a reference to an array holding the element's child nodes, while for text nodes it holds the text itself. Element nodes additionally have 'name' and 'attrib' keys, as shown in the example below.
EasyTree nodes are ordinary Perl hashes and are not objects. Contiguous runs of text are always returned in a single node.
The reason the parser returns an array reference rather than the root element's node is that an XML document can legally contain processing instructions outside the root element (the xml-stylesheet PI is commonly used this way).
If the parser's Namespaces option is set, element and attribute names will be prefixed with their (possibly empty) namespace URI enclosed in curly brackets.
SPECIAL VARIABLES
Two package global variables control special behaviors:
- XML::Parser::EasyTree::Latin
If this is set to a nonzero value, all text, names, and values will be returned in ISO-8859-1 (Latin-1) encoding rather than UTF-8.
- XML::Parser::EasyTree::Noempty
If this is set to a nonzero value, text nodes containing nothing but whitespace (such as those generated by line breaks and indentation between tags) will be omitted from the parse tree.
EXAMPLE
Parse a prettyprinted version of the XML shown in the example for the built-in "Tree" style:
#!perl -w
use strict;
use XML::Parser;
use XML::Parser::EasyTree;
use Data::Dumper;
$XML::Parser::EasyTree::Noempty=1;
my $xml=<<'EOF';
<foo>
<head id="a">Hello <em>there</em>
</head>
<bar>Howdy<ref/>
</bar>
do
</foo>
EOF
my $p=new XML::Parser(Style=>'EasyTree');
my $tree=$p->parse($xml);
print Dumper($tree);
Returns:
$VAR1 = [
  {
    'name' => 'foo',
    'type' => 'e',
    'content' => [
      {
        'name' => 'head',
        'type' => 'e',
        'content' => [
          { 'type' => 't', 'content' => 'Hello ' },
          {
            'name' => 'em',
            'type' => 'e',
            'content' => [ { 'type' => 't', 'content' => 'there' } ],
            'attrib' => {}
          }
        ],
        'attrib' => { 'id' => 'a' }
      },
      {
        'name' => 'bar',
        'type' => 'e',
        'content' => [
          { 'type' => 't', 'content' => 'Howdy' },
          { 'name' => 'ref', 'type' => 'e', 'content' => [], 'attrib' => {} }
        ],
        'attrib' => {}
      },
      { 'type' => 't', 'content' => ' do ' }
    ],
    'attrib' => {}
  }
];
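Because the nodes are plain hashes and arrays, walking the tree is straightforward. Here is the structure above transcribed into Python dicts (purely for illustration) with a recursive traversal that collects the text nodes:

```python
# The EasyTree output above, transcribed into Python for illustration.
tree = [{
    "name": "foo", "type": "e", "attrib": {},
    "content": [
        {"name": "head", "type": "e", "attrib": {"id": "a"},
         "content": [
             {"type": "t", "content": "Hello "},
             {"name": "em", "type": "e", "attrib": {},
              "content": [{"type": "t", "content": "there"}]},
         ]},
        {"name": "bar", "type": "e", "attrib": {},
         "content": [
             {"type": "t", "content": "Howdy"},
             {"name": "ref", "type": "e", "attrib": {}, "content": []},
         ]},
        {"type": "t", "content": " do "},
    ],
}]

def collect_text(nodes):
    # Depth-first walk: gather 't' node content, recurse into 'e' nodes.
    out = []
    for node in nodes:
        if node["type"] == "t":
            out.append(node["content"])
        elif node["type"] == "e":
            out.extend(collect_text(node["content"]))
    return out

print("".join(collect_text(tree)))
```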
AUTHOR
Eric Bohlman (ebohlman@omsdev.com)
Copyright (c) 2001 Eric Bohlman. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
SEE ALSO
XML::Parser | https://metacpan.org/pod/XML::Parser::EasyTree | CC-MAIN-2016-26 | en | refinedweb |
#include "petscpc.h"
PetscErrorCode PCSetDM(PC pc,DM dm)

Logically Collective on PC
Developer Notes: The routines KSP/SNES/TSSetDM() require the dm to be non-NULL, but this one can be NULL since all it does is replace the current DM
Level: intermediate

Location: src/ksp/pc/interface/pcset.c
Launching default application based on MIME type
This article explains how to launch the default application to open a file based on its MIME type. For example, you can launch the default video player to open .3gp or .mp4 files, or launch the default browser to open HTML files. In Windows programming, this is basically the equivalent of the ShellExecuteEx function.
The second step, start the application using the MIME type or UID of the application. This can be done by calling RApaLsSession::StartDocument().
The code below shows how you can launch the default application to view a file based on its MIME type.
#include <APGCLI.H>
RApaLsSession session;
User::LeaveIfError(session.Connect());
CleanupClosePushL(session);
// Gets the UID and MIME type for the given file name.
TUid uid;
TDataType dataType;
User::LeaveIfError(session.AppForDocument(aFileName, uid, dataType));
// Runs the default application using the MIME type, dataType.
// You can also use the UID to run the application.
TThreadId threadId;
User::LeaveIfError(session.StartDocument(aFileName, dataType, threadId));
CleanupStack::PopAndDestroy(); // session
To compile the code above you need to #include a header file apgcli.h and link against two libraries, apgrfx.lib and apmime.lib.
You do not need any capabilities to execute that code.
09 Sep 2009

GUI applications often require launching a default application based on MIME type, for example to open image or audio/video files. RApaLsSession is the right class for launching the default viewer application, as well as for launching other applications.

This article described how to launch the default application based on MIME type, how to launch an application based on its UID, and which headers and libraries are required to use the RApaLsSession API.
Microsoft.SqlServer.MessageBox Namespace
SQL Server 2008
The exception message box is a programmatic interface that is installed with and used by Microsoft SQL Server 2005 graphical components. The exception message box is a supported interface that you can use in your custom applications to provide significantly more control over the messaging experience than is provided by the MessageBox class. It also gives your users the options to save error message content for later reference and to get help on messages.
The namespace of the exception message box implies that this programming interface is only for use with SQL Server. However, it can be used in any application that is based on the Microsoft .NET Framework version 2.0.
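A minimal sketch of how the exception message box might be used from a WinForms application; the constructor and Show overload shown are assumptions to verify against the class reference:

```csharp
using System;
using System.Windows.Forms;
using Microsoft.SqlServer.MessageBox;

// Sketch: show a caught exception with the SQL Server exception message box,
// which lets the user copy or save the message text for later reference.
static class ErrorReporter
{
    public static void Report(IWin32Window owner, Exception ex)
    {
        ExceptionMessageBox box = new ExceptionMessageBox(ex);
        box.Show(owner);
    }
}
```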
Teuchos::FILEstream: Combined C FILE and C++ stream. More...
#include <Teuchos_FILEstream.hpp>
Teuchos::FILEstream: Combined C FILE and C++ stream.
Teuchos::FILEstream is a class that defines an object that is simultaneously a C FILE object and a C++ stream object. The utility of this class is in connecting existing C++ code that uses streams and C code that uses FILEs. An important example of this situation is the python wrappers for Trilinos packages. Trilinos is of course written primarily in C++, but the python wrappers must interface to the python C API. Wrappers for Trilinos methods or operators that expect a stream can be given a Teuchos::FILEstream, which then behaves as a FILE within the python C API. This is a low-level object that should not be needed at the user level.
Definition at line 67 of file Teuchos_FILEstream.hpp.
Constructor.
The only constructor for Teuchos:FILEstream, and it requires a pointer to a C FILE struct.
Definition at line 76 of file Teuchos_FILEstream.hpp. | http://trilinos.sandia.gov/packages/docs/r10.8/packages/teuchos/doc/html/classTeuchos_1_1FILEstream.html | CC-MAIN-2014-10 | en | refinedweb |
11 January 2012 02:40 [Source: ICIS news]
MELBOURNE (ICIS)--Korea Alcohol Industrial has increased its ethyl acetate (etac) output because of higher-than-expected domestic demand for the solvent acetate, a company official said on Wednesday.
“We increased our etac output to full capacity in late December because of an increase in our domestic sales volume,” the official said.
“We’re producing about 173 to 174 tonnes of etac a day,” he added.
Korea Alcohol Industrial is ?xml:namespace>
The company cut its etac output by about 5% to roughly 90% capacity in early December to reduce its year-end inventory. It lowered its butyl acetate (butac) production level to approximately 60% capacity for the same reason.
The producer has kept its butac production level unchanged because of ongoing slow demand.
“Our butac output is about 40 tonnes a day,” said the official.
Answered: Desktop implementation
I have created a jar with the Desktop application that is included in the example files.
I have created a new project that includes this jar file. The gwt.xml configuration is:
<?xml version="1.0" encoding="UTF-8"?>
<module rename-
<inherits name='com.sencha.gxt.desktop.Desktop' />
<inherits name="com.google.gwt.logging.Logging" />
<inherits name='com.sencha.gxt.chart.Chart' />
<set-property
<set-property
<set-property
<set-property
<!-- Comment out the following line to disable readable style names -->
<!-- <set-configuration-property -->
<entry-point
</module>
How can I show the Desktop objects, such as the StatusBar? I wrote this entry point, but the page in the browser is blank:
public class DesktopApp implements EntryPoint {

    @Override
    public void onModuleLoad() {
        Desktop desktop = new Desktop();
        StartMainMenuItem menuItem = new StartMainMenuItem("PROVA");
        desktop.addStartMenuItem(menuItem);
        desktop.layout(DesktopLayoutType.CENTER);
    }
}
At least the same initial problem as in your other post at, and the same answer as well to start resolving it.
Using Django's Free Comments
Django includes some basic commenting functionality that can save you a ton of time if you want to allow users to add comments to objects such as blog entries or photos. Django's free comments are flexible enough that they can be used on pretty much anything.
Set Up
The first thing you'll need to do is to add Django's comment packages to your INSTALLED_APPS in "settings.py":
INSTALLED_APPS = ( [...] 'django.contrib.comments', )
If you're using custom views and not generic views, you'll need to add the following to the top of the relevant "urls.py" file. Generic views already include FreeComment, so you don't have to import it yourself.
from django.contrib.comments.models import FreeComment
Add the following URL pattern to your site-wide "urls.py" file:
urlpatterns = patterns('',
    [...]
    (r'^comments/', include('django.contrib.comments.urls')),
)
You'll also need to load the comment template tags near the top of any template that uses them: {% load comments %}
Below that, you're free to access the comments.
Adding Comment Counts to List and Archive Pages
The only data you need to access an object's comments is that object's id. If you have that, you can get the comments themselves, comment count, and other info.
django.views.generic.list_detail.object_list
In an object_list template, you can do the following to add the comment count to each of the blog entry listings. The following example assumes you have an app named "blog" with a class called "entry", which has fields called "title", "summary", and has a method called "get_absolute_url" which returns an absolute path to that entry's detail page.
Note: The class name should always be referred to in all lowercase. For example, if the class contained inside the app named blog were actually named Entry, it would still be referred to as blog.entry.
<ul> {% for object in object_list %} {% get_free_comment_count for blog.entry object.id as comment_count %} <li> <h2><a href="{{ object.url }}">{{ object.title }}</a></h2> <p class="description">{{ object.summary}}</p> <p class="details"><a href="{{ object.get_absolute_url }}">{{ comment_count }} Comments</a></p> </li> {% endfor %} </ul>
django.views.generic.date_based.archive_index
An archive_index generic view works almost exactly the same, except you're iterating through objects from the "latest" collection instead of the "object_list" collection:
<ul> {% for object in latest %} {% get_free_comment_count for blog.entry object.id as comment_count %} [...] {% endfor %} </ul>
In the date-based archives such as archive_year or archive_month, the collection you iterate through is called "object_list". archive_index is the only generic view with "latest".
Adding Comments to Detail Pages
Typically, you'll allow users to add comments through the detail page for an object. It's possible to allow them to do it elsewhere, but for simplicity's sake, this example only shows how to do it on detail pages. For any of the "detail" generic views such as django.views.generic.list_detail.object_detail or django.views.generic.date_based.object_detail, you'll have the object's id as "object.id", so getting the comment count is the same:
{% get_free_comment_count for blog.entry object.id as comment_count %}
To get the list of comments, you call the following, which puts the list of comments in "comment_list":
{% get_free_comment_list for blog.entry object.id as comment_list %}
Each comment object in comment_list has the following bits of data:
- comment.person_name - The comment poster's name.
- comment.submit_date - The date and time the poster submitted the comment. You can pipe this through the "date" filter to format the date. (Shown in the example below)
- comment.comment - The actual comment text. Don't forget to escape this to prevent code-injection attacks with the "escape" filter. (Shown in the example below)
- comment.is_public - Whether or not the comment is public. (TODO: How do we set a comment's public or non-public status?)
- comment.ip_address - The comment author's IP address.
- comment.approved - Whether or not this comment has been approved by a staff member. (TODO: Where is this set or modified?)
And the following built-in methods:
- comment.get_absolute_url - Returns an absolute URL to this comment by way of the content object's detail page. If a comment is attached to a blog entry located at "/blog/some-slug", this URL will look something like "/blog/some-slug/#c4", where "4" is the comments id number.
- comment.get_content_object - Returns the object that this comment is a comment on.
As of this writing, the free comments don't allow for you to specify other bits of data to be included, such as the comment poster's e-mail address or URL. This may be changed in the future.
Example
{% get_free_comment_count for blog.entry object.id as comment_count %} <h2><a href="{{ object.url }}">{{ object.title }}</a></h2> <em>{{ object.description }}</em> <div class="article_menu"> <b>Added on {{ object.add_date|date:"F j, Y" }}</b> <a href="{{ object.get_absolute_url }}#comments">{{ comment_count }} Comment{{ comment_count|pluralize }}</a> </div> {% get_free_comment_list for blog.entry object.id as comment_list %} <h2 id="comments">Comments</h2> {% for comment in comment_list %} <div class="comment_{% cycle odd,even %}" id="c{{ comment.id }}"> <span class="comnum"><a id="c{{ comment.id }}" href="#c{{ comment.id }}">#{{ forloop.counter }}</a></span> <p><b>{{ comment.person_name|escape }}</b> commented, on {{ comment.submit_date|date:"F j, Y" }} at {{ comment.submit_date|date:"P" }}:</p> {{ comment.comment|escape|urlizetrunc:40|linebreaks }} </div> {% endfor %} <h2>Post a comment</h2> {% free_comment_form for blog.entry object.id %}
Free Comment Templates
Django has internal default templates for the various bits of comments-related code. (NOTE: Actually, I lied. Once these patches are applied, it will. Until then, you'll have to specify your own templates for "free_preview.html" and "posted.html".)
You can override any of these built in templates by creating a "comments/" folder in your templates folder with any or all of the following files:
Post Comment Form (freeform.html)
This template holds the form code that is used by the user to post a comment. In the above example, this is included like so:
<h2>Post a comment</h2> {% free_comment_form for blog.entry object.id %}
Example
{% if display_form %} <form action="/comments/postfree/" method="post"> <p>Your name: <input type="text" id="id_person_name" name="person_name" /></p> <p>Comment:<br /><textarea name="comment" id="id_comment" rows="10" cols="60"></textarea></p> <input type="hidden" name="options" value="{{ options }}" /> <input type="hidden" name="target" value="{{ target }}" /> <input type="hidden" name="gonzo" value="{{ hash }}" /> <p><input type="submit" name="preview" value="Preview comment" /></p> </form> {% endif %}">Comment:</label> <br /> {{ comment_form.comment }} </p> <input type="hidden" name="options" value="{{ options }}" /> <input type="hidden" name="target" value="{{ target }}" /> <input type="hidden" name="gonzo" value="{{ hash }}" /> <p> <input type="submit" name="preview" value="Preview revised comment" /> </p> </form>
Posted Message (posted.html)
This template is shown after a user successfully posts a comment. You can access the.REQUEST.url }}" />
before
<p><input type="submit" name="preview" value="Preview revised comment" /></p>
This should be it. Enjoy
--NL
Other Examples
List Recent Comments
The following code lists all the recent comments on your site, regardless of app.
<h1>Recent comments</h1> <p> {% if has_previous %} <a href="?page={{ previous }}">Previous</a> | {% endif %} Page {{ page }} of {{ pages }} {% if has_next %} | <a href="?page={{ next }}">Next</a> {% endif %} </p> {% for comment in object_list %} <div class="comment" id="c{{ comment.id }}"> <h3> <a href="{{ comment.get_absolute_url }}"> {{ comment.person_name|escape }} <span class="small quiet"> {{ comment.submit_date|date:"F j, Y" }} at {{ comment.submit_date|date:"P" }} </span> </a> </h3> {{ comment.comment|escape|urlizetrunc:"40"|linebreaks }} </div> {% endfor %} <p> {% if has_previous %} <a href="?page={{ previous }}">Previous</a> | {% endif %} Page {{ page }} of {{ pages }} {% if has_next %} | <a href="?page={{ next }}">Next</a> {% endif %} </p> | https://code.djangoproject.com/wiki/UsingFreeComment?version=33 | CC-MAIN-2014-10 | en | refinedweb |
A class representing a Digital Output device which can read or write a maximum of 32 bits at once. More...
#include <rtt/dev/DigitalOutInterface.hpp>
A class representing a Digital Output device which can read or write a maximum of 32 bits at once.
When there are N bits, the bits are numbered from Zero to N-1.
Definition at line 57 of file DigitalOutInterface.hpp.
Create a DigitalOutInterface with an optional name.
When name is not "", and unique, it can be retrieved through DigitalOutInterface::nameserver .
Definition at line 67 of file DigitalOutInterface.hpp.
Returns the status of bit n, starting from zero.
Query the number of outputs of this card.
Sets a sequence of bits to pattern value between start_bit and stop_bit inclusive.
For example, setSequence(3, 3, 1) is equivalent to setBit(3, 1).
Sets the n'th output off.
Sets the n'th output on.
The NameServer of this interface.
Definition at line 84 of file DigitalOutInterface.hpp. | http://www.orocos.org/stable/documentation/rtt/v1.12.x/api/html/classRTT_1_1DigitalOutInterface.html | CC-MAIN-2014-10 | en | refinedweb |
A table entry that is a simple std::string. More...
#include <Teuchos_TableEntry.hpp>
A table entry that is a simple std::string.
Definition at line 132 of file Teuchos_TableEntry.hpp.
Construct with a value.
Definition at line 88 of file Teuchos_TableEntry.cpp.
Write the specified entry to a std::string.
Implements Teuchos::TableEntry.
Definition at line 92 of file Teuchos_TableEntry.cpp. | http://trilinos.sandia.gov/packages/docs/r10.8/packages/teuchos/doc/html/classTeuchos_1_1StringEntry.html | CC-MAIN-2014-10 | en | refinedweb |
Avert thine eyes!
This page is extremely offensive and should not be viewed by anyone.
It may be illegal in several countries as well.
If you don't see a white border, your browser sucks. --47Monkey MUN HMRFRA s7fc | Talk 10:20, 15 Aug 2005 (UTC) </s>
<s>Kirby and Lolly</s>
<s> Both submitted by the same anonymous IP and both have no humour at all. -- ERTW MUN 00:08, 14 Aug 2005 (UTC)
Delete both. --DWIII 02:15, 14 Aug 2005 (UTC)
Delete both. We have standards here. --KP CUN 07:36, 14 Aug 2005 (UTC)
</s>
- Whacked them both, followed "what links here" to other crap, and whacked that too. It was like an adventure. --Famine 02:24, 15 Aug 2005 (UTC)
<s>
Jay-Z
Rewritten --Sir Elvis KUN | Petition 21:13, 20 Aug 2005 (UTC)
Meh! --IMBJR 10:10, 12 Aug 2005 (UTC)
So, JZ is a raper, huh? I think I know just the graphic for this article. . . delete --Marcos_Malo S7fc BOotS | Talk 14:04, 12 Aug 2005 (UTC)
There, I changed the page to something that didn't suck balls. Re-read it. --ENeGMA
- Nice work, En. I think it's important that we continue to keep Uncyclopedia multicultural and have something for the young people (between the ages of 27 and 35) to enjoy. --Marcos_Malo S7fc BOotS | Talk 10:02, 13 Aug 2005 (UTC)
Sound in Space
Deleted Fancy a crack at rewriting? Give me a shout --Sir Elvis KUN | Petition 21:12, 20 Aug 2005 (UTC)
Trek humour? --IMBJR 21:44, 11 Aug 2005 (UTC)
Delete --Rcmurphy KUN 22:15, 11 Aug 2005 (UTC)
Delete but you couldn't hear me tell you that if we were in space. -- ERTW MUN 03:28, 12 Aug 2005 (UTC)
Keep true, it's not very funny, but the premise isn't bad; just needs work. --Marcos_Malo S7fc BOotS | Talk 14:07, 12 Aug 2005 (UTC)
Rewrite Dunk this baby into some water and watch it be reborn. --KP 17:01, 12 Aug 2005 (UTC)
Perhaps tie into the air article somehow? --Spintherism 21:22, 13 Aug 2005 (UTC)
- Comment
- The original quote (used as a slogan for a movie sometime near the end of the 1970's) was "In space, no one can hear you scream". There have been variations ranging from "in cyberspace, no one can hear you scream" to "in n-space, no one can hear you scream" - mostly invented by frustrated compsci and math students respectively. Amusing as one-liners at the time, but this article as written lacks humor entirely. --Carlb 01:59, 14 Aug 2005 (UTC)
- Comment
- I think we should mirror our discussion onto MemoryAlpha:Memory_Alpha:Votes_for_deletion#Sound_in_space and vice versa :-) --Sir Elvis KUN | Petition 09:24, 16 Aug 2005 (UTC)
Shew and Scoo and Middletonia 2 and ...
Kept --Sir Elvis KUN | Petition 21:11, 20 Aug 2005 (UTC)
... anything else that touches these articles. --IMBJR 21:41, 11 Aug 2005 (UTC)
Nuke it from orbit. It's the only way to be sure. -- ERTW MUN 03:28, 12 Aug 2005 (UTC)
Keep they're not hurting anyone, and I like these random collections of internally consistent yet nonsense articles. I also can't imagine anyone creating another article called "Middletonia 2", so it's not in anyone's way. --Sir Elvis KUN | Petition 09:35, 12 Aug 2005 (UTC)
- I'm with Sir on this one. --Marcos_Malo S7fc BOotS | Talk 14:21, 12 Aug 2005 (UTC)
SGAE
<s>Made me laugh for the wrong reasons --IMBJR 21:38, 11 Aug 2005 (UTC)</s>
<s>What the Fuck? I mean seriously. That's god damn messed up. I say we keep it, just in case someone claims we're normal or respectable. We can point them there, and use it as leverage in any negotiations we have to do. "Do you really want someone like this irritated at you? I'm sure we can come to some agreement, before they show up on your doorstep..." --Famine 23:59, 11 Aug 2005 (UTC)</s>
<s>Translated to the English via Babelfish or bathtub drugs? I say keep just because it's that darn bizarre.--blargh 01:30, 12 Aug 2005 (UTC)</s>
<s>Keep definitely for the bizarre factor -- ERTW MUN 03:28, 12 Aug 2005 (UTC)</s>
<s>Move to Babelfish Translations --Sir Elvis KUN | Petition 09:36, 12 Aug 2005 (UTC)</s>
- <s>Humm, perhaps we should have a section for stuff like this. I vote we make a "Non-Native English Prose" subcategory under Category:Incoherent, and place this (and other) articles there. --Famine 13:25, 12 Aug 2005 (UTC)
Partick Thistle Football Club
Kept --Sir Elvis KUN | Petition 20:52, 20 Aug 2005 (UTC)
Too factual? --IMBJR 21:34, 11 Aug 2005 (UTC)
More than factual, looks like vanity to me. -- ERTW MUN 03:28, 12 Aug 2005 (UTC)
- It's actually reasonably sarcastic "overshadowing their city rivals Glasgow Celtic and Glasgow Rangers many times over" --Sir Elvis KUN | Petition 10:23, 12 Aug 2005 (UTC)
I kind of like it. It’s like a niece or nephew to me. I don’t want to raise it. But I can keep it company for an afternoon and return it to its parents when it has a temper tantrum. --KP 17:05, 12 Aug 2005 (UTC)
Keep Perhaps you may have to be from Glasgow or Scotland to fully appreciate it but I think it's rather good. Gordon.
KEEP I've <s>written</s> seen far worse. Although perhaps you need to know a bit about fitba', you don't need to be Scottish <s>
Marvin Kennedy
Kept --Sir Elvis KUN | Petition 20:53, 20 Aug 2005 (UTC)
Has potential in the right hands, but maybe not. --IMBJR 21:20, 11 Aug 2005 (UTC)
"sip more yak" has a ring to it, keep or rewrite--blargh 02:05, 12 Aug 2005 (UTC)
Definitely potential of insanity there, definite keep or rewrite -- ERTW MUN 03:28, 12 Aug 2005 (UTC)
Abstain--Sir Elvis KUN | Petition 10:28, 12 Aug 2005 (UTC)
Dana Reeve
Huffed No one took the bait and rewrote it; if you want it restored, give me a shout --Sir Elvis KUN | Petition 20:54, 20 Aug 2005 (UTC)
Something better than this can surely be made on the subject. --IMBJR 20:55, 11 Aug 2005 (UTC)
Rewrite -- ERTW MUN 03:28, 12 Aug 2005 (UTC)
Omoikane
<s>Delete: ZSY - Zero Snort Yield --IMBJR 20:38, 11 Aug 2005 (UTC)</s>
<s>Weak Keep --Sir Elvis KUN | Petition 10:31, 12 Aug 2005 (UTC)</s>
<s>
People for the Ethical Treatment of Animals
Rewritten put the new version back on VFD if need be --Sir Elvis KUN | Petition 20:58, 20 Aug 2005 (UTC)
Not funny, and entirely too factual to be of any use to us. Gonna kill it in 12 hours anyway, but you can do it too, earlier. It's been recreated once already.--Flammable CUN 14:55, 11 Aug 2005 (UTC)
- I'm going to move at least some sections to TFAODP. --Rcmurphy KUN 14:59, 11 Aug 2005 (UTC)
- Attempted partial rewrite
- People Eating Tasty Animals but the two halves still don't fit together perfectly. Knife and fork, anyone? --Carlb 16:20, 11 Aug 2005 (UTC)
List of what the "blood" PeTA uses could possibly be
Merged and Redirected to People Eating Tasty Animals --Sir Elvis KUN | Petition 20:58, 20 Aug 2005 (UTC)
- Merge
- Merge to People Eating Tasty Animals if that doesn't end up getting huffed. We don't need two articles when one is more than enough. --Carlb 16:41, 11 Aug 2005 (UTC)
Gandhi
Rewritten VFD again if need be --Sir Elvis KUN | Petition 20:59, 20 Aug 2005 (UTC)
QVFD since 29/6, time to put a bullet in his head?
- Rewrite
- Replacing "peace" with "warrior" isn't in and of itself funny. The "I am Ghandi, I come in peace, come see peaceful nuclear explosion test to symbolize peace between India and Pakistan" might work, but only by mixing fact and contradiction. --Carlb 15:46, 8 Aug 2005 (UTC)
I played with it slightly, but it still bites: Rewrite or Delete--blargh 02:22, 10 Aug 2005 (UTC)
Flying leech
Kept rewritten version --Sir Elvis KUN | Petition 21:00, 20 Aug 2005 (UTC)
QVFD since 29/6, time to pull off the patient's parts?
- Disinfect and kill it
- Sucks, but not in a nice way. Not much here but skin and bones. --Carlb 15:46, 8 Aug 2005 (UTC)
I think a rewrite might save it. --blargh 04:47, 9 Aug 2005 (UTC)
I couldn't even finish the page, even by Unc standards it was way too contrived. --sangandongo 17:25, 10 Aug 2005 (UTC)
Keep/Expand It didn't quite make me chortle, but it's probably above the 50th percentile. (Note: it's been dramatically rewritten by Blargh since it was initially VFDed, so take another look if you haven't already.) --Spintherism 05:05, 11 Aug 2005 (UTC)
Keep I think Blargh did a good job. It's not the funniest thing he's ever written, but it's decent, competent humor. Any deficiencies are inherent in the subject matter, not with the Blargh. --Marcos_Malo S7fc BOotS | Talk 10:15, 13 Aug 2005 (UTC)
867-5309
Kept Rewritten version --Sir Elvis KUN | Petition 21:05, 20 Aug 2005 (UTC)
QVFD since 2/7, time to disconnect?
- Rewire
- uh, rewrite - tie in to other numbers, number of the beast perhaps? --Carlb 15:46, 8 Aug 2005 (UTC)
- Monkey sez 'burn in hell'; delete. It seems so -- factual. Ugh, I feel dirty just saying that. --47Monkey 04:55, 9 Aug 2005 (UTC)
- Rewrite
- Maybe the factual edge can be taken off, for monkey's sake.--blargh 12:03, 9 Aug 2005 (UTC)
- Rewrite
- I rememmber that song --Nytrospawn
I rewrote it slightly.--blargh 02:24, 10 Aug 2005 (UTC)
- As n00b, I didn't want to be all cocky with the "I did this," but I think some indication that it has indeed been altered would only be fair if we're still voting on it, so I've retroactively added a comment here to articles I've altered. Wasn't sure if I personally should go ahead and take the VFD's off when I did the edit, or wait for people with higher 1337ness to do that?
- Rewrite
- Oh hey, it was on QVFD? I rewrote the whole thing on July 31st, I didn't notice it was QVFD as well. If it's too factual I apologize, it was mostly just a gateway to the Stallhor Curse. --Monthenor 03:57, 12 Aug 2005 (UTC)
Third world
Kept --Sir Elvis KUN | Petition 21:07, 20 Aug 2005 (UTC)
QVFD since 2/7. Time to commit genocide?
Keep We need an article like this. Not this one, but one like it. I view it as fertilizer, for someone to plant the seed of a better article in. --Famine 14:41, 8 Aug 2005 (UTC)
- Redirect
- Send to Third world country --Carlb 15:46, 8 Aug 2005 (UTC)
Keep The main idea behind this is very clever indeed, and with some rewriting this could be brilliant. Deleting it would probably only produce a far, far worse Third World article, with mindless racism aplenty. --Turnip 17:45, 8 Aug 2005
Rewrite or keep definitely. -- ERTW MUN 17:06, 8 Aug 2005 (UTC)
Pikachu
Merged and Redirected to Pikachusetts--Sir Elvis KUN | Petition 21:04, 20 Aug 2005 (UTC)
QVFD since 2/7. Time to exterminate?
Keep Necessary for Pikachusetts so if we're keeping that one, we keep this one. --Famine 14:37, 8 Aug 2005 (UTC)
- Merge
- Merge to Pikachusetts if that article is the only reason for keeping this. --Carlb 15:46, 8 Aug 2005 (UTC)
Axe
Kept --Sir Elvis KUN | Petition 20:51, 20 Aug 2005 (UTC)
In QVFD since 17/7. Time to decide if it gets the chop. HO HO.
Keep It needs work, but it has promise. You axed me and I say Keep. --Famine 14:43, 8 Aug 2005 (UTC)
<s>
1920s
Kept--Sir Elvis KUN | Petition 20:50, 20 Aug 2005 (UTC)
Sat in QVFD limbo since 23/7. Time to decide if it goes to Hell or Hebbin. --IMBJR 13:53, 8 Aug 2005 (UTC)
Keep Not bad for a year (or in this case decade) article--220.238.29.245 14:07, 8 Aug 2005 (UTC)
Keep --Famine 14:23, 8 Aug 2005 (UTC)
- Keep, keep like a monkey keeps his enemies close, yet not too close.--47Monkey 04:55, 9 Aug 2005 (UTC)
Objection! I re-wrote the article, after the original was deleted, but forgot to get rid of the tag. I was young and naive. Anyway, I've been getting positive reviews here, and my ego needs it -- 86.133.171.84 19:10, 18 Aug 2005 (UTC)
<s>
Ballet
Kept--Sir Elvis KUN | Petition 20:49, 20 Aug 2005 (UTC)
Has been loitering in QVFD since 29/7. Time to decide its fate. --IMBJR 13:47, 8 Aug 2005 (UTC)
- KEEP!!! "Probably one of the most revolutionary dancers, Anna Pavlova is most famous for her brutal execution of a swan." That is indeed a keeper, if I ever saw one. --Famine 14:35, 8 Aug 2005 (UTC)
I played with it slightly.--blargh 02:33, 10 Aug 2005 (UTC)
I say keep it Hydroksyde 09:58, 11 Aug 2005 (UTC)
<s>
Unused Images
Being dealt with --Sir Elvis KUN | Petition 20:48, 20 Aug 2005 (UTC)
There are 278 images with no homes to go to. I'm suggesting that any image that's older than a month gets reamed. --IMBJR 12:19, 8 Aug 2005 (UTC)
- I've got a feeling some of these images would be useful if we knew they were there. Is it possible to make some sort of thumbnail gallery of unused images?--220.238.29.245 13:43, 8 Aug 2005 (UTC)
- P.S. do we have any real reason to delete these old pictures other than for the hell of it? Are we running out of space?
- I'm not privy to the current server space size, but I see no point in storing images that go unused. --IMBJR 13:57, 8 Aug 2005 (UTC)
- But what about a rainy day? Why don't you think like this man? Huh?! --47 Monkeys 14:00, 8 Aug 2005 (UTC)
- Selectively keep
- If there's some other reason to huff 'em (plagiarism, near-duplicate of something since reuploaded under another name, poor quality) then huff 'em, but it should go beyond "unused" to "unlikely to be worth using anytime soon." --Carlb 15:53, 8 Aug 2005 (UTC)
Unused images as of Aug 8, 2005: Uncyclopedia:Pages_for_deletion/unusedimages
Discussion of individual unused images may be best moved to these subpages as there are simply too many images to all fit on one page. --Carlb 19:08, 8 Aug 2005 (UTC)
- I've now reviewed all of them there purdy pictures. If anyone else would also like to vote - please do so. --IMBJR 10:20, 10 Aug 2005 (UTC)
- Question
- So, being n00b, I have question -- what happens if I used some of these previously unused images? Should I hunt them down in the list and add a comment that they are now used?--blargh 01:16, 9 Aug 2005 (UTC)
Image:Yougonnaget.jpg
Kept with a whopping six votes in favor of retaining the image. Nothing like an accusation of racism to give the ballot boxes some action. I'll leave the image description blank so that someone can write something...appropriate...there. --Rcmurphy KUN 02:12, 18 Aug 2005 (UTC)
<s> Why was this racist shit on the main page? This isn't funny. It's racist. Racism is not funny.
Please DELETE.
there is NO excuse for this. please delete this image and or at least remove it from the fucking main page. --66.177.138.113 23:47, 6 Aug 2005
Keep This isn't racist, and I don't know how you came to that conclusion other than that you're an idiot. But even if it this image somehow was racist, that still doesn't make it inherently unfunny. Go pull the stick out of your ass, Mr. Stick-in-the-ass. --EvilZak 03:03, 7 Aug 2005 (UTC)
- Edit: I agree with Carlb that the image description should be rewritten --EvilZak 23:24, 7 Aug 2005 (UTC)
Keep for the same reason as EvilZak mentions. -- ERTW MUN04:12, 7 Aug 2005 (UTC)
- Rewrite
- image description page is in poor taste, though not necessarily on racial grounds. --Carlb 04:22, 7 Aug 2005 (UTC) Comment: Full text (including desc page) is "You gonna get raped. Well ya gonna, why fight it?". Tone is therefore one of being in favour of those who commit severe crimes against the person? A similar question was raised earlier wrt National Try To Assassinate The President Day and that article has been carefully worded (or reworded) to indicate that yes, this is still criminal, while retaining much or most of the original. --Carlb 15:56, 7 Aug 2005 (UTC)
Whatever It's obviously* racist on some level, but without further context I'm not sure what the grounds are for deleting it. Without context, it's mildly offensive and non-PC. Are we going to start deleting items because they might hurt someone's feelings? --Marcos Malo 07:14, 7 Aug 2005 (UTC)
- If it's not obviously racist to you, then perhaps it's actually subtly racist.
Um, it's racist. A picture of a black man with "Ya Gonna Get Raped" is racist. it's not funny. it's racist. - 66.177.138.113 19:35, 7 Aug 2005
- Comment
- Please sign your edits with --~~~~ and do consider logging in as a named user when posting to "vote" pages such as this one. As for the page itself? Unlikely that a joke about rape could be somehow redeemed by picking someone of some other colour or color to pose for this photo. --Carlb 22:10, 7 Aug 2005 (UTC)
I don't understand what this guy is trying to say. Is he trying to imply that the term "rapist" is a race-exclusive term? Blacks are hardly the only race that has rapists. In fact, there are other "You gonna get raped" images on the Internet, and the men featured in such images vary in race. Hell, there's one with Mario. The joke doesn't discriminate against blacks. --EvilZak 23:24, 7 Aug 2005 (UTC)
- You might want to study the history of racism in the United States. You might find that there is a meme of black man as rapist, often used to justify lynchings of random people (random except for their skin color). Don't expect everyone else's experiences to be analogous to your own limited experience. The fact that there are other images doesn't really lessen the impact of this particular message. If you want a funny image (and I think we can all agree that is what we are here for), upload the mario one. --Marcos Malo 23:51, 7 Aug 2005 (UTC)
Delete - Not necessarily racist, but still not funny and in very bad taste. --Cap'n Ben CUN
- Comment
- Seems to be the same author as this fine template (original wording was "...make her tell everyone it was a miscarriage or I'd do it again..."); might be worth checking contribs for a pattern here? --Carlb 01:07, 8 Aug 2005 (UTC)
- He's been contributing for a while and written some funny stuff. I don't think that's necessary.--Abc 08:13, 8 Aug 2005 (UTC)
Keep Amusing in the context of the page it was on. On a related note, when did we start deleting on the basis of tastefulness? --Abc 08:03, 8 Aug 2005 (UTC)
Keep. The community needs to know these things.--47Monkey 04:55, 9 Aug 2005 (UTC)
Rewrite. How about using a cartoon character, or some character not stereotyped as a rapist by racist people? Remember that it does not need to be realistic or based on somebody's beliefs (racist or nonracist) or whatnot to be funny. How about using Evil Bert instead, unless accusing muppets of being rapists is somehow racist? Face it, when Orion Blastar is starting to make sense, it is one seriously messed up issue. --Orion Blastar 14:45, 11 Aug 2005 (CDT)
- Comment
- That may already exist Image:Yougonnagetrapedbygirly.jpg but still nothing to boast about. --Carlb 23:53, 17 Aug 2005 (UTC)
Keep. I don't feel it's racist. --Rawr 17:41, 12 Aug 2005 (UTC)
Keep Agree with Abc, it's amusing, tasteless, shocking and controversial, all the things we strive to be but are unable. --GodEmperorOfHell 02:07, 18 Aug 2005 (UTC) </s>
<s>
Antonin Scalia
Kept Rewritten version --Sir Elvis KUN | Petition 20:47, 20 Aug 2005 (UTC)
Rewrite --Sir Elvis KUN | Petition 18:44, 6 Aug 2005 (UTC)
Delete --Rcmurphy CUN 21:06, 6 Aug 2005 (UTC)
Rewritten I have only the vaguest notion who Antonin Scalia is, but I still think my article's better than the old one. --Cap'n Ben CUN 00:35, 8 Aug 2005 (UTC)
Keep the Rewrite I didn't actually read it, but I looked nice. --Spintherism 20:53, 11 Aug 2005 (UTC)
Keep the Rewrite - Silly and amusing. --Trevie 23:48, 19 Aug 2005 (UTC)
Badminton
Rewritten --Sir Elvis KUN | Petition 20:46, 20 Aug 2005 (UTC)
Delete--Sir Elvis KUN | Petition 18:44, 6 Aug 2005 (UTC)
I think I'd like to Rewrite this at some point, but I'm not inspired at the moment. --Spintherism 19:27, 6 Aug 2005 (UTC)
Delete --Rcmurphy CUN 21:08, 6 Aug 2005 (UTC)
Major rewrite. --DWIII 03:36, 7 Aug 2005 (UTC)
- Rewrite
- Badminton could become Deadmonton (once the bad guys are dead) then Edmonton (capital of Saudi Alberta) --Carlb 22:34, 7 Aug 2005 (UTC)
- Rewrite
- What's the huge rush to delete topics that just need a make over? Swear to Spaghetti, some of you are worse than the uptight control freaks on wikipedia. Oh, wait. Some of you ARE uptight control freaks over on wikipedia. My bad. (I think I am seeing the deletion of articles without the normal processes of VFD or QVFD. Is it possible that an 'op is doing this arbitrarily?) --Marcos Malo 23:42, 7 Aug 2005 (UTC)
- It's common practice here for Sysops to delete (some) pages without consulting other users or admins. Most of the stuff that is deleted without going through VFD is truly irredeemable garbage, and QVFD exists largely for regular users to list such pages so that they are visible to admins, not necessarily to open them up for voting (though it's not unusual for an admin or other user to move a page from QVFD to regular VFD). Do you have any idea how worthless and cluttered this site would be if every page had to go through VFD before being deleted? Just because Uncyclopedia is a parody site doesn't mean that we encourage stupid pages. Quite the contrary. --Rcmurphy KUN 00:44, 8 Aug 2005 (UTC)
- Thanks for getting back to me on that. And you're right, there does seem to be a great deal of stupid stuff. Still, I wonder: what are the guidelines, if any, for instant deletion? Is it completely arbitrary? Also, what is the policy for redirects? Can a sysop redirect articles to his own articles for the purposes of attention whoring? (Example: Kevin_Mitnick) This last seems to be an abuse of powers, since it forecloses on anyone from writing on the topic of Kevin Mitnick. What is to stop a sysop from redirecting a bunch of related words in order to monopolize a topic?
- I'd say the guidelines for instant deletions are the same as the QVFD guidelines. The vast majority of instant deletions are very short pages - even horrifically bad pages are usually VFD'd if they're more than a few sentences. If I come across a short page that I think has no worth and no potential whatsoever, I'll delete it; but if I see a page that, say, offends me personally or is about a subject that I find boring or inherently unfunny, then no, I won't delete it because there's a good possibility someone else would have a laugh over it. Now, other admins can review deleted pages (i.e. see the deleted contents) and restore them if they want, so there is the possibility of some review that way. I think most admins would also be reasonable if asked to restore a deleted page by a regular user - I've done it before and I know some other guys have too. Finally, redirects can be done by any user, not just admins (the command is "#REDIRECT [[(page to redirect to)]]") and they can be overridden by clicking on the "Redirected from [[(page)]]" link and editing from there. So presumably redirects can be controlled in the same way as any other element of a wiki - by having many users iron them out.
- These, of course, are just my views. Other admins may have differing stances on the issues.
- This discussion is way too serious for Uncyclopedia.--Rcmurphy KUN 05:18, 8 Aug 2005 (UTC)
Billie Piper
<s>Delete--Sir Elvis KUN | Petition 18:44, 6 Aug 2005 (UTC)</s>
<s>Delete --Rcmurphy CUN 21:09, 6 Aug 2005 (UTC)</s>
<s>Delete --anon</s>
<s>Don't Delete i'm rewriting this at the moment, after that feel free to delete it, if you wish. CheeseLover 09:36, 7 Aug 2005 (UTC)</s>
<s>
Dr. Laura
Deleted no one fancied rewriting it, so it got deleted; give me a shout if you want it restored to work on. --Sir Elvis KUN | Petition 20:45, 20 Aug 2005 (UTC)
- Rewrite/
- --Sir Elvis KUN | Petition 18:44, 6 Aug 2005 (UTC)
- Rewrite
- In its present form, it's just another so-and-so-is-ghey page that could be about anyone and therefore delete-worthy. Rewrite to jump on clichés specific to this person (for instance, the tiresome «I am Mike Idd's mom» thing - claim that she's slept around enough that she had to get a maternity test to know if the kid was actually hers?). --Carlb 15:46, 8 Aug 2005 (UTC)
Farting
Rewritten and Moved to Fartium--Sir Elvis KUN | Petition 20:41, 20 Aug 2005 (UTC)
- Rewrite and Move to Fartium and add an entry to table of elements--Sir Elvis KUN | Petition 18:57, 6 Aug 2005 (UTC)
- Comment Is it a valid element if it only exists in the gaseous state? --Carlb 19:12, 6 Aug 2005 (UTC)
- I don't see why not. It is possible that, though under extreme conditions it could be liquid or solid, those conditions have never been met. And I second the motion to Rewrite and Move. --Spintherism 19:48, 6 Aug 2005 (UTC)
- Believe me, it CAN be in a liquid state, known as a wet fart or a beer fart. The conditions for the liquid state have been met countless times. CF the movie Trainspotting. --Marcos Malo 18:53, 7 Aug 2005 (UTC)
Friedrich Nietzsche
Kept--Sir Elvis KUN | Petition 20:31, 20 Aug 2005 (UTC)
</s> <s>
Image talk:Rainbow.jpg
Kept--Sir Elvis KUN | Petition 20:30, 20 Aug 2005 (UTC)
- Keep but burn the stupid {{BurnThePicture}} template. --Carlb 18:25, 6 Aug 2005 (UTC)
- Keep--Sir Elvis KUN | Petition 18:52, 6 Aug 2005 (UTC)
Template:BurnThePicture
Rewritten and Moved to Template:vfd_image
- Move and Rewrite to Template:vfd_pic and with link to VFD section (as other vfd templates) --Sir Elvis KUN | Petition 18:52, 6 Aug 2005 (UTC)
</s> <s>
Some others with the {{vfd}} category tag... huff 'em all?
All of these are tagged as being in the {{vfd}} category so they must be junk:
Help:Baleeted
- A lame attempt to sneak factual information into Uncyclopedia. Aren't there Uncyclopedia parody sites for this sort of rubbish? --Carlb 18:16, 6 Aug 2005 (UTC)
Delete HSR fancruft. --EvilZak 21:14, 6 Aug 2005 (UTC)
Uncyclopedia:Templates/Deletion process
Template:Baleeted
Template talk:Baleeted
Template:Baleeted2
Template:BurnThePicture
Template:VFD
Template:Vfd
- These templates must be pure, absolute rubbish... to date, every page that's used them has ended up on VFD. Every last one. Unbelievable. Just huff 'em? --Carlb 18:16, 6 Aug 2005 (UTC)
Uncyclopedia:Pages for deletion#
- Looks like some sort of poor-quality link farm. Every page linked from here is crap too. Why am I even listing this here instead of just QVFD'ing this page? --Carlb 18:16, 6 Aug 2005 (UTC)
Delete This is at least the third time this page has been listed here. When will the admins (idiots, every last one of 'em) get it through their thick skulls that people want this page gone? --EvilZak 21:18, 6 Aug 2005 (UTC)
- Comment
- They probably know it's garbage as they keep trying to huff it (that's why there are six different incarnations of the same blasted page archived away somewhere). The vadnals keep recrating, uh, recreating it as a forum in which to advertise their crappy article substubs. CVP anyone? --Carlb 17:37, 11 Aug 2005 (UTC)
Uncyclopedia:Sandbox
- Looks like some sort of n00b test, or maybe vandalism. --Carlb 18:16, 6 Aug 2005 (UTC)
- Move to Uncyclopedia:Litterbox & Keep as a testing ground for the n00bers; perhaps content could be autopurged on a semi-regular basis (see [1]Wikipedia:Sandbox & [2]Wikipedia talk:Sandbox). --DWIII 20:18, 6 Aug 2005 (UTC)
User:Blat
- Shows as {{BurnThePicture}} but doesn't indicate which picture is the one to be deleted. Perhaps they meant that the user be the one to be deleted... just a minute while I call +1.800.Y0.MAFIA and have them "take care of" this one. --Carlb 18:16, 6 Aug 2005 (UTC)
Delete Vanity. Does not claim any sort of notability, either in the Un-iverse or in Wikipedia canon.
Uncyclopedia:Community Portal/archive3#Homestar VFD Template
- If number of "delete this" templates on a page is an accurate measure of anything, this has to be the second-worst page in all of Uncyclopedia, exceeded only by that piece of cruft Uncyclopedia:Templates/Deletion process. Shoot on sight? --Carlb 18:16, 6 Aug 2005 (UTC)
Delete This page appears to have been used as a chat page. Someone tell these newbies that that's what the Talk namespace is for. --EvilZak 21:12, 6 Aug 2005 (UTC)
Politician
Rewritten--Sir Elvis KUN | Petition 20:26, 20 Aug 2005 (UTC)
Rewrite, Redirect or Delete Three counts of Not Being Funny, one count of Cliché and one count of Bad Writing. --Spintherism 00:58, 6 Aug 2005 (UTC)
I'd say Rewrite is the best route. --ERTW 02:43, 6 Aug 2005 (UTC) | http://uncyclopedia.wikia.com/wiki/Uncyclopedia:Votes_for_deletion/archive9?diff=prev&oldid=5080436 | CC-MAIN-2014-10 | en | refinedweb |
JPanel paintComponent not working
Sergey Schek
Greenhorn
Joined: Jul 26, 2009
Posts: 5
posted
Jul 26, 2009 03:06:33
0
I don't know how best to put it, but that seems to be what's happening here. I started a project recently to use up some of the free time I have, and decided that I'd attack the GUI first, to a degree. Well, I ran into my first major problem already, less than 100 lines of code into the project. What I'm trying to do is to have a main JFrame that will contain all of the program's graphical elements, and it's meant to be without a title bar or any other decoration (exit using the Esc key for now). To that I am trying to add a JPanel that will function as a menu (I want to make it pretty in a sort of way that would be hard using buttons and dialogs). Now, to test that the panel is working fine, I tried to make it draw a rect... which it didn't. I made sure to set the color, check if the panel is visible, and do all those other things you should do. I'm certain I missed something though, as I'm not having much luck, and I am also certain that it's something pretty small and stupid on my part. Anyways, without further ado, here's the code:
```java
import UI.*;

public class Main {
    public static void main(String args[]) {
        MainScreen main = new MainScreen();
        MainMenu menu = new MainMenu(main.getX(), main.getY());
        menu.setVisible(true);
        main.add(menu);
        System.out.println("Menu visible: " + menu.isVisible());
    }
}
```
That's the main class that will eventually run everything. So far it creates the main frame and the panel, and adds one to the other, in the process printing out whether the panel is visible (which, when I run the code, returns as a yes).
```java
package UI;

import javax.swing.*;
import java.awt.event.MouseListener;
import java.awt.event.MouseEvent;
import java.awt.event.KeyListener;
import java.awt.event.KeyEvent;
import java.awt.*;

public class MainScreen extends JFrame implements MouseListener, KeyListener {
    public void mousePressed(MouseEvent m) {}
    public void mouseReleased(MouseEvent m) {}
    public void mouseClicked(MouseEvent m) {
        System.out.println("Mouse Pressed");
    }
    public void mouseEntered(MouseEvent m) {}
    public void mouseExited(MouseEvent m) {}

    public void keyPressed(KeyEvent k) {
        int kcode = k.getKeyCode();
        //System.out.println("Key Pressed " + kcode);
        if (kcode == KeyEvent.VK_ESCAPE) {
            System.out.println("Esc Pressed");
            this.dispose();
            System.exit(0);
        }
    }
    public void keyReleased(KeyEvent k) {}
    public void keyTyped(KeyEvent k) {}

    public MainScreen() {
        JFrame mainFrame = new JFrame();
        mainFrame.addKeyListener(this);
        mainFrame.addMouseListener(this);
        mainFrame.setUndecorated(true);
        mainFrame.setVisible(true);
        mainFrame.setSize(300, 300);
        mainFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        mainFrame.requestFocus();
        System.out.println("Focus: " + mainFrame.hasFocus());
        //MainMenu menu = new MainMenu(/*this.getX(), this.getY()*/300,300);
        //menu.setVisible(true);
        //mainFrame.add(menu);
        //menu.repaint();
    }
}
```
That's the code for the MainFrame. Ignore the methods for the key and mouse listeners; I know they work fine.
```java
package UI;

import javax.swing.*;
import java.awt.event.KeyListener;
import java.awt.event.MouseListener;
import java.awt.event.MouseEvent;
import java.awt.event.KeyEvent;
import java.awt.*;

public class MainMenu extends JPanel implements MouseListener, KeyListener {
    private int xsize, ysize;

    public void mouseClicked(MouseEvent e) {}
    public void mousePressed(MouseEvent e) {}
    public void mouseReleased(MouseEvent e) {}
    public void mouseEntered(MouseEvent e) {}
    public void mouseExited(MouseEvent e) {}
    public void keyTyped(KeyEvent e) {}
    public void keyPressed(KeyEvent e) {}
    public void keyReleased(KeyEvent e) {}

    public MainMenu(int x, int y) {
        super();
        xsize = x;
        ysize = y;
        JPanel menu = new JPanel();
        menu.setVisible(true);
        menu.setPreferredSize(new Dimension(x, y));
    }

    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setColor(Color.BLACK);
        g.fillRect(0, 0, 50, 50);
        System.out.println("redrawing rec");
    }
}
```
And finally, the code for the menu Panel.
Michael Dunn
Ranch Hand
Joined: Jun 09, 2003
Posts: 4632
posted
Jul 26, 2009 04:58:15
0
I'm sorry, but what a dog's breakfast.
class MainScreen extends JFrame, but then you create a JFrame in the constructor???
class MainMenu extends JPanel, but then you create a JPanel in the constructor???
many other 'strange' lines of code.
here's a working version of your code, stripped to the problem (no listeners),
in particular look at the commented out lines and the added one
```java
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

class Testing {
    public void buildGUI() {
        JFrame f = new JFrame();
        f.getContentPane().add(new MainMenu(400, 300));
        f.pack();
        f.setLocationRelativeTo(null);
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setVisible(true);
    }
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                new Testing().buildGUI();
            }
        });
    }
}

class MainMenu extends JPanel {
    private int xsize, ysize;

    public MainMenu(int x, int y) {
        super();
        xsize = x;
        ysize = y;
        //JPanel menu = new JPanel();
        //menu.setVisible(true);
        //menu.setPreferredSize(new Dimension(x,y));
        setPreferredSize(new Dimension(x, y)); //added
    }

    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setColor(Color.BLACK);
        g.fillRect(0, 0, 50, 50);
        System.out.println("redrawing rec");
    }
}
```
Sergey Schek
Greenhorn
Joined: Jul 26, 2009
Posts: 5
posted
Jul 26, 2009 13:53:04
0
D'oh, I knew coding so late at night wasn't a good idea.
Thanks for the prompt reply.
Paul Wheaton | http://www.coderanch.com/t/455725/GUI/java/JPanel-painComponent-working | CC-MAIN-2014-10 | en | refinedweb |
```c
#include <db.h>

int
DB_TXN->discard(DB_TXN *tid, u_int32_t flags);
```
The
DB_TXN->discard() method frees up all the per-process resources
associated with the specified DB_TXN handle, neither
committing nor aborting the transaction. This call may be used only
after calls to DB_ENV->txn_recover() when
there are multiple global transaction managers recovering transactions
in a single Berkeley DB environment. Any transactions returned by
DB_ENV->txn_recover()
that are not handled by the current global transaction manager should
be discarded using
DB_TXN->discard().
All open cursors in the transaction are closed and the first cursor close error, if any, is returned.
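The recover-then-discard workflow described above is essentially a filtering loop over the prepared transactions. Here is a rough, illustrative sketch of that control flow in Python; `PreparedTxn`, `gid`, and `filter_recovered` are invented stand-ins for this sketch, not part of the Berkeley DB API, whose real interface is the C one documented on this page:

```python
class PreparedTxn:
    """Stand-in for a DB_TXN handle returned by DB_ENV->txn_recover()."""
    def __init__(self, gid):
        self.gid = gid          # global transaction id
        self.state = "prepared"

    def discard(self):
        # Frees per-process resources; neither commits nor aborts.
        self.state = "discarded"

def filter_recovered(recovered, our_gids):
    """Keep transactions this manager owns; discard all the others."""
    ours = []
    for txn in recovered:
        if txn.gid in our_gids:
            ours.append(txn)
        else:
            txn.discard()
    return ours
```

The real code would then commit or abort each transaction it kept, based on the global transaction manager's log.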
The
DB_TXN->discard()
method returns a non-zero error value on failure and 0 on success.
The error values that this method returns include the error values of
DBcursor->close() and the following:
A Berkeley DB Concurrent Data Store database environment configured for lock timeouts was unable to grant a lock in the allowed time.
After
DB_TXN->discard() has been called, regardless of its return,
the DB_TXN handle may not
be accessed again.
The
DB_TXN->discard()
method may fail and return one of the following non-zero errors: | http://idlebox.net/2011/apidocs/db-5.2.28.zip/api_reference/C/txndiscard.html | CC-MAIN-2014-10 | en | refinedweb |
JavaFX for Swing Developers
5 Implementing a Swing Application in JavaFX
In this chapter, you consider a Swing application and learn how to implement it in JavaFX.
For the purpose of this chapter, get familiar with the Converter application shown in Figure 5-1. This application converts distance measurements between metric and U.S. units.
Figure 5-1 Converter Application in Java
Analyzing the Converter Application Developed in Swing
For more information about the implementation of this example in the Java programming language, see How to Use Panels and Using Models trails in the Swing tutorial. In particular, the graphical user interface (GUI) is discussed in the trail about the panels.
To learn the code of the Converter application, download its NetBeans project or the source files available at the example index.
Swing components use models. If you look at the contents of the project, you notice the
ConverterRangeModel and
FollowerRangeModel classes that define models for the Converter application.
The Converter application consists of the following files:
ConversionPanel.java — contains a custom
JPanel subclass to hold components
Converter.java — contains the main application class
ConverterRangeModel.java — defines the top slider's model
FollowerRangeModel.java — defines the bottom slider's model
Units.java — creates
Unit objects
Note that the synchronization between each text field and its slider is implemented by event handlers that listen for changes in values.
Planning the Converter Application in JavaFX
The Converter application contains two similar panels that hold components such as a text field, slider, and combo box. The panels have titles. The
TitlePane class from the javafx.scene.control package ideally suits the GUI of the Converter application.
In what follows, you will implement the
ConversionPanel class and add two instances of this class to the graphical scene of the Converter application.
First, note that the components within a single
ConversionPanel object should be synchronized as follows. Whenever you move the knob on the slider, you must update the value in the text field and vice versa: Whenever you change the value in the text field, you must adjust the position of the knob on the slider.
As soon as you choose another value from the combo box, you must update the value of the text field and, hence, the position of the knob on the slider.
Second, note that both
ConversionPanel objects should be synchronized. As soon as changes happen on one panel, the corresponding components on another panel must be updated.
It is suggested that you implement synchronization between the panels using the
DoubleProperty object, called
meters, and listen to changes in the properties of the text fields and combo boxes by creating and registering two
InvalidationListener objects:
fromMeters and
toMeters. Whenever the property of the text field on one panel changes, the
invalidated method of the attached
InvalidationListener object is called, which updates the
meters property. Because the
meters property changes, the
invalidated method of the
InvalidationListener object, attached to the
meters property, is called, which updates the corresponding text field on another panel.
Similarly, whenever the property of the combo box on one panel changes, the
invalidated method of the attached
InvalidationListener object is called, which updates the text field on this panel.
To provide synchronization between the value of the slider and the value of the
meters object, use bidirectional binding.
For more information about JavaFX properties and binding, see Using JavaFX Properties and Binding.
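The property/listener machinery described above is not specific to JavaFX; the invalidation-listener idea can be sketched in a few lines of Python. `ObservableValue` below is an invented stand-in for a JavaFX `DoubleProperty`, shown only to illustrate the pattern:

```python
class ObservableValue:
    """Tiny stand-in for a JavaFX DoubleProperty with invalidation listeners."""
    def __init__(self, value):
        self._value = value
        self._listeners = []

    def add_listener(self, fn):
        self._listeners.append(fn)

    def get(self):
        return self._value

    def set(self, value):
        if value != self._value:
            self._value = value
            for fn in self._listeners:
                fn(self)        # the "invalidated" callback

# shared model, playing the role of the meters property in the tutorial
meters = ObservableValue(1.0)
feet_display = []
meters.add_listener(lambda obs: feet_display.append(obs.get() / 0.305))
meters.set(2.0)   # listener fires and updates the derived display value
```

In the tutorial, the shared meters property plays this role: both panels listen to it, so a change made through one panel propagates to the other.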
Creating the Converter Application in JavaFX
Create a new JavaFX project in NetBeans IDE and name it Converter. Copy the Unit.java file from the Swing application to the Converter project. Add a new
Java class to this project and name it ConversionPanel.java.
Standard JavaFX Pattern to Create the GUI
Before you start creating the GUI of the Converter application in JavaFX, see the standard pattern of GUI creation in Swing applications, as shown in Example 5-1.
Example 5-1
```java
public class Converter {
    private void initAndShowGUI() {
        ...
    }
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                initAndShowGUI();
            }
        });
    }
}
```
To map this pattern to JavaFX, you extend the
javafx.application.Application class, override the
start method, and call the
main method, as shown in Example 5-2.
Example 5-2
```java
import javafx.application.Application;
import javafx.stage.Stage;

public class Converter extends Application {
    @Override
    public void start(Stage t) {
        ...
    }
    public static void main(String[] args) {
        launch(args);
    }
}
```
When you create a new JavaFX project in the NetBeans IDE, this pattern is automatically generated for you. However, it is important that you understand the basic approach to GUI creation in JavaFX, especially if you use a text editor.
Containers and Layouts
In Swing, containers and layout managers are different entities. You create a container, such as a
JPanel or
JComponent object, and set a layout manager for this container. You can assign a specific layout manager and write
.add() in your code or assign none of the layout managers.
In JavaFX, the container itself takes care of laying out its child nodes. You create a specific layout pane, such as a
VBox,
FlowPane, or
TitledPane object, and then add content to the list of its child nodes using the
.getChildren().add() methods.
There are several layout container classes in JavaFX, called panes, some of which have their counterparts in Swing, such as the
FlowPane class in JavaFX and
FlowLayout class in Swing.
For more information, see Working With Layouts in JavaFX.
UI Controls
JavaFX SDK provides a set of standard UI controls. Some of the UI controls have their counterparts in Swing such as the
Button class in JavaFX and
JButton in Swing;
Slider in JavaFX and
JSlider in Swing; and
TextField in JavaFX and
JTextField in Swing.
To implement the Converter application in JavaFX, you can use the standard UI controls provided by the
TextField,
Slider, and
ComboBox classes.
For more information, see Using JavaFX UI Controls.
Usage of the Builder Classes
The JavaFX SDK provides a set of builder classes that can be used to create objects. For example, the
SliderBuilder class is used to create objects of the
Slider class. The builder classes and the classes whose objects they build reside within the same packages. To create an object using the corresponding builder class, see the code pattern shown in Example 5-3.
Note that the usage of builders is not compulsory. You might use builders for your convenience, or you might not. An alternative way of creating the same object without using the builder class is shown in Example 5-4.
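Examples 5-3 and 5-4 referred to above did not survive in this copy of the page, but the create-configure-build pattern itself is easy to illustrate. The following Python sketch uses an invented `SliderBuilder` stand-in (not the JavaFX class) to show the fluent builder style next to its plain-constructor equivalent:

```python
class Slider:
    def __init__(self, max_value=100, value=0):
        self.max_value = max_value
        self.value = value

class SliderBuilder:
    """Fluent builder: each setter returns self so calls can be chained."""
    def __init__(self):
        self._max = 100
        self._value = 0

    @classmethod
    def create(cls):
        return cls()

    def max(self, v):
        self._max = v
        return self

    def value(self, v):
        self._value = v
        return self

    def build(self):
        return Slider(max_value=self._max, value=self._value)

# builder style vs. plain construction: both produce an equivalent object
s1 = SliderBuilder.create().max(40).value(10).build()
s2 = Slider(max_value=40, value=10)
```

As the tutorial notes, the builder form is purely a convenience; nothing in it is required.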
Mechanism of Getting Notifications on User Actions and Binding
In Swing, you can register a listener on any component and listen for changes in the component properties, such as size, position, or visibility; or listen for events, such as whether the component gained or lost the keyboard focus; or whether the mouse was clicked, pressed, or released over the component.
In JavaFX, each object has a set of properties for which you can register a listener. The listener is called whenever a value of the property changes.
Note that an object can be registered as a listener for changes in another object's properties. Thus, you can use the binding mechanism to synchronize some properties of two objects.
Creating the ConversionPanel Class
The
ConversionPanel class is used to hold components: a text field, a slider, and a combo box. When creating the graphical scene of the Converter application, you add two instances of the
ConversionPanel class to the graphical scene. Add the import statement for the
TitledPane class and extend the
ConversionPanel class as shown in Example 5-5.
Example 5-5
```java
import javafx.scene.control.TitledPane;

public class ConversionPanel extends TitledPane {
}
```
Creating Instance Variables for UI Controls
Add import statements for the
TextField,
Slider,
ComboBox controls and define instance variables for the components as shown in Example 5-6.
Creating DoubleProperty and NumberFormat Objects
Add the import statement for the
DoubleProperty class and create a
DoubleProperty object named
meters as shown in Example 5-7. The
meters object is used to ensure the synchronization between two
ConversionPanel objects.
Add the import statement for the
NumberFormat class and add the block of code after this import statement to define the text field format as shown in Example 5-8.
Laying Out the Components
To lay out the text field and the slider, use the
VBox class. To lay out both of these components and a combo box, use the
HBox class. Add the import statements for the
ObservableList,
TextFieldBuilder,
SliderBuilder,
ComboBoxBuilder,
HBoxBuilder,
VBoxBuilder classes and implement the constructor of the
ConversionPanel class as shown in Example 5-9.
Example 5-9
```java
import javafx.collections.ObservableList;
import javafx.scene.control.ComboBoxBuilder;
import javafx.scene.control.SliderBuilder;
import javafx.scene.control.TextFieldBuilder;
import javafx.scene.layout.HBoxBuilder;
import javafx.scene.layout.VBoxBuilder;

public ConversionPanel(String title, ObservableList<Unit> units, DoubleProperty meters) {
    setText(title);
    setCollapsible(false);
    setContent(HBoxBuilder.create()
        .children(
            VBoxBuilder.create()
                .children(
                    textField = TextFieldBuilder.create()
                        .build(),
                    slider = SliderBuilder.create()
                        .max(MAX)
                        .build()
                )
                .build(),
            comboBox = ComboBoxBuilder.<Unit>create()
                .items(units)
                .converter(new StringConverter<Unit>() {
                    @Override
                    public String toString(Unit t) {
                        return t.description;
                    }
                    @Override
                    public Unit fromString(String string) {
                        throw new UnsupportedOperationException("Not supported yet.");
                    }
                })
                .build()
        )
        .build());

    this.meters = meters;
    comboBox.getSelectionModel().select(0);
}
```
The last line of code selects a value in the
ComboBox object.
Creating InvalidationListener Objects
To listen to changes in the properties of the text fields and combo boxes, create the
InvalidationListener objects
fromMeters and
toMeters as shown in Example 5-10.
Example 5-10
```java
import javafx.beans.InvalidationListener;

private InvalidationListener fromMeters = new InvalidationListener() {
    @Override
    public void invalidated(Observable arg0) {
        if (!textField.isFocused()) {
            textField.setText(numberFormat.format(meters.get() / getMultiplier()));
        }
    }
};

private InvalidationListener toMeters = new InvalidationListener() {
    @Override
    public void invalidated(Observable arg0) {
        if (!textField.isFocused()) {
            return;
        }
        try {
            meters.set(numberFormat.parse(textField.getText()).doubleValue() * getMultiplier());
        } catch (Exception ignored) {
        }
    }
};
```
Adding Change Listeners to Controls and Ensuring Synchronization
To provide the synchronization between the text fields and combo boxes, add change listeners as shown in Example 5-11.
Example 5-11
```java
meters.addListener(fromMeters);
comboBox.valueProperty().addListener(fromMeters);
textField.textProperty().addListener(toMeters);
fromMeters.invalidated(null);
```
Create a bidirectional binding between the value of the slider and the value of the
meters object as shown in Example 5-12.
When a new value is typed in the text field, the
invalidated method of the
toMeters listener is called, which updates the value of the
meters object.
Creating the Converter Class
Open the Converter.java file that was automatically generated by the NetBeans IDE and remove all of the code except for the
main method. Then, press Ctrl (or Cmd)+Shift+I to correct the import statements.
Defining Instance Variables
Add import statements for the
ObservableList,
DoubleProperty, and
SimpleDoubleProperty classes and create
metricDistances,
usaDistances, and
meters variables of the appropriate types as shown in Example 5-13.
Example 5-13
```java
import javafx.beans.property.DoubleProperty;
import javafx.beans.property.SimpleDoubleProperty;
import javafx.collections.ObservableList;

private ObservableList<Unit> metricDistances;
private ObservableList<Unit> usaDistances;
private DoubleProperty meters = new SimpleDoubleProperty(1);
```
Creating the Constructor for the Converter Class
In the constructor for the
Converter class, create
Unit objects for the metric and the U.S. distances as shown in Example 5-14. Add the import statement for the
FXCollections class. Later, you will instantiate two
ConversionPanel objects with these units.
Example 5-14
```java
import javafx.collections.FXCollections;

public Converter() {
    metricDistances = FXCollections.observableArrayList(
        new Unit("Centimeters", 0.01),
        new Unit("Meters", 1.0),
        new Unit("Kilometers", 1000.0));
    usaDistances = FXCollections.observableArrayList(
        new Unit("Inches", 0.0254),
        new Unit("Feet", 0.305),
        new Unit("Yards", 0.914),
        new Unit("Miles", 1613.0));
}
```
Creating the Graphical Scene
Override the
start method to create the graphical scene for your Converter application. Add two
ConversionPanel objects to the graphical scene and lay out them vertically. Note that two
ConversionPanel objects are instantiated with the same
meters object. Use the
VBoxBuilder class as a root container for the graphical scene. Add import statements for the
SceneBuilder,
VBoxBuilder, and
StageBuilder classes and instantiate two
ConversionPanel objects as shown in Example 5-15.
Example 5-15
```java
import javafx.scene.SceneBuilder;
import javafx.scene.layout.VBoxBuilder;
import javafx.stage.StageBuilder;

@Override
public void start(Stage stage) {
    StageBuilder.create()
        .scene(SceneBuilder.create()
            .root(VBoxBuilder.create()
                .children(
                    new ConversionPanel(
                        "Metric System", metricDistances, meters),
                    new ConversionPanel(
                        "U.S. System", usaDistances, meters))
                .build())
            .build())
        .applyTo(stage);
    stage.show();
}
```
You can download the source code of the Converter application in JavaFX.
The Converter application in JavaFX is shown in Figure 5-2.
Figure 5-2 Converter Application in JavaFX
Compare the two applications with the same functionality implemented using the Swing library and JavaFX.
Not only does the application in JavaFX contain three files as compared with five files of the Swing application, but the code in JavaFX is cleaner. The applications also differ in look and feel. | http://docs.oracle.com/javafx/2/swing/port-to-javafx.htm | CC-MAIN-2014-10 | en | refinedweb |
A beautiful and simple image picker solution for iOS
ConvenientImagePicker
ConvenientImagePicker is a beautiful and simple image picker solution for iOS development, written in Swift. It's a view controller that you can simply present anywhere. Excellent interaction, multiple selection, photo picking, dark mode, and more.
ConvenientImagePicker provides smooth interaction, has excellent user experience, it can display system photo album and can also display the specified images.
It is worth emphasizing that the ConvenientImagePicker view has precise gesture control.
Requirements
- iOS 9.3+
- Xcode 9.0+
- Swift 4.0+
Installation
ConvenientImagePicker can be installed through CocoaPods, add the following entry to your Podfile:
pod 'ConvenientImagePicker'
Then run pod install, and include the image picker wherever you need it with import ConvenientImagePicker. It's really that simple.
Usage
When you prepare to present this image picker, we assume that you will call a function like this:
func PresentPhotoPicker()
Well, the simplest version is to add the following code in this function:
```swift
let pickerViewController = PickerViewController()
pickerViewController.delegate = self
self.present(pickerViewController, animated: true, completion: nil)
```
Then, you are supposed to implement
ConvenientImagePickerDelegate in your own view controller:
And implement these delegate functions:
```swift
func imagePickerDidCancel(_ selectedImages: [Int : UIImage])
func imageDidSelect(_ imagePicker: PickerViewController, index: Int, image: UIImage?)
func imageDidDeselect(_ imagePicker: PickerViewController, index: Int, image: UIImage?)
func imageSelectMax(_ imagePicker: PickerViewController, wangToSelectIndex: Int, wangToSelectImage: UIImage?)
```
imagePickerDidCancel will inform you that the user has cancelled the image picker, and returns the images the user has selected.
imageDidSelect will inform you that the user has selected an image.
imageDidDeselect will inform you that the user has deselected an image.
imageSelectMax will inform you that the user wants to select an image, but has already selected the maximum allowed number of images.
You can use imagePicker.selectedImageCount in the last three functions to get the number of images the user has selected.
Do not initialize pickerViewController outside of the function PresentPhotoPicker.
So far, this is the simplest usage of this pod.
Optional Configuration
Sure, you can use more features of the image picker, or even customize it, instead of just using the default configuration.
Do start from
let pickerViewController = PickerViewController() there:
```swift
pickerViewController.maxNumberOfSelectedImage = 50 // The maximum number of pictures allowed.
pickerViewController.allowMultipleSelection = true // Whether the picker view allows multiple selection.
pickerViewController.numberOfPictureInRow = 4 // The number of pictures in a row.
pickerViewController.intervalOfPictures = 5.0 // The interval between pictures.
pickerViewController.isSimpleMode = true // Whether the title label, count view, and close button exist.
pickerViewController.images = nil // The displayed images; the photo library will be used if nil.
pickerViewController.isDarkMode = false // Whether dark mode is enabled.
pickerViewController.isSwitchDarkAutomately = true // Whether dark mode can switch automatically. (only iOS 13 valid)
```
when 'isSimpleMode = false'
When pickerViewController.isSimpleMode = false appears in your configuration, you should get familiar with titleView, titleLabel, countLabel, doneButton, and titleViewEffectView (as shown on the right).
You can customize titleView, titleLabel, countLabel, and doneButton when isSimpleMode = false.
You can also customize titleViewEffectView, mainView, and collectionView regardless of the value of isSimpleMode, because they always exist.
By the way, decorationBar can be customized in the case of isSimpleMode = true.
If you need to override more inside the controller, an extension of PickerViewController is necessary.
⚠️Notice
- Do not forget to add
NSPhotoLibraryUsageDescription in your Info.plist if you want to present a photo picker.
- Do not use ConvenientImagePicker with Landscape on iPhone.
- ConvenientImagePicker is not compatible with Objective-C.
- Please initialize a new pickerViewController variable whenever preparing to present the image picker.
Instance
See TextCard, an iOS app that has imported ConvenientImagePicker.
| https://iosexample.com/a-beautiful-and-simple-image-picker-solution-for-ios/ | CC-MAIN-2020-24 | en | refinedweb |
LED scrolling message boards are widely used in:
- Notice board displays
- Public advertising boards
- Passenger information display boards in BUS/TRAIN/TRAM/METRO, etc.
- Name or signboards of shops
Most of these scrolling message boards are made up of single-color RED LEDs. However, multi-color LED boards and RGB LED boards are also available now. In all of these types of boards, the LEDs are connected in a ROW-COLUMN structure, which is why they are called MATRIX LED scrolling message boards.
- The top or bottom
- Appears and then disappear
- Offer dissolving effects
- Bounce from left to right, and many more
Here, we’ve presented a simple project in which a user can enter the information message to be scrolled on a board by using a laptop or computer. This message will be continuously displayed and scrolled. Whenever the user wants to display new information (meaning a new message), he or she will need to connect the system with a computer, using a USB, and then enter the new message — that’s all!
For this project, we use a readymade MATRIX LED scrolling message board that's built using six units of an 8×8 LED block. There are a total of 6×8×8 = 384 LEDs.
It receives a message as serial input from any digital device, such as a microcontroller or microprocessor. It accepts serial data in the 8-N-1 format at 9600 bps. The circuit also uses an Arduino NANO board that gets a message from a laptop or computer and sends this message to the MATRIX LED board to be displayed and scrolled.
Here is the circuit diagram with its description and operation…
Circuit description
As shown in this figure, there are only three building blocks in the circuit:
1. The scrolling message MATRIX LED board
2. The Arduino NANO board
3. The laptop (or PC)
Note:
- The scrolling message MATRIX LED board requires three wires for interfacing the Vcc, the Gnd, and the serial input. As a Vcc, it requires 12V @ 1A supply. So, it’s given an external 12V supply from an adapter. Its serial data input is connected with the digital pin D3 of the Arduino board.
- The Arduino board is also given a 12V input from the adapter to its Vin pin. It communicates with the laptop using a USB cable and also receives data (message) from the laptop.
Circuit operation
The circuit operation is simple. When the 12V supply is given to the circuit, it will begin its operation. The Arduino gets the string (message) from the laptop and it will give the same message to the scrolling message MATRIX LED board. This scrolling message will be displayed on this board.
- Initially, the default message “Hello” is displayed and continuously scrolled on the board.
- The Arduino will continuously wait for any message form the computer. The user can send a message (string) to the Arduino IDE serial monitor.
- When the user sends a message from this serial monitor, it is received and stored by the Arduino board in its internal RAM.
- When a complete message is received, the Arduino will send the same message serially to the scrolling message MATRIX LED board. The digital pin D3 from the Arduino works as a serial data Tx pin that sends the message serially to the MATRIX LED board.
- The format to send message to MATRIX LED board is: “!_____________________message________________\r”
- That means the text message that’s to be scrolled, must start with ‘!’ and end with ‘\r’
- The Arduino inserts the start and end characters in the message received from the computer and then it sends it to the MATRIX LED board.
- The MATRIX LED board will start displaying and scrolling this message continuously until it gets a new message.
- This means that every time a user wants to display a new message, he or she will need to send it from a computer and the message will be continuously displayed and scrolled on the MATRIX LED board.
Software program
#include <SoftwareSerial.h>
SoftwareSerial matrix_LED_serial(2,3);
char msg[100];
int i=0;
void setup()
{
// put your setup code here, to run once:
Serial.begin(9600);
matrix_LED_serial.begin(9600);
matrix_LED_serial.print(“!Hello\r”);
matrix_LED_serial.print(249, HEX);
delay(5000);
}
void loop()
{
while(Serial.available())
{
msg[i] = Serial.read();
i++;
}
if(msg[i-1]==’\r’)
{
matrix_LED_serial.print(‘!’);
matrix_LED_serial.print(msg);
matrix_LED_serial.print(‘\r’);
i=0;
}
} | https://www.engineersgarage.com/microcontroller-projects/matrix-led-scrolling-message-board-using-arduino/ | CC-MAIN-2020-24 | en | refinedweb |
Gol.
gorilla/handlerspackage.) /* Set token claims */ token.Claims["admin"] = true token.Claims["name"] = "Ado Kukic" token> <!-- We will use the Babel transpiler so that we can convert our jsx code to js on the fly --> <script src=""></script> <!-- Core React libraries which we will be using --> <script src=""></script> <script src=""></script> <!-- Our React app code will be placed in the app.jsx file --> <script type="text/babel" src="static/js/app.jsx"></script> <!-- We will import bootstrap so that we can build a good looking UI fast --> <link href="" rel="stylesheet"> </head> <body> <!-- This will be the entry point for our React app --> () { this.setState({idToken: null}) }, render: function() { if (this.state.idToken) { return (<LoggedIn idToken={this.state.idToken} />); } { profile: null, products: null } }, proper/go-jwt-middleware and
dgrijalva/jwt-go libraries for dealing with the JWT. Additionally, we will utilize the
joho/godotenv library so that we can store our Auth0 credentials outside of our
main.go file. Let's see what our implemented code looks like.
package main import( ... "github.com/joho/godotenv" "github.com/dgrijalva/jwt-go" "github.com/auth0/go-jwt-middleware" ) func main() { // Here we are loading in our .env file which will contain our Auth0 Client Secret and Domain err := godotenv.Load() if err != nil { log.Fatal("Error loading .env file") } ... r.Handle("/status", StatusHandler).Methods("GET") r.Handle("/products", jwtMiddleware.Handler(ProductsHandler)).Methods("GET") r.Handle("/products/{slug}/feedback", jwtMiddleware.Handler(AddFeedbackHandler)).Methods("POST") } // Handlers ... var jwtMiddleware = jwtmiddleware.New(jwtmiddleware.Options{ ValidationKeyGetter: func(token *jwt.Token) (interface{}, error) { decoded, err := base64.URLEncoding.DecodeString(os.Getenv("AUTH0_CLIENT_SECRET")) if err != nil { return nil, err } return decoded, nil }, })
We have made minor changes to our
jwtMiddleware function to use the
AUTH0_CLIENT_SECRET variable rather than a hardcoded secret. We got this variable from our Auth0 management dashboard and stored it an environmental variable. That is all we needed to do on the Golang side.
Next, we'll implement the login functionality on the frontend. Feel free to remove to the
/get-token route as it is no longer necessary. We will get the token from Auth0.
Login with Auth0 Lock and React
Next, we’ll implement the login system on the frontend that will allow users to login and create accounts. We will do this using Auth0’s Lock widget. We'll first need to add the required libraries for the Lock widget. Let's update the
index.html file.
<!DOCTYPE html> <html> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" /> <title>We R VR</title> <script src="//cdn.auth0.com/js/lock-9.0.min.js"></script> <script type="text/javascript" src="static/js/auth0-variables.js"></script> <!-- ... existing libraries ... --> </head> <body> </body> </html>
We've pulled the lock library from Auth0. We'll additionally need to create a new file called
auth0-variables.js which will store our Auth0
CLIENT_ID and
CLIENT_DOMAIN. You can get the
CLIENT_ID and
CLIENT_DOMAIN from your Auth0 management dashboard..createLock(); this.setState({idToken: this.getIdToken()}) }, /* We will create the lock widget and pass it to sub-components */ createLock: function() { this.lock = new Auth0Lock(this.props.clientId, this.props.domain); }, /* We will ensure that any AJAX request to our Go API has the authorization header and passes the user JWT with the request */ setupAjax: function() { $.ajaxSetup({ 'beforeSend': function(xhr) { if (localStorage.getItem('userToken')) { xhr.setRequestHeader('Authorization', 'Bearer ' + localStorage.getItem('userToken')); } } }); }, /* The getIdToken function get us the users JWT if already authenticated or if it's the first time logging in, will get the JWT data and store it in local storage */ getIdToken: function() { var idToken = localStorage.getItem('userToken'); var authHash = this.lock.parseHash(window.location.hash); if (!idToken && authHash) { if (authHash.id_token) { idToken = authHash.id_token localStorage.setItem('userToken', authHash.id_token); } if (authHash.error) { console.log("Error signing in", authHash); } } return idToken; }, render: function() { if (this.state.idToken) { /* If the user is logged in, we'll pass the lock widget and the token to the LoggedIn Component */ return (<LoggedIn lock={this.lock} idToken={this.state.idToken} />); } else { return (<Home lock={this.lock} />); } } });
Home Component
The updates to the
Home component will add the functionality to allow a user to login.
var Home = React.createClass({ /* We will get the lock instance created in the App component and bind it to a showLock function */ showLock: function() { this.props.lock.show(); }, render: function() { return ( <div className="container"> <div className="col-xs-12 jumbotron text-center"> <h1>We R VR</h1> <p>Provide valuable feedback to VR experience developers.</p> // When the user clicks on the button titled Sign In we will display the lock widget <a onClick={this.showLock}Sign In</a> </div> </div>); } });
Once a user clicks on the Sign In button, they will be prompted to login via the Auth0 Lock widget.
LoggedIn Component
The
LoggedIn component will be updated to pull in products to review from the Golang API.
var LoggedIn = React.createClass({ /* We will create a logout function that will log the user out */ logout : function(){ localStorage.removeItem('userToken'); this.props.lock.logout({returnTo:''}) }, getInitialState: function() { return { profile: null, products: null } }, componentDidMount: function() { /* Once the component is created, we will get the user information from Auth0 */ this.props.lock.getProfile(this.props.idToken, function (err, profile) { if (err) { console.log("Error loading the Profile", err); alert("Error loading the Profile"); } this.setState({profile: profile}); }.bind(this)); /* Additionally, we will make a call to our Go API and get a list of products the user can review */({ /* We will add the functionality for our upvote and downvote functions Both of this will send a POST request to our Golang API */ see the Auth0 Lock widget. Login and you will be redirected to the logged in view of the application and will be able to leave feedback on the different experiences..
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/authentication-in-golang-with-jwts | CC-MAIN-2020-24 | en | refinedweb |
ID_MODELING_LOOP_TOOL still broken ?
Hi.
MODELING_LOOP_TOOL still broken or am I doing something wrong?
import c4d from c4d import gui, utils doc = c4d.documents.GetActiveDocument() obj = doc.GetActiveObject() bc = c4d.BaseContainer() bc.SetData(c4d.MDATA_LOOP_SEL_STOP_AT_BOUNDS, True) bc.SetData(c4d.MDATA_LOOP_SEL_SELECT_BOUNDS, False) bc.SetData(c4d.MDATA_LOOP_SEL_GREEDY_SEARCH, False) bc.SetData(c4d.MDATA_LOOP_SELECTION, c4d.SELECTION_NEW) bc.SetData(c4d.MDATA_LOOP_LOOP_EDGE, 1) # ? index or .. utils.SendModelingCommand(command=c4d.ID_MODELING_LOOP_TOOL, list=[obj], mode=c4d.MODELINGCOMMANDMODE_EDGESELECTION, bc=bc, doc=doc, flags=c4d.MODELINGCOMMANDFLAGS_0) c4d.EventAdd()
Different values give the same loop selection.
Thanks!
Hi just to let you know we didn't forget you, I've reached the development team.
Cheers,
Maxime.
Unfortunately, while some fixes have been done, it's still broken so the command is not consistent in all cases so you can't rely on it.
In any case, the development team is aware of it, hopefully, it will be addressed soon.
I will update the topic once it's done.
Cheers,
Maxime. | https://plugincafe.maxon.net/topic/11857/id_modeling_loop_tool-still-broken | CC-MAIN-2020-24 | en | refinedweb |
Design Patterns • Posted 5 months ago
In a previous article, we have seen how a factory pattern helps in solving problems related to object instantiations in large scale applications, where the decision of choosing a specific concrete implementation for a specific type is taken during runtime. While this works for a single set of concrete implementations which are related over a common subject, consider the scenario wherein this single set of concrete components are replicated over several dimensions causing in a multiple levels of related components which have a common theme. In such cases we are left with not one but several factories each of which intend to serve the decision of choosing over a set of component options. Now the client or the end user should not be aware of this, but instead be provided an abstraction using which he gets his job done. Then arises the need for another layer of abstraction over these set of factories which has the choice to pick one factory for a given scenario. This is what we call an Abstract Factory pattern.
What is an Abstract Factory?
An Abstract Factory pattern is one of the twenty three design patterns defined to solve a specific problem of object oriented design, and this solves the particular problem of choosing a particular implementation over multiple levels of sets of components which share a common theme. It can be simply put as a "Factory of Factories". We basically create a layer of abstraction over the factories and the client creates a concrete implementation of the abstract factory interface and then uses the object to access the concrete object. In this case the client doesn't know which concrete object he has received from the factory.
An Abstract Factory can be thought as a Factory of Factories
One general example would the scenario of a vehicle manufacturer who has three different variants of a motorcycle he designed namely: quarter litre, litre and a commuter caliber models. Now if a customer has to look for a vehicle's details, the customer is provided a ProductFactory which can provide a specific concrete implementation for his choice (such as a quarter-litre, litre or a commute). Now let's assume there are three variants of a motorcycle: say Generic, Retro and Limited Edition models. Now when we combine both, we end up in three variants of motorcycle which come in three calibrations each. But a customer is not interested in all these details for him to know the details of a particular model. He just gives in a variant and a caliber values to the system to get the details for his requirement. For this, we make use of an abstract factory of products under which lie three factories for each variant. And each factory results in a concrete implementation of a specific caliber model.
Pros and Cons:
The intention of the pattern is simple: to insulate the creation of objects from their usage and create families of related types without having to depend on their concrete implementations.
This results in better management of the types, and since we have an abstract layer of factory over the families of factories; we can easy interchange the concrete implementations without even having to change the code that accesses these objects. And the client can never know what change has happened in the background since all he looks at is a single plain abstract base factory type for invocation.
But on the con side, we end up creating a huge chain of factories and implementations which end up in a complex network of abstract and concrete types. These are hard to main and even harder to implement. And hence as the saying goes, we shouldn't try to implement these patterns from the very beginning and instead look for a pattern to apply only when needed.
Hands-On:
Basing on the product example stated above let's look at how we can convert the words into classes and implement an abstract factory for the above scenario. All it begins it at a type Product for which we have families of variants relying on. Let's create an abstract type IProduct with a single method ShowProductInfo() that displays the product information for the variant chosen.
namespace netcore3app.Providers { public interface IProduct { void ShowProductInfo(); } }
And there exist three variants of these products based on the engine caliber:
namespace netcore3app.Providers { public interface IQuarterLitreProduct : IProduct { } public interface ILitreProduct : IProduct { } public interface IEconomyProduct : IProduct { } } And these have their own implementations as follows: namespace netcore3app.Providers { public class EconomyProduct : IEconomyProduct { string type; public EconomyProduct(string type) { this.type = type; } public void ShowProductInfo() { Console.WriteLine($"This is a {type} Product of Economy caliber"); } } public class QuarterLitreProduct : IQuarterLitreProduct { string type; public QuarterLitreProduct(string type) { this.type = type; } public void ShowProductInfo() { Console.WriteLine($"This is a {type} Product of QuarterLitre caliber"); } } public class LitreProduct : ILitreProduct { string type; public LitreProduct(string type) { this.type = type; } public void ShowProductInfo() { Console.WriteLine($"This is a {type} Product of Litre caliber"); } } }
We can observe that each of these concrete implementations receive a type parameter through the constructor which conveys the variant of the Product it is: Generic, LimitedEdition or Retro.
And there exists an abstract base type IProductFactory which forms the base for all the factories representing the variants stated above.
namespace netcore3app.Providers { public interface IProductFactory { IProduct CreateProduct(); } public interface IRetroProductFactory : IProductFactory { } public interface ILimitedEditionProductFactory : IProductFactory { } public interface IGenericProductFactory : IProductFactory { } }
And the concrete implementations for each of these factories decide which of the product to return.
namespace netcore3app.Providers { public class GenericProductFactory : IGenericProductFactory { string type; public GenericProductFactory(string type) { this.type = type; } public IProduct CreateProduct() { switch (type.ToLowerInvariant()) { case "quarter": return new QuarterLitreProduct(type); case "litre": return new LitreProduct(type); case "economy": default: return new EconomyProduct(type); } } } public class LimitedEditionProductFactory : ILimitedEditionProductFactory { string type; public LimitedEditionProductFactory(string type) { this.type = type; } public IProduct CreateProduct() { switch (type.ToLowerInvariant()) { case "quarter": return new QuarterLitreProduct(type); case "litre": return new LitreProduct(type); case "economy": default: return new EconomyProduct(type); } } } public class RetroProductFactory : IRetroProductFactory { string type; public RetroProductFactory(string type) { this.type = type; } public IProduct CreateProduct() { switch (type.ToLowerInvariant()) { case "quarter": return new QuarterLitreProduct(type); case "litre": return new LitreProduct(type); case "economy": default: return new EconomyProduct(type); } } } }
Now we have three families of factory types along with three product types which depend on them for choice. Now we can't simply ask the client to chose for himself the type needed. Instead we call upon a RootProductFactory that does the job for us. The IRootProductFactory is the abstract type that is used by the client for the invocation.
namespace netcore3app.Providers { public interface IRootProductFactory { IProductFactory CreateFactory(string model, string type); } public class RootProductFactory : IRootProductFactory { public RootProductFactory() { } public IProductFactory CreateFactory(string model, string type) { IProductFactory factory; switch (model.ToLowerInvariant()) { case "limitededition": factory = new LimitedEditionProductFactory(type); break; case "retro": factory = new RetroProductFactory(type); break; case "generic": default: factory = new GenericProductFactory(type); break; } return factory; } } }
The AbstractFactory type RootProductFactory forms a base for the families of factories here (Limited / Retro / Generic factories) and on the Client side there's only one abstract type and a method CreateFactory() to call which returns the desired product. The choice is made by passing the model and type of product required as parameters to the factory method.
The AbstractFactory interface and implementation can then be added as a service into the ASP.NET Core container to be able to inject anywhere.
The Client code can be assumed as below:
namespace netcore3app.Providers { public class Client { IRootProductFactory factory; public Client(IRootProductFactory factory) { factory = this.factory; } public void StartingPoint(string model, string type) { IProduct product = factory.CreateFactory(model, type).CreateProduct(); product.ShowProductInfo(); } } }
In this way, we can implement an abstract factory pattern at its simplest form. As mentioned earlier, one must not try to introduce an abstract factory pattern or any pattern for that sake into the application from the very beginning which might end up in messing up the code; but instead try to adapt only when it is needed.
Other Creational Patterns:
The Abstract Factory Pattern (this)
Published 5 months ago | https://referbruv.com/blog/posts/understanding-and-implementing-the-abstract-factory-in-aspnet-core | CC-MAIN-2020-24 | en | refinedweb |
I've setup a private K8s cluster using kops on AWS, and I'd like to be able to autoscale the nodes based on CPU use. I've read that this is possible with GCE, but is it possible with AWS?
Yes it is possible, you can do this by using Cluster Autoscaler or CA
As for how to do it with kops. First, you need to edit instance groups and add extra labels.
$ kops edit ig nodes spec: cloudLabels: k8s.io/cluster-autoscaler/enabled: "" k8s.io/cluster-autoscaler/node-template/label: "" kubernetes.io/cluster/<CLUSTER_NAME>: owned
Cluster Autoscaler has its own auto-discovery which is recommended if you have multiple instance groups. With auto-discovery there is no need to set min and max size in two places, and there is no need to change CA config if you add group later.
You should add additional IAM policy rules for nodes:
$ kops edit cluster spec: additionalPolicies: node: | [ { "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingGroups", "autoscaling:DescribeAutoScalingInstances", "autoscaling:SetDesiredCapacity", "autoscaling:DescribeLaunchConfigurations", "autoscaling:DescribeTags", "autoscaling:TerminateInstanceInAutoScalingGroup" ], "Resource": ["*"] } ]
And apply the configuration:
$ kops update cluster --yes
Now you can install CA, but keep in mind to check with CA version is recommended for Kubernetes version. For this you should check the releases.
Deployment
Cluster Autoscaler is designed to run on Kubernetes master node. This is the default deployment strategy on GCP. It is possible to run a customized deployment of Cluster Autoscaler on worker nodes, but extra care needs to be taken to ensure that Cluster Autoscaler remains up and running. Users can put it into kube-system namespace (Cluster Autoscaler doesn't scale down node with non-mirrored kube-system pods running on them) and set a
priorityClassName: system-cluster-criticalproperty on your pod spec (to prevent your pod from being evicted).
Once you have deployed CA, you need to choose the right AWS region.
Now you can choose an expander.
Expanders provide different strategies for selecting the node group to which new nodes will be added. Expanders can be selected by passing the name to the
--expanderflag, i.e.
./cluster-autoscaler --expander=random
Currently Cluster Autoscaler has 4 expand details HERE. Currently it works only for GCE and GKE (patches welcome.)
Cluster Autoscaler does support following providers: GCE ,GKE ,AWS ,Azure, Alibaba Cloud
Hope this will be helpful. | https://serverfault.com/questions/955006/is-it-possible-to-set-an-aws-autoscaling-policy-for-kubernetes-nodes-using-kops | CC-MAIN-2020-24 | en | refinedweb |
.
def is_palindrome(s): """ Determine whether the string is palindrome :param s: :return: Boolean >>> is_palindrome("a man a plan a canal panama".replace(" ", "")) True >>> is_palindrome("Hello") False """ return s == s[::-1] if __name__ == "__main__": s = input("Enter string to determine whether its palindrome or not: ").strip() if is_palindrome(s): print("Given string is palindrome") else: print("Given string is not palindrome") | http://python.algorithmexamples.com/web/strings/is_palindrome.html | CC-MAIN-2020-24 | en | refinedweb |
PlaceAttribute QML Type
The PlaceAttribute type holds generic place attribute information. More...
Properties
Detailed Description
A place attribute stores an additional piece of information about a Place that is not otherwise exposed through the Place type. A PlaceAttribute is a textual piece of data, accessible through the text property, and a label. Both the text and label properties are intended to be displayed to the user. PlaceAttributes are stored in an ExtendedAttributes map with a unique key.
The following example shows how to display all attributes in a list:
import QtQuick 2.0 import QtPositioning 5.5 import QtLocation 5.6 ListView { model: place.extendedAttributes.keys() delegate: Text { text: "<b>" + place.extendedAttributes[modelData].label + ": </b>" + place.extendedAttributes[modelData]"
Property Documentation
For details on how to use this property to interface between C++ and QML see "Interfaces between C++ and QML Code".
This property holds the attribute label which is a user visible string describing the attribute.
This property holds the attribute text which can be used to show additional information about. | https://doc-snapshots.qt.io/qt5-5.9/qml-qtlocation-placeattribute.html | CC-MAIN-2020-24 | en | refinedweb |
Konstantinos Melios4,427 Points
Pls HELP
What is wrong here?
def parse_answer(answer, kind="string") answer = gets.chomp answer = answer.to_i if kind == "number" return answer end
2 Answers
Salman AkramPro Student 40,062 Points
Hi Konstantinos,
Your codes look great except gets and chomp method in this line which should be delete because the challenge question didn't require this part.
answer = gets.chomp
;)
Maciej Czuchnowski36,429 Points
Why do you do gets.chomp with the answer variable? It is being passed in as an argument already. Just remove that line and it will work. | https://teamtreehouse.com/community/pls-help-2 | CC-MAIN-2020-24 | en | refinedweb |
The
clientstats - Print information about connected clientsauditlog - Disable the audit log
disableautocompaction - Disable autocompaction for the given keyspace and table
disablebackup - Disable incremental backup
disablebinary - Disable native transport (binary protocol)
disablefullquerylog - Disable the full query log
disablegossip - Disable gossip (effectively marking the node down)
disablehandoff - Disable storing hinted handoffs
disablehintsfordc - Disable hints for a data center
disableoldprotocolversions - Disable old protocol versions
drain - Drain the node (stop accepting writes and flush all tables)
enableauditlog - Enable the audit log
enableautocompaction - Enable autocompaction for the given keyspace and table
enablebackup - Enable incremental backup
enablebinary - Reenable native transport (binary protocol)
enablefullquerylog - Enable full query logging, defaults for the options are configured in cassandra.yaml
enablegossip - Reenable gossip
enablehandoff - Reenable future hints storing on the current node
enablehintsfordc - Enable hints for a data center that was previsouly disabled
enableoldprotocolversions - Enable old protocol versionscurrency - Get maximum concurrency for processing stagesreplicas - Print replicas for a given key
import - Import new SSTables to the system. True size is the total size of all SSTables which are not backed up to disk. Size on disk is total size of the snapshot on disk. Total TrueDiskSpaceUsed does not make any SSTable deduplication.
move - Move node on the token ring to a new token
netstats - Print network information on provided host (connecting node by default)
pausehandoff - Pause hints delivery process
profileload - Low footprint profiling of activity for a period of timessl - Signals Cassandra to reload SSL certificatesfullquerylog - Stop the full query log and clean files in the configured full query log directory from cassandra.yaml as well as JMX
resetlocalschema - Reset node’s local schema and resync
resumehandoff - Resume hints delivery process
ring - Print information about the token ring
scrub - Scrub (rebuild sstables for) one or more tablescurrency - Set maximum concurrency for processing stage ‘Swiss Java Knife’. Run ‘noddaemon - Stop cassandra daemon
tablehistograms - Print statistic histograms for a given table
tablestats - Print statistics on tables
toppartitions - Sample and print the most active partitions ‘nodetool help <command>’ for more information on a specific command. | https://cassandra.apache.org/doc/latest/tools/nodetool/nodetool.html | CC-MAIN-2020-24 | en | refinedweb |
[UPDATED] Building RESTful APIs with Lumen and OAuth2
If you are thinking about building fast RESTful APIs in a quick and simple way, then Lumen is here!
Lumen eliminates all the unnecessary and heavy features in Laravel and loads only components like Eloquent, middleware, and authentication & authorization, which keeps it light and fast. Lumen focuses on building stateless APIs; therefore, sessions and views are no longer included in the latest version of Lumen.
Index
- What Do We Mean by RESTful APIs?
- Why Use Lumen in Creating RESTful APIs?
- What Are We Building?
- Setting the Development Environment
- Creating the Models and Their Relationships
- Creating Migrations
- Inserting Fake Data
- Creating the Routes & Controllers
- Implementing the Controllers
- Using OAuth2 Server
- Conclusion
What Do We Mean by RESTful APIs?
First and foremost, what do we mean by building RESTful APIs? You may have heard the term RESTful but not know what it actually means, so it is worth understanding before moving further.
REST is an architectural style for building APIs. It stands for “Representational State Transfer”. It means when we build an API, we build it in a way that HTTP methods and URIs mean something, and the API has to respond in a way that’s expected.
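To make that mapping concrete, here is a tiny standalone sketch of how a RESTful API pairs an HTTP method with a URI and resolves it to one action. The routes and action names below are made up for illustration; real frameworks like Lumen do this with a router:

```php
<?php
// Illustrative only: RESTful APIs map "HTTP method + URI" pairs to actions.
// The routes and action names below are hypothetical.
$routes = [
    'GET /posts'      => 'list all posts',
    'POST /posts'     => 'create a new post',
    'GET /posts/1'    => 'show post 1',
    'PUT /posts/1'    => 'update post 1',
    'DELETE /posts/1' => 'delete post 1',
];

function dispatch(array $routes, string $method, string $uri): string
{
    // Unknown method/URI combinations fall through to "not found".
    return $routes[$method . ' ' . $uri] ?? 'not found';
}

// The same URI means different things depending on the HTTP method:
echo dispatch($routes, 'GET', '/posts'), PHP_EOL;      // list all posts
echo dispatch($routes, 'POST', '/posts'), PHP_EOL;     // create a new post
echo dispatch($routes, 'DELETE', '/posts/1'), PHP_EOL; // delete post 1
```

Notice how the verb carries the intent (read, create, update, delete) while the URI only names the resource; that is the core of the REST style.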
Why Use Lumen in Creating RESTful APIs?
In short, it's fast, light, and easy!
One of the Fastest Micro-frameworks
Although all the candidates (Laravel, Slim, and Lumen) are faster than you will ever need, Lumen stands out: with it, your API will support ~1200–1700 requests (if not more, depending on your machine) over 10 seconds with 10 simultaneous requests. Its great speed makes it a perfect candidate for building a RESTful API.
The following command will instruct Apache Benchmark to run for 10 seconds with 10 concurrent requests.
ab -t 10 -c 10 <url-of-your-api>
Use Laravel Features While Remaining Light
When working on RESTful APIs, you won’t need all the features in a full stack framework. Lumen has the best features of Laravel in a light version, while remaining expressive, intuitive, and simple. Lumen removes all the parts that you probably won’t need.
Easier Than You Might Think
It’s easy to configure, understand, and upgrade. Since Lumen is powered by Laravel’s components, you can easily upgrade your Lumen application to the full Laravel framework.
What Are We Building?
We are going to build a simple application, intended for small projects, that helps you understand how to create RESTful APIs with Lumen and OAuth2, how to authenticate and authorize, and more.
The RESTful API is for Posts and Comments, where Users can view, create, update, and delete. It provides an authorization mechanism that validates requests against access tokens using OAuth2.
You can find the final RESTful API on GitHub. You will need to check the GitHub repository as you are following along throughout this tutorial.
Although this tutorial focuses on building an API for Posts and Comments (just for demonstration purposes), you can follow the same steps to build an API for whatever you want.
Setting the Development Environment
Installing Lumen via Composer Create-Project
Lumen utilizes Composer to manage its dependencies.
composer create-project laravel/lumen lumen-api-oauth
Laravel Homestead
Laravel Homestead is an official, pre-packaged Vagrant box that provides you a wonderful development environment without requiring you to install PHP, HHVM, a web server, and any other server software on your local machine. Source
If you want to take advantage of Laravel Homestead, follow the Installation Guide.
WAMP, LAMP, MAMP, XAMPP Server
If you are using any of the WAMP, LAMP, MAMP, or XAMPP servers, don't forget to create a database, probably a MySQL database.
Configure the .env File
Make a copy of the .env.example file and rename it to .env. Then set your application key to a random 32-character string, and edit the database name, database username, and database password if needed.
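After those edits, the relevant part of the .env file looks roughly like this. The values below are placeholders for illustration; substitute your own key and database credentials:

```
APP_KEY=SomeRandomString32CharactersLong
DB_CONNECTION=mysql
DB_DATABASE=lumen_api_oauth
DB_USERNAME=homestead
DB_PASSWORD=secret
```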
If you are receiving a Class 'Memcached' not found error, edit the .env file and change the following:
CACHE_DRIVER=array
QUEUE_DRIVER=array
Thanks to Ozal MEHMETi.
Configure the Bootstrap File
Go to the bootstrap/app.php file, and uncomment $app->withEloquent(); and $app->withFacades();
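After uncommenting, that part of bootstrap/app.php should contain the two calls below. This is a fragment of the generated file, not code to run on its own; enabling facades and Eloquent is what lets the models in the next section work:

```php
// bootstrap/app.php (fragment, after uncommenting)
$app->withFacades();
$app->withEloquent();
```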
Creating the Models and Their Relationships
Eloquent is Lumen's ORM, which allows you to interact easily with the database. Eloquent models allow you to query and insert data in your tables, and they extend the Illuminate\Database\Eloquent\Model class.
We have three models: User, Post, and Comment. So, let's create them.
Defining Models
User
By default, the User model comes with Lumen when installed via Composer create-project; you just need to add the Hash facade and edit the fillable and hidden attributes.
// app/User.php

use Illuminate\Auth\Authenticatable;
use Laravel\Lumen\Auth\Authorizable;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Contracts\Auth\Authenticatable as AuthenticatableContract;
use Illuminate\Contracts\Auth\Access\Authorizable as AuthorizableContract;
use Illuminate\Support\Facades\Hash;

class User extends Model implements AuthenticatableContract, AuthorizableContract
{
    use Authenticatable, Authorizable;

    protected $fillable = ['id', 'name', 'email'];
    protected $hidden = ['created_at', 'updated_at', 'password'];
}
Post
// app/Post.php

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    protected $fillable = ['id', 'user_id', 'title', 'content'];
    protected $hidden = ['created_at', 'updated_at'];
}
Comment
// app/Comment.php

use Illuminate\Database\Eloquent\Model;

class Comment extends Model
{
    protected $fillable = ['id', 'post_id', 'user_id', 'content'];
    protected $hidden = ['created_at', 'updated_at'];
}
Defining Relationships
Since database tables are often related to one another, Eloquent makes it easy to work with these relationships by defining them between model classes. For example, a post may have one or more comments, and every comment is related to a single post (one-to-many).
Post
// app/Post.php

class Post extends Model
{
    /**
     * Define a one-to-many relationship with App\Comment.
     */
    public function comments()
    {
        return $this->hasMany('App\Comment');
    }
}
Comment
// app/Comment.php

class Comment extends Model
{
    /**
     * Define an inverse one-to-many relationship with App\Post.
     */
    public function post()
    {
        return $this->belongsTo('App\Post');
    }
}
Creating Migrations
Migrations allow you to easily build, modify and share the application’s database schema. We will create three tables; users, posts, & comments.
Creating Migration Files
To create a migration file, use the make:migration Artisan command.
php artisan make:migration create_users_table --create=users
php artisan make:migration create_posts_table --create=posts
php artisan make:migration create_comments_table --create=comments
These commands will create migration files in your database/migrations folder.
Migration Structure
In each migration file, specifically inside the up() method, we can create a new table and define its columns.
Users
Every user has a unique, auto-incremented id, a name, an email, and a password.
public function up()
{
    Schema::create('users', function (Blueprint $table) {
        $table->increments('id');
        $table->string('name');
        $table->string('email')->unique();
        $table->string('password');
        $table->nullableTimestamps();
    });
}
Posts
A post has an id, a title, content, and a foreign key that points to the user who created it.
Whenever we delete or update a user, the database will cascade down, affecting the referencing rows. This enforces referential integrity at the database level.
public function up()
{
    Schema::create('posts', function (Blueprint $table) {
        $table->increments('id');
        $table->string('title');
        $table->string('content');
        $table->integer('user_id')->unsigned();
        $table->foreign('user_id')->references('id')->on('users')->onDelete('cascade')->onUpdate('cascade');
        $table->nullableTimestamps();
    });
}
Comments
It's the same idea as in the posts table: every comment has content, a foreign key that points to the user who created it, and another foreign key that points to the related post.
It goes without saying that if you delete a user or a post, the database will cascade down, deleting all the referencing comments.
public function up()
{
    Schema::create('comments', function (Blueprint $table) {
        $table->increments('id');
        $table->string('content');
        $table->integer('user_id')->unsigned();
        $table->foreign('user_id')->references('id')->on('users')->onDelete('cascade')->onUpdate('cascade');
        $table->integer('post_id')->unsigned();
        $table->foreign('post_id')->references('id')->on('posts')->onDelete('cascade')->onUpdate('cascade');
        $table->nullableTimestamps();
    });
}
Timestamps
By default, the $timestamps property of an Eloquent model is set to true, so Eloquent expects created_at and updated_at columns to exist on our tables. "$table->nullableTimestamps()" creates the created_at and updated_at columns automatically.
Running Migrations
Finally, it's time to apply what we've defined so far. To run all migrations, use the migrate Artisan command.
php artisan migrate
After running this command, your database should have three tables, with their columns defined in the migration files.
Inserting Fake Data
Factories are useful to create fake data for our Eloquent model classes, while seeders insert that data into the database by executing those factories.
We will use Faker library to generate fake data. It’s already installed with Lumen.
Model Factories
// database/factories/ModelFactory.php

$factory->define(App\Post::class, function (Faker\Generator $faker) {
    return [
        'title' => $faker->sentence(4),
        'content' => $faker->paragraph(4),
        'user_id' => mt_rand(1, 10)
    ];
});

$factory->define(App\Comment::class, function (Faker\Generator $faker) {
    return [
        'content' => $faker->paragraph(1),
        'post_id' => mt_rand(1, 50),
        'user_id' => mt_rand(1, 10)
    ];
});

$factory->define(App\User::class, function (Faker\Generator $faker) {
    $hasher = app()->make('hash');

    return [
        'name' => $faker->name,
        'email' => $faker->email,
        'password' => $hasher->make("secret")
    ];
});
To keep the database in a consistent state, a foreign key must hold values that exist in the primary key it refers to. For example, the user_id column in the posts table must have values from 1 to 10 (assuming we inserted 10 or more users).
Seeding the Database
A seeder class contains only one method by default: run(). This method is called when we run the seeders. Seeders use model factories to generate and insert database records.
// database/seeds/DatabaseSeeder.php

public function run()
{
    // Disable foreign key checking because truncate() will fail otherwise
    DB::statement('SET FOREIGN_KEY_CHECKS = 0');

    User::truncate();
    Post::truncate();
    Comment::truncate();

    factory(User::class, 10)->create();
    factory(Post::class, 50)->create();
    factory(Comment::class, 100)->create();

    // Enable foreign key checking again
    DB::statement('SET FOREIGN_KEY_CHECKS = 1');
}
Don't forget to import the User, Post, and Comment classes (with use statements) so you can use them.
Running Seeders
Now, we are going to seed the database with fake data.
php artisan db:seed
Creating the Routes & Controllers
Routes link URIs to controller action methods, while controllers handle the request and produce a response.
Routes
All the routes are defined in the routes/web.php file.
// Users
$app->get('/users/', 'UserController@index');
$app->post('/users/', 'UserController@store');
$app->get('/users/{user_id}', 'UserController@show');
$app->put('/users/{user_id}', 'UserController@update');
$app->delete('/users/{user_id}', 'UserController@destroy');

// Posts
$app->get('/posts', 'PostController@index');
$app->post('/posts', 'PostController@store');
$app->get('/posts/{post_id}', 'PostController@show');
$app->put('/posts/{post_id}', 'PostController@update');
$app->delete('/posts/{post_id}', 'PostController@destroy');

// Comments
$app->get('/comments', 'CommentController@index');
$app->get('/comments/{comment_id}', 'CommentController@show');

// Comments of a post
$app->get('/posts/{post_id}/comments', 'PostCommentController@index');
$app->post('/posts/{post_id}/comments', 'PostCommentController@store');
$app->put('/posts/{post_id}/comments/{comment_id}', 'PostCommentController@update');
$app->delete('/posts/{post_id}/comments/{comment_id}', 'PostCommentController@destroy');
Controllers
In this tutorial, we will create four controllers: UserController, PostController, CommentController, and PostCommentController. All controllers must extend the base Controller class.
The final code for controllers won’t be included here as they are already available in the GitHub repository.
// app/Http/Controllers/UserController.php

<?php

namespace App\Http\Controllers;

use App\User;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Hash;

class UserController extends Controller
{
    public function index()
    {
        // ...
    }

    // ...
}
Implementing the Controllers
The API will be used to create, read, update, or delete data, right? So, we need to write the code that will take care of these operations.
Inside each controller, we will define action methods to create, read, update, and delete data. Here is an example of the UserController class.
// app/Http/Controllers/UserController.php

public function index()
{
    $users = User::all();
    return response()->json(['data' => $users], 200);
}

public function store(Request $request)
{
    $this->validateRequest($request);

    $user = User::create([
        'email' => $request->get('email'),
        'password' => Hash::make($request->get('password'))
    ]);

    return response()->json(['data' => "The user with id {$user->id} has been created"], 201);
}

public function show($id)
{
    $user = User::find($id);

    if (!$user) {
        return response()->json(['message' => "The user with id {$id} doesn't exist"], 404);
    }

    return response()->json(['data' => $user], 200);
}

public function update(Request $request, $id)
{
    $user = User::find($id);

    if (!$user) {
        return response()->json(['message' => "The user with id {$id} doesn't exist"], 404);
    }

    $this->validateRequest($request);

    $user->email = $request->get('email');
    $user->password = Hash::make($request->get('password'));
    $user->save();

    return response()->json(['data' => "The user with id {$user->id} has been updated"], 200);
}

public function destroy($id)
{
    $user = User::find($id);

    if (!$user) {
        return response()->json(['message' => "The user with id {$id} doesn't exist"], 404);
    }

    $user->delete();

    return response()->json(['data' => "The user with id {$id} has been deleted"], 200);
}

public function validateRequest(Request $request)
{
    $rules = [
        'email' => 'required|email|unique:users',
        'password' => 'required|min:6'
    ];

    $this->validate($request, $rules);
}
Again, the final code for controllers won’t be included here as they are already available in the GitHub repository.
Validating User Inputs
Whenever the client wants to store or update an existing User, the client needs to submit an email and a password. Of course, there are some validations we need to run against user inputs.
For example, we need to make sure the email field exists in the first place, and it’s a valid email, and also it has to be unique. All of these validations can be easily done with Lumen using “$this->validate()” method. It returns a JSON response with the relevant error messages if validation fails.
Don’t Repeat Yourself (DRY)
If you look closely, there are some code snippets we have been using over and over again, for example, sending a JSON response with a success or error message. So, it's better to keep these snippets in the base Controller class.
// app/Http/Controllers/Controller.php

public function success($data, $code)
{
    return response()->json(['data' => $data], $code);
}

public function error($message, $code)
{
    return response()->json(['message' => $message], $code);
}
Using OAuth2 Server
The most important part of this tutorial is how to install and use the OAuth2 Server package. This package adds an authorization layer to your application by using access tokens.
About OAuth2 and How It Works
The OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service. Source
Before diving deeper into this package, you need to have a good knowledge of the principles behind the OAuth2 specification.
If you need a refresher, or you want to learn about OAuth2, DigitalOcean has a good Introduction to OAuth 2.
Installing the OAuth2 Server
- Add "lucadegasperi/oauth2-server-laravel": "^5.2" to the require section of the composer.json file.
- Run composer update to update and install (if not already installed) all the dependencies.
- Copy the configuration folder vendor\lucadegasperi\oauth2-server-laravel\config into the project folder (lumen-api-oauth in this tutorial).
- Copy all migration files in vendor\lucadegasperi\oauth2-server-laravel\database into database\migrations.
- Register service providers and middlewares
// bootstrap/app.php

// Register middleware
$app->middleware([
    \LucaDegasperi\OAuth2Server\Middleware\OAuthExceptionHandlerMiddleware::class
]);

$app->routeMiddleware([
    'oauth' => \LucaDegasperi\OAuth2Server\Middleware\OAuthMiddleware::class,
]);

// Register service providers
$app->register(\LucaDegasperi\OAuth2Server\Storage\FluentStorageServiceProvider::class);
$app->register(\LucaDegasperi\OAuth2Server\OAuth2ServerServiceProvider::class);
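As a side note, the first two installation steps above (editing composer.json and running composer update) can also be collapsed into a single Composer command, which adds the version constraint and installs the package in one go:

```shell
# Adds "lucadegasperi/oauth2-server-laravel": "^5.2" to composer.json
# and installs it, in a single step (run inside the project folder)
composer require "lucadegasperi/oauth2-server-laravel:^5.2"
```

This requires network access and an existing Composer project, so run it from the project root.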
Configuring the OAuth2 Server
- Create a seeder class for OAuth, called OAuthClientSeeder for example. This class will generate and insert data into the oauth_clients table.
// database/seeds/OAuthClientSeeder.php

use Illuminate\Database\Seeder;

class OAuthClientSeeder extends Seeder
{
    public function run()
    {
        DB::table('oauth_clients')->truncate();

        for ($i = 0; $i < 10; $i++) {
            DB::table('oauth_clients')->insert([
                'id' => "id$i",
                'secret' => "secret$i",
                'name' => "Test Client $i"
            ]);
        }
    }
}
2. Make a call to OAuthClientSeeder in the DatabaseSeeder class.
// database/seeds/DatabaseSeeder.php

public function run()
{
    // ...
    $this->call('OAuthClientSeeder');
    // ...
}
3. Run composer dump-autoload to update the autoloader because of the new classes we've added recently.
4. Run Migrations and Seeders
php artisan migrate --seed
5. Define a route to respond to the incoming access token requests.
// app/Http/routes.php

// Request access tokens
$app->post('/oauth/access_token', function () use ($app) {
    return response()->json($app->make('oauth2-server.authorizer')->issueAccessToken());
});
6. Choose and enable the grant type suitable for your app. In this tutorial, we will use the Password Grant.
// config/oauth2.php

'grant_types' => [
    'password' => [
        'class' => '\League\OAuth2\Server\Grant\PasswordGrant',
        'callback' => '\App\User@verify',
        'access_token_ttl' => 3600
    ]
],
This grant requires the client to send the resource owner's (user's) email and password along with the client id and client secret. So, we need to define a method that checks whether the provided user is a valid one. We will create a verify($email, $password) method in the User class.
// app/User.php

public function verify($email, $password)
{
    $user = User::where('email', $email)->first();

    if ($user && Hash::check($password, $user->password)) {
        return $user->id;
    }

    return false;
}
Testing the OAuth2 Server
You can test the API using Postman by making a POST request to the /oauth/access_token route defined above, on whatever host your app is served from.
If you are using a WAMP, LAMP, MAMP, or XAMPP server, the URL will also include your project's public folder, but the endpoint is still /oauth/access_token.
The client needs to submit 5 fields: username, password, client_id, client_secret, and grant_type.
The username field is the email in Users table.
The password field is secret.
The client_id field is the id in oauth_clients table.
The client_secret field is the secret in oauth_clients table.
The grant_type field is password.
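If you prefer the command line over Postman, the same token request can be sketched with curl. The host below assumes the app is served locally on port 8000, and the email, client id, and client secret are placeholders; substitute a seeded user's email and one of the seeded client id/secret pairs:

```shell
# Request an access token using the password grant
# (assumes the Lumen app is running at http://localhost:8000)
curl -X POST http://localhost:8000/oauth/access_token \
  -d "grant_type=password" \
  -d "username=someuser@example.com" \
  -d "password=secret" \
  -d "client_id=id1" \
  -d "client_secret=secret1"
```

On success, the JSON response carries the access_token along with its type and expiry.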
The authorization server will then issue access tokens to the client after successfully authenticating the client credentials and presenting authorization grant(user credentials).
Protecting the Resources Using Access Tokens
After obtaining an access token, we need to restrict the access to controller action methods. Therefore, clients need to present the access token to get access to the resources.
For example, in the PostController class, we can assign the oauth middleware to all methods that require authorization via access token, except the index & show methods. So, if the client needs to remove a post, the client must send a valid access token.
// app/Http/Controllers/PostController.php

public function __construct()
{
    $this->middleware('oauth', ['except' => ['index', 'show']]);
}

public function destroy($id)
{
    $post = Post::find($id);

    if (!$post) {
        return $this->error("The post with id {$id} doesn't exist", 404);
    }

    // No need to delete the comments of the current post first,
    // since we used onDelete('cascade') and onUpdate('cascade').
    // $post->comments()->delete();
    $post->delete();

    return $this->success("The post with id {$id} has been deleted along with its comments", 200);
}
Now, you can assign oauth middleware to action methods in UserController & PostCommentController classes.
// app/Http/Controllers/UserController.php
// app/Http/Controllers/PostCommentController.php

public function __construct()
{
    $this->middleware('oauth', ['except' => ['index', 'show']]);
}
If you still receive an invalid request error even though the access token is sent, then the key and value pairs should be sent using x-www-form-urlencoded instead of form-data.
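Once a token has been issued, a protected endpoint can be called by sending it in the Authorization header. Sketched with curl, again assuming a local server on port 8000, a seeded post with id 1, and a placeholder token:

```shell
# Delete a protected resource by presenting the access token
# obtained from /oauth/access_token (replace YOUR_ACCESS_TOKEN)
curl -X DELETE http://localhost:8000/posts/1 \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```

Without a valid token, the oauth middleware rejects the request before the controller action runs.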
The Current Authorized User Id
You might be asking: how do we get the id of the authorized user? Every access token is related to a user; if you remember, we sent the user's email and password to get an access token.
So, after the oauth middleware validates the access token sent by the client, you can access the authorized user id by calling:
\LucaDegasperi\OAuth2Server\Facades\Authorizer::getResourceOwnerId();
And because we are going to get the authorized user id multiple times, it’s better to keep this line of code in the base controller class.
// app/Http/Controllers/Controller.php

protected function getUserId()
{
    return \LucaDegasperi\OAuth2Server\Facades\Authorizer::getResourceOwnerId();
}
I See authorize Middleware, What Is This?!
If you have seen the following line of code in any controller constructor in the GitHub repository, just leave it commented out. Why? Because in this tutorial, we didn't cover how to authorize against ownership, or non-admin vs. admin users.
// app/Http/Controllers/UserController.php
// app/Http/Controllers/PostCommentController.php
// app/Http/Controllers/PostController.php

public function __construct()
{
    $this->middleware('oauth', ['except' => ['index', 'show']]);
    // $this->middleware('authorize:' . __CLASS__, ['except' => [....]]);
}
Conclusion
Building a RESTful API couldn’t be easier with Lumen, even if you aren’t familiar with Laravel. Now, you should be able to install, configure, and build your own RESTful API with OAuth2. You can take what we have done further and tailor it according to your needs. | https://medium.com/omarelgabrys-blog/building-restful-apis-with-lumen-and-oauth2-8ba279c6a31 | CC-MAIN-2020-24 | en | refinedweb |
May 2013
Volume 28 Number 05
Windows with C++ - Introducing Direct2D 1.1
By Kenny Kerr | May 2013
Windows 8 launched with a major new version of Direct2D. Since then, Direct2D has made it into Windows RT (for ARM devices) and Windows Phone 8, both of which are based on this latest version of Windows. Support for Direct2D on the phone OS isn’t yet official, but it’s just a matter of time. What about Windows 7? A platform update is being prepared to bring the latest version of the DirectX family of APIs to Windows 7. It includes the latest versions of Direct2D, Direct3D, DirectWrite, the Windows Imaging Component, the Windows Animation Manager and so on. A major driver for this is, of course, Internet Explorer. Anywhere you find Internet Explorer 9 or higher, you’ll find Direct2D. By the time you read this, it’s likely that Internet Explorer 10 will be available on Windows 7 and that, too, will require Direct2D 1.1 to be installed as a matter of course. Given its ubiquity, there’s really no reason not to use Direct2D 1.1. But what is Direct2D 1.1 and how can you start using it? That’s the topic of this month’s column.
In my last column (msdn.microsoft.com/magazine/jj991972), I showed how you could use ID2D1HwndRenderTarget, the Direct2D HWND render target, in a desktop window. This mechanism continues to be supported by Direct2D 1.1 and is still the simplest way to get started with Direct2D, as it hides all of the underlying Direct3D and DirectX Graphics Infrastructure (DXGI) plumbing. To take advantage of the improvements in Direct2D 1.1 you need to eschew this render target, however, and instead opt for ID2D1DeviceContext, the new device context render target. At first, this might sound like something from Direct3D, and in some ways, it is. Like the HWND render target, the Direct2D device context inherits from the ID2D1RenderTarget interface and is thus very much a render target in the traditional sense, but it’s a whole lot more powerful. Creating it, however, is a bit more involved but well worth the effort, as it provides many new features and is the only way to use Direct2D with the Windows Runtime (WinRT). Therefore, if you want to use Direct2D in your Windows Store or Windows Phone apps, you’ll need to embrace the Direct2D device context. In this month’s column I’ll show you how to use the new render target in a desktop app. Next month, I’ll show you how the render target works with the Windows Runtime—this has more to do with the Windows Runtime than it does with Direct2D. The bulk of what you need to learn has to do with managing the Direct3D device, the swap chain, buffers and resources.
The nice thing about the original Direct2D HWND render target was that you really didn’t need to know anything about Direct3D or DirectX to get stuff done. That’s no longer the case. Fortunately, there isn’t a whole lot you need to know, as DirectX can certainly be daunting. DirectX is really a family of closely related APIs, of which Direct3D is the most well-known, while Direct2D is starting to steal some attention thanks to its relative ease of use and incredible power. Along the way, different parts of DirectX have come and gone. One relatively new member of the family is DXGI, which debuted with Direct3D 10. DXGI provides common GPU resource management facilities across the various DirectX APIs. Bridging the gap between Direct2D and Direct3D involves DXGI. The same goes for bridging the gap between Direct3D and the desktop’s HWND or the WinRT CoreWindow. DXGI provides the glue that binds them all together.
Headers and Other Glue
As I’ve done in the past, I’ll continue to use the Active Template Library (ATL) on the desktop just to keep the examples concise. You can use your own library or no library at all. It really doesn’t matter. I covered these options in my February 2013 column (msdn.microsoft.com/magazine/jj891018). To begin, you need to include the necessary Visual C++ libraries:
#include <wrl.h>
#include <atlbase.h>
#include <atlwin.h>
The Windows Runtime C++ Template Library (WRL) provides the handy ComPtr smart pointer, and the ATL headers are there for managing the desktop window. Next, you need the latest DirectX headers:
#include <d2d1_1.h>
#include <d3d11_1.h>
The first one is the new Direct2D 1.1 header file. The Direct3D 11.1 header is needed for device management. To keep things simple I’ll assume the WRL and Direct2D namespaces are as follows:
using namespace Microsoft::WRL;
using namespace D2D1;
Next, you need to tell the linker how to resolve the factory functions you’ll be using:
#pragma comment(lib, "d2d1")
#pragma comment(lib, "d3d11")
I tend to avoid talking about error handling. As with so much of C++, the developer has many choices for how errors can be handled. This flexibility is in many ways what draws me, and many others, to C++, but it can be divisive. Still, I get many questions about error handling, so to avoid any confusion, Figure 1 shows what I rely on for error handling in desktop apps.
Figure 1 Error Handling
#define ASSERT ATLASSERT
#define VERIFY ATLVERIFY

#ifdef _DEBUG
#define HR(expression) ASSERT(S_OK == (expression))
#else
struct ComException
{
    HRESULT const hr;
    ComException(HRESULT const value) : hr(value) {}
};

inline void HR(HRESULT const hr)
{
    if (S_OK != hr) throw ComException(hr);
}
#endif
The ASSERT and VERIFY macros are just defined in terms of the corresponding ATL macros. If you’re not using ATL, then you could just use the C Run-Time Library (CRT) _ASSERTE macro instead. Either way, assertions are vital as a debugging aid. VERIFY checks the result of an expression but only asserts in debug builds. In release builds the ASSERT expression is stripped out entirely while the VERIFY expression remains. The latter is useful as a sanity check for functions that must execute but shouldn’t fail short of some apocalyptic event. Finally, HR is a macro that ensures the expression—typically a COM-style function or interface method—is successful. In debug builds, it asserts so as to quickly pinpoint the line of code in a debugger. In release builds, it throws an exception to quickly force the application to crash. This is particularly handy if your application uses Windows Error Reporting (WER), as the offline crash dump will pinpoint the failure for you.
The Desktop Window
Now it’s time to start framing up the DesktopWindow class. First, I’ll define a base class to wrap up much of the boilerplate windowing plumbing. Figure 2 provides the initial structure.
Figure 2 Desktop Window Skeleton
template <typename T>
struct DesktopWindow :
    CWindowImpl<DesktopWindow<T>, CWindow, CWinTraits<WS_OVERLAPPEDWINDOW | WS_VISIBLE>>
{
    ComPtr<ID2D1DeviceContext> m_target;
    ComPtr<IDXGISwapChain1> m_swapChain;
    ComPtr<ID2D1Factory1> m_factory;

    DECLARE_WND_CLASS_EX(nullptr, 0, -1);

    BEGIN_MSG_MAP(c)
        MESSAGE_HANDLER(WM_PAINT, PaintHandler)
        MESSAGE_HANDLER(WM_SIZE, SizeHandler)
        MESSAGE_HANDLER(WM_DISPLAYCHANGE, DisplayChangeHandler)
        MESSAGE_HANDLER(WM_DESTROY, DestroyHandler)
    END_MSG_MAP()

    LRESULT PaintHandler(UINT, WPARAM, LPARAM, BOOL &)
    {
        PAINTSTRUCT ps;
        VERIFY(BeginPaint(&ps));
        Render();
        EndPaint(&ps);
        return 0;
    }

    LRESULT DisplayChangeHandler(UINT, WPARAM, LPARAM, BOOL &)
    {
        Render();
        return 0;
    }

    LRESULT SizeHandler(UINT, WPARAM wparam, LPARAM, BOOL &)
    {
        if (m_target && SIZE_MINIMIZED != wparam)
        {
            ResizeSwapChainBitmap();
            Render();
        }

        return 0;
    }

    LRESULT DestroyHandler(UINT, WPARAM, LPARAM, BOOL &)
    {
        PostQuitMessage(0);
        return 0;
    }

    void Run()
    {
        // Create the Direct2D factory, retrieving the new ID2D1Factory1
        // interface, and let the derived window class create any
        // device-independent resources
        HR(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED,
                             m_factory.GetAddressOf()));

        static_cast<T *>(this)->CreateDeviceIndependentResources();

        // Create the window itself and enter the message loop
        VERIFY(__super::Create(nullptr, 0, L"Introducing Direct2D 1.1"));

        MSG message;
        BOOL result;

        while (result = GetMessage(&message, 0, 0, 0))
        {
            if (-1 != result)
            {
                DispatchMessage(&message);
            }
        }
    }
};
DesktopWindow is a class template, as I rely on static or compile-time polymorphism to call the app’s window class for drawing and resource management at appropriate points. Again, I’ve already described the mechanics of ATL and desktop windows in my February 2013 column, so I won’t repeat that here. The main thing to note is that the WM_PAINT, WM_SIZE and WM_DISPLAYCHANGE messages are all handled with a call to a Render method. The WM_SIZE message also calls out to a ResizeSwapChainBitmap method. These hooks are needed to let DirectX know what’s happening with your window. I’ll describe what these do in a moment. Finally, the Run method creates the standard Direct2D factory object, retrieving the new ID2D1Factory1 interface in this case, and optionally lets the app’s window class create device-independent resources. It then creates the HWND itself and enters the message loop. The app’s wWinMain function is then a simple two-liner:
int __stdcall wWinMain(HINSTANCE, HINSTANCE, PWSTR, int)
{
    SampleWindow window;
    window.Run();
}
Creating the Device
So far, most of what I’ve described has been a recap of what I’ve shown in previous columns for window management. Now I’ve come to the point where things get very different. The HWND render target did a lot of work for you to hide the underlying DirectX plumbing. Being a render target, the Direct2D device context still delivers the results of drawing commands to a target, but the target is no longer the HWND—rather, it’s a Direct2D image. This image is an abstraction, which can literally be a Direct2D bitmap, a DXGI surface or even a command list to be replayed in some other context.
The first thing to do is to create a Direct3D device object. Direct3D defines a device as something that allocates resources, renders primitives and communicates with the underlying graphics hardware. The device consists of a device object for managing resources and a device-context object for rendering with those resources. The D3D11CreateDevice function creates a device, optionally returning pointers to the device object and device-context object. Figure 3 shows what this might look like. I don’t want to bog you down with Direct3D minutiae, so I won’t describe every option in detail but instead will focus on what’s relevant to Direct2D.
Figure 3 Creating a Direct3D Device
HRESULT CreateDevice(D3D_DRIVER_TYPE const type,
                     ComPtr<ID3D11Device> & device)
{
    UINT flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;

    #ifdef _DEBUG
    flags |= D3D11_CREATE_DEVICE_DEBUG;
    #endif

    return D3D11CreateDevice(nullptr,
                             type,
                             nullptr,
                             flags,
                             nullptr, 0,
                             D3D11_SDK_VERSION,
                             device.GetAddressOf(),
                             nullptr,
                             nullptr);
}
This function’s second parameter indicates the type of device that you’d like to create. The D3D_DRIVER_TYPE_HARDWARE constant indicates that the device should be backed by the GPU for hardware-accelerated rendering. If a GPU is unavailable, then the D3D_DRIVER_TYPE_WARP constant may be used. WARP is a high-performance software rasterizer and is a great fallback. You shouldn’t assume that a GPU is available, especially if you’d like to run or test your software in constrained environments such as Hyper-V virtual machines (VMs). Here’s how you might use the function from Figure 3 to create the Direct3D device:
ComPtr<ID3D11Device> d3device;

auto hr = CreateDevice(D3D_DRIVER_TYPE_HARDWARE, d3device);

if (DXGI_ERROR_UNSUPPORTED == hr)
{
    hr = CreateDevice(D3D_DRIVER_TYPE_WARP, d3device);
}

HR(hr);
Figure 3 also illustrates how you can enable the Direct3D debug layer for debug builds. One thing to watch out for is that the D3D11CreateDevice function will mysteriously fail if the debug layer isn’t actually installed. This won’t be a problem on your development machine, because Visual Studio would’ve installed it along with the Windows SDK. If you happen to copy a debug build onto a test machine, you might bump into this problem. This is in contrast to the D2D1CreateFactory function, which will still succeed even if the Direct2D debug layer isn’t present.
Creating the Swap Chain
The next step is to create a DXGI swap chain for the application’s HWND. A swap chain is a collection of buffers used for displaying frames to the user. Typically, there are two buffers in the chain, often called the front buffer and the back buffer, respectively. The GPU presents the image stored in the front buffer while the application renders into the back buffer. When the application is done rendering it asks DXGI to present the back buffer, which basically swaps the pointers to the front and back buffers, allowing the GPU to present the new frame and the application to render the next frame. This is a gross simplification, but it’s all you need to know for now.
To create the swap chain you first need to get hold of the DXGI factory and retrieve its IDXGIFactory2 interface. You can do so by calling the CreateDXGIFactory1 function, but given that you’ve just created a Direct3D device object, you can also use the DirectX object model to make your way there. First, you need to query for the device object’s IDXGIDevice interface:
ComPtr<IDXGIDevice> dxdevice;
HR(d3device.As(&dxdevice));
Next, you need to retrieve the display adapter, virtual or otherwise, for the device:
ComPtr<IDXGIAdapter> adapter;
HR(dxdevice->GetAdapter(adapter.GetAddressOf()));
The adapter’s parent object is the DXGI factory:
ComPtr<IDXGIFactory2> factory;
HR(adapter->GetParent(__uuidof(factory),
                      reinterpret_cast<void **>(factory.GetAddressOf())));
This might seem like a lot more work than simply calling the CreateDXGIFactory1 function, but in a moment you’re going to need the IDXGIDevice interface, so it’s really just one extra method call.
The IDXGIFactory2 interface provides the CreateSwapChainForHwnd method for creating a swap chain for a given HWND. Before calling it, you need to prepare a DXGI_SWAP_CHAIN_DESC1 structure describing the particular swap chain structure and behavior that you’d like. A bit of care is needed when initializing this structure, as it’s what most distinguishes the various platforms on which you’ll find Direct2D. Here’s what it might look like:
DXGI_SWAP_CHAIN_DESC1 props = {};
props.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
props.SampleDesc.Count = 1;
props.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
props.BufferCount = 2;
The Format member describes the desired format for the swap chain buffers. I’ve chosen a 32-bit format with 8 bits for each color channel and alpha component. Overall, this provides the best performance and compatibility across devices and APIs. The SampleDesc member affects Direct3D image quality, as it relates to antialiasing. Generally, you’ll want Direct2D to handle antialiasing, so this configuration merely tells Direct3D not to do anything. The BufferUsage member describes how the swap chain buffers will be used and allows DirectX to optimize memory management. In this case I’m indicating that the buffer will be used only for rendering output to the screen. This means that the buffer won’t be accessible from the CPU, but the performance will be greatly improved as a result. The BufferCount member indicates how many buffers the swap chain will contain. This is typically no more than two to conserve memory, but it may be more for exceedingly high-speed rendering (although that’s rare). In fact, to conserve memory, Windows Phone 8 allows only a single buffer to be used. There are a number of other swap chain members, but these are the only ones required for a desktop window. Now create the swap chain:
HR(factory->CreateSwapChainForHwnd(d3device.Get(), m_hWnd, &props, nullptr, nullptr, m_swapChain.GetAddressOf()));
The first parameter indicates the Direct3D device where the swap chain resources will be allocated, and the second is the target HWND where the front buffer will be presented. If all goes well, the swap chain is created. One thing to keep in mind is that only a single swap chain can be associated with a given HWND. This might seem obvious, or not, but it’s something you need to watch out for. If this method fails, it’s likely that you failed to release device-specific resources before re-creating the device after a device-loss event.
Creating the Direct2D Device
The next step is to create the Direct2D device. Direct2D 1.1 is modeled more closely on Direct3D in that instead of simply having a render target, it now has both a device and device context. The device is a Direct2D resource object that’s linked to a particular Direct3D device. Like the Direct3D device, it serves as a resource manager. Unlike the Direct3D device context, which you can safely ignore, the Direct2D device context is the render target that exposes the drawing commands, which you’ll need. The first step is to use the Direct2D factory to create the device that’s bound to the underlying Direct3D device via its DXGI interface:
ComPtr<ID2D1Device> device;
HR(m_factory->CreateDevice(dxdevice.Get(), device.GetAddressOf()));
This device represents the display adapter in Direct2D. I typically don’t hold on to the ID2D1Device pointer, because that makes it simpler to handle device loss and swap chain buffer resizing. What you really need it for is to create the Direct2D device context:
HR(device->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, m_target.GetAddressOf()));
What you now have is a Direct2D render target that you can use to batch up drawing commands as usual, but there’s still one more step before you can do so. Unlike the HWND render target, which could only ever target the window for which it was created, the device context can switch targets at run time and initially has no target set at all.
Connecting the Device Context and Swap Chain
The next step is to set the swap chain’s back buffer as the target of the Direct2D device context. The swap chain’s GetBuffer method will return the back buffer as a DXGI surface:
ComPtr<IDXGISurface> surface;
HR(m_swapChain->GetBuffer(0, // buffer index
                          __uuidof(surface),
                          reinterpret_cast<void **>(surface.GetAddressOf())));
You can now use the device context’s CreateBitmapFromDxgiSurface method to create a Direct2D bitmap to represent the DXGI surface, but first you need to describe the bitmap’s format and intended use. You can define the bitmap properties as follows:
auto props = BitmapProperties1(
    D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
    PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM,
                D2D1_ALPHA_MODE_IGNORE));
The D2D1_BITMAP_OPTIONS_TARGET constant indicates that the bitmap will be used as the target of a device context. The D2D1_BITMAP_OPTIONS_CANNOT_DRAW constant relates to the swap chain’s DXGI_USAGE_RENDER_TARGET_OUTPUT attribute, indicating that it can be used only as an output and not as an input to other drawing operations. PixelFormat just describes to Direct2D what the underlying swap chain buffer looks like. You can now create a Direct2D bitmap to point to the swap chain back buffer and point the device context to this bitmap:
ComPtr<ID2D1Bitmap1> bitmap;
HR(m_target->CreateBitmapFromDxgiSurface(surface.Get(), props, bitmap.GetAddressOf()));
m_target->SetTarget(bitmap.Get());
Finally, you need to tell Direct2D how to scale its logical coordinate system to the physical display embodied by the DXGI surface:
float dpiX, dpiY;
m_factory->GetDesktopDpi(&dpiX, &dpiY);
m_target->SetDpi(dpiX, dpiY);
Rendering
The DesktopWindow Render method acts as the catalyst for much of the Direct2D device management. Figure 4 provides the basic outline of the Render method.
Figure 4 The DesktopWindow Render Method
void Render()
{
    if (!m_target)
    {
        CreateDevice();
        CreateDeviceSwapChainBitmap();
    }

    m_target->BeginDraw();
    static_cast<T *>(this)->Draw();
    m_target->EndDraw();

    auto const hr = m_swapChain->Present(1, 0);

    if (S_OK != hr && DXGI_STATUS_OCCLUDED != hr)
    {
        ReleaseDevice();
    }
}
As with the HWND render target, the Render method first checks whether the render target needs to be created. The CreateDevice method contains the Direct3D device, DXGI swap chain and Direct2D device context creation. The CreateDeviceSwapChainBitmap method contains the code to connect the swap chain to the device context by means of a DXGI surface-backed Direct2D bitmap. The latter is kept separate because it’s needed during window resizing.
The Render method then follows the usual pattern of bracketing draw commands with the BeginDraw and EndDraw methods. Notice that I don’t bother to check the result of the EndDraw method. Unlike the HWND render target, the device context’s EndDraw method doesn’t actually present the newly drawn frame to the screen. Instead, it merely concludes rendering to the target bitmap. It’s the job of the swap chain’s Present method to present this to the screen, and it’s at this point that any rendering and presentation issues can be handled.
Because I'm only using a simple event-driven rendering model for this window, the presentation is straightforward. If I were using an animation loop for synchronized rendering at the display's refresh frequency, things would get a lot more complicated, but I'll cover that in a future column. In this case, there are three scenarios to deal with. Ideally, Present returns S_OK and all is well. Alternatively, Present returns DXGI_STATUS_OCCLUDED, indicating the window is occluded, meaning it's invisible. This is increasingly rare, as desktop composition relies on the window's presentation to remain active. One source of occlusion, however, is when the active desktop is switched. This happens most often if a User Account Control (UAC) prompt appears or the user presses Ctrl+Alt+Del to switch users or lock the computer. At any rate, occlusion doesn't mean failure, so there's nothing you need to do except perhaps avoid extra rendering calls. If Present fails for any other reason, then it's safe to assume that the underlying Direct3D device has been lost and must be re-created. The DesktopWindow's ReleaseDevice method might look like this:
void ReleaseDevice()
{
    m_target.Reset();
    m_swapChain.Reset();

    static_cast<T *>(this)->ReleaseDeviceResources();
}
Here’s where you can start to understand why I avoid holding on to any unnecessary interface pointers. Every resource pointer represents a reference directly or indirectly to the underlying device. Each one must be released in order for the device to be properly re-created. At a minimum, you need to hold on to the render target (so that you can actually issue drawing commands) and the swap chain (so that you can present). Related to this—and the final piece of the puzzle—is the ResizeSwapChainBitmap method I alluded to inside the WM_SIZE message handler.
The HWND render target made this simple with its Resize method. Because you’re now in charge of the swap chain, it’s your responsibility to resize its buffers. Of course, this will fail unless all references to these buffers have been released. Assuming you aren’t directly holding on to any, this is simple enough. Figure 5 shows you how.
Figure 5 Resizing the Swap Chain
void ResizeSwapChainBitmap()
{
    m_target->SetTarget(nullptr);

    if (S_OK == m_swapChain->ResizeBuffers(0, 0, 0, DXGI_FORMAT_UNKNOWN, 0))
    {
        CreateDeviceSwapChainBitmap();
    }
    else
    {
        ReleaseDevice();
    }
}
In this case, the only reference to the swap chain’s back buffer that the DesktopWindow holds is the one held indirectly by the device context render target. Setting this to a nullptr value releases that final reference so that the ResizeBuffers method will succeed. The various parameters just tell the swap chain to resize the buffers based on the window’s new size and keep everything else as it was. Assuming ResizeBuffers succeeds, I simply call the CreateDeviceSwapChainBitmap method to create a new Direct2D bitmap for the swap chain and hook it up to the Direct2D device context. If anything goes wrong, I simply release the device and all of its resources; the Render method will take care of re-creating it all when the time comes.
You now have everything you need to render in a desktop window with Direct2D 1.1! And that's all I have room for this month. Join me next time.
Inner Class in Java
Reading time: 20 minutes | Coding time: 5 minutes
In Java, an inner class is a type of nested class. A nested class is a class which is defined inside of another class. Just as we nest the conditional statements such as if...else, switch and loops in the same way we can nest classes in Java. It provides better encapsulation.
The basic syntax of Inner class in Java is as follows:
class OuterClassName {
    // code
    class InnerClassName {
        // code
    }
}
To create an object of the Inner class, the syntax is as follows:
OuterClassName.InnerClassName inner = new OuterClassName().new InnerClassName();
Uses of Inner class
- It can access private members of the outer class, which increases encapsulation.
- It is better to create a class as nested when it needs to be used only by one class.
- It makes our source code more readable and maintainable.
Types of Nested Classes
The different types of Nested classes in Java are:
- Non Static Nested Class (Inner Class)
- Method Local Inner Class
- Anonymous Inner Class
- Static Nested Class
Non Static Inner Class
A non-static nested class is also known as an inner class. It can access all variables of the outer class. If it is declared as private, then it cannot be accessed by any class other than the one in which it is declared.
In this example, we will create a class A which has class B as an inner class. We will create an object of class A and use it to call class B's constructor.
Example
Java code for A.java:
// File name: opengenus_A.java
class opengenus_A {
    private int a = 10;

    class opengenus_B {
        public void print() {
            System.out.println("a = " + a);
        }
    }
}
Java code for Main.java:
// File name: Main.java
class Main {
    public static void main(String[] args) {
        // This is the way of creating an instance of an inner class.
        opengenus_A.opengenus_B inner = new opengenus_A().new opengenus_B();
        inner.print();
    }
}
Output:
a = 10
As you can see in the above example, we have created a class B inside of class A, and in B's print() method we print the value of A's private variable a.
Method Local Inner Class
When a class is declared inside a method, it is known as a method-local inner class. The scope of this type of inner class is limited to the method in which it is declared, and it can be instantiated only inside that method.
In the following Java code example, notice that the inner class B is a part of the myMethod() function of class A. It is local to that function.
Example
// File name: opengenus_A.java
class opengenus_A {
    void myMethod() {
        int a = 50;

        class opengenus_B {
            public void print() {
                System.out.println("a = " + a);
            }
        }

        opengenus_B b = new opengenus_B();
        b.print();
    }
}
Output:
a = 50
Anonymous Inner Class
An anonymous inner class is an inner class which is declared without any name. Like other anonymous classes, we declare and instantiate it at the same time. Anonymous inner classes are useful for overriding methods.
Follow the following Java code example to understand it better.
Example
// File name: opengenus_B.java
abstract class opengenus_A {
    public abstract void method();
}

public class opengenus_B {
    public static void main(String[] args) {
        opengenus_A a = new opengenus_A() {
            public void method() {
                System.out.println("Inside anonymous class.");
            }
        };
        a.method();
    }
}
Output:
Inside anonymous class.
Static Nested Class
A static nested class is declared with the keyword static. A static nested class is not really an inner class, as it behaves just like any other static member of the outer class. It cannot access the non-static variables of the outer class.
Follow the following Java code example to understand it better:
Example
class opengenus_A {
    static int a = 100;

    static class opengenus_B {
        void print() {
            System.out.println("a = " + a);
        }
    }
}
class Main {
    public static void main(String[] args) {
        opengenus_A.opengenus_B b = new opengenus_A.opengenus_B();
        b.print();
    }
}
In the above case, we do not need to create an object of the outer class, as the nested class is static and static members can be accessed without creating an instance.
Output:
a = 100
If we create a static method inside of a static class then we can access it just like any normal static method of class.
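A short sketch of that last point; the class and member names here are made up for illustration and are not from the article:

```java
public class Outer {
    static int a = 100;

    static class Inner {
        // A static method inside the static nested class.
        static int doubled() {
            return a * 2;
        }
    }

    public static void main(String[] args) {
        // Accessed without creating an instance of Outer or Inner.
        System.out.println(Outer.Inner.doubled());
    }
}
```

Running it prints 200, and no object of either class is ever created.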
With this article at OpenGenus, you have the complete knowledge of Inner classes in Java and its different kinds. Enjoy. | https://iq.opengenus.org/inner-class-in-java/ | CC-MAIN-2020-24 | en | refinedweb |
downloading through proxy not working
Bug Description
I'm working on a suse unix system, and I have an environment variable:
http_proxy=http://
When I do a python ./bootstrap, I get the following error:
Traceback (most recent call last):
File "/tmp/tmpXwJvtd
user_defaults, windows_restart, command)
File "/tmp/tmpXwJvtd
data[
File "/tmp/tmpXwJvtd
eresult = _open(base, extends.pop(0), seen, dl_options, override)
File "/tmp/tmpXwJvtd
eresult = _open(base, extends.pop(0), seen, dl_options, override)
File "/tmp/tmpXwJvtd
path, is_temp = download(filename)
File "/tmp/tmpXwJvtd
local_path, is_temp = self.download(url, md5sum, path)
File "/tmp/tmpXwJvtd
tmp_path, headers = urllib.
File "/usr/lib64/
return _urlopener.
File "/usr/lib64/
fp = self.open(url, data)
File "/usr/lib64/
return getattr(self, name)(url)
File "/usr/lib64/
h.endheaders()
File "/usr/lib64/
self.
File "/usr/lib64/
self.send(msg)
File "/usr/lib64/
self.connect()
File "/usr/lib64/
socket.
IOError: [Errno socket error] (-2, 'Name or service not known')
After a little debugging, it seems that urllib.urlretrieve is the culprit. It can't handle this kind of proxy setting. I have also tried this in a little standalone Python script, and it has the same bad result.
urllib2 handles the proxy setting just fine. So if I change this line:
tmp_path, headers = urllib.urlretrieve(url)
into this:
import urllib2
tmp_sock = urllib2.urlopen(url)
tmp_file = open(tmp_path, 'w')
tmp_file.write(tmp_sock.read())
tmp_file.close()
then all works fine.
Now this code might not be great, but is it possible that urllib2 will be used as a workaround for this proxy issue?
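(A note for readers on current Python, where urllib and urllib2 were merged into urllib.request: an authenticated proxy can also be configured explicitly with a ProxyHandler. The proxy URL below is a placeholder, not a working endpoint.)

```python
import urllib.request

def make_proxy_opener(proxy_url):
    # proxy_url may embed credentials, e.g. "http://user:password@proxy.example:3128".
    handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    return urllib.request.build_opener(handler)

# opener.open(url) then routes requests through the configured proxy.
opener = make_proxy_opener("http://user:password@proxy.example:3128")
```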
Cheers, Huub
The above tests were done on Ubuntu 11.04 and 12.04.
Clayton
This sounds like a bug in the Python standard library. Have you reported it at http://
The problem I think is that I used a proxy with authentication, which is not supported in urllib:
"Proxies which require authentication for use are not currently supported; this is considered an implementation limitation."
http://
Still, the error message from urllib is misleading. Is it trying to resolve a host named 'user:password@
Please file a urllib bug at http://
This error is not detected in machines with python2.4.
The error is detected in zc.buildout 1.4.4 and 1.5.2 with python2.6.
When I use cntlm version 0.35 I don't detect the problem, but when I use cntlm version > 0.9 the error occurs.
With ntlmaps it's working.
I did what is described above and it worked.
Clayton | https://bugs.launchpad.net/zc.buildout/+bug/484735 | CC-MAIN-2017-47 | en | refinedweb |
Is there any way to get Eclipse to automatically look for static imports? For example, now that I've finally upgraded to JUnit 4, I'd like to be able to write:

assertEquals(expectedValue, actualValue);

and have Eclipse add the static import for me:

import static org.junit.Assert.assertEquals;
I'm using Eclipse Europa, which also has the Favorite preference section:
Window > Preferences > Java > Editor > Content Assist > Favorites
In mine, I have the following entries (when adding, use "New Type" and omit the .*):

org.hamcrest.Matchers.*
org.hamcrest.CoreMatchers.*
org.junit.*
org.junit.Assert.*
org.junit.Assume.*
org.junit.matchers.JUnitMatchers.*

All but the third of those are static imports. By having those as favorites, if I type "assertT" and hit Ctrl+Space, Eclipse offers up assertThat as a suggestion, and if I pick it, it will add the proper static import to the file.
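To make the payoff concrete, here is a sketch of what the assisted code ends up looking like; java.lang.Math.max stands in for org.junit.Assert.assertEquals so the snippet runs without JUnit on the classpath:

```java
import static java.lang.Math.max;

public class StaticImportDemo {
    // With the static import in place, the member reads like a local
    // function; this is the same effect you get for assertEquals once
    // Eclipse adds its static import.
    static int larger(int a, int b) {
        return max(a, b);
    }

    public static void main(String[] args) {
        System.out.println(larger(3, 7));
    }
}
```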
Our application is interfacing with a lot of web services these days. We have our own package that someone wrote a few years back using UTL_HTTP and it generally works, but needs some hard-coding of the SOAP envelope to work with certain systems. I would like to make it more generic, but lack experience to know how many scenarios I would have to deal with. The variations are in what namespaces need to be declared and the format of the elements. We have to handle both simple calls with a few parameters and those that pass a large amount of data in an encoded string.
I know that 10g has UTL_DBWS, but there are not a huge number of use-cases online. Is it stable and flexible enough for general use?
I have used UTL_HTTP, which is simple and works. If you face a challenge with your own package, you can probably find a solution in one of the many wrapper packages around UTL_HTTP on the net (Google "consuming web services from pl/sql" for examples).
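For reference, the generic UTL_HTTP call such wrapper packages perform boils down to POSTing the SOAP envelope and reading back the response. This is only a sketch: the URL, headers and envelope are placeholders, error handling (ACLs, wallets, timeouts) is omitted, and it is not the poster's actual package:

```sql
DECLARE
  req  utl_http.req;
  resp utl_http.resp;
  env  VARCHAR2(32767) := '<soapenv:Envelope>...</soapenv:Envelope>';
  buf  VARCHAR2(32767);
BEGIN
  req := utl_http.begin_request('http://example.com/service', 'POST', 'HTTP/1.1');
  utl_http.set_header(req, 'Content-Type', 'text/xml; charset=utf-8');
  utl_http.set_header(req, 'Content-Length', LENGTH(env));
  utl_http.set_header(req, 'SOAPAction', '""');
  utl_http.write_text(req, env);
  resp := utl_http.get_response(req);
  BEGIN
    LOOP
      utl_http.read_text(resp, buf);
      dbms_output.put_line(buf);
    END LOOP;
  EXCEPTION
    WHEN utl_http.end_of_body THEN
      utl_http.end_response(resp);
  END;
END;
/
```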
The reason nobody is using UTL_DBWS is that it is not functional in a default installed database. You need to load a ton of Java classes into the database, but the standard instructions seem to be defective - the process spews Java errors right and left and ultimately fails. It seems very few people have been willing to take the time to track down the package dependencies in order to make this approach work.
@Configuration @EnableAutoConfiguration public class SpringYarnBootApplication extends java.lang.Object
A SpringApplication-based class which can be used as a main class if the only requirement from an application is to pass arguments into SpringApplication.run(Object, String...).
The usual use case for this would be to define this class as the Main-Class when creating executable jars using the Spring Boot Maven or Gradle plugins. A user can always create a similar dummy main class within a packaged application and let the Spring Boot Maven or Gradle plugin find it during the creation of an executable jar.
Care must be taken that, if used, this class will enable a system with @EnableAutoConfiguration. If there is a need to exclude any automatic auto-configuration, the user should define a custom class.
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
You need scikit-image for this recipe. You will find the installation instructions here.
You also need to download the Beach dataset.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import skimage.exposure as skie
%matplotlib inline
img = plt.imread('data/pic1.jpg')[...,0]
def show(img):
    # Display the image.
    plt.figure(figsize=(8,2));
    plt.subplot(121);
    plt.imshow(img, cmap=plt.cm.gray);
    plt.axis('off');
    # Display the histogram.
    plt.subplot(122);
    plt.hist(img.ravel(), lw=0, bins=256);
    plt.xlim(0, img.max());
    plt.yticks([]);
    plt.show()
show(img)
The histogram is unbalanced and the image appears slightly over-exposed.
We rescale the intensity of the image with the rescale_intensity function. The in_range and out_range parameters define a linear mapping from the original image to the modified image. The pixels that are outside in_range are clipped to the extremal values of out_range. Here, the darkest pixels (intensity less than 100) become completely black (0), whereas the brightest pixels (>240) become completely white (255).
show(skie.rescale_intensity(img, in_range=(100, 240), out_range=(0, 255)))
Many intensity values seem to be missing in the histogram, which reflects the poor quality of this exposure correction technique.
show(skie.equalize_adapthist(img))
The histogram seems more balanced, and the image now appears more contrasted.
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages). | http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/01_exposure.ipynb | CC-MAIN-2017-47 | en | refinedweb |
February 29, 2016 | Written by: Frederic Lavigne
Categorized: Compute Services | How-tos | Watson
Imagine you are attending the Cannes film festival or visiting a capital and taking pictures. Wouldn’t it be great if when you are about to share these pictures with your friends and followers, the app automatically proposed hashtags by interpreting the picture, identifying buildings, landmarks and famous people?
While we wait for this capability to come in popular image sharing apps, let’s build something like this with IBM Bluemix:
- To analyze the images, the IBM Bluemix catalog provides us with the Watson Visual Recognition and AlchemyAPI (more specifically AlchemyVision) services from IBM Watson. You provide the API with an image (URL or raw data) and in return you get a list of tags or keywords with a confidence score. Watson Visual Recognition can even be trained for fine-grain classification,
- For the app, we will pick iOS as our first target; this will be an opportunity to develop with Swift,
- Given IBM Bluemix OpenWhisk was just announced at IBM InterConnect, all of the image processing and analysis will be running as an IBM Bluemix OpenWhisk action, outside of the app logic code, with no server to set up and reusable by others.
Voilà, a sample iOS application to automatically tag images and detect faces by using IBM visual recognition technologies:
- Take a photo or select an existing picture in the camera roll,
- Let the application generate a list of tags and detect people, buildings, objects in the picture,
- Share the results with your network.
See how it was done!
The application is built with Cloudant, Watson Visual Recognition and AlchemyAPI from the Bluemix catalog to store and process images. And obviously all the backend part is implemented with IBM Bluemix OpenWhisk as a JavaScript action.
The source code, documentation and instructions to run the application are available in the IBM-Bluemix/openwhisk-visionapp project on GitHub.
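For readers who haven't seen OpenWhisk yet, a JavaScript action is simply a function named main that receives a parameter object and returns a result object. A minimal sketch of that shape follows; the field names and tag values are illustrative, not the sample app's actual contract:

```javascript
// Minimal shape of an OpenWhisk JavaScript action.
function main(params) {
    var imageUrl = params.imageUrl;
    // A real action would call AlchemyVision / Watson Visual Recognition
    // here and build the tag and face lists from the API responses.
    return {
        imageUrl: imageUrl,
        tags: ["beach", "sunset"],
        faces: []
    };
}
```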
While working on this, a colleague asked about the choice of IBM Bluemix OpenWhisk to implement the processing with a question: "Why not call the Watson services directly from the application?" Indeed, this would also work, at least at first, since today we're only considering an iOS app. Now let's consider this is a real business: you will want to target other systems like Android, Windows Phone or even a more traditional web app. Do you want to have to rewrite the logic in several different languages? In these cases, you would not want to have to manage the scalability of the service. Instead, you would leave that to IBM Bluemix OpenWhisk to handle transparently.
If you have feedback, suggestions, or questions about the app, please reach out to me on Twitter @L2FProd.
If you want to learn more about OpenWhisk, visit our OpenWhisk development center. If you want to see OpenWhisk running in IBM Bluemix, sign-up for the experimental Bluemix OpenWhisk where their motto is “Post your code. We host it. We scale it up. Pay only for what you use.” 🙂
Shishir
This is exciting and I would like to know if the system can be trained to recognize sometimes not-so-famous personalities. Essentially, can I load my client images for recognition (private data of course)?
Frederic Lavigne
You can’t add your own images/faces to AlchemyAPI Face Detection feature. However you can use the Watson Visual Recognition service to build your own image classifiers tailored to your needs. Here is the link to the details
Yves Le Cléach
Following InterConnect 2016, I was trying Openwhisk and Swift on Bluemix. I was searching a good demo that implement both, and you did it in a very nice way. I just finish your tutorial in less than 30 min ! and the result is AWESOME !
Now I have to deep in the code… Thank you very much Frédéric ! I can’t wait your next one !
Note : check out the SOmusic article :
David Hicks
Very nice tutorial. I tried the app but I get "TypeError: Cannot read property 'use' of undefined" for the OpenWhisk activation. stderr: at mainImpl (eval at NodeActionRunner (/nodejsAction/runner.js:32:21)
David Hicks
I have to make two changes (ServerlessAPI.js) to get this to work. First, pass your Cloudant url and name to whisk. Second, fix the object reference to the returned JSON result.

try whisk.invokeAction(name: ActionName, package: nil, namespace: ActionNamespace,
    parameters: ([ "imageDocumentId": documentId, "cloudantUrl": CloudantUrl, "cloudantDbName": CloudantDbName] as AnyObject),
    hasResult: true) { (reply, error) -> Void in

onSuccess(Result(impl: result["response"]["result"]))
Ronan
This is a nice tutorial.
I did the same kind of project for an Android App. Thanks to the IBM tutorials it took 3 days, even if I still have a lot of debugging to do.
Let me know if you want the code, I can share. | https://www.ibm.com/blogs/bluemix/2016/02/openwhisk-and-watson-image-tagging-app/?replytocom=3522 | CC-MAIN-2017-47 | en | refinedweb |
NetworKit is an open-source toolkit for high-performance network analysis. It is also a testbed for algorithm engineering and contains a few novel algorithms from recently published research, especially in the area of community detection.
This notebook provides an interactive introduction to the features of NetworKit, consisting of text and executable code. We assume that you have read the Readme and successfully built the core library and the Python module. Code cells can be run one by one (e.g. by selecting the cell and pressing shift+enter), or all at once (via the Cell->Run All command). Try running all cells now to verify that NetworKit has been properly built and installed.
This notebook creates some plots. To show them in the notebook, matplotlib must be imported and we need to activate matplotlib's inline mode:
%matplotlib inline
import matplotlib.pyplot as plt
IPython lets us use familiar shell commands in a Python interpreter. Use one of them now to change into the directory of your NetworKit download:
cd ../../
/home/mv/workspace/hiwi/NetworKit
NetworKit is a hybrid built from C++ and Python code: Its core functionality is implemented in C++ for performance reasons, and then wrapped for Python using the Cython toolchain. This allows us to expose high-performance parallel code as a normal Python module. On the surface, NetworKit is just that and can be imported accordingly:
from networkit import *
/usr/lib/python3.5/site-packages/pytz/__init__.py:29: UserWarning: Module _NetworKit was already imported from None, but /home/mv/workspace/hiwi/NetworKit is being added to sys.path from pkg_resources import resource_stream /usr/lib/python3.5/site-packages/matplotlib/__init__.py:872: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter. warnings.warn(self.msg_depr % (key, alt_key))
Let us start by reading a network from a file on disk: PGPgiantcompo.graph. In the course of this tutorial, we are going to work on the PGPgiantcompo network, a social network/web of trust in which nodes are PGP keys and an edge represents a signature from one key on another. It is distributed with NetworKit as a good starting point.
There is a convenient function in the top namespace which tries to guess the input format and select the appropriate reader:
G = readGraph("input/PGPgiantcompo.graph", Format.METIS)
There is a large variety of formats for storing graph data in files. For NetworKit, the currently best supported format is the METIS adjacency format. Various example graphs in this format can be found here. The readGraph function tries to be an intelligent wrapper for various reader classes. In this example, it uses the METISGraphReader which is located in the graphio submodule, alongside other readers. These classes can also be used explicitly:
graphio.METISGraphReader().read("input/PGPgiantcompo.graph") # is the same as: readGraph("input/PGPgiantcompo.graph", Format.METIS)
<_NetworKit.Graph at 0x7f84d7b5b1b0>
It is also possible to specify the format for readGraph() and writeGraph(). Supported formats can be found via [graphio.]Format. However, graph formats are most likely only supported as far as the NetworKit::Graph can hold and use the data. Please note that not all graph formats are supported for reading and writing.
Thus, it is possible to use NetworKit to convert graphs between formats. Let's say I need the previously read PGP graph in the Graphviz format:
graphio.writeGraph(G,"output/PGPgiantcompo.graphviz", Format.GraphViz)
NetworKit also provides a function to convert graphs directly:
graphio.convertGraph(Format.LFR, Format.GML, "input/example.edgelist", "output/example.gml")
converted input/example.edgelist to output/example.gml
Graph is the central class of NetworKit. An object of this type represents an undirected, optionally weighted network. Let us inspect several of the methods which the class provides.
n = G.numberOfNodes()
m = G.numberOfEdges()
print(n, m)
10680 24316
G.toString()
b'Graph(name=PGPgiantcompo, n=10680, m=24316)'
Nodes are simply integer indices, and edges are pairs of such indices.
V = G.nodes()
print(V[:10])
E = G.edges()
print(E[:10])
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [(42, 11), (101, 28), (111, 92), (128, 87), (141, 0), (165, 125), (169, 111), (176, 143), (187, 38), (192, 105)]
G.hasEdge(42,11)
True
This network is unweighted, meaning that each edge has the default weight of 1.
G.weight(42,11)
1.0
Sometimes it may be interesting to take a glance at a visualization of a graph. As this is not the scope of NetworKit, the viztasks-module provides two convenience functions to draw graphs via NetworkX. If you have it installed, you will see usage examples throughout this guide.
It is also possible to load a graph and the results of our analytic kernels directly into Gephi, a software package for interactive graph visualization, via its streaming plugin. You may want to take a look at the GephiStreaming notebook.
The profiling-module introduced with version 4.0 of NetworKit is the successor of the properties-module. It provides a convenient way to run a selection of NetworKit's analytic kernels. The results are further processed to show all kinds of statistics. A very brief example follows. First, let's load a different graph:
astro = readGraph("input/astro-ph.graph", Format.METIS)
One simple function call is enough to run and evaluate several kernels. The preset-parameter is a convenient way to choose a set of algorithms. Currently, minimal, default and complete can be passed.
pf = profiling.Profile.create(astro, preset="minimal")
When running inside a notebook, the
show-function can be used to display the profile. Depending on the selection of kernels, it may take a while to produce all the plots.
pf.show()
It is also possible to save the profile in a file with the following command. Two formats are available:
HTML and
LaTeX.
pf.output("HTML",".")
For a more customized selection of kernels, a
Config-object can be created and passed to
Profile.create. Take a look at the specific Profiling notebook for more detailed instructions.
A connected component is a set of nodes in which each pair of nodes is connected by a path. The following function determines the connected components of a graph:
cc = components.ConnectedComponents(G)
cc.run()
print("number of components ", cc.numberOfComponents())
v = 0
print("component of node ", v, ": ", cc.componentOfNode(0))
print("map of component sizes: ", cc.getComponentSizes())
number of components  1
component of node  0 :  0
map of component sizes:  {0: 10680}
Node degree, the number of edges connected to a node, is one of the most studied properties of networks. Types of networks are often characterized in terms of their distribution of node degrees. We obtain and visualize the degree distribution of our example network as follows.
dd = sorted(centrality.DegreeCentrality(G).run().scores(), reverse=True)
plt.xscale("log")
plt.xlabel("degree")
plt.yscale("log")
plt.ylabel("number of nodes")
plt.plot(dd)
plt.show()
We choose a logarithmic scale on both axes because a powerlaw degree distribution, a characteristic feature of complex networks, would show up as a straight line from the top left to the bottom right on such a plot. As we see, the degree distribution of the
PGPgiantcompo network is definitely skewed, with few high-degree nodes and many low-degree nodes. But does the distribution actually obey a power law? In order to study this, we need to apply the powerlaw module. Call the following function:
import powerlaw
fit = powerlaw.Fit(dd)
Calculating best minimal value for power law fit
The powerlaw coefficient can then be retrieved via:
fit.alpha
4.4185071392443591
If you further want to know how "good" it fits the power law distribution, you can use the
distribution_compare-function. From the documentation of the function:
R : float
Loglikelihood ratio of the two distributions' fit to the data. If greater than 0, the first distribution is preferred. If less than 0, the second distribution is preferred.
p : float
Significance of R
fit.distribution_compare('power_law','exponential')
(10.90280164722239, 0.041197207672945678)
In the most general sense, transitivity measures quantify how likely it is that the relations out of which the network is built are transitive. The clustering coefficient is the most prominent of such measures. We need to distinguish between global and local clustering coefficient: The global clustering coefficient for a network gives the fraction of closed triads. The local clustering coefficient focuses on a single node and counts how many of the possible edges between neighbors of the node exist. The average of this value over all nodes is a good indicator for the degree of transitivity and the presence of community structures in a network, and this is what the following function returns:
globals.clustering(G)
0.44180491618170764
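The counting behind this number can be sketched in a few lines of plain Python (a toy illustration of the definition, not NetworKit's implementation):

```python
from itertools import combinations

def global_clustering(adj):
    """Fraction of closed triples: 3 * triangles / wedges."""
    closed = 0   # counts each triangle once per corner, i.e. 3 * #triangles
    wedges = 0   # connected triples centered at each node: d*(d-1)/2
    for v, nbrs in adj.items():
        d = len(nbrs)
        wedges += d * (d - 1) // 2
        for a, b in combinations(sorted(nbrs), 2):
            if b in adj[a]:
                closed += 1
    return closed / wedges

# toy graph: triangle 0-1-2 with a pendant node 3 attached to 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(global_clustering(adj))  # 0.6
```

Here one triangle sits on five wedges, so the coefficient is 3 * 1 / 5 = 0.6.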
A simple breadth-first search from a starting node can be performed as follows:
v = 0
bfs = graph.BFS(G, v)
bfs.run()
bfsdist = bfs.getDistances()
The return value is a list of distances from
v to other nodes - indexed by node id. For example, we can now calculate the mean distance from the starting node to all other nodes:
sum(bfsdist) / len(bfsdist)
11.339044943820225
Similarly, Dijkstra's algorithm yields shortest path distances from a starting node to all other nodes in a weighted graph. Because
PGPgiantcompo is an unweighted graph, the result is the same here:
dijkstra = graph.Dijkstra(G, v)
dijkstra.run()
spdist = dijkstra.getDistances()
sum(spdist) / len(spdist)
11.339044943820225
A $k$-core decomposition of a graph is performed by successively peeling away nodes with degree less than $k$. The remaining nodes form the $k$-core of the graph.
K = readGraph("input/karate.graph", Format.METIS)
coreDec = centrality.CoreDecomposition(K)
coreDec.run()
<_NetworKit.CoreDecomposition at 0x7f84d4785c50>
Core decomposition assigns a core number to each node, being the maximum $k$ for which a node is contained in the $k$-core. For this small graph, core numbers have the following range:
set(coreDec.scores())
{1.0, 2.0, 3.0, 4.0}
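The peeling procedure itself is simple enough to sketch in plain Python (toy code for illustration; NetworKit's CoreDecomposition is the efficient implementation):

```python
def core_numbers(adj):
    """Iteratively peel minimum-degree nodes; a node's core number is the
    largest k at which it still survives the peeling."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        # the smallest remaining degree fixes the core number of the next peel
        k = max(k, min(deg[v] for v in remaining))
        peel = [v for v in remaining if deg[v] <= k]
        while peel:
            v = peel.pop()
            core[v] = k
            remaining.remove(v)
            for u in adj[v]:
                if u in remaining:
                    deg[u] -= 1
                    if deg[u] <= k and u not in peel:
                        peel.append(u)
    return core

# toy graph: triangle 0-1-2 with a pendant node 3 attached to 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(core_numbers(adj).items()))  # [(0, 2), (1, 2), (2, 2), (3, 1)]
```

The pendant node is peeled at $k=1$; the triangle forms the 2-core.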
viztasks.drawGraph(K, nodeSizes=[(k**2)*20 for k in coreDec.scores()])
plt.show()
This section demonstrates the community detection capabilities of NetworKit. Community detection is concerned with identifying groups of nodes which are significantly more densely connected to each other than to the rest of the network.
Code for community detection is contained in the
community module. The module provides a top-level function to quickly perform community detection with a suitable algorithm and print some stats about the result.
community.detectCommunities(G)
PLM(balanced,pc,turbo) detected communities in 0.25756025314331055 [s]
solution properties:
-------------------  ----------
# communities        93
min community size   6
max community size   682
avg. community size  114.839
modularity           0.881864
-------------------  ----------
<_NetworKit.Partition at 0x7f84d560b8d0>
The function prints some statistics and returns the partition object representing the communities in the network as an assignment of nodes to community labels. Let's call the function again and capture the result.
communities = community.detectCommunities(G)
PLM(balanced,pc,turbo) detected communities in 0.09407258033752441 [s]
solution properties:
-------------------  ----------
# communities        98
min community size   6
max community size   656
avg. community size  108.98
modularity           0.881609
-------------------  ----------
Modularity is the primary measure for the quality of a community detection solution. The value is in the range
[-0.5,1] and usually depends both on the performance of the algorithm and the presence of distinctive community structures in the network.
community.Modularity().getQuality(communities, G)
0.8816090951171207
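Concretely, for an unweighted graph with m edges, modularity sums e_c/m - (d_c/2m)^2 over all communities c, where e_c counts intra-community edges and d_c is the total degree of community c. A minimal sketch on toy data (illustration only, not NetworKit code):

```python
def modularity(edges, part):
    """Q = sum_c [ e_c/m - (d_c / 2m)^2 ] over communities c."""
    m = len(edges)
    intra = {}    # e_c: edges with both endpoints in community c
    degsum = {}   # d_c: total degree of community c
    for u, v in edges:
        degsum[part[u]] = degsum.get(part[u], 0) + 1
        degsum[part[v]] = degsum.get(part[v], 0) + 1
        if part[u] == part[v]:
            intra[part[u]] = intra.get(part[u], 0) + 1
    return sum(intra.get(c, 0) / m - (degsum[c] / (2 * m)) ** 2
               for c in degsum)

# two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(modularity(edges, part))  # ~0.3571
```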
The result of community detection is a partition of the node set into disjoint subsets. It is represented by the
Partition data structure, which provides several methods for inspecting and manipulating a partition of a set of elements (which need not be the nodes of a graph).
type(communities)
_NetworKit.Partition
print("{0} elements assigned to {1} subsets".format(communities.numberOfElements(), communities.numberOfSubsets()))
10680 elements assigned to 98 subsets
print("the biggest subset has size {0}".format(max(communities.subsetSizes())))
the biggest subset has size 656
The contents of a partition object can be written to file in a simple format, in which each line i contains the subset id of node i.
community.writeCommunities(communities, "output/communties.partition")
wrote communities to: output/communties.partition
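The format described above is trivial to produce and consume by hand; a short sketch (hypothetical helper names, not part of NetworKit):

```python
def write_partition(part, path):
    """Line i holds the subset id of node i."""
    with open(path, "w") as f:
        for node in sorted(part):
            f.write("%d\n" % part[node])

def read_partition(path):
    """Rebuild the node -> subset-id mapping from such a file."""
    with open(path) as f:
        return {i: int(line) for i, line in enumerate(f)}
```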
The community detection function used a good default choice for an algorithm: PLM, our parallel implementation of the well-known Louvain method. It yields a high-quality solution at reasonably fast running times. Let us now apply a variation of this algorithm.
community.detectCommunities(G, algo=community.PLM(G, True))
PLM(balanced,refine,pc,turbo) detected communities in 0.10080695152282715 [s]
solution properties:
-------------------  ----------
# communities        96
min community size   6
max community size   683
avg. community size  111.25
modularity           0.883865
-------------------  ----------
<_NetworKit.Partition at 0x7f84d560b690>
We have switched on refinement, and we can see how modularity is slightly improved. For a small network like this, this takes only marginally longer.
We can easily plot the distribution of community sizes as follows. While the distribution is skewed, it does not seem to fit a power-law, as shown by a log-log plot.
sizes = communities.subsetSizes()
sizes.sort(reverse=True)
ax1 = plt.subplot(2,1,1)
ax1.set_ylabel("size")
ax1.plot(sizes)
ax2 = plt.subplot(2,1,2)
ax2.set_xscale("log")
ax2.set_yscale("log")
ax2.set_ylabel("size")
ax2.plot(sizes)
plt.show()
NetworKit supports the creation of subgraphs from an original graph and a set of nodes. This might be useful in case you want to analyze certain communities of a graph. Let's say that community 2 of the above result is of further interest, so we want a new graph that consists of the nodes and intra-cluster edges of community 2.
c2 = communities.getMembers(2)
g2 = G.subgraphFromNodes(c2)
communities.subsetSizeMap()[2]
166
g2.numberOfNodes()
166
As we can see, the number of nodes in our subgraph matches the number of nodes of community 2. The subgraph can be used like any other graph object, e.g. further community analysis:
communities2 = community.detectCommunities(g2)
PLM(balanced,pc,turbo) detected communities in 0.07078909873962402 [s]
solution properties:
-------------------  ---------
# communities        12
min community size   4
max community size   30
avg. community size  13.8333
modularity           0.752052
-------------------  ---------
viztasks.drawCommunityGraph(g2, communities2)
plt.show()
Centrality measures the relative importance of a node within a graph. Code for centrality analysis is grouped into the
centrality module.
We implement Brandes' algorithm for the exact calculation of betweenness centrality. While the algorithm is efficient, it still needs to calculate shortest paths between all pairs of nodes, so its scalability is limited. We demonstrate it here on the small Karate club graph.
K = readGraph("input/karate.graph", Format.METIS)
bc = centrality.Betweenness(K)
bc.run()
<_NetworKit.Betweenness at 0x7f84d5a274a8>
We have now calculated centrality values for the given graph, and can retrieve them either as an ordered ranking of nodes or as a list of values indexed by node id.
bc.ranking()[:10] # the 10 most central nodes
[(0, 462.14285714285717), (33, 321.1031746031745), (32, 153.38095238095238), (2, 151.70158730158732), (31, 146.0190476190476), (8, 59.05873015873016), (1, 56.95714285714285), (13, 48.43174603174603), (19, 34.2936507936508), (5, 31.666666666666668)]
Since exact calculation of betweenness scores is often out of reach, NetworKit provides an approximation algorithm based on path sampling. Here we estimate betweenness centrality in
PGPgiantcompo, with a probabilistic guarantee that the error is no larger than an additive constant $\epsilon$.
abc = centrality.ApproxBetweenness(G, epsilon=0.1)
abc.run()
<_NetworKit.ApproxBetweenness at 0x7f84d5a27128>
The 10 most central nodes according to betweenness are then
abc.ranking()[:10]
[(1143, 0.16365824308062585), (6655, 0.10589651022864024), (6555, 0.09025270758122747), (7297, 0.08303249097472928), (3156, 0.06859205776173288), (6744, 0.06618531889290014), (6932, 0.06498194945848378), (6098, 0.06257521058965104), (2258, 0.05655836341756921), (7369, 0.05174488567990374)]
Eigenvector centrality and its variant PageRank assign relative importance to nodes according to their connections, incorporating the idea that edges to high-scoring nodes contribute more. PageRank is a version of eigenvector centrality which introduces a damping factor, modeling a random web surfer which at some point stops following links and jumps to a random page. In PageRank theory, centrality is understood as the probability of such a web surfer to arrive on a certain page. Our implementation of both measures is based on parallel power iteration, a relatively simple eigensolver.
# Eigenvector centrality
ec = centrality.EigenvectorCentrality(K)
ec.run()
ec.ranking()[:10] # the 10 most central nodes
[(33, 0.37335860763538437), (0, 0.3554879627576304), (2, 0.31719212126079693), (32, 0.308641699961726), (1, 0.2659584485486244), (8, 0.2274061452435449), (13, 0.22647475684342064), (3, 0.2111796960531623), (31, 0.19103658572493037), (30, 0.1747599501690216)]
# PageRank
pr = centrality.PageRank(K, 1e-6)
pr.run()
pr.ranking()[:10] # the 10 most central nodes
[(33, 0.02941190490185556), (0, 0.029411888071820155), (32, 0.02941184486730034), (1, 0.02941180477938106), (2, 0.02941179873364914), (3, 0.029411771282676906), (31, 0.029411770725212477), (5, 0.029411768995095993), (6, 0.029411768995095993), (23, 0.029411763985014328)]
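The power iteration behind both kernels can be sketched in a few lines of plain Python (an illustrative toy, not NetworKit's parallel implementation; it assumes every node has at least one neighbour, so there are no dangling nodes):

```python
def pagerank(adj, d=0.85, tol=1e-9):
    """Plain power iteration: rank flows along links, damped by d."""
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    while True:
        nxt = {v: (1.0 - d) / n for v in adj}   # random-jump contribution
        for v, nbrs in adj.items():
            share = d * rank[v] / len(nbrs)     # v splits its rank among neighbours
            for u in nbrs:
                nxt[u] += share
        if max(abs(nxt[v] - rank[v]) for v in adj) < tol:
            return nxt
        rank = nxt

# toy graph: triangle 0-1-2 with a pendant node 3 attached to 2
r = pagerank({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]})
print(max(r, key=r.get))  # 2 -- the best-connected node ranks first
```

Since no rank is lost, the scores stay a probability distribution, which matches the random-surfer interpretation above.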
import networkx as nx
nxG = nxadapter.nk2nx(G) # convert from NetworKit.Graph to networkx.Graph
print(nx.degree_assortativity_coefficient(nxG))
0.238211371708
An important subfield of network science is the design and analysis of generative models. A variety of generative models have been proposed with the aim of reproducing one or several of the properties we find in real-world complex networks. NetworKit includes generator algorithms for several of them.
The Erdős–Rényi model is the most basic random graph model, in which each edge exists with the same uniform probability. NetworKit provides an efficient generator:
ERG = generators.ErdosRenyiGenerator(1000, 0.1).generate()
profiling.Profile.create(ERG, preset="minimal").show()
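The model itself can be sketched in plain Python (toy code; NetworKit's generator is the efficient one):

```python
import random

def erdos_renyi(n, p, seed=None):
    """Include each of the n*(n-1)/2 possible edges independently with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

edges = erdos_renyi(1000, 0.1, seed=42)
print(len(edges))  # close to the expectation p * n * (n - 1) / 2 = 49950
```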
Where are the connections stored, are they in a list, how to I access them?
I have a (-messy-) program that will allow the users to set their name, and when they send input to the server it will come up with "Received: Blah from username", but I would like to send that to all of the people who are connected.
Also, how can I close the connection to a particular client?
from twisted.internet import reactor, protocol

PORT = 6661

class User(protocol.Protocol):
    connectionstat = 1
    name = ""

    def connectionMade(self):
        self.transport.write("Hello, What is your name?")

    def dataReceived(self, data):
        if self.connectionstat == 1:
            self.name = data
            self.connectionstat = 2
        else:
            print "Received: " + data.rstrip('\n') + " from " + self.name
            self.transport.write("You Sent: " + data)

def main():
    factory = protocol.ServerFactory()
    factory.protocol = User
    reactor.listenTCP(PORT, factory)
    print "Running Echo..."
    reactor.run()

if __name__ == '__main__':
    main()
Thanks | https://www.daniweb.com/programming/software-development/threads/364837/twisted-connections | CC-MAIN-2017-47 | en | refinedweb |
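A common answer to both questions is to let the factory keep a list of live protocol instances. The sketch below is a framework-agnostic Python illustration of that pattern (hypothetical class and method names, with a plain list standing in for the transport); in real Twisted code the registration would go in connectionMade/connectionLost, broadcasting would call transport.write(), and transport.loseConnection() would close a client.

```python
class ChatFactory:
    def __init__(self):
        self.clients = []          # all currently connected protocols

    def broadcast(self, message, sender=None):
        for client in self.clients:
            if client is not sender:
                client.send(message)

class ChatProtocol:
    def __init__(self, factory, transport):
        self.factory = factory
        self.transport = transport  # stand-in for a Twisted transport
        self.name = None

    def connection_made(self):
        self.factory.clients.append(self)

    def connection_lost(self):
        self.factory.clients.remove(self)

    def send(self, message):
        self.transport.append(message)  # real code: self.transport.write(...)

    def data_received(self, data):
        if self.name is None:
            self.name = data            # first message sets the name
        else:
            self.factory.broadcast("%s: %s" % (self.name, data), sender=self)

# tiny demo of the broadcast
factory = ChatFactory()
alice = ChatProtocol(factory, transport=[])
alice.connection_made()
alice.data_received("alice")
bob = ChatProtocol(factory, transport=[])
bob.connection_made()
bob.data_received("bob")
alice.data_received("hi everyone")
print(bob.transport)  # ['alice: hi everyone']
```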
See also
- File access sample
- Access to user resources using the Windows Runtime
- File access and permissions in Windows Store apps
- Quickstart: Accessing files programmatically
- Reference
- StorageFile class
- StorageFolder class
- Windows.Storage.Search namespace | https://msdn.microsoft.com/en-us/library/windows/apps/xaml/windows.storage.knownfolders.aspx?cs-save-lang=1&cs-lang=vb | CC-MAIN-2015-11 | en | refinedweb |
Profile
All time reputation: 405
noah's Recent JavaScript Snippets
JavaScript regex javascript filter match regular Expression regulareexpression
Use a variable in a JavaScript regular expression
posted on May 27,
JavaScript oo namespace jslint
Conditionally instantiate an object
posted on October 28, 2008 by noah
JavaScript time utc
Get a UTC code from human-readable information
posted on October 23, 2008 by noah
JavaScript basic variable conditional operators logic and boolean easy tips crockford refactoring 2008 logical assignment saved by 2 people
Conditionally assign a value to a variable
posted on June 23, 2008 by noah

JavaScript time performance optimization profiling saved by 2 people
Get the time elapsed between two intervals
posted on September 24, 2007 by noah

JavaScript wrapper loop iterator syntacticsugar
for loop wrapper
posted on June 15, 2007 by noah
JavaScript DOM work robzand utilities saved by 1 person
accept either an element or an id as a parameter
posted on June 12, 2007 by noah
JavaScript fun easy stupid trivial loops
infinite loop alert
posted on June 12, 2007 by noah
JavaScript qblog
in Qblog Tag Browser, show tags grouped by frequency
posted on June 11, 2007 by noah
JavaScript css ie crossbrowser opacity DOM web20 standards dhtml chrome transition effect transparent portable saved by 3 people
Change opacity in all browsers
posted on June 11, 2007 by noah

JavaScript browser cookies web state ui interactive ixd tracking metrics saved by 1 person
Get and set state with cookie II
posted on May 22, 2007 by noah
JavaScript crossbrowser DOM height dimensions robzand saved by 4 people
Height of window
posted on May 15, 2007 by noah
JavaScript css toggle DOM simple dhtml boolean easy beginner
Toggle the className of a DOM element
posted on May 4, 2007 by noah
JavaScript check test DOM boolean boundary saved by 1 person
Does an element exist?
posted on May 4, 2007 by noah | http://snipplr.com/users/noah/language/javascript | CC-MAIN-2015-11 | en | refinedweb |
On 08/13/2012 10:19 AM, Eric Blake wrote:
>.

Found the root of the issue - it's libvirt's fault.  Gnulib's maint.mk
takes the initial definition of local-checks-to-skip, and from that,
creates a macro 'local-checks' using a := rule:

local-check := \
  $(patsubst sc_%, sc_%.z, \
    $(filter-out $(local-checks-to-skip), $(local-checks-available)))

But libvirt's cfg.mk is conditionally running the local-checks-to-skip
rule, via:

# Most developers don't run 'make distcheck'.  We want the official
# dist to be secure, but don't want to penalize other developers
# using a distro that has not yet picked up the automake fix.
# FIXME remove this ifeq (making the syntax check unconditional)
# once fixed automake (1.11.6 or 1.12.2+) is more common.
ifeq ($(filter dist%, $(MAKECMDGOALS)), )
local-checks-to-skip += sc_vulnerable_makefile_CVE-2012-3386
else
distdir: sc_vulnerable_makefile_CVE-2012-3386
endif

Because distdir depends on the full sc_ name, rather than the sc_.z
rewrite, maint.mk's timing rules don't get properly run, so the
.sc-start-* file doesn't get cleaned up.  I think with a bit more
tweaking to libvirt's cfg.mk, I can get this working again.

Meanwhile, would gnulib like to incorporate this hack from libvirt?
After all, the current Automake vulnerability only affects you if you
run 'make dist' or 'make distcheck'; it does not impact normal
day-to-day development.  Therefore, running the syntax check only in
the vulnerable cases, and in such a way that the syntax check stops
make before the vulnerability can actually be triggered, without
penalizing day-to-day development for people relying on their distro
rather than using a hand-built automake, seems like it would be nice
to share among multiple packages.

[It's a shame that more than a month after the CVE was reported and
both Fedora 17 and RHEL 6.3 are still vulnerable, but that's a story
for another day.]

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library
Attachment:
signature.asc
Description: OpenPGP digital signature | https://www.redhat.com/archives/libvir-list/2012-August/msg01137.html | CC-MAIN-2015-11 | en | refinedweb |
27 March 2011 18:24 [Source: ICIS news]
SAN ANTONIO --
“We are watching how it will affect PBR supply,” Supreme Petrochem executive director for styrenics, Nageswaran Gopal, told ICIS on the sidelines of the International Petrochemical Conference (IPC).
Gopal, however, said that the disaster has not dented supply coming from suppliers such as Asahi Kasei and Nippon Shokubai.
The Indian company procures a monthly supply of 200,000-300,000 tonnes of PBR from
But the company is currently comfortable with its PBR inventory, said Gopal.
With just a fifth of the company’s feedstock needs coming from
Hosted by the National Petrochemical & Refiners Association (NPRA), the IPC continues through Tuesday.
($1 = €0.71) | http://www.icis.com/Articles/2011/03/27/9447424/npra-11-indias-supreme-petrochem-fears-pbr-shortage.html | CC-MAIN-2015-11 | en | refinedweb |
28 September 2011 23:27 [Source: ICIS news]
WASHINGTON (ICIS)--The US Environmental Protection Agency (EPA) on Wednesday said that it strongly disagrees with an internal finding by its own inspector general that the agency failed to meet federal scientific peer review standards in developing broad climate change regulations.
An investigation report by the EPA’s inspector general issued on Wednesday said that the agency failed to meet its own scientific requirements when it ruled in late 2009 that greenhouse gases (GHG) pose a risk to human health and must be regulated and reduced under an EPA mandate. Since that finding, the EPA has issued a number of regulations and restrictions to halt and reduce emissions of greenhouse gases.
Industry argues that the EPA’s many climate rules will drive US electricity costs much higher and force closure of many production facilities and a resulting loss of jobs.
“We strongly disagree with the inspector general’s findings,” the agency said in a statement.
Among the investigation’s findings, EPA failed to test the validity of the outside scientific or technical information used by the agency to support its endangerment finding.
In its December 2009 endangerment finding and supporting documents, the EPA said it relied heavily on information in reports by the UN’s Intergovernmental Panel on Climate Change (IPCC), data that has been challenged by several members of Congress and other critics.
But in responding to the inspector general’s criticism and in defence of the IPCC research, the”.
Sceptical senators in the US Congress have called for hearings to reconsider EPA’s endangerment finding.
Personal draft. Not yet implemented; little review.
Blindfold can automatically create a parser for a data format or formal language if it has a suitable blindfold grammar. This process is generally hidden behind the interface of Pool.attach(uri), which gives you an RDF view of the document, file, or database identified by the URI. Here we discusses how Pool.attach() determines the identity and definition of the grammar to use for this function.
Turned around, this document addresses the question: How do I publish formal knowledge on the web, in the language of my choice, and still let the world know exactly what it means? (I have no idea if many people will actually want to do this, or if they'll just go on (1) publishing in the language of their choice without publishing their semantics and (2) publishing in standardized formal languages like RDF/XML. These techniques also work, in a somewhat simplified "server-side" form, for simultaneously publishing in various standard formats.)
Each grammar is identified by a URI. The URI should serve content such that doing a Pool.attach() on the URI provides a view to the language syntax and semantics, described using blindfold's grammar ontology [@@@]. Blindfold can then use this description to build a parser for the language. Thus grammars are defined using other grammars, in a recursive process that ends in some grammar for which blindfold already has a parser.
In some cases, it may be desirable to actually provide the grammar text in a bootstrap language, or provide a list of possible places from which to download the grammar. These are desirable features for situations where the web is not reliable.
Blindfold's configuration information includes a sequence of methods which Pool.attach() should use to identify the appropriate grammar. Each method is tried in order until one succeeds. The methods described below are the standard methods, but others may be added into the sequence before or after them, as appropriate for an application. This sequence is, however, what information providers should assume is being used. (This is kind of like CSS: the author gives some style information, while the client provides other information and may chose to over-ride the author's choices.)
The methods are summarized here, and detailed with examples below.
Look for a special header identifying the grammar to use. This can work well for contents obtained via SMTP-like protocols, including HTTP, although headers may be hard to set and/or see in some applications.
Look for a root element attribute (in a w3 namespace) identifying the grammar. This allows document authors to specify a grammar, overriding the standard namespace semantics of method 3. This method can be used with minimal impact or changes in HTML documents or invalid XML. (Of course, the blindfold grammar may support a stronger notion of validity, but it may also be a more-trivial "scraping"-style grammar.)
Look at each namespace used in the document as identifying a grammar (or a RDDL document linking to a grammar); create a new grammar which is the intersection/conjunction of these grammars, and use that for the document. This is perhaps the cleanest (and yet most complex) method, in line (I imagine) with the more ambitious camp of the XML designers.
Look for a certain predefined pattern of characters near the beginning of the text. This method can be used with any content, with or without media-type information or headers. It is kind of a hack, but it's probably a useful one. The danger that the magic string will occur by accident can be arbitrarily reduced. It's odd, but in some situations, it's the best we can do.
In some applications, an information provider will supply a document and an authoritative language definition for it. In others, the definition will come from the client software or from a third party. The situation is similar to cascading style sheets (CSS): in general, clients rely on the server to control document interpretation, but if it fails to do so properly (perhaps because of unanticipated needs) the client may take over.
Blindfold's support for clients identifying the language is simple: GrammarManager.obtainParser() lets you get a parser from the URI for its language definition. 3rd party grammars are complicated by a number of factors (like trust) and have not yet been addressed. The rest of this page concerns the case where the language is identified by the server.
The basic problem is that we want a document's author/publisher to be able to express to clients what formal language definition should be used, but not all formats and protocols make this possible.
If the document has some sort of header (in HTTP, SMTP, whatever), we can look for this:
Formal-Language-Definition:
Mark Nottingham tells me that the X- approach is neither necessary nor recommended for HTTP (or even SMTP, AFAHK).
Of course you should probably use the HTTP Extension Framework:
Opt: ""; ns=10
10-Formal-Language-Definition:
TimBL suggests trying to fix Content-Type while we're at it, but I don't quite see how.
RDF-Property: is nice, but doesn't work because you can't meaningfully repeat header entries (or can you?). N3-Declaration: is cool, but far-fetched.
XML documents can simply add an attribute in some w3 namespace to the root element.
<foo xmlns: ... </foo>
TimBL argues that this might be a bad practice, since the semantics should come from the element (namespace) itself.
The namespace of the document's root element can lead to a RDDL document, which (following an xlink) can lead to our language definition. This does not really work for xhtml.
<html xmlns="" xmlns:
<head>
  <title>Some Documentation About My Language</title>
  <link href="" type="text/css" rel="stylesheet" />
</head>
<body>
  <h1>My Language</h1>
  <rddl:resource xlink:
</body>
</html>
Oops. Um, where does the element name itself come in? Is that the production name in the language?
We can look at the namespace of every XML element, and try to get a language definition for each one. (If they are part of the same namespace, they are part of the same language, and this is natural.) If an element's grammar allows children, the constraint expressions nest/merge naturally.
Here's an example. We have two XML schemas: one about books and one about people.
@@@@ in progress.
<BookCatalog>
  <Book>
    <Title>Weaving the Web</Title>
    <ISBN>0-06-251587-X</ISBN>
    <AuthorName>Tim Berners-Lee</AuthorName>
    <AuthorInfo />
  </Book>
</BookCatalog>
-*- formal-language-URI: ""; -*-
<foo xmlns: ... </foo>
This should be explored in the context of 3rd party definitions, I think.
Sandro Hawke
One of the first tasks when using GemFire and Spring is to configure the data grid through the IoC container. While this is possible out of the box, the configuration tends to be verbose and only address basic cases. To address this problem, the Spring GemFire project provides several classes that enable the configuration of distributed caches or regions to support a variety of scenarios with minimal effort.
To simplify configuration, SGF provides a dedicated namespace for most of its components. However, one can opt to configure the beans directly through the usual <bean> definition. For more information about XML Schema-based configuration in Spring, see this appendix in the Spring Framework reference documentation.
To use the SGF namespace, one just needs to import it inside the configuration:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns=""
       xmlns:xsi=""
       xmlns:gfe=""
       xsi:schemaLocation="">
  <bean id ... >
  <gfe:cache ...>
</beans>
Once declared, the namespace elements can be used simply by appending the aforementioned prefix. Note that it is possible to change the default namespace,
for example from
<beans> to
<gfe>. This is useful for configuration composed mainly of GemFire components as
it avoids declaring the prefix. To achieve this, simply swap the namespace prefix declaration above:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns=""
       xmlns:xsi=""
       xmlns:
  <beans:bean id ... >
  <cache ...>
</beans>
For the remainder of this doc, to improve readability, the XML examples will simply refer to the
<gfe> namespace
without the namespace declaration, where possible.
In order to use the GemFire Fabric, one needs to either create a new
Cache or connect to an existing one. Note that, in
the current version of GemFire, there can be only one opened cache per VM
(or classloader to be technically correct). In most cases the cache is
created once and then all other consumers connect to it.
In its simplest form, a cache can be defined in one line:
<gfe:cache />
The declaration above declares a bean (
CacheFactoryBean)
for the GemFire Cache, named
gemfire-cache. All the other SGF components use this
naming convention if no name is specified, allowing for very concise configurations. The definition above will try to connect to
an existing cache and, in case one does not exist, create it. Since no
additional properties were specified, the created cache uses the default
cache configuration. Especially in environments with opened caches, this basic
configuration can go a long way.
For scenarios where the cache needs to be configured, the user can pass in a reference to the GemFire configuration file:
<gfe:cache
In this example, if the cache needs to be created, it will use the
file named
cache.xml located in the classpath root.
Only if the cache is created will the configuration file be used.
In addition to referencing an external configuration file one can
specify GemFire settings directly through Java
Properties. This can be quite handy when just a few
settings need to be changed.
To set up properties, one can either use the
properties element inside the
util namespace
to declare or load properties files (the latter is recommended for externalizing environment specific settings outside the application
configuration):
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <gfe:cache <util:properties </beans>
Or one can fall back to a raw
<beans> declaration:
<bean id="cache-with-props" class="org.springframework.data.gemfire.CacheFactoryBean"> <property name="properties"> <props> <prop key="bind-address">127.0.0.1</prop> </props> </property> </bean>
In this last example, the SGF classes are declared and configured directly without relying on the namespace. As one can tell, this approach is a generic one, exposing more of the backing infrastructure.
It is worth pointing out again that the cache settings apply only if the cache needs to be created, that is, if no opened cache is in existence; otherwise the existing cache will be used and the configuration will simply be discarded.
Once the
Cache is configured, one
needs to configure one or more
Regions to
interact with the data fabric. SGF allows various region types to be configured and created directly from Spring or,
in case they are created directly in GemFire, retrieved as such.
For more information about the various region types and their capabilities as well as configuration options, please refer to the GemFire Developer's Guide and community site.
For consuming but not creating
Regions (for example, in case
the regions are already configured through GemFire native configuration, the
cache.xml),
one can use the
lookup-region element. Simply declare the target region name through the
name attribute; for example to declare a bean definition, named
region-bean
for an existing region named
orders one can use the following definition:
<gfe:lookup-region
If the
name is not specified, the bean name will be used automatically. The example above
becomes:
<!-- lookup for a region called 'orders' --> <gfe:lookup-region
Note that in the previous examples, since no cache name was defined, the default SGF naming convention (
gemfire-cache)
was used. If that is not an option, one can point to the cache bean through the
cache-ref attribute:
<gfe:cache <gfe:lookup-region
The
lookup-region provides a simple way of retrieving existing, pre-configured regions without exposing
the region semantics or setup infrastructure.
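Since the attribute values were stripped from the lookup-region snippets above, here is a hedged reconstruction of the three variants discussed (attribute names follow the SGF schema; the ids and region names match the text):

```xml
<!-- lookup with an explicit region name -->
<gfe:lookup-region id="region-bean" name="orders"/>

<!-- lookup for a region called 'orders' (bean name used as region name) -->
<gfe:lookup-region id="orders"/>

<!-- lookup against an explicitly named cache bean -->
<gfe:cache id="cache"/>
<gfe:lookup-region id="region-bean" name="orders" cache-ref="cache"/>
```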
One of the common region types supported by GemFire is the replicated region, or replica. In short, a replica keeps a full copy of the region's entries on every member that hosts it.
SGF offers a dedicated element for creating replicas in the form of
replicated-region element. A minimal declaration looks as follows
(again, the example will not setup the cache wiring, relying on the SGF namespace naming conventions):
<gfe:replicated-region
Here, a replicated region is created (if one doesn't exist already). The name of the region is the same as the bean name (
simple-replica) and
the bean assumes the existence of a GemFire cache named
gemfire-cache.
When setting up a region, it's fairly common to associate various
CacheLoaders,
CacheListeners and
CacheWriters with it. These components can be either referenced or declared inline inside the region declaration.
Below is an example, showing both styles:
<gfe:replicated-region <gfe:cache-listener> <!-- nested cache listener reference --> <ref bean="c-listener"/> <!-- nested cache listener declaration --> <bean class="some.pkg.SimpleCacheListener"/> </gfe:cache-listener> <!-- loader reference --> <gfe:cache-loader <!-- writer reference --> <gfe:cache-writer </gfe:replicated-region>
The following table offers a quick overview of the most important configuration options (names, possible values and short descriptions) for each of the settings supported by the
replicated-region element. Please see the storage and eviction section for the relevant configuration.
Another region type supported out of the box by the SGF namespace, is the partitioned region. To quote again the GemFire docs:
A partition can be created by SGF through the partitioned-region element.
The following table offers a quick overview of the most important configuration options (names, possible values and short descriptions) for each of the settings supported by the partitioned-region element. Please see the storage and eviction section for the relevant configuration.
GemFire supports various deployment topologies for managing and distributing data. The topic is outside the scope of this documentation; however, to quickly recap, they
can be briefly categorized as: peer-to-peer (p2p), client-server (or super-peer cache network) and wide area cache network (or WAN). In the last two scenarios, it is common
to declare client regions which connect to a backing cache server (or super peer). SGF offers dedicated support for such configuration through the
client-region and
pool elements.
As the names imply, the former defines a client region while the latter defines connection pools to be used/shared by the various client regions.
Below is a usual configuration for a client region:
<!-- client region declaration --> <gfe:client-region <gfe:cache-listener </gfe:client-region> <bean id="c-listener" class="some.pkg.SimpleCacheListener"/> <!-- pool declaration --> <gfe:pool <gfe:locator </gfe:pool>
Just as the other region types,
client-region allows defining
CacheListeners. It also relies on the same naming conventions
in case the region name or the cache are not set explicitly. However, it also requires a connection
pool to be specified for connecting to the server. Each client
can have its own pool or they can share the same one.
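Because the attributes were stripped from the client-region example above, a fuller sketch might look like the following (host, port and bean names are hypothetical; attribute names follow the SGF schema):

```xml
<!-- client region declaration, bound to the pool below -->
<gfe:client-region id="simple-client" pool-name="gemfire-pool">
    <gfe:cache-listener ref="c-listener"/>
</gfe:client-region>

<bean id="c-listener" class="some.pkg.SimpleCacheListener"/>

<!-- pool declaration, connecting through a locator -->
<gfe:pool id="gemfire-pool">
    <gfe:locator host="localhost" port="40403"/>
</gfe:pool>
```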
For a full list of options to set on the client and especially on the pool, please refer to the SGF schema (Appendix A, Spring GemFire Integration Schema) and the GemFire documentation.
To minimize network traffic, each client can define its own 'interest', pointing out to GemFire, the data it actually needs. In SGF, interests can be defined for each client, both key-based and regular-expression-based types being supported; for example:
<gfe:client-region <gfe:key-interest <bean id="key" class="java.lang.String"/> </gfe:key-interest> <gfe:regex-interest </gfe:client-region>
GemFire can use disk as a secondary storage for persisting regions and/or overflow (known as data pagination or eviction to disk). SGF allows such options to be configured
directly from Spring through the
disk-store element, available on
replicated-region and
partitioned-region as well as
client-region.
A disk store defines how that particular region can use the disk and how much space it has available. Multiple directories can be defined in a disk store such as in our example below:
<gfe:partitioned-region <gfe:disk-store <gfe:disk-dir <gfe:disk-dir </gfe:disk-store> </gfe:partitioned-region>
In general, for maximum efficiency, it is recommended that each region that accesses the disk uses a disk store configuration.
For the full set of options and their meaning please refer to the Appendix A, Spring GemFire Integration Schema and GemFire documentation.
Both partitioned and replicated regions can be made persistent; that is, their content is written to disk so the data can be recovered when the cache is restarted.
With SGF, to enable persistence, simply set the
persistent attribute to true on
replicated-region,
partitioned-region or
client-region:
<gfe:partitioned-region
When persisting regions, it is recommended to configure the storage through the
disk-store element for maximum efficiency.
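A hedged sketch combining the persistent attribute with a dedicated disk store (the directory and size values are hypothetical; attribute names follow the SGF schema):

```xml
<gfe:partitioned-region id="persistent-partition" persistent="true">
    <gfe:disk-store>
        <gfe:disk-dir location="/tmp/gemfire/store" max-size="20"/>
    </gfe:disk-store>
</gfe:partitioned-region>
```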
Based on various constraints, each region can have an eviction policy in place for
evicting data from memory. Currently, in GemFire
eviction applies to the least recently used entry (also known as LRU).
Evicted entries are either destroyed or paged to disk (also known as overflow).
SGF allows the eviction policy to be configured directly on the region elements. As with persistence, when using overflow, it is recommended to configure the storage through the
disk-store element for maximum efficiency.
For a detailed description of eviction policies, see the GemFire documentation (such as this page).
SGF namespaces allow short and easy configuration of the major GemFire regions and associated entities. However, there might be corner cases where the namespaces are not enough, where
a certain combination or set of attributes needs to be used. For such situations, using the SGF
FactoryBeans directly is a possible alternative as it gives
access to the full set of options at the expense of conciseness.
As a warm up, below are some common configurations, declared through raw
beans definitions.
A basic configuration looks as follows:
<bean id="basic" class="org.springframework.data.gemfire.RegionFactoryBean"> <property name="cache"> <bean class="org.springframework.data.gemfire.CacheFactoryBean"/> </property> <property name="name" value="basic"/> </bean>
Notice how the GemFire cache definition has been nested into the declaring region definition. Let's add more regions and make the cache a top level bean.
Since the region bean definition name is usually the same as that
of the region, the
name property can be omitted (the
bean name will be used automatically). Additionally, by using the
p
namespace, the configuration can be simplified even more:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <!-- shared cache across regions --> <bean id="cache" class="org.springframework.data.gemfire.CacheFactoryBean"/> <!-- region named 'basic' --> <bean id="basic" class="org.springframework.data.gemfire.RegionFactoryBean" p: <!-- region with a name different then the bean definition --> <bean id="root-region" class="org.springframework.data.gemfire.RegionFactoryBean" p: </beans>
It is worth pointing out, that for the vast majority of cases configuring the cache loader, listener and writer through the Spring container is preferred since the same instances can be reused across multiple regions and additionally, the instances themselves can benefit from the container's rich feature set:
<bean id="cacheLogger" class="org.some.pkg.CacheLogger"/> <bean id="customized-region" class="org.springframework.data.gemfire.RegionFactoryBean" p: <property name="cacheListeners"> <array> <ref bean="cacheLogger"/> <bean class="org.some.other.pkg.SysoutLogger"/> </array> </property> <property name="cacheLoader"><bean class="org.some.pkg.CacheLoad"/></property> <property name="cacheWriter"><bean class="org.some.pkg.CacheWrite"/></property> </bean> <bean id="local-region" class="org.springframework.data.gemfire.RegionFactoryBean" p: <property name="cacheListeners" ref="cacheLogger"/> </bean>
For scenarios where a CacheServer is used and
clients need to be configured and the namespace is not an option, SGF offers a
dedicated configuration class named:
ClientRegionFactoryBean. This allows client
interests to be registered in both key and regex
form through
Interest and
RegexInterest classes in the
org.springframework.data.gemfire.client package:
<bean id="interested-client" class="org.springframework.data.gemfire.client.ClientRegionFactoryBean" p: <property name="interests"> <array> <!-- key-based interest --> <bean class="org.springframework.data.gemfire.client.Interest" p: <!-- regex-based interest --> <bean class="org.springframework.data.gemfire.client.RegexInterest" p: </array> </property> </bean>
Users that need fine control over a region can configure it in Spring by using the
attributes property. To ease declarative configuration in Spring,
SGF provides two
FactoryBeans for creating
RegionAttributes and
PartitionAttributes,
namely
RegionAttributesFactory and
PartitionAttributesFactory. See below an example of configuring a partitioned region through Spring
XML:
<bean id="partitioned-region" class="org.springframework.data.gemfire.RegionFactoryBean" p: <property name="attributes"> <bean class="org.springframework.data.gemfire.RegionAttributesFactory" p: <property name="partitionAttributes"> <bean class="org.springframework.data.gemfire.PartitionAttributesFactory" p: </property> </bean> </property> </bean>
By using the attribute factories above, one can reduce the size of the
cache.xml or even eliminate it altogether.
With SGF, GemFire regions, pools and cache can be configured either through Spring or directly inside GemFire's native
cache.xml file. While both are valid
approaches, it's worth pointing out that Spring's powerful DI container and AOP functionality makes it very easy to wire GemFire into an application. For example, configuring a region's
cache loader, listener and writer through the Spring container is preferred since the same instances can be reused across multiple regions; additionally, they are easier to configure
due to the presence of DI, and this eliminates the need to implement GemFire's
Declarable interface (see Section 2.4, “Wiring
Declarable components” on chapter
on how you can still use them yet benefit from Spring's DI container).
Whatever route one chooses to go, SGF supports both approaches, allowing for easy migration between them without forcing an upfront decision.
Author: rhillegas
Date: Thu Jun 8 14:07:31 2006
New Revision: 412859
URL:
Log:
DERBY-1379: Committed Olav's autoload.diff. This fixes the problem which caused all of the
nist tests to fail when derbyall was run against jar files under jdk1.6 with the db2jcc jar
in the classpath.
Modified:
db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/harness/RunList.java
Modified: db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/harness/RunList.java
URL:
==============================================================================
--- db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/harness/RunList.java
(original)
+++ db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/harness/RunList.java
Thu Jun 8 14:07:31 2006
@@ -42,6 +42,9 @@
import java.util.Properties;
import java.util.Vector;
import java.util.StringTokenizer;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+
public class RunList
{
@@ -263,6 +266,16 @@
( new FileOutputStream(skipFile.getCanonicalPath(),true) );
}
+ // Due to autoloading of JDBC drivers introduced in JDBC4
+ // (see DERBY-930) the embedded driver and Derby engine
+ // might already have been loaded. To ensure that the
+ // embedded driver and engine used by the tests run in
+ // this suite are configured to use the correct
+ // property values we try to unload the embedded driver
+ if (useprocess == false) {
+ unloadEmbeddedDriver();
+ }
+
System.out.println("Now run the suite's tests");
//System.out.println("shutdownurl: " + shutdownurl);
@@ -1609,5 +1622,25 @@
}
+
+ /**
+ * Unloads the embedded JDBC driver and Derby engine in case
+ * is has already been loaded.
+ * The purpose for doing this is that using an embedded engine
+ * that already is loaded makes it impossible to set new
+ * system properties for each individual suite or test.
+ */
+ private static void unloadEmbeddedDriver() {
+ // Attempt to unload the embedded driver and engine
+ try {
+ DriverManager.getConnection("jdbc:derby:;shutdown=true");
+ } catch (SQLException se) {
+ // Ignore any exception thrown
+ }
+
+ // Call the garbage collector as spesified in the Derby doc
+ // for how to get rid of the classes that has been loaded
+ System.gc();
+ }
} | http://mail-archives.apache.org/mod_mbox/db-derby-commits/200606.mbox/%3C20060608210731.B215E1A983A@eris.apache.org%3E | CC-MAIN-2015-11 | en | refinedweb |
Corporations: Introduction and Operating Rules CHAPTER 17 CORPORATIONS: INTRODUCTION AND OPERATING RULES

TRUE/FALSE

1. Jeff is the sole shareholder of a C corporation. In 2007, the corporation sold a capital asset for a gain of $20,000. Jeff is required to report the capital gain on his individual income tax return for 2007, and the gain is subject to a maximum rate of 15%.
ANS: F Shareholders do not report capital gains from a C corporation. PTS: 1 REF: p. 17-6

2. Herman and Henry are equal partners in Badger Enterprises, a calendar year partnership. During the year, Badger Enterprises had $305,000 gross income and $230,000 operating expenses. Badger distributed $20,000 to each of the partners. Herman and Henry each must report $37,500 of income from the partnership.
ANS: T The partnership is not a taxpaying entity. Its profit (loss) and separate items flow through to the partners. The partnership's Form 1065 reports net profit of $75,000 ($305,000 income − $230,000 expenses). Herman and Henry both receive a Schedule K-1 reporting net profit of $37,500. Each partner reports net profit of $37,500 on his own return. PTS: 1 REF: Example 2

3. Robin is a 50% shareholder in Robin-Wren, an S corporation. Robin-Wren earned net income of $100,000 during the year, and Robin received a distribution of $35,000 from the corporation. Robin must report a $35,000 dividend on his individual Federal income tax return (Form 1040).
ANS: F The shareholders of an S corporation report their shares of net income or loss, regardless of how much of the income was withdrawn from the corporation. Robin must report income of $50,000. PTS: 1 REF: p. 17-3

4. Jeff owns a 40% interest in a partnership that earned $200,000 in the current year. He also owns 40% of the stock in an S corporation that earned $200,000 during the year. The corporation did not make any distributions, and the partnership distributed $40,000 to him. Jeff must report $80,000 of income on his individual tax return.
ANS: F Jeff must report his $80,000 (40% × $200,000) share of the partnership's income on his individual tax return. Jeff also reports his $80,000 share of the income earned by the S corporation. The $40,000 distribution does not affect his income. PTS: 1 REF: p. 17-3 | p. 17-4

5. Quail Corporation is a C corporation with net income of $400,000 during 2007. If Quail paid dividends of $140,000 to its shareholders, the corporation must pay tax on $260,000 of net income. Shareholders must report the $140,000 of dividends as income.
ANS: F Quail Corporation must pay tax on the $400,000 of corporate net income. Shareholders must pay tax on the $140,000 of dividends received from the corporation. This is commonly referred to as double taxation.
Note beforehand: I’m real happy that django has class based views now!
I’m an old zope/plone guy. And that’s object oriented all the way. So Django’s views always seemed a bit strange, being just a function you call. A simple function is fine, but you often want/need helper functions. So your views.py soon starts to look like:
def homepage(...): def overview(...): def _helper_function1(...): def _helper_function2(...): def map(..): def _helper_function3(...): def _helper_function4(...):
So I greeted Django 1.3’s new class based views with enthousiasm! Visually, the same views.py would now be something like:
class Homepage(TemplateView): class Overview(TemplateView): class Map(TemplateView):
Helper functions would now be methods inside the classes. And the possibility of class inheritance makes it easy to customize views. It would limit the amount of helper functions you’d need to import from other applications: you’d get them for free from your baseclass.
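The shape of that refactoring can be sketched without Django at all: helper functions become methods, and subclasses get them for free through inheritance (the class names here are hypothetical, not Django's API):

```python
class BaseView:
    """Stand-in for a class based view: helpers live on the class."""

    def get_context_data(self):
        # The 'view' itself just combines the helpers.
        return {"title": self._title(), "items": self._items()}

    # Former module-level _helper_function1/_helper_function2, now methods.
    def _title(self):
        return "homepage"

    def _items(self):
        return []


class OverviewView(BaseView):
    """Customizing a view is just subclassing and overriding a helper."""

    def _title(self):
        return "overview"

    def _items(self):
        return ["a", "b"]
```

Compared to the function-based views.py sketched earlier, the helpers no longer need to be imported from other modules; the base class provides them.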
I looked at django’s source code. Hello? 2004’s zope code wants its mixins back! Remember the “who cares about zope” keynote from Martijn Faassen at this year’s djangocon.eu? He said that zope is sometimes a couple of years ahead of django regarding choices that it makes.
Looking at django’s class based views means seeing lots of multiple inheritance with lots of mixin classes. TemplateResponseMixin, YearMixin, MultipleObjectTemplateResponseMixin, SingleObjectMixin. class BaseDateDetailView(YearMixin, MonthMixin, DayMixin, DateMixin, BaseDetailView)...
Yep, looks much the same as the old zope2 code I used to be debugging :-) Multiple inheritance is fine and works well. Mixins are fine and work well. Unless it gets more elaborate and complex and you have to dig into at least five classes till you figure out which class does what. It is manageable at the moment, but will it still be so at django 1.5 or django 1.6?
When it gets too elaborate and your django views are a tangled mess of inheritance hierarchies, django can start doing what zope did after a few years of mixin classes: switch over to a component architecture (adapters+interfaces and so on). Simpler and more robust. But with an extra layer of indirection and configuration, so it is still sometimes hard to figure out which component does what.
Anyway, I’ve now formally warned the django project that django 1.7 will probably contain a component architecture as that is what zope did some years ago :-)
Django’s great class based views have a generic PR problem: if you google for class based views, you find django’s documentation on class based generic views. When Martijn Faassen researched Django’s current state of the art for his keynote by reading the website, he found django’s class based views alright. But in his talk, he called them generic views all the time. A logical mistake!
When I saw that “generic view” term, my thoughts were as follows:
Now, class based views are mighty fine for making generic views with. But doesn’t Django chase away people who want a regular custom view? Away from class based views? Just with the wording on the website?
There’s so much in place that’s not what you’d normally call a generic view in those class based views. So I think ‘generic’ takes up too much of the PR and that’s bad.
So what I’m going to do: explain how to make a regular class based view with Django! That’ll be in a next post.
a trial SoapProxy object
I have read, and left a number of posts about using Ext with a classic SOAP implemenation and hadn't seen anything definitive about how to do this. I still suspect that there is an existing solution to this problem, but since I couldn't find it, I started writing one. I have built a SoapProxy implementation by starting with the HttpProxy and reading through the codeset for the Ext.lib.Ajax implementation and making a few guesses. I now have this working quite well with the latest Apache Axis implementation, so I figured I would post the code and get some feedback. I should warn everyone that I have only been using Ext for about a week and I suspect that this might not be the best way of building this interface... but it certainly is working correctly in my application.
The sample that sets up a Store and Grid object to use the SoapProxy was cut from a working application and then edited down to be smaller and more portable, so I might have screwed it up somewhere. If anyone has a problem getting it to run, I would suspect the sample setup code is the problem rather than the SoapProxy code, because that has gotten a good bit of testing.
I just realized that I didn't externalize the URI-Namespace of the specific SOAP api, so that
would have to be changed in the code for someone to use this... but I will fix that and post
it again later. For now you would have to edit the 'http:" string in
the envelope building function to reflect the namespace of 'your' api, if you want to use a
namespace. In my next post I will move that to a parameter that defaults to no namespace.
If anyone has any feedback, I would appreciate it...
Code:
Ext.namespace( "Ext.soap" ); Ext.data.SoapProxy = function() { Ext.data.SoapProxy.superclass.constructor.call(this); }; Ext.extend(Ext.data.SoapProxy, Ext.data.DataProxy, { getConnection : function() { return Ext.Ajax }, load : function(params, reader, callback, scope, arg) { if(this.fireEvent("beforeload", this, params) !== false) { var xmlData = Ext.brit.makeSoapEnvelope( params.methodName, params.xmlParams ); if ( params.method == "get" )\r\n" + " <SOAP-ENV:Body>\r\n" + " <api:" + methodName + ">\r\n"; this.buildParam = function(paramName, paramValue ) { xmlStr += " <api:"+paramName+">" + paramValue + "</api:" + paramName + ">\r\n"; } for(var prop in soapEnvelope ) if(soapEnvelope.hasOwnProperty(prop)) this.buildParam( prop, soapEnvelope[ prop ] ); xmlStr += " </api:" + methodName + ">\r\n" + " </SOAP-ENV:Body>\r\n" + "</SOAP-ENV:Envelope>"; return xmlStr; } /******************************************************************************** * * SoapGridPanel() - sample code to show how to use the SoapProxy... was cut from other * code, so it may contain some errors... 
* *******************************************************************************/ function SoapGridPanel( ) { // list of elements in the response list to parse out of the result XML var fieldList = [ 'id', 'name', 'date', 'desc' ]; // this is the XML element that will be the parent of the 'fieldList' var recordName = 'record'; var colList = [ {header: "Id", width: 120, dataIndex: 'id', sortable: true}, {header: "Name", width: 80, dataIndex: 'name', sortable: true}, {header: "Date", width: 80, dataIndex: 'date', sortable: true}, {header: "Description", width: 80, dataIndex: 'desc', sortable: true} ]; Ext.soapStore = new Ext.data.Store( { proxy: new Ext.data.SoapProxy(), reader: new Ext.data.XmlReader({ record: recordName }, fieldList )}); var soapGrid = { id: 'soapgrid', xtype: 'grid', store: Ext.soapStore, columns: columnList, listeners: listenerList }; // these will be passed to the SOAP container as XML in the body of the post var xmlParams = { param1 : 'testparam1', param2: 'testparam2' }; this.doQuery = function( url, methodName, xmlParams ) { Ext.soapStore.load({ params: { url: url, methodName: methodName, xmlParams : xmlParams }}); } return soapGrid; }
Last edited by brian.moeskau; 23 Dec 2007 at 8:37 AM. Reason: code tags
SOAP for Java script alternative
Nice job but do you already know this one?
I wanted it integrated with Ext.data...
I really like the XML parsing support and Grid support that I got from Ext, so I figured I would get the best bang for the buck by having a SOAP implementation that as native to Ext as possible. It seemed like the most appropriate way from an Ext perspective was to build a Soap based Proxy.
Once I looked at the source code for the HttpProxy, I realized that that it wasn't a very coding intense solution either. The most difficult thing turned out to be getting the headers right to work with Apache Axis. It was kind-of particular about what could be included in the header without producing a 'Unable to internalize SOAP request' failure. But after I figured out that it went pretty smooth.
It can still use some further elaboration on parameter passing and turning error response codes into something useful... like a Exception. I will add that stuff this week, but I figured I might get some useful feedback from other people with more Ext experience before I tried to finalize anything... Who knows what kind of stuff I might not know about.
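On the namespace point raised earlier: one hedged way to externalize the URI-namespace is to pass it into the envelope builder as a parameter. A standalone sketch of that idea (function and parameter names are hypothetical, not the posted API):

```javascript
// Builds a SOAP 1.1 envelope; nsUri is the (optional) API namespace,
// passed in instead of being hard-coded in the builder.
function makeSoapEnvelope(methodName, params, nsUri) {
    var xml = '<?xml version="1.0" encoding="utf-8"?>\r\n' +
        '<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"' +
        (nsUri ? ' xmlns:api="' + nsUri + '"' : '') + '>\r\n' +
        '  <SOAP-ENV:Body>\r\n' +
        '    <api:' + methodName + '>\r\n';
    for (var prop in params) {
        if (params.hasOwnProperty(prop)) {
            xml += '      <api:' + prop + '>' + params[prop] +
                   '</api:' + prop + '>\r\n';
        }
    }
    xml += '    </api:' + methodName + '>\r\n' +
        '  </SOAP-ENV:Body>\r\n' +
        '</SOAP-ENV:Envelope>';
    return xml;
}
```

When no nsUri is given, the api: prefix is left undeclared, which only loosely matches the "defaults to no namespace" behaviour described above; a stricter version would drop the prefix entirely in that case.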
I have to say nice job. Considering you have just started with Ext, I admire your initiative in getting stuck into the source, and getting your idea off the ground without fuss!
I'm no expert, but I have written some SOAP serialization/deserialization for other languages, so I think there's more that can be done. You can also encapsulate data type into each parameter. And represent Arrays, compound data types, and null values
Here is a good resource which explains this:
Just something to be thinking about over the holidays. I think it could be very useful to have a native Ext way of communicating with SOAP webservices.
Thanks for the link
That site looks good... I will take a look at it and update the Proxy and re-post it later this week...
Thanks! | http://www.sencha.com/forum/showthread.php?21537-a-trial-SoapProxy-object | CC-MAIN-2015-11 | en | refinedweb |
can someone give me an example of how to use classes? i read the tutorial and still don't get it, just a simple example of how to use functions in a class. this is what i got so far but i'm getting 3 errors on this.
Code:#include <iostream> using namespace std; class Average { public: float findaverage(); private: int score; float sum; int NumScores; }; float Average::findaverage() { float findaverage(int score, int NumScores); cout <<"enter a score" << endl; while ( score != 0 ) { sum += score; NumScores++; cout << "Enter a test score ( 0 to quit ): "; cin >> score; } return sum/NumScores; } int main() { Average Numbers; Average.findaverage(); cout <<"The average is" << Average <<endl; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/61728-help-classes.html | CC-MAIN-2015-11 | en | refinedweb |
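For reference, the three errors most likely come from: calling the method on the class name (Average.findaverage()) instead of the instance (Numbers), the stray function declaration inside findaverage(), and streaming Average to cout instead of the returned value. A hedged, corrected sketch of the class (restructured so scores are added through a method; the cin loop is left out so the class stands alone):

```cpp
#include <cassert>

// Scores are added through addScore() instead of being read inside
// findAverage(); members are initialized in the constructor.
class Average {
public:
    Average() : sum(0.0f), numScores(0) {}

    void addScore(int score) {
        sum += score;
        ++numScores;
    }

    float findAverage() const {
        // Guard against dividing by zero when no scores were entered.
        if (numScores == 0) return 0.0f;
        return sum / numScores;
    }

private:
    float sum;
    int numScores;
};
```

In main you would then loop on cin, call Numbers.addScore(score) for each value, and print Numbers.findAverage().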
Hi, I wrote a simple app in C# that gets HTML code via HttpWebRequest/Response. The main definition of this method is implemented in the server exe file, so when I call this method from the client, the server application fetches the HTML code from the internet. But I want to force the client to fetch this HTML code from the WWW itself. I tried to make it marshal-by-value, but simple serialization doesn't work. How can I make the object that gets this HTML code be activated from the server but used locally by the client? You can download my project file with all files. I hope it will be easy to understand my problem
RapidShare: Easy Filehosting
thanks a lot
Code:remotable object class libary(interface): using System; using System.Collections.Generic; using System.Text; using System.Net; using System.IO; namespace Interfejs { public interface IPobierz { string KodHTML(string URL); } } definitions for this interface(in serwer) using System; using System.Collections.Generic; using System.Text; using System.Net; using System.IO; using Interfejs; namespace Serwer { class Definicje : MarshalByRefObject, IPobierz { public string KodHTML(string URL) { HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URL); HttpWebResponse res = req.GetResponse() as HttpWebResponse; Stream data = res.GetResponseStream(); StreamReader sr = new StreamReader(data); string html = sr.ReadToEnd(); return html; } } } | http://cboard.cprogramming.com/csharp-programming/113808-%5Bremoting%5D-httpwebrequest-response-isn%27t-remotable.html | CC-MAIN-2015-11 | en | refinedweb |
06 March 2012 09:29 [Source: ICIS news] By Ariel Chen
Toluene prices, which are higher than xylene prices, are supporting the xylene market, the traders said.
Toluene and xylene are produced in aromatics plants and toluene prices are usually yuan (CNY) 100-200/tonne ($16-32/tonne) lower than xylene prices because of the difference in production costs, they said.
Solvent-grade xylene prices were at CNY9,250/tonne ex-tank at Zhangjiagang on 5 March from CNY8,975-9,000/tonne ex-tank at Zhangjiagang on 15 February, according to Chemease, an ICIS service in China.
Toluene prices rose by CNY550-600/tonne to CNY9,250-9,300/tonne ex-tank at Zhangjiagang in the same period, the data showed.
Xylene supplies are expected to be tight in east China.
Another factor contributing to the tight supply is a decline in xylene imports into east China.
South Korean producer Yeochun NCC (YNCC) is on track to shut its No 2 aromatics unit at Yeosu on 20 March for a month-long turnaround, a company source said. The unit can produce 120,000 tonnes/year of benzene, 60,000 tonnes/year of toluene and 40,000 tonnes/year of solvent-grade xylene.
Meanwhile, demand from the downstream paraxylene (PX) sector has been rising since the start of this year because of several expansions in the purified terephthalic acid (PTA) sector, the traders said.
The rising demand is supporting PX and xylene prices, they added.
In this tutorial, we will see how to do three way partitioning of an array around a given range.
What is Three way partitioning?
Given an array and a range, say [A,B], the task is to partition the array into three parts such that:
- All the elements of the array that are less than A appear in the first partition
- All the elements of the array that lie in the range of A to B appear in the second partition, and
- All the elements of the array that are greater than B come in the third partition
Note that A <= B, and the individual elements in each of the partitions needn’t necessarily be sorted.
Does this sound similar to quick sort?
Yes, it is. This is very similar to the partition step of quick sort. In quick sort, we pick a pivot element in an array and place the pivot at the right index such that all the elements less than the pivot come to its left and all the elements greater than the pivot come to its right. This can be compared to three-way partitioning, where the pivot is not just a single element but a range [A,B].
Now, let us see how this problem can be solved.
Naive Approach
This can be solved by sorting the array. But that will take order of n log n time (where n is the size of the array). Can we think of a more efficient solution in a single pass, without actually sorting the array?
Efficient Approach
- Take three pointers: low, mid and high.
- Set low and mid to the 0th index, and high to the (n-1)th index initially.
- Say, the partition containing all the elements less than A is the first partition and the block containing all the elements greater than B is the third partition.
- Use “low” to maintain the boundary of the first partition and “high” to maintain the boundary of the third partition.
- Meanwhile, “mid” iterates over the array and swaps the right elements into the first and third partitions.
- The resulting array is divided into 3 partitions.
Implementation
- Initialize low = 0, mid = 0 and high = n-1.
- Use mid to iterate over the array and visit each element arr[mid]:
  - If arr[mid] is less than A, swap it with arr[low] and increment both low and mid by one.
  - If arr[mid] is greater than B, swap it with arr[high] and decrement high by one.
  - Otherwise, increment mid by one.
- The resulting array is the partitioned array.
Example:
def threeWayPartition(array, a, b):
    low = 0
    mid = 0
    high = len(array) - 1
    while mid <= high:
        if array[mid] < a:
            # swap array[mid] with array[low]
            temp = array[mid]
            array[mid] = array[low]
            array[low] = temp
            low += 1
            mid += 1
        elif array[mid] > b:
            # swap array[mid] with array[high]
            temp = array[mid]
            array[mid] = array[high]
            array[high] = temp
            high -= 1
        else:
            mid += 1
    return array

if __name__ == "__main__":
    print("Enter the array elements separated by spaces: ")
    str_arr = input().split(' ')
    arr = [int(num) for num in str_arr]
    a, b = [int(x) for x in input("Enter the range [A,B] separated by spaces (NOTE: A <= B) ").split()]
    arr = threeWayPartition(arr, a, b)
    print("The array after three way partitioning is ", arr)
Output
import java.util.*;
import java.io.*;

public class test {
    public static void main(String[] args) {
        int arr[] = {2, 5, 27, 56, 17, 4, 9, 23, 76, 1, 45};
        int n = arr.length, a = 15, b = 30;
        int low = 0, mid = 0, high = n - 1, temp;
        while (mid <= high) {
            if (arr[mid] < a) {
                // swap arr[mid] and arr[low]
                temp = arr[mid];
                arr[mid] = arr[low];
                arr[low] = temp;
                low++;
                mid++;
            } else if (arr[mid] > b) {
                // swap arr[mid] and arr[high]
                temp = arr[mid];
                arr[mid] = arr[high];
                arr[high] = temp;
                high--;
            } else {
                mid++;
            }
        }
        System.out.println("The partitioned array is..");
        for (int i = 0; i < arr.length; i++)
            System.out.print(arr[i] + " ");
        System.out.println();
    }
}
Output
Time and space complexity
The time complexity of this algorithm is in the order of n, i.e., O(n), as it is a single-pass algorithm. The algorithm is in-place and doesn’t take any extra space, making the space complexity constant, i.e., O(1).
Applications
One of the important applications of this algorithm is, to sort an array with three types of elements where n>=3. Say, we have an array of 0s, 1s and 2s. Then, this approach can be employed to sort it in a single pass with constant space complexity. In this case, A=1 and B=1.
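As a concrete illustration of this application, the single-pass function above can be reused with the range [1, 1] to sort an array of 0s, 1s and 2s (the function is restated here, with snake_case naming, so the snippet runs on its own):

```python
def three_way_partition(arr, a, b):
    # Same single-pass algorithm as above: low/mid/high pointers.
    low, mid, high = 0, 0, len(arr) - 1
    while mid <= high:
        if arr[mid] < a:
            arr[low], arr[mid] = arr[mid], arr[low]
            low += 1
            mid += 1
        elif arr[mid] > b:
            arr[mid], arr[high] = arr[high], arr[mid]
            high -= 1
        else:
            mid += 1
    return arr

# Sorting an array of 0s, 1s and 2s: the pivot range is [1, 1].
print(three_way_partition([2, 0, 1, 2, 0, 1, 0], 1, 1))
# → [0, 0, 0, 1, 1, 2, 2]
```

All the 0s land in the first partition, the 1s in the middle, and the 2s at the end, in one pass and with constant extra space.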
Happy Learning 🙂 | https://www.onlinetutorialspoint.com/algorithms/three-way-partitioning-of-an-array-example.html | CC-MAIN-2021-31 | en | refinedweb |
Opened 2 years ago
Closed 2 years ago
Last modified 6 months ago
#2279 closed task (fixed)
harden pillow image parsing
Description (last modified by )
Apart from the obvious server-to-client transfer of window pixel data, we can receive compressed pixel data from a number of places: webcam, window icons, xdg menus, etc
Some of those can flow back to the server.
We should ensure that we only allow the encodings we support so that a vulnerability in another codec cannot be triggered from those code paths.
We only really care about: webp, png and jpeg for now.
Those all have detectable headers.
Let's move the code to a utility function that can do the checking.
Example
Image.open code that could be abused:
from PIL import Image
from io import BytesIO

buf = BytesIO(icondata)
img = Image.open(buf)
has_alpha = img.mode == "RGBA"
width, height = img.size
rowstride = width * (3 + int(has_alpha))
pixbuf = get_pixbuf_from_data(img.tobytes(), has_alpha, width, height, rowstride)
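Since png, jpeg and webp all start with detectable headers, the checking can be sketched as a small magic-byte validator run before any call into Pillow (a hypothetical illustration — the function names and exact policy here are assumptions, not xpra's actual utility code):

```python
# Hypothetical sketch: reject any buffer that does not start with the
# magic bytes of an allowed format, so a vulnerability in another Pillow
# codec cannot be reached from these code paths.
MAGIC = {
    "png":  b"\x89PNG\r\n\x1a\n",   # 8-byte PNG signature
    "jpeg": b"\xff\xd8\xff",        # JPEG SOI marker
}

def detect_format(data):
    """Return 'png', 'jpeg' or 'webp' if the header matches, else None."""
    for fmt, magic in MAGIC.items():
        if data.startswith(magic):
            return fmt
    # WEBP is a RIFF container with "WEBP" at offset 8.
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "webp"
    return None

def validate(data, allowed=("png", "jpeg", "webp")):
    fmt = detect_format(data)
    if fmt not in allowed:
        raise ValueError("unsupported or unrecognized image data")
    return fmt
```

Only buffers that pass `validate()` would then be handed to `Image.open()`.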
Change History (5)
comment:1 Changed 2 years ago by
comment:2 Changed 2 years ago by
comment:3 Changed 2 years ago by
Work completed in:
- r22513 the client will validate the compressed pixel data before calling into python-pillow
- r22514 webcam mixin will also validate compressed data from the client
To test, we have to build --without-webp and --without-jpeg_decoder, otherwise those faster cython decoders have precedence (and they don't need validating since they only decode the one format they are designed for).

Then just attach with --encodings=jpeg (or --encodings=png).
comment:4 Changed 19 months ago by
comment:5 Changed 6 months ago by
this ticket has been moved to:
Here's a list of the formats supported by python-pillow: Image File Formats (long!).
Work started in r22493: we filter tray icons and window icons (server to client), dbus and win32 notifiers only accept png (now actually enforced), webcam validates the encodings used.
r22494 also removes support for jpeg2000 (#618) - that encoding was pretty useless anyway.
Still TODO: | https://www.xpra.org/trac/ticket/2279 | CC-MAIN-2021-31 | en | refinedweb |
Regression
It is impossible to approach data science or machine learning without going through the linear regression box. Of course, there are several types of regression. We saw in a previous article how to use logistic regression to perform a classification! Do you think that sounds strange? Didn't I tell you that algorithms are divided into several families? Regressions and classifications, among others? In fact, it's true… and then not quite! It is indeed quite possible to use and mix the algorithms.
DataScience is really cooking!
In fact, the world of machine learning is a world made up of uncertainties and learning. If the data that we process have their uncertainties, it seems that the tools, or rather the toolboxes, necessary for their processing are no exception. These are mainly composed of algorithms. These algorithms mix probabilities, statistics as well as linear algebra. To be honest, it's a very heterogeneous world, and the objective of this article is to present the simplest of supervised tools: linear regression.
Principle of Linear Regression
The principle is quite simple. You observe a phenomenon and — incredibly — you detect that there is a link between the parameters of your observations (the characteristics) and the result (the label). Like any good scientist, you therefore use geometry and place your findings (blue dots) on a graph. On the x-axis you will therefore put your characteristic (we start simple with only 1) and on the y-axis your result (label).
Incredible… your points seem, though not exactly, to draw a line (hence linear)! Apart from a few uncertainties, it would therefore seem that there is a linear link between your characteristic and your result. You see?
Linear regression consists in guessing the linear equation that links characteristic(s) and label!
To put it simply, the algorithm will try to guess the equation (y = aX + b) (cf. the red line above).
Yes, but how?

Have you noticed? There are really a lot of points, and unless you increase the thickness of the line, there are really a lot of potential candidates for our beautiful line. Which candidate to choose, and how? Quite simply by taking into account the notion of error compared to the training data. Let me explain: in the learning phase we collected the necessary data (our blue points). We also have a candidate equation calculated by our algorithm… now we just have to confront it with reality.
To calculate the error rate, we can base ourselves on the distance between the real result and the calculated result (see graph above). Nothing then prevents us from penalizing certain ranges of values according to dispersion factors, for example. Very simply, we will be able to say (on the training data) whether our line matches at 80, 90 or 99%.
This error rate is our cockpit to refine the parameters (a and b here).
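As a toy illustration (not the article's code), scoring a candidate line by the average squared distance between real and predicted values can be written in a few lines:

```python
# Score a candidate line y = a*x + b against observed points by
# averaging the squared distance between prediction and reality
# (the classic mean squared error).
def mean_squared_error(xs, ys, a, b):
    errors = [(y - (a * x + b)) ** 2 for x, y in zip(xs, ys)]
    return sum(errors) / len(errors)

xs = [1, 2, 3, 4]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x

good = mean_squared_error(xs, ys, 2.0, 0.0)
bad = mean_squared_error(xs, ys, 0.5, 1.0)
# The better candidate line produces the smaller error.
```

The fitting step then amounts to searching for the (a, b) pair that minimizes this score — which is exactly what the library does for us below.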
Linear Regression with Scikit-Learn
You understand the principle … a little practice now with the Python Scikit-Learn module. To illustrate my statements above, I will start again from the Coursera training dataset ( univariate_linear_regression_dataset.csv ). This data simply represents two columns with numeric data.
Using a simple scatter (dot) plot, let's see what this looks like (below).
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv('./data/univariate_linear_regression_dataset.csv')
plt.scatter(data.col2, data.col1)
plt.grid()
Can you imagine the line emerging? Not so simple, is it? But one can imagine it fairly well nonetheless. Let's use the scikit-learn library to calculate the linear regression on this data:
from sklearn import linear_model

X = data.col2.values.reshape(-1, 1)
y = data.col1.values.reshape(-1, 1)
regr = linear_model.LinearRegression()
regr.fit(X, y)
regr.predict([[30]])  # predict expects a 2D array
Let’s even make a prediction with the value 30 to see the result. We get a value of 22.37, which isn’t really inconsistent, is it given the data?
Now, in order to better understand what I explained above, I suggest we predict a series of new values that we will add to our learning set. For each new value we will of course make a prediction.
predictions = range(30, 51)
results = []
for pr in predictions:
    results.append([pr, regr.predict([[pr]])[0][0]])
myResult = pd.DataFrame(results, columns=['col1', 'col2'])
myResult.head(5)
Do you see it now, this famous line (top right) looming? There it is!
You found that this line was not so easy to find visually, and you are right, because the error rate (here very bad: 70%) reflects the complexity of finding a linear relationship between our two values.
Native Queries With Spring Boot and NamedParameterJdbcTemplate
Let's see an example of how to use NamedParameterJdbcTemplate.
Do you need to run a native query with Spring Boot? Do you need to query a legacy application's database? Do you feel that by executing a native query you can save yourself the mapping of many tables in your database?
Well, for these cases, my favorite way is to use NamedParameterJdbcTemplate. Next, we will see an example of how to use it. In our case, we must obtain the quantity of orders from a given customer.
First, we must define the POJO (DTO) that will obtain the result of the query:
public class NativeQueryDTO {
    private String name;
    private int orderCount;
    // ... Getters - Setters
}
Then we must create the repository, which contains the method that returns the required information:
@Component
public class NativeRepository {

    @Autowired
    private NamedParameterJdbcTemplate jdbcTemplate;

    public NativeQueryDTO countCustomerOrder(Long id) {
        MapSqlParameterSource parameters = new MapSqlParameterSource();
        parameters.addValue("customerId", id);

        String sql = " select c.name, count(o) as orderCount "
                   + " from customers c, orders o "
                   + " where c.id = o.customer_id and c.id = :customerId "
                   + " group by c.name ";

        return (NativeQueryDTO) jdbcTemplate.queryForObject(
                sql,
                parameters,
                BeanPropertyRowMapper.newInstance(NativeQueryDTO.class));
    }
}
The previous code uses the queryForObject method to execute the query; the necessary parameters are passed with a MapSqlParameterSource object. To obtain a correct mapping, we must pass a BeanPropertyRowMapper for the target class as the third parameter. Because we are using BeanPropertyRowMapper, we do not need to implement a RowMapper ourselves.
Finally, the official Spring documentation recommends implementing a customized RowMapper if better performance is needed.
public class NativeQueryDTOMapper implements RowMapper<NativeQueryDTO> {

    @Override
    public NativeQueryDTO mapRow(ResultSet rs, int rowNum) throws SQLException {
        NativeQueryDTO dto = new NativeQueryDTO();
        dto.setName(rs.getString("name"));
        dto.setOrderCount(rs.getInt("orderCount"));
        return dto;
    }
}
Then use it as follows:
return (NativeQueryDTO)jdbcTemplate.queryForObject( sql, parameters, new NativeQueryDTOMapper());
We have seen that it is very easy to use native queries with Spring Boot, and I hope that this example was helpful. Until next time!
In my free time, I am attempting to build my own smart home devices. One feature they will need is speech recognition. While I am not certain yet as to how exactly I want to implement that feature, I thought it would be interesting to dive in and explore different options. The first I wanted to try was the SpeechRecognition library.
To put a long story short, this tutorial is going to be a little bit different. There were several errors I had to deal with and even redirect my focus. That being said, the coding portion is simple. Only a few lines of code to get it working. The installation took time and effort, but with research it was manageable. Instead, the issue was in the systems I was deciding to use. For example, the first attempt was on an Ubuntu server. Nothing wrong with that, but the default device could not be changed, seeing as the code is being run through ssh. I would have to go to the server and plug everything indirectly. Again, nothing wrong with that. I was just hoping for an easier option. Feeling particularly adventurous and being a little too lazy to plug into the server directly, I tried a few different machines.
This tutorial will be a little different than my previous posts. For this, I am first going to share the working installation code on a local Ubuntu machine, which is what I ended up using. After that, I will talk about the other machines I attempted, where I found my issues, and when I decided to switch. Hopefully, that will help anyone using other machines. Or perhaps someone will know more about the issues I encountered but did not take the time to see through.
To run the SpeechRecognition library for our code, we will first need to install SpeechRecognition but then must also install PyAudio. First, we will start with the main package:
sudo pip3 install SpeechRecognition
If your try to run code now, you will get an error about the PyAudio installation not being found. Installing should have followed exactly the same format, but it seems I was missing packages to get this to work properly, and attempting to install PyAudio threw an error. These packages should remove that error. I did not have to update apt at that point, but it does not hurt to give it an update first.
sudo apt-get install libasound-dev portaudio19-dev libportaudio2 libportaudiocpp0
With that out of the way, you should be good to install PyAudio:
sudo pip3 install PyAudio
As mentioned previously, there are very few lines of code required to get this up and running, and the same code applies across the different machines.
First, you must import the SpeechRecognition library:
import speech_recognition as speech
We added an alias to the library in order to reference it later in a simpler way. Now, we can use the Recognizer function:
sound = speech.Recognizer()
Next, we will need to allow the python file to hear what we are saying. It is the reason we needed PyAudio as well. For live speech, we will need to set up a microphone. Note, we will not set this in a loop, so we will only be able to speak to the application one time, whether that is a single word or a sentence. Nonetheless, this recognizer is only a test, so we will not need to speak multiple times. We will set up a microphone first, give it an alias, then instruct the Recognizer we set up earlier to listen.
with speech.Microphone() as audio: said = sound.listen(audio)
Now, because our microphone could be unclear, or even the speech itself, we will need to set up a “try” to determine if the Recognizer was able to understand or not. We will use a recognize_google function, so an internet connection will be required. For security's sake, I would not use this function in any home applications. However, while just testing what Python can do, it will be good enough for now. The parameters will need what was said to recognize, the language, and whether all guesses should be displayed or not.
At this time, we want to see all potential guesses, and the language will be English. Either of these could be different for you, which is why they are specified. If it did recognize the phrase, then we want to print the results. However, if it could not understand, we will want to print a message. This can be done with an “except” which will track any errors encountered, and we can leave an error that states the speech was not understood.
try:
    print(sound.recognize_google(said, language='en-IN', show_all=True))
except LookupError:
    print("Could not understand. Please repeat.")
Now, all you must do is run the application.
With our code up and running, we can now talk about what gave me issues on different machines.
As mentioned before, the issue was that from a separate machine connect to the server, I was unable to change the default input device. This would not have been an issue if I would have gone to the server and plugged in the microphone directly. Other than the issue with the input, the installation process was the same, as it was also Ubuntu 16.04.
The next system I used was a Windows machine running WSL (Windows Subsystem for Linux). It too used Ubuntu 16.04, so the installation process was the same. However, when it came to using the microphone, WSL is not as easy as plug in and go. To control the microphone over the Ubuntu terminal app, PulseAudio needed to be installed. To do this, first, the repository was added:
sudo add-apt-repository ppa:therealkense/wsl-pulseaudio
From there, a regular install could be run:
sudo apt-get install pulseaudio
PulseAudio is a network-based sound server, which runs on Linux and other variations. Like other systems, you must start it and check the status. First, there is a command to restart it:
pulseaudio --k
If not already on, you can now start PulseAudio:
pulseaudio --start
Next, you will have to look at the audio devices available. These devices are known as sinks. In my case, I had only one, which was the headset with a microphone:
pacmd list-sinks
Now we also have the index, which is what we needed. We can set the default input from here:
pacmd set-default-sink 0
Please note, you may have to run the start command again on PulseAudio. I had to run it for every command. This was the final step. Now the code should run. However, when running the code there is yet another error. It is a lengthy description, but the main error is:
I dug in to find more about this error, although it did not seem to have much documentation behind it. Instead, it seemed as if StackOverflow was one of the only sites I found with usable information on it. It seems like others were having the same issue. This is where I stopped for this implementation.
Looking back now, it seems like someone had mentioned using XServer. I am wondering now if I would have run Xming, maybe that would have worked. But, oh well. Another time perhaps I will give it a go. Leaving this version, I moved to my next machine.
As usual, the very first thing to do was install SpeechRecognition via pip3. Upon trying to install PyAudio, it is important to note that it still had prerequisites to install, but they are different in Fedora. Remember that Fedora syntax is different than Ubuntu:
sudo dnf install portaudio-devel redhat-rpm-config
This is not the only package required. We must also install the python portion of devel:
sudo dnf install python3-devel
Now the prerequisites were installed, go ahead and install PyAudio via pip3 just like on the previous machines. With everything installed, I ran the code. Another error:
This error seemed to be a little more complicated to get information on. Some people were thinking it just needed a restart, some people never got it working. In either case, it was difficult to find documentation.
Now, if I would have tried harder, maybe looked longer, or even just dedicated a little bit more effort to this, it is probably simple enough to solve. However, this was just an experimental project. For an experiment, I was not wanting to dedicate too much time to this.
So, this is where I stopped trying on Fedora. Perhaps for the better, as I realized I have an Ubuntu machine, could just run that code locally. And so that is what I ended up doing and had no issues with that.
At the end of the day, we got something up and working. I think it was rather interesting to mess around with. The current code we created would be used only for test purposes, however, as the microphone is making a call to google. We would not want to be using google calls for any applications intended to be used for privacy reasons.
As a difference, we talked about the errors I came across on different machines. Although they are likely solvable, I did not dedicate too much time to solving these, and therefore some were left unresolved. In the long run, we did get the code up and running. Either way, it was an interesting journey, and our voices were recognized by a python library!
Hopefully, you will find some use in seeing the spots where I went wrong. Yes, the mistakes are frustrating, and roadblocks are as well. However, every mistake can be insightful. We learned what to do, what not to do, where to go when stuck, and even when to just move on if able. Noting the differences in installing certain packages on Ubuntu versus others in Fedora was the most interesting portion in my opinion. It took a little research, but nothing was more than we could handle. So, I thank you for joining this voice recognition adventure with me. Until next time, cheers!
A good understanding of data structures is an important skill for every programmer to have in their toolkit. Not to mention that questions related to linked lists are common in most coding interviews.
These skills demonstrate your ability to solve ambiguous problems, think complexly, and identify patterns in code. Data structures are used to organize your data and apply algorithms to code. The Java platform provides high-performance implementations of various data structures, and it is one of the most commonly tested programming languages for interviews.
Today we will be looking at one of the most important Java data structures, the Linked List class. We will take a look at how it works and learn how we can use it to store and manipulate data.
We will cover:
A linked list is a common data structure that is made of a chain of nodes. Each node contains a value and a pointer to the next node in the chain.
The head pointer points to the first node, and the last element of the list points to null. When the list is empty, the head pointer points to null.
Linked lists can dynamically increase in size. It is easy to insert and delete from a linked list because unlike arrays, as we only need to change the pointers of the previous element and the next element to insert or delete an element.
Some important applications of Linked Lists include:
Since a linked list is a linear data structure, meaning that the elements are not stored at contiguous locations, it’s necessary to have different types of linked lists to access and modify our elements accordingly.
There are a three different types of linked lists that serve different purposes for organizing our code.
Let’s take a look.
A singly linked list is unidirectional, meaning that it can be traversed in only one direction, from the head to the last node (tail). Common operations for singly linked lists include insertion, deletion, searching, and traversal.
Doubly linked lists (DLLs) are an extension of basic linked lists, but they contain a pointer to the next node as well as the previous node. This ensures that the list can be traversed in both directions. A DLL node has three fundamental members:

- the data
- a pointer to the next node
- a pointer to the previous node
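As a quick language-agnostic sketch (Python here, purely illustrative — the Java sections below use their own Node class), those three members map directly onto a small node class:

```python
class DLLNode:
    """A doubly linked list node: data plus links in both directions."""
    def __init__(self, data):
        self.data = data
        self.next = None  # pointer to the next node
        self.prev = None  # pointer to the previous node

# Linking two nodes in both directions:
a, b = DLLNode(1), DLLNode(2)
a.next, b.prev = b, a
```

Traversing forward follows the next pointers; traversing backward follows the prev pointers.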
Circular linked lists function circularly: the first element points to the last element, and the last element points to the first. A single linked list and double linked list can be made into a circular linked list. The most important operations for a circular linked list are:
- insert − insert elements at the start of the list
- display − display the list
- delete − delete elements from the start of the list
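A minimal sketch of those three operations (illustrative Python; the trick is keeping a handle on the last node so that last.next is always the head, which keeps both start-insertion and start-deletion cheap):

```python
class CNode:
    def __init__(self, data):
        self.data = data
        self.next = None

class CircularList:
    def __init__(self):
        self.last = None  # last node; last.next is the head

    def insert(self, data):
        """Insert an element at the start of the list."""
        node = CNode(data)
        if self.last is None:
            node.next = node          # a single node points to itself
            self.last = node
        else:
            node.next = self.last.next
            self.last.next = node

    def display(self):
        """Collect one full cycle of values, starting at the head."""
        items = []
        if self.last is None:
            return items
        node = self.last.next
        while True:
            items.append(node.data)
            node = node.next
            if node is self.last.next:
                break
        return items

    def delete(self):
        """Delete the element at the start of the list."""
        if self.last is None:
            return
        head = self.last.next
        if head is self.last:         # only one element left
            self.last = None
        else:
            self.last.next = head.next
```

Note how display() stops when it loops back to the head — in a circular list there is no None to terminate on.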
In Java, the linked list class is an ordered collection that contains many objects of the same type. Data in a Linked List is stored in a sequence of containers. The list holds a reference to the first container and each container has a link to the next one in the sequence.
Linked lists in Java implement the abstract list interface and inherit various constructors and methods from it. This sequential data structure can be used as a list, stack or queue.
As I briefly discussed before, a linked list is formed by nodes that are linked together like a chain. Each node holds data, along with a pointer to the next node in the list.
The following illustration shows the theory of a Singly Linked List.
To implement a linked list, we need the following two classes:
Class Node
The Node class stores data in a single node. It can store primitive data such as integers and string as well as complex objects having multiple attributes.
Along with data, it also stores a pointer to the next element in the list, which helps in linking the nodes together like a chain.
Here’s a typical definition of a Node class:
// Class Node having generic data-type <T>
public class Node<T> {
    public T data;        // Data to store (could be int, string, Object etc)
    public Node nextNode; // Pointer to next node in list
}
Class Linked list
As mentioned above, the Singly Linked list is made up of nodes that are linked together like a chain. Now to access this chain, we would need a pointer that keeps track of the first element of the list.
As long as we have information about the first element, we can traverse the rest of the list without worrying about memorizing their storage locations.
The Singly Linked List contains a head node: a pointer pointing to the first element of the list. Whenever we want to traverse the list, we can do so by using this head node.
Below is a basic structure of the Singly Linked List’s class:
public class SinglyLinkedList<T> {
    // Node inner class for SLL
    public class Node {
        public T data;        // Data to store (could be int, string, Object etc)
        public Node nextNode; // Pointer to next node in list
    }

    public Node headNode; // head node of the linked list
    public int size;      // size of the list

    // constructor
    public SinglyLinkedList() {
        headNode = null;
        size = 0;
    }
}
Linked lists are fairly easy to use since they follow a linear structure. They are quite similar to arrays, but linked lists are not as static, since each element is its own object. Here is the declaration for the Java Linked List class:
public class LinkedList<E> extends AbstractSequentialList<E> implements List<E>, Deque<E>, Cloneable, Serializable
Let’s see a more detailed example of that in code. Here is how we create a basic linked list in Java:
import java.util.LinkedList;

class Main {
    public static void main(String[] args) {
        LinkedList<String> names = new LinkedList<String>();
    }
}
The Linked List class is included in the java.util package. In order to use the class, we need to import the package in our code. We have initialized an empty Linked List using the new keyword. A Linked List can hold any type of object, including null.
In order to add an element to the list, we can use the .add() method. This method takes an element (passed as an argument) and appends it to the end of the list.
import java.util.LinkedList;

class Main {
    public static void main(String[] args) {
        LinkedList<String> names = new LinkedList<String>();
        names.add("Brian");
        names.add("June");
        System.out.println(names); // This will output [Brian, June]
    }
}
If you want to add the new element to a specific location instead, you can do so by passing the index value as the first argument to the .add() method.
names.add(1, "Kathy");
System.out.println(names); // Outputs [Brian, Kathy, June]
The above line of code inserts "Kathy" into the names list at index 1. Since the first element in the list has the index 0, "Kathy" will be inserted right after "Brian" and just before "June". This will feel familiar if you've worked with Arrays in JavaScript. This behavior is possible because Linked Lists are indexed like JavaScript arrays.
There are also methods for explicitly adding elements to the end or start of the list.
names.addFirst("Luke");
names.addLast("Harry");
System.out.println(names); // Outputs [Luke, Brian, Kathy, June, Harry]
The .addFirst() method adds the specified element at the start of the list. To append an element at the end of the list, use the .addLast() method. In the code block above, "Luke" is inserted into the list and becomes the first element (it now has the index 0). The element "Harry" is inserted at the end, making it the last element in the list.
Similar to element addition, Linked List provides methods for removing elements from a list. These methods are similar in operation to the methods for adding elements. The .remove() method removes the first occurrence of a specified element.
names.remove("Brian"); // This will remove the first occurrence of "Brian" in the LinkedList
This method is similar to the .add() method, as it allows the removal of an element at a specific index. Calling names.remove(2) will remove the element at index 2, which is "Brian" in this list. It is also possible to remove the first element and the last element in the list using the .removeFirst() and .removeLast() methods respectively.
names.removeFirst();
names.removeLast();
The Linked List class provides a method to change an element in a list. This method is called .set(), and it takes an index and the element which needs to be inserted, replacing the previous element at that position.
// names list is: [Kathy, June] names.set(0, "Katherine"); // names list is: [Katherine, June]
for (int i = 0; i < names.size(); i++) { System.out.println(names.get(i)); }
We could also use a foreach loop to iterate over a Linked List.
for (String str : names) { System.out.println(str); }
A linked list acts as a dynamic array. This means we do not have to specify the size when creating it, its size automatically changes when we add and remove elements. The Linked List class is also implemented using the doubly linked list data structure.
This means that each element in the list holds a reference to elements before and after it. If an element is the last in the list, its next reference will return
null.
This design makes the Linked List useful in cases where:
This design also makes the
LinkedList comparably unfavorable to the
ArrayList, which is usually the default List implementation in Java, in the following ways:
ArrayListbecause of the storage used by its items’ references, one for the previous item and one for the next item
Congratulations on learning the basics of Java’s Linked List class. The choice to use any data structure, as well as any method in computer programming, should come down to what problem you’re trying to solve.
There’s so much more to learn and practice to master liked lists in Java. Here are some of the common data structures interview challenges for linked lists:.
Happy learning!
Join a community of 500,000 monthly readers. A free, bi-monthly email with a roundup of Educative's top articles and coding tips. | https://www.educative.io/blog/data-structures-linked-list-java-tutorial | CC-MAIN-2021-31 | en | refinedweb |
This post follows on somewhat from my recent posts on running async startup tasks in ASP.NET Core. Rather than discuss a general approach to running startup tasks, this post discusses an example of a startup task that was suggested by Ruben Bartelink. It describes an interesting way to try to reduce the latencies seen by apps when they've just started, by pre-building all the singletons registered with the DI container.
The latency hit on first request
The ASP.NET Core framework is really fast, there's no doubt about that. Throughout its development there's been a huge focus on performance, even driving the development of new high-performance .NET types like
Span<T> and
System.IO.Pipelines.
However, you can't just have framework code in your applications. Inevitably, developers have to put some actual functionality in their apps, and if performance isn't a primary focus, things can start to slow down. As the app gets bigger, more and more services are registered with the DI container, you pull in data from multiple locations, and you add extra features where they're needed.
The first request after an app starts up is particularly susceptible to slowing down. There's lots of work that has to be done before a response can be sent. However this work often only has to be done once; subsequent requests have much less work to do, so they complete faster.
I decided to do a quick test of a very simple app, to see the difference between that first request and subsequent requests. I created the default ASP.NET Core web template with individual authentication using the .NET Core 2.2 SDK:
dotnet new webapp --auth Individual --name test
For simplicity, I tweaked the logging in appsettings.json to write request durations to the console in the
Production environment:
{ "Logging": { "LogLevel": { "Default": "Warning", "Microsoft.AspNetCore.Hosting.Internal.WebHost": "Information" } } }
I then built the app in
Release mode, and published it to a local folder. I navigated to the output folder and ran the app:
> dotnet publish -c Release -o ..\..\dist > cd ..\..\dist > dotnet test.dll Hosting environment: Production Now listening on: Now listening on: Application started. Press Ctrl+C to shut down.
Next I hit the home page of the app and recorded the duration for the first request logged to the console. I hit
Ctrl+C to close the app, started it again, and recorded another duration for the "first request".
Obviously this isn't very scientific, It's not a proper benchmark, but I just wanted a feel for it. For those interested, I'm using a Dell XPS 15" 9560, w block has an i7-7700 and 32GB RAM.
I ran the "first request" test 20 times, and got the mean results shown below. I also recorded the times for the second and third requests
After the 3rd request, all subsequent requests took a similar amount of time.
As you can see, there's a big difference between the first request and the second request. I didn't dive too much into where all this comes from, but some quick tests show that the vast majority of the initial hit is due to rendering Razor. As a quick test, I added a simple API controller to the app:
public class ValuesController : Controller { [HttpGet("/warmup")] public string Index() => "OK"; }
Hitting this controller for the first request instead of the default Razor
Index page drops the first request time to ~90ms. Removing the MVC middleware entirely (and responding with a 404) drops it to ~45ms.
Pre-creating singleton services before the first request
So where is all this latency coming from for the first request? And is there a way we can reduce it so the first user to hit the site after a deploy isn't penalised as heavily?
To be honest, I didn't dive in too far. For my experiments, I wanted to test one potential mitigation proposed by Ruben Bartelink: instantiating all the singletons registered with the DI container before the first request.
Services registered as singletons are only created once in the lifetime of the app. If they're used by the ASP.NET Core framework to handle a request, then they'll need to be created during the first request. If we create all the possible singletons before the first request then that should reduce the duration of the first request.
To test this theory, I created a startup task that would instantiate most of the singletons registered with the DI container before the app starts handling requests properly. The example below uses the "
IServer decorator" approach I described in part 2 of my series on async startup tasks, but that's not important; you could also use the
RunWithTasksAsync approach, or the health checks approach I described in part 4.
The
WarmupServicesStartupTask is shown below. I'll discuss the code shortly.) { foreach (var singleton in GetSingletons(_services)) { // may be registered more than once, so get all at once _provider.GetServices(singleton); } return Task.CompletedTask; } static IEnumerable<Type> GetSingletons(IServiceCollection services) { return services .Where(descriptor => descriptor.Lifetime == ServiceLifetime.Singleton) .Where(descriptor => descriptor.ImplementationType != typeof(WarmupServicesStartupTask)) .Where(descriptor => descriptor.ServiceType.ContainsGenericParameters == false) .Select(descriptor => descriptor.ServiceType) .Distinct(); } }
The
WarmupServicesStartupTask class implements
IStartupTask (from part 2 of my series) which requires that you implement
ExecuteAsync(). This fetches all of the singleton registrations out of the injected
IServiceCollection, and tries to instantiate them with the
IServiceProvider. Note that I call
GetServices() (plural) rather than
GetService() as each service could have more than one implementation. Once all services have been created, the task is complete.
The
IServiceCollectionis where you register you register your implementations and factory functions inside
Starrup.ConfigureServices. The
IServiceProvideris created from the service descriptors in
IServiceCollection, and is responsible for actually instantiating services when they're required.
The
GetSingletons() method is what identifies the services we're going to instantiate. It loops through all the
ServiceDescriptors in the collection, and filters to only singletons. We also exclude the
WarmupServicesStartupTask itself to avoid any potential weird recursion. Next we filter out any services that are open generics (like
ILogger<T>) - trying to instantiate those would be complicated by having to take into account type constraints, so I chose to just ignore them. Finally, we select the type of the service, and get rid of any duplicates.
By default, the
IServiceCollection itself isn't added to the DI container, so we have to add that registration at the same time as registering our
WarmupServicesStartupTask:
public void ConfigureServices(IServiceCollection services) { //Other registrations services .AddStartupTask<WarmupServicesStartupTask>() .TryAddSingleton(services); }
And that's all there is to it. I repeated the test again with the
WarmupServicesStartupTask, and compared the results to the previous attempt:
I know, right! Almost knocked you off your chair. We shaved 26ms off the first request time.
I have to admit, I was a bit underwhelmed. I didn't expect an enormous difference, but still, it was a tad disappointing. On the positive side, it is close to a 10% reduction of the first request duration and required very little effort, so its not all bad.
Just to make myself feel better about it, I did an unpaired t-test between the two apps and found that there was a statistically significant difference between the two samples.
Still, I wondered if we could do better.
Creating all services before the first request
Creating singleton service makes a lot of sense as a way to reduce first request latency. Assuming the services will be required at some point in the lifetime of the app, we may as well take the hit instantiating them before the app starts, instead of in the context of a request. This only gave a marginal improvement for the default template, but larger apps may well see a much bigger improvement.
Instead of just creating the singletons, I wondered if we could just create all of the services our app uses in the startup task; not only the singletons, but the scoped and transient services.
On the face of it, it seems like this shouldn't give any real improvement. Scoped services are created new for each request, and are thrown away at the end (when the scope ends). And transient services are created new every time. But there's always the possibility that creating a scoped service could require additional bootstrapping code that isn't required by singleton services, so I gave it a try.
I updated the
WarmupServicesStartupTask to the following:) { using (var scope = _provider.CreateScope()) { foreach (var singleton in GetServices(_services)) { scope.ServiceProvider.GetServices(singleton); } } return Task.CompletedTask; } static IEnumerable<Type> GetServices(IServiceCollection services) { return services .Where(descriptor => descriptor.ImplementationType != typeof(WarmupServicesStartupTask)) .Where(descriptor => descriptor.ServiceType.ContainsGenericParameters == false) .Select(descriptor => descriptor.ServiceType) .Distinct(); } }
This implementation makes two changes:
GetSingletons()is renamed to
GetServices(), and no long filters the services to singletons only.
ExecuteAsync()creates a new
IServiceScopebefore requesting the services, so that the scoped services are properly disposed at the end of the task.
I ran the test again, and got some slightly surprising results. The table below shows the first request time without using the startup task (top), when using the startup task to only create singletons (middle), and using the startup task to create all the services (bottom)
That's a mean reduction in first request duration of 117ms, or 37%. No need for the t-test to prove significance here! I can only assume that instantiating some of the scoped/transient services triggers some lazy initialization which then doesn't have to be performed when a real request is received. There's possibly JIT times coming in to play too.
Even with the startup task, there's still a big difference between the first request duration, and the second and third requests which are only 4ms and 1ms respectively. It seems very like there's more that could be done here to trigger all the necessary MVC components to initialize themselves, but I couldn't see an obvious way, short of sending a real request to the app.
It's worth remembering that the startup task approach shown here shouldn't only improve the duration of the very first request. As different parts of your app are hit for the firat time, most initialisation should already have happened, hopefully smoothing out the spikes in request duration for your app. But your mileage may vary!
Summary
In this post I showed how to create a startup task that loads all the singletons registered with the DI container on app startup, before the first request is received. I showed that loading all services in particular, not just singletons, gave a large reduction in the duration of the first request. Whether this task will be useful in practice will likely depend on your application, but it's simple to create and add, so it might be worth trying out! Thanks again to Ruben Bartelink for suggesting it. | https://andrewlock.net/reducing-latency-by-pre-building-singletons-in-asp-net-core/ | CC-MAIN-2021-31 | en | refinedweb |
From this comprehensive tutorial, we can easily grasp the concept of PriorityQueue Class of the Java collections framework by examples. Mainly, the Java PriorityQueue class Interface implements the functionality of the heap data structure. You can also use this tutorial for learning in-depth knowledge on creating a Java Priority Queue, Access PriorityQueue Elements, Constructors, and Methods of PriorityQueue Class in Java.
This Tutorial of Java Priority Queue Class Includes:
- Java PrioirityQueue
- Queue Interface Declaration
- PriorityQueue class in Java
- PriorityQueue class declaration
- Creating PriorityQueue
- Constructors of PriorityQueue in Java
- Methods of PriorityQueue Class in Java
- Access PriorityQueue Elements
- Java PriorityQueue Example
Java PrioirityQueue
The PriorityQueue in Java is a class that extends the AbstractQueue and implements the Queue interface. As we know that the Queue follows the FIFO manner but sometimes the elements of the Queue are processed according to the priority then the PriorityQueue comes into the picture. The important points about PriorityQueue are discussed below:
- The implementation class of Queue.
- The elements of the provided are ordered according to their natural ordering or by a comparator provided at Queue construction time.
- Null elements are not allowed.
- Not thread-safe.
- It is based upon a priority heap.
Do Check:
Queue Interface Declaration
public interface Queue<E> extends Collection<E>
PriorityQueue class in Java
The PriorityQueue class gives the ease of using queue. However, it does not order the elements in a FIFO manner. It inherits AbstractQueue class.
PriorityQueue class declaration
Let’s see the declaration for java.util.PriorityQueue class.
public class PriorityQueue<E> extends AbstractQueue<E> implements Serializable
The class implements Serializable, Iterable<E>, Collection<E>, Queue<E> interfaces.
Where E is the type of elements held in this queue
Creating PriorityQueue
To create a priority queue in java, we should implement the java.util.PriorityQueue package. After importing the package, we can create a priority queue in java like shown below:
PriorityQueue<Integer> numbers = new PriorityQueue<>();
In the above syntax, we have created a priority queue without any arguments. In such cases, the head of the priority queue is the smallest element of the queue. However, elements are eliminated in ascending order from the queue. Also, we can personalize the ordering of elements by using the Comparator Interface.
Constructors of PriorityQueue in Java
The PriorityQueue defines the six constructors which are described below:
1. PriorityQueue(): The first constructor is used to create an empty Queue with the initial capacity 11.
2. PriorityQueue(int initialCapacity): This constructor creates a Queue with the specified initial capacity.
3. PriorityQueue(int initialCapacity, Comparator comparator): This constructor creates a Queue with the specified initial capacity and comparator.
4. PriorityQueue(Collection c): This constructor creates a PriorityQueue containing the element in the specified collection.
5. PriorityQueue(PriorityQueue q): This constructor creates a PriorityQueue containing the element in the specified PriorityQueue.
6. PriorityQueue(SortedSet ss): This constructor creates a PriorityQueue containing the element in the specified SortedSet.
Methods of PriorityQueue Class in Java
1. boolean add(E e): This method is used to add the element in the PriorityQueue. It returns false if the element is not successfully inserted.
2. boolean offer(E e): This method is the same as add() method only difference is it throws NoSuchElementException if the Queue is empty.
3. void clear(): This method is used to remove all the elements from the PriorityQueue.
4. int size(): This method returns the size of the Queue.
5. boolean remove(E e): This method is used to remove the specified element from the Queue.
6. E peek(): This method returns the element at the head of the Queue but not removed, it returns null if the Queue is empty.
7. E poll(): This method returns and removes the element at the head of the Queue and it returns null if the Queue is empty.
Access PriorityQueue Elements
In order to access elements from a priority queue, we must use the peek() method. By using this method, it returns the head of the queue. For instance, look at the below example program to access priority queue elements in Java:
import java.util.PriorityQueue; class Main { public static void main(String[] args) { // Creating a priority queue PriorityQueue<Integer> numbers = new PriorityQueue<>(); numbers.add(4); numbers.add(2); numbers.add(1); System.out.println("PriorityQueue: " + numbers); // Using the peek() method int number = numbers.peek(); System.out.println("Accessed Element: " + number); } }
Output:
PriorityQueue: [1, 4, 2] Accessed Element: 1
Java PriorityQueue Example
import java.util.*; class priorityQueueExample{ public static void main(String args[]){ //creating priority queue PriorityQueue pq = new PriorityQueue(); //adding element into this queue pq.add("Amit"); pq.add("Raaj"); pq.add("Ajay"); pq.add("Vijay"); pq.add("Rahul"); //Queue methods operations System.out.println("The Queue elements are: " +pq); System.out.println("The removal element is: " +pq.remove()); System.out.println("After remove Queue elements are: " +pq); System.out.println("Queue head element is: " +pq.peek()); System.out.println("Return Queue elements: " +pq.poll()); System.out.println("After remove Queue elements are: " +pq); System.out.println("Return Queue elements without removing: " +pq.element()); System.out.println("After all operations Queue elements are: " +p); } }
Output:
| https://btechgeeks.com/priorityqueue-in-java-with-example/ | CC-MAIN-2021-31 | en | refinedweb |
EIP-1559: Fee market change for ETH 1.0 chain
Simple Summary
A transaction pricing mechanism that includes fixed-per-block network fee that is burned and dynamically expands/contracts block sizes to deal with transient congestion.
Abstract
We introduce a new EIP-2718 transaction type, with the format
0x02 || rlp([chainId, nonce, maxPriorityFeePerGas, maxFeePerGas, gasLimit, destination, value, data, accessList, signatureYParity, signatureR, signatureS]).. Transactions specify the maximum fee per gas they are willing to give to miners to incentivize them to include their transaction (aka: priority fee). Transactions also specify the maximum fee per gas they are willing to pay total (aka: max fee), which covers both the priority fee and the block’s network fee per gas (aka: base fee). The transaction will always pay the base fee per gas of the block it was included in, and they will pay the priority fee per gas set in the transaction, as long as the combined amount of the two fees doesn’t exceed the transaction’s maximum fee per gas.
Motivation
Ethereum historically priced transaction fees using a simple auction mechanism, where users send transactions with bids (“gasprices”) and miners choose transactions with the highest bids, and transactions that get included pay the bid that they specify. This leads to several large sources of inefficiency:
- Mismatch between volatility of transaction fee levels and social cost of transactions: bids to include transactions on mature public blockchains, that have enough usage so that blocks are full, tend to be extremely volatile. It’s absurd to suggest that the cost incurred by the network from accepting one more transaction into a block actually is 10x more when the cost per gas is 10 nanoeth compared to when the cost per gas is 1 nanoeth; in both cases, it’s a difference between 8 million gas and 8.02 million gas.
- Instability of blockchains with no block reward: In the long run, blockchains where there is no issuance (including Bitcoin and Zcash) at present intend to switch to rewarding miners entirely through transaction fees. However, there are known issues with this that likely lead to a lot of instability, incentivizing mining “sister blocks” that steal transaction fees, opening up much stronger selfish mining attack vectors, and more. There is at present no good mitigation for this.

The proposal in this EIP is to start with a base fee amount which is adjusted up and down by the protocol based on how congested the network is. When the network exceeds the target per-block gas usage, the base fee increases slightly, and when capacity is below the target, it decreases slightly. Because these base fee changes are constrained, the maximum difference in base fee from block to block is predictable. This then allows wallets to auto-set the gas fees for users in a highly reliable fashion. It is expected that most users will not have to manually adjust gas fees, even in periods of high network activity. For most users the base fee will be estimated by their wallet and a small priority fee, which compensates miners taking on orphan risk (e.g. 1 nanoeth), will be automatically set. Users can also manually set the transaction max fee to bound their total costs.
An important aspect of this fee system is that miners only get to keep the priority fee. The base fee is always burned (i.e. it is destroyed by the protocol). This ensures that only ETH can ever be used to pay for transactions on Ethereum, cementing the economic value of ETH within the Ethereum platform and reducing risks associated with miner extractable value (MEV). Additionally, this burn counterbalances Ethereum inflation while still giving the block reward and priority fee to miners. Finally, ensuring the miner of a block does not receive the base fee is important because it removes miner incentive to manipulate the fee in order to extract more fees from users.
Specification
Block validity is defined in the reference implementation below.
The
GASPRICE (
0x3a) opcode MUST return the
effective_gas_price as defined in the reference implementation below.
As of
FORK_BLOCK_NUMBER, a new EIP-2718 transaction is introduced with
TransactionType 2.
The intrinsic cost of the new transaction is inherited from EIP-2930, specifically
21000 + 16 * non-zero calldata bytes + 4 * zero calldata bytes + 1900 * access list storage key count + 2400 * access list address count.
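The formula can be sanity-checked with a small helper. This is an illustration only, not part of the specification; the function and constant names are ours, but the constants are taken verbatim from the formula above.

```python
TX_BASE_COST = 21_000
NONZERO_CALLDATA_BYTE_COST = 16
ZERO_CALLDATA_BYTE_COST = 4
ACCESS_LIST_STORAGE_KEY_COST = 1_900
ACCESS_LIST_ADDRESS_COST = 2_400

def intrinsic_cost(calldata: bytes, access_list) -> int:
    """Intrinsic gas of a type-2 transaction, per the formula above.

    access_list is a sequence of (address, storage_keys) pairs as in EIP-2930.
    """
    nonzero_bytes = sum(1 for b in calldata if b != 0)
    zero_bytes = len(calldata) - nonzero_bytes
    storage_keys = sum(len(keys) for _, keys in access_list)
    return (TX_BASE_COST
            + NONZERO_CALLDATA_BYTE_COST * nonzero_bytes
            + ZERO_CALLDATA_BYTE_COST * zero_bytes
            + ACCESS_LIST_STORAGE_KEY_COST * storage_keys
            + ACCESS_LIST_ADDRESS_COST * len(access_list))
```

A plain ETH transfer with empty calldata and no access list costs the familiar 21000 gas; each access-list entry adds 2400 gas for the address plus 1900 gas per storage key.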
The EIP-2718
TransactionPayload for this transaction is
rlp([chainId, nonce, maxPriorityFeePerGas, maxFeePerGas, gasLimit, destination, value, data, accessList, signatureYParity, signatureR, signatureS]).
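To illustrate the shape of this serialization, here is a minimal RLP encoder sketch. It is illustrative only — production code should use a vetted RLP library — and the field values below are arbitrary placeholders, not a real transaction.

```python
def rlp_encode(item) -> bytes:
    """Minimal RLP encoder covering ints, byte strings, and (nested) lists.

    Integers are encoded as their minimal big-endian byte representation
    (the empty string for zero), following the convention used by Ethereum.
    """
    if isinstance(item, int):
        item = item.to_bytes((item.bit_length() + 7) // 8, 'big')  # 0 -> b''
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item
        return _length_prefix(len(item), 0x80) + item
    if isinstance(item, (list, tuple)):
        body = b''.join(rlp_encode(element) for element in item)
        return _length_prefix(len(body), 0xc0) + body
    raise TypeError(f'cannot RLP-encode {type(item).__name__}')

def _length_prefix(length: int, offset: int) -> bytes:
    if length <= 55:
        return bytes([offset + length])
    length_bytes = length.to_bytes((length.bit_length() + 7) // 8, 'big')
    return bytes([offset + 55 + len(length_bytes)]) + length_bytes

# Placeholder field values (NOT a real transaction); note the 0x02 type byte
# is prepended outside the RLP list, per EIP-2718.
payload_fields = [
    1,              # chainId
    0,              # nonce
    1_000_000_000,  # maxPriorityFeePerGas
    2_000_000_000,  # maxFeePerGas
    21_000,         # gasLimit
    0x1234,         # destination (placeholder; a real address is 20 bytes)
    10**18,         # value
    b'',            # data
    [],             # accessList
    0,              # signatureYParity
    0x5678,         # signatureR (placeholder)
    0x9abc,         # signatureS (placeholder)
]
serialized = b'\x02' + rlp_encode(payload_fields)
```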
The
signatureYParity, signatureR, signatureS elements of this transaction represent a secp256k1 signature over
keccak256(0x02 || rlp([chainId, nonce, maxPriorityFeePerGas, maxFeePerGas, gasLimit, destination, value, data, access_list])).
The EIP-2718
ReceiptPayload for this transaction is
rlp([status, cumulativeGasUsed, logsBloom, logs]).
Note:
// is integer division, round down.
```python
from typing import Union, Dict, Sequence, List, Tuple, Literal
from dataclasses import dataclass, field
from abc import ABC, abstractmethod

@dataclass
class TransactionLegacy:
    signer_nonce: int = 0
    gas_price: int = 0
    gas_limit: int = 0
    destination: int = 0
    amount: int = 0
    payload: bytes = bytes()
    v: int = 0
    r: int = 0
    s: int = 0

@dataclass
class Transaction2930Payload:
    chain_id: int = 0
    signer_nonce: int = 0
    gas_price: int = 0
    gas_limit: int = 0
    destination: int = 0
    amount: int = 0
    payload: bytes = bytes()
    access_list: List[Tuple[int, List[int]]] = field(default_factory=list)
    signature_y_parity: bool = False
    signature_r: int = 0
    signature_s: int = 0

@dataclass
class Transaction2930Envelope:
    type: Literal[1] = 1
    payload: Transaction2930Payload = Transaction2930Payload()

@dataclass
class Transaction1559Payload:
    chain_id: int = 0
    signer_nonce: int = 0
    max_priority_fee_per_gas: int = 0
    max_fee_per_gas: int = 0
    gas_limit: int = 0
    destination: int = 0
    amount: int = 0
    payload: bytes = bytes()
    access_list: List[Tuple[int, List[int]]] = field(default_factory=list)
    signature_y_parity: bool = False
    signature_r: int = 0
    signature_s: int = 0

@dataclass
class Transaction1559Envelope:
    type: Literal[2] = 2
    payload: Transaction1559Payload = Transaction1559Payload()

Transaction2718 = Union[Transaction1559Envelope, Transaction2930Envelope]

Transaction = Union[TransactionLegacy, Transaction2718]

@dataclass
class NormalizedTransaction:
    signer_address: int = 0
    signer_nonce: int = 0
    max_priority_fee_per_gas: int = 0
    max_fee_per_gas: int = 0
    gas_limit: int = 0
    destination: int = 0
    amount: int = 0
    payload: bytes = bytes()
    access_list: List[Tuple[int, List[int]]] = field(default_factory=list)

@dataclass
class Block:
    parent_hash: int = 0
    uncle_hashes: Sequence[int] = field(default_factory=list)
    author: int = 0
    state_root: int = 0
    transaction_root: int = 0
    transaction_receipt_root: int = 0
    logs_bloom: int = 0
    difficulty: int = 0
    number: int = 0
    gas_limit: int = 0 # note the gas_limit is the gas_target * ELASTICITY_MULTIPLIER
    gas_used: int = 0
    timestamp: int = 0
    extra_data: bytes = bytes()
    proof_of_work: int = 0
    nonce: int = 0
    base_fee_per_gas: int = 0

@dataclass
class Account:
    address: int = 0
    nonce: int = 0
    balance: int = 0
    storage_root: int = 0
    code_hash: int = 0

INITIAL_BASE_FEE = 1000000000
INITIAL_FORK_BLOCK_NUMBER = 10 # TBD
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8
ELASTICITY_MULTIPLIER = 2

class World(ABC):
    def validate_block(self, block: Block) -> None:
        parent_gas_target = self.parent(block).gas_limit // ELASTICITY_MULTIPLIER
        parent_gas_limit = self.parent(block).gas_limit

        # on the fork block, don't account for the ELASTICITY_MULTIPLIER to avoid
        # unduly halving the gas target.
        if INITIAL_FORK_BLOCK_NUMBER == block.number:
            parent_gas_target = self.parent(block).gas_limit
            parent_gas_limit = self.parent(block).gas_limit * ELASTICITY_MULTIPLIER

        parent_base_fee_per_gas = self.parent(block).base_fee_per_gas
        parent_gas_used = self.parent(block).gas_used
        transactions = self.transactions(block)

        # check if the block used too much gas
        assert block.gas_used <= block.gas_limit, 'invalid block: too much gas used'

        # check if the block changed the gas limit too much
        assert block.gas_limit < parent_gas_limit + parent_gas_limit // 1024, 'invalid block: gas limit increased too much'
        assert block.gas_limit > parent_gas_limit - parent_gas_limit // 1024, 'invalid block: gas limit decreased too much'

        # check if the gas limit is at least the minimum gas limit
        assert block.gas_limit >= 5000

        # check if the base fee is correct
        if INITIAL_FORK_BLOCK_NUMBER == block.number:
            expected_base_fee_per_gas = INITIAL_BASE_FEE
        elif parent_gas_used == parent_gas_target:
            expected_base_fee_per_gas = parent_base_fee_per_gas
        elif parent_gas_used > parent_gas_target:
            gas_used_delta = parent_gas_used - parent_gas_target
            base_fee_per_gas_delta = max(parent_base_fee_per_gas * gas_used_delta // parent_gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR, 1)
            expected_base_fee_per_gas = parent_base_fee_per_gas + base_fee_per_gas_delta
        else:
            gas_used_delta = parent_gas_target - parent_gas_used
            base_fee_per_gas_delta = parent_base_fee_per_gas * gas_used_delta // parent_gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR
            expected_base_fee_per_gas = max(parent_base_fee_per_gas - base_fee_per_gas_delta, 0)
        assert expected_base_fee_per_gas == block.base_fee_per_gas, 'invalid block: base fee not correct'

        # execute transactions and do gas accounting
        cumulative_transaction_gas_used = 0
        for unnormalized_transaction in transactions:
            # Note: this validates transaction signature and chain ID which must happen before we normalize below since normalized transactions don't include signature or chain ID
            signer_address = self.validate_and_recover_signer_address(unnormalized_transaction)
            transaction = self.normalize_transaction(unnormalized_transaction, signer_address)

            signer = self.account(signer_address)

            signer.balance -= transaction.amount
            assert signer.balance >= 0, 'invalid transaction: signer does not have enough ETH to cover attached value'
            # ensure that the user was willing to at least pay the base fee
            assert transaction.max_fee_per_gas >= block.base_fee_per_gas

            # The first two of these four rules are implicit due to the next two rules
            # Prevent impossibly large numbers
            assert transaction.max_fee_per_gas < 2**256
            # Prevent impossibly large numbers
            assert transaction.max_priority_fee_per_gas < 2**256
            # The total must be the larger of the two
            assert transaction.max_fee_per_gas >= transaction.max_priority_fee_per_gas

            # the signer must be able to afford the transaction
            assert signer.balance >= transaction.gas_limit * transaction.max_fee_per_gas

            # priority fee is capped because the base fee is filled first
            priority_fee_per_gas = min(transaction.max_priority_fee_per_gas, transaction.max_fee_per_gas - block.base_fee_per_gas)
            # signer pays both the priority fee and the base fee
            effective_gas_price = priority_fee_per_gas + block.base_fee_per_gas
            signer.balance -= transaction.gas_limit * effective_gas_price
            assert signer.balance >= 0, 'invalid transaction: signer does not have enough ETH to cover gas'
            gas_used = self.execute_transaction(transaction, effective_gas_price)
            gas_refund = transaction.gas_limit - gas_used
            cumulative_transaction_gas_used += gas_used
            # signer gets refunded for unused gas
            signer.balance += gas_refund * effective_gas_price
            # miner only receives the priority fee; note that the base fee is not given to anyone (it is burned)
            self.account(block.author).balance += gas_used * priority_fee_per_gas

        # check if the block spent too much gas on transactions
        assert cumulative_transaction_gas_used == block.gas_used, 'invalid block: gas_used does not equal total gas used in all transactions'
        # TODO: verify account balances match block's account balances (via state root comparison)
        # TODO: validate the rest of the block

    def normalize_transaction(self, transaction: Transaction, signer_address: int) -> NormalizedTransaction:
        # legacy transactions
        if isinstance(transaction, TransactionLegacy):
            return NormalizedTransaction(
                signer_address = signer_address,
                signer_nonce = transaction.signer_nonce,
                gas_limit = transaction.gas_limit,
                max_priority_fee_per_gas = transaction.gas_price,
                max_fee_per_gas = transaction.gas_price,
                destination = transaction.destination,
                amount = transaction.amount,
                payload = transaction.payload,
                access_list = [],
            )
        # 2930 transactions
        elif isinstance(transaction, Transaction2930Envelope):
            return NormalizedTransaction(
                signer_address = signer_address,
                signer_nonce = transaction.payload.signer_nonce,
                gas_limit = transaction.payload.gas_limit,
                max_priority_fee_per_gas = transaction.payload.gas_price,
                max_fee_per_gas = transaction.payload.gas_price,
                destination = transaction.payload.destination,
                amount = transaction.payload.amount,
                payload = transaction.payload.payload,
                access_list = transaction.payload.access_list,
            )
        # 1559 transactions
        elif isinstance(transaction, Transaction1559Envelope):
            return NormalizedTransaction(
                signer_address = signer_address,
                signer_nonce = transaction.payload.signer_nonce,
                gas_limit = transaction.payload.gas_limit,
                max_priority_fee_per_gas = transaction.payload.max_priority_fee_per_gas,
                max_fee_per_gas = transaction.payload.max_fee_per_gas,
                destination = transaction.payload.destination,
                amount = transaction.payload.amount,
                payload = transaction.payload.payload,
                access_list = transaction.payload.access_list,
            )
        else:
            raise Exception('invalid transaction: unexpected number of items')

    @abstractmethod
    def parent(self, block: Block) -> Block: pass

    @abstractmethod
    def block_hash(self, block: Block) -> int: pass

    @abstractmethod
    def transactions(self, block: Block) -> Sequence[Transaction]: pass

    # effective_gas_price is the value returned by the GASPRICE (0x3a) opcode
    @abstractmethod
    def execute_transaction(self, transaction: NormalizedTransaction, effective_gas_price: int) -> int: pass

    @abstractmethod
    def validate_and_recover_signer_address(self, transaction: Transaction) -> int: pass

    @abstractmethod
    def account(self, address: int) -> Account: pass
```
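For readers who want to experiment with just the fee adjustment, the base fee update rule used in validate_block can be extracted into a standalone sketch (same logic and constants; post-fork blocks only — the function name is ours):

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # bounds each step to +/- 1/8 of the base fee

def next_base_fee(parent_base_fee_per_gas: int,
                  parent_gas_used: int,
                  parent_gas_target: int) -> int:
    """Expected base fee per gas of a child block (post-fork blocks only)."""
    if parent_gas_used == parent_gas_target:
        return parent_base_fee_per_gas
    if parent_gas_used > parent_gas_target:
        gas_used_delta = parent_gas_used - parent_gas_target
        delta = max(parent_base_fee_per_gas * gas_used_delta
                    // parent_gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR, 1)
        return parent_base_fee_per_gas + delta
    gas_used_delta = parent_gas_target - parent_gas_used
    delta = (parent_base_fee_per_gas * gas_used_delta
             // parent_gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    return max(parent_base_fee_per_gas - delta, 0)
```

A completely full parent block (gas_used == 2 * gas_target, given ELASTICITY_MULTIPLIER = 2) raises the base fee by exactly 1/8 (12.5%); an empty parent block lowers it by 1/8.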
Backwards Compatibility
Legacy Ethereum transactions will still work and be included in blocks, but they will not benefit directly from the new pricing system. This is due to the fact that upgrading from legacy transactions to new transactions results in the legacy transaction's gas_price entirely being consumed by both the base_fee_per_gas and the priority_fee_per_gas.
Block Hash Changing
The data structure that is passed into keccak256 to calculate the block hash is changing, and all applications that validate blocks or use the block hash to verify block contents will need to be adapted to support the new data structure (one additional item). If you only take the block header bytes and hash them you should still correctly get a hash, but if you construct a block header from its constituent elements you will need to add in the new one at the end.
GASPRICE
Prior to this change, GASPRICE represented both the ETH paid by the signer per gas for a transaction as well as the ETH received by the miner per gas. As of this change, GASPRICE now only represents the amount of ETH paid by the signer per gas, and the amount a miner was paid for the transaction is no longer accessible directly in the EVM.
Test Cases
TODO
Security Considerations
Increased Max Block Size/Complexity
This EIP will increase the maximum block size, which could cause problems if miners are unable to process a block fast enough as it will force them to mine an empty block. Over time, the average block size should remain about the same as without this EIP, so this is only an issue for short term size bursts. It is possible that one or more clients may handle short term size bursts poorly and error (such as out of memory or similar), so client implementations should make sure their clients can appropriately handle individual blocks up to max size.
Transaction Ordering
With most people not competing on priority fees and instead using a baseline fee to get included, transaction ordering now depends on individual client internal implementation details such as how they store the transactions in memory. It is recommended that transactions with the same priority fee be sorted by time the transaction was received to protect the network from spamming attacks where the attacker throws a bunch of transactions into the pending pool in order to ensure that at least one lands in a favorable position. Miners should still prefer higher tip transactions over those with a lower tip, purely from a selfish mining perspective.
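The tie-breaking rule recommended above can be sketched in a few lines. This is an illustrative mempool snippet, not part of the EIP; the transaction fields and values are made up for the example.

```python
# Order a pending pool by priority fee (highest first), breaking ties by
# arrival time (earliest first), as recommended above.
pending = [
    {"hash": "0xa", "priority_fee_per_gas": 2, "received_at": 5},
    {"hash": "0xb", "priority_fee_per_gas": 3, "received_at": 9},
    {"hash": "0xc", "priority_fee_per_gas": 2, "received_at": 1},
]

ordered = sorted(pending, key=lambda tx: (-tx["priority_fee_per_gas"], tx["received_at"]))
print([tx["hash"] for tx in ordered])  # ['0xb', '0xc', '0xa']
```

With this ordering, an attacker who floods the pool with equal-fee transactions gains no positional advantage over transactions that arrived earlier.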
Miners Mining Empty Blocks
It is possible that miners will mine empty blocks until such time as the base fee is very low and then proceed to mine half full blocks and revert to sorting transactions by the priority fee. While this attack is possible, it is not a particularly stable equilibrium as long as mining is decentralized. Any defector from this strategy will be more profitable than a miner participating in the attack for as long as the attack continues (even after the base fee reached 0). Since any miner can anonymously defect from a cartel, and there is no way to prove that a particular miner defected, the only feasible way to execute this attack would be to control 50% or more of hashing power. If an attacker had exactly 50% of hashing power, they would make no money from priority fee while defectors would make double the money from priority fees. For an attacker to turn a profit, they need to have some amount over 50% hashing power, which means they can instead execute double spend attacks or simply ignore any other miners which is a far more profitable strategy.
Should a miner attempt to execute this attack, we can simply increase the elasticity multiplier (currently 2x) which requires they have even more hashing power available before the attack can even be theoretically profitable against defectors.
ETH Burn Precludes Fixed Supply
Citation
Please cite this document as:
Vitalik Buterin, Eric Conner, Rick Dudley, Matthew Slipper, Ian Norden, Abdelhamid Bakhta, "EIP-1559: Fee market change for ETH 1.0 chain [DRAFT]," Ethereum Improvement Proposals, no. 1559, April 2019. [Online serial]. Available:. | https://eips.ethereum.org/EIPS/eip-1559 | CC-MAIN-2021-31 | en | refinedweb |
I would like to download a CSV file from a site on the internet, but I can't manage it, since I run into this error:
Error: iterator should return strings, not bytes (did you open the file in text mode?)
and I don't know how to open the file in text mode. I am using Python 3 and the urllib and csv libraries.
import csv
import urllib.request

url = ''
respuesta = urllib.request.urlopen(url)
archivo = csv.reader(respuesta)
for filas in archivo:
    print(filas)
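One common fix (a sketch, not part of the original question): `urllib.request.urlopen` returns a binary stream, so `csv.reader` receives bytes. Wrapping the response in `io.TextIOWrapper` decodes it to text first. Below, `io.BytesIO` stands in for the HTTP response so the snippet runs without a network connection, and the utf-8 encoding is an assumption; check the actual encoding of the file you download.

```python
import csv
import io

# Stand-in for urllib.request.urlopen(url), which also yields a binary stream.
# The CSV payload here is invented for the example.
respuesta = io.BytesIO(b"name,age\nAna,30\nLuis,25\n")

# Wrap the byte stream so csv.reader receives strings instead of bytes.
texto = io.TextIOWrapper(respuesta, encoding="utf-8")

archivo = csv.reader(texto)
for fila in archivo:
    print(fila)
# ['name', 'age']
# ['Ana', '30']
# ['Luis', '25']
```

With a real download, you would replace the `io.BytesIO(...)` line with `respuesta = urllib.request.urlopen(url)` and keep the `io.TextIOWrapper` wrapper unchanged.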
If you are using redux in your react native project then most probably you are using the redux logger library too. Redux logger is a very useful library that acts as a logger for redux, letting you keep an eye on every action and reducer.
Redux logger is required only when your react native app is in development. Because of security as well as performance concerns, it is better to remove the logs of redux logger when the app goes to production.
So how can you use Redux logger in development and remove it in production? You can use the if (process.env.NODE_ENV === 'development') condition to know whether the app is in production or development. Then you just need to declare your middlewares conditionally.
Go through the following code snippet to get the overall idea.
import {
  createStore,
  applyMiddleware,
  compose,
  combineReducers,
} from 'redux';
import thunk from 'redux-thunk';
import { createLogger } from 'redux-logger';
import AuthenticationReducer from '../reducers/AuthenticationReducer';

const appReducers = combineReducers({ AuthenticationReducer });

const rootReducer = (state, action) => appReducers(state, action);

const logger = createLogger();

let middleware = [];
if (process.env.NODE_ENV === 'development') {
  middleware = [...middleware, thunk, logger];
} else {
  middleware = [...middleware, thunk];
}

export default createStore(
  rootReducer,
  compose(applyMiddleware(...middleware))
);
2 thoughts on “How to Remove logs of Redux Logger in Production”
There is a big problem in your code. The redux logger is still inside your build because of the import statement. So your file size is blown up.
A better solution is:
if (process.env.NODE_ENV === 'development') {
  middleware = [...middleware, thunk, require('redux-logger').default];
}
Thank you for correcting Andreas. I will update the code. | https://reactnativeforyou.com/how-to-remove-logs-of-redux-logger-in-production/ | CC-MAIN-2021-31 | en | refinedweb |
telemetry

C programming interface for telemetrics-client

(C) 2017 Intel Corporation, CC-BY-SA-3.0

Manual section: 3
SYNOPSIS
#include "telemetry.h"
struct telem_ref { struct telem_record *record; };
int tm_create_record(struct telem_ref **t_ref, uint32_t severity, char *classification, uint32_t payload_version)
int tm_set_payload(struct telem_ref *t_ref, char *payload)
int tm_send_record(struct telem_ref *t_ref)
void tm_free_record(struct telem_ref *t_ref)
int tm_set_config_file(const char *c_file)
int tm_is_opted_in(void)
DESCRIPTION
The functions in the telemetry library facilitate the delivery of telemetry data to the telemprobd(1) service.

The function tm_set_payload() attaches the provided telemetry record data to the telemetry record. The current maximum payload size is 8192b.

The function tm_send_record() delivers the record to the local telemprobd(1) service.

The function tm_set_config_file() can be used to provide an alternate configuration path to the telemetry library.

tm_is_opted_in is a utility provided to check if the one time opt-in has been performed.
RETURN VALUES
All these functions return 0 on success, or a non-zero return value if an error occurred. The function tm_free_record() does not return any value.

tm_is_opted_in returns 1 when telemetry is opted in, otherwise 0.
In this series I've been looking at the Microsoft.FeatureManagement library (which is now open source on GitHub 🎉). This provides a thin layer over the .NET Core configuration system for adding feature flags to your application. But this library is new, and the feature flag concept is not - so what were people using previously?
I have a strong suspicion that for most people, the answer was one of the following:
- LaunchDarkly
- Some in-house, hand-rolled, feature-toggle system
Searching for "feature toggles" on NuGet gives 686 results, and searching for "feature flags" gives 886 results. I confess I've used none of them. I have always fallen squarely in the "roll-your-own" category.
On the face of it, that's not surprising. Basic feature flag functionality is pretty easy to implement, and as virtually every app has either external configuration values or a database connection of some sort, another dependency never felt necessary. That said, there's a big difference between basic on/off functionality, and staged, stable, gradual rollouts of features across your applications.
In this post I take a brief at a few of the alternatives to Microsoft.FeatureManagement, and describe their differences:
LaunchDarkly
LaunchDarkly is the real incumbent when it comes to feature toggles. They're a SaaS product that provide an API and UI for managing your feature flags, and SDKs in multiple languages (there's an open source SDK for .NET which supports .NET Standard 1.4+).
At the heart of LaunchDarkly are a set of feature flags, and a UI for managing them:
However the simplicity of that screenshot belies the multitude of configuration options available to you. Each feature flag:
- Can be a Boolean value (on/off), or a complex value like an int, string, or JSON, though I'd argue with the latter you're getting more into "general configuration" territory.
- Can be marked "temporary" or "permanent" to make it easy to filter and remove old temporary flags.
- Can have rich names and descriptions.
- Can vary between different environments (dev/staging/production).
- Can depend on other feature flags being enabled, so feature A is only enabled if feature B is enabled.
The values of the feature flags are all managed in the LaunchDarkly UI, which the SDK contacts to check flag values (caching where appropriate of course). The LaunchDarkly server typically pushes any changes to feature flags from their server to your app, rather than the SDK periodically polling, so you get updates quickly. Your code for checking a Boolean feature flag with the SDK would look something like this:
// Create a LaunchDarkly user for segmentation
var user = User.WithKey(username);

// Use the singleton instance of the LaunchDarkly client _ldClient,
// providing the feature flag name, the user, and a default value in case there's an error
bool showFeature = _ldClient.BoolVariation("your.feature.key", user, false);
if (showFeature)
{
    // application code to show the feature
}
else
{
    // the code to run if the feature is off
}
This snippet highlights the user-segment feature - this ensures that feature flags are stable per-user, a problem I described in the previous post of this series. They also provide features for doing A/B/n testing, metrics for which features have been used (and by which users) and various other features. The SDK documentation is also really good.
Which finally brings us to the one downside - it isn't free! They have a free trial, and a basic plan for $25 a month, but prices jump to $325+ per month from there. You'll be paying based on the number of servers you have, the number of developers (UI-users) you have, as well as the number of active customers you have. It really does seem like a great product, but that comes at a cost, so it depends where your priorities lie.
RimDev.AspNetCore.FeatureFlags
RimDev.AspNetCore.FeatureFlags caught my eye as I was looking through NuGet packages, as it's from the team at RIMdev who have various open source projects like Stuntman. This library feels like a perfect example of the case I described at the start of the post - a project that started as in-house solution to a problem.
One of the interesting approaches used in the library is that features are defined using strongly-typed classes (rather than the magic strings that are often used), with the feature objects injected directly into your services. For example, you might create a feature MyFeature. To check if it's enabled, you inject it into your service, and check feature.Value:
public class MyFeature : Feature
{
    // Optional, displays on UI:
    public override string Description { get; } = "My feature description.";
}

public class MyController : Controller
{
    private readonly MyFeature _feature;

    public MyController(MyFeature myFeature)
    {
        _feature = myFeature;
    }

    [HttpGet]
    public IActionResult OnGet()
    {
        if (_feature.Value)
        {
            // feature is enabled
        }
    }
}
RimDev.FeatureFlags uses an IFeatureProvider interface to get and update Feature instances. The library includes a single implementation, which uses SqlClient to store feature values in a SQL Server database. The library also includes a simple UI for enabling and disabling feature flags:
Overall this is a pretty basic library, and lacks some of the dynamic features of other options, but if basic is all you need, then why go for complex!
Moggles
Moggles is a recently open-sourced project that was pointed out to me by Jeremy Bailey, one of the maintainers. Moggles follows a similar architecture to LaunchDarkly, where you have a server component that manages the feature toggles, and a client SDK that looks up the values of feature flags in your application.
The server component has a UI that allows providing descriptions for feature flags, and supports multiple environments (desv/staging/production). It also includes the LaunchDarkly feature of marking feature flags as temporary vs permanent for filtering purposes. It can similarly integrate with a RabbitMq cluster to ensure updates to feature toggles are pushed out to applications without requiring the apps to poll for changes.
This project also grew out of an internal need, and that's relatively evident in the technologies used. The server currently only supports Microsoft SQL Server, and uses Windows Authentication with role-based authorization. If Moggles looks interesting but you have other requirements, maybe consider contributing, I'm sure they'd love the support.
Esquio
Esquio is an open source project on GitHub from Xabaril (creators of the excellent BeatPulse library). It was suggested to me by one of the maintainers of the project, Unai Zorrila Castro. It looks very interesting, is targeting .NET Core 3.0, and has a lot of nice features.
The basic API for Esquio is similar to the Microsoft.FeatureManagement library, but with a couple of key differences:
- The API is async, unlike the synchronous API of Microsoft.FeatureManagement. So instead of IsEnabled, you have IsEnabledAsync.
- You can have multiple stores for your feature flag configuration. IConfiguration is an option, but there's also an EF Core store if you wish to store your feature flag configuration in the database instead. Or you could write your own store implementation!
Esquio provides the same nice hooks into the ASP.NET Core infrastructure as Microsoft.FeatureManagement, like [FeatureFilter] attributes for hiding controllers or actions based on a flag's state; various fall-back options when an action is disabled; and Tag Helpers for conditionally showing sections of UI. As it's built on .NET Core 3.0, Esquio also allows you to attach feature filters directly to an endpoint too.
One interesting feature described in the docs is the ability to use the [FeatureFilter] attribute as an action constraint, so you can conditionally match an action based on whether a feature is enabled:
[ActionName("Detail")] // Same ActionName on both methods
public IActionResult DetailWhenFlagsIsNotActive()
{
    return View();
}

[FeatureFilter(Names = Flags.MinutesRealTime)] // Acts as an action constraint
[ActionName("Detail")]
public IActionResult DetailWhenFlagsIsActive()
{
    return View();
}
Esquio also includes the equivalent of Microsoft.FeatureManagement's feature filters, for dynamically controlling whether features are enabled based on the current user, for example. In Esquio, they're called toggles, but they're a very similar concept. One of the biggest differences is how many toggles Esquio comes with out of the box:
- OnToggle / OffToggle - fixed Boolean on/off
- UserNameToggle - enable the feature for a fixed set of users
- RoleNameToggle - enable the feature for users in one of a set of roles
- EnvironmentToggle - enable the feature when running in a given environment
- FromToToggle - a windowing toggle to enable features for fixed time windows
- ClaimValueToggle - enable the feature if a user has a given claim with one of an allowed set of values
- GradualRolloutUserNameToggle - roll out to a percentage of users, using a stable hash function (the Jenkins hash function) based on the username. There are similar gradual rollout toggles based on the value of a particular claim, the value of a header, or the ASP.NET Core Session ID.
As you'd expect, you're free to create your own custom Toggles too.
The gradual rollout toggles in particular are interesting, as they remove the need for the ISessionManager required by Microsoft.FeatureManagement to ensure consistency between requests for the PercentageFilter.
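To make the idea behind a gradual-rollout toggle concrete, here is a language-agnostic sketch in Python. This is not Esquio's actual code (Esquio is C# and uses the Jenkins hash; this sketch substitutes SHA-256), but it shows the key property: hashing a stable user identifier into a bucket means the same user always gets the same answer, with no per-user state stored anywhere.

```python
import hashlib

def in_rollout(username: str, percentage: float) -> bool:
    """Return True if this user falls inside a `percentage` (0-100) rollout.

    The hash of the username is deterministic, so a given user is either
    always in or always out for a given percentage.
    """
    digest = hashlib.sha256(username.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10_000  # bucket in 0..9999
    return bucket < percentage * 100  # e.g. 25% -> buckets 0..2499

print(in_rollout("alice", 100))  # True: everyone is in a 100% rollout
print(in_rollout("alice", 0))    # False: nobody is in a 0% rollout
```

Raising the percentage only ever adds users to the rollout (their bucket doesn't change), which is what makes a staged rollout stable as it widens.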
Esquio also includes a similar feature to LaunchDarkly where you can make the feature flag state available to SPA/mobile clients by including an endpoint for querying features:
app.UseEndpoints(routes => { routes.MapEsquio(pattern: "esquio"); });
On top of that, there's a UI for managing your feature flags! I haven't tried running that yet, but it's next on my list.
But wait! There's more!
There are even docs about how to integrate rolling out your feature flags as part of a release using Azure DevOps. If you integrate feature flags fully into your release pipeline, you can use canary releases that are only used by a few users, before increasing the percentage and enabling the feature across the board.
All in all I'm very impressed with the Esquio library. If you're already working with .NET Core 3.0 previews then it's definitely worth taking a look at if you need feature toggle functionality.
Summary
The Microsoft.FeatureManagement library is intended to act as a thin layer over the Microsoft.Extensions.Configuration APIs, and as such it has certain limitations. It's always a good idea to look around and see what the other options are before committing to a library, and to try and understand the limitations of your choices. If money is no object, you can't go wrong with LaunchDarkly - they are well known in the space, and have a broad feature set. Personally, I'm very interested in Esquio as a great open-source alternative.
Learning Java/Decision Structures
Intro
Control flow blocks are blocks of code that control the flow of execution: which statements are executed, and when. This can be useful if you only want certain areas of your code to be executed under a given condition.
There are two major types of decision structures: conditionals and loops. Code contained in a conditional block may or may not be executed; the decision is made at runtime based on a given condition. Code contained in a loop is repeated; how many times is also decided at runtime.
The boolean Primitive

Java has a specific kind of variable for determining whether something is true or false, called a boolean. A boolean type can be defined with a boolean literal (true or false), or by evaluating an expression with a boolean operator.
Boolean Operators
These can be used to check if something is true or false. Most take numeric variables (byte, short, int, long, float, or double) and evaluate to a boolean. Note that the results are themselves boolean values. Here is a list of all boolean operators:

== (equal to), != (not equal to), < (less than), <= (less than or equal to), > (greater than), >= (greater than or equal to) - take two numeric values and evaluate to a boolean
&& (and), || (or), ! (not) - take boolean values and evaluate to a boolean
As you can see above, and, or, and not are the only operators that work on booleans. These, essentially, create new booleans. For example, if you said:
boolean b1 = false;
boolean b2 = true;
boolean b3 = b1 || b2;
boolean b4 = b1 && b2;
In this example, b3 would be true (because at least one of b1 and b2 is true) and b4 would be false (because b1 and b2 are not both true).
Finally, you can do:
int i1 = 5;
int i2 = 5;
int i3 = 7;
int i4 = 3;
boolean b1 = i1 < i2; // i1 is not less than i2 (both are 5), so b1 is false
boolean b2 = i1 == i2 && i3 > i2; // b2 is true because i1 is equal to i2, and i3 is greater than i2
boolean b3 = i4 < i3 && i1 > i3; // b3 is false - i4 is less than i3, but i1 is NOT greater than i3
boolean b4 = i4 < i3 || i1 > i3; // b4 is true because i4 < i3. Only one is needed because it is an ||
boolean b5 = (i4 < i3 && i3 < i2) || i3 > i1; // See below for info on parentheses
boolean b6 = !true; // This is false because !true is false, and !false is true.
Expressions within parentheses are always evaluated first, from inside to outside. Ample use of parentheses can greatly clarify your intent. For example, consider the following code:
boolean a = true;
boolean b = true;
boolean c = false;
boolean result = a || b && c;
What value is stored in result? If the or is evaluated first, then the result is false, because the code would evaluate to

a || b && c
true || true && false
true && false
false
If the and is evaluated first, though, the result is true:

a || b && c
true || true && false
true || false
true
There are rules governing which operators are evaluated first. You could remember that logical and has a higher precedence than logical or (that is, and is evaluated first), you could look it up, or you could just use parentheses! Then you don't have to bother memorizing operator precedence, or waste time checking a table.
Besides, what if you wanted the or evaluated first? You're sunk... or you could add parentheses to ensure that the or is first:
boolean result = (a || b) && c;
This will force the or to evaluate first, and be sure that anyone reading your code will know what you mean.
You will learn below how these boolean operators can be used inside control flow blocks.
If/Else
You have seen this statement used in the previous section:
if (ourShip.speed < 10) ourShip.mainThrustPulse(2);
"ourShip.speed<10" is a boolean. If it is true, then it will do what comes after. Note that ourShip.speed and "10" are not booleans. The < makes a boolean if the speed is less than 10. If you did the following:
if (true) ourShip.mainThrustPulse(2);
Guess what - it would always execute the following statement. Note that this form can only be used for one statement. If you want to execute multiple statements for a given condition you can do:
if (CONDITION) { //THEN...
    //DO SOMETHING
    //DO SOMETHING2
}
Everything within { and } is executed if CONDITION is true. Now, suppose you want multiple if statements. You could have separate if statements; however, if the conditions are exclusive (meaning that one or the other can be true but never more than one) then you can do:
if (CONDITION) {
    //DO SOMETHING
} else if (SECOND CONDITION) {
    //DO SOMETHING DIFFERENT
} else if (ANOTHER CONDITION) {
    //DO SOMETHING ELSE
}
To execute a block of code when the condition of the if statement is false, the following code can be used:
if (CONDITION) {
    //DO SOMETHING
} else {
    //DO SOMETHING ELSE
}
Back to our ship. Let's make an if/else block that does: "If the ship's speed is less than 10, fire 2 pulses. If it is less than 15, fire one. Otherwise, lower the speed by firing backwards (-1)"
if (ourShip.speed < 10) {
    ourShip.mainThrustPulse(2);
} else if (ourShip.speed < 15) {
    ourShip.mainThrustPulse(1);
} else {
    ourShip.mainThrustPulse(-1); //Slow down
}
Answer:
if (i1 == i2) {
    System.out.println("Equal");
} else {
    System.out.println("Not equal");
}
Some examples of the usage of else/if:
public class Examples {
    public static void main (String[] args) {
        int i1 = 7;
        //1. Check if an integer is equal to 1
        if (i1 == 1) {
            System.out.println("Step 1 - Equal");
        } else {
            System.out.println("Step 1 - Not Equal");
        }
        //2. Check if an integer is equal to 6 then 7
        if (i1 == 6) {
            System.out.println("Step 2 - Equal #1");
        } else if (i1 == 7) {
            System.out.println("Step 2 - Equal #2");
        } else {
            System.out.println("Step 2 - Not Equal");
        }
    }
}
If you compile and run this, the result should be:
Step 1 - Not Equal
Step 2 - Equal #2
Note that any code for the "Ship" cannot be executed because that class does not exist and neither does the value "speed". You can make it a class, though, when you learn about Classes and Objects.
Switch
There are times in which you wish to check for a number of conditions, and to execute different code depending on the condition. One way to do this is with if/else logic, such as the example below:
int x = 1;
int y = 2;
if (SOME_INT == x) {
    //DO SOMETHING
} else if (SOME_INT == y) {
    //DO SOMETHING ELSE
} else {
    //DEFAULT CONDITION
}
This works; however, another structure exists which allows us to do the same thing. Switch statements allow the programmer to execute certain blocks of code depending on exclusive conditions. The example below shows how a switch statement can be used:
final int x = 1; // case labels must be compile-time constants, so x and y are declared final
final int y = 2;
switch (SOME_INT) {
    case x:
        method1(SOME_INT);
        break;
    case y:
        method2(SOME_INT);
        break;
    default:
        method3();
        break;
}
Switch takes a single parameter, which can be either an integer or a char. In this case the switch statement is evaluating SOME_INT, which is presumably an integer. When the switch statement is encountered SOME_INT is evaluated, and when it is equal to x or y, method1 and method2 are executed, respectively. The default case executes if SOME_INT does not equal x or y, in this case method3 is executed. You may have as many case statements as you wish within a switch statement.
Notice in the example above that "break" is listed after each case. This keyword ensures that execution of the switch statement stops immediately, rather than continuing to the next case. Were the break statement not included, "fall-through" would occur and execution would continue into the statements of the next case.
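To see fall-through in action, compare what happens when a break is omitted. The class name and values below are just for illustration:

```java
public class FallThroughDemo {
    public static void main(String[] args) {
        int day = 6;
        switch (day) {
            case 6:
                System.out.println("Saturday");
                // no break here, so execution falls through into case 7
            case 7:
                System.out.println("Sunday");
                break; // this break stops the switch
            default:
                System.out.println("Weekday");
        }
        // prints "Saturday" followed by "Sunday"
    }
}
```

Had the first case ended with a break, only "Saturday" would have been printed.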
While Loops
Loops will execute a block of code while a given condition is true. Basically, that block of code is executed until the condition is false. The condition is checked before the block of code is executed each time. The basic syntax:
while (CONDITION) {
    //DO SOMETHING
}
So, you can guess that:
while (true) {
    //DO SOMETHING
}
will execute the code within the while loop forever. The statement within the ( and ) is always a boolean condition.
Do..While Loops
A do..while loop is similar to a while loop with one major difference, which is that the condition is checked after the loop executes. This means that the block of code in the body of the do..while loop will always execute at least one time. The code below is an example of a do..while loop:
do {
    //DO SOMETHING
} while (CONDITION);
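A small runnable example of the "body runs at least once" behaviour (the class name and values are chosen for illustration):

```java
public class DoWhileDemo {
    public static void main(String[] args) {
        int attempts = 0;
        do {
            attempts++;              // body runs before the condition is checked
        } while (attempts < 3);
        System.out.println(attempts); // prints 3

        int n = 10;
        do {
            n++;                     // runs once even though n < 5 is already false
        } while (n < 5);
        System.out.println(n);        // prints 11
    }
}
```

The second loop is the interesting one: a plain while loop with the same condition would never run its body, but the do..while body executes once before the condition is ever tested.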
Exercise 1: Design a while block that executes the if/else if/else code for the ship in the If/Else Statements section forever.
Exercise 2: Change the above so that it stops after 10 times. You will need to know the < operator: using the if statement, "if(SOMETHING < SOMETHING2) {...}" is valid. Basically, if s is less than s2 the next part will execute. You should use int variables for this exercise.
Answer to exercise 2:
int times = 0;
while (times < 10) {
    //Put if/else code here...
    times++; //Equivalent of times = times + 1;
}
For Loop
A for loop is a neat package for executing some statements a set number of times. The syntax is fairly simple:
for (VARIABLE INITIALIZATION; CONDITIONAL CHECK; INCREMENT/DECREMENT) {
    //statements to be executed
}
The code below is an example of a for loop:
for (int i = 0; i < 10; i++) {
    System.out.println(i);
}
In this example, we declare an integer named i and initialize it to 0 in the initialization step. In the conditional test, we check to see that the value of i is strictly less than 10. Finally, the increment step says i++, which is a shorthand for i=i+1, or add 1 to i. Similar to the while loop the condition is checked every time before the body of the loop is executed. Think about this code and try to determine what the output will be. Will this execute when the value of i equals 10? The output is shown below:
0
1
2
3
4
5
6
7
8
9
Notice that "10" is not included in the output. This is because the conditional check is i < 10. When i equals 10 the condition is no longer true, and the loop will not execute.
Foreach Loops
There is another kind of for loop, known as a foreach loop, which has been more recently introduced to Java and is made for iteration over a collection. It uses any type of list structure as its base, including arrays and certain types of Collections. Collections are data structures with certain properties that make manipulating large amounts of data easy. In the example below we use an ArrayList which is similar to a normal array, except that it can grow dynamically and it has a number of methods (such as add(), remove(), get(), etc) to make it easier to work with than a normal array. See the example below:
int[] integerArray = {2,3,4,5}; // an integer array is a basic list of integers
ArrayList<Integer> listOfNumbers = new ArrayList<Integer>(); //Don't worry about this syntax now

for (int i : integerArray) {
    i = i + 1;
    listOfNumbers.add(i);
}

//integerArray now equals [2,3,4,5]
//listOfNumbers now equals [3,4,5,6]
In this example we are iterating an array of integers named integerArray. The for loop will run 4 times, each time with "i" equaling a different value in the array. When the items in the list structure (see the example above) are primitive values (byte, int, long, float, double, boolean, char...) the value of "i" will just be a copy. This is why the values inside of integerArray are not changed even though "i" is changed inside the loop.
If, however, the values in the list are references, they will be directly under the influence of any changes occurring inside the loop as in this example:
//Note, this code will not compile without a working Ship class
ArrayList<Ship> listOfShips = new ArrayList<Ship>(); //Don't worry about this syntax now
listOfShips.add(new Ship(Color.RED));
listOfShips.add(new Ship(Color.BLUE));
listOfShips.add(new Ship(Color.GREEN));

ArrayList<Ship> newListOfShips = new ArrayList<Ship>(); //Don't worry about this syntax now

for (Ship shipToChange : listOfShips) {
    shipToChange.setColor(Color.BLACK);
    newListOfShips.add(shipToChange);
}

//all ships in listOfShips now have color black
//all ships in newListOfShips now have color black
In this example we start out with a list named listOfShips holding three "ships" all of different colors. The for loop iterated through the list and changed each ship's color to black, then it added the ship to another list called newListOfShips. Since shipToChange is a reference to an object rather than a copy of it, changes made through it affect the items in listOfShips.
Modern sites often combine all of their JavaScript into a single, large main.js script. This regularly contains the scripts for all your pages or routes, even if users only need a small portion for the page they're viewing.
When JavaScript is served this way, the loading performance of your web pages can suffer – especially with responsive web design on mobile devices. So let's fix it by implementing JavaScript code splitting.
What problem does code splitting solve?
When a web browser sees a <script> tag, it needs to spend time downloading and processing the JavaScript you're referencing. This can feel fast on high-end devices, but loading, parsing and executing unused JavaScript code can take a while on average mobile devices with a slower network and slower CPU. If you've ever had to log on to coffee-shop or hotel WiFi, you know slow network experiences can happen to everyone.
Each second spent waiting on JavaScript to finish booting up can delay how soon users are able to interact with your experience. This is particularly the case if your UX relies on JS for critical components or even just attaching event handlers for simple pieces of UI.
Do I need to bother with code splitting?
It is definitely worth asking yourself whether you need to code-split (if you've used a simple website builder, you probably don't). If your site requires JavaScript for interactive content (features like menu drawers and carousels) or is a single-page application relying on JavaScript frameworks to render UI, the answer is likely 'yes'. Whether code splitting is worthwhile for your site is a question you'll need to answer yourself. You understand your architecture and how your site loads best. Thankfully, there are tools available to help you here.
Get help
For those new to JavaScript code splitting, Lighthouse – the Audits panel in Chrome Developer Tools – can help shine a light on whether this is a problem for your site. The audit you'll want to look for is Reduce JavaScript Execution Time (documented here). This audit highlights all of the scripts on your page that can delay a user interacting with it.
PageSpeed Insights is an online tool that can also highlight your site's performance – and includes lab data from Lighthouse and real-world data on your site performance from the Chrome User Experience Report. Your web hosting service may have other options.
Code coverage in Chrome Developer Tools
If it looks like you have costly scripts that could be better split, the next tool to look at is the Code Coverage feature in the Chrome Developer Tools (DevTools > top-right menu > More tools > Coverage). This measures how much unused JavaScript (and CSS) is in your page. For each script summarised, DevTools will show the 'unused bytes'. This is code you can consider splitting out and lazy-loading when the user needs it.
The different kinds of code splitting
There are a few different approaches you can take when it comes to code splitting JavaScript. How much these apply to your site tends to vary depending on whether you wish to split up page/application 'logic' or split up libraries/frameworks from other 'vendors'.
Dynamic code splitting: Many of us 'statically' import JavaScript modules and dependencies so that they are bundled together into one file at build time. 'Dynamic' code splitting adds the ability to define points in your JavaScript that you would like to split and lazy-load as needed. Modern JavaScript uses the dynamic import() statement to achieve this. We'll cover this more shortly.
Vendor code splitting: The frameworks and libraries you rely on (e.g. React, Angular, Vue or Lodash) are unlikely to change in the scripts you send down to your users, often as the 'logic' for your site. To reduce the negative impact of cache invalidation for users returning to your site, you can split your 'vendors' into a separate script.
Entry-point code splitting: Entries are starting points in your site or app that a tool like Webpack can look at to build up your dependency tree. Splitting by entries is useful for pages where client-side routing is not used or you are relying on a combination of server and client-side rendering.
For our purposes in this article, we'll be concentrating on dynamic code splitting.
Get hands on with code splitting
Let's optimise the JavaScript performance of a simple application that sorts three numbers through code splitting – this is an app by my colleague Houssein Djirdeh. The workflow we'll be using to make our JavaScript load quickly is measure, optimise and monitor. Start here.
Measure performance
Before attempting to add any optimisations, we're first going to measure the performance of our JavaScript. As the magic sorter app is hosted on Glitch, we'll be using its coding environment. Here's how to go about it:
- Click the Show Live button.
- Open the DevTools by pressing CMD+OPTION+i / CTRL+SHIFT+i.
- Select the Network panel.
- Make sure Disable Cache is checked and reload the app.
This simple application seems to be using 71.2 KB of JavaScript just to sort through a few numbers. That certainly doesn't seem right. In our source src/index.js, the Lodash utility library is imported and we use sortBy – one of its sorting utilities – in order to sort our numbers. Lodash offers several useful functions but the app only uses a single method from it. It's a common mistake to install and import all of a third-party dependency when in actual fact you only need to use a small part of it.
Optimise your bundle
There are a few options available for trimming our JavaScript bundle size:
- Write a custom sort method instead of relying on a third-party library.
- Use Array.prototype.sort(), which is built into the browser.
- Only import the sortBy method from Lodash instead of the whole library.
- Only download the code for sorting when a user needs it (when they click a button).
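A quick aside on option 2: Array.prototype.sort() compares values as strings unless you pass a comparator, so a numeric sort needs one. A minimal sketch:

```javascript
// Option 2: the built-in sort. Without a comparator, sort() compares
// values as strings, so [10, 2, 1] would come out as [1, 10, 2].
const values = [10, 2, 1];
const sortedValues = [...values].sort((a, b) => a - b);
console.log(sortedValues); // [ 1, 2, 10 ]
```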
Options 1 and 2 are appropriate for reducing our bundle size – these probably make sense for a real application. For teaching purposes, we're going to try something different. Options 3 and 4 help improve the performance of the application.
Only import the code you need
We'll modify a few files to only import the single sortBy method we need from Lodash. Let's start with replacing our lodash dependency in package.json:
"lodash": "^4.7.0",
with this:
"lodash.sortby": "^4.7.0",
In src/index.js, we'll import this more specific module:
import "./style.css";
import sortBy from "lodash.sortby"; // replaces: import _ from "lodash";
Next, we'll update how the values get sorted:
form.addEventListener("submit", e => {
  e.preventDefault();
  const values = [input1.valueAsNumber, input2.valueAsNumber, input3.valueAsNumber];
  const sortedValues = sortBy(values); // previously: _.sortBy(values)
  results.innerHTML = ` <h2> ${sortedValues} </h2> `;
});
Reload the magic numbers app, open up Developer Tools and look at the Network panel again. For this specific app, our bundle size was reduced by a factor of four with little work. But there's still much room for improvement.
JavaScript code splitting
Webpack is one of the most popular JavaScript module bundlers used by web developers today. It 'bundles' (combines) all your JavaScript modules and other assets into static files web browsers can read.
The single bundle in this application can be split into two separate scripts:
- One is responsible for code making up the initial route.
- Another one contains our sorting code.
Using dynamic imports (with the import() keyword), a second script can be lazy-loaded on demand. In our magic numbers app, the code making up the script can be loaded as needed when the user clicks the button. We begin by removing the top-level import for the sort method in src/index.js:
import sortBy from "lodash.sortby";
Import it within the event listener that fires when the button is clicked:
form.addEventListener("submit", e => {
  e.preventDefault();
  import('lodash.sortby')
    .then(module => module.default)
    .then(sortInput())
    .catch(err => { alert(err) });
});
This dynamic import() feature we're using is part of a standards-track proposal for including the ability to dynamically import a module in the JavaScript language standard. Webpack already supports this syntax. You can read more about how dynamic imports work in this article.
The import() statement returns a Promise when it resolves. Webpack treats this as a split point that it will break out into a separate script (or chunk). Once the module is returned, module.default is used to reference the default export provided by lodash.sortby. The Promise is chained with another .then() calling a sortInput method to sort the three input values. At the end of the Promise chain, .catch() is called to handle the case where the Promise is rejected as the result of an error.
In a real production application, you should handle dynamic import errors appropriately. Simple alert messages like the one used here may not provide the best user experience for letting users know something has gone wrong.
In case you see a linting error like "Parsing error: import and export may only appear at the top level", know that this is due to the dynamic import syntax not yet being finalised. Although Webpack supports it, the settings for ESLint (a JavaScript linting tool) used by Glitch have not been updated to include this syntax yet, but it does still work.
The last thing we need to do is write the sortInput method at the end of our file. This has to be a function returning a function that takes in the imported method from lodash.sortBy. The nested function can sort the three input values and update the DOM:
const sortInput = () => {
  return (sortBy) => {
    const values = [
      input1.valueAsNumber,
      input2.valueAsNumber,
      input3.valueAsNumber
    ];
    const sortedValues = sortBy(values);
    results.innerHTML = ` <h2> ${sortedValues} </h2> `;
  };
}
Monitor the numbers
Now let's reload the application one last time and keep a close eye on the Network panel. You should notice that only a small initial bundle is downloaded when the app loads. After the button is clicked to sort the input numbers, the script/chunk containing the sorting code gets fetched and executed. Do you see how the numbers still get sorted as we would expect them to?
JavaScript code splitting and lazy-loading can be very useful for trimming down the initial bundle size of your app or site. This can directly result in faster page load times for users. Although we've looked at adding code splitting to a vanilla JavaScript application, you can also apply it to apps built with libraries or frameworks.
Lazy-loading with a JavaScript library or framework
A lot of popular frameworks support adding code splitting and lazy-loading using dynamic imports and Webpack.
Here's how you might lazy-load a movie 'description' component using React (with React.lazy() and their Suspense feature) to provide a "Loading…" fallback while the component is being lazy-loaded in (see here for some more details):
import React, { Suspense } from 'react';

const Description = React.lazy(() => import('./Description'));

function App() {
  return (
    <div>
      <h1>My Movie</h1>
      <Suspense fallback="Loading...">
        <Description />
      </Suspense>
    </div>
  );
}
Code splitting can help reduce the impact of JavaScript on your user experience. Definitely consider it if you have larger JavaScript bundles and when in doubt, don't forget to measure, optimise and monitor.
This article was originally published in issue 317 of net, the world's best-selling magazine for web designers and developers.
A wind turbine wake modeling software
Project description
Further documentation is available at.
For questions regarding FLORIS, please contact Jen Annoni, Paul Fleming, or Rafael Mudafort, or join the conversation on our Slack team.
Dependencies
The following packages are used in FLORIS
- Python3
- NumPy v1.12.1
- SciPy v0.19.1
- matplotlib v2.1.0
- pytest v3.3.1 (optional)
- Sphinx v1.6.6 (optional)
After installing Python3, the remaining required dependencies can be installed with pip referencing the requirements list using this command:
pip install -r requirements.txt
Installation
Using pip, FLORIS can be installed in two ways
- local editable install
- using a tagged release version from the pip repo
For consistency between all developers, it is recommended to use Python virtual environments; this link provides a great introduction. Using virtual environments in a Jupyter Notebook is described here.
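A minimal sketch of that recommendation, assuming python3 is on your PATH (the environment name is arbitrary):

```shell
# Create and activate an isolated environment for FLORIS
python3 -m venv floris-env
. floris-env/bin/activate
# then install FLORIS into it, e.g. "pip install -e FLORIS/" or "pip install floris"
```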
Local editable installation
The local editable installation allows developers maintain an importable instance of FLORIS while continuing to extend it. The alternative is to constantly update python paths within the package to match the local environment.
Before doing the local install, the source code repository must be cloned directly from GitHub:
git clone
Then, using the local editable installation is as simple as running the following command from the parent directory of the cloned repository:
pip install -e FLORIS/
Finally, test the installation by starting a python terminal and importing FLORIS:
import floris
pip repo installation
The Floris version available through the pip repository is always the latest tagged and released version. This version represents the most recent stable, tested, and validated code.
In this case, there is no need to download the source code directly. FLORIS and its dependencies can be installed with:
pip install floris
Executing FLORIS
floris is an importable package and should be driven by a custom script. We have provided an example driver script at example/example_script.py and a Jupyter notebook detailing a real world use case at example/FLORIS_Run_Notebook.ipynb.
Generally, a Floris class should be instantiated with a path to an input file as the sole argument:
Floris("path/to/example_input.json")
Then, driver programs can calculate the flow field, produce flow field plots, and incorporate the wake estimation into an optimization routine or other functionality.
quill-d 0.1.4
Simple, unobtrusive data access library for the D programming language.
To use this package, put the following dependency into your project's dependencies section:
Quill.d
For more information visit the [API docs here]().
Quill.d is a data access library for the D programming language that sits on top of DDBC. After getting set up you'll be able to write plain SQL in a file or a string, and run it. Quill.d embraces SQL as a language and does not attempt to abstract the database away. As it turns out, SQL is a pretty good language in which to query a database.
Here are a few high level examples:
Fetch some records
auto models = database.list!(Model, "list.sql")();
where list.sql contains
select * from models;
Fetch a single record
auto model = database.single!(Model, "select.sql")(Variant(4));
where select.sql contains
select * from models where id = ?;
Insert a record
database.execute!(Model, "insert.sql")(Variant("name"));
where insert.sql contains
insert into models(name) values(?);
Getting Started
{
    ...
    "dependencies": {
        "quill-d": "~>0.1.4"
    }
}
Specify a database configuration to use.
PostgreSQL
Add the PostgreSQL configuration to dub.json:
{
    ...
    "subConfigurations": {
        "quill-d": "PostgreSQL"
    }
}
Install PostgreSQL Client
If you don't already have it, install the PostgreSQL client
sudo apt-get install postgresql-client
On Linux you may get the error "cannot find -lpq". The linker is having trouble finding the client library. To fix this, you can add a symlink like this:
ln -s /usr/lib/libpq.so.5 /usr/lib/libpq.so
Create a new PostgreSQL client:
import quill;

auto database = new Database("127.0.0.1", to!(ushort)(54320), "testdb", "admin", "password", true);
MySQL
Add the MySQL configuration to dub.json:
{
    ...
    "subConfigurations": {
        "quill-d": "MySQL"
    }
}
Create a new MySQL client:
import quill;

auto database = new Database("127.0.0.1", to!(ushort)(33060), "testdb", "admin", "password");
SQLite
Add the SQLite configuration to dub.json:
{
    ...
    "subConfigurations": {
        "quill-d": "SQLite"
    }
}
Install SQLite3
If you don't already have it, install SQLite3
sudo apt-get install sqlite3 libsqlite3-dev
Create a new SQLite client:
import quill;

auto database = new Database("/path/to/db.sqlite3");
Specify String Import Path
Quill.d uses string imports to run SQL statements in files embedded in the compiled binary. The paths must be added to dub.json to allow those files to be imported as strings.
{
    ...
    "stringImportPaths": ["queries"]
}
SQL queries can now be imported and run relative to the queries directory like this:
database.execute!("statement.sql")();
Running Tests
The test suite is a collection of integration tests that actually runs SQL in all of the supported databases. Other than SQLite, you'll have to have a database to connect to. If you do not have a database, you can use Database Quickstart to spin up a server for each supported database. The connection details are in the test here. Once there is a database to connect to, the test suite can be run in all of the supported databases from the command line like this:
dub test
Query Types and Parameters
There are a bunch of overloads that can handle various kinds of queries. They are divided up by the expected return type.
| Returns | Method Name |
| ------- | :---------: |
| none    |   execute   |
| many    |    list     |
| one     |   single    |
Parameters
For each return type there can be no parameters, model-based parameters, or Variant-based parameters.
No Parameters Example
This will execute a SQL statement in queries/statement.sql that returns nothing and takes no parameters:
database.execute!("statement.sql")();
Model Based Parameters
Model-based parameters can be used by making a class that has fields that map to the column names in the result and the parameter names in the query. In D, bind is used to specify the name of the parameter or the column name in a result set. In SQL, ?(parameter_name) can be used to match the name of the field or the value of bind.
Given a class like this:
class Model {
    int id;
    @(bind("name")) string name;
}

auto model = new Model();
model.name = "value";
It can map the fields of the class into a query like this:
database.execute!("statement.sql")(model);
where statement.sql looks like this:
insert into models(name) values(?(name));
It can also map the column names from the result of a query like this:
auto models = database.list!("statement.sql")();
where statement.sql looks like this:
select id, name from models;
Variant Based Parameters
Variant based parameters are ordered parameters that can be of any type.
Given a class like this:
class Model {
    int id;
    @(bind("name")) string name;
}
A Variant or array of Variant will map to each parameter:
auto model = database.single!("statement.sql")(Variant(4));
where statement.sql looks like this:
select * from models where id = ?;
Ignoring Properties
Properties that are not public or include the @omit attribute will be ignored. Given the following class, both name and ignored will be omitted.
class Model {
    int id;
    protected string name;
    @omit string ignored;
}
More Examples
There are a ton more examples in the test and doc comments of database.d.
License and Copyright

- The MIT License, see license file
- Registered by Chris Barnes
- Version 0.1.4
- Repository: chrishalebarnes/quill.d
- Documentation: chrishalebarnes.github.io/quill.d/
- Dependencies: ddbc
Use the Lexology Navigator tool to compare the answers in this article with those from other jurisdictions.
Recent developments
Have there been any notable recent developments concerning state and local taxation in your state, including any regulatory changes or case law?
The 2018 Kentucky General Assembly enacted fairly comprehensive tax changes in House Bill (HB) 487.
Significant income tax changes, generally effective for tax years beginning on or after January 1 2018, include:
- reducing the individual income tax and corporate income tax rate to 5%;
- updating Internal Revenue Code reference as of December 31 2017 (to include Tax Cuts and Jobs Act 2017 provisions, with certain exceptions—e.g., 100% depreciation deduction, Section 179 expensing, and the 20% Qualified Business Income deduction);
- eliminating itemized deductions for individuals except mortgage interest and charitable contributions;
- adopting a single sales factor apportionment formula for apportionable income of multistate businesses; and
- replacing the mandatory nexus consolidated return filing requirement with mandatory unitary combined reporting with an election to file a federal consolidated group return (effective for tax years beginning on or after January 1 2019).
HB 487 also provides a 25% non-refundable income tax credit on property tax on inventory, starting in 2018 and increasing by 25% each year, thus partially phasing out the inventory tax.
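Numerically, the phase-in described above works out as in this hedged sketch (the percentage schedule is taken from the paragraph above; the statute's exact schedule and any caps govern in practice):

```python
# Sketch of the inventory tax credit phase-in: 25% of inventory property
# tax in 2018, rising 25 percentage points each year until fully phased in.

def inventory_tax_credit(year: int, inventory_property_tax: float) -> float:
    if year < 2018:
        return 0.0
    pct = min((year - 2017) * 0.25, 1.0)
    return inventory_property_tax * pct

print(inventory_tax_credit(2018, 1000.0))  # 250.0
print(inventory_tax_credit(2021, 1000.0))  # 1000.0 (fully phased in)
```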
Other significant changes include:
- imposing sales and use tax on several services including repair, installation, and maintenance of tangible personal property (with an exemption for manufacturers and industrial processors), as well as other specified services, effective July 1 2018;
- adding additional nexus-creating requirements directed at remote sellers;
- increasing the cigarette tax to $1.10 per pack; and
- implementing several pro-taxpayer administrative changes.
General framework
Legislation
What primary and secondary legislation governs the collection and remittance of taxes in your state?
Title XI of the Kentucky Revised Statutes (KRS), Chapters 131 to 144, governs the imposition, collection, and remittance of taxes. Section 2, as well as Sections 169 to 182, of the Kentucky Constitution, among others, also place limitations on the Commonwealth with regard to taxes.
Government authorities
What government authorities (at both state and local level) are charged with the collection and administration of taxes, and what are the extent of their powers?
Most state-level taxes in Kentucky are administered and collected by the Kentucky Department of Revenue. The Transportation Cabinet administers and collects certain Road Fund taxes, including the U-Drive-It tax on leases and rentals of motor vehicles. Real property taxes are assessed by county property valuation administrators, and real and personal property taxes are collected on the local level by locally elected officials; however, the Department of Revenue centrally administers tangible personal property tax audits and assessments. Local taxes are administered by the locality imposing the local tax—for instance, local occupational license (effectively income) taxes; however, the department administers centrally collected local taxes including utility gross receipts license taxes.
The Department of Revenue has certain powers delegated to it by the General Assembly, including the authority to promulgate regulations, consider and settle disputes, and other powers enumerated in KRS Chapter 131, subject to statutory limitations such as the Kentucky Taxpayers’ Bill of Rights and the Kentucky Constitution. Local taxing authorities typically have similar powers delegated to them via legislation, subject to any limitations imposed by the KRS or the Kentucky Constitution.
State/local balance
How would you describe the balance between taxes collected at state and local level?
The majority of Kentucky taxes are assessed and collected at the state level. In addition to real and personal property taxes, many counties, school boards, and cities in Kentucky impose occupational license taxes on wages and business income. Localities cannot impose sales taxes.
Tax year and filing deadlines
What is the prescribed tax year in your state and what filing deadlines apply?
Kentucky individual and corporate income tax returns are generally due on the fifteenth day of the fourth month following the close of the taxable year. A taxpayer's sales tax return generally must be filed by the 20th of the month following the reporting period, unless the Department of Revenue authorizes quarterly or annual tax return filings (KRS 139.540). Property taxes are assessed annually based on the fair cash value as of January 1, and taxpayers must file tangible personal property tax returns with the county of taxable situs by May 15.
Government policy
How competitive is your state in terms of taxation in relation to other states? What is the government’s general policy and approach to taxation?
Kentucky imposes the three main tax types: income, property, and sales and use. With the enactment of House Bill 487, Kentucky’s income tax rates have become more competitive with neighboring states; its sales tax rate is competitive as well. In terms of overall tax climate toward businesses, some consider Kentucky to generally rank in the top third.
Corporate income and franchise taxes
Taxable income
How is taxable income determined in your state? To what extent is the state income tax base aligned with the federal income tax base?
Kentucky taxable income is determined by reference to the Internal Revenue Code 1986, as amended as of December 31 2017 (including Tax Cuts and Jobs Act 2017 provisions, with certain exceptions—e.g., 100% depreciation deduction, Section 179 expensing, and the 20% Qualified Business Income deduction)—modifying federal taxable income by certain additions and subtractions, and allocating and apportioning such income to Kentucky (Kentucky Revised Statutes (KRS) 141.010).
How is in-state income apportioned for multi-state businesses? Does your state regulate transfer pricing?
Apportionment is governed primarily by KRS 141.120.
A single sales factor apportionment formula, except for providers of communication and cable services, is used to apportion a taxpayer’s apportionable income. The sales factor is computed based on the ratio of receipts in Kentucky versus everywhere. The location of sales of tangible personal property are assigned based on the destination of the goods, and there is no throwback rule for non-governmental sales; other receipts are assigned based on market-based sourcing rules.
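The single-sales-factor computation described above reduces to one ratio. A simplified sketch, ignoring the communications/cable exception and non-apportionable income:

```python
# Single sales factor: Kentucky share = receipts in Kentucky / receipts everywhere.
def apportioned_income(apportionable_income: float,
                       ky_receipts: float,
                       total_receipts: float) -> float:
    return apportionable_income * (ky_receipts / total_receipts)

# e.g. $1M of apportionable income with 25% of receipts sourced to Kentucky:
print(apportioned_income(1_000_000.0, 250_000.0, 1_000_000.0))  # 250000.0
```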
The following types of income, provided that the income is non-apportionable, are allocated to Kentucky:
- net rents and royalties from real property located in the state;
- capital gains and losses from sales or other dispositions of real property located in the state;
- interest if the corporation’s commercial domicile (i.e., the principal place from which the trade or business of the corporation is managed) is in the state; and
- patents and copyrights if and to the extent the property is utilized by the payer either in the state or in a state in which the corporation is not taxable and the corporation’s commercial domicile is in the state. Non-apportionable income that is not allocated to Kentucky is allocated outside of Kentucky.
A taxpayer may petition or the Kentucky Department of Revenue may require other allocation and apportionment methods if the standard provisions do not reflect the extent of the taxpayer’s business activity in Kentucky and the alternative provision is reasonable.
Kentucky regulates transfer pricing by disallowing certain deductions resulting from expenses paid to affiliated entities or related parties for intangible expenses, management fees, or related party costs, with certain exceptions as provided for by KRS 141.205.
Nexus
How is nexus determined for corporate income tax purposes?
Nexus is determined using a ‘doing business’ standard. ‘Doing business’ in Kentucky includes, but is not limited to:
- being organized under the state’s laws;
- having a commercial domicile in the state;
- owning or leasing property in the state;
- having one or more individuals performing services in the state;
- maintaining an interest in a pass-through entity doing business in the state;
- deriving income from or attributable to sources within the state, including deriving income directly or indirectly from a trust or a single-member limited liability company (disregarded as an entity separate from its single member for federal income tax purposes) doing business in the state; or
- directing activities at Kentucky customers for the purpose of selling them goods or services.
103 KAR 16:240 provides guidance as to what constitutes doing business in Kentucky for income tax purposes and recognizes the limitations imposed by P.L. 86-272, codified as 15 U.S.C. 381 to 384, on the imposition of income taxes.
Is affiliate nexus recognized in your state? If so, to what extent? Has there been any notable case law in this area?
Other than ownership in a pass-through or disregarded entity doing business in Kentucky, the statutory definition of ‘doing business’ in KRS 141.010 and the nexus standard regulation, 103 KAR 16:240, do not directly address the concept of affiliate nexus. Kentucky has, however, adopted unitary combined reporting and elective consolidated reporting using the federal income tax consolidated group, effective for tax years beginning on or after January 1 2019. There have been no recent notable cases concerning affiliate nexus.
Rates
What are the applicable corporate income tax rates?
Kentucky’s corporate income tax rate for tax years beginning on or after January 1 2018 is 5% (KRS 141.040).
Exemptions, deductions and credits
What exemptions, deductions, and credits are available?
Kentucky generally allows the deductions from gross income allowed by Chapter 1 of the Internal Revenue Code 1986, as amended as of December 31 2017 (including Tax Cuts and Jobs Act 2017 provisions, with some exceptions, e.g., the 100% depreciation deduction, Section 179 expensing, and the 20% Qualified Business Income deduction).
Any income that is exempt from taxation under the US Constitution (e.g., the Commerce Clause or Due Process Clause), federal statute (e.g., P.L. 86-272), or the Kentucky Constitution are exempt from taxation in Kentucky. Dividend income is also exempt.
There are many tax credits against the corporation income tax, including the limited liability entity tax credit, the Kentucky Business Investment Act credit, and the recycling/composting equipment credit. A list may be found on Kentucky Department of Revenue Form Schedule TCS, Tax Credit Summary Schedule.
Filing requirements
What filing requirements and procedures apply? Are there special filing requirements for groups of companies?
Corporate entities doing business in Kentucky are required to file a return unless they are exempt from Kentucky corporate income tax under KRS 141.040. For tax years beginning before January 1 2019, corporate taxpayers must file as part of a mandatory nexus consolidated return filing group, which generally includes each includible corporation with Kentucky nexus that is 80% or more owned by a common parent corporation that is itself an includible corporation (KRS 141.200).
For tax years beginning on or after January 1 2019, Kentucky will require unitary combined reporting for multistate companies unless the company is a part of a group that elects to file a consolidated return based on the federal income tax consolidated return group.
Corporate franchise tax
Does your state impose a corporate franchise tax? If so, is it imposed in lieu of or in addition to corporate income tax?
Kentucky does not impose a corporate franchise tax, but it does impose a limited liability entity tax pursuant to KRS 141.0401 on every non-exempt corporation and limited liability pass-through entity doing business in Kentucky on all Kentucky gross receipts or Kentucky gross profits.
If your state imposes a corporate franchise tax, please stipulate:
(a) The applicable tax base
The tax base for the limited liability entity tax is the taxpayer’s Kentucky gross receipts or gross profits (KRS 141.0401).
(b) The tax rates
The tax rate is $0.095 per $100 of the entity’s Kentucky gross receipts and $0.75 per $100 of the entity’s Kentucky gross profits. The annual limited liability entity tax is the lesser of these two computed amounts, subject to a $175 minimum (KRS 141.0401).
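To make the lesser-of mechanics concrete, here is a minimal Python sketch (a hypothetical helper, not tax software; it reads the $175 figure as a minimum tax and ignores the small-business exemption discussed below):

```python
def kentucky_llet(gross_receipts, gross_profits):
    """Sketch of the KRS 141.0401 lesser-of computation.

    $0.095 per $100 of Kentucky gross receipts vs. $0.75 per $100 of
    Kentucky gross profits, whichever is less, subject to a $175 minimum.
    """
    receipts_amount = gross_receipts / 100 * 0.095
    profits_amount = gross_profits / 100 * 0.75
    return max(min(receipts_amount, profits_amount), 175.0)

# $10M of gross receipts -> $9,500; $2M of gross profits -> $15,000;
# the lesser amount, $9,500, is the tax due.
print(kentucky_llet(10_000_000, 2_000_000))
```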
(c) Any exemptions or deductions
There is a small-business exemption for businesses with total gross receipts or profits of less than $3 million, which is phased out between $3 million and $6 million. Certain organizations are exempt, including, but not limited to, financial institutions, insurance companies, alcohol production facilities, REITs, and charitable or otherwise tax-exempt corporations (KRS 141.0401).
(d) Filing formalities
A taxpayer generally satisfies its limited liability entity tax filing obligation on the same return as its income tax filing obligation, such as Kentucky Department of Revenue Form 720, 720S, or 765. Individually owned single-member limited liability companies may file Kentucky Department of Revenue Form 725. A disregarded single-member limited liability company owned by a regarded entity is included in the return of its single-member owner.
Personal income taxes
Taxable income
How is taxable personal income determined in your state?
A taxpayer’s federal adjusted gross income, regardless of where it was earned, is used as the starting point. Certain additions, subtractions, and deductions then result in the taxpayer’s Kentucky adjusted gross income (Kentucky Revised Statutes (KRS) 141.020).
Tax residence
Under what circumstances is an individual deemed resident in your state for personal income tax purposes?
KRS 141.010 defines ‘resident’ by reference both to domicile and to a statutory test of residency. Accordingly, an individual may be a resident because the individual is a domiciliary of Kentucky, or the individual may be a statutory resident because the individual comes within the statutory test of residency; that is, the individual has an abode in, and spends more than 183 days in, the Commonwealth.
Rates
What are the applicable personal income tax rates?
For tax years beginning on or after January 1 2018, the personal income tax is 5% (KRS 141.020).
Exemptions, deductions and credits
What exemptions, deductions, and credits are available?
Except for deductions for mortgage interest and charitable contributions, itemized deductions have been eliminated under House Bill 487. Pension income is excluded up to $31,110 per person. Taxpayers who pay taxes in a state other than Kentucky on non-Kentucky-sourced income are entitled to a credit against their Kentucky income tax for the amount paid to another state. There are many other tax credits against the income tax, including the limited liability entity tax credit, the Kentucky Business Investment Act credit, and the recycling/composting equipment credit. A list may be found on Kentucky Department of Revenue Form 740, page 2, on line 30 and in Section A.
Filing requirements
What filing requirements and procedures apply?
Employer obligations
What obligations are imposed on the employer in relation to the collection and remittance of state personal income taxes (e.g., withholding)?
Generally, every employer must deduct and withhold from wages a tax as determined by tables created by the Kentucky Department of Revenue.
Sales and use taxes
Taxable goods
What goods are subject to sales and use tax?
Kentucky sales tax is imposed on retail sales of non-exempt tangible personal property (Kentucky Revised Statutes (KRS) 139.200). Kentucky use tax applies to non-exempt tangible personal property used in Kentucky that was purchased for use in Kentucky.
State rate
What is the state sales tax rate?
Kentucky’s sales tax rate is 6%.
Local rates
What is the range of local sales tax rates levied in your state?
There are no local sales taxes in Kentucky.
Exemptions
What goods are exempt from sales and use tax?
Certain goods are exempt from sales and use tax. Motor vehicles, gasoline, and special fuels are exempt from sales and use tax but are subject to excise taxes imposed pursuant to KRS Chapter 138 (KRS 139.470). Food for human consumption and medical supplies and equipment are exempt (KRS 139.485; KRS 139.472). There are exemptions for other items as well.
Services
Are any services taxed?
Sales tax is imposed on specifically enumerated services (KRS 139.200), including:
- rental of any room, lodgings or accommodations for a period of less than 30 days;
- sewer services to non-residential consumers;
- admissions, with the most notable exception being race track admissions;
- prepaid calling services;
- communications services; and
- distribution and transmission of natural gas to non-residential consumers.
Effective July 1 2018, sales and use tax is also imposed on:
- labor and services associated with the repair, installation, and maintenance of taxable tangible personal property, though an exemption is included for manufacturers and industrial processors;
- landscaping and lawn care services;
- janitorial services;
- pet care (small animal) veterinarian services;
- industrial laundry services;
- dry cleaning services;
- linen supply services;
- pet grooming and boarding services;
- diet and weight reducing services;
- tanning services;
- limousine services; and
- many additional admissions.
Filing requirements
What filing requirements and procedures apply?
A taxpayer’s sales tax return generally must be filed by the 20th of the month following the reporting period, unless the Kentucky Department of Revenue authorizes quarterly or annual tax return filings (KRS 139.540).
Property taxes
Taxable value
How is the value of property assessed for tax purposes in your state? Which types of property are subject to tax?
All classes of property in Kentucky are subject to taxation, unless exempted by the state constitution or by statute. All taxable property must be listed annually, valued, and assessed in the county where it is located as of January 1 of each year. Property is assessed at its fair cash value, estimated at the price that it would bring through a fair and voluntary sale. Property is also valued at fair market value, determined by using one, or a combination, of the three leading methods (Kentucky Revised Statutes (KRS) 132.191(2)):
- cost approach (reproduction or replacement);
- sales or market approach; or
- income/capitalization approach.
State rate
What is the state property tax rate?
Property tax rates vary according to how the property is classified and are set annually by the Department of Revenue. The 2017 state rate for most real estate was $0.122 per $100 of value, with interstate railroads and leasehold interests subject to state rates of $0.10 per $100 and $0.15 per $100, respectively.
The state tax rate for tangible property without a specially defined rate is $0.45 per $100 of value (KRS 132.020). State tax rates lower than $0.45 per $100 apply to:
- certain privately owned leasehold interests;
- qualifying voluntary remediation property;
- tobacco;
- unmanufactured agricultural products;
- farm machinery;
- livestock;
- tangible personal property located in an activated foreign trade zone;
- machinery engaged in manufacturing;
- commercial radio and television equipment;
- certified pollution control facilities;
- certified alcohol production;
- historic motor vehicles;
- inventory;
- certain railroad property;
- certain aircraft; and
- certain vessels.
Local rates
What is the range of local property tax rates levied in your state?
The typical real estate tax rates in cents per $100 are:
- counties—33¢;
- cities—22¢;
- school districts—65¢; and
- special tax districts—10¢.
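Stacking the 2017 state rate on top of those typical local rates, a back-of-the-envelope Python estimate (illustrative rates only; actual rates vary by district and year) looks like:

```python
def real_estate_tax(assessed_value, rates_per_100):
    """Total tax for a list of per-$100 rates expressed in dollars."""
    return assessed_value / 100 * sum(rates_per_100)

# State 12.2c + county 33c + city 22c + school 65c + special district 10c
typical_rates = [0.122, 0.33, 0.22, 0.65, 0.10]
print(real_estate_tax(200_000, typical_rates))  # roughly $2,844 on a $200,000 home
```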
Exemptions and deductions
What exemptions and deductions are available?
Section 170 of the Kentucky Constitution provides exemptions from property tax to government-owned property, educational institutions, religious institutions, public libraries, public cemeteries, and purely public charities. Tangible personal property (e.g., inventory) placed in a warehouse or distribution center for the purpose of subsequent shipment to an out-of-state destination is exempt (KRS 132.097 and KRS 132.099).
Tangible personal property subject to special lower state tax rates may also be exempt from local taxes (KRS 132.200).
Filing requirements
What filing requirements and procedures apply?
Real property owners must list their property with the county Property Valuation Administrator (PVA) between January 1 and March 1; however, as a practical matter, property owners do not typically file forms with the local PVA.
Tangible personal property owners must list their property and file a tangible property tax return with the PVA between January 1 and May 15.
Real estate transfer tax
How is the transfer of real estate taxed in your state (including tax base, rates, exemptions, and filing formalities)?
A tax is imposed on the transfer of real estate at a rate of $0.50 per $500 of value, based on the value of the transferred property as set forth in the deed. The tax is collected by the county clerk as a prerequisite to recording the deed. Kentucky’s real estate transfer tax does not apply to the following types of transfers (KRS 142.050):
- by gift;
- between spouses;
- on partition;
- on sale for delinquent taxes;
- foreclosure;
- pursuant to a business merger, consolidation, conversion, or upon formation;
- between trustees;
- from a trustee to a beneficiary; or
- in order to provide or release security for a debt or obligation.
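For non-exempt transfers, the per-$500 arithmetic is straightforward. A hypothetical Python sketch (simplified; it ignores any rounding of fractional $500 increments):

```python
def transfer_tax(deed_value):
    """$0.50 of tax per $500 of value stated in the deed (KRS 142.050)."""
    return deed_value / 500 * 0.50

# A $200,000 deed owes $200 of transfer tax.
print(transfer_tax(200_000))
```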
Unclaimed and abandoned property
Reporting and remittance
Describe your state’s regime for reporting and remitting unclaimed and abandoned property. How is the value of such property calculated? How assertive is your state in enforcing its rights to unclaimed property?
Unclaimed property reports must be filed electronically by November 1 of each year, though an extension is allowed, and the property must be paid or delivered to the state by the same date. Kentucky Revised Statutes Chapter 393 contains the dormancy periods applicable for unclaimed property. Before filing the report, the abandoned property holder must send written notice to the owner between 60 and 120 days before filing for property worth $100 or more. There is no minimum reporting amount, though businesses do not have to report unclaimed wages of less than $50. The abandoned property holder must generally maintain records for five years after filing the report, and a reporting institution must retain records of interest-bearing accounts for 10 years if the property is not rightfully claimed (20 KAR 1:080). The rightful owner may file a claim to regain custody of the property.
Excise and other indirect taxes
Excise taxes
What excise taxes are levied in your state, including applicable goods, rates, and filing formalities?
Effective July 1 2018, excise taxes are imposed on cigarettes at a total combined rate of $1.10 per 20 cigarettes. For tobacco products, an excise tax is imposed at a rate of $0.19 per 1.5 ounces of snuff and per single unit of chewing tobacco. For all other tobacco products, the rate is 15% of the products’ sales price. Licensed wholesalers and distributors must generally file monthly returns by the 20th day of the following month. Every manufacturer that ships tobacco products into Kentucky is required to file a monthly report identifying all shipments made in the previous month, including the names and addresses where the products were shipped, as well as a description of the quantity and type of tobacco product shipped.
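The per-unit and ad valorem pieces can be sketched in Python (hypothetical helpers that ignore licensing, floor-stock, and reporting mechanics):

```python
def cigarette_excise(packs_of_20):
    """$1.10 combined excise per 20-cigarette pack (from July 1 2018)."""
    return packs_of_20 * 1.10

def other_tobacco_excise(sales_price):
    """15% of sales price for tobacco products other than snuff and chew."""
    return sales_price * 0.15

print(cigarette_excise(10))        # excise on ten packs
print(other_tobacco_excise(40.0))  # excise on $40 of cigars, for example
```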
Excise taxes are also imposed on gasoline and special fuel at a rate of 9% of the average wholesale price paid on a per-gallon basis. In addition, a supplemental highway use motor fuel tax is imposed at the rate of $0.05 per gallon on gasoline and $0.02 per gallon on special fuel. Every gasoline and special fuel dealer must transmit to the Kentucky Department of Revenue, by the 25th day of each month, reports of the total number of gallons of gasoline and special fuel received in the state during the preceding calendar month. The reports are to be accompanied with payment for the amount of tax due for the preceding calendar month.
An excise tax on motor vehicle usage is imposed at a rate of 6% of the retail price on the use of every motor vehicle in the state. The tax is collected by the county clerk when the vehicle is first registered or offered for titling. U-Drive-It tax is a usage tax of 6% levied upon the amount of the gross rental or lease charges paid by a customer or lessee renting or leasing a motor vehicle. The tax is reported and paid monthly by the U-Drive-It.
Kentucky also imposes an alcoholic beverages tax on distilled spirits, wine, and malt beverages on a per-gallon rate; tax returns and payment are due by the 20th of each month (Kentucky Revised Statutes (KRS) 243.720; KRS 243.730).
Other excise taxes are imposed pursuant to KRS Chapter 138.
Other indirect taxes
Are any other indirect taxes levied in your state?
Kentucky also imposes severance taxes on coal and other natural resources in KRS Chapters 143 and 143A.
Other taxes
Other taxes
Do any other taxes apply to businesses in your state? If so, please include applicable tax bases, rates, exemptions/deductions, and filing formalities.
The bank franchise tax is assessed at the rate of 1.1% of net capital, with a minimum of $300 due per year. The return is due on or before March 15 following each calendar year.
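As arithmetic, the rate-with-minimum works out to a simple max(); a hypothetical Python sketch:

```python
def bank_franchise_tax(net_capital):
    """1.1% of net capital, with a $300 annual minimum."""
    return max(net_capital * 0.011, 300.0)

# A bank with $5M of net capital owes $55,000; a tiny bank pays the $300 floor.
print(bank_franchise_tax(5_000_000))
print(bank_franchise_tax(10_000))
```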
The insurance premiums tax is paid by all life insurance companies, all stock insurance companies, all mutual insurance companies, and all captive insurers doing business in Kentucky. It is assessed on premiums collected by insurance companies on policies written in Kentucky during the preceding calendar year.
Incentives
Incentive schemes
Does your state offer any tax incentive schemes to attract businesses and promote investment?
The Kentucky Economic Development Finance Authority, established within the Cabinet for Economic Development, provides financial support to several types of business through an array of financial assistance and tax credit programs.
Planning considerations
Compliance
What tax compliance procedures and best practices should businesses operating in your state be aware of?
Taxpayers should determine whether they have nexus with Kentucky prior to engaging in business in the Commonwealth. Taxpayers can then register with the Kentucky Department of Revenue for all the types of tax that may apply to their business. They should ensure that the contact person listed on the registration form is one who will carefully review and respond to any correspondence from the Department of Revenue. Taxpayers should keep their mailing address up to date with the Department of Revenue to ensure that they do not miss important notices from the department.
Seeking the advice of tax advisers with experience in Kentucky can help taxpayers who have received a notice of audit or a notice of tax due (i.e., an assessment) navigate the audit, protest, and appeal procedures. Taxpayers interested in appealing to the Kentucky Claims Commission should also be aware that only a licensed Kentucky attorney may file such an appeal.
Strategic planning
What strategic planning considerations should businesses operating in your state bear in mind to optimize tax efficiency?
Taxpayers should regularly review potential tax incentives programs and credits that may be available to them and be aware that some may require an application or approval before a particular project begins. Incentives may be available for capital expenditures, hiring practices, or relocation. | https://www.lexology.com/library/detail.aspx?g=74fdc9be-46ef-40bd-97be-8b2eefe3098d | CC-MAIN-2018-43 | en | refinedweb |
Implements a model viewer canvas. More...
#include <iostream>
#include "3d_rendering/3d_render_ogl_legacy/c_ogl_3dmodel.h"
#include "c3d_model_viewer.h"
#include "../3d_rendering/3d_render_ogl_legacy/ogl_legacy_utils.h"
#include "../3d_cache/3d_cache.h"
#include "common_ogl/ogl_utils.h"
#include <wx/dcclient.h>
#include <base_units.h>
#include <gl_context_mgr.h>
Go to the source code of this file.
Implements a model viewer canvas.
The purpose of the model viewer is to render 3d models that come in the original data from the files, without any transformations.
Definition in file c3d_model_viewer.cpp.
Definition at line 80 of file c3d_model_viewer.cpp.
Referenced by C3D_MODEL_VIEWER::ogl_initialize(), and C3D_MODEL_VIEWER::OnPaint().
Scale convertion from 3d model units to pcb units.
Definition at line 45 of file c3d_model_viewer.cpp.
Referenced by C3D_MODEL_VIEWER::OnPaint(). | http://docs.kicad-pcb.org/doxygen/c3d__model__viewer_8cpp.html | CC-MAIN-2018-43 | en | refinedweb |
Thread creation features. More...
Thread creation features.
Definition in file pthread_threading.h.
#include "kernel_defines.h"
Go to the source code of this file.
Datatype to identify a POSIX thread.
Definition at line 30 of file pthread_threading.h.
Spawn a new POSIX thread.
This function starts a new thread. The thread will be joinable (from another pthread), unless attr tells it to create the thread detached. A non-detached thread must be joined, or it will stay a zombie until it is joined. You can call pthread_exit() inside the thread, or return from start_routine().
== 0on success.
!= 0on error.
Make a pthread unjoinable.
The resources of a detached thread get released as soon as it exits, without the need to call pthread_join() from another pthread. In fact you cannot join a detached thread; it will return an error. Detaching a thread while another thread tries to join it causes undefined behavior. A pthread may detach itself. A non-pthread may call this function, too. A pthread cannot be "attached" again.
== 0on success.
!= 0on error.
Compared two pthread identifiers.
0if the ids identify two different threads.
Definition at line 101 of file pthread_threading.h.
Exit calling pthread.
Join a pthread.
The current thread sleeps until th exits. The exit value of th gets written into thread_return. You can only join pthreads, and only pthreads can join. A thread must not join itself.
== 0on success.
!= 0on error.
Returns the pthread id of the calling/current thread.
> 0identifies the calling pthread.
== 0if the calling thread is not a pthread. | http://riot-os.org/api/pthread__threading_8h.html | CC-MAIN-2018-43 | en | refinedweb |
cleansing multiple build directories
In my adventures of building GNOME with JHBuild, it often happens that
when I tweak something that affects the build environment (e.g. use
system Python instead of JHBuild-built one), I get a heck of a lot of
build failures. This will happen even after I run jhbuild clean (which runs make clean on the modules), testimony to the weakness of the GNOME build infrastructure (autotools, ...). This means that I need to run make distclean or, better still (where applicable), git clean -dfx.
Note that I sometimes even have to uninstall one or two modules (on the JHBuild path) to get a build failure fixed (jhbuild uninstall modulename). This is laborious work, so I sometimes just wipe out the entire installation.
Note that there's dozens of modules to build, so I wrote this little script to take care of it:
    import os
    import subprocess

    top_level = os.path.expanduser("~/src/gnome")
    for filename in os.listdir(top_level):
        full_path = "{}/{}".format(top_level, filename)
        if os.path.isdir(full_path):
            cmd = "cd ~/src/gnome/{} && git clean -dfx".format(filename)
            if subprocess.call(cmd, shell=True) != 0:
                cmd = "cd ~/src/gnome/{} && make distclean".format(filename)
                if subprocess.call(cmd, shell=True) != 0:
                    cmd = "cd ~/src/gnome/{} && make clean".format(filename)
                    subprocess.call(cmd, shell=True)
update
A very kind guy made a bunch of suggestions, making my code much better:
    import os
    import subprocess

    top_level = os.path.expanduser("~/src/gnome")
    for filename in os.listdir(top_level):
        full_path = os.path.join(top_level, filename)
        if os.path.isdir(full_path):
            os.chdir(full_path)
            if subprocess.call("git clean -dfx".split()) != 0:
                if subprocess.call("make distclean".split()) != 0:
                    subprocess.call("make clean".split())
further reading
modules: os, os.path, subprocess | http://tshepang.net/cleansing-multiple-build-directories/ | CC-MAIN-2018-43 | en | refinedweb |