An Xcode playground UI that you use to train a model to classify images.
SDKs
- macOS 10.14–10.15 Deprecated
- Xcode 10.0–11.0 Deprecated
Framework
- Create ML UI
Declaration
class MLImageClassifierBuilder
Overview
An image classifier is a machine learning model that takes an image as its input and makes a prediction about what the image represents. Use an image classifier builder in an Xcode playground with a macOS target to train an image classifier to make predictions.
When you create a builder, you show it in a live view:
import CreateMLUI

let builder = MLImageClassifierBuilder()
builder.showInLiveView()
After that, you interact with the builder entirely through the UI of the live view that appears in Xcode’s assistant editor. Use the configuration options that the builder gives you to control training parameters like the number of iterations and augmentation settings. Begin training the model by dragging your training data into the classifier. After training finishes, you also use the UI to test your model, and then save it to a file for use in a Core ML app.
Java directory FAQ: Can you share an example of how to determine the current directory in Java?
You can determine the current directory your Java application was started in by using the System.getProperty() method with the user.dir property, like this:
String currentDirectory = System.getProperty("user.dir");
Here’s a complete example:
public class JavaCurrentDirectoryExample {
    public static void main(String[] args) {
        String currentDirectory = System.getProperty("user.dir");
        System.out.println("user.dir: " + currentDirectory);
    }
}
Discussion
When you compile and run this little program, the output will show you the directory you ran the program in.
Accessing this Java property is often helpful when you want to store or read configuration, input, or output files that are related to your application.
You can get a full listing of all available Java properties at this URL.
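Since user.dir is just one of the standard Java system properties, here is a short sketch (the class name is mine) that prints every property the JVM knows about:

```java
import java.util.Properties;

public class ListSystemProperties {
    public static void main(String[] args) {
        // System.getProperties() returns every property the JVM knows about,
        // including user.dir, os.name, java.version, and so on.
        Properties props = System.getProperties();
        props.forEach((key, value) -> System.out.println(key + " = " + value));
    }
}
```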
Java current directory and web servers
When I wrote this tip, I was working on a Java/Swing GUI application, and the technique works as advertised in those types of applications. I’ve been told that you can get varied results with this “current directory” technique under different Java web/application servers, like Tomcat or Glassfish. I’m not working on any web projects at the moment, so I don’t know the correct cross-platform approach to get something like the web server directory. Whenever I get back to working on a Java web project I’ll be glad to update this article, or if anyone knows the correct answer, please leave a note below.
This page describes how to export or extract data from BigQuery tables. Alternatively, you can use a service such as Dataflow to read data from BigQuery instead of manually exporting it.
Limitations
- You cannot choose a compression type other than GZIP when you export data using the Cloud Console or the classic BigQuery web UI.
- You cannot move a dataset from one location to another, but you can make a copy of the dataset or manually move (recreate) it in another location.

You can export table data in the following ways:
- Using the Cloud Console or the classic BigQuery web UI
- Using the bq extract CLI command
- Submitting an extract job via the API or client libraries
Exporting table data
To export data from a BigQuery table:
Console
Open the BigQuery web UI in the Cloud Console.
In the navigation panel, in the Resources section, expand your project and click your dataset to expand it. Find and click the table that contains the data you're exporting.
On the right side of the window, click Export, then select Export to Google Cloud Storage.
In the Export table to Google Cloud Storage dialog:
- For Select Google Cloud Storage location, browse for the bucket, folder, or file where you want to export the data.
Classic UI

In the navigation pane, click your dataset to expand it. Find and click the down arrow icon next to the table that contains the data you're exporting.

Select Export table to display the Export to Google Cloud Storage dialog.

In the Export to Google Cloud Storage dialog, in the Google Cloud Storage URI textbox, enter a valid URI in the format gs://bucket_name/filename.ext, where bucket_name is your Cloud Storage bucket name, and filename.ext is the name and extension of the exported file.

API

To export data, insert an extract job via jobs.insert and populate the job configuration. Calling jobs.insert to create a job is more robust to network failure, because the client can poll or retry on the known job ID.

C#

job = job.PollUntilCompleted().ThrowOnAnyError(); // Waits for the job to complete.
Console.Write($"Exported table to {destinationUri}.");
Go
Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.
import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// exportTableAsCSV demonstrates using an export job to
// write the contents of a table into Cloud Storage as CSV.
func exportTableAsCSV(projectID, gcsURI string) error {
	// projectID := "my-project-id"
	// gcsURI := "gs://mybucket/shakespeare.csv"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	srcProject := "bigquery-public-data"
	srcDataset := "samples"
	srcTable := "shakespeare"

	gcsRef := bigquery.NewGCSReference(gcsURI)
	extractor := client.DatasetInProject(srcProject, srcDataset).Table(srcTable).ExtractorTo(gcsRef)
	// Run the extract job and wait for it to complete.
	job, err := extractor.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}
	if err := status.Err(); err != nil {
		return err
	}
	return nil
}
Java
Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.
import com.google.cloud.RetryOption;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.Table;
import com.google.cloud.bigquery.TableId;
import org.threeten.bp.Duration;

public class ExtractTableToJson {

  public static void runExtractTableToJson() {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "bigquery-public-data";
    String datasetName = "samples";
    String tableName = "shakespeare";
    String bucketName = "my-bucket";
    String destinationUri = "gs://" + bucketName + "/path/to/file";
    extractTableToJson(projectId, datasetName, tableName, destinationUri);
  }

  // Exports datasetName:tableName to destinationUri as raw CSV
  public static void extractTableToJson(
      String projectId, String datasetName, String tableName, String destinationUri) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId tableId = TableId.of(projectId, datasetName, tableName);
      Table table = bigquery.getTable(tableId);

      // For more information on export formats available see:
      //
      // For more information on Job see:
      //
      Job job = table.extract("CSV", destinationUri);

      // Blocks until this job completes its execution, either failing or succeeding.
      Job completedJob =
          job.waitFor(
              RetryOption.initialRetryDelay(Duration.ofSeconds(1)),
              RetryOption.totalTimeout(Duration.ofMinutes(3)));
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to extract due to an error: \n" + job.getStatus().getError());
        return;
      }
      System.out.println("Table export successful. Check in GCS bucket for the CSV file.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table extraction job was interrupted. \n" + e.toString());
    }
  }
}

Node.js

// Import the Google Cloud client libraries
const {BigQuery} = require('@google-cloud/bigquery');
const {Storage} = require('@google-cloud/storage');

const bigquery = new BigQuery();
const storage = new Storage();

async function extractTableToGCS() {
  // TODO(developer): Replace these variables before running the sample.
  // const datasetId = "my_dataset";
  // const tableId = "my_table";
  // const bucketName = "my-bucket";
  // const filename = "file.csv";

  // Location must match that of the source table.
  const options = {
    location: 'US',
  };

  // Export data from the table into a Google Cloud Storage file
  const [job] = await bigquery
    .dataset(datasetId)
    .table(tableId)
    .extract(storage.bucket(bucketName).file(filename), options);

  console.log(`Job ${job.id} created.`);

  // Check the job's status for errors
  const errors = job.status.errors;
  if (errors && errors.length > 0) {
    throw errors;
  }
}
When you export data in Avro format, BigQuery represents the data as follows:

TIMESTAMP data types are represented as Avro LONG types by default, or Avro timestamp-micros logical types if the flag --use_avro_logical_types is specified.

DATE data types are represented as Avro INT types by default, or Avro date logical types if the flag --use_avro_logical_types is specified.

TIME data types are represented as Avro LONG types by default, or Avro time-micros logical types if the flag --use_avro_logical_types is specified.

DATETIME data types are represented as Avro STRING types. The encoding follows the Internet Engineering Task Force RFC 3339 spec.

For more information on the BigQuery API client libraries, see Client library quickstart.
Hi,
We use the Dispatch library in our code and always import as described
in the Dispatch docs like this:
import dispatch._, Defaults._
The plugin used to be happy with this but at some point in the last few
weeks it has started marking Defaults as red saying: Cannot resolve
symbol Defaults.
Of course the code compiles fine.
This reproduces with a simple project with that line as the only import
in a source file and with Dispatch as the only dependency.
I tried to reproduce this without any dependencies using a variety of
packages, objects and imports but everything worked fine. So I'm not
sure what's different about Dispatch to cause this.
Is this a known issue or should I raise a new one in YouTrack?
Thanks,
Steve.
Hi,
Looks like a bug. Please report in YouTrack with sample SBT/Gradle/Maven project (so I'll get dependencies automatically). It's hard to say if it's known or not.
Best regards,
Alexander Podkhalyuzin.
OpenConfig Overview
OpenConfig is a collaborative effort in the networking industry to move toward a more dynamic, programmable method for configuring and managing multivendor networks. OpenConfig supports the use of vendor-neutral data models to configure and manage the network. These data models define the configuration and operational state of network devices for common network protocols or services. The data models are written in YANG, a standards-based data modeling language that is modular, easy to read, and supports remote procedure calls (RPCs). Using industry standard models greatly benefits an operator with devices in a network from multiple vendors. The goal of OpenConfig is for operators to be able to use a single set of data models to configure and manage all the network devices that support the OpenConfig initiative.
OpenConfig for Junos OS supports the YANG data models and uses RPC frameworks to facilitate communications between a client and the router. You have the flexibility to configure your router directly by using Junos OS, or by using a third-party schema, such as OpenConfig. OpenConfig modules define a data model through its data, and the hierarchical organization of and constraints on that data. Each module is uniquely identified by a namespace URL to avoid possible conflicts with the Junos OS name.
The configuration and operational statements in Junos OS have corresponding path statements in OpenConfig. The following is a list of data modules for which mapping of OpenConfig and Junos OS configuration and operational statements is supported:
BGP
Interfaces
LACP
LLDP
Local routing
MPLS
Network instance
Platform
Routing policy
VLAN
When you configure OpenConfig statements on devices running Junos OS, the following features are not supported:
Using configure batch or configure private mode
Configuring statements under the [edit groups] hierarchy
For more information on the OpenConfig initiative, see.
compareTo and equals methods
What is the difference between the compareTo() and equals() methods?
siva
- Oct 21st, 2017
The equals() method is inherited from the Object class, which is indirectly a superclass of every class in Java. It is used to compare the contents of two objects: it returns true if the contents are the same, and false otherwise. In ...
Bharat
- Oct 1st, 2012
The compareTo() method, which is present in the String class, is used to compare two strings. It checks each character against the other string and returns 0 if they are equal, otherwise a negative value or positi...
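To make the difference concrete, here is a small sketch (the class name and sample strings are illustrative):

```java
public class CompareDemo {
    public static void main(String[] args) {
        String a = "apple";
        String b = "apple";
        String c = "banana";

        // equals() checks content equality and returns a boolean
        System.out.println(a.equals(b));        // true
        System.out.println(a.equals(c));        // false

        // compareTo() compares lexicographically and returns an int:
        // 0 when equal, negative/positive otherwise
        System.out.println(a.compareTo(b));     // 0
        System.out.println(a.compareTo(c) < 0); // true ("apple" sorts before "banana")
    }
}
```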
Focus
Use of Focus() method in asp.net
Sandhya.Kishan
- Jul 5th, 2012
The Focus() method sets the input focus to a control on the page. When the web page is loaded, you can use a BODY onload event and client-side JavaScript code to call the focus() method. The example referred to assumes a web ...
Driver manager
What is driver manager?
Sandhya.Kishan
- Jun 19th, 2012
The DriverManager is a library class that manages communication between applications and drivers. It solves a number of problems related to determining which driver to load based on a data source name, loading and unloading drivers, and calling functions in drivers.
dev patel
- Jun 16th, 2012
In JDBC, the object that connects a Java application to a JDBC driver is called the DriverManager.
1. Output cache extensibility
2. Session state compression
3. Routing in ASP.NET
4. Increased URL character length
5. New syntax for HTML encode
6. View state mode for individual controls
MVC Design pattern
what is the difference between MVC1 and MVC2 in j2EE?
Read Best Answer
Editorial / Best Answer: Sandhya.Kishan
- Member Since Mar-2012 | May 14th, 2012
MVC1:
1. MVC1 consists of a Web browser accessing Web-tier JSP pages. The JSP pages access Web-tier JavaBeans that represent the application model, and the next view to display is determined by hyperlinks selected in the source document or by request parameters.
2. MVC1 is a page-centric design: presentation and processing logic can live in the JSP page itself, or the page may be called directly.
3. MVC1 combines presentation logic with business logic.
4. In MVC1 we can have multiple controller servlets.
MVC2:
1. MVC2 introduces a controller servlet between the browser and the JSP pages. The controller centralizes the logic for dispatching requests to the next view based on the request URL, input parameters, and application state.
2. MVC2 removes the page-centric property by separating presentation, control logic, and application state.
3. MVC2 can have only one controller servlet.
Testing Steps - From which phase the testing should be started ?
Also, is there any global standard sequence of testing phases, like unit testing, module testing, and so on?
Sandhya.Kishan
- Jun 12th, 2012
Testing starts at the requirement phase of the SDLC and continues till the last phase of the SDLC. Steps involved in testing: 1. Static testing includes review of documents required for the software d...
Time Issues
Sandhya.Kishan
- May 26th, 2012
Answer is c) 6.
11 AM - 5 PM = 6 hrs
Rain increased 1.25 every 2 hrs, so 3 × 1.25 = 3.75
Total rain = 2.25 + 3.75 = 6
Write a test case for Fibonacci series?
Sandhya.Kishan
- Jun 25th, 2012
Test cases for a Fibonacci series can be:
1. When zero is entered, it should return zero.
2. When a negative integer is entered, it should not accept the value and should return an error message.
3. When a p...
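The cases above can be checked against a simple implementation; a minimal sketch, where the method name and the choice to reject negatives with an exception are my own:

```java
public class Fibonacci {
    // Returns the nth Fibonacci number; rejects negative input
    // (matching test case 2 above).
    static long fib(int n) {
        if (n < 0) throw new IllegalArgumentException("n must be non-negative");
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(fib(0));  // 0  (test case 1)
        System.out.println(fib(10)); // 55 (a positive input)
        try {
            fib(-1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected negative input"); // test case 2
        }
    }
}
```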
BaselineTesting
What is Baseline testing? is it same for web and other type of testing?
Sandhya.Kishan
- Mar 17th, 2012
Baseline testing refers to testing standards used as the starting point of comparison within the organization. It is a test taken before any activity or treatment has occurred. Requirement specification validation is a baseline test.
Why type of testings are available for VisualStudio 2010 ?
Sandhya.Kishan
- Mar 17th, 2012
Some types of testing available are:
1.ordered testing
2.unit testing
3.manual testing
4.load testing
5.coded UI testing.
What is the default wait time in Silk test?
Sandhya.Kishan
- Jun 5th, 2012
The default wait time in silk test is 10 seconds.
What is VLAN ?
what is vlan in vio server in aix ? What is its main purpose ?
Sandhya.Kishan
- Jul 11th, 2012
VLAN stands for virtual LAN; it is a broadcast domain created by switches. With VLANs, a switch can create the broadcast domain. The purpose of VLANs is to improve network performance by separating large broadcast domains into smaller ones.
inverse of matrix
program to find inverse of nth order square matrix?(c++)
Sandhya.Kishan
- May 29th, 2012
/* Fragment: builds the adjoint (transpose of the cofactor matrix);
   the inverse is then inv[i][j] = b[i][j] / d, where d is the determinant. */
void trans(float num[25][25], float fac[25][25], float r)
{
    int i, j;
    float b[25][25], inv[25][25], d;
    for (i = 0; i < r; i++)
        for (j = 0; j < r; j++)
            b[i][j] = fac[j][i];
}
Write a program to identify a duplicate value in vector ?
Sandhya.Kishan
- Mar 20th, 2012
"c void rmdup(int *array, int length) { int *current, *end = array + length - 1; for (current = array + 1; array < end; array++, current = array + 1) { while (current < ...
How would you test in cloud ?
Sandhya.Kishan
- Jun 12th, 2012
By understanding a platform provider's elasticity model and dynamic configuration method, we can test in the cloud.
What do you mean by package access modifier?
Sandhya.Kishan
- Mar 10th, 2012
Access modifiers are used to implement the encapsulation feature of OOP. There are three explicit access specifiers, plus default (package) access. Private: only the current class will have access to the field or method. Protected: the current cl...
Linked list in java
How to implement reverse linked List using recursion?
Sandhya.Kishan
- Apr 11th, 2012
The program is
List* recur_rlist(List* head)
{
    List* result;
    if (!(head && head->next))
        return head;
    result = recur_rlist(head->next);
    head->next->next = head;
    head->next = NULL;
    return result;
}

void printList(List* head)
{
    while (head != NULL) {
        std::cout << head->data << " ";
        head = head->next;
    }
}
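Since the question asks for Java, here is a minimal sketch of the same recursion in Java (the Node class and field names are illustrative):

```java
public class ReverseList {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    // Recursively reverses the list and returns the new head.
    static Node reverse(Node head) {
        if (head == null || head.next == null) return head;
        Node result = reverse(head.next);
        head.next.next = head; // make the following node point back to us
        head.next = null;      // sever the old forward link
        return result;
    }

    public static void main(String[] args) {
        Node head = new Node(1);
        head.next = new Node(2);
        head.next.next = new Node(3);
        Node rev = reverse(head);
        StringBuilder sb = new StringBuilder();
        for (Node n = rev; n != null; n = n.next) sb.append(n.data);
        System.out.println(sb); // 321
    }
}
```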
How can you read a SOL file using JAVA script?
Sandhya.Kishan
- Mar 7th, 2012
The methods IloCplex.readSolution and IloCplex.writeSolution is used to read a sol file in java script.......
Iterative Algorithm
design an iterative algorithm to traverse a binary tree represented in two dimensional matrix
Sandhya.Kishan
- Jul 16th, 2012
A binary tree can be traversed using only a one-dimensional structure rather than a matrix. Pseudocode of a parent-pointer traversal:

InOrder_TreeTraversal() {
    prev = null;
    current = root;
    next = null;
    while (current != null) {
        if (prev == current.parent) {
            prev = cu...
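A more common iterative formulation, which needs no parent pointers, uses an explicit stack; a minimal Java sketch (names are mine):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class InorderTraversal {
    static class Node {
        int val;
        Node left, right;
        Node(int val) { this.val = val; }
    }

    // Iterative in-order traversal using an explicit stack.
    static List<Integer> inorder(Node root) {
        List<Integer> out = new ArrayList<>();
        Deque<Node> stack = new ArrayDeque<>();
        Node current = root;
        while (current != null || !stack.isEmpty()) {
            while (current != null) { // descend to the leftmost node
                stack.push(current);
                current = current.left;
            }
            current = stack.pop();
            out.add(current.val);
            current = current.right;
        }
        return out;
    }

    public static void main(String[] args) {
        Node root = new Node(2);
        root.left = new Node(1);
        root.right = new Node(3);
        System.out.println(inorder(root)); // [1, 2, 3]
    }
}
```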
Sigbus error ?
what is sigbus error
Sandhya.Kishan
- Mar 10th, 2012
When a bus error occurs, a signal called SIGBUS is sent to the process. The constant for SIGBUS is defined in the header file signal.h. A SIGBUS error is raised when there is improper memory handling.
What do you mean Inscope and Outscope
Kumar
- Jul 14th, 2014
InScope - What are all testings we are going to conduct for the App. (Like - Functional testing, Regression Testing, Load Testing...etc.) Outof Scope - What are all the testings we are NOT going to c...
Sandhya.Kishan
- Mar 19th, 2012
We can define scope by defining deliverables, functionality, and data, and also by defining the technical structure. In-scope are things the project generates internally, e.g. Project Charter, Business Requir...
Raise application error
can we raise_application_error in exception block?? if we use what will happen?
Sandhya.Kishan
- Jun 18th, 2012
Whenever a message is displayed using RAISE_APPLICATION_ERROR, all previous transactions which are not committed within the PL/SQL Block are rolled back automatically . RAISE_APPLICATION_ERROR is use...
Error Handling
What is an error handling framework?
Sandhya.Kishan
- Mar 12th, 2012
Error handling framework indicates serious problems that a reasonable application should not try to catch. Most such errors are abnormal conditions..
code switching and code mixing
what is the difference between code switching and code mixing?
Sandhya.Kishan
- Apr 27th, 2012
Code switching is the concurrent use of more than one language within the same sentence of a conversation.
Code mixing refers to the mixing of two or more languages in speech; it occurs within a multilingual setting where speakers share more than one language.
To print unique numbers eliminating duplicates from given array
write a java code to print only unique numbers by eliminating duplicate numbers from the array? (using collection framework)
Sandhya.Kishan
- May 26th, 2012
import javax.swing.JOptionPane;

public static void main(String[] args) {
    int[] array = new int[10];
    for (int i = 0; i...
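A working collection-framework approach is typically a LinkedHashSet, which keeps insertion order while dropping duplicates; a minimal sketch with illustrative sample values:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class UniqueNumbers {
    // Returns the distinct values in encounter order.
    static Set<Integer> unique(int[] array) {
        Set<Integer> result = new LinkedHashSet<>();
        for (int n : array) {
            result.add(n); // add() silently ignores values already present
        }
        return result;
    }

    public static void main(String[] args) {
        int[] array = {3, 1, 3, 2, 1, 4};
        System.out.println(unique(array)); // [3, 1, 2, 4]
    }
}
```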
Command routing in MDI
What is command routing in MDI
Sandhya.Kishan
- Mar 20th, 2012
Command routing is passing commands to its targeted objects.When a command is routed, it goes to the main frame. From the main frame, it is routed to the child frame of the active view; it is then rou...
What is the difference between an image and a map
Sandhya.Kishan
- Apr 11th, 2012
Image:An image is an exact replica of the contents of a storage device stored on a second storage device. Map:A file showing the structure of a program after it has been compiled. The map file lists ...
how to capture webtable values
Read Best Answer
Editorial / Best AnswerSandhya.Kishan
- Member Since Mar-2012 | Apr 11th, 2012
By using the function GetROProperty("property name") we can capture webtable values.
Artificial intelligence
how do humans recognize a word?
Sandhya.Kishan
- Apr 5th, 2012
We basically process the shape of each individual letter in a word at the same time, and therefore determine the word itself. We then derive the semantics of the word using "back up" files in the brain.
galla_srinivas
- Feb 22nd, 2012
By identifying it through knowledge of the language.
how do you establish a connection between two ear files
Sandhya.Kishan
- Jun 22nd, 2012
The Connection Pool Manager is used to establish a connection between two ear files.
what is difference between query calculation and layout calculation
Define Raster and Vector Data.
Define raster and vector data. Explain what is the difference between raster and vector data?
Sandhya.Kishan
- Jul 4th, 2012
Raster data is a set of horizontal lines composed of individual pixels, used to form an image on a CRT or other screen. Raster data makes use of a matrix of square areas to define where features are loca...
How to send sms from java application ?
Sandhya.Kishan
- Mar 7th, 2012
Look up SMS gateway. To send a text, you're really just sending an email to the SMS gateway. It's very easy.
For instance, Verizon's is yournumber@vtext.com. So just have the user input the phone number and their carrier, and then send the email using JavaMail.
what are 4 member function for each object in c++.
Each C++ object possesses 4 member functions by default. What are those 4 member functions? Please tell me the answer to this question.
Sandhya.Kishan
- Jun 13th, 2012
Each C++ class gets four member functions by default: the default constructor, copy constructor, copy-assignment operator, and destructor.
ABC
- Aug 24th, 2011
Following are four default functions available for each object
1) constructor
2) destructor
3) copy constructor
4) assignment operator
How to Retrieve the Hidden Field Value
how to retrieve the value of hidden filed in one page from another
Sandhya.Kishan
- Apr 11th, 2012
In .aspx, you can access the hidden fields when the page is submitted by using -
string customerId = Request.Form["txtCustomerId"];
What is the difference between MLOAD and TPUMP ?
Sandhya.Kishan
- Apr 17th, 2012
1. TPump allows us to load data into tables with referential integrity, which MultiLoad doesn't allow. 2. TPump does not support MULTISET tables, but multiple tables can be loaded in the same MultiLoad...
how to write program to print descending order
Bhushan Pote
- Apr 6th, 2015
Bhushan
- Apr 6th, 2015...
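A minimal Java sketch of printing numbers in descending order (sample values are mine):

```java
import java.util.Arrays;
import java.util.Collections;

public class DescendingOrder {
    public static void main(String[] args) {
        Integer[] numbers = {5, 1, 4, 2, 3};

        // Sort in reverse (descending) order using a comparator.
        Arrays.sort(numbers, Collections.reverseOrder());
        System.out.println(Arrays.toString(numbers)); // [5, 4, 3, 2, 1]
    }
}
```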
Sizeofthe Variable
how to find the size of the (datatype) variable in java?in c we use sizeof() operator for for finding size of data type
Sandhya.Kishan
- Mar 27th, 2012
There is no particular function in Java to find the size of a variable, because Java removes the need for an application to know how much space must be reserved for a primitive value, an object, or an array with a given number of elements.
What services does the internet layer provide?
Sandhya.Kishan
- Apr 19th, 2012
The internet layer packs data into data packets known as IP datagrams, which contain source and destination address information that is used to forward the datagrams between hosts and across networks....
Is it possible to debug the RSA encrypting algorithm?
If yes, how it is possible?
Ritesh Kumar
- Nov 24th, 2015
Yes, we can break the RSA algorithm only by a brute-force attack. It takes a minimum of 5 years (current records) for a supercomputer to try the entire combination of probable keys to debu...
Sandhya.Kishan
- Jul 7th, 2012
The RSA algorithm makes use of unique prime numbers, which are not the same each time they are generated; hence we cannot simply reverse the algorithm.
What is DHCP Relay Agent?
Suthakar
- Dec 4th, 2012
We use DHCP Relays when DHCP client and server don't reside on the same (V)LAN, as is the case in this scenario. The job of the DHCP relay is to accept the client broadcast and forward it to the server on another subnet.
Sandhya.Kishan
- Mar 14th, 2012
It is a Bootstrap Protocol that relays DHCP(Dynamic Host Configuration Protocol) messages between clients and servers for DHCP on different IP Network.using DHCP in a single segment network is easy. I...
what is heartbeat in clustering?
Sandhya.Kishan
- Mar 14th, 2012
Heartbeat cluster is a program that runs specialized scripts automatically whenever a system is initialized or rebooted.This cluster allows clients to know about the presence (or disappearance!) of pe...
how round robin algorithm works ?
Sandhya.Kishan
- Mar 10th, 2012
In round robin algorithm time slices are assigned to each process in equal portions and in circular order, handling all processes without priority. Round-robin scheduling is simple, easy to implement,...
what are the parameters in http.conf file ?
Sandhya.Kishan
- Jul 11th, 2012
Some parameters are
1.mod_rewrite
2.WLLogFile
3.DebugConfigInfo
4.StatPath
5.CookieName
6.MaxPostSize
7.FileCaching
what is the output of kill -3 pid ?
kiran78
- Jul 20th, 2012
kill -3 <pid> produces a thread dump of the JVM process.
Pranaw
- May 6th, 2012
Kill -3 pid is used to create thread dump for the process id. This is basically used for troubleshooting and to understand what went wrong with the above process. Suppose some node of weblogic is not ...
What is enumerated data type ?
Sandhya.Kishan
- Jun 6th, 2012
An enumerated data type is a type whose variables can only assume values which have been previously declared. These values can be compared and assigned, but do not have any particular concrete repres...
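In Java, this maps directly onto the enum keyword; a minimal sketch with an illustrative Direction type:

```java
public class EnumDemo {
    // An enumerated type: a variable of this type can only take
    // one of these previously declared values.
    enum Direction { NORTH, SOUTH, EAST, WEST }

    public static void main(String[] args) {
        Direction d = Direction.NORTH;
        System.out.println(d);                    // NORTH
        System.out.println(d == Direction.NORTH); // true — values can be compared
    }
}
```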
How will you test the font of any style ?
Eg: Verdana, Arial etc
pavan.7014
- May 30th, 2012
Thanks for the Answer Sandhya , Actually I have faced a question how shall we test the Font without using any tool.??
And one more query that , how can we test by previewing the font ?
Please help me out on the same....
Sandhya.Kishan
- Mar 13th, 2012
The Font Control Panel allows you to configure font settings, organize fonts and preview font styles. Preview function is used to test fonts of any style.
Minimum number of comparisons required
What is the minimum number of comparisons required to find the second smallest element in a 1000 element array?
Sandhya.Kishan
- May 26th, 2012
Finding the smallest element alone takes 999 comparisons. To find the second smallest, use a tournament: n + ⌈log₂ n⌉ − 2 = 1000 + 10 − 2 = 1008 comparisons suffice, since the second smallest must be among the ⌈log₂ n⌉ elements that lost directly to the winner.
Report Defects to the Developer
In how many way we can report defects to the deveploer?
Sandhya.Kishan
- Apr 20th, 2012
We can report defects to the developer either in formal way or through informal way. Communicating the details of the failure with the developers in person, in email or over the phone is an informal ...
Verification Plan
How do you plan for verification in your project?
Sandhya.Kishan
- Mar 19th, 2012
The plan for verification of a project can include steps like
1.Develop verification plan.
2.Trace between specifications and test cases.
3.Develop Verification Procedures.
4.Perform verification.
5.Document verification results.
C Program Execution Stages
Briefly explain the stages in execution of C program? How are printf and scanf statements statements being moved into final executable code?
Sandhya.Kishan
- Mar 8th, 2012
The stages of execution are:
* Making and editing
* Saving
* Compiling
* Linking
* Running
The printf and scanf functions are declared in stdio.h; during the linking stage the linker resolves these calls against the C standard library, which is how their code ends up referenced in the final executable.
Object Repository Extensions
When do we use .mtr and .tsr extensions in QTP? State the difference with suitable example?
Sandhya.Kishan
- Jun 25th, 2012
We use filename.mtr as the extension for per-test object repository files.
We use filename.tsr as the extension for shared object repository files...
Distance Between x and z Intercept
Determine the distance between x and z intercept of the plane whose eqn is 2x+9y-3z=18
Sandhya.Kishan
- May 26th, 2012
The distance is sqrt((d/a)^2 + (d/c)^2), with d = 18, a = 2, c = -3.
The x-intercept is d/a = 9 and the z-intercept is d/c = -6, so (d/a)^2 = 81 and (d/c)^2 = 36.
sqrt(81 + 36) = sqrt(117) ≈ 10.82
C Program Exectuion Stages
Briefly explain the stages in execution of C program ?How are printf and scanf statements statements being moved into final executable code?
Sandhya.Kishan
- Mar 28th, 2012
The stages in executing a C program are:
1. Editing (writing the source)
2. Preprocessing
3. Compiling
4. Assembling
5. Linking
6. Loading
7. Execution
The printf and scanf statements are declared in stdio.h, and their implementations are pulled in from the C standard library at the linking stage, which is how they reach the final executable code.
CLR and Base Class Libraries
Define clr and base class libraries.
Sandhya.Kishan
- Jul 17th, 2012
A base class library is a standard library available to all languages that target the common intermediate language. With the help of the common intermediate language, the base class library can encapsulate a large number of common functions...
touseef
- Mar 26th, 2012
CLR = Common Language Runtime; it works like the heart of the .NET framework for any application.
Java Deadlock
How to avoid deadlock in Java?
Sandhya.Kishan
- Aug 1st, 2012
A deadlock occurs when one thread has the control for A and tries to get the control for B while another thread has the control for B and tries to get the control for A. Each will wait forever for the...
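One standard avoidance technique is to make every thread acquire locks in the same fixed order, so the circular wait can never form; a minimal sketch (names are mine):

```java
public class LockOrdering {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    // Both threads take the locks in the same global order (A then B),
    // so the circular wait that causes deadlock cannot occur.
    static void task(String name) {
        synchronized (LOCK_A) {
            synchronized (LOCK_B) {
                System.out.println(name + " holds both locks");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> task("t1"));
        Thread t2 = new Thread(() -> task("t2"));
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```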
Indexes Searching Capabilities
How do indexes increase the searching capabilities?
Sandhya.Kishan
- Jul 25th, 2012
Indexes increase searching capability by letting the database avoid a serial (full-table) scan: the index is an ordered structure that can be searched quickly to locate the matching rows.
Compress String
How to compress a String (algorithem)?
Sandhya.Kishan
- Aug 1st, 2012
"java import java.io.ByteArrayOutputStream; java.io.IOException; import java.util.zip.GZIPOutputStream; import java.util.zip.*; public class zipUtil{ public static String compress...
Open Files Simultaneously
How will you increase the allowable number of simultaneously open files?
Sandhya.Kishan
- Mar 15th, 2012
On Unix-like systems, you can increase the allowable number of simultaneously open files with the ulimit command (for example, ulimit -n 4096), or permanently by raising the per-user limits in /etc/security/limits.conf. In C, the FOPEN_MAX constant tells you how many files the standard library guarantees can be open at once.
Invoke Another Program
How will you invoke another program from within a C program?
Sandhya.Kishan
- Mar 8th, 2012
We can invoke another program by using a function like the system() call, for example system("test.exe").
Function Call
How will you call a function, given its name as a string?
Sandhya.Kishan
- Mar 15th, 2012
We cannot directly call a function whose name is given as a string; we have to construct a table of two-field structures, where the first field is the function name as a string, and the second field is a pointer to the functi...
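The same table idea carries over to Java, where a map from name strings to lambdas plays the role of the structure table; a minimal sketch with hypothetical entries:

```java
import java.util.HashMap;
import java.util.Map;

public class DispatchTable {
    public static void main(String[] args) {
        // Table of two-field entries: name string -> function.
        Map<String, Runnable> table = new HashMap<>();
        table.put("hello", () -> System.out.println("hello called"));
        table.put("bye",   () -> System.out.println("bye called"));

        String name = "hello";  // the function name, given as a string
        table.get(name).run();  // look it up and call it
    }
}
```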
URL Recording Mode
What is the use of URL Recording Mode ?
m tulasi ram
- Oct 17th, 2018
URL mode is used to record the non browser applications and when we want to measure each and every component load time and when the application is generating more java flies.
sri
- Oct 29th, 2014
URL mode is used for recording both HTML and non-HTML pages, which is not possible in HTML mode of recording, because when the application's functionality is being downloaded (buffering), it can only be captured in URL mode of recording.
Read Input at Run Time
What are the different ways to read input from keyboard at run time?
Sandhya.Kishan
- Aug 1st, 2012
By using the Scanner class we can input data from the keyboard. By declaring the Scanner class's input as System.in, it pulls data from the keyboard (the default system input).
Subject Marks
There are 5 Subjects with equal high marks. Marks scored by a boy are in the ratio 3:4:5:6:7 (Not sure). If his total aggregate is 3/5 of the total of the highest score, in how many subjects has he got more than 50%?
Tanvi
- Nov 17th, 2012
It is clearly mentioned in the question that he has scored more than 50% in 3 subjects.
Sandhya.Kishan
- May 28th, 2012
In three subjects he will get more than 50%.
Find the Speed
An Engine length 1000 m moving at 10 m/s. A bird is flying from engine to end with x sec and coming back at 2x sec. Take total time of bird traveling as 187.5s. Find the to and fro speed of the bird.
What is pre-emptive data structure ?
Sandhya.Kishan
- Apr 27th, 2012
There are primitive data types but not primitive data structures.
Primitive data types are predefined types of data, which are supported by the programming language. For example, integer, character, and string are all primitive data types.
Stacks Task
What kind of useful task does stacks support?
Sandhya.Kishan
- Apr 17th, 2012
Stack supports four major computing areas,they are
1.expression evaluation
2.subroutine return address storage
3.dynamically allocated local variable storage and
4.subroutine parameter passing.
Inherit Private/Protected Class
Can a private/protected class be inherited? Explain
Sandhya.Kishan
- Aug 1st, 2012
Yes, but they are not accessible. Although they are not visible or accessible via the class interface, they are inherited.
Masked Code
What is the number of masked code ee@?
Bharath Yadlapalli
- May 2nd, 2012
When kill -3 command is executed, it will quit from executing the process and additionally it will dump core for that process mentioned with pid.
Sandhya.Kishan
- Apr 9th, 2012
022 is the number of mask code ee@.
DocType and DOM
What is DocType? What is DOM?
Augustin
- Jul 7th, 2012
DOM - API for HTML. It represents a web page as a tree. In other words, DOM shows how to access HTML page. DOCTYPE is used 1) for validation, "validator.w3.org" 2) specifies the version of HTML. ...
Sandhya.Kishan
- Apr 16th, 2012
The DocType declaration helps a document to identify its root element and document type definition by reference to an external file, through direct declaration.It helps in specifying certain attribute...
DataType Byte Values
What are the byte values of datatypes?
Sandhya.Kishan
- Aug 1st, 2012
The default byte value of data types is zero.
XML and SGML
What is the relationship between XML and SGML? Does XML replace SGML or is it a subset of SGML?
Sandhya.Kishan
- Mar 21st, 2012
SGML is the basis of XML and HTML and provides a way to define markup languages and sets the standard for their form. SGML passes structure and format rules to markup languages.
XML is a subset of SGML. It is a meta language and is used to define other markup languages.
Compute Average of Two Scores
Describe an algorithm to compute the average of two scores obtained by each of the 100 students
HAkizimfura Yves
- May 27th, 2014
Write an algorithm that will display the sum of 5 integers by using two variables only
NB: do not use loops!
Shikhar Singhal
- Jun 10th, 2013
Algorithm- Let score1[100] and score2[100] be the arrays storing respective marks of the 100 students let float avg[100] store the average of the respective students. for n=0 to 99 avg[n]= (f...
Race Around Condition
What is race around condition?
Sandhya.Kishan
-.
Data Migration
How will you migrate the data from one system domain to another system domain? What testing procedures will follow?
Sandhya.Kishan
- Jun 25th, 2012
Domain migration happens when servers are upgraded and the data (including any authentication and authorization information) must be moved to a new system, when an administrator changes from one ISP t...
Compile...
ActiveX Component
What project option causes the necessary files to be generated when the project is compiled?
Sandhya.Kishan
- Jul 12th, 2012
Gcc -c proc.adb is an option to generate the necessary files during compilation.
Print Using string copy and concate Commands
How will you print TATA alone from TATA POWER using string copy and concate commands in C?
Sandhya.Kishan
- Jun 15th, 2012
#include <stdio.h>
#include <string.h>
#include <conio.h>
int main()
{
    char myString[] = "TATA POWER";
    char output[11];
    strcpy(output, myString);
    output[4] = '\0';   /* cut the string after "TATA" */
    printf("OUTPUT :%s\n", output);
    printf("ORIGINAL STRING :%s", myString);
    getch();
    return 0;
}
Masked Code
What is the number of the masked code ee@?
Sthitaprajna kar
- Mar 20th, 2017
022 is the masked code.
Sandhya.Kishan
- Jun 15th, 2012
022 is the number of the masked code ee@
Read the heights in inches and weight in pounds
Read the heights in inches and weight in pounds of an individual and compute and print their BMI=((weight/height)/height)*703
Sandhya.Kishan
- Apr 17th, 2012
#include <stdio.h>
#include <conio.h>
void main()
{
    float h, w, bmi;
    clrscr();
    printf("Enter your height in inches: ");
    scanf("%f", &h);
    printf("Enter your weight in pounds: ");
    scanf("%f", &w);
    bmi = ((w / h) / h) * 703;
    printf("BMI=%f", bmi);
    getch();
}
State of the Art in QA
What is the "State of the Art in QA"?)
Smart Client
What is Smart Client?
Sandhya.Kishan
- Apr 12th, 2012
Smart client is an application which can simultaneously hold the advantages of the thin client such as auto-update,zero install and advantages of thick client such as high productivity and high performance.
sutharsanan
- Sep 27th, 2011
Smart client can be worked as thick client or thin client.
Microsoft XML
What is Microsoft XML?
Sandhya.Kishan
- Mar 17th, 2012
It is a service which enables the developers to create interoperable XML applications on all platforms of XML 1.0.
Java Copy Command
Write the Java version of MS DOS Copy Command
Sandhya.Kishan
- Jul 13th, 2012
The command FileUtils.copyFile(fOrig, fDest); is similar to the MS-DOS copy command.
Names of Constraints
Oracle stores information regarding the names of all the constraints on which table? A)USER_CONSTRAINTS B)DUAL C)USER D)None of these
Naresh kumar
- Oct 3rd, 2012
Constraints divided into 3 types those are: 1. domain integrity constraints:- not null,check 2. entity integrity constraints:- unique,Primary key 3. referential integrity constraints:- foreign key In...
sukrampal
- Jul 12th, 2012
User-constraints and all_constraints
Functional Difference
What is the functional difference between wave trap, lighning arrestor, surge absorber.
Sandhya.Kishan
- Mar 17th, 2012
The function of Wave trap is to trap the communication signals of higher frequency sent from remote substation and diverting them to teleprotection panel in the control room substation. The function ...
Function that Counts Number of Primes
Write a function that counts the number of primes in the range [1-N]. Write the test cases for this function.
Sandhya.Kishan
- Jun 25th, 2012
static int getNumberOfPrime(int N) {
    int count = 0;
    for (int i = 2; i <= N; i++) {
        boolean isPrime = true;
        for (int j = 2; j * j <= i; j++)
            if (i % j == 0) { isPrime = false; break; }
        if (isPrime) count++;
    }
    return count;
}
Advantages of ADO over Data Control
Name two advantages of ADO over data control.
Sandhya.Kishan
- Apr 6th, 2012
Some advantages of ADO over data control are 1.ADO is faster with most databases compared to data control. 2.ADO separates Datahandling and Database Structure manipulation,hence its easier to protect...
Grid Control
What is Grid Control? For what purpose it is used?
Sandhya.Kishan
- Apr 6th, 2012
DataGrid control is a control in vb which helps in displaying the entire table of a record-set of a database. The control also allows users to view and edit the data.
Average Temperature
The average temperature of Monday to Wednesday was 37C and of Tuesday to Thursday was 34C. If the temperature on Thursday was 4/5 th of that of Monday, the temperature on Thursday was?
vishwanatham
- Dec 27th, 2012
37 - 3 = 34
( (mon + tue + wed) / 3 ) - 3 = (tue+wed+thu)/3
( mon + tue + wed -9) / 3 = (tue + wed+ thu) /3
( mon + tue + wed - 9) = (tue + wed + thu )
mon - 9 = thu
(since thu = (4/5) mon )
(5 * thu)/4 - 9 = Thu
Thu = 36
Sandhya.Kishan
- May 12th, 2012
The average temperature on Thursday will be 36 degrees.
DDL Operation Trigger
Which types of trigger can be fired on DDL operation? A. Instead of triggerB. DML triggerC. System triggerD. DDL trigger
Sandhya.Kishan
- Apr 5th, 2012
The trigger which can be fired on DDL operator is DDL trigger.
Explain how Sequence Diagram differ from Component Diagram?
swati
- Jun 19th, 2014
1.Component diagrams are used to illustrate complex systems,they are building blocks so a component can eventually encompass a large portion of a system, but sequence diagrams are not intended for sho...
Sandhya.Kishan
- Apr 11th, 2012
1. A component diagram represents how the components are wired together to form a software system where as a sequence diagram is an interaction diagram which represents how the processes operate with ...
Design a Framework
What are the criterias that are considered to design a framework in QTP?
Sandhya.Kishan
- Apr 2nd, 2012
Some criterias in designing a framework are 1.Based on the requirements the framework should be kept simple, because Complexities can only destruct the whole purpose of framework. 2.As the project p...
Neutral Grounding Resistor
What is use of Grounding the Neutral of the star connecting transformer through resistor (NGR)?
Sandhya.Kishan
- May 17th, 2012
All electrical systems should have a link to ground; otherwise there will be severe ground insulation stress on transients. A neutral grounding transformer links the power system neutral to ground.
A resistor is used for earthing the star point of a transformer and protects the transformer.
Flat Hit Numbers
If the number of hits become flat, then the issue is with,a)App Serverb)Web serverc)Db serverd)Authorization server
Shiv
- Jul 2nd, 2012
Its an issue with connection of Webserver.
jai hanuman
- May 30th, 2012
Problem related to webserver to tune the weblogic connections
Garbage Collection Algorithm
What algorithm is used in garbage collection?
Sandhya.Kishan
- Aug 1st, 2012
The algorithms used by garbage collectors are
1.Naïve mark-and-sweep
2.Tri-color marking...
Post Order Binary Tree Traverse
Design a conventional iterative algorithm to traverse a binary tree represented in two dimensional array in postorder.
Sandhya.Kishan
- Mar 10th, 2012
In Postorder traversal sequence we first look for the left node, then the right node, and then the root.
Algorithm:
void postOrder(tNode n)
{
    if (n == null)
        return;
    postOrder(n.left);
    postOrder(n.right);
    visit(n);
}
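The question asks for an iterative algorithm, while the answer above is recursive. Below is a hedged sketch in C of the classic two-stack iterative postorder; the struct node type, the MAX bound, and the function name are illustrative assumptions, not taken from the question.

```c
#include <stddef.h>

#define MAX 64

struct node {
    int value;
    struct node *left, *right;
};

/* Iterative postorder using two explicit stacks: pop from s1 onto s2
 * while pushing children onto s1; s2 then pops out left, right, root.
 * Writes up to MAX values into out[] and returns how many. */
static int post_order(struct node *root, int out[])
{
    struct node *s1[MAX], *s2[MAX];
    int t1 = 0, t2 = 0, n = 0;

    if (root == NULL)
        return 0;
    s1[t1++] = root;
    while (t1 > 0) {
        struct node *cur = s1[--t1];
        s2[t2++] = cur;
        if (cur->left != NULL)
            s1[t1++] = cur->left;
        if (cur->right != NULL)
            s1[t1++] = cur->right;
    }
    while (t2 > 0)
        out[n++] = s2[--t2]->value;
    return n;
}
```

For a root 1 with left child 2 and right child 3, this yields 2, 3, 1, matching the recursive version.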
Algorithm Characteristics
List out the characteristics of an algorithm
Sandhya.Kishan
- Mar 7th, 2012
1.It should be simple.
2.Generally written in simple language.
3.It involves finite number of steps.
4.should be executed in short period of time.
5.Output of algorithm should be unique.
Importance of Algorithm
What is the importance of algorithms in the field of computer science?
Sandhya.Kishan
- Mar 7th, 2012
Algorithms are blue prints of a program which gives all the details and functionality involved in finding the solution to a problem.It is important as we can build a program on any platform with the help of an algorithm.
Pure Virtual Functions
How can you make a class as interface, if you cannot add any Pure Virtual Function?
Sandhya.Kishan
- Jun 13th, 2012
Putting a virtual destructor inside a class makes the class usable as an interface.
sangeeta
- Feb 21st, 2012
Add pure virtual destructor in that class
Two-Dimensional Arrays
A Two-dimensional array X (7,9) is stored linearly column-wise in a computer's memory. Each element requires 8 bytes for storage of the value. If the first byte address of X (1,1) is 3000, what would be the last byte address of X (2,3)?
Sandhya.Kishan
- Apr 17th, 2012
Use the formulae
X(i,j)=Base+w[n(i-1)+(j-1)]
where m=7 ,n =9 ,i=2 ,j=3
hence 3000+8*[9(2-1)+(3-1)]
=3000+8*(9+2)
=3000+8*11=3088
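As a quick sanity check of the arithmetic above, here is a small C sketch (the helper name and signature are mine, not part of the question) encoding the formula X(i,j) = Base + w[n(i-1)+(j-1)] used in the answer:

```c
/* First-byte address of element (i, j), with 1-based indices,
 * w bytes per element and n elements per row, as in the formula
 * X(i,j) = Base + w[n(i-1) + (j-1)] used in the answer above. */
static long element_addr(long base, int w, int n, int i, int j)
{
    return base + (long)w * ((long)n * (i - 1) + (j - 1));
}
```

element_addr(3000, 8, 9, 2, 3) evaluates to 3088, the value computed above.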
Bytecode to Sourcecode
How to convert bytecode to sourcecode?
Sandhya.Kishan
- Aug 1st, 2012
A Java Decompiler (JD) can convert back the Bytecode (the .class file) into the source code (the .java file).
Error Trapping Functions
Functions for error trapping are contained in which section of a PL/SQL block?
Sandhya.Kishan
- Apr 5th, 2012
The Exception section of the PL/SQL block contains the functions for error handling.
Internet and Telephone Network Topology
Which topology is mostly used as the internet & telephone network?
Rahul
- Mar 18th, 2013
On the Internet we mostly use star topology, but mesh topology is more secure; in the telephone system, star topology is mostly used.
Sandhya.Kishan
- Apr 19th, 2012
Internet does not follow a standard topology,networks may combine topologies and connect multiple smaller networks, in effect turning several smaller networks into one larger one.
A ring topology can be used for telephone networks.
EAI Internal and External IO
What is Internal IO and External IO?
Sandhya.Kishan
- Apr 12th, 2012
The internal io are created through EAI Siebel Wizard.These object have their base type as siebel business objects. The internal io are used in EAI Siebel Adapter BS through query methods. External i...
Change jar File Icon
How to change jar file icon.
Sandhya.Kishan
- Aug 1st, 2012
The jar file doesn't have an icon; it's a system-wide setting that applies to ALL jar files.
Requirements Elicitation process
Explain the various steps to conduct Requirements Elicitation process
Sandhya.Kishan
- Jun 25th, 2012
Steps involved in requirements elicitation are 1. Identify the real problem, opportunity or challenge 2. Identify the current measure which shows that the problem is real 3. Identify the goal measure to s...
Intersection table
What is an intersection table and why is it important?
Sandhya.Kishan
- Jul 6th, 2012
An intersection table implements a many-to-many relationship between two business components.
scud021
- Jul 19th, 2008
A table added to the database to break down a many-to-many relationship to form two one-to-many relationships
Define Delay time - Load runner
Sandhya.Kishan
- Jun 12th, 2012
Delay time is the time that elapses between request and response.
VB.NET testing
What's involved in end to end VB.NET testing?
Sandhya.Kishan
- Jun 18th, 2012
A software once completed goes though rigorous testing before its actual integration.It also goes through different types of software testing and also different types of integration. The different ty...
File Compression
how can we compress any text file using c. can anybody provide me sample code
Sandhya.Kishan
- Mar 8th, 2012
The function comp() can be used for compression.The compression logic for comp() should provide the fact that ASCII only uses the bottom (least significant) seven bits of an 8-bit byte. The compressio...
Requirement Gathering
Name three activities involved in requirement gathering
Sandhya.Kishan
- Apr 5th, 2012
Three activities involved in requirement gathering are
1.Eliciting requirement
2.Analyzing requirement
3.Recording requirement
Integrity Rules
List the rules used to enforce table level integrity.
Sandhya.Kishan
- Apr 5th, 2012
There are 3 rules to enforce table level integrity. 1.Foreign key value can be modified only if we want to match the corresponding primary key value. 2.We cannot delete records either from parent or c... | http://www.geekinterview.com/user_answers/663204 | CC-MAIN-2020-16 | en | refinedweb |
Transparency effect between image and drawing
I want to give a transparency effect to an image, so the underlying drawing can be seen through it. Has anybody used this kind of effect, often visible on the Windows desktop, with Qt?
I'm grateful for any indication.
@Furkas Hi! Are we talking about QWidgets or QtQuick? Do you want to make a whole window transparent or just an object within a window?
@Wieland,
I'm talking about QWidgets, and I want just one transparent object, I think a QLabel may be cool.
see the QGraphicsOpacityEffect class in case you want to make the whole widget transparent
It's also possible to add a linear opacity (gradient-like), but there is more work involved.
@raven-worx Thanks for your advice. It seems to be the right tool. But I have trouble with this class: I get a "Read access violation" exception any time I try to call setGraphicsEffect() with this effect on a widget in my UI. I have not found in the manual what the restriction is.
It's ok if I add a QLabel outside of the UI.
hard to tell ... you're probably trying to accessing a uninitialized/deleted variable.
show some code pls.
@raven-worx Here the significant code :
#include <QMessageBox>
#include <QStandardPaths>
#include <QFileDialog>
#include <QImageReader>
#include <QTimer>
#include <QtWidgets>
#include "radararea.h"
#include "mainwindow.h"
#include "ui_mainwindow.h"
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::MainWindow)
{
imageLabel = new QLabel;
imageLabel->setBackgroundRole(QPalette::Base);
imageLabel->setSizePolicy(QSizePolicy::Ignored, QSizePolicy::Ignored);
imageLabel->setScaledContents(true);
effect = new QGraphicsOpacityEffect;
effect->setOpacity(0.6);

ui->setupUi(this);
ui->comboBoxAntType->addItem(tr("Antenne F1"));
ui->comboBoxAntType->addItem(tr("Antenne F2"));
ui->comboBoxAntType->addItem(tr("Antenne F3"));
connect(ui->comboBoxAntType, SIGNAL(currentIndexChanged(int)), SLOT(on_ComboChanged(int)));
......
I think UI is not available before the call to setupUI(), but it's the same after.
Yes indeed, as you said: don't access any member of the ui variable before calling setupUi() on it.
Also - as the docs of QWidget::setGraphicsEffect() are clearly stating - you cannot reuse a graphicseffect for multiple widgets.
@raven-worx I noted. So, I tried one after another.
dir, dirent — directory file format
#include <dirent.h>
<dirent.h>:
/*
 * A directory entry has a struct dirent at the front of it, containing
 * its inode number, the length of the entry, and the length of the name
 * contained in the entry.  These are followed by the name padded to some
 * alignment (currently 8 bytes) with NUL bytes.  All names are guaranteed
 * NUL terminated.  The maximum length of a name in a directory is MAXNAMLEN.
 */
struct dirent {
        ino_t d_fileno;         /* file number of entry */
        off_t d_off;            /* offset of next */
};

#define d_ino   d_fileno        /* backward compatibility */

/*
 * File types
 */
#define DT_UNKNOWN       0
#define DT_FIFO          1
#define DT_CHR           2
#define DT_DIR           4
#define DT_BLK           6
#define DT_REG           8
#define DT_LNK          10
#define DT_SOCK         12
The dir file format appeared in Version 7 AT&T UNIX. The d_off member was added in OpenBSD 5.5.
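As a minimal sketch of how these entries are consumed in practice, here is a C example using the directory-stream API (opendir/readdir/closedir, which hand back struct dirent pointers); the helper name is an illustrative assumption.

```c
#include <dirent.h>
#include <stdio.h>

/* Walk a directory stream, printing each entry's inode number
 * (d_fileno) and NUL-terminated name (d_name). Returns the number
 * of entries seen, or -1 if the directory cannot be opened. */
static int count_entries(const char *path)
{
    DIR *dirp = opendir(path);
    struct dirent *dp;
    int n = 0;

    if (dirp == NULL)
        return -1;
    while ((dp = readdir(dirp)) != NULL) {
        printf("%llu %s\n", (unsigned long long)dp->d_fileno, dp->d_name);
        n++;
    }
    closedir(dirp);
    return n;
}
```

count_entries(".") lists the current directory; any directory yields at least two entries, "." and "..".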
Do some people not even have access to google? or a book? or a brain? or a colleague? Or even simply intellisense and some vague sense of curiosity....
Admin
But everyone knows that floating point operations lead to a loss of precision, so the only way to correctly do math with floating point numbers is to convert them to strings.
Admin
Bonus points also for not specifying culture in float.ToString() '.' might be a decimal point, thousands separator or not used at all depending on local settings.
Admin
The code doesn't even work. It truncates to two digits after the separator, then checks if the second digit is 9. If it isn't, nothing more happens. So if the input is 1.455, it will return 1.45. If the second digit IS 9, it will round by increasing the first digit. So 1.49 becomes 1.5.
Then there is the bug. The code tries to test for the edge case x.99, in which case it intends to increase the integer part by one and return 0 decimals. I.e. (x + 1).0. However, it tests for <= 9, not < 9, so it fails to detect this and will round x.99 to x.10.
Admin
This looks like C#. For those wondering on a good way to get a rounded string in C#, consider:

float myFloat = 32454367.45555555F;
string roundedTo1DecimalPoint = myFloat.ToString("#.#", System.Globalization.CultureInfo.InvariantCulture);
string roundedTo2DecimalPoint = myFloat.ToString("#.##", System.Globalization.CultureInfo.InvariantCulture);
string roundedTo3DecimalPoint = myFloat.ToString("#.###", System.Globalization.CultureInfo.InvariantCulture);
Admin
or good old myFloat.ToString("n2")
Admin
I've just had to reject a module because it uses if ... elseif ... elseif ... ... else ... endif instead of doing what ought to have been done: configure the data as a map and do a key-value lookup. All very well, but the engineer in question was told to change it 18 months ago and repeatedly since. In the end I asked him straight up: "Do you understand the concept of a map?" and when pressed, he had to answer "... no."
The fail is great with this one.
Admin
Also known as Stringly Typed.
Admin
I have a feeling that this would epicly fail on extremely big or extremely small values. You know, the ones involving "e".
Admin
That ain't no engineer.
So what you're saying is, you failed to provide adequate training? ;)
Admin
King Philip came over for good soup. Kingdom, Phylum, Class, Order, Family, Genus, Species.
Yippie! I still remember something from Jr. High science. Wait, it's obsolete information now? (cue sad trombone sound).
Admin
We should suppose that this code does just what it does on purpose. Customers want to have it just this way! So, of course the programmers had to eschew the builtin functions. There just aren't any that round numbers in just the strange way required. So they had to concoct their own. :-P
Admin
Aren't you the smug elitist, pressing for a specific solution (Map) where the chosen solution fits the functionality just fine? (I don't know, of course, whether it does: you failed to specify.)
I found at least 1 use case in which using a Map is bad practice: when the data you're storing should get exposed neither in the source code nor via reflection.
I was developing a shared library that would store database credentials, use them internally to set up a connection, without ever exposing them itself. I tried various solutions, including Maps, but they'd all expose the credentials either in source code or via reflection.
The only way I found that didn't, was by using a simple if-else-if-else construction inside a method, and storing the data in a char array.
I'm appalled that this was necessary, for I too am an elitist and would rather have used various Map implementations. But practicality forced my hands.
Back to you. Do you enjoy micro-managing your coworkers?
Admin
"Does this string code work?"
" 'Frayed knot"
Woo hoo got that inB4 y'all
Admin
That's not micromanaging. That's just code review. Micromanaging would be if the engineer in question came back with a good reason for using an if-else if-else instead of a Map and you told him to use the Map anyway.
An engineer who comes back and says that he doesn't know what a Map actually is is someone who should be thinking about all of those data structures classes they skipped out on in school, thinking that the stuff in that course couldn't possibly ever be relevant to the real world...
Admin
I didn't try it (partly because your post left out details about your environment), but I have a hard time to believe that your solution with an array would prevent getting the data using reflection. Besides if you develop a shared library I would expect any configuration of your database connection to be outside of your library, otherwise it would be coupled to another system, which is not the property of a well designed shared library.
Admin
ShortenFloatString(1.999f) gives "1.10" - which is SOOOO wrong. ShortenFloatString(1.091f) gives "1.1" (with no trailing zero), which is also wrong, but a bit closer. ShortenFloatString(1.089f) gives "1.08" which is also wrong and weird.
Admin
The only thing wrong with this superlative bit of code is that it returns the wrong string. May I humbly suggest "Brillant"?
Admin
Best. Joke. EVAR.
Not this one of course. The one who's punch line is referenced.
Admin
Or perhaps with you, your colleagues and the manager for waiting 18 months to help a co-worker out? Nah.. that couldn't be a thing right?
Seriously, if you waited 18 months to ask someone if they knew what a map was and they said "no", I'd fire both of you.
Admin
To be fair, I've never come across a language that supported a rounding function that allowed you to round x.yzw to x.yz if z is not 9, and to x.(y+1) if z is 9. (And possibly the rule that x.99n should be rounded to x.10, though that probably is just a bug.)
@Sam Judson: 1.091 being shortened to 1.1 isn't actually wrong; it's pretty clear from the final couple of blocks that the intent is actually to shorten it to one decimal place.
If the "num <= 9" bug was corrected, and an else clause was added for the "if(xString2[2] == '9')" block that stripped off the final character of xString2, it would actually work -- provided your rounding rule was "9 rounds up, everything else rounds down".
Of course, even with those fixes it would still be a WTF, but it would be a correctly functioning one if you had that rather odd requirement. And it would be only one bugfix away from rounding more or less normally! (In effect, not manner.)
Addendum 2016-10-04 04:31: It would be fun to see what the original coder would come up with if they were told that a different field had to be rounded to two decimals instead of one.
I mean, I wouldn't want to have it in my codebase, but it'd be fun to point at and laugh.
Admin
… is someone who is only fit for hitting code with a hammer, not actually an engineer. They should know. Or they should deal with their ignorance the moment they realise they're ignorant. But actually… if you don't know Lists and Maps, you're a crappy code monkey, not an engineer; it takes dedication to stupidity to make it as a programmer without learning the basic tools of the trade.
Admin
Two minutes of thinking are way harder than 30 minutes of typing, you know.
Admin
so this is how you implement Math.Round(double value, int digits)!
Admin
"Looks like C#' - Which bit of 'Which brings us to today’s C# code, from Aaron.' gave that away?
Admin
Me neither. That's why, if I were given such a requirement, I would roll my own instead of looking for a library with it.
I bet some sales tax jurisdiction has a rule somewhat like that.
Yes, that looks like a WTF. Front page material.
Admin
The "Programming is Hard" thread is ------------------->
Admin
I particularly loved the line;
int num = int.Parse(xString2[1].ToString())
All strings in C# are taken as char arrays. So "xString2[1]" returns the char-typed value, hence the need to call ToString() in "xString2[1].ToString() so that it could be parsed by "int.Parse()". Alternatively (if you really had to)...;
int num = xString2[1] - '0';
Admin
It's a dot. If you're using anything else, "You’re Doing It Wrong™".
OR: It's working on my machine. Clearly, your machine is broken.
Create a thread
#include <pthread.h>

int pthread_create( pthread_t* thread,
                    const pthread_attr_t* attr,
                    void* (*start_routine)(void* ),
                    void* arg );
If attr is NULL, the default attributes are used (see pthread_attr_init()).
The thread in which main() was invoked behaves differently. When it returns from main(), there's an implicit call to exit(), using the return value of main() as the exit status.
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The pthread_create() function creates a new thread, with the attributes specified in the thread attribute object attr. The created thread inherits the signal mask of the parent thread, and its set of pending signals is empty.
QNX Neutrino extensions
If you adhere to the POSIX standard, there are some thread attributes that you can't specify before creating the thread:
There are no pthread_attr_set_* functions for these attributes.
As a QNX Neutrino extension, you can OR the following bits into the __flags member of the pthread_attr_t structure before calling pthread_create():
After creating the thread, you can change the cancellation properties by calling pthread_setcancelstate() and pthread_setcanceltype().
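A minimal, portable POSIX sketch (not QNX-specific; function names other than the pthread calls are mine) that creates a joinable thread with default attributes, passes it an argument, and collects the result:

```c
#include <pthread.h>

/* Thread entry point: the void* argument and return value are how
 * the creator passes data in and gets a result back via pthread_join(). */
static void *double_it(void *arg)
{
    int *n = arg;
    *n *= 2;
    return arg;
}

/* Create the thread with attr == NULL (default attributes, joinable),
 * wait for it to finish, and return the value it produced. */
static int run_once(int start)
{
    pthread_t thread;
    int value = start;
    void *result;

    if (pthread_create(&thread, NULL, double_it, &value) != 0)
        return -1;
    pthread_join(thread, &result);
    return *(int *)result;
}
```

run_once(21) returns 42.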
At this point, it would be useful to have a nice abstraction to handle all this that you could code against while keeping your application's code elegant and simple to use. As usual, when looking for a design to start with, it turns out this problem was already nicely solved for C# developers with the XNA Game Studio GamePad class.
GamePad class
The September 2014 release of DirectX Tool Kit includes a C++ version of the GamePad class. To make it broadly applicable, it makes use of XInput 9.1.0 on Windows Vista or Windows 7, XInput 1.4 on Windows 8.x, and IGamePad on Xbox One. It's a simple class to use, and it takes care of the nuanced issues above. It implements the same thumb stick deadzone handling system as XNA, which is covered in detail by Shawn Hargreaves in his blog entry "Gamepads suck". The usage issue that continues to be the responsibility of the application is ensuring that you poll it fast enough to not miss user input, which mostly means ensuring your game has a good frame rate.
See the documentation wiki page on the new class for details, and the tutorial.
The headset audio features of XInput are not supported by the GamePad class. Headset audio is not supported by XInput 9.1.0, has some known issues in XInput 1.3 on Windows 7 and below, works a bit differently in XInput 1.4 on Windows 8, and is completely different again on the Xbox One platform.
The GamePad class is supported on all the DirectX Tool Kit platforms: Win32 desktop applications for Windows Vista or later, Windows Store apps for Windows 8.x, and Xbox One. You can create and poll the GamePad class on Windows Phone 8.x as well, but since there's no support for gamepads on that platform it always returns 'no gamepad connected'.
Xbox One Controller
Support for the Xbox One Controller on Windows was announced by Major Nelson in June and drivers are now hosted on Windows Update, so using it is a simple as plugging it into a Windows PC via a USB cable (see the Xbox One support website). The controller is supported through the XInput API as if it were an Xbox 360 Common Controller with the View button being reported as XINPUT_GAMEPAD_BACK, and the Menu button being reported as XINPUT_GAMEPAD_START. All the other controls map directly, as do the left and right vibration motors. The left and right trigger impulse motors cannot be set via XInput, so they are not currently accessible on Windows.
The Xbox One Wireless controller is not compatible with the Xbox 360 Wireless Receiver for Windows, so you have to use a USB cable to use it with Windows. Note also that it will unbind your controller from any Xbox One it is currently setup for, so you'll need to rebind it when you want to use it again with your console.
Update: DirectX Tool Kit is also hosted on GitHub.
Windows 10: There is a new WinRT API in the Windows.Gaming.Input namespace for universal Windows apps. This API supports both the Xbox 360 Common Controller and the Xbox One controller, including access to the left/right trigger motors. The latest version of GamePad is implemented using this new API when built for Windows 10. Note that existing XInput-based Windows Store applications can link against xinputuap.lib, which is an adapter for the new API for universal Windows apps--this adapter does not support headset audio either.
Related: XInput and Windows 8, XInput and XAudio2
I came by after a delay, but thanks for noting the Windows 10 update and the DirectX Tool Kit + Windows.Gaming.Input namespace!
Hi,
It seems that this API does not work when executed on Windows 10 IoT Core. Could you confirm?
Rgds | https://blogs.msdn.microsoft.com/chuckw/2014/09/05/directx-tool-kit-now-with-gamepads/ | CC-MAIN-2018-26 | en | refinedweb |
With ScriptRunner for JIRA: is it possible (if it is, please tell me how) to copy field values, e.g. the reporter of an issue, to a custom field via a post function?
In a later workflow step I'd like to do it the other way around and copy the value of my custom field back into the reporter field.
Thanks
David
Hi,
Can you try the snippet below?
Note that I did not try the code.
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.fields.CustomField

def customFieldManager = ComponentAccessor.getCustomFieldManager()
def customField = customFieldManager.getCustomFieldObject("customfield_12700")
issue.setCustomFieldValue(customField, issue.reporter.name)
Tuncay
Assuming the custom field name used there is correct, this should work for setting the field. Assuming your custom field is also a user picker, you can basically reverse it to set the reporter back again with
def user = issue.getCustomFieldValue(customField)
issue.setReporter(user)
You can do that with the jira-suite-utilities plugin (free)
Thank you!
But I'm also curious if (and how!) you can do it with a Groovy script.
Simple MNIST and EMNIST data parser written in pure Python
Project description
Simple MNIST and EMNIST data parser written in pure Python.
MNIST is a database of handwritten digits available on. EMNIST is an extended MNIST database.
Requirements
- Python 2 or Python 3
Usage
git clone
cd python-mnist
Get MNIST data:
./get_data.sh
Check preview with:
PYTHONPATH=. ./bin/mnist_preview
Installation
Get the package from PyPi:
pip install python-mnist
or install with setup.py:
python setup.py install
Code sample:
from mnist import MNIST

mndata = MNIST('./dir_with_mnist_data_files')
images, labels = mndata.load_training()
To enable loading of gzip-ed files use:
mndata.gz = True
EMNIST
Get EMNIST data:
./get_emnist_data.sh
Check preview with:
PYTHONPATH=. ./bin/emnist_preview
To use EMNIST datasets you need to call:
mndata.select_emnist('digits')
Where digits is one of the available EMNIST datasets. You can choose from
- balanced
- byclass
- bymerge
- digits
- letters
- mnist
The EMNIST loader uses gzip-ed files by default; this can be disabled by setting:
mndata.gz = False
You also need to unpack the EMNIST files, as the get_emnist_data.sh script won't do it for you. The EMNIST loader also needs to mirror and rotate images, so it is a bit slower (if this is an issue for you, you should repack the data to avoid mirroring and rotation on each load).
Notes
This package doesn’t use numpy by design as when I’ve tried to find a working implementation all of them were based on some archaic version of numpy and none of them worked. This loads data files with struct.unpack instead.
In the previous recipe, we learned how to get data from an API using fetch. In this recipe, we will learn how to POST data to the same endpoint to add new bookmarks.
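One way such a POST can be assembled with fetch is sketched below. The endpoint and payload shape here are made up for illustration, not the book's actual API:

```javascript
// Build the options object for a JSON POST. React Native ships a fetch
// implementation, so this works unchanged inside a component.
function buildBookmarkRequest(bookmark) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(bookmark),
  };
}

// Usage (the URL is hypothetical):
// fetch('https://example.com/bookmarks', buildBookmarkRequest({ title: 'RN' }))
//   .then(res => res.json())
//   .then(saved => console.log(saved));
```

Keeping the request construction in a small helper makes it easy to reuse for other endpoints later in the recipe.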
Before going through this recipe, we need to create an empty app named SendingData. You can use any other name; just make sure you set the correct name when registering the app.
In the index.ios.js and index.android.js files, remove the previous code and add the following:
import React from 'react';
import { AppRegistry } from 'react-native';
import MainApp from './src/MainApp';

AppRegistry.registerComponent('SendingData', () => MainApp);
In the src/MainApp.js file, import ...
import seems to be stuck on part X/109 parts of large file and there are 12 large files. HEM isn't responding, however..
should i close and restart the upload (perhaps 1 large file at a time instead of all 12)?
or leave it and expect results? (it has been stuck on part X for 2hrs or so now.)
any other ideas welcome obviously
- files are the hh docs requested from stars fwiw
cheers | http://forums.holdemmanager.com/manager-general/40720-uploading-many-hands.html | CC-MAIN-2018-26 | en | refinedweb |
Section 8.6, "Using Google Maps and Bing Maps"

Section 8.7, "Transforming Data to a Spherical Mercator Coordinate System"

Section 8.8, "Dynamically Displaying an External Tile Layer"

This chapter's sample application uses the Oracle Maps JavaScript V1 API. This example, along with sample applications, tutorials, and API documentation, is included in a separate
mvdemo.ear file, which can be downloaded from. The
mvdemo.ear file should be deployed into the same container as the
mapviewer.ear file.
Note: The Oracle Maps JavaScript V1 and V2 APIs are described in Section 8.4.
Section 8.1.2.1, "Simple Application Using the V2 API" describes essentially the same simple example but implemented using the V2 API.
The simple application shown in Figure 8-2 can be accessed at http://host:port/mvdemo/fsmc/sampleApp.html. To run this application, follow the instructions in http://host:port/mvdemo/fsmc/tutorial/setup.html to set up the database schema and the necessary map tile layers.
Figure 8-2 Application Created Using Oracle Maps (V1 API)

Example 8-1 shows the complete source code for the simple application shown in Figure 8-2.

Example 8-1 Source Code for the Simple Application (V1 API)
<html> <head> <META http- <TITLE>A sample Oracle Maps Application</TITLE> <script language="Javascript" src="jslib/oraclemaps.1.3.
Figure 8-3 shows a simple example with the essentially the same logic as that shown in Figure 8-2, but using the Oracle Maps JavaScript V2 API.
Figure 8-3 Application Created Using Oracle Maps (V2 API)
Example 8-2 shows the complete source code for the simple application shown in Figure 8-3.
Example 8-2 Source Code for the Simple Application (V2 API)
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "">
<html>
<head>
<META http-
<TITLE>A sample Oracle Maps V2 application</TITLE>
<script language="Javascript" src="/mapviewer/jslib/v2/oraclemapsv2.js"></script>
<script language=javascript>
var customersLayer=null;(); }
function setLayerVisible(checkBox)
{
    // Show the customers vector layer if the check box is checked and
    // hide it otherwise.
    if(checkBox.checked)
        customersLayer.setVisible(true);
    else
        customersLayer.setVisible(false);
}
</script>
</head>
<body onload= javascript:on_load_mapview() >
<h2>A Sample Oracle Maps V2 Application</h2>
<INPUT TYPE="checkbox" onclick="setLayerVisible(this)" checked/>Show customers
<div id="map" style="width: 600px; height: 500px"></div>
</body>
</html>

Figure 8-4 shows the layout of the map layers.
Figure 8-4 Layers in a Map
As shown in Figure 8-4, a map can include built-in map tile layers; Google Maps and Microsoft Bing Maps tile layers are examples. For more information, see Section 8.6, "Using Google Maps and Bing Maps" and the JavaScript API documentation for the classes MVGoogleTileLayer and MVBingTileLayer. (If you need to overlay your own spatial data on top of the Google Maps or Bing Maps tile layer, see also Section 8.7, "Transforming Data to a Spherical Mercator Coordinate System".)

Figure 8-5 shows the basic workflow of the map tile server.
Figure 8-5 Workflow of the Map Tile Server
The tiling scheme is shown in Figure 8-6.

Figure 8-6 Tiling with a Longitude/Latitude Coordinate System

On each zoom level, the map tiles are created by equally dividing the whole map coordinate system along the two dimensions (X and Y). Figure 8-7 shows the mesh codes of the tiles on a map.
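The mesh code of a tile is simply its column/row index in that grid, counted from the lower-left corner of the map coordinate system. As a small illustrative helper (not part of the Oracle Maps API):

```javascript
// Given a point (x, y) and a zoom level's tile dimensions, compute the
// mesh code of the tile that contains the point.
function meshCode(x, y, minX, minY, tileWidth, tileHeight) {
  return {
    mx: Math.floor((x - minX) / tileWidth),   // column index
    my: Math.floor((y - minY) / tileHeight),  // row index
  };
}
```

The getMapTileURL example in Section 8.8 performs the same arithmetic when translating a tile request into a tile server URL.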
Example 8-3 shows the XML definition of an internal map tile layer, and Example 8-4 shows the XML definition of an external map tile layer. Explanations of the <map_tile_layer> element and its subelements follow these examples.

Example 8-5 shows an external map source adapter, and Example 8-6 shows the implementation of the MapSourceAdapter.getTileImageBytes method.

When defining a map tile layer (Example 8-7), you specify the tile layer settings. To see how these specifications affect the map display, see Figure 8-2, "Application Created Using Oracle Maps (V1 API)".

Example 8-8 shows the XML styling rules for a templated predefined theme that uses two binding variables (with the relevant text shown in bold in the <features> element).

Example 8-9 shows some JavaScript client code to create an FOI layer that displays a buffer around each customer location.

The JavaScript API has two versions:
Version 1 (V1), the traditional API that is still supported, and described in Section 8.4.1, "JavaScript API V1"
Version 2 (V2), a new API introduced in Release 11.1.1.7, and described in Section 8.4.2, "JavaScript API V2"
For detailed information about all classes in the Oracle Maps JavaScript API (V1 and V2), see the Javadoc-style reference documentation, which is included in the
mvdemo.ear file and is available at the following locations:
http://host:port/mvdemo/api/oracle_maps_api.jsp (for V1)

http://host:port/mvdemo/api/oracle_maps_html5_api.jsp (for V2)
Tutorials and demos for both the V1 and V2 APIs are available as a standalone packaged application with the root context path
/mvdemo. The tutorials start with the basics (display a map tile layer, add a navigation panel, display interactive features and information windows) and move on to more complex topics such as registering event listeners, programmatically creating and using styles, and spatial filtering.
The tutorials are all based on the MVDEMO sample data set (available from the MapViewer page on the Oracle Technology Network) and assume a data source named mvdemo. The tutorial page has three panels. The left one lists the sample code, or demo, titles. Click on one and a map, or the result of executing that sample code, is displayed in the top right panel. The bottom panel has tabs titled JavaScript and HTML, which respectively show the JavaScript and HTML code fragments for the selected demo.
To access the functions of the Oracle Maps JavaScript client, use the JavaScript API Version 1 (V1).
The Oracle Maps JavaScript API Version 2 (V2) takes advantage of the capabilities of modern browsers. Some of its features include:
Built-in support of various third party map tile services, such as maps.oracle.com, Nokia Maps, Bing Maps, OpenStreet Maps, and other mapping service providers
Rich client side rendering of geospatial data with on-the-fly application of rendering styles and effects such as gradients, animation, and drop-shadows
Autoclustering of large number of points and client side heat map generation
Client side feature filtering based on attribute values as well as spatial predicates (query windows)
A rich set of built-in map controls and tools, including a customizable navigation bar and information windows, configurable layer control, and red-lining and distance measurement tools
The V2 API is not backward compatible with the existing Oracle Maps JavaScript V1 API applications. If you want to use V2-specific features with existing V1 applications (that is, applications written with the V1 API using classes such as
MVThemeBasedFOI), those applications will need to be migrated first.
Note, however, that existing server-side predefined styles and themes will work with the V2 API. For example, the following code snippet creates an interactive vector layer based on a predefined theme
mvdemo.customers, which has an associated predefined style:
var baseURL = "http://"+document.location.host+"/mapviewer";
var layer = new OM.layer.VectorLayer("layer1",
    {
        def:{
            type:OM.layer.VectorLayer.TYPE_PREDEFINED,
            dataSource:"mvdemo",
            theme:"customers",
            url: baseURL
        }
    });
The V2 API has the following top-level classes and subpackages, all of which are in the namespace OM:

The Map class is the main class of the API.

The Feature class represents individual geo features (or FOIs as they were known in V1).

The MapContext class is a top-level class encapsulating some essential contextual information, such as the current map center point and zoom level. It is typically passed into event listeners.

The control package contains all the map controls, such as navigation bar and overview map.

The event package contains all the map and layer event classes.

The filter package contains all the client-side filters (spatial or relational) for selecting, or subsetting, the displayed vector layer features.

The geometry package contains various geometry classes.

The layer package contains various tile and vector layer classes. The tile layer classes include access to a few online map services such as Oracle, Nokia, Bing, and OpenStreetMap. The vector layers are interactive feature layers and correspond to the MVThemeBasedFOI and MVFOI classes of V1.

The infowindow package contains the customizable information windows and their styles.

The style package contains styles applicable to vector data on the client side. It also includes visual effects such as animation, gradients, and drop shadows.

The tool package contains various map tools such as for distance measuring, red-lining, and geometry drawing.

The universe package contains built-in, or predefined, map universes. A map universe defines the bounding box and set of zoom level definitions for the map content. It is similar to a tile layer configuration in the V1 API.

The util package contains various utility classes.

The visualfilter package provides an interface for the various visual effects, such as gradients and drop shadows.
OM.Map is the main entry class for all map operations inside the web browser. This and other classes provide interfaces for adding application-specific logic, operations, and interactivity in web mapping applications. The application logic and operations can include the following:
Create a map client instance and associate it with the map container DIV object created in the web page.
Configure map parameters such as map center and map zoom level.
Optionally, create and manipulate map tile layers. Unlike in V1, a map tile layer is not required in V2. An application can have only interactive vector layers using a custom Universe that programmatically defines the zoom levels and scales.
Create and manipulate vector layers (known as FOIs in V1).
Display an information window on the map.
Create fixed map decorations, such as a map title, a copyright note, and map controls.
Access built-in utilities such as a navigation panel, rectangle or circle tool, scale bar, and overview map panel.
Use event listeners to customize event handling and thus map interactions.
For information about developing applications using the V2 API, see Section 8.5.2, "Using the V2 API" and the Oracle-supplied tutorials and demos.
Both V1 and V2 APIs have major similarities:
They have the same architecture and content organization. (Figure 8-1, "Architecture for Oracle Maps Applications" and Figure 8-4, "Layers in a Map" apply to both versions.)
They depend on Oracle Spatial or Locator for spatial analysis (proximity, containment, nearest neighbor, and distance queries) and coordinate system support (SRIDs and transformations).
However, there are some significant differences:
The V2 client-side rendering of interactive features (that is, using HTML5 Canvas or SVG) provides for a richer client interactivity and user experience.
The V1 "FOI server" is in V2 a data server that streams the vector geometries and attributes for features to the client for local rendering. Therefore, the V1 "FOI layers" are called vector layers in V2.
In V2, a background map tile layer is not required in order to display interactive vector layers. So in V2, for example, an application can display a thematic map of states (such as color-filled by population quintile) with no background tile layer.
The V2 API depends on and includes JQuery and JQueryUI. So, oraclemapsv2.js includes jquery-1.7.2.min.js and jquery-ui-1.8.16.min.js. If your application also uses JQuery and JQueryUI and includes them already, then use the file oraclemapsv2_core.js in the <script> tag instead to load the Oracle Maps V2 library. That is, use the following:
<script src="/mapviewer/jslib/v2/oraclemapsv2_core.js"></script>
instead of:
<script src="/mapviewer/jslib/v2/oraclemapsv2.js"></script>
Table 8-2 shows the general correspondence between the classes in V1 and V2, although the relationships are not always one-to-one.
If you have all your map data stored in an Oracle database and have MapViewer deployed in Oracle Fusion Middleware, you can develop a web-based mapping application using Oracle Maps by following the instructions in the section relevant to the API version that you are using.
To develop Oracle Maps applications using the Version 1 (V1) API, follow the instructions in these sections:
Creating One or More Map Tile Layers
Creating the Client Application with the V1 API
Developing applications with the V2 API is similar to the process for the V1 API. If all the spatial data used for base maps, map tile layers, and interactive layers or themes is stored in an Oracle database, then the map authoring process using the Map Builder tool is the same for both APIs.
If the underlying base map and layers are managed in an Oracle database, each map tile layer displayed in the client application must have a corresponding database metadata entry in the USER_SDO_CACHED_MAPS metadata view (described in Section 8.2.2.2). Similarly, if an interactive layer is based on database content, it must have a metadata entry in the USER_SDO_THEMES view (described in Section 2.9, especially Section 2.9.2). These tile and interactive layers, and the styles and styling rules for them, can be defined using the Map Builder tool (described in Chapter 9).
To develop Oracle Maps applications using the Version 2 (V2) API, follow these basic steps:
Import the oraclemapsv2.js library.

The API is provided in a single JavaScript library packaged as part of the MapViewer EAR archive.

After MapViewer is deployed and started, load the library through a <script> tag, for example:

<script type="text/javascript" src="/mapviewer/jslib/v2/oraclemapsv2.js"></script>
Create a <DIV> tag in the HTML page, which will contain the interactive map. (This is the same as in the V1 API.)

Create a client-side map instance that will handle all map display functions.

The class is named OM.Map and is the main entry point of the V2 API. So, OM.Map in V2 is equivalent to MVMapView in V1.
Set up a map universe (unless you also do the optional next step).
A map universe basically defines the overall map extent, the number of zoom levels, and optionally the resolution (in map units per pixel) at each zoom level. In the V1 API, this information is contained in a tile layer definition. Those will continue to work in V2; however, in V2 a predefined tile layer is not necessary in order to display interactive vector layers or themes. For example, an interactive thematic map of sales by region does not need to have a background map, or tile layer.
(Optional) Add a tile layer that serves as the background map.
The tile layer can be from the database, such as mvdemo.demo_map, or from a supported service, such as Nokia Maps. Adding a tile layer also implicitly defines a map universe, and therefore the preceding step (setting up a map universe) is not necessary in this case.
Add one or more interactive vector layers.
An OM.layer.VectorLayer is equivalent to MVThemeBasedFOI in the V1 API. The main difference is that OM.layer.VectorLayer uses HTML5 (Canvas or SVG) technology to render all the data in the browser. So, unless specified otherwise, all vector layer content is loaded once and there are no subsequent database queries, or data fetching, on map zoom or pan operations.
Add one or more map controls, tools, and other application-specific UI controls so that users can set the displayed layers, styling, and visual effects.
For detailed instructions and related information, see the Oracle-supplied tutorials and demos.
Oracle Maps V2 applications run inside web browsers and require only HTML5 (Canvas) support and JavaScript enabled. No additional plugins are required.
As shown in Example 8-1, "Source Code for the Simple Application (V1 API)" in Section 8.1.2, the source for an Oracle Maps application is typically packaged in an HTML page, which consists of the following parts:
A <script> element that loads the Oracle Maps V2 client library into the browser's JavaScript engine. For example:

<script src="/mapviewer/jslib/v2/oraclemapsv2.js"></script>
An HTML <div> element that will contain the map. For example:

<div id="map" style="width: 600px; height: 500px"></div>
JavaScript code that creates the map client instance and sets the initial map content (tile and vector layer), the initial center and zoom, and map controls. This code should be packaged inside a function which is executed when the HTML page is loaded or ready. The function is specified in the onload attribute of the <body> element of the HTML page.
Additional HTML elements and JavaScript code that implement other application-specific user interface and control logic. For example, the HTML <input> element and JavaScript function setLayerVisible together implement a layer visibility control. The setLayerVisible function is coded as follows:

function setLayerVisible(checkBox)
{
    // Show the customers vector layer if the check box is checked and
    // hide it otherwise.
    if(checkBox.checked)
        customersLayer.setVisible(true);
    else
        customersLayer.setVisible(false);
}
The function is specified in the onclick attribute of the <input> element defining the checkbox. In the following example, the function is executed whenever the user clicks on the Show Customers check box:
<INPUT TYPE="checkbox" onclick="setLayerVisible(this)" checked/>Show Customers
Applications can display Google Maps tiles or Microsoft Bing Maps tiles as a built-in map tile layer, by creating and adding to the map window an instance of MVGoogleTileLayer or MVBingTileLayer, respectively. Internally, the Oracle Maps client uses the official Google Maps or Bing Maps API to display the map that is directly served by the Google Maps or Microsoft Bing Maps server.
To use the Google Maps tiles, your usage of the tiles must meet the terms of service specified by Google (see).
To use the Bing Maps tiles, you must get a Bing Maps account. Your usage must meet the licensing requirement specified by Microsoft (see).
(If you need to overlay your own spatial data on top of the Google Maps or Microsoft Bing Maps tile layer, see also Section 8.7, "Transforming Data to a Spherical Mercator Coordinate System".)
The following sections describe the two options for using built-in map tile layers:
Section 8.6.1, "Defining Google Maps and Bing Maps Tile Layers on the Client Side"
Section 8.6.2, "Defining the Built-In Map Tile Layers on the Server Side"
To define a built-in map tile layer on the client side, you need to create a MVGoogleTileLayer or MVBingTileLayer object, and add it to the MVMapView object. (As of Oracle Fusion Middleware Release 11.1.1.6, MVGoogleTileLayer uses the Google Maps Version 3 API by default, and MVBingTileLayer uses the Bing Maps Version 7 API by default.)
For example, to use Google tiles, add the Google tile layer to your map:
mapview = new MVMapView(document.getElementById("map"), baseURL);
tileLayer = new MVGoogleTileLayer();
mapview.addMapTileLayer(tileLayer);
In your application, you can invoke the method MVGoogleTileLayer.setMapType or MVBingTileLayer.setMapType to set the map type to be one of the types supported by the map providers, such as road, satellite, or hybrid.
For usage examples and more information, see the JavaScript API documentation for MVGoogleTileLayer and MVBingTileLayer, and the tutorial demos Built-in Google Maps Tile Layer and Built-in Bing Maps Tile Layer.
You can define a built-in map tile layer on the server side and use it as a regular MapViewer tile layer on the client side. To define a built-in map tile layer on the server side, follow these steps:
Log into the MapViewer Administration Page (explained in Section 1.5.1).
Select the Manage Map Tile Layers tab and click Create.
When you are asked to select the type of map source, choose Google Maps or Bing Maps and click Continue.
Select the data source where the tile layer is to be defined.
Set the license key that you have obtained from the map provider.
Click Submit to create the tile layer.
After you have created the built-in map tile layer on the server side, you can use it like any other tile layer served by MapViewer. You do not need to add any <script> tag to load the external JavaScript library.
The following example shows a Bing Maps tile layer defined on the server side:
mapview = new MVMapView(document.getElementById("map"), baseURL);
// The Bing tile layer is defined in data source "mvdemo".
tileLayer = new MVMapTileLayer("mvdemo.BING_MAP");
mapview.addMapTileLayer(tileLayer);
In your application, you can invoke the method MVMapTileLayer.setMapType to set the map type to be one of the types supported by the map providers, such as road, satellite, or hybrid.
Either pre-transform your spatial data for better performance, or let MapViewer transform the data at runtime (see Oracle Spatial and Graph Developer's Guide).
To let MapViewer transform the data at runtime, appropriate coordinate system transformation rules must exist. Example 8-10 shows SQL statements that are included in the csdefinition.sql script and that create such transformation rules. However, if the coordinate system of your spatial data is not covered by the rules shown in Example 8-10, you can create your own rule. (For more information about creating coordinate system transformation rules, see Oracle Spatial and Graph Developer's Guide.)
The Oracle Maps JavaScript API supports dynamically defining an external tile layer without needing any server-side storage of either the definition or the tile images. Basically, you can use the class MVCustomTileLayer to reference and display tile layers served directly from any external map tile server on the web, such as the ESRI ArcGIS tile server, the OpenStreet map tile server, or other vendor-specific map tile servers.

To do so, you need to do the following when creating a new MVCustomTileLayer instance:
Know the configuration of the map tile layer, specifically its coordinate system, boundary, and zoom level.
Supply a function that can translate a tile request from Oracle Maps into a tile URL from the external tile server.
The configuration of a tile layer takes the form of a JSON object, and is generally in the format illustrated by the following example:
var mapConfig = {
    mapTileLayer:"custom_map", format:"PNG",
    coordSys:{srid:8307,type:"GEODETIC",distConvFactor:0.0, minX:-180.0,minY:-90.0,maxX:180.0,maxY:90.0},
    zoomLevels: [
        {zoomLevel:0,name:"level0",tileWidth:15.286028158107968,tileHeight:15.286028158107968,tileImageWidth:256,tileImageHeight:256},
        {zoomLevel:1,name:"level1",tileWidth:4.961746909541633,tileHeight:4.961746909541633,tileImageWidth:256,tileImageHeight:256},
        {zoomLevel:2,name:"level2",tileWidth:1.6105512127664132,tileHeight:1.6105512127664132,tileImageWidth:256,tileImageHeight:256},
        {zoomLevel:3,name:"level3",tileWidth:0.5227742142726501,tileHeight:0.5227742142726501,tileImageWidth:256,tileImageHeight:256},
        {zoomLevel:4,name:"level4",tileWidth:0.16968897570090388,tileHeight:0.16968897570090388,tileImageWidth:256,tileImageHeight:256},
        {zoomLevel:5,name:"level5",tileWidth:0.05507983954154727,tileHeight:0.05507983954154727,tileImageWidth:256,tileImageHeight:256},
        {zoomLevel:6,name:"level6",tileWidth:0.017878538533723076,tileHeight:0.017878538533723076,tileImageWidth:256,tileImageHeight:256},
        {zoomLevel:7,name:"level7",tileWidth:0.005803187729944108,tileHeight:0.005803187729944108,tileImageWidth:256,tileImageHeight:256},
        {zoomLevel:8,name:"level8",tileWidth:0.0018832386690789012,tileHeight:0.0018832386690789012,tileImageWidth:256,tileImageHeight:256},
        {zoomLevel:9,name:"level9",tileWidth:6.114411263243185E-4,tileHeight:6.114411263243185E-4,tileImageWidth:256,tileImageHeight:256}
    ]
};
For the function that translates a tile request from Oracle Maps into a tile URL from the external tile server, specify a function such as the following example:
function getMapTileURL(minx, miny, width, height, level)
{
    var x = (minx-mapConfig.coordSys.minX)/mapConfig.zoomLevels[level].tileWidth;
    var y = (miny-mapConfig.coordSys.minY)/mapConfig.zoomLevels[level].tileHeight;
    return "" + mapConfig.format + "&zoomlevel="+level+"&mapcache=mvdemo.demo_map&mx=" + Math.round(x) + "&my=" + Math.round(y);
}
In the preceding example, the function getMapTileURL() is implemented by the application to supply a valid URL from the external tile server that fetches a map tile image whose top-left corner will be positioned at the map location (minx,miny) by the Oracle Maps client. Each map tile image is expected to have the specified size (width,height), and it should be for the specified zoom level (level). This specific example is actually returning a gettile URL from the local MapViewer tile server; however, the approach also applies to any non-MapViewer tile servers.
The new custom tile layer is added to the client mapViewer just like a built-in map tile layer.
Milter-regex is a sendmail milter plugin that allows to reject mail
based on regular expressions matching SMTP envelope parameters and
mail headers and body.
In order to build milter-regex, sendmail needs to be compiled with
milter support, installing the libmilter library.
This is the default for the sendmail in the base system.
Some of the sendmail ports omit libmilter by default (SENDMAIL_WITHOUT_MILTER).
This program is developed on OpenBSD by the maintainer.
LICENSE: BSD
WWW:
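A milter-regex rule set pairs an action with one or more expressions. As a purely illustrative sketch (the patterns and reply texts below are made up; see the milter-regex(8) manual page for the authoritative grammar):

```
# /etc/milter-regex.conf (hypothetical example)

# Reject mail whose Subject header matches a spam phrase
reject "5.7.1 Message content rejected"
header /^Subject$/i /make money fast/i

# Always accept mail from a trusted relay's connecting address
accept
connect // /^10\.0\.0\.5$/
```

Rules are evaluated during the SMTP transaction, so rejected messages are refused before they are ever queued.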
To install the port: cd /usr/ports/mail/milter-regex/ && make install clean

To add the package: pkg install milter-regex
PKGNAME: milter-regex
No options to configure
Number of commits found: 48
- cleanup
- handle vardir via plist
PR: 203199
Submitted by: Dmitry Marakasov
- allow group access for postfix
PR: 192229
Submitted by: Mel Muth
- remove broken MANPREFIX
- fix pkgng problem
do not use @dirrmtry
PR: 184695
- update to 2.0
- use STAGEDIR
- fix misplaced NO_STAGE in slaveports and ifdefs
Add NO_STAGE all over the place in preparation for the staging support (cat:
mail)
- Strip header at request of original creator
Submitted by: trevor
With hat: portmgr
-.9
- 1.8
- add COPYRIGHT
LICENSE BSD
- Fix pkg-plist to delete directories installed out of PREFIX
PR: 145742
Submitted by: Sahil T.
- add LICENSE:
- update to 1.7.
- add :
- fix pidfile
Reportey by: Clemens Fischer
- merge and clean patches in ascii
- add flag -quiet
PR: 106602
Submitted by: Denis Eremenko
- use milter framework
- fix package name
- let user set SPOOLDIR
- distfiles was repacked
diff -urN showed that no file was changed, only the name of the WRKSRC
used DIST_SUBDIR to save users and mirrors from stalled updates.
Reported by: ache
- take maintainership
- Add support for rc.d style startup scripts for mail/milter-regex port.
- reset maintainership
PR: ports/103114
Submitted by: Derek Marcotte <derekm dot nospam_AT_rogers dot com>
Approved by: maintainer (dhartmei)
Remove USE_REINPLACE from ports starting with M
- update to 1.6
- make prefix safe
- register shared milter dependency
Approved by: dhartmei
Update to 1.5
Turn maintainership over to dhartmei, the author of the program and
of the OpenBSD port on which this port is based.
Approved by: dhartmei
Quotes are unnecessary in the COMMENT.
Be less verbose.
Add size data.
Update to 0.8, requested by Stephane Lentz. Use PLIST_FILES.
Update to 0.7. Don't treat warnings as errors.
PR: 61410
Submitted by: dinoex
Enable compilation on FreeBSD 5.2-BETA by avoiding "log" namespace
conflict.
PR: 59975
Submitted by: Volker Stolz of the Lehrstuhl fur Informatik II at
RWTH Aachen
Remove my e-mail address from DESCR files of ports
I have contributed, in order to attract less spam.
Remove unnecessary USE_PERL5.
Submitted by: Stephane Lentz
Update to 0.6 (still untested).
Submitted by: Daniel Hartmeier and Stephane Lentz
new port of the milter-regex plugin for sendmail
Obtained from: the OpenBSD port by Daniel Hartmeier (author of
milter-regex)
WebKit Bugzilla
This would require retrieving the JSGlobalContextRef and some functions, as well as access to the required header files at compile time.
It sounds like this is basically the functionality that -[WebFrame windowObject], -[WebFrame globalContext] and the webView:didClearWindowObject:forFrame: delegate method available in the Mac port provide. This would seem to map to two WebKitFrame methods and a signal in the Gtk API.
I mailed Michael a patch that implements most of this functionality recently. It was blocked by a few minor issues. Are you able to complete it and attach it for review?
This looks like a tracking bug for a few distinct bugs for which I've added dependencies.
Created attachment 17639 [details]
Fix
Landed in r28313.
Until that bug is fixed, applications will have to do:
#include <JavaScriptCore/JSBase.h>
#include <JavaScriptCore/JSContextRef.h>
#include <JavaScriptCore/JSStringRef.h>
#include <JavaScriptCore/JSObjectRef.h>
#include <JavaScriptCore/JSValueRef.h>
Comment on attachment 17639 [details]
Fix
Already r'ed by aroben, bug closed. | https://bugs.webkit.org/show_bug.cgi?id=15687 | CC-MAIN-2016-07 | en | refinedweb |
This is part 2 of a 3 part series on developing an HTML5 game with many platforms in mind. Last week I went over some of the visual and performance aspects when dealing with various screen sizes, today I want to focus on different types of input you might consider using.
Part 1: Getting the game to look great and run well across all platforms
Part 2: Handling the various input types of each platform
Part 3: Dealing with security for your game
Part 2 covers:
- Case Studies of input in our 3 HTML5 games using:
- Keyboard
- Mouse
- Touch
- Accelerometer
Because HTML5 allows you to develop games for multiple platforms, it makes sense to cater to the different inputs of each one. Mobile phones give you access to features like the accelerometer while desktops have keyboards -- both are great, and both should be utilized if it makes sense for your game.
The real key is to be creative - which if you're a game designer, you definitely have that quality. Typically in game development you're limited to one or two types of input. With HTML5 you're no longer trapped beneath those restrictions, be adventurous and take full advantage of the technology.
Meshing Keyboard, Mouse, and Touch Events
When developing a cross-platform game, you're no longer dealing with just keyboard and mouse input, or just touch input -- you have to take both into account.
How you handle these various input methods certainly depends on the type of game you have. Below is how we catered three of the games on Clay.io to different devices through creative use of input.
Word Wars is a game where you try to find the most words in a jumbled mix of letters.
For desktops, we accept both keyboard and mouse as input. Words can be typed, or the player can drag and select the tiles to form a word. Clicking and dragging might be nice for some, but it feels a bit cumbersome compared with simply typing words as you see them.
I was going to paste in the code we have for this, but honestly it's a bit boring, and not very elegant since it was written in a 24 hour hackathon. I'll paraphrase. If you want to get down and dirty in some code, read the next section on what we did for Slime Volley.
- We attach keydown and keyup events to the canvas element
- For each new keydown, we verify that it's a valid letter (adjacent to the previous letter)
- If it's valid, we highlight the new letter and add to an array of selected letters
- If it's invalid, we clear the letters
- We also listen for when the "enter" key is pressed (e.keyCode === 13)
- If so, we check to see if the letters in our array make up a valid word for this board, then clear the letters
- If backspace is pressed (e.keyCode === 8) we pop the last letter in our selected keys array (and cancel the browser back button functionality with e.preventDefault())
- It's important to prevent the default action on key events for space and backspace, otherwise space will act as "page down" and backspace will go to the previous page in the browser history. I've seen this ignored in a few games, and it made them unplayable on a screen without much vertical height
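The paraphrased steps above can be sketched as something like this. This is my own simplified illustration, not the Word Wars source: the helpers `isAdjacent` and `isValidWord` stand in for the game's real board logic, which isn't shown here.

```javascript
// Minimal sketch of the keyboard flow described above.
// isAdjacent(prev, letter) and isValidWord(word) are hypothetical
// stand-ins for the game's real board/dictionary checks.
function createWordInput(isAdjacent, isValidWord) {
  var selected = []; // letters picked so far
  return {
    selected: selected,
    press: function (letter) {           // a letter key was pressed
      var prev = selected[selected.length - 1];
      if (!prev || isAdjacent(prev, letter)) {
        selected.push(letter);           // valid: highlight and record it
        return true;
      }
      selected.length = 0;               // invalid: clear the selection
      return false;
    },
    backspace: function () {             // keyCode 8: drop the last letter
      selected.pop();
    },
    submit: function () {                // keyCode 13: check word, then clear
      var word = selected.join('');
      selected.length = 0;
      return isValidWord(word);
    }
  };
}

// Browser-only wiring: cancel the default space/backspace behaviour
// so the page doesn't scroll or navigate back.
if (typeof document !== 'undefined') {
  document.addEventListener('keydown', function (e) {
    if (e.keyCode === 8 || e.keyCode === 32) e.preventDefault();
  });
}
```

The pure `createWordInput` object keeps the selection logic testable apart from the DOM wiring.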
- We attach mousedown, mousemove and mouseup events to the canvas element
- Mousedown and mousemove select a tile based on the x, y coordinates
- Mouseup checks if the word selected is valid or not, then clears the letters
- We attach touchstart, touchmove and touchend events to the canvas element
- Coordinates within the page are found with event.touches[0].pageX (and pageY)
- If you're using what I wrote about for retina devices in my last post, be sure to multiply the coordinate by window.devicePixelRatio
- Touchstart and touchmove are equivalent mousedown and mousemove above
- Touchend is equivalent to mouseup
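Put together, the touch-coordinate lookup described above amounts to roughly this. It's a sketch under simplifying assumptions: it reads only the first finger and ignores the canvas's page offset and scrolling.

```javascript
// Convert a touch event's page coordinates into canvas pixels,
// scaling by devicePixelRatio for retina screens (see part 1).
// Simplified: first touch only, canvas assumed at the page origin.
function touchToCanvas(touchEvent, pixelRatio) {
  var t = touchEvent.touches[0];
  return {
    x: t.pageX * pixelRatio,
    y: t.pageY * pixelRatio
  };
}
```

In a browser you would call it as `touchToCanvas(e, window.devicePixelRatio || 1)` inside the touchstart/touchmove handlers.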
Slime Volley is a remake of an older Java game called Slime Volleyball. You control the left slime, and can move back and forth, or jump, to prevent the ball from landing on your side.
The entire source code up on GitHub, so be sure to have a look (do note that it's written in CoffeeScript not JavaScript). Most of what you'll want to look at is in the /src directory.
In this game it made sense to have WASD and arrow keys as input on desktops - we just needed 3 actions: left, right and jump. That was easy enough and was implemented in our input class: /src/shared/input.coffee
handleKeyDown = (e) => @keys['key'+normalizeKeyEvent(e).which] = true handleKeyUp = (e) => @keys['key'+normalizeKeyEvent(e).which] = false
Later we can tell if, say, the left arrow key, is down with @keys[‘key37'] which is a boolean true/false.
Since no arrow keys available on a mobile device, we had to create a special UI for the three actions we allow. You can see it in the screenshot above, and the code is located in /src/shared/lib/gamepad.coffee. You can choose to have this only show on mobile devices with some media queries, or even detect touch with some JavaScript.
// props to function is_touch_device() { return !!('ontouchstart' in window) ? 1 : 0; }
Our ‘gamepad' (the UI we created) also uses multi-touch, and touch gestures so you're able to move left/right by sliding your finger, as well as jump at the same time.
The code for being able to slide from button to button is in gamepad.coffee, it basically detects ontouchmove with the coordinates if you're in the same box, or a new one, and acts accordingly.
Here's what we have for handling multi-touch:
# multitouch shim wraps a callback and applies it for each individual touch # If you're not familiar with coffeescript, (callback) -> is equivalent to function(callback) {} # Also note that someFunc param1, param2 is equivalent to someFunc( param1, param2 ) multitouchShim = (callback) -> return ((cb) -> # create a scope to protect the callback param return (e) -> e.preventDefault() cb( x: t.clientX, y: t.clientY, identifier: t.identifier ) for t in e.changedTouches return ).call(this, callback) canvas.addEventListener 'touchstart', multitouchShim(handleMouseDown), true canvas.addEventListener 'touchend', multitouchShim(handleMouseUp), true canvas.addEventListener 'touchmove', multitouchShim(handleMouseMove), true canvas.addEventListener 'touchcancel', multitouchShim(handleMouseUp), true
You're more than welcome to use the entire src/shared/lib directory for your game as well!
Falldown is a game that's been done countless times before on every device you can imagine (I spent a good portion of my high school classes playing this on my TI-83). The main idea is you control a ball and have to get through the gaps so you don't hit the top.
Like Slime Volley, there are only three things you can do: move left, move right, and jump. Clearly the best way to go about this was to use the arrow keys for desktops again. However, rather than implement a UI for this on mobile, we chose to use the accelerometer, something I haven't seen utilized very much in the browser.
Accelerometer SupportImplementing accelerometer support is actually quite simple. You just listen for the devicemotion event, (similar to how you would for mousemove). The callback function receives a DeviceMotionEvent object that contains information about the acceleration in all three planes (x, y, z)
Here's the code taken directly from Falldown (which is more or less the simplest use case of accelerometer)
window.onDeviceMotion = function(e) { var x_accel; // Portrait if(window.orientation == 0 || window.orientation == 180) x_accel = e.accelerationIncludingGravity.x; // Landscape else x_accel = e.accelerationIncludingGravity.y; // Reverse left and right if the phone is flipped (upside-down landscape or upside-down portrait) if(window.orientation == 90 || window.orientation == 180) x_accel *= -1; // If it's tilted more than just a little bit, move the ball in that direction if(x_accel > 0.5) { de.keys.right = true; // equivalent of right arrow key pressed de.keys.left = false; } else if(x_accel < -0.5) { de.keys.left = true; // equivalent of left arrow key pressed de.keys.right = false; } else { de.keys.left = de.keys.right = false; // no tilt in device, so unset both left & right keys } }; window.addEventListener("devicemotion", onDeviceMotion, false);
Note how we had to use a different axis (y instead of x) for landscape, and we had to reverse the numbers if the phone was ‘upside-down'.
The real key to handling input across multiple devices is being creative with your options. Pick the forms of input that make the most sense for your game and implement them, it's really not difficult, nor very time consuming, and will vastly improve your game on each device.
Here are some of your many options for input:
- Keyboard
- Mouse
- Touch events
- Multi-touch events
- Accelerometer
- Gamepad (one of my favorite HTML5 APIs, here's a great tutorial)
Pick a couple and implement them in your awesome new HTML5 game!
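As a taste of the Gamepad API from the list above, here is a rough sketch of my own (not from any of the games discussed). The axis and button indices are controller-dependent, and `navigator.getGamepads()` should be feature-detected.

```javascript
// Map a gamepad's left-stick horizontal axis to the same left/right/jump
// flags the keyboard sets, with a small dead zone. Axis 0 and button 0
// are typical but controller-dependent assumptions.
function padToKeys(axes, buttons, deadZone) {
  return {
    left:  axes[0] < -deadZone,
    right: axes[0] >  deadZone,
    jump:  !!(buttons[0] && buttons[0].pressed)
  };
}

// Browser-only polling loop (sketch):
if (typeof navigator !== 'undefined' && navigator.getGamepads) {
  (function poll() {
    var pad = navigator.getGamepads()[0];
    if (pad) {
      var keys = padToKeys(pad.axes, pad.buttons, 0.25);
      // feed `keys` into the game's input state here
    }
    requestAnimationFrame(poll);
  })();
}
```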
In part three of this series, I will cover security and a backend for your HTML5 game - stuff I've gotten very familiar with in developing two games with backends, and having to find a secure way to pass data (like high scores and user info) back and forth for Clay.io.
3.2-stable review patch. If anyone has any objections, please let me know.

------------------

From: Hugh Dickins <hughd@google.com>

commit 85046579bde15e532983438f86b36856e358f417 upstream.

scan_mapping_unevictable_pages() is used to make SysV SHM_LOCKed pages
evictable again once the shared memory is unlocked.  It does this with
pagevec_lookup()s across the whole object (which might occupy most of
memory), and takes 300ms to unlock 7GB here.  A cond_resched() every
PAGEVEC_SIZE pages would be good.

However, KOSAKI-san points out that this is called under shmem.c's
info->lock, and it's also under shm.c's shm_lock(), both spinlocks.
There is no strong reason for that: we need to take these pages off the
unevictable list soonish, but those locks are not required for it.

So move the call to scan_mapping_unevictable_pages() from shmem.c's
unlock handling up to shm.c's unlock handling.  Remove the recently
added barrier, not needed now we have spin_unlock() before the scan.

Use get_file(), with subsequent fput(), to make sure we have a reference
to mapping throughout scan_mapping_unevictable_pages(): that's something
that was previously guaranteed by the shm_lock().

Remove shmctl's lru_add_drain_all(): we don't fault in pages at SHM_LOCK
time, and we lazily discover them to be Unevictable later, so it serves
no purpose for SHM_LOCK; and serves no purpose for SHM_UNLOCK, since
pages still on pagevec are not marked Unevictable.

The original code avoided redundant rescans by checking VM_LOCKED flag
at its level: now avoid them by checking shp's SHM_LOCKED.

The original code called scan_mapping_unevictable_pages() on a locked
area at shm_destroy() time: perhaps we once had accounting cross-checks
which required that, but not now, so skip the overhead and just let

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

---
 ipc/shm.c   |   37 ++++++++++++++++++++++---------------
 mm/shmem.c  |    7 -------
 mm/vmscan.c |   12 +++++++++++-
 3 files changed, 33 insertions(+), 23 deletions(-)

--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -870,9 +870,7 @@ SYSCALL_DEFINE3(shmctl, int, shmid, int,
 	case SHM_LOCK:
 	case SHM_UNLOCK:
 	{
-		struct file *uninitialized_var(shm_file);
-
-		lru_add_drain_all();  /* drain pagevecs to lru lists */
+		struct file *shm_file;

 		shp = shm_lock_check(ns, shmid);
 		if (IS_ERR(shp)) {
@@ -895,22 +893,31 @@ SYSCALL_DEFINE3(shmctl, int, shmid, int,
 		err = security_shm_shmctl(shp, cmd);
 		if (err)
 			goto out_unlock;
-
-		if(cmd==SHM_LOCK) {
+
+		shm_file = shp->shm_file;
+		if (is_file_hugepages(shm_file))
+			goto out_unlock;
+
+		if (cmd == SHM_LOCK) {
 			struct user_struct *user = current_user();
-			if (!is_file_hugepages(shp->shm_file)) {
-				err = shmem_lock(shp->shm_file, 1, user);
-				if (!err && !(shp->shm_perm.mode & SHM_LOCKED)){
-					shp->shm_perm.mode |= SHM_LOCKED;
-					shp->mlock_user = user;
-				}
+			err = shmem_lock(shm_file, 1, user);
+			if (!err && !(shp->shm_perm.mode & SHM_LOCKED)) {
+				shp->shm_perm.mode |= SHM_LOCKED;
+				shp->mlock_user = user;
 			}
-		} else if (!is_file_hugepages(shp->shm_file)) {
-			shmem_lock(shp->shm_file, 0, shp->mlock_user);
-			shp->shm_perm.mode &= ~SHM_LOCKED;
-			shp->mlock_user = NULL;
+			goto out_unlock;
 		}
+
+		/* SHM_UNLOCK */
+		if (!(shp->shm_perm.mode & SHM_LOCKED))
+			goto out_unlock;
+		shmem_lock(shm_file, 0, shp->mlock_user);
+		shp->shm_perm.mode &= ~SHM_LOCKED;
+		shp->mlock_user = NULL;
+		get_file(shm_file);
 		shm_unlock(shp);
+		scan_mapping_unevictable_pages(shm_file->f_mapping);
+		fput(shm_file);
 		goto out;
 	}
 	case IPC_RMID:
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1068,13 +1068,6 @@ int shmem_lock(struct file *file, int lo
 		user_shm_unlock(inode->i_size, user);
 		info->flags &= ~VM_LOCKED;
 		mapping_clear_unevictable(file->f_mapping);
-		/*
-		 * Ensure that a racing putback_lru_page() can see
-		 * the pages of this mapping are evictable when we
-		 * skip them due to !PageLRU during the scan.
-		 */
-		smp_mb__after_clear_bit();
-		scan_mapping_unevictable_pages(file->f_mapping);
 	}
 	retval = 0;
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3353,6 +3353,7 @@ int page_evictable(struct page *page, st
 	return 1;
 }

+#ifdef CONFIG_SHMEM
 /**
  * check_move_unevictable_page - check page for evictability and move to appropriate zone lru list
  * @page: page to check evictability and move to appropriate lru list
@@ -3363,6 +3364,8 @@ int page_evictable(struct page *page, st
  *
  * Restrictions: zone->lru_lock must be held, page must be on LRU and must
  * have PageUnevictable set.
+ *
+ * This function is only used for SysV IPC SHM_UNLOCK.
  */
 static void check_move_unevictable_page(struct page *page, struct zone *zone)
 {
@@ -3396,6 +3399,8 @@ retry:
  *
  * Scan all pages in mapping.  Check unevictable pages for
  * evictability and move them to the appropriate zone lru list.
+ *
+ * This function is only used for SysV IPC SHM_UNLOCK.
  */
 void scan_mapping_unevictable_pages(struct address_space *mapping)
 {
@@ -3441,9 +3446,14 @@ void scan_mapping_unevictable_pages(stru
 		pagevec_release(&pvec);

 		count_vm_events(UNEVICTABLE_PGSCANNED, pg_scanned);
+		cond_resched();
 	}
-
 }
+#else
+void scan_mapping_unevictable_pages(struct address_space *mapping)
+{
+}
+#endif	/* CONFIG_SHMEM */

 static void warn_scan_unevictable_pages(void)
 {
Paul Brown wrote:
>
> > I'm absolutely fine with only supporting DOM Level 2, but
> > one thing is missing in the DOMBuilder: the support for
> > namespaces.
>
> Looks good to me other than the trailing slash on the xmlns
> namespace (might
> trip up people who don't compare URIs properly).
>
The problem is that the parser is not forced to append the xmlns
attributes to an element when sending SAX events. If the parser
does not send them, the resulting DOM will not have the
namespace attributes either! When you then for example try to
serialize the DOM, you get a document without the namespace
declarations!
The patch fixes this as it also records the namespaces during
the startPrefixMapping method and adds these information as attributes
to the DOM.
We had this problem once in Cocoon as the default configuration for
the parser is not to append the xmlns attributes to the sax events.
So believe me, this is a problem.
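To illustrate the fix being described, here is a minimal sketch of my own (class and method names are hypothetical, not Xalan's actual DOMBuilder code): record prefixes from startPrefixMapping events and turn them into xmlns attributes when the element starts, so the namespace declarations survive even when the parser does not report them as attributes.

```java
import java.io.ByteArrayInputStream;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Sketch: a SAX handler that records prefix mappings so a DOM builder
// could add the corresponding xmlns attributes itself. With the default
// SAX configuration, xmlns declarations arrive only as
// startPrefixMapping events, not as attributes -- the problem above.
public class NamespaceRecorder extends DefaultHandler {
    // prefixes reported since the last startElement
    private final Map<String, String> pending = new LinkedHashMap<>();
    // xmlns attributes that would be added to elements (aggregated for demo)
    public final Map<String, String> recorded = new LinkedHashMap<>();

    @Override
    public void startPrefixMapping(String prefix, String uri) {
        pending.put(prefix, uri);
    }

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        for (Map.Entry<String, String> e : pending.entrySet()) {
            String name = e.getKey().isEmpty() ? "xmlns" : "xmlns:" + e.getKey();
            recorded.put(name, e.getValue()); // would become an attribute node
        }
        pending.clear();
    }

    public static Map<String, String> parse(String xml) throws Exception {
        SAXParserFactory f = SAXParserFactory.newInstance();
        f.setNamespaceAware(true); // required for startPrefixMapping events
        NamespaceRecorder h = new NamespaceRecorder();
        f.newSAXParser().parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), h);
        return h.recorded;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parse("<a:r xmlns:a=\"urn:x\"><a:c/></a:r>"));
    }
}
```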
Regards,
Carsten
> -- Paul
> | http://mail-archives.apache.org/mod_mbox/xml-xalan-dev/200201.mbox/%3CGMEBIBHGAOFGJCDPJANDMEEBDGAA.cziegeler@s-und-n.de%3E | CC-MAIN-2016-07 | en | refinedweb |
The final keyword is used in several different contexts as a modifier meaning that what it modifies cannot be changed in some sense.
You will notice that a number of the classes in Java library are
declared
final, e.g.
public final class String
This means this class will not be subclassed, and informs the compiler that it can perform certain optimizations it otherwise could not. It also provides some benefit in regard to security and thread safety.
The compiler will not let you subclass any class that is declared final. You probably won't want or need to declare your own classes final though.
You can also declare that methods are final. A method that is declared final cannot be overridden in a subclass. The syntax is simple, just put the keyword final after the access specifier and before the return type, like this:
public final String convertCurrency()
You may also declare fields to be final. This is not the same thing as declaring a method or class to be final. When a field is declared final, it is a constant which will not and cannot change. It can be set once (for instance, when the object is constructed), but it cannot be changed after that. Attempts to change it will generate either a compile-time error or an exception (depending on how sneaky the attempt is).
Fields that are both final, static, and public are effectively named constants. For instance, a physics program might define Physics.c, the speed of light, as
public class Physics { public static final double c = 2.998E8; }
In the SlowCar class, the speedLimit field is likely to be both final and static, though it's private.
public class SlowCar extends Car { private final static double speedLimit = 112.65408; // kph == 70 mph public SlowCar(String licensePlate, double speed, double maxSpeed, String make, String model, int year, int numberOfPassengers, int numDoors) { super(licensePlate, (speed < speedLimit) ? speed : speedLimit, maxSpeed, make, model, year, numberOfPassengers, numDoors); } public void accelerate(double deltaV) { double speed = this.speed + deltaV; if (speed > this.maxSpeed) { speed = this.maxSpeed; } if (speed > speedLimit) { speed = speedLimit; } if (speed < 0.0) { speed = 0.0; } this.speed = speed; } }
Finally, you can declare that method arguments are final. This means that the method will not directly change them. Since all arguments are passed by value, this isn't absolutely required, but it's occasionally helpful.
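A short illustration of my own (not from the tutorial): a final parameter can be read but not reassigned inside the method body.

```java
// A final parameter cannot be reassigned inside the method.
// (Uncommenting the marked line would be a compile-time error.)
public class FinalParams {
    public static double withTax(final double price, final double rate) {
        // price = 0.0;   // error: cannot assign a value to final variable
        return price * (1.0 + rate);
    }

    public static void main(String[] args) {
        System.out.println(withTax(100.0, 0.07));
    }
}
```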
What can be declared final in the Car and MotorVehicle classes?
While working on a project, I was asked to add a PropertyGrid control to a screen to allow for configuration of equipment and to do validation. As I had not used a PropertyGrid control before, I started searching for examples, and was dismayed at what I was finding, or in the case of validation, not finding.
This article will provide information and a code example for the following:
ComboBox
TypeConverter
The code listed at the top of this article contains an example of each of the topics that will be discussed as well as some utilities for reflection, parsing, and obtaining a class type. These utilities will not be discussed, but are important aspects in obtaining the data dynamically at runtime, and should be reviewed for complete understanding.
The code is broken into application, controls, model, and utilities. The views in the application are not necessarily laid out using best coding practices, but were done so to allow an example of each of the topics listed above.
The code was written using VS.NET 2005, and doesn't require any other external libraries.
Microsoft provides some attributes you can add to the top of your property that will provide a user friendly experience. Category will group items with the same arbitrary name (e.g., “Person”) together, while DisplayName and Description provide a friendly text to display and describe the property, respectively.
[Category( "Person" )]
[DisplayName( "Last Name" )]
[Description( "Enter the last name for this person" )]
public string PersonLastName
{
get {…}
set {…}
}
Finding an easy way to change the default ordering of properties was rather frustrating. It would have been nice to have an attribute that would allow you to order your properties in a particular order. I found a relatively easy way to accomplish ordering, albeit rather ugly. The tab character, “\t”, can be used to order the items, and you get the benefit that the character will not be displayed on the screen. The ViewPerson2 class has an example of this. The only thing to note is that it is in reverse order. The more tab characters you add, the greater likelihood the property will be displayed first. In order to create a more elegant solution, I created a new class that derives from the DisplayNameAttribute, OrderedDisplayNameAttribute. The attribute will add the tab character for you. This will help with your own categories, but trying to use this in pre-existing Microsoft categories won't work so well.
The ViewPerson and ViewPerson2 classes are such an example. The ViewPerson class will be displayed using a TypeConverter of type PersonConverter, and the ViewPerson2 class is of type ListNonExpandableConverter. The views allow the same exact business object to be displayed in two completely different ways. While the functionality of the Converters will be described later, the PersonConverter will display the entire contents of the Person in the PropertyGrid, and will allow you to drill down the person’s children and grandchildren, whereas the ListNonExpandableConverter will display the name of the person only.
A ComboBox, in general, is a great way to provide a list of items to the user to prevent data entry errors. Creating a ComboBox for use in a PropertyGrid is not as straightforward as dropping a control on the form and adding data to it. The PropertyGrid has no idea that a property should be selected from a pre-existing list of data or where to obtain the data from. I will describe my implementation strategy for obtaining data dynamically, later in the article.
Now is a good to time to add, for those of you digging through the code, that I am actually using a ListBox and not a ComboBox. Because I am creating a control that acts and looks like a ComboBox, I call it such. The PropertyGrid actually provides the inverted triangle, denoting a ComboBox on my behalf. If I were to use a ComboBox as my control, internal to my GridComboBox, when you clicked on the drop down arrow, another ComboBox would show, in which case you would have to click on the drop down arrow again in order to select the data. This behavior is not ideal.
ListBox
GridComboBox
I've created a base ComboBox control called GridComboBox that you can derive from to create your own ComboBox. Two such controls are included in the project: EnumGridComboBox and ListGridComboBox.
EnumGridComboBox
ListGridComboBox
The GridComboBox control derives from UITypeEditor. There are two methods that were overridden: EditValue and GetEditStyle. EditValue is responsible for populating the ComboBox, attaching the ComboBox to the PropertyGrid and retrieving the new value selected from the ComboBox. The GetEditStyle denotes how you want to display the control. In the case of a ComboBox, a UITypeEditorSyle.DropDown was used. You could set it to Modal as well if you were using a form to collect information. I do not use the Modal setting.
UITypeEditor
EditValue
GetEditStyle
UITypeEditorSyle.DropDown
Modal
It is important to remember that if you are binding a collection of data to the ComboBox, you override the ToString() method so you get something pleasant displayed and not the fully qualified business object name.
ToString()
It is important to note that the EnumGridComboBox class is not necessary under normal circumstances. Anytime a business object is bound to a PropertyGrid that contains a property of type enum, a default ComboBox is created for you and will display all the values automatically. I have added the EnumGridComboBox implementation to show another ComboBox, and is only really necessary if you want to do something after selecting an enumeration from the ComboBox. The ViewCar class has two enumerations: Engine and BodyStyle that show the default behavior, and the ViewWheel class contains SupplyParts that uses the EnumGridComboBox implementation.
enum
ViewCar
Engine
BodyStyle
ViewWheel
SupplyParts
In order to get the custom ComboBox to display in the PropertyGrid, you have to tell the PropertyGrid to display the data in the control. This is accomplished by adding another attribute, Editor, to the top of your property. You will have to do this for each property that you wish to show in a ComboBox.
Editor
[Editor( typeof( EnumGridComboBox ), typeof( UITypeEditor ) )]
[EnumList( typeof(SupplyStore.SupplyParts))]
[Category( "…" ), Description( "…" ), DisplayName( "…" )]
public SupplyStore.SupplyParts SupplyParts
{
get { return _supplyPartsEnum; }
set { _supplyPartsEnum = value; }
}
These are classes that help you to transform an embedded property of type class so it may be displayed in a particular manner. For instance, an example of this would be the ViewCar class containing a property of type Wheel. This is a brief description of the Converters that are included in the project.
class
Wheel
[TypeConverter( typeof( ListConverter<ViewPersonCollection> ) )]
public List<ViewPersonCollection> Children
{
…
}
[TypeConverter( typeof( ListExpandableConverter ) )]
public class ViewPersonCollection : IDisplay
{
…
}
[TypeConverter( typeof( ListNonExpandableConverter ) )]
public class ViewPerson2 : IDisplay
{
…
}
[TypeConverter( typeof( PersonConverter ) )]
public class ViewPerson : IDisplay
{
…
}
The ListPropertyDescriptor is included in the Converters as it is used in conjunction with ListConverter to build the list of objects to display. This is a generic class that can be reused, and really needs no explanation for how it is being used in conjunction with the PropertyGrid. To see how to use a functional PropertyDescriptor, see my first article: Dynamic Properties - A Database Created At Runtime.
ListPropertyDescriptor
ListConverter
PropertyDescriptor
The data for the ComboBox is provided by you, the developer. This section of the article will describe how to get data to put in the ComboBox. You have different options depending on your need and your requirements.
BodyColor
If the text of the enumeration is acceptable, then no additional work is required. Most of the time, enumerations are a concatenation of words using Camel casing, underscores, or all caps. Such a display to an end-user is generally unacceptable. Using one of the next bullet items would be a better approach.
As to hard coding the values, the easiest way to accomplish this would be to derive your own ComboBox control from GridComboBox and to override the RetrieveDataList to set the base.DataList to a predefined list of items.
RetrieveDataList
base.DataList
protected override void RetrieveDataList( ITypeDescriptorContext context )
{
List<string> list = new List<string>();
list.Add( "Tom" );
list.Add( "Jerry" );
base.DataList = list;
}
There are no restrictions on the data type that is used for the list. If you are using a data type other than a value type, make sure you provide a way to display pretty text in the ComboBox. The default will be the fully qualified name of the business object. To accomplish this, you can add a ToString() to the class or view.
Another approach would be to create an interface, IDisplay, as I have done in the case of ViewPerson, and write some additional code in the GridComboBox to wrap the business object. I have not provided this functionality, but an example of what this would look like is the following. The adding of the business object to the list and returning the business object during the selection would need to be updated.
IDisplay
public class Wrapper
{
IDisplay _dataObject;
public Wrapper( IDisplay dataObject )
{
_dataObject = dataObject;
}
public override string ToString()
{
return ( _dataObject.Text );
}
}
Reflection and the creation of attributes will not be discussed in this article as they are topics unto themselves. I have provided a class in the Utilities project named Reflect. This class contains a bunch of static methods for using reflection to get/set fields and properties, call methods, etc. These methods are used throughout for implementing dynamic data.
Reflect
static
set
The place to start implementing dynamic data is at the creation of a base class deriving from Attribute. This will allow for the base ComboBox, GridComboBox, to be generic by only working with one type.
Attribute
[AttributeUsage( AttributeTargets.Field | AttributeTargets.Property )]
public abstract class ListAttribute : Attribute
{
}
Once this is done, the creation of your specific attribute can begin. I have created two attributes in the project: DataListAttribute and EnumListAttribute. I will walk through the creation of the DataListAttribute, the basics of the GridComboBox and ListGridComboBox, and how to wire it all up. The EnumListAttribute is a simpler form of the DataListAttribute, and I will leave it for you to review.
DataListAttribute – Several constructors have been added to allow the user to tailor the attribute to their needs. The first three constructors allow for obtaining the data list which is part of an instance of a class.
public DataListAttribute( string path )
public DataListAttribute( string path, bool allowNew )
public DataListAttribute( string path, bool allowNew, string eventHandler )
[DataList( "GetPeopleList", true, "OnAddedEventHandler" )]
public ViewPersonCollection ChoosePerson
{
…
}
The next three are for retrieving a data list that is part of a static class.
public DataListAttribute( string dllName, string className, string path )
public DataListAttribute( string dllName, string className,
string path, bool allowNew )
public DataListAttribute( string dllName, string className,
string path, bool allowNew, string eventHandler )
[DataList( "CarApplication", "CarApplication.SupplyStore",
"Instance.Wheels", false, "OnAddedEventHandler" )]
public Wheel Wheel
{
…
}
For the constructors that are to be accessing a static class, the name of the DLL and the fully qualified class name need to be provided, while path, allowNew, and eventHandler are common to both sets of constructors. The allowNew flag denotes if a new business object can be created and stored in the property. This will display an additional entry in the list, <Add New...>. When the user selects this option, it will create a new instance of the object type. If you want to have the business object show up in the ComboBox as an option to choose from next time, you will have to provide the name of an event handler. When the event handler is called, you will have to add it into the list manually.
private void OnAddedEventHandler( object sender, ObjectCreatedEventArgs arg )
{
if ( arg != null )
{
ViewPersonCollection collection = arg.DataValue as ViewPersonCollection;
if ( collection != null )
{
collection.Name = "New Person #" + new Random().Next( 1, 100 );
this.ChooseParent.Children.Add( collection );
this._list.Add( collection );
}
}
}
The path defines how to navigate from one business object to the next in order to obtain the data list to display. If you were to write the path in code and cut and paste it into the string, it would be very close to being complete. The path can include functions, fields, properties, and arrays with subscripts that can be numeric, strings, or enumerations. Here are the ones used within this project.
"Instance.Wheels"
"Instance.Supplies[Wheels]"
"Instance.SuppliesArray[1]"
"Instance.SuppliesArray[CarApplication.SupplyStore+SupplyParts.Wheels,CarApplication]"
"GetPeopleList"
"SupplyStore"
GridComboBox – The GridComboBox has been written as a base class to be derived from, providing the functionality of retrieving data. The class has been derived from UITypeEditor, which is required for use in the PropertyGrid. The class has two methods that will need to be implemented by the derived class. These methods define how to retrieve the list of data and what to do after an item has been selected from the ComboBox. We will look at these in the next section.
protected abstract object GetDataObjectSelected( ITypeDescriptorContext context );
protected abstract void RetrieveDataList( ITypeDescriptorContext context );
The only other code of interest in the file is PopulateListBox and EditValue; the rest deals with the behavior of the ComboBox.
EditValue is overridden from the base class UITypeEditor. This function is the central point of the functionality of the ComboBox. When a user clicks on the arrow of the ComboBox to get the list of items, an event is fired which calls into this method. This method then calls the PopulateListBox method, attaches the internal ListBox that is used in the PropertyGrid, and waits for the user to perform an action (i.e., click on an item or press the ESC key). Processing continues by calling the derived GetDataObjectSelected method, which returns the data value or the instance of an object.
The PopulateListBox method will call the derived method RetrieveDataList to find the data and populate the ComboBox. If a prior value was selected, the value is auto-selected in the list.
ListGridComboBox – The RetrieveDataList is defined here. First, the list of attributes is obtained from the property that we are currently working with. The attribute that we are looking for is the DataListAttribute, which contains the path to the data. Once the attribute is found, the path is broken into its parts. Processing continues by determining if the data is found by navigating the current business object or if it is stored in a static class.
The processing is essentially the same for both paths; break each segment of the path into various parts. This takes into account arrays, lists, and dictionaries that may be used. Once we retrieve the components of the current segment, the information is passed off to the reflection class, Reflect, which will retrieve the actual value/object.
This process continues for each segment of the path until there are no more segments, at which time we should have obtained the list of data that we are looking for. If there are more segments, the value of the property obtained from the previous segment is used as the starting point for this segment.
The value is returned, and a reference to the list is saved for future use. The reference is saved to avoid repeating the reflection work, and because the data will usually be stored in the same location. If you find that this is not true for your circumstances, don't store the reference to the data list.
The other overridden method, GetDataObjectSelected, is implemented here as well. It is responsible for retrieving the value/object from the list and returning it. This implementation checks to see if the “<Add New...>” was selected, and then creates a new instance of the data object. If the object was created, a notification is sent if the option was set in the DataListAttribute. The creation of the object and the sending of the notification are once again handled by calling methods in the Reflect class.
The event handler was designed with the intent of performing actions like setting default values on the object, such as a first or last name, which might be used in the ToString() method. The event handler would also need to add the object to the list so it can be reselected the next time.
Validation of the data poses another challenge when using the PropertyGrid. There is no real mechanism for doing this. I have created an implementation that suited my general need of validating when the value of the property changed. If you need to do validation for each key stroke as in the case of a mask, you will have to provide your own implementation.
The down side to the current implementation is that if the data entered is incorrect and you move off the field after seeing the warning message box, you will be allowed to do so. I did this intentionally. There are too many implications that can arise from trying to prevent the user from moving off the field and how to allow it under some conditions. This article is meant to help you get going on providing validation. I leave the details of what to do after the data is not valid, to you, the developer.
In the CustomControls project is a folder named Rule that contains a base class and two implementations. The base class derives from Attribute once again, and provides for an error message field and an abstract IsValid() method. The other classes are:
PatternRuleAttribute – validates a string property against a regular expression pattern using Regex.IsMatch.
[Category( "…" ), DescriptionAttribute( "…" ), DisplayName( "…" )]
[PatternRule( @"^(\d{5}-\d{4})|(\d{5})$" )]
public string Zip
{
…
}
LengthRuleAttribute – validates that the length of a string property falls within a specified minimum and maximum.
[Category( "…" ), DescriptionAttribute( "…" ), DisplayName( "…" )]
[LengthRule( 4, 20 )]
public string City
{
…
}
The addition of other validation rules can be done easily: derive from the base class RuleBaseAttribute, add a constructor and fields, and implement the IsValid() method. Once the new attribute is added to the top of the property, the rule will be ready to run. In order for these rules to magically work, you will have to use the PropertyGridControl provided in the solution, or copy the code in it to your implementation. The PropertyGridControl just derives from the PropertyGrid, and has the PropertyValueChanged event wired up. The basic functionality of the PropertyValueChanged event handler is:
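A minimal sketch of such a PropertyValueChanged handler is shown below; the IsValid signature and the helper names are assumptions for illustration, not the article's actual code.

```csharp
// Hypothetical sketch: validate the edited property against its rule attributes.
private void OnPropertyValueChanged(object sender, PropertyValueChangedEventArgs e)
{
    object component = ((PropertyGrid)sender).SelectedObject;
    var property = component.GetType()
                            .GetProperty(e.ChangedItem.PropertyDescriptor.Name);
    if (property == null)
        return;

    // Run every validation rule attached to the property.
    foreach (RuleBaseAttribute rule in
             property.GetCustomAttributes(typeof(RuleBaseAttribute), true))
    {
        if (!rule.IsValid(e.ChangedItem.Value)) // assumed signature
        {
            MessageBox.Show(rule.ErrorMessage);
            break;
        }
    }
}
```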
Included in the solution is a project named Utilities. This project contains some classes to help with various aspects of obtaining the data for the ComboBox and for data validation. Here is a general description of them.
ClassType
PathParser. | http://www.codeproject.com/Articles/23242/Property-Grid-Dynamic-List-ComboBox-Validation-and?fid=992359&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None | CC-MAIN-2016-07 | en | refinedweb |
REGEXP Functions
SvRX
Convenience macro to get the REGEXP from a SV. This is approximately equivalent to the following snippet:
if (SvMAGICAL(sv))
    mg_get(sv);
if (SvROK(sv))
    sv = MUTABLE_SV(SvRV(sv));
if (SvTYPE(sv) == SVt_REGEXP)
    return (REGEXP*) sv;
NULL will be returned if a REGEXP* is not found.
SvRXOK
Returns a boolean indicating whether the SV (or the one it references) is a REGEXP.
If you want to do something with the REGEXP* later use SvRX instead and check for NULL. | https://metacpan.org/pod/release/RJBS/perl-5.18.1/regexp.h | CC-MAIN-2016-07 | en | refinedweb |
However, I am wondering how exactly a double linked list would be able to be implemented to make moving from room to room a bit easier? It seems like it would make my code look much cleaner and stop my headache. Could someone explain to me how I would use it in concurrence with this code? Or maybe point me in the direction of a good example for double linked lists? Thank you.
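A doubly linked arrangement of rooms can be sketched as a node class whose exits point both ways, like the prev/next pointers in a doubly linked list. All names here are illustrative, not from the game code below:

```java
import java.util.HashMap;
import java.util.Map;

class Room {
    String description;
    // Exits link rooms in both directions, like prev/next in a doubly linked list.
    Map<String, Room> exits = new HashMap<>();

    Room(String description) {
        this.description = description;
    }

    // Connect this room to another and add the reverse link back.
    void connect(String direction, Room other, String reverseDirection) {
        exits.put(direction, other);
        other.exits.put(reverseDirection, this);
    }
}

public class RoomDemo {
    public static void main(String[] args) {
        Room foyer = new Room("You are in the foyer.");
        Room kitchen = new Room("You are now in the kitchen.");
        foyer.connect("right", kitchen, "back");

        // Walking right and then back returns to the starting room.
        Room current = foyer.exits.get("right");
        System.out.println(current.description);
        current = current.exits.get("back");
        System.out.println(current.description);
    }
}
```

With this structure, each menu choice becomes a single lookup in the current room's exits instead of another level of nested if statements.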
import java.util.Scanner;

public class HauntedMansion {

    /**
     * @param args
     */
    public static void main(String[] args) {
        // TODO Auto-generated method stub
        String choice;
        @SuppressWarnings("resource")
        Scanner user_in = new Scanner(System.in); // This creates an 'opening' for user input
        System.out.println("Welcome to the Haunted Mansion. You see an expansive staircase ahead of you and rooms to your left and right." + "\n"
                + "Type 'upstairs' and hit enter to go up the staircase, 'right' to go to the room on your right," + "\n"
                + "or 'left' to go to the room on your left.");
        choice = user_in.nextLine(); // this allows the user to input and continue on

        if (choice.equals("upstairs")) {
            System.out.println("You are now upstairs." + "\n"
                    + "There is a musky and dark hallway to your left and a door in front of you." + "\n"
                    + "Type 'left' and hit enter to go down the hallway or 'door' to open the door in front of you.");
            choice = user_in.nextLine();
            if (choice.equals("left")) {
                System.out.println("As you turn to go down the hallway, you notice a dim light floating at the end of the hallway. The rest of the hall is nearly black." + "\n"
                        + " With each step, the floorboard creaks and dust falls from the beams overhead." + "\n"
                        + "The dim light grows brighter the closer you get to it." + "\n"
                        + "'continue' to go on or 'back' to return to the staircase.");
                choice = user_in.nextLine();
                if (choice.equals("continue")) {
                    System.out.println("The glowing light appears to be coming from a cellphone on the floor." + "\n" + "Pick it up? 'Y' or 'N'");
                    choice = user_in.nextLine();
                    if (choice.equals("Y")) {
                        System.out.println("There is a new text message on the cellphone.");
                        choice = user_in.nextLine();
                    } else if (choice.equals("N")) {
                        System.out.println("The cellphone keeps flashing that there is a new message. You notice that the battery is nearly empty. This could serve as a good light if you could find a charge for it. You pick it up anyways. Do you read the message?" + "\n" + "'Y' or 'N'");
                        choice = user_in.nextLine();
                    }
                } else if (choice.equals("back")) {
                    System.out.println("As you turn your back to the light you feel a cold breeze flow through the room. A clammy hand wraps itself around your neck..." + "\n" + "GAME OVER");
                }
            } else if (choice.equals("door")) {
                System.out.println("You reach your hand out and turn the door handle. There is a slight rattle as you push open the door.");
            }
        } else if (choice.equals("right")) {
            System.out.println("You are now in the kitchen.");
        } else if (choice.equals("left")) {
            System.out.println("You are now in the dining room");
        } else {
            System.out.println("That is not a valid answer.");
        }
    }
} | http://www.dreamincode.net/forums/topic/288303-question-on-implementing-a-double-linked-list-in-a-text-game/page__pid__1681037__st__0 | CC-MAIN-2016-07 | en | refinedweb |
Quick sort is the fastest known comparison sort for arrays.
package me.rerun;

import java.util.Arrays;

public class QuickSort {

    public static void quickSort(Comparable comparableArray[], int lowIndex, int highIndex) {
        // at least one item must exist in the array
        if (lowIndex >= highIndex) {
            return;
        }

        int pivotIndex = getMedianIndexAsPivotIndex(lowIndex, highIndex);

        // 1) Choose pivot from the sublist
        Comparable pivot = comparableArray[pivotIndex];
        System.out.println("Pivot : " + pivot);

        // 2) Swap the pivot to the last item in the array
        swapItemsWithIndices(comparableArray, pivotIndex, highIndex);

        /*
         Get the border indices sandwiching the unsorted items alone
         (ignore pivot (now, in the highIndex))
         set 'i' to point to the item before the first Index
         set 'j' to point to the item before pivot

         Notice that this way, the following invariant gets maintained all
         through the sorting procedure:
            a. All items left of Index 'i' have a value <= pivot
            b. All items right of Index 'j' have a value >= pivot
        */
        int i = lowIndex - 1;
        int j = highIndex;

        do {
            // Notice the <j (pivot item is ignored). We stop when both the counters cross
            // compareTo will return 0 when it reaches the pivot - will exit loop
            do { i++; } while (comparableArray[i].compareTo(pivot) < 0);

            // we dont have the protection as the previous loop.
            // So, add extra condition to prevent 'j' from overflowing outside the current sub array
            do { j--; } while (comparableArray[j].compareTo(pivot) > 0 && (j > lowIndex));

            if (i < j) {
                swapItemsWithIndices(comparableArray, i, j);
            }
            System.out.println("I :" + i + " J :" + j);
        } while (i < j);

        swapItemsWithIndices(comparableArray, highIndex, i); // bring pivot to i's position
        System.out.println("Comparable array : " + Arrays.asList(comparableArray));

        // the big subarray is partially sorted (agrees to invariant). Let's recurse and bring in more hands
        quickSort(comparableArray, lowIndex, i - 1);  // sort subarray between low index and one before the pivot
        quickSort(comparableArray, i + 1, highIndex); // sort subarray between one after the pivot and high index
    }

    // ... since swapping with array is the easiest way to swap two objects
    private static void swapItemsWithIndices(Comparable[] comparableArray, int firstItem, int secondItem) {
        System.out.println("Swapping " + comparableArray[firstItem] + " and " + comparableArray[secondItem]);
        final Comparable tempItem = comparableArray[firstItem];
        comparableArray[firstItem] = comparableArray[secondItem];
        comparableArray[secondItem] = tempItem;
        System.out.println("After swap array : " + Arrays.asList(comparableArray));
    }

    // Variation 1 - chose median as pivot
    private static int getMedianIndexAsPivotIndex(int lowIndex, int highIndex) {
        return lowIndex + ((highIndex - lowIndex) / 2);
    }

    public static void main(String[] args) {
        //Integer[] unsortedArray = new Integer[]{1,32,121,1424,2,1214,121214,3535,754,343};
        //Integer[] unsortedArray = new Integer[]{4,4,8,0,8,9,7,3,7,6};
        Integer[] unsortedArray = new Integer[]{5,5,5,5,5,5,5,5,5,5};
        long startTime = System.nanoTime();
        System.out.println("Original array : " + Arrays.asList(unsortedArray));
        quickSort(unsortedArray, 0, unsortedArray.length - 1);
        System.out.println("Sorted array : " + Arrays.asList(unsortedArray));
        System.out.println(System.nanoTime() - startTime);
    }
}
| https://dzone.com/articles/quicksort-easy-way | CC-MAIN-2016-07 | en | refinedweb |
Octave Programming Tutorial/Loops and conditions
Loops are used to repeat a block of code for a known or unknown number of times, depending on the type of loop. Using loops, you will draw some nice pictures of fractals and shapes drawn with random dots.
The for loop
We use for loops to repeat a block of code for a list of known values. As an example, we'll calculate the mean of a list of values. The mean of N values is calculated from
mean = (x(1) + x(2) + ... + x(N)) / N
We set up a vector with some values
octave:1> x = [1.2, 6.3, 7.8, 3.6];
and calculate the mean with
octave:2> sum = 0;
octave:3> for entry = x,
octave:4>   sum = sum + entry;
octave:5> end;
octave:6> x_mean = sum / length(x)
Line 2: Set sum equal to 0.
Line 3: For each value in x, assign it to entry.
Line 4: Increment sum by entry.
Line 5: Ends the for loop when there are no more members of x.
Line 6: Assign the final value of sum divided by the length of x to x_mean.
TO DO: get a better example and explain the code.
In general, we write a for loop as

for variable = vector
  ...
end

The ... represents the block of code that is executed exactly once for each value inside the vector.
Example: The Sierpinski triangle
The Sierpinski triangle is a fractal that can be generated with a very simple algorithm.
- Start on a vertex of an equilateral triangle.
- Select a vertex of the triangle at random.
- Move to the point halfway between where you are now and the selected vertex.
- Repeat from step 2.
Plotting the points that you visit by following this procedure, generates the following picture.
You can download the code that generates this fractal from [2shared.com]. Note that this code uses one very simple for loop to generate the fractal:
for i = 1:N
  ...
end
Exercises
- Write a script that sums the first N integers. You can check your result with the formula N(N+1)/2.
- Write a script that does the same thing as the linspace function. It should start at some value, xstart, stop at xstop, and create a vector that contains N values evenly spaced from xstart to xstop. You can use the zeros function to create a zero-filled vector of the right size. Use help zeros to find out how the function works.
The while loop
The while loop also executes a block of code more than once but stops based on a logical condition. For example
x = 1.0;
while x < 1000
  disp(x);
  x = x*2;
endwhile
will multiply x by 2 until its value exceeds 1000. Here, x < 1000 is the condition of the loop. As long as the condition holds (is true), the loop will continue executing. As soon as it is false, the loop terminates and the first instruction after the loop is executed.
The general form of a while loop is

while condition
  ...
endwhile
Exercise
- Write a script that calculates the smallest positive integer, n, such that a^n >= b for some real numbers a and b. (Meaning, find the smallest power of a that is at least b.) Using the log function is considered cheating.
Example: The Mandelbrot fractal
The Mandelbrot set is another fractal and is generated by checking how long it takes a complex number to become large. For each complex number, c,
- Start with z(0) = 0.
- Let z(i+1) = z(i)^2 + c.
- Find the first i such that |z(i)| > 2.
You can download the code that generates this fractal from Mandelbrot.m. Note that there is a while loop (inside some for loops) that tests whether the complex number z has modulus less than 2:
while (count < maxcount) & (abs(z) < 2)
  ...
endwhile
The first condition in the while loop checks that we do not perform too many iterations. For some values of c the iteration will go on forever if we let it.
See also another version by Christopher Wellons
The do...until statement
These loops are very similar to while loops in that they keep executing based on whether a given condition is true or false. There are however some important differences between while and do...until loops.
- while loops have their conditions at the beginning of the loop; do...until loops have theirs at the end.
- while loops repeat as long as the condition is true; do...until loops continue as long as theirs is false.
- while will execute 0 or more times (because the condition is at the beginning); do...until loops will execute 1 or more times (since the condition is at the end).
The general form of a do...until loop is

do
  ...
until condition
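For example, the following loop prints x and halves it until the value drops below 1. Because the test comes at the end, the body runs at least once even if x starts below 1:

```octave
x = 100;
do
  disp(x);
  x = x/2;
until x < 1
```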
Exercise
Write a script that calculates the greatest common divisor (GCD) of two positive integers. You can do this using Euclid's algorithm.
Challenge
Write a script that generates random number pairs (a, b) that are distributed uniformly
- over the disc a^2 + b^2 <= 1 (the first image below);
- as in the second image below
The break and continue statements
Sometimes it is necessary to stop a loop somewhere in the middle of its execution or to move on to the next value in a for loop without executing the rest of the loop code for the current value. This is where the break and continue statements are useful.
The following code demonstrates how the break statement works.
total = 0;
while true
  x = input('Value to add (enter 0 to stop): ');
  if x == 0
    break;
  endif
  total = total+x;
  disp(['Total: ', num2str(total)]);
endwhile
Without the break statement, the loop would keep executing forever since the condition of the while loop is always true. The break allows you to jump past the end of the loop (to the statement after the endwhile).
The break statement can be used in any loop: for, while, or do...until.
The continue statement also jumps from the inside of a loop but returns to the beginning of the loop rather than going to the end.
- In a for loop, the next value inside the vector will be assigned to the for variable (if there are any left) and the loop restarted with that value;
- in a while loop, the condition at the beginning of the loop will be retested and the loop continued if it is still true;
- in a do...until loop, the condition at the end of the loop will be tested and the loop continued from the beginning if it is still false.
As an example, the following code will fill the lower triangular part of a square matrix with 1s and the rest with 0s.
N = 5;
A = zeros(N); % Create an N x N matrix filled with 0s
for row = 1:N
  for column = 1:N
    if column > row
      continue;
    endif
    A(row, column) = 1;
  endfor
endfor
disp(A);
Note that the inner for skips (continues) over the code that assigns a 1 to an entry of A whenever the column index is greater than the row index.
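As an aside, the same lower-triangular matrix can be produced without any loops using Octave's built-in tril function:

```octave
A = tril(ones(N)); % ones(N) is an N x N matrix of 1s; tril keeps the lower triangle
```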
The if statement
The general form of the if statement is

if condition1
  ...
elseif condition2
  ...
else
  ...
endif
If condition1 evaluates to true, the statements in the block immediately following the if are executed. If condition1 is false, the next condition (condition2 in the elseif) is checked and its statements executed if it is true. You can have as many elseif statements as you like. The final set of statements, after the else, is executed if all of the conditions evaluate to false. Note that the elseif and else parts of the if statement are optional.
The following are all valid if statements:

% Take the log of the absolute value of x
if x > 0
  y = log(x);
elseif x < 0
  y = log(-x);
else
  disp("Cannot take the log of zero.");
endif

x = input("Enter a value: ");
if x > 0
  disp("The number is positive");
endif
if x < 0
  disp("The number is negative");
endif
if x == 0
  disp("The number is zero");
endif
Example: The fractal fern
This algorithm is not quite complete. Have a look at the .m file available from.
The image to the right can be generated with the following algorithm:
1. Let x1 and y1 be random values between 0 and 1.
2. Choose one of the linear transformations below to calculate (xi+1, yi+1) from (xi, yi):
   1. xi+1 = 0
      yi+1 = 0.16yi
   2. xi+1 = 0.20xi − 0.26yi
      yi+1 = 0.23xi + 0.22yi + 1.6
   3. xi+1 = −0.15xi + 0.28yi
      yi+1 = 0.26xi + 0.24yi + 0.44
   4. xi+1 = 0.85xi + 0.04yi
      yi+1 = −0.04xi + 0.85yi + 1.6
   The first transformation is chosen with probability 0.01, the second and third with probability 0.07 each, and the fourth with probability 0.85.
3. Calculate these values for i up to at least 10,000.
You can download the code that generates this fractal as fracfern.m (this is disabled for now).
Return to the Octave Programming tutorial index | https://en.wikibooks.org/wiki/Octave_Programming_Tutorial/Loops_and_conditions | CC-MAIN-2016-07 | en | refinedweb |
Opened 6 years ago
Closed 6 years ago
Last modified 4 years ago
#12032 closed (invalid)
Signals receivers not called when the receiver is a closure
Description
>>> from django.db import models >>> def x(): ... def y(sender, **kwargs): ... print 'y called' ... models.signals.post_init.connect(y, sender = User) ... >>> x() >>> >>> def z(sender, **kwargs): ... print 'z called' ... >>> models.signals.post_init.connect(z, sender = User) >>> >>> a = User() z called >>>
Expected result:
>>> a = User() y called z called >>>
Change History (4)
comment:1 Changed 6 years ago by anonymous
- Needs documentation unset
- Needs tests unset
- Owner changed from nobody to anonymous
- Patch needs improvement unset
- Status changed from new to assigned
comment:2 Changed 6 years ago by anonymous
- Owner anonymous deleted
- Status changed from assigned to new
comment:3 Changed 6 years ago by dc
- Resolution set to invalid
- Status changed from new to closed
comment:4 Changed 4 years ago by jacob
- milestone 1.2 deleted
Milestone 1.2 deleted
That's not a bug, that's a feature.
def connect(self, receiver, sender=None, weak=True, dispatch_uid=None)
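Receivers are stored with weak references by default, so a closure with no other strong reference is garbage collected before the signal ever fires; passing weak=False to connect() keeps it alive. The effect can be reproduced with plain weakref (a standalone sketch, not Django code):

```python
import gc
import weakref

def make_receiver():
    def receiver(sender, **kwargs):
        return "called"
    # Mimics connect(..., weak=True): only a weak reference survives.
    return weakref.ref(receiver)

dead_ref = make_receiver()
gc.collect()
print(dead_ref() is None)   # the closure was garbage collected

def module_receiver(sender, **kwargs):
    return "called"

# A module-level function is kept alive by the module itself.
live_ref = weakref.ref(module_receiver)
gc.collect()
print(live_ref() is not None)
```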
| https://code.djangoproject.com/ticket/12032 | CC-MAIN-2016-07 | en | refinedweb |
Rails Label Helper For Forms
When creating forms I get tired of creating labels for each field so I like to shorten my typing by using this application helper.
def l(id, label)
  "<label for=\"#{id}\">#{label}</label>"
end

then later on in your view you type
<%= l('my_field_id','My Label') %>
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/rails-label-helper-forms | CC-MAIN-2022-40 | en | refinedweb |
quack ~master
A compile-time duck typing library for D
To use this package, run the following command in your project's root directory:

dub add quack

Manual usage
Put the following dependency into your project's dependencies section:

"quack": "~master"
A library for enabling compile-time duck typing in D.
Duck Typing
Duck typing is a reference to the phrase "if it walks like a duck, and quacks
like a duck, then it's probably a duck." The idea is that if a
struct or
class has all the same members as another, it should be usable as the other.
Usage
Duck exists so that you may treat non-related types as polymorphic, at compile time. There are three primary ways to use quack:
1) Taking objects as arguments: For this, you should use
extends!( A, B ),
which returns true if A "extends" B. It can do this by implementing all of the
same members B has, or by having a variable of type B that it has set to
alias this.
2) Storing pointers to objects: For this, you should use a
DuckPointer!A,
which can be created with
duck!A( B b ), assuming B "extends" A (the actual
check is done using
extends, so see the docs on that). Note that this approach
should only be used when you need to actually store the object, as it is much
slower than the pure template approach.
3) Checking for the presence of a mixin: For this, you'll want
hasStringMixin!( A, mix ) or
hasTemplateMixin!( A, mix ). These two
templates will instantiate a struct with the given mixin,
mix, and check if it is
compatible with the type given,
A.
Examples
import quack;
import std.stdio;

struct Base
{
    int x;
}

struct Child1
{
    Base b;
    alias b this;
}

struct Child2
{
    int x;
}

void someFunction( T )( T t ) if( extends!( T, Base ) )
{
    writeln( t.x );
}

struct HolderOfBase
{
    DuckPointer!Base myBase;
}

void main()
{
    someFunction( Child1() );
    someFunction( Child2() );

    auto aHolder1 = new HolderOfBase( duck!Base( Child1() ) );
    auto aHolder2 = new HolderOfBase( duck!Base( Child2() ) );
}
import quack;
import std.stdio;

enum myStringMixin = q{
    void stringMember();
};

mixin template MyTemplateMixin( MemberType )
{
    MemberType templateMember;
}

void doAThing( T )( T t ) if( hasTemplateMixin!( T, MyTemplateMixin, float ) )
{
    // Doing a thing...
}

void doAnotherThing( T )( T t ) if( hasStringMixin!( T, myStringMixin ) )
{
    // Still doing things...
}

struct TemplateMixinImpl
{
    mixin MyTemplateMixin!float;
}

struct StringMixinImpl
{
    mixin( myStringMixin );
}

void main()
{
    doAThing( TemplateMixinImpl() );
    doAnotherThing( StringMixinImpl() );
}
- Registered by Colden Cullen
- ~master released 2 years ago
- ColdenCullen/quack
- MIT
- Dependencies:
- tested
- Short URL:
- quack.dub.pm | https://code.dlang.org/packages/quack/~master | CC-MAIN-2022-40 | en | refinedweb |
SfDataForm
The Xamarin DataForm (SfDataForm) control helps editing the data fields of any data object. It can be used to develop various forms such as login, reservation, data entry, etc. Key features includes the following:
- Layout and grouping: Supports to linear, grid layout and floating label layout with grouping support. Supports customizing the layout with different heights for each item.
- Caption customization: Supports loading the image as caption for the editor.
- Editors: Built-in support for text, numeric, numeric up-down, picker, date picker, time picker, switch, drop-down, autocomplete, and checkbox editors.
- Custom editor: Supports loading the custom editors.
- Validation: Built-in support to validate the data based on the INotifyDataErrorInfo and data annotations. It also programmatically supports validation handling.
Getting Started with Xamarin DataForm (SfDataForm)
23 Sep 202124 minutes to read
This section explains a quick overview of how to use the Xamarin DataForm (SfDataForm) for Xamarin.Forms in your application.
Adding SfDataForm reference
You can add SfDataForm reference using one of the following methods:
Method 1: Adding SfDataForm reference from nuget.org
Syncfusion Xamarin components are available in nuget.org. To add SfDataForm to your project, open the NuGet package manager in Visual Studio, search for Syncfusion.Xamarin.SfDataForm, and then install it.
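If you prefer the command line, the same package can be added with the .NET CLI (assuming the dotnet tooling is available on your machine):

```shell
dotnet add package Syncfusion.Xamarin.SfDataForm
```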
NOTE
Install the same version of SfDataForm NuGet in all the projects.
Method 2: Adding SfDataForm reference from toolbox
Syncfusion also provides Xamarin Toolbox. Using this toolbox, you can drag the SfDataForm control to the XAML page. It will automatically install the required NuGet packages and add the namespace to the page. To install Syncfusion Xamarin Toolbox, refer to Toolbox.
Launching the data form on each platform
To use the data form inside an application, each platform application must initialize the data form renderer. This initialization step varies from platform to platform and is discussed in the following sections:
Android
Android launches the data form without any additional initialization; initializing only the Xamarin.Forms framework is enough to launch the application.
NOTE
If you are adding the references from toolbox, this step is not needed.
iOS
To launch the data form in iOS, call the
SfDataFormRenderer.Init() in the
FinishedLaunching overridden (); SfDataFormRenderer.Init(); LoadApplication (new App ()); … }
Universal Windows Platform (UWP)
UWP launches the data form without any additional initialization; initializing only the Xamarin.Forms framework is enough to launch the application.
ReleaseMode issue in UWP platform
A known framework issue on the UWP platform is that custom controls will not render when the application is deployed in
Release Mode.
The above problem can be resolved by initializing the data form assemblies in the
App.xaml.cs file of the UWP project, as in the following code snippet:
// In App.xaml.cs
protected override void OnLaunched(LaunchActivatedEventArgs e)
{
    …
    rootFrame.NavigationFailed += OnNavigationFailed;

    // you'll need to add `using System.Reflection;`
    List<Assembly> assembliesToInclude = new List<Assembly>();

    // Now, add all the assemblies your app uses
    assembliesToInclude.Add(typeof(SfDataFormRenderer).GetTypeInfo().Assembly);
    assembliesToInclude.Add(typeof(SfNumericTextBoxRenderer).GetTypeInfo().Assembly);
    assembliesToInclude.Add(typeof(SfNumericUpDownRenderer).GetTypeInfo().Assembly);
    assembliesToInclude.Add(typeof(SfSegmentedControlRenderer).GetTypeInfo().Assembly);
    assembliesToInclude.Add(typeof(SfComboBoxRenderer).GetTypeInfo().Assembly);
    assembliesToInclude.Add(typeof(SfCheckBoxRenderer).GetTypeInfo().Assembly);
    assembliesToInclude.Add(typeof(SfRadioButtonRenderer).GetTypeInfo().Assembly);
    assembliesToInclude.Add(typeof(SfMaskedEditRenderer).GetTypeInfo().Assembly);
    assembliesToInclude.Add(typeof(SfTextInputLayoutRenderer).GetTypeInfo().Assembly);
    assembliesToInclude.Add(typeof(SfAutoCompleteRenderer).GetTypeInfo().Assembly);

    // replaces Xamarin.Forms.Forms.Init(e);
    Xamarin.Forms.Forms.Init(e, assembliesToInclude);
    …
}
Supported platforms
- Android
- iOS
- Windows (UWP)
Creating the data form
In this section, you will create a Xamarin.Forms application with
SfDataForm. The control will be configured entirely in C# code.
- Creating the project.
- Adding data form in Xamarin.Forms.
- Creating data object.
- Setting data object.
Creating the project
Create a new BlankApp (.Net Standard) application in Xamarin Studio or Visual Studio for Xamarin.Forms.
Adding data form in Xamarin.Forms
To add the data form to your application, follow the steps:
- Add required assemblies as discussed in assembly deployment section.
- Import the control namespace as
xmlns:dataForm="clr-namespace:Syncfusion.XForms.DataForm;assembly=Syncfusion.SfDataForm.XForms" in the XAML page.
- Create an instance of the data form control and add it as the content of the page.
<"> <dataForm:SfDataForm x: </ContentPage>
using Syncfusion.XForms.DataForm;
using Xamarin.Forms;

namespace GettingStarted
{
    public partial class MainPage : ContentPage
    {
        SfDataForm dataForm;

        public MainPage()
        {
            InitializeComponent();
            dataForm = new SfDataForm();
        }
    }
}
Creating data object
The
SfDataForm is a data editing control, so you need a data object to edit.
Here, a data object named ContactsInfo is created with some properties.
public class ContactsInfo
{
    private string firstName;
    private string middleName;
    private string lastName;
    private string contactNo;
    private string email;
    private string address;
    private DateTime? birthDate;
    private string groupName;

    public ContactsInfo()
    {
    }

    public string FirstName
    {
        get { return this.firstName; }
        set { this.firstName = value; }
    }

    public string MiddleName
    {
        get { return this.middleName; }
        set { this.middleName = value; }
    }

    public string LastName
    {
        get { return this.lastName; }
        set { this.lastName = value; }
    }

    public string ContactNumber
    {
        get { return contactNo; }
        set { this.contactNo = value; }
    }

    public string Email
    {
        get { return email; }
        set { email = value; }
    }

    public string Address
    {
        get { return address; }
        set { address = value; }
    }

    public DateTime? BirthDate
    {
        get { return birthDate; }
        set { birthDate = value; }
    }

    public string GroupName
    {
        get { return groupName; }
        set { groupName = value; }
    }
}
NOTE
If you want your data model to respond to property changes, then implement the
INotifyPropertyChanged interface in your model class.
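A minimal sketch of what that implementation can look like for one property (this is the standard .NET pattern, not specific to SfDataForm):

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

public class ObservableContactsInfo : INotifyPropertyChanged
{
    private string firstName;

    public string FirstName
    {
        get { return this.firstName; }
        set
        {
            this.firstName = value;
            this.OnPropertyChanged(); // notify bound UI about the change
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        this.PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

Raising PropertyChanged from each setter lets data-bound UI refresh when the model changes at runtime.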
Create a model repository class with a ContactsInfo property initialized with the required data in a new class file, as shown in the following code example, and save it as ViewModel.cs:
public class ViewModel
{
    private ContactsInfo contactsInfo;

    public ContactsInfo ContactsInfo
    {
        get { return this.contactsInfo; }
        set { this.contactsInfo = value; }
    }

    public ViewModel()
    {
        this.contactsInfo = new ContactsInfo();
    }
}
Setting data object
To populate the labels and editors in the data form, set the DataObject property.
<"> <ContentPage.BindingContext> <local:ViewModel/> </ContentPage.BindingContext> <dataForm:SfDataForm x: </ContentPage>
dataForm.DataObject = new ContactsInfo();
Now, run the application to render the
data form to edit the data object as in the following screenshot:
You can download the entire source code of this demo for Xamarin.Forms from here DataFormGettingStarted.
Defining editors
The data form control automatically generates DataFormItems (which hold the UI settings of a data field) when the data object is set to the
SfDataForm.DataObject property. The DataFormItem encapsulates the layout and editor settings for the data field appearing in the data form. When the
DataFormItems are generated, you can handle the SfDataForm.AutoGeneratingDataFormItem event to customize or cancel the
DataFormItem.
The type of input editor generated for the data field depends on the type and attribute settings of the property. The following table lists the
DataFormItem and its constraints for generation:
The following list of editors is supported:
Layout options
Label position
By default, the data form arranges the label on the left side and the input control on the right side. You can change the label position by setting the SfDataForm.LabelPosition property. You can move the label from the left to the top of the input control by setting the
LabelPosition to Top.
<dataForm:SfDataForm x:Name="dataForm" LabelPosition="Top"/>
dataForm.LabelPosition = LabelPosition.Top;
Grid layout
By default, the data form arranges one data field per row. It is possible to have more than one data field per row by setting the ColumnCount property, which provides a grid-like layout for the data form.
<dataForm:SfDataForm x:Name="dataForm" ColumnCount="2"/>
dataForm.ColumnCount = 2;
StackLayout positions its child elements one after another, either horizontally or vertically. The space a StackLayout occupies depends on its HorizontalOptions and VerticalOptions properties, and views in a stack layout can be sized based on the space in the layout using layout options.
The DataForm control can be loaded inside any layout such as Grid, StackLayout, etc. When loading DataForm inside a
StackLayout, set the HorizontalOptions and VerticalOptions properties of the DataForm and of its parent (the StackLayout) to
LayoutOptions.FillAndExpand.
Refer to the following code example to load the DataForm control inside a
StackLayout. Set the VerticalOptions and HorizontalOptions of the
StackLayout and DataForm to
FillAndExpand.
<StackLayout x:Name="stackLayout" VerticalOptions="FillAndExpand" HorizontalOptions="FillAndExpand">
    <dataForm:SfDataForm x:Name="dataForm" VerticalOptions="FillAndExpand" HorizontalOptions="FillAndExpand"/>
</StackLayout>
public partial class MainPage : ContentPage
{
    StackLayout stackLayout;
    SfDataForm dataForm;

    public MainPage()
    {
        InitializeComponent();
        stackLayout = new StackLayout();
        stackLayout.VerticalOptions = LayoutOptions.FillAndExpand;
        stackLayout.HorizontalOptions = LayoutOptions.FillAndExpand;
        dataForm = new SfDataForm();
        dataForm.DataObject = new ContactsInfo();
        dataForm.VerticalOptions = LayoutOptions.FillAndExpand;
        dataForm.HorizontalOptions = LayoutOptions.FillAndExpand;
        stackLayout.Children.Add(dataForm);
        this.Content = stackLayout;
    }
}
The DataForm can be loaded with specific height and width inside different layouts using the SfDataForm.HeightRequest and SfDataForm.WidthRequest properties.
<dataForm:SfDataForm x:Name="dataForm" HeightRequest="300" WidthRequest="300" VerticalOptions="CenterAndExpand" HorizontalOptions="Center"/>
dataForm.HeightRequest = 300;
dataForm.WidthRequest = 300;
dataForm.VerticalOptions = LayoutOptions.CenterAndExpand;
dataForm.HorizontalOptions = LayoutOptions.Center;
Editing
By default, the data form enables editing of the data fields. You can disable editing by setting the IsReadOnly property of the data form. You can enable or disable editing for a particular data field by setting the IsReadOnly property of its DataFormItem in the
AutoGeneratingDataFormItem event. The editing behavior of a data field can also be defined by using the Editable attribute.
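As a sketch of the per-field approach — the handler body assumes the event args expose the generated DataFormItem and its field name, so check the AutoGeneratingDataFormItem API reference for the exact member names:

```csharp
dataForm.AutoGeneratingDataFormItem += (sender, e) =>
{
    // Member names below are assumed for illustration; verify against the actual event args.
    if (e.DataFormItem != null && e.DataFormItem.Name == "GroupName")
    {
        // Make only this one field read-only while the rest stay editable.
        e.DataFormItem.IsReadOnly = true;
    }
};
```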
Additional Help Resources
See also
How to render DataForm using MVVMCross in Xamarin.Forms
How to render DataForm using RealmObject in Xamarin.Forms
How to render DataForm using ReactiveUI in Xamarin.Forms
How to bind data object in Xamarin.Forms DataForm (SfDataForm) using Fresh MVVM framework
How to bind data object in Xamarin.Forms DataForm(SfDataForm) using Prism framework
How to import and export data objects from SQLite Offline database into Dataform (SfDataForm)
How to render DataForm using FSharp application in Xamarin.Forms
How to bind JSON data to Xamarin.Forms DataForm (SfDataForm) | https://help.syncfusion.com/xamarin/dataform/getting-started | CC-MAIN-2022-40 | en | refinedweb |
SSL_CTX_sessions(3) OpenSSL SSL_CTX_sessions(3)
NAME
SSL_CTX_sessions - access internal session cache
LIBRARY
libcrypto, -lcrypto
SYNOPSIS
#include <openssl/ssl.h>

struct lhash_st *SSL_CTX_sessions(SSL_CTX *ctx);
DESCRIPTION
SSL_CTX_sessions() returns a pointer to the lhash databases containing the internal session cache for ctx.
NOTES
The internal session cache should not be modified directly but only by using the SSL_CTX_add_session(3) family of functions.
RETURN VALUES
SSL_CTX_sessions() returns a pointer to the lhash of SSL_SESSION.
SEE ALSO
ssl(7), LHASH(3), SSL_CTX_add_session(3), SSL_CTX_set_session_cache_mode(3)
One of the new features in .NET 6 and C# 10 is support for global usings. We can define using directives as global so that they apply across the whole project and we don't have to repeat them in other files. Here's how it works.
Sample project
Suppose we have a console program that runs on Task Scheduler and performs some nightly tasks. The screenshot of Solution Explorer on the right illustrates what we have. There are some classes to access the local file system and cloud storage. There are also some classes to perform import tasks.
Classes that move data from one place to another usually share some repeated using directives. One good example is System.Data and its child namespaces. Every time we add a new class, we have to add those using directives again. This also holds true for the importer classes shown in the screenshot.
Here’s the fragment on one importer class.
using System.Data;
using System.Data.SqlClient;
using ConsoleApp4.FileAccess;

namespace ConsoleApp4.ImportExport;

public class CustomersImporter
{
private readonly IFileClient _fileClient;
private readonly string _connectionString;
public CustomersImporter(IFileClient fileClient, string connectionString)
{
_fileClient = fileClient;
_connectionString = connectionString;
}
// ... }
The three using directives – the first three lines of code above – are present in the other import and export classes as well.
Introducing global usings
C# 10 introduces global usings that are effective across the whole project. Let's add a new file, GlobalUsings.cs, to our project. In this file we define the usings that must be available in every class and interface file in our project.
global using System.Data; global using System.Data.SqlClient; global using ConsoleApp4.FileAccess;
We can remove these usings from all other files in our project.
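The same effect can be achieved without a GlobalUsings.cs file at all: the .NET 6 SDK also lets us declare global usings as Using items in the project file (the namespaces below match the sample project):

```xml
<!-- ConsoleApp4.csproj -->
<ItemGroup>
  <Using Include="System.Data" />
  <Using Include="System.Data.SqlClient" />
  <Using Include="ConsoleApp4.FileAccess" />
</ItemGroup>
```

The SDK's `<ImplicitUsings>enable</ImplicitUsings>` switch additionally injects a default set of namespaces depending on the project type.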
Don’t put global usings to same file with class or interface. It’s easy to create a mess when global usings are defined in same file with classes or interfaces. Always keep global usings in separate file.
ASP.NET Core 6 application
Global usings also work in ASP.NET Core applications. There's a _ViewImports.cshtml file in ASP.NET Core where we can define usings for all views in a project. Here's the content of the _ViewImports.cshtml file after creating a new ASP.NET Core MVC project.
@using WebApplication1 @using WebApplication1.Models @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
Global usings also work for ASP.NET Core MVC views and Razor pages. Instead of keeping usings in the _ViewImports.cshtml file, we can make them global in the global usings file.
global using System.Diagnostics; global using Microsoft.AspNetCore.Mvc; global using Microsoft.AspNetCore.Mvc.RazorPages;
Not so fast… Before abandoning _ViewImports.cshtml for global using directives, think for a moment. Views can be seen as a separate context from all the other code in your web application. Using directives needed in views may not be needed in the code files of your application. Don't pollute your classes with using directives you only use in views.
Wrapping up
Global usings are a useful feature introduced in C# 10. They come especially handy when working with code that uses (too) many namespaces. Global usings are also a good way to keep code files cleaner by defining the most needed usings in a single file. Luckily, the feature also works with ASP.NET Core MVC views and Razor pages.
ReadyAPI + CosmosDB connection
Hey Team,
I am trying to connect to Cosmos DB using the connection string but I am unable to connect to the collection available in the DB. Please find the script below. Please suggest how can I connect and read the data from the Cosmos DB collection.
import com.gmongo.GMongoClient
import com.mongodb.MongoCredential
import com.mongodb.ServerAddress
import com.mongodb.BasicDBObject
import com.gmongo.GMongo
def mongo = new GMongo('Sample Connection string');
def db = mongo.getDB('prs');
log.info("DB connected")
BasicDBObject query = new BasicDBObject("policy_number","12345")
def collection = db.getCollection('cccVacPolicy')
log.info("connect with PRS DB")
def myDoc = collection.findAll();
log.info myDoc.toString();
I think I have a CosmosDB ReadyAPI project saved somewhere. I'll try and dig it out, it might help.
Cheers,
Rich
Hey @mayank451589
digging through my old hard drive now for it,
cheers,
Rich
Hey @mayank451589
I found it - this isn't one of my projects - I think I found this on the web somewhere and saved it cos I thought - "I might have to test a CosmosDB endpoint one day"
I'm attaching the whole project - you'll have to edit the custom project level properties for your instance - but I think this will allow you to connect. the first testcase appears to have all the different REST requests to call all the different info you'd want - a Collection, All Collections, documents, etc.
Have a look and see if it helps,
Cheers,
Rich
| https://community.smartbear.com/t5/ReadyAPI-Questions/ReadyAPI-CosmosDB-connection/td-p/235040 | CC-MAIN-2022-40 | en | refinedweb |
Test Drive
You can use Smartface on-device Android and iOS emulators to run your code instantly and seamlessly on a real device.
Install Smartface On-Device Emulator
Downloading the Smartface On-Device Emulator for iOS and Android
If you haven't installed the Smartface On-Device Emulator, get it from the internal Enterprise App Store on your phone. Please contact support to retrieve the credentials.
Activating the Smartface iOS Widget
You can activate the Smartface Widget on iOS by following the steps below:
Note: Once you activated the widget, you won't have to activate it again even you if reinstall the Smartface On-Device Emulator.
Locating the On-Device Emulator on iOS and Android
When you install the on-device emulator, it will appear in the application menu of your device.
Run on Device (On-Device Android and iOS Emulators)
To deploy your app to your device, there are two different approaches you can take.
Run Your App on Device via Wireless Connection
To connect your on-device emulator with Smartface IDE via a wireless connection, your computer and mobile device need to be connected to the same local network.
Then to test your application press on the Connect Wireless via QR Code button under the
Run > Run on Device menu.
It will generate a unique QR Code for your workspace to be able to emulate your project.
Open Smartface on-device Android/iOS emulator from your device and scan the QR Code. It will download all the resources and codes and load your application in the Smartface on-device emulator. After initializing and downloading steps you will be able to run your application for testing purposes.
Run Your App on Device via Wired Connection
Smartface also provides a way to connect and test your project with your device through a USB Cable. Let's have a look at the steps you should take to achieve this behavior.
Android
For Android, there are two things that have to be covered on your side.
- Android's USB debugging feature needs to be enabled on your mobile device.
- adb must be installed on your computer.
To enable USB debugging on your mobile device follow the steps below:
- On the device, go to Settings > About (device).
- Tap the Build number seven times to make Settings > Developer options available.
- Then enable the USB Debugging option.
You might also want to enable the Stay awake option, to prevent your Android device from sleeping while plugged into the USB port.
To install adb on your computer follow the steps below:
First, connect your phone to your system with a data cable. You will get a prompt asking whether or not you want to allow USB debugging. Check ‘Always allow from this computer‘ and tap ‘OK.’
Windows
If Android Studio is installed on your system, adb is already installed, so the only thing you need to do is add adb to your System PATH. To achieve this, you can skip the manual installation steps below and refer to
Adding the adb to your System PATH at the end of this section.
Manual Installation:
- Download Platform Tools:
- Extract the contents of this ZIP file into an easily accessible folder (such as C:\platform-tools)
- Open up a new command prompt in that folder.
- In the Command Prompt window, enter the following command to launch the ADB daemon:
./adb devices
- Finally, for the adb command to be recognized in any folder you are in, adb needs to be added to your System PATH. The following section explains how to do this.
Adding the adb to your System PATH: open System Properties → Advanced → Environment Variables, select Path under System variables, click Edit, then New, and type down your path to platform-tools (e.g. C:\platform-tools). Click OK. Close all remaining windows by clicking OK.
Linux
If Android Studio is installed in your system that means the adb is already installed so you can skip the manual installation steps below.
For Debian-based Linux users, type the following command to install ADB:
sudo apt install android-tools-adb
For Fedora/Suse-based Linux users, type one of the following commands to install ADB:
sudo dnf install android-tools
sudo yum install android-tools
For Arch-based Linux users, type the following command to install ADB:
sudo pacman -S android-tools
Then open up a new terminal window on your computer and execute the
adb start-server command.
MacOS
If Android Studio is installed in your system that means the adb is already installed so you can skip the manual installation steps below.
You can use one single
homebrew command to install adb:
brew install --cask android-platform-tools
Alternatively, manual installation steps below can be followed:
- Download Platform Tools:
- Extract the ZIP file to an easily-accessible folder (like the Desktop for example)
- Open Terminal
- To browse to the folder you extracted ADB into, enter the following command:
cd /path/to/extracted/folder/
- For example, on my Mac it was this: cd /Users/MyUsername/Desktop/platform-tools/
- Once the Terminal is in the same folder your ADB tools are in, you can execute the following command to launch the ADB daemon:
./adb devices
Now, after configuring your adb installation, the last thing you need to do is connect your device to your computer with a USB cable, then click the Connect Wired via ADB (Android Only) menu item under the
Run > Run on Device menu.
This way, you can test your project on a real device without any need for internet access.
iOS
For iOS, to deploy and test your project on a mobile device via USB cable, we will use the Internet Sharing feature of macOS; therefore a macOS system is required. Though your data is transferred through the USB cable, for iOS you will still need a common internet connection between your mobile device and your system.
Here are the steps you should take to run iPhone locally on Mac:
- Plug your iPhone into your Mac via USB
- On your Mac, go to
System preferences → Sharing
- Click and select Internet Sharing from the left-hand side and make sure that you have selected iPhone USB from the
To computers using: menu. (Make sure Internet Sharing is On.)
- Check and note your dispatcher connection port from the Smartface IDE. To learn where to find the port number, you can refer to the document below:
- Then you need to check your system's IP address. It can be found at
System Preferences > Network on your Mac.
- To match your devices, one last thing to do: on your mobile device, open your favorite browser and navigate to your system's IP address followed by the dispatcher port.
- Finally, to test your project in your on-device emulator, click
Run > Run On Device > Connect Wireless via QR Code and scan the newly generated QR code from your mobile device.
If you use iOS 14 or later, you can also use the Haptic Touch feature to fast-access the Update button. Simply hold the Smartface icon:
The Smartface IDE also gives you three different ways to update the application that runs on your on-device emulator.
The first two exist under the
Run > Run on Device menu; from here, Select device(s) to Apply Changes and Apply Changes to Connected Devices (all of them at once) can be used to update your on-device emulator with the latest changes from your project.
And the third one is the Apply Changes button placed at the bottom-right of your IDE. Its functionality is exactly the same as the Apply Changes to Connected Devices menu item.
For this, you can also use a keyboard shortcut. The default keybinding for applying changes is ctrlcmd+alt+v. To change the default keybindings, press
ctrlcmd+p and type down >Open Keyboard Shortcuts. In the newly opened Keyboard Shortcuts page, search for Smartface: Apply Changes to Connected Devices and edit it from there.
Clearing On-Device Emulator Contents
Clear allows you to completely remove the downloaded files for Smartface on-device emulator. It cleans up the cached files.
Determine if the Current Code is Running on On-Device Emulator
In some cases, you might want your code to run only on the Smartface On-Device Emulator, for example because logs can pile up and slow down the application. The common cases are:
- Logging for debug purposes
- Published-Application-only concepts like Firebase, Push Notification
- Plugins(some might require to be run on Published App only)
- Internal concepts like auto-filling passwords for easier development or developer settings
Example:
import System from "@smartface/native/system";
if (System.isEmulator) {
console.log("Debug log for X purpose");
}
or within a service call, to keep logs only while developing:
import System from '@smartface/native/system';

import ServiceCall from '@smartface/extension-utils/service-call';

const service = new ServiceCall({

    baseUrl: '',

    logEnabled: System.isEmulator // Only show logs if the application is running in the emulator.

});

export default service;
Simply use System.isEmulator property to detect such cases. | https://docs.smartface.io/7.0.0/smartface-getting-started/test-drive/ | CC-MAIN-2022-40 | en | refinedweb |
Demonstrates the use of string handles to represent objects in Excel
This add-in contains code very similar to that described in the User Guide topic Using object handles. The difference is that it uses string handles instead.
String handles (as implemented in the
StringHandles.xpe
extension) consist of the object's name followed by its index
within the list of all active handles.
This is the easiest handle type for a user to "debug", since the
numbers are manageably short, unlike the pointers which are used by the
NumericHandles.xpe and
StringPtrHandles.xpe extensions.
However, this type of handle is slightly less robust than the
other two styles, since it is possible for a handle which has been
erroneously changed from a formula to a value to remain a valid -
but misleading - handle value.
To edit the functions in this add-in, you need to load the extension file
StringHandles.xpe.
See Loading an extension file
for instructions.
You should also make sure that none of the following extension files is loaded, since the various types of handles are mutually exclusive.
RtdHandles.xpe
NumericHandles.xpe
StringPtrHandles.xpe
The add-in functions in this sample were copied from the NumericHandleDemo sample. The following steps are required to change the handle type from numeric to string:
Unload NumericHandles.xpe and load
StringHandles.xpe.
#include "extensions\NumericHandles.h".
The project can then immediately be rebuilt and run.
CXlOper::operator = | CXlOper::Ret
Each sample project is located in a sub-directory of the Samples directory of the XLL+ installation. To use the sample project, open the solution file StringHandleDemo.sln or the project file StringHandleDemo.vcproj.
You can enable debugging under Excel by using the Setup Debugging command in the XLL+ ToolWindow.
When delivered, the help files are excluded from the build.
You can enable the help build by selecting the files
StringHandleDemo.help.xml and
StringHandleDemo.chm
in the Solution Explorer,
and using the right-click menu to view Properties.
Select the page "Configuration Properties/General" and
set the "Excluded from build" property to "No".
See Generating help
in the User Guide for more information.
List of Sample Projects | Using object handles | RtdHandleDemo sample | NumericHandleDemo sample | StringPtrHandleDemo sample | http://planatechsolutions.com/xllplus7-online/sample_StringHandleDemo.htm | CC-MAIN-2022-40 | en | refinedweb |
There are some fun wearable projects that you can do with small Arduino modules like the Gemma ($9) and some NeoPixels ($7).
Conductive thread is used to both secure the modules to the material and to do the wiring.
For this project we wanted to have a sequence of colour patterns, so we had:
- a rainbow moving around in a circle
- a red smiley face, that had a left/right smirk and then a wink
However there a tons of possible patterns that could be used.
For the battery mounting there are a few options such as lipo batteries, coin batteries and small battery packs.
Below is some example code for the moving rainbow that we used.
Have Fun
#include <Adafruit_NeoPixel.h>

#define PIN 1

int theLED = 0;
Adafruit_NeoPixel strip = Adafruit_NeoPixel(12, PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.setBrightness(20);
  strip.show(); // Initialize all pixels to 'off'
}

void loop() {
  // Create a rainbow of colors that moves in a circle
  for (int i = 0; i < strip.numPixels(); i++) {
    strip.setPixelColor(i, 0, 0, 255);
  }
  theLED = theLED + 1;
  if (theLED >= strip.numPixels()) {
    theLED = 0;
  }
  // Wrap the trailing indices around the ring so the tail
  // doesn't disappear when theLED is near 0.
  int n = strip.numPixels();
  strip.setPixelColor(theLED, 209, 31, 141);
  strip.setPixelColor((theLED + n - 1) % n, 11, 214, 180);
  strip.setPixelColor((theLED + n - 2) % n, 240, 210, 127);
  strip.setPixelColor((theLED + n - 3) % n, 191, 127, 240);
  strip.show();
  delay(500);
}
2 thoughts on “Neopixel Hats”
That is really cool Pete. Do you do all your own coding? John.
I was teaching my daughters, so they did most of the coding. I did the sewing because it was a first attempt but I should have had them do it, (my sewing skills suck). | https://funprojects.blog/2019/12/11/neopixel-hat/ | CC-MAIN-2022-40 | en | refinedweb |
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Format.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/Printable.h"
#include "llvm/Support/raw_ostream.h"
Go to the source code of this file.
A common definition of LaneBitmask for use in TableGen and CodeGen.
A lane mask is a bitmask representing the covering of a register with sub-registers.
This is typically used to track liveness at sub-register granularity. Lane masks for sub-register indices are similar to register units for physical registers. The individual bits in a lane mask can't be assigned any specific meaning. They can be used to check if two sub-register indices overlap.
Iff the target has a register such that:
getSubReg(Reg, A) overlaps getSubReg(Reg, B)
then:
(getSubRegIndexLaneMask(A) & getSubRegIndexLaneMask(B)) != 0
Definition in file LaneBitmask.h. | https://www.llvm.org/doxygen/LaneBitmask_8h.html | CC-MAIN-2022-40 | en | refinedweb |
the available memory, which is a scarce and expensive resource.
It turns out that it is not difficult to figure out how much memory is actually consumed. In this article, I'll walk you through the intricacies of.
Depending on the Python version, the numbers are sometimes a little different (especially for strings, which are always Unicode), but the concepts are the same. In my case, am using Python 3.10.
As of 1st January 2020, Python 2 is no longer supported, and you should have already upgraded to Python 3.) 28
Interesting. An integer takes 28 bytes.
sys.getsizeof(5.3) 24
Hmm… a float takes 24 bytes.
from decimal import Decimal sys.getsizeof(Decimal(5.3)) 104
Wow. 104 bytes! This really makes you think about whether you want to represent a large number of real numbers as floats or Decimals.
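The tradeoff is easy to check directly. The exact byte counts vary by CPython version and platform, but a Decimal is always several times larger than a float, and the difference compounds when you store many of them:

```python
import sys
from decimal import Decimal

f = 5.3
d = Decimal("5.3")

print(sys.getsizeof(f))  # 24 on 64-bit CPython 3.10
print(sys.getsizeof(d))  # 104 on 64-bit CPython 3.10

# Stored a million times, the per-object difference adds up to tens of MiB.
extra_bytes = (sys.getsizeof(d) - sys.getsizeof(f)) * 1_000_000
print(extra_bytes / 2**20, "MiB")
```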
Let's move on to strings and collections:
sys.getsizeof('') 49 sys.getsizeof('1') 50 sys.getsizeof('12') 51 sys.getsizeof('123') 52 sys.getsizeof('1234') 53
OK. An empty string takes 49 bytes, and each additional character adds another byte. That says a lot about the tradeoffs of keeping many short strings, where you pay the 49-byte overhead for each one, vs. a single long string, where you pay the overhead only once.
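Here is that tradeoff measured directly; the exact totals depend on the CPython build, but many short strings always cost more than one joined string with the same characters:

```python
import sys

words = ["alpha", "beta", "gamma", "delta"]

# Many short strings: each one pays the ~49-byte object overhead separately.
separate = sum(sys.getsizeof(w) for w in words)

# One long string: the overhead is paid only once.
joined = sys.getsizeof("".join(words))

print(separate, joined)
```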
The bytes object has an overhead of only 33 bytes.
sys.getsizeof(bytes()) 33
Let's look at lists.
sys.getsizeof([]) 56 sys.getsizeof([1]) 64 sys.getsizeof([1, 2]) 72 sys.getsizeof([1, 2,3]) 80 sys.getsizeof([1, 2, 3, 4]) 88 sys.getsizeof(['a long longlong string']) 64
What's going on? An empty list takes 56 bytes, but each additional
int adds just 8 bytes, even though the size of an
int is 28 bytes. Likewise, a list that contains one long string takes just 64 bytes. The reason is that a list stores only 8-byte references to its items, not the items themselves.
sys.getsizeof(()) 40 sys.getsizeof((1,)) 48 sys.getsizeof((1,2,)) 56 sys.getsizeof((1,2,3,)) 64 sys.getsizeof((1, 2, 3, 4)) 72 sys.getsizeof(('a long longlong string',)) 48
The story is similar for tuples. The overhead of an empty tuple is 40 bytes vs. the 56 of a list. Again, this 16-byte difference per sequence is low-hanging fruit if you have a data structure with a lot of small, immutable sequences.
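One more check makes the "references, not values" point concrete: on a 64-bit build, each extra element costs exactly one 8-byte slot, no matter how big the element itself is:

```python
import sys

print(sys.getsizeof((1,)) - sys.getsizeof(()))     # 8: one extra reference slot
print(sys.getsizeof([1, 2]) - sys.getsizeof([1]))  # 8: same for lists

# A huge element still adds only one 8-byte slot to the tuple itself.
big = "x" * 1_000_000
print(sys.getsizeof((big,)) - sys.getsizeof(()))   # still 8
```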
sys.getsizeof(set()) 216
sys.getsizeof(set([1])) 216
sys.getsizeof(set([1, 2, 3, 4])) 216
sys.getsizeof({}) 64
sys.getsizeof(dict(a=1)) 232
sys.getsizeof(dict(a=1, b=2, c=3)) 232

Note that sys.getsizeof() counts only the memory of the object itself, not the objects it references. To measure the full footprint of a container, the examples below use a recursive deep_getsizeof() function (the ids set prevents objects reachable through several references from being counted twice):

from collections.abc import Mapping, Container
from sys import getsizeof

def deep_getsizeof(o, ids):
    d = deep_getsizeof
    if id(o) in ids:
        return 0
    r = getsizeof(o)
    ids.add(id(o))
    if isinstance(o, str) or isinstance(o, bytes):
        return r
    if isinstance(o, Mapping):
        return r + sum(d(k, ids) + d(v, ids) for k, v in o.items())
    if isinstance(o, Container):
        return r + sum(d(x, ids) for x in o)
    return r

x = '1234567'
sys.getsizeof(x) 56
A string of length 7 takes 56 bytes (49 bytes overhead + 1 byte for each of the 7 characters).
deep_getsizeof([], set()) 56
An empty list takes 56 bytes (just overhead).
deep_getsizeof([x], set()) 120
A list that contains the string x takes 120 bytes (56 + 8 + 56).
deep_getsizeof([x, x, x, x, x], set()) 152
A list that contains the string x five times takes 152 bytes (56 + 5*8 + 56) — the string itself is counted only once, because the ids set prevents double counting. The extra overhead is obviously not trivial.
Integers
CPython keeps a global list of all the integers in the range -5 to 256. This optimization strategy makes sense because small integers pop up all over the place, and given that each integer takes 28 bytes, it saves a lot of memory for a typical program.
It also means that CPython pre-allocates 266 * 28 = 7448 bytes for all these integers, even if you don't use most of them. You can verify it by using the
id() function that gives the pointer to the actual object. If you call
id(x) for any
x in the range -5 to 256, you will get the same result every time (for the same integer). But if you try it for integers outside this range, each one will be different (a new object is created on the fly every time).
Here are a few examples within the range:
>>> id(-3)
9788832
>>> id(-3)
9788832
>>> id(-3)
9788832
>>> id(201)
9795360
>>> id(201)
9795360
>>> id(201)
9795360
Here are some examples outside the range:
>>> id(257)
140276939034224
>>> id(301)
140276963839696
>>> id(301)
140276963839696
>>> id(-6)
140276963839696
>>> id(-6)
140276963839696

(Repeated calls can show the same address here only because the temporary object is freed after each call and its memory is immediately reused—the objects are still distinct.) We can watch allocations like these in aggregate by profiling a small program with the memory_profiler package, after decorating the function of interest with @profile:
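Because id() values can be recycled as soon as an object is freed, a sturdier way to observe the small-integer cache is to keep two freshly built ints alive at the same moment and compare their identity. A small check (the same_object helper is ours; int(s) is used so the compiler cannot fold the two literals into one constant):

```python
def same_object(s):
    # Both ints are alive simultaneously, so identical identity can
    # only come from CPython's small-integer cache.
    return int(s) is int(s)

print(same_object('100'))   # True  -> inside the cached -5..256 range
print(same_object('300'))   # False -> a new object is created each time
```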
Filename: python_obj.py

Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     3     17.3 MiB     17.3 MiB           1   @profile
     4                                         def main():
     5     17.3 MiB      0.0 MiB           1       a = []
     6     17.3 MiB      0.0 MiB           1       b = []
     7     17.3 MiB      0.0 MiB           1       c = []
     8     18.0 MiB      0.0 MiB      100001       for i in range(100000):
     9     18.0 MiB      0.8 MiB      100000           a.append(5)
    10     18.7 MiB      0.0 MiB      100001       for i in range(100000):
    11     18.7 MiB      0.7 MiB      100000           b.append(300)
    12     19.5 MiB      0.0 MiB      100001       for i in range(100000):
    13     19.5 MiB      0.8 MiB      100000           c.append('123456789012345678901234567890')
    14     18.9 MiB     -0.6 MiB           1       del a
    15     18.2 MiB     -0.8 MiB           1       del b
    16     17.4 MiB     -0.8 MiB           1       del c
    17
    18     17.4 MiB      0.0 MiB           1       print('Done!')
As you can see, 17.3 MiB of memory is already in use when main() starts. The first loop on line 9 adds 0.8 MiB, the second on line 11 adds just 0.7 MiB, and the third loop on line 13 adds 0.8 MiB. Finally, when deleting the a, b and c lists, 0.6 MiB is released for a, 0.8 MiB for b, and 0.8 MiB for c.
How To Trace Memory Leaks in Your Python application with tracemalloc
tracemalloc is a Python module that acts as a debug tool to trace memory blocks allocated by Python. Once tracemalloc is enabled, you can obtain the following information:
- identify where the object was allocated
- give statistics on allocated memory
- detect memory leaks by comparing snapshots
Consider the example below:
import tracemalloc

tracemalloc.start()

a = []
b = []
c = []
for i in range(100000):
    a.append(5)
for i in range(100000):
    b.append(300)
for i in range(100000):
    c.append('123456789012345678901234567890')
# del a
# del b
# del c

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics('lineno'):
    print(stat)
    print(stat.traceback.format())
Explanation
- tracemalloc.start() — starts the tracing of memory
- tracemalloc.take_snapshot() — takes a memory snapshot and returns a Snapshot object
- Snapshot.statistics() — sorts records of tracing and returns the number and size of objects from the traceback. lineno indicates that sorting will be done according to the line number in the file.
When you run the code, the output will be:
[' File "python_obj.py", line 13', " c.append('123456789012345678901234567890')"]
python_obj.py:11: size=782 KiB, count=1, average=782 KiB
[' File "python_obj.py", line 11', ' b.append(300)']
python_obj.py:9: size=782 KiB, count=1, average=782 KiB
[' File "python_obj.py", line 9', ' a.append(5)']
python_obj.py:5: size=576 B, count=1, average=576 B
[' File "python_obj.py", line 5', ' a = []']
python_obj.py:12: size=28 B, count=1, average=28 B
[' File "python_obj.py", line 12', ' for i in range(100000):']
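The example prints statistics from a single snapshot. Detecting leaks by comparing two snapshots—the third capability listed earlier—can be sketched like this (the "leak" is simulated with a list we deliberately keep alive):

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

leak = [bytes(1000) for _ in range(1000)]   # simulate a leak: ~1 MB kept alive

after = tracemalloc.take_snapshot()
top = after.compare_to(before, 'lineno')    # biggest differences first
for stat in top[:3]:
    print(stat)
```

The entries report size_diff and count_diff per source line, so a growing allocation site shows up at the top of the list.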
Conclusion
CPython uses a lot of memory for its objects. It also uses various tricks and optimizations for memory management. By keeping track of your object's memory usage and being aware of the memory management model, you can significantly reduce the memory footprint of your program.
This post has been updated with contributions from Esther Vaati. Esther is a software developer and writer for Envato Tuts+.
| https://code.tutsplus.com/tutorials/understand-how-much-memory-your-python-objects-use--cms-25609?ec_unit=translation-info-language | CC-MAIN-2022-40 | en | refinedweb |
motpy - simple multi object tracking library
Project is meant to provide a simple yet powerful baseline for multiple object tracking without the hassle of writing the obvious algorithm stack yourself.
video source: - sequence 11
Features
- tracking by detection paradigm
- IOU + (optional) feature similarity matching strategy
- Kalman filter used to model object trackers
- each object is modeled as a center point (n-dimensional) and its size (n-dimensional); e.g. 2D position with width and height would be the most popular use case for bounding boxes tracking
- separately configurable system order for object position and size (currently 0th, 1st and 2nd order systems are allowed)
- quite fast, more than realtime performance even on Raspberry Pi
Installation
Latest release
pip install motpy
Additional installation steps on Raspberry Pi
You might need to install the following dependencies on the RPi platform:
sudo apt-get install python-scipy
sudo apt install libatlas-base-dev
Develop
git clone
cd motpy
make install-develop  # to install editable version of library
make test  # to run all tests
Example usage
2D tracking - synthetic example
Run demo example of tracking N objects in 2D space. In the ideal world it will show a bunch of colorful objects moving on a grey canvas in various directions, sometimes overlapping, sometimes not. Each object is detected from time to time (green box) and once it's being tracked by motpy, its track box is drawn in red with an ID above.
make demo
2d_multi_object_tracking_demo.mp4
Detect and track objects in the video
- example uses a COCO-trained model provided by torchvision library
- to run this example, you'll have to install requirements_dev.txt dependencies (torch, torchvision, etc.)
- to run on CPU, specify --device=cpu
python examples/detect_and_track_in_video.py \
    --video_path=./assets/video.mp4 \
    --detect_labels=['car','truck'] \
    --tracker_min_iou=0.15 \
    --device=cuda
video_tracking.mp4
video source:, a great YT channel created by J Utah
MOT16 challange tracking
- Download the MOT16 dataset and extract it to the ~/Downloads/MOT16 directory
- Type the command:

python examples/mot16_challange.py --dataset_root=~/Downloads/MOT16 --seq_id=11

This will run a simplified example where a tracker processes artificially corrupted ground-truth bounding boxes from sequence 11; you can preview the expected results in the beginning of the README file.
Face tracking on webcam
Run the following command to start tracking your own face.
python examples/webcam_face_tracking.py
Basic usage
A minimal tracking example can be found below:
import numpy as np

from motpy import Detection, MultiObjectTracker

# create a simple bounding box with format of [xmin, ymin, xmax, ymax]
object_box = np.array([1, 1, 10, 10])

# create a multi object tracker with a specified step time of 100ms
tracker = MultiObjectTracker(dt=0.1)

for step in range(10):
    # let's simulate object movement by 1 unit (e.g. pixel)
    object_box += 1

    # update the state of the multi-object-tracker tracker
    # with the list of bounding boxes
    tracker.step(detections=[Detection(box=object_box)])

    # retrieve the active tracks from the tracker (you can customize
    # the hyperparameters of tracks filtering by passing extra arguments)
    tracks = tracker.active_tracks()

print('MOT tracker tracks %d objects' % len(tracks))
print('first track box: %s' % str(tracks[0].box))
Customization
To adapt the underlying motion model used to keep each object, you can pass a dictionary model_spec to MultiObjectTracker, which will be used to initialize each object tracker at its creation time. The exact parameters can be found in the definition of the motpy.model.Model class.
See the example below, where I've adapted the motion model to better fit the typical motion of face in the laptop camera and decent face detector.
model_spec = {
    'order_pos': 1, 'dim_pos': 2,    # position is a center in 2D space; under constant velocity model
    'order_size': 0, 'dim_size': 2,  # bounding box is 2 dimensional; under constant velocity model
    'q_var_pos': 1000.,              # process noise
    'r_var_pos': 0.1                 # measurement noise
}

tracker = MultiObjectTracker(dt=0.1, model_spec=model_spec)
The simplification used here is that the object position and size can be treated and modeled independently; hence you can use even 2D bounding boxes in 3D space.
Feel free to tune the parameters of the Q and R matrix builders to better fit your use case.
Tested platforms
- Linux (Ubuntu)
- macOS (Catalina)
- Raspberry Pi (4)
Things to do
- [x] Initial version
- [ ] Documentation
- [ ] Performance optimization
- [x] Multiple object classes support via instance-level class_id counting
- [x] Allow tracking without Kalman filter
- [x] Easy to use and configurable example of video processing with off-the-shelf object detector
References, papers, ideas and acknowledgements
- - - | https://libraries.io/pypi/motpy | CC-MAIN-2022-40 | en | refinedweb |
Testing applications has become a necessary skill set for a proficient developer today. The Python community supports testing, and the Python standard library has well-built tools to support it. In the Python ecosystem, there are abundant testing tools to handle complex testing needs.
Unittest
Inspired by the JUnit framework and sharing similar characteristics, this unit testing framework supports test automation, sharing of setup and shutdown code, and independence of the tests from the reporting framework. Unittest comprises several object-oriented concepts:
Test Fixture: A test fixture represents the preparation needed to execute one or more tests, together with any associated cleanup actions
Test Suite: It is a collection of test cases; test suites aggregate tests that should be executed together
Test Case: It is an individual unit of testing that checks for a specific response to a particular set of inputs. unittest provides the TestCase base class, which can be used to create new test cases
Test Runner: The basic operation of a test runner is to execute tests and provide the outcome to the user. The runner may use a graphical interface, a textual interface, or return a special value to indicate the results of the executed tests
Here’s the sample code:
import unittest

class TSM(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('abc'.upper(), 'ABC')

    def test_isupper(self):
        self.assertTrue('ABC'.isupper())
        self.assertFalse('Abc'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

if __name__ == '__main__':
    unittest.main()
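The sample above leaves discovery and running to unittest.main(). The fixture, suite, and runner concepts can also be wired together explicitly; a minimal sketch (class and test names are ours, not from the article):

```python
import unittest

class CounterTest(unittest.TestCase):
    def setUp(self):                 # test fixture: runs before every test
        self.data = [1, 2, 3]

    def tearDown(self):              # cleanup action: runs after every test
        self.data = None

    def test_length(self):
        self.assertEqual(len(self.data), 3)

    def test_sum(self):
        self.assertEqual(sum(self.data), 6)

suite = unittest.TestLoader().loadTestsFromTestCase(CounterTest)   # test suite
runner = unittest.TextTestRunner()                                 # test runner
result = runner.run(suite)
print(result.wasSuccessful())   # True
```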
Nose Testing
An extension of the unittest framework, nose automatically collects tests from unittest.TestCase subclasses. Users can also write simple test functions, which nose discovers and executes. Nose provides various helpful functions for implementing timed tests, testing for exceptions, and other uses.
Sample Code
def dimension_overlap(ranges):
    """Returns the common overlap among a set of [minimum, maximum] ranges."""
    minimum = ranges[0][0]
    maximum = ranges[0][1]
    for (min_1, max_1) in ranges:
        minimum = max(minimum, min_1)
        maximum = min(maximum, max_1)
    if minimum >= maximum:
        return None
    else:
        return (minimum, maximum)
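Under nose, a test does not need a class—any function whose name starts with test_ is collected and run. A sketch exercising the helper (the helper is repeated so the snippet is self-contained, and the test names are ours):

```python
def dimension_overlap(ranges):
    """Returns the common overlap among a set of [minimum, maximum] ranges."""
    minimum, maximum = ranges[0]
    for (min_1, max_1) in ranges:
        minimum = max(minimum, min_1)
        maximum = min(maximum, max_1)
    return None if minimum >= maximum else (minimum, maximum)

def test_overlap_found():
    assert dimension_overlap([[0, 10], [5, 15]]) == (5, 10)

def test_no_overlap():
    assert dimension_overlap([[0, 3], [5, 9]]) is None
```

Running `nosetests` (or pytest, which honors the same naming convention) in the directory would discover and execute both functions.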
Pytest
Preferred by many Python developers, the pytest framework is widely used for unit testing. Its concise, Pythonic idioms make test suites compact and readable, and it scales from small projects to large test automation setups. One of pytest's best features is the detailed information it reports about test failures, which helps developers diagnose and address issues quickly.
Sample Code:
pip install pytest   # install pytest

import pytest   # bring pytest into the test module

from purse import Purse, InsufficientAmount   # user-defined module

def test_default_initial_amount():
    purse = Purse()
    assert purse.balance == 0

def test_setting_initial_amount():
    purse = Purse(100)
    assert purse.balance == 100

def test_purse_add_cash():
    purse = Purse(10)
    purse.add_cash(90)
    assert purse.balance == 100

def test_purse_spend_cash():
    purse = Purse(20)
    purse.spend_cash(10)
    assert purse.balance == 10

def test_purse_spend_cash_raises_exception_on_insufficient_amount():
    purse = Purse()
    with pytest.raises(InsufficientAmount):
        purse.spend_cash(100)
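These tests assume a purse module that is never shown; a minimal implementation that satisfies them might look like this (class and method behavior inferred from the tests, not taken from the article):

```python
class InsufficientAmount(Exception):
    """Raised when trying to spend more cash than the purse holds."""

class Purse:
    def __init__(self, initial_amount=0):
        self.balance = initial_amount

    def add_cash(self, amount):
        self.balance += amount

    def spend_cash(self, amount):
        if self.balance < amount:
            raise InsufficientAmount('Not enough available to spend %s' % amount)
        self.balance -= amount
```

Saved as purse.py next to the test file, pytest would then collect and run all five tests.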
Robot Framework
Operating-system and application independent, Robot Framework is a universal test automation framework for acceptance testing and acceptance-test-driven development (ATDD). Offering easy-to-use tabular test data and a keyword-driven testing approach, Robot Framework's capabilities can be extended by test libraries implemented either in Python or Java. The framework gives users the scope to create new high-level keywords from existing ones using the same syntax that is used for building test cases.
Sample Code
*** Settings ***
Library           SeleniumLibrary    # set the working library

*** Test Cases ***
The user can search for colleges
    [Tags]    search_colleges
    Open Browser    Chrome
    Select From List By Value    xpath://select[@name='most_preferred']    India
    Select From List By Value    xpath://select[@name='least_preferred']    India
    Click Button    css:input[type='submit']
    @{colleges}=    Get WebElements    css:table[class='table']>tbody tr
    Should Not Be Empty    ${colleges}
    Close All Browsers
Zope Testing
The Zope debugger allows the user to pinpoint the exact step where there is a problem in the running process. Zope provides logging options which permit the user to issue warnings and error messages. Testing provides information through various sources and allows combining channels to fetch information about the debugging.
The Control Panel
The control panel gives various views that can help the user debug Zope. The debugging information link present on the control panel gives two views: debugging info and profiling.
Product Refresh Settings
Zope provides a refresh view in the control panel. This allows the user to reload a product's modules as they are being changed or updated.
Debug Mode
Running in debug mode reduces Zope's performance and has effects such as:
- Tracebacks will be displayed on the browser when errors raise
- External methods and DTML file objects are checked every time they are called to see if they have been updated, and are reloaded if so
The Python Debugger
Integrated with the Python debugger, Zope can shut down the server and replay a request through the command line. It provides infrastructure to create new objects and debug them immediately.
Sample Code
from zope.interface.verify import verifyClass, verifyObject

from uspkg.app import SampleApp          # uspkg = user-defined package
from uspkg.interfaces import ISampleApp

def test_app_create():
    # Assure we can create instances of `SampleApp`
    app = SampleApp()
    assert app is not None

def test_app_class_iface():
    # Assure the class implements the declared interface
    assert verifyClass(ISampleApp, SampleApp)

def test_app_instance_iface():
    # Assure instances of the class provide the declared interface
    assert verifyObject(ISampleApp, SampleApp())
| https://www.analyticsindiamag.com/5-python-unit-test-frameworks-to-learn-in-2019/ | CC-MAIN-2019-39 | en | refinedweb
MVVM Pattern Overview
Model View ViewModel (MVVM) is a design pattern which helps developers separate the Model, which is the data, from the View, which is the user interface (UI).
The View-Model part of the MVVM is responsible for exposing the data objects from the Model in such a way that those objects are easily consumed in the View. The Kendo UI MVVM component is an implementation of the MVVM pattern which seamlessly integrates with the rest of the Kendo UI framework—Kendo UI widgets and Kendo UI DataSource.
Kendo UI MVVM initialization is not designed to be combined with the Kendo UI server wrappers. Using wrappers is equivalent to jQuery plugin syntax initialization. If you want to create Kendo UI widget instances via the MVVM pattern, then do not use server wrappers for these instances.
Getting Started
Start by creating a View-Model object. The View-Model is a representation of your data (the Model) which will be displayed in the View. To declare your View-Model, use the kendo.observable function and pass it a JavaScript object:
var viewModel = kendo.observable({
    name: "John Doe",
    displayGreeting: function() {
        var name = this.get("name");
        alert("Hello, " + name + "!!!");
    }
});
Declare a View. The View is the UI, i.e. a set of HTML elements, which will be bound to the View-Model. In the following example, the input value (its text) is bound via the data-bind attribute to the name field of the View-Model. When that field changes, the input value is updated to reflect that change. The opposite is also true: when the value of the input changes, the field is updated. The click event of the button is bound via the data-bind attribute to the displayGreeting method of the View-Model. That method will be invoked when the user clicks the button.
<div id="view">
  <input data-bind="value: name" />
  <button data-bind="click: displayGreeting">Display Greeting</button>
</div>
Bind the View to the View-Model. This is done by calling the kendo.bind method:

kendo.bind($("#view"), viewModel);

Setting the data-* Options

For more information on the naming convention for setting the configuration options of the Kendo UI MVVM widgets, check the naming convention for the set data options.

The hybrid widgets and frameworks in Kendo UI are not included in the default list of initialized namespaces. You can initialize them explicitly by running kendo.bind(element, viewModel, kendo.mobile.ui);.
Bindings
A binding pairs a DOM element (or widget) property to a field or method of the View-Model. Bindings are specified via the data-bind attribute in the form <binding name>: <view model field or method>, e.g. value: name. Two bindings were used in the aforementioned example: value and click.
The Kendo UI MVVM supports binding to other properties as well: source, html, attr, visible, enabled, and others. The data-bind attribute may contain a comma-separated list of bindings, e.g. data-bind="value: name, visible: isNameVisible". For more information on each Kendo UI MVVM binding, refer to the MVVM bindings articles.
- Bindings cannot include hard-coded values but only references to properties of the viewModel. For example, the data-bind="visible: false, source: [{ foo: 'bar'}]" configuration is incorrect.
- The data-template attributes cannot contain inline template definitions, but only IDs of external templates.
The Kendo UI MVVM also supports data binding to nested View-Model fields.
<div data-bind="text: person.name"></div>
<script>
var viewModel = kendo.observable({
    person: {
        name: "John Doe"
    }
});
kendo.bind($("div"), viewModel);
</script>
Important Notes
- Set numeric options as strings. Some Kendo UI widgets accept string options which represent numbers and can be parsed as such, for example, <input data-role="maskedtextbox" data-mask="09" />. This mask will be parsed as a number and the widget will receive a single 9 digit in its initialization method, instead of a "09" string. In such scenarios, the widget options must be set with a custom MVVM binding.
- Bindings are not JavaScript code. Although bindings look like JavaScript code, they are not. The <div data-bind="text: person.name.toLowerCase()"></div> chunk of code is not a valid Kendo UI MVVM binding declaration. If a value from the View-Model requires processing before displaying it in the View, a method should be created and used instead.
<div data-bind="text: person.lowerCaseName"></div>
<script>
var viewModel = kendo.observable({
    person: {
        name: "John Doe",
        lowerCaseName: function() {
            return this.get("name").toLowerCase();
        }
    }
});
kendo.bind($("div"), viewModel);
</script>
See Also
- ObservableObject Overview
- Tutorial on How to Build MVVM Bound Forms
- How to Apply Source and Template Binding Using Model with Computed Field
For more information on the bindings Kendo UI MVVM supports, refer to the section about Kendo UI MVVM bindings. | https://docs.telerik.com/kendo-ui/framework/mvvm/overview | CC-MAIN-2019-39 | en | refinedweb |
Nsid Class
Definition
Abstract Numbering Definition Identifier. When the object is serialized out as xml, its qualified name is w:nsid.
public class Nsid : DocumentFormat.OpenXml.Wordprocessing.LongHexNumberType
type Nsid = class inherit LongHexNumberType
Public Class Nsid Inherits LongHexNumberType
- Inheritance
Remarks
[ISO/IEC 29500-1 1st Edition]
nsid (Abstract Numbering Definition Identifier).
If this element is omitted, then the list shall have no nsid and one can be added by a producer arbitrarily.
[Note: This element can be used to determine the abstract numbering definition to be applied to a numbered paragraph copied from one document and pasted into another. Consider a case in which a given numbered paragraph associated with a abstract numbering definition with nsid FFFFFF23, is pasted among numbered paragraphs associated with a completely different appearance and an abstract numbering definition with an nsid of FFFFFF23. Here, because of the distinction enabled by the identical nsid values, the hosting application would not have to arbitrarily keep the pasted numbered paragraph associated with its original abstract numbering definition, as it might use the information provided by the abstract numbering definition's identical nsid values to know that those two numbering sets are identical, and merge the paragraphs into the target numbering format. end note]
[Example: Consider the WordprocessingML for an abstract numbering definition below:
<w:abstractNum w: <w:nsid w: <w:multiLevelType w: <w:tmpl w: … </w:abstractNum>
In this example, the given abstract numbering definition is associated with the unique hexadecimal ID FFFFFF89. end example]
[Note: The W3C XML Schema definition of this element’s content model (CT_LongHexNumber) is located in §A.1. end note]
© ISO/IEC 29500: 2008. | https://docs.microsoft.com/en-us/dotnet/api/documentformat.openxml.wordprocessing.nsid?view=openxml-2.8.1 | CC-MAIN-2019-39 | en | refinedweb
#include "petscmat.h"
PetscErrorCode MatMPIAIJSetPreallocation(Mat B,PetscInt d_nz,const PetscInt d_nnz[],PetscInt o_nz,const PetscInt o_nnz[])

Collective

Users-Manual: ch_mat
| https://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html | CC-MAIN-2019-39 | en | refinedweb
2 major reasons why modern C++ is a performance beast
Use smart pointers and move semantics to supercharge your C++ code base.
One only needs to do a bit of Googling to see that there are a lot of new features in modern C++. In this article, I'll focus on two key features that represent major milestones in C++'s performance evolution: smart pointers and move semantics.
The trouble with raw pointers is that there are too many ways to misuse them, including: forgetting to initialize them, forgetting to release dynamic memory, and releasing dynamic memory too many times. Many such problems can be mitigated or even completely eliminated through the use of smart pointers—class templates designed to encapsulate raw pointers and greatly improve their overall reliability. C++98 provided the auto_ptr template that did part of the job, but there just wasn't enough language support to do that tricky job completely.
As of C++11, that language support is there, and as of C++14 not only is there no remaining need for the use of raw pointers in the language, but there's rarely even any need for the use of the raw new and delete operators.
Pre C++11, there was still one fundamental area where performance was throttled: where C++’s value-based semantics incurred costs for the unnecessary copying of resource-intensive objects. In C++98, a function declared something like this:
vector<Widget> makeWidgetVec(creation-parameters);
struck fear into any cycle-counter's heart, due to the potential expense of returning a vector of Widgets by value (let's assume that Widget is some sort of resource-hungry type). The rules of the C++98 language require that a vector be constructed within the function and then copied upon return or, at the very least, that the program behave as if that were the case. When individual Widgets are expensive to copy, that copy can dominate the cost of the call; the return value optimization can sometimes elide it, but circumstances do not always allow compilers to apply it. Hence, the cost of such code could not be reliably predicted.
The introduction of move semantics in modern C++ completely removes that uncertainty. Even if the Widgets are not "move-enabled," returning a temporary container of Widgets from a function by value becomes a very efficient operation because the vector template itself is move-enabled.
Additionally, if the Widget class is itself move-enabled, the gains are even greater. Consider two versions of Widget: a "conventional" version as displayed below:
#include <cstring>
#include <utility>  // for std::swap

class Widget {
public:
    const size_t TEST_SIZE = 10000;

    Widget() : ptr(new char[TEST_SIZE]), size(TEST_SIZE) {}
    ~Widget() { delete[] ptr; }

    // Copy constructor:
    Widget(const Widget &rhs)
      : ptr(new char[rhs.size]), size(rhs.size) {
        std::memcpy(ptr, rhs.ptr, size);
    }

    // Copy assignment operator:
    Widget& operator=(const Widget &rhs) {
        Widget tmp(rhs);
        swap(tmp);
        return *this;
    }

    void swap(Widget &rhs) {
        std::swap(size, rhs.size);
        std::swap(ptr, rhs.ptr);
    }

private:
    char *ptr;
    size_t size;
};

// Output of test program:
//
// Size of vw: 500000
// Time for one push_back on full vector: 53.668
And one enhanced to support move semantics:
#include <cstring>
#include <utility>

class Widget {
public:
    const size_t TEST_SIZE = 10000;

    Widget() : ptr(new char[TEST_SIZE]), size(TEST_SIZE) {}
    ~Widget() { delete[] ptr; }

    // Copy constructor:
    Widget(const Widget &rhs)
      : ptr(new char[rhs.size]), size(rhs.size) {
        std::memcpy(ptr, rhs.ptr, size);
    }

    // Move constructor:
    Widget(Widget &&rhs) noexcept
      : ptr(rhs.ptr), size(rhs.size) {
        rhs.ptr = nullptr;
        rhs.size = 0;
    }

    // Copy assignment operator:
    Widget& operator=(const Widget &rhs) {
        Widget tmp(rhs);
        swap(tmp);
        return *this;
    }

    void swap(Widget &rhs) noexcept {
        std::swap(size, rhs.size);
        std::swap(ptr, rhs.ptr);
    }

    // Move assignment operator:
    Widget& operator=(Widget &&rhs) noexcept {
        Widget tmp(std::move(rhs));
        swap(tmp);
        return *this;
    }

private:
    char *ptr;
    size_t size;
};

// Output:
//
// Size of vw: 500000
// Time for one push_back on full vector: 0.032
Using a simple timer class, the test program populates a vector with half a million instances of a Widget and, making sure the vector is at its capacity, reports the time for a single additional push_back of a Widget onto the vector. In C++98, the push_back operation takes almost a minute (on my ancient Dell Latitude E6500). In modern C++ and using the move-enabled version of Widget, the same push_back operation takes .031 seconds. Here's a simple timer class:
#include <ctime>

class Timer {
public:
    Timer(): start(std::clock()) {}
    operator double() const {
        return (std::clock() - start) / static_cast<double>(CLOCKS_PER_SEC);
    }
    void reset() { start = std::clock(); }
private:
    std::clock_t start;
};
The timings above come from a small test program that times one push_back call on a large, full vector of Widgets (a memory-hogging class).
It's difficult to say much more about these new features without drilling down into implementation techniques. If you'd like to learn more, register for my in-person training course on October 26-28, Transitioning to Modern C++, where these features and much more will be explored in greater detail. | https://www.oreilly.com/radar/2-major-reasons-why-modern-c-is-a-performance-beast/ | CC-MAIN-2019-39 | en | refinedweb
I am working on a multiplayer game in Unity which uses PlayFab for authentication and Photon for hosting the multiplayer. I can successfully get players into the same room, and I can load the scene after players 'join' the room; however, when 2 players are in the same room, they cannot see each other. This is my authentication service:
public class LoginWithCustomID : MonoBehaviour
{
    private string _playFabPlayerIdCache;
    private bool _isNewAccount;
    private string _playerName;

    // Use this to auth normally for PlayFab
    void Awake()
    {
        PhotonNetwork.autoJoinLobby = false;
        PhotonNetwork.automaticallySyncScene = true;
        DontDestroyOnLoad(gameObject);
        authenticateWithPlayfab();
    }

    private void authenticateWithPlayfab()
    {
        var request = new LoginWithCustomIDRequest
        {
            CustomId = "CustomId123",
            CreateAccount = true,
            InfoRequestParameters = new GetPlayerCombinedInfoRequestParams()
            {
                GetUserAccountInfo = true,
                ProfileConstraints = new PlayerProfileViewConstraints()
                {
                    ShowDisplayName = true
                }
            }
        };
        PlayFabClientAPI.LoginWithCustomID(request, requestPhotonToken, OnLoginFailure);
    }

    private void requestPhotonToken(LoginResult result)
    {
        PlayerAccountService.loginResult = result;
        _playFabPlayerIdCache = result.PlayFabId;
        _playerName = result.InfoResultPayload.AccountInfo.TitleInfo.DisplayName;
        if (result.NewlyCreated)
        {
            _isNewAccount = true;
            setupNewPlayer(result);
        }
        PlayFabClientAPI.GetPhotonAuthenticationToken(new GetPhotonAuthenticationTokenRequest()
        {
            PhotonApplicationId = "d090b4a8-35dc-41de-b33b-748861e04ccb"
        }, AuthenticateWithPhoton, OnLoginFailure);
    }

    private void setupNewPlayer(LoginResult result)
    {
        PlayFabClientAPI.UpdateUserData(
            new UpdateUserDataRequest()
            {
                Data = new Dictionary<string, string>()
                {
                    { "Level", "1" },
                    { "xp", "0" }
                }
            },
            success => { Debug.Log("Set User Data"); },
            failure => { Debug.Log("Failed to set User Data.."); }
        );
    }

    private void AuthenticateWithPhoton(GetPhotonAuthenticationTokenResult result)
    {
        Debug.Log("Photon token acquired: " + result.PhotonCustomAuthenticationToken);
        var customAuth = new AuthenticationValues { AuthType = CustomAuthenticationType.Custom };
        customAuth.AddAuthParameter("username", _playFabPlayerIdCache);
        customAuth.AddAuthParameter("token", result.PhotonCustomAuthenticationToken);
        PhotonNetwork.AuthValues = customAuth;
        setNextScene();
    }

    private void setNextScene()
    {
        if (_isNewAccount || _playerName == null)
        {
            SceneManager.LoadSceneAsync("CreatePlayerName", LoadSceneMode.Single);
        }
        else
        {
            SceneManager.LoadSceneAsync("LandingScene", LoadSceneMode.Single);
        }
    }

    private void OnLoginFailure(PlayFabError error)
    {
        Debug.LogWarning("something went wrong in auth login");
        Debug.LogError("Here's some debug info:");
        Debug.LogError(error.GenerateErrorReport());
    }
}
public class GameManager : Photon.PunBehaviour   // class declaration missing from the post; PunBehaviour assumed for the overrides
{
    public static GameManager instance;
    public static GameObject localPlayer;

    private void Awake()
    {
        if (instance != null)
        {
            DestroyImmediate(instance);
            return;
        }
        DontDestroyOnLoad(gameObject);
        instance = this;
        PhotonNetwork.automaticallySyncScene = true;
    }

    // Use this for initialization
    void Start()
    {
        PhotonNetwork.ConnectUsingSettings("A_0.0.1");
    }

    public void JoinGame()
    {
        RoomOptions ro = new RoomOptions();
        ro.MaxPlayers = 4;
        PhotonNetwork.JoinOrCreateRoom("Test Room 2", ro, null);
    }

    public override void OnJoinedRoom()
    {
        Debug.Log("Joined Room!");
        if (PhotonNetwork.isMasterClient)
        {
            PhotonNetwork.LoadLevel("Test_Map1");
        }
    }

    private void OnLevelWasLoaded(int level)
    {
        if (!PhotonNetwork.inRoom) return;
        localPlayer = PhotonNetwork.Instantiate(
            "Player",
            new Vector3(0, 1f, 0),
            Quaternion.identity,
            0);
    }

    public void LeaveRoom()
    {
        PhotonNetwork.LeaveRoom();
        SceneManager.LoadScene("LandingScene", LoadSceneMode.Single);
    }
}

This loads the scene that I named "Test_Map1" successfully, and I show within my scene the room name and number of active players in the room. When I do a run and build, I get a user's playerPrefab to load into the room. When I run the game through Unity, I can get a second player to log into the room. The problem is, the players do not see each other and I can not figure out why that is. I am following the PlayFab/Photon tutorials on their respective sites, but I can't find anything that I did wrong in either one. From what I read, it looks like my instantiate method might be wrong but I'm not sure why. Below is my player Prefab showing the components attached to it:
I apologize for this huge question, I just wanted to provide as much information as I could.
For a little bit of clarity, even in Unity when I load "Test_Map1" I even see that there are two players in the inspector when I have two instances running and that I can only control one of them. But I do not actually see the other player.
Turns out, the prefabs are loading in 'deactivated'. I placed a ball in the middle of my test map and each player could see the ball moving but could not see the player who was pushing it, causing it to move. So the connection is working fine, but for some reason my player prefabs are loading in 'deactivated', which is why I can't see them.
Answer by pfnathan · Apr 27, 2018 at 09:50 PM
Thanks for the findings, You might need to ping folks at Unity about prefab being loaded as deactivated state though.
Well, at least someone else recognizes my pain! haha Will do, thanks
| https://community.playfab.com/questions/19466/unity-playfab-and-photon-players-do-not-see-each-o.html | CC-MAIN-2019-39 | en | refinedweb |
SDL_JoystickEventState - Enable/disable joystick event polling
#include "SDL.h"

int SDL_JoystickEventState(int state);

ATTRIBUTES
       See attributes(7) for descriptions of the following attributes:

       +----------------+-----------------+
       | ATTRIBUTE TYPE | ATTRIBUTE VALUE |
       +----------------+-----------------+
       | Availability   | library/sdl     |
       +----------------+-----------------+
       | Stability      | Volatile        |
       +----------------+-----------------+

SEE ALSO
       SDL Joystick Functions, SDL_JoystickUpdate, SDL_JoyAxisEvent,
       SDL_JoyBallEvent, SDL_JoyButtonEvent, SDL_JoyHatEvent,
       SDL_JoystickEventState(3)
| https://docs.oracle.com/cd/E88353_01/html/E37842/sdl-joystickeventstate-3.html | CC-MAIN-2019-39 | en | refinedweb |
Core editor architecture
The `@ckeditor/ckeditor5-core` package is relatively simple. It comes with just a handful of classes. The ones you need to know are presented below.
# Editor classes
The `Editor` class represents the base of the editor. It is the entry point of the application, gluing all other components. It provides a few properties that you need to know:

- `config` – The configuration object.
- `plugins` and `commands` – The collection of loaded plugins and commands.
- `model` – The entry point to the editor’s data model.
- `data` – The data controller. It controls how data is retrieved from the document and set inside it.
- `editing` – The editing controller. It controls how the model is rendered to the user for editing.
- `keystrokes` – The keystroke handler. It allows to bind keystrokes to actions.
Besides that, the editor exposes a few methods:

- `create()` – The static `create()` method. Editor constructors are protected and you should create editors using this static method. It allows the initialization process to be asynchronous.
- `destroy()` – Destroys the editor.
- `execute()` – Executes the given command.
- `setData()` and `getData()` – A way to retrieve the data from the editor and set the data in the editor. The data format is controlled by the data controller’s data processor and it does not need to be a string (it can be e.g. JSON if you implement such a data processor). See, for example, how to produce Markdown output.
For the full list of methods check the API docs of the editor class you use. Specific editor implementations may provide additional methods.
The `Editor` class is a base to implement your own editors. CKEditor 5 Framework comes with a few editor types (for example, classic, inline, balloon and decoupled) but you can freely implement editors which work and look completely different. The only requirement is that you implement the `Editor` interface.
# Plugins
Plugins are a way to introduce editor features. In CKEditor 5 even typing is a plugin. What is more, the `Typing` plugin depends on the `Input` and `Delete` plugins which are responsible for handling the methods of inserting text and deleting content, respectively. At the same time, some plugins need to customize Backspace behavior in certain cases and handle it by themselves. This leaves the base plugins free of any non-generic knowledge.

Another important aspect of how existing CKEditor 5 plugins are implemented is the split into engine and UI parts. For example, the `BoldEditing` plugin introduces the schema definition, mechanisms rendering `<strong>` tags, and commands to apply and remove bold from text, while the `Bold` plugin adds the UI of the feature (i.e. the button). This feature split is meant to allow for greater reuse (one can take the engine part and implement their own UI for a feature) as well as for running CKEditor 5 on the server side. At the same time, the feature split is not perfect yet and will be improved.
The tl;dr of this is that:
- Every feature is implemented or at least enabled by a plugin.
- Plugins are highly granular.
- Plugins know everything about the editor.
- Plugins should know as little about other plugins as possible.
These are the rules based on which the official plugins were implemented. When implementing your own plugins, if you do not plan to publish them, you can reduce this list to the first point.
After this lengthy introduction (which is aimed at making it easier for you to digest the existing plugins), the plugin API can be explained.
All plugins need to implement the `PluginInterface`. The easiest way to do so is by inheriting from the `Plugin` class. The plugin initialization code should be located in the `init()` method (which can return a promise). If some piece of code needs to be executed after other plugins are initialized, you can put it in the `afterInit()` method. The dependencies between plugins are implemented using the static `requires` property.
```js
import MyDependency from 'some/other/plugin';

class MyPlugin extends Plugin {
	static get requires() {
		return [ MyDependency ];
	}

	init() {
		// Initialize your plugin here.

		this.editor; // The editor instance which loaded this plugin.
	}
}
```
You can see how to implement a simple plugin in the Quick start guide.
# Commands
A command is a combination of an action (a callback) and a state (a set of properties). For instance, the `bold` command applies or removes the bold attribute from the selected text. If the text in which the selection is placed has bold applied already, the value of the command is `true`, `false` otherwise. If the `bold` command can be executed on the current selection, it is enabled. If not (because, for example, bold is not allowed in this place), it is disabled.
We recommend using the official CKEditor 5 inspector for development and debugging. It will give you tons of useful information about the state of the editor such as internal data structures, selection, commands, and many more.
All commands need to inherit from the `Command` class. Commands need to be added to the editor’s command collection so they can be executed by using the `Editor#execute()` method.
Take this example:
```js
class MyCommand extends Command {
	execute( message ) {
		console.log( message );
	}
}

class MyPlugin extends Plugin {
	init() {
		const editor = this.editor;

		editor.commands.add( 'myCommand', new MyCommand( editor ) );
	}
}
```
Calling `editor.execute( 'myCommand', 'Foo!' )` will log `Foo!` to the console.
To see how state management of a typical command like `bold` is implemented, have a look at some pieces of the `AttributeCommand` class on which `bold` is based.

The first thing to notice is the `refresh()` method:
```js
refresh() {
	const doc = this.editor.document;

	this.value = doc.selection.hasAttribute( this.attributeKey );
	this.isEnabled = doc.schema.checkAttributeInSelection( doc.selection, this.attributeKey );
}
```
This method is called automatically (by the command itself) when any changes are applied to the model. This means that the command automatically refreshes its own state when anything changes in the editor.
The important thing about commands is that every change in their state as well as calling the `execute()` method fire events (e.g. `#set:value` and `#change:value` when you change the `#value` property, and `#execute` when you execute the command).
Read more about this mechanism in the Observables deep dive guide.
These events make it possible to control the command from the outside. For instance, if you want to disable specific commands when some condition is true (for example, according to your application logic, they should be temporarily disabled) and there is no other, cleaner way to do that, you can block the command manually:
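A sketch of this technique — intercepting the `set:isEnabled` event at the highest priority so that every attempt to re-enable the command is vetoed — could look like the following. A tiny stand-in emitter replaces the real command so the snippet is self-contained; `makeCommand()` and `forceDisable()` are invented names, not CKEditor API:

```js
// Tiny stand-in for an observable command, just so this sketch runs on
// its own. The real code would listen on an actual CKEditor command.
function makeCommand() {
	const handlers = [];

	return {
		isEnabled: true,

		on( event, callback, options = {} ) {
			handlers.push( { event, callback, highest: options.priority === 'highest' } );
			// 'highest' priority handlers run first, as in CKEditor.
			handlers.sort( ( a, b ) => Number( b.highest ) - Number( a.highest ) );
		},

		// Fires 'set:isEnabled'; any handler may stop() the event to veto the change.
		setEnabled( value ) {
			let stopped = false;
			const evt = { stop: () => { stopped = true; } };

			for ( const handler of handlers ) {
				if ( handler.event === 'set:isEnabled' ) {
					handler.callback( evt );
				}

				if ( stopped ) {
					break;
				}
			}

			if ( !stopped ) {
				this.isEnabled = value;
			}
		}
	};
}

// The technique itself: veto every attempt to re-enable the command.
function forceDisable( evt ) {
	evt.stop();
}

const someCommand = makeCommand();

someCommand.on( 'set:isEnabled', forceDisable, { priority: 'highest' } );
someCommand.isEnabled = false;

someCommand.setEnabled( true ); // Vetoed by forceDisable.
console.log( someCommand.isEnabled ); // -> false
```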
The command will now be disabled as long as you do not `off()` this listener, regardless of how many times `someCommand.refresh()` is called.
# Event system and observables
CKEditor 5 has an event-based architecture so you can find `EmitterMixin` and `ObservableMixin` mixed all over the place. Both mechanisms allow for decoupling the code and make it extensible.
Most of the classes that have already been mentioned are either emitters or observables (observable is an emitter, too). An emitter can emit (fire) events as well as listen to them.
```js
class MyPlugin extends Plugin {
	init() {
		// Make MyPlugin listen to someCommand#execute.
		this.listenTo( someCommand, 'execute', () => {
			console.log( 'someCommand was executed' );
		} );

		// Make MyPlugin listen to someOtherCommand#execute and block it.
		// You listen with a high priority to block the event before
		// someOtherCommand's execute() method is called.
		this.listenTo( someOtherCommand, 'execute', evt => {
			evt.stop();
		}, { priority: 'high' } );
	}

	// Inherited from Plugin:
	destroy() {
		// Removes all listeners added with this.listenTo();
		this.stopListening();
	}
}
```
The second listener to `'execute'` shows one of the very common practices in CKEditor 5 code. Basically, the default action of `'execute'` (which is calling the `execute()` method) is registered as a listener to that event with a default priority. Thanks to that, by listening to the event using `'low'` or `'high'` priorities you can execute some code before or after `execute()` is really called. If you stop the event, then the `execute()` method will not be called at all. In this particular case, the `Command#execute()` method was decorated with the event using the `ObservableMixin#decorate()` function:
```js
import ObservableMixin from '@ckeditor/ckeditor5-utils/src/observablemixin';
import mix from '@ckeditor/ckeditor5-utils/src/mix';

class Command {
	constructor() {
		this.decorate( 'execute' );
	}

	// Will now fire the #execute event automatically.
	execute() {}
}

// Mix ObservableMixin into Command.
mix( Command, ObservableMixin );
```
Check out the deep dive into observables guide to learn more about the advanced usage of observables with some additional examples.
Besides decorating methods with events, observables make it possible to observe their chosen properties. For instance, the `Command` class makes its `#value` and `#isEnabled` observable by calling `set()`:
```js
class Command {
	constructor() {
		this.set( 'value', undefined );
		this.set( 'isEnabled', undefined );
	}
}

mix( Command, ObservableMixin );

const command = new Command();

command.on( 'change:value', ( evt, propertyName, newValue, oldValue ) => {
	console.log( `${ propertyName } has changed from ${ oldValue } to ${ newValue }` );
} );

command.value = true; // -> 'value has changed from undefined to true'
```
Observables have one more feature which is widely used by the editor (especially in the UI library) — the ability to bind the value of one object’s property to the value of some other property or properties (of one or more objects). This, of course, can also be processed by callbacks.
Assuming that `target` and `source` are observables and that the used properties are observable:
```js
target.bind( 'foo' ).to( source );

source.foo = 1;
target.foo; // -> 1

// Or:
target.bind( 'foo' ).to( source, 'bar' );

source.bar = 1;
target.foo; // -> 1
```
You can also find more about data bindings in the user interface in the UI library architecture guide.
# Read next
Once you have learned how to create plugins and commands you can read how to implement real editing features in the Editing engine guide. | https://ckeditor.com/docs/ckeditor5/latest/framework/guides/architecture/core-editor-architecture.html | CC-MAIN-2019-39 | en | refinedweb |
There are plenty of ways to create a game for Android and one important way is to do it from scratch in Android Studio with Java. This gives you the maximum control over how you want your game to look and behave and the process will teach you skills you can use in a range of other scenarios too – whether you’re creating a splash screen for an app or you just want to add some animations. With that in mind, this tutorial is going to show you how to create a simple 2D game using Android Studio and the Java. You can find all the code and resources at Github if you want to follow along.
Setting up
In order to create our game, we’re going to need to deal with a few specific concepts: game loops, threads and canvases. To begin with, start up Android Studio. If you don’t have it installed then check out our full introduction to Android Studio, which goes over the installation process. Now start a new project and make sure you choose the ‘Empty Activity’ template. This is a game, so of course you don’t need elements like the FAB button complicating matters.
The first thing you want to do is to change AppCompatActivity to Activity. This means we won’t be using the action bar features.
Similarly, we also want to make our game full screen. Add the following code to onCreate() before the call to setContentView():
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
        WindowManager.LayoutParams.FLAG_FULLSCREEN);
this.requestWindowFeature(Window.FEATURE_NO_TITLE);
Note that if you write out some code and it gets underlined in red, that probably means you need to import a class. In other words, you need to tell Android Studio that you wish to use certain statements and make them available. If you just click anywhere on the underlined word and then hit Alt+Enter, then that will be done for you automatically!
Creating your game view
You may be used to apps that use an XML script to define the layout of views like buttons, images and labels. This is what the line setContentView is doing for us.
But again, this is a game meaning it doesn’t need to have browser windows or scrolling recycler views. Instead of that, we want to show a canvas instead. In Android Studio a canvas is just the same as it is in art: it’s a medium that we can draw on.
So change that line to read as so:
setContentView(new GameView(this));
You’ll find that this is once again underlined red. But now if you press Alt+Enter, you don’t have the option to import the class. Instead, you have the option to create a class. In other words, we’re about to make our own class that will define what’s going to go on the canvas. This is what will allow us to draw to the screen, rather than just showing ready-made views.
So right click on the package name in your hierarchy over on the left and choose New > Class. You’ll now be presented with a window to create your class and you’re going to call it GameView. Under SuperClass, write: android.view.SurfaceView which means that the class will inherit methods – its capabilities – from SurfaceView.
In the Interface(s) box, you’ll write android.view.SurfaceHolder.Callback. As with any class, we now need to create our constructor. Use this code:
private MainThread thread;

public GameView(Context context) {
    super(context);
    getHolder().addCallback(this);
}
Each time our class is called to make a new object (in this case our surface), it will run the constructor and it will create a new surface. The line ‘super’ calls the superclass and in our case, that is the SurfaceView.
By adding Callback, we’re able to intercept events.
Now override some methods:
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}

@Override
public void surfaceCreated(SurfaceHolder holder) {
}

@Override
public void surfaceDestroyed(SurfaceHolder holder) {
}
These basically allow us to override (hence the name) methods in the superclass (SurfaceView). You should now have no more red underlines in your code. Nice.
You just created a new class and each time we refer to that, it will build the canvas for your game to get painted onto. Classes create objects and we need one more.
Creating threads
Our new class is going to be called MainThread. And its job will be to create a thread. A thread is essentially like a parallel fork of code that can run simultaneously alongside the main part of your code. You can have lots of threads running all at once, thereby allowing things to occur simultaneously rather than adhering to a strict sequence. This is important for a game, because we need to make sure that it keeps on running smoothly, even when a lot is going on.
Create your new class just as you did before and this time it is going to extend Thread. In the constructor we’re just going to call super(). Remember, that’s the super class, which is Thread, and which can do all the heavy lifting for us. This is like creating a program to wash the dishes that just calls washingMachine().
When this class is called, it's going to create a separate thread that runs as an offshoot of the main thing. And it's from here that we want to create our GameView. That means we also need to reference the GameView class and we're also using SurfaceHolder which contains the canvas. So if the canvas is the surface, SurfaceHolder is the easel. And GameView is what puts it all together.
The full thing should look like so:
public class MainThread extends Thread {
    private SurfaceHolder surfaceHolder;
    private GameView gameView;

    public MainThread(SurfaceHolder surfaceHolder, GameView gameView) {
        super();
        this.surfaceHolder = surfaceHolder;
        this.gameView = gameView;
    }
}
Schweet. We now have a GameView and a thread!
Creating the game loop
We now have the raw materials we need to make our game, but nothing is happening. This is where the game loop comes in.
For now, we’re still in the MainThread class and we’re going to override a method from the superclass. This one is run.
And it goes a little something like this:
@Override
public void run() {
    while (running) {
        canvas = null;
        try {
            canvas = this.surfaceHolder.lockCanvas();
            synchronized (surfaceHolder) {
                this.gameView.update();
                this.gameView.draw(canvas);
            }
        } catch (Exception e) {
        } finally {
            if (canvas != null) {
                try {
                    surfaceHolder.unlockCanvasAndPost(canvas);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
You’ll see a lot of underlining, so we need to add some more variables and references. Head back to the top and add:
private SurfaceHolder surfaceHolder;
private GameView gameView;
private boolean running;
public static Canvas canvas;
Remember to import Canvas. Canvas is the thing we will actually be drawing on. As for ‘lockCanvas’, this is important because it is what essentially freezes the canvas to allow us to draw on it. That’s important because otherwise, you could have multiple threads attempting to draw on it at once. Just know that in order to edit the canvas, you must first lock the canvas.
Update is a method that we are going to create and this is where the fun stuff will happen later on.
The try and catch meanwhile are simply requirements of Java that show we’re willing to try and handle exceptions (errors) that might occur if the canvas isn’t ready etc.
Finally, we want to be able to start our thread when we need it. To do this, we’ll need another method here that allows us to set things in motion. That’s what the running variable is for (note that a Boolean is a type of variable that is only ever true or false). Add this method to the MainThread class:
public void setRunning(boolean isRunning) {
    running = isRunning;
}
But at this point, one thing should still be highlighted and that's update. This is because we haven't created the update method yet. So pop back into GameView and now add the method.
public void update() { }
We also need to start the thread! We’re going to do this in our surfaceCreated method:
@Override
public void surfaceCreated(SurfaceHolder holder) {
    thread.setRunning(true);
    thread.start();
}
We also need to stop the thread when the surface is destroyed. As you might have guessed, we handle this in the surfaceDestroyed method. But seeing as it can actually take multiple attempts to stop a thread, we’re going to put this in a loop and use try and catch again. Like so:
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
    boolean retry = true;
    while (retry) {
        try {
            thread.setRunning(false);
            thread.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        retry = false;
    }
}
And finally, head up to the constructor and make sure to create the new instance of your thread, otherwise you’ll get the dreaded null pointer exception! And then we’re going to make GameView focusable, meaning it can handle events.
thread = new MainThread(getHolder(), this);
setFocusable(true);
Now you can finally actually test this thing! That’s right, click run and it should actually run without any errors. Prepare to be blown away!
It’s… it’s… a blank screen! All that code. For a blank screen. But, this is a blank screen of opportunity. You’ve got your surface up and running with a game loop to handle events. Now all that’s left is make stuff happen. It doesn’t even matter if you didn’t follow everything in the tutorial up to this point. Point is, you can simply recycle this code to start making glorious games!
Doing a graphics
Right, now we have a blank screen to draw on, all we need to do is draw on it. Fortunately, that’s the simple part. All you need to do is to override the draw method in our GameView class and then add some pretty pictures:
@Override
public void draw(Canvas canvas) {
    super.draw(canvas);
    if (canvas != null) {
        canvas.drawColor(Color.WHITE);
        Paint paint = new Paint();
        paint.setColor(Color.rgb(250, 0, 0));
        canvas.drawRect(100, 100, 200, 200, paint);
    }
}
Run this and you should now have a pretty red square in the top left of an otherwise-white screen. This is certainly an improvement.
You could theoretically create pretty much your entire game by sticking it inside this method (and overriding onTouchEvent to handle input) but that wouldn’t be a terribly good way to go about things. Placing new Paint inside our loop will slow things down considerably and even if we put this elsewhere, adding too much code to the draw method would get ugly and difficult to follow.
Instead, it makes a lot more sense to handle game objects with their own classes. We’re going to start with one that shows a character and this class will be called CharacterSprite. Go ahead and make that.
This class is going to draw a sprite onto the canvas and will look like so
public class CharacterSprite {
    private Bitmap image;

    public CharacterSprite(Bitmap bmp) {
        image = bmp;
    }

    public void draw(Canvas canvas) {
        canvas.drawBitmap(image, 100, 100, null);
    }
}
Now to use this, you’ll need to load the bitmap first and then call the class from GameView. Add a reference to private CharacterSprite characterSprite and then in the surfaceCreated method, add the line:
characterSprite = new CharacterSprite(BitmapFactory.decodeResource(getResources(),R.drawable.avdgreen));
As you can see, the bitmap we’re loading is stored in resources and is called avdgreen (it was from a previous game). Now all you need to do is pass that bitmap to the new class in the draw method with:
characterSprite.draw(canvas);
Now click run and you should see your graphic appear on your screen! This is BeeBoo. I used to draw him in my school textbooks.
What if we wanted to make this little guy move? Simple: we just create x and y variables for his positions and then change these values in an update method.
So add the references to your CharacterSprite and then draw your bitmap at x, y. Create the update method here and for now we're just going to try:
y++;
Each time the game loop runs, we’ll move the character down the screen. Remember, y coordinates are measured from the top so 0 is the top of the screen. Of course we need to call the update method in CharacterSprite from the update method in GameView.
Press play again and now you’ll see that your image slowly traces down the screen. We’re not winning any game awards just yet but it’s a start!
Okay, to make things slightly more interesting, I’m just going to drop some ‘bouncy ball’ code here. This will make our graphic bounce around the screen off the edges, like those old Windows screensavers. You know, the strangely hypnotic ones.
public void update() {
    x += xVelocity;
    y += yVelocity;
    if ((x > screenWidth - image.getWidth()) || (x < 0)) {
        xVelocity = xVelocity * -1;
    }
    if ((y > screenHeight - image.getHeight()) || (y < 0)) {
        yVelocity = yVelocity * -1;
    }
}
You will also need to define these variables:
private int xVelocity = 10;
private int yVelocity = 5;
private int screenWidth = Resources.getSystem().getDisplayMetrics().widthPixels;
private int screenHeight = Resources.getSystem().getDisplayMetrics().heightPixels;
Optimization
There is plenty more to delve into here, from handling player input, to scaling images, to managing having lots of characters all moving around the screen at once. Right now, the character is bouncing but if you look very closely there is slight stuttering. It’s not terrible but the fact that you can see it with the naked eye is something of a warning sign. The speed also varies a lot on the emulator compared to a physical device. Now imagine what happens when you have tons going on on the screen at once!
There are a few solutions to this problem. What I want to do to start with, is to create a private integer in MainThread and call that targetFPS. This will have the value of 60. I’m going to try and get my game to run at this speed and meanwhile, I’ll be checking to ensure it is. For that, I also want a private double called averageFPS.
I’m also going to update the run method in order to measure how long each game loop is taking and then to pause that game loop temporarily if it is ahead of the targetFPS. We’re then going to calculate how long it now took and then print that so we can see it in the log.
@Override
public void run() {
    long startTime;
    long timeMillis;
    long waitTime;
    long totalTime = 0;
    int frameCount = 0;
    long targetTime = 1000 / targetFPS;

    while (running) {
        startTime = System.nanoTime();
        canvas = null;
        try {
            canvas = this.surfaceHolder.lockCanvas();
            synchronized (surfaceHolder) {
                this.gameView.update();
                this.gameView.draw(canvas);
            }
        } catch (Exception e) {
        } finally {
            if (canvas != null) {
                try {
                    surfaceHolder.unlockCanvasAndPost(canvas);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
        timeMillis = (System.nanoTime() - startTime) / 1000000;
        waitTime = targetTime - timeMillis;
        try {
            this.sleep(waitTime);
        } catch (Exception e) {}
        totalTime += System.nanoTime() - startTime;
        frameCount++;
        if (frameCount == targetFPS) {
            averageFPS = 1000 / ((totalTime / frameCount) / 1000000);
            frameCount = 0;
            totalTime = 0;
            System.out.println(averageFPS);
        }
    }
}
Now our game is attempting to lock its FPS to 60 and you should find that it generally measures a fairly steady 58-62 FPS on a modern device. On the emulator though you might get a different result.
Try changing that 60 to 30 and see what happens. The game slows down and it should now read 30 in your logcat.
Closing Thoughts
There are some other things we can do to optimize performance too. There’s a great blog post on the subject here. Try to refrain from ever creating new instances of Paint or bitmaps inside the loop and do all initializing outside before the game begins.
If you’re planning on creating the next hit Android game then there are certainly easier and more efficient ways to go about it these days. But there are definitely still use-case scenarios for being able to draw onto a canvas and it’s a highly useful skill to add to your repertoire. I hope this guide has helped somewhat and wish you the best of luck in your upcoming coding ventures!
| https://www.androidauthority.com/android-game-java-785331/ | CC-MAIN-2019-39 | en | refinedweb |
This is the mail archive of the binutils@sourceware.org mailing list for the binutils project.
On 04/03/2012 10:39 PM, Tom Tromey wrote:
>>>>>> :

Yes, but (at least for the binutils case) that's only because you
already have an hack *unrelated to the cygnus option* to make it work;
i.e., in 'binutils/doc/Makefile.am', I read:

  # Automake 1.9 will only build info files in the objdir if they are
  # mentioned in DISTCLEANFILES.  It doesn't have to be unconditional,
  # though, so we use a bogus condition.
  if GENINSRC_NEVER
  DISTCLEANFILES = binutils.info
  endif

> barimba. pwd
> /home/tromey/gnu/baseline-gdb/build/binutils
> barimba. grep '^ ./doc/binutils.info
> barimba. find ../../src/binutils -name 'binutils.info'
> barimba.
>
> How did you test it?

With the testcase attached to my mail (warning: it requires the Automake
testsuite infrastructure to work).  I can transform it in an independent
test script if you are really interested.

> If you built from a distribution tar, then it is expected that the info
> file would be in srcdir.

I didn't use the binutils distribution to test my claim, but the minimal
test case I had created on purpose, and attached in the previous mail.

Regards,
  Stefano
| http://www.sourceware.org/ml/binutils/2012-04/msg00049.html | CC-MAIN-2017-47 | en | refinedweb |
iswgraph man page
iswgraph — test for graphic wide character
Synopsis
#include <wctype.h>

int iswgraph(wint_t wc);
Return Value
The iswgraph() function returns nonzero if wc is a wide character belonging to the wide-character class "graph". Otherwise, it returns zero.
Attributes
For an explanation of the terms used in this section, see attributes(7).
Conforming to
POSIX.1-2001, POSIX.1-2008, C99.
Notes
The behavior of iswgraph() depends on the LC_CTYPE category of the current locale.
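Example

A small helper built on iswgraph() — for instance, one that checks whether every wide character in a string is graphic — might look like this:

```c
#include <wchar.h>
#include <wctype.h>
#include <assert.h>

/* Returns 1 if every wide character in s belongs to the "graph"
   class (visible and not a space), 0 otherwise. */
int all_graphic(const wchar_t *s)
{
    for (; *s != L'\0'; s++) {
        if (!iswgraph((wint_t)*s))
            return 0;
    }
    return 1;
}
```

In the default C locale, all_graphic(L"abc!?") is 1, while all_graphic(L"a b") is 0 because the space character is printable but not graphic.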
See Also
isgraph(3), iswctype(3)
Colophon
This page is part of release 4.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
isalpha(3), iswctype(3). | https://www.mankier.com/3/iswgraph | CC-MAIN-2017-47 | en | refinedweb |
Patch Exchange Autodiscover Code for Security Issue

The change removes the unauthenticated GET fallback attempt for the
Autodiscover process. Given that the Autodiscover code is functionally
broken and this fallback attempt wouldn't succeed unless an attacker
faked a success response, a good way to patch the security issue is to
disable the attempt. The change also updates the request content type,
disables automatic redirects, and allows for parsing namespaces to help
the first two attempts succeed. As this is not meant to be a functional
patch but a security patch, there are no further changes to the
Autodiscover code.

BUG: 26488455
Change-Id: I0fc93c95e755c8fa60e94da5bec4b3b4c49cdfc1
| https://android.googlesource.com/platform/packages/apps/Exchange/+/0d1a38b | CC-MAIN-2017-47 | en | refinedweb |
The Qt Contributors Summit [1] is happening in Berlin from 16-18 June to discuss
the future of Qt under Open Governance. Many members of the KDE community
will be there either as direct representatives of KDE or on behalf of their
employer. A rough estimate puts our presence at about 10% of the 200-250
attendees.
We would like to co-ordinate our efforts at QtCS to ensure the best possible
outcome for KDE and Qt. To help this we would like all KDE community members
attending to list their name on the KDE at QtCS wiki page [2].
Hi,
KCalendarSystem is a public class with many virtual methods which are
reimplemented in derived classes such as KCalendarSystemGregorian. The
derived classes are not exported or part of the api, only KCalendarSystem is
exposed, but the derived classes are created and returned in a static factory
method.
I'm implementing support for US week numbers, but there's conflicting
information on the great interweb tubes as to what the standard is.
Some sources say the US Standard is Week 1 is from Jan 1 to the first Saturday
of the year (which may be less than 7 days) then each following week starts
from Sunday.
Other sources say Week 1 starts on the first Sunday in the year, with the days
preceeding it being labelled either Week 0 or the last week of the previous
year.
Finally there's some suggestion the US military uses simple week numbering,
i.e.
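For concreteness, here is one way the first convention described above (Week 1 runs from Jan 1 through the first Saturday, possibly fewer than 7 days; every later week starts on Sunday) could be computed. This is an illustrative Java sketch, not KDE or KCalendarSystem code, and the class name is made up for the example:

```java
import java.time.LocalDate;

public class UsWeekNumber {
    /** Week 1 = Jan 1 through the first Saturday (possibly < 7 days);
     *  every following week starts on Sunday. */
    public static int weekNumber(LocalDate date) {
        LocalDate jan1 = LocalDate.of(date.getYear(), 1, 1);
        // DayOfWeek.getValue(): Monday=1 .. Sunday=7, so Sunday % 7 == 0
        // and the partial first week has 7 days; a Saturday Jan 1 gives 1 day.
        int daysInWeek1 = 7 - (jan1.getDayOfWeek().getValue() % 7);
        int dayOfYear = date.getDayOfYear();
        if (dayOfYear <= daysInWeek1) {
            return 1;
        }
        // Remaining days fall into full Sunday-to-Saturday weeks.
        return 2 + (dayOfYear - daysInWeek1 - 1) / 7;
    }
}
```

For example, 2024 begins on a Monday, so Jan 1–6 is Week 1 and Sunday Jan 7 starts Week 2. The second convention (Week 1 starts on the first Sunday) would differ only in treating those leading days as Week 0 or as the last week of the previous year.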
There's been a short discussion on the GeoClue mailing list related to
resolving the issues we have around their dependencies on gconf and gsettings
and the latest response has been:
Did anything come out from discussions on feature branch naming in git? With
GSoC starting soon we'll be getting a lot of new feature branches and it would
be nice if they were consistantly named to make them easy to find and manage.
See the original meeting notes.
I'd prefer to see the gsoc branches under a common prefix in the main project
repo rather than as personal branches or repos:
origin/gsoc2011/<subproject>/<branchname>
e.g.
I was wondering if we need to define a "New Dependencies" policy, or at least
some guidelines to remind people what to think about when choosing or creating
new dependencies?
Something like:
*
I'll be attending the OpenPrinting Summit [1] to discuss how to complete the
Common Printing Dialog [2] and integrate it into KDE and Qt. I'm looking for
any feedback people may have about the CPD, and any questions you want me to
ask while I'm there.
The CPD is a common print dialog implementation in Qt and Gtk that gets called
via DBus. The dialog includes a preview image, more user-friendly options,
better driver integration, and settings management. | http://www.devheads.net/people/20006 | CC-MAIN-2017-47 | en | refinedweb |
I think recursively much better in Haskell, but interviewers aren't always cool when you ask to solve the problem in a non-standard language (I lucked out last time).
Because I was writing a lot of similar code, I created an interface first, so that I could write parametrized test cases. These methods should really be static, but I compromised on that in order to make my testing easier.
public interface Sort {
    public abstract void sort(int[] array);
}
This was pretty easy, and allowed me to write a parametrized test case that would run all my tests on anything implementing Sort. So I had some code that looked like this:
private Sort sort;

public TestSort(Sort sort) {
    this.sort = sort;
}

@Parameters
public static Collection regExValues() {
    return Arrays.asList(new Object[][] {
        { new BubbleSort() },
        { new InsertionSort() },
        { new ShellSort() },
        { new MergeSort() },
        { new QuickSort() },
        { new CountingSort() }
    });
}
And as I created each new class I just added it to my list of parameters, and all my tests (labeled @Test) would be run on them as well. My tests were pretty repetitive and probably could have been parametrized as well, but because I was testing all these different classes I opted for slightly more code in the tests. (Full test code below)
This approach really reduced my coding time, because once I’d set up the interface and the tests I could get a new class working very easily without using cut and paste. I also determined what I wanted to test: null, empty, one element, two elements sorted, two elements unsorted, odd number of elements, even number of elements, and longer with duplicates. When I added an extra test case part way through, it applied to all my previous classes. Determining the tests in advance was more effective than ad-hoc testing, for example, the odd number of elements test was helpful in catching an off-by-one bug.
It also encouraged me to write better code. I might write the function, understand it, and just have a little bug – but I’d keep at it, because I wanted all the tests to pass before I moved on. I also had one bug that was resulting in an array with just two elements reversed – I didn’t notice that staring at the debug statement, it looked fine. But of course, my tests noticed! And the time when I wrote the algorithm right first time? That was pretty nice! Even if it was the super easy CountingSort.
Full test code is below if you have any use for it! Note – parameterized test cases are JUnit 4, and I’ve just used ints rather than a Generic <extends Comparable> approach – if you decide to do that, just change to Integer or anything else that implements Comparable.
import java.util.Arrays;
import java.util.Collection;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TestSort {

    private Sort sort;

    public TestSort(Sort sort) {
        this.sort = sort;
    }

    @Parameters
    public static Collection regExValues() {
        return Arrays.asList(new Object[][] {
            { new BubbleSort() },
            { new InsertionSort() },
            { new ShellSort() },
            { new MergeSort() },
            { new QuickSort() },
            { new CountingSort() }
        });
    }

    // null array
    @Test(expected = IllegalArgumentException.class)
    public void testNullArray() {
        sort.sort(null);
    }

    // empty array
    @Test
    public void testEmptyArray() {
        int[] array = new int[0];
        sort.sort(array);
        Assert.assertArrayEquals(new int[0], array);
    }

    // one element array
    @Test
    public void testOneElementArray() {
        int[] array = {42};
        int[] test = {42};
        sort.sort(array);
        Assert.assertArrayEquals(test, array);
    }

    // two element array ordered
    @Test
    public void testTwoElementOrdArray() {
        int[] array = {7, 42};
        int[] test = {7, 42};
        sort.sort(array);
        Assert.assertArrayEquals(test, array);
    }

    // two element array unordered
    @Test
    public void testTwoElementUnordArray() {
        int[] array = {42, 7};
        int[] test = {7, 42};
        sort.sort(array);
        Assert.assertArrayEquals(test, array);
    }

    // odd numbered element array
    @Test
    public void testOddNoElementsArray() {
        int[] array = {42, 68, 9, 7, 100, 36, 27};
        int[] test = {7, 9, 27, 36, 42, 68, 100};
        sort.sort(array);
        Assert.assertArrayEquals(test, array);
    }

    // even numbered element array
    @Test
    public void testEvenNoElementsArray() {
        int[] array = {42, 68, 9, 7, 100, 36, 27, 99};
        int[] test = {7, 9, 27, 36, 42, 68, 99, 100};
        sort.sort(array);
        Assert.assertArrayEquals(test, array);
    }

    // array in reverse order
    @Test
    public void testReverseOrder() {
        int[] array = {100, 99, 68, 42, 36, 27, 9, 7};
        int[] test = {7, 9, 27, 36, 42, 68, 99, 100};
        sort.sort(array);
        Assert.assertArrayEquals(test, array);
    }

    // longer array
    @Test
    public void testLongerArray() {
        int[] array = {13, 14, 94, 33, 82, 25, 59, 94, 65, 23, 45, 27, 73, 25, 39, 10};
        int[] test = {10, 13, 14, 23, 25, 25, 27, 33, 39, 45, 59, 65, 73, 82, 94, 94};
        sort.sort(array);
        Assert.assertArrayEquals(test, array);
    }
}
Demonstrates how to add a native SAFEARRAY to a database and how to marshal a managed array from a database to a native SAFEARRAY.

In this example, native SAFEARRAY types are passed as values for the database column ArrayIntsCol. Inside DatabaseClass, these SAFEARRAY types are marshaled to managed objects using the marshaling functionality found in the System::Runtime::InteropServices namespace. Specifically, one method is used to marshal a SAFEARRAY to a managed array of integers, and another is used to marshal a managed array of integers to a SAFEARRAY.
Output
Compiling the Code
To compile the code from the command line, save the code example in a file named adonet_marshal_safearray.cpp and enter the following statement:
Security
For information on security issues involving ADO.NET, see .
See Also
Reference
Other Resources

Data Access Using ADO.NET in C++
Native and .NET Interoperability | http://www.yaldex.com/c_net_tutorial/html/1034b9d7-ecf1-40f7-a9ee-53180e87a58c.htm | CC-MAIN-2017-47 | en | refinedweb |
Integrating the Android analytics monitor.
This article will describe how to integrate the Analytics Android monitor in an Android application.
Before getting started you must also have created an application so you have a product id. You will need that in the integration. Read more about how to Add Analytics to Your Application.
Adding the Android analytics monitor to your project.
Download the monitor for your project and unzip the downloaded monitor file to your machine.
Android Studio:
- Create a folder called libs in your project, at the same hierarchical level as the build and src folders, if it does not already exist.
- Put the analyticsmonitor.jar file inside.
- Right-click the library and select the option "Add as library".
- Make sure compile files('libs/analyticsmonitor.jar') is present in your build.gradle file:
dependencies { compile files('libs/analyticsmonitor.jar') }
Eclipse:
- Create a folder called libs in your project, at the same hierarchical level as the build and src folders, if it does not already exist.
- Right-click the libs folder and choose Import… Then select General -> File System in the tree. Click Next, browse in the file system to find the library's parent directory (i.e.: where the monitor library resides) and click OK.
- Click the directory name in the left pane, then check analyticsmonitor.jar in the right pane. Click Finish. This puts the library into your project.
- Right-click on your project, choose Build Path -> Configure Build Path. Then click the Libraries tab, then the Add JARs… button. Navigate to analyticsmonitor.jar in the libs directory and add it. Click the OK buttons in both windows.
The analytics monitor uses internet to send information, so you need to add INTERNET permission to your AndroidManifext.xml:
<uses-permission android:name="android.permission.INTERNET" />
Starting and stopping the monitor
The monitor should start when the application begins and stop when the user is done using it. The tricky part is that Android does not provide a native way to catch when the application exits. To work around this it is enough to know when the application is visible and when it goes to the background. Stopping the monitor is best placed right before going to the background, and if the user decides to come back after that (say the application is still alive), then the monitor needs to start again. This is done by defining a custom class that implements the Application.ActivityLifecycleCallbacks. This allows you to get all the calls from all the activities and respond to the correct moments for toggling the monitor’s state:
- Start the monitor at the beginning of the app.
- Stop the monitor if the application loses focus (call, home button etc.)
- Start the monitor if the user comes back.
- Stop the monitor if the user exits the application (back button all the way).
Here is a sample implementation:
import android.app.Activity;
import android.app.Application;
import android.os.Bundle;

public class TrackedActivityLifeCycleHandler implements Application.ActivityLifecycleCallbacks {

    private int activitiesCount = 0;

    @Override
    public void onActivityCreated(Activity activity, Bundle bundle) {
    }

    @Override
    public void onActivityStarted(Activity activity) {
    }

    @Override
    public void onActivityResumed(Activity activity) {
        if (appIsInBackground()) {
            // The application is either just starting or it has just been
            // resumed after being in the background.
            Analytics.getInstance(activity.getApplicationContext()).monitor().start();
        }
        activitiesCount++;
    }

    @Override
    public void onActivityPaused(Activity activity) {
    }

    @Override
    public void onActivityStopped(Activity activity) {
        activitiesCount--;
        if (appIsInBackground()) {
            // The application is going to the background (the user received
            // a call or pressed the Home button etc.)
            Analytics.getInstance(activity.getApplicationContext()).monitor().stop();
        }
    }

    @Override
    public void onActivitySaveInstanceState(Activity activity, Bundle bundle) {
    }

    @Override
    public void onActivityDestroyed(Activity activity) {
    }

    private boolean appIsInBackground() {
        return activitiesCount == 0;
    }
}
Now call registerActivityLifecycleCallbacks() in the onCreate() method of your application:
import android.app.Application;

public class MonitoredApp extends Application {

    @Override
    public void onCreate() {
        super.onCreate();
        registerActivityLifecycleCallbacks(new TrackedActivityLifeCycleHandler());
    }
}
Sharing a single monitor instance among all activities.
Sharing a single instance is done through a singleton:
import android.content.Context;

import eqatec.analytics.monitor.AnalyticsMonitorFactory;
import eqatec.analytics.monitor.IAnalyticsMonitor;
import eqatec.analytics.monitor.Version;

public final class Analytics {

    private final static String ANALYTICS_KEY = "123abc..."; // Use your real key here

    private static Analytics instance;
    private IAnalyticsMonitor monitor;

    private Analytics(Context context) {
        try {
            this.monitor = AnalyticsMonitorFactory.createMonitor(
                    context.getApplicationContext(),
                    ANALYTICS_KEY,
                    new Version(context.getResources().getString(R.string.version_name)));
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }

    public IAnalyticsMonitor monitor() {
        return this.monitor;
    }

    public static Analytics getInstance(Context context) {
        if (instance == null) {
            instance = new Analytics(context);
        }
        return instance;
    }
}
Now you can access the monitor and use its API to start tracking features:
Analytics.getInstance(this).monitor().trackFeature(String.format("%s.%s", this.getString(R.string.featureCategory1), this.getString(R.string.feature2))); | https://docs.telerik.com/platform/analytics/integration/monitor/platform/android | CC-MAIN-2017-47 | en | refinedweb |
Custom Qlabel show Videocamera on different thread
Goodmorning,
I'm using a raspberry pi3 board with a custom image built with Yocto, where I've included opencv libraries. Qt version is 5.7.1.
I succeded to show the camera video output on a QLabel, but now I wanted to make the opencv operations on a different thread, to not overload the GUI thread.
Hence, I've created a custom QLabel widget called videocamera, where the opencv task is done by a Worker class.
Unfortunately, when I get the error Segmentation fault. Have you got any ideas?
Thank you.
videocamera.h
#ifndef VIDEOCAMERA_H
#define VIDEOCAMERA_H

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/videoio.hpp>

#include <QWidget>
#include <QTimer>
#include <QLabel>
#include <QThread>
#include <QSignalMapper>

using namespace cv;

class Worker : public QObject
{
    Q_OBJECT

    QImage m_image;
    VideoCapture cap;

public slots:
    void doWork()
    {
        if (cap.open(0)) {
            Mat image;
            cap >> image;

            // conversion from Mat to QImage
            Mat dest;
            cvtColor(image, dest, CV_BGR2RGB);
            m_image = QImage((uchar*) dest.data, dest.cols, dest.rows, dest.step, QImage::Format_RGB888);
            emit resultReady(m_image);
        }
    }

    void deleteLater()
    {
    }

signals:
    void resultReady(const QImage &result);
};

class VideoCamera : public QLabel
{
    Q_OBJECT

public:
    explicit VideoCamera(QWidget *parent = nullptr);
    ~VideoCamera();

public slots:
    void updatePicture(const QImage&);

private:
    Worker* worker;
    QThread workerThread;
    QTimer* timer;
};

#endif // VIDEOCAMERA_H
videocamera.cpp
#include "videocamera.h"

using namespace cv;

VideoCamera::VideoCamera(QWidget *parent) : QLabel(parent)
{
    worker = new Worker();
    worker->moveToThread(&workerThread);

    timer = new QTimer(this);
    connect(timer, SIGNAL(timeout()), worker, SLOT(doWork()));
    connect(&workerThread, SIGNAL(finished()), worker, SLOT(deleteLater()));
    connect(worker, SIGNAL(resultReady(const QImage&)), this, SLOT(updatePicture(const QImage&)));

    workerThread.start();
    timer->start(10);
}

void VideoCamera::updatePicture(const QImage &image)
{
    // show QImage using QLabel
    this->setPixmap(QPixmap::fromImage(image));
}

VideoCamera::~VideoCamera()
{
    workerThread.quit();
    workerThread.wait();
}
Hi,
Your original data has likely become invalid by the time you try to create the QPixmap object. You should make a copy before sending it.
Hello SGaist,
thank you for your answer!! You are right. I solved passing directly the Pixmap, instead of QImage. Is it what you meant?
- SGaist Lifetime Qt Champion
No it's not, can you show the new version of your code ?
Aren't the const & references of void resultReady(const QImage &result); ignored by a Qt::QueuedConnection, so that a copy is sent anyway? The default connection type between threads is QueuedConnection after all.
@davidino I had a case where QImage was not properly registered in the metatype system; that caused problems for queued connections. Do you get any warnings in your console output during run time?
Or better yet, use the new Qt5 Syntax and see if it compiles:
// 5th argument Qt::QueuedConnection just to be sure
connect(worker, &Worker::resultReady, this, &VideoCamera::updatePicture, Qt::QueuedConnection);
Here it's the QImage construction that is important. It uses the third constructor which doesn't copy the data hence you have to handle the lifetime of the underlying data. In this case there's a need to force a deep copy.
Hello Gaist,
thank you very much. Now I got it!
Below the working Worker class.
class Worker : public QObject
{
    Q_OBJECT

    Mat dest;
    QImage m_image;
    VideoCapture cap;
    QObject* parent;

public slots:
    void doColor()
    {
        if (cap.isOpened()) {
            Mat image;
            cap >> image;

            // conversion from Mat to QImage
            cvtColor(image, dest, CV_BGR2RGB);
            m_image = QImage((uchar*) dest.data, dest.cols, dest.rows, dest.step, QImage::Format_RGB888);
            emit resultReady(m_image);
        }
        else
            cap = VideoCapture(0);
    }
Hello @J-Hilk,
thank you for your email.
The only console outputs that I have are the following, but they don't bother the program.
evdevkeyboard: Could not read from input device (No such device)
evdevkeyboard: Failed to query led states
I'll pass to Qt5 connect Syntax for sure, thank you.
There's another solution worth investigating. Since your images are likely not going to change size, you could make m_image directly the right size and then wrap its data in the dest Mat (see this constructor). Doing so, you can remove the m_image = ... line.
ModelMap
An experiment in using mirrors to convert from a map to a model and vice versa.
Introduction
Mirrors are a way of performing reflection in Dart. This permits an unknown class to be traversed and information about fields, properties and methods to be extracted, which is particularly useful if you wish to convert a serialized representation of an object such as JSON into an actual object instance.
Be warned that mirrors are still being developed and so leading up to the release of Dart v1.0 there may still be breaking changes that would stop this library from functioning.
Anything based on mirrors can not yet be fully compiled to javascript and so ModelMap is not recommended for browser application development at the moment.
At this stage ModelMap only worries about non-static, public fields and will ignore getters and setters.
Examples
Simple model
import 'package:model_map/model_map.dart';

class SimpleModel extends ModelMap {
  String string;
  int integer;
  bool flag;
  num float;
}

main() {
  var map = {
    'string': 'some text',
    'integer': 42,
    'flag': true,
    'float': 1.23
  };

  var model = new SimpleModel().fromMap(map);

  // The model is populated and ready to use at this point
}
You can also take an existing model and convert it to a map
// A map of <String, dynamic> var map = model.toMap();
Using JSON
A couple of utility functions are included that simply wrap fromMap and toMap using the built-in parse and stringify capabilities so that you can simply call fromJson and toJson on your model instance.
Complex model
ModelMap can support model instances within models, but they must all extend the ModelMap class. It also handles List and Map collections (maps are limited to string keys only, since this is all that JSON can use).
import 'package:model_map/model_map.dart'; class ComplexModel extends ModelMap { int integer; SimpleModel simple; List<SimpleModel> modelList; Map<String, DateTime> dateMap; Map<String, SimpleModel> modelMap; Map<String, List<SimpleModel>> modelListMap; } main() { var json = getJsonFromServer(); var model = new ComplexModel().fromJson(json); print(model.modelMap['a key'].flag); }
ModelMap should cope with reasonably complex object trees.
DateTime
When converting to JSON or a map, ModelMap will always convert dates to an ISO 8601 string, however when parsing a map it will accept an ISO 8601 string or an integer representing UTC unix time. | https://www.dartdocs.org/documentation/model_map/0.2.3/index.html | CC-MAIN-2017-47 | en | refinedweb |
Answers to "Duck Typing Done Right"
By bblfish on May 26, 2007
I woke up this morning with a large number of comments to my previous post "Duck Typing Done right" . It would be confusing to answer them all together in the comments section there, so I aggregated my responses here.
I realize the material covered here is very new to many people. Luckily it is very easy to understand. For a quick introduction see my short Video introduction to the Semantic Web.
Also I should mention that the RDF is a declarative framework. So its relationship to method Duck Typing is not a direct one. But nevertheless there is a lot to learn by understanding the simplicity of the RDF framework.
On the reference of Strings
Kevin asks why the URI "" is less ambigous than a string "Duck". In one respect Kevin is completely correct. In RDF they are both equally precise. But what they refer to is quite different from what one expects. The string "Duck" refers to the string "Duck". A URI on the other hand refers to the resource identified by it; URIs stand for Universal Resource Identifiers after all. The URI "", as defined above, refers to the set of Ducks. How do you know? Well you should be able to GET <> and receive a human or machine representation for it, selectable via content negotiation. This won't work in the simple examples I gave in my previous post, as they were just quick examples I hacked together by way of illustration. But try GETing <> for a real working example. See my longer post on this issue GET my meaning?
Think about the web. Everyday you type in URLs into a web browser and you get the page you want. When you type "" you don't sometimes get <>. The web works as well as it does, because URLs identify things uniquely. Everyone can mint their own if they own some section of the namespace, and PUT the meaning for that resource at that resource's location.
On Ambiguity and Vagueness
Phil Daws is correct to point out that URIs don't remove all fuzziness or vagueness. We can have fuzzy or vague concepts, and that is a good thing. foaf:knows, whilst unambiguous, is quite a fuzzily defined relation; its definition is what you will get if you click on its URL.
For more information on this see my post "Fuzzy thinking in Berkeley"
On UFOs
Paddy worries that this requires a Universal Class Hierarchy. No worries there. The Semantic Web is designed to work in a distributed way. People can grow their vocabularies, just like we all have grown the web by each publishing our own files on it. The Semantic Web is about linked data. The semantic web does not require UFOs (Unified Foundational Ontologies) to get going, and it may never need them at all, though I suspect that having one could be very helpful. See my longer post UFO's seen growing on the Web.
Relations are first class objects
Paddy and Jon Olson were mislead by my uses of classes to think that RDF ties relations/properties to classes. They don't. Relations in RDF are first class citizens, as you may see in the Dublin Core metadata initiative, which defines a set of very simple and very general relations to describe resources on the web, such as
dc:author,
dc:created etc... I think we need a
:sparql relation that would relate anything to an authoritative SPARQL endpoint, for example. There clearly is no need to constrain the domain of such a relation in any way.
Scalability and efficiency

Jon Olson agrees with me that duck typing is good enough for some very large and good software projects.
- URIs refer to resources,
- resources return representations,
- to describe something on the web one needs to
- refer to the thing one wishes to describe, and that requires a URI,
- second specify the property relation one wishes to attribute to it (and that also requires a URI)
- and finally specify the value of that property.
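The three-part statements described above can be sketched as a plain data type. This is purely illustrative — the class below and the example URIs are made up for this sketch, and real RDF toolkits such as Jena or Sesame define far richer types:

```java
import java.net.URI;

// A minimal illustration of an RDF-style statement: a subject resource
// and a property relation, both named by URIs, plus a value, which may
// be another URI or a plain literal.
public class Triple {
    final URI subject;
    final URI predicate;
    final Object object;

    Triple(URI subject, URI predicate, Object object) {
        this.subject = subject;
        this.predicate = predicate;
        this.object = object;
    }

    @Override
    public String toString() {
        // Render roughly in N-Triples style: URIs in angle brackets,
        // literals in quotes.
        String obj = (object instanceof URI) ? "<" + object + ">" : "\"" + object + "\"";
        return "<" + subject + "> <" + predicate + "> " + obj + " .";
    }
}
```

The point of the sketch is only that both the thing described and the relation are globally named, which is what lets independently published statements link up.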
Semantics
An anonymous writer mentions the "ugliness" of the syntax. This is not a problem. The semantic web is about semantics (see the illustration on this post). It defines the relationship of a string to what it names. It does not require a specific syntax. If you don't like the xml/rdf syntax, which most people think is overly complicated, then use the N3 syntax, or come up with something better.
On Other Languages
As mentioned above there need not be one syntax for RDF. Of course it helps in communication if we agree on something, and currently, for better of for worse that is rdf/xml.
But that does not mean that other procedural languages cannot play well with it. They can since the syntax is not what is important, but the semantics, and those are very well defined.
There are a number of very useful bindings in pretty much every language. From Franz lisp to the redland library for c, python, perl, ruby, to Prolog bindings, and many Java bindings such as Sesame and Jena. Way too much to list here. For a very comprehensive overview see Mike Bergman's full survey of Semantic tools.
Posted by Henry Story on May 29, 2007 at 09:36 AM CEST #
Could someone please explain why, when URIs are required for things like namespaces, everyone invariably uses URLs rather than URNs? (this is not a sarcastic comment, by the way, I would really like to know).
When I first learnt HTML years ago, one of the most confusing concepts was that of namespaces. The examples always used URLs for namespaces. It was really hard to grasp that these apparent addresses did not really exist at all, they were really just being used as unique names.
Relatively recently, last year, I finally got around to sussing out the differences between URIs, URLs and URNs (a couple of good explanations in the Wikipedia: and).
It seems to me that URNs are far clearer for naming or categorizing things than URLs are. Compare "" (URL) with "urn:animal:ferret" (URN - from Wikipedia). The URL looks like an address, the URN doesn't.
Posted by SimonTeW on May 30, 2007 at 08:54 PM CEST #
I think the people who use URLs as namespaces and then don't put the description of what the names mean or the relax ng for the syntax at that location , are confused, not you. It is a bit like having an anchor href on a web page to some other page that does not exist. Sites that do that get to be annoying. So much so that one will tend not to visit them anymore.
The web allows pointers to point to nowhere. It is a simple pragmatic fact that sites that do that won't get much traffic.
The same will ably to the machine readable web. It is just that it is so new and people developing it have come from such diverse backgrounds that these type of obvious things got lost in the fray. (I don't want to say that using URNs is always a bad idea btw.)
Another reason is perhaps nobody could quite agree on what should be there. In the semantic web community it is now widely agreed that each term should be dereferenceable at...
As far as the discussion between URLs and URNs go and read Norman Walsh's post Names and Addresses. It summarises the issue very well.
Posted by Henry Story on May 30, 2007 at 09:27 PM CEST #
Posted by Henry Story on May 31, 2007 at 02:11 PM CEST # | https://blogs.oracle.com/bblfish/entry/answers_to_duck_typing_done | CC-MAIN-2015-48 | en | refinedweb |
Re: [json] Format & interpretation of URL fragments for JSON resources
- [+restful-json]
Jacob,
You may already be aware of this, but a specification for the
dot-delimited hash/fragment resolution mechanism is in the JSON Schema
I-D (6.2.1) [1]. One thing to be noted that you can specify alternate
hash/fragment resolution mechanisms in the schema, the draft just
defines dot-delimited as the default. However, we do certainly want the
default to be legitimate. I'd be glad to change the draft to slashes if
there is consensus that using slashes is more appropriate. However,
based on prior conversations [2], I had thought that there was agreement
that the stipulations of RFC 3986 didn't need to be strictly applied to
hashes, since they aren't transferred over the wire and don't identify
resources (they identify internal parts of a resource, and the text you
quoted from RFC 3986 refers to how resources are identified). I am
certainly open to the idea that slashes might be better though, but
since dots are currently in use, I would only want to alter the JSON
schema draft if there is sufficient reason.
[1]
[2]
Thanks,
Kris
On 2/26/2010 5:34 PM, Jacob Davies wrote:
>
>
> I have a question regarding the use of URL fragments (the part after
> the # (hash) character in a standard URL) for navigating JSON
> resources. So far as I can see from some searches & investigation,
> there does not seem to be a firm consensus on the format and
> interpretation of them, and there is a fairly major problem with the
> most common suggestion I've seen, which is the interpretation of the
> fragment as a series of dot-delimited, URL-encoded keys to be used to
> navigate through a set of nested JSON objects and arrays.
>
> So, an example. The fragment:
>
> #foo.bar.0
>
> when used to navigate the JSON resource:
>
> {
> "foo" : {
> "bar" : [
> "xyz"
> ]
> }
> }
>
> would refer to the value "xyz".
>
> This has the attractive feature of looking like the Javascript or Java
> dot-notation for navigating objects.
>
> The problem is that dot/period is explicitly included in the list of
> non-reserved characters in URL-encoding:
>
>
> <>
>
> "For consistency, percent-encoded octets [...] period (%2E) [...]
> should not be created by URI producers"
>
> So the simple statement of the format ("dot-delimited, URL-encoded
> keys") is either ambiguous or cannot accommodate keys containing
> periods.
>
> A simple example to illustrate:
>
> {
> "foo" : {
> "bar" : "xyz"
> },
> "foo.bar" : "abc"
> }
>
> Does the fragment #foo.bar refer to the value "xyz" or "abc".
>
> Obviously it is straightforward to replace the periods in keys with %2E
> and therefore distinguish between these fragments:
>
> #foo.bar - intended to refer to "xyz"
> #foo%2Ebar - intended to refer to "abc"
>
> But, there are some problems with this procedure, two minor, one major.
>
> The first minor problem is that standard URL-encoding routines do not
> replace dots with the %2E escape. The second minor problem is that it
> makes it awkward to construct fragments by hand that refer to keys that
> contain dots.
>
> The major problem is that this method of interpretation of a URL is
> explicitly disallowed. Quoting again from RFC 3986:
>
> "URIs that differ in the replacement of an unreserved character with
> its corresponding percent-encoded US-ASCII octet are equivalent: they
> identify the same resource."
>
> Clearly this is not true in the above example. Replacement of %2E with
> a period changes the interpretation of the fragment. Note that the
> word "unreserved" is significant in the above quote - the
> replacement of a reserved character by its URL-encoded counterpart IS
> allowed to make a difference in distinguishing between resources.
>
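(An illustrative aside, not part of the quoted message: Python's urllib follows these RFC 3986 rules, so the distinction between unreserved '.' and reserved '/' can be checked directly.)

```python
from urllib.parse import quote, unquote

# '.' is unreserved, so quote() never escapes it, even with safe="":
assert quote("foo.bar", safe="") == "foo.bar"

# '/' is reserved; with safe="" it is percent-encoded:
assert quote("foo/bar", safe="") == "foo%2Fbar"

# and decoding distinguishes the two keys again:
print(unquote("foo%2Fbar"))  # foo/bar
print(unquote("foo.bar"))    # foo.bar
```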
> So, I have a suggestion for an alternative format and interpretation,
> which is:
>
> "URL fragments contain a slash-delimited, URL-encoded list of keys
> used to navigate a JSON structure from the root".
>
> So, given the JSON resource:
>
> {
> "foo" : {
> "bar" : "xyz"
> },
> "foo.bar" : "abc",
> "foo/bar" : "123"
> }
>
> the contained values can be unambiguously referred to using the
> fragments:
>
> #foo/bar - "xyz"
> #foo.bar - "abc"
> #foo%2Fbar - "123"
>
> Slash IS a reserved character for URL-encoding, which means,
> firstly, that we can legitimately distinguish between the first and
> last examples there as referring to different resources; secondly,
> that standard URL-encoding routines will correctly escape it, and
> the wording of the format is unambiguous; and thirdly, that keys
> containing dots can be easily used in URLs - in my experience such
> keys are far more common than keys containing slashes, and there
> have been several recent suggestions for using reversed domain names
> in dotted keys as an ad-hoc namespace mechanism in JSON similar to the
> use for Java package names, for instance:
>
> {
> "org.itemscript.Name" : "Jacob"
> }
>
> One final note: the use of an initial slash to indicate that the value
> is rooted at the top level of the JSON structure seems unnecessary,
> since fragment identifiers by definition are global to a given resource
> or document.
>
> Anyway, just some thoughts. I know that the dot-delimited fragment
> format already has some momentum, but I had to make a decision about
> which format to use for something I was working on recently, and after
> thinking about it (and using the dot-delimited format for a while) I
> found that the problems with dot-delimited were significant enough that
> I didn't use it. I do think a consistent interpretation of URL fragments
> in JSON resources would be quite useful though.
>
> --
> Jacob Davies
> jacob@... <mailto:jacob%40well.com>
>
>
--
Thanks,
Kris
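As an editorial illustration of the slash-delimited proposal in the quoted message (the resolver below is my own sketch, not code from the thread):

```python
import json
from urllib.parse import unquote

def resolve_fragment(doc, fragment):
    """Navigate parsed JSON using a slash-delimited, URL-encoded
    fragment such as 'foo/bar', 'foo.bar' or 'foo%2Fbar'."""
    node = doc
    for raw_key in fragment.split("/"):
        key = unquote(raw_key)          # decode %2F etc. *after* splitting
        if isinstance(node, list):
            node = node[int(key)]       # numeric keys index into arrays
        else:
            node = node[key]
    return node

doc = json.loads('{"foo": {"bar": "xyz"}, "foo.bar": "abc", "foo/bar": "123"}')
print(resolve_fragment(doc, "foo/bar"))    # xyz
print(resolve_fragment(doc, "foo.bar"))    # abc
print(resolve_fragment(doc, "foo%2Fbar"))  # 123
```

For comparison, the scheme that was later standardized as JSON Pointer (RFC 6901) is also slash-delimited, but escapes '~' and '/' inside keys as ~0 and ~1 instead of relying on percent-encoding.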
[Non-text portions of this message have been removed]
| https://groups.yahoo.com/neo/groups/json/conversations/topics/1478?source=1&var=1&l=1 | CC-MAIN-2015-48 | en | refinedweb |
On Tuesday, 2005-10-18 at 09:00 +0100, Jochen Voss wrote:

> Dear Stelian,
>
> On Tue, Oct 18, 2005 at 07:37:19AM +0200, Stelian Pop wrote:
> > A modified and both technically and aesthetically correct patch was
> > already merged in the latest stable kernel. But it requires a modified X
> > input driver in order to work.
> >
> > Wasn't this already discussed in
> > ?
>
> Sorry, this had escaped my mind. But I remember now.
>
> Despite being correct, does the new method also work? Did anybody
> get, for example, the combination fn-up to produce pageup with the new
> system? Is that patch to the X input driver publicly available?

First of all, I'm writing about the situation on my Fedora Rawhide
system. It's most likely the same on Debian systems.

This change in 2.6.14 doesn't work on console at the moment. The
function key is shown through showkey with keycode 464, but loadkeys
and dumpkeys only support keycodes up to 255. The package version of
kbd is 1.12 with some unrelated patches.

While adding a line "plain keycode 464 = CtrlR" to my keymap and
trying to load it with "loadkeys -u", I get the following error
message:

"addkey called with bad index 464"

Next I applied the attached patch to kbd to see if it works. Partially
successful, because I've got loadkeys to accept my modified keymap.
But two things are going wrong:

1. After login I get four times the message
   KDSKBENT: Invalid argument
   failed to bind key 256 to value 638

2. Adding lines like "ctrlr keycode 103 = PageUp" to the keymap work,
   but have no effect. PageUp, PageDown, Home and End are not
   triggered by pressing fn with cursor keys, even though I see their
   entries with dumpkeys (for grepping use Select, Next, Find, Prior).

Anybody else have new ideas?

Frank
Attachment:
de-latin1-nodeadkeys-powerbook56.map.gz
Description: GNU Zip compressed data
diff -upr orig/kbd-1.12/src/dumpkeys.c kbd-1.12/src/dumpkeys.c
--- orig/kbd-1.12/src/dumpkeys.c	2004-01-16 20:45:31.000000000 +0100
+++ kbd-1.12/src/dumpkeys.c	2005-10-18 17:46:59.000000000 +0200
@@ -47,7 +47,9 @@ has_key(int n) {
 
 static void
 find_nr_keys(void) {
-	nr_keys = (has_key(255) ? 256 : has_key(127) ? 128 : 112);
+	nr_keys = (has_key(511) ? 512 :
+		   has_key(255) ? 256 :
+		   has_key(127) ? 128 : 112);
 }
 
 static void
diff -upr orig/kbd-1.12/src/loadkeys.y kbd-1.12/src/loadkeys.y
--- orig/kbd-1.12/src/loadkeys.y	2004-01-16 22:51:25.000000000 +0100
+++ kbd-1.12/src/loadkeys.y	2005-10-18 17:49:38.000000000 +0200
@@ -32,7 +32,7 @@
 #endif
 
 #undef NR_KEYS
-#define NR_KEYS 256
+#define NR_KEYS 512
 
 /* What keymaps are we defining? */
 char defining[MAX_NR_KEYMAPS];
| https://lists.debian.org/debian-powerpc/2005/10/msg00235.html | CC-MAIN-2015-48 | en | refinedweb |
Interoperability with Enterprise Services and COM+ Transactions
The System.Transactions namespace supports interoperability between transaction objects created using this namespace and transactions created through COM+.
You can use the EnterpriseServicesInteropOption enumeration when you create a new TransactionScope instance to specify the level of interoperability with COM+.
By default, when your application code checks the static Current property, System.Transactions attempts to look for a transaction that is otherwise current, or a TransactionScope object that dictates that Current is null. If it cannot find either one of these, System.Transactions queries the COM+ context for a transaction. Note that even though System.Transactions may find a transaction from the COM+ context, it still favors transactions that are native to System.Transactions.
Interoperability levels
The EnterpriseServicesInteropOption enumeration defines the following levels of interoperability—None, Full and Automatic.
The TransactionScope class provides constructors that accept EnterpriseServicesInteropOption as a parameter.
None, as the name implies, means that there is no interoperability between System.EnterpriseServices contexts and transaction scopes. After creating a TransactionScope object with None, any changes to Current are not reflected in the COM+ context. Similarly, changes to the transaction in the COM+ context are not reflected in Current. This is the fastest mode of operation for System.Transactions because there is no extra synchronization required. None is the default value used by TransactionScope with all constructors that do not accept EnterpriseServicesInteropOption as a parameter.
If you do want to combine System.EnterpriseServices transactions with your ambient transaction, you need to use either Full or Automatic. Both of these values rely on a feature called services without components, and therefore you should be running on Windows XP Service Pack 2 or Windows Server 2003 when using them.
Full specifies that the ambient transactions for System.Transactions and System.EnterpriseServices are always the same. It results in creating a new System.EnterpriseServices transactional context and applying the transaction that is current for the TransactionScope to be current for that context. As such, the transaction in Current is completely in synchronization with the transaction in Transaction. This value introduces a performance penalty because new COM+ contexts may need to be created.
Automatic specifies the following requirements:
When Current is checked, System.Transactions should support transactions in the COM+ context if it detects that it is running in a context other than the default context. Note that the default context cannot contain a transaction. Therefore, in the default context, even with Automatic, the transaction stored in the thread local storage used by System.Transactions is returned for Current.
If a new TransactionScope object is created and the creation occurs in a context other than the default context, the transaction that is current for the TransactionScope object should be reflected in COM+. In this case, Automatic behaves like Full in that it creates a new COM+ context.
In addition, both Full and Automatic imply that Current cannot be set directly. Any attempt to set Current directly, other than by creating a TransactionScope, results in an InvalidOperationException. The EnterpriseServicesInteropOption enumeration value is inherited by new transaction scopes that do not explicitly specify which value to use. For example, if you create a new TransactionScope object with Full, and then create a second TransactionScope object but do not specify an EnterpriseServicesInteropOption value, the second TransactionScope object also uses Full.
In summary, the following rules apply when creating a new transaction scope:
Current is checked to see if there is a transaction. This check results in:
A check to see if there is a scope.
If there is a scope, the value of the EnterpriseServicesInteropOption enumeration passed in when the scope was initially created is checked.
If the EnterpriseServicesInteropOption enumeration is set to Automatic, the COM+ transaction (System.EnterpriseServices Transaction) takes precedence over the System.Transactions transaction in managed thread local storage.
If the value is set to None, the System.Transactions transaction in managed thread local storage takes precedence.
If the value is Full, there is only one transaction and it is a COM+ transaction.
The value of the TransactionScopeOption enumeration passed in by the TransactionScope constructor is checked. This determines if a new transaction must be created.
If a new transaction is to be created, the following values of EnterpriseServicesInteropOption result in:
Full: a transaction associated with a COM+ context is created.
None: a System.Transactions transaction is created.
Automatic: if there is a COM+ context, a transaction is created and attached to the context.
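Purely as an illustration (this is not a .NET API; the function and string labels below are invented for the sketch), the precedence rules above can be condensed into a small decision function, shown here in Python:

```python
def ambient_transaction(interop, in_default_context):
    """Rough sketch of which transaction Current reflects, per the
    rules above (labels are descriptive strings, not .NET types)."""
    if interop == "None":
        return "System.Transactions (thread-local storage)"
    if interop == "Full":
        return "COM+ context transaction"
    # "Automatic": the COM+ context wins only outside the default
    # context, because the default context cannot hold a transaction.
    if in_default_context:
        return "System.Transactions (thread-local storage)"
    return "COM+ context transaction"

print(ambient_transaction("Automatic", in_default_context=True))
```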
The following table illustrates what the ambient transaction is, given a particular System.EnterpriseServices context, and a transactional scope that requires a transaction using the EnterpriseServicesInteropOption enumeration.
In the preceding table:
ST means that the scope's ambient transaction is managed by System.Transactions, separate from any System.EnterpriseServices context's transaction that may be present.
ES means that the scope's ambient transaction is the same as the System.EnterpriseServices context's transaction.
| https://msdn.microsoft.com/en-us/library/ms229974(v=vs.85).aspx | CC-MAIN-2015-48 | en | refinedweb |
I have to set up a small Windows network inside my bigger Linux/Mac infrastructure. In order to get the Windows clients logging onto the domain, I have had to make the DC their primary DNS server, which seems to have worked.
I would much prefer to have one DNS server running on my network, or at least one authoritative server running on the network.
I have a USG 200 router/firewall and I can configure some static records for DNS, but I am not sure what I need to put in order to get DNS and AD working together; any hints and tips appreciated.
The first thing you should know is that Active Directory and DNS are so intertwined that they're almost one. For all intents and purposes, you should forget the idea of having an Active Directory domain which doesn't have a primary DNS server for Windows clients.
I won't say it's "impossible", but I will strongly advise you that it's a path with only pain.
As an alternative, why not let AD and DNS do their thing together and then add forwarders to your normal DNS servers. It's the same end result, you can basically forget about your Microsoft DNS server as it will just plod along doing its own thing as you actively maintain and update your other Name Servers.
Just deploy AD on a subdomain like windowsdomain.example.com instead of on example.com, and then delegate this subdomain to your domain controllers.
This way, you will get two domains, which you could potentially split up for greater security.
You do not need to run Windows DNS on a domain controller for proper functionality of AD. DNS is the backbone of AD, so you want to have a very resilient, very reliable DNS infrastructure in place prior to adding Active Directory. I would strongly recommend using either Windows OR your existing DNS infrastructure, but I would not use both. BIND 9 will work fine. You should verify that the namespace you are using is valid for Active Directory.
| http://serverfault.com/questions/401966/windows-server-2008-active-directory-dns-setup | CC-MAIN-2015-48 | en | refinedweb |
Automating the world one-liner at a time…
Windows PowerShell Desired State Configuration: Push and Pull Configuration Modes
The Windows PowerShell Desired State Configuration (DSC) system has two configuration modes, which are referred to as push and pull modes. Each mode allows the user to apply a desired state over a target node.
How push mode works
As illustrated in the preceding diagram, the push model is unidirectional and immediate. The configuration is pushed to its intended targets, and they are configured.
Pulling a configuration requires that a series of steps be taken on the pull server and target nodes. We will look at this series of steps in detail in the section titled "The pull mode configuration steps" in this post.

When a configuration is applied, the Local Configuration Manager emits verbose output along these lines:

VERBOSE: Perform operation 'Invoke CimMethod' with the following parameters, ''methodName' = SendConfigurationApply,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' = root/Microsoft/Windows/DesiredStateConfiguration'.
VERBOSE: [LOUISC-VMHOSTD]: [[WindowsFeature]DSCService] The operation 'Get-WindowsFeature' started: DSC-Service

The following sections describe how to set up the configuration.
Configuration SimpleConfigurationForPullSample
{
    Node "1C707B86-EF8E-4C29-B7C1-34DA2190AE24"
    {
        Computer ManagedNode
        {
            Ensure = "Present"
            Name = "DomainClient1"
            DomainName = "TestDomain"
        }
    }
}
SimpleConfigurationForPullSample -Output "."
For configuring a machine as a pull server from which the pull clients will get their respective configurations, please refer to the “DSC Resource for configuring pull server environment” blog post.
In order to provision the pull server with configuration files, the MOF files containing the configurations of the nodes must be stored on the pull server in the following location: $env:SystemDrive\Program Files\WindowsPowershell\DscService\Configuration.
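As a hypothetical illustration (this helper is not from the original post; on Windows the New-DscChecksum cmdlet produces the checksum file), the step of storing each MOF under its ConfigurationID, together with the checksum the pull server expects, could be scripted like this:

```python
import hashlib
import shutil
from pathlib import Path

def provision_mof(mof_path, config_id, store_dir):
    """Copy a compiled MOF into the pull server's configuration store
    as <ConfigurationID>.mof, plus the checksum file the pull server
    expects next to it (New-DscChecksum does this on Windows; this is
    only a rough stand-in)."""
    store = Path(store_dir)
    store.mkdir(parents=True, exist_ok=True)
    target = store / (config_id + ".mof")
    shutil.copyfile(mof_path, target)
    digest = hashlib.sha256(target.read_bytes()).hexdigest()
    (store / (config_id + ".mof.checksum")).write_text(digest)
    return target
```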
Configuration SimpleMetaConfigurationForPull
{
    LocalConfigurationManager
    {
        ConfigurationID = "1C707B86-EF8E-4C29-B7C1-34DA2190AE24";
        RefreshMode = "PULL";
        DownloadManagerName = "WebDownloadManager";
        RebootNodeIfNeeded = $true;
        RefreshFrequencyMins = 30;
        ConfigurationModeFrequencyMins = 60;
        ConfigurationMode = "ApplyAndAutoCorrect";
        DownloadManagerCustomData = @{ServerUrl = ""; AllowUnsecureConnection = "TRUE"}
    }
}
SimpleMetaConfigurationForPull -Output "."
How can you remove and add nodes in the pull scenario? What are the steps involved for pausing/stopping the DSC on a node?
Good One!!!
Nice Post !!!
@Louisc, would you simply do the reverse to remove a node? I'm looking for the workflow an administrator would follow for doing a staggered install on several nodes in a farm.
@Powershell Team
Fantastic, thanks for clarifying a staggered installation scenario. It would be helpful if a post was created showing the best way to accomplish this. Is there a service that can be simply stopped/disabled on the pull client to stop the sync?
Just FYI, your code for the InstallDSCService is incorrect. "configuration Install_DSCService" should be InstallDSCService, or vice versa.
Hi Joe - thanks for your comment. Can you please clarify? Are you referring to the naming of the configuration or script?
Thanks.
Louis
Joe - thank you for pointing out the typo. We have fixed it now.
Hello,
thanks for the tutorial, it is great. I do have a little question though. How could we remove a WindowsFeature by using the push mode?
Thanks for your answer :)
| http://blogs.msdn.com/b/powershell/archive/2013/11/26/push-and-pull-configuration-modes.aspx | CC-MAIN-2015-48 | en | refinedweb |
Searched out on the internet and didn't really find anything that was horribly succinct, so I wrote this class for fun. I had help from. I hope you enjoy! Here's the code to call it:
PostSubmitter post=
And here's the class:
using System;
using System.Collections.Specialized;

namespace Snowball.Common
{
    /// <summary>
    /// Submits post data to a url.
    /// </summary>
    public class PostSubmitter
    {
        /// <summary>
        /// determines what type of post to perform.
        /// </summary>
        public enum PostTypeEnum
        {
            /// <summary>
            /// Does a get against the source.
            /// </summary>
            Get,
            /// <summary>
            /// Does a post against the source.
            /// </summary>
            Post
        }

        private string m_url = string.Empty;
        private NameValueCollection m_values = new NameValueCollection();
        private PostTypeEnum m_type = PostTypeEnum.Get;

        /// <summary>
        /// Default constructor.
        /// </summary>
        public PostSubmitter()
        {
        }
| http://geekswithblogs.net/rakker/archive/2006/04/21/76044.aspx | CC-MAIN-2015-48 | en | refinedweb |
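For comparison only (this sketch is unrelated to the original C# class and uses just Python's standard library), the same get-versus-post distinction looks like:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_request(url, values, post=True):
    """Build a GET or POST request from a dict of form values."""
    data = urlencode(values)
    if post:
        # POST: the form data travels in the request body
        return Request(url, data=data.encode("ascii"), method="POST")
    # GET: the form data travels in the query string
    return Request(url + "?" + data, method="GET")

req = build_request("http://example.com/submit", {"field": "some value"})
print(req.get_method(), req.full_url)
```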
While developing, I like to keep a browser and code editor open on the same screen. I'd like to find some way to join them and treat them as a single window, so they minimize together, resize together, and have a movable divider between them.
Is there some sort of window-panel application that I can use to create a multi-program window?
This question came from our site for professional and enthusiast programmers.
Well, I don't have access to comments on this site yet; guess the points system is separate from stackoverflow.
Wanted to respond to OP's last comment. I really like xmonad. You can set it up to run within gnome, kde, etc. with extensions. Here are steps I took in gnome 2.28.2:
sudo yum install -y xmonad.x86_64
sudo yum install -y ghc-xmonad-contrib.x86_64
sudo yum install -y ghc-xmonad-contrib-devel.x86_64
Create a file ~/.xmonad/xmonad.hs and put in:
import XMonad
import XMonad.Config.Gnome
main = xmonad gnomeConfig
Set the window manager:
gconftool-2 -t string -s /desktop/gnome/session/required_components/windowmanager xmonad
gconftool-2 -t string -s /desktop/gnome/applications/window_manager/current xmonad
If you change your mind, you can run the above two commands, substituting xmonad with metacity to put it back as it was.
links:
xmonad in Gnome
xmonad in KDE
Don't know if it can be done with Gnome (or Unity), but you can definitely do it with the Fluxbox WM, using what it calls tabs.
Another option is KDE's tabbed windows. IIRC PWM and PekWM can do it too.
It's probably not exactly what you want, but if your window manager does not provide support for this, you might be able to use an Xnest instance instead.
Xnest is essentially a nested X server that outputs to a window, rather than a hardware device. You can run specific applications, or even a full-featured desktop within an Xnest window.
Unfortunately, this approach is slightly cumbersome: applications need to be launched with a non-default display option (or a modified DISPLAY environment variable) and you cannot move windows into or out of the Xnest window.
| http://superuser.com/questions/378829/is-there-a-way-ubuntu-program-to-glue-a-bunch-of-program-windows-together-into-o | CC-MAIN-2015-48 | en | refinedweb |
Hello,
I have a question re print options in P6.
When I print a view (layout) with the Gantt Chart and columns, I always have a black frame (line) around the printout, which wraps the header, footer and the rest of the content.
Is there a way to get rid of this frame somehow, or change its color?
I don't see any option which does that, but maybe someone knows the trick?
Thanks,
Karol
To be honest, I do not think this black frame can be disabled from printing.
Thanks AMA.
Another question re Printing. It seems impossible for me to get Arial 6pt text in the header/footer. The sizes available in P6 from the drop-down list are between 8 and 36.
I managed to copy and paste Arial size 6 text from MS Word, but when I apply it, P6 changes it to Times New Roman or similar. Sometimes when I paste the text (font 6), change the font to e.g. Arial Black and then back to Arial again, xml code appears on the printout as part of the text: <?xml:namespace. Is it something wrong with my database (SQL 2008 R2), or is it a P6 issue?
Thanks.
Karol
| https://community.oracle.com/message/11139923 | CC-MAIN-2015-48 | en | refinedweb |
I accidentally compiled these headers in a simple test program I was running to get a better understanding of strtok:
Code:
#include <string.h>
#include <iostream>
#include <cstring>
And I received this warning:
warning: #include_next is a GCC extension.
I Googled and found out that it is a GCC extension, well that is quite obvious from the warning I received but what exactly does it mean or do?
If I get rid of the string.h header (which is what I meant to do ) or include it after iostream this warning goes away! This leads me to another question: is there a case where one would need to include two similar header files like string.h and cstring?
:confused:
| http://cboard.cprogramming.com/cplusplus-programming/71121-what-sharpinclude_next-printable-thread.html | CC-MAIN-2015-48 | en | refinedweb |
Type: Posts; User: TP-Oreilly
So polymorphism is the act of inheriting a class and overriding methods from it/adding to it, so making a class based on another. And as polymorphism means many forms, we call this polymorphism...
Hi,
I'm just trying to get the right idea about polymorphism.
Is polymorphism simply the act of inheriting an abstract class and defining the methods yourself? As polymorphism means many forms,...
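To make the idea concrete, here is a generic sketch (mine, not from the thread, and in Python rather than Java): a base-class method overridden by two subclasses, with one call site dispatching to many forms:

```python
class Animal:
    def speak(self):
        return "..."

class Dog(Animal):
    def speak(self):              # overrides the inherited method
        return "woof"

class Cat(Animal):
    def speak(self):
        return "meow"

# One call site, many forms: speak() dispatches to whichever
# subclass implementation the object actually has.
sounds = [a.speak() for a in (Dog(), Cat())]
print(sounds)  # ['woof', 'meow']
```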
Thanks. :)
Hi, in my code I have
public boolean equals(Object obj) {
    if (this.equals(obj)) {
        return true;
    }
    return false;
}
I found out why....
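The method quoted above calls itself (this.equals(obj)) instead of comparing fields, so it recurses until the stack overflows, which is presumably the "why". The same mistake reproduces in any language; here is an illustrative Python sketch (mine, not from the thread):

```python
class Broken:
    def __eq__(self, other):
        # BUG: 'self == other' calls __eq__ again -> infinite recursion
        return self == other

try:
    Broken() == Broken()
except RecursionError as err:
    print("comparison blew the stack:", type(err).__name__)
```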
The format of the sound file was .wav; I changed it to .mp3 and now it sounds as it should.
Hi,
I have:
final MediaPlayer mp = MediaPlayer.create(this, R.raw.popping_sound_effect);
When I play the sound file it is how I want it to be, but when I run the app and making the...
I got it working now, just to let you know why it wasn't working:
I changed this:
public class BackgroundService extends Service{
@Override
public IBinder onBind(Intent intent) {
I've created a service which plays the audio, but still when I change the layout on the activity the audio stops?
I can't because I have set it so that when I click on a button this is called: setContentView(R.layout.menu_screen);
I tried making a different activity for the menu_screen.xml layout but when...
Hi,
Why does my MediaPlayer stop when I change the layout?
MediaPlayer mp = MediaPlayer.create(this, R.raw.background_song);
mp.setLooping(true);
mp.start();
To answer my own question............ What I'm actually doing is setting the content of the tab as an Activity???
This is what made me think that each activity is a different window:
"An activity is the equivalent of a Frame/Window in GUI toolkits. It takes up the entire drawable area of the screen"
I do have an understanding of Java. It's the concept of an Activity which is confusing me. Reading different things on the internet has made me confused.
Hello, I'm trying to understand what exactly is happening in my code. Thanks in advance for replying, I'm pretty confused and need some help. I have read many things on the internet but I haven't found...
Thank you very much for your detailed reply.
I ask this because I haven't actually created an instance of my class, so how can I be referring to an INSTANCE of my HelloAndroidActivity class?
Thanks for the reply. I also have another question, and as it is regarding this code I thought I'd ask it in this thread.
I know that the constructor of the TextView class requires a Context...
Hello, I'm following a "Hello World" tutorial for Android development and I want to understand how the code works exactly, instead of just using it without understanding it; here's the code:
...
Thank you :)
Got it, thank you :)
Thank you very much for your detailed reply, much appreciated.
So, when I create an instance of my class which has extended JFrame, both my class constructor and the JFrame constructor are called...
I would understand how it all works if our class also inherits JFrame's constructor, if not then I can not see how creating an instance of our class creates a window.
Hi, I know when we extend a class our class gets all of the properties of the class we extend.
I am abit confused. I have read that when our class extends the JFrame class, our class then IS the...
Ah thank you, I've now got it to compile but it doesn't work.
I have to define all of the methods of the KeyListener interface. I have removed the "if (keyCode == KeyEvent.VK_ESCAPE){" and just put...
This is my code, it is only small:
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
class MoveIcon implements KeyListener{
| http://www.javaprogrammingforums.com/search.php?s=f5c85cf983715e2ae34b46e7edc4410b&searchid=1929959 | CC-MAIN-2015-48 | en | refinedweb |
Newbie: What XML to use?
Discussion in 'XML' started by rjm12@hotmail.

Similar threads:

- XML 2 XML and namespaces (Mark Smits, Sep 17, 2003, in forum: XML; 2 replies, 746 views; last post: Mark Smits, Sep 21, 2003)
- Newbie. xml "dynamically" including another xml document (Clive, Aug 21, 2005, in forum: XML; 1 reply, 485 views; last post: Peter Flynn, Aug 21, 2005)
- XML Newbie Alert! FOR XML EXPLICIT help please (iamaran, Dec 22, 2005, in forum: XML; 1 reply, 465 views; last post: iamaran, Dec 23, 2005)
- (thread title missing; 1 reply, 238 views)
- Different results parsing a XML file with XML::Simple (XML::Sax vs. XML::Parser) (Erik Wasser, Mar 2, 2006, in forum: Perl Misc; 5 replies, 744 views; last post: Peter J. Holzer, Mar 5, 2006)
| http://www.thecodingforums.com/threads/newbie-what-xml-to-use.169649/ | CC-MAIN-2015-48 | en | refinedweb |
TDD
-----
-On the internet, you could often hear TDD termin. From it's definition
-Test-Driven Development you could thought that writing unit-tests and
-TDD is the same, but it isn't.
+The most common mistake in the testing world is to mess TDD and
+(unit-) testing in general. TDD is not just unit-testing, it's much
+more. TDD is a way to formalize process of developing software into
+something that more seems like a compiler-driven development, where
+instead of compiler trying to figure out errors as you develop small
+pieces, you write tests to figure them out.
-TDD is a methodology when you first write your test, and only then
-change your code until it satisfies the test. As creators of TDD
-(probably) would say, this encourages you to write better tests, not
-to skip anything, keep design simple to test and so on.
+Robert Martin defines `3 rules of TDD
+<>`_ as
+this:
-My vision here is -- I don't use TDD, but also I don't write the whole
-code first.
+-.
-What I do is I do both things simultaneously. And the reason for this
-is that TDD is great idea that makes sense, but it's hard for me to
-design functions and classes without first prototyping them. It's much
-easier and faster to prototype with real code, not with how it will be
-used when you need to create something more complex than adding just 1
-new method simple to class.
+As you can see, if your whole team of programmers would work this way,
+your codebase would keep being stable all the time, and it would
+evolve little by little, no matter how complex the tasks are.
-So I first go and prototype function with it's parameters, and only
-then go and look how test for that would look like. Then I would go
-and implement the function. Then I review changes and see what also
-needs to be covered by tests.
+From my observations, when you work with lots of "state" or when you
+need to prototype as you develop, TDD is hard to adopt, and it's
+sometimes even becomes absurd to see how people transform TDD into
+"almost-TDD" or "better-TDD" by weaken some of TDD rules (or adding
+exceptions). I actually did that too. And now I see that in places
+like "best software practices.ppt" inside my company.
-Also I should mention that it's very good idea to first be sure that
-your test fails (write it before you implement something and make sure
-it fails), because it's often that your logic is not so easy and your
-test can really not test anything.
-
-So as a conclusion: I may be a bad (or have not enough experience)
-person, but I don't use TDD. Also you should go and read more about
-TDD not from here definitly, and speak to people who use it a lot (I
-hope I will get opportunity to work with real person who uses it and
-knows how to TDD in a good way).
-Also, there's a technique called "ping-pong programming" when one
-person writes a test, and another implements it. Then they
-switch. Nice idea :)
+I believe you shouldn't add any exceptions in these rules of TDD or
+try to make it "more real-world from your perspective". Programmers
+are smart enough to understand if it's not fit for they're tasks or if
+they want to add some exceptions into these rules.
BDD
-Different idea is Behaviour-Driven Development. As I've heard at some
-podcast, it was born as an idea that the big question in testing is
-not "how do I test things", but "what do I need to test", "what am I
-testing". And to focus on things that you test, instead of calling
-tests "test_foo_bar" you should call tests starting from "should"
-keyword, like "should_feed_pony()" or something, and author of BDD
-created fork of JUnit that would make this happened.
+Different idea of upgrading testing experience is `Behaviour-Driven
+Development
+<>`_. The main
+idea is that you need to keep constantly focus on what you test and
+why you should do that, as a result Dan North created a `JBehave`
+framework, which was the same as `JUnit` but with test methods called
+with prefix `should_*` instead of `test_*`. BDD has whole filosophy
+behind that, you can go and read wikipedia and other things about BDD
+(for example, frameworks that let you describe your tests as plain
+text).
As a lesson from that super-idea, I now call all my tests with the prefix
``test_should_`` (as you have probably already seen). That really helps
-focusing on what test does.
+focusing on what a test does at test-naming time. Of course, BDD is
+not just about naming your tests, but the prefix is the only visible
+part (in my tests). I am still investigating frameworks that let you
+describe tests as text, but don't have experience with them yet.
+
+-----------------------
+ Ping-pong programming
+As a bonus, there's a technique called "ping-pong programming" where
+one person writes a test, and another implements it. Then they
+switch.
-----------------------------------
Writing test from action to mocks
-When I write tests, I start from it's name (and focus on what it
-should do), like:
+To write tests, you start from its name (and focus on what it should
+do), like:
.. code-block:: python

    def test_should_sort_by_name(self):
        pass
-Then I go and write what it should actually do (call the action) with
-"``# do``" comment before that.
+Then you go and write what it should actually do (call the action)
+with a "``# do``" comment before it.
query.order_by.assert_called_with('-name', '-updated_at')
-And at last, if necessarily, you will add all the ``@patch`` before method.
+And at last, if necessary, you will add all the ``@patch`` decorators
+before the method. That's a way of building a test without a lot of
+thinking about what to start from, moving step by step from smaller
+pieces to the whole picture of the test.
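Put together, the three steps above might end up as something like this. A sketch: `newest_articles` is a made-up action (not from the post), and only the standard library's `unittest.mock` is used.

```python
# The "name -> action -> mocks" flow described above, in one piece.
from unittest.mock import MagicMock

def newest_articles(query):
    # the action under test: ask the query object for newest-first ordering
    return query.order_by('-name', '-updated_at')

def test_should_sort_by_name():
    query = MagicMock()
    # do
    newest_articles(query)
    # check the interaction we care about
    query.order_by.assert_called_with('-name', '-updated_at')

test_should_sort_by_name()
```

If the action also needed external collaborators, this is where the ``@patch`` decorators would be stacked above the test method.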
Let's move on to :doc:`change-the-way`. | https://bitbucket.org/k_bx/blog/diff/source/en_posts/testing_python/tdd-bdd.rst?diff2=80bf4dbb727d&at=default | CC-MAIN-2015-48 | en | refinedweb |
Changes related to "cpp/language/namespace"
This is a list of changes made recently to pages linked from the specified page (or to members of the specified category).
22 November 2015
- cpp/language/reference; 08:33; (-84); 114.222.142.185
18 November 2015
- cpp/language/alignas; 08:11; (+450); Cubbi (cwg 2027 (simplify description of alignas); also this never mentioned that pack expansions are allowed, or that bit fields and parameters are not allowed.)
- cpp/language/dependent name; 07:45; (+37); Cubbi (cwg 2024 (unexpanded parameter pack makes template-id a dependent type); not revboxing because I can't imagine it being implemented as non-dependent) | http://en.cppreference.com/w/Special:RecentChangesLinked/cpp/language/namespace | CC-MAIN-2015-48 | en | refinedweb |
HtmlCleaner release 2.14
This contains the following bug fixes: ...
What, another release already?
Well, a big thanks to Wolfgang Koppenberger, who spotted a problem in 2.11 with OPTION tags that needed fixing and releasing right away.
Apologies to anyone using 2.11 who encountered that issue.
Adds much better HTML5 support, pipelining of HTML from stdin (and XML to stdout), and more!
Here's the changelog:
The new version brings most of the required features and a number of bug fixes. HtmlCleaner is now thread-safe, introduces HTML-based serializers, and extends the API to ease document manipulation. The parser is about 20% faster and now runs on Java 1.5+, benefiting from language improvements.
- Parsing transformations are developed in order to easily skip or change specified tags or attributes during the cleanup process.
- A few more constructors added to the HtmlCleaner class, making it possible to reuse the same cleaner properties with multiple cleaner instances.
- Code cleanup.
Together with the new milestone version 2.0, the project web site is completely redesigned, giving it a better look and better organized information.
New version comes with a number of improvements and fixes. Some of them are:
- Complete code refactoring, making the Cleaner's API better and more flexible.
- Methods for DOM manipulation added.
- Basic XPath support added.
- New parameters introduced to control cleaner's behavior.
- New flag parameter ignoreQuestAndExclam is introduced offering control over special tags - <?TAGNAME....>, <!TAGNAME....>.
- Bug fixes.
- Added Reader-based HtmlCleaner constructors.
- New parameter pruneTags is introduced offering a way to remove undesired tags with all the children from XML tree after parsing and cleaning.
- Bug fixes.
- Several bug fixes.
- Added option to escape XML content in DOM serializer - HtmlCleaner.createDOM(boolean escapeXml)
- New flag allowHtmlInsideAttributes is introduced in order to give the parser flexibility in handling attribute values.
- Several bug fixes.
* New browser-compact serializer added that preserves a single whitespace where multiple occur.
* New flag namespacesAware is introduced in order to control namespace prefixes and namespace declarations. It should be used instead of omitXmlnsAttributes that existed in previous versions and had limited functionality.
* New flag allowMultiWordAttributes is introduced giving HtmlCleaner's parser flexibility to (dis)allow tag attributes consisting of multiple words.
* New flag useEmptyElementTags is introduced in order to control output of tags with an empty body (<xxx/> vs <xxx></xxx>).
* Several bug fixes.
- Several bugs fixed.
- New flags added to control behaviour of unknown/deprecated tags.
- New flag added to optionally remove HTML envelope from resulting XML.
- JDOM serializer added.
- Latest source may be checked out from.
- Source can be browsed at
Serialization of XML to Java DOM supported with createDOM() method of HtmlCleaner class.
Hexadecimal entities escaping supported (i.e. ).
- Compact XML serializer improved.
- Minor XML escaping bug fixed.
- An HTML tokenizing bug fixed.
- Methods of the class TagNode made public in order to enable creating custom XML serializers.
- Method writeXml(XmlSerializer) added to HtmlCleaner class in order to support creating custom XML serializers.
Minor bug in advanced XML escaping fixed.
- HtmlCleaner Ant task added
- XML compact serializer added - strips all unneeded whitespace from the result
- A few minor bugs fixed
HtmlCleaner is an open-source HTML parser written in Java. For the specified HTML it produces well-formed XML. | http://sourceforge.net/p/htmlcleaner/news/?source=navbar | CC-MAIN-2015-48 | en | refinedweb |
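As a rough sketch (not taken from the changelog itself), typical use of the 2.x API mentioned above looks something like the following. The class and method names here are the ones I believe the library exposes (HtmlCleaner, CleanerProperties, PrettyXmlSerializer), so treat them as assumptions, and the input HTML is invented for illustration.

```java
import org.htmlcleaner.CleanerProperties;
import org.htmlcleaner.HtmlCleaner;
import org.htmlcleaner.PrettyXmlSerializer;
import org.htmlcleaner.TagNode;

public class CleanExample {
    public static void main(String[] args) throws Exception {
        HtmlCleaner cleaner = new HtmlCleaner();
        // Tune the cleaner's behavior through its properties object.
        CleanerProperties props = cleaner.getProperties();
        props.setOmitComments(true);
        // Clean messy HTML into a TagNode tree...
        TagNode root = cleaner.clean("<p>Unclosed <b>tag");
        // ...then serialize the tree as well-formed XML.
        String xml = new PrettyXmlSerializer(props).getAsString(root);
        System.out.println(xml);
    }
}
```

The same CleanerProperties instance can be shared across cleaner instances, which is what the constructor-reuse changelog entry above is about.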
Hibernate Interview Questions - Hibernate
Hi, please send me Hibernate interview questions.
update count from session - Hibernate
I need to get the count of how many rows got updated by session.saveOrUpdate(). How would I get this? Thanks, mpr
hibernate - Hibernate
Hi friends, I had one doubt: how to do Struts with Hibernate in the MyEclipse IDE? When I run the Hibernate Code Generation wizard in Eclipse I'm... The suggested fix checks the Hibernate configuration and the imports (import org.hibernate.Session; import org.hibernate.*; import....).
Hibernate Count
In this section you will learn different ways to count the number of records in a table.
spring hibernate - Hibernate
How to integrate Spring with Hibernate?
Criteria Count Distinct - Hibernate
The Hibernate Criteria count distinct is used to count the number of distinct records in a table. Running the example displays a query of the form: select count(distinct ...
Hibernate code problem - Hibernate
This is Birendra Pradhan. I want example code of how to insert data into the parent table and child table columns at the same time in Hibernate.
Hibernate Search - Hibernate
Where to learn the Hibernate Search module? Check the tutorial: Hibernate Search - Complete tutorial on Hibernate Search.
Struts-Hibernate-Integration - Hibernate
I was executing a Struts-Hibernate example and got an exception at javax.servlet.http.HttpServlet.service(HttpServlet.java:802). | http://www.roseindia.net/tutorialhelp/comment/96862 | CC-MAIN-2015-48 | en | refinedweb |
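Two of the threads above (the update-count question and Criteria count distinct) can be sketched roughly as follows, assuming the classic Hibernate 3 API. Employee is an invented entity used only for illustration, not from the original posts; note that session.saveOrUpdate() itself does not report a row count, while a bulk HQL update does.

```java
import org.hibernate.Criteria;
import org.hibernate.Session;
import org.hibernate.criterion.Projections;

// Stub for illustration only; a real application maps this entity in Hibernate.
class Employee { String name; String department; }

public class HibernateCountExamples {

    // "update count from session": a bulk HQL update returns the number
    // of affected rows from executeUpdate().
    static int renameDepartment(Session session, String from, String to) {
        return session.createQuery(
                "update Employee set department = :to where department = :from")
            .setParameter("to", to)
            .setParameter("from", from)
            .executeUpdate();
    }

    // "Criteria Count Distinct": issues select count(distinct name) ...
    static Number countDistinctNames(Session session) {
        Criteria criteria = session.createCriteria(Employee.class);
        criteria.setProjection(Projections.countDistinct("name"));
        return (Number) criteria.uniqueResult();
    }
}
```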
Harvard Educational Review, 72(3), 367-392 (as HER does not grant permission to post the published paper).
Democracy and Education: The Missing Link May Be Ours
John Willinsky

Much has changed since Dewey (1916) first laid out in Democracy and Education his vision of the US as a state of perpetual inquiry where citizens are engaged in sharing educational experiences. Changes for the good include extending suffrage to women and people of color, rising educational attainment, the successful challenging of racial segregation in the courts, and the recognition of cultural diversity through multicultural initiatives. On the other hand, American voter participation has declined, particularly since the 1960s; civic involvement, not to mention bowling-league membership, is down (Putnam, 2000); corporate control of the media has increased, as has the media’s political influence (Bagdikian, 2000; McChesney, 1999); and affirmative action measures, which were showing positive educational effects (Bowen and Bok, 1998), are being challenged and blocked (Dworkin, 2001). Against this century-long backdrop, we now face a rather different order of political change with the rapid development of the Internet. Over the course of the last ten years, the Internet has opened a new world of information to the public. The increased access to information relates to every aspect of our lives and is on such a scale that it seems bound to alter the relationship between democracy and education. Whether the introduction of the Internet bears comparison with the revolution that Gutenberg initiated with his invention of moveable type and printer’s ink, as Christine Borgman contends (2000), it seems to me far too early to say. While the political and educational impact of the printing press was centuries in the making, I think that we could do worse than be inspired by such historical analogies in our efforts to make sense of this new communication technology and to shape how it is used in this political and educational sense.
Certainly, the Internet has already started showing signs that it will reshape political participation and the way we are governed, with the emphasis in this new digital democracy on providing more powerful public access to information and officials (Alexander and Pal, 1998; Hague and Loader, 1999; Heeks, 1999; Wilhelm, 2000). One dramatic, if surprising, example of the Internet’s democratic impact on public education and empowerment, in its broadest sense, is with public access to health information. The result has been that patients and their families now bring Web-based medical information to their doctors’ offices, although they may not understand it well, nor is the information always reliable. However, it is the very availability of this information that is altering the nexus of power and knowledge in doctor-patient relationships – on the side of more empowering and democratic processes – as well as fostering more informative and educational visits for both parties.1 The technology is also
1 The federally funded MEDLINEplus () provides an excellent example. For a discussion of changing doctor-patient relationships, see Freudenheim (2000, p. A1).
being used to better inform people in a more traditional political sense, as governments in the developed world continue to expand new online information services. These services increase citizens’ abilities to tap into their rights and entitlements, to more thoroughly explore policies and programs, and to inundate politicians with their views and positions by email.2 Scholarly publishing outside the life sciences has also begun to contribute to this greater world of public information, with electronic journals and research websites in many disciplines providing “open access” to their articles and other scholarly resources. Scientists have created, often with government support, substantial open-access indexes and abstract services for research, as well as many full-text archives that can be freely accessed by their colleagues and students globally.3 These new open-access systems still offer only partial, often overlapping, coverage of their respective fields of study. As things currently stand, most electronic journals, including those published by scholarly societies, as well as commercial academic publishers, still require a library or individual subscription to access them. But there is a growing open-access movement afoot among researchers, perhaps best indicated by the nearly 30,000 signatures from scientists in 177 countries on the Public Library of Science petition calling for open access to scientific research: “We believe that the permanent, archival record of scientific research and ideas,” the Public Library of Science website states, “should neither be owned nor controlled by publishers, but should belong to the public, and should be made freely available. 
We support the establishment of international online public libraries of science that contain the complete text of all published scientific articles in searchable and interlinked formats.”4 Those who have signed have agreed to submit to, review for, and edit only those journals that, the website goes on to say, “grant unrestricted free distribution rights to any and all original research reports that they have published, through PubMed Central and similar online public resources, within six months of their initial publication date.” This determination to make open and complete access to scientific knowledge available to medical students in Tanzania, high school teachers in Latvia, bio-chemists in Vietnam, as well as community college students in Montana, represents exactly the sort of ideal for scholarly publishing on a global scale that I hold to be part of the Internet’s great democratic promise. Such moves have been supported by the Open Archives Initiative, which began in 1999, and has developed standards that enable globally distributed research databases to
2 For example, in British Columbia, “InfoSmart is the B.C. government’s strategic framework to improve the way it works and delivers services to the public using information technology” (). In the U.S., members of Congress received 80 million emails last year from constituents (Congress Struggles, 2001). 3 The federally funded PubMed, for example, contains 11 million citations with full-text access to 1,800 journals (); Paul Ginsparg’s Los Alamos National Laboratory self-archiving e-print service will post 35,000 articles this year (http://arXiv.org); Stanford University Library’s HighWire Press offers “one of the largest free full-text science archives on earth” with over 250,000 free full-text articles and hundreds of thousands of pay-for-view articles (); and NEC’s Researchindex (Lawrence, Giles, and Bollacker, 1999) provides access to 300,000 articles from among its four million citations (). Also see, for example, William Y. Arms (2000) on open access principles and Peterson (2001), as well as Robert Cameron’s (1997) proposal for a “freely available universal citation database.” 4 Public Library of Science ().
share a common indexing or metadata system so that they can be searched from a single source.5 More recently, the Budapest Open Access Initiative, funded by the Soros Foundation, has been launched to support and speed up processes that make “research articles in all academic fields freely available on the internet.”6 There is also the Open Knowledge Initiative, which is making MIT’s course materials and course-ware freely available to the public, while the Public Knowledge Project, with which I work at the University of British Columbia, is developing free software to help journals and conferences around the globe publish open-access scholarly resources in an easily managed and well-indexed form.7 This emerging commitment among scholars to make the knowledge they create freely available is at the heart of my own call on the readers and editors of this journal to consider how turning educational research into a more accessible public resource can further the connection between democracy and education. While offering open access to all forms of scholarly research is certainly a global boon to students and faculty as well as curious minds everywhere, it has a special political significance for the social sciences, as this work bears directly on social policies, programs, and practices. If open access to research in the life sciences can create a more democratic and educational dynamic in doctor-patient relationships, then, as I have argued elsewhere, it is worth exploring across the social sciences (Willinsky, 2000). 
Here I am specifically asking researchers in the field of education to weigh the reasons why greater public access to educational research is consistent with our understanding of our own work as fostering education and furthering democratic participation, just as it holds the love of learning and pursuit of knowledge that has driven so many of us in this line of work. But before I go any further let me make it clear that providing public access to educational research takes more than simply posting journal pages on the Web as if it were a giant bulletin board at the back of a great public classroom. It will require rethinking how our research works, once it is published, in terms of how it connects to a larger world. Although we have grown comfortable with stuffing the journal in our bookbag at the end of the day, to open it later at the kitchen table or in cafés, these low-circulation, finely bound volumes are becoming harder to justify against their electronic counterparts. The print journal is proving too expensive for even well-financed research libraries, let alone universities in developing nations, and it is nowhere near as efficient for locating specific ideas or following them across citations, for delving into the data or comparing related studies.8 This is a time, then, for rethinking the scholarly journal (rather than the book, I would hold, perhaps too nostalgically) in ways that relate to the scholarly and public qualities of our work. This essay is not, however, about the technologies behind this new publishing medium. It is devoted to presenting the reasons why educational researchers should do more to foster open, better organized scholarly communication in the name of democracy
5 Open Archives Initiative (). 6 Budapest Open Access Initiative (). 7 Open Knowledge Initiative (); the Public Knowledge Project () is a federally funded research initiative at the University of British Columbia that seeks to improve the scholarly and public quality of academic research through innovative online environments. 8 On the unsustainable costs of journals, see ARL Monograph and Serial Costs in ARL Libraries, 1986-1999 (); on the potential of electronic journal indexing systems, see Willinsky and Wolfson (2001).
and education, rather than setting out technical solutions for achieving this organized openness. Still, I think it important to have some idea of what the actual systems at issue may entail. While “open access” publishing simply refers to providing free access to the complete contents of a journal or other resource, I believe something more is required if we are to truly improve the scholarly and public quality of research. While a number of research groups are developing new publishing tools that improve the quality of access to academic journals, we at the Public Knowledge Project are currently working on four components of online publishing that we believe can significantly improve public access to research in areas such as education: (1) Online systems that enable less technically inclined faculty members to manage refereed journals, scholarly conferences, and other research sites that provide open access to complete studies with support for less experienced research readers, those with disabilities, and those without the latest technology; (2) Comprehensive, open access and automated indexing and archiving systems for online research, which allow readers to locate refereed research, dissertations, and other resources, and conduct fine-grained searches by, for example, research topic, sample characteristics, methodology, works cited, etc.; (3) Research support tools that enable readers to readily move from a given research study to its data set and research instruments, to related studies, reviews, overviews, and glossaries, and to relevant policy, program, and media materials in other databases; and (4) Open forums for researchers, professionals, policymakers, and the public to discuss educational issues, methods, and research agendas within the context of this body of research.9 You can see how this approach to open access publishing would support both the scholarly and public quality of research, as it not only extends public access but enhances faculty
members’ ability to track ideas, conduct peer reviews, and position their own work within the field. I do not, however, want to underestimate what it means to ask journals to move from the paid-subscription world of print to open access publishing, even of the simplest sort. It is obviously a major step for a journal editorial team or professional association to undertake. At this point in the field of education, close to a hundred e-journals, including such notable titles as Educational Researchers and Teachers College Record, have been made freely available online, demonstrating that open access can be sustained in this field through institutional and association support.10 The software for running a peer review journal online is now being made freely available from a number of sources, including the Public Knowledge Project. The Association for Research Libraries, whose member libraries collectively spend $500 million on journals, has understandably begun supporting projects in open access and non-profit online publishing, under the theme of “returning science to scientists.”11 One way of thinking about the financing of open access publishing is to see it as a matter of reallocating that $500 million, moving some
9 See the Public Knowledge Project (), and for the full range of electronic publishing tools being used by academic journals, see McKiernan (2001). 10 See AERA’s Electronic Journals in the Field of Education (). 11 The Association of Research Libraries provides support through the Scholarly Publishing and Academic Resources Coalition (SPARC), where a listing of “publishing resources” can be found (). On new publishing economies, see Bailey (1996-2001) for a complete bibliography, Willinsky (2000a) for a funding model based on research library reallocation of funds, and BioMed Central () for open access supported by charging the authors a $500 processing fee (waived for developing countries).
portion of this money from the often excessive subscription rates of commercial publishers to more direct forms of support for online publishing by the leading research institutions, where a great number of the editors and scholarly association officers work. As we slowly wean ourselves away over the next decade from what is currently the unsustainable and inefficient publication of both print and electronic versions of the same journal, my hope is that we take advantage of these new technologies to explore with research libraries and professional associations an alternative political economy for academic knowledge that is based on open access publishing. At the very least, it would place this public good squarely within the public realm, in far more than a rhetorical sense. As professors of education, we seem especially well-positioned to test the impact of this new communication medium on research’s public role, especially as it might further the relationship between democracy and education. And while there are reasons enough to be skeptical about the educational impact of this new technology (Cuban, 2001; 1986), I do not think that this is the time to sit back and wait for things not to happen, not when the public presence of our own work is at issue. Insofar as we are committed to the value of research in informing policy and practice, we would do well to test whether these new publishing technologies can increase the contribution that research makes to the public’s understanding of education, as well as contribute more to professional practices and policy decisions within education. In asking researchers to consider new ways of testing the public value of their work, I am appealing to the experimental quality of democracy which was identified nearly two centuries ago by Alexis de Tocqueville as part of the very dynamic of the young American republic.
De Tocqueville was inspired by his visit to America in 1831-32 to conclude that, “Democratic eras are periods of experiment, innovation, and adventure” (1969, p. 672, n.1).12 And as this democratic era has not ended, so this “great experiment,” as de Tocqueville named it, should be sustained by innovation and adventure today, when democratic opportunities appear to present themselves. That there is something to democracy constantly in need of renewal and testing was also an operating premise of John Dewey. Consider how the final results are still not in on Dewey’s own democratic experiment with education, for example, which continues to play out in progressive schools to this day.13 Across a wide range of issues, we have yet to exhaust or fully explore the democratic possibilities of deliberation, justice, or equality, just as we continue to arrive over the course of our lifetimes at new understandings of what responsibility and freedom, community and cooperation mean within the democratic states within which we live. My premise is that at this point, given the possibilities for a better informed public, we need to push the democratic experiment by introducing new ways of accessing and utilizing existing sources of information, bodies of knowledge that hold some promise of contributing to policymaking, personal decision-making, and other facets of democratic life.
12 Alexis de Tocqueville uses “experiment” many times in reference to American democracy, as is revealed by doing a search on the word with the online version of Democracy in America (). There is also Abraham Lincoln: “Our popular Government has often been called an experiment” (1861). 13 Dewey was prepared to re-evaluate his progressive education experiment, as he made clear in Experience and Education (1938) and as others continue to do (Ravitch, 2001).
To that end, I devote the remainder of this paper to setting out a political philosophy of public access to scholarly publishing, as it pertains to the study of education. I argue that publishing systems that provide greater public access are likely to help us to better understand and extend Dewey’s democratic theory of education, while enhancing the prospects of creating a more deliberative democratic state; and that they are in a good position to expand education’s role within democracy, as well as increase the impact that education research has on practice, and provide an alternative source of information to the media’s coverage of such issues as education. Think of these arguments as the first step in understanding how this new online publishing medium is going to test our fundamental assumption that education advances democracy. Think of these arguments as inviting the informed consent of the education research community, that its members might knowingly agree to participate in what may well prove to be the principal publishing experiment of this new medium in the years ahead. Now, experimentation with electronic publication is already well underway, and open access publishing has been tested and is now the channel of choice for physicists, who have had open pre-print archives for over a decade.14 Yet the real experimentation with systems that serve a world larger than the researcher’s still awaits the participation of researchers, journal editors and scholarly societies, all of whom have now to make critical decisions about these technologies based on larger issues of social and political responsibility. It is time, I am suggesting, to think beyond the speed and convenience of our own desktop access to research, and to see access to this body of knowledge, in a field such as education, as far more of an experiment in what Dewey might call the communicative quality of democracy.
Dewey, Deliberation, and Democracy
The emphasis that I place on going public with our research follows from Dewey’s concern for the particularly educational quality of democratic life. Can these new publishing systems be made to serve Dewey’s democratic ideal – “to enable individuals to continue their own education” (1916, pp. 100-101)? Can they do so in ways that improve what is currently offered by newsstands, bookstores, the Web, and the media more generally? Can they extend education beyond formal schooling, which is Dewey’s hope for democracy? For Dewey, education in a democracy represents a broadly based and lifelong embrace of learning: “Not only is social life identical with communication, but all communication (and hence all genuine social life) is educative” (1916, p. 5). While Dewey recognizes that “as societies become more complex in structure and resources, the need for formal and intentional teaching and learning increases,” he seeks to work against “an undesirable split between the experience gained in more direct associations and what is acquired in school” (p. 9). This interest in integrating learning into a greater part of life is at the heart of his contribution to progressive education, as well as central to his role as a public intellectual. To pursue Dewey’s political philosophy through these publishing experiments is to see what they can do to integrate the systematic inquiry of research with “the experience gained in more direct associations.” The question is whether greater access to research, as
14 For the physics experiment in open access publishing, see the arXiv.org E-Print Archive (http://arXiv.org).
well as its integration with other forms of knowledge, can enhance how people work and deliberate together. At issue is what might be framed as the democratic quality of communication which is concerned with giving people a means to elaborate, substantiate, and challenge educational ideas, in this case, whether at the policy or school level. For Dewey, democracy is very much a matter of communication: “Men live in a community in virtue of the things they have in common; and communication is the way in which they come to possess things in common” (1916, p. 4). He also insists that “a democracy is more than a form of government; it is primarily a mode of associated living, of cojoint communicated experience” (p. 87). Although he says little of voting booths, candidate debates, or issue advertising, Dewey frequently refers to a basic level of communication among people, especially in this educational sense. The communication of research, however, poses a special challenge to this democratic vision. It is not enough to simply open the doors of the research libraries a little wider. Dewey is concerned with people being overcome by the quantity and variety of knowledge they faced: “Man has never had such a varied body of knowledge in his possession before, and probably never before has he been so uncertain and so perplexed as to what his knowledge means, what it points to in action and consequences” (1988a, p. 249). Elsewhere, Dewey points to how the increasing complexity of the knowledge entailed in organizing modern society creates a fundamental democratic tension between expert and public control: “A class of experts is inevitably so removed from common interests as to become a class with private interests and private knowledge, which in social matters is not knowledge at all” (1988b, p. 365). 
To this Dewey adds the warning that “the world has suffered more from leaders and authorities than from the masses” (ibid).15 Rather than having people resign themselves to expert control, Dewey seeks to increase public access to the pertinent information. His own efforts to support an ill-fated newspaper entitled, Thought News, which sought to sell “the truth” came to naught in his early days in Michigan (Lagemann, 2000, p. 45). Yet he continued to hold to the idea that “a newspaper which was only a daily edition of a quarterly journal of sociology or political sciences would undoubtedly possess a limited circulation and a narrow influence. Even at that, however, the mere existence and accessibility of such material would have some regulative effect” (1988b, p. 349). This regulative effect would be on the side of a better informed public who would then be in a position to work with democracy’s necessary class of experts rather than be governed by them. Such is the intellectual faith in systematic inquiry that drives our work. Dare we put it to the test? Yet Dewey’s careful reading of democracy also leaves me troubled with its emphasis on “associated living, of cojoint communicated experience” by which people “come to possess things in common” (1916, p. 87). This is one notion of democracy that has changed since Dewey first held that “in order to have a large number of values in
15. As I discuss elsewhere (2000b), Dewey’s stance on experts needs to be contrasted with the position of the popular political commentator Walter Lippmann, who asked “whether it is possible for men to find a way of acting effectively upon highly complex affairs by very simple means,” as people’s “political capacity is simple” (1963a, pp. 89-90). Lippmann saw the future lying in the hands of a technocracy of experts: “They initiate, they administer, they settle” (p. 92). Still, Lippmann also held that “a democracy must have a way of life which educates the people for the democratic way of life” if only to make “people safe for democracy” (1963b, pp. 16, 26).
common, all members of the group must have an equable opportunity to receive and take from others. There must be a large variety of shared understandings and experiences” (1916, p. 84). Instead, we see democracy as a means of governing those who do not necessarily share “a large variety of shared understandings and experiences.” Dewey’s sense of the nation as a shared experience tends to limit democracy’s inclusiveness, just as his focus on the nation itself curtails a more global approach to this democratic exchange of understandings and experiences.16 In fact, one argument for going public with educational research is that it can bring into focus the level of diversity within which we already live. Researchers’ own plurality of values, methods and understandings – which includes the very critique of such plurality (e.g., Schlesinger, 1992; Himmelfarb, 1995) – further supports a concept of democracy given to working with differences, rather than seeking a singular truth or vision of, for example, the good school. Democracy has far less to offer, after all, if people are assumed to already be in accord on all the major issues. This pluralism, then, provides the very reason why democratic citizens are necessarily interested in talking with, and learning from, each other. 
Increasing the public presence of a body of research that is itself pluralistic in its values, as well as given to representing the plurality within communities, can only help further what is seen by many as research’s most important democratic task, which is to assert the rights of those who are too often thought to fall outside the ken of shared concepts and culture.17 Certainly, academic culture has its own share of common values, from conventions of evidence to peer review, just as democracy requires the acceptance of a few basic principles of equality and justice.18 Yet within academic culture, such shared values are tempered by an ethos of critique, as well as a championing of the disenfranchised. It may be, then, that this body of research can afford the public not only a greater means of understanding how we live with differences, but a way of talking about that life which goes beyond Dewey’s aim “to have a large number of values in common” (1916, p. 84). Ready access to this research could better equip people, whether educators, reporters, parents, or politicians, to publicly challenge comforting myths and assumptions, while providing missing evidence, histories, and ideas that may inspire a
16. See Katharyne Mitchell (2001) on “the limits of Deweyean liberalism,” as she explores “the potential for educating students for democracy in a non-nationalist framework” (p. 71, original emphasis); Author (in press) on the educational limits of nationalism; and the Council of Europe (1999), which has linked democratic citizenship with social cohesion, addressing issues of exclusion in the fields of housing, health, social protection and education, and calling for a coherent rather than a homogeneous whole.
17. Dewey’s sense of a democratic people possessing “a large number of values in common” (1916, p. 84) was not particularly sensitive to the recent influx of immigrants of the previous decades, nor to communities that fell outside such sharing, such as Native Americans, whose unqualified citizenship was only achieved in 1924, with full voting rights not guaranteed until 1970. Compare Dewey’s repeated contrasts of the “savage” with the civilized in thinking about democracy to the Native American influence on Rousseau’s thinking about democracy and the possibilities of cooperative living (Sioui, 1992). Also see Anthea Taylor (1996) on democratic education’s insensitivities to Aboriginal Australians.
18. I do not, however, see democratic citizens requiring “a commitment to a shared political morality” (Callan, 1997, p. 10). This “commitment” to a democratic morality, which Callan sees existing in “tension” with “the accommodation of pluralism,” constrains democracy’s basic liberties. In educational settings, Callan argues “it becomes rational to nourish a sense of solidarity among those who share that common status so far as solidarity makes it more likely that the relevant rights and duties are honored,” to which I must add that such solidarity reduces the need to honor such rights and democracy itself (1997, p. 98).
way forward. This knowledge will not resolve the disputes. If it can level the playing field at all, it will not be by dumbing things down but by providing access to a powerful source of knowledge, enabling people to explore the limits of their own and others’ claims, while being able to identify the different perspectives and values at play. Dewey writes on the final page of Democracy and Education that “all education which develops power to share effectively in social life is moral” (1916, p. 360). Can the improved access and intelligibility of educational research contribute to people’s experience of such power? Is knowledge still a source of power when it is available to everyone? My argument is that we, as creators of such knowledge, should feel some obligation to take up and test such questions. We need to explore whether we are doing all that we can, in light of new technologies, to promote the democratic lifeblood of educative communication, as Dewey would have it. Yet as I have already suggested, our ideas of democracy do not stand still, and one development that has pushed Dewey’s position on democracy within a pluralistic society while being especially relevant to improving the public quality of education research is the concept of “deliberative democracy” (Bohman and Rehg, 1997; Elster, 1998). For example, in Democracy and Disagreement, Amy Gutmann and Dennis Thompson step over Dewey’s concern with shared values, to focus on how people can talk through and ultimately live with fundamental disagreements, by “seeking moral agreement when they can, and maintaining mutual respect when they cannot” (1996, p. 346). This attention to democracy’s deliberative qualities, as opposed to its procedural or constitutional aspects, creates a civic space for social science research, whether to inform or otherwise be a part of the public articulation of issues and ideas.
Gutmann and Thompson advance three principles – reciprocity, publicity, and accountability – for managing the “economy of moral disagreement” which they recognize as “a permanent condition of democratic politics” (pp. 3, 9). Each of these principles provides a further and final warrant for public-access initiatives in scholarly publishing, just as these initiatives can help us assess the public’s capacity for a more deliberative democracy.19 Reciprocity, first among Gutmann and Thompson’s principles, “asks us to appeal to reasons that are shared or could come to be shared by our fellow citizens” (1996, p. 14). This includes ensuring that the “empirical claims that often accompany moral arguments… be consistent with the most reliable methods of inquiry at our collective disposal” (pp. 14-15). Now, educational research is rife with reliable methods, while the differences among them, and the results which they lead to, can lead researchers at times to emulate that democratic “economy of moral disagreement.” Making research public, as I have stressed, is not intended simply to resolve disagreements once and for all, although it may in rare cases. More often, the research should help clarify the probable or likely implications and consequences of people’s positions. Given that deliberation leads at best to provisional conclusions, “subject to revision in light of new information and better arguments,” open access to an ongoing body of research has a substantial contribution to make to these political processes (p. 356).
19. The impact of “deliberative democracy” has been tested empirically by James Fishkin, who has with various collaborators “conducted fourteen Deliberative Polls in different parts of the world with random samples of respondents, brought together face to face, to deliberate for a few days. The samples have been representative of the relevant populations and they have undergone large, statistically significant changes of opinion on many policy issues” (Fishkin, 1999).
Gutmann and Thompson’s second and third principles – publicity and accountability – also work well with public access to educational research. As Gutmann and Thompson employ these concepts, publicity refers to openly sharing both the “reasons that officials and citizens give to justify political actions, and the information necessary to assess those reasons” (1996, p. 94). The scope of accountability for this deliberative process includes, for Gutmann and Thompson, a need to “address the claims of anyone who is significantly affected” by those actions (p. 129). A careful review of research results can improve the level of accountability, substantiating the claims of those who are significantly affected.20 In sum, these two political philosophers identify what I would hold up as one of the principal democratic warrants for public-access experiments with research: “Respect for [a citizen’s] basic liberty to receive politically relevant information is an essential part of deliberative democracy” (p. 126). To better prepare the public for such deliberative engagements, Gutmann and Thompson suggest that people need to learn more about how “to justify one’s own actions, to criticize the actions of one’s fellow citizens, and to respond to their justifications and criticisms” (p. 65). My argument, in turn, is that scholarly publishing could do more to help people turn to research, as a way of cultivating such critical reasoning abilities, although it will also fall to the schools to teach new lessons on locating and drawing on intellectual resources that best serve these processes of justification and criticism.
Although this is not the place to develop the curricular benefits for the schools of going public with social science research, I would follow Jay Lemke, who in the Web’s earliest days spotted the educational potential of having students pursue this more democratic approach to the larger world of knowledge, as opposed to staying within the confines of the textbook (1994). At this point, I only ask whether we could do more with our research to demonstrate a greater continuity between the democratic theory and practice of the institutions for which we are responsible. What is at stake in such a link is the most commonplace of democratic assumptions, namely that education is necessary for its advancement.
Education, Research, and Democracy
It may seem obvious enough that people need a certain level of formal education to participate effectively in a modern democratic state. Certainly, the pertinent research points to how education makes a difference, although if you look closely, those with only seven years of education in America (albeit a small proportion of the population) are more active voters than all but those with 18 years of schooling (Nie, Junn, and Stehlik-Barry, 1996, p. 16). And while American post-secondary education attendance doubled in the quarter-century after the Second World War, the proportion of people who voted declined in that period, especially since the 1960s (p. 99). Equally so, public primary schooling in developing countries increases the chances of democracy taking hold, while secondary education does not (Kamens, 1988). What is it about education, then, that is sufficient and necessary for democracy? What the political science research team of Nie, Junn, and Stehlik-Barry found, for example, was that formal schooling encourages people to believe “that their fate is
20. This is not to discount what Gutmann and Thompson identify as publicity’s amusement factor, first noted by Jeremy Bentham, that comes of people coming to know enough to catch out public officials (Gutmann and Thompson, 1996, p. 97).
controlled in fundamental ways by the actions and policies of democratic governments” and that “the goals of fairness and equality are important to the long-term stability of the democratic system” (1996, p. 19). Education can predict the degree of political participation because education situates people within “politically important social networks” that offer “proximity to those who make policy decisions” and “accessibility to sources of relevant political information” (p. 45). If that is indeed the case, then educational researchers may have it within their power to at least increase public accessibility to one source of potentially relevant political information. I would not want to exaggerate the political clout of this research. Coming to the table with a handful of pertinent studies hardly compares to old-boy networks and school-tie connections. But those lingering traditions provide reason enough, I feel, for researchers committed to this close connection between democracy and education to support the development of a public information resource to which people, as well as the organizations and agencies that would represent their interests, have equal access. There are, however, two common assumptions about the public role of research that this open access approach challenges. The first is that research is best summarized, translated, and synthesized before being made public. It needs to have the wrinkles and disputes cleared away, so that it can present a singular, definitive answer to pressing questions. 
This mediated approach to preparing research for public consumption has been the tack, for example, of the American Educational Research Association’s outreach activities and the National Research Council consensus panels.21 Yet, we should not assume that the public cannot bear the complexities of current educational research, given how we have learned to live, for example, with the lack of definitive scientific studies on the effectiveness of screening tests for cancer. Greater public familiarity with the discrepancies and disagreements that mark an ongoing body of research will act as a check on the temptation to bring in the experts to resolve social issues, effectively removing those issues from the democratic sphere of deliberation. It will also help people see that disagreements among scientists often reflect conflicts in values within the larger society, again suggesting that science does not somehow stand outside of the democratic sphere (Fischer, 2000, p. 64). A democracy would seem to demand direct access to publicly relevant and credible sources of knowledge, even as those sources are recognized as shaped by their own democratic differences in values and judgments. It may well be that enhancing public access to this knowledge will also prove a boon for inspiring faculty and students to give greater thought to writing for this expanded audience, taking the time to explain themselves in a way that will reward their work with a greater impact than it has previously had a chance of achieving. This openness may well prove a source of insight into the intricate links between the public and scholarly forces that drive research within a public sphere like the schools.
The second common assumption about education research in particular that this open access approach challenges is that the way to enhance its public status is to focus it more systematically on improving school practices, as recent proposals by the National Research Council (1999) and National Academy of Education (1999) recommend
21. The National Research Council seeks “to have a positive influence on public policy and to increase public awareness of scientific, technical, and medical issues” (Choppin and Dinneen, 2000, p. 34).
(Willinsky, 2001a). This may end up doing less for the democratic quality of our lives, as research is used to fine-tune teaching procedures and school programs, while contributing less to what people think about education in a larger sense. The educational contribution that research can make to democracy is far more about providing, for example, the historical contexts of long-standing school issues, posing challenges to people’s basic thinking about learning, envisioning radical alternatives to current programs, and otherwise becoming a part of how people think about what schools can and should do. There is certainly a place for research directed at improving teaching practices within the scope of certain standardized tests, but I think that many researchers would be rightly apprehensive about going public with their work if it means that the immediate applicability of research becomes the principal and most prized aspect of our work as intellectuals. In arguing for improving public access to education research, I recognize that one of the educational issues that we will need to face is bringing the public in on the very scope and diversity of research. Yet I cannot help but think that to encourage this broader awareness of what schooling is about is itself educationally enriching in a public sense. In thinking about how children should be educated, whether in making personal, professional, or policy decisions, people should be able to find ways of getting close to the daily life of the classroom, in ways that researchers have, as well as gain an overview of how students in their nation are performing on international assessments. People would do well to discover how a science student learns to make ethical decisions, just as they need to know whether girls have an equal opportunity to be scientists. They also need a framework for thinking about school choice and public education in terms larger than current instructional efficacy comparisons.
AERA’s motto – “Research Improves Education” – seems to me to unnecessarily limit what research can help us know. The organization would be better served, given what I have argued here, by a motto closer to “Research Informs Education.” It is not, of course, that I imagine everyone using this research on anything like a daily basis, although new work on evidence-based practices in medicine and other forms of professional practice would suggest it could have a regular role to play.22 Far more often, this engagement with research will be a matter of personal interests, pressing public issues, and passing curiosities. Still, we should not underestimate the difference that this occasional interest can make. When the public has turned to research, as citizen groups have around environmental issues, for example, they are “not necessarily hostile to technical data,” political scientist Frank Fischer has found in his study of citizen action groups, especially if that data is “presented and discussed in an open democratic process” (2000, p. 130). Although members of these groups may initially have found it hard to even speak with researchers, before long these concerned citizens were actively involved in the research process itself, giving rise to, for example, “popular epidemiology” in which the public helps to track the distribution of diseases (pp. 151-157). The instance of a researcher-public alliance forming around environmental issues suggests how local and expert knowledge can play a critical part in these deliberative processes: “Instead of questioning the citizen’s ability to participate, we must ask,” Fischer insists, “how can we interconnect and coordinate the different but inherently interdependent discourses of citizens and experts” (2000, p. 45). He calls for a reconstructed concept of professional
22. On the prospects of evidence-based practice for education, see Willinsky (2001b).
practice among researchers whose task is “authorizing space for critical discourse among competing knowledges, both theoretical and local, formal and informal” (p. 27). Such are the goals for making scholarly publishing publicly accessible. Perhaps the most dramatic lesson of how the educational benefits of this public engagement works for both the public and science can be drawn from the AIDS activists of the 1980s and 1990s. As Steven Epstein tells it in Impure Science (1996), these activists successfully struggled for public participation in medical knowledge, which meant, among other things, bringing otherwise overlooked research into the limelight and changing the conduct of clinical trials. Scientists found themselves moved by activists in both an intellectual and ethical sense, while activists “imbibed and appropriated the languages and cultures of biomedical sciences,” acquiring their own forms of credibility in public and scientific deliberations over how to respond to AIDS by “yoking together moral (or political) arguments and methodological (epistemological) arguments” (pp. 335-56). The AIDS struggle established the need for, in the words of ACT-UP activist Mark Harrington, “a lasting culture of information, advocacy, intervention, and resistance” (p. 350). The lesson drawn from the fight against this tragic pandemic that is no less with us today, is that enabling people to play a greater part in directing their own lives amid a complex crisis can lead to better science and an extension of the democratic sphere. The public place of research also needs to be seen on a global scale, where disparities in educational opportunities, and access to knowledge more generally, are greatest. Avinash Persaud, of the State Street Bank in Boston, holds that the current knowledge economy is only increasing the gap between rich and poor nations – a knowledge gap that he calculates (based on number of scientists) to be ten times the income gap.
He asks us to imagine the discrepancies between an imagined economist in Iowa, tapping into “thousands of journals on-line” as well as news services and other resources, and the “many researchers in developing countries” who “lack this opportunity,” as do “civil servants who wish to explore policy options” (2001, pp. 109-110). The problem is not simply a lack of phone-lines and computers. The gap between haves and have-nots is just as much a matter of access to well-organized sources of knowledge. Consider, for example, how critical open access to an e-journal such as the British Medical Journal is to the University of Zimbabwe, which has had to slash its journal subscriptions from 600 to 170 due to rapidly escalating subscription costs. It “has won our hearts because it is free,” reports the university’s medical librarian (Nagourney, 2001). A number of scholarly societies have found it easy enough to grant open access to developing nations for their electronic editions. And even the six major commercial publishers of academic journals, otherwise accused of provoking the crisis in scholarly publishing with their price increases over the last decade (ARL, 2000), have recently announced that they will make 1,000 of the world's top 1,240 medical journals free or deeply discounted for developing countries (Peterson, 2001). As scholars, we appear to now have it within our power to share our knowledge with the larger world of students, teachers and policy-makers. We need to think about how we, as educational researchers, could give more back to education. What we might well find is that the increased scale of this give and take, between public and researchers internationally, could well influence how we work and write in response to the increased educational and democratic value of this knowledge for people everywhere.
Historian Ellen Condliffe Lagemann (2000) has identified educational research as “an elusive science,” as a way of pointing to researchers’ frustrated pursuit of scientific ideals and academic respectability. She claims that, “Since the earliest days of university sponsorship, education research has been demeaned by scholars in other fields, ignored by practitioners, and alternatively spoofed and criticized by politicians, policy makers and members of the public at large” (p. 232). She concludes that what is needed is more systematic planning of research agendas in education, as well as a means of “reconciling the differences that inevitably arise as scholars study such difficult, complex problems” (pp. 240-241). I am suggesting that one way to improve the research agenda is to make the whole research process more open and public, as well as better connected and easier to track, all of which would, in turn, help researchers and the public work together at identifying priorities, opportunities, and gaps in what we know about education. This would be consistent with Lagemann’s critical suggestion that “scholars of education might also more commonly come to acknowledge their responsibility to educate the public about education and about education research” (2000, pp. xiii, 245).
Media, Research, and Democracy
To move academic research more thoroughly into the public domain is to create a substantial alternative source of public information. Democracies have typically relied on a free press to create an informed electorate and an informed governing body, or as Thomas Jefferson put it in a letter in 1787 to Edward Carrington: “were it left to me to decide whether we should have a government without newspapers, or newspapers without a government, I should not hesitate a moment to prefer the latter” (1997). In thinking about making this body of research more widely available, we have lessons and inspiration to draw from the earlier political role of an emerging periodical press, and the printing press more generally. The United States’ “Enlightenment” during those years was driven by a “technology of publicity,” in historian Michael Warner’s estimation, a technology rendered “civic and emancipatory” by Thomas Paine, Benjamin Franklin, and others of the day’s determined democrats (1990, p. 3). Beginning in seventeenth-century Europe, the daring and steady stream of pamphlets, broadsides, and newsletters, amid the risks of state censorship, forged a new sense of public voice, interest, and energy. As historian David Zaret (2000) observes, “practical innovations in political communication preceded and prepared the way for democratic principles” (p. 270). Zaret also makes it clear that for democratic theories and revolutions, these “practical innovations” needed to be combined with John Locke’s “liberal confidence in the capacity for individual self-help and reason” (pp. 275, 270). Print fostered a market whose political force defined what we now call public opinion. I turn, if ever so briefly, to the press’ golden past because the democratic spirit of that age, with its practical innovation and liberal confidence, corresponds far more closely to what inspires this move for open access to scholarship than is reflected in the current state of the press. Today, the media’s democratic force strikes many as dissipated, if not lost completely. Ben H. Bagdikian (2000), the former School of Journalism Dean at
the University of California Berkeley, finds that the emancipatory press of yesteryear has been reduced largely through corporate concentration to “trivialized and self-serving commercialized news,” in his estimation (p. ix). In the preface to the sixth edition of Media Monopoly, Bagdikian observes that “power over the American mass media is flowing to the top with such devouring speed that it exceeds even the accelerated consolidations of the last twenty years” (2000, p. viii). Not only do a handful of mega-corporations control “the country’s most widespread news, commentary and daily entertainment,” but these conglomerates have “achieved alarming success in writing the media laws and regulations in favor of their own corporations and against the interests of the general public” (2000, p. viii).23 I interpret this disenchantment with the press, once democracy’s great hope, to be a further warrant, not surprisingly, for testing whether social science research, which is no less dedicated to the public interest, might offer a substantial and reliable alternative or supplementary source of systematic inquiry and information.24 At this point, the relationship between press and research remains uneasy in ways that suggest that neither feels all that well served by the other. It is common to find researchers, such as Christopher Forrest, a professor of pediatrics and health policy at Johns Hopkins University, accusing the press of, in effect, supporting public shortsightedness, or as Forrest puts it: “The public reads the bottom line. They act on that without putting the study into context. In politics, there is always a context. The same is true for science, but it doesn’t get reported that way” (quoted in Stolberg, 2001, p. WK3).
The press is not above hitting back at researchers, as Sheryl Gay Stolberg, the reporter who cited Forrest, responded that “we live in a dizzying world, where scientists produce a stream of research, and each new study seems to contradict the previous one” (Stolberg, 2001, p. WK3). The problem here may indeed be that the context for interpreting science goes missing, as Forrest points out, but then we do little enough to help reporters or the public establish even the most basic context or background for any given study. This was fine as long as the research was taking place far away from public eyes, where only an intrepid reporter might venture, interrupting the researcher long enough to get a snappy quote or soundbite. If we begin to think about research as part of the public record, financed as so much of it is by public money, then suddenly our relationship to the larger world shifts as we become responsible for a source of public knowledge. What this greater access to research could mean, as I have been describing it, is providing a context for our work, a technology-enabled context in which reporters and readers can readily turn to related studies, overviews, policies, and programs that would make clear how contradictions play out in this difficult work with knowledge. This would improve the
23. Bagdikian is hardly alone in his critique of the press’ declining democratic contribution; in addition to well-known media gadfly Chomsky (e.g., 1998) and the already cited McChesney (1999), see Cappella and Jamieson (1997), Iyengar (1991), Page (1996), and Schiller (1996). The big seven media corporations, as I write, are AOL Time Warner, Bertelsmann, Walt Disney, the News Corporation, Sony, Viacom and Vivendi, with a combined revenue of $153 billion for 2001, and a collective market share of, for example, 80% in U.S. book publishing by revenue (Schiesel, 2002).
24. In support of that supplementary approach, the Public Knowledge Project ran a week-long research support website with a local newspaper which allowed readers to tap into a database of links to research studies related to the paper’s series on technology and education, as well as join discussion forums with researchers and view pertinent teaching materials, policies, and organizations. See “Prototypes” at the Public Knowledge Project ().
press’ coverage of research, but perhaps more importantly, given that scholarship’s methodical pursuit of knowledge is not well suited to the fast-news fare of today’s media, it would enable readers to move from press coverage to the study itself, allowing them to travel as far as they wish into research’s realm.25 The final argument to be made for ensuring that research stands alongside the media as a public source of information comes from the apparent electronic future of the press, which poses its own threat to the press’ traditional service to democracy. Legal scholar Cass Sunstein (2001) has perceptively warned that the Internet is being used to create what might be thought of as gated information-communities. Readers can personalize the news that crosses their screens, pre-selecting topics and sources, which makes them less readers of the news and more info-consumers, “able to see exactly what they want to see” (Sunstein, 2001, p. 5). He holds to the basic democratic principle that “people should be exposed to materials that they would not have chosen in advance. Unplanned, unanticipated encounters are central to democracy itself” (p. 8). Although he affirms, much like Dewey, the importance of citizens having common experiences, which I addressed above, the educational quality of “unplanned, unanticipated encounters” with information, which he sees as critical to democracy, is very close to the heart of the proposal under consideration here. People within a community may have far fewer media experiences in common than they did in the past, but one advantage of this increasing variety is that it may well draw citizens into comparing where they turn for information and entertainment, all of which hardly weakens, I would think, the ties that bind democracy to education. Still, Sunstein offers a healthy caution for an open access project that is set on improving public access to educational research.
If it is going to steer clear of a narrowly cast information consumerism, in its efforts to improve the scholarly quality of that engagement, then public-access systems will need to ensure that contrary and critical commentary are within a click or so of the work that it challenges, just as related work from abroad needs to sit near domestic studies, to keep the parochialism at bay. Contrary viewpoints can still be ignored, of course, but a little less easily, perhaps, and certainly it is more difficult to deny their existence when they loom but a click or two away. The very availability of information in a democracy, whether people attend to it or not, Sunstein holds, “increases the likelihood that government will actually be serving people’s interests,” or as Sunstein cites Justice Louis Brandeis holding, “sunlight is the best of disinfectants” (2001, pp. 90, 176).26 If the measure of a democracy is not to be gauged by how many take up this public knowledge or how often they turn to it, the ready availability of this knowledge can still be said to contribute to the educational and communicative qualities of its citizens’ lives together. Like the public libraries that can be found in the smallest of communities, no less than the newspapers of the smallest town, the presence and possibilities of being able to turn to a given body of knowledge exerts its own force of
25 Todd Gitlin (1980) addresses these issues head-on when he speaks of the press’ focus on “the novel event, not the underlying, enduring condition; the person, not the group; the visible conflict, not the deep consensus; the face that advances the story, not the one that explains or enlarges it” (p. 263).
26 Sunstein also holds that the “absence of the demand [to see some form of information on the part of the people] is likely to be the product of the deprivation,” which I would suggest that we at least test in the case of educational research (2001, p. 111).
reasonableness and reassurance. Here, then, is our chance as educators and knowledge workers of some sophistication to extend the vital force of the media as a source of greater awareness and understanding, as well as to supplement if not challenge its particular framing of what can be known of the world.
Final Remarks
One encouraging bit of news in education over the last few years has been a few signs that the notorious theory-practice gap is narrowing. Gloria Ladson-Billings (1995) commends researchers for their “willingness to listen and learn from practitioners [which] is providing researchers and teacher educators with opportunities to build a knowledge base in conjunction and collaboration with teachers” (p. 755). With this growing knowledge base in hand, now would seem a time for researchers to give more back to teachers by opening that collaboratively developed knowledge to the public and professionals alike. The concern for reciprocity should inspire researchers to pursue new systems of scholarly communication that strengthen the public dimensions of this collaborative spirit. Otherwise, it may turn out that these new technologies for scholarship end up serving little more than the immediate interests of researchers, and as such prove yet another boon for well-financed universities, leaving the rest of the world further behind. The preferred goal that lies ahead, as I have outlined here, is the design and development of systems that address both the public and scholarly quality of our research activities. There is no way of predicting how new media will massage old messages, but we can reasonably expect both public discourse and educational research to be altered. Thus, my interest, as an educator and student of literacy, is in treating these new systems as experiments in how knowledge can extend its contribution within a democratic and educational culture, a culture that has room to grow, one hopes, as part of a larger global society. These experiments are best seen as part of a long and often difficult history in the spreading and sharing, challenging and augmenting, of ideas. As such, it would not be wise to deny the risks associated with such experiments in the history of ideas.
In asking researchers, journal editors and scholarly associations to give, as it were, their informed consent before participating in publishing experiments aimed at improving public access to education research, it is only fair to acknowledge the risks this might entail. These publishing experiments may lead to momentary vertigo, induced by uncertainties over career impact and prestige risk. These new publishing systems will clearly need to be as sensitive to the career aspirations of contributors as to their desire to see this earnest pursuit of knowledge have a larger impact in a global exchange of ideas. Fortunately, the early indications from studies of the impact of e-journals are encouraging for career concerns.27 These experiments may also cause professional associations temporary consternation over the prospect of seemingly irrelevant and irreverent questions being raised about research directions and practices from a newly informed public. Similarly, journal editors may also worry for the academic freedom of
27 Anderson, Sack, Krauss, and O'Keefe (2001) found that free online refereed publications are cited as often as traditional print and slightly more than closely related studies in the same area, and that these open access publications were felt by faculty to fully count for tenure. Steven Lawrence (2001) found in a study of 119,924 conference articles in computer science that more highly cited articles are more likely to be freely available online.
their authors, now that the refuge of inaccessibility will no longer be the great protector of that freedom. It will, however, be that much easier to defend the fruits of academic freedom by being able to present where a single study fits within the larger context of scholarly inquiry. So, too, can this openness foster greater public support for research, one would hope, within an atmosphere of open discussion about the range and scope of academic inquiry. This world of knowing needs to be transformed into a public resource, if only as an alternative to what can otherwise seem like a singular stream of media confluence coursing through some 500 television channels. If nothing else, this open access to research resources will put common assumptions about the value of this knowledge, whether among the public or researchers, politicians or teachers, to the test. Given the innovative and experimental nature of this publishing environment, it becomes important to test these assumptions, by assessing the impact, across a range of measures, of open access scholarly publishing systems on the public, professionals, and policy officials (as well as on progress of academic careers). Our own research plans include asking whether and how the design of these open access publishing systems contributes to people’s ability to consult pertinent research evidence in decision making, to critically evaluate sources of educational information, to link educational practices to related theories, and to place educational issues within a historical perspective. It also seems important to know if the availability of this research supports people’s participation in civic and educational forums, increases their interests in collaborating with the research community, or expands their appreciation of how research works.
Then, there is the question of how this increased access to a wide range of scholarly resources, from data sets to dissertations, adds to the rigor and reliability of peer review processes, just as increased public engagement may work on the direction, design, and writing of research. If there are gains in any of these areas, they will be modest, at best, but all of them are worth pursuing, if only for what such inquiries can tell us about learning and knowledge in this new information environment, as well as about the nature of our own work. Many of the details of creating a more accessible public space for knowledge have still to be worked out, in a similar process to the one public libraries faced in the past, as they set out to overcome the public’s limited access to print over the last two centuries through a number of successful strategies. We have only to imagine how to take the next step in creating places to which people can turn, however rarely or infrequently, when they are taken by the urge to go deep and far into existing bodies of knowledge. We have also to realize that going public with our research will gradually change how we conduct our studies in and outside of schools, how we write about and connect our work to other studies, as well as to larger and local worlds of information. In this way, new publishing and broadcasting systems seem bound to reshape both democracy and education, strengthening the link between them. Or at least, I have argued the reasons why we are under some obligation to test such propositions. Let the democratic experiment continue.
18
Acknowledgements
I would like to thank Anne White and the editors of this journal for their assistance with this article, as well as the Social Science and Humanities Research Council of Canada and the Max Bell Foundation for their support of this work.
References
Alexander, C. J. and Pal, L. A. (Eds.). (1998). Digital democracy: Policy and politics in the wired world. Toronto, ON: Oxford University Press.
Anderson, K., Sack, J., Krauss, L., and O'Keefe, L. (2001). Publishing online-only peer-reviewed biomedical literature: Three years of citation, author perception, and usage experience. Journal of Electronic Publishing, 6(3). Retrieved April 30, 2002, from.
Arms, W. Y. (2000). Economic models for open access publishing. IMP: The Magazine on Information Impacts. Retrieved April 30, 2002, from.
Association of Research Libraries (ARL). (2000). Scholars under siege [web page]. Available at:.
Bagdikian, B. H. (2000). The media monopoly (6th ed.). Boston: Beacon.
Bailey, C. W., Jr. (1996-2001). Scholarly electronic publishing bibliography. Houston: University of Houston Libraries. Retrieved April 30, 2002, from.
Bohman, J. & Rehg, W. (Eds.). (1996). Deliberative democracy: Essays on reason and politics. Cambridge, MA: MIT Press.
Bowen, W. G. & Bok, D. (1998). The shape of the river: Long-term consequences of considering race in college and university admissions. Princeton, NJ: Princeton University Press.
Callan, E. (1997). Creating citizens: Political education and liberal democracy. Oxford, UK: Oxford University Press.
Cameron, R. D. (1997). A universal citation database as a catalyst for reform in scholarly communication. First Monday, 2(4). Retrieved April 30, 2002, from.
Cappella, J. N. and Jamieson, K. H. (1997). Spiral of cynicism: The press and the public good. New York: Oxford University Press.
Chomsky, N. (1998). Propaganda and the control of the public mind. In R. W. McChesney, E. M. Wood, and J. B. Foster (Eds.), Capitalism and the information age: The political economy and the global communication revolution (pp. 180-181). New York: Monthly Press.
Choppin, P. W. & Dinneen, G. P. (2000). The NRC in the 21st century: Report of the task force on NRC goals and operations. Washington, DC: National Research Council. Retrieved April 30, 2002, from.
Congress struggles with flood of e-mail. (2001, March 4). New York Times, p. A16.
Cuban, L. (2001). Oversold and underused: Computers in classrooms. Cambridge, MA: Harvard University Press.
Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College Press.
De Tocqueville, A. (1969). Democracy in America (Trans. G. Lawrence). New York: Doubleday.
Dewey, J. (1916). Democracy and education. New York: Macmillan.
Dewey, J. (1938). Education and experience. New York: Macmillan.
Dewey, J. (1988a). The public and its problems. In The later works, 1925-1953 (Vol. 2: 1925-1927) (Ed. J. A. Boydston). Carbondale, IL: Southern Illinois University Press.
Dewey, J. (1988b). The quest for certainty. In The later works (Vol. 4: 1929) (Ed. J. A. Boydston). Carbondale, IL: Southern Illinois University Press.
Dworkin, R. (2001, April 13). Race and the use of law. New York Times, p. A19.
Elster, J. (Ed.). (1998). Deliberative democracy. Cambridge, UK: Cambridge University Press.
Epstein, S. (1996). Impure science: AIDS, activism, and the politics of knowledge. Berkeley, CA: University of California Press.
Fischer, F. (2000). Citizens, experts and the environment: The politics of local knowledge. Chapel Hill, NC: Duke University Press.
Fishkin, J. S. (1999). Deliberative polling as a model for ICANN membership. Unpublished paper, Berkman Center for Internet & Society at Harvard Law School. Retrieved April 30, 2002, from.
Freudenheim, M. (2000, May 30). New web sites altering visits to patients. New York Times, pp. A1, C14.
Gitlin, T. (1980). The whole world is watching: Mass media in the making and unmaking of the new left. Berkeley, CA: University of California Press.
Gutmann, A. and Thompson, D. (1996). Democracy and disagreement. Cambridge, MA: Harvard University Press.
Hague, B. N. and Loader, B. D. (Eds.). (1999). Digital democracy: Discourse and decision making in the information age. London: Routledge.
Heek, R. (Ed.). (1999). Reinventing government in the information age: International practices in IT enabled public sector reform. London: Routledge.
Himmelfarb, G. (1995). On looking into the abyss: Untimely thoughts on culture and society. New York: Vintage Books.
Iyengar, S. (1991). Is anyone responsible: How television frames political issues. Chicago, IL: University of Chicago Press.
Jefferson, T. (1997). Letter of Thomas Jefferson to Edward Carrington, 1787. The letters of Thomas Jefferson: 1743-1826. Groningen, Netherlands: Humanities Computing. Retrieved April 30, 2002, from.
Kamens, D. H. (1988). Education and democracy: A comparative institutional analysis. Sociology of Education, 61, 114-127.
Ladson-Billings, G. (1995). Multicultural teacher education: Research, practice, and policy. In J. A. Banks & C. A. McGee Banks, Handbook of research on multicultural education (pp. 747-759). (ERIC Document Reproduction Service No. ED 382 738). New York: Macmillan.
Lagemann, E. C. (2000). An elusive science: The troubling history of education research. Chicago: University of Chicago Press.
Lawrence, S. (2001). Online or invisible? Nature, 411(6837), 521. Retrieved April 30, 2002, from.
Lawrence, S., Giles, L. C., and Bollacker, K. (1999). Digital libraries and autonomous citation indexing. IEEE Computer, 32(6), 67-71. Retrieved April 30, 2002, from.
Lemke, J. (1994). The coming paradigm wars in education: Curriculum vs. information access. In Cyberspace superhighways: Access, ethics, and control, Proceedings of the Fourth Conference on Computers, Freedom, and Privacy (pp. 76-85). Chicago: John Marshall Law School. Retrieved April 30, 2002, from.
Lincoln, A. (1861, July 4). Message to Congress. In The official records of the Union and Confederate armies, Series IV, I, 311-321, p. 35. Retrieved April 30, 2002, from.
Lippmann, W. (1963a). The public and its role. In The essential Lippmann: A political philosophy for liberal democracy (pp. 85-125). New York: Random House.
Lippmann, W. (1963b). The dilemma of liberal democracy. In The essential Lippmann: A political philosophy for liberal democracy (pp. 3-26). New York: Random House.
Macedo, S. (Ed.). (1998). Deliberative politics: Essays on Democracy and Disagreement. New York: Oxford University Press.
McChesney, R. W. (1999). Rich media, poor democracy: Communication politics in dubious times. New York: Free Press.
McKiernan, G. (2001). EJI(sm): A registry of innovative e-journal features, functionalities, and content. Ames, IA: Iowa State University Library. Retrieved April 30, 2002, from.
Mitchell, K. (2001). Education for democratic citizenship: Transnationalism, multiculturalism, and the limits of liberalism. Harvard Educational Review, 71(1), 51-78.
Nagourney, E. (2001, March 20). For medical journals, a new world online. New York Times, pp. 1-2.
National Academy of Education (NAE). (1999). Recommendations regarding research priorities: An advisory report to the National Educational Research and Policy and Priorities Board. New York: National Academy of Education.
National Research Council (NRC). (1999). Improving student learning: A strategic plan for educational research and its utilization. Committee on a Feasibility Study for a Strategic Educational Research Program. Washington, DC: National Academy Press.
Page, B. I. (1996). Who deliberates: Mass media in modern democracy. Chicago, IL: University of Chicago Press.
Persaud, A. (2001). The knowledge gap. Foreign Affairs, 80(2), 107-117.
Peterson, M. (2001, July 9). Medical journals to offer lower rates in poor nations. New York Times, p. A3.
Putnam, R. D. (2000). Bowling alone: The collapse and revival of American community. New York: Simon and Schuster.
Ravitch, D. (2001). Left back: A century of battles over school reform. New York: Touchstone.
Ross, L. (2001, April 9). Dept. of Big Thoughts: The 1845th. New Yorker, p. 39.
Schiesel, S. (2002, March 11). The media giants: Overview, the corporate strategy. New York Times, C1.
Schiller, H. I. (1996). Information inequality: The deepening social crisis in America. New York: Routledge.
Schlesinger, A. (1992). The disuniting of America: Reflections on a multicultural society. New York: Norton.
Shavelson, R., Feuer, M., and Towne, L. (2001). A scientific basis for educational research? Themes and lessons from a workshop. Paper presented at AERA, Seattle.
Sioui, G. E. (1992). For an Amerindian autohistory: An essay on the foundations of a social ethic (S. Fischman, Trans.). Montreal: McGill-Queens University Press.
Stolberg, S. G. (2001, April 22). Science, studies and motherhood. New York Times, p. WK3.
Sunstein, C. R. (2001). Republic.com. Princeton, NJ: Princeton University Press.
Taylor, A. (1996). Education for democracy: Assimilation or emancipation for Aboriginal Australians. Comparative Education Review, 40(4), 426-438.
Warner, M. (1990). The letters of the republic: Publication and the public sphere in eighteenth-century America. Cambridge, MA: Harvard University Press.
Wilhelm, A. G. (2000). Democracy in a digital age: Challenges to political life in cyberspace. New York: Routledge.
Willinsky, J. (2001a). The Strategic Education Research Program and the public value of research. Educational Researcher, 30(1), 5-14. Retrieved April 30, 2002, from.
Willinsky, J. (2001b). Extending the prospects of evidence-based education. IN>>SIGHT, 1(1), 23-41. Retrieved April 30, 2002, from.
Willinsky, J. (2000a). Proposing a knowledge exchange model for scholarly publishing. Current Issues in Education, 3(6). Retrieved April 30, 2002, from.
Willinsky, J. (2000b). If only we knew: Increasing the public value of social science research. New York: Routledge.
Willinsky, J. & Wolfson, L. (2001). The indexing of scholarly journals: A tipping point for publishing reform? Journal of Electronic Publishing, 7(2). Retrieved April 30, 2002, from.
Zaret, D. (2000). Origins of democratic culture: Printing, petitions, and the public sphere in early modern England. Princeton, NJ: Princeton University Press.
| https://www.scribd.com/doc/88608997/Democracy-and-Education-The-Missing-Link-May-Be-Ours-John-Willinsky | CC-MAIN-2015-48 | en | refinedweb |
Browse by Keyword: "require"
← previous Page 2 next →
groupdocs-javascript Javascript client for GroupDocs API
grunt-commonjs-coffee Deprecated in favor of grunt-browserify
grunt-contrib-commonjs Wrap CoffeeScript or JavaScript into a CommonJS compatible require definition
grunt-contrib-requiregrep Grunt task that creates AMD modules by searching for dependencies on source files
grunt-dead-simple-include An awesome little utility for no fuss source file includes through grunt, compatible with all file types in your arsenal.
grunt-dependencygraph Visualize your CommonJS or AMD module dependencies.
grunt-dependo Visualize your CommonJS or AMD module dependencies.
grunt-durandal Grunt Durandal Builder - Build durandal project using a custom require config and a custom almond
grunt-dust-require Grunt.js plugin to compile dustjs templates.
grunt-glue-js Grunt task to build CommonJS modules for the browser using gluejs.
grunt-gluejs2 A Grunt plugin for GlueJS v2.2+.
grunt-include-replace Grunt task to include files and replace variables. Allows for parameterised includes.
grunt-include-replace-cwd Grunt task to include files and replace variables. Allows for parameterised includes.
grunt-include-replace-if Grunt task to include files, replace variables and remove if blocks. Allows for parameterised includes.
grunt-include-replace-s2 Grunt task to include files, replace variables and remove if blocks. Allows for parameterised includes. bug fixed
grunt-includejs include.js is a node preprocessor for including one JavaScript file into another via the "@include" operator. It's a fork of the wepp module.
grunt-init-browser Grunt init template for generating and testing multiple versions of a browser script
grunt-mocha-require-phantom Grunt plugin for testing requireJS code using mocha in phantomJS and regular browsers
grunt-nested-exports A gruntjs plugin to autocreate index files that exports nested nodejs modules.
grunt-ozjs grunt tasks for oz.js and ozma.js
grunt-ozjs-tudou grunt tasks for oz.js and ozma.js
grunt-require Easy switching between built and un-built versions of require.js based applications with grunt
grunt-requiregrep Grunt task that creates AMD modules by searching for dependencies on source files
grunt-urequire A Grunt wrapper around uRequire <>
grunt-useuses A grunt plugin allowing you to use `@uses` annotations to load dependencies for your javascript files.
grunt-wildamd Grunt task to generate AMD namespace modules from module dependencies that use globbing (pattern matching) syntax
html-to-js Make HTML require()-able
httprequire A way to include remote node.js modules
include-all An easy way to include all node.js modules within a directory. This is a fork of felixge's awesome module, require-all () which adds the ability to mark an include as **optional**.
include-folder expose the content of each file in a folder as an object property.
includemvc Helpers to require files without the relative paths.
initialise Lazy initialization / require wrapper. Makes sure you only load the modules once when you need them.
is-require Tests whether an JavaScript AST node is likely to be a valid `require` call.
isg-connector Code of universal connection module for JavaScript.
lazy-require Lazy require allows you to require modules lazily, meaning that when you lazy require a missing module, it is automatically installed. If the installation or require fails, the error is returned to the lazy require callback.
lint-deps Command-line tool to check for dependencies that are not listed in package.json, and optionally add them. Also tells you when packages that aren't used anywhere are listed in package.json.
literalify A browserify transform for replacing require calls with arbitrary code.
live-require add scripts to a page programmatically
load-common-grunt-tasks Load common grunt tasks and configs so you don't need to redefine them for every module
load-grunt-subtasks Load multiple grunt tasks from subprojects using globbing patterns
load-grunt-tasks Load multiple grunt tasks using globbing patterns
load-modules Load the resolved filepaths to npm modules, either directly in your config or from Underscore/Lo-Dash templates.
loadit Asynchronously loads (requires) all files in the given directory and all recursive subdirectories that match the given regular expression.
lua-loader Manage your Lua modules with npm
make-commonjs-depend Create dependencies for Makefiles. It's like makedepend but for JavaScript.
matchkeys Package.json utility for matching, comparing or filtering keywords against one or more arrays of keywords.
mattisg.requirewith Wrapper to require() modules with dependency injection.
micro-modules Micro implementation of commonjs modules.
microplugin A lightweight plugin / dependency system for javascript libraries.
mimosa-canary A modern browser development toolkit and application assembler. Compile, lint, optimize, serve and more.
mimosa-require AMD/RequireJS module for Mimosa browser development workflow tool
modular-amd A simple JavaScript loader, based on the AMD pattern.
modular-js Modular JavaScript.
module-bundler ModuleBundler combines javascript files and provides a minimal CommonJS-like loader to access them
module-deps walk the dependency graph to generate json output that can be fed into browser-pack
module-resolver Asynchronous require.resolve() implementation
more-minerals we require more minerals
multi-require require a folder, resolving name, validation supported
naked-objects Shorthand notation for Object.create(null) via node module hook
named-require Name-spacing of the Node.js require cache
nesh-require nesh plugin for a command that requires a module and assign it to a variable with the same name
ng-require-dir node require dir
node-define Makes AMD modules require’able in node by adding a global define function
| https://www.npmjs.org/browse/keyword/require/1/ | CC-MAIN-2014-10 | en | refinedweb |
Summary
An important feature in Flex 4 is a new component architecture, Spark, that allows a complete separation of a component's view from its display logic. This article provides a tutorial introduction to creating custom Flex 4 skins using the Spark architecture.
A key innovation in the Flex 4 SDK is a thorough separation of a component's visual appearance from its display logic. By contrast, previous Flex versions required that a component be defined in a single MXML or ActionScript file: component layout, possible subcomponents, logic defining how a component should behave in the presence of data, as well as styling, all could be provided in a single component definition. The ability to reference external stylesheet files provided a modest measure of controller-view separation in earlier Flex versions.
While convenient, limited model-view separation in previous Flex SDKs also meant that developers interested in providing a custom look and feel—or skins—for their Flex components had to use FlexBuilder or similar Flex-centric developer tools to work with the visual aspects of a component. That, in turn, made it difficult for developers and designers to work together on a Flex application, since developers and designers are accustomed to different sets of tools. Flex 4 solves that problem by defining a new component architecture, Spark, that separates component logic and view into different artifacts. These artifacts are tailored to work well with developer and designer tools, respectively.
This article provides a tutorial on how to take advantage of Spark architecture features to design a custom look-and-feel for a Flex component. A custom look-and-feel brings to a Flex application benefits beyond visual pizzazz. For example, having a separate view definition allows an application to swap component skins at runtime and to radically alter almost every visual aspect of a component, such as layout, font sizing, and so on. Such a modular approach to component design, in turn, makes it easy to build Flex applications that gracefully adapt to their runtime environments, such as varying display sizes or the requirements of mobile devices.
Although becoming familiar with just a handful of Spark architecture concepts makes skinnable component development feel natural, Flex 4 does not require that every component in a Flex application follow the new architecture. Flex 4 applications can continue using components based on the earlier Flex component architecture, Halo, although the best practice is to use Spark components, whenever possible. In addition, it is also possible to mix and match Halo and Spark components within the same Flex application. Such component interoperability was a key requirement for Flex 4, as it enables seamless migration of older Flex applications to Flex 4.
When moving an existing Flex application to Flex 4, you can keep using your Halo-based components, which will continue to work as expected. At the same time, an instructive way to introduce the benefits of the new Spark architecture is to migrate an existing Halo-based Flex component to Spark. This article's example builds on the temperature converter application introduced in the earlier Artima article, Two-Way Data Binding in Flex 4. The simple application converts temperature values from Fahrenheit to Celsius and vice versa:
Enter a numeric value into either field and press enter. Press Toggle to enable or disable the component. To view the source code, right-click or control-click on the application.
Several applications may wish to make use of the temperature conversion functionality. It is advantageous, therefore, to define the converter as a Flex component so that it can be reused in any Flex application. Using Flex 3's Halo component architecture, one way to define such a component is as follows:
<?xml version="1.0" encoding="utf-8"?>
<mx:VBox xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script>
        <![CDATA[
            private function onCelsiusEntered(e: Event): void {
                fahrenheitInput.text = (Number(celsiusInput.text) * 9/5 + 32) + "";
            }
            private function onFahrenheitEntered(e: Event): void {
                celsiusInput.text = ((Number(fahrenheitInput.text) - 32) * 5/9) + "";
            }
        ]]>
    </mx:Script>
    <mx:Form>
        <mx:FormItem label="Celsius">
            <mx:TextInput id="celsiusInput" enter="onCelsiusEntered(event)"/>
        </mx:FormItem>
        <mx:FormItem label="Fahrenheit">
            <mx:TextInput id="fahrenheitInput" enter="onFahrenheitEntered(event)"/>
        </mx:FormItem>
    </mx:Form>
</mx:VBox>
A notable feature of this component definition is that it combines layout as well as display logic. The component itself extends the VBox container, and includes a single Form subcomponent that lays out two text labels and two text input fields. The display logic—or controller—part of the component is defined in ActionScript code inside the Script tags: the two event handler methods capture input from the Celsius or Fahrenheit input fields, respectively, and update the opposite field's value.
The Halo-based converter component can be used in any Flex 3 application as follows:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" xmlns:local="*">
    <local:Converter/>
</mx:Application>
The convenience of being able to provide such an all-in-one definition for a component comes at the cost of some flexibility, however. Consider, for instance, that some users would prefer the Fahrenheit input field to appear before the Celsius one, based on regional preferences. In the current design, the order of the text input fields is baked into the component, so to speak: you would have to define a new version of the entire component to achieve that requirement, duplicating a significant portion of the component's code. Even with two components, it would require tedious, boilerplate code to determine at runtime which of the two components to display based on the user's preferences.
Such inflexibilities in a component's presentation can become a problem as Flex components are embedded into a larger application. For instance, application requirements may dictate that the temperature converter application enter a disabled state—a state in which it cannot accept user input. That requirement can be implemented by setting the component's enabled property to false, as the following example shows:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" xmlns:local="*">
    <mx:VBox>
        <local:Converter id="converter"/>
        <!-- Toggles the converter's enabled state. -->
        <mx:Button label="Toggle" click="converter.enabled = !converter.enabled"/>
    </mx:VBox>
</mx:Application>
Using the Halo-based component design, setting the entire component to a
disabled state yields somewhat unattractive results, with the
entire application assuming a different background color. It would be better to
handle a disabled state more gracefully by disallowing input only on the text
fields, and leaving the component's background color unchanged. Installing a
component state listener could achieve that effect—however, that would add
further display-specific code to the entire component.
The Spark architecture addresses these, and many other, component limitations in an elegant way: The visual and display-logic aspects of the application are split into separate files. The two component definitions work together by each adhering to a contract defined in the Spark architecture. The rest of this article will illustrate how to refactor the temperature converter component to one based on the Spark architecture.
It is not always obvious which aspects of a Flex component belong in the view and which should be defined inside the controller. As a general principle, behavior intended to be reused regardless of visual changes to the application belongs in the controller; component aspects envisioned to change based on customized presentation, on the other hand, should be defined in the view.
Regardless of how the two text fields are displayed in the temperature converter component, entering text into one field should cause the value in the other field to change. That functionality is a good candidate for inclusion in the controller, since that behavior should be consistent across presentations. On the other hand, the display and layout order of the text fields are better defined in the view.
The view aspects of a Spark component are defined in a class that extends
SparkSkin, the root of the Spark skin hierarchy. A
SparkSkin is often defined in MXML. In fact, the relatively simple
XML structure of a skin is a key enabler when editing Flex skin definitions in
tools, such as Adobe's Catalyst or Illustrator.
A skin file contains every display-related aspect of a component, such as graphic elements, subcomponents, layouts, images, transitions, and so on. A novel feature of Spark skins is that they can include a new XML-based declarative graphics language for the Flash Player, FXG. FXG allows you to declare a wide range of graphics operations using XML tags—data structures that design tools can easily read and write. FXG-based declarations are translated by the Flex compiler to efficient Flash Player graphics bytecode, making FXG a good tool for defining the graphics-related aspects of a component.
Another useful feature of Spark skins is that they allow developers to define skins and components in a modular fashion. Indeed, most Flex components consist of multiple subcomponents. Each subcomponent may have its own independent skin definition. The Flex runtime matches a component's skinnable subcomponents with skin parts declared in the skin.
To see how this works in practice, consider the following skin definition for the temperature converter:
<?xml version="1.0" encoding="utf-8"?>
<s:SparkSkin xmlns:
    <fx:Metadata>
        [HostComponent("DegreeConverter")]
    </fx:Metadata>
    <s:states>
        <s:State
        <s:State
    </s:states>
    <mx:Form>
        <mx:FormItem
            <s:TextInput
        </mx:FormItem>
        <mx:FormItem
            <s:TextInput
        </mx:FormItem>
    </mx:Form>
</s:SparkSkin>
The first element of this skin definition is metadata identifying the host
component for this skin. While optional, such a declaration enables the skin to
have access to the host component itself via the
hostComponent
property.
A declaration of two skin states comes next. In Spark-based components a skin and a component each maintains its own state. As the component state changes, the component can notify its associated skin to change the skin state, too. The temperature converter component has only two states: One for the enabled status of the component, and the other one for the disabled state.
These states can be referred to anywhere inside the Flex skin definition using
the new Flex 4 state syntax: the state name, followed by a dot, followed by the
name of the property, and finally followed by the value the property should
assume in the specified state. For instance, the temperature converter skin
specifies that each
FormItem should have an
alpha
value of 0.5 in the disabled state, and that the text input boxes'
disabled property values in the
enabled component
state should be
false (in other words, the input fields should be
disabled when the component is disabled, and enabled when the component is
enabled).
Having defined a component skin, the next task is to code up the component
itself. Skinnable Spark components extend the
SkinnableComponent
class. As the following implementation shows, there is no display-related code
in the component itself:
package com.artima {

    import flash.events.Event;
    import spark.components.TextInput;
    import spark.components.supportClasses.SkinnableComponent;

    [SkinState("normal")]
    [SkinState("disabled")]
    public class DegreeConverter extends SkinnableComponent {

        [SkinPart(required="true")]
        public var celsiusInput: TextInput;

        [SkinPart(required="true")]
        public var fahrenheitInput: TextInput;

        override public function set enabled(value:Boolean) : void {
            if (enabled != value)
                invalidateSkinState();
            super.enabled = value;
        }

        override protected function getCurrentSkinState() : String {
            if (!enabled)
                return "disabled";
            return "normal";
        }

        override protected function partAdded(partName: String, instance: Object): void {
            if (instance == celsiusInput)
                celsiusInput.addEventListener(Event.CHANGE, onCelsiusInput);
            if (instance == fahrenheitInput)
                fahrenheitInput.addEventListener(Event.CHANGE, onFahrenheitInput);
        }

        override protected function partRemoved(partName:String, instance:Object) : void {
            if (instance == celsiusInput)
                celsiusInput.removeEventListener(Event.CHANGE, onCelsiusInput);
            if (instance == fahrenheitInput)
                fahrenheitInput.removeEventListener(Event.CHANGE, onFahrenheitInput);
        }

        private function onCelsiusInput(e: Event): void {
            fahrenheitInput.text = (Number(celsiusInput.text) * 9/5 + 32) + "";
        }

        private function onFahrenheitInput(e: Event): void {
            celsiusInput.text = ((Number(fahrenheitInput.text) - 32) * 5/9) + "";
        }
    }
}
The component's code centers around interacting with skin parts and managing
component state. As mentioned earlier, skin parts facilitate a modular approach
to component design. The temperature converter's two text input fields are defined
as skin parts so that they can easily be referenced between the skin and the
component: Declaring the same
id value in the skin as the
component's property name allows the Flex runtime to automatically associate a
skin element with sub-components inside a Spark component. For this association
to work, the
SkinPart metadata must be attached to the
component's property. In this example,
fahrenheitInput and
celsiusInput both have that annotation. The skin's corresponding
text fields, in turn, have
fahrenheitInput and
celsiusInput
id values.
In addition to associating skin elements with sub-components, skin parts play
another important role in the lifecycle of a Spark component: Skin parts can be
associated with component state, and adding and removing skin parts from a
component—perhaps as a result of changing component state—causes the Flex
runtime to call the component's
partAdded() and
partRemoved() methods. Implementing those methods, in turn, allows
a component author to interact with the newly added sub-component, often for
the purpose of adding and removing event listeners.
The example code above adds event listeners to each text input field as those fields are added to the component, and removes those listeners if the input fields are removed as well.
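The symmetry between partAdded() and partRemoved() matters because every registered listener must eventually be deregistered to avoid leaks and stale callbacks. A hypothetical, language-neutral Java sketch of that pairing (the Part and Component classes are invented stand-ins, not the Flex API):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a skin part that can notify listeners.
class Part {
    private final List<Runnable> listeners = new ArrayList<>();
    void addListener(Runnable l) { listeners.add(l); }
    void removeListener(Runnable l) { listeners.remove(l); }
    int listenerCount() { return listeners.size(); }
    void fire() { for (Runnable l : new ArrayList<>(listeners)) l.run(); }
}

class Component {
    int events;
    private final Runnable onChange = () -> events++;

    // Called when the skin supplies the part (like partAdded()).
    void partAdded(Part part) { part.addListener(onChange); }

    // Called when the part goes away (like partRemoved()); removing the
    // same listener instance keeps the registration balanced.
    void partRemoved(Part part) { part.removeListener(onChange); }
}
```

After partRemoved() runs, the part holds no reference back to the component, so a removed part can no longer trigger the component's handlers.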
This simple temperature converter example has only two application states:
disabled and
normal. These states are declared both
in the skin and in the component as metadata elements. The Flex compiler
requires that a Flex skin reference only valid component states. The
component's current state is provided to the skin via the
getCurrentSkinState() method. Since the
enabled property
is defined for all Flex
UIComponents, we must override that
property's setter method and cause the skin to request the current state value
by calling
invalidateSkinState(). When the skin obtains the
current component state, it matches that state with the corresponding state
defined in the skin. The Flex runtime then ensures that all the skin property
values are set according to the specified state.
The skin and component files together define the temperature converter Spark component. The simplest way to use this component from a Flex application is to associate a skin with the component inside an MXML declaration:
<fx:Script>
    <![CDATA[
        import com.artima.ConverterSkin;
    ]]>
</fx:Script>
<s:Group>
    <s:layout>
        <s:VerticalLayout/>
    </s:layout>
    <artima:DegreeConverter
    <s:Button
</s:Group>
</s:Application>
Alternate ways of assigning the component's
skinClass property
include ActionScript and CSS. Having defined a separate skin for a component
allows a Flex application to switch skins at runtime, or to pick the right skin
when the application starts up. For instance, an alternate skin can be
specified to reverse the order of the text input fields; such a skin class
could then be assigned to the component's
skinClass property at
runtime.
Experimenting with this simple skinned component already shows improvement over
the older version: When the component is placed in a
disabled
state, the text fields' alpha values are set to 0.5, providing a smooth,
semi-transparent appearance.
Have an opinion on styling Flex Spark components? Discuss this article in the Articles Forum topic, Creating a Custom Look and Feel for Flex 4 Components.
Adobe's Flash Builder 4
Flex 4 SDK
Gumbo Project
Flex.org
Frank Sommers is Editor-in-Chief of Artima. | http://www.artima.com/articles/flex_4_styling.html | CC-MAIN-2014-10 | en | refinedweb |
I. The attached patch fixes allocation listing for me. Thanks, Cole
diff --git a/src/storage_backend.c b/src/storage_backend.c
index 787630c..54e9289 100644
--- a/src/storage_backend.c
+++ b/src/storage_backend.c
@@ -36,6 +36,7 @@
 #include <fcntl.h>
 #include <stdint.h>
 #include <sys/stat.h>
+#include <sys/param.h>
 #include <dirent.h>
 #if HAVE_SELINUX
@@ -204,8 +205,15 @@ virStorageBackendUpdateVolTargetInfoFD(virConnectPtr conn,
     if (allocation) {
         if (S_ISREG(sb.st_mode)) {
 #ifndef __MINGW32__
-            *allocation = (unsigned long long)sb.st_blocks *
-                (unsigned long long)sb.st_blksize;
+
+            unsigned long long blksize;
+#ifdef DEV_BSIZE
+            blksize = (unsigned long long) DEV_BSIZE;
+#else
+            blksize = (unsigned long long)sb.st_blksize;
+#endif
+            *allocation = (unsigned long long)sb.st_blocks * blksize;
+
 #else
             *allocation = sb.st_size;
 #endif
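The bug the patch addresses: on typical Unix systems, stat's st_blocks field counts blocks of DEV_BSIZE (512) bytes, while st_blksize is the preferred I/O block size (often 4096), so multiplying st_blocks by st_blksize overstates the allocation, commonly by a factor of 8. A hypothetical Java sketch of the arithmetic (the class and numbers are illustrative only):

```java
public class AllocationDemo {
    static final long DEV_BSIZE = 512; // unit of st_blocks on typical Unix systems

    // Correct allocation, mirroring the patched code.
    static long allocation(long stBlocks) {
        return stBlocks * DEV_BSIZE;
    }

    // The buggy computation: st_blksize is the preferred I/O size,
    // not the unit in which st_blocks is reported.
    static long buggyAllocation(long stBlocks, long stBlksize) {
        return stBlocks * stBlksize;
    }

    public static void main(String[] args) {
        long blocks = 2048;                                // a 1 MiB file: 2048 * 512 bytes
        System.out.println(allocation(blocks));            // 1048576
        System.out.println(buggyAllocation(blocks, 4096)); // 8388608 (8x too big)
    }
}
```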
Java 6.0 Collection Framework
Java 6.0 added several types to the collections framework, including the AbstractMap.SimpleEntry class and the Deque interface, which represents a double-ended queue; existing classes such as LinkedList were updated to implement Deque.
14 Sharing Application Module View Instances
This chapter includes the following sections:
Section 14.1, "About Shared Application Modules"
Section 14.2, "Sharing an Application Module Instance"
Section 14.3, "Defining a Base View Object for Use with Lookup Tables"
Section 14.4, "Accessing View Instances of the Shared Service"
Section 14.5, "Testing View Object Instances in a Shared Application Module"
14.1 About Shared Application Modules
14.1.1 Shared Application Module Use Cases and Examples
14.1.2 Additional Functionality for Shared Application Modules
You may find it helpful to understand other Oracle ADF features before you start working with shared application modules. Following are links to other functionality that may be of interest.
For details about configuring application module instances to improve runtime performance, see Chapter 49, "Using State Management in a Fusion Web Application" and Chapter 50, "Tuning Application Module Pools and Connection Pools." For API details about the oracle.jbo package, see the following Javadoc reference document:
14.2 Sharing an Application Module Instance
14.2.1 How to Create a Shared Application Module Instance
To create a shared application module instance, use the Project Properties dialog. You define a logical name for a distinct, separate root application module that will hold your application's read-only data.
To create a shared application module instance:
In the Application window, right-click the project and choose Project Properties.
14.2.2 What Happens When You Define a Shared Application Module
When you define a shared application module, JDeveloper sets a pooling property in the application module's configuration to false to specify that requests from multiple sessions can share a single instance of the application module, which is managed by the application pool for the lifetime of the web server virtual machine. When you do not enable application module sharing, JDeveloper sets the value true to repopulate the data caches from the database for each application session on every request.
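The difference between the two settings can be sketched as a choice between one application-scoped instance and a fresh instance per request. A hypothetical Java sketch (the LookupService and ServiceRegistry classes are invented, not the ADF pool implementation):

```java
// With sharing enabled, one application-scoped instance serves all sessions;
// without it, the service is re-created per request and its data caches are
// repopulated each time.
class LookupService {
    LookupService() {
        // imagine: run the lookup queries and populate read-only caches here
    }
}

class ServiceRegistry {
    private LookupService shared; // application-scoped instance

    LookupService acquire(boolean sharingEnabled) {
        if (!sharingEnabled) {
            return new LookupService(); // caches rebuilt on every request
        }
        if (shared == null) {
            shared = new LookupService(); // built once, reused by all sessions
        }
        return shared;
    }
}
```

For static reference data, the shared form avoids redundant database round trips, which is the motivation for shared application modules.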
You can edit configuration properties such as jbo.ampool.maxpoolsize by opening the application module in the overview editor and choosing Configurations from the navigation menu. JDeveloper saves the bc4j.xcfg file in the ./common subdirectory relative to the application module's XML document. For example, in the /summit/model/services/common directory of the SummitADF application workspace, you will see the two named configurations for the BackOfficeAppModule application module, as shown in Example 14-1. Specifically, the BackOfficeAppModuleShared configuration sets the jbo.ampool runtime properties on the shared application module instance. For more information about the ADF Business Components application module pooling and runtime configuration of application modules, see Chapter 50, "Tuning Application Module Pools and Connection Pools."
Example 14-1 LookupServiceAMShared Configuration in the bc4j.xcfg File
"/> <
If you open the Model.jpx file in the ./src directory of the application's Model project, you will see that the SharedLookupService application module's usage definition specifies SharedScope = 2, corresponding to application-level sharing, as shown in Example 14-2. An application module that you set to session-level sharing will show SharedScope = 1.
Example 14-2 Application Module Usage Configuration in the .jpx File
<JboProject xmlns="" Name="Model" SeparateXMLFiles="true" PackageName="oracle.summit.model">
  . . .
  <AppModuleUsage Name="SharedLookupService"
                  FullName="oracle.summit.model.services.BackOfficeAppModule"
                  ConfigurationName="oracle.summit.model.services.BackOfficeAppModuleShared"
                  SharedScope="2"/>
</JboProject>
14.2.3 What You May Need to Know About Design Time Scope of the Shared Application Module
For more information, see Chapter 44, "Reusing Application Components."
When viewing a data control usage from the DataBindings.cpx file in the Structure window, do not set the Configuration property to a shared application module configuration. By default, for an application module named AppModuleName, the Property window displays the default configuration for the data control usage. For more information, see Section 17.5, "Working with the DataBindings.cpx File."
14.2.4 What You May Need to Know About the Design Time Scope of View Instances of the Shared Application Module
For more information, see Section 14.4, "Accessing View Instances of the Shared Service."
14.2.5 What You May Need to Know About Managing the Number of Shared Query Collections
In the Edit Configuration dialog, select the AppModuleNameShared configuration, and set these properties in the Properties page of the editor:
jbo.qcpool.monitorsleepinterval: the time (ms) that the shared query collection pool monitor should sleep between pool checks.
jbo.qcpool.maxinactiveage: the maximum amount of time (ms) that a shared query collection may remain unused before it is removed from the pool.
14.2.6 What You May Need to Know About Shared Application Modules and Connection Pooling
The default connection behavior for all application modules is to allow each root application module to have its own database connection. When your application defines more than one shared application module, you can configure the application to nest them under the same transaction in order to use a single database connection. This optimization allows the shared application module instances to share the same connection and entity cache and reduces the database resources that the application uses. This is particularly useful for shared application modules cases because they are read only and have longer life than transactional application modules.
Best Practice:
Oracle recommends nesting shared application module instances under a single transaction to reduce the database resources required by the application. You can make this shared application module optimization by setting the
jbo.shared.txn property to use the same transaction name (an arbitrary identifier you supply) for each shared application module configuration the application defines.
Set the
jbo.shared.txn property using the Edit Configuration dialog that you open from the Configurations page of the overview editor for the shared application module, as shown in Figure 14-2. Repeat the
jbo.shared.txn property setting using the same transaction name for each shared application module configuration that your application defines.
Currently, the application module configuration parameter
jbo.doconnectionpooling=true is not supported for use with shared application modules. This feature is available to configure nonshared application modules. For more information, see Section 50.2.6, "What You May Need to Know About How Database and Application Module Pools Cooperate."
14.3 Defining a Base View Object for Use with Lookup Tables
14.3.1 How to Create a Base View Object Definition for a Lookup Table
To create a base view object for a lookup table:
In the Application window, locate the shared application module, right-click its package node, and choose New and then View Object. Use the wizard to create a read-only view object, entering the query in the Select text box.
Your query names the columns of the lookup table, and should look similar to the SQL statement shown in Figure 14-3, which queries the LOOKUP_CODE, MEANING, and DESCRIPTION columns in the LOOKUP_CODES table. The wizard displays the resulting attribute list, as shown in Figure 14-4.
If you want to rename individual attributes to use names that might be more appropriate, from the Select Attributes dropdown, choose the attribute and enter the desired name in the Name field. When you are finished, click Next.
For example, you might rename the default attributes LookupType and LookupCode to Type and Value, respectively. For details about adding the view object instances to the data model, see Section 13.2.3.2, "Adding Master-Detail View Object Instances to an Application Module."
14.3.2 What Happens When You Create a Base View Object
When you create the view object definition for the lookup table, JDeveloper first describes the query to infer the following from the columns in the
SELECT list:
The Java-friendly view attribute names (for example, LookupType instead of LOOKUP_TYPE)
By default, the wizard creates Java-friendly view object attribute names that correspond to the SELECT list column names.
The SQL and Java data types of each attribute
To view the generated definitions, in the Application window, select the XML file under the expanded view object, and open the Structure window. The Structure window displays the list of definitions, including the SQL query and the properties of each attribute. To open the file in the editor, double-click the corresponding .xml node.
14.3.3 How to Define the WHERE Clause of the Lookup View Object Using View Criteria
The view criteria item will consist of the Type attribute name, the Equal operator, and the value of the LOOKUP_TYPE that will filter the query results.
Because a single view criteria is defined, no logical conjunctions are needed to bracket the
WHERE clause conditions. You may also find it helpful to have an understanding of view criteria. For more information, see Section 5.9, "Working with Named View Criteria."
You will need to complete this task:
- Create the base view object for the lookup data, as described in Section 14.3.1, "How to Create a Base View Object Definition for a Lookup Table."
To create LOOKUP_TYPE view criteria for the lookup view object:
In the Application window, double-click the lookup base view object you defined.
In the overview editor, click the View Criteria navigation tab, add a new view criteria, and choose the Type attribute (the LOOKUP_TYPE column).
Choose Equals as the operator.
Keep Literal as the operand choice and enter the value name that defines the desired type. For example, to query the marital status codes, you might enter the value MARITAL_STATUS_CODE corresponding to the LOOKUP_TYPE column.
Leave all other settings unchanged.
The view object WHERE clause shown in the editor should display a simple criteria similar to the one shown in Figure 14-5, where the value MARITAL_STATUS_CODE is set to filter the LOOKUP_TYPE column.
Click OK.
Repeat this procedure to define one view criteria for each LOOKUP_TYPE that you wish to query.
14.3.4 What Happens When You Create a View Criteria with the Editor
The Create View Criteria dialog in JDeveloper lets you easily create view criteria and save them as named definitions. These named view criteria definitions add metadata to the target view object's own definition. Once defined, named view criteria appear by name in the View Criteria page of the overview editor for the view object. To view the generated metadata, in the Application window, select the XML file under the expanded view object, open the Structure window, and expand the View Criteria node. As shown in Example 14-4, the LookupsBaseVO.xml file specifies the <ViewCriteria> definition that allows the LookupsBaseVO to return only the marital types. Other view criteria added to the LookupsBaseVO are omitted from this example for brevity.
Example 14-4
14.3.5 What Happens at Runtime: How a View Instance Accesses Lookup Data
When you create a view instance based on a view criteria, the next time the view instance is executed it augments its SQL query with an additional
WHERE clause predicate corresponding to the view criteria that you've populated in the view criteria rows.
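The runtime behavior described above amounts to composing the base query with an extra predicate when the criteria is applied. A simplified, hypothetical Java sketch of that composition (the LookupQuery class is invented, not the ADF implementation; column and table names come from the lookup query earlier in this section):

```java
class LookupQuery {
    private static final String BASE =
        "SELECT LOOKUP_TYPE, LOOKUP_CODE, MEANING, DESCRIPTION FROM LOOKUP_CODES";

    // Compose the SQL the way an applied view criteria would at execution
    // time. (Real code would use a bind variable; the literal is inlined
    // here only for readability.)
    static String withCriteria(String lookupType) {
        if (lookupType == null) {
            return BASE; // no criteria applied: the full row set
        }
        return BASE + " WHERE LOOKUP_TYPE = '" + lookupType + "'";
    }
}
```

Calling withCriteria("MARITAL_STATUS_CODE") yields the base query plus the filtering predicate, mirroring how the view instance narrows its row set.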
14.4 Accessing View Instances of the Shared Service
14.4.1 How to Create a View Accessor for an Entity Object or View Object
View accessors provide the means to access a data source independent of the application module. View accessors can be defined at the level of the entity object or individual view objects. However, because at runtime view accessor results are often filtered depending on the usage involved, it is recommended that you create unique view accessors for each usage in your application.
Best Practice:
Oracle recommends creating unique view accessors whenever your application needs to expose an LOV-enabled attribute. Reusing view accessors to create multiple list of values is discouraged because LOV results are often filtered at runtime. For example, the results of a saved search will filter the row set of the target view object until the end user unapplies the search criteria. Consequently, view accessors that get applied to this same destination view object will have their results filter too. To ensure the view accessor always returns the intended row set at runtime, create unique view accessors for each usage.
Define view accessors on the entity object carefully, since view objects that you create based on the entity object will inherit the view accessors of their base entity objects. While defining the view accessor once on the entity object itself allows you to reuse the same view accessor, the view accessor must not be used in different application scenarios. If you intend to define validation rules for the entity object attributes and create LOV-enabled attributes for that entity object's view object, it is recommended that you create separate view accessors.
For example, a view accessor defined on the base entity object might be used for validation, while a different view accessor for AddressesVO should be used for list-enabled attributes.
You will need to complete these tasks:
Create the entity object or view object that you want to access, as described in Chapter 4, "Creating a Business Domain Layer Using Entity Objects" and Chapter 5, "Defining SQL Queries Using View Objects."
Enable application module sharing, as described in Section 14.2.1, "How to Create a Shared Application Module Instance."
Create the base view object for the lookup data, as described in Section 14.3.1, "How to Create a Base View Object Definition for a Lookup Table."
You can optionally refine the list returned by a view accessor by applying view criteria that you define on the view object. To create view criteria for use with a view accessor, see Section 14.3.3, "How to Define the WHERE Clause of the Lookup View Object Using View Criteria."
To create the view accessor:
In the Application window, double-click the desired entity object or view object. In the overview editor, click the Accessors navigation tab and then, in the View Accessors section, click the Create new view accessor button.
The dialog will display all view objects and view instances from your application. For example, the View Accessors dialog in Figure 14-6 shows the shared application module LookupServiceAM with the list of view instances. Figure 14-6 shows the view accessor AddressUsageTypesVA for the AddressUsageTypes view instance selection in the shared application module LookupServiceAM. This view accessor is created on the base entity object AddressUsagesEO and accesses the row set of the AddressUsageTypes view instance.
In the View Accessors dialog, click OK.
14.4.2 How to Validate Against the Attribute Values Specified by a View Accessor
You can validate an entity attribute using the operator you select to compare against the values returned by the view accessor.
The List validator compares an entity attribute against a list of values. When you specify a view accessor to determine the valid list values, the List validator applies an In or NotIn operator.
Before you begin, create the desired entity object, as described in Section 4.2.1, "How to Create Multiple Entity Objects and Associations from Existing Tables."
To validate against a view accessor comparison, list, or collection type:
In the Application window, double-click the entity object and add a validation rule for the attribute. Figure 14-7 shows what the dialog looks like when you use a List validator to select a view accessor attribute.
Click the Failure Handling tab and enter a message that will be shown to the user if the validation rule fails.
Click OK.
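The In/NotIn semantics of the List validator can be sketched as a simple membership check against the values the view accessor returns. A hypothetical Java illustration (the ListValidator class is invented, not the ADF validator):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class ListValidator {
    enum Operator { IN, NOT_IN }

    // true when the candidate value passes the rule against the
    // accessor's row set values
    static boolean validate(Object value, List<?> accessorValues, Operator op) {
        Set<Object> allowed = new HashSet<>(accessorValues);
        boolean contained = allowed.contains(value);
        return op == Operator.IN ? contained : !contained;
    }
}
```

With the In operator the attribute value must appear among the accessor's values; with NotIn it must not, and a failed check triggers the failure-handling message configured above.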
14.4.3 What Happens When You Define a View Accessor Validator
When you use a List validator, a
<ListValidationBean> tag is added to an entity object's XML file. Example 14-5 shows the XML code for the
CountryId attribute in the
Address entity object. A List validator has been used to validate the user's entry against the list of country ID values as retrieved by the view accessor from the
Countries view instance.
Example 14-5
14.4.4 What You May Need to Know About Dynamic Filtering with View Accessors
14.4.5 How to Create an LOV Based on a Lookup Table
The Accessors page of the overview editor for the view object will display the inherited view accessor, as shown in Figure 14-8. Alternatively, if you choose to create the view accessor on the attribute's view object, you can accomplish this from either the editor for the LOV definition or from the Accessors page of the overview editor. You may also find it helpful to understand additional examples of how to work with LOV-enabled attributes. For more information, see Section 5.12, "Working with List of Values (LOV) in View Object Attributes."
You will need to complete these tasks:
Create the view object that is the data source for the view accessor, as described in Section 5.2.1, "How to Create an Entity-Based View Object," and Section 5.8.1, "How to Create an Expert Mode View Object."
Create the view accessor for the view object, as described in Section 14.4.1, "How to Create a View Accessor for an Entity Object or View Object."
To create an LOV that displays values from a lookup table:
In the Application window, double-click the view object that contains the desired attribute.
In the overview editor, click the Accessors navigation tab.
In the Accessors page, in the View Accessors list, check to see whether the view object inherited the desired view accessor from its base entity object. If no view accessor is present, either create the view accessor on the desired entity object or click the Create New View Accessors button.
In the overview editor, click the Attributes navigation tab.
In the Attributes page, select the attribute that is to display the LOV, and then click the List of Values tab and click the Add List of Values button. For example, an attribute from the OrdersView view object would map to the attribute OrderId from the Shared_OrdersVA view accessor.
If you want to specify supplemental values that your list returns to the base view object, click the Add button. For example, to return a supplemental value from the OrdersView view object, you would choose the attribute StartDate from the Shared_OrdersVA view accessor. Do not remove the default attribute mapping for the attribute for which the list is defined.
Click OK.
14.4.6 What Happens When You Define an LOV for a View Object Attribute
When you add an LOV to a view object attribute, JDeveloper updates the view object's XML file with an LOVName property in the <ViewAttribute> element. The definition of the LOV appears in a new <ListBinding> element. The metadata in Example 14-6 shows that the PaymentTypeId attribute refers to the LOV_PaymentTypeId LOV and sets the choice control type to display the LOV. The LOV definition for LOV_PaymentTypeId appears in the <ListBinding> element.
Example 14-6 View Object with LOV List Binding XML Code
<ViewObject xmlns="" Name="CustomerRegistrationVO" ... >
  ...
  <ViewAttribute
    Name="PaymentTypeId"
    LOVName="LOV_PaymentTypeId"
    PrecisionRule="true"
    EntityAttrName="PaymentTypeId"
    EntityUsage="OrdEO"
    AliasName="PAYMENT_TYPE_ID">
    <Properties>
      <SchemaBasedProperties>
        <CONTROLTYPE Value="choice"/>
      </SchemaBasedProperties>
    </Properties>
  </ViewAttribute>
  ...
  <ListBinding
    Name="LOV_PaymentTypeId"
    ListVOName="PaymentTypeVA"
    ListRangeSize="-1"
    NullValueFlag="start"
    NullValueId="LOVUIHints_NullValueId"
    MRUCount="0">
    <AttrArray Name="AttrNames">
      <Item Value="PaymentTypeId"/>
    </AttrArray>
    <AttrArray Name="ListAttrNames">
      <Item Value="Id"/>
    </AttrArray>
    <AttrArray Name="ListDisplayAttrNames">
      <Item Value="Payment Type"/>
    </AttrArray>
    <DisplayCriteria/>
  </ListBinding>
  ...
</ViewObject>
14.4.7 How to Automatically Refresh the View Object of the View Accessor
Create the view accessor, as described in Section 14.4.1, "How to Create a View Accessor for an Entity Object or View Object."
To enable auto-refresh for a view instance of a shared application module:
In the Application window, double-click the view object that you want to receive database change notifications.
In the overview editor, click the General navigation tab.
In the Property window, expand the Tuning section, and select true from the Auto Refresh dropdown menu.
14.4.8 What Happens at Runtime: How the Attribute Displays the List of Values.
14.4.9 What You May Need to Know About Displaying List of Values From a Lookup Table
You must define a key attribute for the view object when you intend to create an LOV component based on the read-only view object collection. Without a key attribute to specify the row key value, the LOV may not behave properly and a runtime error can result.
14.4.10 What You May Need to Know About Programmatically Invoking Database Change Notifications
When you create a databound UI component in a web page, you can enable the auto-refresh feature on the corresponding view object, as described in Section 14.4.7. Calling the processChangeNotification() method before refreshing the view instance ensures that the shared application module cache gets updated if the corresponding queried data has changed in the database.
To programmatically refresh a view instance of a shared application module, follow these steps (as illustrated in Example 14-7 from the processChangeTestClient.java example in the SummitADF_Examples workspace):
Call the processChangeNotification() method.
Get the view instance from the shared application module.
Refresh the view instance.
Example 14-7 Programmatically Invoking a View Instance and Processing the Database Change Notification
public class processChangeTestClient {
    public static void main(String[] args) {
        processChangeTestClient processChangeTestClient = new processChangeTestClient();
        ApplicationModuleHandle handle = null;
        String amDef = "oracle.summit.model.services.BackOfficeAppModule";
        String config = "BackOfficeAppModuleLocal";
        try {
            handle = Configuration.createRootApplicationModuleHandle(amDef, config);
            ApplicationModule am = handle.useApplicationModule();
            // 1. Update the shared application module cache with changed data.
            am.processChangeNotifications();
            // 2. Get the view instance to refresh.
            ViewObject vo = am.findViewObject("Inventory");
            // 3. Refresh the view instance with updated data.
            ((ViewObjectImpl)vo).refreshCollection(null, false, false);
            vo.reset();
            while (vo.hasNext()) {
                Row r = vo.next();
                System.out.println((String)r.getAttribute("Name"));
            }
        } finally {
            if (handle != null)
                Configuration.releaseRootApplicationModuleHandle(handle, false);
        }
    }
}
14.4.11 What You May Need to Know About Inheritance of AttributeDef Properties
For more information, see Chapter 16.
14.4.12 What You May Need to Know About Using Validators
For more information, see Chapter 11, "Defining Validation and Business Rules Declaratively."
14.5 Testing View Object Instances in a Shared Application Module.
14.5.1 How to Test the Base View Object Using the Oracle ADF Model Tester
To test the view objects you added to an application module, use the Oracle ADF Model Tester, which is accessible from the Application window.
Before you begin:
It may be helpful to have an understanding of Oracle ADF Model Tester. For more information, see Section 14.5, "Testing View Object Instances in a Shared Application Module."
You may also find it helpful to understand functionality that can be added using other Oracle ADF features. For more information, see Section 14.1.2, "Additional Functionality for Shared Application Modules."
You may also find it helpful to understand the diagnostic messages specific to ADF Business Components debugging. For more information, see Section 8.3.10, "How to Enable ADF Business Components Debug Diagnostics."
You will need to complete this task:
- Create the application module with view instances, as described in Section 13.2, "Creating and Modifying an Application Module."
To test view objects in an application module configuration:
In the Application window, launch the Oracle ADF Model Tester. The debugger process panel opens in the Log window along with the various debugger windows, as shown in Figure 14-9. The fields in the tester panel of a read-only view object will always appear disabled since the data it represents is not editable.
14.5.2 How to Test LOV-Enabled Attributes Using the Oracle ADF Model Tester
To test the LOV you created for a view object attribute, use the Oracle ADF Model Tester, which is accessible from the Application window. For details about displaying the tester and the supported control types, see Section 5.12.8, "How to Test LOV-Enabled Attributes Using the Oracle ADF Model Tester."
14.5.3 What Happens When You Use the Oracle ADF Model Tester
Figure 14-9 shows just one instance in the expanded tree, called ProductImages. After you double-click the desired view object instance, the Oracle ADF Model Tester will display a panel to inspect the query results. For more information, see Section 8.3.4, "How to Test Entity-Based View Objects Interactively."
14.5.4 What Happens at Runtime: How Another Service Accesses the Shared Application Module Cache
For more information, see Section 13.4, "Defining Nested Application Modules."
Add an Admin Controller
In this section, we’ll add a Web API controller that supports CRUD (create, read, update, and delete) operations on products. The controller will use Entity Framework to communicate with the database layer. Only administrators will be able to use this controller. Customers will access the products through another controller.
In Solution Explorer, right-click the Controllers folder. Select Add and then Controller.
In the Add Controller dialog, name the controller
AdminController. Under
Template, select "API controller with read/write actions, using Entity Framework". Under
Model class, select "Product (ProductStore.Models)". Under
Data Context, select "<New Data Context>".
If the Model class drop-down does not show any model classes, make sure you compiled the project. Entity Framework uses reflection, so it needs the compiled assembly.
Selecting "<New Data Context>" will open the New Data Context dialog. Name the data context
ProductStore.Models.OrdersContext.
Click OK to dismiss the New Data Context dialog. In the Add Controller dialog, click Add.
Here's what got added to the project:
- A class named OrdersContext that derives from DbContext. This class provides the glue between the POCO models and the database.
- A Web API controller named AdminController. This controller supports CRUD operations on Product instances. It uses the OrdersContext class to communicate with Entity Framework.
- A new database connection string in the Web.config file.
Open the OrdersContext.cs file. Notice that the constructor specifies the name of the database connection string. This name refers to the connection string that was added to Web.config.
public OrdersContext() : base("name=OrdersContext")
Add the following properties to the OrdersContext class:
public DbSet<Order> Orders { get; set; }
public DbSet<OrderDetail> OrderDetails { get; set; }
A DbSet represents a set of entities that can be queried. Here is the complete listing for the OrdersContext class:
public class OrdersContext : DbContext
{
    public OrdersContext() : base("name=OrdersContext")
    {
    }

    public DbSet<Order> Orders { get; set; }
    public DbSet<OrderDetail> OrderDetails { get; set; }
    public DbSet<Product> Products { get; set; }
}
The AdminController class defines five methods that implement basic CRUD functionality. Each method corresponds to a URI that the client can invoke:
Each method calls into OrdersContext to query the database. The methods that modify the collection (PUT, POST, and DELETE) call db.SaveChanges to persist the changes to the database. Controllers are created per HTTP request and then disposed, so it is necessary to persist changes before a method returns.
Add a Database Initializer
Entity Framework has a nice feature that lets you populate the database on startup, and automatically recreate the database whenever the models change. This feature is useful during development, because you always have some test data, even if you change the models.
In Solution Explorer, right-click the Models folder and create a new class named OrdersContextInitializer. Paste in the following implementation:
namespace ProductStore.Models
{
    using System;
    using System.Collections.Generic;
    using System.Data.Entity;

    public class OrdersContextInitializer : DropCreateDatabaseIfModelChanges<OrdersContext>
    {
        protected override void Seed(OrdersContext context)
        {
            var products = new List<Product>()
            {
                new Product() { Name = "Tomato Soup", Price = 1.39M, ActualCost = .99M },
                new Product() { Name = "Hammer", Price = 16.99M, ActualCost = 10 },
                new Product() { Name = "Yo yo", Price = 6.99M, ActualCost = 2.05M }
            };
            products.ForEach(p => context.Products.Add(p));
            context.SaveChanges();

            var order = new Order() { Customer = "Bob" };
            var od = new List<OrderDetail>()
            {
                new OrderDetail() { Product = products[0], Quantity = 2, Order = order },
                new OrderDetail() { Product = products[1], Quantity = 4, Order = order }
            };
            context.Orders.Add(order);
            od.ForEach(o => context.OrderDetails.Add(o));
            context.SaveChanges();
        }
    }
}
By inheriting from the DropCreateDatabaseIfModelChanges class, we are telling Entity Framework to drop the database whenever we modify the model classes. When Entity Framework creates (or recreates) the database, it calls the Seed method to populate the tables. We use the Seed method to add some example products plus an example order.
This feature is great for testing, but don’t use the DropCreateDatabaseIfModelChanges class in production, because you could lose your data if someone changes a model class.
Next, open Global.asax and add the following code to the Application_Start method:
System.Data.Entity.Database.SetInitializer(
    new ProductStore.Models.OrdersContextInitializer());
Send a Request to the Controller
At this point, we haven’t written any client code, but you can invoke the web API using a web browser or an HTTP debugging tool such as Fiddler. In Visual Studio, press F5 to start debugging. Your web browser will open to http://localhost:portnum/, where portnum is some port number.
Send an HTTP GET request to the admin controller. The first request may be slow to complete, because Entity Framework needs to create and seed the database. The response should look something like the following:
HTTP/1.1 200 OK
Server: ASP.NET Development Server/10.0.0.0
Date: Mon, 18 Jun 2012 04:30:33 GMT
X-AspNet-Version: 4.0.30319
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: application/json; charset=utf-8
Content-Length: 175
Connection: Close

[{"Id":1,"Name":"Tomato Soup","Price":1.39,"ActualCost":0.99},{"Id":2,"Name":"Hammer","Price":16.99,"ActualCost":10.00},{"Id":3,"Name":"Yo yo","Price":6.99,"ActualCost":2.05}]
Silverlight 5 Beta was announced recently at the MIX11 conference by Scott Guthrie. In this article, we will see some of the new features introduced in Silverlight 5 Beta.
Before you proceed, check some important links, including the link to download Silverlight 5 Beta. Just make sure you have installed Visual Studio 2010 SP1. There are tons of new features included in Silverlight 5.0 Beta. In this article, we will explore some of these new features -
Please note that this article was written using the Silverlight 5 Beta, as available at MIX11.
1. Debugging Bindings with Silverlight 5. Adding Break points in XAML
XAML debugging with breakpoints on bindings has been one of the most requested tooling features. Let’s see what this feature is and how to use it.
Let's create our First Silverlight 5 Project using Visual Studio 2010. Name the project as 'BindingDebugInSL5' and choose Silverlight 5 version as shown below -
Now let's add a Customers class as shown below, to our Silverlight Project -
Now let's design our screen to show customer data in our Silverlight Page. To design the screen, let's copy and paste the XAML code below, in between the <Grid></Grid> tags -
Now let's create an instance of 'Customers' class and assign it as a data context. Write the following code in the Constructor of the MainPage.xaml -
Now let's add the break point in our XAML. Please note that you can add Break Points only on the Bindings and not on plain XAML as shown below -
Now press 'F5'and run your Silverlight application. You will see your break point got hit and you can now see the data in a 'Local Window'-
If you want, you can access the break points on errors too. I have seen this syntax on the Silverlight.NET site -
You can write the following code -
2. RichTextBox Overflow
In Silverlight 5, we now have the RichTextBoxOverflow which can be used instead of the Vertical and Horizontal scroll bars -
Let's create a project with the name 'RTBOverFlowExample'. Now add a 'RichTextBox' control with some text. I have set the 'Height' and 'Width' of the Grid to '300' and '500' respectively. Here is some code -
If you check the above code, I have set 'VerticalScrollBarVisibility="Visible"'. Because of this, I can scroll the contents vertically. The output is shown below -
Now what if I don't want to Scroll the contents. Well we can use the new Silverlight 5 RichTextBoxOverflow here. Let's make 'VerticalScrollBarVisibility="Disabled"'. Add the following code after the </RichTextBox> as shown below -
Now let's inform our 'RichTextBox' about the overflow by adding below property in our 'RichTextBox' as shown below -
OverflowContentTarget="{Binding ElementName=ContinueNews}"
Now if you run the project, your overflow text will be shown in the second 'RichTextBox' as shown below -
3. Silverlight 5 - Double Click and Triple click and may more
In Silverlight 5, we now have a 'ClickCount' property. This property captures the number of clicks on an object, like the Ellipse.
So let's create a Silverlight project with the name 'MultiClickExample'. Now let's add a 'Rectangle' control to our Silverlight Page as shown below -
If you see the code carefully, we have a MouseLeftButtonDown event. So, let's add a code to this event.
Before that, we need to declare a variable at the class level as shown below -
int seconds=0;
Now let's write code in our MouseLeftButtonDown event as below -
Now hit 'F5' and count your clicks!
4. Create New Operating System Window using 'Window' class
In previous versions of Silverlight, we could not create new operating system windows. To create pop up content, we used either the ChildWindow control or the Popup element.
Now for creating a PopUp windows in Silverlight 5, we can use the 'Window' class. Please note that this will work only in Out-Of-Browser Silverlight applications. So let's create a Silverlight Application with the name 'ÓSWindowExample'. Add a User Control with the name 'ThoughtOfTheDay' and copy the below contents in between <Grid>/<Grid>
Now add the following code in between <Grid></Grid> in our MainPage.xaml -
<Button Height="50" Width="200" Content="Show Thought of the Day !!" Click="Button_Click"/>
In the code behind of the MainPage.xaml.cs file, write some code for displaying a window which will embed the UserControl as its content.
If you see the above code, we are creating an object of Window class and setting its 'Height' and 'Width' property. We are then setting its Title property. After this, we are setting the Content of this window as an object of UserControl, followed by setting its Visibility property to visible.
That's all. Run your application and click the button. You will see the Window PopUp as shown below -
5. Use Low Latency Sound effects in Silverlight 5 with the use of two different classes
The MediaElement was not good enough for low latency sounds (like audio loops) or real time sound effects. However in Silverlight 5, you have two classes - the first is the SoundEffect class and the second is the SoundEffectInstance class, which allows control over the volume, pitch and much more. These classes belong to the namespace Microsoft.Xna.Framework.Audio.
You can also play .WAV files in Silverlight 5 Beta.
Apart from the changes mentioned in this article, there are a couple of other general changes, as well as changes in the Silverlight 3D graphics API. We will cover all these in the next article.
Conclusion - In this article, we have seen what's new in Silverlight 5.0 Beta. We have seen 'Debugging Bindings', 'RichTextBoxOverflow', 'Double and more clicks', and 'Creating OS Windows' in Silverlight 5 Beta, plus changes in low latency sound effects, with simple examples.
The entire source code of this article can be downloaded over here | http://www.dotnetcurry.com/showarticle.aspx?ID=689 | CC-MAIN-2014-10 | en | refinedweb |
So, why not use this definition? Is there something special about ST you are trying to preserve?

-- minimal complete definition:
-- Ref, newRef, and either modifyRef or both readRef and writeRef.
class Monad m => MonadRef m where
    type Ref m :: * -> *
    newRef    :: a -> m (Ref m a)
    readRef   :: Ref m a -> m a
    writeRef  :: Ref m a -> a -> m ()
    modifyRef :: Ref m a -> (a -> a) -> m a  -- returns old value

    readRef r     = modifyRef r id
    writeRef r a  = modifyRef r (const a) >> return ()
    modifyRef r f = do
        a <- readRef r
        writeRef r (f a)
        return a

instance MonadRef (ST s) where
    type Ref (ST s) = STRef s
    newRef   = newSTRef
    readRef  = readSTRef
    writeRef = writeSTRef

instance MonadRef IO where
    type Ref IO = IORef
    newRef   = newIORef
    readRef  = readIORef
    writeRef = writeIORef

instance MonadRef STM where
    type Ref STM = TVar
    newRef   = newTVar
    readRef  = readTVar
    writeRef = writeTVar

Then you get to lift all of the above into a monad transformer stack, MTL-style:

instance MonadRef m => MonadRef (StateT s m) where
    type Ref (StateT s m) = Ref m
    newRef     = lift . newRef
    readRef    = lift . readRef
    writeRef r = lift . writeRef r

and so on, and the mention of the state thread type in your code is just gone, hidden inside Ref m. It's still there in the type of the monad; you can't avoid that:

newtype MyMonad s a = MyMonad { runMyMonad :: StateT Int (ST s) a }
    deriving (Monad, MonadState, MonadRef)

But code that relies on MonadRef runs just as happily in STM, or IO, as it does in ST.

-- ryan

2009/2/19 Louis Wasserman <wasserman.louis at gmail.com>:
Relative path to my app's Resource folder?
- Jan 31, 2013 2:57 PM (in response to Futurefrog)
Look in the Standard Additions osax for the path to resource command. And note for future consideration that I've never seen a computer-based practical joke that the target thought was funny. If you end up with a black eye, don't say you weren't warned.
- Jan 31, 2013 7:00 PM (in response to twtwtw)
Ok so I found the Standard Additions Dictionary and revised my code with path to resource, however it keeps returning the error that the file cannot be found.
I have tried both of the following code configurations:
repeat
tell application "Finder"
set desktop picture to path to resource in bundle "picture.png" as alias
end tell
end repeat
and
repeat
tell application "Finder"
set desktop picture to path to resource "picture.png" as alias
end tell
end repeat
How can I get this error to stop? The Standard Additions thing is not all that self explanatory. I have no idea how to actually insert what it is listing in to my code. The picture is in my resources folder and I can even see it when I open Bundle Contents inside of AppleScript.
- Jan 31, 2013 7:21 PM (in response to Futurefrog)
the resource is the file in the resource folder of the app. So something like
path to resource "file" in bundle (path to me)
should work
- Jan 31, 2013 7:58 PM (in response to Futurefrog)
That's because you're asking it to find the picture in the Finder's bundle. Do it this way (in principle):
repeat
set pic to path to resource "picture.png"
tell application "Finder"
set desktop picture to pic
end tell
end repeat
however, you don't want to do it that way (in practice) because it will give you an infinite loop that freezes up both the script and the Finder. I assume you're saving this as an application, right? Then make sure you save it as a stay open application, and use the following code:
on idle
set pic to path to resource "picture.png"
tell application "System Events"
set picture rotation of every desktop to 0
set picture of every desktop to pic
end tell
return (random number from 15 to 45)
end idle
This will give you an app that runs continuously and changes the desktop picture every 15 to 45 seconds, and the idle handler will put the application to sleep between changes so it doesn't eat up system resources.
- Jan 31, 2013 8:04 PM (in response to Futurefrog)
From the StandardAdditions dictionary:
Parameters
- string -- the name of the requested resource
- [in bundle file] -- an alias or file reference to the bundle containing the resource (default is the target application or current script bundle)
- [in directory string] -- the name of a subdirectory in the bundle’s “Resources” directory
Result
- alias -- the path to the resource
The Contents/Resources/ folder in the current application bundle is the default directory, so all you need to use is the file name. Also note that the Finder is not needed for this command, and in general you should avoid using any scripting addition command inside an application tell statement. | https://discussions.apple.com/message/21100089?tstart=0 | CC-MAIN-2014-10 | en | refinedweb |
Overview
Class Reference
Class usage tips
Design notes
Conclusion
History
This article presents a C++/CLI class StringConvertor which can be used to make conversions between System::String and native types like a char*, BSTR etc. The class keeps track of allocated unmanaged memory handles and frees them automatically, so the caller needn't worry about freeing up unmanaged resources.
The class has several constructor overloads that accept various native types (as well as managed types) like char*, __wchar_t* etc and internally maintains a System::String object that can be extracted using the String operator or the ToString override. There are also various properties each exposing a native type like char*, BSTR etc which can be used to convert the string to your preferred unmanaged type.
The class reference follows with suitable examples. Make sure you read the Class usage tips section at the end though.
StringUtilities::StringConvertor
The StringConvertor class is declared in the StringUtilities namespace.
StringConvertor(String^ s)
String^ str1 = "Hello world";
StringConvertor sc1(str1);
StringConvertor(const char* s)
const char* str2 = "Apples and oranges";
StringConvertor sc2(str2);
StringConvertor(const __wchar_t* s)
const __wchar_t* str3 = L"Apples and oranges";
StringConvertor sc3(str3);
StringConvertor(array<Char>^ s)
array<Char>^ str4 = {'H', 'e', 'y', '.'};
StringConvertor sc4(str4);
StringConvertor(BSTR s)
BSTR str5 = SysAllocString(L"Interesting");
StringConvertor sc5(str5);
SysFreeString(str5);
StringConvertor(std::string s)
std::string str6 = "STL is kinda handy";
StringConvertor sc6(str6);
StringConvertor(std::wstring s)
std::wstring str7 = L"STL is kinda handy";
StringConvertor sc7(str7);
virtual String^ ToString() override
StringConvertor sc1(str1);
int len = sc1.ToString()->Length;
Console::WriteLine(len);
operator String^()
StringConvertor sc1(str1);
Console::WriteLine(sc1); //Operator String^ invoked
All properties in the class are read-only.
interior_ptr<const Char> InteriorConstCharPtr
interior_ptr<const Char> p1 = sc1.InteriorConstCharPtr;
for(interior_ptr<const Char> tmp=p1; *tmp; tmp++)
Console::Write(*tmp);
interior_ptr<Char> InteriorCharPtr
interior_ptr<Char> p2 = sc2.InteriorCharPtr;
for(interior_ptr<Char> tmp=p2; *tmp; tmp++)
if(Char::IsLower(*tmp)) //swap case
*tmp = Char::ToUpper(*tmp);
else
*tmp = Char::ToLower(*tmp);
Console::WriteLine(sc2);
char* NativeCharPtr
char* p3 = sc1.NativeCharPtr;
printf("%s \r\n", p3);
__wchar_t* NativeWideCharPtr
__wchar_t* p4 = sc1.NativeWideCharPtr;
printf("%S \r\n", p4);
Note - I've included a generic text mapping, NativeTCharPtr, for these two properties so you can use it with an LPTSTR. Since it's a #define, intellisense will not detect it. NativeTCharPtr maps to NativeWideCharPtr if _UNICODE is defined, else to NativeCharPtr.
LPTSTR p5 = sc1.NativeTCharPtr;
_tprintf(_T("%s \r\n"),p5);
array<Char>^ CharArray
array<Char>^ arr1 = sc5.CharArray;
for each(Char c in arr1)
Console::Write(c);
BSTR BSTRCopy
BSTR b1 = sc5.BSTRCopy;
printf("BSTR contains %S \r\n", b1);
Each call creates a new BSTR, and if you change the contents of the returned BSTRs, newly returned BSTRs will contain the original string; for example, in the below snippet, b1 and b2 are not the same string any longer.
BSTR b1 = sc5.BSTRCopy;
b1[0] = L'X';
BSTR b2 = sc5.BSTRCopy;
If for any reason, you want to change the string represented by the StringConvertor, use the InteriorCharPtr property.
std::string STLAnsiString
std::string s1 = sc5.STLAnsiString;
printf("%s \r\n", s1.c_str());
std::wstring STLWideString
std::wstring s2 = sc5.STLWideString;
printf("%S \r\n", s2.c_str());
Two of the primary purposes of the class are (1) to make it easy to convert from System::String to unmanaged types and vice-versa and (2) to free the user from the responsibility of freeing up the unmanaged memory allocations. The various properties and constructors handle the first role and the destructor handles the second one. When I say destructor, I mean destructor (not the finalizer), which means the destructor needs to get called as soon as the object goes out of scope or has served its purpose. For that, it's recommended that you either use the auto-variable non-handle declaration format or explicitly call delete on the StringConvertor object after use.
void SomeFunc(...)
{
StringConvertor s1(...);
//...
} //Destructor gets called
or
StringConvertor^ s1 = gcnew StringConvertor(...);
//...
delete s1; //Destructor gets called
Try to avoid making a StringConvertor a member of a long-living class (or better still, never make it a class-member). Always try and limit the scope and life-time of StringConvertor objects. While, a finalizer has been provided (for unanticipated circumstances), ideally, the finalizer should never need to be called. To help you with avoiding this, in debug builds, an exception gets thrown if the finalizer is ever called and in release builds, Trace::WriteLine is used to write a warning message to any active trace listeners.
When you have functions returning unmanaged strings (like a char*), you need to make sure that the pointer you return is allocated by you separately (since the pointer returned by StringConvertor will be freed when the StringConvertor goes out of scope. The snippets below show two functions, one returning a BSTR and the other returning a char*.
BSTR ReturnBSTR(String^ s)
{
return SysAllocString(StringConvertor(s).BSTRCopy);
}
char* ReturnCharPointer(String^ s)
{
StringConvertor sc(s);
char* str = new char[sc.ToString()->Length + 1];
strcpy(str, sc.NativeCharPtr);
return str;
}
And when you've done using the strings returned by these functions, you need to free the pointers manually.
BSTR s = ReturnBSTR(gcnew String("Hello there"));
printf("%S \r\n",s);
SysFreeString(s);
char* str = ReturnCharPointer(gcnew String("Hello again"));
printf("%s \r\n",str);
delete[] str;
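The manual SysFreeString / delete[] hand-off shown above is the classic raw-pointer ownership problem. In standard C++ (outside CLI interop), the usual design choice is to return an owning type so the caller cannot forget to free; a minimal hedged sketch, with illustrative names:

```cpp
#include <cassert>
#include <cstring>
#include <memory>
#include <string>

// Returning an owning std::string: no caller-side free needed.
inline std::string ReturnCopy(const std::string& s) {
    return s;  // value semantics; the destructor releases the buffer
}

// When a raw char buffer is unavoidable, wrap it so ownership is explicit.
inline std::unique_ptr<char[]> ReturnOwnedBuffer(const std::string& s) {
    std::unique_ptr<char[]> buf(new char[s.size() + 1]);
    std::strcpy(buf.get(), s.c_str());
    return buf;  // unique_ptr frees the array automatically
}
```

With either form, the "and when you've done using the strings, free them manually" step disappears, which is the same convenience the StringConvertor destructor is providing on the managed side.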
Most of the data conversions make use of the Marshal class from the System::Runtime::InteropServices namespace. Now most of these functions allocate memory in the native heap and the class uses two vector<> objects to keep track of all allocated memory handles. One vector<> stores all the pointers that need to be freed using Marshal::FreeHGlobal while the other one stores all the BSTR pointers that need to be freed using Marshal::FreeBSTR. The initialization and clean-up of these vectors is done by the StringConvertorBase class which is an abstract ref class that's the base class for the StringConvertor class.
Initially, I had designed the class so that the unmanaged pointers are cached, so that multiple calls to any of the properties that returned pointers to the native heap always returned the cached pointer. But I soon saw that this meant that, if the caller modifies data using one of these pointers, it dirties the internal state of the StringConvertor object which was obviously something that should not be allowed. Considering that, managed-unmanaged transitions should not be over-used and that in most frequent situations, the required life-time of the unmanaged data is very small, usually for making a function call, I assumed that any user of this class with any level of adequacy as a programmer would never abuse the class to such an extent that there'd be lots of unmanaged memory lying un-freed. So the vectors and the unmanaged pointers/handles they contain are freed up in the class destructor (as well as in the finalizer as a safety measure).
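The hand-out-and-track bookkeeping described above can be sketched in standard C++ as well. The following is a minimal analogue, not the article's class: every raw copy handed out is recorded in a vector and freed in the destructor, so callers never free the pointers themselves, and each call returns a fresh copy so a caller's writes cannot dirty the stored string. All names here are illustrative.

```cpp
#include <cassert>
#include <cstring>
#include <string>
#include <vector>

// Minimal standard-C++ analogue of the allocation-tracking idea.
class ScopedStringConvertor {
public:
    explicit ScopedStringConvertor(std::string s) : value_(std::move(s)) {}

    // Non-copyable: this object owns every raw buffer it hands out.
    ScopedStringConvertor(const ScopedStringConvertor&) = delete;
    ScopedStringConvertor& operator=(const ScopedStringConvertor&) = delete;

    ~ScopedStringConvertor() {
        for (char* p : allocations_)  // free everything we handed out
            delete[] p;
    }

    // Returns a fresh, writable copy; ownership stays with this object.
    char* NativeCharPtr() {
        char* copy = new char[value_.size() + 1];
        std::strcpy(copy, value_.c_str());
        allocations_.push_back(copy);
        return copy;
    }

    const std::string& Value() const { return value_; }

private:
    std::string value_;
    std::vector<char*> allocations_;
};
```

As with the managed class, limiting the object's scope is what makes this safe: the buffers stay valid exactly as long as the convertor itself.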
As earlier mentioned, if for any reason, you want to change the internal string of the StringConvertor object, use the InteriorCharPtr property and write to the buffer directly - but be very careful how you use it.
CString
Conversions to CString and from CString are so trivial, that it didn't make sense to add them to this class.
CString str("Hello");
String^ s = gcnew String(str); //CString to String
CString strnew(s); //String to CString
I believe I've covered all the basic data types and that these types can be used to create any other more complex data types. For example, the BSTR can be used to make a CComBSTR or a _bstr_t, while the char* (or __wchar_t*) can be used to make a CString.
Suggestions and feedback | http://www.codeproject.com/Articles/10400/StringConvertor-A-convertor-class-for-managed-unma?msg=4144284 | CC-MAIN-2014-10 | en | refinedweb |
and tracker starts like this, just like in the map example:
Code:
Ext.ns('xplugin.google');

Ext.define('xplugin.google.Tracker', {
    extend: 'Ext.util.GeoLocation',
    alias : 'plugin.gmaptracker',
Ok, shouldn't have an issue then; could be an issue with the beta (it is beta). Also, in Ext JS 4, there is no need to call Ext.ns.

The Ext.ns part was taken from the map example....
it's a shame this isn't working, I was hoping to demo openURL functionality on Wednesday... unfortunately it looks like I've run out of time with Sencha Touch... I'm not going to be able to wait to test again and I'm having too many problems with iOS 6 even on the release version, let alone the betas... gonna have to start using Objective-C again.
Thanks for your help Mitchell.
Same issue here, custom plugin loading is broken.
I renamed all my extensions classes only to find out that this can't be fixed.
Even using my custom namespace with no specific loader, the build won't work...
And there is no easy way to go back to the old tools...
I actually took out the references to the plugins and still got the rest of the errors... the familiar line 707 error that a lot of people seem to be getting... I think we need to wait until the next cmd beta at least.
I'm not sure if this is the correct way or not, but I edited the .sencha/app/sencha.cfg file and added the plugin folder to app.classpath. Now I can build with no problem.
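For reference, the edit described above boils down to one extra entry on the app.classpath line of .sencha/app/sencha.cfg. The exact default entries vary by Sencha Cmd version, and the lib/plugin path here is an assumption; adjust it to wherever your plugin sources actually live:

```
# .sencha/app/sencha.cfg (illustrative)
app.classpath=${app.dir}/app.js,${app.dir}/app,${app.dir}/lib/plugin
```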
I was also facing the same issue. I realized it was a name space issue.
Following is my namespace configuration ->
Ext.Loader.setPath({
'Ext' : 'touch/src',
'WU' : 'app',
'Ext.plugin': 'lib/plugin'
});
And I placed the lib/google inside the touch/src folder to use the same namespace. It has solved this issue.
Thank you for reporting this bug. We will make it our priority to review this report. | http://www.sencha.com/forum/showthread.php?242590-2.1b3-can-t-include-plugins-in-build/page2 | CC-MAIN-2014-10 | en | refinedweb |
Hi,
I was about to re-invent the wheel, again, when I thought that there are probably many among you who had already solved this problem.
I have a few huge fasta files (half a million sequences) and I would like to know the average length of the sequences for each of these files, one at a time.
So, let's call this a code golf. The goal is to make a very short code to accomplish the following:
Given a fasta file, return the average length of the sequences.
The correct answer will go to the answer with the most votes on Friday around 16h in Quebec (Eastern Time).
You can use anything that can be run on a linux terminal (your favorite language, emboss, awk...), diversity will be appreciated :)
Cheers
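For readers comparing the golfed answers below, here is a minimal plain-Python sketch of the task itself (no libraries; the streaming loop handles line-wrapped records too):

```python
# Stream the file line by line: count records at '>' headers,
# sum the lengths of all other (sequence) lines.
def average_length(lines):
    count = 0
    total = 0
    for line in lines:
        line = line.strip()
        if line.startswith(">"):
            count += 1
        else:
            total += len(line)
    return total / float(count)

fasta = [
    ">seq1", "ACGT", "AC",   # one record wrapped over two lines, length 6
    ">seq2", "ACGTACGT",     # length 8
]
print(average_length(fasta))  # 7.0
```

Pass it any iterable of lines (e.g. an open file handle) so nothing is held in memory.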
CONCLUSION:
And the correct answer goes to the most voted question, as promised :) Thank you all for your great participation! I think I am not the only one to have appreciated the great diversity in the answers, as well as very interesting discussions and ideas.
EDIT:
Although the question is now closed, do not hesitate to vote for all the answers you found interesting, especially those at the bottom! They are not necessarily less interesting. They often have only arrived later :)
Thanks again!
Bioconductor?
library(Biostrings)
s <- read.DNAStringSet(filename, "fasta")
sum(width(s)) / length(s)
@Stefano: Yeah, I think the above does just read everything in. Seems to be able to cope with pretty big files, but actually maybe this is better:
library(Biostrings)
s <- fasta.info(filename)
sum(s) / length(s)
I always forget about R and bioconductor
and littleR could make this command-line-able:
This could be a winner. Mean = stats, stats = R.
Am I wrong, or would this code read the whole file into memory? With huge fasta files it might be a problem.
Hmm, I think you're right. It seems to be able to cope with pretty big files though. Maybe this instead...?
library(Biostrings)
s <- fasta.info(filename)
sum(s) / length(s)
Not to mention that the "code" itself is just a hack, as it depends on a library. The real code would be to see the actual library function that reads the file and parse the data. I could do this in any language as long as I had a library that would do everything for me.
Ah, maybe I missed the point - I thought the OP was just looking for a quick way to solve the problem. Are the rules that you can only use core features of a language? In that case, I guess I'd go with Perl.
There are no rules, but if someone wants to use a library that does everything, then we don't have different approaches or code, we just have libraries and hacks. That's my own perspective.
Indeed, maybe next time I propose a code golf, I'll ask people to use their favorite language natively, w/o supplementary libraries. Not because it is functionally better, but maybe didactically preferable :) It also shows better how different approaches/paradigms/philosophies of programming give birth to a lot of diversity! Cheers
Fair enough, although I would argue that's it's the opposite of hacky to use an existing, well supported, open source library. Sure, it's useful to see how to implement solutions to a problem in different languages and in different ways, but it's also fairly handy to know if someone has already done all the work for you somewhere.
@CassJ I personally find nothing wrong with any of the answers posted here. I just have some thoughts that, for further questions, and in the context that these could be used to fuel a maybe-to-be-called Project Mendel, I may be inclined to ask people to use only with native code from their favorite language! Thanks for your answer :)
The benefit of using high level languages like R is specifically that they have libraries to help make these kind of tasks easy. Why force folks to re-implement a Fasta parser when a robust solution exists? The interesting thing about this post is the wide variety of approaches; you would take that away by putting restrictions on folks creativity.
I'd point out that the original question uses the phrase "reinvent the wheel" and specifies any command-line solution, including emboss. Libraries do not "do everything". You need to know where to find them, how to install them and how to write code around them. These are core bioinformatics skills. Writing code from first principles, on the other hand, is a computer science skill. Personally, I favour quick, elegant and practical solutions to problems over computational theory.
@Eric - Project Mendel is a grand idea :) If it's mainly focusing on bioinformatics algorithms then maybe there's a similar need for a bioinformatics CookBook with JFDI implementations of common tasks in different languages?
@CassJ This whole discussion about native code vs. use of modules led me to the very same conclusion earlier today. Namely, that 2 different projects were probably needed. I had, not seriously, thought it could be the "Mendel CookBook" :P More seriously, these 2 projects MUST come into existence. They would be a great asset for the development of the bioinformatics community among less computer-oriented people among biologists (and maybe as well among less biology-oriented people among informaticians :)
One day we won't need to code, we will have only libraries. I dream of that day, when other people can do 100% of the work for me.
This version is almost twice longer than mine perl version and is even longer than perl version including invocation. LOL, is it still code golf contest?
How about the following very simple AWK script?
awk '{/>/&&++a||b+=length()}END{print b/a}' file.fa
43 characters only (not counting file name). AWK is not known for its speed, though.
Indeed, this is the slowest I tried so far! 18secs for the uniprot_sprot.fasta file. Funny, when I asked the question, I thought this approach, being 'closer to linux', would be the fastest. We learn everyday :)
Since it uses itertools and never loads anything into memory this should scale well for even giant fasta files.
from itertools import groupby, imap

tot = 0
num = 0
with open(filename) as handle:
    for header, group in groupby(handle, lambda x: x.startswith('>')):
        if not header:
            num += 1
            tot += sum(imap(lambda x: len(x.strip()), group))

result = float(tot) / num
2.55 seconds for 400,000 sequences. Not bad! I think there is a mistake with the 'imap' function. It should be 'map' :) (I change it)
actually it's supposed to be imap, but you also need another import from itertools ... the nice thing about imap is that it won't load the whole iterator into memory at once, it will process the iterator as requested. This is REALLY useful when dealing with fasta files of contigs from many genomes.
I also corrected that the result should be a float so I put 'float(tot)' instead of 'tot' in the result.
one of these days I'll remember that Python doesn't auto-convert on division ... that gotcha has bitten me in the ass many times before, thank goodness for unit-testing.
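A side note on the imap discussion above: itertools.imap only exists in Python 2; in Python 3 the built-in map is already lazy, which is easy to check:

```python
# map() in Python 3 evaluates on demand, like itertools.imap did in Python 2.
evaluated = []

def record(x):
    evaluated.append(x)
    return len(x)

lazy = map(record, ["ACGT", "AC", "ACGTACGT"])  # nothing evaluated yet
print(evaluated)   # []
total = sum(lazy)  # evaluation happens here, one item at a time
print(total)       # 14
```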
And just for completeness, a version using the BioRuby library:
#!/usr/bin/ruby
require "rubygems"
require "bio"

seqLen = []
Bio::FlatFile.open(ARGV[0]) do |ff|
  ff.each do |seq|
    seqLen << seq.length
  end
end

puts (seqLen.inject(0) { |sum, element| sum + element }).to_f / seqLen.size
you can get rid of the (0) after inject, I think.
You could. It just sets an initial value for sum.
I voted +1 (regretfully) for your answer Neil ;-)
Very gracious, I may return the favour :-)
This seems to do it, straight from the command line. Obviously, being perl, it is compact but a bit obfuscated (also in the reasoning ;))
perl -e 'while(<>){if (/^>/){$c++}else{$a+=length($_)-1}} print $a/$c' myfile.fa
It basically counts all nucleotides (in $a) and the number of sequences (in $c). At the end divides the number of nucleotides by the number of sequences, getting the average length.
$a+=(length($_)-1)
-1 because there is the newline to take into account. Use -2 if you use an MS Windows file. Or chomp.
Does not require a lot of memory, and it only needs to read the file once.
I see it uses a similar logic as the one posted by Will. Didn't see it, but he posted first
I actually pilfered the idea from a blog post: ... I use it for all of my production code since it deals well with giant (or functionally infinite) fasta files.
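The same newline bookkeeping in Python terms: the -1 (Unix) / -2 (Windows) subtraction is exactly what rstrip avoids, since it can strip CR and LF together:

```python
# len(line) - 1 vs len(line) - 2 becomes unnecessary with rstrip("\r\n").
def seq_len(line):
    return len(line.rstrip("\r\n"))

print(seq_len("ACGT\n"))    # 4 (Unix)
print(seq_len("ACGT\r\n"))  # 4 (Windows)
print(seq_len("ACGT"))      # 4 (no trailing newline)
```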
Same thing but a few chars shorter...
perl -le 'while(<>){/^>/?$c++:{$a+=length}} print $a/$c' myfile.fa
and I thought perl code properly indented was tough to read ... that might just as well be in Wingdings to me ;)
perl -nle'if(/^>/){$n++}else{$l+=length}END{print$l/$n}' myfile.fa
I personally like the use of the -n option plus an END block for this kind of command line version.
The perl script might not be so short, but it is quite simple:
my ($n, $length) = (0, 0);
while (<>) {
    chomp;
    if (/^>/) {
        $n++;
    } else {
        $length += length $_;
    }
}
print $length/$n;
I LIKE to see perl written on many lines like that :) My eyes hurt less!
In fact, code like this one bring me closer to the day where I will want to learn Perl! Thanks!
Haskell Version
I'm a relative Haskell newbie, and I'm sure this could be written more compactly or more elegantly. It's fast, though; even faster than the FLEX version, in my hands. I did see some timing strangeness that I haven't figured out, though--I thought the flex version ran faster the first time I tried it, but now it's consistently slower when I try to run it. Also, I seem to get a different mean value with the perl version than with the others I've tried. So please do reproduce.
EDIT: As Pierre Lindenbaum points out, I forgot the "-f" flag to flex; the times below are updated with that flag.
[?]
Some comparisons, on my machine:
[?]
Thanks for the nice clean problem to learn with!
Interesting. I had a look at haskell before. By the way, I compiled my flex version with "-f": the lexer is larger but faster.
B.interact takes stdin and reads it into a string. And it does that lazily, so this code doesn't ever have the entire file in memory. B.lines separates that string into a list of lines (eliminating newlines); foldl' fastaCount (0, 0) folds the "fastaCount" function over the list of lines, accumulating a count of sequences and a sum of their lengths. showMean takes the resulting count and length and returns a string with the mean, and B.pack does a bit of type conversion (turns the regular string from "show" into the bytestring that B.interact expects)
Ah, the "-f" is what I forgot. With -f, the flex version takes 0.986 seconds real time on my machine (about 0.24 seconds slower than the haskell version)
import Bio.Sequence

main = do
  ss <- map seqlength `fmap` readFasta "input.fasta"
  print (sum ss `div` length ss)
You could also use the Haskell bioinformatics library, which provides Fasta parsing and, among other things, a seqlength function.
Common Lisp version using cl-genomic (shameless self-promotion)
(asdf:load-system :cl-genomic)
(in-package :bio-sequence-user)

(time
 (with-seq-input (seqi "uniprot_sprot.fasta" :fasta :alphabet :aa
                       :parser (make-instance 'virtual-sequence-parser))
   (loop for seq = (next seqi)
         while seq
         count seq into n
         sum (length-of seq) into m
         finally (return (/ m n)))))

Evaluation took:
  9.301 seconds of real time
  9.181603 seconds of total run time (8.967636 user, 0.213967 system)
  [ Run times consist of 0.791 seconds GC time, and 8.391 seconds non-GC time. ]
  98.72% CPU
  22,267,952,541 processor cycles
  2,128,118,720 bytes consed

60943088/172805
Reading the other comments, use of libraries seems controversial, so in plain Common Lisp
(defun mean-seq-len (file)
  (declare (optimize (speed 3) (safety 0)))
  (with-open-file (stream file :element-type 'base-char :external-format :ascii)
    (flet ((headerp (line)
             (declare (type string line))
             (find #\> line))
           (line ()
             (read-line stream nil nil)))
      (do ((count 0)
           (total 0)
           (length 0)
           (line (line) (line)))
          ((null line) (/ (+ total length) count))
        (declare (type fixnum count total length))
        (cond ((headerp line)
               (incf count)
               (incf total length)
               (setf length 0))
              (t (incf length (length line))))))))

(time (mean-seq-len "uniprot_sprot.fasta"))

Evaluation took:
  3.979 seconds of real time
  3.959398 seconds of total run time (3.803422 user, 0.155976 system)
  [ Run times consist of 0.237 seconds GC time, and 3.723 seconds non-GC time. ]
  99.50% CPU
  9,525,101,733 processor cycles
  1,160,047,616 bytes consed

60943088/172805
Thanks, I was only dreaming of seeing a Common Lisp answer to this question :)
34 chars, 45 including invocation.
$ time perl -nle'/^>/?$c++:($b+=length)}{print$b/$c' uniprot_sprot.fasta 352.669702844246 real 0m2.169s user 0m2.048s sys 0m0.080s
Challenge accepted!
5 lines of Python (only 3 of actual work) that print the mean, given the fasta file as a command line argument. It is a bit obfuscated, but I went for brevity:
import sys
from Bio import SeqIO

handle = open(sys.argv[1], 'rU')
lengths = map(lambda seq: len(seq.seq), SeqIO.parse(handle, 'fasta'))
print sum(lengths)/float(len(lengths))
Disclaimer: I have no idea how this will scale. Probably not great.
you could save a line by putting the open() in the SeqIO.parse() call, but my soul rebelled at that point.
Thanks :) I understand. I tend to write longer code these days in order to be clearer. After all, I'm going to have to re-read this code sometime!
It took 6.6 seconds for 400,000 sequences. Not uber fast, but usable :)
Simon, you can replace the reduce(lambda x,y: x+y, lengths) bit with just sum(lengths) to make the last line more straightforward.
Brad, of course, I always forget about sum()... Changed for the increased readability.
If it depends on an external library, I can do in one line.
Since the question was about not re-inventing the wheel, I decided not to do it with the fasta parsing. But since that's where the biggest performance hit is, I guess I could have done better implementing it myself.
Here is another R example, this time using the excellent seqinR package:
library(seqinr)
fasta <- read.fasta("myfile.fa")
mean(sapply(fasta, length))
And here's a way to run it from the command line - save this code as meanLen.R:
library(seqinr)
fasta <- read.fasta(commandArgs(trailingOnly = T))
mean(sapply(fasta, length))
And run it using:
Rscript meanLen.R path/to/fasta/file
Another R hack, show me the function in the library and I will upvote it.
R is open source. Feel free to examine the library code yourself :-)
Having made the shortest and slowest program, allow me to also present a longer but much faster solution in C that makes use of UNIX memory mapping to read the input file:
#include "sys/types.h"
#include "sys/stat.h"
#include "sys/mman.h"
#include "fcntl.h"
#include "stdio.h"
#include "unistd.h"

void main(int ARGC, char *ARGV[])
{
  int fd;
  struct stat fs;
  char *p1, *p2, *p3;
  unsigned long c1, c2;

  // Map file to memory.
  fd = open(ARGV[1], O_RDWR);
  fstat(fd, &fs);
  p1 = mmap(NULL, fs.st_size, PROT_READ, MAP_SHARED, fd, 0);
  p2 = p1;
  p3 = p1+fs.st_size;

  // Do the actual counting.
  c1 = 0; c2 = 0;
  while (p2 < p3 && *p2 != '>') ++p2;
  while (*p2 == '>') {
    ++c1;
    while (p2 < p3 && *p2 != '\n') ++p2;
    while (p2 < p3 && *p2 != '>') {
      if (*p2 >= 'A' && *p2 <= 'Z') ++c2;
      ++p2;
    }
  }

  // Print result.
  printf("File contains %d sequences with average length %.0f\n", c1, (float)c2/c1);

  // Unmap and close file.
  munmap(p1, fs.st_size);
  close(fd);
}
On my server it takes only 0.43s wall time (0.37s CPU time) to process uniprot_sprot.fasta
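The same memory-mapping idea can be sketched in Python with the stdlib mmap module; it will be far slower than the C version, but the access pattern (let the OS page the file in on demand) is identical:

```python
# Memory-map the file and walk it line by line instead of read()-ing it.
import mmap
import os
import tempfile

def average_length_mmap(path):
    count = 0
    total = 0
    with open(path, "rb") as fh:
        mm = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ)
        try:
            for line in iter(mm.readline, b""):
                if line.startswith(b">"):
                    count += 1
                else:
                    total += len(line.rstrip(b"\r\n"))
        finally:
            mm.close()
    return total / float(count)

# tiny self-test with a temporary two-record file
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as fh:
    fh.write(b">a\nACGT\nAC\n>b\nACGTACGT\n")
print(average_length_mmap(path))  # 7.0
os.remove(path)
```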
maybe you can put the code to compile this program?
gcc -O3 -o avglength avglength.c
It's C, but it's not ANSI :-)
please enlighten me ... which part of it is not ANSI? (ok, apart from the // comments that should be /* */ if I remember right.)
"unistd.h" , open() , mmap(), "sys/*.h", "fcntl.h" are not defined in the ANSI-C spec.
but I'm just being picky :-)
something like a copy+paste from a previous question. It won't be the shortest code, but it may be the fastest.
The following code is a FLEX lexer, with the following rules:
- count each header line as one sequence;
- add the length of each line of letters to the total;
- ignore whitespace;
- raise an error for the other cases:
%{
#include <stdio.h>
#include <stdlib.h>
static long sequences=0;
static long size=0L;
%}
%option noyywrap
%%
^>.*\n        { sequences++; }
^[A-Za-z]+    { size+=yyleng; }
[ \t\n]       ;
.             { fprintf(stderr,"Illegal character \"%s\".", yytext); exit(EXIT_FAILURE); }
%%
int main(int argc,char** argv)
{
  yylex();
  if(sequences==0) return EXIT_FAILURE;
  printf("%f\n",(double)size/(double)sequences);
  return EXIT_SUCCESS;
}
Compilation:
flex -f golf.l
gcc -O3 -Wall lex.yy.c
Test:
wget ""
gunzip uniprot_sprot.fasta.gz   ## 232 Mo
time ./a.out < uniprot_sprot.fasta
352.669703

real    0m0.692s
user    0m0.572s
sys     0m0.120s
definitely faster:
time python mean_length.py uniprot_sprot.fasta
352.669702844
11.04s user 0.20s system 100% cpu 11.233 total
Thanks for testing Simon
In Ruby!
a,b=0,[];File.new(ARGV[0]).each do |i|;if i[0]!=62;a+=i.length-1;else;b.push a;a=0;end;end;puts b.inject{|s,v|s+=v}/b.length.to_f
130 Chars, works w/ wrapped FASTA files.
Uncondensed:
a, b = 0, []
File.new(ARGV[0]).each do |i|
  if i[0] != 62
    a += i.length - 1
  else
    b.push a
    a = 0
  end
end
puts b.inject { |s, v| s += v } / b.length.to_f
Just now learning Ruby so help me out.
You're definitely not late, as the correct answer will be decided in 3 days and all solutions are welcomed! :) The assumption that the fasta files are not wrapped is probably not met most of the time, however. Cheers!
A Clojure version using BioJava to handle the Fasta parsing:
(import '(org.biojava.bio.seq.io SeqIOTools))
(use '[clojure.java.io])

(defn seq-lengths [seq-iter]
  "Produce a lazy collection of sequence lengths given a BioJava StreamReader"
  (lazy-seq (if (.hasNext seq-iter)
              (cons (.length (.nextSequence seq-iter))
                    (seq-lengths seq-iter)))))

(defn fasta-to-lengths [in-file seq-type]
  "Use BioJava to read a Fasta input file as a StreamReader of sequences"
  (seq-lengths (SeqIOTools/fileToBiojava "fasta" seq-type (reader in-file))))

(apply fasta-to-lengths *command-line-args*)
and a more specific implementation without using external libraries for FASTA:
(use '[clojure.java.io])
(use '[clojure.contrib.str-utils2 :only (join)])

(defn fasta-lengths [in-file]
  "Generate collection of FASTA record lengths, splitting at '>' delimiters"
  (->> (line-seq (reader in-file))
       (partition-by #(.startsWith ^String % ">"))
       (filter #(not (.startsWith ^String (first %) ">")))
       (map #(join "" %))
       (map #(.length ^String %))))

(fasta-lengths (first *command-line-args*))
I've been iterating over a couple versions of this to improve performance. This thread on StackOverflow has more details for those interested.
% time cljr run fasta_counter.clj uniprot_sprot.fasta PROTEIN
60943088/172805
11.84s
OCaml:
[?]
To give you some idea of the performance, the natively compiled version takes about 1.4s (real time) on my laptop vs 0.6 s for the C version posted by Lars Juhl Jensen.
[?]
$totalLength = 0;
$nbSequences = 0;
$averageLength = 0;
$fastaFile = fopen("./database/uniprot_sprot.fasta", "r");
if ($fastaFile) {
    while (!feof($fastaFile)) {
        $line = fgets($fastaFile, 4096);
        if (preg_match('/^>/', $line)) {
            $nbSequences++;
        } else {
            $totalLength += strlen(trim($line));
        }
    }
    fclose($fastaFile);
}
$averageLength = ($totalLength / $nbSequences);
print "Total sequences : " . $nbSequences . "\n";
print "Total length : " . $totalLength . "\n";
print "Average length : " . $averageLength . "\n";
Output :
[?]
[?]
[?]
Can I post more than one?
grep/awk:
grep '^[GATC].*' f | awk '{sum+=length($0)}END{print sum/NR}'
where f is the filename ;)
62 chars, btw.
dangit... this too only works for unwrapped fasta files. shoot me!
Erlang special golfing 213 chars version:
-module(g).
-export([s/0]).
s()->open_port({fd,0,1},[in,binary,{line,256}]),r(0,0),halt().
r(C,L)->receive{_,{_,{_,<<$>:8,_/binary>>}}}->r(C+1,L);{_,{_,{_,Line}}}->r(C,L+size(Line));_->io:format("~p~n",[L/C])end.
Readable but reliable version:
-module(g).
-export([s/0]).

s() ->
    P = open_port({fd, 0, 1}, [in, binary, {line, 256}]),
    r(P, 0, 0),
    halt().

r(P, C, L) ->
    receive
        {P, {data, {eol, <<$>:8, _/binary>>}}} -> r(P, C+1, L);
        {P, {data, {eol, Line}}} -> r(P, C, L + size(Line));
        {'EXIT', P, normal} -> io:format("~p~n", [L/C]);
        X -> io:format("Unexpected: ~p~n", [X]), exit(bad_data)
    end.
Compile:
$ erl -smp disable -noinput -mode minimal -boot start_clean -s erl_compile compile_cmdline @cwd /home/hynek/Download @option native @option '{hipe, [o3]}' @files g.erl
Invocation:
$ time erl -smp disable -noshell -mode minimal -boot start_clean -noinput -s g s < uniprot_sprot.fasta
352.6697028442464

real    0m3.241s
user    0m3.060s
sys     0m0.124s
Another answer. Last time I used FLEX; this time I use the GNU BISON parser generator. Here I've created a simple grammar to describe a FASTA file. Of course this solution is slower than Flex :-)
%{
#include <stdio.h>
#include <ctype.h>
static long sequences=0L;
static long total=0L;
%}
%error-verbose
%union { int count; }

%token LT
%token<count> SYMBOL
%token CRLF
%token OTHER

%start file

%{
void yyerror(const char* message)
    {
    fprintf(stderr,"Error %s\n",message);
    }

int yylex()
    {
    int c=fgetc(stdin);
    if(c==-1) return EOF;
    if(c=='>') return LT;
    if(c=='\n') return CRLF;
    if(isalpha(c)) return SYMBOL;
    return OTHER;
    }
%}
%%
file: fasta | file fasta;
fasta: header body { ++sequences; };
header: LT noncrlfs crlfs;
body: line | body line;
line: symbols crlfs;
symbols: symbol {++total;} | symbols symbol {++total;};
symbol: SYMBOL;
crlfs: crlf | crlfs crlf;
crlf: CRLF;
noncrlfs: noncrlf | noncrlfs noncrlf;
noncrlf: SYMBOL | OTHER | LT;
%%

int main(int argc,char** argv)
    {
    yyparse();
    if(sequences>0L) fprintf(stdout,"%f\n",(total/(1.0*sequences)));
    return 0;
    }
Compilation:
bison golf.y
gcc -O3 -Wall golf.tab.c
Test:
time ./a.out < uniprot_sprot.fasta
352.669703

real    0m9.723s
user    0m9.633s
sys     0m0.080s
And now, with awk:
$awk '$1!~ /^>/ {total += length; count++} END {print total/count}' uniprot_sprot.fasta
Still not smoking fast but down to one line (for certain values of "one" and "line")
EDIT
And,as Pierre points out below, I got this wrong again, as it doesn't account for wrapped lines. This one does work:
$ awk '{if (/^>/) {record ++} else {len += length}} END {print len/record}' uniprot_sprot.fasta
It returns the mean length of the fasta lines. We need the mean length of the whole fasta sequences.
right, should have worked that out really (i'd tested it on an unwrapped fasta file)
Using MYSQL, yes we can:
create temporary table T1(seq varchar(255));
LOAD data local infile "uniprot_sprot.fasta" INTO TABLE T1 (seq);
select @SEQ:=count(*) from T1 where left(seq,1)=">";
select @TOTAL:=sum(length(trim(seq))) from T1 where left(seq,1)!=">";
select @TOTAL/@SEQ;
Result:
me@linux-zfgk:~> time mysql -u root -D test < golf.sql
@SEQ:=count(*)
518415
@TOTAL:=sum(length(trim(seq)))
182829264
@TOTAL/@SEQ
352.669702844000000000000000000000

real    0m44.224s
user    0m0.008s
sys     0m0.008s
Here's another python one, it's really the same as Simon's but using list comprehensions instead of lambdas (I find them more readable, you might not)
import sys
from Bio import SeqIO

lengths = [len(seq) for seq in SeqIO.parse(sys.argv[1], "fasta")]
print sum(lengths)/float(len(lengths))
For SeqIO to accept filenames as strings (instead of handles) you need biopython 1.54.
And I'm sure this will be slower than any of the other options already presented ;)
EDIT:
As Eric points out below this pumps all the lengths into one big list, which isn't exactly memory extensive ;)
I can hang on to my completely irrational fear of lambda/map and itertools with a generator expression:
lengths = (len(seq) for seq in SeqIO.parse(sys.argv[1], "fasta"))
total = 0
for index, length in enumerate(lengths):
    total += length
print total/float(index + 1)
Still pretty slow, about a minute on my computer, so probably not a goer for the original challenge!
@david. Slow, I think, is not the problem here. For very large fasta files, your code is going to create a list of many thousand hundreds (or millions) of integers, which is going to be memory intensive. The best thing to do I guess would be to test it with the data that Pierre used for his test, the uniprot_sprot.fasta file :) Cheers
Oh, yeah, read the specification ...
Just added a generator expression version, interestingly enough the uniprot_sprot file doesn't make too much of a footprint with ~500 000 integers. But both methods are really too slow for what you want to do ;(
I guess I should have tested the memory footprint before putting the comment in the first place. A list of 50,000,000 integers will take about 1 Gb or so (just saturated the memory of my old faithful laptop trying).
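The footprint guess above can be checked directly with sys.getsizeof; exact numbers depend on the CPython build, so treat these as rough:

```python
# Per-element cost of a Python list of distinct ints: one 8-byte pointer
# in the list plus one int object (~28 bytes each on 64-bit CPython).
import sys

n = 100000
lengths = list(range(256, 256 + n))  # distinct ints, like sequence lengths
per_element = (sys.getsizeof(lengths) + sum(map(sys.getsizeof, lengths))) / float(n)
print(per_element)                    # roughly 35-40 bytes per entry
print(per_element * 50000000 / 1e9)   # hence a couple of GB for 50M entries
```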
A third variation, this time using Javacc (the Java Compiler Compiler):
options { STATIC=false; }

PARSER_BEGIN(Golf)
public class Golf {
    private int sequences=0;
    private int total=0;

    public static void main(String args[]) throws Exception {
        new Golf(new java.io.BufferedInputStream(System.in)).input();
    }
}
PARSER_END(Golf)

TOKEN: /* RESERVED TOKENS FOR UQL */
{
  <GT: ">">
| <CRLF: ("\n")+>
| <SYMBOLS: (["A"-"Z", "a"-"z"])+>
| <OTHER: (~["A"-"Z", "a"-"z","\n",">"]) >
}

void input():{}
{
  (fasta())+ { System.out.println(total/(double)sequences); }
}

void fasta():{}
{ header() (line())* }

void header():{}
{ <GT> (<SYMBOLS>|<OTHER>|<GT>)* <CRLF> { sequences++; } }

void line():{Token t;}
{ t=<SYMBOLS> <CRLF> { total+=t.image.length(); } }
Compile
javacc Golf.javacc
javac Golf.java
Test
time java Golf < uniprot_sprot.fasta
352.6697028442464

real    0m11.014s
user    0m11.529s
sys     0m0.180s
Perl 6. Painfully slow, so I'm probably doing something dumb that isn't obvious to me.
[?]
one suggestion: in the case of multi-line FASTA files, the more common scenario is that any given line will NOT be a header line. thus, one speedup would be to test for "ne '>'" first in the while loop. not sure what speeds you're seeing but you could also try to "slurp" the file into memory first.
OK I'll give it a go. It's still much slower than Perl5 doing the same thing, i.e. reading line by line.
Maybe a grammar would be the best option? It is Perl 6...
I might give a grammar a go but I don't expect it'll help much. Even basic I/O - reading a file line by line = is still an order of magnitude slower than in Perl 5.
another "fastest" solution in C, with a lot of micro-optimization
#include <stdio.h>
#define BUFFER_SIZE 10000000

int main(int argc,char** argv)
    {
    size_t i;
    char buffer[BUFFER_SIZE];
    long total=0L;
    long sequences=0L;
    size_t nRead=0;
    size_t inseq=1;
    while((nRead=fread(buffer,sizeof(char),BUFFER_SIZE,stdin))>0)
        {
        for(i=0;i<nRead;++i)
            {
            switch(buffer[i])
                {
                case '>': if(inseq==1) ++sequences; inseq=0; break;
                case '\n': inseq=1; break;
                case 'A' ... 'Z': if(inseq==1) ++total; break;
                default: break;
                }
            }
        }
    printf("%f\n",(total/(1.0*sequences)));
    return 0;
    }
Compilation:
gcc -O3 -Wall golf.c
Result:
time ./a.out < uniprot_sprot.fasta
352.669703

real    0m0.952s
user    0m0.808s
sys     0m0.140s
Pierre, not to blow my own horn, but on my machine your C implementation is more than 2x slower than the one I posted ;)
oh nooooooo :-)
but my solution is ANSI :-))
Touché! I guess the price of standard compliance is 2-3x loss in speed? Sounds about right to me ;-)
ab-so-lu-te-ly ! :-)
Create a 'flat' fasta for the human genome that for each chromosome contains the entire sequence as a single line. Now run the tools below and see which one can still do it.
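A quick way to build that stress file, sketched in Python (each synthetic record keeps its whole sequence on one very long line):

```python
# Write n_records FASTA records whose sequences are single unwrapped lines.
import random

def write_flat_fasta(path, n_records=2, line_len=1000000):
    rng = random.Random(0)  # seeded, so the output is reproducible
    with open(path, "w") as out:
        for i in range(n_records):
            out.write(">chr%d\n" % (i + 1))
            out.write("".join(rng.choice("ACGT") for _ in range(line_len)))
            out.write("\n")

write_flat_fasta("flat_test.fa", n_records=2, line_len=100000)
```

Crank line_len up toward chromosome scale to see which of the line-based solutions above fall over.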
I would be incredibly impressed to see an answer in BF, Var'aq or Lolcode ... not for usability but just to see a "real-world" application.
A series of "code golf" tournaments might be a fun way to seed the proposed "Project Mendel"
Kind of what I had in mind :) Something like doing 1 golf-code per week and slowly building up a set of fun problems. Maybe there should be an off-biostars group discussing problem propositions and formulation. For example forming a google group... What do you think?
I love the answers to this question! Amazing diversity: we get the perl, python, R scripts then .... whoa ... flex, clojure, erlang, haskell, ocaml, memory mapped C files.
This is one of the best posts in the forum! Opened my eyes to the full universe of programming approaches. Can I +1 code golf as well as this post? | http://www.biostars.org/p/1758/ | CC-MAIN-2014-10 | en | refinedweb |
18 April 2011
You should be generally familiar with principles of streaming video on the web and encoding video for Flash.
Intermediate
Adobe HTTP Dynamic Streaming was developed by Adobe to deliver content to users via HTTP, enabling dynamic switching of video content quality depending on the bandwidth available to the user. It is especially effective when it works together with Adobe Flash Access 2.0 to protect valuable assets. This guide gives you step-by-step instructions to get your Flash media video platform up and running.
Here are the main features of HTTP Dynamic Streaming content delivery when used with Flash Access 2.0:
These features meet the requirements of most major video content rightsholders, thus enabling you to maintain a broad choice of content at your online resources.
The solution to protect the content and distribute it using Flash Access is intended for the protection of content in multimedia projects using HTTP Dynamic Streaming technology (see Figure 1).
The solution consists of four main modules (content stages):
Content preparation (File Packager for video on demand)
Content preparation includes encoding and encryption using the File Packager tool, which supports the FLV (VP6/MP3) and F4V (H.264/AAC) file formats.
The policies applied at content playback can be managed by the license server, so you can apply the simplest (anonymous) policy of content encryption for HTTP Dynamic Streaming.
For a detailed description, please refer to the section "Content preparation" in this tutorial.
Content delivery (HTTP delivery)
The content encryption process creates three types of files:
For caching the video fragments, a CDN or Nginx server can be used.
License server
Adobe Flash Access Server for Protected Streaming is a license server issuing licenses to users and managing content delivery policies. For a detailed description, please refer to the section, "Configuring the Flash Access Server for Protected Streaming".
Playback
To play back the test content, you can use the freely available OSMF player. You'll find a detailed description in the section, "OSMF video player".
To get things working (see the complete picture in Figure 1), you will have to use some additional tools and modules, as follows.
File Packager (f4fpackager) is an Adobe console application that does the following:
For a detailed description, please refer to the section, "Content preparation".
For video content encoding, a symmetric block encryption algorithm (Advanced Encryption Standard) is used with a block size of 128 bits and a 128-bit key. This is an encryption standard providing high storage security and content delivery in CDNs.
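Flash Access's actual key handling is internal to the File Packager, but the idea that the same content-id and common-key pair must always yield the same 128-bit content key can be illustrated with a generic derivation sketch in Python. The HMAC-SHA256 construction below is purely illustrative, not Adobe's algorithm:

```python
import hashlib
import hmac

def derive_content_key(common_key: bytes, content_id: str) -> bytes:
    """Derive a deterministic 128-bit content key from a common key and a
    content id. (Illustrative only; Flash Access's real derivation differs.)"""
    mac = hmac.new(common_key, content_id.encode("utf-8"), hashlib.sha256)
    return mac.digest()[:16]  # keep 128 bits

common_key = bytes(range(16))  # stand-in for a key generated with OpenSSL
key_a = derive_content_key(common_key, "movies-2011")
key_b = derive_content_key(common_key, "movies-2011")
key_c = derive_content_key(common_key, "trailers")
print(key_a.hex())
```

The point of keeping content-id and common-key stable across a content set falls out of determinism: identical inputs always produce the identical key, so one license decrypts the whole set.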
HTTP Origin Module for Apache handles requests for content fragments. It is included in all versions of Flash Media Server 4, or it can be downloaded from the Adobe website.
A detailed description of content delivery process is given in the section, "HTTP Origin Module operation."
Open Source Media Framework is a reliable and flexible ActionScript framework for rapid development of SWF-based video players. The OSMF sample player (see Figure 2) is designed for HTTP Dynamic Streaming. A detailed description of the player is given in the section, "OSMF video player".
To prepare content for HTTP streaming, you need to use the File Packager, which provides the following:
To enable content fragmentation in Windows:
f4fpackager --input-file=sample.f4v --output-path=c:\sampleoutput
After the encoding is complete, you get the following files: sampleSeg1.f4f, sample.f4x, and sample.f4m.
To enable content fragmentation in Linux:
f4fpackager --input-file=sample.f4v --output-path=/sampleoutput
After encoding, you get the following files: sampleSeg1.f4f, sample.f4x, and sample.f4m.
This includes encoding of content at several bit rates: for example, 150 kbps, 700 kbps, and 1500 kbps. In this example, three files are encoded at different bitrates: sample1_150kbps.f4v, sample1_700kbps.f4v, and sample1_1500kbps.f4v. In Flash Media Server, these files are located in the directory rootinstall\applications\vod\media.
f4fpackager --input-file=sample1_150kbps.f4v --bitrate=150
After the encoding is complete, you get the following files: sample1_150kbpsSeg1.f4f, sample1_150kbps.f4x, and sample1_150kbps.f4m.
f4fpackager --input-file=sample1_700kbps.f4v --manifest-file=sample1_150kbps.f4m --bitrate=700
After the encoding is complete, you get the following files: sample1_700kbpsSeg1.f4f, sample1_700kbps.f4x, and sample1_700kbps.f4m.
In addition to details on the current encoding (sample1_700kbps.f4m), the manifest file sample1_700kbps.f4m also contains information about the first encoding (sample1_150kbps.f4m).
f4fpackager --input-file=sample1_1500kbps.f4v --manifest-file=sample1_700kbps.f4m --bitrate=1500

After the encoding is complete, you get the following files: sample1_1500kbpsSeg1.f4f, sample1_1500kbps.f4x, and sample1_1500kbps.f4m.
In addition to details on the current encoding, the manifest file sample1_1500kbps.f4m contains information about the first encoding (sample1_150kbps.f4m) and the second encoding (sample1_700kbps.f4m). If you encode with multiple bit rates, the information from the first manifest file is copied to the second manifest file, from the second to the third, and so on.
The latest manifest file includes the most up-to-date information on all three files encoded and their different bit rates.
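The manifest-chaining steps above are easy to script. A hypothetical Python driver that builds the three f4fpackager invocations (file names follow the example above; uncomment the subprocess call to actually run them) might look like:

```python
import subprocess

# Bitrates to package; each run chains the previous manifest so the final
# .f4m describes every rendition, mirroring the manual steps above.
bitrates = [150, 700, 1500]

def packaging_commands(basename="sample1"):
    cmds, prev_manifest = [], None
    for kbps in bitrates:
        cmd = ["f4fpackager",
               "--input-file=%s_%dkbps.f4v" % (basename, kbps),
               "--bitrate=%d" % kbps]
        if prev_manifest:
            cmd.append("--manifest-file=" + prev_manifest)
        cmds.append(cmd)
        prev_manifest = "%s_%dkbps.f4m" % (basename, kbps)
    return cmds

for cmd in packaging_commands():
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment where f4fpackager is on PATH
```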
File Packager is designed not only to encode but also to encrypt content. Setting a large number of parameters is much easier using a configuration file:
f4fpackager --conf-file=f4fpackager_config.xml
Here's a description of the parameters:
input-file is the path to the source video file.

content-id is a content identifier you select. It's used with the common-key parameter to generate the content encryption key. Keep the same content-id and common-key settings for an entire set of content to make sure that users can decrypt your content set with a single license.

common-key is a unique 128-bit key (created by the OpenSSL utility) that's used with the content-id to create the encryption key.

license-server-url is the URL of the Flash Access for Protected Streaming license server. It grants the user license.

license-server-cert is an encoded license server certificate. It is obtained from Adobe as a result of licensing and never changes.

transport-cert is an encoded transport certificate (.der). It is obtained from Adobe as a result of licensing and never changes.

packager-credential is a credential used to encrypt content (.pfx). It is obtained from Adobe as a result of licensing and never changes.

credential-pwd is the password for the packager credential. It is obtained from Adobe as a result of licensing and never changes.

policy-file is a policy (.pol). The policy file can be created using the Java API or a utility that comes with Flash Access (AdobePolicyManager.jar).
All parameters should contain relative or absolute file paths for the files. For more information on File Packager, see these resources:
The manifest file (F4M) includes the following:
Here is an example of the manifest file for single-bitrate streaming:

<manifest xmlns="http://ns.adobe.com/f4m/1.0">
  <media url="/myvideo/medium" bitrate="908" width="800" height="600"/>
</manifest>
Here is an example of the manifest file for multi-bitrate streaming:

<manifest xmlns="http://ns.adobe.com/f4m/1.0">
  <media url="/myvideo/low" bitrate="408" width="640" height="480"/>
  <media url="/myvideo/medium" bitrate="908" width="800" height="600"/>
  <media url="/myvideo/high" bitrate="1708" width="1920" height="1080"/>
</manifest>
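A manifest's renditions can also be inspected programmatically. This Python sketch parses a minimal stand-in F4M (same media elements as in the example; a real Flash Media Server manifest carries more metadata) and lists bitrates highest first:

```python
import xml.etree.ElementTree as ET

F4M_NS = "{http://ns.adobe.com/f4m/1.0}"

manifest = """<manifest xmlns="http://ns.adobe.com/f4m/1.0">
  <media url="/myvideo/low" bitrate="408" width="640" height="480"/>
  <media url="/myvideo/medium" bitrate="908" width="800" height="600"/>
  <media url="/myvideo/high" bitrate="1708" width="1920" height="1080"/>
</manifest>"""

def renditions(f4m_text):
    """Return (bitrate, url) pairs from an F4M document, highest first."""
    root = ET.fromstring(f4m_text)
    media = [(int(m.get("bitrate")), m.get("url"))
             for m in root.findall(F4M_NS + "media")]
    return sorted(media, reverse=True)

print(renditions(manifest))
```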
The content file (F4F) contains fragments of the encrypted content. You'll find more information about F4F files in the white paper, HTTP Dynamic Streaming on the Adobe Flash Platform (PDF).
When your video library is ready to be delivered through HTTP Dynamic Streaming, you are ready to configure your server's HTTP infrastructure. Content delivery is enabled by two main modules:
HTTP provides a variety of popular tools for load balancing, caching, and efficient content delivery that are applicable to standard web content.
Table 1 compares various content delivery methods and their parameters, delineating the benefits of HTTP Dynamic Streaming.
Table 1. Comparing content delivery methods
* Live latency for RTMP may vary depending on network infrastructure and buffer settings.
** Live latency for HTTP Dynamic Streaming may vary depending on encoding, fragmentation, and buffer settings.
One of the most impressive benefits of HTTP Dynamic Streaming is that a user with a very slow Internet connection can pause the playback and wait until the video content is fully downloaded. This way, even users with a narrowband Internet connection can watch high-quality videos without interruption.
You'll find more information about the characteristics of HTTP Dynamic Streaming in the Adobe white paper, HTTP Dynamic Streaming on the Adobe Flash Platform (PDF).
To play back content in an OSMF player, you should specify the URL of the manifest file (F4M). Here is how it works:
The fragmented structure of the content (F4F) is shown in Figure 3.
As the content is delivered to the user via HTTP, the delivery process can be analyzed using Firefox with the FireBug plugin installed (see Figure 4).
To create an HTTP Dynamic Streaming application, do the following:
Setting up the Flash Media Interactive Server 4 for HTTP Dynamic Streaming is described in the white paper, HTTP Dynamic Streaming on the Adobe Flash Platform (PDF). For more general information on FMS setup and configuration, see the Flash Media Server 4.0 Help.
Note: If the webroot directory already has the "images" directory, copy the files from the OSMF player's \images directory to webroot\images.
For more details, see the section, "OSMF video player".
Note: The OSMF player requires Flash Player 10.1 or later.
For multi-bitrate streaming, press Q. Press Q– and Q+ to change the content playback bitrate.
To manage digital rights and user access to protected content, Adobe provides Flash Access Server for Protected Streaming. This server issues user licenses to protected content.
Because the policies applied on content playback can be customized by the license server, you can encrypt the content by the simplest (anonymous) policy of content protection.
Flash Access Server for Protected Streaming ignores the policy in the encrypted file itself. Instead, content access parameters and limitations need to be defined on the server side in these configuration files:
In the owner's global configuration file, you can specify a full path to flashaccess-tenant.xml or a path relative to the tenant's directory (LicenseServer.ConfigRoot/flashaccessserver/tenants/tenantname).
Tenant's configuration file:
The tenant's configuration file, flashaccess-tenant.xml, provides settings that control access to the tenant's content:
Note: All licenses issued by Flash Access Server for Protected Streaming are valid for no more than 24 hours (86,400 seconds).
Global configuration file:
The most important settings, such as caching and logging, are configured in the global configuration file, flashaccess-global.xml:
<Caching>: cache management parameters; for example,

<Caching refreshDelaySeconds="..." numTenants="..."/>

refreshDelaySeconds determines the update frequency. A small interval can affect server performance.

numTenants is the number of tenants on the server.
<Logging>: specifies the logging level; for example,

<Logging level="..." rollingFrequency="..."/>

level determines the level of logging. If it is set to "DEBUG", it saves quite a lot of messages to the log file. For optimum performance, Adobe recommends setting the value to "WARN". However, there is a risk of losing important information, such as licensing audit data. For minimal logging, set the value to "INFO".

rollingFrequency specifies how frequently the log files are rotated. You can set the value to "MINUTELY", "HOURLY", "TWICE-DAILY", "DAILY", "WEEKLY", "MONTHLY", or "NEVER".
With Flash Access Server for Protected Streaming, a specific policy is used when playing back the content (the parameters are set in the Flash Access configuration files by default):
Log files are created by Flash Access Server for Protected Streaming and are located in a directory defined as LicenseServer.LogRoot.
Note: If the current log file is deleted or moved on server startup, the server will not create a new log file and the data might be lost in the future.
Directory structure:
LicenseServer.LogRoot/
    flashaccess-global.log
    flashaccessserver/
        flashaccess-partition.log
        tenants/
            tenantname/
                flashaccess-tenant.log
The global log file flashaccess-global.log is located in a directory defined as LicenseServer.LogRoot. This log file contains messages from the Flash Access SDK and messages generated by the server initialization.
The flashaccess-partition.log file is located in the LicenseServer.LogRoot/flashaccessserver directory. This log file includes messages on requested licenses.

The flashaccess-tenant.log file is located in LicenseServer.LogRoot/flashaccessserver/tenants/tenantname. This log file includes information on each requested license.
Using a custom authentication mechanism implies inserting a special token (AuthenticationToken) into the license request:
To enable custom authentication in Flash Access Server for Protected Streaming, you should:
Use policies with the Custom authentication type. Such policies can be created with the Flash Access SDK.

Implement and register a custom authorizer class, such as SampleAuthorizer:
<AuthExtensions> <Extension className="com.adobe.flashaccess.server.license.extension.auth.SampleAuthorizer"/> </AuthExtensions>
For more information, see the white paper, Protecting Content (PDF).
To implement a custom authentication class, follow these steps:
Create the class in the package com.adobe.flashaccess.server.license.extension.auth (see Figure 5).
package com.adobe.flashaccess.server.license.extension.auth;

import java.io.DataInputStream;
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;

import org.apache.commons.logging.Log;

public class SampleAuthorizer implements IAuthorizer {

    public SampleAuthorizer() {}

    public void authorize(IMessageFacade message, IAuthRequestFacade request,
                          IAuthResponseFacade response, IAuthChain chain,
                          Log log) throws Exception {
        if (message.getAuthToken() != null) {
            System.out.println(new String(message.getAuthToken()));
            URLConnection conn = null;
            DataInputStream dis = null;
            boolean authValid = false;
            try {
                conn = new URL("?" + new String(message.getAuthToken())).openConnection();
                conn.setDoOutput(true);
                conn.setConnectTimeout(10000);
                conn.setReadTimeout(10000);
                dis = new DataInputStream(conn.getInputStream());
                String inputLine = null;
                while ((inputLine = dis.readLine()) != null) {
                    if (inputLine.equalsIgnoreCase("auth=true")) {
                        authValid = true;
                        break;
                    }
                }
            } catch (IOException e) {
            } finally {
                if (dis != null) dis.close();
            }
            if (authValid) {
                chain.execute(message);
                return;
            }
        }
        throw new Exception("AuthToken error");
    }

    @Override
    public IAuthorizer clone() {
        return new SampleAuthorizer();
    }
}
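The sample authorizer treats any HTTP resource that answers with a line auth=true as approval. A throwaway Python endpoint honoring that contract could look like the following; the token=... query format and the VALID_TOKENS store are assumptions for the sketch, not part of Flash Access:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

VALID_TOKENS = {"secret-token-1"}  # hypothetical token store

def check_token(query_string):
    """Return the body the authorizer expects: 'auth=true' or 'auth=false'."""
    params = parse_qs(query_string)
    token = params.get("token", [""])[0]
    return "auth=true" if token in VALID_TOKENS else "auth=false"

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = check_token(urlparse(self.path).query).encode("ascii")
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("localhost", 8088), AuthHandler).serve_forever()
```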
To play the video content in your application, you should create a Flash Access compliant video player for secure content playback based on OSMF 1.5 (see Figure 6). The source code for the player is given in this article.
During initialization of the video player, create a new instance of the MediaPlayerSprite class. On its mediaPlayer property, set the DRMEvent.DRM_STATE_CHANGE event handler. This means that on a DRMEvent, the onDRMStateChangeHandler method is called to analyze the event:
private function initMediaPlayer():void {
    _mediaPlayerSprite = new MediaPlayerSprite();
    _mediaPlayerSprite.mediaPlayer.addEventListener(DRMEvent.DRM_STATE_CHANGE, onDRMStateChangeHandler);
    addChild(_mediaPlayerSprite);
}
The addMedia method is used to add a new stream to play. This method creates an instance of the URLResource class, and the content URL (the manifest URL) is passed to the class constructor. Next, create a new MediaFactory object and an instance of the F4MElement class whose constructor accepts your URLResource and an instance of the F4MLoader class whose constructor receives the MediaFactory object:
private function addMedia(m_url:String):void {
    var _urlResource:URLResource = new URLResource(m_url);
    var _factory:MediaFactory = new MediaFactory();
    var _f4mElement:F4MElement = new F4MElement(_urlResource, new F4MLoader(_factory));
    _mediaPlayerSprite.mediaPlayer.media = _f4mElement;
}
The onDRMStateChangeHandler method is invoked whenever a DRMEvent is raised by the mediaPlayer property of the MediaPlayerSprite class. This method loops through all the events of this type and initiates certain actions when it finds a match. For example, when the DRMState.AUTHENTICATION_NEEDED event is raised, this indicates that authentication is required. In this case, authentication is performed as follows:
_mediaPlayerSprite.mediaPlayer.authenticate("test", "test");

where "test" and "test" are the username and password, respectively.
It should be noted that the authentication function can be implemented so that the username and password play an entirely different role (for example, as web session identifiers):
protected function onDRMStateChangeHandler(evt:DRMEvent):void {
    switch (evt.drmState) {
        case DRMState.AUTHENTICATING:
            break;
        case DRMState.AUTHENTICATION_COMPLETE:
            break;
        case DRMState.AUTHENTICATION_ERROR:
            break;
        case DRMState.AUTHENTICATION_NEEDED:
            _mediaPlayerSprite.mediaPlayer.authenticate("test", "test");
            break;
        case DRMState.DRM_SYSTEM_UPDATING:
            break;
        case DRMState.UNINITIALIZED:
            break;
    }
}
By using these methods, you can play back DRM content in SWF-based video players.
If you need to pass a token for authentication, you should use the new API in the DRMManager class. The method setAuthenticationToken(serverUrl:String, domain:String, token:ByteArray):void lets you pass any token you have.
This tutorial has given you an introduction to using Flash Access to protect content served with HTTP Dynamic Streaming, including building a Flash Access compliant OSMF-based media player.
The resources referenced in this tutorial can provide you with more in-depth knowledge and guidance:
Also check out the DENIVIP blog, where we publish useful Flash Platform related content, such as the post, Flash Media Server: URLs tokenization.
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Permissions beyond the scope of this license, pertaining to the examples of code included within this work are available at Adobe. | http://www.adobe.com/devnet/adobe-media-server/articles/dynamic-streaming-protection.html | CC-MAIN-2014-10 | en | refinedweb |
public class HTTPResponse extends java.lang.Object implements java.io.Serializable
HTTPResponse encapsulates the results of a HTTPRequest made via the URLFetchService.
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public int getResponseCode()
public byte[] getContent()
public java.net.URL getFinalUrl()
public java.util.List<HTTPHeader> getHeadersUncombined()
A List of HTTP response headers that were returned by the remote server. These are not combined for repeated values.
public java.util.List<HTTPHeader> getHeaders()
A List of HTTP response headers that were returned by the remote server. Multi-valued headers are represented as a single HTTPHeader with comma-separated values.
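The combining behavior that getHeaders() describes, repeated header names collapsed into one comma-separated value, can be sketched in a few lines (Python here for brevity; the lower-casing of names is this sketch's choice, not part of the API):

```python
def combine_headers(header_pairs):
    """Collapse repeated header names into one comma-separated value,
    preserving first-seen order."""
    combined = {}
    for name, value in header_pairs:
        key = name.lower()  # normalize case for lookup (sketch's choice)
        if key in combined:
            combined[key] = combined[key] + "," + value
        else:
            combined[key] = value
    return combined

raw = [("Set-Cookie", "a=1"), ("Content-Type", "text/html"), ("Set-Cookie", "b=2")]
print(combine_headers(raw))
```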
JET is one of the 21 projects bundled in the Eclipse Europa release. More precisely, it belongs to the Eclipse project Model To Text (M2T), which provides ready-to-use engines that perform model-to-text transformations. You can use JET to perform the following tasks:
Before you get started with your first transformation, you must install JET into your Eclipse IDE from the update manager (See Sidebar 2. Starting with JET in Your Eclipse IDE).
JET Basics
At the heart of JET code generation resides a template, a plain text file that drives the JET engine to transform an input model into a software artifact. The input model and output artifact do not have to match any constraint (as you will see shortly). You can use JET to generate Java source files from an XML model, as well as C (or PHP or Ruby or whatever) code from an Eclipse EMF model.
The template is a mixture of static sections, which JET will reproduce unmodified in the generated output, and XML-like directives, which perform transformations on the input model. If you are familiar with Java Server Pages (JSP), PHP, ASP, or any other templating engine, these concepts should be familiar.
The other important concept to grasp is JET's use of XPath. By default, JET expects models to be represented by XML structures or EMF models. Therefore, its processing directives rely on XPath selectors and functions to identify and isolate the parts of the model upon which it will act. See Sidebar 3. Essential XPath for a quick intro to the language.
However, you are not limited to XML or EMF input models. Since JET is packaged and distributed as an Eclipse plugin, it offers various extension points to augment its capabilities, including the definition of additional input formats. The documentation bundled with JET includes full specifications for the available extension points.
JET processing directives can be expressed in various forms, including:
Pretty much in the same way as defined by the JSP syntax, XML tags are provided to the engine in the form of tag libraries. A number of them, for the most common tasks, are bundled with the engine, but you can create additional custom tag libraries for your specific needs (again note the similarity with the contribution mechanism for custom tag libraries in the JSP world).
Given the sample XML model in Listing 1, you can easily understand the JET template in Listing 2, which transforms the model into the usual HelloWorld class.
Listing 1. Sample XML Model That Describes a Phrase
<class name="HelloClass">
<phrase>Hello,World!</phrase>
</class>
Listing 2. JET Template That Converts the Model into a Working Java Source File
public class <c:get select="/class/@name"/> {
  public static void main(String[] args) {
    System.out.println("<c:get select="/class/phrase"/>");
  }
}
You can identify both the static sections and the XML directives. In particular, <c:get /> prints the result of an XPath selector passed as parameter (such as /class/@name, which isolates the name attribute of the class tag).
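For experimenting with such selectors outside JET, the same model can be queried from Python; ElementTree speaks only a small XPath subset, so the attribute is read with .get() instead of an @name path:

```python
import xml.etree.ElementTree as ET

model = """<class name="HelloClass">
  <phrase>Hello,World!</phrase>
</class>"""

root = ET.fromstring(model)  # <class> is the document element

# Equivalent of the template's /class/@name and /class/phrase selectors:
class_name = root.get("name")
phrase = root.findtext("phrase")

print(class_name, phrase)
```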
You are not limited to generating Java files either. For example, the following template transforms the same model into an equivalent Ruby class:
class <c:get select="/class/@name"/>
  def sayPhrase
    puts "<c:get select="/class/phrase"/>"
  end
end
These examples offer a glimpse of the power behind code generation and MDD: if the model is sufficiently robust, it is easy to adapt the final product to different environments and to migrate it to a new software architecture.
Here's an interesting bit of Python code I hacked together – it's a script that takes an image and warps it so that it is tileable (making it suitable for a repeating backgound or a texture in a game).
If you use it on a photograph, it will come out looking like a fair-ground mirror. But it works well when applied to a pattern, or something more abstract, such as the fractal image on the left.
The code is public domain – use it for whatever the heck you want!
Example Output
Update: Here's another, more interesting example. The original is here.
The Code
import Image
from math import *

def maketilable(src_path, dst_path):
    src = Image.open(src_path)
    src = src.convert('RGB')
    src_w, src_h = src.size
    dst = Image.new('RGB', (src_w, src_h))
    w, h = dst.size

    def warp(p, l, dl):
        i = float(p) / l
        i = sin(i*pi*2 + pi)
        i = i / 2.0 + .5
        return abs(i * dl)

    warpx = [warp(x, w-1, src_w-1) for x in range(w)]
    warpy = [warp(y, h-1, src_h-1) for y in range(h)]

    get = src.load()
    put = dst.load()

    def getpixel(x, y):
        frac_x = x - floor(x)
        frac_y = y - floor(y)
        x1 = (x+1) % src_w
        y1 = (y+1) % src_h
        a = get[x, y]
        b = get[x1, y]
        c = get[x, y1]
        d = get[x1, y1]
        area_d = frac_x * frac_y
        area_c = (1.-frac_x) * frac_y
        area_b = frac_x * (1. - frac_y)
        area_a = (1.-frac_x) * (1. - frac_y)
        a = [n*area_a for n in a]
        b = [n*area_b for n in b]
        c = [n*area_c for n in c]
        d = [n*area_d for n in d]
        return tuple(int(sum(s)) for s in zip(a, b, c, d))

    old_status_msg = None
    status_msg = ''
    for y in xrange(h):
        status_msg = '%2d%% complete' % ((float(y) / h)*100.0)
        if status_msg != old_status_msg:
            print status_msg
            old_status_msg = status_msg
        for x in xrange(w):
            put[x, y] = getpixel(warpx[x], warpy[y])

    dst.save(dst_path)

if __name__ == "__main__":
    import sys
    try:
        src_path = sys.argv[1]
        dst_path = sys.argv[2]
    except IndexError:
        print "<source image path>, <destination image path>"
    else:
        maketilable(src_path, dst_path)
Smells like a job for Numpy (scipy.org):
Pauli, Nice! I think it is a job for Numpy.
Once more, this time demonstrating the use of scipy.ndimage for the interpolation: | http://www.willmcgugan.com/2009/7/18/make-tilable-backgrounds-with-python/ | CC-MAIN-2014-10 | en | refinedweb |
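For reference, the coordinate warp from the post is indeed easy to vectorize. Here is a sketch of the warp table with NumPy, equivalent to the list comprehensions above under the same length-minus-one divisor convention:

```python
import numpy as np

def warp_coords(length, src_length):
    """Vectorized version of warp(): map 0..length-1 onto 0..src_length-1
    with a sinusoidal ease so both ends of the image meet."""
    i = np.arange(length, dtype=float) / (length - 1)
    i = np.sin(i * np.pi * 2 + np.pi) / 2.0 + 0.5
    return np.abs(i * (src_length - 1))

wx = warp_coords(8, 8)
print(wx)
```

Both endpoints land on the same source coordinate (the image midpoint), which is exactly why the warped result tiles.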
backbone-associate
Presumtionless model associations for Backbone.js
npm install backbone-associate
Backbone.associate
Presumptionless model relations for Backbone in < 1kb.
Usage
Use
Backbone.associate to define relationships between application models
and collections.
var Flag = Backbone.Model.extend({ /* ... */ }),
    City = Backbone.Model.extend({ /* ... */ }),
    Cities = Backbone.Collection.extend({ model: City }),
    Country = Backbone.Model.extend({ /* ... */ });

Backbone.associate(Country, {
  flag: { type: Flag },
  cities: { type: Cities, url: '/cities' }
});
Here, we're associating a model (Country) with two relations: a Flag model and a collection of Cities. The association keys can be anything, but they should match the keys used in the data that will be passed into the application's parse method.
var canada = new Country({
  url: '/countries/canada',
  flag: { colors: ['red','white'] },
  cities: [
    { name: 'Calgary' },
    { name: 'Regina' }
  ]
});
When it's time to sync the parent resource back up with the server, child resources can be serialized and included in the request.
canada.toJSON(); // { flag: { colors: ['red','white'] }, ...
Since associates are just attributes, they may be accessed at any time using the usual get.
// GET /countries/canada/cities canada.get('cities').fetch();
For the truly lazy, associate provides a convenient accessor for each association:
canada.flag().set({ colors: ['red','white'] });
canada.cities().add([
  { name: 'Edmonton' },
  { name: 'Montreal' },
  { name: 'Ottawa' },
  { name: 'Vancouver' }
]);
That's handy for manipulating the relations, setting up eventing, or any of the many other things this plugin won't do for you. Speaking of which,
Things this plugin won't do...
...include making any but the most basic presumptions about how it will be used. Fortunately, all of these can be implemented as needed (fiddle here):
Identity mapping
var getCountry = function (id) {
  var _countries = {};
  return (getCountry = function (id) {
    if (!_countries[id]) {
      _countries[id] = new Country({ id: id });
    }
    return _countries[id];
  })(id);
};
Child events
canada.onCityAdded = function (model) {
  console.log('city added!', model.get('name'));
};
canada.listenTo(canada.cities(), 'add', canada.onCityAdded);
Testing
Specs are implemented with jasmine-node. After cloning this repo, install dependencies and test with npm.
$ npm install
$ npm test
Contributing
Have something to add? Contributions are enormously welcome!
- Fork this repo
- Update the spec and implement the change
- Submit a pull request
Related projects
Looking for a more fully-featured alternative? Check out:
License
Backbone.associate is released under the terms of the MIT license | https://www.npmjs.org/package/backbone-associate | CC-MAIN-2014-10 | en | refinedweb |
Example:
Support.showConversation(MyActivity.this);
where MyActivity.this is the Activity you're calling Helpshift from
You can use the api call Support.showConversation(MyActivity.this).
Applicable to SDK version 3.10.0 and above.
Example:
Support.showFAQs(MyActivity.this);
where MyActivity.this is the Activity you're calling Helpshift from
You can use the api call Support.showFAQs(Activity a):
String operator = FaqTagFilter.Operator.AND; // or FaqTagFilter.Operator.OR / FaqTagFilter.Operator.NOT
String[] filterTags = new String[]{"tag1", "tag2"};
FaqTagFilter faqTagFilter = new FaqTagFilter(operator, filterTags);
ApiConfig apiConfig = new ApiConfig.Builder()
        .setWithTagsMatching(faqTagFilter)
        .build();
The withTagsMatching option takes a FaqTagFilter object, which takes two parameters: an operator (one of FaqTagFilter.Operator.AND, FaqTagFilter.Operator.OR, or FaqTagFilter.Operator.NOT) that serves as the conditional operator for the given tags, and an array of tags (String[]).
Example:
// If the developer wants to show all FAQs with tags "android-phone" or "android-tablet"
String operator = FaqTagFilter.Operator.OR;
String[] filterTags = new String[]{"android-phone", "android-tablet"};
FaqTagFilter faqTagFilter = new FaqTagFilter(operator, filterTags);
ApiConfig apiConfig = new ApiConfig.Builder()
        .setWithTagsMatching(faqTagFilter)
        .build();
Support.showFAQs(MyActivity.this, apiConfig);
HashMap<String, Object> map = new HashMap<>();
String operator = "and/or/not";
String[] filterTags = new String[]{"tag1", "tag2"};
map.put("operator", operator);
map.put("tags", filterTags);
HashMap config = new HashMap();
config.put("withTagsMatching", map);
The withTagsMatching option will be a HashMap containing 2 keys: "operator" (a String: "and", "or", or "not") and "tags" (a String array).

Example:
// If the developer wants to show all FAQs with tags "android-phone" or "android-tablet"
HashMap<String, Object> map = new HashMap<>();
String operator = "or";
String[] filterTags = new String[]{"android-phone", "android-tablet"};
map.put("operator", operator);
map.put("tags", filterTags);
HashMap config = new HashMap();
config.put("withTagsMatching", map);
Support.showFAQs(MyActivity.this, config);
Example:
Support.showFAQSection(MyActivity.this, "11");

where MyActivity.this is the Activity you're calling Helpshift from and "11" is the FAQ section publish-id
You can use the api call Support.showFAQSection(Activity a, String sectionPublishId) to show a particular FAQ section:
Support.showSingleFAQ(MyActivity.this, "51");

where MyActivity.this is the Activity you're calling Helpshift from and "51" is the FAQ publish-id
You can use the api call Support.showSingleFAQ(Activity a, String questionPublishId) to show a single FAQ question. You'll need the publish-id of the FAQ in this case:
There are 6 types of flows:
Flow to show conversation screen
new ConversationFlow(int titleStringResId, ApiConfig config)
Flow to show all FAQs
new FAQsFlow(int titleStringResId, ApiConfig config)
Flow to show a FAQ section
new FAQSectionFlow(int titleStringResId, String sectionPublishId, ApiConfig config)
Flow to show a single FAQ
new SingleFAQFlow(int titleStringResId, String questionPublishId, ApiConfig config)
Flow to show a nested Dynamic form
new DynamicFormFlow(int titleStringResId, List<Flow> nextDynamicFormFlowList)
Flow to perform a custom action
For example:

import com.helpshift.support.flows.Flow;
...

public class CustomFlow implements Flow {
    private final int labelResId;
    private final Activity activity;

    public CustomFlow(int labelResId, Activity activity) {
        this.labelResId = labelResId;
        this.activity = activity;
    }

    @Override
    public int getLabelResId() {
        return labelResId;
    }

    @Override
    public void performAction() {
        Intent settingsActivity = new Intent(activity, UserSettingActivity.class);
        activity.startActivity(settingsActivity);
    }
}
Each flow needs a display text: the text that will be displayed in the list item. It has to be a localized String resource. Some flows also expect a 'config'. This will be the config that is passed to the subsequent Helpshift Support API. For example, any config you wish to apply to the conversation screen needs to be passed to the ConversationFlow(int titleStringResId, ApiConfig config) API. This is also where you will add your custom HSTags. These APIs do not add any custom metadata by default; you need to add your custom metadata, including HSTags, on your own.
The app can create any number of flows. These flows are then grouped into a dynamic form and displayed in a list view. There are two ways to display a dynamic form:
Launch a new activity to show the Dynamic Form Screen.
Support.showDynamicForm(@NonNull Activity activity, @NonNull String title, @NonNull List<Flow> flowList)
Get an embeddable Fragment to show in your activity (e.g. you can use this in your Navigation Drawer to replace the current fragment)
Support.getDynamicFormFragment(@NonNull Activity activity, @NonNull String title, @NonNull List<Flow> flowList)
Each dynamic form needs:
Applicable to version 4.1.0 and above.
Helpshift now supports embeddable SupportFragments which can be used to embed a Helpshift support view inside your application's Activity. These support fragments can also be configured by providing a config map just like the previous Helpshift show-support-APIs. You need to complete the following steps to embed Helpshift fragments in your Activity:
Make sure that the Activity holding Helpshift's Embeddable fragments is extended from AppCompatActivity only.
Inherit your Activity's theme from one of Helpshift's themes: Helpshift.Theme.Light.DarkActionBar, Helpshift.Theme.Light, Helpshift.Theme.Dark or Helpshift.Theme.HighContrast.
Create a style named Helpshift.Theme.Base and set its parent to your Activity's theme.
For Example,
<style name="YourActivityTheme" parent="Helpshift.Theme.Light.DarkActionBar">
    <item name="colorAccent">@color/your_custom_color</item>
    <item name="colorPrimary">@color/your_custom_color</item>
    <item name="colorPrimaryDark">@color/your_custom_color</item>
</style>

<style name="Helpshift.Theme.Base" parent="YourActivityTheme"/>
In your AndroidManifest.xml, set the theme of the Activity which contains the embeddable fragment to Helpshift.Theme.Activity, which now contains both YourActivityTheme and the theme required by Helpshift embeddable fragments.
For Example,
<activity android:
Following is the list of supported APIs :
All FAQs
Support.getFAQsFragment(Activity activity, ApiConfig config);
Support.getFAQsFragment(Activity activity, Map config);
Support.getFAQsFragment(Activity activity);
Conversation
Support.getConversationFragment(Activity activity, ApiConfig config);
Support.getConversationFragment(Activity activity, Map config);
Support.getConversationFragment(Activity activity);
Single section
Support.getFAQSectionFragment(Activity activity, String sectionPublishId, ApiConfig config);
Support.getFAQSectionFragment(Activity activity, String sectionPublishId, Map config);
Support.getFAQSectionFragment(Activity activity, String sectionPublishId);
Single FAQ
Support.getSingleFAQFragment(Activity activity, String questionPublishId, ApiConfig config);
Support.getSingleFAQFragment(Activity activity, String questionPublishId, Map config);
Support.getSingleFAQFragment(Activity activity, String questionPublishId);
Dynamic Form
Support.getDynamicFormFragment(Activity activity, String title, List<Flow> flowList, ApiConfig config);
Support.getDynamicFormFragment(Activity activity, String title, List<Flow> flowList, Map config);
Support.getDynamicFormFragment(Activity activity, String title, List<Flow> flowList);
Support.getDynamicFormFragment(Activity activity, List<Flow> flowList, ApiConfig config);
Support.getDynamicFormFragment(Activity activity, List<Flow> flowList, Map config);
Support.getDynamicFormFragment(Activity activity, List<Flow> flowList);
Example :
FragmentManager fragmentManager = getSupportFragmentManager();
FragmentTransaction fragmentTransaction = fragmentManager.beginTransaction();
fragmentTransaction.replace(R.id.fragment_container, Support.getFAQsFragment(this, config));
fragmentTransaction.commit();
When using SupportFragments with a standalone toolbar (i.e. toolbar which is a standalone widget and not used as an action bar) pass your toolbar id in the config map with the "toolbarId" key.
SupportFragments can only be embedded inside Activities and not fragments.
There is an issue with Android where onBackPressed doesn't behave as expected when used with nested fragments. Use the following guidelines for handling onBackPressed behaviour:

Override onBackPressed inside your parent activity which contains the Helpshift SupportFragment.

If the fragment is Helpshift's SupportFragment, call SupportFragment's onBackPressed method, which returns a boolean value. If the back press was handled by Helpshift's SupportFragment, this method will return true. If there is nothing to go back in, this method will return false, in which case you can either pop the SupportFragment or finish the activity as per your requirements.
For Example:
public void onBackPressed() {
    List<Fragment> fragments = getSupportFragmentManager().getFragments();
    if (fragments != null) {
        for (Fragment fragment : fragments) {
            if (fragment != null && fragment.isVisible() && fragment instanceof SupportFragment) {
                if (((SupportFragment) fragment).onBackPressed()) {
                    return;
                } else {
                    FragmentManager childFragmentManager = fragment.getChildFragmentManager();
                    if (childFragmentManager.getBackStackEntryCount() > 0) {
                        childFragmentManager.popBackStack();
                        return;
                    }
                }
            }
        }
    }
    super.onBackPressed();
}
If you are using the Helpshift SDK's setSDKLanguage API on devices with Android API level 25 and above, then use the following guidelines to update the Helpshift SDK's locale:

Override the attachBaseContext method inside your activity which contains Helpshift's embeddable fragment.

Update the locale configuration inside attachBaseContext.
Refer to the sample code provided:
@Override
protected void attachBaseContext(Context context) {
    if (Build.VERSION.SDK_INT >= 17) {
        Locale locale = new Locale("<LANGUAGE CODE>", "<COUNTRY CODE>");
        Resources res = context.getResources();
        Configuration config = new Configuration(res.getConfiguration());
        config.setLocale(locale);
        context = context.createConfigurationContext(config);
    }
    super.attachBaseContext(context);
}
This language handling is required because of the following changes in Android:

The Context.createConfigurationContext() API was introduced to create a custom configuration context object.

The Resources.updateConfiguration() API is deprecated, and it is recommended to use Context.createConfigurationContext(), which will create a new context instead of updating the old context object.
There are some known issues in rendering the FAQ fragment views if hardware acceleration is explicitly disabled for the application (the system enables it by default) due to known WebView issues on Android 5.1. Thus, Helpshift recommends that you enable hardware acceleration in your app. However, if you want to disable hardware acceleration in your app, you can enable it only for the activity in which you plan to attach the SDK's fragments. For example, in your AndroidManifest.xml:
<!-- (optional) if you want to keep hardwareAcceleration as false for your app -->
<application android:
    ...
    <activity android:
</application>
Functional
const Button = props => {
  const [disabled, setDisabled] = useState(false)
  return (
    <button
      disabled={disabled}
      onClick={() => setDisabled(prev => !prev)}
    >
      {props.text}
    </button>
  )
}

// can become

const Button = props => (
  <button
    disabled={props.disabled}
    onClick={props.setDisabled}
  >{props.text}</button>
)
- Compose Components from Props
const Button = props => (
  <button
    disabled={props.disabled}
    onClick={props.setDisabled}
  >{props.spinner}{props.text}</button>
)

// can become
// children will hold spinner
// & parent can decide when to show/hide spinner
const Button = props => (
  <button
    disabled={props.disabled}
    onClick={props.setDisabled}
  >{props.children}</button>
)

const App = () => {
  const [loading] = useState(false)
  return <Button>
    {loading && <Spinner />}
    <span>Click me</span>
  </Button>
}
- Use DefaultProps in case of Class components
- Use prop destructuring along with Default Values for Functional components
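The two defaulting styles above can be sketched in plain JavaScript, without React at all (the names describeButton and resolveProps are hypothetical):

```javascript
// Sketch of the two defaulting styles mentioned above (plain JS, no React).

// 1) Destructuring with default values (the functional-component style):
const describeButton = ({ disabled = false, text = 'Click me' } = {}) =>
  `${text} (${disabled ? 'disabled' : 'enabled'})`;

console.log(describeButton({ text: 'Save' })); // -> "Save (enabled)"

// 2) A defaultProps-style merge (the class-component style):
const defaultProps = { disabled: false, text: 'Click me' };
const resolveProps = (props) => ({ ...defaultProps, ...props });

console.log(resolveProps({ disabled: true }));
// -> { disabled: true, text: 'Click me' }
```

Either way, callers only pass the props they care about and the rest fall back to sensible defaults.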
Let me know through comments 💬 or on Twitter at @patel_pankaj_ and/or @time2hack
If you find this article helpful, please share it with others 🗣
Subscribe to the blog to receive new posts right to your inbox.
Credits
Photo by Ferenc Almasi on Unsplash
Originally published at on Nov 4, 2020.
Discussion (2)
It depends on whether your class needs a constructor or not. If it needs a constructor then we have to move the initialization inside; otherwise it won't work.
No, you can still write it outside.
Azure Service Bus libraries for Python
Microsoft Azure Service Bus supports a set of cloud-based, message-oriented middleware technologies including reliable message queuing and durable publish/subscribe messaging.
Libraries for data access
The latest version of the Azure Service Bus library is version 7.x.x. We highly recommend using version 7.x.x for new applications.
To update existing applications to version 7.x.x, please follow the migration guide.
Version 7.x.x
To send and receive messages from an Azure Service Bus queue, topic or subscription, you would use the latest version of the
azure-servicebus package. This also allows you to manage your Azure Service Bus resources like queues, topics, subscriptions and rules, but not the namespace itself.
Version 0.50.x
The older version allows you to send and receive messages from an Azure Service Bus queue, topic or subscription, but it lacks a lot of the new features and performance improvements available in the latest version of the same package.
Libraries for resource management
To manage your Azure Service Bus resources like namespaces, queues, topics, subscriptions and rules via the Azure Resource Manager, you would use the below package:
Implementing the Leaderboard Feature in Your React Game
The first thing you will do to make your game look like a real game is to implement the leaderboard feature. This feature will enable players to sign in, so your game can track their max score and show their rank.
Integrating React and Auth0
To make Auth0 manage the identity of your players, you have to have an Auth0 account. If you don't have one yet, you can sign up for a free Auth0 account here.
After creating your account, you just have to create an Auth0 Application to represent your game. To do this, head to the Applications page on the Auth0 dashboard and click on the Create Application button. The dashboard will show you a form where you will have to inform the name of your application and its type. You can type Aliens, Go Home! as the name and choose the Single Page Web Application type (your game is an SPA based on React after all). Then, you can click on Create.
When you click this button, the dashboard will redirect you to the Quick Start tab of your new application. As you will learn how to integrate React and Auth0 in this article, you won't need to use this tab. Instead, you will need to use the Settings tab, so head to it.
There are three things that you will need to do in this tab. The first one is to add the value to the field called Allowed Callback URLs. As the dashboard explains, after the player authenticates, Auth0 will only call back one of the URLs in this field. So, if you are going to publish your game on the web, be sure to add its public URL there as well (e.g.).
After inputting all your URLs on this field, hit the Save button or press
ctrl +
s (if you are using a MacBook, you will need to press
command +
s instead).
The last two things you will need to do is to copy the values from the Domain and Client ID fields. However, before using these values, you will need to code a little.
For starters, you will need to issue the following command in the root directory of your game to install the
auth0-web package:
npm i auth0-web@1.7.0
As you will see, this package facilitates the integration between Auth0 and SPAs.
The next step is to add a login button in your game, so your players can authenticate via Auth0. To do this, create a new file called
Login.jsx inside the
./src/components directory with the following code:
import React from 'react';
import PropTypes from 'prop-types';

const Login = (props) => {
  const button = {
    x: -300, // half width
    y: -600, // minus means up (above 0)
    width: 600,
    height: 300,
    style: {
      fill: 'transparent',
      cursor: 'pointer',
    },
    onClick: props.authenticate,
  };

  const text = {
    textAnchor: 'middle', // center
    x: 0, // center relative to X axis
    y: -440, // 440 up
    style: {
      fontFamily: '"Joti One", cursive',
      fontSize: 45,
      fill: '#e3e3e3',
      cursor: 'pointer',
    },
    onClick: props.authenticate,
  };

  return (
    <g filter="url(#shadow)">
      <rect {...button} />
      <text {...text}>
        Login to participate!
      </text>
    </g>
  );
};

Login.propTypes = {
  authenticate: PropTypes.func.isRequired,
};

export default Login;
The component that you have just created is agnostic in terms of what it will do when clicked. You will define this action when adding it to the
Canvas component. So, open the
Canvas.jsx file and update it as follows:
// ... other import statements
import Login from './Login';
import { signIn } from 'auth0-web';

const Canvas = (props) => {
  // ... const definitions
  return (
    <svg ...>
      // ... other elements
      {
        !props.gameState.started &&
        <g>
          // ... StartGame and Title components
          <Login authenticate={signIn} />
        </g>
      }
      // ... flyingObjects.map
    </svg>
  );
};

// ... propTypes definition and export statement
As you can see, in this new version, you have imported the Login component and the auth0-web package. Then, you have added your new component to the block of code that is shown only if players have not started the game. Also, you have indicated that, when clicked, the login button must trigger the signIn function.

With these changes in place, the last thing you will have to do is to configure the auth0-web package with your Auth0 Application properties. To do this, open the App.js file and update it as follows:
// ... other import statements
import * as Auth0 from 'auth0-web';

Auth0.configure({
  domain: 'YOUR_AUTH0_DOMAIN',
  clientID: 'YOUR_AUTH0_CLIENT_ID',
  redirectUri: '',
  responseType: 'token id_token',
  scope: 'openid profile manage:points',
});

class App extends Component {
  // ... constructor definition

  componentDidMount() {
    const self = this;
    Auth0.handleAuthCallback();

    Auth0.subscribe((auth) => {
      console.log(auth);
    });

    // ... setInterval and onresize
  }

  // ... trackMouse and render functions
}

// ... propTypes definition and export statement
Note: You have to replace YOUR_AUTH0_DOMAIN and YOUR_AUTH0_CLIENT_ID with the values copied from the Domain and Client ID fields of your Auth0 application. Besides that, when publishing your game to the web, you will have to replace the redirectUri value as well.
The enhancements in this file are quite simple. This list summarizes them:

configure: You used this function to configure the auth0-web package with your Auth0 application properties.

handleAuthCallback: You triggered this function in the componentDidMount lifecycle hook to evaluate if the player is returning from Auth0 after authenticating. This function simply tries to fetch tokens from the URL and, if it succeeds, fetches the player profile and persists everything in localStorage.

subscribe: You used this function to log if the player is authenticated or not (true for authenticated and false otherwise).
That's it, your game is already using Auth0 as its identity management service. If you run your app now (
npm start) and head to it in your browser (), you will see the login button. Clicking on it will redirect you to the Auth0 login page where you will be able to sign in.
After you finish the sign in process, Auth0 will redirect you to your game again where the
handleAuthCallback function will fetch your tokens. Then, as you have told your app to
console.log any changes on the authentication state, you will be able to see it logging
true in your browser console.
"Securing games with Auth0 is simple and painless."
Creating the Leaderboard React Component
Now that you have configured Auth0 as your identity management system, you will need to create the components that will show the leaderboard and the max score for the current player. For that, you will create two components: Leaderboard and Rank. You will need to split this feature into two components because, as you will see, it's not that simple to show a player's data (like max score, name, position, and picture) in a nice way. It's not hard either, but you will have to type a good amount of code. So, adding everything into one component would make it look clumsy.
As your game does not have any players yet, the first thing you will need to do is to define some mock data to populate the leaderboard. The best place to do this is in the
Canvas component. Also, since you are going to update your canvas, you can go ahead and replace the
Login component with the
Leaderboard (you will add
Login inside the
Leaderboard in a moment):
// ... other import statements
// replace Login with the following line
import Leaderboard from './Leaderboard';

const Canvas = (props) => {
  // ... const definitions

  const leaderboard = [
    { id: 'd4', maxScore: 82, name: 'Ado Kukic', picture: '' },
    { id: 'a1', maxScore: 235, name: 'Bruno Krebs', picture: '' },
    { id: 'c3', maxScore: 99, name: 'Diego Poza', picture: '' },
    { id: 'b2', maxScore: 129, name: 'Jeana Tahnk', picture: '' },
    { id: 'e5', maxScore: 34, name: 'Jenny Obrien', picture: '' },
    { id: 'f6', maxScore: 153, name: 'Kim Maida', picture: '' },
    { id: 'g7', maxScore: 55, name: 'Luke Oliff', picture: '' },
    { id: 'h8', maxScore: 146, name: 'Sebastian Peyrott', picture: '' },
  ];

  return (
    <svg ...>
      // ... other elements
      {
        !props.gameState.started &&
        <g>
          // ... StartGame and Title
          <Leaderboard
            currentPlayer={leaderboard[6]}
            authenticate={signIn}
            leaderboard={leaderboard}
          />
        </g>
      }
      // ... flyingObjects.map
    </svg>
  );
};

// ... propTypes definition and export statement
In the new version of this file, you defined a constant called leaderboard that holds an array of fake players. These players have the following properties: id, maxScore, name, and picture. Then, inside the svg element, you added the Leaderboard component with the following parameters:
currentPlayer: This defines who the current player is. For now, you are using one of the fake players defined before so you can see how everything works. The purpose of passing this parameter is to make your leaderboard highlight the current player.
authenticate: This is the same parameter that you were adding to the Login component in the previous version.
leaderboard: This is the array of fake players. Your leaderboard will use it to show the current ranking.
Now, you have to define the
Leaderboard component. To do this, create a new file called
Leaderboard.jsx in the
./src/components directory and add the following code to it:
import React from 'react';
import PropTypes from 'prop-types';
import Login from './Login';
import Rank from "./Rank";

const Leaderboard = (props) => {
  const style = {
    fill: 'transparent',
    stroke: 'black',
    strokeDasharray: '15',
  };

  const leaderboardTitle = {
    fontFamily: '"Joti One", cursive',
    fontSize: 50,
    fill: '#88da85',
    cursor: 'default',
  };

  let leaderboard = props.leaderboard || [];
  leaderboard = leaderboard.sort((prev, next) => {
    if (prev.maxScore === next.maxScore) {
      return prev.name <= next.name ? 1 : -1;
    }
    return prev.maxScore < next.maxScore ? 1 : -1;
  }).map((member, index) => ({
    ...member,
    rank: index + 1,
    currentPlayer: member.id === props.currentPlayer.id,
  })).filter((member, index) => {
    if (index < 3 || member.id === props.currentPlayer.id) return member;
    return null;
  });

  return (
    <g>
      <text filter="url(#shadow)" style={leaderboardTitle}>Leaderboard</text>
      <rect style={style} />
      {
        props.currentPlayer && leaderboard.map((player, idx) => {
          const position = { x: -100, y: -530 + (70 * idx) };
          return <Rank key={player.id} player={player} position={position} />;
        })
      }
      {
        !props.currentPlayer && <Login authenticate={props.authenticate} />
      }
    </g>
  );
};

Leaderboard.propTypes = {
  currentPlayer: PropTypes.shape({
    id: PropTypes.string.isRequired,
    maxScore: PropTypes.number.isRequired,
    name: PropTypes.string.isRequired,
    picture: PropTypes.string.isRequired,
  }),
  authenticate: PropTypes.func.isRequired,
  leaderboard: PropTypes.arrayOf(PropTypes.shape({
    id: PropTypes.string.isRequired,
    maxScore: PropTypes.number.isRequired,
    name: PropTypes.string.isRequired,
    picture: PropTypes.string.isRequired,
    ranking: PropTypes.number,
  })),
};

Leaderboard.defaultProps = {
  currentPlayer: null,
  leaderboard: null,
};

export default Leaderboard;
Don't be scared! The code of this component is quite simple:

- You are defining the leaderboardTitle constant to set how the leaderboard title will look like.
- You are defining the dashedRectangle constant to style a rect element that will work as the container of the leaderboard.
- You are calling the sort function of the props.leaderboard variable to order the ranking. After that, your leaderboard will have the highest max score on the top and the lowest max score on the bottom. Also, if there is a tie between two players, you are ordering them based on their names.
- You are calling the map function on the result of the previous step (the sort function) to complement players with their rank and with a flag called currentPlayer. You will use this flag to highlight the row where the current player appears.
- You are using the filter function on the result of the previous step (the map function) to remove everyone who is not among the top three players. Actually, you are letting the current player stay on the final array if they don't belong to this select group.
- Lastly, you are simply iterating over the filtered array to show Rank elements if there is a player logged in (props.currentPlayer && leaderboard.map) or showing the Login button otherwise.
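The sort/map/filter pipeline described above also works outside React; here is a standalone sketch of it with made-up sample data:

```javascript
// Standalone sketch of the Leaderboard ranking pipeline (sample data is made up).
const players = [
  { id: 'a', name: 'Ann', maxScore: 40 },
  { id: 'b', name: 'Bob', maxScore: 90 },
  { id: 'c', name: 'Cid', maxScore: 70 },
  { id: 'd', name: 'Dee', maxScore: 10 },
];
const currentPlayerId = 'd';

const ranked = players
  .slice() // avoid mutating the source array
  .sort((prev, next) => {
    if (prev.maxScore === next.maxScore) {
      return prev.name <= next.name ? 1 : -1; // tie-break on name
    }
    return prev.maxScore < next.maxScore ? 1 : -1; // highest score first
  })
  .map((member, index) => ({
    ...member,
    rank: index + 1,
    currentPlayer: member.id === currentPlayerId,
  }))
  // keep the top three, plus the current player wherever they rank
  .filter((member, index) => index < 3 || member.id === currentPlayerId);

console.log(ranked.map(p => `${p.rank}. ${p.name} (${p.maxScore})`));
// -> [ '1. Bob (90)', '2. Cid (70)', '3. Ann (40)', '4. Dee (10)' ]
```

Here the fourth-ranked player survives the filter only because they are the current player, which is exactly the behavior the component relies on.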
Then, the last thing you will need to do is to create the
Rank React component. To do this, create a new file called
Rank.jsx beside the
Leaderboard.jsx file with the following code:
import React from 'react';
import PropTypes from 'prop-types';

const Rank = (props) => {
  const { x, y } = props.position;
  const rectId = 'rect' + props.player.rank;
  const clipId = 'clip' + props.player.rank;

  const pictureStyle = {
    height: 60,
    width: 60,
  };

  const textStyle = {
    fontFamily: '"Joti One", cursive',
    fontSize: 35,
    fill: '#e3e3e3',
    cursor: 'default',
  };
  if (props.player.currentPlayer) textStyle.fill = '#e9ea64';

  const pictureProperties = {
    style: pictureStyle,
    x: x - 140,
    y: y - 40,
    href: props.player.picture,
    clipPath: `url(#${clipId})`,
  };

  const frameProperties = {
    width: 55,
    height: 55,
    rx: 30,
    x: pictureProperties.x,
    y: pictureProperties.y,
  };

  return (
    <g>
      <defs>
        <rect id={rectId} {...frameProperties} />
        <clipPath id={clipId}>
          <use xlinkHref={'#' + rectId} />
        </clipPath>
      </defs>
      <use xlinkHref={'#' + rectId} />
      <text filter="url(#shadow)" style={textStyle} x={x - 200} y={y}>{props.player.rank}º</text>
      <image {...pictureProperties} />
      <text filter="url(#shadow)" style={textStyle} x={x - 60} y={y}>{props.player.name}</text>
      <text filter="url(#shadow)" style={textStyle} x={x + 350} y={y}>{props.player.maxScore}</text>
    </g>
  );
};

Rank.propTypes = {
  player: PropTypes.shape({
    id: PropTypes.string.isRequired,
    maxScore: PropTypes.number.isRequired,
    name: PropTypes.string.isRequired,
    picture: PropTypes.string.isRequired,
    rank: PropTypes.number.isRequired,
    currentPlayer: PropTypes.bool.isRequired,
  }).isRequired,
  position: PropTypes.shape({
    x: PropTypes.number.isRequired,
    y: PropTypes.number.isRequired,
  }).isRequired,
};

export default Rank;
Nothing to be scared of about this code either. The only out-of-the-ordinary thing that you are adding to this component is the clipPath element and a rect inside the defs element to create a rounded portrait.
With these new files in place, you can head to your app () to see your new leaderboard feature.
USB devices have defined
interfaces
which relate to their functionality. For example, a USB
keyboard with built in LEDs may have an interface for
sending key presses and an interface for controlling the
lights on the keyboard. Interfaces are defined as a set
of
endpoints.
Endpoints are used as communication channels to and from
the device and host and can either be IN or OUT. They
are defined relative to the host - OUT endpoints
transport data to the device (write) and IN endpoints
transport data to the host (read).
Once we obtain a USB device handle, we must claim the interface we want to use. This will allow us to read from and write to the device. This example demonstrates ADU device communication in Windows using Python and libusb. Basics of opening a USB device handle, writing and reading data, as well as closing the handle of the ADU USB device are provided as an example.
The suggested way of working with ADU devices in Python
and Linux is with the HIDAPI module (see: ). For working
with Python and ADU devices in Windows, it's preferred
to use the AduHid module (see: )
All
source code is provided so that you may review details
that are not highlighted here.
NOTE: See also
Python and HIDAPI library with ADU
Devices for alternate method of USB communication using
HIDAPI
This example illustrates the basics of reading and
writing to ADU devices using the libusb library.
NOTE: When
running the example, it must be run with root privileges
in order to access the USB device.
libusb is a library that provides low-level access to USB devices.
First we'll import the libusb library. If you haven't yet installed it, you may do so in the command line via

pip install libusb

or by installing via requirements.txt (pip install -r requirements.txt).
We'll declare OnTrak's vendor ID and the product ID for
the ADU device we wish to use (in our case 200 for
ADU200).
import usb.core
import usb.backend.libusb1

We'll use libusb to handle the device operations (opening, closing, reading and writing commands).
device = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
if device is None:
    raise ValueError('ADU Device not found. Please ensure it is connected to the tablet.')
# Claim interface 0 - this interface provides IN and OUT endpoints to write to and read from
usb.util.claim_interface(device, 0)
Now that we have successfully opened our device and claimed an interface, we can send commands to the ADU:

bytes_written = write_to_adu(device, 'SK0') # set relay 0
bytes_written = write_to_adu(device, 'RK0') # reset relay 0

read_from_adu() returns the result in string format on success, and None on failure. A timeout is supplied for the maximum amount of time that the host (computer) will wait for data from the read request.
# Read from the ADU
bytes_written = write_to_adu(device, 'RPA') # request the value of PORT A in binary
data = read_from_adu(device, 200) # read from device with a 200 millisecond timeout
if data != None:
    print("Received string: {}".format(data))
    print("Received data as int: {}".format(int(data))) # the returned value is a string - we can convert it to a number (int) if we wish
When we are done with the device, we release the claimed interface:

usb.util.release_interface(device, 0)

Inside write_to_adu(), we use device.write() to write our command to the device:

num_bytes_written = 0
try:
    # 0x01 is the OUT endpoint
    num_bytes_written = dev.write(0x01, byte_str)
except usb.core.USBError as e:
    print (e.args)
return num_bytes_written
If the write is successful, we should now have a result
to read from the command we previously sent. We can use
read_from_adu() to read the value. The arguments are the
USB device and a timeout. device.read() should return
the data read from the device.
If reading from the device was successful, we will need
to extract the data we are interested in. The first byte
of the data returned is 0x01 and is followed by an ASCII
representation of the number. The remainder of the bytes
are padded with 0x00 (NULL) values. We can construct a
string from the second byte to the end and strip out the
null '\x00' characters.
def read_from_adu(dev, timeout):
    try:
        # try to read a maximum of 64 bytes from 0x81 (IN endpoint)
        data = dev.read(0x81, 64, timeout)
    except usb.core.USBError as e:
        print ("Error reading response: {}".format(e.args))
        return None

    # skip the leading 0x01 byte, then strip the NULL padding
    byte_str = ''.join(chr(n) for n in data[1:])
    return byte_str.split('\x00', 1)[0]
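The byte layout described above can be exercised without any hardware; a small sketch using a hypothetical raw response buffer:

```python
# Hypothetical raw ADU response: a leading 0x01 byte, the ASCII digits "123",
# then NULL (0x00) padding up to the 64-byte report size.
raw = bytes([0x01]) + b"123" + bytes(60)

# Construct a string from the second byte onward and strip the NULL padding.
value_str = raw[1:].decode("ascii").rstrip("\x00")

print(value_str)       # -> 123
print(int(value_str))  # -> 123
```

This mirrors what read_from_adu() does to the buffer returned by device.read().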
Code blocks
Code blocks within documentation are super-powered 💪.
#Code title
You can add a title to the code block by adding a title key after the language (leave a space between them).
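For example (the file path used in the title is just an illustration), the Markdown looks like this:

````md
```jsx title="/src/components/HelloCodeTitle.js"
function HelloCodeTitle(props) {
  return <h1>Hello, {props.name}</h1>;
}
```
````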
#Syntax highlighting
Code blocks are text blocks wrapped around by strings of 3 backticks. You may check out this reference for specifications of MDX.
Use the matching language meta string for your code block, and Docusaurus will pick up syntax highlighting automatically, powered by Prism React Renderer.
By default, the Prism syntax highlighting theme we use is Palenight. You can change this to another theme by passing a theme field under prism in the themeConfig of your docusaurus.config.js.
For example, if you prefer to use the
dracula highlighting theme:
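A sketch of that configuration (assuming the themes shipped with prism-react-renderer are available):

```js
// docusaurus.config.js
module.exports = {
  themeConfig: {
    prism: {
      theme: require('prism-react-renderer/themes/dracula'),
    },
  },
};
```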
By default, Docusaurus comes with a subset of commonly used languages.
caution
Some popular languages like Java, C#, or PHP are not enabled by default.
To add syntax highlighting for any of the other Prism supported languages, define it in an array of additional languages.
For example, if you want to add highlighting for the
powershell language:
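A sketch of that configuration in docusaurus.config.js:

```js
// docusaurus.config.js
module.exports = {
  themeConfig: {
    prism: {
      additionalLanguages: ['powershell'],
    },
  },
};
```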
If you want to add highlighting for languages not yet supported by Prism, you can swizzle
prism-include-languages:
It will produce
prism-include-languages.js in your
src/theme folder. You can add highlighting support for custom languages by editing
prism-include-languages.js:
You can refer to Prism's official language definitions when you are writing your own language definitions.
#Line highlighting
You can bring emphasis to certain lines of code by specifying line ranges after the language meta string (leave a space after the language).
To accomplish this, Docusaurus adds the
docusaurus-highlight-code-line class to the highlighted lines. You will need to define your own styling for this CSS, possibly in your
src/css/custom.css with a custom background color which is dependent on your selected syntax highlighting theme. The color given below works for the default highlighting theme (Palenight), so if you are using another theme, you will have to tweak the color accordingly.
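A sketch of such a rule in src/css/custom.css (the color shown is one that fits Palenight; tweak it for other themes):

```css
.docusaurus-highlight-code-line {
  background-color: rgb(72, 77, 91);
  display: block;
  margin: 0 calc(-1 * var(--ifm-pre-padding));
  padding: 0 var(--ifm-pre-padding);
}
```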
To highlight multiple lines, separate the line numbers by commas or use the range syntax to select a chunk of lines. This feature uses the
parse-number-range library and you can find more syntax on their project details.
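For example, a meta string like {2,4-6} highlights line 2 and lines 4 to 6 (the snippet content itself is illustrative):

````md
```jsx {2,4-6}
function HighlightSomeText(highlight) {
  if (highlight) {
    return 'This text is highlighted!';
  }
  return 'Nothing highlighted';
}
```
````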
You can also use comments with
highlight-next-line,
highlight-start, and
highlight-end to select which lines are highlighted.
Supported commenting syntax:
If there's a syntax that is not currently supported, we are open to adding them! Pull requests welcome.
#Interactive code editor
You can create an interactive coding editor with the
@docusaurus/theme-live-codeblock plugin.
First, add the plugin to your package.
You will also need to add the plugin to your
docusaurus.config.js.
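A sketch of that entry in docusaurus.config.js:

```js
// docusaurus.config.js
module.exports = {
  // ...
  themes: ['@docusaurus/theme-live-codeblock'],
};
```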
To use the plugin, create a code block with
live attached to the language meta string.
The code block will be rendered as an interactive editor. Changes to the code will reflect on the result panel live.
#Imports
react-live and imports
It is not possible to import components directly from the react-live code editor, you have to define available imports upfront.
By default, all React imports are available. If you need more imports available, swizzle the react-live scope:
The
ButtonExample component is now available to use:
#Multi-language support code blocks
With MDX, you can easily create interactive components within your documentation, for example, to display code in multiple programming languages and switching between them using a tabs component.
Instead of implementing a dedicated component for multi-language support code blocks, we've implemented a generic Tabs component in the classic theme so that you can use it for other non-code scenarios as well.
The following example is how you can have multi-language code tabs in your docs. Note that the empty lines above and below each language block are intentional. This is a current limitation of MDX, you have to leave empty lines around Markdown syntax for the MDX parser to know that it's Markdown syntax and not JSX.
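A sketch of such an MDX snippet (Tabs and TabItem ship with the classic theme; the code inside each tab is illustrative):

````mdx
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<Tabs
  defaultValue="js"
  values={[
    {label: 'JavaScript', value: 'js'},
    {label: 'Python', value: 'py'},
  ]}>
  <TabItem value="js">

  ```js
  console.log('Hello, world!');
  ```

  </TabItem>
  <TabItem value="py">

  ```py
  print('Hello, world!')
  ```

  </TabItem>
</Tabs>
````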
And you will get the following:
- JavaScript
- Python
- Java
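For reference, the Tabs markup in MDX looks roughly like this; note the blank lines around the Markdown content inside each tab. The exact props (`defaultValue`/`values` here, versus a newer `label` prop) depend on your Docusaurus version, so treat this shape as an assumption:

```md
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<Tabs
  defaultValue="js"
  values={[
    {label: 'JavaScript', value: 'js'},
    {label: 'Python', value: 'py'},
    {label: 'Java', value: 'java'},
  ]}>
  <TabItem value="js">

    console.log('Hello, world!');

  </TabItem>
  <TabItem value="py">

    print('Hello, world!')

  </TabItem>
  <TabItem value="java">

    System.out.println("Hello, world!");

  </TabItem>
</Tabs>
```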
You may want to implement your own
<MultiLanguageCode /> abstraction if you find the above approach too verbose. We might just implement one in the future for convenience.
If you have multiple of these multi-language code tabs, and you want to sync the selection across the tab instances, refer to the Syncing tab choices section. | http://deploy-preview-4756--docusaurus-2.netlify.app/docs/next/markdown-features/code-blocks | CC-MAIN-2022-05 | en | refinedweb |
Dataset Tensor Support¶
Tables with tensor columns¶
Datasets supports tables with fixed-shape tensor columns, where each element in the column is a tensor (n-dimensional array) with the same shape. As an example, this allows you to use Pandas and Ray Datasets to read, write, and manipulate, e.g., images. All conversions between Pandas, Arrow, and Parquet, and all application of aggregations/operations to the underlying image ndarrays, are taken care of by Ray Datasets.
With our Pandas extension type,
TensorDtype, and extension array,
TensorArray, you can do familiar aggregations and arithmetic, comparison, and logical operations on a DataFrame containing a tensor column and the operations will be applied to the underlying tensors as expected. With our Arrow extension type,
ArrowTensorType, and extension array,
ArrowTensorArray, you’ll be able to import that DataFrame into Ray Datasets and read/write the data from/to the Parquet format.
Automatic conversion between the Pandas and Arrow extension types/arrays keeps the details under the hood, so you only have to worry about casting the column to a tensor column using our Pandas extension type when first ingesting the table into a
Dataset, whether from storage or in-memory. All table operations downstream from that cast should work automatically.
Single-column tensor datasets¶
The most basic case is when a dataset only has a single column, which is of tensor type. This kind of dataset can be created with
.range_tensor(), and can be read from and written to
.npy files. Here are some examples:
# Create a Dataset of tensor-typed values.
ds = ray.data.range_tensor(10000, shape=(3, 5))
# -> Dataset(num_blocks=200, num_rows=10000,
#            schema={value: <ArrowTensorType: shape=(3, 5), dtype=int64>})

# Save to storage.
ds.write_numpy("/tmp/tensor_out", column="value")

# Read from storage.
ray.data.read_numpy("/tmp/tensor_out")
# -> Dataset(num_blocks=200, num_rows=?,
#            schema={value: <ArrowTensorType: shape=(3, 5), dtype=int64>})
Reading existing serialized tensor columns¶
If you already have a Parquet dataset with columns containing serialized tensors, you can have these tensor columns cast to our tensor extension type at read-time by giving a simple schema for the tensor columns. Note that these tensors must have been serialized as their raw NumPy ndarray bytes in C-contiguous order (e.g. serialized via
ndarray.tobytes()).
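That serialization constraint can be sanity-checked with plain NumPy, independent of Ray; this round trip is essentially what the read path performs per tensor (a sketch, not Ray's actual implementation):

```python
import numpy as np

# One tensor, serialized the way the reader expects: raw bytes in C order.
tensor = np.arange(8).reshape((2, 2, 2))
blob = tensor.tobytes()  # tobytes() emits C-order bytes by default

# Deserializing needs dtype and shape supplied out-of-band --
# which is exactly what _tensor_column_schema provides per column.
restored = np.frombuffer(blob, dtype=tensor.dtype).reshape((2, 2, 2))
assert np.array_equal(restored, tensor)

# Bytes taken in Fortran (memory) order do NOT survive a C-order
# reshape, which is why such data needs the _block_udf route instead.
fortran = np.asfortranarray(tensor)
mangled = np.frombuffer(fortran.tobytes(order='A'),
                        dtype=tensor.dtype).reshape((2, 2, 2))
print(np.array_equal(mangled, tensor))  # False
```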
import ray
import numpy as np
import pandas as pd

path = "/tmp/some_path"

# Create a DataFrame with a list of serialized ndarrays as a column.
# Note that we do not cast it to a tensor array, so each element in the
# column is an opaque blob of bytes.
arr = np.arange(24).reshape((3, 2, 2, 2))
df = pd.DataFrame({
    "one": [1, 2, 3],
    "two": [tensor.tobytes() for tensor in arr]})

# Write the dataset to Parquet. The tensor column will be written as an
# array of opaque byte blobs.
ds = ray.data.from_pandas([df])
ds.write_parquet(path)

# Read the Parquet files into a new Dataset, with the serialized tensors
# automatically cast to our tensor column extension type.
ds = ray.data.read_parquet(
    path, _tensor_column_schema={"two": (np.int, (2, 2, 2))})

# Internally, this column is represented with our Arrow tensor extension
# type.
print(ds.schema())
# -> one: int64
#    two: extension<arrow.py_extension_type<ArrowTensorType>>
If your serialized tensors don’t fit the above constraints (e.g. they’re stored in Fortran-contiguous order, or they’re pickled), you can manually cast this tensor column to our tensor extension type via a read-time user-defined function. This UDF will be pushed down to Ray Datasets’ IO layer and executed on each block in parallel, as it’s read from storage.
import pickle
import pyarrow as pa
from ray.data.extensions import TensorArray

# Create a DataFrame with a list of pickled ndarrays as a column.
arr = np.arange(24).reshape((3, 2, 2, 2))
df = pd.DataFrame({
    "one": [1, 2, 3],
    "two": [pickle.dumps(tensor) for tensor in arr]})

# Write the dataset to Parquet. The tensor column will be written as an
# array of opaque byte blobs.
ds = ray.data.from_pandas([df])
ds.write_parquet(path)

# Manually deserialize the tensor pickle bytes and cast to our tensor
# extension type. For the sake of efficiency, we directly construct a
# TensorArray rather than .astype() casting on the mutated column with
# TensorDtype.
def cast_udf(block: pa.Table) -> pa.Table:
    block = block.to_pandas()
    block["two"] = TensorArray([pickle.loads(a) for a in block["two"]])
    return pa.Table.from_pandas(block)

# Read the Parquet files into a new Dataset, applying the casting UDF
# on-the-fly within the underlying read tasks.
ds = ray.data.read_parquet(path, _block_udf=cast_udf)

# Internally, this column is represented with our Arrow tensor extension
# type.
print(ds.schema())
# -> one: int64
#    two: extension<arrow.py_extension_type<ArrowTensorType>>
Please note that the
_tensor_column_schema and
_block_udf parameters are both experimental developer APIs and may break in future versions.
Working with tensor column datasets¶
Now that the tensor column is properly typed and in a
Dataset, we can perform operations on the dataset as if it was a normal table:
# Arrow and Pandas are now aware of this tensor column, so we can do the
# typical DataFrame operations on this column.
ds = ds.map_batches(lambda x: 2 * (x + 1), batch_format="pandas")
# -> Map Progress: 100%|████████████████████| 200/200 [00:00<00:00, 1123.54it/s]
print(ds)
# -> Dataset(
#        num_blocks=1, num_rows=3,
#        schema=<class 'int',
#               class ray.data.extensions.tensor_extension.ArrowTensorType>)
print([row["two"] for row in ds.take(5)])
# -> [2, 4, 6, 8, 10]
Writing and reading tensor columns¶
This dataset can then be written to Parquet files. The tensor column schema will be preserved via the Pandas and Arrow extension types and associated metadata, allowing us to later read the Parquet files into a Dataset without needing to specify a column casting schema. This Pandas -> Arrow -> Parquet -> Arrow -> Pandas conversion support makes working with tensor columns extremely easy when using Ray Datasets to both write and read data.
# Write the tensor-column dataset to Parquet, then read it back; the
# extension-type metadata makes the round trip automatic.
ds.write_parquet(path)
read_ds = ray.data.read_parquet(path)
End-to-end workflow with our Pandas extension type¶
If working with in-memory Pandas DataFrames that you want to analyze, manipulate, store, and eventually read, the Pandas/Arrow extension types/arrays make it easy to extend this end-to-end workflow to tensor columns.
from ray.data.extensions import TensorDtype

# Create a DataFrame with a list of ndarrays as a column.
df = pd.DataFrame({
    "one": [1, 2, 3],
    "two": list(np.arange(24).reshape((3, 2, 2, 2)))})

# Note the opaque np.object dtype for this column.
print(df.dtypes)
# -> one     int64
#    two    object
#    dtype: object

# Cast column to our TensorDtype Pandas extension type.
df["two"] = df["two"].astype(TensorDtype())

# Note that the column dtype is now TensorDtype instead of np.object.
print(df.dtypes)
# -> one          int64
#    two    TensorDtype
#    dtype: object

# Pandas is now aware of this tensor column, and we can do the
# typical DataFrame operations on this column.
col = 2 * df["two"]

# The ndarrays underlying the tensor column will be manipulated,
# but the column itself will continue to be a Pandas type.
print(type(col))
# -> pandas.core.series.Series
print(col)
# -> 0   [[[ 2  4]
#          [ 6  8]]
#         [[10 12]
#          [14 16]]]
#    1   [[[18 20]
#          [22 24]]
#         [[26 28]
#          [30 32]]]
#    2   [[[34 36]
#          [38 40]]
#         [[42 44]
#          [46 48]]]
#    Name: two, dtype: TensorDtype

# Once you do an aggregation on that column that returns a single
# row's value, you get back our TensorArrayElement type.
tensor = col.mean()
print(type(tensor))
# -> ray.data.extensions.tensor_extension.TensorArrayElement
print(tensor)
# -> array([[[18., 20.],
#            [22., 24.]],
#           [[26., 28.],
#            [30., 32.]]])

# This is a light wrapper around a NumPy ndarray, and can easily
# be converted to an ndarray.
type(tensor.to_numpy())
# -> numpy.ndarray

# In addition to doing Pandas operations on the tensor column,
# you can now put the DataFrame directly into a Dataset.
ds = ray.data.from_pandas([df])

# Internally, this column is represented with the corresponding
# Arrow tensor extension type.
print(ds.schema())
# -> one: int64
#    two: extension<arrow.py_extension_type<ArrowTensorType>>

# Write the dataset to Parquet and read it back into a new Dataset.
ds.write_parquet(path)
read_ds = ray.data.read_parquet(path)

read_df = read_ds.to_pandas()
print(read_df.dtypes)
# -> one          int64
#    two    TensorDtype
#    dtype: object

# The tensor extension type is preserved along the
# Pandas --> Arrow --> Parquet --> Arrow --> Pandas
# conversion chain.
print(read_df.equals(df))
# -> True
Limitations¶
This feature currently comes with a few known limitations that we are either actively working on addressing or have already implemented workarounds for.
- All tensors in a tensor column currently must be the same shape. Please let us know if you require heterogeneous tensor shapes for your tensor column! Tracking issue is here.
- Automatic casting via specifying an override Arrow schema when reading Parquet is blocked by Arrow supporting custom ExtensionType casting kernels. See issue. An explicit _tensor_column_schema parameter has been added for read_parquet() as a stopgap solution.
- Ingesting tables with tensor columns into PyTorch via ds.to_torch() is blocked by PyTorch supporting tensor creation from objects that implement the __array__ interface. See issue. Workarounds are being investigated.
- Ingesting tables with tensor columns into TensorFlow via ds.to_tf() is blocked by a Pandas fix for properly interpreting extension arrays in DataFrame.values being released. See PR. Workarounds are being investigated.
The following page references refer to the file interaction-v6.1-en.pdf.
On page 115 in section “Creating a test environment” there is the following warning:
“Only use this environment creation method for test environments. In all other cases, use the target mapping wizard. For more on this, refer to Creating an offer environment [page 19].”
But for the test environment I have a rather clear description that works (what is the problem with using it for production?), while with the description on pages 19-21 (section "Creating an offer environment") I have problems:
- I am testing in a test Adobe Campaign instance where offer environments for Visitor and Recipient already exist. When I follow the example on pages 19-21 to create a new Visitor environment, I get no error, but I also do not find a new visitor instance, even though on page 20 I read: "3 Adobe Campaign creates two environments (Design and Live)". I only see in Administration > Configuration > Data schemas that some new data schemas have been created if I used a new extension namespace (Tracking and Delivery logs for Visitors, and in addition Reactions for Recipients).
- On pages 19-21 I find no way to select where in the explorer tree to add a new offer environment. But there surely should be a way to do this (e.g. to create more than one offer environment for visitors).
So, confronted with these issues: what is a good way to create an offer environment for production?
Michael
Hi Michael,
I'm not sure I understand your requirements. The "Creating a test environment" section is for sandbox mode (i.e. not meant to deploy anything), while the method described in "Creating an offer environment" via the target mapping is the correct way of setting up a real environment: it will automatically associate the offer environment with the selected target and add the correct schemas and entries to the explorer (one entry under "Offers - design" and one entry under "Offers - live").
If Visitor and Recipient environments already exist, it is normal that you cannot have two environments for the same dimension. So if you want to test the environment creation, you may want to use another dimension than visitors or recipients.
Hope this helps,
Florent
Hi Florent
You wrote: "So if you want to test the environment creation, you may want to use another dimension than visitors or recipients." I tried out all alternatives in Administration > Campaign/Delivery management > Delivery mappings. Most alternatives produced error messages (mostly of the kind "Schema … not found …"). The only alternative besides "Visitors" and "Recipients" that produced no error message was "WebEvent-Recipient". But even though no error was shown for "WebEvent-Recipient", I could find no additional offer environment in the Explorer. In my sandbox the old offer environments are below the folder "Explorer-root > company-name > Offer Management"; no additional offer environment had been added there, and I cannot find one anywhere else in the Explorer.
You also wrote: “it is normal if you cannot have 2 environments for the same”, but for me it is not that simple:
On page 103 of interaction-v6.1-en.pdf I find: "If you want to filter several types of visitors, for instance in the case of anonymous offers presented for one or more brands, you need to create an environment for each brand, and a Visitors type folder for each environment."
The concrete example is "brand", which I cannot easily use in the sandbox. But it also says "for instance … brands", so there should also be other examples besides brands. So what would be another example of "several types of visitors" for a production (not test) case (and how do I realize it)?
(I tried out a duplicate of mapVisitor, connected both with the visitor schema and with an extension of the visitor schema, and could not create a new offer environment in the way explained in "Creating an offer environment".)