The Universe is not static nor is the data it generates. As your business produces more data points, you need to be prepared to ingest and process them, and then load the results into a data lake that has been prepared to keep them safe and ready to be analyzed.
In this article, you will learn how to build scalable data pipelines using only Python code. Despite the simplicity, the pipeline you build will be able to scale to large amounts of data with some degree of flexibility.
ETL-based Data Pipelines
The classic Extract, Transform, Load (ETL) paradigm is still a handy way to model data pipelines. The heterogeneity of data sources (structured data, unstructured data points, events, server logs, database transaction information, etc.) demands an architecture flexible enough to ingest from big data solutions (such as Apache Kafka-based data streams) as well as from simpler data streams. We’re going to use the standard Pub/Sub pattern in order to achieve this flexibility.
In our test case, we’re going to process the Wikimedia Foundation’s (WMF) RecentChange stream, which is a web service that provides access to messages generated by changes to Wikipedia content. Because the stream is not in the format of a standard JSON message, we’ll first need to treat it before we can process the actual payload. The definition of the message structure is available online, but here’s a sample message:
```
event: message
id: [{"topic":"eqiad.mediawiki.recentchange","partition":0,"timestamp":1532031066001},{"topic":"codfw.mediawiki.recentchange","partition":0,"offset":-1}]
data: {"event": "data", "is": "here"}
```
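Each SSE frame is just lines of `key: value` text; here is a minimal sketch of pulling the JSON payload out of a frame like the one above (hand-rolled purely for illustration — the SSEClient library used later does this framing for us):

```python
import json

# A frame similar to the sample above (id shortened for brevity)
frame = (
    "event: message\n"
    'id: [{"topic":"eqiad.mediawiki.recentchange","partition":0}]\n'
    'data: {"event": "data", "is": "here"}\n'
)

# Split each line into its field name and value
fields = {}
for line in frame.splitlines():
    key, _, value = line.partition(": ")
    fields[key] = value

payload = json.loads(fields["data"])  # the JSON payload we actually want
print(payload["is"])  # prints "here"
```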
Server-Sent Events (SSE) are defined by the World Wide Web Consortium (W3C) as part of the HTML5 definition. They allow clients to receive streams using the HTTP protocol. In this particular case, the WMF EventStreams Web Service is backed by an Apache Kafka server. Our architecture should be able to process both types of connections:
- SSE events, and
- Subscriptions to more sophisticated services
Once we receive the messages, we’re going to process them in batches of 100 elements with the help of Python’s Pandas library, and then load our results into a data lake. The following diagram shows the entire pipeline:
The four components in our data pipeline each have a specific role to play:
- SSE Consumer – This component will receive the events from the WMF server, extract the JSON payload, and forward it to our second component.
- Message Queue – This component should be a massively scalable, durable and managed service that will queue up messages until they can be processed.
- Stream Processor – This component will process messages from the queue in batches, and then publish the results into our data lake.
- Data Lake – This long-term storage service will store our processed messages as a series of Comma Separated Value (CSV) files.
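The batch-of-100 behavior described above can be sketched as a simple generator (illustrative only — this is not the article's code; note that a trailing partial batch simply waits for more items, mirroring the pipeline below):

```python
def batches(stream, size=100):
    """Group an iterable of messages into fixed-size batches."""
    batch = []
    for item in stream:
        batch.append(item)
        if len(batch) >= size:
            yield batch
            batch = []
    # a partial batch is intentionally held back until it fills up

# e.g. a stream of 250 items yields two full batches of 100
chunks = list(batches(range(250)))
print([len(c) for c in chunks])  # [100, 100]
```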
In this post, we’ll show how to code the SSE Consumer and Stream Processor, but we’ll use managed services for the Message Queue and Data Lake.
Getting Started with Data Pipelines
To follow along with the code in this tutorial, you’ll need to have a recent version of Python installed. When starting a new project, it’s always best to begin with a clean implementation in a virtual environment. You have two choices:
- Download the pre-built Data Pipeline runtime environment (including Python 3.6) for Linux or macOS and install it using the State Tool into a virtual environment, or
- Follow the instructions provided in my Python Data Pipeline Github repository to run the code in a containerized instance of JupyterLab.
All set? Let’s dive into the details.
How to Mock AWS SQS and S3
To run our data pipelines, we’re going to use the Moto Python library, which mocks the Amazon Web Services (AWS) infrastructure in a local server. The two AWS managed services that we’ll use are:
- Simple Queue Service (SQS) – this is the component that will queue up the incoming messages for us
- Simple Storage Service (S3) – this is the data lake component, which will store our output CSVs
Other major cloud providers (Google Cloud Platform, Microsoft Azure, etc) have their own implementations for these components, but the principles are the same.
Once you’ve installed the Moto server library and the AWS CLI client, you have to create a credentials file at ~/.aws/credentials with the following content in order to authenticate to the AWS services:
```
[default]
aws_access_key_id = foo
aws_secret_access_key = bar
```
You can then launch the SQS mock server from your terminal with the following command:
moto_server sqs -p 4576 -H localhost
If everything is OK, you can create a queue in another terminal using the following command:
aws --endpoint-url=http://localhost:4576 sqs create-queue --queue-name sse_queue --region us-east-1
This will return the URL of the queue that we’ll use in our SSE Consumer component. Now it’s time to launch the data lake and create a folder (or ‘bucket’ in AWS jargon) to store our results. Use the following snippet to launch a mock S3 service in a terminal:
moto_server s3 -p 4572 -H localhost
To create a bucket called sse-bucket in the US East region, use the following command:
aws --endpoint-url=http://localhost:4572 s3 mb s3://sse-bucket --region us-east-1
Consuming Events with Python
Our SSE Consumer will ingest the entire RecentChange web service message, but we’re only interested in the JSON payload. To extract just the JSON, we’ll use the SSEClient Python library and code a simple function to iterate over the message stream to pull out the JSON payload, and then place it into the recently created Message Queue using the AWS Boto3 Python library:
```python
import boto3
import json
from sseclient import SSEClient as EventSource

# SQS client library
sqs = boto3.client('sqs',
                   endpoint_url='http://localhost:4576',  # only for test purposes
                   use_ssl=False,                          # only for test purposes
                   region_name='us-east-1')

queue_url = ''  # the queue URL returned by the create-queue command

def catch_events():
    url = 'https://stream.wikimedia.org/v2/stream/recentchange'
    for event in EventSource(url):
        if event.event == 'message':
            try:
                message = json.loads(event.data)
            except ValueError:
                pass
            else:
                enqueue_message(json.dumps(message))

def enqueue_message(message):
    response = sqs.send_message(QueueUrl=queue_url,
                                DelaySeconds=1,
                                MessageBody=message)
    print('\rMessage %s enqueued' % response['MessageId'],
          sep=' ', end='', flush=True)

if __name__ == "__main__":
    catch_events()
```
This component will run indefinitely, consuming the SSE events and printing the id of each message queued. Now it’s time to process those messages.
Processing Data Streams with Python
In order to explore the data from the stream, we’ll consume it in batches of 100 messages. To make sure that the payload of each message is what we expect, we’re going to process the messages before adding them to the Pandas DataFrame. Let’s start reading the messages from the queue:
```python
import boto3
import json
import time
import pandas as pd

def read_batch():
    while True:
        try:
            response = sqs.receive_message(
                QueueUrl=queue_url,
                MaxNumberOfMessages=10  # Max batch size
            )
            process_batch(response['Messages'])
        except KeyError:
            print('No messages available, retrying in 5 seconds...')
            time.sleep(5)
```
This short function takes up to 10 messages and tries to process them. If there are no messages in the queue, it will wait five seconds before trying again.
Next, the process_batch function will clean the message’s body and enrich each one with their respective ReceiptHandle, which is an attribute from the Message Queue that uniquely identifies the message:
```python
def process_batch(messages):
    global list_msgs
    for message in messages:
        d = json.loads(message['Body'])
        # This just cleans the message's body of non-desired data
        clean_dict = {key: (d[key] if key in d else None) for key in map_keys}
        # We enrich our df with the message's receipt handle in order to
        # clean it from the queue later
        clean_dict['ReceiptHandle'] = message['ReceiptHandle']
        list_msgs.append(clean_dict)
    if len(list_msgs) >= 100:
        print('Batch ready to be exported to the Data Lake')
        to_data_lake(list_msgs)
        list_msgs = []
```
This function is an oversimplification. It creates a clean dictionary with the keys that we’re interested in, and sets the value to None if the original message body does not contain one of those keys. Also, after processing each message, our function appends the clean dictionary to a global list.
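The cleaning step boils down to a dictionary comprehension with a None default. A tiny standalone illustration (the `map_keys` values and the message body here are made up):

```python
map_keys = ["id", "title", "user"]  # hypothetical keys of interest
raw = {"id": 123, "user": "alice", "noise": "ignored"}

# Keep only the keys we care about; missing keys default to None
clean = {key: (raw[key] if key in raw else None) for key in map_keys}
print(clean)  # {'id': 123, 'title': None, 'user': 'alice'}
```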
Finally, if the list contains the desired batch size (i.e., 100 messages), our processing function will persist the list into the data lake, and then restart the batch:
```python
def to_data_lake(df):
    batch_df = pd.DataFrame(list_msgs)
    csv = batch_df.to_csv(index=False)
    filename = 'batch-%s.csv' % df[0]['id']
    # Write the CSV to the S3 bucket
    s3.Bucket('sse-bucket').put_object(Key=filename,
                                       Body=csv,
                                       ACL='public-read')
    print('\r%s saved into the Data Lake' % filename,
          sep=' ', end='', flush=True)
    remove_messages(batch_df)
```
The to_data_lake function transforms the list into a Pandas DataFrame in order to create a simple CSV file that will be put into the S3 service, using the id of the first message in the batch to build a unique filename.
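The same serialization can be sketched with the standard library alone (illustrative — the article uses Pandas' to_csv; the batch content below is made up):

```python
import csv
import io

batch = [
    {"id": 1, "title": "Page A", "ReceiptHandle": "rh-1"},
    {"id": 2, "title": "Page B", "ReceiptHandle": "rh-2"},
]

# Serialize the batch of dictionaries to CSV in memory
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(batch[0].keys()))
writer.writeheader()
writer.writerows(batch)

csv_body = buf.getvalue()                 # what would become the S3 object body
filename = 'batch-%s.csv' % batch[0]['id']
print(filename)  # batch-1.csv
```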
We then proceed to clean all the messages from the queue using the remove_messages function:
```python
def remove_messages(df):
    for receipt_handle in df['ReceiptHandle'].values:
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=receipt_handle)
```
If we want to check whether there are files in our bucket, we can use the AWS CLI to list all the objects in the bucket:
aws --endpoint-url=http://localhost:4572 s3 ls sse-bucket
Putting It All Together
The complete source code of this example is available in my Github repository. There, you’ll find:
- A set of JupyterLab notebooks.
- A run.sh file; execute it, then point your browser at the JupyterLab URL it reports and follow the notebooks.
I’ve left some exercises to the reader to fill in, such as improving the sample SSE Consumer and Stream Processor by adding exception handling and more interesting data processing capabilities. Python has a number of different connectors you can implement to access a wide range of Event Sources (check out Faust, Smartalert or Streamz for more information).
When it comes to scaling, a good recommendation is to deploy both services as auto-scalable instances using AWS Fargate or similar service at your cloud provider. Finally, our entire example could be improved using standard data engineering tools such as Kedro or Dagster.
Next Steps
- Install the State Tool:
sh <(curl -q)
- Install the pre-built Data Pipeline runtime environment (including Python 3.6 and all required packages) into a virtual environment for Linux:
state activate Pizza-Team/Data-Pipeline
Related Blogs:
Pandas: Framing the Data
How to Best Manage Threads in Python
Source: https://www.activestate.com/blog/how-to-create-scalable-data-pipelines-with-python/
Yes, I’ve been playing with Docker again – no big surprise there. This time I decided to take a look at scaling an application that’s in a Docker container. Scaling and load balancing are concepts you have to get your head around in a microservices architecture!
Another consideration when load balancing is of course shared memory. Redis is a popular mechanism for that (and since we’re talking Docker I should mention that there’s a Docker image for Redis) – but for this POC I decided to keep the code very simple so that I could see what happens on the networking layer. So I created a very simple .NET Core ASP.NET Web API project and added a single MVC page that could show me the name of the host machine. I then looked at a couple of load balancing options and started hacking until I could successfully (and easily) load balance three Docker container instances of the service.
The Code
The code is stupid simple – for this POC I’m interested in configuring the load balancer more than anything, so that’s ok. Here’s the controller that we’ll be hitting:
```csharp
namespace NginXService.Controllers
{
    public class HomeController : Controller
    {
        // GET: /<controller>/
        public IActionResult Index()
        {
            // platform agnostic call
            ViewData["Hostname"] = Environment.GetEnvironmentVariable("COMPUTERNAME") ??
                                   Environment.GetEnvironmentVariable("HOSTNAME");
            return View();
        }
    }
}
```
Getting the hostname is a bit tricky for a cross-platform app, since *nix systems and windows use different environment variables to store the hostname. Hence the ?? code.
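The same fallback chain reads naturally in other languages too; a Python equivalent, purely for illustration:

```python
import os
import socket

# COMPUTERNAME on Windows, HOSTNAME on *nix, with a library call as a last resort
hostname = (os.environ.get("COMPUTERNAME")
            or os.environ.get("HOSTNAME")
            or socket.gethostname())
print(hostname)
```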
Here’s the View:
```
@{
    <h1>Hello World!</h1>
    <br/>
    <h3>Info</h3>
    <p><b>HostName:</b> @ViewData["Hostname"]</p>
    <p><b>Time:</b> @string.Format("{0:yyyy-MM-dd HH:mm:ss}", DateTime.Now)</p>
}
```
I had to change the Startup file to add the MVC route. I just changed the app.UseMvc() line in the Configure() method to this:
```csharp
app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});
```
Finally, here’s the Dockerfile for the container that will be hosting the site:
```dockerfile
FROM microsoft/dotnet:1.0.0-core

# Set the Working Directory
WORKDIR /app

# Configure the listening port
ARG APP_PORT=5000
ENV ASPNETCORE_URLS http://*:$APP_PORT
EXPOSE $APP_PORT

# Copy the app
COPY . /app

# Start the app
ENTRYPOINT dotnet NginXService.dll
```
Pretty simple so far.
Proxy Wars: HAProxy vs nginx
After doing some research it seemed to me that the serious contenders for load balancing Docker containers boiled down to HAProxy and nginx (with corresponding Docker images here and here). In the end I decided to go with nginx for two reasons: firstly, nginx can be used as a reverse proxy, but it can also serve static content, while HAProxy is just a proxy. Secondly, the nginx website is a lot cooler – seemed to me that nginx was more modern than HAProxy (#justsaying). There’s probably as much religious debate about which is better as there is about git rebase vs git merge. Anyway, I picked nginx.
Configuring nginx
I quickly pulled the image for nginx (docker pull nginx) and then set about figuring out how to configure it to load balance three other containers. I used a Docker volume to keep the config outside the container – that way I could tweak the config without having to rebuild the image. Also, since I was hoping to spin up numerous containers, I turned to docker-compose. Let’s first look at the nginx configuration:
```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    sendfile on;

    # List of application servers
    upstream app_servers {
        server app1:5000;
        server app2:5000;
        server app3:5000;
    }

    # Configuration for the server
    server {
        # Running port
        listen [::]:5100;
        listen 5100;

        # Proxying the connections
        location / {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
```
This is really a bare-bones config for nginx. You can do a lot in the config. This config does a round-robin load balancing, but you can also configure least_connected, provide weighting for each server and more. For the POC, there are a couple of important bits:
- The upstream app_servers block is the list of servers that nginx is going to be load balancing. I’ve used aliases (app1, app2 and app3, all on port 5000) which we’ll configure through docker-compose shortly.
- The listen directives make the nginx server itself listen on port 5100.
- The location / block passes all traffic on to the configured upstream servers.
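The round-robin scheduling nginx applies can be sketched in a few lines of Python (purely illustrative — this is not how nginx is implemented):

```python
import itertools

servers = ["app1:5000", "app2:5000", "app3:5000"]
next_server = itertools.cycle(servers)

# Five consecutive requests wrap around the server list
picks = [next(next_server) for _ in range(5)]
print(picks)  # ['app1:5000', 'app2:5000', 'app3:5000', 'app1:5000', 'app2:5000']
```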
I’ve saved this config to a file called nginx.conf and put it into the same folder as the Dockerfile.
Configuring the Cluster
To configure the whole cluster (nginx plus three instances of the app container) I use the following docker-compose yml file:
```yaml
version: '2'

services:
  app1:
    image: colin/nginxservice:latest
  app2:
    image: colin/nginxservice:latest
  app3:
    image: colin/nginxservice:latest

  nginx:
    image: nginx
    links:
      - app1:app1
      - app2:app2
      - app3:app3
    ports:
      - "5100:5100"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
```
That’s 20 lines of code to configure a cluster – pretty sweet! Let’s take a quick look at the file:
- The app1, app2 and app3 services spin up three containers using the image containing the app (that I built separately, since I couldn’t figure out how to build and use the same image multiple times in a docker-compose file).
- The nginx service spins up a container based on the stock nginx image.
- The links section is the interesting bit: we tell docker to create links between the nginx container and the other containers, aliasing them with the same names. Docker creates internal networking (so it’s not exposed publicly) between the containers. This is very cool – the nginx container can reference app1, app2 and app3 (as we did in the nginx config file) and docker takes care of figuring out the IP addresses on the internal network.
- The ports mapping maps port 5100 on the nginx container to an exposed port 5100 on the host (remember we configured nginx to listen on the internal 5100 port).
- The volumes entry maps the nginx.conf file on the host to /etc/nginx/nginx.conf within the container.
Now we can simply run docker-compose up to run the cluster!
You can see how docker-compose pulls the logs into a single stream and even color-codes them!
The one thing I couldn’t figure out was how to do a docker build on an image and use that image in another container within the docker-compose file. I could just have three build directives, but that felt a bit strange to me since I wanted to supply build args for the image. So I ended up doing the docker build to create the app image and then just using the image in the docker-compose file.
Let’s hit the index page and then refresh a couple times:
You can see in the site (the hostname) as well as in the logs how the containers are round-robining:
Conclusion
Load balancing containers with nginx is fairly easy to accomplish. Of course the app servers don’t need to be running .NET apps – nginx doesn’t really care, since it’s just directing traffic. However, I was pleased that I could get this working so painlessly.
Happy load balancing!
Source: https://www.colinsalmcorner.com/post/load-balancing-dotnet-core-docker-containers-with-nginx
Applies to:
SAP NetWeaver BI 7.0 and SAP CRM 5.0(and above).
Summary
This paper explains the enhancements specific to business partner in CRM and then retracting BW relevant data to CRM.
Author: Akella Kameswari
Company: Deloitte Consulting
Created on: 25 March 2012
Author Bio
Kameswari Akella is currently working in Deloitte Consulting on SAP BI/BW.
Scenario
There is a need to write attribute values for the business partner into SAP CRM and store this data in a table in CRM. I want to transfer infoobject attributes (Address and Gender) for the infoobject Business Partner. To transfer data from SAP BW, start by defining a data target in SAP CRM, and then use this data target in SAP BW to model the data transfer process in the form of an analysis process. I want to store the transferred data from SAP BW in the form of a table in CRM, so that this custom table can be used for other purposes.
To achieve this, we have to first create a data target in CRM of the type “Enhancements Specific to Business Partners” and then create an analysis process in BW to retract the BW data to CRM.
Enhancements Specific to Business Partners (Steps in CRM)
Step 1: Create a table ‘ZTABLE_APD’ in CRM. It should fall within the customer namespace (that is, the table name has to begin with Y or Z) and it has to have the business partner number as its key.
Step 2: You should have authorization for the authorization object C_CRMBWTGT to perform step 3.
Step 3: Go to SPRO –> Integration with Other mySAP Components –> Data Transfer from SAP Business Information Warehouse.
Now, execute ‘Release Data Targets for Replication from SAP BW’ or, to jump to the screen directly without following the navigation, use transaction code ‘CRMBWTARGETS’. Define the table ‘ZTABLE_APD‘ as a data target with the target attributes ‘ADDRESS’ and ‘GENDER’, and specify their corresponding BW InfoObjects (0ADDR_LINE0 and 0GENDER respectively) in Customizing for SAP CRM.
BW Retraction
Model an analysis process “ZAPDRR” for transferring data from the attributes (0ADDR_LINE0 and 0GENDER) of 0BPARTNER to the CRM attributes (ADDRESS and GENDER), which are fields of the table ‘ZTABLE_APD’. To achieve this, create the analysis process with the 0BPARTNER infoobject as the source and a data target of type CRM, selecting the data target defined in step 3 above. Set a data flow arrow between the datasource and the data target to assign the BW attributes ‘0ADDR_LINE0’ and ‘0GENDER’ to the fields ‘ADDRESS’ and ‘GENDER’ of the table ‘ZTABLE_APD’. The screenshot for the datasource is given below:
Data target should be of the type CRM with selection of the data target that was defined previously.
Select the data target as ‘Enhancements Specific to Business Partners’; the subobject should be the table created in step 1. The table fields will be available as data targets.
The screenshots are given below:
Move the fields of the table created in step 1 from the list of Available Attributes to the Selected Attributes.
Double-click on the dataflow arrow between the datasource and the data target. Initially, no assignments will be present between the source structure and the target structure.
Now, assign 0BPARTNER of the source to the target field Business Partner ID. The screenshot is given below:
Now, assign 0ADDR_LINE0 of the source to the ADDRESS field of the target table (ZTABLE_APD). The screenshot is given below:
Now, assign 0GENDER of the source to the GENDER field of the target table (ZTABLE_APD). The screenshot is given below:
The final screen looks as follows:
Now, check the analysis process.
Activate the above analysis process.
Give the technical name for the above analysis process.
Execute the analysis process.
The logs are shown below:
The data is transferred from SAP BW into the CRM table ZTABLE_APD which we have created.
Results
Execute the 0BPARTNER in BW for few business partners and compare the address and gender values in BW and CRM. In BW, let us restrict data to 6 Business Partners. The screenshot is given below:
The attributes Address Line 0 and Gender values for the corresponding 6 Business Partners is given below:
In CRM, execute the table ‘ZTABLE_APD’ with the same restrictions as in BW, i.e., for the 6 Business Partners to which we restricted the data in BW.
The screenshot of the selection screen for the table ‘ZTABLE_APD’ in CRM is given below:
The address and genders are populated correctly for the corresponding Business Partners. The screenshot is given below:
In this way, enhancements specific to Business Partners can be done in CRM and then data can be retracted from SAP BW to CRM.
Related Content
Analysis Process Designer
Source: https://blogs.sap.com/2012/03/18/sap-crmbw-enhancements-specific-to-business-partners-and-bw-retraction/
1. Add into your Java classes a start tag (like //[ifdef]) and an end tag (like //[enddef]) around the code in question. To include code for version 1:
```java
//[ifdef]
import java.sql.ParameterMetaData;
//[enddef]

//[ifdef]
public byte[] getBytes(String parameterName)
    throws SQLException {
    ...
}
//[enddef]
```
Then run Ant with the version flag set, for example:

```
ant -Dcompile.to.version.1=false build
```
5. Make sure that you compile from the right directory:
```
<javac ...>
    <src path="${src.dir}/${version}"/>
</javac>
```
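What the Ant filter step effectively does can be sketched in a few lines (Python here for brevity; the //[ifdef] and //[enddef] tag names follow the post, everything else is made up):

```python
import re

source = """\
import java.sql.DriverManager;
//[ifdef]
import java.sql.ParameterMetaData;
//[enddef]
public class Dao {}
"""

# Drop every //[ifdef] ... //[enddef] region, markers included,
# to produce the source tree for the older JDK build
stripped = re.sub(r"//\[ifdef\]\n(?:.*\n)*?//\[enddef\]\n", "", source)
print("ParameterMetaData" in stripped)  # False
```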
I hope this will help you in the future – Andy
Very interesting Andreas,
I had not seen this before; Thanks!
I noticed how it requires Ant:
Perhaps this will now make it clear to Sun that it is high time to include an official macro language for Java. If only to make the source consistent across all builders. Make no mistake; if Sun doesn't do this: The community will!
PS And with this, we will finally dump all of this AOP cra... nonsense.
Posted by: cajo on January 20, 2005 at 08:04 PM
Given that we have Annotations we could define some annotations like so:
```java
@Retention (SOURCE)
@Target (???) // ALL possible values
public @interface ifdef {
    String value();
}

@Retention (SOURCE)
@Target (???) // ALL possible values
public @interface endif {}
```
And use them like so:
```java
import java.util.logging.*;

public class Example {
    private static final Logger LOGGER = Logger.global;

    public byte[] getBytes(String parameterName)
        throws SQLException {
        @ifdef (DEBUG)
        LOGGER.info("Some complex str concat based debug message goes here");
        @endif
    }
}
```
And build a compiler plugin to deal with these annotations. However, @Target of the annotations does not contain meaningful values to place them ***ANYWHERE*** in the source code of the compilation units! This approach will eliminate addition of new keywords or syntax to the language.
However, I wouldn't think it would be prudent to add this one to the language spec. First of all, the idea of annotations itself might kill the long forgotten WORA - Write Once Run Anywhere philosophy. But then again, the community could build a compiler plug-in if they need that sort of functionality.
Posted by: chakrayadavalli on January 20, 2005 at 09:51 PM
If you´re using Ant as macro language you could also do replacements via task or filter.
Posted by: jhm on January 21, 2005 at 12:22 AM
I recommend using VPP instead.
Posted by: erikhatcher on January 21, 2005 at 02:21 AM
How would a macro language be better than Ant or AOP?
Posted by: monika_krug on January 21, 2005 at 04:04 AM
Monika,
Ant is a great build tool, I use it exclusively. However it is not the only build tool out there. Adding macro language support to specific builders reduces code portablity.
When you consider what it actually does; AOP is simply a degenerate subset of the functionality provided by a macro language. It's a hack-around, so to speak. Worse, it is performed on bytecode, rather than source code, which makes post process inspection unpleasant.
What would be best is an official Java macro language specification, and preprocessor. Otherwise very shortly there will be many variants of macro languages.
John
Posted by: cajo on January 21, 2005 at 04:56 AM
I see no burning need for a preprocessor or macro language in Java, since many of the things that make them indispensible to C and C++, such as differing word sizes or byte order among platforms, are not a factor in Java.
Moreover, if I have a class file that I know was compiled from a particular source file, I like the fact that I can look at the source file and know exactly how the class will behave without also needing to know what preprocessor flags were set at compile time. To my mind, there are numerous more elegant ways to solve the problems that might be solved by a Java preprocessor.
That said, however, there's always the old standby, using public final static boolean flags, a la:
```java
public final static boolean DEBUG = true;
...
if (DEBUG) {
    // do something
}
```
with the assumption that the compiler will optimize out everything in the if block since it can never be reached, and if it doesn't the code doesn't execute anyway.
Posted by: dglasser on January 21, 2005 at 07:04 AM
with the assumption that the compiler will optimize out everything in the if block since it can never be reached, and if it doesn't the code doesn't execute anyway.
I should have said, "with the assumption that, if DEBUG is false, the compiler will optimize out..." etc.
Posted by: dglasser on January 21, 2005 at 07:06 AM
Preprocessors are seldom the best answer for Java. The example code could have been restructured to be more maintainable without a preprocessor. With that said, there are other preprocessors available, like the ones here.
Posted by: coxcu on January 21, 2005 at 07:28 AM
The Java+ preprocessor looks like it has some potential Curt. Is Brad any relation, or is that just a surprising coincidence?
If we are discussing solely conditional compilation, then I agree with Dave above, the if(DEBUG) Java language approach is cleaner.
I was thinking more about the algorighmic meta-programming features of a preprocessor; where macros actually generate source code. Brad alludes to these types of features in his Future Plans section.
Posted by: cajo on January 21, 2005 at 08:42 AM
Some thoughts on the comments:
1) Conditional compilation or a macro language is not a replacement for AOP. With a macro language one can add, replace or remove code anywhere, whereas with AOP one can only add code at the edge of a method call or completely replace the entire method. On the other hand, AOP can inject code into bytecode, and some implementations can do this at runtime. Also, AOP can inject code on the caller side as well as on the method side, meaning that I can inject code for a particular group of classes using a certain method. Finally, with the AOP descriptor language I can inject code very easily, without changing a lot of source code.
2) The if(DEBUG) {} trick only works within a method, and therefore I cannot add/replace/remove import statements, member and method declarations, or entire methods.
3) Conditional compilation is useful when one has to support different JDK versions. For example, one could not otherwise provide code for the JDBC of JDK 1.3 and JDK 1.4 within the same class. Conditional compilation therefore helps to migrate to the next JDK and use its features while still being able to provide a backwards-compatible version.
4) I want to emphasize again that one of the goals of this approach was to keep the source code, without the conditional compilation, a valid Java class, to keep the support from editors and IDEs. Some of the other solutions do not meet this requirement.
Have fun and thanks for your comments - Andy
Posted by: schaefa on January 21, 2005 at 09:07 AM
Aren't pre-processors just a fiddle for a bad design in the first place? Abstract classes? Factories? Inheritance? Packages? OO, eh?
Posted by: rob005 on January 21, 2005 at 10:55 AM
Every tool is great if used wisely. Pre-processors help to deal with code that needs different variations depending on JDK versions (like different JDBC versions) or specification versions (JCA 1.0 to 1.5). AOP does not work here because it cannot change the method signature.
Of course my solution is not the best, but it works, is elegant (meaning it does not invalidate the Java code) and only needs Ant / Maven to build with.
-Andy
Posted by: schaefa on January 21, 2005 at 11:33 AM
Conditional compilation is a must-have. To implement such a few conditional statements in Java, Sun could do it in one day. If you use a preprocessor from company XY, in a few years you can get into trouble if company XY goes down and you don't have an updated preprocessor.
"Aren't pre-processors just a fiddle for a bad design in the first place? Abstract classes? Factories? Inheritance? Packages? OO, eh?"
No rob005, get real life!
Gorazd
Posted by: gorazd_praprotn on January 22, 2005 at 05:35 AM
imho, conditional compilation is not necessary, and is not nice
if you really want to do this then the hack that this blog suggests is not the way to go, and I'm quite annoyed that it has even been suggested - if you come to a new project where someone has implemented their own conditional compilation scheme using a trick like this then it could be a real nightmare, especially if they are no longer around
Posted by: asjf on January 24, 2005 at 03:31 AM
Nice implementation Andy... thank you. I've often thought and wondered to myself why conditional compilation is not part of the core J2SE.
Conditional compilation in some cases is necessary, or at the very least nice to have. Say for example you have a static initializer that takes a lot of time whenever the ClassLoader references the class, directly or indirectly. Some builds might not need that static initialization and should not spend the extra resources on it; conditional compilation for that build could solve such a case.
Yes I know, one could argue to simply make an interface and two implementations of the class, or a similar implementation with two subclasses of an abstract parent class. But then you're adapting the design to fit the limitations of Java.
Posted by: phlogistic on January 24, 2005 at 12:19 PM
Andy,
Can you please post the complete build.xml ? I'm having trouble implementing even a simple example. Of course I have little experience with Ant.
Thanks in Advance
Posted by: raviies on January 04, 2007 at 11:49 AM
Andreas,
This tip was *awesome*. I understand people who see conditional compiling as something that should not be necessary in a WORA world, since it can be handled by techniques such as inclusion or exclusion of different implementations of classes in the build process.
However, I had a problem with a J2ME game that had to be split in "light" and "full" editions due to size limits in some mobile devices (and having an extra class to handle the excluded code would kill the savings offered by the light edition). In that case, conditional compiling hits the bullseye!
Thank you very much for sharing it.
Posted by: chester_br on June 16, 2007 at 08:00 PM
Source: http://weblogs.java.net/blog/schaefa/archive/2005/01/how_to_do_condi_1.html
Remote Config Utility Class
RemoteConfig is a wrapper class that works with the plugin, firebase_remote_config, which in turn uses the Firebase Remote Config API to communicate with the Remote Config cloud service offered to all Firebase projects. Add it to your pubspec.yaml with: remote_config: ^1.0.0
For more information on version numbers: The importance of semantic versioning.
Firebase Console
When you go to your Firebase console in your own Firebase project, there's an option on the left-hand side called, Remote Config. Traditionally, Remote Config is used to store server-side default values to control the behavior and appearance of your app. This library package makes doing so that much easier for a Flutter developer.
A Walkthrough
Let's walk through the RemoteConfig class found in the library file, remote_config.dart, and explain what it does. There's a screenshot below displaying the start of this class. Again, it works with the plugin that knows how to talk to Firebase's Remote Config, and you can see it being imported with the prefix, "r", in the screenshot below. Note, however, it also exports specific classes from the plugin file. This way, when using the library routine, you don't have to import the plugin file as well; you just import this one file.
Therefore, you can use just one import statement:
import 'remote_config.dart';
Instead of using two import statements:
import 'package:firebase_remote_config/firebase_remote_config.dart';
import 'remote_config.dart';
A further note on the constructor: I explicitly test whether the parameters are null. With such a utility class used by the general populace, null values could be passed in, by accident or otherwise. In addition, the named parameters, defaults and expiration, used by the enclosed plugin are assured to be assigned valid values. Lastly, the class doesn't store the parameter values in instance variables. These values are instead passed to 'private' variables. They don't need to be accessible again as a public class property; it's a security precaution, another characteristic of such a utility class.
The next stretch of code lists the private variables and the getters used in this routine. The one of note is the getter, instance, as it's a reference to the underlying Remote Config plugin itself. You likely won't need to reference it directly, but it's an option.
Init Your Remote
The next stretch of code is the init() function. I decided to have a separate init() function in this routine to remind any developer using this class not only to call the init() function to initialize things, but also to call its corresponding dispose() function to then 'clean things up.' In the screenshot of the init() function below, you can see it's there that the Remote Config plugin is actually initialized. Further, any default values passed to the routine are then assigned to the plugin. Finally, it is there that the plugin's fetch() function is called to 'fetch' the parameter values stored in the Remote Config.
You'll notice an encryption key is conceived in the init() function as well. Encrypting and decrypting your remote values is an option available to you when using this class. If this key was not explicitly passed to this function, the package name of your app is used to look up a possible key in Remote Config.
Further note, the whole operation in the init() function is enclosed in a try-catch statement, and any exceptions are recorded. Any such utility class should record any and all exceptions. Lastly, a boolean value of true is returned if everything goes successfully.
The next stretch of code will mirror the properties and functions found in the plugin itself. Again, being a utility class - made available for public use, you have to ensure the routine is used properly. In this case, the init() function has to be called before you can do anything else, and that's what the series of assert() functions you see below are for. If the developer forgets to call the init() function, they'll know it if they try to work with the routine any further.
What follows in the next bit of code is what you'll be using most often. You'll supply the appropriate key value and retrieve parameter values from Firebase's Remote Config using the following functions. The getStringed() function is found here. Note, its role is to involve decryption in your value retrieval. More on that later.
The rest of the code below continues to mirror the functions available to the plugin itself. You can even add 'listeners' to introduce functions that will fire whenever a Remote Config value possibly changes during the app's execution.
Remote Error
Lastly, in this RemoteConfig wrapper class, there's the code for recording any exceptions. If something goes wrong, there are two getters that you can use in your app itself to test if the wrapper class failed in any way. For instance, if either _remoteConfig.hasError or _remoteConfig.inError is set to true, there was an exception.
As you know by now, throughout the class, in every try-catch statement, the function getError() is called to record any exceptions. Well, your app can also call getError() without a parameter to retrieve the actual exception that has occurred and act accordingly. That means, as a developer, you can use the function in your own Flutter app to test whether your remote config operations were successful or not.
Decrypt The Encrypted
With this Remote Config routine, I had a need for the use of cryptography. As it happens, that's what the getStringed() function is for: to decrypt the Remote Config values. Of course, using your favourite IDE and breakpoints, you'll have to first encrypt those values and store them up there in your own Remote Config in the first place. And so, there is a StringCrypt class for you to do this, and it's listed below. It works with another popular plugin called flutter_string_encryption, using the class PlatformStringCryptor().
Again, like the Remote Config routine, this class doesn't store its three parameters in class properties but instead takes them into private variables. You can also see its plugin is initialized in the constructor. Lastly, instead of explicitly providing the plugin a key parameter, you can provide a password and 'salt' to generate the key. Such a key is required for encryption.
The 'salt' parameter is an additional string that accompanies the password when generating a key for encryption and decryption. It's an additional safeguard in case the password is ever compromised: 'one more component' needed if a bad guy wants access.
Below, you now see the 'decrypt' routines used by the RemoteConfig class. In addition, you see the functions you would use to first generate a key you'd then store away on Remote Config for example.
The private function, _keyFromPassword(), is pretty self-explanatory. If there's a password and a salt provided, it's called back up in the constructor and assigns a key to the private variable, _key.
Lastly, you'll recognize the bit of code at the very end of this class. Again, such a utility class should catch any and all exceptions and save it for the developer to optionally retrieve and act upon accordingly. That code is listed below.
An Example
The routine below is a screenshot of one of my ongoing projects. It calls the plugin, flutter_twitter, but not before accessing the project's Remote Config and supplying the two required String values. You can see this routine draws out the app's package name and uses the first two parts, 'com' and 'andrioussolutions', to retrieve the particular Remote Config values you first saw in the Firebase Console screenshot above.
The function, getStringed(), will be of particular interest. It returns the String values from the Remote Config, but not before decrypting the retrieved value. The function is called for both the public key and the secret token, as you can see highlighted in the screenshot below. Hence, back in the screenshot of the Remote Config screen for that particular Firebase project above, the values stored there are, in fact, the encrypted versions of the public key and secret token.
Taking a peek at the class library, RemoteConfig, we can see the function calls its regular getString() function used to retrieve the String value from Firebase's Remote Config, but then it passes that value to an asynchronous function called, de().
It's an abbreviated form of the function, decrypt(), which in turn calls another function to perform the actual decryption. A screenshot of these functions is displayed below. Of course, it's not recommended you store such sensitive information in Google's cloud service.
This utility class, RemoteConfig, was written up out of necessity - I needed to work with Firebase's Remote Config service. I've supplied this class and the StringCrypt class to our fledgling Flutter community.
Cheers.
Source: https://pub.dev/documentation/remote_config/latest/
Prompt and read in the number of table rows. (You do not need to check that this number is positive.) If the table has m rows, then the table will contain f(x,y) for x and y from 0 to m − 1.
Use two nested for loops to print your table, looping over the values of x and y. Note that x and y should be integers and should both start at 0. Note also that y should be less than or equal to x.
Print the output in fixed precision with one digit after the decimal point. Columns and decimal points should be aligned.
I have:
#include<iostream>
#include<cmath>
#include<iomanip>
using namespace std;

int function(int x, int y)
{
    int f;
    f=sqrt(pow(x,2)+pow(y,2));
    return(f);
}

int main()
{
    int x(0); // x integers
    int y(0); // y integers
    int m(0); // number of integers
    int f(0); // function (x,y)
    cout<<"Enter number of rows; ";
    cin>>m;
    for (x=0; x<=m-1; x++)
    {
        for (y=0; y<=x; y++)
        {
            f=function(x,y);
            cout.precision(1);
            cout<< setw(f)<<endl;
        }
    }
    return 0;
}
I am having trouble making the table any help???
This post has been edited by jimblumberg: 06 November 2011 - 09:12 AM
Reason for edit:: Added missing Code Tags, Please learn to use them.
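For reference, a corrected sketch (not from the thread): the function must return a floating-point type (an int return truncates sqrt's result), the computed value must actually be streamed (the original's cout << setw(f) << endl uses f as a field width and prints nothing), fixed/setprecision control the decimal digit, and the newline belongs after the inner loop so each x produces one row.

```cpp
#include <cmath>
#include <iomanip>
#include <iostream>

// f(x, y) = sqrt(x^2 + y^2), computed in double so the fraction survives
double f(int x, int y) {
    return std::sqrt(static_cast<double>(x * x + y * y));
}

// Print an m-row triangular table of f(x, y) for 0 <= y <= x < m,
// columns 6 wide with one digit after the decimal point
void print_table(int m) {
    std::cout << std::fixed << std::setprecision(1);
    for (int x = 0; x <= m - 1; ++x) {
        for (int y = 0; y <= x; ++y) {
            std::cout << std::setw(6) << f(x, y);  // setw pads each column
        }
        std::cout << '\n';                         // one row per x
    }
}
```

main() would then just prompt for m with cin >> m and call print_table(m).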
Source: http://www.dreamincode.net/forums/topic/254547-nested-loop-for-function-that-outputs-table/
Resize problem at the core of QML
Hi all!
Pre to this post was "QML SplitView - handleDelegate"
I managed (by trial and error) to come up with this code:
import QtQuick 2.6
import QtQuick.Controls 1.5
import QtQuick.Layouts 1.3

Item {
    id: mainItem
    anchors.fill: parent
    SplitView {
        id: splitMain
        orientation: Qt.Vertical
        anchors.fill: parent
        signal handleChanged
        property real ratio: 0.6
        Rectangle {
            id: r1
            width: {
                if (splitMain.orientation == Qt.Horizontal) {
                    console.log("width: " + splitMain.width)
                    return splitMain.width * splitMain.ratio
                }
            }
            height: {
                if (splitMain.orientation == Qt.Vertical) {
                    console.log("height: " + splitMain.height)
                    return splitMain.height * splitMain.ratio
                }
            }
            Text {
                text: splitMain.ratio.toFixed(2)
            }
            onWidthChanged: {
                if ((splitMain.orientation == Qt.Horizontal) && splitMain.resizing)
                    splitMain.handleChanged()
            }
            onHeightChanged: {
                if ((splitMain.orientation == Qt.Vertical) && splitMain.resizing)
                    splitMain.handleChanged()
            }
        }
        Rectangle {
            id: r2
            Layout.fillWidth: (splitMain.orientation == Qt.Horizontal) ? true : false
            Layout.fillHeight: (splitMain.orientation == Qt.Vertical) ? true : false
        }
        handleDelegate: Rectangle {
            width: (splitMain.orientation == Qt.Horizontal) ? 4 : 0
            height: (splitMain.orientation == Qt.Vertical) ? 4 : 0
            color: "red"
        }
        onHandleChanged: {
            ratio = (splitMain.orientation == Qt.Horizontal) ? r1.width / width : r1.height / height
            console.log("handle changed to ratio " + ratio.toFixed(2))
        }
    }
}
Please try this yourself, without touching the splitview handle, and see that everything resizes (including the splitview) like you would expect when you resize the main window.
Now try again, but now fiddle with the splitview first....
And now try to resize the containing window...
As you will see, the SplitView aspect is not kept...
Now, review my code, I think I made everything possible on Window resize, but it won't keep the aspect ratio of the SplitView.
So:
- either my splitMain is not informed of parent (size) changes
- Or there is something really wrong here
I am sure there are some experts here?
To be clear:
I assume that my anchors.fill parent will trigger something I can work with.
According to QML bindings and so, the resize of the parent should trigger something in my class... But alas... that does not happen.
When resizing the parent (without touching SplitView handle) is works ok, but after touching the SplitView handle, it does not.
Apparently my anchors.fill parent does not trigger anything?
When parent changes, what do I have to do?
This is probably a BUG too, right?
When parent changes, the SplitView should be signaled.
[Edit: Merged the other very short postings into this one -- @Wieland]
- SGaist Lifetime Qt Champion
Hi,
This is a community driven forum. If you want to get in touch with the developers of Qt you should post on the interest mailing list.
If you think you may have found something, then you should also check the bug report system to see if it's something known.
- Wieland Moderators
Hi! I've prepared a small example. The main point here is that using the handle breaks the binding to the rectangle's height property, so it must be restored after resizing.
main.qml
import QtQuick 2.7
import QtQuick.Controls 2.0
import QtQuick.Layouts 1.0

ApplicationWindow {
    id: window
    visible: true
    width: 640
    height: 480
    title: qsTr("Hello World")
    About {
        anchors.fill: parent
    }
}
About.qml
import QtQuick 2.7
import QtQuick.Controls 1.5
import QtQuick.Layouts 1.3

SplitView {
    id: splitView
    orientation: Qt.Vertical

    // set initial ratio
    property real ratio : 0.33
    // usually you'd use data from your backend object for that
    // property real ratio : backend.ratio()

    onRatioChanged: {
        // inform backend about changed ratio
        // backend.setRatio(ratio)
    }

    onResizingChanged: {
        if (!resizing) {
            // update height
            ratio = topRect.height / splitView.height
            // restore binding
            topRect.height = Qt.binding(function() { return splitView.height * splitView.ratio })
        }
    }

    handleDelegate: Rectangle {
        height: 4
        color: "red"
    }

    Rectangle {
        id: topRect
        color: "pink"
        // initialize height. note that this is only valid until the handle is used.
        // using the handle breaks that binding
        height: splitView.height * splitView.ratio
    }

    Rectangle {
        id: bottomRect
        color: "plum"
        Layout.fillHeight: true
    }
}
Source: https://forum.qt.io/topic/72395/resize-problem-at-the-core-of-qml
Next Previous Suggestions
I noticed that many people have tried to create a next/previous links in their articles. I was able to do it although the way I did it might not be the prettiest in town.
I used the extra1 and extra2 tags provided for an article.
I used extra1 as my previous and there I typed in the name of the page for the previous. I left it blank for the first page. I used extra2 as my next. Here is the code that I put in the template
<p>{$ if nonblank .extra1 $}<A href="{$.extra1$}">Previous</A>{$ endif $} {$ if nonblank .extra2 $}<A href="{$.extra2$}">Next</A>{$ endif $}</p>
This way, on the first and last page, the previous and next links don't show up.
Now that I think about it, it may have been easier to insert a link with the insert link button in these fields.
Would be nice if CityDesk had those variables next/previous and by default they would come up some way (maybe you could specify the default way somehow) and then you could modify it for each page making the trail whatever you felt like it.
On a somewhat similar topic, it would also be nice to be able to browse through the articles from the article editor. I am finding that sometimes I want to ensure that formatting is consistent across articles (eg for variables) and that having to close, select and open is rather cumbersome.
S Marcotte
Sunday, March 31, 2002
That's a nice trick for Next and Previous.
The main reason that we haven't implemented a scripting way to do it is simply because it's a very complicated problem, and we haven't been able to figure out a general solution.
Joel Spolsky
Tuesday, April 2, 2002
As far as browsing through articles - would global search and replace get you at least part of what you want? Or an interface that let you browse the articles, but forced you to work in HTML view?
Mike Gunderloy
Tuesday, April 2, 2002
It would be very nice to browse articles without previewing. As in FrontPage: If you Ctrl-click on a link, you'll open the linked page. Makes it very easy to see if you are linking to the right article and makes it easier to edit several articles at the same time.
tk
Tuesday, April 2, 2002
Mike,
I think they might mean browse as in a back and next arrow to go through all the articles, instead of having to open and close each article in its own window... so one window would open with the article in it, and if you hit the next arrow that window would then show the next article, etc...
Michael H. Pryor
Wednesday, April 3, 2002
Yep, I got that. Just trying to figure out what tools could be written from an external point of view that might help solve the problem. I can easily do an article browser -- but rendering anything other than the raw HTML would force me to deal with the DHTML edit control, and you guys are already having ENOUGH fun with that.
Mike Gunderloy
Wednesday, April 3, 2002
True, true. The DHTMLEdit control is not worth the effort.. Although maybe a "preview" would work just as well.. Source is editable, Preview is web browser control...
Michael H. Pryor
Thursday, April 4, 2002
Source: https://discuss.fogcreek.com/CityDesk/2848.html
Greetings my fellow monks,
I have recently been given the opportunity to review a software project that was written 3 or 4 years ago. The original author has since departed, but the project looks promising, and fortuitously (for me) it was written in Perl. I've been asked to look through it and document it so that we may be able to adapt it later.
The project is composed of a series of Perl scripts, using Bash scripts as wrappers to call them and pass data between them. The original author wrote several modules to handle various program tasks, all of which appear to be (to my supposed ability, such as it is) well-written, though using Perl coding practices that seem older (e.g. using || die instead of or die after two-arg open statements).
The two modules don't really feel like discrete entities -- almost as if they are catch-all modules instead of serving a well-defined function.
What I'm wondering is whether this is considered good Perl coding practice? I realize there are several ways to do things in Perl, but I'm trying to gauge whether this is something I would want to do or if there are better ways to do it -- and by it I mean share configuration data between several Perl scripts that do not run concurrently.
My review is not complete at this point, so take into account that I do not completely understand all the code yet, but I believe I've reviewed the majority of it.
What I'm wondering is whether this is considered good Perl coding practice?
I think you're asking the wrong question. You should consider asking (yourself) the following:
Given this is an existing project, I'm gonna assume the answer is yes.
So then you have to ask yourself: what are the risks involved with changing it?
Reading between the lines of your description, I'm gonna assume the answer is no.
If you need to change the value of an existing configuration parameter, you go into the .pm file and edit it.
If you need to add a new one, you edit the .pm file and add it.
That doesn't sound particularly hard to me. In fact, it sounds identical to the process of maintaining a configuration file in some other format--say YAML.
But--and here is the crux of my argument--when you've made changes, validation is as simple as:
perl -c configModule.pm
And if the editor forgets to validate, the error will be detected immediately at program startup, as a Perl syntax error, with the clarity of Perl's syntax error messages.
Ask yourself:
How good are the equivalent error messages for your prospective config file alternative?
How good is the beta test cycle of your prospective config file format parser (say YAML), compared to that of Perl parser itself?
Many will condemn the existing mechanism as "crude". But crude is often a euphemism for 'simple'. And there is very little wrong with simple.
Changing a module is not always trivial. Where I work it involves creating a document requesting permission for a change, a process which normally takes a week or two. Of course there are emergency requests you can process without outside discussion, to handle things that break at 2 am, but the discussion of why it was needed simply follows the change rather than precedes it. Then you have to unlock the source project, check out the file, change it, check it in, and distribute the changes to the production file system.
As Occam said: Entia non sunt multiplicanda praeter necessitatem ("entities must not be multiplied beyond necessity").
If you have configuration files, that drive the actions and correctness of your applications, that are subject to less strict procedures than your source files, you have a gaping hole in your process.
The important question is ... is changing a config file in a different format any easier?
Jenda
Enoch was right!
Enjoy the last years of Rome.
The project is composed of a series of Perl scripts, using Bash scripts as wrappers to call them and pass data between them.
Finally, in the interests of loose coupling and high cohesion, you should strive to make it clear
which subset of these many configuration variables are actually used by each of your modules.
A simple way to do that is to have a module take a hash of attributes in its constructor; the module itself uses those attributes only to get its work done and never peeks at the global package variables.
Personally I would probably have used an external configuration file (I am a fan of YAML) but I can see the attraction of having your "global" variables in a separate module under their own namespace.
whether this is considered good Perl coding practice?
It's bad practice and generally shows a lack of maturity in the developer which is why it seems to be pretty common. Though, as CountZero said, it depends.
The sniff test is something like-
I'm not sure it's necessarily bad practice. If it's user-level configuration, there should probably be a user-level interface to change it. If it's really more a set of constants gathered together similar to a C header file that may change but only rarely and at the hands of a programmer, then I think a module is a pretty good idea.
Package variables or constants can be very handy. Since it's older, it might not have been written with the benefits of 'our', or the author may have wanted to be extra clear on where those values reside. IMO there's not much reason to bring in YAML, JSON, XML, Windows-style INI files, or some other non-Perl language to parse if they are just used as common constants unlikely to change.
If the end user is not a programmer and is being expected to change the modules, that's asking for some level of frustration. That frustration could spread to many people by the end of it, too. There should be a well-documented interface from the user interface to do user-level configuration. Whether that's a very simple .rc file, command-line options, a separate configuration program, a menu in the main program, or whatever, it shouldn't be in the code if a less technical end-user is meant to change it. That's not always the case, though.
A lot would depend on who would be changing the configuration. If only programmers, then putting it in a common module would make things easier; they would already know the syntax. If users were changing the configuration, then having an interface to validate their input would be best. Turning them loose with nothing but a text editor in your files is nothing but a recipe for disaster. But then you could have your interface rewrite the configuration files with something like Data::Dumper. Just remember that not using Perl modules for configuration means everyone has to learn yet another syntax.
Best Practice? No. Unacceptable? No, as long as these modules only have configuration data, and are marked as configuration files in the packages that they come from.
Purists would say configuration information should never live in a module; if you're writing something new from scratch, I'm a fan of YAML, but not everyone likes it. For something that already exists, configuration info in a module is fine.
Alex / talexb / Toronto
"Groklaw is the open-source mentality applied to legal research" ~ Linus Torvalds
Some projects do use “executable code” to hold configuration data. (Sometimes, the modules are programmatically generated.) My wisdom is, “if it works now, don’t change it now.” Pick your battles carefully, like a good triage nurse at an E.R. You might not like the way it was done, but it does not appear to be screaming in pain...
TMTOWTDI... and only two that count: the one that works, and the other one that also works.
You could, cautiously, say that such a design is “elegant,” if the design that you are stuck with is that “there are a bunch of separate scripts.” (Whether or not you think that having “there are a bunch of separate scripts” qualifies as “elegant” is beside the point.) All of the configuration options become available with one simple use statement. No separate parsing is required: the .pm file is the configuration file. If the options, once set, almost never ever change, that works.
What I'm wondering is whether this is considered good Perl coding practice?
Let's answer the second question first. It's not my first choice. I prefer having a single module provide configuration settings that extend the scope of a single file, but then I'd rather import variables (or constant subs) into my namespace. Or sometimes a hash with all the settings (like Config.pm). For large projects, I may group the settings, allowing them to be imported by tags (Fcntl.pm does this as well).
For the first question, it depends. For some settings, it makes more sense to store them outside of the code - for instance, settings that may vary from machine to machine (you may have a webserver farm, not all of them connecting to the same physical database, so you may want to store connection settings in a config file whose content varies from machine to machine; or you have a user application, allowing each user their own settings - think of the preferences file of your browser). Other settings you really do not want to be configurable in a separate file. Think for instance of positions in a bitfield. Or the value of π. Or settings that are calculated, or derived from others.
Who are the intended editors of your config file? Are they just changing values or are they authorized to change/create logic?
Write rights to a .pm file convey more privilege than rights to YAML/.ini type files.
I have worked on a similar project. A nightly cron gathers a lot of data that doesn't change frequently from databases and corporate sites outside our server pool. The cron script rewrites a large module used by many other modules. It frees up the database to deal with more volatile and less commonly accessed data.
It's fast and complex, but modifiable. Not a best practice perhaps, but it does its job well.
I'm not sure which module from CPAN I used (I tried several); I used it for a time, but actually got sick of having to watch out not to override the config files on the production server with the ones from my development machine. And this is especially annoying if you want to use SVN/Git: you can't have production be the working folder and just do "svn up" to update it. You have to export it somewhere else, then copy all but the configuration files.
That also leaves a problem of what to do when you have to update the config with some new variables and such.
So then I actually started doing exactly what you wrote - having a Config.pm file holding just the config values. Depending on the need it's either just a couple of exported globals (if it's just a one Perl file script I don't even have a module - I just set the values in the script itself), or a dummy function returning just a hash(ref) with all the config values.
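A minimal sketch of that pattern (the module and key names here are mine, not from the thread): plain Perl data under its own namespace, syntax-checkable with `perl -c` as noted above.

```perl
package MyApp::Config;
use strict;
use warnings;
use Exporter 'import';

# Let callers import the hash by name if they want to
our @EXPORT_OK = qw(%CONFIG);

# Plain Perl data: no extra parser, validated by `perl -c MyApp/Config.pm`
our %CONFIG = (
    db_host => 'localhost',
    db_port => 5432,
    log_dir => '/var/log/myapp',
);

1;
```

A script would then do `use MyApp::Config qw(%CONFIG);` and read `$CONFIG{db_host}`.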
Of course, the trick is to have two (or more) sets of configs and figure out which one you want with the help of Sys::Hostname and its "hostname" function.
That way I have all the configurations for all the locations where the code would run - all in one place, one .pm file. And I get to just "svn up" (or "svn export" if I can't afford .svn directories/files all around) on the production machine, and don't have to think about which files I shouldn't copy. And there is another plus: in case I (dirtily but quickly) fix an error on the production server, I can just commit it back to the repository.
I guess that could be a bad thing if you had a lot of servers - but even then you could make a module load a config file based on hostname - or throw out an error if it can't.
Oh, and you can also change the @INC path that way - though I try to have everything relatively placed (so "use lib './PerlLibs';" would work no matter what the server) or configure Perl for the whole server. I've also run into occasions when this was needed:
#!/usr/bin/perl
use strict;
use warnings;
BEGIN {
    use Sys::Hostname;
    my $hostname = hostname;
    # Note: a plain "use lib" in both branches would add both paths at
    # compile time regardless of the condition, so import at run time:
    require lib;
    if($hostname eq 'devserver'){
        lib->import('/foo/bar');
    }
    else {
        lib->import('/bar/foo');
    }
}
# ... And Some/Module.pm being found in those paths set in BEGIN block
use Some::Module;
One general comment I’d make, in any case, is that you should also consider how these configuration options will be queried and used, no matter how you store them. I like to build a Settings.pm module (say...) which contains “the code to answer any question that this application might need to ask about / using its configuration settings.” This module also encapsulates the process of determining what the configuration settings are. So, if the settings come from a database or an external file or what-have-you, this module will contain the logic to get them from that source. Likewise, if the settings (or some of them) come from a Perl module that is used, this module contains that use statement (and does not contain the settings statements themselves).
My Settings.pm module is, as is usual for me, “very suspicious.” It not only gets the settings (by whatever means), but also thoroughly examines them. All of the clients of that module, not only know where to go for the answers that they need, but also have some reason to believe that those answers will be correct. Most of these checks are encapsulated in an initialization-routine that is called by the application at startup time.
About Designing apps for Big Screen Windows Phone
This article explains almost everything about designing apps for big-screen devices like the Nokia Lumia 1520, and about optimizing existing apps for these devices.
Windows Phone 8
Note: This is an entry in the Nokia Imaging and Big UI Wiki Competition 2013Q4.
Introduction
Recently, Windows Phone Update 3, also known as GDR3, was introduced. It adds support for 1080p resolution and large screen displays, so new Lumias such as the Lumia 1520/1320 launched with big 6-inch displays. The Lumia 1520 comes with a 1080x1920 resolution and the Lumia 1320 with 720x1280, both with 16:9 aspect ratios.
Because of this 1080p resolution and big-screen support, designing apps for such phones has changed a little. Existing apps that run on smaller 4-inch devices and don't support 720p resolution need to be optimized for large screen phones.
In this article, we will go through all aspects of designing and optimizing apps for 6-inch large screen phones.
Supported Resolutions in GDR3
The table below describes the supported resolutions, aspect ratios, and scaled resolutions for Windows Phone 8 GDR3. On a 1080p screen, the ScaleFactor property returns 1.5, although the real scale factor is 2.25, which is used internally by the Windows Phone 8 OS.
The table also shows how the start screen appears on phones with different resolutions. (A 720p device can have a 2-column start screen layout too, as on the HTC 8X, or a 3-column layout, as on the Lumia 1320.)
How existing apps will look at 1080p resolution
Windows Phone 7 and Windows Phone 8 apps behave differently at 1080p resolution.
Windows Phone 7 Apps
Windows Phone 7 apps were designed for a 15:9 aspect ratio and don't scale perfectly to 16:9, so when a WP7 app runs on a 1080p device, a black ribbon is displayed below the system tray at the top of the app, as shown in the image below.
To remove this black ribbon on 1080p devices, WP7 apps need to be ported to Windows Phone 8 with 720p resolution support.
Windows Phone 8 Apps
Windows Phone 8 apps that support 720p resolution are automatically up-scaled to 1080p and display full screen, without the black ribbon. In the GDR3 update, 1080p support is implemented as up-scaled 720p, so an app must support 720p to run at 1080p without the ribbon.
To support 720p, an app must declare it in the application manifest file WMAppManifest.xml, inside the ScreenResolutions tag, as shown below. Note that there is no dedicated value for 1080p; the 720p entry covers both 720p and 1080p.
<ScreenResolutions>
<ScreenResolution Name="ID_RESOLUTION_WVGA"/>
<ScreenResolution Name="ID_RESOLUTION_WXGA"/>
<ScreenResolution Name="ID_RESOLUTION_HD720P"/> <!-- This is also used for 1080p -->
</ScreenResolutions>
Below you can see an app that supports 720p running on devices with 720p and 1080p resolution. Note from the second screenshot that the app screen is bigger and shows more content.
Testing Application and 1080p Simulator
Currently there is no 1080p simulator released for testing and verifying how an app looks at 1080p, but you can test and verify your app's layout at 720p in the current emulator. We can also branch per resolution by checking the Microsoft.Devices.Environment.DeviceType and App.Current.Host.Content.ActualWidth APIs.
Microsoft will soon release a Windows Phone GDR3 SDK update that includes a 1080p emulator.
Optimizing Existing Apps for 1080p resolution
Why do we need to optimize our apps?
Phones supporting 1080p resolution have bigger screens, 6 inches or greater, so an app can display more content than on phones with smaller screens. Per the User Experience Design Guide, the user should be able to navigate easily and the app should cover most of the screen area.
To make the UX better on these big-screen phones, we need to optimize the following:
- UI layout
- Showing more content in the content area
- Making navigation easier
- Graphics, to look good on a bigger screen
- Image sizes and resolutions
- Video sizes and resolutions
- Splash screen
- Fonts
Below are two techniques for optimizing existing apps for 1080p resolution. You can also follow these steps when creating apps for all resolutions.
- Resolution-dependent layout - detect the device's resolution and load different assets
- Dynamic layout - design an auto-scaling layout
1. Resolution-Dependent Layout (by detecting the device's resolution)
In this approach, we first detect the device's scale factor using App.Current.Host.Content.ScaleFactor, from which we recognize the resolution of the device - WVGA, WXGA, or HD (720p/1080p) - and load the appropriate assets for that resolution: UI layout, graphics, images, videos, fonts.
The helper class below returns the current resolution of the device. Using it, we can load assets per resolution as needed.
ResolutionHelper.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ResolutionDependant
{
public enum Resolutions { WVGA, WXGA, HD };
public static class ResolutionHelper
{
private static bool IsWvga
{
get
{
return App.Current.Host.Content.ScaleFactor == 100;
}
}
private static bool IsWxga
{
get
{
return App.Current.Host.Content.ScaleFactor == 160;
}
}
private static bool IsHD
{
get
{
return App.Current.Host.Content.ScaleFactor == 150;
}
}
public static Resolutions CurrentResolution
{
get
{
if (IsWvga) return Resolutions.WVGA;
else if (IsWxga) return Resolutions.WXGA;
else if (IsHD) return Resolutions.HD;
else throw new InvalidOperationException("Unknown resolution");
}
}
}
}
Using this helper class, we load the respective content. In the class below, we first get the resolution and then set content such as images or videos, or navigate to another page layout that has the images, videos, page links, etc. for that particular resolution. In this sample, we just show the detected resolution on the app's screen.
Reference.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ResolutionDependant
{
public class Resolution
{
public string BestResolutionImage
{
get
{
switch (ResolutionHelper.CurrentResolution)
{
case Resolutions.HD:
return "720/1080. Load 720p Assets or page layout.";
case Resolutions.WXGA:
return "WXGA. Load 768x1280 res Assets or page layout.";
case Resolutions.WVGA:
return "WVGA. Load 480x800 res Assets or page layout.";
default:
throw new InvalidOperationException("Unknown resolution type");
}
}
}
}
}
Here are some screenshots of a sample app created to illustrate this resolution-based optimization technique.
Source code: you can download the source code of this simple resolution-detection app here: Media:ResolutionDependant.zip
2. Dynamic Layout (design an auto-scaling layout)
In this optimization technique, we first check whether the device has a big screen, and based on that we set the style theme of the page and/or app to show a different layout on bigger-screen devices.
Windows Phone 8 Update GDR3 introduces three new resolution-related properties, which can be read using the DeviceExtendedProperties API. The new properties are:
- PhysicalScreenResolution - Width / height in physical pixels.
- RawDpiX - The DPI along the horizontal of the screen. When the value is not available, it returns 0.0.
- RawDpiY - The DPI along the vertical of the screen. When the value is not available, it returns 0.0.
Below we will see how to use the dynamic layout technique to optimize apps for bigger screens.
Detecting bigger-screen devices - below is code that uses the new properties above to determine whether the screen is big.
using Microsoft.Phone.Info;
using System;
using System.Windows;
namespace DynamicLayout
{
    public static class ScreenSizeHelper
    {
        // Sketch: treat a physical diagonal over roughly 5 inches as a big screen.
        public static bool IsBigScreen()
        {
            object physicalResolution;
            if (!DeviceExtendedProperties.TryGetValue("PhysicalScreenResolution",
                                                      out physicalResolution))
                return false;
            Size resolution = (Size)physicalResolution;
            double rawDpiX = (double)DeviceExtendedProperties.GetValue("RawDpiX");
            double rawDpiY = (double)DeviceExtendedProperties.GetValue("RawDpiY");
            // RawDpiX/RawDpiY return 0.0 when the value is not available
            if (rawDpiX <= 0 || rawDpiY <= 0)
                return false;
            double diagonalInches = Math.Sqrt(
                Math.Pow(resolution.Width / rawDpiX, 2) +
                Math.Pow(resolution.Height / rawDpiY, 2));
            return diagonalInches > 5.0;
        }
    }
}
Using the helper class above, we first determine whether the device has a big screen; if so, we set the theme style created for bigger screens to show more content. Below is code that uses the helper and sets the style.
We call the method above in the App's constructor.
public partial class App : Application
{
public static PhoneApplicationFrame RootFrame { get; private set; }
/// <summary>
/// Constructor for the Application object.
/// </summary>
public App()
{
UnhandledException += Application_UnhandledException;
InitializeComponent();
// Load style for bigger screen
StyleSelector.SetStyle();
InitializePhoneApplication();
InitializeLanguage();
}
This way we can apply a dynamic layout by setting a theme style for bigger screens at runtime, after checking the device screen size.
To get more information, check out the Nokia Developer Library article on Optimizing Layout for Big Screen.
Sample of Dynamic Layout (Nokia Developer example)
The Nokia Developer Library provides a sample of dynamic layout in which the data item is restyled using different style settings at runtime after detecting the screen size.
We have shown above how to detect the screen size and set a different theme style at runtime; below is an example using it.
The sample's XAML file has a Grid with an Image control in it. Different styles are set on these controls at runtime.
<DataTemplate x:
<Grid Margin="12,6,12,6" Style="{StaticResource GridItem}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto" />
<RowDefinition Height="Auto" />
</Grid.RowDefinitions>
<Image Grid.
<TextBlock Grid.
</Grid>
</DataTemplate>
They then created two different theme styles to apply to the controls above.
First, the theme style for smaller screens: SampleDataItemStyleDefault.xaml
<ResourceDictionary xmlns=""
xmlns:
<!-->
</ResourceDictionary>
You can see in SampleDataItemStyleDefault.xaml above that they added four styles:
- NormalText
- LargeBoldText
- GridImage
- GridItem
They want to optimize text size and layout for large screen phones by overriding the default styles with large-screen ones. Below is the style theme for the larger screen:
<ResourceDictionary xmlns=""
xmlns:
<!-->
</ResourceDictionary>
They then check the screen size using the ScreenSizeHelper class explained above and, in the big-screen case, override the styles at runtime using Application.Current.Resources.MergedDictionaries.Add:
- NormalText - in the large-screen style, NormalText has been removed; only LargeBoldText is applied.
- LargeBoldText - in the large-screen style, the text size of LargeBoldText has been reduced to 22, while it was 32 in the small-screen style. The size was reduced because text renders bigger on the larger screen and should be adjusted accordingly.
- GridImage - this style key changes the width and height of the Image control.
- GridItem - this style key changes the width of the Grid control so that we get more columns on the bigger screen than on the smaller one.
Nokia Developer Example Screenshot:
Some UI design tips for dynamic layout
- Provide high enough resolution assets to avoid up-scaling of low-resolution images: up-scaling becomes more apparent on large screens.
- Don't give a fixed width or height to any control.
- Don't give a fixed margin to any control, as it would differ across screen sizes.
- When dropping a control from the toolbox, check that it hasn't been given a fixed margin, etc., and remove it if so.
- Create different style themes for different screen sizes, which can be set at runtime.
- Try to use as much room as possible on the bigger screen and show more content.
- Don't use fixed font sizes.
Splash Screen
One common splash screen image for all resolutions
To display a single common splash screen for all resolutions, use one image file named SplashScreenImage.jpg that is 768 × 1280. The phone automatically scales the image to the correct size for each resolution.
Pixel perfect splash screen for all resolutions
If you want to provide pixel-perfect splash screens for all resolutions, you can add images with the following file names. All splash screen images must be in the root folder of your app project.
- WVGA Resolution: SplashScreenImage.Screen-WVGA.jpg
- WXGA Resolution: SplashScreenImage.Screen-WXGA.jpg
- 720p Resolution: SplashScreenImage.Screen-720p.jpg
- 1080p Resolution: SplashScreenImage.Screen-720p.jpg
Note that for the 1080p splash screen we provide the same 720p image: there is no need for a dedicated 1080p image, and the phone will default to the 720p splash screen image on 1080p phones.
Lock Screen
If your app sets the lock screen, it is recommended to use a high-resolution image: it will be automatically scaled down for smaller-screen devices, whereas up-scaling a lower-resolution image decreases quality and makes it blurry and noisy.
If your app creates the lock screen image at runtime with fonts or images in it, I would suggest creating it with optimized images and fonts by detecting the resolution and/or screen size at runtime.
Tiles
For tiles, we only need to include tile images for WXGA resolution. The phone automatically scales tile images to the correct size for WVGA, 720p, and 1080p screens, so there is no need to optimize or change anything.
You can get more information regarding Tiles Design for bigger screen here in Nokia Developer Library.
App icons / list icons
As with tiles, for app icons we only need to include WXGA-resolution images. The phone automatically scales the icons to the correct size for WVGA, 720p, and 1080p screens, so there is no need to optimize or change anything.
You can get more information regarding App Icon Design for bigger screen here in Nokia Developer Library.
Fonts
System fonts are automatically scaled by a factor of 2.25.
You can get more information regarding the Fonts Sizes for bigger screen size phones on Nokia Developer Library.
Graphics
Graphics are automatically scaled up or down on a bigger screen, but it is better to use high-resolution graphics: they will be scaled down cleanly for lower-resolution screens, whereas low-resolution images scaled up for bigger screens become blurry, noisy, and low quality.
You can get more information regarding the Graphics for bigger screen size phones on Nokia Developer Library.
Example 1: Picture Hub App
Below are screenshots of the Picture Hub optimized for bigger 1080p devices like the Lumia 1520 and 1320. Note that the 2-column grid in the first image becomes a 3-column grid on the bigger screen, covering more room and showing more content. Note also that the image sizes are reduced on the bigger device for optimization.
The 4-column grid on smaller phones remains 4 columns on bigger phones, but the image size and the total number of items shown on screen change for bigger screens.
Example 2: People Hub App
The People Hub for bigger screens has also been optimized to show more content. You can see from the screenshots below that the 2-column grid on the smaller screen becomes 3 columns on the bigger screen; the 4-column grid remains 4 columns, but the size of the items shown has been reduced.
Example 3: Nokia Music Explorer
Nokia Developer recently updated its code example project, Music Explorer, optimizing it for bigger-resolution screens with a 3-column layout to get more benefit from large screens like the Lumia 1520. This code example shows best-practice optimization for higher-resolution screens.
You can get more information from the Nokia Developer Library, and download the source code of Music Explorer from CodeExample:Music Explorer.
Here are screenshots of the old 2-column layout and the new 3-column layout for 1080p, inspired by the new start screen.
Source code
Resolution Dependant
Here you can download the source code for the resolution-dependent technique, which will help you optimize your app: Media:ResolutionDependant.zip
Dynamic Layout Sample
Nokia Developer has provided sample source code for dynamic layout. You can download it from GitHub; it will help you implement dynamic layout in your app.
References
Here are some helpful links:
Optimising for large screen phones
Dynamic Layout Sample
Multi-resolution apps for Windows Phone 8
Lumia App Labs: Episode 17: Optimising apps for large screens
Aspect ratio considerations
Design considerations
ValueError: math domain error
While working with mathematical functions in Python, you might come across "ValueError: math domain error". This error is usually encountered when solving quadratic equations or taking the square root of a negative number.
You can avoid this error by providing valid values to the math functions - ideally, avoid passing negative values where they are undefined.
Let us look at some examples where the error might be encountered.
Example 1: Square Root of Negative Number
We can calculate the square root of a number in Python by importing the sqrt function from the math module. But what if a user enters a negative number?
Will it throw an error, or will we get the desired output? Let's understand it with a few examples.
from math import sqrt

# Initialising the variable 'num'
num = float(input("Enter number :"))

# Square root of num
print("Square root of given number :", sqrt(num))
Output:
Enter number :12
Square root of given number : 3.4641016151377544

Enter number :-12
Traceback (most recent call last):
  File "sqr.py", line 5, in <module>
    print("Square root of given number :", sqrt(num))
ValueError: math domain error
If num is less than 0, this code throws the math domain error shown above.
Solution:
We can either handle the ValueError with a try/except block or import sqrt from the cmath library. Let's discuss both.
Method 1: Using Try and Except Block for Handling the Error.
from math import sqrt

# try block for code to be tested
try:
    # Initialising the variable 'num'
    num = float(input("Enter Number :"))
    # Square root
    print("Square root of given number :", sqrt(num))
# except block if error is raised
except ValueError:
    print("Please enter a number greater than zero ")
OUTPUT :
Enter Number : 12
Square root of given number : 3.4641016151377544

Enter Number : -12
Please enter a number greater than zero
In the code above, when we enter a positive value we get the desired output. But when we enter a negative value, it throws "ValueError: math domain error".
To handle the ValueError, we use a try/except block.
The try block includes the code to be tested.
The except block handles the error by displaying the desired message, which in this case is "Please enter a number greater than zero".
Method 2: Importing Sqrt From "cmath", Which Returns the Square Root of a Negative Number in Complex/Imaginary Form.
# Importing cmath module
from cmath import sqrt

# Input from user
num = float(input("Enter Number: "))

# Square root
print("Square root of given number : ", sqrt(num))
OUTPUT:
Enter Number : 12
Square root of given number :  (3.4641016151377544+0j)

Enter Number : -12
Square root of given number :  3.4641016151377544j
In Method 1 we did not get a result; instead we handled the exception. But what if we want the square root of a negative number in complex form?
To solve this, import sqrt from the cmath module, which gives the result in complex/imaginary form, as in mathematics.
When we import from the cmath module, the result is in complex form, as shown in the output of Method 2.
Example 2: Log of a Negative Number
# Importing log from math module
from math import log

# Initialising the variable 'num'
num = float(input("Enter Number :"))

# Natural logarithm of num
print("Natural logarithm of provided number :", log(num))
OUTPUT:
Enter Number :12
Natural logarithm of provided number : 2.4849066497880004

Enter Number :-12
Traceback (most recent call last):
  File "sqr.py", line 6, in <module>
    print("Natural logarithm of provided number :", log(num))
ValueError: math domain error
In the code above, when we take the log of a positive value we get the desired output. But when we take the log of a negative number, it throws "ValueError: math domain error".
This is because the logarithm of a negative number is not defined for real numbers.
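Both fixes shown for sqrt work for log as well. Here is a minimal sketch (the helper name safe_log is my own) that tries math.log first and falls back to cmath.log to get a complex result:

```python
from math import log
from cmath import log as clog

def safe_log(num):
    """Natural log; falls back to the complex log for negative input."""
    try:
        return log(num)   # real result for num > 0
    except ValueError:
        return clog(num)  # complex result, e.g. log(-12) = ln(12) + pi*j

print(safe_log(12))   # 2.4849066497880004
print(safe_log(-12))  # (2.4849066497880004+3.141592653589793j)
```

As with sqrt, cmath gives the mathematical answer: the log of -12 is ln(12) plus an imaginary part of pi.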
Send
Here’s where the fun starts.
To send emails from your Rails application, you need to:
1. Add email provider credentials to your development.rb (or production.rb if in a production environment). I’ll be using Gmail here.
ProTip: If you’ve enabled two-factor authentication on your account, you need to create an app password to bypass it.
It’ll look something like this:
config.action_mailer.smtp_settings = {
  :address => 'smtp.gmail.com',
  :port => 587,
  :user_name => '[email protected]',
  :password => 'frsghrjdyquftlsh',
  :authentication => :plain,
  :enable_starttls_auto => true
}
2. Add this to your app/mailers/application_mailer.rb. This will send the email.
def send_it(email)
  @email = email
  mail(
    from: email.user.email,
    to: email.receiver,
    subject: email.subject
  )
end
Notice that I have specified the From address as well. Also, I have NOT provided a default_from in ApplicationMailer.
You’ll also have to create an app/views/application_mailer/send_it.html.erb and add this to it:
This prints the body of the email into what is being sent out.
3. I’ve also added the method to trigger email sending to our model. Here’s how it looks:
class Email < ApplicationRecord
  belongs_to :user

  validates_format_of :receiver, with: /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\z/i

  after_create :deliver
  after_update :deliver

  def deliver
    ApplicationMailer.send_it(self).deliver_now
  end
end
I've created simple after_create and after_update callbacks that send the email.
Let's test and see if this works.
I'll create a user by signing up with a swaathi @ skcript email address, and then send an email to swaathikakarla @ gmail.
Let's see what happens.
Hey! It was sent from swaathi16 @ gmail. But, the from section was set to swaathi @ skcript. Why did this happen?
It's because of the credentials in development.rb. We added Gmail credentials belonging to swaathi16 @ gmail. So, no matter what you override, it'll be sent from the email that's tied to the credentials.
This works perfect when you don't want someone masking your identity. But it often sucks when you just want to send an email from another address, with no way of tracking incoming emails. With SendGrid, you can send emails as other addresses, though it's not as scary as it sounds. It'll be appended with “via Domain” text. And, any replies to the email will automatically be sent to the address itself.
Let's see how that works!
1. Sign up on SendGrid
Head over to the signup page of SendGrid, and to test things out, select the free trial. This allows you to send 40,000 emails per day for 30 days.
2. Head over to the Whitelabels section
After that, visit the Settings section in the sidebar, and then click on the Whitelabels link from the dropdown.
3. Add a Domain
Click on the Add Domain button and fill in the form. You need to do this so that users who receive an email that your app sends will be shown the location of origin. So even if a user on your app has [email protected], it'll get sent from his/her email, but will also include text that says “via DomainName.”
You'll see “via” and a website name next to the sender's name if the domain it was sent from doesn't match the domain in the From: address. (For example, you got an email from [email protected], but it could've been sent through a social networking site and not Gmail.)
In the form, you'll have to enter a subdomain and a domain you'd like to send emails through. I suggest you create a new subdomain, so that you don't run into any weird conflicts. (In fact, it shouldn't even exist at this stage.)
4. Add to your CNAMES registry
Once you add a domain, you'll be taken to a page that looks like this:
All you have to do is navigate to your hosting (like GoDaddy) or CDN (like Cloudflare) provider, whichever manages your domain. You can then take the three subdomains SendGrid gives you and map them to your website.
It should take about a minute or more to reflect the subdomain changes on the DNS.
Then, head back to your SendGrid page and click on the Validate Record button, and it should look like this now:
Three tick marks! Yay!
... If not, wait awhile for your DNS to get updated and try again.
5. Add SendGrid credentials to your app
We're almost there. All you have to do now is add SendGrid credentials to your `development.rb` (or `production.rb`). It will look something like this:
config.action_mailer.smtp_settings = {
  :user_name => 'your-sendgrid-username',
  :password => 'your-sendgrid-password!',
  :domain => 'your-sendgrid-domain',
  :address => 'smtp.sendgrid.net',
  :port => 587,
  :authentication => :plain,
  :enable_starttls_auto => true
}
And that's it! Five simple steps.
Let's try it out on the app and see how it looks now.
See the “via”? (Best part? It shows your domain, not SendGrid.)
That's it! You can now send emails from custom domains.
You can find all the code I used here, and the Rails app on GitHub.
|
https://sweetcode.io/emails-custom-domain-sendgrid-rails/
|
CC-MAIN-2020-34
|
refinedweb
| 855
| 76.62
|
I’m having trouble trying to split a list into even and odd numbers with the variables odd and even representing their respective numbers.
The professor noted that this line of code:
odd, even = foo([1,2,3,4,5,6], lambda x : x % 2 == 0)
Should split the numbers into odd and even. How do I do something like this? I know how to filter between odd and even numbers, but I’m unsure of how to set two variables in one line equal to their respective parts.
In your example, foo is a function that returns a pair of values. For example:

def foo():
    a = 1
    b = 2
    return (a, b)

x, y = foo()  # x is now '1', and y is now '2'
So you need to create a function that iterates over the input list and appends each element to either an odd list or an even list, then returns both lists, as in the example above.
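Putting the two pieces together, here is one way foo might be written - assuming, from the professor's line, that the first returned list holds the elements that fail the predicate and the second holds those that pass it:

```python
def foo(values, predicate):
    """Split values into two lists: (fails predicate, passes predicate)."""
    fails, passes = [], []
    for value in values:
        if predicate(value):
            passes.append(value)
        else:
            fails.append(value)
    return fails, passes

odd, even = foo([1, 2, 3, 4, 5, 6], lambda x: x % 2 == 0)
print(odd)   # [1, 3, 5]
print(even)  # [2, 4, 6]
```

With the even-number lambda from the question, the elements that fail the predicate are the odd numbers, so odd and even unpack in the expected order.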
Cablegate: Daily Summary of Japanese Press 07/05/07
VZCZCXRO3405
PP RUEHFK RUEHKSO RUEHNAG RUEHNH
DE RUEHKO #3073/01 1860813
ZNR UUUUU ZZH
P 050813Z JUL 07
FM AMEMBASSY TOKYO
TO RUEHC/SECSTATE WASHDC PRIORITY 5 4304
RUEHFK/AMCONSUL FUKUOKA 1884
RUEHOK/AMCONSUL OSAKA KOBE 5466
RUEHNAG/AMCONSUL NAGOYA 0988
RUEHKSO/AMCONSUL SAPPORO 2693
RUEHBJ/AMEMBASSY BEIJING 7728
RUEHUL/AMEMBASSY SEOUL 3787
RUCNDT/USMISSION USUN NEW YORK 4878
UNCLAS SECTION 01 OF 10 TOKYO 003073 07/05/07
Index:
(1) Koike assumes one key post after another owing to "keen sense of
(political) smell," arousing jealousy of lawmakers eager to join
cabinet
(2) Kyuma remarks expose gap in Japan's aim and reality
(3) Kyuma remarks and nuclear policy: Japan must stop relying on
nuclear deterrent
(4) In the aftermath of base-hosting municipality's opposition to US
military realignment
(5) Comfort women issue: JCP Chairman Shii urges Prime Minister Abe
to apologize to the world
(6) Comfort women issue remains unresolved
(7) Upside-down flag at Okinawa International University; UK
associate professor calls action an SOS signal; University president
orders stop to "criminal infringement"
ARTICLES:
(1) Koike assumes one key post after another owing to "keen sense of
(political) smell," arousing jealousy of lawmakers eager to join
cabinet
TOKYO SHIMBUN (Page 24) (Full)
July 5, 2007
The first female defense minister in history celebrated her
fifty-fourth birthday yesterday. This is the second cabinet post
given to Yuriko Koike, who has served five terms in the House of
Representatives. Some Liberal Democratic Party (LDP) members who are
yearning for a cabinet post are overheard saying, "I wonder why only
Koike has been treated favorably." But such lawmakers first should
learn from her how to get along in the political world.
In a press conference she gave after assuming the top defense post,
Defense Minister Koike countered an attack against Democratic Party
of Japan (Minshuto) President Ichiro Ozawa, who has stepped up his
criticism of the Abe administration.
Koike said: "I know best about Mr. Ozawa's defense policy. In
Minshuto, (views over defense policy) are split. Ozawa should
announce not his own ideals but the party's policy. Unfortunately, I
have to return (his criticism) to him." The reason why she had to
"unfortunately" denounce the leader of the main opposition party is
because she moved from party to party.
After graduating from Cairo University, Koike served as an
anchorwoman for the TV Tokyo program, "World Business Satellite." In
1992, she ran as a candidate backed by the Japan New Party in the
House of Councillors election, ranked 2nd, following party head
Morihiro Hosokawa, among candidates for the party's proportional
representation segment and was elected for the first time.
In 1993, Koike ran in the Hyogo No. 2 constituency of the Lower
House election and won a Lower House seat for the first time. She
joined the defunct New Frontier Party supporting current Minshuto
leader Ichiro Ozawa in 1994. After the party was disbanded in 1997,
TOKYO 00003073 002 OF 010
she became a member of Jiyuto (the Liberal Party). When Jiyuto left
the coalition government in April 2000, she took part in
establishing Hoshuto (the Conservative Party), separating from
Ozawa.
Koike became a member of the LDP in December 2002. In July 2003, she
joined the Mori faction (now, the Machimura faction), from which
Junichiro Koizumi became prime minister in July 2003. She served as
environment minister from September 2003 through September 2006,
during which she pushed for the introduction of the Cool Biz
campaign, a casual business dress code.
In the 2005 general election, Koike volunteered for Koizumi's first
"assassin" position against an LDP lawmaker who voted against postal
privatization bills, changing her constituency from Hyogo to the
Tokyo No. 10 constituency. At that time, Koizumi flattered her by
saying: "You are really courageous, though you are also charming."
When the Abe administration was launched last September, she was
appointed as Abe's special advisor.
Some call her a "migratory bird," focusing on her hopping from one
political party to another. But all of the five political parties to
which Koike once belonged are now defunct. It can also be taken that
Koike is a successful woman who rode out the storm of the
reorganization of the political scene that started in the 1990s.
What is to be particularly noted is that she got in close to the
most influential figures in the political parties to which she
belonged or belongs, such as former Prime Minister Hosokawa, Ozawa,
former Prime Minister Koizumi, and Prime Minister Abe.
The following was a typical success story in the LDP in the past: A
high position is finally awarded to a person who pledged loyalty to
his or her factional boss and steadily dealt with unspectacular work
for decades. Koike's political stance, however, is far from this
style. Her case might be regarded as a new success model.
Kichiya Kobayashi, a political commentator, said: "Ms. Koike has a
keen sense of smell to sniff out who holds the supreme power of the
time. This must be something she was born with." He added: "While
assuming political power for five years and five months, Prime
Minister Koizumi picked himself those with whom he wanted to work,
abolishing the conventional stance of giving priority to a balance
between factions and to seniority. This new approach has now taken
root. In the current political world, lawmakers who have a poor
sense of smell will never be blessed with an important post, even if
they are competent."
Will anyone be promoted to an important post if they improve their
sense of smell? To this question, Kobayashi replied: "If you make
such efforts unskillfully, those around you might take the efforts
as part of trickery and boo you. In such a case, the prime minister
will find it difficult to field you to a key post. If such a sense
of smell is a natural one, though, criticism will not grow louder." It
seems difficult for conventional-type lawmakers to follow Koike's
political stance.
Koike published the book titled, "Ways for women to establish
personal contacts - Success women's passport." Koike might become
the first (prime minister) in (the nation's) history.
(2) Kyuma remarks expose gap in Japan's aim and reality
TOKYO SHIMBUN (Page 2) (Abridged slightly)
July 4, 2007
Japan, as the only country to have suffered nuclear attacks, has
been calling for nuclear disarmament on one hand and has been
enjoying peace under the United States' nuclear umbrella on the
other. (Resigned) Defense Minister Fumio Kyuma's remarks justifying
the United States' dropping of atomic bombs on Japan have exposed
the gap between Japan's goal and its reality concerning nuclear
arms.
In 1967, then Prime Minister Eisaku Sato announced the three
non-nuclear principles of not producing, possessing, or allowing
nuclear weapons into Japan. Since then, all successive prime
ministers, including Shinzo Abe, have repeatedly announced their
determination to uphold the three principles.
Japan has submitted a resolution calling for nuclear disarmament to
the UN General Assembly every year since 1994. They have been
adopted by a majority vote. Japan has also actively called for
nuclear disarmament by, for instance, lobbying other countries to sign
the Comprehensive Nuclear Test Ban Treaty (CTBT).
Relying on the United States' nuclear deterrent leads to
acknowledging the effectiveness of nuclear arms. Other countries are
already aware of such a situation in Japan. For instance, when Japan
protested France's nuclear test in 1995, Paris said: "Japan has been
able to enjoy peace owing to protection by the United States'
nuclear umbrella."
Given Japan's image as blindly following the United States,
justifying the United States' dropping of the atomic bombs would
cause Japan's call for nuclear disarmament to lose its cogency.
Japan's response also remains elusive toward the United States, which
has reached an agreement on civil nuclear cooperation with India,
which has conducted nuclear tests without joining the Nuclear
Nonproliferation Treaty (NPT). In view of the North Korean issue,
some countries have begun referring to Japan's stance as a double
standard.
In 1996, the International Court of Justice handed down its advisory
opinion reading: "The threat or use of nuclear weapons would
generally be contradictory to the rules of international law." With
that in mind, even a senior Defense Ministry official said: "The
defense minister mustn't have made remarks that could be taken as
justifying the use of nuclear weapons." Kyuma's remarks not only
sent shockwaves throughout Hiroshima and Nagasaki but also
undermined Japan's position in the international community.
(3) Kyuma remarks and nuclear policy: Japan must stop relying on
nuclear deterrent
ASAHI (Page 15) (Abridged)
July 5, 2007
By Kiichi Fujiwara, professor of international politics, University
of Tokyo
Both the ruling and opposition parties reacted speedily and
furiously to (the A-bomb) remarks by Defense Minister Fumio Kyuma
(who has since stepped down). His remarks were not based on
historical facts. Everyone rejected the remarks which seemed to have
ignored the suffering of the atomic-bomb survivors, a factor that
takes precedence over historical facts. The strong reaction to
Kyuma's comment has proven that the nation's tragic feeling toward
the bombings of Hiroshima and Nagasaki has not faded. Although calls
for protecting the (peace) Constitution from change have weakened, a
sense of mission to hand down the experience of atomic bombings to
future generations is still shared by all political parties from the
Liberal Democratic Party to the Japanese Communist Party.
But that is not what really matters in this case. Ruling and
opposition party lawmakers highlighted the need for nuclear
disarmament, while slamming Kyuma. But what has the Japanese
government done to eliminate nuclear weapons from the world?
True, Japan since 1994 has submitted to the UN General Assembly a
series of resolutions calling for nuclear disarmament, and they have
been adopted. However, such countries as the United States, India,
Pakistan, China, and North Korea have either opposed those
resolutions or abstained from voting. Nuclear disarmament
resolutions without the support of nuclear powers carry little
significance.
Japan is a country that has called for nuclear disarmament on the
one hand and relied on the United States' nuclear umbrella on the
other.
Whether or not the US nuclear umbrella has really helped the
security of Japan is not clear. But in determining their policies
toward Japan during the Cold War, the Soviet Union and China could
not rule out the possibility of the United States using nuclear
weapons in striking back. It is undeniable that the nuclear
deterrent played a certain role in Asia's international relations.
Japan has been a proponent of nuclear disarmament and a beneficiary
of (the United States') nuclear deterrent at the same time. The
country has been urging the world not to repeat the tragedies of
Hiroshima and Nagasaki, while relying on (the United States')
nuclear arms.
Japan's North Korea policy clearly tells of its dependence on the
United States' nuclear deterrent. In urging North Korea to abandon
its nuclear programs, Japan has relied not only on the United
States' economic sanctions but also on its nuclear deterrent. Japan,
having pursued a hard-line stance toward the North, now finds itself
isolated against the backdrop of US-North Korea bilateral talks.
What can Japan do now? The answer is to incorporate nuclear
disarmament in its set of pragmatic policies and launch an effort
for regional nuclear disarmament.
For Japan to continue seeking only a reduction in the United States'
nuclear arms is insufficient. We will not be able to free ourselves
from our dependence on (the United States') nuclear deterrent unless
nuclear arms in other countries in the region, such as North Korea
and China, are also reduced. In addition to calling for nuclear
disarmament, Japan must draw those countries into nuclear arms
reduction talks, though that will not be easy.
There have been new developments, as well. In January this year,
former US Secretary of State Henry Kissinger and others released a
statement calling for nuclear nonproliferation and disarmament. The
view has also spread that nuclear nonproliferation takes nuclear
powers' efforts to reduce their nuclear arms.
(4) In the aftermath of base-hosting municipality's opposition to US
military realignment
TOKYO (Page 28) (Full)
July 3, 2007
The city of Iwakuni in Yamaguchi Prefecture is finally getting into
a scrape. The city, which hosts the US Marine Corps' Iwakuni Air
Station, has rejected the government-proposed redeployment of US
carrier-borne aircraft to the base in the planned realignment of US
forces in Japan. The government has therefore cut off its
subsidization of the city's new municipal government office building
currently under construction. The city's mayor, Katsusuke Ihara, who
has been opposed to the US military's realignment, proposed a
general account budget for the time being. The city's municipal
assembly somehow approved the mayor's proposed budget plan. However,
the city is still in a difficult position. That is because the government urges
the mayor to accept the US military realignment while taking its
subsidy for the city as hostage. The Tokyo Shimbun reports on the
city in turmoil.
Gov't takes subsidy as "hostage," Iwakuni feeling the pinch
"The mayor has not changed his stance at all. Basically, the mayor
should accept the realignment. And then, the mayor should enter into
consultations with the government."
On June 29, the municipal assembly of Iwakuni City held an ad hoc
meeting, in which the assembly focused its discussion on the pending
issue of accepting US carrier-borne fighter jets. A pro-realignment
member of the city's assembly urged the mayor to think twice about
his stance of rejecting the US military realignment.
The turmoil dates back to 1996 when Japan and the United States
agreed to redeploy air tankers from the US Marine Corps' Futenma Air
Station in Okinawa Prefecture to the Iwakuni base. At the time, the
two countries agreed to return the site of Futenma airfield to
local hands.
Iwakuni City planned to rebuild its municipal government office
building that was damaged in an earthquake. The city asked the
government for its financial backing of the construction project.
The government promised a subsidy of 4.9 billion yen for the
project. "There's no definite contract in written form, but I
reached agreement with a responsible person of the Defense
Facilities Administration Agency," Ihara said.
In October 2005, however, the Japanese and US governments decided on
a plan to redeploy Atsugi-based carrier-borne fighter jets to the
Iwakuni base in the planned process of realigning US forces in
Japan. The carrier-borne jets in question are fighter attackers,
which are far noisier than refueling aircraft. Their number is also
planned to more than double. Their planned redeployment to the
Iwakuni base turned into a big problem that divided the city. In
March 2006, Iwakuni City polled its residents. In that local
referendum, about 90 % of the valid votes were against the
redeployment of carrier-borne jets to Iwakuni. In April that year,
the city held a mayoral election. In that mayoral race as well,
Ihara, who is opposed to the realignment, was elected for a third
term.
The city's voice was shown in the poll. In December 2006, however,
the DFAA cut 2.5 billion yen in its subsidy for the city's new
office building construction project. Meanwhile, voices in favor of
the realignment gained ground in the city's municipal assembly as
well. Instead of asking for government subsidization, the city's
municipal government chose to compile a general account budget in
March and again in June with its idea of issuing special municipal
bonds after Iwakuni City is consolidated with neighboring
municipalities. However, the assembly rejected this idea of finding
ways and means.
The city's municipal government compiled a provisional budget for a
period of three months. This ad hoc budget has now expired. "So,"
one member of the city's assembly says, "even water supply is
illegal." Ihara said, "I can no longer trouble the citizens." The
mayor then revised the budget plan to use a government subsidy as in
the past.
In the special session of the city's municipal assembly, Ihara was
grilled with questions about whether he has changed his mind to
accept the proposed redeployment of carrier-borne jets to the
Iwakuni base. "How can I change my stance in one night? This is not
a problem that I can settle alone." With this, the mayor flatly
denied his change of mind. One pro-realignment assemblyman pursued
Ihara, saying: "That subsidy is not earmarked in the government's
budget, so there's no hope for it." Another assembly member called
it an "empty budget."
In the meantime, an anti-realignment assemblyman defended Ihara,
saying: "They're taking the budget as hostage and trying to persuade
the mayor. Such an approach is unacceptable. The government has
driven the mayor into a corner, so the government is to blame."
The assembly heated up over the proposed budget. However, the
assembly had already agreed behind the scenes to fast-track it.
Toshiyuki Kuwahara, a pro-realignment assemblyman who voted against
the general account budget, backed Ihara, saying: "I'm pleased that
the mayor has now made the political decision to use a government
subsidy. We would also like to make efforts for 3.5 billion yen."
Kuwahara then bowed his head before the mayor. "Thank you very
much," he said. The assembly hall broke into a big round of applause.
"There's no chance of expecting (government) subsidization as long
as the mayor does not change his stance of opposing the redeployment
of US carrier-borne aircraft to the Iwakuni base." For this reason,
two assembly members voted against the budget plan. However, the
remaining 31 members of the city assembly voted for it.
"I could get understanding from the greater part of the assembly
members," Ihara said in a press conference. "It was good." So
saying, he looked relieved.
Budget revised as last-ditch measure, but problem put off
Masayuki Takeda, a pro-realignment assemblyman of Iwakuni City,
voted for the mayor's revised budget plan. Takeda explained the
battle in the ad hoc assembly session: "We voted against the idea of
using special municipal bonds with the consolidation of Iwakuni City
and other municipalities. The mayor has now revised the budget plan
to use a government subsidy, so we want the mayor to go for it. The
mayor has now revised the budget. This can also be taken as the
(mayor's) de facto acceptance of the US military realignment."
Another assemblyman of the city, Jungen Tamura, is opposed to the
proposed realignment of US forces in Japan. Tamura says: "The budget
totals 66 billion yen. This budget has been taken as hostage in its
entirety. Assembly members in favor of the realignment were also
worried about its impact. Japanese have a bad habit of putting off
what is troublesome. That's it."
However, it is still difficult for Iwakuni City to expect government
subsidization. "The mayor doesn't want to nod his head (say yes),"
Takeda said. "Even so," he added, "if the mayor does not shake his
head (say no), that's okay." Takeda went on: "If the mayor shows
understanding on the government's national defense policy, there
will be a chance. If the mayor can't do so, then the bout will enter
round two. There may be even a mayoral election."
Tamura said: "Defense Minister Fumio Kyuma has been saying he feels
sorry (for Iwakuni). There will be an election for the House of
Councillors. In addition, there are some other major factors for the
nation. Given such factors, there could be even more developments.
As long as the mayor remains opposed to the realignment, he's a
headache for the government. The government will compile the
supplementary budget in December, so the next climax will be around
that time."
Ihara is now in the turmoil. "I'm expecting government
subsidization," Ihara said. "When it comes to the realignment of US
forces in Japan," the mayor added, "I will explore a solution that
is convincing not only from the spectrum of the country's national
defense but also from the perspective of our local safety and
security." In June, Ihara met with Defense Minister Kyuma. However,
Ihara will further try to dig out what is unclear about the
realignment of US forces, such as the noise and night landing
practice (NLP) of carrier-borne fighter jets that are known for
their hard training. In May this year, the US Military Realignment
Special Measures Law came into effect. Under this law, the
government will subsidize base-hosting municipalities in stages
according to the degree of their cooperation on the realignment of
US forces in Japan. However, the DFAA says it does not know if
Iwakuni City will be considered under the law.
In addition to the city's municipal assembly, the local chamber of
commerce and industry and the Yamaguchi prefectural government are
also leaning toward accepting the proposed redeployment of US
carrier-borne jets in the process of realigning the US military
presence in Japan.
"The situation is difficult. Some people say, 'The way things are
going, Iwakuni City will go under like Yubari City (in Hokkaido).'
Such mistaken speculation is also going around." With this,
Ihara is also aware that he has been left to hold out on his own.
How does this situation appear in the eyes of local residents? "We
believed that the promised subsidy of 3.5 billion yen would come,"
said a 60-year-old homemaker, who was in the assembly's gallery for
its discussion during the special session. So saying, she criticized
the government for its carrot-and-stick approach. She was upset with
her city's municipal assembly, saying: "If all the assembly members
had supported the mayor, we wouldn't have seen such a situation.
They're split, so the government will take advantage of it."
Even now, metallic sounds last until around 10 p.m. in the vicinity
of the Iwakuni base. When a fighter plane takes off, even the voice
on the phone cannot be heard, says one local resident. Another
homemaker, 68, lives near the fence surrounding the base. "The
government should do soundproofing work before realigning US
forces." So saying, she looked fed up with the jet noise. She voted
against the US military realignment in the city's poll of residents
and voted for Ihara in the mayoral election. However, she now has
an air of resignation. She said, "If I agree, or even if I don't, they
(US carrier-borne jets) will come, won't they?"
An 81-year-old man, who lives near the base gate, said: "I want the
government to stop the US military realignment. However, we're in a
dilemma. I don't want carrier-borne aircraft. But they will come in
the end, won't they? The mayor has a hard time of it, I think."
A 59-year-old woman, who is "still against the US military
realignment," said with sighs: "We don't want the base. But the base
has been and will be here for decades. Iwakuni also hosts a US
military base, so we can understand the standpoint of people in
Okinawa. However, we cannot accept any more. I want to hear the
opinions of candidates in their campaign for the House of
Councillors election."
Editor's note: Defense Minister Kyuma, standing in the vanguard of
realigning the US military's footprints in Japan, said the United
States' dropping of atomic bombs on Japan "couldn't be helped."
Then, we'd like to ask him. That may be the correct answer in a
history class at grade schools in the United States. But was it
really the only option to end the war? For instance, if the United
States wanted to deprive the Japanese military of the will to fight,
it might be better to drop an atom bomb in the mountains or
otherwise in the sea. The United States targeted densely populated
cities for something like a living-body test. Why?
(5) Comfort women issue: JCP Chairman Shii urges Prime Minister Abe
to apologize to the world
AKAHATA (Page 2) (Full)
July 4, 2007
In his speech at the Foreign Correspondents Club of Japan, Japanese
Communist Party (JCP) Chairman Kazuo Shii on July 3 referred to the
US House of Representatives Foreign Affairs Committee's adoption of
a resolution calling on the Japanese government to offer a formal
apology to the wartime "comfort women." He said: "In order to dispel
international criticism of and doubts in Japan over this problem,
(Shinzo Abe) as prime minister of Japan should apologize to the
world, accepting the historical facts."
Shii pointed out that the 1993 Kono statement acknowledging the
former Japanese military's coercion and involvement in recruiting
comfort women is the Japanese government's view on this issue. He
stated: "The Kono statement has repeatedly been suppressed by Prime
Minister Abe in his words and actions, and by Japanese lawmakers
supporting Yasukuni Shrine's stand, as seen in their advertisement
in the Washington Post." He then stressed the importance of Abe
offering a formal apology in the form of an official statement under
his official capacity. Although Abe has stated that he stands by the
1993 Kono statement, he stated there was no "coercion" regarding the
comfort women issue. His comment came under fire not only from other
Asian countries but also from the United States. The issue has
become serious as seen in the House Foreign Affairs Committee's
approval of the comfort women resolution on June 26. Prime Minister
Abe only stated: "I have no intention of commenting on the
resolution."
(6) Comfort women issue remains unresolved
SANKEI (Page 1) (Full)
July 5, 2007
"When seeing Japan from this side, cutting off the head of a snake
seems necessary," said a round-faced man while sipping lukewarm tea.
The conversation took place one spring day just before the US House
of Representatives Foreign Affairs Committee adopted a resolution
criticizing Japan for the wartime comfort women issue. The man is a
successful businessman, an East Asian immigrant born in the prewar
era. Sitting in his living room, he quipped: "Japan has pushed us
toward an extreme direction." He called Japanese arrogant. Japanese
politicians have to make arrogant remarks in order to get ahead.
Members of the "Association of Diet members to think about the
future of Japan and historical education" are a good example. This
is what the man meant to say. He continued:
"The comfort women issue will not be resolved. In American politics,
Jewish people spend the largest amount of money, followed by Asians.
How many of them do you think are Asians who dislike Japan? Asians,
having learned from the method of Jews pursuing the Holocaust, stood
up the same way. This issue will never end. If the resolution does
not clear the House, we will present it again. Next time, we will do
it internationally until the prime minister acknowledges it and
offers a clear apology in the Diet."
The wind was shaking the leaves of trees in his vast garden like a
forest. The man in a bright sunlit room grumbled: "Tomorrow US
Congressman Mike Honda is coming here."
According to the results of the US national census, which is
conducted once a decade, the population of Asian-Americans nearly
doubled in the 10 years since 1990. Amid advancing globalization, it
is possible for immigrants to become successful. An explosive
economic growth in China and India has backed their successes.
As a result, a new phenomenon has emerged in the US: communities of
immigrants who keep their relations with their home countries
continue to expand, unlike the conventional pattern under which only
the second and third generations of immigrants were finally able to
reach success.
The round-faced man, who has close ties with Congressman Honda, the
leading drafter of the comfort women resolution, is one such Asian
immigrant.
Japan is, however, helpless to deal with this change and the
attack.
(7) Upside-down flag at Okinawa International University; UK
associate professor calls action an SOS signal; University president
orders stop to "criminal infringement"
RYUKYU SHIMPO (Page 3) (Full)
July 5, 2007
Yesterday afternoon, on America's Independence Day, Okinawa
International University (OIU) Associate Professor Peter Simpson
(from the UK) and around ten students displayed the American flag
upside-down on a school balcony in order to express their protest of
the presence of Futenma Air Station. OIU President Tomoaki Toguchi
and others put a stop to the "criminal infringement" and ordered
those involved to take down the flag. The associate professor
explained to the president that he had received verbal permission
and offered criticism saying, "I am shocked that the university
would stop such an act of self-expression."
The president and others have responded to Ryukyu Shimpo's interview
requests by stating, "We are in the process of confirming the facts
surrounding this action and thus are unable to comment at this
time."
Associate Professor Simpson emphasized, "We have no intention of
disrespecting the US or the American people. We were just sending
an SOS signal so that something would be done about the dangers of
being located right next to a base. It has been three years since
the helicopter crash (TN: an incident in which a US Marine
helicopter crashed into an OIU building), and nothing has changed."
Dean of the University of the Ryukyus Graduate School of Law Tetsumi
Takara observed that "(the displaying of the flag) was an act of
symbolic speech. Freedom of expression is a right that supports
freedom of learning, so if a university regulates (expression), it
will end up wringing its own neck."
SCHIEFFER
What is Selenium?
Tasks and Scenario
Test Use Case
Implementation
Preparation To Use This Tutorial
Task 1: Record and Playback A Web Application Functional Test.
Task 2: Run The Test In A TestNode
The PushToTest console identifies the operating parameters of a functional test, load and performance test, and business service monitor in a TestScenario. Use the TestMaker Editor to define a TestScenario. Find a TestScenario document already created for you in AjaxRIASeleniumTest/Calendar_Functional_Test.scenario.
- Start TestMaker using TestMaker_home/TestMaker.bat (or .sh for Unix and Mac environments.)
- Choose File -> Open and select the file TestMaker_home/example_agents/AjaxRIASeleniumTest/Calendar_Functional_Test.scenario. The Editor shows the parts of the TestScenario. For example, the TestScenario defines use of the Selenium test in the Use Cases tab.
Please note: Selenium IDE’s recording of the Calendar test lacks a few important changes to run in TestMaker. We made the following changes to the CalendarTest.selenium:
- Add a setSpeed command as the first line. Right click on the first line of the test, choose Insert New Command. Set the command as setSpeed and enter 3000 in the value field. This value is in milliseconds (1000 milliseconds = 1 second.)
- Change the base URL to include the domain name:
- Add a waitForElementPresent command. This makes sure the test waits until the JavaScript finishes updating the page to show the New Event form.
- Add an Open command. This makes sure the assertion tests the main page content, and not just the New Event form content.
Task 3: Make A Data-Driven Functional Test
Task 3 enhances the functional test of the Calendar application from Task 1 and 2 to use a Data Production Library (DPL.) The DPL reads data from a comma separated value (CSV) flat file. The values provide input to the test (id, password, product number for search) and validation data (the Assert-Text-Exists) value.
1. Create a Comma-Separated-Value file. Use your favorite text editor or spreadsheet program. Name the file data.csv. The contents must be in the following form.
A quick explanation: The first row of the data file contains column names. These will be used to map values into the Selenium test.
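As a sketch, such a data file can be generated with Python's csv module. The column name EventName comes from the tutorial's test step; the row values here are illustrative stand-ins, not the tutorial's actual data:

```python
import csv

# Illustrative rows only; per the tutorial, a real data.csv would also
# carry columns such as id, password, a product number for search, and
# the Assert-Text-Exists validation value.
rows = [
    {"EventName": "Work On Homework"},
    {"EventName": "Buy Groceries"},
    {"EventName": "Team Meeting"},
]

with open("data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["EventName"])
    writer.writeheader()  # first row holds the column names used for mapping
    writer.writerows(rows)
```

A spreadsheet program saving to CSV produces the same layout: a header row of column names followed by one row of values per test run.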
2. Change the Selenium test to refer to the column of data. In Selenium IDE change test step 4 from:
type, event, Work On Homework
to
type, event, EventName
A quick explanation: PushToTest maps the data from the named column in the CSV data file to the Selenium test data using the name you just entered.
3. Connect the Data Production Library (DPL) to the Selenium test. Open the AjaxRIASeleniumTest/Calendar_Functional_DataDriven.scenario in the Editor. Click the Use Cases tab.
4. Click the Add DPL link. Set the DPL type to HashDPL. HashDPL reads data from a comma separated value (CSV) file and provides it at test run time to the ScriptRunner. Set the Action to Get Next Row of Data.
5. By default TestMaker runs the use case in a Functional Test once. Click the General tab and set the Repeat value to a higher value to repeat the use case for the additional rows of data in the data.csv file.
6. That’s all! Click the PushToTest Run button and watch the results.
A quick explanation: The TestScenario operates a functional test by running the Selenium test once. The data file we created contains 5 rows of data. Change the repeat value to 5 to have TestMaker repeat the test for each row of data.
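Conceptually, the Repeat setting plus the DPL amount to one test run per data row. This hypothetical Python sketch (run_test stands in for the Selenium use case, which is not shown here) illustrates the idea:

```python
import csv
import io

def run_data_driven(run_test, csv_text):
    # One test run per data row, which is what setting Repeat to the
    # number of rows achieves in TestMaker.
    for row in csv.DictReader(io.StringIO(csv_text)):
        run_test(row)

data = "EventName\nWork On Homework\nBuy Groceries\n"
seen = []
run_data_driven(lambda row: seen.append(row["EventName"]), data)
```

Each dictionary passed to the test maps column names to that row's values, mirroring how TestMaker substitutes EventName in the Selenium test.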
Task 4: Repurpose A Functional Test As A Load And Performance Test!
1. In the TestMaker window use the File -> Open TestScenario command. Use the file selector to choose AjaxRIASeleniumTest/Calendar_Load_Test.scenario.
2. Click the Edit icon in the TestScenario Controller panel.
3. Set the test to be a load test in the General tab.
The settings tell TestMaker to operate a load and performance test at three levels of concurrently running simulated users (crlevel.) The test operates 1 user, then 2 users, and then 4.
4. View the results. If the test measures 0.03 TPS at the 2-user level, the 4-user level should deliver twice the TPS, 0.06 or higher. The above chart shows an application with linear scalability. The Step Times chart shows the amount of time each step took to operate as the test proceeded.
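The crlevel step-up can be illustrated with a toy Python harness. This is not PushToTest code, just a sketch of measuring transactions per second at increasing simulated-user counts:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def one_transaction():
    time.sleep(0.01)  # stand-in for one run of the Selenium use case

def measure_tps(users, transactions=20):
    # Run a fixed number of transactions across `users` worker threads
    # and report throughput in transactions per second.
    start = time.time()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in pool.map(lambda _: one_transaction(), range(transactions)):
            pass
    return transactions / (time.time() - start)

# Step through the same concurrency levels as the TestScenario: 1, 2, 4.
results = {level: measure_tps(level) for level in (1, 2, 4)}
```

With linear scalability, doubling the user count roughly doubles the measured TPS until the application saturates.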
Task 5: Repurpose A Functional Test As A Business Service Monitor
Business Service Monitor (BSM) testing enforces and proves a Service Level Agreement (SLA) by operating a test periodically. For example, a monitor runs a test every 30 minutes. TestMaker reuses functional tests as a monitor with no changes to the Selenium test and one change to the TestScenario.
1. In the TestMaker window use the File -> Open TestScenario command. Use the file selector to choose AjaxRIASeleniumTest/Calendar_Monitor.scenario.
2. Click the Edit icon in the TestScenario Controller panel.
This TestScenario runs the test use case as a business service monitor. The monitor operates the test use case every 10 minutes. The monitor keeps running until it encounters an exception.
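The monitor behavior described above — run the use case on a fixed interval until an exception occurs — reduces to a loop like this hypothetical sketch:

```python
import time

def monitor(run_test, interval_seconds=600):
    # 600 seconds matches the 10-minute interval; the loop ends only
    # when the test raises, which is when the monitor stops.
    while True:
        run_test()
        time.sleep(interval_seconds)

attempts = []
def flaky_test():
    attempts.append(1)
    if len(attempts) == 3:
        raise RuntimeError("SLA breach detected")

try:
    monitor(flaky_test, interval_seconds=0)  # 0 so the demo finishes quickly
except RuntimeError:
    pass  # in a real monitor this is where an alert would fire
```

The raised exception is the signal a monitoring dashboard turns into an SLA alert.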
3. That’s all! Click the PushToTest Run button and watch the results.
The monitor controller panel is a dashboard to the status of the service being tested.
1. In the TestMaker window use the File -> Open TestScenario command. Use the file selector to choose AjaxRIASeleniumTest/Calendar_ScriptDriven.scenario.
2. Click the Edit icon in the TestScenario Controller panel.
A quick explanation: We ran the Transformer utility on the Selenium recording in Task 3. Access the Transformer in the Tools drop-down menu in TestMaker. The Transformer created the TestScenario you just opened.
3. That’s all! Click the PushToTest Run button and watch the results.
Add your own custom logging to a test script using the PTTStepListener. Add the following commands to a Jython script:

from com.pushtotest.tool import PTTStepListener
PTTStepListener.startStep("setUp")
PTTStepListener.endStep()
Debugging Ajax Applications: Tips and Techniques
Ajax applications provide rich user experiences. In an Ajax application components may update themselves asynchronously. Ajax applications communicate to information sources external to the browser – like the backend server – or even with other components running on the same page. This flexibility challenges the tests we write. Our Ajax tests must work within the asynchronous nature of the Ajax application. And, we often need to make manual changes to the test because of the lack of a standard for Selenium IDE to follow to record an Ajax application.
You may find the following tips and techniques useful:
a) When you are not certain of the state of your test and the application, use the special savePage and saveSource commands. The target value is a file. saveSource saves the current HTML.
b) Set HtmlUnit into “debug” mode. In the Editor use the Options tab to set “Save received responses to temp directory” and SeleniumHtmlUnit log level.
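Because Ajax components update asynchronously, test assertions often need to poll for a condition instead of checking it once. Below is a minimal sketch of that polling pattern in plain Python; the helper name wait_until and its parameters are my own, not part of the PushToTest or Selenium APIs:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1f seconds" % timeout)

# Example: an Ajax callback would normally populate this list asynchronously.
responses = ["calendar loaded"]
assert wait_until(lambda: responses) == ["calendar loaded"]
```

The same idea underlies Selenium's waitFor* commands: retry a cheap check until it passes or a deadline is reached.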
It worked :) Thanks :)
Will follow all the instructions and will tell you after the removal. :) Thanks
I am posting this thread on several other sites, just hoping for a simple remedy instead of reinstalling my whole Windows. :(
I got this source code from a website and accidentally COMPILED it in my VISUAL STUDIO (my biggest mistake :( ). After compilation it never ended. :(
What it does is put a key into the registry so it runs on startup. It stops regedit, the command prompt and Task Manager from opening, and plays an irritating tune. My antivirus doesn't detect any virus...
#include "stdafx.h"
#include "windows.h"
I.
So within the article, there's a mention of a book titled "The Little LISPer". Being a fanboy of Lisp, that quickly grabbed my attention. One thing led to another: I started looking for the book (I read "The Little Schemer" instead) and trying out the examples. I was too lazy to set up Lisp (or any other dialect of it), so I decided to use Python. Although there are some syntax differences and limitations, I somehow managed to get most of the examples implemented.
While I pretty much learned recursion the hard way through work, the book managed to restructure my understanding of the topic. After implementing some of the examples, it became clear that a proper recursive function follows certain patterns. These patterns are summarized as ten commandments in the book, which are basically the DOs and DON'Ts of writing a recursive function. Most of the examples here should run fine in a standard Python 3 environment.
The first chapter is titled "Toys", which is pretty much a quick introduction to Lisp. First comes the definition of an atom. Practically, an atom has to satisfy the following criteria:
- a string of characters (e.g. word, 1337, $ymb*ls)
- does not have parentheses in them
With that in mind, we first write a unit-test to satisfy the criteria.
from operator import *

def test_isatom():
    test_cases = (('atom', True),
                  ('turkey', True),
                  ('1492', True),
                  ('u', True),
                  ('*abc$', True),
                  ('Harry', True),
                  (('Harry', 'had', 'a', 'heap', 'of', 'apples'), False))
    for atom, expected in test_cases:
        result = isatom(atom)
        assert result == expected
As shown in the unit test, strings in Python have to be enclosed in quote characters ' (or "). Hence we cannot really test the second point. Next we need to implement the primitive ourselves, as we are not using Lisp and it is not defined in Python.
def primitive(func):
    def _decorator(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except AssertionError:
            return None
    return _decorator

@primitive
def isatom(atom):
    return and_(not_(isinstance(atom, tuple)), isinstance(atom, str))
There are times when a primitive does not return an answer (maybe due to a mismatch in input type, or other reasons). In order to simulate that behaviour, each primitive function is decorated so that if the input argument fails an assertion, None is returned instead (as will be shown in the later implementations).

Now that we have isatom defined, run the test to verify (I am not writing a book, so most of the time the test should pass unless otherwise specified).
test_isatom()
A list, on the other hand, is a sequence of items enclosed within a pair of parentheses. Another list can also be defined as an element in the sequence, which is commonly referred to as a nested list, for example ((hello world) yo). Due to the differences in Python, I am using tuples to represent lists (both are immutable, which should be a good fit). Also note that a tuple with one element requires a comma after the element.
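A quick illustration of the tuple convention (the variable names here are mine):

```python
# A one-element tuple needs the trailing comma; without it the
# parentheses are just grouping.
single = ('hello',)
not_a_tuple = ('hello')
assert isinstance(single, tuple)
assert isinstance(not_a_tuple, str)

# A nested list such as ((hello world) yo) maps to a nested tuple.
nested = (('hello', 'world'), 'yo')
assert nested[0] == ('hello', 'world')
```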
Then we are given the definition of S-expression, which can be
- an atom
- a list
And in fact, a list is actually a sequence of s-expressions (or none at all) enclosed by a pair of parentheses. So, being a language built on lists, Lisp by default defines a couple of functions to manipulate them. First of all is the car primitive function, which fetches the first element from a given list. The definition can be seen in the unit test below. Note that when an atom or an empty list is passed to car, we should receive no answer.
def test_car():
    test_cases = ((('a', 'b', 'c'), 'a'),
                  ((('a', 'b', 'c'), 'x', 'y', 'z'), ('a', 'b', 'c')),
                  (((('hotdogs',),), ('and',), ('pickle',), 'relish'), (('hotdogs',),)),
                  (((('hotdogs',),), ('and',)), (('hotdogs',),)),
                  ('hotdog', None),
                  ((), None))
    for l, expected in test_cases:
        result = car(l)
        assert result == expected
Considering we are doing this in Python instead of Lisp/Scheme, we have to implement the car primitive ourselves.

@primitive
def car(l):
    assert l
    assert isinstance(l, tuple)
    return l[0]
Next is the cdr primitive (resembling tail in Prolog), which fetches the remaining list excluding the first element. cdr also returns no answer if the input argument is an atom or an empty list.
def test_cdr():
    test_cases = ((('a', 'b', 'c'), ('b', 'c')),
                  ((('a', 'b', 'c'), 'x', 'y', 'z'), ('x', 'y', 'z')),
                  (('hamburgers',), ()),
                  ('hotdogs', None),
                  ((), None))
    for l, expected in test_cases:
        result = cdr(l)
        assert result == expected

@primitive
def cdr(l):
    assert l
    assert isinstance(l, tuple)
    return l[1:]
For people who are familiar with languages like Prolog, Lisp shares a somewhat similar structure for lists. Each list is constructed by a head (car) element, and the rest forms the tail (cdr). If the list has only one element, then the respective cdr returns an empty list.
Constructing a list is just passing two elements into a primitive function named cons (presumably short for "construct"). The two required parameters are any s-expression and a list.
def test_cons():
    test_cases = (('peanut', ('butter', 'and', 'jelly'),
                   ('peanut', 'butter', 'and', 'jelly')),
                  (('banana', 'and'), ('peanut', 'butter', 'and', 'jelly'),
                   (('banana', 'and'), 'peanut', 'butter', 'and', 'jelly')),
                  ((('help',), 'this'), ('is', 'very', 'hard', 'to', 'learn'),
                   ((('help',), 'this'), 'is', 'very', 'hard', 'to', 'learn')),
                  (('a', 'b', 'c'), 'b', None),
                  ('a', 'b', None))
    for s, l, expected in test_cases:
        result = cons(s, l)
        assert result == expected

@primitive
def cons(s, l):
    assert isinstance(l, tuple)
    return (s,) + l
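Putting the three primitives together: any non-empty list equals cons of its car onto its cdr. The sketch below re-states stripped-down versions of the primitives (my own simplification, without the decorator machinery used in this post) so it can run on its own:

```python
def car(l):
    return l[0]      # head of the list

def cdr(l):
    return l[1:]     # tail: everything after the head

def cons(s, l):
    return (s,) + l  # prepend an s-expression onto a list

lat = ('bacon', 'lettuce', 'tomato')
assert car(lat) == 'bacon'
assert cdr(lat) == ('lettuce', 'tomato')
assert cons(car(lat), cdr(lat)) == lat   # rebuild the original list
assert cdr(('bacon',)) == ()             # a one-element list has an empty cdr
```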
An empty list (or null list) is special in the world of Lisp. It sort of both exists and does not exist in every list. Usually it only shows up when a cdr call is made to a list with one and only one element (besides creating one manually, that is).
There is also a primitive method to test whether a given list is empty (no, it doesn’t work with atoms), as shown below.
def test_isnull():
    test_cases = (((), True),
                  (('a', 'b', 'c'), False),
                  ('spaghetti', None))
    for l, expected in test_cases:
        result = isnull(l)
        assert result == expected

@primitive
def isnull(l):
    assert isinstance(l, tuple)
    return l == ()
We can also check the equality of two given non-numeric atoms:
def test_iseq():
    test_cases = (('Harry', 'Harry', True),
                  ('margarine', 'butter', False),
                  ((), ('strawberry',), None))
    for alpha, beta, expected in test_cases:
        result = iseq(alpha, beta)
        assert result == expected

@primitive
def iseq(alpha, beta):
    assert isatom(alpha) and isatom(beta)
    return alpha == beta
So this is a rather long-winded summary of the first chapter.
I want to do exactly what this guy did:
Python - count sign changes
However I need to optimize it to run super fast. In brief, I want to take a time series and report every time it crosses zero (changes sign). I want to record the time between zero crossings. Since this is real data (32-bit float) I doubt I'll ever have a number which is exactly zero, so that is not important. I currently have a timing program in place, so I'll time your results to see who wins.
My solution gives (microseconds):
open data 8384
sign data 8123
zcd data 415466
import numpy, datetime

class timer():
    def __init__(self):
        self.t0 = datetime.datetime.now()
        self.t = datetime.datetime.now()

    def __call__(self, text='unknown'):
        print(text, '\t', (datetime.datetime.now() - self.t).microseconds)
        self.t = datetime.datetime.now()

def zcd(data, t):
    sign_array = numpy.sign(data)
    t('sign data')
    out = []
    current = sign_array[0]
    count = 0
    for i in sign_array[1:]:
        if i != current:
            out.append(count)
            current = i
            count = 0
        else:
            count += 1
    t('zcd data')
    return out

def main():
    t = timer()
    data = numpy.fromfile('deci.dat', dtype=numpy.float32)
    t('open data')
    zcd(data, t)

if __name__ == '__main__':
    main()
What about:
import numpy
a = [1, 2, 1, 1, -3, -4, 7, 8, 9, 10, -2, 1, -3, 5, 6, 7, -10]
zero_crossings = numpy.where(numpy.diff(numpy.sign(a)))[0]
Output:
>>> zero_crossings
array([ 3,  5,  9, 10, 11, 12, 15])
i.e. zero_crossings will contain the indices of elements after which a zero crossing occurs. If you want the elements before, just add 1 to that array.
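Since the original goal was the time between zero crossings, the vectorised approach extends one step further: a second numpy.diff over the crossing indices yields the gaps directly. This is my extension of the answer above, not part of it:

```python
import numpy as np

a = np.array([1, 2, 1, 1, -3, -4, 7, 8, 9, 10, -2, 1, -3, 5, 6, 7, -10])
# indices of elements after which the sign changes
crossings = np.where(np.diff(np.sign(a)))[0]
# number of samples between consecutive crossings
gaps = np.diff(crossings)
print(crossings.tolist())  # [3, 5, 9, 10, 11, 12, 15]
print(gaps.tolist())       # [2, 4, 1, 1, 1, 3]
```

Multiply gaps by the sample period to convert sample counts into elapsed time.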
Richard Waymire
Microsoft Corporation
January 2007
Applies to:
Microsoft Visual Studio 2005 Team Edition for Database Professionals
Summary: Microsoft Visual Studio 2005 Team Edition for Database Professionals (DB Pro) is the most recent addition to the Microsoft Visual Studio Team Suite of products. The mission of this product is to bring developers of database application code (Transact-SQL for Microsoft SQL Server 2000 and Microsoft SQL Server 2005) into the application development life cycle. For more product information about Visual Studio 2005 Team Edition for Database Professionals, see the Database Professional Team Center for Visual Studio Team System.
To integrate database development into the overall life cycle most effectively, you must understand the variety of security implications in Team Edition for Database Professionals. These implications include how to set up and configure the product more securely, how to use the security-related features of the product, and best practices for a more secure implementation of your database projects. (11 printed pages)
Permissions for Installation and Configuration
Permissions for Schema Development and Deployment
Security Objects in Database Projects
Security Considerations for Features with External Access
Conclusion
When you install Team Edition for Database Professionals, you should configure some components to make your installation more secure.
To install Team Edition for Database Professionals, you must log on with an account that has administrative permissions on the local computer. During the installation, you will create files and registry keys, and you will register assemblies in the global assembly cache.
To run Team Edition for Database Professionals, however, you need only standard user permissions for Windows and permissions for SQL Server 2005, as mentioned in the following section. By default, all user settings are kept in either the HKEY_CURRENT_USER portion of the registry or in your drive:\My Documents directory.
When this document was written, Windows Vista was not supported with Team Edition for Database Professionals.
If SQL Server 2005 is not already installed on the same computer as Team Edition for Database Professionals, you must log on with an account that has administrative permissions on that computer and install one of the following:
If you have Visual Studio 2005 Professional Edition or Visual Studio Team Edition, you can install SQL Server 2005 Developer Edition from the installation media for that product.
This instance of SQL Server, known as the DesignDB instance, acts as a validation engine to support database projects. Although you might never need to use this instance of SQL Server directly, the SQL Server database engine (MSSQLServer) service must be running for you to create and use database projects.
The service account of the DesignDB instance must be either the Local System account or the account of the user who runs Visual Studio.
For information about how to make your SQL Server 2005 more secure (including best practices), see "SQL Server 2005: Security and Protection" in the SQL Server 2005 Technical Documentation Library.
If you intend to create or use projects that reference the full-text indexing services of SQL Server, you must install the full-text search service along with your installation of SQL Server 2005. This service is installed by default for SQL Server 2005 Developer Edition and SQL Server 2005 Enterprise Edition.
If you intend to use SQL common language runtime (SQLCLR or .NET assemblies) in your database projects, you must enable it because it is disabled by default. To enable SQLCLR, you must log on to Windows with an account that has sysadmin permissions on your DesignDB instance of SQL Server. After you install SQL Server, you can configure SQLCLR through the SQL Server 2005 Surface Area Configuration utility or by running Transact-SQL code.
To open the Surface Area Configuration utility, click Start, point to All Programs, point to Microsoft SQL Server 2005, point to Configuration Tools, and then click SQL Server Surface Area Configuration. To specify the instance of SQL Server that you intend to use for your DesignDB installation, click the instance in the tree. (SQLEXPRESS is the instance name in Figure 1.) To enable SQLCLR for the instance that you specified, click CLR Integration in the tree, select the Enable CLR integration check box, and click OK. The change occurs immediately for the instance of SQL Server that you specified.
Figure 1. Enabling CLR integration in the SQL Server 2005 Surface Area Configuration utility
As an alternative, you can run the following Transact-SQL code while you are connected to the DesignDB instance as a member of the SQL Server sysadmin server role.
EXEC sp_configure 'clr enabled', 1
GO
RECONFIGURE WITH OVERRIDE
GO
After you complete this step, your team can create CLR objects as part of your database projects for SQL Server 2005.
Many people run with sysadmin permissions against their local, private copies of SQL Server. However, you do not usually need that much access to use the DesignDB instance. As a best practice, you should consider creating a Windows security group on the local computer, adding all users who will use that DesignDB instance as members of that security group, and then granting that group access to SQL Server.
In the Command Prompt window, type the following:
net localgroup VSTEDPUsers /ADD
net localgroup VSTEDPUsers <YourUserName> /ADD
Run the second "net localgroup" command once for each user whom you want to add to the Windows security group.
Or, from the graphical interface (on Microsoft Windows XP):
After you have created the security group, connect to SQL Server again by using an instance of Visual Studio's query editor, SQL Server Management Studio, or a command-line utility such as sqlcmd. Run the following Transact-SQL to add the Windows security group as a valid SQL Server login, and make the login a member of the dbcreator and securityadmin server roles in SQL Server.
CREATE LOGIN [ComputerName\VSTEDPUsers] FROM WINDOWS
EXEC sp_addsrvrolemember 'ComputerName\VSTEDPUsers','dbcreator'
EXEC sp_addsrvrolemember 'ComputerName\VSTEDPUsers','securityadmin'
GRANT VIEW SERVER STATE TO [ComputerName\VSTEDPUsers]
Team Edition for Database Professionals requires this minimum set of permissions for the DesignDB instance of SQL Server. For more information about this server role, see "Server-Level Roles" in SQL Server 2005 Books Online.
The application development life cycle of database schemas involves test deployments, just as other Windows applications do. The end goal of test deployment, of course, is to deploy schema changes to a production server. However, in most environments, database developers do not have access to production environments. This situation presents a challenge for users of Team Edition for Database Professionals. The typical database developer does not have permission to log in to production systems, let alone read schema information from them.
Therefore, a typical use scenario for Team Edition for Database Professionals involves at least two users to build and deploy database schemas. A database administrator or other professional with access permissions to a production database creates a database project and then imports the existing database schema from that production environment. This action downloads all schema information for that database into files in the database project.
After the database administrator saves the project and checks it in to a source control system, a database developer (who does not necessarily have any rights to the original production system) can work with, modify, and build test deployments of the schema. The test deployments would occur first on the developer's "sandbox," a personal target database that the database developer can fully control. Additionally, the developer can delete and replace the information and schemas in the sandbox at will to facilitate testing. The developer can change the schema, build a data generation plan or plans for that schema, and create unit tests for stored procedures, functions, and other objects. The developer then deploys the schema to the sandbox, generates data for the tables in that schema, and runs unit tests.
When the developer finalizes schema changes and verifies that the test results are complete and correct, the project can be deployed to an integration testing environment. After the changes are tested with other developers' work and the changes are ready to be moved to production, the database administrator (or another user who has the appropriate permissions) can open the project, do a final build, and deploy the changes.
Required permissions for each of these environments will vary depending on exactly which actions you want to take. However, you can follow some general guidelines for each environment.
Sandbox
Used by a single developer, a sandbox is typically either a SQL Server 2000 or SQL Server 2005 environment that is installed either locally or remotely. The sandbox is the target for repeated deployments of the project, which almost certainly require the developer to create databases. Therefore, the developer must have either the CREATE DATABASE permission or membership in the dbcreator server role. If the target database is SQL Server 2005 and you plan to use CLR assemblies for which the EXTERNAL_ACCESS or UNSAFE option is set, the developer must be a member of the sysadmin role on the sandbox.
Integration Testing
The developer needs permissions similar to those in the sandbox to work in the integration test environment. However, those developers who only deploy changes to an existing database need just the permissions to update the schema on the target database and to insert, update, and delete tables that data generation plans will fill with data.
Production
Only operations staff and database administrators should have permission to update a database schema in a production environment. The exact nature of the changes determines exactly which permissions the team member needs. For example, the team member needs CREATE DATABASE permissions to deploy a new database. Otherwise, the team member needs permissions on each object that will change, and additional permissions are required if the pre-deployment and post-deployment scripts change server settings. For a successful deployment, team members must specifically coordinate change deployment to coincide with changes in the applications that access the database. In addition, all affected databases should be backed up before any changes are deployed.
Security objects in database projects vary depending on the version of SQL Server that the project supports. Some objects are handled through pre-deployment and post-deployment scripts, but most objects are included as core project objects.
Security-related objects in SQL Server 2000 are handled either as full members of a database project or through pre-deployment and post-deployment scripts.
Security objects in database projects
A database project directly supports users, application roles, and database roles.
Adding users
For a database project that is targeted at SQL Server 2000, sp_grantdbaccess is the default stored procedure for adding a user to the project.
You must specify a SQL Server login name for the user, and you can specify a name for use in the database. If you do not specify a name for use in the database, the SQL Server login name is used by default.
sp_grantdbaccess [@loginame =] 'login' [, [@name_in_db =] 'name_in_db' [OUTPUT]]
When you add a user to a database project (as with all other entries in a database project), the user is created in the DesignDB instance that supports the database project. However, you cannot add a user to a database in SQL Server 2000 without a supporting SQL Server login. To bypass this limitation, the project system dynamically generates a SQL Server login with the name that is scripted in the sp_grantdbaccess stored procedure call with a call to sp_addlogin. If the SQL Server login already exists in the DesignDB instance, it is not an error condition because the existing SQL Server login can be reused, and no SQL Server login creation is required. However, the SQL Server login is not automatically scripted to the Logins.sql pre-deployment script, as occurs when you import a script or a schema.
You can use sp_adduser to add users to your database project, if you want. However, in SQL Server 2000, this stored procedure is intended for backward-compatible use only and is not recommended.
Adding application roles
You can use the sp_addapprole system stored procedure to add application roles. If you use this stored procedure, you must specify the role name, and you can specify a password.
sp_addapprole [ @rolename = ] 'role' , [ @password = ] 'password'
By specifying a password, you reduce the risk that underlies elevating the application's access to schema objects. However, the password is then stored in plain text as part of your database project. If you specify a password, you should consider encrypting the project files to protect the password from unauthorized access.
Adding database roles
You can use the sp_addrole stored procedure to add a database role. If you use this stored procedure, you must specify the name of the role, and you can specify the owner of the role. (The owner must be a valid user in the database.)
sp_addrole [ @rolename = ] 'role' [ , [ @ownername = ] 'owner' ]
Pre-deployment and post-deployment script files are checked for syntax before the project is built and deployed.
Adding logins
The Logins.sql file will be added as a pre-deployment script file. This file will contain calls to sp_addlogin for SQL Server logins that support users who are added to the database. If you import an existing schema or a script, these calls are generated for you.
Adding linked servers
Linked servers will not be scripted by the import functionality of a database project. If you import a script that has scripts to create linked servers (sp_addlinkedserver), these calls are routed to the LinkedServers.sql file.
Importing and creating permissions
Permissions are routed to the Permissions.sql file when you import an existing schema or a script. These permissions include all grant, revoke, and deny permissions that exist either in your script or in the source database when you import a schema.
Database projects for SQL Server 2005 have more security objects than database projects for SQL Server 2000. In addition, the syntax for creating some of the same objects has changed significantly.
Database projects for SQL Server 2005 directly support schemas, users, application roles, and database roles.
Creating schemas
Schemas existed in SQL Server 2000, but they were just unnamed wrappers around object creation scripts. In SQL Server 2005, schemas are the namespaces for objects in a database. Database projects support schema creation with the CREATE SCHEMA statement.
CREATE SCHEMA schema_name_clause [ <schema_element> [ ...n ] ]
<schema_name_clause> ::=
{
schema_name
| AUTHORIZATION owner_name
| schema_name AUTHORIZATION owner_name
}
<schema_element> ::=
{
table_definition | view_definition | grant_statement
| revoke_statement | deny_statement
}
The schema_element clause is not supported if you are adding a schema in a database project. This clause is for backward compatibility with SQL Server 2000 schemas.
The schema name will be the namespace for other objects, such as the default schema "dbo" that is used in many databases.
Schemas are added to the DesignDB instance when you add them to a database project (assuming that they are syntactically correct).
In SQL Server 2005, you use the CREATE USER statement, instead of a stored procedure, to add users to the database. You can use the sp_grantdbaccess stored procedure, but it is not recommended.
CREATE USER user_name
[ { { FOR | FROM }
{
LOGIN login_name
| CERTIFICATE cert_name
| ASYMMETRIC KEY asym_key_name
}
| WITHOUT LOGIN
} ]
[ WITH DEFAULT_SCHEMA = schema_name ]
Because users are part of the namespace in SQL Server 2000, it is critical for them to be full members of a database project. However, SQL Server 2005 does not necessarily have the same requirement. Schemas, not users, are the namespace reference. A user might have the same name as a schema, but they are still two database objects. In fact, if you use sp_grantdbaccess in SQL Server 2005, a user and a schema with the same name are created automatically.
By default, users in database projects for SQL Server 2005 are created with the "WITHOUT LOGIN" clause, so that matching SQL Server logins are not auto-generated. However, if you provide a SQL Server login reference, the same logic is used as in a database project for SQL Server 2000 to create a supporting SQL Server login in the DesignDB instance. However, the SQL Server login is not automatically scripted to the Logins.sql pre-deployment script, as occurs when you import a script or a schema.
If a user is created with a reference to a certificate or an asymmetric key, Team Edition for Database Professionals automatically creates a certificate or asymmetric key in the DesignDB instance to support the user, with a randomly generated password. However, a certificate or asymmetric key script will not be automatically generated or added to the EncryptionKeysandCertificates.sql pre-deployment script. You must manually add CREATE CERTIFICATE or CREATE ASYMMETRIC KEY statements to these files to support deployment of these database users.
You can add an application role by using either the CREATE APPLICATION ROLE statement or the sp_addapprole stored procedure, which is backward compatible. As previously mentioned, you must specify the name of the application role, and you must also specify a password in SQL Server 2005.
CREATE APPLICATION ROLE application_role_name
WITH PASSWORD = 'password' [ , DEFAULT_SCHEMA = schema_name ]
Again, you should consider carefully whether you want the production password to an application role to be stored offline in a file. If you must store that information, you should consider encrypting that file to protect it from unauthorized access.
You can add a database role by using either the CREATE ROLE statement or the sp_addrole stored procedure, which is backward-compatible.
CREATE ROLE role_name [ AUTHORIZATION owner_name ]
Pre-deployment and post-deployment script files are checked for syntax before the project is built and deployed.
The Logins.sql file will be added as a pre-deployment script file. This file will contain calls to CREATE LOGIN (or sp_addlogin, for backward compatibility with SQL Server 2000) for SQL Server logins that support users who are added to the database. If you import a script or an existing schema, these calls are generated for you.
Permissions are routed to the Permissions.sql file when you import a schema or a script. These permissions include all grant, revoke, and deny permissions that exist either in your script or in the source database when you import a schema. This set also includes the CONNECT permission that enables database users in SQL Server 2005.
Creating certificates and keys
The EncryptionKeysandCertificates.sql pre-deployment file contains calls to create certificates, asymmetric keys, or symmetric keys. Additionally, you can use this file to create a database master key. Each of these object types typically has a password and is not reversible from the DesignDB instance. Therefore, you must manually add or modify these objects as part of this pre-deployment script. This file should contain the following statements.
CREATE MASTER KEY
CREATE CERTIFICATE
CREATE ASYMMETRIC KEY
CREATE SYMMETRIC KEY
If you import a schema that contains these objects, they will not be scripted, because Schema Compare does not support them. However, skeleton scripts or commented statements will be moved to the appropriate pre-deployment script file (EncryptionKeysandCertificates.sql).
Adding signatures
The post-deployment file Signatures.sql contains calls that add digital signatures to database objects by using the ADD SIGNATURE statement. If you import a schema that contains signatures, the signatures are not included. However, if you import a script that contains ADD SIGNATURE statements, those calls are moved to this post-deployment script file.
Many features of Team System for Database Professionals access external resources, such as databases. This section examines security considerations for these features and describes the permissions that you must have to use them.
To protect your project from unauthorized access, you should use the NTFS file system and restrict permissions on the directories that store a project to the minimum set of users who need access. You might also consider using Encrypting File System (EFS) or some other encryption technique to protect project files locally. By default, projects are created in the drive:\My Documents\Visual Studio 2005\Projects directory, which can be accessed only by the user who is currently logged on. If you change the default project directory, you should set similar permissions on the parent directory.
If you store these files in source control (highly recommended), you should consider restricting access to sensitive projects (or even sensitive files, such as those that contain passwords in a project) to the minimal set of users who must have access to those files.
SQL Server 2000
To import a schema from SQL Server 2000, you must have a valid database user for the database whose schema you want to import. Because SQL Server 2000 has no specific metadata permissions, all users of a database can read the schema definitions.
However, users cannot access objects such as stored procedures, triggers, views, and functions that have the WITH ENCRYPTION option included in their object definitions. If this option is included, the text for these objects is obfuscated in the syscomments system table and cannot be reverse-engineered from the source database. When the import operation encounters an encrypted object, a warning appears in the Error List window in Visual Studio, and the object is not scripted.
SQL Server 2005
In SQL Server 2005, the ability to view object definitions (known as metadata permissions) is not automatic. A user must have the VIEW DEFINITION permission on each object to see that object as eligible for import into a database project. Some database roles, such as db_owner, have this permission automatically granted to their members. In addition, object owners can always see the definitions of objects that they own.
All users of a database can always see the definitions for partition schemes, partition functions, file groups, and schemas.
When you use the build command, a script that represents the project is produced. This script is a .sql file under the project's output directory (the \sql directory by default). You should protect this file just as you would protect other files that might contain sensitive information about the database project. It is a single file that combines all pre-deployment and post-deployment files, together with a script of all schema objects (or difference scripts if the deployment targets an existing database).
Permissions that you must have to deploy vary significantly, depending on what types of objects are being deployed. For example, team members who add SQL Server logins must have securityadmin permissions on the target server. You might need ownership of schema objects, db_ddladmin permissions, or perhaps even db_owner rights to deploy schema objects. As noted previously in the "Setting Permissions by Environment" section, you might need highly specialized permissions to deploy a database or database changes in a production environment.
To compare schemas from SQL Server 2000 and SQL Server 2005, you need the same permissions as you need to import schemas. However, you must also have permissions to update the selected database if you want to write updates to a target.
To compare the data between two tables or views, you need SELECT permissions on the selected objects to generate the script for comparing data. You need additional permissions (such as INSERT, UPDATE, and, in some cases, ALTER) for objects, depending on the data-comparison options that you specify.
To generate data, you need INSERT permissions to add data to the target database. You also need DELETE permissions if you specify that the targeted tables should be cleared of data. Additionally, you might need to disable insert and delete triggers so that data generation plans can succeed. You must manually disable these triggers on the target system by using a tool, such as SQL Server Management Studio, if they interfere with running the data generation plan. You also need the VIEW DEFINITION permission on the target server for all tables or views to be populated because a schema comparison is performed to verify whether the data generation plan is still valid for the target that you specified.
To reset an identity column on a table, you must be the object owner of the table or a member of the db_owner or db_ddladmin role.
To run unit tests, you must have one connection to the target server to run the unit tests and a second connection to validate the test results. The validation connection might require additional permissions because it will need to access more schema objects and so forth to validate your test results. You can run these connections under separate security contexts, as long as you are using SQL Server Authentication, instead of Windows Authentication. If you are using Windows Authentication, both connections will run as the user who is logged on.
If you use SQL Server Authentication, you probably need a password. Team Edition for Database Professionals keeps this password encrypted in the registry and uses your Windows user account as the key. Therefore, other users who log into the same computer cannot access these passwords and must reenter the passwords if they are using SQL Server Authentication on database connections.
The nature of the unit test determines what permissions you must have to run that test on the target system. For example, if the unit test selects rows in a table, you must have SELECT permissions for that table. Additionally, if the test uses automatic build and deploy features, you must have administrative permissions on the target server to update and replace schema objects (as mentioned previously in this document).
Security in Microsoft Visual Studio 2005 Team Edition for Database Professionals takes many forms. The product supports a variety of connections to databases, in addition to security objects within database projects.
For additional information about Visual Studio security, see the Security Developer Center on MSDN. For additional information about SQL Server security, see the Security page for SQL Server. For general information about Visual Studio Team Edition for Database Professionals, see the Database Professional Team Center for Visual Studio Team System.
|
http://msdn.microsoft.com/en-us/library/bb264457(VS.80).aspx
|
crawl-002
|
refinedweb
| 4,254
| 50.36
|
How to Add Weather Effects in Phaser Games
By Josh Morony
I keep going on about how important atmosphere is in a game, and the last two Phaser tutorials I have created have focused on that. First we learned how to create a parallax background effect which adds more depth to a game, and then we learned how to create a day and night cycle so that the game would transition from day to night and repeat that process over and over.
This tutorial is again going to focus on adding some atmosphere to your Phaser games, this time by adding weather effects to the game. We will be creating both a fog effect and a rain effect. Once again, this tutorial series has been inspired by the game Alto which also does a spectacular job at creating atmosphere with weather:
Of course these are just pictures; it is much more immersive when you actually play the game. If you are looking for a fun little game to unwind with, it's well worth the purchase. I must have spent at least 10 hours playing it.
Before we Get Started
So that we have something to work with we are going to be building on from the previous tutorial where we added a day night cycle to a Phaser game. If you want to follow along with this tutorial step by step then make sure you have finished that tutorial first. To refresh your memory, this is what we had at the end of the last tutorial:
and in this tutorial, we will be aiming to create something like this:
The example above has both the fog and the rain effect activated (I’m not sure if you can make out the rain with the quality of the GIF but it is there!). The cool thing about the way we will set this up is that you will be able to combine any number of different weather effects at the same time.
Creating a Weather Plugin
If you’ve read the last tutorial then you will know that we created a plugin to handle the day and night cycle for us, and we are going to do the same for the weather too.
Create a file at objects/Weather.js and add the following code:
class Weather {

    constructor(game){
        this.game = game;
    }

    addFog(){

    }

    removeFog(){

    }

    addRain(){

    }

    removeRain(){

    }

}

export default Weather;
This uses the same structure as the DayCycle plugin we created – we pass in a reference to the game through the constructor and then define a bunch of functions. These will handle adding and removing the weather effects, and we are going to implement them one by one now.
Modify the addFog function to reflect the following:
addFog(){

    let fog = this.game.add.bitmapData(this.game.width, this.game.height);
    fog.ctx.rect(0, 0, this.game.width, this.game.height);
    fog.ctx.fillStyle = '#b2ddc8';
    fog.ctx.fill();

    this.fogSprite = this.game.add.sprite(0, 0, fog);
    this.fogSprite.alpha = 0;

    this.game.add.tween(this.fogSprite).to({ alpha: 0.7 }, 6000, null, true);

}
To create our fog effect we are simply going to place a solid colour sprite over the top of our entire game area, and then make that slightly transparent. So that we don’t have to add a giant screen sized sprite to our game, we create the sprite with bitmap data so that we can easily scale the game to any size.
The fog sprite will be as wide and tall as the game area and we fill it with a blueish colour. We add the sprite to the game with an alpha of 0. The alpha channel controls the transparency of the sprite, so by setting it to 0 we are making it completely transparent. This is because we don't want the fog to just immediately flash on to the screen, we want it to slowly fade in. We then animate the alpha with a tween which will run over 6 seconds until it reaches an opacity of 70%.
Here’s what the transition for the fog will look like:
Spooky! It’s a pretty cool effect but we don’t want it to hang around forever, so we are also going to add the ability to remove it.
Modify the removeFog function to reflect the following:
removeFog(){

    let fogTween = this.game.add.tween(this.fogSprite).to({ alpha: 0 }, 6000, null, true);

    fogTween.onComplete.add(() => {
        this.fogSprite.kill();
    }, this);

}
Rather than just removing the fog sprite right away with the kill method, we first tween its transparency again, but this time we reverse the animation so that it becomes more and more transparent. We add an onComplete listener to the tween so that once the animation has completed we remove the sprite entirely.
Now let’s work on our rain effect.
Modify the addRain function to reflect the following:
addRain(){

    let rainParticle = this.game.add.bitmapData(15, 50);
    rainParticle.ctx.rect(0, 0, 15, 50);
    rainParticle.ctx.fillStyle = '#9cc9de';
    rainParticle.ctx.fill();

    this.emitter = this.game.add.emitter(this.game.world.centerX, -300, 400);
    this.emitter.width = this.game.world.width;
    this.emitter.angle = 10;

    this.emitter.makeParticles(rainParticle);

    this.emitter.minParticleScale = 0.1;
    this.emitter.maxParticleScale = 0.3;
    this.emitter.setYSpeed(600, 1000);
    this.emitter.setXSpeed(-5, 5);
    this.emitter.minRotation = 0;
    this.emitter.maxRotation = 0;

    this.emitter.start(false, 1600, 5, 0);

}
Once again, rather than using an image sprite that we load in to use for our rain we just create one manually using bitmap data. If you would like you could use your own rain sprite and just load it in like any other sprite, and then add it with the following code:
this.emitter.makeParticles('rain');
which references the 'rain' asset, rather than the bitmap variable we created. I like the idea of using bitmap data though because it means you can easily use this plugin in other projects without having to worry about adding other assets as well.
We're creating an emitter using our rain particle, which will create the rain effect. An emitter in Phaser basically creates a bunch of sprites or particles, so in this case it will take our one rain sprite and create a lot of them. There are a lot of different settings you can apply to change how the emitter behaves (one single explosion of lots of particles, a constant stream of particles over time, particles with varying sizes, speeds and so on). We have added settings that will create a "rainy" effect.
Here’s what the rain effect will look like when it is started:
and of course we need the ability to remove the rain as well.
Modify the removeRain function to reflect the following:
removeRain(){
    this.emitter.kill();
}
Much simpler this time, all we do is remove the emitter immediately.
Using the Weather Plugin
Now that we’ve created the plugin, all we have to do is make use of it.
Modify the Main.js state to include the following import:
import Weather from 'objects/Weather';
This will import our Weather class, but we will also need to initialise it.
Add the following code inside of the create method in Main.js:
this.weather = new Weather(this.game);
Now you will be able to access this weather object from anywhere within your Main state. So you could call any of these methods:
this.weather.addRain();
this.weather.removeRain();
this.weather.addFog();
this.weather.removeFog();
So you could of course call these immediately in the create method, or more likely you would call them at some point later on during your game (maybe based on some timer, or perhaps when the player does something specific).
Summary
This will likely be the end of this Alto inspired “atmosphere” tutorial series. There’s been absolutely no gameplay added over the last three tutorials but hopefully you agree that the effects we’ve created can add a lot of enjoyment and immersiveness to a game. The parallax background, day cycle, and weather effects we’ve created can serve as a great backdrop for many different types of games.
|
https://www.joshmorony.com/how-to-add-weather-effects-in-phaser-games/
|
CC-MAIN-2020-10
|
refinedweb
| 1,351
| 61.46
|
7. Refactoring 4: Refactoring the Statement() Method Again with "Replace Temp with Query"
1. After the refactoring in the previous articles, the code of the Statement() method in Customer is shown in the following figure. When calculating the amount and the points for each rental, we call the corresponding methods on the Rental object. The method in the figure below is much simpler, easier to understand, and easier to maintain than the original billing code of the power bank leasing project.
2. However, the above code still has room for refactoring. For example, if we wanted to render the results as HTML, we would have to copy the code above and change how the result text is assembled. This is the familiar Ctrl+C / Ctrl+V style of change many programmers fall into: it is fast and gets the feature working, but it litters the system with duplicated code. In particular, copying and pasting duplicates the method's temporary variables, which makes duplicate code almost inevitable. So here we apply the "Replace Temp with Query" refactoring to extract the temporary variables shown in the red box in the figure above.
For each temporary variable shown in the red box in the figure above, we extract a query method. The figure below shows the Statement() method after applying "Replace Temp with Query", together with the two extracted query functions.
3. After these refactoring steps, the Customer class looks like this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace LeasePowerBank
{
    /// <summary>
    /// Customer class
    /// </summary>
    public class Customer
    {
        string Name;                                          // User name
        public List<Rental> listRentals = new List<Rental>(); // Power banks leased by the user

        public Customer(string name)
        {
            Name = name;
        }

        public void AddRental(Rental rental)
        {
            listRentals.Add(rental);
        }

        public string Statement()
        {
            string result = $"{Name} lease bill:\n";
            result += $"Total amount: {GetTotalAmount()}\n";
            result += $"You have earned {GetTotalFrequentRenterPoints()} points this time";
            return result;
        }

        /// <summary>
        /// Calculate the total amount
        /// </summary>
        private decimal GetTotalAmount()
        {
            decimal totalAmount = 0M; // Total amount
            foreach (var item in listRentals)
            {
                totalAmount += item.GetAmount(); // Accumulate the price of each rental
            }
            return totalAmount;
        }

        /// <summary>
        /// Calculate the frequent-renter points
        /// </summary>
        private int GetTotalFrequentRenterPoints()
        {
            int frequentRenterPoints = 0; // User points
            foreach (var item in listRentals)
            {
                frequentRenterPoints += item.GetFrequentRenterPoints(); // Accumulate points
            }
            return frequentRenterPoints;
        }
    }
}
|
https://developpaper.com/code-refactoring-and-unit-testing-refactoring-the-statement-method-again-with-replacing-temporary-variables-with-queries-7/
|
CC-MAIN-2022-40
|
refinedweb
| 410
| 52.29
|
Just want to clarify a few things on this one…?
From the model code explanation:
Now that we’ve instantiated our CSV file writer, we can start adding lines to the file itself! First we want the headers, so we call .writeheader() on the writer object. This writes all the fields passed to fieldnames as the first row in our file.
^^ Q: This is done without passing any arguments in ‘.writeheader()’?
Then we iterate through our big_list of data. Each item in big_list is a dictionary with each field in fields as the keys. We call output_writer.writerow() with the item dictionaries which writes each line to the CSV file.
^^ Q: .writerow() automatically knows that these will be dictionary values and pulls the value data for each key in the list?
access_log comes preformatted as a list with dictionary elements; most CSV data won't come in this format, no?
import csv

with open('logger.csv', 'w') as logger_csv:
    log_writer = csv.DictWriter(logger_csv, fieldnames=fields)
    log_writer.writeheader()
    for item in access_log:
        log_writer.writerow(item)
TIA!
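For what it's worth, the two behaviours asked about above can be checked in isolation. The sketch below (field names and rows invented for illustration) shows that writeheader() takes no arguments, because it reuses the fieldnames the writer was constructed with, and that writerow() looks each field name up as a dictionary key:

```python
import csv
import io

# Hypothetical fields and access_log, standing in for the exercise data
fields = ["time", "address", "request"]
access_log = [
    {"time": "08:39:37", "address": "1.227.124.181", "request": "GET /"},
    {"time": "13:13:34", "address": "198.51.100.7", "request": "GET /logo.png"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fields)
writer.writeheader()           # no arguments: the header row comes from fieldnames
for item in access_log:
    writer.writerow(item)      # values are pulled out of the dict by field name

print(buffer.getvalue())
```

And yes, most real CSV sources arrive as text files rather than ready-made lists of dictionaries; csv.DictReader is the usual way to get back into this dict-per-row shape.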
|
https://discuss.codecademy.com/t/how-does-writeheader-know-the-fields-to-use/463772
|
CC-MAIN-2020-40
|
refinedweb
| 175
| 77.94
|
How to Get a Girl to Fall in Love with You
There are plenty of rules and advice for getting a girl to fall in love with you. This is not about those rules and advice. This article focuses on basic concepts that can be used to strengthen a girl's feeling of love for you. It is love, after all, so allow this to be more of a fluid guideline.
Steps
- Open your eyes. Women are everywhere and many women are looking to fall in love. If you want a girl out there to fall in love with you, chances are a girl wants someone out there to fall in love with her.
- For more advice on keeping your eyes peeled, see How to Find Your Soulmate.
- Don't set expectations. Become like a romantic Buddha. Become desireless. Practice non-attachment. Expect nothing from the girl. This is not to say don't want the girl, it is just don't expect the girl to want you in return (which means you cannot be disappointed, only pleasantly surprised). Love thrives in the absence of pressure (in the form of neediness and clinginess).
- Read How to Stop Being Needy.
- Letting go of your expectations will also help you relax, which will make you more appealing to a girl than if you are uptight and worried.
- For an example of "romantic Buddhism", see How to Sweep a Girl off Her Feet.
- Give her space. This step can be looked at in many ways and can get distorted easily. It isn't about playing hard-to-get. Allow her to be without you and she will then decide to love you on her own terms.
- For some practical tips on doing this, check out How to Tame a Free Spirit.
- If you're shy, you might end up giving her too much space. You still need to express your interest in her (flirt with her, touch her, and when the time is right, kiss her). Just don't smother her. Give her some time to reflect on those moments and realize how awesome they were.
Warnings
- Just because a girl is in love with you doesn't mean she'll treat you well. And just because you got her to fall in love with you doesn't mean you're automatically a good boyfriend. Love is only one part of a healthy relationship--the rest depends on effort and patience.
|
http://www.wikihow.com/Get-a-Girl-to-Fall-in-Love-with-You
|
crawl-002
|
refinedweb
| 408
| 82.65
|
Hello and Welcome to Alpha Electronz .! We provide Tutorials (Post + Videos) about Projects based on Arduino, Raspberry Pi, etc.
Today’s tutorial is about running your Favourite Arduino Uno for a long time on stand-alone! So let’s start.
For the 99% of the Times, Your Arduino based Projects are always connected to a Laptop through USB or a DC Adapter for supply. But sometimes you cannot just supply continuous Power to your Arduino Board because of some reasons.
Then you need to manage the supply provided to your Arduino Efficiently. So we are going to build a circuit which won’t consume all the power from your batteries in just an Hour. There are two steps in this Tutorial,
- Implement the Hardware
- Optimization for low power consumption
Implementing the Hardware
For implementing the hardware for this tutorial we will need a bootloaded ATmega328P IC, oscillator capacitors for the crystal, a breadboard, and a battery. The full list of components required for this tutorial is given below:
- Atmel ATmega328
- 10uF capacitor
- 2x 22pF capacitors
- 10K Ohm resistor
- 220 Ohm resistor
- LED
- 16MHz crystal clock
- Battery holder
- 2 AA batteries
Schematic
Above is the complete schematic of the tutorial. Connect all the components on a breadboard. The circuit should look like this:
Source – OpenHomeAutomation
Testing the circuit
It’s now time to test if the hardware part is working. What I did in this project is to use the Arduino Uno board to program the chip, and then I just “transplanted” the chip onto the breadboard. You can just use the default “blink” sketch to program the microcontroller.
After this is done, just replace the chip on the breadboard, and plug your battery (my battery pack even has a nice on/off switch). The LED should just go on and off every second as expected.
Blink test code (the default Arduino "blink" sketch):

void setup() {
  pinMode(13, OUTPUT);      // on-board LED pin
}

void loop() {
  digitalWrite(13, HIGH);   // LED on
  delay(1000);
  digitalWrite(13, LOW);    // LED off
  delay(1000);
}
Optimizing for Low-power
So now we have an autonomous Arduino system, but it is still consuming way too much power. Indeed, even when the LED is off, the Arduino chip is still active and consumes power. But there are functions on the microcontroller to put it to sleep while it is inactive, and re-activate the chip when we need to change the state of an output or perform some measurements. I tested many solutions to really reduce the power to the lowest value possible, and the best I found is the JeeLib library. You can just download it and install it by placing the folder in your Arduino/libraries/ folder.
// Low-power blink: sleep with JeeLib between LED toggles
#include <JeeLib.h>

// Watchdog interrupt handler required by JeeLib's Sleepy
ISR(WDT_vect) { Sleepy::watchdogEvent(); }

int led = 13;

void setup() {
  pinMode(led, OUTPUT);
}

void loop() {
  digitalWrite(led, HIGH);
  Sleepy::loseSomeTime(5000);   // sleep 5 seconds with the LED on
  digitalWrite(led, LOW);
  Sleepy::loseSomeTime(5000);   // sleep 5 seconds with the LED off
}
You basically just have to include the JeeLib library with:
#include <JeeLib.h>
Then initialize the watchdog with:
ISR(WDT_vect) { Sleepy::watchdogEvent(); }
Finally, you can put the Arduino to sleep for a given period of time with:
Sleepy::loseSomeTime(5000);
Upload the sketch with the Arduino IDE and replace the chip on the breadboard. You should see your Arduino having the same behavior as before (with 5-second intervals). But the difference is that now, when the LED is off, the Arduino chip doesn't use a lot of power. To finish this article, I wanted to actually quantify the power consumption of the system we just built. You can do the exact same by placing a multimeter in series with one of the power lines. For example, I connected the positive pin of the battery to one probe of my multimeter, and the other probe to the positive power rail of the breadboard. Here are the results:
- LED off, without the JeeLib library: 6.7 mA
- LED on, without the JeeLib library: 8.8 mA
- LED off, with the JeeLib library: 43 uA (!)
- LED on, with the JeeLib library: 2.2mA
From these results, we can see that our breadboard-Arduino consumes 6.7 mA when doing nothing without caring about putting it to sleep. For information, that will drain your two batteries in roughly two weeks (2500 mAh / 6.7 mA is about 370 hours). Which is actually not so bad, but we can do better. With the sleep functions, this can be reduced to 43 uA, which is roughly a 150x improvement.
Let’s do some calculations to see how it will impact a real project, for example a temperature sensor. It takes about 500 ms to perform a measurement, at about 2.5 mA of current. Then, the systems sleeps for 10 seconds and the loop starts again. The “mean” is then 0.16 mA over a complete loop. With batteries rated at 2500 mAh, it means in theory the system will last … nearly 2 years without changing the batteries! Of course, some other effects will actually modify this number, but it gives you an idea.
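As a sanity check, the duty-cycle arithmetic above can be reproduced in a few lines (the numbers are copied from the example; it's a rough estimate that ignores battery self-discharge and regulator losses):

```python
# Temperature-sensor duty cycle from the example above
measure_ma = 2.5      # current while measuring (mA)
measure_s = 0.5       # measurement duration (s)
sleep_ma = 0.043      # sleep current, 43 uA expressed in mA
sleep_s = 10.0        # sleep duration (s)
battery_mah = 2500    # rated capacity of the AA cells

# Average current over one full measure+sleep loop
mean_ma = (measure_ma * measure_s + sleep_ma * sleep_s) / (measure_s + sleep_s)

# Runtime on the batteries
years = battery_mah / mean_ma / 24 / 365
print(round(mean_ma, 2), "mA,", round(years, 1), "years")  # 0.16 mA, 1.8 years
```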
Hope you understood this tutorial.
If you like it then subscribe to our newsletter.
If you have any doubt feel free to ask in the comment section.
|
https://engineersasylum.com/t/how-to-run-arduino-uno-for-months-on-a-single-battery/488
|
CC-MAIN-2020-05
|
refinedweb
| 791
| 64
|
i'm trying to read the output from one function into another one.
if i break things down into two steps, call the first function(journal.py) from the command line, and then call the second(ip_list.py) i get the results that i'm looking for.
if i try to import the first and run it in the second the resulting list is empty.
import re
import journal
journal.journal()
ip_list = []
with open('Invalid_names_file') as file1:
print(file1)
a = [re.search(r'((\d+\.)+\d+)', line).group() for line in file1]
print(a)
for x in a:
if x not in ip_list:
ip_list.append(x)
print(ip_list)
<_io.TextIOWrapper
[]
[]
from subprocess import Popen
import os
def journal():
with open('Invalid_names_file', 'w') as Invalid_names_file:
Popen('journalctl -u sshd.service --no-pager --since -168hours\
--until today | grep Invalid', stdout=Invalid_names_file,\
universal_newlines=True, bufsize=1, shell=True)
if os.stat('Invalid_names_file').st_size == 0:
Popen('journalctl -u ssh.service --no-pager --since -168hours\
--until today | grep Invalid', stdout=Invalid_names_file,\
universal_newlines=True, bufsize=1, shell=True)
Invalid_names_file.close()
You should wait for Popen() to finish. Assign its return value to a variable and call wait() on it:
p = Popen('journalctl ...')
p.wait()
When you run the journal script separately, the parent process will only return when all of its children have terminated.
However, Popen() doesn't wait – unless you tell it to. Thus, in your case, the journal() function exits immediately after starting the subprocess, so by the time you're reading the target file, it is still empty or incomplete.
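Here is a minimal standalone demonstration of the fix, using a harmless echo instead of journalctl and a throwaway file name:

```python
import subprocess

def run_to_file(command, path):
    """Run a shell command, blocking until its output is fully written."""
    with open(path, 'w') as out:
        p = subprocess.Popen(command, stdout=out, shell=True,
                             universal_newlines=True)
        p.wait()  # without this, the caller may read the file too early

run_to_file('echo hello', 'demo_output.txt')
with open('demo_output.txt') as f:
    print(f.read().strip())  # hello
```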
|
https://codedump.io/share/76FOtZ8MB3w7/1/incomplete-reads-from-file-written-by-popen-based-subprocess
|
CC-MAIN-2016-44
|
refinedweb
| 256
| 59.4
|
Hey guys,
I've been Googling around for a while and I'm unable to find something that will help me do what I need.
I want a ListView where the footer will expand to fill the screen if there are not enough items to fill the list (And stay in the scroll with a minimum height when its full.
What I'm trying to accomplish:
Once the minimum height is hit, it should scroll.
Any ideas?
Thanks in advance
I understand what you're saying. You'll have to measure a few things to get it to behave the way you want. I came up with a working sample, and posted it on Github.
Essentially the Xaml is pretty straight forward.
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="ListView.ExpandingFooter.Pages.RootPage">
    <ListView x:Name="MyListView" ItemsSource="{Binding People}" RowHeight="40">
        <ListView.ItemTemplate>
            <DataTemplate>
                <ViewCell>
                    <StackLayout Orientation="Horizontal" BackgroundColor="Green" Spacing="0">
                        <Label Text="{Binding FirstName}" />
                        <Label Text="{Binding LastName}" />
                    </StackLayout>
                </ViewCell>
            </DataTemplate>
        </ListView.ItemTemplate>
        <ListView.Footer>
            <StackLayout x:Name="FooterWrapper">
                <Label Text="Centered Text" TextColor="Black" VerticalOptions="CenterAndExpand" HorizontalOptions="CenterAndExpand" />
                <Button x:Name="Button" Text="Add a Person" />
            </StackLayout>
        </ListView.Footer>
    </ListView>
</ContentPage>
But the code behind is doing a little work
using System;
using ListView.ExpandingFooter.ViewModels;

namespace ListView.ExpandingFooter.Pages
{
    public partial class RootPage
    {
        private RootPageViewModel _viewModel;

        public RootPage()
        {
            InitializeComponent();
            Button.Clicked += (sender, args) => AddAPerson();
        }

        protected override void OnAppearing()
        {
            base.OnAppearing();
            _viewModel = new RootPageViewModel();
            BindingContext = _viewModel;
        }

        protected override void OnBindingContextChanged()
        {
            base.OnBindingContextChanged();
            UpdateFooterHeight();
        }

        private void AddAPerson()
        {
            var i = _viewModel.People.Count + 1;
            _viewModel.People.Add(new PersonViewModel
            {
                FirstName = string.Format("John {0}", i),
                LastName = string.Format("John {0}", i)
            });
            UpdateFooterHeight();
        }

        private void UpdateFooterHeight()
        {
            const int minHeight = 80;
            // why the extra 2? Good question, I'm not sure what's accounting for that extra spacing.
            var requestedHeight = Math.Floor(App.Device.Height - (MyListView.RowHeight * _viewModel.People.Count) - App.Device.StatusBarHeight - 2);
            FooterWrapper.HeightRequest = requestedHeight >= minHeight ? requestedHeight : minHeight;
        }
    }
}
Note, I added the App.Device class, and set the Width and Height from each specific platform.
Answers
Currently, I'm thinking a custom StackLayout that allows me to use datatemplate, adding an expand label to the end, and wrapping it all in a scrollview. Just want to know if anyone has a better idea.
hey @RossGrambo.4106 I have this accordion sample in case if might be helpful. Feel free to fork it and extend with the behavior you want
Let me know
Thanks! I'll see what I can get going with it
Can't get too much from it unfortunately. I'm looking for a footer to always stay at the bottom of a list, and have its height expand when there is a small number of items in the list. Your solution would work great if I had a set height on my footer.
I actually had another list I wanted an accordion for, so this sample would be perfect for that!
Anyone else got any ideas?
This is an accordion control, and one guy made a weather control with it in horizontal orientation; check his post.
Hmm.. I think I might have been unclear with my question. I'm not looking for an accordion effect.
I want one extra item always at the end of a listview that will auto-expand when there is extra screen height. Meaning when there is 1 item in the listview, its height would be 500, when there are two, it would be 400, 3 -> 300, 4-> 200, 5 -> 100, 6 -> 100, 7 -> 100, 8 -> 100, etc.
(Supposing the total height of the listview is 600, each entry is 100, and the minimum height is set to 100.)
Guess there's nothing like this out there already
I haven't see anything like that but it might be possible. Have you tried extending one of those examples and have it your own?
Choose one, fork the code and let's prototype it out
Can you not use the footer in that instance? Simply don't expand the ListView, and put a StackLayout below it.
@ChaseFlorell
This is a big part of the problem that I can't get around:
ListView always expands. Also, putting a listview inside of a scrollview causes it to expand even more into the additional space allowed from the scrollview.
Meaning I can't scale the listview correctly, meaning the label/stacklayout under it cannot expand correctly.
ListView.ExpandingFooter
Wow! Thank you! Thats exactly what I was looking for!
Hi @ChaseFlorell,
I have a similar implementation to be done. However, my listview row height is dynamis (I use HasUnevenRows=True). How can I achieve the same with LayoutOptions?
Just setting up the LayoutOptions of the StackLayout, Vertical=FillAndExpand, Horizontal=FillAndExpand does not help to get the right result.
Could you please help?
Anyone has come across similar issue. Kindly help.
Thank you in advance!
Sincerely,
Sagar
|
https://forums.xamarin.com/discussion/comment/147269/
|
CC-MAIN-2019-39
|
refinedweb
| 861
| 67.04
|
I have a list/array of x and y coordinates, for example:
x = [x1, x2, x3,...]
y = [y1, y2, y3,...]
for i in x1:
if i <= 40 and i >= -40:
print "True"
else:
x.remove(i)
for i in y1:
if i <= 20 and i >=- 20:
print "True"
else:
y1.remove(i)
zeta_list = np.column_stack((x1, y1))
[[x1, y1], [x2, y2], [x3, y3], ...]
Form a boolean selection mask:
mask = ~((x > 40) | (x < -40) | (y > 20) | (y < -20))
then, to select values from
x and
y where
mask is True:
x, y = x[mask], y[mask]
When x is a NumPy array, (x > 40) returns a boolean array of the same shape as x which is True where the elements of x are greater than 40. Note the use of | for bitwise-or and ~ for not (boolean negation).
Alternatively, by De Morgan's law, you could use
mask = ((x <= 40) & (x >= -40) & (y <= 20) & (y >= -20))
NumPy operations are performed element-wise. So mask is True wherever an element of x is between -40 and 40 and the corresponding element of y is between -20 and 20.
For example,
import numpy as np

x = [-50, -50, 30, 0, 50]
y = [-30, 0, 10, 30, 40]

# change the lists to NumPy arrays
x, y = np.asarray(x), np.asarray(y)

# mask = ~((x > 40) | (x < -40) | (y > 20) | (y < -20))
mask = ((x <= 40) & (x >= -40) & (y <= 20) & (y >= -20))
x, y = x[mask], y[mask]
yields
In [35]: x
Out[35]: array([30])

In [36]: y
Out[36]: array([10])
with
In [37]: mask
Out[37]: array([False, False,  True, False, False], dtype=bool)
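Tying this back to the zeta_list from the question: once the arrays are filtered with the same mask, np.column_stack pairs them up (same toy data as above):

```python
import numpy as np

x = np.asarray([-50, -50, 30, 0, 50])
y = np.asarray([-30, 0, 10, 30, 40])

# keep only the pairs where x is in [-40, 40] AND y is in [-20, 20]
mask = (x <= 40) & (x >= -40) & (y <= 20) & (y >= -20)
zeta_list = np.column_stack((x[mask], y[mask]))
print(zeta_list)  # [[30 10]]
```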
|
https://codedump.io/share/Lqs75wi7SLQa/1/removing-entries-from-arrays-in-a-parallel-manner
XML-RPC
NOTE: All credit for this code goes to Crast in irc.freenode.net:#django...
This uses SimpleXMLRPCDispatcher which is part of the standard Python lib in 2.4 (And possibly earlier versions).
In discussing ways of handling XML-RPC for Django, I realised I really needed a way to do it without patching Django's code. Crast in #django came up with a great solution, which I have modified and tweaked a bit.
I've included it here. Feel free to fiddle with it and make it your own ... All this code is post-mr
Any crappy & garbage code is completely mine; I'm still learning Python so bear with me. The hacks I added for self-documentation output are just that; any improvements to them would probably be a good thing.
First, setup your urls.py to map an XML-RPC service:
urlpatterns = patterns('',
    # XML-RPC
    (r'^xml_rpc_srv/', 'yourproject.yourapp.xmlrpc.rpc_handler'),
)
Then, in the appropriate place, create a file called xmlrpc.py
# Patchless XMLRPC Service for Django
# Kind of hacky, and stolen from Crast on irc.freenode.net:#django
# Self documents as well, so if you call it from outside of an XML-RPC Client
# it tells you about itself and its methods
#
# Brendan W. McAdams <brendan.mcadams@thewintergrp.com>

# SimpleXMLRPCDispatcher lets us register xml-rpc calls w/o
# running a full XMLRPC Server.  It's up to us to dispatch data

from SimpleXMLRPCServer import SimpleXMLRPCDispatcher
from django.http import HttpResponse

# Create a Dispatcher; this handles the calls and translates info to function maps
#dispatcher = SimpleXMLRPCDispatcher() # Python 2.4
dispatcher = SimpleXMLRPCDispatcher(allow_none=False, encoding=None) # Python 2.5

def rpc_handler(request):
    """
    the actual handler:
    if you setup your urls.py properly, all calls to the xml-rpc service
    should be routed through here.
    If post data is defined, it assumes it's XML-RPC and tries to process as such.
    Empty post assumes you're viewing from a browser and tells you about the service.
    """
    response = HttpResponse()
    if len(request.POST):
        response.write(dispatcher._marshaled_dispatch(request.raw_post_data))
    else:
        response.write("<b>This is an XML-RPC Service.</b><br>")
        response.write("You need to invoke it using an XML-RPC Client!<br>")
        response.write("The following methods are available:<ul>")
        methods = dispatcher.system_listMethods()
        for method in methods:
            # right now, my version of SimpleXMLRPCDispatcher always
            # returns "signatures not supported"... :(
            # but, in an ideal world it will tell users what args are expected
            sig = dispatcher.system_methodSignature(method)
            # this just reads your docblock, so fill it in!
            help = dispatcher.system_methodHelp(method)
            response.write("<li><b>%s</b>: [%s] %s" % (method, sig, help))
        response.write("</ul>")
        response.write('<a href=""><img src="" border="0" alt="Made with Django." title="Made with Django."></a>')
    response['Content-length'] = str(len(response.content))
    return response

def multiply(a, b):
    """
    Multiplication is fun!
    Takes two arguments, which are multiplied together.
    Returns the result of the multiplication!
    """
    return a * b

# you have to manually register all functions that are xml-rpc-able with the dispatcher
# the dispatcher then maps the args down.
# The first argument is the actual method, the second is what to call it from the XML-RPC side...
dispatcher.register_function(multiply, 'multiply')
That's it!
You can pretty much write a standard python function in there, just be sure to register it with the dispatcher when you're done.
Here's a quick and dirty client example for testing:
import sys
import xmlrpclib

rpc_srv = xmlrpclib.ServerProxy("")
a, b = int(sys.argv[1]), int(sys.argv[2])
result = rpc_srv.multiply(a, b)
print "%d * %d = %d" % (a, b, result)
Based on experience, I do recommend that you use Dictionaries for your args rather than long args, but I think that's personal preference (it allows named arguments, among other things).
https://code.djangoproject.com/wiki/XML-RPC?version=16
Log message:
Remove pkgviews: don't set PKG_INSTALLATION_TYPES in Makefiles.
Log message:
Add version to DEPENDS.
Log message:
Drop superfluous PKG_DESTDIR_SUPPORT, "user-destdir" is default these days.
Mark packages that don't or might probably not have staged installation.
Log message:
Update p5-XML-RSS-LibXML to 0.3102.
Change from previous:
0.3102 - 14 Sep 2011
* Allow uppercase letters as first character of namespace prefix
[] (arc)
Log message:
Import perl module XML::RSS::LibXML as wip/p5-XML-RSS-LibXML.
XML::RSS::LibXML uses XML::LibXML (libxml2) for parsing RSS instead of
XML::RSS' XML::Parser (expat), while trying to keep interface
compatibility with XML::RSS.
http://pkgsrc.se/wip/p5-XML-RSS-LibXML
Closed Bug 330863 Opened 15 years ago Closed 15 years ago
vertical scroll bar appears on LHS instead of RHS
Categories
(Core :: Layout, defect, P2)
Tracking
()
mozilla1.9alpha1
People
(Reporter: eyalroz, Assigned: dbaron)
References
(Depends on 1 open bug, Blocks 1 open bug)
Details
(Whiteboard: [patch])
Attachments
(1 file)
In the last few days, sometimes the vertical scrollbar appears on the LHS of the viewport instead of the RHS like the gods had intended it to. Must be some sort of BiDi thing.
Related to bug 192767?
Assignee: guifeatures → nobody
Component: XP Apps: GUI Features → Layout
Product: Mozilla Application Suite → Core
QA Contact: layout
Could be. I'm too lazy to check though.
So, yeah, I know I changed the behavior. I mentioned it on bug 192767 and bug 221396. The old behavior was:
* scrollbar side on the canvas is based on the default direction
* scrollbar side on things with 'overflow:auto' or 'overflow:scroll' is based on 'direction'
which is glaringly inconsistent. I made it uniformly based on direction: BODY for HTML documents, and the root elsewhere. I could revert to some other behavior. Frankly, what I think we really need is usability testing for what's better for bidi users; I'm tempted to implement a preference (with 4 options, to also satisfy bug 265276: end-edge in current direction, end-edge in default direction, left, and right) to make that easier.
Summary: vertical scroll bar appears on LHS instead of LHS → vertical scroll bar appears on LHS instead of RHS
(Note that an advantage of having it vary based on 'direction' is that which way you can horizontally scroll is based on 'direction', so it provides an indication of that.) What does WinIE do?
> which is glaringly inconsistent. I made it uniformly based on direction: BODY
> for HTML documents, and the root elsewhere. I could revert to some other
> behavior.

You should. The thing is, I don't want the direction of, say, the email message display scrollbar to change just because I've moved from an LTR to an RTL message. And if I change the message direction then I fuck up the alignment. I want RHS vertical scrollbars always (unless my entire Windows interface is RTL, which it isn't; but some have it that way, and they would probably like LHS scrollbars always). Maybe you could post a poll somewhere, I dunno.
I know I have a strong opinion on this. I'm just not sure what it is. Anyway, this affects Mac as well, so All/All.
OS: Windows XP → All
Hardware: PC → All
This adds a pref and makes its default value the sensible behavior that seems closest to what I could determine old behavior was from reading the code. (By closest, I mean the same for the root scrollbar when document.dir was not set in the DOM, which is the 99% case.) I'm not sure if the "bidi.direction" pref is really meaningful, though.
Assignee: nobody → dbaron
Status: NEW → ASSIGNED
Attachment #216176 - Flags: superreview?(roc)
Attachment #216176 - Flags: review?(smontagu)
Priority: -- → P2
Whiteboard: [patch]
Target Milestone: --- → mozilla1.9alpha
Comment on attachment 216176 [details] [diff] [review]
patch

> +enum nsPresContext_CachedBoolPrefType

Can't we make these enums actual members of nsPresContext, so we can do nsPresContext::CachedBoolPrefType instead of this _ pseudo-namespace?
Attachment #216176 - Flags: superreview?(roc) → superreview+
That would require changing all the callers...
Checked in to trunk.
Status: ASSIGNED → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
(In reply to comment #7)
> I'm not sure if the "bidi.direction" pref is really meaningful, though.

It's not. I have had it in the back of my mind for years as something that should be removed, thanks for the reminder.
Hrm. So are there some localizations (or something) in which we want scrollbars on the left by default? How should we determine that?
As far as I know, localizations that want an RTL UI achieve it by setting window { direction: rtl; } (among other things) in intl.css. Setting bidi.direction makes typical LTR pages with no explicit direction misrender slightly. Sorry, I don't seem to have thought this through properly when reviewing.
Did localizations with RTL UI have scrollbars on the left in the past? If so, what code caused them to do so? The only code I found depended on bidi.direction, so that's what I used.
> If so, what code caused them to do so? The only code I found depended on
> bidi.direction, so that's what I used.

I would be fine with using bidi.direction for this purpose alone, and not using it to change the default direction of content, but maybe there is something else we can use, e.g. locale.dir.
(In reply to comment #15)

as much as i remember (i started using mozilla with version 0.9.4) the main scroll bar was always at the right with RTL UI, regardless of the page direction. i prefer it that way, because i don't want to send the mouse to the scroll bar, just to find out it's on the other side of the screen because the current page has a different direction. i second comment 5 on this.
(In reply to comment #7)
> Created an attachment (id=216176) [edit]
> patch
>
> This adds a pref and makes its default value the sensible behavior that seems
> closest to what I could determine old behavior was from reading the code.

I'm not happy with the scrollbar on RTL multiline <select>s being on the right (by default). It makes them horribly ugly, IMO. I think the old behaviour, where the main scrollbar was always on the right (at least when the chrome was LTR), and scrollbars on inner elements went according to the element's direction, is what most people would expect.
Uri, please open another bug about the default. As for me, I really don't care whether it's the previous behavior or the current one.
(In reply to comment #18)
> Uri, please open another bug about the default. As for me, I really don't care
> whether it's the previous behavior or the current one.

This is not about the default. Currently, none of the available options is the same as the previous behavior (which was the best, IMO). This is why I'm bringing it up here.
So the old behavior may have been good for selects (and textareas?), but I don't see why it should differ between DIVs with overflow:auto and IFRAMEs, which the old behavior did differ between.
(In reply to comment #20)
> So the old behavior may have been good for selects (and textareas?), but I
> don't see why it should differ between DIVs with overflow:auto and IFRAMEs,
> which the old behavior did differ between.

Because, in my mind, as a user, the "main" scrollbar is associated with the chrome/application (perhaps even the OS), whereas anything inside the content area is associated with the content. That said, DIVs and IFRAMEs don't bother me that much (I can see them going either way). My problem is with selects (and, indeed, textareas), which, IMO, should always respect the content direction.
(In reply to comment #22)
> Won't be fixed in mozilla1.8/firefox2.0?

This bug is a result of the fix for bug 192767, which was fixed on the trunk only. Hence, this bug is trunk-only and there's nothing to fix on the branch.
https://bugzilla.mozilla.org/show_bug.cgi?id=330863
Use a function in qml from another qml
Hello,
I want to use a function in QML that is written in another QML file, but it won't work.
the example
The function is written in the file "function.qml" in a separate directory called components.
the function looks like this
function assign(id) {
    value = id + 20
    return value;
}
now i work in another qml file it looks like that.
import "components/"
property int test: function.assign(number)
if i now run the qml I got an message in the qml-log "ReferenceError: Can't find variable: function"
I don't understand it. Does anyone have an idea?
if i wrote the function in the same file it work.
can i call a function like this?
NOTE: this is just an example for the function
- dheerendra Qt Champions 2017 last edited by
Welcome to the forum. Did you create an instance of the second QML component inside the first QML file? You need to create an instance, assign it an id, and call the function through that id.
Try this.
=== TestFunction.qml ===

Rectangle {
    width: 100
    height: 100

    function callme(man, ok) {
        console.log("I am called man=" + man + " ok=" + ok)
    }
}

=== main.qml ===

Rectangle {
    width: 640
    height: 480

    TestFunction { id: func }

    Button {
        text: qsTr("Hello World")
        anchors.horizontalCenter: parent.horizontalCenter
        anchors.verticalCenter: parent.verticalCenter
        onClicked: {
            func.callme("pthinks.com", "Bangaluru")
        }
    }
}
thanks works fine.
should be something simple like this.
[SOLVED]
- Raghvendra last edited by
@dheerendra great dheerendra it worked for me Thanks a lot :)
https://forum.qt.io/topic/46052/use-a-function-in-qml-from-another-qml/1
symlink - make symbolic link to a file
#include <unistd.h> int symlink(const char *path1, const char *path2);
The symlink() function creates a symbolic link. Its name is the pathname pointed to by path2, which must be a pathname that does not name an existing file or symbolic link. The contents of the symbolic link are the string pointed to by path1.
Upon successful completion, symlink() returns 0. Otherwise, it returns -1 and sets errno to indicate the error.
The symlink() function will fail if:
- [EACCES]
- Write permission is denied in the directory where the symbolic link is being created, or search permission is denied for a component of the path prefix of path2.
- [EEXIST]
- The path2 argument names an existing file or symbolic link.
- [EIO]
- An I/O error occurs while reading from or writing to the file system.
- [ELOOP]
- Too many symbolic links were encountered in resolving path2.
- [ENAMETOOLONG]
- The length of the path2 argument exceeds {PATH_MAX}, or a pathname component is longer than {NAME_MAX}.
- [ENOENT]
- A component of path2 does not name an existing file or path2 is an empty string.
- [ENOSPC]
- The directory in which the entry for the new symbolic link is being placed cannot be extended because no space is left on the file system containing the directory, or the new symbolic link cannot be created because no space is left on the file system which will contain the link, or the file system is out of file-allocation resources.
- [ENOTDIR]
- A component of the path prefix of path2 is not a directory.
- [EROFS]
- The new symbolic link would reside on a read-only file system.
The symlink() function may fail if:
- [ENAMETOOLONG]
- Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}.
None.
Like a hard link, a symbolic link allows a file to have multiple logical names. The presence of a hard link guarantees the existence of a file, even after the original name has been removed. A symbolic link provides no such assurance; in fact, the file named by the path1 argument need not exist when the link is created. A symbolic link can cross file system boundaries.
Normal permission checks are made on each component of the symbolic link pathname during its resolution.
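As an illustration (not part of the original page), the behaviour described above can be observed from Python, whose os.symlink is a thin wrapper over this call:

```python
import errno
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "data.txt")   # path1: need not exist yet
link = os.path.join(d, "latest")       # path2: must not exist yet

os.symlink(target, link)               # wraps symlink()
assert os.readlink(link) == target     # contents are just the string path1

try:
    os.symlink(target, link)           # path2 now names a symbolic link
except FileExistsError as e:
    assert e.errno == errno.EEXIST     # the [EEXIST] case above
```

Note that the first os.symlink succeeds even though data.txt does not exist, matching the statement that the file named by path1 need not exist when the link is created.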
None.
lchown(), link(), lstat(), open(), readlink(), <unistd.h>.
http://pubs.opengroup.org/onlinepubs/007908799/xsh/symlink.html
Mouse is not visible.
public class MyGame : Game
{
    protected override void Initialize()
    {
        IsMouseVisible = true;
    }
}

By setting a breakpoint I can see it is already set to true. When I change it to false and update it in the Draw or Update method, it automatically changes back to true.

But I never see the mouse. What do I have to do to see the mouse?
What MonoGame version are you on? Also what platform (Windows DX or cross platform desktop)? I remember recently testing IsMouseVisible on develop and it was working. I think I set it in the Game1 constructor though, maybe that makes a difference. I can look at it this weekend.
Hello, I solved it with logic: I make a sprite and draw it every game loop at the mouse position.

I program with Cross Platform Desktop AND Windows DX, and also 3 others.

Because of the good performance and the extra possibilities (animated mouse cursor), I'll stay with this logical solution. Thanks!!!
http://community.monogame.net/t/mouse-never-see/9605/3
Migrates Oracle dbs to PostgreSQL, MySQL and Sqlite
Project description
CBL Migrator
Small SQLAlchemy based library that migrates Oracle DBs to MySQL, PostgreSQL and SQLite. Used in ChEMBL dumps generation process.
to use it, as a Python library:
from cbl_migrator import DbMigrator origin = 'oracle://{user}:{pass}@{host}:{port}/{sid}?encoding=utf8' dest = 'postgresql://{user}:{pass}@{host}:{port}/{dbname}?client_encoding=utf8' migrator = DbMigrator(origin, dest, ['excluded_table1', 'excluded_table2'], n_workers=4) migrator.migrate()
directly from the command line:
cbl-migrator oracle://{user}:{pass}@{host}:{port}/{sid}?encoding=utf8 postgresql://{user}:{pass}@{host}:{port}/{dbname}?client_encoding=utf8 --n_workers 4
What it does (in order of events)
- Copies tables from origin to dest using the closest data type for each field. No constraints except PKs are initially copied across.
- Table contents are migrated from origin to dest tables. In parallel.
- If the data migration is successful it will first generate the constraints and then the indexes. Any index on a field with a previously created UK will be skipped (UKs are implemented as unique indexes).
- It logs every time it was not possible to migrate an object, e.g.,
(psycopg2.OperationalError) index row size 2856 exceeds maximum 2712 for index.
What it does not do
- It won't migrate any table without a PK. It may hang with a table that lacks a PK and contains a UK field referenced as an FK in another table.
- It does not try to migrate server default values.
- It does not set autoincremental fields.
- It does not try to migrate triggers nor procedures.
SQLite
SQLite can not:
- concurrently write
- alter table ADD CONSTRAINT
So only one core is used when migrating to it. All constraints are generated at the time of generating the destination tables and it sequentially inserts rows in tables in correct FKs order.
MySQL
CLOBs are migrated to LONGTEXT.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/cbl-migrator/
Python and Flask beginner and completely new to Python Anywhere so please be really basic in your responses. I followed a tutorial to get Flask up and running on my PythonAnywhere account but then realized it was going to be really cumbersome to build out the application on PA so I built it locally. I've uploaded all of the changes to PA, fixed the first issue/ error message but now I'm stumped on the second. I've gone through all of the search results for similar issues and none of the solutions are panning out for me. I'm guessing it's a simple issue with my configuration in WSGI or the Python script interface. Please help and thanks in advance. - Scott
Error message is as follows:
2018-10-31 23:27:45,927: Error running WSGI application
2018-10-31 23:27:45,932: ImportError: cannot import name 'app'
2018-10-31 23:27:45,932:   File "/var/www/larsen_pythonanywhere_com_wsgi.py", line 111, in <module>
2018-10-31 23:27:45,933:     from WW_RSVP import app as application  # noqa
Relevant part of WSGI file:
import sys

path = '/home/larsen/WW_RSVP'
if path not in sys.path:
    sys.path.insert(0, path)
    # sys.path.append(path)

from WW_RSVP import app as application  # noqa
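Not part of the original post, but to make the import mechanism concrete: for `from WW_RSVP import app` to succeed, a file WW_RSVP.py on sys.path must define a module-level name `app`; if importing that file raises, or `app` is missing or misspelled, the loader reports exactly "cannot import name 'app'". A minimal self-contained sketch, using a stand-in class instead of a real Flask app:

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway directory holding a WW_RSVP.py that defines `app`,
# mirroring the /home/larsen/WW_RSVP layout the WSGI file expects.
d = tempfile.mkdtemp()
with open(os.path.join(d, "WW_RSVP.py"), "w") as f:
    f.write(textwrap.dedent("""
        class _FakeFlask:          # stand-in so the sketch needs no Flask install
            pass
        app = _FakeFlask()         # the name the WSGI file imports
    """))

if d not in sys.path:
    sys.path.insert(0, d)

# Same line as in the WSGI file.
from WW_RSVP import app as application
```

If the module body raised during import (for example a NameError from Flask(name)), the import of `app` would fail with the ImportError shown in the log.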
Can't seem to get this to format properly but everything after this is the WW_RSVP.py (main flask file) within the /home/larsen/WW_RSVP directory.
import gspread, datetime
from oauth2client.service_account import ServiceAccountCredentials
from flask import Flask, render_template, request, make_response
from flask import Flask, flash, redirect, render_template, request, session, make_response
from flask_session import Session
from werkzeug.exceptions import default_exceptions
from tempfile import mkdtemp

app = Flask(__name__)
# Ensure templates are auto-reloaded
app.config["TEMPLATES_AUTO_RELOAD"] = True
scope = ['']
scope = ['', '']
creds = ServiceAccountCredentials.from_json_keyfile_name('client_secret.json', scope)
client = gspread.authorize(creds)
# Configure session to use filesystem (instead of signed cookies)
app.config["SESSION_FILE_DIR"] = mkdtemp()
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
Session(app)
spreadsheet = client.open('WaynewoodReservations')

def worksheet(sheet):
    return spreadsheet.worksheet(sheet)
lots = ['2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '14', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', 'C']
rows_to_lots = {2: 'C', 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 7, 9: 8, 10: 9, 11: 10, 12: 11, 13: 12, 14:: 51, 50: 52, 51: 53, 52: 54, 53: 55, 54: 56, 55: 57, 56: 58, 57: 59, 58: 60}
lots_to_rows = {'C': 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 12, 12: 13, 14: 14, 16: 15, 17: 16, 18: 17, 19: 18, 20: 19, 21: 20, 22: 21, 23: 22, 24: 23, 25: 24, 26: 25, 27: 26, 28: 27, 29: 28, 30: 29, 31: 30, 32: 31, 33: 32, 34: 33, 35: 34, 36: 35, 37: 36, 38: 37, 39: 38, 40: 39, 41: 40, 42: 41, 43: 42, 44: 43, 45: 44, 46: 45, 47: 46, 48: 47, 49: 48, 51: 49, 52: 50, 53: 51, 54: 52, 55: 53, 56: 54, 57: 55, 58: 56, 59: 57, 60: 58}
def get_worksheet_titles(spreadsheet):
    worksheet_list = spreadsheet.worksheets()
    worksheet_titles = []
    for worksheet in worksheet_list:
        title = str(worksheet).split('\'')
        worksheet_titles.append(title[1])
    return worksheet_titles

def make_reservation(event, lot, number):
    l = lots_to_rows[lot]
    worksheet(event).update_cell(l, 4, number)
@app.route("/", methods=["GET", "POST"])
def index():
    if 'lot' not in locals():
        if 'lot' in request.cookies:
            lot = request.cookies.get('lot')
            if lot == 'C':
                pass
            else:
                lot = int(lot)
    if request.method == "POST":
        print('Triggered top if POST')
        if not request.form.get("event"):
            return apology("Please choose an event.", 400)
        try:
            number = request.form.get("number")
        except:
            pass
        try:
            number = request.form.get("num")
        except:
            pass
        if 'number' not in locals():
            # elif not request.form.get("number") or not request.form.get("num"):
            return apology("How many people are coming?", 400)
        elif not request.form.get("lot"):
            if not 'lot' in locals():
                return apology("Which lot are you RSVP'ing for?", 400)
        event = request.form.get("event")
        try:
            number = int(request.form.get("num"))
        except:
            number = int(request.form.get("number"))
        if 'lot' not in locals():
            lot = int(request.form.get("lot"))
        try:
            make_reservation(event, lot, number)
            reservation = True
            # f'Reservation is set to {reservation}'
        except:
            print('Entry didn\'t work.')
        if reservation == True:
            resp = make_response(render_template("index.html", worksheet_titles=get_worksheet_titles(spreadsheet), lot=lot, event=event, number=number))
            if 'lot' not in request.cookies:
                expire_date = datetime.datetime.now()
                expire_date = expire_date + datetime.timedelta(days=9000)
                resp.set_cookie('lot', str(lot), expires=expire_date)
            return resp
    else:
        print('Triggered bottom except')
        return render_template("index.html", worksheet_titles=get_worksheet_titles(spreadsheet), lots=lots)
if __name__ == '__main__':
    app.run()
https://www.pythonanywhere.com/forums/topic/13495/
The tutorial, along with other indispensable documentation like the
library reference and such, is also available in a number of different
formats at. The Adobe Acrobat
There is currently no good tutorial for the mac-specific features of Python, but to whet your appetite: it has interfaces to many MacOS toolboxes (quickdraw, sound, quicktime, open scripting, etc) and various portable toolboxes are available too (Tk, complex numbers, image manipulation, etc). Some annotated sample programs are available to give you an idea of Python's power.
Python IDE, an integrated development environment with editor, debugger, class browser, etc. Unfortunately the IDE is not yet documented here. Fortunately, however, it does not need much documentation, so your best bet is to try it.
PythonInterpreter and it is recognizable by the "16 ton" icon. You start the interpreter in interactive mode by double-clicking its icon:
This should give you a text window with an informative version string and a prompt, something like the following:
Python 1.5.1 (#122 Aug 27, 1997) [CW PPC w/GUSI MSL]
Copyright 1991-1997 Stichting Mathematisch Centrum, Amsterdam
>>>

The version string tells you the version of Python, whether it was built for PPC or 68K Macs and possibly some options used to build the interpreter. If you find a bug or have a question about how the interpreter works it is a good idea to include the version information in your message.
At the prompt you can type interactive python commands. See the tutorial for more information. The interactive window works more-or-less like a Communication Toolbox or Telnet window: you type commands at the bottom and terminate them with the [return] or [enter] key. Interpreter feedback also appears at the bottom of the window, and the contents scroll as output is added. You can use copy and paste in the normal way, but be sure to paste only at the bottom of the document.
SimpleText.
For more serious scripts, though, it is advisable to use a programmers
editor, such as
BBEdit or
Alpha. BBEdit is
my favorite: it comes in a commercial version but also in a
fully-functional free version
BBEdit Lite. You can
download it from the BareBones
site. The free version will probably provide all the functionality
you will ever need. Besides the standard edit facilities it has
multi-file searches and many other goodies that can be very handy when
editing programs.
After you have created your script in the editor of your choice you drop it on the interpreter. This will start the interpreter executing the script, again with a console window in which the output appears and in which you can type input if the script requires it. Normally the interpreter will close the window and quit as soon as the script is done executing, see below under startup options for a way to change this.
There is a BBEdit extension available that allows you to run Python scripts more-or-less straight from your BBEdit source window. Check out the Mac:Tools:BBPy folder.

It is a good idea to have the names of all your scripts end in .py. While this is not necessary for standalone scripts it is needed for modules, and it is probably a good idea to start the habit now.
If you do not like to start the Python interpreter afresh for each
edit-run cycle you can use the
import statement and
reload() function to speed things up in some cases. Here
is Guido's original comment for how to do this, from the 1.1 release
notes:
Make sure the program is a module file (filename must be a Python identifier followed by '.py'). You can then import it when you test it for the first time. There are now three possibilities: it contains a syntax error; it gets a runtime error (unhandled exception); or it runs OK but gives wrong results. (If it gives correct results, you are done testing and don't need to read the rest of this paragraph. :-) Note that the following is not Mac-specific -- it's just that on UNIX it's easier to restart the entire script so it's rarely useful.
Recovery from a syntax error is easy: edit the file and import it again.
Recovery from wrong output is almost as easy: edit the file and, instead of importing it, call the function reload() with the module name as argument (e.g., if your module is called foo, type reload(foo)).
Recovery from an exception is trickier. Once the syntax is correct, a 'module' entry is placed in an internal table, and following import statements will not re-read the file, even if the module's initialization terminated with an error (one reason why this is done is so that mutually recursive modules are initialized only once). You must therefore force re-reading the module with reload(). However, if this happens the first time you try to import the module, the import statement itself has not completed, and your workspace does not know the module name (even though the internal table of modules does!). The trick is to first import the module again, then reload it. For instance, import foo; reload(foo). Because the module object already exists internally, the import statement does not attempt to execute the module again -- it just places it in your workspace.
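The import-then-reload cycle can be sketched in modern Python, where the builtin reload() described above has moved to importlib (a minimal illustration of my own, using a throwaway module named foo):

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True          # keep the sketch free of .pyc caching effects

# Create a throwaway module file foo.py in a temporary directory.
d = tempfile.mkdtemp()
with open(os.path.join(d, "foo.py"), "w") as f:
    f.write("answer = 41\n")

sys.path.insert(0, d)
import foo                              # first import: reads foo.py
assert foo.answer == 41

with open(os.path.join(d, "foo.py"), "w") as f:
    f.write("answer = 42\n")            # edit the module on disk

import foo                              # no effect: the module is cached
assert foo.answer == 41

importlib.reload(foo)                   # force re-reading the file
assert foo.answer == 42
```

The second import is a no-op precisely because of the internal module table described above; only reload() re-executes the file.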
If you give your script file the creator 'Pyth' and type 'TEXT', you can double-click your script and it will automatically invoke the interpreter. If you use BBEdit you can tell it about the Python file type by adding it to the "file types" sections of the preferences. Then, if you save a file for the first time you can tell BBEdit to save the file as a Python script through the "options" choice of the save dialog.
The Scripts folder contains a script fixfiletypes that will recursively traverse a folder and set the correct creator and type for all files ending in .py.
Older releases of Python used the creator code 'PYTH' instead of 'Pyth'. If you still have older Python sources on your system and named them with the '.py' extension, the fixfiletypes script will correct them.
KeyboardInterrupt exception. Scripts may, however, turn off this behaviour to facilitate their own event handling. Such scripts can only be killed with the command-option-escape shortcut.
The options modify the interpreters behaviour in the following way:
sys.argv. Sys.argv[0] is always the name of the script being executed, additional values can be passed here. Quoting works as expected.
Warning: redirecting standard input or standard output in the command-line dialog does not work. This is due to circumstances beyond my control, hence I cannot say when this will be fixed.The default options are also settable on a system-wide basis, see the section on editing preferences.
sys.path, contains the folders Python will search when you import a module. The path is settable on a system-wide basis (see the preferences section), and normally comprises the current folder (where the script lives), the Lib folder and some of its subfolders and possibly some more.
By the way: the "standard file" folder, the folder that is presented to the user initially for an open or save dialog, does not follow the Python working directory. Which folder is initially shown to the user is usually one of (a) the application folder, (b) the "Documents" folder or (c) the folder most recently used for such a dialog (in any Python program). This is standard MacOS behaviour, so don't blame Python for it. The exact behaviour is settable through a control panel since System 7.5.
PythonStartup: this file is executed when you start an interactive interpreter. In this file you could import modules you often use and other such things.
Compiled modules with creator 'Pyth' and type 'PYC ' load faster when imported (because they do not have to be parsed). The Lib folder contains a script compileall.py; running this script will cause all modules along the Python search path to be precompiled, which will speed up your programs. Compiled files are also double-clickable.
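The same precompilation step still exists in modern Python through the compileall module (which writes .pyc files into a __pycache__ folder rather than setting Mac creator/type codes). A minimal sketch, using a throwaway folder rather than the real module search path:

```python
import compileall
import pathlib
import tempfile

# Byte-compile every .py file under a folder -- the modern equivalent of
# running the compileall.py script over the module search path.
folder = tempfile.mkdtemp()
pathlib.Path(folder, "greet.py").write_text("def hi():\n    return 'hi'\n")

ok = compileall.compile_dir(folder, quiet=1)  # truthy when all files compiled
cached = list(pathlib.Path(folder, "__pycache__").glob("greet.*.pyc"))
print(bool(ok), len(cached))
```

Running `python -m compileall <folder>` from a shell does the same thing without any code.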
If the module search path contains a filename as one of its entries
(as opposed to a folder name, which is the normal case) this file will
be searched for a resource with type
'PYC ' and a name
matching the module being imported.
The
Scripts folder contains a script
PackLibDir which will convert a number of modules (or
possibly a complete subtree full of modules) into such a resource
file.
Preferences are edited with EditPythonPrefs. For PPC/cfm68k Python this is a standalone program living in the main Python folder; for 68K Python it is a script in the Mac:Scripts folder.
The interface to edit the preferences is rather clunky for the current release.
In the editable text field at the top you enter the initial module
search path, using newline as a separator. There are two special
values you can use here: an initial substring
$(PYTHON)
will expand to the Python home folder and a value of
$(APPLICATION) will expand to the Python application
itself. Note that the text field may extend "beyond the bottom" even
though it does not have a scroll bar. Using the arrow keys works,
though.
The Python home folder $(PYTHON) is initially, when you install Python, set to the folder where the interpreter lives. You can change it here.
Finally, you can set the default startup options here, through a sub-dialog.
Applets are created with the BuildApplet program. You create an applet by dropping the Python source script onto BuildApplet. Example 2 is a more involved applet with its own resource file, etc.
Note that while an applet behaves as a fullblown Macintosh application
it is not self-sufficient, so distributing it to a machine without an
installed Python interpreter will not work: it needs the shared python
execution engine
PythonCore, and probably various modules
from the Lib and PlugIns folders. Distributing it to a machine that does
have a Python system will work.
Dropping an applet onto the EditPythonPrefs application allows you to set these, in the same way as double-clicking EditPythonPrefs allows you to set the system-wide defaults.
Actually, not only applets but also the interpreter itself can have non-default settings for path and options. If you make a copy of the interpreter and drop this copy onto EditPythonPrefs you will have an interpreter that has a different set of default settings.
There are some annotated sample programs available that show some mac-specific issues, like use of various toolboxes and creation of Python applets.
The
Demo and
Mac:Demo
folders in the Macintosh distribution
contains a number of other example programs. Most of these are only
very lightly documented, but they may help you to understand some
aspects of using Python.
Finally, there is a
Mac:Contrib folder that contains
a few contributions to Python that I couldn't fit in the normal tree
but did want to distribute (many other contributions are contained
throughout the distribution, but you don't see them, really).
The best way to contact fellow Macintosh Python programmers is to join
the MacPython Special Interest Group mailing list. Send a message with
"info" in the body to pythonmac-sig-request@python.org
or view the Pythonmac SIG
page on the WWW
server.
There appear to be problems with QuickTime for the CFM68K version of the interpreter. If you experience these please contact the SIG: some people use quicktime without problems and some not, and we are still hunting for the cause.
Python is a rather safe language, and hence it should be difficult to crash the interpreter of the system with a Python script. There is an exception to this rule, though: the modules that interface to the system toolboxes (windowing, quickdraw, etc) do very little error checking and therefore a misbehaving program using these modules may indeed crash the system. Such programs are unfortunately rather difficult to debug, since the crash does not generate the standard Python stack trace, obviously, and since debugging print statements will often interfere with the operation of the program. There is little to do about this currently.
Probably the most common cause of problems with modules ported from other systems is the Mac end-of-line convention. Where unix uses linefeed, 0x0a, to separate lines, the mac uses carriage return, 0x0d. To complicate matters more, a lot of mac programming editors like BBEdit and emacs will work happily with both conventions, so the file will appear to be correct in the editor but cause strange errors when imported. BBEdit has a popup menu which allows you to inspect (and set) the end-of-line convention used in a file.
Python attempts to keep its preferences file up-to-date even when you
move the Python folder around, etc. If this fails the effect will be
that Python cannot start or, worse, that it does work but it cannot find
any standard modules. In this case, start Python and examine
sys.path.
If it is incorrect remove any Python preferences file from the system
folder and start the interpreter while the interpreter sits in the main
Python folder. This will regenerate the preferences file. You may also
have to run the ConfigurePython applet again.
Source: https://svn.python.org/projects/stackless/Python-2.3.3/orig/Mac/Demo/using.html
I’m one of those programmers who HATES it when he sees things like:
public class Grunt
{
    private int m_hp ;
    public int HP
    {
        get { return m_hp ; }
        set { m_hp = value ; }
    }
}
Encapsulation or no encapsulation, this just SEEMS dumb.
Sometimes you’ll see this written as:
public class Grunt { private int m_hp ; public int HP { get { return m_hp ; } set { m_hp = value ; } } }
Which further proves my point: you can compress code like that when it's not meant to be looked at.
The point is, THERE IS NO POINT in having pass-thru getters/setters like this.
In languages other than C#, you’ll see people using functions of course instead of C# properties.
class Grunt
{
private:
    int m_hp ;

public:
    void ChangeHp( int toVal ) { m_hp = toVal ; }
    int Hp() { return m_hp ; }
} ;
VERSUS:
class Grunt { public: int hp ; } ;
Isn’t there a very slight performance hit for writing code that always uses methods? Yes. ++points for public members.
HOWEVER. Recently I have thought of ONE major benefit to this “methods only/no public members” style that I never really credited to the style before.
Readability.
Think of client code using the Grunt class:
Grunt g ; if( g.Hp() > 10 ) { g.ChangeHp( 12 ) ; }
VERSUS:
Grunt g ; if( g.hp > 10 ) { g.hp = 12 ; }
What reads better? g.ChangeHp( 12 ) or g.hp = 12?
Really, g.hp = 12 ; LOOKS more cryptic: even though you know what it's doing if you think about it for a fraction of a second (changing hp to 12), the method version g.ChangeHp(12) cannot be misread. You know exactly what it does, even faster than with the =.
SO. Maybe all private members and all accessor methods aren’t such a bad idea after all.
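As a side note, languages with uniform access soften the whole debate. In Python, for instance (a hypothetical Grunt, not from the post above), the client always writes g.hp, and you can swap a plain attribute for a property later without touching client code:

```python
class Grunt:
    """Accessor logic hidden behind a plain-looking attribute."""

    def __init__(self):
        self._hp = 10

    @property
    def hp(self):
        return self._hp

    @hp.setter
    def hp(self, value):
        # Validation added later, with no change to callers.
        if value < 0:
            raise ValueError("hp cannot be negative")
        self._hp = value

g = Grunt()
if g.hp > 5:   # reads like the public-member version...
    g.hp = 12  # ...but still runs the setter
print(g.hp)    # -> 12
```

So the readability question and the encapsulation question become independent: the syntax never changes, only the implementation does.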
Source: https://bobobobo.wordpress.com/2009/05/
This article is in the Product Showcase section for our sponsors at CodeProject. Consider running .NET applications on Linux. That's easy, you might think: you can build an ASP.NET application, run it on Microsoft® Windows® under Internet Information Services (IIS), and browse to it using a browser such as FireFox on a Linux client. You'd be right, but look again: in the screen shot, your ASPX is running on localhost, the Linux box itself. For server-side or hosted applications, the logical candidate is usually J2EE, due to its cross-platform nature and its well-known security, manageability, performance and scalability characteristics. However, to develop J2EE applications, you need to learn the Java language, Java Servlets, Java Server Pages, JDBC for database connection, and even Enterprise Java Beans for distributed applications.
What if, as an alternative, you could broaden the reach of your skills to Linux and other Java-enabled platforms, and as a result, extend yourself (and your resume) in a new and exciting area? What if you could do this without rewriting most of your code, and instead re-use your existing C# code? Not only that, but would you like to contribute to the Mono™ project – the creation of an open source .NET framework for Linux? Well you can, and you can do it today, with Grasshopper, a freely available download from Mainsoft.
As you know, when you compile an application in Visual Studio .NET, it generates Microsoft Intermediate Language (MS IL), which executes on the Microsoft Common Language Runtime (CLR). Grasshopper is a plug-in to Visual Studio .NET, which takes this MS IL and converts it into Java Byte Code, which executes on a Java Virtual Machine. Grasshopper also includes J2EE implementations of ASP.NET, ADO.NET, and the most common .NET namespaces, so the required dependencies are available on your J2EE platform.
You’ll get your first indication that something new is available in Visual Studio .NET when you launch it after installing Grasshopper. This demonstrates how tightly the Visual MainWin tools are integrated within Visual Studio .NET.
After installing Grasshopper, launch Visual Studio, and then choose File > New to open the New Project dialog.
Notice there are two new Project Type folders here – Visual MainWin for C# and Visual MainWin for VB.NET.
In fact, there are two ways you can develop your J2EE applications for Linux using Grasshopper – either by creating a project directly from here and building it in C# or VB.NET, or by taking your existing .NET framework based project and generating a J2EE project from the .NET project using the Generate J2EE Project Wizard. You will see a little more on this later.
When you install Grasshopper, you also get a copy of Tomcat, saving you the time and trouble of installing and configuring an application server yourself. You can start or stop Tomcat via the Grasshopper group on the Start menu. Tomcat must be running for you to create any new projects, or to convert your existing .NET framework based projects to J2EE projects. By default, Tomcat runs on port 8080, which is why the New Project dialog (Figure 2) shows the default root for your Web application or Web service as localhost:8080. If you are not familiar with Tomcat, it is a cut-down J2EE application server that is used for the official reference implementation of the Java Servlet and Java Server Pages technologies.
If you have an existing .NET project that you want to convert to J2EE, it's easy too. Simply right-click your project in the Solution Explorer and select Generate J2EE Project. Grasshopper creates a new project for you, sets it up for Java, and associates your source files with this project. You can then edit, compile, deploy and debug this code on your J2EE server. You can also choose to have the original project and the converted project in the same solution and then implement a single source strategy, building your source code for both .NET and J2EE from one single Visual Studio .NET development environment instance.
Tomcat is also available for Linux, so the applications you build on Tomcat using Grasshopper will run perfectly happily on Linux too. Tomcat is the only application server supported by this edition of Grasshopper; to deploy to an EJB-enabled server such as WebLogic, WebSphere or JBoss, you will need the Enterprise edition described below.
To create a deployment Java Web Archive file (WAR), all you have to do is right-click on your project in the Solution Explorer, select Deployment Packager, choose the directory that you want to deploy to, and click OK. Visual MainWin then compiles your code and dependencies into a WAR file. To install the WAR file on your Linux-based Tomcat server, simply browse to the Tomcat Manager Console and install it from there. Tomcat uploads and deploys the WAR file for you.
One of the challenges you would expect to meet when developing an application of the .NET framework using Visual Studio .NET tools and deploying to a Java runtime, is debugging. Surely debugging cannot work when you cross compile? Well it does! In fact, it works transparently, so that you still believe you are debugging your application as if it were running on the .NET framework! Take a look at Figure 3 for an example of this. You can try this yourself: create a new Web application (using the Visual MainWin C# for J2EE projects folder), add a button to it, and enter some form behind the button event handler as shown. Don’t forget to put a breakpoint in the code. Once done, click Debug > Start or press F5. Your application will be compiled and deployed to Tomcat, and a browser will launch directing you to the ASPX page.
Click the button on the page, and you will be taken back to the Visual Studio .NET IDE, which has stopped at the breakpoint, as shown in Figure 3. As you can see, all the tools you are used to for debugging – watches, the call stack and so on – are still available. You can still step through the code and watch it execute, and if you look at the call stack, you can see which classes are running and where. It is particularly interesting to see how Grasshopper links the .NET framework and the Java specifications, though it doesn’t affect the execution of your program! In addition, you can track bugs wherever they arise, even in production servers, by connecting from Visual Studio .NET to the server, regardless of its operating system, and debug any problems from your preferred development environment!
Many companies – and yours probably isn’t an exception – have assets already implemented in Java, and you will need to interface with these existing Java assets. Also, you may require using a third party add-on within your applications to implement some functionality. A good example of this is reporting, where most companies would use an add-on such as Crystal Reports to do the charting. In the Linux world, these would be implemented in Java, and available as Java Archive (JAR) files for you to include in your application. Native Java developers can use these easily by including them when they compile their code, but what about when you are building from C#?
Using Grasshopper, you are not left out. You can add references to the JAR files, and manipulate them in your code as you see fit. It’s analogous to adding references to third party assemblies. To do this, you simply right-click the References folder in your Solution Explorer, and you’ll see two new options above the existing Add Reference and Add Web Reference options:
Add Java Reference, which allows you to find a JAR file and create a reference to it that you can write code to. Java References are fully integrated with your development environment, so the Object Browser, the Intellisense on the code editor and the compilers all recognize the Java classes and their members in exactly the same way they recognize regular .NET ones. This allows you to code against them with the same level of productivity that you have when using the .NET framework class libraries.
Add EJB Reference (available in the Enterprise edition only), which allows you to find an Enterprise Java Bean (EJB) using JNDI lookups. JNDI is a directory service used to locate your EJBs and interface to them. This is not supported on Tomcat, because Tomcat does not support EJBs. If you are using Visual MainWin for J2EE Enterprise edition to build applications for EJB-enabled servers such as JBoss, WebLogic or WebSphere, you can find your EJBs using their JNDI entry, create references to them and consume them like any other object. If you need to consume EJBs, the J2EE developer who built them can give you this information.
It is easy to access data through ADO.NET using Grasshopper. This is because Visual MainWin provides an implementation of the System.Data namespace that is built on top of JDBC. You can therefore use the System.Data classes as you have always done without worrying about how JDBC handles it. The System.Data classes have been tested with leading Enterprise databases, including Microsoft SQL Server, Oracle, IBM DB2, Sybase, PostgreSQL and MySQL. The drivers for SQL Server and PostgreSQL are included with the platform should you need to work with them, and are automatically installed for you on your application server.
In this article, you took a whirlwind tour of some of the features of the Visual MainWin for J2EE toolkit and what they allow you to do. You have seen how you can use your existing C# or VB.NET skills to build applications in a whole new arena: the J2EE platform.
Source: https://www.codeproject.com/Articles/10473/Visual-Studio-NET-IDE-for-Linux?msg=1215295
The Ultimate Guide: 5 Methods for Debugging Production Servers at Scale
This is a guest post by Alex Zhitnitsky, an engineer working at Takipi, who is on a mission to help Java and Scala developers solve bugs in production and rid the world of buggy software.
How to approach the production debugging conundrum?
All sorts of wild things happen when your code leaves the safe and warm development environment. Unlike the comfort of the debugger in your favorite IDE, when errors happen on a live server - you better come prepared. No more breakpoints, step over, or step into, and you can forget about adding that quick line of code to help you understand what just happened. In production, bad things happen first and then you have to figure out what exactly went wrong. To be able to debug in this kind of environment we first need to switch our debugging mindset to plan ahead. If you’re not prepared with good practices in advance, roaming around aimlessly through the logs wouldn’t be too effective.
And that’s not all. With high scalability architectures, enter high scalability errors. In many cases we find transactions that originate on one machine or microservice and break something on another. Together with Continuous Delivery practices and constant code changes, errors find their way to production with an increasing rate. The biggest problem we’re facing here is capturing the exact state which led to the error, what were the variable values, which thread are we in, and what was this piece of code even trying to do?
Let’s take a look at 5 methods that can help us answer just that. Distributed logging, advanced jstack techniques, BTrace and other custom JVM agents:
1. Distributed Logging
For every log line we print out, we need to be able to extract the full context to understand what exactly happened there. Some data might come from the logger itself and the location the log is created in, other data needs to be extracted at the moment of the event. One caveat to avoid here is hurting your throughput. The more context you draw out, the bigger performance hit you experience. To test this in practice we ran a benchmark and discovered 7 Logback tweaks to help mitigate this risk (naming the logger by the class name is my favorite one).
What context should you draw out?
Everything! Where in the code are we, if relevant this can go down to the class we’re in, or even the specific method, which thread is this, and a self assigned transaction ID that’s unique to this request. A transaction ID is key to understanding the full context of an error, since it often travels between your nodes, processes and threads. A good practice to keep here is making sure you generate a UUID at every thread’s entry point to your app. Append the ID into each log entry, keep it for the tasks in your queue, and try to maintain this practice across your machines. We found this to be critical for making sense out of distributed or async logging. It becomes even more powerful when binded together with log management tools, like Logstash or Loggly, where you can search and tie the knots around the transaction from end to end.
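The transaction-ID practice is language-agnostic; here is a minimal sketch in Python (names are illustrative, not from the post): generate a UUID at the thread's entry point, stash it in thread-local storage, and stamp it onto every log record with a logging filter.

```python
import logging
import threading
import uuid

_ctx = threading.local()  # per-thread storage for the transaction id

class TxnFilter(logging.Filter):
    def filter(self, record):
        # Stamp the current thread's transaction id onto every record.
        record.txn = getattr(_ctx, "txn", "-")
        return True

records = []  # captured in a list instead of a real sink, for demonstration

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(self.format(record))

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
handler = ListHandler()
handler.setFormatter(logging.Formatter("[txn=%(txn)s] %(message)s"))
logger.addHandler(handler)
logger.addFilter(TxnFilter())

def handle_request():
    _ctx.txn = uuid.uuid4().hex  # generated once, at the entry point
    logger.info("processing request")

handle_request()
print(records[0])
```

In Java the equivalent plumbing is the MDC mechanism discussed below; the point is the same in either language: the ID is attached once and travels with every log line for that request.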
When everything else fails - Catch ‘em all
Uncaught exceptions are where threads go to die. And so does most of the evidence as to what actually happened there. Most of the time the framework you’re using will be able to contain this and show some error message (like YouTube’s custom message dispatching a team of highly trained monkeys to the rescue). For your own way to handle this, the last line of defense would be setting up a Global Exception Handler, like this one in Java:
Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    public void uncaughtException(Thread t, Throwable e) {
        logger.error("Uncaught error in thread " + t.getName(), e);
    }
});
This is similar to the things Tomcat or a framework like Akka will give you. Here are 3 ways to extract even more data when an uncaught exception inevitably happens:
Thread names: A nifty little trick is to change your thread name according to the request you’re handling. For example, whenever you’re processing a certain transaction, append its ID to the thread’s name and remove it when you’re done.
Thread-local storage (TLS): A way to keep thread specific data - off the Thread object. You can put there anything that would help identify what went wrong once an error happens. Things like a transaction id, time, or username. Other than this data and the thread name, there’s nothing much you can save when reaching the Uncaught Exception Handler.
Mapped Diagnostic Context (MDC): Similar to the thread-local concept and coming at it from the logging angle, the MDC is part of logging frameworks like Log4j or Logback. It creates a static map at the logger level, and enables a few more advanced features other than TLS.
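For the record, all three tricks have direct analogues outside Java. A sketch in Python (3.8+, which added threading.excepthook as the global last line of defense), combining the uncaught-exception handler, thread-local context, and the thread-naming trick:

```python
import threading

ctx = threading.local()  # rough analogue of TLS / the MDC
caught = []              # collected in a list so the demo is observable

def log_uncaught(args):
    # Runs in the failing thread, so its thread-locals are still visible.
    txn = getattr(ctx, "txn", "unknown")
    caught.append(f"Uncaught {args.exc_type.__name__} in thread "
                  f"{args.thread.name} (txn={txn}): {args.exc_value}")

threading.excepthook = log_uncaught

def worker():
    ctx.txn = "abc123"
    threading.current_thread().name = "worker-txn-abc123"  # thread-name trick
    raise RuntimeError("boom")

t = threading.Thread(target=worker)
t.start()
t.join()
print(caught[0])
```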
2. Preemptive jstack
Most of you Java developers are probably familiar with jstack, a powerful tool that comes with the JDK. In two sentences, it basically allows you to hook into a running process and output all the threads that are currently running in it with their stack trace, frames, either Java or native, locks they’re holding and all sorts of other meta data. It can also analyze heap dumps or core dumps of processes that already don’t exist, a longstanding and super useful toolkit.
BUT… One of the issues it has is that it’s mostly used in retrospect, the bad awful thing has already happened and we’re firing up jstack to recover any clues of what might have caused it. The server isn’t responding, the throughput is dropping, database queries taking forever: A typical output would be a few threads stuck on some nasty database query. No clue of how we got there.
A nice hack that would allow getting the jstack output where it matters most would be activating it automatically when things go south. For example, say we want to set a certain throughput threshold and get jstack to run at the moment it drops:
public void startScheduleTask() {
    scheduler.scheduleAtFixedRate(new Runnable() {
        public void run() {
            checkThroughput();
        }
    }, APP_WARMUP, POLLING_CYCLE, TimeUnit.SECONDS);
}

private void checkThroughput() {
    int throughput = adder.intValue(); // the adder is inc'd when a message is processed
    if (throughput < MIN_THROUGHPUT) {
        Thread.currentThread().setName("Throughput jstack thread: " + throughput);
        System.err.println("Minimal throughput failed: executing jstack");
        executeJstack(); // See the code on GitHub to learn how this is done
    }
    adder.reset();
}
3. Stateful jstack
Another issue with jstack is that while it provides very rich meta data about the thread, we’re lacking the actual state that led to the error. This is where manually setting the thread name comes to our rescue again. To be able to understand the source of the error, we need more than just a stack trace. Let’s go back to the nasty database query example, by adding a line like:
Thread.currentThread().setName(Context + TID + Params + current Time, ...);
What once looked like this in jstack’s output and didn’t provide much actionable data:
"pool-1-thread-1" #17 prio=5 os_prio=31 tid=0x00007f9d620c9800 nid=0x6d03 in Object.wait() [0x000000013ebcc000]
Now the same entry carries the state that led here right in the thread name: the context, the transaction ID, the parameters and the time.
Now we have a better sense of what’s going on. You see what the thread was doing and the parameters it received, including the Transaction ID and Message ID. With these parameters the state that led us here becomes clearer. Now we know which message got us stuck, and can retrace our steps back, reproduce the error, isolate and solve it.
4. BTrace - An Open Source tracing tool
The next piece of the puzzle is better visibility into the state of a JVM at runtime. How can we get this kind of data about the state of our JVM without logging and code changes?
One interesting way to tackle this is by using the BTrace Java agent (You can find out more about what Java agents are and what’s the difference between them to native agents right here. BTrace lets us extract data from a running JVM without adding logging or restarting anything. After attaching the agent, you can hook up to it using the BTrace scripting language. Through scripting, you can instrument the code and grab data that runs through it.
Let’s take a look at some code to get a feel of what the scripting language looks like:
@BTrace public class Classload {
    @OnMethod(
        clazz = "+java.lang.ClassLoader",
        method = "defineClass"
    )
    public static void defineClass(@Return Class cl) {
        println(Strings.strcat("loaded ", Reflective.name(cl)));
        Threads.jstack();
        println("==============================");
    }
}
Pretty similar to Java, right? Here we’re grabbing all the ClassLoaders and their subclasses. Whenever “defineClass” returns, the script will print out that the class was loaded and run jstack for us.
BTrace is a useful tool for investigating specific issues but it’s not recommended to use it continuously in production. Since it’s a Java agent with a high level of instrumentation, the overhead could often be significant. It also requires writing specific scripts for anything you’d like to measure and has many restrictions to avoid high overhead and not mess up anything important. For example, it cannot create new objects, catch exceptions, or include loops. With all that, it’s pretty cool to be able to edit scripts during runtime without having to restart your JVM.
5. Custom JVM Agents
While pinpointing the pieces of data we need for debugging is a good practice for logging, JVM agents have much more power and less to almost no extra plumbing requirements. To extract variable values for debugging without any code changes to your server, you need an agent in place. Smart use of agents can practically go around the other methods and serve you most of the data you need for debugging in high scalability.
Like BTrace, you can write your own custom Java agent. One way this helped us was when a certain class was instantiating millions of new objects for some reason, what we did to debug this was write a simple Java agent. The agent would hook to the constructor of that object and anytime the object was allocated an instance, it would get its stack trace. Later we analyzed the results and understood where the load was coming from. This is something BTrace couldn’t help us with since it has many restrictions and can only read data, and here we needed to write to file, log and do other operations that it doesn’t support.
If you’d like to get a feel for how an agent works, here are the few lines that get the whole thing rolling:
private static void internalPremain(String agentArgs, Instrumentation inst) throws IOException {
….
Transformer transformer = new Transformer(targetClassName);
inst.addTransformer(transformer, true); // the true flag let’s the agent hotswap running classes
}
A transformer object is created and we register it to the instrumentation object that has the power to change classes. The transformer implementation and the agent’s full source code, are available on GitHub.
The Native Agent and Variable Data
Unlike Java agents, native agents go a step deeper: They’re written in platform dependant native C/C++ code. Java agents have access to the code Instrumentation API, while Native Agents can also access the JVM’s low-level set of APIs and capabilities (JVMTI). This includes access to JIT compilation, GC, monitoring, exceptions, breakpoints and more.
At Takipi, we’re building a next generation production debugger that’s based on a native agent. It requires no code or configuration changes and it’s optimized to run on large scale production systems. Takipi analyzes your caught exceptions, uncaught exceptions, and log errors, showing you the exact state that led there down to the variable level. After attaching the agent to your JVM, you’ll be able to see your errors as if you were there when it happened:
Conclusion
It all boils down to this, the more valuable data you have, the quicker it is to solve the problem. Today when a few extra minutes of downtime are more valuable than ever, the drive to build rather than fix bugs is stronger, and the ability to quickly deploy new code without concerns is essential to succeed: A sound production debugging strategy must be in place.
Source: http://highscalability.com/blog/2015/1/7/the-ultimate-guide-5-methods-for-debugging-production-server.html
|
Agenda
See also: IRC log
Accepted
Norm: The substance was PSVI support which I think we sorted as best we can before a new draft.
Henry: My action is still
outstanding, but I don't think that should get in the
way.
... To look for things that point outside the subtree.
Accepted.
Regrets from Paul.
Norm: We need to decide when
we're ready.
... I think we're just about there, just the PSVI stuff I think.
... Can anyone think of anything we need to deal with first?
None heard.
Henry: It wouldn't be a bad idea to solicit feedback from our readers.
<scribe> ACTION: Henry to setup an xproc developers list at w3.org [recorded in]
Murray: The GRDDL WG is finished,
I made a point of ensuring that there was a comment in the
GRDDL spec that pointed to XProc.
... I wonder whether in this group's work there is an example of an XProc script that takes an XML document and turns it into triples.
Norm: I don't think we have one, but there's nothing to prevent us from creating one.
Alex: Do you have a specific example in mind?
Murray: No, but I think it should be as simple as possible.
Norm: It would be nice if it did more than XSLT.
Murray: All it has to do is some validation and run some RDFy kind of tool over the result.
Alex: Is there an example in the GRDDL spec?
Murray: Yes, there are a few.
Norm: There's certainly room for
more examples in the spec, and that would be a good one.
... So, coming back to last call.
... I was thinking along these lines: LC mid June, CR late July, PR in mid September?
Mohamed: Today we can have select
on p:input, which is static.
... I have some use cases where I need to pass an XPath expression to a step where the expression is constructed by the pipeline.
... p:load is symmetric with p:document; it would be nice to have p:filter symmetric to select on input.
Norm: And this is a select expression not a match because if you wanted a match you'd just use split-sequence and ignore the non-matched parts.
Henry: I'm confused. In what sense is p:input static, if I had an expression in a variable why wouldn't that work?
<ht> HST: What is wrong with <p:input
Mohamed: That gives you the string of $foo, but doesn't use it to select parts of the document.
<ht> Right, what you want is, as it were, <p:input><p:option</p:input>
Richard: $foo is a string which
isn't a node set so it's an error.
... We don't require the node set to come from the document, which is a little surprising.
Murray: So $foo in an argument should be evaluated to what it is.
Richard: The select expression is $foo, and its value is the value of $foo, and it must be a string.
... And that's an error.
Some discussion of the meaning of evaluated select expressions.
Henry: We could make select into an option.
Norm: I think that would be really confusing.
<richard> we could just add a function to our xpath extensions, like exslt's dyn:evaluate
Mohamed: The only thing that
bothers me about that is, what's the context of the
evaluation.
... I think the simplest, most compatible thing is to add p:filter.
Murray: In a Bourne shell, you can put something you want evaluated inside single quotes.
<richard> (backquotes)
Murray: I wonder whether a simple syntax solution would be a result.
Proposal: add p:filter as Mohamed suggests
Accepted.
Richard: Required or optional?
Mohamed: p:load is required, right?
Norm: yes.
... I think this is small enough and providing fairly core functionality, I'd make it required.
Proposal: required?
Accepted.
Norm attempts to summarize the hash changes.
Mohamed: It's an optional step,
so we can make it a bit stronger. For security reasons, sha1
and md5 are on the way out.
... Security considerations will require more options.
... And it's also valuable to provide crc even though it's not secure, it's widely used.
Norm: I'm sympathetic because I had reservations about the particularly narrow slice I made when I first proposed p:hash.
Proposal: change p:hash along the lines suggested by Mohamed
Accepted.
Norm: Mohamed proposes a c14n step; we could also do it on serialization.
Henry: C14N specifies what a
string looks like.
... It seems like we'd need this on the serialization options.
... Otherwise you can't output a C14N string from an XProc processor.
... You could have p:c14n, but it would be too limiting.
<scribe> ACTION: Mohamed to propose serialization changes to support C14N. [recorded in]
Norm: We need to say something about id(), key() the namespace axis, etc.
Richard: key() is added by
XSLT.
... The namespace axis isn't a problem for XPath 1 processors because they must implement it.
... The other things are the functions added by XSLT.
Norm: On further consideration, I think id() is probably ok
Richard: I would have thought that none of these things would be expected to work.
Norm: Well, match patterns come from XSLT, so you might expect them to work there
<scribe> ACTION: Norm to investigate the functions added by XSLT and draft some prose for the spec [recorded in]
Mohamed summarizes.
Proposed: add a limit option to p:count along the lines Mohamed suggested
Accepted.
Adjourned.
http://www.w3.org/XML/XProc/2008/05/29-minutes.html
Installing dependencies¶
In Getting started we used the conan install command to download the Poco library and build an example.
If you inspect the
conanbuildinfo.cmake file that was created when running conan install,
you can see there that there are many CMake variables declared. For example
CONAN_INCLUDE_DIRS_ZLIB, that defines the include path to the zlib headers, and
CONAN_INCLUDE_DIRS that defines include paths for all dependencies headers.
If you check the full path that each of these variables defines, you will see that it points to a folder under your
<userhome>
folder. Together, these folders are the local cache. This is where package recipes and binary
packages are stored and cached, so they don’t have to be retrieved again. You can inspect the
local cache with conan search, and remove packages from it with
conan remove command.
If you navigate to the folders referenced in
conanbuildinfo.cmake you will find the
headers and libraries for each package.
If you execute a conan install poco/1.9.4@ command in your shell, Conan will
download the Poco package and its dependencies (openssl/1.0.2t and
zlib/1.2.11) to your local cache and print information about the folder where
they are installed. While you can install each of your dependencies individually like that,
the recommended approach for handling dependencies is to use a
conanfile.txt file.
The structure of
conanfile.txt is described below.
Requires¶
The required dependencies should be specified in the [requires] section. Here is an example:
[requires]
mypackage/1.0.0@company/stable
Where:
- mypackage is the name of the package, which is usually the same as the project/library.
- 1.0.0 is the version, which usually matches that of the packaged project/library. This can be any string; it does not have to be a number, so, for example, it could indicate if this is a “develop” or “master” version. Packages can be overwritten, so it is also OK to have packages like “nightly” or “weekly” that are regenerated periodically.
- company is the owner of this package. It is basically a namespace that allows different users to have their own packages for the same library with the same name.
- stable is the channel. Channels provide another way to have different variants of packages for the same library and use them interchangeably. They usually denote the maturity of the package as an arbitrary string such as “stable” or “testing”, but they can be used for any purpose, such as package revisions (e.g., the library version has not changed, but the package recipe has evolved).
Optional user/channel¶
Warning
This is an experimental feature subject to breaking changes in future releases.
If the package was created and uploaded without specifying
the
user and
channel you can omit the
user/channel when specifying a reference:
[requires]
packagename/1.2.0
Overriding requirements¶
You can specify multiple requirements and override transitive “require’s requirements”. In our example, Conan installed the Poco package and all its requirements transitively:
- openssl/1.0.2t
- zlib/1.2.11
Tip
This is a good example of overriding requirements given the importance of keeping the OpenSSL library updated.
Consider that a new release of the OpenSSL library has been released, and a new corresponding Conan package is available. In our example, we do not need to wait until pocoproject (the author) generates a new package of POCO that includes the new OpenSSL library.
We can simply enter the new version in the [requires] section:
[requires]
poco/1.9.4
openssl/1.0.2u
The second line will override the openssl/1.0.2t required by POCO with the currently non-existent openssl/1.0.2u.
Another example in which we may want to try some new zlib alpha features: we could replace the zlib requirement with one from another user or channel.
[requires]
poco/1.9.4
openssl/1.0.2u
zlib/1.2.11@otheruser/alpha
Note
You can use environment variable CONAN_ERROR_ON_OVERRIDE
to raise an error for every overridden requirement not marked explicitly with the
override keyword.
Generators¶
Conan reads the [generators] section from
conanfile.txt and creates files for each generator
with all the information needed to link your program with the specified requirements. The
generated files are usually temporary, created in build folders and not committed to version
control, as they have paths to local folders that will not exist in another machine. Moreover, it is very
important to highlight that generated files match the given configuration (Debug/Release,
x86/x86_64, etc) specified when running conan install. If the configuration changes, the files will
change accordingly.
For a full list of generators, please refer to the complete generators reference.
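Putting the sections described so far together, a minimal conanfile.txt for the Poco example might look like this (a sketch using the cmake generator, which is the one used throughout this guide):

```text
[requires]
poco/1.9.4

[generators]
cmake
```

Running conan install against this file produces the conanbuildinfo.cmake file mentioned earlier, with the CONAN_* variables for each requirement.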
Options¶
We have already seen that there are some settings that can be specified during installation. For example, conan install .. -s build_type=Debug. These settings are typically a project-wide configuration defined by the client machine, so they cannot have a default value in the recipe. For example, it doesn’t make sense for a package recipe to declare “Visual Studio” as a default compiler. Options, by contrast, are package-specific and can have a default value in the recipe; a typical example is whether a library is static or shared, where the default is the linkage that should be used if consumers don’t specify otherwise.
Note
You can see the available options for a package by inspecting the recipe with conan get <reference> command:
$ conan get poco/1.9.4@
To see only specific fields of the recipe you can use the conan inspect command instead:
$ conan inspect poco/1.9.4@ -a=options
$ conan inspect poco/1.9.4@ -a=default_options
For example, we can modify the previous example to use dynamic linkage instead of the default one, which was static, by editing the
[options] section in
conanfile.txt:
[requires]
poco/1.9.4

[generators]
cmake

[options]
poco:shared=True # PACKAGE:OPTION=VALUE
openssl:shared=True
Install the requirements and compile from the build folder (change the CMake generator if not in Windows):
$ conan install ..
$ cmake .. -G "Visual Studio 14 Win64"
$ cmake --build . --config Release
As an alternative to defining options in the
conanfile.txt file, you can specify them directly in the
command line:
$ conan install .. -o poco:shared=True -o openssl:shared=True
# or even with wildcards, to apply to many packages
$ conan install .. -o *:shared=True
Conan will install the binaries of the shared library packages, and the example will link with them. You can again inspect the different binaries installed. For example, conan search zlib/1.2.11@.
Finally, launch the executable:
$ ./bin/md5
What happened? It fails because it can’t find the shared libraries in the path. Remember that shared libraries are used at runtime, so the operating system, which is running the application, must be able to locate them.
We could inspect the generated executable, and see that it is using the shared libraries. For example, in Linux, we could use the objdump tool and see the Dynamic section:
$ cd bin
$ objdump -p md5
...
The shared libraries must be located in a folder where they can be found, either by the linker, or by the OS runtime.
You can add the libraries’ folders to the path (LD_LIBRARY_PATH environment variable in Linux, DYLD_LIBRARY_PATH in OSX, or system PATH in Windows), or copy those shared libraries to some system folder where they can be found by the OS. But these operations are only related to the deployment or installation of apps; they are not relevant during development. Conan is intended for developers, so it avoids such manipulation of the OS environment.
In Windows and OSX, the simplest approach is to copy the shared libraries to the executable folder, so they are found by the executable, without having to modify the path.
This is done using the [imports] section in
conanfile.txt.
To demonstrate this, edit the
conanfile.txt file and paste the following [imports] section:
[requires]
poco/1.9.4
...
See where the shared libraries are: it is common that *.dll files are copied to /bin. The rest of the libraries should be found in the /lib folder; however, this is just a convention, and different layouts are possible.
Install the requirements (from the
build folder), and run the binary again:
$ conan install ..
$ ./bin/md5
Now look at the
build/bin folder and verify that the required shared libraries are there.
As you can see, the [imports] section is a very generic way to import files from your requirements to your project.
This method can be used for packaging applications and copying the resulting executables to your bin folder, or for copying assets, images, sounds, test static files, etc. Conan is a generic solution for package management, not only for (but focused on) C/C++ libraries.
See also
To learn more about working with shared libraries, please refer to Howtos/Manage shared libraries.
https://docs.conan.io/en/1.27/using_packages/conanfile_txt.html
For a few days now I've been investigating how to best send data structures (specifically STL vectors) from a C++ DLL to some indie gamedev software based on Delphi (which I don't have internal access to). I've narrowed it down to two options: either I simply call the DLL multiple times to receive one number at a time and incur some nasty overhead, or I send an appropriately formatted string (convert 16-bit ints to 2x 8-bit chars) containing the vector.
The problem is that when it tries to send a number below 256, the string length is apparently only 1 instead of being 2 with a zero character. At first I thought it was the signed 16-bit number messing with the bitwise operators... but beyond that I've really got no idea. Is it possible that I'm inserting a null character as opposed to a zero? How would I stop this?
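That symptom is what you would expect if the consumer treats the returned pointer as a NUL-terminated C string: for any value below 256 the high byte is 0, which terminates the string early. A quick illustration of the byte layout (sketched here in Python with the struct module, which is not part of the original post, purely to show the bytes involved):

```python
import struct

# Pack a 16-bit little-endian unsigned integer into two bytes --
# the same layout the DLL builds by hand with shifts.
packed = struct.pack("<H", 1)
print(packed)                  # b'\x01\x00' -- the high byte is zero

# A C-string consumer (strlen / null-terminated semantics) stops at
# the first zero byte, so it only sees one byte of payload.
c_string_length = packed.index(b"\x00")
print(c_string_length)         # 1, not 2

# For values >= 256 both bytes are non-zero and the problem disappears.
print(struct.pack("<H", 258))  # b'\x02\x01'
```

The usual fix is to return the buffer length explicitly (for example via an out-parameter) instead of relying on NUL termination.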
As a test I wrote the DLL below:
PS: Sorry for the sloppy coding format, I've never had any formal training, so I kinda just made it up as I went along. Also, I'm using Dev-C++ and MinGW if that's of any importance...

Code:
#include <windows.h>
#include <string>

#define export extern "C" __declspec(dllexport) __stdcall

using namespace std;

string str = "";

export const char* send_ds(){
    str = "";
    unsigned __int16 j0 = 1;
    __int8 j1 = ((j0<<8)>>8);
    __int8 j2 = (j0>>8);
    str += char(j1);
    str += char(j2);
    return (str.c_str());
}

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved){
    switch (fdwReason){
        case DLL_PROCESS_ATTACH: {break;}
        case DLL_PROCESS_DETACH: {break;}
        case DLL_THREAD_ATTACH: {break;}
        case DLL_THREAD_DETACH: {break;}
    }
    return TRUE;
}
http://cboard.cprogramming.com/cplusplus-programming/90424-dll-interface-oddity.html
|
While you’re developing your XSQL pages, if for any reason you receive an unexpected result, the following techniques may help diagnose the problem.
If your page does not include any <?xml-stylesheet?> instructions, there are only a few problems that can affect the page:
If you request the page and receive a result that still has your <xsql:query> tags in it, the problem is one of the following:
You are running an XSQL page from within the JDeveloper 3.1
environment and have not properly included the “XSQL
Runtime” library in your current
project’s library list. To remedy this, select
Project
→
Project
Properties... from the main JDeveloper
3.1 menu, and select the Libraries tab of the
Project Properties dialog. Click the
Add button and select the “XSQL
Runtime” library from the list to add it to your
project.
You are running the XSQL Servlet using a servlet engine that has not
been correctly configured to associate the
oracle.xml.xsql.XSQLServlet class to the file
extension
*.xsql. To remedy, see Appendix B for instructions on setting this up correctly
for your servlet engine.
You have used an incorrect namespace URI to define the
xsql namespace prefix for use on your
<xsql:query> elements. To remedy, double-check that your
namespace declaration is exactly as follows:
xmlns:xsql="urn:oracle-xsql"
If your returned “data page” contains <xsql-error> elements like:
<xsql-error <message>No ...
https://www.safaribooksonline.com/library/view/building-oracle-xml/1565926919/ch08s03.html
Kalman filter – simplified version
May 16, 2011 11 Comments
Finally… it’s been a while since I’ve started looking into filtering, and more precisely Kalman filters… since I started on my quadcopter project, to be more precise…
After reading the descriptions and formulas found here (highly recommend the short introductory pdf !) I decided I had to code my own simplified version.
The data I’m using for this post comes from a (very) noisy RC FM receiver that sits right next to an electric motor and that was causing troubles to the point that I couldn’t control the tank (even with a capacitor on the motor !).
So let’s start with simply filtering a scalar value…
Here’s the Java code :
package mykalman;

public class MyKalman {

    private double Q = 0.00001;
    private double R = 0.01;
    private double P = 1, X = 0, K;

    private void measurementUpdate(){
        K = (P + Q) / (P + Q + R);
        P = R * (P + Q) / (R + P + Q);
    }

    public double update(double measurement){
        measurementUpdate();
        double result = X + (measurement - X) * K;
        X = result;
        return result;
    }

    private static int[] DATA = new int[]{0,0,0,0,0,0,0,0,0,0,3,13,20,31,39,44,46,54,57,57,63,67,67,70,76,78,86,0,93,99,100,100,100,100,102,106,108,105,50,108,106,106,106,105,105,105,106,106,106,105,105,108,106,80,106,106,105,106,106,106,106,106,106,106,2,106,106,106,106,108,106,108,104,106,105,106,105,106,106,105,105,105,106,106,106,106,105,106,106,105,105,108,30,106,106,105,106,106,104,106,106,105,80,109,106,108,105,106,106,106,108,106,106,105,106,104,108,105,106,106,104,106,106,106,108,108,106,45,60,75,106,106,106,106,105,105,104,106,106,106,106,108,108,106,106,106,105,105,106,106,106,106,106,106,106,105,105,109,106,108,106,106,105,106,108,20,106,106,106,105,105,106,104,108,108,106,106,104,106,104,106,105,106,106,109,45,106,106,106,66,105,106,103,105,105,106,106,105,108,104,106,105,106,106,108,104,105,66,66,66,106,105,108,105,108,106,106,106,106,106,106,108,106,108,105,105,105,106,106,105,106,108,106,105,106,0,106,105,106,106,106,106,106,104,106,104,108,106,108,108,106};

    public static void main(String[] args){
        MyKalman filter = new MyKalman();
        for(int d : DATA){
            System.out.print(Math.round(filter.update(d)) + ",");
        }
    }
}
Yep, that’s it…. no kidding… it’s obviously quite simple because :
- there is no control input
- we measure the state directly
- etc. …
But still does a quite impressive job !
Two questions:
1. Is it possible to replace second line in measurementUpdate method with
P = R * K; ( because (P + Q) / (R + P + Q) is already equal to K) ?
2. It seems K tends to 0.0311267292017… (after the 100th step), so do we need all those difficult calculations?
You are correct, after a while I did realise that this version of the filter was so “simplified” that it’s basically almost just a low pass filter… but never took the time to update this post.
Thank you for pointing it out.
Have a look at some of the comments below, especially the one with explaining that “K and P are actually independent “.
Dan
Hi,
for what does Q, R and P stand for? What I mean: Which of these Parameters I have to change, for example, to get a fast rise?
Q and R model the noise: Q is the process noise covariance and R is the measurement noise covariance.
P is the estimated error covariance.
dan
Do you know of a reliable way to choose a decent process noise and measurement error covariance for use with your filter?
Thanks!
Hi Bob,
No, this is barely my first foray into filtering / machine learning, and to be honest I think the example in this post is sooo simplified, that it basically has the same effect as a low pass filter, which is much easier to write…
But please do let me know if you find a nice / simple process to choose the covariance matrix …!
Dan
Thanks, I’ll let you know if I find anything. I was hoping your solution would be a silver bullet but I think you are right, it might be a slightly better low-pass in its current state.
thanks for your reference to my Java Kalman filter implementation.
Just beware that, because there is no control input K and P are actually independent of X (the process value) and will quickly tend to constant values… hence even that measurementUpdate() method becomes un-necessary and the whole filter will amount to doing a simple moving average over the measured values…
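Since K and P here do not depend on the measurements, the convergence mentioned in the comments is easy to check numerically. A quick sketch (in Python rather than the post's Java, with the same Q and R values):

```python
# Iterate only the gain/covariance update; with no control input these
# quantities never depend on the measured values.
Q = 0.00001  # process noise, same as the Java code
R = 0.01     # measurement noise, same as the Java code
P = 1.0
K = 0.0
for _ in range(1000):
    K = (P + Q) / (P + Q + R)
    P = R * (P + Q) / (R + P + Q)

print(K)  # ~0.0311267292, matching the value quoted in the comments
```

Once K is effectively constant, each update is just X += K * (measurement - X), i.e. a first-order low pass filter, which is exactly the point made above.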
https://trandi.wordpress.com/2011/05/16/kalman-filter-simplified-version/
|
I think that achieving as much binary compatibility with 1.2.X as possible
should be a big goal for 1.3 (I know I keep saying it). Easing a transition
to some of the bigger features/additions will be better for the community.
However, if there are changes that we feel are important enough, but
requires breaking something, I am for that trade off. I think there are
probably some things we could do to make some of those changes easier to
handle. However, the wholesale number of changes right now need to be
addressed. And the breaking changes need to be documented and passed to the
user community for feedback before release.
In hindsight, maybe we should have just called it 2.0 and made the really
big changes. And I suppose we could still call off 1.3 and just start on
2.0. Except that I think that a 2.0 effort is going to be a much bigger
deal than finishing up 1.3 and getting it out there with the new features we
already have. So, I am for cleaning up 1.3, making important changes we
want to make, getting it out there for folks to use. Then turning to a
log4j 2.0 with a new package name and some fundamental rethink on the api's.
Can we start a thread for each of the major compatibility issues/features as
we start to tackle them? Maybe preface the subject with "[COMPATIBILITY]"
and then the area being discussed. That way we can scope them out a little
and discuss options better.
-Mark
----- Original Message -----
From: "Jacob Kjome" <hoju@visi.com>
To: "Log4J Developers List" <log4j-dev@logging.apache.org>
Sent: Saturday, December 24, 2005 9:39 AM
Subject: Re: log4j 1.3 prioritized tasks
> At 06:10 PM 12/23/2005 -0600, you wrote:
> >
> >On Dec 23, 2005, at 3:37 PM, Jacob Kjome wrote:
> >
> >>
> >> A bit of a sidetrack from the current discussion, but just how big
> >> is log4j-1.3 going to be and just how polluted with 1.2.xx stuff
> >> are we going to make it? Originally, a lot of stuff was refactored
> >> and/or removed and replaced by, arguably, better implementations.
> >> Last changes I made, I had Log4j-1.3 and Log4j-1.2.xx working with
> >> LogWeb. I figured that if something that uses Log4j internals like
> >> LogWeb works with both, then 1.3 was good to go.
> >
> >My impression was that even the simplest app that used log4j 1.2
> >would fail with class verifier exceptions on JDK 1.5 with the current
> >log4j 1.3. I'm sure that the major culprit for that was the Priority/
> >Level inversion and if that was fixed, then only a much smaller sets
> >of potential applications would fail.
> >
>
> Yes, I suppose I can agree with that change. As desirable as it might
> seem to try to invert the 1.2 Level/Priority relationship and/or get rid
> of Priority altogether, it isn't worth the cost. I just think that
> Log4j-1.3 was moving toward being leaner/meaner (except for Joran). The
> thing is, if we are going to keep what was added to 1.3 and bring back
> everything from 1.2 that was removed, 1.3 will end up being a monster.
> I'm not even sure a 1.3 is worth doing if that's going to be the end
> result? Why not just develop off the 1.2 branch for this purpose and
> leave the current 1.3-specific feature set to the 2.0 branch?
>
> >> Now we've got everything from Log4j-1.2.xx coming back into
> >> Log4j-1.3, plus all the changes and additions made for 1.3 in there
> >> as well. From my perspective, maybe the current 1.3 should have
> >> just moved to 2.0 and 1.3 should have been developed off the 1.2
> >> branch. 1.3 seems to be turning into a monster with everything
> >> 1.2.xx being added back in to make it binary compatible. Log4j is
> >> already somewhat bloated as it is. Any larger and its going to burst.
> >>
> >> thoughts?
> >
> >The biggest thing that would go back in to restore binary
> >compatibility in my estimation would be the
> >org.apache.log4j.DailyRollingFileAppender and
> >org.apache.log4j.RollingFileAppender. Currently, there are shims
> >that redirect to o.a.l.rolling.RollingFileAppender that should be
> >sufficient if you only used those classes but would not be compatible
> >if you had extended them. There had been nothing in the 1.2 code
> >base that marked those classes as either deprecated or final or
> >warned against extension, so not bringing them back would be likely
> >to break some legitimate users of log4j 1.2.
> >
> >There are a couple of options:
> >
> >a) Totally drop the o.a.l.RFA and DRFA classes
> >
> >This was tried but there were repeated complaints about configuration
> >files that no longer worked with 1.3 that give rise to the shims.
> >
>
> Yes, the shims were, for me, the end of the acceptability range. After
> that, we almost might as well go back to using Log4j-1.2.xx, leading to
> the discussion here.
>
> >b) Leave the shims in and document that applications that use
> >extensions of those classes will break.
> >
>
> Which is the point at which we might as well develop off the 1.2 branch,
> at least for me.
>
> >c) Replace the shims with the current 1.2 implementations, but mark
> >as deprecated and
> > 1) leave in log4j.jar
> > 2) place in log4j-deprecated.jar (or similar)
> >
> >A log4j-deprecated.jar has some appeal since it could be the home of
> >a lot of dropped (without warning) classes like o.a.l.HTMLLayout and
> >the o.a.l.varia Filters.
> >
>
> Not a bad idea, but they'd still need some changes to work with the
> current Log4j-1.3 base, for instance using internal logging rather than
> LogLog, and probably a few other changes as well.
>
> >Actually, a bigger thing to go back for strict compatibility would be
> >LogFactor5 which is still part of the log4j-1.2.x.jar's. However, I
> >think it would be likely that we could get away with saying that any
> >app that used LF5 will break with log4j 1.3.
> >
>
> That depends. If we develop 1.3 off the 1.2 branch and then simply move
> LogFactor5 out to its own jar, that would make Log4j.jar leaner/meaner and
> satisfy LogFactor5 users.
>
> >p.s.: As for wanting binary compatibility or a lean and mean log4j.
> >I actually want both, a compatible 1.3 and a lean and mean 2.0. I
> >want the mean and lean 2.0 more, but think the best path first makes
> >the current code base 1.2 compatible and then consider intentional
> >and radical changes to the implementation but under a different
> >package name.
> >
>
> Well, the easiest way to make 1.3 compatible with 1.2 is to develop 1.3
> off the 1.2 branch. I think that some priorities got a bit mixed up in the
> development of 1.3. The original intention was Log4j internal logging
> rather than using LogLog along with a more robust repository selector.
> There were some other things, but these were biggies for me. The problem
> was that, in developing these new features, backward compatibility took a
> back seat. What are we going to do about the repository selector
> interface? It changed significantly! What about those people that
> implemented their own? They won't be compatible with the 1.3 one. I
> think you will find that just about any change of even minor significance
> is going to result in user complaints of non-backward compatibility.
>
> I wonder if we need to re-evaluate what the Log4j-13 feature set ought to
> be now that backward compatibility has been found, in hindsight, to be
> priority #1. Then we can figure out what we can add to the existing 1.2
> branch that would be worth a 1.3 release. Slightly incompatible changes
> could be moved to a 1.4 or 1.5 release, but only after *lots* of
> discussion verifying that users would be ok with the changes.
> Uber-incompatible changes would be moved off to 2.0, which would change
> the package namespace. That would probably take some time to get adopted,
> which is why agreed-upon partially incompatible 1.4 or 1.5 releases should
> be considered before a wholesale move to change the package namespace.
>
> Jake
>
>
> ---------------------------------------------------------------------
>
http://mail-archives.apache.org/mod_mbox/logging-log4j-dev/200601.mbox/%3C002201c61004$3aa67960$4001a8c0@hogwarts%3E
27 CFR 25.281 - General.
(a) Reasons for refund or adjustment of tax or relief from liability. The tax paid by a brewer on beer produced in the United States may be refunded, or adjusted on the tax return (without interest) or, if the tax has not been paid, the brewer may be relieved of liability for the tax on:
(1) Beer returned to any brewery of the brewer subject to the conditions outlined in subpart M of this part;
(2) Beer voluntarily destroyed by the brewer subject to the conditions outlined in subpart N of this part;
(3) Beer lost by fire, theft, casualty, or act of God subject to the conditions outlined in § 25.282.
(b) Refund of beer tax excessively paid. A brewer may be refunded the tax excessively paid on beer subject to the conditions outlined in § 25.285.
(c) Rate of tax. Brewers who have filed the notice required by § 25.167 and who have paid the tax on beer at the reduced rate of tax shall make claims for refund or relief of tax, or adjustments on the tax return, based upon the lower rate of tax. However, a brewer may make adjustments or claims for refund or relief of tax based on the higher rate of tax if the brewer can establish to the satisfaction of the appropriate TTB officer that the tax was paid or determined at the higher rate of tax.
(Sec. 201, Pub. L. 85-859, 72 Stat. 1335, as amended (26 U.S.C. 50.
http://www.law.cornell.edu/cfr/text/27/25.281
Ever wonder how paint colors are named? “Princess ivory”, “Bull cream.” And what about “Keras red”? It turns out that people are making a living naming those colors. In this post, I’m going to show you how to build a simple deep learning model to do something similar — give the model a color name as input, and have the model propose the name of the color.
This post is beginner friendly. I will introduce you to the basic concepts of processing text data with deep learning.
Overview
Let’s take a look at the big picture of what we’re going to build:
There are two general options for language modeling: word level models and character level models. Each has its own advantages and disadvantages. Let’s go through them now.
The word level language model can handle relatively long and clean sentences. By “clean”, I mean the words in the text datasets are free from typos and have few words outside of English vocabulary. The word level language model encodes each unique word into a corresponding integer, and there’s a predefined fixed-sized vocabulary dictionary to look up the word to integer mapping. One major benefit of the word level language model is its ability to leverage pre-trained word embeddings such as Word2Vec or GloVe. These embeddings represent words as vectors with useful properties. Words close in context are close in Euclidean distance and can be used to understand analogies like “man is to woman as king is to queen.”
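A toy illustration of that vector arithmetic, using made-up 3-dimensional vectors rather than real pre-trained embeddings (real Word2Vec/GloVe vectors have hundreds of dimensions, but the arithmetic is the same):

```python
import numpy as np

# Hypothetical tiny "embeddings", constructed only for illustration.
vectors = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([1.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([1.0, 1.0, 1.0]),
    "apple": np.array([0.0, 0.0, 0.0]),
}

# "king" - "man" + "woman" should land closest to "queen".
target = vectors["king"] - vectors["man"] + vectors["woman"]
candidates = [w for w in vectors if w not in ("king", "man", "woman")]
best = min(candidates, key=lambda w: np.linalg.norm(vectors[w] - target))
print(best)  # queen
```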
But there’s an even simpler language model, one that splits a text string into characters and associates a unique integer to every single character. There are some reasons you might choose to use the character level language model over the more popular word-level model:
You may also be aware of the limitations that come with adopting a character level language model:
Fortunately, these limitations won’t pose a threat to our color generation task. We’re limiting our color names to 25 characters in length and we only have 14157 training samples.
We mentioned that we’re limiting our color names to 25 characters. To arrive at this number we checked the distribution of the length of color names across all training samples and visualize it to make sure the length limit we pick makes sense.
import numpy as np
import scipy.stats as stats
import pylab as plt

h = sorted(names.str.len().as_matrix())
fit = stats.norm.pdf(h, np.mean(h), np.std(h))  # this is a fitting indeed
plt.plot(h, fit, '-o')
plt.hist(h, normed=True)  # use this to draw histogram of your data
plt.xlabel('Chars')
plt.ylabel('Probability density')
plt.show()
That gives us this plot, and you can clearly see that the majority of the color name strings has lengths less or equal to 25, even though the max length goes up to 30.
We could in our case pick the max length of 30, but the model we’re going to build will also need to be trained on longer sequences for an extended time. Our trade-off to pick shorter sequence length reduces the model training complexity while not compromising the integrity of the training data.
With the tough decision of max length being made, the next step in the character level data pre-processing is to transform each color name string to a list of 25 integer values, and this was made easy with the Keras text tokenization utility.
from tensorflow.python.keras import preprocessing
from tensorflow.python.keras.preprocessing.text import Tokenizer

maxlen = 25
t = Tokenizer(char_level=True)
t.fit_on_texts(names)
tokenized = t.texts_to_sequences(names)
padded_names = preprocessing.sequence.pad_sequences(tokenized, maxlen=maxlen)
Right now padded_names has the shape (14157, 25), where 14157 is the number of training samples and 25 is the max sequence length. If a string has fewer than 25 characters, it is left-padded with the value 0.
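To make the padding behavior concrete, here is a minimal sketch of what Keras-style 'pre' padding does. The helper below is hypothetical, not the actual Keras implementation:

```python
def pad_sequence(seq, maxlen, value=0):
    # Keep at most the last `maxlen` items, then left-pad with `value`,
    # mirroring pad_sequences' default 'pre' padding and truncating.
    seq = list(seq)[-maxlen:]
    return [value] * (maxlen - len(seq)) + seq

print(pad_sequence([3, 1, 2], 5))  # → [0, 0, 3, 1, 2]
```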
You might be thinking that since all inputs are now integers, our model should be able to process them. But there is one more step we can take to make the later model training more effective.
We can view the character-to-integer mapping by inspecting the t.word_index property of the Keras Tokenizer instance.
{' ': 4, 'a': 2, 'b': 18, 'c': 11, 'd': 13, 'e': 1, 'f': 22, 'g': 14, 'h': 16, 'i': 5, 'j': 26, 'k': 21, 'l': 7, 'm': 17, 'n': 6, 'o': 8, 'p': 15, 'q': 25, 'r': 3, 's': 10, 't': 9, 'u': 12, 'v': 23, 'w': 20, 'x': 27, 'y': 19, 'z': 24}
The integer values have no natural ordering relationship between each other, so our model cannot harness any benefit from them directly. Worse, our model will initially assume such an ordering among the characters (e.g. “a” is 2 and “e” is 1, but that should not signify any relationship), which can lead to unwanted results. We will instead use one-hot encoding to represent the input sequence.
Each integer will be represented by a boolean array where only one element in the array will have a value of 1. The max integer value will determine the length of the boolean array in the character dictionary.
In our case, the max integer value is 'x': 27, so the length of a one-hot boolean array will be 28 (the lowest value starts at 0, which is the padding).
For example, instead of using the integer value 2 to represent character ‘a’, we’re going to use one-hot array [0, 0, 1, 0 …….. 0].
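The one-hot transformation itself is easy to sketch with plain NumPy. This is an illustrative helper; the post uses Keras' np_utils.to_categorical below:

```python
import numpy as np

def one_hot(indices, num_classes):
    # Each row has exactly one 1, at the column given by the integer index.
    out = np.zeros((len(indices), num_classes), dtype=int)
    out[np.arange(len(indices)), indices] = 1
    return out

encoded = one_hot([2, 1], 28)   # encodings of 'a' (2) and 'e' (1)
print(encoded[0][2], encoded[0].sum())  # → 1 1
```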
One-hot encoding is also accessible in Keras.
from keras.utils import np_utils

one_hot_names = np_utils.to_categorical(padded_names)
The resulting one_hot_names has the shape (14157, 25, 28), which stands for (# of training samples, max sequence length, # of unique tokens)
Remember we’re predicting 3 color channel values, each value ranging between 0–255. There is no golden rule for data normalization. Data normalization is purely practical because in practice it could take a model forever to converge if the training data values are spread out too much. A common normalization technique is to scale values to [-1, 1]. In our
# The RGB values are between 0 - 255
# scale them to be between 0 - 1
def norm(value):
    return value / 255.0

normalized_values = np.column_stack(
    [norm(data["red"]), norm(data["green"]), norm(data["blue"])])
To build our model we're going to use two types of neural network layers: LSTM, a recurrent layer, and Dense, a fully connected layer.
In recurrent neural networks, the output of each step is fed back into the network at the next step, which lets the model carry context across the characters of a sequence. The LSTM variant we use here is designed to learn such dependencies over longer sequences.
The easiest way to build up a deep learning model in Keras is to use its sequential API, and we simply connect each of the neural network layers by calling its model.add() function like connecting LEGO bricks.
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Dropout, LSTM, Reshape

model = Sequential()
model.add(LSTM(256, return_sequences=True, input_shape=(maxlen, 28)))
model.add(LSTM(128))
model.add(Dense(128, activation='relu'))
model.add(Dense(3, activation='sigmoid'))
model.compile(optimizer='adam', loss='mse', metrics=['acc'])
Training the model is as easy as calling the model.fit() function. Notice that we're reserving 10% of the samples for validation purposes. If it turns out the model achieves great accuracy on the training set but much lower accuracy on the validation set, the model is likely overfitting. You can find more about dealing with overfitting in my other blog post: Two Simple Recipes for Over Fitted Model.
history = model.fit(one_hot_names, normalized_values, epochs=40, batch_size=32, validation_split=0.1)
Let’s define some functions to generate and show the color predicted.
For a color name input, we need to transform it into the same one-hot representation. To achieve this, we tokenize characters to integers with the same tokenizer with which we processed the training data, pad it to the max sequence length of 25, then apply the one-hot encoding to the integer sequence.
And for the output RGB values, we need to scale them back to 0–255 so we can display the color correctly.
# plot a color image
def plot_rgb(rgb):
    data = [[rgb]]
    plt.figure(figsize=(2, 2))
    plt.imshow(data, interpolation='nearest')
    plt.show()

def scale(n):
    return int(n * 255)

def predict(name):
    name = name.lower()
    tokenized = t.texts_to_sequences([name])
    padded = preprocessing.sequence.pad_sequences(tokenized, maxlen=maxlen)
    one_hot = np_utils.to_categorical(padded, num_classes=28)
    pred = model.predict(np.array(one_hot))[0]
    r, g, b = scale(pred[0]), scale(pred[1]), scale(pred[2])
    print(name + ',', 'R,G,B:', r, g, b)
    plot_rgb(pred)
Let's give the predict() function a try.
predict("tensorflow orange")
predict("forest")
predict("keras red")
“keras red” looks a bit darker than the one we're familiar with, but that's what the model proposed.
In this post, we talked about how to build a Keras model that can take any color name and come up with an RGB color value. More specifically, we looked at how to apply one-hot encoding to a character level language model, and how to build a neural network model from stacked LSTM and Dense layers.
Here’s a diagram to summarize what we have built in the post, starting from the bottom and showing every step of the data flow.
If you’re new to deep learning or the Keras library, there are some great resources that are easy and fun to read or experiment with.
TensorFlow playground: an interactive visualization of neural networks run on your browser.
Coursera deep learning course: learn the foundations of deep learning and lots of practical advice.
Keras get started guide: the official guide for the user-friendly, modular deep Python deep learning library.
Also, check out the source code for this post in my GitHub repo.
https://www.dlology.com/blog/how-to-train-a-keras-model-to-generate-colors/
Details
Description
1) Expand the isartor test code so that it can also check conforming documents, i.e. documents that should not bring any errors. Support JBIG2.
2) Test the files from the Bavaria suite with preflight. I'll create sub-issues for that one. I counted 16 where something doesn't work as intended.
3) Include the Bavaria tests in the build, but only if we agree on this one. If not, I'll just keep it for myself as an additional regression test.
Issue Links
- contains:
  - PDFBOX-2403 false negative? "Font damaged, The FontFile can't be read" (Closed)
  - PDFBOX-2407 false negative: 2.4.3 : Invalid Color space, The operator "f" can't be used without Color Profile ? (Closed)
  - PDFBOX-2408 false negative? 1.2.1 : Body Syntax error, Single space expected ... (Closed)
  - PDFBOX-2416 xmp regression? 7.3 : Error on MetaData, Cannot find a definition for the namespace (Closed)
  - PDFBOX-2417 xmp regression? 7.3 : Error on MetaData, Schema is not set in this document : (Closed)
  - PDFBOX-2418 xmp regression? 7.3 : Error on MetaData, Schema is not set in this document : (Closed)
https://issues.apache.org/jira/browse/PDFBOX-2610?subTaskView=unresolved
Note
For some people, the term “Markov chain” always refers to a process with a finite or discrete state space. We follow the mainstream mathematical literature (e.g., [MT09]) in using the term to refer to any discrete time Markov process.

$$ \mathbb P \{ X_{t+1} = j \,|\, X_t = i \} = P[i, j] $$
Equivalently,
- $ P $ can be thought of as a family of distributions $ P[i, \cdot] $, one for each $ i \in S $
- $ P[i, \cdot] $ is the distribution of $ X_{t+1} $ given $ X_t = i $
(As you probably recall, when using NumPy
$$ p_w(x, y) := \frac{1}{\sqrt{2 \pi}} \exp \left\{ - \frac{(y - x)^2}{2} \right\} \tag{1} $$
What kind of model does $ p_w $ represent?
The answer is, the (normally distributed) random walk
$$ X_{t+1} = X_t + \xi_{t+1} \quad \text{where} \quad \{ \xi_t \} \stackrel {\textrm{ IID }} {\sim} N(0, 1) \tag{2} $$
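As an aside, the random walk in (2) is trivial to simulate; here is a short sketch (the seed and path length are arbitrary choices, not from the lecture):

```python
import numpy as np

# Simulate X_{t+1} = X_t + ξ_{t+1}, ξ ~ N(0, 1), starting from X_0 = 0
rng = np.random.default_rng(42)
T = 100
X = np.zeros(T + 1)
for t in range(T):
    X[t + 1] = X[t] + rng.standard_normal()

print(X.shape)  # → (101,)
```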
$$ X_{t+1} = \mu(X_t) + \sigma(X_t) \, \xi_{t+1} \tag{3} $$

$$ X_{t+1} = \alpha X_t + \sigma_t \, \xi_{t+1}, \qquad \sigma^2_t = \beta + \gamma X_t^2, \qquad \beta, \gamma > 0 $$
Alternatively, we can write the model as
$$ X_{t+1} = \alpha X_t + (\beta + \gamma X_t^2)^{1/2} \xi_{t+1} \tag{4} $$
$$ k_{t+1} = s A_{t+1} f(k_t) + (1 - \delta) k_t \tag{5} $$
$$ f_V(v) = \frac{1}{b} f_U \left( \frac{v - a}{b} \right) \tag{6} $$
$$ p(x, y) = \frac{1}{\sigma(x)} \phi \left( \frac{y - \mu(x)}{\sigma(x)} \right) \tag{7} $$
For example, the growth model in (5) has stochastic kernel
$$ p(x, y) = \frac{1}{sf(x)} \phi \left( \frac{y - (1 - \delta) x}{s f(x)} \right) \tag{8} $$

$$ \psi_{t+1}[j] = \sum_{i \in S} P[i,j] \psi_t[i] $$
$$ \psi_{t+1}(y) = \int p(x,y) \psi_t(x) \, dx, \qquad \forall y \in S \tag{9} $$
$$ (\psi P)(y) = \int p(x,y) \psi(x) dx \tag{10} $$
$$ \psi_{t+1} = \psi_t P \tag{11} $$
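In the discrete case, the update in (11) is just a vector-matrix product. A tiny sketch with an illustrative two-state stochastic matrix (the values are invented for the example):

```python
import numpy as np

P = np.array([[0.9, 0.1],    # rows are conditional distributions P[i, ·]
              [0.4, 0.6]])
ψ = np.array([1.0, 0.0])     # all mass on state 0 at t = 0

for _ in range(3):           # apply ψ_{t+1} = ψ_t P three times
    ψ = ψ @ P

print(ψ)                     # → [0.825 0.175]
```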
$$ k_{t+1} = s A_{t+1} f(k_t) + (1 - \delta) k_t \tag{12} $$

One standard tool for estimating the resulting densities is SciPy's gaussian_kde function.
$$ \psi_t^n(y) = \frac{1}{n} \sum_{i=1}^n p(k_{t-1}^i, y) \tag{13} $$
where $ p $ is the growth model stochastic kernel in (8).
What is the justification for this slightly surprising estimator?
The idea is that, by the strong law of large numbers,

$$ \frac{1}{n} \sum_{i=1}^n p(k_{t-1}^i, y) \to \mathbb E p(k_{t-1}^i, y) = \int p(x, y) \psi_{t-1}(x) \, dx = \psi_t(y) $$

A class called LAE for estimating densities by this technique can be found in lae.py.
Given our use of the __call__ method, an instance of LAE acts as a callable object, which is essentially a function that can store its own data (see this discussion).

This function returns the right-hand side of (13) using

- the data and stochastic kernel that it stores as its instance data
- the value $ y $ as its argument

The function is vectorized, in the sense that if psi is such an instance and y is an array, then the call psi(y) acts elementwise.

(This is the reason that we reshaped X and y inside the class: to make vectorization work.)

Because the implementation is fully vectorized, it is about as efficient as it would be in C or Fortran.
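For intuition, the heart of the look-ahead estimator (13) can be sketched in a few lines, using the random walk kernel $ p_w $ from (1) as a stand-in (the real LAE class is vectorized and kernel-agnostic):

```python
import numpy as np

def p_w(x, y):
    # Standard normal random walk kernel from (1)
    return np.exp(-(y - x)**2 / 2) / np.sqrt(2 * np.pi)

def look_ahead(y, data):
    # Estimate of ψ_t(y): average the kernel over last period's draws
    return np.mean([p_w(x, y) for x in data])

print(round(look_ahead(1.0, [0.0, 1.0, 2.0]), 4))  # → 0.2943
```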
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.stats import lognorm, beta
from quantecon import LAE

# == Define parameters == #
s = 0.2
δ = 0.1
a_σ = 0.4                    # A = exp(B) where B ~ N(0, a_σ)
α = 0.4                      # We set f(k) = k**α
ψ_0 = beta(5, 5, scale=0.5)  # Initial distribution
ϕ = lognorm(a_σ)

def p(x, y):
    """
    Stochastic kernel for the growth model with Cobb-Douglas production.
    Both x and y must be strictly positive.
    """
    d = s * x**α
    return ϕ.pdf((y - (1 - δ) * x) / d) / d

n = 10000    # Number of observations at each date t
T = 30       # Compute density of k_t at 1,...,T+1

# == Generate matrix s.t. t-th column is n observations of k_t == #
k = np.empty((n, T))
A = ϕ.rvs((n, T))
k[:, 0] = ψ_0.rvs(n)  # Draw first column from initial distribution
for t in range(T-1):
    k[:, t+1] = s * A[:, t] * k[:, t]**α + (1 - δ) * k[:, t]

# == Generate T instances of LAE using this data, one for each date t == #
laes = [LAE(p, k[:, t]) for t in range(T)]

# == Plot == #
fig, ax = plt.subplots()
ygrid = np.linspace(0.01, 4.0, 200)
greys = [str(g) for g in np.linspace(0.0, 0.8, T)]  # lighter to darker
for ψ, g in zip(laes, greys):
    ax.plot(ygrid, ψ(ygrid), color=g, lw=2, alpha=0.6)
ax.set_xlabel('capital')
ax.set_title(f'Density of $k_1$ (lighter) to $k_T$ (darker) for $T={T}$')
plt.show()

$$ X_{t+1} = a + \rho X_t + \xi_{t+1}, \quad \text{where} \quad \{ \xi_t \} \stackrel {\textrm{ IID }} {\sim} N(0, \sigma^2) $$
As is, this fits into the density case we treated above.
However, the authors wanted this process to take values in $ [0, 1] $, so they added boundaries at the endpoints 0 and 1.
One way to write this is

$$ X_{t+1} = h(a + \rho X_t + \xi_{t+1}) \quad \text{where} \quad h(x) := x \, \mathbf 1\{0 \leq x \leq 1\} + \mathbf 1 \{ x > 1 \} $$

Now define

$$ G(x, y) := \mathbb P \{ h(a + \rho x + \xi_{t+1}) \leq y \} \qquad (0 \leq x, y \leq 1) $$
This family of cdfs $ G(x, \cdot) $ plays a role analogous to the stochastic kernel in the density case.
The distribution dynamics in (9) are then replaced by
$$ F_{t+1}(y) = \int G(x,y) F_t(dx) \tag{14} $$
$$ \psi^*(y) = \int p(x,y) \psi^*(x) \, dx, \qquad \forall y \in S \tag{15} $$
$$ \forall \, \psi \in \mathscr D, \quad \psi P^t \to \psi^* \quad \text{as} \quad t \to \infty \tag{16} $$
This combination of existence, uniqueness and global convergence in the sense of (16) is often referred to as global stability.
Under very similar conditions, we get ergodicity, which means that
$$ \frac{1}{n} \sum_{t = 1}^n h(X_t) \to \int h(x) \psi^*(x) dx \quad \text{as } n \to \infty \tag{17} $$
$$ \psi_n^*(y) = \frac{1}{n} \sum_{t=1}^n p(X_t, y) \tag{18} $$
This is essentially the same as the look-ahead estimator (13), except that now the observations we generate are a single time-series, rather than a cross-section.
The justification for (18) is that, with probability one as $ n \to \infty $,

$$ \frac{1}{n} \sum_{t=1}^n p(X_t, y) \to \int p(x, y) \psi^*(x) \, dx = \psi^*(y) $$
Exercise 1¶
Consider the simple threshold autoregressive model
$$ X_{t+1} = \theta |X_t| + (1- \theta^2)^{1/2} \xi_{t+1} \qquad \text{where} \quad \{ \xi_t \} \stackrel {\textrm{ IID }} {\sim} N(0, 1) \tag{19} $$
This is one of those rare nonlinear stochastic models where an analytical expression for the stationary density is available.
In particular, provided that $ |\theta| < 1 $, there is a unique stationary density $ \psi^* $ given by
$$ \psi^*(y) = 2 \, \phi(y) \, \Phi \left[ \frac{\theta y}{(1 - \theta^2)^{1/2}} \right] \tag{20} $$

In the exercise below, the initial conditions are shifted beta distributions

ψ_0 = beta(5, 5, scale=0.5, loc=i*2)
Exercise 3¶
A common way to compare distributions visually is with boxplots.
To illustrate, let’s generate three artificial data sets and compare them with a boxplot.
The three data sets we will use are:

$$ \{ X_1, \ldots, X_n \} \sim LN(0, 1), \;\; \{ Y_1, \ldots, Y_n \} \sim N(2, 1), \;\; \text{ and } \;\; \{ Z_1, \ldots, Z_n \} \sim N(4, 1) $$
Here is the code and figure:
n = 500
x = np.random.randn(n)         # N(0, 1)
x = np.exp(x)                  # Map x to lognormal
y = np.random.randn(n) + 2.0   # N(2, 1)
z = np.random.randn(n) + 4.0   # N(4, 1)

fig, ax = plt.subplots(figsize=(10, 6.6))
ax.boxplot([x, y, z])
ax.set_xticks((1, 2, 3))
ax.set_ylim(-2, 14)
ax.set_xticklabels(('$X$', '$Y$', '$Z$'), fontsize=16)
plt.show()
from scipy.stats import norm, gaussian_kde

ϕ = norm()
n = 500
θ = 0.8

# == Frequently used constants == #
d = np.sqrt(1 - θ**2)
δ = θ / d

def ψ_star(y):
    "True stationary density of the TAR Model"
    return 2 * norm.pdf(y) * norm.cdf(δ * y)

def p(x, y):
    "Stochastic kernel for the TAR model."
    return ϕ.pdf((y - θ * np.abs(x)) / d) / d

Z = ϕ.rvs(n)
X = np.empty(n)
X[0] = 0.0  # initial condition
for t in range(n-1):
    X[t+1] = θ * np.abs(X[t]) + d * Z[t]

ψ_est = LAE(p, X)
k_est = gaussian_kde(X)

fig, ax = plt.subplots(figsize=(10, 7))
ys = np.linspace(-3, 3, 200)
ax.plot(ys, ψ_star(ys), 'b-', lw=2, alpha=0.6, label='true')
ax.plot(ys, ψ_est(ys), 'g-', lw=2, alpha=0.6, label='look-ahead estimate')
ax.plot(ys, k_est(ys), 'k-', lw=2, alpha=0.6, label='kernel based estimate')
ax.legend(loc='upper left')
plt.show()
# == Define parameters == #
s = 0.2
δ = 0.1
a_σ = 0.4    # A = exp(B) where B ~ N(0, a_σ)
α = 0.4      # f(k) = k**α
ϕ = lognorm(a_σ)

def p(x, y):
    "Stochastic kernel, vectorized in x. Both x and y must be positive."
    d = s * x**α
    return ϕ.pdf((y - (1 - δ) * x) / d) / d

n = 1000     # Number of observations at each date t
T = 40       # Compute density of k_t at 1,...,T

fig, axes = plt.subplots(2, 2, figsize=(11, 8))
axes = axes.flatten()
xmax = 6.5

for i in range(4):
    ax = axes[i]
    ax.set_xlim(0, xmax)
    ψ_0 = beta(5, 5, scale=0.5, loc=i*2)  # Initial distribution

    # == Generate matrix s.t. t-th column is n observations of k_t == #
    k = np.empty((n, T))
    A = ϕ.rvs((n, T))
    k[:, 0] = ψ_0.rvs(n)
    for t in range(T-1):
        k[:, t+1] = s * A[:, t] * k[:, t]**α + (1 - δ) * k[:, t]

    # == Generate T instances of LAE using this data, one for each t == #
    laes = [LAE(p, k[:, t]) for t in range(T)]

    ygrid = np.linspace(0.01, xmax, 150)
    greys = [str(g) for g in np.linspace(0.0, 0.8, T)]
    for ψ, g in zip(laes, greys):
        ax.plot(ygrid, ψ(ygrid), color=g, lw=2, alpha=0.6)

plt.show()
n = 20
k = 5000
J = 6

θ = 0.9
d = np.sqrt(1 - θ**2)
δ = θ / d

fig, axes = plt.subplots(J, 1, figsize=(10, 4*J))
initial_conditions = np.linspace(8, 0, J)
X = np.empty((k, n))

for j in range(J):
    axes[j].set_ylim(-4, 8)
    axes[j].set_title(f'time series from t = {initial_conditions[j]}')
    Z = np.random.randn(k, n)
    X[:, 0] = initial_conditions[j]
    for t in range(1, n):
        X[:, t] = θ * np.abs(X[:, t-1]) + d * Z[:, t]
    axes[j].boxplot(X)

plt.show()
https://lectures.quantecon.org/py/stationary_densities.html
I have a checkbox
<label for="ChFileOne">
File One
<input type="checkbox" name="ChFileOne" id="ChFileOne">
</label>
I want to check it, if a file exists.
In Web Forms, which I am familiar with, I would use the code-behind:
if(file.exist)
chFileOne.checked = true;
As I am trying to learn MVC, I want to know how to write a similar statement. Thanks.
MVC is a template engine, not a component model. You just set a bool value in the view model, say FileOneExists, then bind to it:

@Html.CheckBoxFor(m => m.FileOneExists)

In the action code, you set the value to true or false:

model.FileOneExists = ifFileExists();
The way to think about the MVC pattern is that there is a UI view engine you send messages to (the model) and it updates the UI (produces HTML in web apps). A UI operation sends a message to the controller (form post or navigation), the controller code does whatever is required to process the message (action parameters), builds a new view model, and calls the view engine to update the UI.

The view should only contain logic needed to render itself. It should not contain logic such as checking whether a file exists, only how to render that fact.
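Putting that advice together, a minimal sketch might look like this (class, property and path names here are hypothetical, not from the original posts):

```csharp
// View model: the view binds to this and holds no file-system logic itself.
public class FilesViewModel
{
    public bool FileOneExists { get; set; }
}

public class FilesController : Controller
{
    public ActionResult Index()
    {
        var model = new FilesViewModel
        {
            // The existence check lives in the controller, not the view.
            FileOneExists = System.IO.File.Exists(
                Server.MapPath("~/App_Data/FileOne.txt"))
        };
        return View(model);
    }
}
```

In the view, @Html.CheckBoxFor(m => m.FileOneExists) then renders a checkbox that is checked exactly when the file exists.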
In MVC you get to write whatever HTML you like right in the View using Razor syntax. The following example is contrived but should illustrate how easy it is to render HTML in MVC. Normally, a model and HTML helpers are used in MVC.
<div>
<label for="ChFileOne">
File One
<input type="checkbox" name="ChFileOne" id="ChFileOne" @ViewBag.Checked>
</label>
</div>
public class GeneralController : Controller
{
// GET: General
public ActionResult Index()
{
ViewBag.Checked = "checked";
return View();
}
}
There are many examples of how to use check boxes in MVC that you can find with a Google search.
Hi, sweetSteal
According to your needs, I made an example, please refer to it.
Remarks: TempData is used to pass data from one request to the next request.
Page
@using (Html.BeginForm("CheckBoxTests", "Home"))
{
    <input type="checkbox" name="test" checked="@TempData["ischecked"]" />
    <button type="submit">submit</button>
}
Controller
public ActionResult Index()
{
    return View();
}

[HttpPost]
public ActionResult CheckBoxTests()
{
    if (true)
    {
        TempData["ischecked"] = "checked";
    }
    return View("Index");
}
Here is the result.
Best Regards,
YihuiSun
https://social.msdn.microsoft.com/Forums/en-US/66fcbde4-75a2-4955-8e2c-af111d6a76c8/how-to-write-code-for-html-controls-in-mvc?forum=aspmvc
WF and missing build output from referenced project
Ever come across a WorkflowValidationFailedException at runtime even though the project containing the workflow was successfully validated and compiled without any errors? The reason for this occurring is how the compiler manages references.
Let's look at a simple solution that contains ProjectA, ProjectB and ProjectC. ProjectA references ProjectB, which in turn references ProjectC.
ProjectA contains a form that references ClassB in ProjectB like this:
using System;
using System.Diagnostics;
using System.Windows.Forms;
using ProjectB;

namespace ProjectA
{
    public partial class FormA : Form
    {
        public FormA()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            ClassB TestB = new ClassB();
            Debug.WriteLine(TestB.CreateValue());
        }
    }
}
ProjectB contains ClassB that references ClassC in ProjectC like this:
using System;
using ProjectC;

namespace ProjectB
{
    public class ClassB
    {
        public String CreateValue()
        {
            ClassC TestC = new ClassC();
            return TestC.RunTest();
        }
    }
}
ClassC in ProjectC looks like this:
using System;

namespace ProjectC
{
    public class ClassC
    {
        public String RunTest()
        {
            return Guid.NewGuid() + " - " + DateTime.Now;
        }
    }
}
When ProjectA compiles, the build output in bin\Debug is the following:
ProjectA.exe
ProjectB.dll
ProjectC.dll
Everything is good here and ProjectA will execute successfully.
Now let's simulate a scenario that WF can bring into the mix. Let's say that ProjectB contains a workflow. This workflow ends up executing a rule set, probably through a PolicyActivity. A rule in the rule set makes a reference to ClassC in ProjectC, and nothing else in ProjectB references any type defined in ProjectC. What happens? The result can be simulated by making ClassB in ProjectB look like this:
using System;
using ProjectC;

namespace ProjectB
{
    public class ClassB
    {
        public String CreateValue()
        {
            //ClassC TestC = new ClassC();
            //return TestC.RunTest();
            return "Break me";
        }
    }
}
When ProjectA compiles, the build output in bin\Debug is the following:
ProjectA.exe
ProjectB.dll
Traditionally, this is really great. The compiler is smart enough to know that even though ProjectB references ProjectC, it doesn’t actually reference any types defined in ProjectC, so why pull across its build output. That would just be unnecessary bloat.
How does this relate to WF? Well, rules are defined in XML, and CodeDOM is used to interpret that XML into code that will be executed. This means that at compile time the type references don't exist as far as the compiler is concerned. The more WF is used, the more likely it becomes that types (and their projects) are referenced only through rules, so their build output will not be available at runtime. This results in the WorkflowValidationFailedException at runtime.
The solutions to this problem with VS2005 are either to add a reference to ProjectC in ProjectA (first-level references are always pulled along), or to reference a ProjectC type in code somewhere in ProjectB. Hopefully Orcas will address this issue (I haven't checked yet).
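As a sketch of the second workaround, a dummy reference in ProjectB is enough to make the compiler record the dependency (the class name here is hypothetical):

```csharp
using System;
using ProjectC;

namespace ProjectB
{
    internal static class ReferenceAnchor
    {
        // Never used at runtime; referencing typeof(ClassC) forces the
        // compiler to treat ProjectC as a real dependency so its build
        // output is copied along.
        private static readonly Type Anchor = typeof(ClassC);
    }
}
```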
http://www.neovolve.com/2007/05/11/wf-and-missing-build-output-from-referenced-project/
This note discusses a common ontology for the interoperable interchange of contact information, in the light of various pieces of software, the standards which they purport to import and export, and their interpretations of those standards.
Here is an example of contact information in vCard.
BEGIN:VCARD
VERSION:3.0
N:Doe;John;;;
FN:John Doe
ORG:Example.com Inc.;
TITLE:Imaginary test person
EMAIL;type=INTERNET;type=WORK;type=pref:johnDoe@example.org
TEL;type=WORK;type=pref:+1 617 555 1212
TEL;type=CELL:+1 781 555 1212
TEL;type=HOME:+1 202 555 1212
TEL;type=WORK:+1 (617) 555-1234
item1.ADR;type=WORK:;;2 Example Avenue;Anytown;NY;01111;USA
item1.X-ABADR:us
item2.ADR;type=HOME;type=pref:;;3 Acacia Avenue;Newtown;MA;02222;USA
item2.X-ABADR:us
NOTE:John Doe has a long and varied history\, being documented on more police files that anyone else. Reports of his death are alas numerous.
item3.URL;type=pref:http\://
item3.X-ABLabel:_$!<HomePage>!$_
item4.URL:http\://
item4.X-ABLabel:FOAF
item5.X-ABRELATEDNAMES;type=pref:Jane Doe
item5.X-ABLabel:_$!<Friend>!$_
CATEGORIES:Work,Test group
X-ABUID:5AD380FD-B2DE-4261-BA99-DE1D1DB52FBE\:ABPerson
END:VCARD
These share a basic line syntax style. vCard came first: RFC 2426, issued concurrently with the generic syntax RFC 2425. Later came iCalendar, RFC 2445, building on but not using all the types of vCard and adding some new ones. Therefore an iCalendar parser will not necessarily parse a vCard.
etc
The following escaping is done.
(It wasn't clear to me whether these are at the same level or not. This affects whether you have to encode a backslash as \\ (if there is one level) or as 2^n backslashes if there are n levels of escaping applied to the same data. I conclude that it is a single level.)
Another question is whether all the escapes are used when a given syntax doesn't happen to include any comma delimiting. Yes, they are: colons, semicolons, and commas are escaped in all values.
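Under the single-level conclusion, an escaping routine for vCard values can be sketched as follows (a hypothetical helper; real implementations must also handle newlines per RFC 2426):

```python
def escape_vcard(value):
    # Escape the backslash first, then the structural delimiters.
    # Per the note above: colons, semicolons and commas are escaped
    # in all values, and only one level of escaping is applied.
    for ch in ('\\', ';', ',', ':'):
        value = value.replace(ch, '\\' + ch)
    return value

print(escape_vcard('Public;John;Quincy,Adams'))  # → Public\;John\;Quincy\,Adams
```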
Rfc2426 in the BNF says:
n-value = 0*4(text-value *("," text-value) ";")
          text-value *("," text-value)
; Family; Given; Middle; Prefix; Suffix.
; Example: Public;John;Quincy,Adams;Reverend Dr. III
The example in the comment is presumably wrong and should IIUC be
Quincy,Adams;;John;Reverend,Dr.;III or Adams;John;Quincy;Reverend,Dr.;III
There was a discussion on #swig on 2007-01-26 about how to represent this in RDF. One compromise is to use spaces to separate the comma-separated values, to cut down on the structure.
Of course, the whole thing could be represented at lists:
(("Quincy" "Adams") () ("John") ("Reverend" "Dr.")("III"))
but that isn't so much in the RDF spirit and would take a lot of space in RDF/XML. A version with spaces for the multiple values
[ v:familyName "Quincy Adams"; v:givenName "John"; v:middleName ""; v:honorificPrefix "Reverend Dr."; v:suffix "III" ]
seems to be a compromise. Giving an empty middle name (or anything else) would be optional.
The n field in vCard seems to echo the dn (distinguished name) field in LDAP directories. The idea of a distinguished name is that it allows one to distinguish (indirectly identify) the person involved. In x.500 systems, DNs often have organizational names and organizational units. In Mozilla Thunderbird, the dn is the common name and email address pair, which is what is sent in email. In most cases the email address is personal: FOAF uses it as a distinguisher by itself, as an inverse functional property. So while people may share an email address, it is going to be a pathological case in which two people share the common name and email address pair. (One could imagine a shared family email where the father and son have the same common name.)
Should the name itself be a concept, or should the subfields of the name be attributed to the person? In the first case, in RDF the name becomes a bnode. This is quite easy, but makes the data more complicated.
The actual parts of the DN are, in FOAF and Thunderbird, all separate fields anyway. You can't have two different family names, titles, suffixes, etc. Suppose we remove the bnode and the abstraction of the 'name' as a complex object in itself.
Do people validly have multiple alternative names? Yes: in the case of a nom de plume, names before and after acquiring a peerage, and so on. However, none of the contact list software I have come across allows one person to have two names. One could in AB (and in RDF) use a relation field to specify that one name (persona) is in fact the same person as another. vCards would then represent information about personas. These personas have names, if we are to be strictly accurate. This would make the subject of all these arcs a persona rather than a person. If we are not strictly accurate, then we would get a situation where two personas are identified as being the same person.
The contact ontology, of all the ontologies considered, is the only one which allows you to have two workplaces, with an address and phone number for each, and to keep track of which goes with which. That is because it has an abstraction, con:ContactLocation, or perhaps better contactPoint or even Role, which is the thing that is typically labelled 'home' or 'work', or sometimes 'vacation' or 'emergency'. Of these, home and work outnumber the others by far. Some systems allow you just one set of home details and one set of work details (Thunderbird). AB allows you to have work, home, other, and even to set your own relationships, and also to have more than one of each. The UI would not be that complicated to extend so that URLs etc. attach to work or home groupings.
vCard allows groups, and the spec shows them being used in exactly this way. However, AB does not use groups in this way.
Relations between the different properties used to model addresses in con: (brown), the proposed vCard RDF (dark blue), LDIF (green) and Mozilla-extended LDIF (cyan).
In this case, my conclusion is that the generality of modelling the contact-point as a node in the graph is important.
(It is interesting to ask whether a contact-point has some of the attributes of a role, or of a persona as mentioned above. It seems to. One can imagine being addressed at a different address, for example, in a role: The Hon. Basil Graham, Chair, Happytown Town Council, vs. Bas Graham, 23 Acacia Avenue, Happytown. This is not, however, what the software we are dealing with supports.)
As multiple workplaces and homes are not uncommon, the author's conclusion is that for a common ontology it is valuable to model the contact-point. The argument against modelling the contact-point is the complexity of the user interface for editing; the user interface for output is straightforward.
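A rough N3 sketch of what modelling the contact-point buys (property names are after the con: ontology, but treat the exact terms as illustrative):

```n3
@prefix con: <http://www.w3.org/2000/10/swap/pim/contact#> .

[] a con:Person;
   con:fullName "John Doe";
   con:home [ a con:ContactLocation;
       con:address [ con:street "3 Acacia Avenue"; con:city "Newtown" ];
       con:phone <tel:+1-202-555-1212> ];
   con:office [ a con:ContactLocation;
       con:address [ con:street "2 Example Avenue"; con:city "Anytown" ];
       con:phone <tel:+1-617-555-1212> ] .
```

Because each address and phone number hangs off its own contact-point node, a second workplace is simply a second con:office node, with no loss of which phone goes with which address.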
There is considerable tension throughout the ontologies reviewed between associating things with people, or their work.
In Apple's address book a type can be WORK, but in the spec WORK is not a permitted email type. I extended the vCard spec in the implementation vcard2n3.py to cover this.
FOAF just uses a personal mailbox, with the constraint that no two people can share it, and with no attribution of work or home. If v:work-email and foaf:mailbox are both subproperties of a general email property, but not of each other, then there will be limited interoperability.
Many commercial systems seem to make rash assumptions when importing data, deciding, sometimes under user guidance and sometimes without asking, whether an unknown address is a work or home one. This addition of random false data can make a database very dirty. To avoid this, the ontology used for interoperability must have sufficient delicacy to represent just what is known, no more and no less. Users should be in the loop when it is necessary to classify names and addresses for input into a particular system.
There seem to be two different assumptions about the grouping system. In the spec, an example gives the group name some semantics.
home.tel;type=fax,voice,msg:+49 3581 123456
home.label:Hufenshlagel 1234\n
but in Apple AB it doesn't, the semantics comes from a type= on the address, and phone numbers cannot be attributed to specific addresses. (That is an AB bug IMHO).
item1.ADR;type=HOME;type=pref:;;3 Acacia Avenue;Newtown;MA;02222;USA
item1.X-ABADR:us
item2.ADR;type=WORK:;;2 Example Avenue;Anytown;NY;01111;USA
item2.X-ABADR:us
It is a fact that some cultures (such as Japan) normally put names in the order (family, given), and some (like the US) in the order (given, family). (There are also cultures which don't use either of these simple schemes, but that is another question.) It is therefore unreasonable and politically incorrect to assume in general that the first name given is a family name or a given name, except within a local community.
In the ontologies and software reviewed, MacAB and Thunderbird both use "first" and "last" in the user interface.
Some extensions noticed in real data from an Apple AddressBook (AB)
Guessing, as the country name is covered elsewhere, this field gives the formatting and editing convention: for example, where the postcode comes in the address and whether it is called a postcode, zipcode, etc. In the AddressBook program this is a global preference. I don't know whether you can create some cards with a US format and some with a French format. It would be useful if the format were a function of the country.
item1.ADR;type=HOME;type=pref:;;3 Acacia Avenue;Newtown;MA;02222;USA
item1.X-ABADR:us
This is a nice feature. It makes the item sort and display primarily as the organization, and secondarily as the person.
FN:AMC Theatres Burlington Cinema 10
ORG:AMC Theatres Burlington Cinema 10;
item1.ADR;type=WORK;type=pref:;;20 South Avenue;Burlington;MA;01803;
item1.X-ABADR:us
X-ABShowAs:COMPANY
item3.URL;type=pref:http\:// item3.X-ABLabel:_$!<HomePage>!$_
The ABLabel gives the relation: either an Apple-standard one, in which case it is surrounded by weird characters, or else user-generated. Apple values go in an Apple-special namespace; user-generated fields go in a user namespace, maybe a parameter to be passed to the translator. These are used to override the predicate linking the person and the group. It may be that when the group is a 2-item thing (just a value and a label), it should be de-reified to a single predicate-object statement. This is necessary to be able to use homepage, for example, as an inverse functional property to identify people.
item1.X-ABRELATEDNAMES;type=pref:Jane Doe item1.X-ABLabel:_$!<Spouse>!$_
This would be represented in N3, using a special namespace for AB related names,
abl:spouse "Jane Doe";
Meanwhile, a different namespace should be used (per user?) for user-generated names. These are more like tags, very user-specific.
item1.X-ABRELATEDNAMES;type=pref:Jane Doe item1.X-ABLabel:coauthor
This would be represented in N3 more like:
user:coauthor "Jane Doe";
I prefer lists (RDF collections) instead.
Good practice is an initial lower case letter on property names, as in vcard:family not vcard:Family. Use upper case for classes, as in foaf:Person.
postOfficeBox, not post-office-box, or preferably the shorter poBox, as these are only codes whose mnemonic value is for developers, not users. organization-name is too long: orgName and orgUnit would be fine IMHO.
v:work-email can be generated from EMAIL;TYPE=WORK: automatically, in a way which can be extended for new forms of endpoint (AIM, Skype, etc.) and new forms of location (vacation, emergency, etc.). Having longer names makes this impossible, or more complicated.
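A sketch of how such property names could be generated mechanically from the TYPE parameters and the endpoint kind. The naming scheme here (camelCase, matching the shorter-name preference above) is illustrative, not taken from any published ontology:

```python
def property_name(endpoint, types):
    """Combine a location type (work/home/...) with an endpoint kind
    (email, tel, aim, skype, ...) into a short camelCase property name."""
    locations = ("work", "home", "vacation", "emergency")
    location = next((t for t in types if t in locations), None)
    if location is None:
        return endpoint  # a lack of label just means a lack of information
    return location + endpoint.capitalize()

print(property_name("email", ["work"]))         # -> workEmail
print(property_name("tel", ["home", "voice"]))  # -> homeTel
print(property_name("skype", []))               # -> skype
```

New endpoint or location kinds extend the scheme without any new hand-written property names.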
The existence of the v:unlabeledTel property as an (explicitly) unlabeled phone number of a person suggests that an unlabelled phone number has extra semantics in its being unlabelled. I think it is safest to model that a lack of label means a lack of information, and that a label could be inferred or added elsewhere.
Here is a fairly full ldif file exported from Thunderbird version 1.5.0.9 (20061207):
dn: cn=Zackery Zephyr,mail=zac@example.com
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
objectclass: mozillaAbPersonAlpha
givenName: Zackery
sn: Zephyr
cn: Zackery Zephyr
mozillaNickname: testnick
mail: zac@example.com
mozillaSecondEmail: zackery@example.org
nsAIMid: zaco
mozillaUseHtmlMail: false
modifytimestamp: 0Z
telephoneNumber: +1 202 250 2525
homePhone: +1 202 250 2526
fax: +1 202 250 2527
pager: +1 202 555 2525
mobile: +1 202 555 2526
homeStreet: 1 Zephyr Drive
mozillaHomeStreet2: Apt 26
mozillaHomeLocalityName: Zoaloc
mozillaHomeState: MA
mozillaHomePostalCode: 02999
mozillaHomeCountryName: USA
street: 1 Enterprise Way
mozillaWorkStreet2: Suite 260
l: Zoaloc Heights
st: MA
postalCode: 02998
c: USA
title: Chief Test dataset
department: Testing
company: Zacme Widgets
mozillaWorkUrl:
mozillaHomeUrl:
mozillaCustom1: custom1 value
mozillaCustom2: custom2 value
mozillaCustom3: custom3 value
mozillaCustom4: custom4 value
description: This is an imaginary person.
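Since the record is completely flat, reading it into a dictionary is straightforward. A minimal sketch, ignoring LDIF line folding, comments, and base64 (`::`) values; multi-valued attributes such as objectclass are collected into lists:

```python
def parse_ldif_record(text):
    """Parse one flat LDIF record into a dict mapping attribute
    names to lists of values (attributes may repeat)."""
    record = {}
    for line in text.strip().splitlines():
        if not line.strip():
            continue
        name, _, value = line.partition(":")
        record.setdefault(name.strip(), []).append(value.strip())
    return record

sample = """\
dn: cn=Zackery Zephyr,mail=zac@example.com
objectclass: top
objectclass: person
givenName: Zackery
sn: Zephyr
"""
rec = parse_ldif_record(sample)
print(rec["givenName"])    # -> ['Zackery']
print(rec["objectclass"])  # -> ['top', 'person']
```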
In LDIF, certain characters are backslash-escaped, and there is also a \XX form of hex-escaping.
The names obviously have a varied and long history, resulting in certain inconsistencies which need not bother us.
Apart from the 'dn' Distinguished Name field, the structure is completely flat. The vCard MIME profile was, I gather, originally designed to be compatible with the x.509 directory structures. However, the fields used in a Thunderbird dn don't map to the fields in a vCard N field.
See the following for more background.
RFC2425: A MIME Content-Type for Directory Information. Defines text/directory, the line folding, and the basic record-type structure.
RFC2426: vCard MIME Directory Profile. Defines most of the specific vocabulary.
An RDF vCard namespace was currently (2007) being developed.
Harry Halpin, Brian Suda, Norman Walsh An Ontology for vCards is an ontology which is the latest mapping, to which this is a set of comments.
History: The obsolete Representing vCard Objects in RDF/XML is a W3C Note on this from Renato Iannella of IPR Systems in 2001. Harry took an action to contact Renato in the 2006-11-03 #swig meeting. A chat on #SWIG was held on 2007-01-26, at which Harry presented a conversion table between these formats.
LDIF in Wikipedia is a good place to start.
RFC2849, The LDAP Data Interchange Format (LDIF) - Technical Specification. This is the spec.
RFC2253: Lightweight Directory Access Protocol (v3): UTF-8 String Representation of Distinguished Names defines the format of the distinguished name string.
Lightweight Directory Access Protocol (LDAP) Modify-Increment Extension only affects updates to LDAP stores, so we don't need to worry about this.
RFC4510: Lightweight Directory Access Protocol (LDAP): Technical Specification Road Map is the master spec pointing to the bits of LDAP. LDIF was defined for LDAP, but does not depend on it.
Python iCalendar (vCard?) implementations which may work for vCard include:
http://www.w3.org/2002/12/cal/vcard-notes.html
Created on 2010-10-08 11:13 by hniksic, last changed 2017-11-23 00:27 by ncoghlan. This issue is now closed.
I find that I frequently need the "null" (no-op) context manager. For example, in code such as:
with transaction or contextlib.null():
...
Since there is no easy expression to create a null context manager, we must resort to workarounds, such as:
if transaction:
with transaction:
... code ...
else:
... duplicated code ...
(with the usual options of moving the duplicated code to a function—but still.)
Or by creating ad-hoc null context managers with the help of contextlib.contextmanager:
if transaction is None:
    transaction = contextlib.contextmanager(lambda: iter([None]))()
with transaction:
...
Adding a "null" context manager would be both practical and elegant. I have attached a patch for contextlib.py
+1
Looks like a reasonable use case.
Patch is missing tests and documentation.
About your patch:
- __enter__() might return self instead of None... i don't really know which choice is better. "with Null() as x:" works in both cases
- __exit__() has no result value, "pass" is enough
- I don't like "Null" name, I prefer "Noop" (NoOperation, NoOp, ...) or something else
I also find the Null/_null/null affair confusing.
@contextlib.contextmanager
def null():
yield
Do we really need to add this to the stdlib?
Previous proposals to add an "identity function" or "no-op function" have always been refused. This one seems even less useful.
Thank you for your comments.
@Michael: I will of course write tests and documentation if there is indication that the feature will be accepted for stdlib.
@Antoine: it is true that a null context manager can be easily defined, but it does require a separate generator definition, often repeated in different modules. This is markedly less elegant than just using contextlib.null() in an expression.
I'm not acquainted with the history of identity function requests, but note that the identity function can be defined as an expression, using simply lambda x: x. The equivalent expression that evaluates to a null context manager is markedly more convoluted, as shown in my report.
@Éric: The Null/_null/null distinction is an optimization that avoids creating new objects for something that is effectively a singleton. It would be perfectly reasonable to define contextlib.null as Antoine did, but, this being stdlib, I wanted the implementation to be as efficient as (reasonably) possible.
> @Antoine: it is true that a null context manager can be easily
> defined, but it does requires a separate generator definition, often
> repeated in different modules. This is markedly less elegant than
> just using contextlib.null() in an expression.
But you can use mymodule.null() where mymodule is a module gathering
common constructs of yours.
The difference here is the one pointed out in the original post: for a function, you usually only care about having a value, so if you don't want to call it, you can just swap in a None value instead. If you need an actual callable, then "lambda:None" fits the bill.
The with statement isn't quite so forgiving. You need a genuine context manager in order to preserve the correct structure in the calling code. It isn't intuitively obvious how to do that easily. While not every 3-line function needs to be in the standard library, sometimes they're worth including to aid discoverability as much as anything else.
However, I don't see the point in making it a singleton, and the name should include the word "context" so it doesn't become ambiguous when referenced without the module name (there's a reason we went with contextlib.contextmanager over contextlib.manager).
Something like:
class nullcontext():
"""No-op context manager, executes block without doing any additional processing.
Used as a standin if a particular block of code is only sometimes
used with a normal context manager:
with optional_cm or nullcontext():
# Perform operation, using the specified CM if one is given
"""
    def __enter__(self):
        pass
    def __exit__(self, *exc_info):
        pass
That is what we are using now, but I think a contextlib.null() would be useful to others, i.e. that its use is a useful idiom to adopt. Specifically I would like to discourage the "duplicated code" idiom from the report, which I've seen all too often.
The "closing" constructor is also trivial to define, but it's there for convenience and to promote the use of with statement over try/finally boilerplate. The same goes here: you don't miss the null context manager when you don't have it; you invent other solutions. But when it's already available, it's an elegant pattern. In my experience, if they have to define it to get it, most people won't bother with the pattern and will retain less elegant solutions.
Actually, the singleton idea isn't a bad one, but I'd go one step further and skip the factory function as well. So change that to be:
class NullContext():
... # as per nullcontext in my last message
nullcontext = NullContext()
(with the example in the docstring adjusted accordingly)
If you can supply a full patch before the end of the month, we should be able to get this in for 3.2beta1 (currently scheduled for 31 October)
I considered using a variable, but I went with the factory function for two reasons: consistency with the rest of contextlib, and equivalence to the contextmanager-based implementation.
Another reason is that it leaves the option of adding optional parameters at a later point. I know, optional parameters aren't likely for a "null" constructor, but still... it somehow didn't feel right to relinquish control.
Here is a more complete patch that includes input from Nick, as well as the patch to test_contextlib.py and the documentation.
For now I've retained the function-returning-singleton approach for consistency and future extensibility.
Is there anything else I need to do to have the patch reviewed and applied?
I am in no hurry since we're still using 2.x, I'd just like to know if more needs to be done on my part to move the issue forward. My last Python patch was accepted quite some years ago, so I'm not closely familiar with the current approval process.
Unless Nick has further feedback I think you've done all you need to, thanks.
I'm with Antoine. Why not just do this in the context function itself?
I think it's more explicit and easier than reading the doc to figure out what nullcontext is supposed to do:
from contextlib import contextmanager
CONDITION = False
@contextmanager
def transaction():
if not CONDITION:
yield None
else:
yield ...
with transaction() as x:
...
Because hardcoding a particular condition into a context manager is less flexible? (I'm +0 on this thing myself, by the way.)
Are you sure that this is useful enough to warrant inclusion in the standard lib? I don't know of anyone else who has used the same idiom. It seems crufty to me -- something that adds weight (mental burden and maintenance effort) without adding much value. I don't know that anyone actually needs this.
To me, this is more a matter of conceptual completeness than one of practical utility (ala fractions.Fraction). That said, I *have* personally encountered the "I only sometimes want to wrap this code in a CM" situation, so it isn't completely impractical, either. Those two factors are enough to reach my threshold for it being worthwhile to declare "one obvious way to do it" through the contextlib module.
There is a possible alternative approach that may be more intuitive to use and read than nullcontext() though:
@contextmanager
def optional_cm(cm, *, use_cm=True): # See naming note below
if cm is None or not use_cm:
yield
else:
with cm:
yield
The OP's original example would then look like:
with optional_cm(transaction):
...
I suspect readers would find it far easier to remember what optional_cm does than to learn to recognise the "or nullcontext()" idiom. It also plays better with nested context managers:
with optional_cm(sync_lock), optional_cm(db_transaction), \
open(fname) as f:
...
Naming Note: I nearly suggested "optional_context" as a name for this, but realised that would be subtly misleading (suggesting PEP 377 style functionality that potentially skipped the statement body, rather than the intended semantics of skipping use of the CM)
I like your latest suggestion, except for the name. Given that we also have the (quite generic) "closing", what about just "optional"?
> To me, this is more a matter of conceptual completeness
> than one of practical utility ...
Nick, you don't seem to be truly sold on the need.
I'm -1 on adding this. It's basically cruft. If
it were published as an ASPN recipe, its uptake
would be nearly zero.
We need to focus on real problems in the standard
library and provide solid solutions. If weight
gets added to the standard lib, it needs to be
selective.
I find Raymond's perspective persuasive in this case. Feel free to post either the original idea or my later variant as an ASPN cookbook recipe. (you could actually combine the two, and use NullContext as an implementation detail of an optional_cm() function)
Not having this as a standard idiom makes it very tempting to just do copy-paste coding as in hniksic's example. Who likes to invent their own library for generic language-supporting idioms?
What about an alternative of giving NoneType empty enter and exit methods? So instead of a 'null' CM you can just use "with None"?
That's very reassuring. Thanks, Nick!
After seeing a context manager named like "TempfileIfNeeded(..., cond)", whose sole purpose is to handle the conditional case, I'm firmly +1 on this proposal.
It's much easier to just read "with Tempfile() if cond else nullcontext():" than to read through another level of indirection every time someone wanted some conditional logic on a context manager.
Is there any chance that this issue could be reopened?
Perhaps a more elegant solution would be to modify the "with" statement so that any object can be given to it (then we could just use None directly), but I suspect that would be a tad more controversial. ;)
No, an empty ExitStack() instance already works fine as a no-op context manager in 3.3:
We're not going to add a dedicated one under a different name.
I've been looking for this today; I would have used it.
Instead of an obvious (and explicit) null context manager, I had to read through this entire thread to eventually find out that I can use something called ExitStack(), which is designed for another use case.
When many users have to replicate the same boilerplate code time and time again, it's not cruft, it's just something that ought to be part of the stdlib. There are a number of such cases in the stdlib. I think nullcontext should be part of the included batteries Python aims to provide.
ExitStack() already covers the "null ctx mgr" use case described in the first message. Original example:
with transaction or contextlib.null():
...
By using ExitStack:
with transaction or ExitStack():
...
You can push this further and do this, which is even more flexible:
with ExitStack() as stack:
if condition:
stack.enter_context(transaction)
...
So ExitStack really is better than the original proposal which could have made sense 6 years ago but not anymore.
The problem Martin is referring to is the SEO one, which is that the top link when searching for either "null context manager python" or "no-op context manager python" is this thread, rather than the "Use ExitStack for that" recipe in the docs:
We unfortunately have exactly zero SEO experts working on the CPython documentation, so even when we provide specific recipes in the docs for solving particular problems, they aren't always easy for people to find.
I've at least added the "use contextlib.ExitStack()" note to the issue title here, so folks can find that without having to read through the whole comment thread.
Adding nullcontext = ExitStack in the source file would solve this problem in a single line of code.
No, it wouldn't, as ExitStack() does far more than merely implement a null context.
It would be like adding "nulliterable = ()" as a builtin, rather than just telling people "If you need a null iterable, use an empty tuple".
Well that just echoes exactly what I originally thought, but somebody else said it was not needed because ExitStack already exists and could be used for that purpose.
If this were at work and/or it were all just to me, I'd just implement a brand new nullcontext and move on.
Unfortunately, the redundancy doesn't buy enough to justify the permanent documentation and style guide cost of providing two ways to do exactly the same thing.
It turns out that there's a variant on the "null context manager" idea that may *not* be redundant with ExitStack(), and hence could potentially counter the current rationale for not adding one.
Specifically, it relates to context managers like click.progressbar() that are designed to be used with an "as" clause:
with click.progressbar(iterable) as myiter:
for item in myiter:
...
At the moment, making that optional is a bit messy, since you need to do something like:
with click.progressbar(iterable) as myiter:
if not show_progress:
myiter = iterable # Don't use the special iterator
for item in myiter:
...
or:
with ExitStack() as stack:
if show_progress:
myiter = stack.enter_context(click.progressbar(iterable))
else:
myiter = iter(iterable)
for item in myiter:
...
or:
@contextmanager
def maybe_show_progress(iterable, show_progress):
if show_progress:
with click.progressbar(iterable) as myiter:
yield myiter
else:
yield iter(iterable)
with maybe_show_progress(iterable, show_progress) as myiter:
for item in myiter:
...
The problem is that there's no easy way to say "return *this* value from __enter__, but otherwise don't do anything special".
With a suitably defined NullContext, that last approach could instead look more like:
if show_progress:
ctx = click.progressbar(iterable)
else:
ctx = NullContext(iter(iterable))
with ctx as myiter:
for item in myiter:
...
Note that this indeed seems confusing. I just found this thread by searching for a null context manager, because I found that in TensorFlow they introduced _NullContextmanager in their code, and I wondered why this is not provided by the Python stdlib.
For what it's worth, we are still using our own null context manager function in critical code. We tend to avoid contextlib.ExitStack() for two reasons:
1) it is not immediately clear from looking at the code what ExitStack() means. (Unlike the "contextmanager" decorator, ExitStack is unfamiliar to most developers.)
2) ExitStack's __init__ and __exit__ incur a non-negligible overhead compared to a true do-nothing context manager.
It doesn't surprise me that projects like TensorFlow introduce their own versions of this. Having said that, I can also understand why it wasn't added. It is certainly possible to live without it, and ExitStack() is a more than acceptable replacement for casual use.
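The overhead claim in point 2 above is easy to check with a rough micro-benchmark sketch like the one below. The absolute numbers are machine-dependent; the point is only the relative cost of ExitStack versus a true do-nothing context manager:

```python
import timeit
from contextlib import ExitStack

class NullCM:
    """A true do-nothing context manager for comparison."""
    def __enter__(self):
        return None
    def __exit__(self, *exc_info):
        return False

n = 50_000
t_stack = timeit.timeit("with ExitStack(): pass", globals=globals(), number=n)
t_null = timeit.timeit("with NullCM(): pass", globals=globals(), number=n)
print(f"ExitStack: {t_stack:.4f}s  NullCM: {t_null:.4f}s")
```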
Reopening this based on several years of additional experience with context managers since I originally closed it.
The version I'm now interested in adding is the one from - rather than being completely without behaviour, the null context manager should accept the value to be returned from the call to __enter__ as an optional constructor parameter (defaulting to None). That allows even context managers that return a value from __enter__ to be made optional in a relatively obvious way that doesn't involve fundamentally rearranging the code.
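That variant can be sketched as follows (essentially what later shipped as contextlib.nullcontext in Python 3.7):

```python
class nullcontext:
    """Context manager that does no additional processing.

    Takes the value to return from __enter__ as an optional
    constructor argument, so context managers used with an "as"
    clause can be made optional without restructuring the code.
    """
    def __init__(self, enter_result=None):
        self.enter_result = enter_result

    def __enter__(self):
        return self.enter_result

    def __exit__(self, *exc_info):
        return False  # never suppress exceptions

# The progressbar pattern discussed earlier then becomes:
iterable = [1, 2, 3]
ctx = nullcontext(iter(iterable))  # or click.progressbar(iterable)
with ctx as myiter:
    print(list(myiter))  # -> [1, 2, 3]
```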
I think the overhead argument against the use of ExitStack() for this purpose also has merit (so I'd be curious to see relative performance numbers collected with perf), but it's not my main motive for changing my mind.
Reverting to "Needs patch", as the currently attached patch is for the "No behaviour" variant that always returns None from __enter__().
(hniksic, would you still be willing to sign the Python CLA? If so, then your patch could be used as the basis for an updated implementation. Otherwise I'd advise anyone working on this to start from scratch)
I am of course willing to sign the CLA (please send further instructions via email), although I don't know how useful my original patch is, given that it caches the null context manager.
New changeset 0784a2e5b174d2dbf7b144d480559e650c5cf64c by Nick Coghlan (Jesse-Bakker) in branch 'master':
bpo-10049: Add a "no-op" (null) context manager to contextlib (GH-4464)
Thanks to Jesse Bakker for the PR implementing this for 3.7!
https://bugs.python.org/issue10049
When I make two File streams either in Python or C++, Dolfin crashes. For example, in Python:
from dolfin import *
f = File('f.pvd')
g = File('g.pvd')
results in:
====
Python(
*** set a breakpoint in malloc_error_break to debug
[LMBA:63708] *** Process received signal ***
[LMBA:63708] Signal: Abort trap: 6 (6)
[LMBA:63708] Signal code: (0)
[LMBA:63708] [ 0] 2 libsystem_c.dylib 0x00007fff88de694a _sigtramp + 26
[LMBA:63708] [ 1] 3 libsystem_c.dylib 0x00007fff88e17836 szone_free_
[LMBA:63708] [ 2] 4 libsystem_c.dylib 0x00007fff88e3ddce abort + 143
[LMBA:63708] [ 3] 5 libsystem_c.dylib 0x00007fff88e119b9 free + 392
[LMBA:63708] [ 4] 6 libdolfin.1.2.dylib 0x000000010f4accbe _ZN6dolfin4File
[LMBA:63708] *** End of error message ***
[1] 63708 abort python test.py
====
As a result, demos like stokes-iterative fail.
Installed using modified Dorsal (unstable). Mac. Dolfin 1.2.0+
Any ideas on where to look to fix this?
Question information
- Language: English
- Status: Solved
- For: DOLFIN
- Assignee: No assignee
- Solved: 2013-05-03
- Last query: 2013-05-03
- Last reply: -
Ok, I think I've narrowed this to a boost issue. It seems that boost was built with macports gcc, but links to the system libstdc++. This is causing issues. I'll mark this closed.
This one is still stumping me. Does anyone have any ideas?
https://answers.launchpad.net/dolfin/+question/227135
On 10/11/2010 14:32, Bas van Dijk wrote:

> On Wed, Nov 10, 2010 at 2:39 PM, Mitar <mmitar at gmail.com> wrote:
>
> This is really interesting. Presumably what happens is that an
> exception is indeed thrown and then raised in the thread after the
> final action. Now if you synchronously throw an exception at the end
> it looks like it's raised before the asynchronous exception is raised.
>
> Hopefully one of the GHC devs (probably Simon Marlow) can confirm this
> behavior and shed some more light on it.

I think it's behaving as expected - there's a short window during which
exceptions are unblocked and a second exception can be thrown. The program has

    let run = doSomething
                `catches` [ Handler (\(_ :: MyTerminateException) -> return ()),
                            Handler (\(e :: SomeException) ->
                                       putStrLn $ "Exception: " ++ show e) ]
                `finally` (putMVar terminated ())
    nid <- forkIO run

The first MyTerminateException gets handled by the first exception handler.
This handler returns, and at that point exceptions are unblocked again, so the
second MyTerminateException can be thrown. The putMVar gets to run, and then
the exception is re-thrown by finally, and caught and printed by the outer
exception handler.

The right way to fix it is like this:

    let run = unblock doSomething
                `catches` [ Handler (\(_ :: MyTerminateException) -> return ()),
                            Handler (\(e :: SomeException) ->
                                       putStrLn $ "Exception: " ++ show e) ]
                `finally` (putMVar terminated ())
    nid <- block $ forkIO run

and the same will be true in GHC 7.0, except you'll need to use mask instead
of block/unblock.

Cheers,
Simon
http://www.haskell.org/pipermail/haskell-cafe/2010-November/086209.html
In this tutorial, you'll build a web app that implements an OAuth authorization flow. The main goal of OAuth authorization is to allow third-party applications to interact with a Zendesk Support instance without having to store and use the passwords of Zendesk Support users, which is sensitive information that the apps shouldn't know.
When you use basic authentication, you have to specify a username with a password or an API token. Example:
curl https://{subdomain}.zendesk.com/api/v2/tickets.json \ -u {email_address}:{password}
With OAuth authentication, the app specifies an OAuth access token in an HTTP header as follows:
curl https://{subdomain}.zendesk.com/api/v2/tickets.json \ -H "Authorization: Bearer {access_token}"
If the user doesn't have an access token, the app has to send the user to a Zendesk Support authorization page where the user may or may not authorize the app to access Zendesk Support on their behalf. If the user authorizes access, Zendesk Support provides the app with an authorization code that it can exchange for an access token. A different access token is provided for each user who authorizes the app. The app never learns the user's Zendesk Support password.
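To make the exchange step concrete, here is a hedged sketch of the code-for-token exchange and a token-authenticated request using the requests library. The subdomain, client ID, secret, and redirect URL are placeholders; the /oauth/tokens endpoint, the grant_type parameters, and the /api/v2/users/me.json endpoint follow Zendesk's documented OAuth flow, but treat this as a sketch rather than the tutorial's final code, which is built up step by step later:

```python
import requests

# Placeholder values -- substitute your registered app's details
SUBDOMAIN = "yoursubdomain"
CLIENT_ID = "oauth_tutorial_app"
CLIENT_SECRET = "your_client_secret"
REDIRECT_URI = "http://localhost:8080/handle_user_decision"

def exchange_code_for_token(authorization_code):
    """Trade the one-time authorization code for a reusable access token."""
    resp = requests.post(
        f"https://{SUBDOMAIN}.zendesk.com/oauth/tokens",
        json={
            "grant_type": "authorization_code",
            "code": authorization_code,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "redirect_uri": REDIRECT_URI,
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def get_profile(access_token):
    """Call the API on the user's behalf -- no password involved."""
    resp = requests.get(
        f"https://{SUBDOMAIN}.zendesk.com/api/v2/users/me.json",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    resp.raise_for_status()
    return resp.json()["user"]
```

Note that the app only ever stores the access token, never the user's credentials, which is the whole point of the flow.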
The goal of this tutorial is to show you how to implement the OAuth authorization grant flow in a web application. To keep things simple, the app is built using a Python micro web framework called Bottle. The framework helps keep technical details to a minimum so you can focus on the authorization flow.
Steps covered in this tutorial:
- What you need
- Get the tutorial app files
- Register your app with Zendesk Support
- Send the user to the Zendesk Support authorization page
- Handle the user's authorization decision
- Use the access token
- Code complete
To implement OAuth authentication in Zendesk apps, see Adding OAuth to apps.
For more information on using OAuth in Zendesk Support, see Using OAuth authentication with your application.
What you need
You need a text editor and a command-line interface like the command prompt in Windows or the Terminal on the Mac.
You'll also need Python and a few Python libraries to develop the web app, as described next.
Install Python 3
Install version 3 of Python on your computer if you don't already have it. Python is a powerful but beginner-friendly scripting and programming language with a clear and readable syntax. Visit the Python website to learn more. To download and install it, see.
If you have any problems, see the pip instructions.
Install Requests
Requests is a library that simplifies making HTTP requests. Use the following pip command in your command-line interface to download and install:
$ pip3 install requests
If you have any problems, see the Requests instructions.
Note: Depending on your setup, you may need to use pip instead of pip3 on the command line.
Install Bottle
Use the following pip command to download and install Bottle, a micro web framework for Python.
$ pip3 install bottle
If you have any problems, see the Bottle instructions.
Python notes
Get the tutorial app files
You need a working web app before you can implement the OAuth authorization flow. To keep things simple, the tutorial app uses a micro web framework called Bottle. All the logic of a Bottle app can fit comfortably in a single file. Because this isn't a Bottle tutorial, a set of starter files is provided.
To also keep the spotlight on the OAuth flow, the app will limit itself to getting and displaying the user's Zendesk Support user name and role, which could be admin, agent, or end-user.
Topics covered in this section:
Download the tutorial app files
If you don't already have one, create a tutorials folder.
Download and unzip the oauth_tutorial_app.zip file attached to this article in your tutorials folder.
The zip file contains starter files for the tutorial application. See the next section to learn more about the files and how Bottle works.
Bottle app basics
Navigate to the oauth_tutorial_app folder in a file browser. The app consists of the following folders and files:
/oauth_tutorial
    oauth_app.py
    /static
    /views
The oauth_app.py file is the app's nerve center. Open it in a text editor to take a look:
The file consist of routes that map HTTP requests to functions. The return value of each function is sent in the HTTP response.
For example, when a browser requests the page at the /home relative URL, the app looks up the /home route and runs the show_home() custom function. The show_home() function returns the results of the framework's template() function in the HTTP response. The template() function renders a template named 'home' in HTML. Templates are located in the views folder and have .tpl file extensions.
Here's the home.tpl template:
<html> <head> <title>Oauth Tutorial App Home</title> <link href="/css/main.css" rel="stylesheet"> </head> <body> <h2>Welcome to the OAuth tutorial app</h2> <a href="zendesk_profile">Get Zendesk profile info about myself.</a> </body> </html>
The home template includes a link to "zendesk_profile", which is handled by the /zendesk_profile route in the oauth_app.py file:
@route('/zendesk_profile') def make_request(): # Get user data profile_data = {'name': 'Lea Lee', 'role': 'admin'} return template('details', data=profile_data)
The template() function passes the information contained in the profile_data variable to the template using the data keyword argument. The argument name is arbitrary. For now, the function sends dummy data until you add the request code later.
The argument values are inserted at placeholders in the details.tpl template:
<p>Hello, {{data['name']}}. You're a Zendesk Support {{data['role']}}.</p>
The placeholders contain standard Python expressions for reading data in a dictionary. Possible values for data['role'] are 'end-user', 'agent', and 'admin'.
Finally, the oauth_app.py file calls the framework's run() function to run the app on a local web server:
run(host='localhost', port=8080, debug=True)
The Bottle framework includes a built-in development server that you can use to test your changes locally before deploying them to a remote server.
To learn more about the framework, see Bottle: Python Web Framework on the Bottle website.
Run the app
In your command-line interface, navigate to the oauth_tutorial_app folder.
Run the following command to start the local server:
$ python3 oauth_app.py
In a browser, go to.
You should see the tutorial app's admittedly plain home page:
Run some tests.
Try clicking the link on the home page. Try opening.
When you're done, switch to your command-line interface and press Ctrl+C to shut down the server.
You're ready to start implementing the OAuth authorization flow. The first step is to register your new app with Zendesk Support.
Register your app with Zendesk Support
Before you can use OAuth authentication, you need to tell Zendesk Support about your application. You must be signed in as a Zendesk Support administrator to register an app.
In Zendesk Support, click Manage and then select API in the Channels category.
Click the OAuth Clients tab on the Channels/API page, and then click Add a Client on the right side of the client list.
A page for registering your application appears. The Secret field is pre-populated.
Complete the following fields:
- Client Name - Enter OAuth Tutorial App. This is the name that users will see when asked to grant access to your application and when they check the list of third-party apps that have access to their Zendesk Support instance.
- Description - Optional, but you can enter something like "This app gets information from your Zendesk Support profile." Users will see this short description when asked to grant access to it.
- Company - Optional, but you can enter your company's name if you like. This is the company name that users will see when asked to grant access to your application. The information can help them understand who they're granting access to.
- Logo - Optional, but you can upload the logo-small.jpg sample image in the /static/images/ folder in the starter files. This is the logo that users will see when asked to grant access to your application. The image can be a JPG, GIF, or PNG. For best results, upload a square image. It'll be resized for the authorization page.
- Unique Identifier - Click the field to auto-populate it with the name you entered for your app. You can change it if you want.
Redirect URLs - Enter the following URL:
This is the URL that Zendesk Support will use to send the user's decision to grant access to your application. You'll create a route for handle_user_decision later.
Click Save.
You'll be prompted to save the secret on the next page.
Copy the Secret value and save it somewhere safe.
The characters may extend past the width of the text box, so make sure to select everything before copying.

Important: For security reasons, your secret is displayed fully only once. After clicking Save, you'll only have access to the first nine characters.
Click Save.
Now that you've registered the app with Zendesk Support, you can modify it to send users to a Zendesk Support authorization page the first time they attempt to use your app to access Zendesk Support data.
Send the user to the Zendesk Support authorization page
The first change to make to the app is to send users to the Zendesk Support authorization page if they haven't authorized your app yet. Here's the logic that you'll implement: If the user has an access token, make the request. If not, kick off the authorization flow.
Open the oauth_app.py file and change the /zendesk_profile route as follows:
@route('/zendesk_profile')
def make_request():
    has_token = False
    if has_token:
        # Get user data
        profile_data = {'name': 'Lea Lee', 'role': 'admin'}
        return template('details', data=profile_data)
Make sure the lines are indented as shown.
The make_request() function starts by checking to see if the user has an access token. If they have a token (if has_token is true), the app makes the API request right away. You'll replace the temporary has_token code later in the tutorial. You'll also add the API request code. Dummy data is used for now.
Add an else clause to kick off the authorization flow if the user doesn't have a token (if has_token is false):
else:
    # Kick off authorization flow
    parameters = {}
    url = 'https://{subdomain}.zendesk.com/oauth/authorizations/new?' + parameters
    redirect(url)
Make sure to replace {subdomain} in the URL with your Zendesk Support subdomain.
If the user doesn't have a token, they're redirected to the Zendesk Support authorization page.
You must send certain parameters in a query string with the URL, as described next. Zendesk Support uses the parameters to customize the authorization page for the user.
Add the following name/value pairs to the parameters variable in the else clause:
parameters = {
    'response_type': 'code',
    'redirect_uri': '',
    'client_id': 'oauth_tutorial_app',
    'scope': 'read write'}
Because the app uses the authorization code grant flow, you have to specify 'code' as the response type. The 'redirect_uri' value is where you want Zendesk Support to send the user's decision. The 'client_id' value is the "unique identifier" created when you registered the app in Zendesk Support. The 'scope' value is what the app is asking permission to do in Zendesk Support. If the app will only get data and not create it, you could specify 'read' as the scope.
The parameters must be url-encoded to be sent in a query string, as described next.
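To see what url-encoding actually produces, you can run urlencode() on a similar dictionary in a Python shell. This is a standalone illustration, not part of the tutorial app; the output shown assumes Python 3.7+, where dictionaries preserve insertion order:

```python
from urllib.parse import urlencode

# A subset of the tutorial's authorization parameters
parameters = {
    'response_type': 'code',
    'client_id': 'oauth_tutorial_app',
    'scope': 'read write',
}

query = urlencode(parameters)
print(query)
# response_type=code&client_id=oauth_tutorial_app&scope=read+write
```

Note that urlencode() escapes spaces as '+' and joins each name/value pair with '&', which is exactly the form a query string needs.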
Pass the parameters variable to the urlencode() method in the url variable declaration:
url = '.../oauth/authorizations/new?' + urlencode(parameters)
Add the following line at the top of the oauth_app.py file to import the urlencode method:
from urllib.parse import urlencode
The urllib.parse module is included with Python, so you don't have to install it.
The updated route should look as follows with your value for the {subdomain} placeholder:
@route('/zendesk_profile')
def make_request():
    has_token = False
    if has_token:
        # Get user data
        profile_data = {'name': 'Lea Lee', 'role': 'admin'}
        return template('details', data=profile_data)
    else:
        # Kick off authorization flow
        parameters = {
            'response_type': 'code',
            'redirect_uri': '',
            'client_id': 'oauth_tutorial_app',
            'scope': 'read write'}
        url = 'https://{subdomain}.zendesk.com/oauth/authorizations/new?' + urlencode(parameters)
        redirect(url)
Make sure your code is indented as shown, ignore any line wraps caused by the right margin, and then test that it works:
Start the development server from the command line:
$ python3 oauth_app.py
If the server is still running from the previous session, shut it down with Ctrl+C and start it up again for the changes to take effect.
Open a browser.
Click Get Zendesk profile info about myself.
You should be redirected to the Zendesk Support authorization page because has_token is false. If you're not currently signed in to Zendesk Support, you'll be asked to sign in first. This is how Zendesk Support knows who you are.
Eventually, the authorization page opens:
Don't click Allow yet. The redirect page doesn't exist yet. You'll work on it next. If you do click Allow, it's not the end of the world. You'll just get a 404 message, as expected.
Handle the user's authorization decision
After the user makes the decision on the Zendesk Support authorization page to allow or deny access to your app, Zendesk Support sends the decision and a few other bits of information to the redirect URL you specified.
If the user decided to authorize the application, Zendesk Support adds a query string that contains an authorization code. Example:
{redirect_url}?code=7xqwtlf3rrdj8uyeb1yf
If the user decided not to authorize the application, Zendesk Support adds a query string that contains error and error_description parameters that inform the app that the user denied access. Example:
{redirect_url}?error=access_denied&error_description=The+end-user+or+authorization+server+denied+the+request
Use the possible query string values to control the flow of your application. Here's the approach you'll take: If the query string contains the string 'error', show an error message. If not, get the access token.
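Outside of Bottle, the same branching can be sketched with the standard library's parse_qs(), using the two example query strings above. The helper name outcome() is illustrative, not from the tutorial:

```python
from urllib.parse import parse_qs

# The two query strings Zendesk Support might send back (from the examples above)
denied = 'error=access_denied&error_description=The+end-user+or+authorization+server+denied+the+request'
granted = 'code=7xqwtlf3rrdj8uyeb1yf'

def outcome(query_string):
    """Return ('error', message) on denial, or ('code', auth_code) on approval."""
    params = parse_qs(query_string)
    if 'error' in params:
        return ('error', params['error_description'][0])
    return ('code', params['code'][0])

print(outcome(denied))   # ('error', 'The end-user or authorization server denied the request')
print(outcome(granted))  # ('code', '7xqwtlf3rrdj8uyeb1yf')
```

parse_qs() decodes the '+' signs back into spaces for you, so the error description is readable as-is.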
Start by creating a route for the "/handle_user_decision" redirect URL:
@route('/handle_user_decision')
def handle_decision():
    if 'error' in request.query_string:
        return template('error', error_msg=request.query.error_description)
    else:
        # Get access token
The handle_decision() function checks to see if the string 'error' appears in the query string. In the Bottle framework, the request object refers to the current HTTP request, and the query_string property refers to the request's query string, if any.
If 'error' is found, the app renders the error template with the error message from the error_description parameter.
If 'error' is not found, then an authorization code must have been sent. The app can grab the code from the query string and exchange it for an access token with a POST request to a specific API endpoint, as described next.
In the else clause, define the parameters that must be included in the POST request:
parameters = {
    'grant_type': 'authorization_code',
    'code': request.query.code,
    'client_id': 'oauth_tutorial_app',
    'client_secret': '{your_secret}',
    'redirect_uri': '',
    'scope': 'read'}
The 'grant_type' value is 'authorization_code' because you're implementing an authorization code grant flow. The 'code' value is the actual authorization code, which is retrieved from the query string with the query.code property of the framework's request object. The 'client_secret' value is the "Secret" generated when you registered the app in Zendesk Support. The 'redirect_uri' value is the same redirect URL as before.
Add the following statements after the parameters to make the POST request:
...
payload = json.dumps(parameters)
header = {'Content-Type': 'application/json'}
url = 'https://{subdomain}.zendesk.com/oauth/tokens'
r = requests.post(url, data=payload, headers=header)
Make sure to replace {subdomain} in the URL with your Zendesk Support subdomain.
The requests.post() method from the requests library makes the request. The response from Zendesk Support is assigned to the r variable. (The response identifier is reserved for the Bottle response object used in the next step.)
Handle the response:
...
if r.status_code != 200:
    error_msg = 'Failed to get access token with error {}'.format(r.status_code)
    return template('error', error_msg=error_msg)
else:
    data = r.json()
    response.set_cookie('owat', data['access_token'])
    redirect('/zendesk_profile')
If the request was successful (the HTTP status code was 200), the else clause is executed. The app decodes the JSON data and gets the token. It then saves the token in a cookie named 'owat' (for oauth web app tutorial) on the user's computer with the framework's response.set_cookie() method.

Important: To keep things simple, the app saves the token unencrypted in a cookie on the user's machine, but be aware of the security implications. It's like saving a password in a cookie. Somebody could use it to access the user's information on Zendesk Support. Unfortunately, storing access tokens securely is beyond the scope of this tutorial.
Finally, the user is redirected to the zendesk_profile page, which kicked off the authorization flow.
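As a partial mitigation for the security caveat above, you could at least sign the cookie value so that tampering is detectable. The sketch below uses only the standard library; the SECRET_KEY name and the value|mac format are illustrative assumptions, not part of the tutorial:

```python
import hmac
import hashlib

SECRET_KEY = b'change-me'  # hypothetical server-side key, never shipped to the browser

def sign_token(token):
    """Append an HMAC-SHA256 signature so the cookie value can be verified later."""
    mac = hmac.new(SECRET_KEY, token.encode(), hashlib.sha256).hexdigest()
    return token + '|' + mac

def verify_token(signed):
    """Return the original token if the signature checks out, else None."""
    token, _, mac = signed.rpartition('|')
    expected = hmac.new(SECRET_KEY, token.encode(), hashlib.sha256).hexdigest()
    return token if hmac.compare_digest(mac, expected) else None
```

Bottle can also do this for you: both response.set_cookie() and request.get_cookie() accept a secret keyword argument that signs and verifies cookie values. Signing prevents tampering but does not hide the token, so treat it as a partial measure only.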
Add the following libraries used in the code at the top of the file:
import json
import requests
The completed /handle_user_decision route should look as follows (make sure your indentation is correct, ignoring the line wraps caused by the right margin):
@route('/handle_user_decision')
def handle_decision():
    if 'error' in request.query_string:
        return template('error', error_msg=request.query.error_description)
    else:
        # Get access token
        parameters = {
            'grant_type': 'authorization_code',
            'code': request.query.code,
            'client_id': 'oauth_tutorial_app',
            'client_secret': '{your_secret}',
            'redirect_uri': '',
            'scope': 'read'}
        payload = json.dumps(parameters)
        header = {'Content-Type': 'application/json'}
        url = 'https://{subdomain}.zendesk.com/oauth/tokens'
        r = requests.post(url, data=payload, headers=header)
        if r.status_code != 200:
            error_msg = 'Failed to get access token with error {}'.format(r.status_code)
            return template('error', error_msg=error_msg)
        else:
            data = r.json()
            token = data['access_token']
            response.set_cookie('owat', token)
            redirect('/zendesk_profile')
Use the access token
Your app can now check to see if the user has an access token. If the token exists, the app can use it to request the user's information from Zendesk Support.
In the /zendesk_profile route, replace the following 2 lines:
has_token = False
if has_token:
with the following line at the same indent level:
if request.get_cookie('owat'):
The app checks to see if the cookie named 'owat' exists on the user's computer using the Bottle framework's request.get_cookie() method. If the cookie exists, it gets the user data from Zendesk Support. If the cookie doesn't exist, the app kicks off the authorization flow.
Replace the contents of the if clause with the following request code:
# Get user data
access_token = request.get_cookie('owat')
bearer_token = 'Bearer ' + access_token
header = {'Authorization': bearer_token}
url = 'https://{subdomain}.zendesk.com/api/v2/users/me.json'
r = requests.get(url, headers=header)
if r.status_code != 200:
    error_msg = 'Failed to get data with error {}'.format(r.status_code)
    return template('error', error_msg=error_msg)
else:
    data = r.json()
    profile_data = {
        'name': data['user']['name'],
        'role': data['user']['role']}
    return template('details', data=profile_data)
Make sure to replace {subdomain} in the URL with your Zendesk Support subdomain.
The first three lines build the authorization header:
access_token = request.get_cookie('owat')
bearer_token = 'Bearer ' + access_token
header = {'Authorization': bearer_token}
The rest of the block makes the request with the Authorization header, updates the profile_data variable with the response data, and displays the detail page with the user's profile information. To learn more about making API requests, see Zendesk REST API tutorial - Python edition.
Code complete
The app is done and ready for testing. Here's the finished version of the oauth_app.py file:
Make sure your indentation is correct, then run the app to test it.
Start the local server:
$ python3 oauth_app.py
In a browser, go to.
Run some tests.
If you can, try using different browsers as different Zendesk Support users.
When you're done, switch to your command-line interface and press Ctrl+C to shut down the server.
You can keep tweaking or adding to the app if you want. See the following resources for more information:
As I mentioned in my last post, I am going to dive deep into ASP.NET web development. I have watched several video tutorials including Introduction to ASP.NET MVC by Christopher Harrison and Jon Galloway and Implementing Entity Framework with MVC.
One thing that I really like about ASP.NET so far is that Visual Studio is able to generate code for me which I would have to write repeatedly by myself. A feature called Scaffolding creates a Controller and several Views on basis of a model.
How does this feature work and how can we tell Visual Studio to generate the desired code for us? I’m going to tell you about it in this post.
Furthermore, I have come across some common errors that stopped me from generating the desired code. I also want to help anyone struggling with this feature to be successful.
How do I use scaffolding in an ASP.NET MVC application?
There are three prerequisites that we need to provide:
- We need a model class which acts as the basis for code generation
- We need EntityFramework installed in our project
- We need a DbContext class
First of all, you need to create a model class. A model represents the data in the MVC (model, view, controller) pattern. In this post, I am going to use a Person model implemented like this:
PersonModel.cs
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Web;

namespace WebApplication1.Models
{
    public class PersonModel
    {
        [Key]
        public int PersonID { get; set; }

        public string LastName { get; set; }
        public string FirstName { get; set; }
        public DateTime Birthday { get; set; }
    }
}
After creating your model, we need to implement a DbContext class for our project. In this case, I made it simple, because the DbContext is not the main topic of this article:
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Web;
using WebApplication1.Models;

namespace WebApplication1.DataAccess
{
    public class WebApplication1DbContext : DbContext
    {
        public DbSet<PersonModel> Persons { get; set; }
    }
}
When both our model and our data context classes are implemented, we can go to the Solution Explorer and open the context menu on the Controllers folder of our solution:
On the next screen, we can choose what we want to generate. We select the MVC 5 Controller with views, using Entity Framework option:
Finally, we arrive at the Add Controller dialog where we can fill in the model class and the data context class we implemented before. Make sure to select a model and a data context class. Otherwise, the Add button of the dialog won’t be enabled and you cannot click it.
Visual Studio starts scaffolding when we click on the Add button and may successfully generate the code for our Controller and Views.
If there is a problem while the process is taking place (and chances are there will be), jump to the last section of this post, where I write about common problems that occur while scaffolding.
Generated artefacts
The generated controller contains code for creating, editing, deleting and listing Person objects. With just a simple click in the scaffolding dialog, we have a full-featured CRUD controller.
Furthermore, Visual Studio generated the corresponding View to our Controller. There are the following Razor syntax views generated: Create.cshtml, Delete.cshtml, Details.cshtml, Edit.cshtml, Index.cshtml.
Just to show what a generated View looks like, here is a screenshot of Create.cshtml:
Source code
You can find the complete ASP.NET MVC project for download here. It contains all the code shown above and runs after downloading the required packages from NuGet.
Common problems
The Add button in the Add Controller dialog remains disabled
The dialog does not communicate what the required fields are. We have to fill in a model class and a data context class. If we don’t provide one of those values, the Add button won’t be enabled.
Error: There was an error running the selected code generator: Try rebuilding the project
If we get the following error message, we are told to rebuild the project. This is odd because when we open up the dialog again, we have to fill in all the fields again. So make sure you always build your project before you try to scaffold your model into Views and Controllers.
Error: EntityType ‘PersonModel’ has no key defined. Define the key for this EntityType.
Another common error is the following:
When we face this error, it means that we don’t provide an “ID” column for our model. In the example above, I named my key field “PersonID”. If I want to do so, I have to add the KeyAttribute to the property to tell the system that this property should be the key field of the model.
[Key]
public int PersonID { get; set; }
Conclusion
Scaffolding is a powerful concept provided by Visual Studio which speeds up development and helps following design guidelines at the same time. I have used it since I started developing for ASP.NET MVC 5 and it helps me a lot.
At least with Visual Studio 2013 Update 4, there are a few things which could be improved. Anyway, I recommend trying it to see if it helps.
Update User Data during loop
On 01/10/2013 at 02:38, xxxxxxxx wrote:
Hello everybody. Below I have a very simplified version of what I'm trying to create.
The simplified code:
Runs a loop to get all of the user data and prints it to the console
Creates a new user data object (just lifted that straight from the documentation)
Runs the same loop again and print to the console again.
The results in the console are identical and I have to run the script again to be able to access the new userdata.
import c4d

def main():
    obj = doc.SearchObject("Null")
    ud = obj.GetUserDataContainer()

    for id, bc in ud:
        print id, bc[1]  # The readout on console shows Test data 1 and Test data 2

    bc = c4d.GetCustomDataTypeDefault(c4d.DTYPE_LONG)  # Create default container
    bc[c4d.DESC_NAME] = "Test data 3"                  # Rename the entry
    element = obj.AddUserData(bc)                      # Add userdata container
    obj[element] = 0                                   # Assign a value
    c4d.EventAdd()                                     # Update

    for id, bc in ud:
        print id, bc[1]  # The readout shows Test data 1 and Test data 2 but no sign of Test data 3

if __name__ == '__main__':
    main()
I know I can set all of the parameters before I create the userdata, however what I intend to do is create a group and then clone some userdata and nest it in the group without having to run the script twice.
My question is: Can I access newly created/cloned userdata within a single execution of a script?
Any help is much appreciated.
Thanks
Adam
On 01/10/2013 at 08:09, xxxxxxxx wrote:
scripts are single threaded / executed from the c4d main thread. the general approach you would take on this would be a python tag. the python tag code is single threaded too, but it allows you to use some fallthrough construct to split your task into two passes.
if data not in userdata:
    UpdateMe()
elif condition is met:
    DoSomething()
edit: lol, I just realized that I have hit the posting count sweet spot for my nick ;)
On 01/10/2013 at 08:25, xxxxxxxx wrote:
UserData works the same way as other objects.
If you change it in some manner, you have to grab it again after the changes were made to get its current values.
import c4d

def main():
    obj = doc.SearchObject("Null")

    # The master UD container
    ud = obj.GetUserDataContainer()

    # Print the current UD entries in the master container
    for id, bc in ud:
        print id, bc[1]

    bc = c4d.GetCustomDataTypeDefault(c4d.DTYPE_LONG)
    bc[c4d.DESC_NAME] = "Test data 2"
    element = obj.AddUserData(bc)  # Add this new userdata to the master UD container
    obj[element] = 10              # Assign a value to it
    c4d.EventAdd()

    # We've changed the master UD container above
    # So we have to grab it again to get the current stuff in it
    ud = obj.GetUserDataContainer()
    for id, bc in ud:
        print id, bc[1]

if __name__ == '__main__':
    main()
-ScottA
On 01/10/2013 at 08:36, xxxxxxxx wrote:
@littledevil Ha, Demonic post! - I'm privileged. I'm actually using a Python Node for this bit and created a sloppy workaround which involved an Iterator Node to trigger the Python twice.
@ScottA That's what I was looking for! Thank you. I feel like bashing my head off the desk for missing that. Solved!
Products and Services
Downloads
Store
Support
Education
Partners
About
Oracle Technology Network
This issue is part of the general issue JDK-7124410 [macosx] Lion HiDPI support
A user provides high resolution images with @2x modifier.
The Toolkit.getImage(String filename) and Toolkit.getImage(URL url) methods should automatically load the high resolution image and show an image with best fit resolution on HiDPI display (Retina) on Mac OS X.
For example, there are files on disk:
- /dir/image.ext
- /dir/image@2x.ext
Image image = Toolkit.getDefaultToolkit().getImage("/dir/image.ext") //both images (image.ext and image@2x.ext) should be automatically loaded
graphics2d.drawImage(image,..) // The image with necessary resolution should be drawn according to the display transform and scale
No way to verify.
URL:
User: lana
Date: 2014-02-10 18:56:57 +0000
URL:
User: alexsch
Date: 2014-01-30 09:41:31 +0000
It was decided to not expose new public API for this bug.
I have withdrawn the CCC request 8011059 and created the separated issue JDK-8029339 Custom MultiResolution image support on HiDPI displays
So no public class fields or methods (RenderingHints) are added.
Release team: Approved for deferral.
8-defer-request:
High resolution images support is dropped for HiDPI displays in JDK 8.
If this fix can be completed without a new API then it is not critical for integration now. Please defer to 8u20.
- Support for Retina is important but not a release driver
- If this could only be done via new API then there might be a case for integrating it now as we otherwise could not add new APIs until JDK 9
- Since this can be done without new APIs it is not a justifiable risk at this time
The fix consists of 2 parts
- high resolution images with @2x modifier are automatically loaded by LWCToolkit on Mac OS X
- SunGraphics2D.drawImage(Image...) method draws an image with necessary resolution according to the current display scale and transform
If no image with @2x modifier is provided the standard logic to draw the base image is used.
If image with @2x modifier is provided then an algorithm calculates actual image width and height to be drawn on screen according to the display scale and transform, gets an image with necessary resolution and draw it in the standard way.
If an image with @2x modifier contains an error the base image is drawn in the standard way.
There should be low risk of this fix because the base algorithms that loads and draws images only takes in account does the ToolkitImage contains image with high resolution or not.
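The variant-selection behavior described above can be sketched in Python for illustration. This is a simplification of the idea only; the actual fix lives in LWCToolkit and SunGraphics2D, and the function below is not JDK code:

```python
def choose_variant(base, at2x, display_scale):
    """Pick which image variant to draw for the given display scale.

    Falls back to the base image when no @2x variant is available
    (and the real fix also falls back when the @2x image contains an error).
    """
    if at2x is not None and display_scale > 1.0:
        return at2x
    return base

# On a Retina display (scale 2.0) the @2x variant is chosen:
print(choose_variant('image.ext', 'image@2x.ext', 2.0))  # image@2x.ext
# Without an @2x file, the base image is used as before:
print(choose_variant('image.ext', None, 2.0))            # image.ext
```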
that was fixed in 7u40, so it becomes P2 from jdk8 pov
NMI: need an assessment of the risk, fix the synopsis to reflect that this is an API change, explain the nature of the code change.
The synopsis of this bug is misleading. It's really about an API change (adding additional constants to RenderingHints), not directly about making the demos look perfect.
SQE: OK to push in jdk8
- webrev link:
- review link:
- JPRT build link:
- issue impact: Users images will look ugly on HiDPI displays like Retina on Mac OS X
- fix rational: Allow to use images with high resolution on HiDPI Displays (Retina on Mac OS X)
-risks: Low
-suggested testing: run automated tests jdk/test/java/awt/image/MultiResolutionImageTest.java
fix is on review
A given WordprocessingML document can have one or more schemas "attached" to it. The purpose of schema attachment is to enable two things:
On-the-fly schema validation as the user edits the document
Schema-driven editing functionality
Schema validation happens automatically as a user edits the document. If a particular element declared in an attached schema is present in the document and does not conform to the type defined in the schema, then Word will flag this as an error. We've seen examples of this in our press release example, for certain simple types such as xsd:date.
Schema-driven editing functionality is exposed through the XML Structure task pane (covered below) and the Document Actions task pane (covered in Chapter 5).
The Word UI allows you to manually attach schemas to the currently open document. Figure 4-17 shows the appropriate dialog, which you can access by selecting Tools > Templates and Add-Ins > XML Schema.
The "Available XML schemas" list contains the aliases for all of the schemas in the schema library. In this example, the Press Release checkbox is checked, which means that the press release schema is attached to the current document. Multiple schemas can be attached to the same document, just as elements from multiple namespaces can be used in the same XML document.
The Add Schema... button lets you browse for an XSD schema document file in order to add it to your machine's schema library. By default, it also attaches the schema to the document, automatically checking the corresponding checkbox that newly appears in the "Available XML schemas" list. The Schema Library button opens the Schema Library dialog, which we looked at earlier.
If all you ever do is manually attach schemas through the Word UI, the process of "schema attachment" may seem a little mysterious. The first thing to do is to stop thinking of it as a process. Instead, think of it as a property of the underlying WordprocessingML document. Secondly, it's important to understand that Word treats namespaces and schemas as virtually synonymous. That a "schema is attached" to a document means nothing more than the fact that a non-WordprocessingML namespace declaration is present somewhere inside the WordprocessingML document. A "non-WordprocessingML namespace declaration" is a declaration for any namespace other than the namespaces reserved for Word that were introduced in Chapter 2. So when Word says that a schema is attached to a document, it really means that a namespace is attached.
The fact that a schema is attached to the document is independent of whether a corresponding schema library entry is present on the current user's machine. It doesn't even matter if the document contains an element or attribute that uses the namespace.
Example 4-6 shows a simple WordprocessingML document with a schema attached, i.e., with a namespace declaration that is not among one of Word's reserved namespaces.
<?xml version="1.0"?>
<?mso-application progid="Word.Document"?>
<w:wordDocument
    xmlns:w="http://schemas.microsoft.com/office/word/2003/wordml"
    xmlns:pr="...">
  <w:body/>
</w:wordDocument>
If someone in our imaginary PR department opened this document in Word and selected Tools > Templates and Add-Ins > XML Schema, they would see something very similar to the dialog box we saw in Figure 4-8 (assuming they already have the Press Release schema in their schema library). Specifically, the Press Release checkbox would be checked. As far as Word is concerned, the mere presence of the namespace declaration (anywhere in the document) means that the schema is attached, regardless even of whether any elements or attributes in the document use the namespace.
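Because "schema attachment" boils down to the presence of a non-reserved namespace declaration, you can approximate Word's view of a document by scanning its raw markup for namespace declarations. The following Python sketch is illustrative only: the reserved-namespace set is a small subset of Word's actual reserved namespaces, and the pr namespace URI is an invented example, not from the book:

```python
import re

# Illustrative subset of Word's reserved namespaces (the real list is longer)
RESERVED = {
    'http://schemas.microsoft.com/office/word/2003/wordml',
}

def attached_namespaces(wordml):
    """Return namespace URIs that Word would treat as 'attached schemas'."""
    declared = set(re.findall(r'xmlns:\w+="([^"]+)"', wordml))
    return declared - RESERVED

doc = '''<w:wordDocument
  xmlns:w="http://schemas.microsoft.com/office/word/2003/wordml"
  xmlns:pr="http://example.com/pressRelease">
  <w:body/>
</w:wordDocument>'''

print(attached_namespaces(doc))  # {'http://example.com/pressRelease'}
```

The w namespace is filtered out as reserved, while the hypothetical pr namespace remains, mirroring how Word would report a "Press Release" schema as attached.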
What happens if the user doesn't have a corresponding schema library entry? In that case, the schema is no less attached, because we've defined "schema attachment" as the presence of a non-WordprocessingML namespace declaration. However, in this case, the attached schema would be considered "unavailable." Figure 4-18 shows how the Word UI handles this scenario.
As you can see, a checkbox is still checked, meaning that "a schema is attached." The only difference is that, since there is no corresponding schema library entry, this schema is considered to be "Unavailable." And without a corresponding XSD schema document, schema validation and schema-driven editing are not possible.
Thus, for schema validation to work correctly, two conditions must hold:
The schema must be attached (the namespace must be declared in the document)
The schema must be available (in the machine's schema library).
Now let's relate all of this back to our primary use case using Word as an XML editor. If you recall the basic processing model, the first thing that happens when Word opens an arbitrary XML document is that an XSLT stylesheet is applied to it, converting it to WordprocessingML. Even though the schema library is consulted to see which XSLT stylesheet to apply (based on the namespace of the document's root element), no schemas have been attached at this point.
Whether a schema is ultimately attached to the document that the user edits is completely determined by whether the result of the onload XSLT transformation includes any non-WordprocessingML namespace declarations. Of course, if the result document contains any custom XML elements in your schema's namespace, then the schema will de facto be attached (because you can't have an element without declaring its namespace). And since schema validation is usually only useful when custom XML elements are already present, schema attachment is usually an automatic thing you don't have to think about; it just happens. Even so, understanding how it works is helpful for debugging and for explaining where unwanted "unavailable" schemas come from namely, wayward namespace declarations in the result of the onload transformation. (The onload XSLT stylesheets will therefore often use the exclude-result-prefixes and extension-element-prefixes attributes to prevent unwanted namespace declarations appearing in the WordprocessingML document.)
8.36: Drawing the “Next” Piece
- Page ID
- 14615
def drawNextPiece(piece):
    # draw the "next" text
    nextSurf = BASICFONT.render('Next:', True, TEXTCOLOR)
    nextRect = nextSurf.get_rect()
    nextRect.topleft = (WINDOWWIDTH - 120, 80)
    DISPLAYSURF.blit(nextSurf, nextRect)
    # draw the "next" piece
    drawPiece(piece, pixelx=WINDOWWIDTH-120, pixely=100)

if __name__ == '__main__':
    main()
The drawNextPiece() function draws the "Next" piece in the upper right corner of the screen. It does this by calling the drawPiece() function and passing in arguments for drawPiece()’s pixelx and pixely parameters.
That’s the last function. Lines 11 [505] and 12 [506] are run after all the function definitions have been executed, and then the main() function is called to begin the main part of the program.
https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Making_Games_with_Python_and_Pygame_(Sweigart)/08%3A_Tetromino/8.36%3A_Drawing_the_%E2%80%9CNext%E2%80%9D_Piece
Updating
We are always open to your feedback.
Folder Structure
After creation, your project should look like this:
my-app/
  README.md
  ...
Displaying Lint Output in the Editor
There is currently no support for preprocessors such as Less, or for sharing variables across CSS files.
Adding Images and Fonts
However it may not be portable to some other environments, such as Node.js and Browserify. If you prefer to reference static assets in a more traditional way outside the module system, please let us know in this issue, and we will consider support for this.
You can find the companion GitHub repository here.
Proxying API Requests in Development:
- Enable CORS on your server (here’s how to do it for Express).
- Use environment variables to inject the right server host and port into your app.
Using HTTPS in Development
set HTTPS=true&&npm start
Generating Dynamic <meta> Tags on the Server
Running Tests
Version Control Integration.
Testing:
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

it('renders without crashing', () => {
  const div = document.createElement('div');
  ReactDOM.render(<App />, div);
});
import React from 'react';
import { shallow } from 'enzyme';
import App from './App';

it('renders without crashing', () => {
  shallow(<App />);
});
import React from 'react';
import { shallow } from 'enzyme';
import App from './App';

it('renders welcome message', () => {
  const wrapper = shallow(<App />);
  const welcome = <h2>Welcome to React</h2>;
  // expect(wrapper.contains(welcome)).to.equal(true);
  expect(wrapper.contains(welcome)).toEqual(true);
});
All Jest matchers are extensively documented here.
Nevertheless you can use a third-party assertion library like Chai if you want to, as described below.
Using Third Party Assertion Libraries:
import sinon from 'sinon';
import { expect } from 'chai';
and then use them in your tests like you normally do.
Initializing Test Environment:
const localStorageMock = {
  getItem: jest.fn(),
  setItem: jest.fn(),
  clear: jest.fn()
};
global.localStorage = localStorageMock;
Focusing and Excluding Tests
You can replace it() with xit() to temporarily exclude a test from being executed. Similarly, fit() lets you focus on a specific test without running any other tests.
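As a toy sketch of those semantics (this is an illustration of the behavior, not how Jest is implemented): a miniature runner where xit registers but skips a test, and fit causes only focused tests to run:

```javascript
// Toy test runner illustrating Jest's it / xit / fit semantics.
const tests = [];
const it  = (name, fn) => tests.push({ name, fn, mode: 'normal' });
const xit = (name, fn) => tests.push({ name, fn, mode: 'skipped' });
const fit = (name, fn) => tests.push({ name, fn, mode: 'focused' });

function run() {
  // if any test is focused, everything else is skipped
  const focused = tests.some(t => t.mode === 'focused');
  const results = [];
  for (const t of tests) {
    if (t.mode === 'skipped' || (focused && t.mode !== 'focused')) {
      results.push(`${t.name}: skipped`);
    } else {
      t.fn();
      results.push(`${t.name}: ran`);
    }
  }
  return results;
}

it('a', () => {});
xit('b', () => {});   // temporarily excluded
fit('c', () => {});   // focused: only this one actually runs
console.log(run());   // [ 'a: skipped', 'b: skipped', 'c: ran' ]
```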
Coverage Reporting
Disabling jsdom.
Experimental Snapshot Testing.
Deployment
Building.
GitHub Pages
… for this effect, but the URL will be longer and more verbose (for example, …). Read more about different history implementations in React Router.
Something Missing?
If you have ideas for more “How To” recipes that should be on this page, let us know or contribute some!
https://azure.microsoft.com/pl-pl/resources/samples/powerbi-react-client/
clang 3.8 compiles it to a library call, unfortunately, so this method is not efficient with clang. IDK about other compilers.
You don't need inline asm for this, and it's worth a bit of C++ syntax pain to avoid inline asm.
Here's an example of code with a CAS retry-loop that compiles to asm that looks right, and I think is free of UB or other unsafe C++. It's written in C style (non-member functions, etc.) but it would be the same if you wrote member functions.
See this code with gcc 5.3 on the Godbolt compiler explorer. Make sure you compile with -mcx16 to enable cmpxchg16b in code-gen. Your code won't run on early AMD K8 CPUs, but all later x86-64 hardware supports it, AFAIK. Without -mcx16 (or with clang instead of gcc), you get a library function call (which probably uses a global lock). On Godbolt's install of g++, <atomic> doesn't work with -m32, so I can't easily check if -m32 uses cmpxchg8b, or if -mx32 (32-bit pointers in long mode) uses cmpxchg.
This is fully portable C++11.
// compile with -mcx16 when targeting x86-64
#include <atomic>
#include <stdint.h>
using namespace std;

struct node {
    // 16 bytes, 16-byte aligned, so the compiler can CAS the whole
    // thing with cmpxchg16b
    struct alignas(16) counted_ptr {
        node     *ptr;
        uintptr_t count;   // pointer-sized to avoid padding
    };

    // view of the same 16 bytes as two separately-loadable atomics
    struct counted_ptr_separate {
        atomic<node *>    ptr;
        atomic<uintptr_t> count_separate;
    };

    union {   // write through one member, read through the other
        atomic<counted_ptr>  next_and_count;
        counted_ptr_separate next;
    };
};

void update_next(node &target, node *desired)
{
    // seed `expected` with cheap separate loads; tearing is harmless:
    // tearing will just lead to a failed cmpxchg
    node::counted_ptr expected = {
        target.next.ptr.load(memory_order_relaxed),
        target.next.count_separate.load(memory_order_relaxed) };
    // node::counted_ptr expected = cur.counted_next.load(memory_order_acquire);
    //   // gcc uses a slow lock cmpxchg for this

    bool success;
    do {
        node::counted_ptr newval = { desired, expected.count + 1 };
        // x86: compiles to cmpxchg16b with gcc, but clang uses a library call
        success = target.next_and_count.compare_exchange_weak(
                      expected, newval, memory_order_acq_rel);
        // updates expected on failure
    } while( !success );
}
It does depend on writing a union through one member and reading it through another, to do efficient reads of just the pointer without using a lock cmpxchg16b. Even if you had a compiler that didn't support that normally, it would probably still work, because we're always using std::atomic loads, which have to assume that the value was modified asynchronously. So we should be immune from aliasing-like problems where values in registers are still considered live after writing the value through another pointer (or union member).
To learn more about std::memory_order acquire and release, see Jeff Preshing's excellent articles.
https://codedump.io/share/9wddVxZ8ScwJ/1/implement-aba-counter-with-c11-cas
Introduction
This document describes the technical aspects of the testing integrated into DSpace. In it we describe the tools used, how to use them, and the solutions applied to some issues found during development. It's intended to serve as a reference for the community so more test cases can be created.
Issues Found
During implementation we found several issues, which are described in this section, along with the solutions implemented to work around them.
Multiple Maven projects
DSpace is implemented as a set of Maven 2 projects with a parent-child relationship and a module that manages the final assembly of the project. Due to the specifics of DSpace (database dependency, file system dependency) we need to set up the test environment before running any tests. While the fragmentation in projects of DSpace is good design and desirable, that means we have to replicate the configuration settings for each project, making it much less maintainable.
There is another issue related to the way Maven works. Maven defines a type of package for each project (jar, war or pom). Pom projects can contain subprojects, but their lifecycle skips all the test steps. This means that even if a Pom project would be ideal to place the tests in, they would not be run, and we can't force Maven to run them by any means.
The perfect solution would be to run the tests in the Assembly project, but due to the mentioned limitations (it is a Pom project) this can't work.
To solve this issue we have created a new project, dspace-test, which will contain only unit tests. It has dependencies on the projects being tested and all the settings required to initialize DSpace. All tests must be added to this project, as it is the only one with the proper dependencies.
File system dependency
We found that some methods and/or classes have a dependency on the file system of DSpace. This means we have to replicate the file system, including some configuration files, before being able to run the tests.
The main issue is that the existing files we need are in the Assembly project, but we can't run the tests there. Also, we can't use an assembly to copy the files as the test phase runs before any packaging or install phase in Maven.
The solution has been to duplicate the file system in the dspace-test project. This replica is copied to the temp folder (a folder designated by the tester via a configuration file) before launching the tests. Once the tests finish, the files are removed. This is not an ideal solution as it requires the tester to duplicate files, but it is a workaround while we find a definitive solution.
The files are stored under the folder dspaceFolder in the resources folder. All the contents of this folder will be copied to the temporary area. The file test-config.properties contains settings to specify the temporary folder and other test values.
File configuration dependency
DSpace heavily depends on the file dspace.cfg for its configuration. For testing purposes we have crafted a test version of this file, available in the resources folder of dspace-test. This file is loaded during setup instead of the default dspace.cfg so the environment is set up for unit testing.
As the assembly process runs later than the test goal in Maven, we can't use external profiles to replace values. This means the values in this file are hard-coded and might need to be changed for a specific system. The way it is set up by default, it should work on all *nix systems, as it sets the /tmp/ folder as the temporary test folder. If this has to be changed, the file test-config.properties will also need to be updated.
Database dependency
We found that many classes have a direct dependency on the database. Specifically, there is a huge coupling with both Oracle and PostgreSQL, with many pieces of code being executed or not depending on the database used. Mocking the connections is not easy due to the heavy use of default-access constructors and relations between classes that do not follow the Law of Demeter. This means we need a database to run the tests, something not really desirable but required.
While the perfect solution would be to migrate DSpace to an ORM like Hibernate, there is not time to do so in this project, and this would be too much of a change to add to the source. The decision has been made to use an in-memory database where the DSpace database will be recreated for the purpose of unit testing. Once the tests are finished, the database will be wiped and closed.
Taking into account the coupling with Oracle/PostgreSQL and the need to recreate the database, the solution has been to create a mock of the DatabaseManager class. This mock implements the exact same functionality as the base class (being a complete copy) and adds some initialization code to reconstruct the tables and default contents.
The database used is H2. The information is stored in a copy of the database_schema.sql file for PostgreSQL with the following modifications:
- removal of the function that provides the next value of a sequence
- removal of clause "WITH TIME ZONE" from TIMESTAMP values
- removal of DEFAULT NEXTVAL('<seq>') constructs due to incompatibility. DatabaseManager has been changed to add the proper ID to the column. It has been proposed to change the affected values to IDENTITY values, which include autoincrement.
- removal of UNIQUE constructs due to incompatibility. Tests will need to verify uniqueness
- replaced BIGSERIAL by BIGINT
- replacing getnextid for NEXTVAL on an INSERT for epersongroup
- due to the parsing process some spaces have been added at the start of some lines to avoid syntax errors
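As an illustrative sketch (the table and its columns are hypothetical examples, not taken from database_schema.sql), the kind of rewrite these modifications produce looks like:

```sql
-- PostgreSQL original (hypothetical example table):
--   CREATE TABLE example (
--     example_id INTEGER DEFAULT NEXTVAL('example_seq') PRIMARY KEY,
--     last_modified TIMESTAMP WITH TIME ZONE,
--     handle VARCHAR(256) UNIQUE,
--     size_bytes BIGSERIAL
--   );

-- H2-compatible version after the modifications listed above:
CREATE TABLE example (
  example_id INTEGER PRIMARY KEY,  -- DEFAULT NEXTVAL(...) removed; the mock
                                   -- DatabaseManager assigns the ID instead
  last_modified TIMESTAMP,         -- "WITH TIME ZONE" removed
  handle VARCHAR(256),             -- UNIQUE removed; tests must verify uniqueness
  size_bytes BIGINT                -- BIGSERIAL replaced by BIGINT
);
```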
Due to H2 requiring the column names in capital letters, the database is defined as an Oracle database for DSpace (db.name) and the Oracle compatibility mode for H2 has been enabled.
The code in the mock DatabaseManager has been changed so the queries are compatible with H2. The changes are minimal as H2 is mostly compatible with PostgreSQL and Oracle.
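For illustration (the actual connection string used by the mock is not shown in this document, and the database name here is an assumption), an in-memory H2 database in Oracle compatibility mode is typically opened with a JDBC URL of the form:

```
jdbc:h2:mem:dspacetest;MODE=Oracle;DB_CLOSE_DELAY=-1
```

MODE=Oracle enables H2's Oracle compatibility mode, and DB_CLOSE_DELAY=-1 keeps the in-memory database alive between connections until the JVM exits.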
As a note, the usage of a DDL language via DDLUtils has been tested and discarded due to multiple issues. The code base of DDLUtils is ancient and not compatible with H2. This required us to use HSQLDB, which in turn required us to change some table definitions due to syntax incompatibilities. Also, we discovered DDLUtils wasn't generating a correct DDL file, as it didn't provide relevant meta-information like which column of a table was a primary key or the existing sequences. Due to the reliance of DatabaseManager on this meta-information, some methods were broken, giving wrong values. It seems that more recent code is available from the project SVN, but this code can't be recovered from Maven repositories, which would make the usage of unit testing in DSpace much more cumbersome, as the developer would be required to download the code, compile it and store it in the local repository before being able to run a test. A lot of effort has been put into using the DDL, but in the end we feel using the database_schema.sql file is better.
Different tests
In this project we want to enable unit tests, integration tests and functional tests for DSpace. Maven 2 has a non-modifiable life-cycle that doesn't allow us to run tests once the project has been packaged. This same life-cycle doesn't allow us to launch an embedded server like Jetty to run the functional tests.
The solution to this is to create two infrastructures, one for the unit and integration tests and one for the functional tests. Unit and integration tests will be run by Maven using the Surefire plugin. Functional tests will be run, once the program has been built, from an Ant task. This will allow us to launch an embedded server to run the tests.
This option is not optimal, but due to the limitations imposed by the DSpace system and Maven we have not found a better solution. Any proposals are appreciated.
The unit and integration tests solution has been implemented in the dspace-test project.
The functional tests implementation is in progress.
Structural Issues
During development the following issues have been detected in the code, which make unit testing harder and impact the maintainability of the code:
* Hidden dependencies. Many objects require other objects (like DatabaseManager) but the dependency is never explicitly declared or set. These dependencies should be fulfilled as parameters in the constructors or factory methods.
* Hidden constructors. It would be advisable to have public constructors in the objects and provide a Factory class that manages the instantiation of all required entities. This would facilitate testing and provide a separation of concerns, as the inclusion of the factory methods inside objects usually adds hidden dependencies (see previous point).
Refactoring would be required to fix these issues.
Development Issues
During development some issues have arisen. These issues need to be solved for the unit tests to be viable.
Warnings to be taken in account:
- Unit tests may be faulty due to misunderstanding of the source code, a revision is required to ensure they behave as expected
- Unit tests may be incomplete, missing some paths of the existing code
- Due to the tight dependencies between some classes some methods can't be tested completely. In these cases more benefit can be obtained from integration tests than from unit tests
- For the aforementioned reasons, a revision (peer-review) is required to ensure the unit tests behave as expected and they are reliable
- The unit tests may lack tests for some edge cases. It's in the nature of Unit Tests to evolve as new bugs are found, so they should be added as they are detected (via peer-review or via bug reports)
Issues found:
- A mock of BrowseCreateDAOOracle has been done due to an incompatibility between H2 and the "upper" function call. This will affect tests related to case sensitivity in indexes.
- Many objects (like SupervisedItem) lack a proper definition of the "equals" method, which makes comparison between objects and unit testing harder
- Update method of many objects doesn't provide any feedback, we can only test if it raises an exception or not, but we can't be 100% sure if it worked
- Many objects have methods to change the policies related to the object or children objects (like Item), it would be good to have some methods to retrieve these policies also in the same object (code coherence)
- There are some inconsistencies in the calls to certain methods. For example getName returns an empty String in a Collection where the name is not set, but a null in an Item without name
- DCDate: the tests raise many errors. I can't be sure if it's due to misunderstanding of the purpose of the methods or due to faulty implementation (probably the previous). In some cases extra encapsulation of the internals of the class would be advisable, to hide the complexities of the Calendars (months starting by 0, etc)
- The Authorization system gets a bit confusing. We have AuthorizationManager, AuthorizationUtils, methods that raise exceptions and methods that return booleans. Given the number of checks we have to do for permissions, and that some classes call methods that require extra permissions not declared or visible at first, this makes creation of tests (and usage of the API) a bit complex. I know we can ignore all permissions via context (turning on and off authorizations) but usually we don't want that
- Community: setting the logo checks for authorization, but setting metadata doesn't. Is this on purpose?
- Collection: methods create and delete don't check for authorization
- Item: there is no authorization check for changing policies, no need to be an administrator
- ItemIterator: it uses ArrayLists in the methods instead of List
- ItemIterator: we can't verify if the Iterator has been closed
- Metadata classes: usually most classes have a static method to create a new instance of the class. Instead, for the metadata classes (Schema, Field and Value) the create method is part of the object, thus requiring you to first create an instance via new and then call create. This should be changed to follow the convention established in other objects (or the other objects should be amended to behave like the metadata classes)
- Site: this class extends DSpaceObject. With it being a Singleton, this creates potential problems, for example when we use DSpaceObject methods to store details in the object. Is this relation necessary?
Fixes done:
-
Pending review by a DSpace developer:
- DCDate: Here many tests fail because I'm not sure of the purpose of the class. I would expect it to hide the implementation of Calendar (with all those things like months starting by 0 and other odd stuff) so it's easier to use, but it seems that's not the case...
-
Proposals:
To solve the previous issues, some proposals are made:
- Database dependency causes too many issues, making unit testing much harder and increasing the complexity of the code. Refactoring to a database-neutral system should be a priority
- A release could be done (1.8?) centered on cleaning code, improving stability and coherency and refactoring unit tests, as well as replacing the database system. No new functionalities. This would make future work much easier.
Dependencies
There is a set of tools used by all the tests. These tools will be described in this section.
Maven
The build tool for DSpace, Maven, will also be used to run the tests. For this we will use the Surefire plugin, which allows us to automatically launch the tests included in the "test" folder of the project. We also include the Surefire-reports plugin in case you are not using a Continuous Integration environment that can read the output and generate the reports.
The plugin has been configured to ignore test files whose name starts with "Abstract", that way we can create a hierarchy of classes and group common elements to various tests (like certain mocks or configuration settings) in a parent class.
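A minimal sketch of such a configuration (the actual DSpace pom may differ in details) in the Surefire section of the pom.xml:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- parent classes holding shared mocks/configuration are not run -->
      <exclude>**/Abstract*.java</exclude>
    </excludes>
  </configuration>
</plugin>
```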
Tests in Maven are usually added into src/test, like in src/test/java/<package> with resources at src/test/resources.
To run the tests execute:
mvn test
The tests will also be run during a normal Maven build cycle. To skip the tests, run Maven like:
mvn package -Dmaven.test.skip=true
By default we will disable running the tests, as they might slow the compilation cycle for developers. They can be activated using the command
mvn package -Dmaven.test.skip=false
or by changing the property "activeByDefault" at the corresponding profile (skiptests) in the main pom.xml file, at the root of the project.
JUnit
JUnit is a testing framework for Java applications. It was one of the first testing frameworks for Java and it's in widespread use in the community. The framework simplifies the development of unit tests, and current IDEs make it even easier to build those tests from existing classes and run them.
JUnit 4.8.1 is added as a dependency in the parent project. The dependency needs to be propagated to the subprojects that contain tests to be run.
As of JUnit 4.4, Hamcrest is included. Hamcrest is a library of matcher objects that facilitate the validation of conditions in the tests.
JMockit
JMockit is a popular and powerful mocking framework. Unlike other mocking frameworks, it can mock final classes and methods, static methods, constructors and other code fragments that can't be mocked using other frameworks.
JMockit 0.998 has been added to the project to provide a mocking framework to the tests.
ContiPerf
ContiPerf is a lightweight testing utility that enables the user to easily leverage JUnit 4 test cases as performance tests e.g. for continuous performance testing.
The project makes use of ContiPerf 1.06.
H2
H2 is an in-memory database that has been used to emulate PostgreSQL for the tests.
The project makes use of H2 version 1.2.137.
Unit Tests Implementation
These are tests which test just how one object works, typically testing each method of an object for the expected output in several situations. They are executed exclusively at the API level.
We can consider two types of classes when developing the unit tests: classes which have a dependency on the database and classes that don't. The classes that don't can be tested easily, using standard procedures and tests. Our main problem is the classes tightly coupled with the database and its helper objects, like BitstreamFormat or the classes that inherit from DSpaceObject. To run the unit tests we need a database, but we don't want to set up a standard PostgreSQL instance. Our decision is to use an in-memory database that will be used to emulate PostgreSQL.
To achieve this we mock DatabaseManager and we replace the connector to point to our in-memory database. In this class we also initialise the replica with the proper data.
Structure
Due to the DSpace Maven structure discussed in previous sections, all the tests belonging to any module (dspace-api, dspace-xmlui-api, etc.) must be stored in the module dspace-test. This module enables us to apply common configuration, required by all tests, in a single area, thus avoiding duplication of code. Related to this point is the requirement for DSpace to run using a database and a certain file system structure. We have created a base class that initializes this structure via an in-memory database (using H2) and a temporary copy of the required file system.
The described base class is called "AbstractUnitTest". This class contains a series of mocks and references which are necessary to run the tests in DSpace, like mocks of the DatabaseManager object. All unit tests should inherit from this class, located under the package "org.dspace" in the test folder of dspace-test. There is an exception: tests of classes that originally inherit from DSpaceObject should inherit from the AbstractDSpaceObjectTest class.
Several mocks are used in the tests. The more relevant ones are:
- MockDatabaseManager: a mock of the database manager that launches H2 instead of PostgreSQL/Oracle and creates the basic structure of tables for DSpace in memory
- MockBrowseCreateDAOOracle: due to the strong link between DSpace and the databases, there are some classes that have specific implementations depending on whether we are using Oracle or PostgreSQL, like this one. In this case we've had to create a mock class that overrides the functionality of BrowseCreateDAOOracle so we are able to run the Browse-related tests.
You may need to create new mocks to be able to test certain areas of the code. Creation of mocks goes beyond the scope of this document, but you can see the mentioned classes as an example. Basically it consists of adding annotations to a copy of the existing class to indicate a method is a mock of the original implementation, and modifying the code as required for our tests.
Limitations.
How to build new tests
To build a new unit test, create a class named <OriginalClass>Test that extends AbstractUnitTest:

/**
 * Unit Tests for class <OriginalClass>
 * @author your name
 */
public class <OriginalClass>Test extends AbstractUnitTest
{
    /** log4j category */
    private static final Logger log = Logger.getLogger(<OriginalClass>Test.class);

    /** <OriginalClass> instance for the tests */
    private <OriginalClass> c;

    /**
     * This method will be run before every test as per @Before. It will
     * initialize resources required for the tests.
     *
     * Other methods can be annotated with @Before here or in subclasses
     * but no execution order is guaranteed
     */
    @Before
    @Override
    public void init()
    {
        super.init();
        try
        {
            //we have to create a new community in the database
            context.turnOffAuthorisationSystem();
            this.c = <OriginalClass>.create(null, context);
            //we need to commit the changes so we don't block the table for testing
            context.restoreAuthSystemState();
            context.commit();
        }
        catch (AuthorizeException ex)
        {
            log.error("Authorization Error in init", ex);
            fail("Authorization Error in init");
        }
        catch (SQLException ex)
        {
            log.error("SQL Error in init", ex);
            fail("SQL Error in init");
        }
    }

    /**
     * This method will be run after every test as per @After. It will
     * clean resources initialized by the @Before methods.
     *
     * Other methods can be annotated with @After here or in subclasses
     * but no execution order is guaranteed
     */
    @After
    @Override
    public void destroy()
    {
        c = null;
        super.destroy();
    }

    /**
     * Test of XXXX method, of class <OriginalClass>
     */
    @Test
    public void testXXXX() throws Exception
    {
        int id = c.getID();
        <OriginalClass> found = <OriginalClass>.find(context, id);
        assertThat("testXXXX 0", found, notNullValue());
        assertThat("testXXXX 1", found.getID(), equalTo(id));
        assertThat("testXXXX 2", found.getName(), equalTo(""));
    }

    [... more tests ...]
}
Be aware that tests of classes that extend DSpaceObject should extend AbstractDSpaceObjectTest instead, due to some extra methods and requirements implemented there.
Integration Tests
These tests work at the API level and test the interaction of components within the system. Some examples are placing an item into a collection or creating a new metadata schema and adding some fields. Primarily these tests operate at the API level ignoring the interface components above it.
The main difference between these and the unit tests is in the test implemented, not in the infrastructure required, as these tests will use several classes at once to emulate a user action.
The integration tests also make use of ContiPerf to evaluate the performance of the system. We believe it doesn't make sense to add this layer to the unit tests, as they are tested in isolation and we care about performance not on individual calls but on certain tasks that can only be emulated by integration testing.
Structure
Integration tests use the same structure as Unit tests. A class has been created, called AbstractIntegrationTest, that inherits from AbstractUnitTest. This provides the integration tests with the same temporal file system and in-memory database as the unit tests. The class AbstractIntegrationTest is created just in case we may need some extra scaffolding for these tests. All integration tests should inherit from it to both distinguish themselves from unit tests and in case we require specific changes for them.
Classes that contain the code for Integration Tests are named <class>IntegrationTest.java.
The only difference right now between unit tests and integration tests is that the latter include configuration settings for ContiPerf. This is a performance testing suite that allows us to reuse the same methods we use for integration testing as performance checks. Due to limitations mentioned in the following section we can't make use of all the capabilities of ContiPerf (namely, multiple threads to run the tests) but they can still be useful.
Limitations
Tests structure
These limitations are shared with the unit tests.
Events Concurrency Issues
We have detected an issue with the integration tests, related to the Context class. In this class, the list of events was implemented as an ArrayList<Event>. The issue here is that ArrayList is not a safe class for concurrency. Although this would not be a problem while running the application in a JEE container, as there will be a unique thread per request (at least in normal conditions), we can't be sure of the kind of calls users may make to the API while extending DSpace.
To avoid the issue we have to wrap the List in a synchronized wrapper via Collections.synchronizedList. This, along with a synchronized block, will ensure the code behaves as expected.
We have detected the following classes affected by this behavior:
- BasicDispatcher.java
In fact any class that calls Context.getEvents() may be affected by this. A comment has been added in the javadoc of this class (alongside a TODO tag) to warn about the issue.
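A minimal sketch of the fix described above (the class and method names here are illustrative, not the actual DSpace Context API): wrap the event list with Collections.synchronizedList, and add an explicit synchronized block for any bulk operation:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class EventsList {
    // Wrapped list: individual add/get/size calls are thread-safe
    private final List<String> events =
            Collections.synchronizedList(new ArrayList<String>());

    public void addEvent(String e) {
        events.add(e);
    }

    // Iteration and bulk copies still need an explicit synchronized block,
    // since Collections.synchronizedList only guards single operations
    public List<String> snapshot() {
        synchronized (events) {
            return new ArrayList<String>(events);
        }
    }

    public static void main(String[] args) {
        EventsList ctx = new EventsList();
        ctx.addEvent("create");
        ctx.addEvent("modify");
        System.out.println(ctx.snapshot().size()); // prints 2
    }
}
```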
Context Concurrency Issues
There is another related issue in the Context class. Context establishes locks on the tables when doing some modifications, locks that are not lifted until the context is committed or completed. The consequence is that some methods can't be run in parallel, or some executions will fail due to table locks. This can be solved, in some cases, by running context.commit() after a method that modifies the database, but this doesn't work in all cases. For example, in the CommunityCollection integration test, the creation of a community can mean the modification of 2 rows (parent and new community). This causes this kind of lock, but as it occurs during the execution of the method create() it can't be solved by context.commit().
Due to these concurrency issues, ContiPerf can only be run with one thread. This slows the process considerably, but until the concurrency issue is solved this can't be avoided.
How to build new tests
To build a new integration test, create a class named <RelatedClasses>IntegrationTest that extends AbstractIntegrationTest:

/**
 * This is an integration test to validate the metadata classes
 * @author pvillega
 */
public class MetadataIntegrationTest extends AbstractIntegrationTest
{
    /** log4j category */
    private static final Logger log = Logger.getLogger(MetadataIntegrationTest.class);

    /**
     * This method will be run before every test as per @Before. It will
     * initialize resources required for the tests.
     *
     * Other methods can be annotated with @Before here or in subclasses
     * but no execution order is guaranteed
     */
    @Before
    @Override
    public void init()
    {
        super.init();
    }

    /**
     * This method will be run after every test as per @After. It will
     * clean resources initialized by the @Before methods.
     *
     * Other methods can be annotated with @After here or in subclasses
     * but no execution order is guaranteed
     */
    @After
    @Override
    public void destroy()
    {
        super.destroy();
    }

    /**
     * Tests the creation of a new metadata schema with some values
     */
    @Test
    @PerfTest(invocations = 50, threads = 1)
    @Required(percentile95 = 500, average = 200)
    public void testCreateSchema() throws SQLException, AuthorizeException,
            NonUniqueMetadataException, IOException
    {
        String schemaName = "integration";

        //we create the structure
        context.turnOffAuthorisationSystem();
        Item it = Item.create(context);
        MetadataSchema schema = new MetadataSchema("htpp://test/schema/", schemaName);
        schema.create(context);
        [...]

        //commit to free locks on tables
        context.commit();

        //verify it works as expected
        assertThat("testCreateSchema 0", schema.getName(), equalTo(schemaName));
        assertThat("testCreateSchema 1", field1.getSchemaID(), equalTo(schema.getSchemaID()));
        assertThat("testCreateSchema 2", field2.getSchemaID(), equalTo(schema.getSchemaID()));
        [...]

        //clean database
        value1.delete(context);
        [...]

        context.restoreAuthSystemState();
        context.commit();
    }
}
Code Analysis Tools
Due to comments in the GSoC meetings, some static analysis tools have been added to the project. The tools are just a complement, a platform like Sonar should be used as it integrates better with the structure of DSpace and we could have the reports linked to Jira.
We have added the following reports:
- FindBugs : static code bug analyser
- PMD and CPD: static analyser and "copy-and-paste" detector
- TagList: finds comments with a certain annotation (like XXX or TODO)
- Testability Explorer: detects issues in classes that hinder the creation of unit tests
These reports can't replace a Quality Management tool but can give you an idea of the status of the project and of issues to be solved.
The reports can be generated by launching:
mvn site
from the main folder. Be aware this will take a long time, probably more than 20 minutes.
Functional Tests
These are tests which come from user-based use cases, such as a user wanting to search DSpace to find material and download a PDF, or something more complex like a user wanting to submit their thesis to DSpace and follow it through the approval process. These are stories about how users would perform tasks, and they cover a wide array of components within DSpace.
Be aware that in this section we don't focus on testing the layout of the UI, as this has to be done manually to ensure we see exactly the same in different browsers. In this section we only consider tests that replicate a process via the UI, to ensure it is not broken due to some links missing, unexpected errors or similar issues.
Choices taken
To decide on a specific implementation of these tests some choices have been taken. They may not be the best but at this time they seemed the appropriate ones. Contributions and criticism are welcomed.
On one hand, functional tests run against a live instance of the application. This means we need a full working environment with the database and file system. It also means we are running them against the modified UI of a specific installation. As a consequence, we want the tests to be very generic or easily customizable, but we have to be aware that due to the way Maven (and particularly its packaging system) works, it isn't possible to run the tests as a step of the unit and integration testing described in the sections above.
On the other hand, if we focus on the tools available, the best choice we have is Selenium, a suite of tools to automate testing of web applications. Selenium can be run in two modes: as a Firefox plug-in that allows us to record test scripts and run them later, or as a distributed system that allows us to run the tests against several browsers on different platforms.
We have to choose a way to run the tests that is easy to set up and adapt to a particular project by the developers, while limited by the options that Maven and Selenium provide. The decision has been to use the Selenium IDE to run the tests. This means the tests can only be run in Firefox and they have to be launched manually, but on the other hand they are easily customizable and runnable.
The Selenium RC environment was discarded due to the complexity it would add to the testing process. If you are a DSpace developer and have the resources and expertise to set it up, we clearly recommend doing so. You can reuse the scripts generated by the Selenium IDE and add extra tests to be run with JUnit alongside the existing unit and integration tests, which allows you to build more detailed and accurate tests. Even though we recommend it as a more powerful (and desirable) option, we are aware of the extra complexity it would add to just running the functional tests, as not everybody has the resources to deploy the required applications. That's why we have decided to provide just the Selenium IDE scripts, as they require much less effort to set up and will do the job.
Structure
Selenium tests consist of two components:
- The Selenium IDE, downloadable here, which is a Firefox add-on that can record our actions and save them as a test
- The tests, which are HTML files that store a list of actions to be "replayed" by the browser. If any of the actions can't be executed (due to a field or link missing or some other reason) the test will fail.
- The recommendation is to create one test per user-action (like creating an item). Several tests can be loaded at once in the IDE and run sequentially.
To install the Selenium IDE, first install Firefox and then try to download it from Firefox itself. The browser will recognize the file as an addon and it will install it.
Limitations
The resulting tests have several limitations:
- Tests are recorded against a specific run on a particular machine. This means some steps may include specific values for some variables (id numbers, paths to files, etc.) that tie the test to a particular installation and state of the system. As a consequence, we have to ensure we have the same state in the system every time we run the tests. That may mean running the tests in a certain order, and probably starting from a wiped DSpace installation.
- For the same reason as above, some scripts may require manual changes before they can be run (to ensure the values expected by Selenium exist). This is especially important in tests which include a reference to a file (like when we create an item), as the path to the file is hard-coded in the script.
- Tests have to be launched manually and can only be run in Firefox using the Selenium IDE. We can launch all our tests at once, as a test suite, but we have to do so manually.
- Due to the way Selenium works (checking the HTML code received and acting upon it), high network latency or slowness in the server may cause a test to fail when it shouldn't. To avoid this, it is recommended to run the tests at minimum speed and (if required) to increase the time Selenium waits for an answer (it can be set in the Options panel).
- Tests are linked to a language version of the system. As Selenium might reference, in some situations, a control by its text, a test will fail if we change the language of the application. Usually this is not a big problem as the functionality of the application will be the same independently of the language the user has selected, but if we want to test the application including its I18N components, we will need to record the actions once per language enabled in the system.
How to build new tests
Building a test in Selenium IDE is easy. Open the IDE (in Firefox, Tools > Selenium IDE) and navigate to your DSpace instance in a tab of Firefox. Press the record button (the red circle at the top-right corner) in the Selenium IDE and navigate through your application. Selenium will record every click you make and every text you write on the screen. If you make a mistake, you can right-click over an entry in the list and remove it.
Actions are stored by default as a reference to the control activated. This reference is generic, meaning that Selenium might look for an anchor link (<a>) that points to a certain url (like '/jspui/handle/1234/5678'), independently of its id, name or position in the application. This means that the test will not be affected (usually) by changes in the layout or some refactoring. That said, in some specific cases you may need to edit the test cases to change some values.
Once you are finished, press the record button again. Then, in the Selenium IDE, go to File > Save Test Case and save your test case.
In Selenium, test cases are HTML files that store data in a table. The table contains one row per action, and each row has 3 columns:
- An action to be run (mandatory). This can be an action like open, click, etc.
- A path or control id against which the action is executed (mandatory). This can point to an URL, a control (input, anchor, etc) or similar.
- A text to be added or selected in an input control (optional).
A sample of a Selenium test is:
open          /xmlui
clickAndWait  link=Subjects
clickAndWait  //div[@id='aspect_artifactbrowser_Navigation_list_account']/ul/li[1]/a
type          aspect_eperson_PasswordLogin_field_login_email     admin@dspace.org
type          aspect_eperson_PasswordLogin_field_login_password  test
clickAndWait  aspect_eperson_PasswordLogin_field_submit
The code generated by the Selenium IDE for this would be like:
<html>
<head>
    <title>Create_Comm_Coll_Item_XMLUI</title>
</head>
<body>
<table cellpadding="1" cellspacing="1" border="1">
<thead>
<tr><td rowspan="1" colspan="3">Create_Comm_Coll_Item_XMLUI</td></tr>
</thead><tbody>
<tr>
    <td>open</td>
    <td>/xmlui</td>
    <td></td>
</tr>
<tr>
    <td>clickAndWait</td>
    <td>link=Subjects</td>
    <td></td>
</tr>
[... more actions ...]
</tbody></table>
</body>
</html>
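Since a test case is just this three-column HTML table, test cases can also be produced programmatically. A hypothetical Python sketch (not part of DSpace or Selenium) that emits Selenese table rows from a list of (action, target, value) tuples:

```python
# Build Selenese HTML table rows from (action, target, value) tuples.
# Illustrative helper only; the sample steps mirror the example above.

def selenese_rows(steps):
    rows = []
    for action, target, value in steps:
        cells = "".join("<td>%s</td>" % c for c in (action, target, value))
        rows.append("<tr>%s</tr>" % cells)
    return "\n".join(rows)

steps = [
    ("open", "/xmlui", ""),
    ("clickAndWait", "link=Subjects", ""),
    ("type", "aspect_eperson_PasswordLogin_field_login_email", "admin@dspace.org"),
]

html = selenese_rows(steps)
print(html)
```

The generated rows can then be pasted into the `<tbody>` of a test-case file like the one shown above.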
You can use the Selenium IDE to generate the tests or write them manually by using the Selenese commands.
How to run the tests
To run the tests simply open the Selenium IDE (in Firefox, Tools > Selenium IDE) and navigate to your DSpace instance in a tab of Firefox (i.e.:). Then, in the Selenium IDE, click on File > Open and select a test case. You can open as many files as you want, they will be run in the order you opened them.
Once you have selected the test cases to run, ensure the speed of Selenium is set to slow (use the slider) and press either the "Play entire test suite" or "Play current test case" button (the ones with a green arrow), according to your intentions. Selenium will run the actions one by one in the order recorded. If at some point it can't run an action, it will display an error and fail the test. You can see the reason of the error in log at the bottom of the Selenium IDE window.
A very common reason why a test fails is that the server returned the HTML slowly and Selenium was trying to locate an HTML element before receiving all the HTML. To avoid this, make sure that the Selenium speed is set to slow and increase the default timeout value in Options.
Provided tests
We have included some sample Selenium tests in DSpace so developers can experiment with them. The tests are located under "<dspace_root>/dspace-test/src/test/resources/Selenium scripts". They are HTML files following the format described above. They make some assumptions:
- Tests assume that you are running them against a vanilla environment with no previous data. They may work in an environment with data, but this is not guaranteed.
- Tests assume you are running the English UI, other languages may break some tests.
- Tests assume a user with user name admin@dspace.org and password test exists in the system and has administrator privileges
- Tests assume a file exists at /home/pvillega/Desktop/test.txt
You can edit the tests (see the format above) and change the required values (like user and path to a file) to values which are valid in your system.
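Editing these values by hand across several scripts is tedious, so the substitution can be scripted. A hypothetical Python sketch (the replacement pairs are examples, not values DSpace requires):

```python
# Rewrite hard-coded values (user, file path) inside Selenium IDE scripts.
# Illustrative only; the replacement pairs below are example values.

def customize(script_text, replacements):
    # apply every old -> new substitution to the script text
    for old, new in replacements.items():
        script_text = script_text.replace(old, new)
    return script_text

replacements = {
    "admin@dspace.org": "me@example.org",
    "/home/pvillega/Desktop/test.txt": "/tmp/test.txt",
}

sample = "<td>type</td><td>login_email</td><td>admin@dspace.org</td>"
print(customize(sample, replacements))
```

Running such a pass over every script in the "Selenium scripts" folder adapts them to a local installation in one step.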
Advanced Usage
If you set up Selenium RC, you can reuse the test scripts to be run as JUnit tests. Selenium can export them automatically to Java classes using JUnit. For this open the Selenium IDE (in Firefox, Tools > Selenium IDE), click on File > Open and select a test case. Once the test case is loaded, click on File > Export Test Case As > Java (JUnit) - Selenium RC. This will create a Java class that reproduces the test case, as the following:
package com.example.tests; import com.thoughtworks.selenium.*; import java.util.regex.Pattern; public class CreateCommunity extends SeleneseTestCase { public void setUp() throws Exception { setUp("", "*chrome"); } public void testJunit() throws Exception { selenium.open("/jspui/"); selenium.click("xpath=//a[contains(@href, '/jspui/community-list')]"); selenium.waitForPageToLoad("30000"); selenium.click("link=Issue Date"); selenium.waitForPageToLoad("30000"); selenium.click("link=Author"); selenium.waitForPageToLoad("30000"); [... ] } }
As you can see in the code, this class suffers from the same problems as the Selenium IDE scripts (hardcoded values, etc) but can be run using Selenium RC in a distributed environment, alongside your JUnit tests.
Future Work
This project creates a structure for testing that can be expanded. Future tasks would include:
- Integrating with a Quality Management tool like Sonar
- Integrating with a Continuous Integration tools
- Adding Unit and Integration tests for the remaining classes
- Extending Functional tests
- Creating a "Code quality" release, where priority is not new functionalities but stability and quality of code!
https://wiki.duraspace.org/display/GSOC/GSOC+2010+Unit+Tests+-+Technical+documentation
How do I get a basic web2py server up and running on PythonAnywhere?
[update - 29/05] We now have a big button on the web tab that will do all this stuff for you. Just click where it says Web2Py, fill in your admin password, and you're good to go.
Here's the old stuff for historical interest...
I'm a PythonAnywhere developer. We're not massive web2py experts (yet?) but I've managed to get web2py up and running like this:
First download and unpack web2py:
wget <URL of web2py_src.zip>
unzip web2py_src.zip
Go to the PythonAnywhere "Web" panel and edit your wsgi.py. Add these lines:
import os
import sys

path = '/home/my_username/web2py'
if path not in sys.path:
    sys.path.append(path)

from wsgihandler import application
replacing my_username with your username.
You will also need to comment out the last two lines in wsgi.py, where we have the default hello world web.py application...
# comment out these two lines if you want to use another framework #app = web.application(urls, globals()) #application = app.wsgifunc()
Thanks to Juan Martinez for his instructions on this part, which you can view here:
Then open a Bash console, cd into the main web2py folder, and run

python web2py.py --port=80

Enter the admin password, then press ctrl-c (this will generate the parameters_80.py config file).
then go to your Web panel on PythonAnywhere, click reload web app, and things should work!
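Putting the pieces together, the relevant part of wsgi.py ends up looking roughly like this (the path is an example, and the ImportError guard is only for illustration; on PythonAnywhere the import succeeds once web2py is unpacked at that path):

```python
import os
import sys

# make the unpacked web2py folder importable (path is an example)
path = '/home/my_username/web2py'
if path not in sys.path:
    sys.path.append(path)

try:
    # web2py's WSGI entry point, available once web2py is unpacked at `path`
    from wsgihandler import application
except ImportError:
    # not running on PythonAnywhere in this sketch
    application = None
```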
https://codedump.io/share/379qFnYhRAOz/1/how-do-i-deploy-web2py-on-pythonanywhere
Towards.
Luckily we have been busy in the past months and these issues have now been resolved. We have the complete multicast firmware update solution running on a single xDot with only 32K RAM and 256K flash, and we have added delta updates to the client which reduces transmission time by 90% in most cases. The full device source code and the reference network server by The Things Network will be released in about two weeks during TechCon (here is a link to the session!).
In the meantime, we'll use this blog post to show how we implemented two issues that everyone faces when writing a firmware update client:
- how do we verify the authenticity of a firmware file, and
- how do we implement binary patching on an embedded device.
TL;DR? The finished application is here.
To follow along with this blog post you'll need some software installed:
- Mbed CLI and a toolchain.
- A recent version of node.js.
- OpenSSL.
- A terminal monitor like TeraTerm or GNU Screen.
Verifying authenticity
In this blog post we used the FlashIAP API in Mbed OS to download and flash new firmware, but we were not checking the authenticity of the firmware file. Let's change that by signing the firmware. We can do this by creating an asymmetric key pair, so that we can put the public key in the device, and use the private key to sign the firmware file. When we download the firmware, we also download the signature. We then use Mbed TLS to verify if the signature matches the file, and to check if it was signed by the other party.
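A key detail of the device-side check is that the firmware is hashed in small chunks, so the whole file never has to fit in RAM. The same streaming computation can be sketched in Python with hashlib (illustrative only; the real client uses Mbed TLS, and the RSA verification step is omitted here):

```python
import hashlib
import io

def sha256_streamed(fileobj, chunk_size=1024):
    # hash the firmware in 1 KB chunks, mirroring the mbedtls_sha256_update loop
    h = hashlib.sha256()
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break  # EOF
        h.update(chunk)
    return h.hexdigest()

firmware = io.BytesIO(b"\x00\x01" * 5000)  # stand-in for update.bin
digest = sha256_streamed(firmware)
print(digest)
```

The streamed digest is identical to hashing the whole file at once, which is what makes the constant-memory loop on the device possible.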
Start by importing mbed-os-example-fota-http with Mbed CLI. The application already contains a bootloader for the FRDM-K64F, ODIN-W2 and NUCLEO-F429ZI development boards. To add support for a different board, read this blog post.
Generating a key pair
We can use OpenSSL to create a public/private key pair. Open a terminal, and navigate to the folder where you cloned https://github.com/janjongboom/mbed-os-example-fota-http. Then run:
$ node tools/create-keypair.js
Now we have two files in the folder: update.key (the private key) and update.pub (the public key). In addition, we have a new file called update_certs.h in our source directory; this file contains the public key, and will be baked into our firmware.
Downloading and verifying the key pair
Now we can verify whether the update was actually signed by this key.
In source/main-http.cpp add the following includes:
#include "mbedtls/sha256.h"
#include "mbedtls/pk.h"
#include "update_certs.h"
Then add the following code in the check_for_update function (right below delete req;):
// Downloading signature... (put your computer's IP here)
HttpRequest sig_req(network, HTTP_GET, "");
HttpResponse* sig_res = sig_req.send();
if (!sig_res) {
    printf("Signature HttpRequest failed (error code %d)\n", sig_req.get_error());
    // on error, remove the update file
    remove(FULL_UPDATE_FILE_PATH);
    return;
}

// now calculate the SHA256 hash of the file, and then verify against the signature and the public key
file = fopen(FULL_UPDATE_FILE_PATH, "rb");

// buffer to read through the file...
uint8_t* sha_buffer = (uint8_t*)malloc(1024);

// initialize the mbedtls context for SHA256 hashing
mbedtls_sha256_context _sha256_ctx;
mbedtls_sha256_init(&_sha256_ctx);
mbedtls_sha256_starts(&_sha256_ctx, false /* is224 */);

// read through the whole file
while (1) {
    size_t bytes_read = fread(sha_buffer, 1, 1024, file);
    if (bytes_read == 0) break; // EOF?
    mbedtls_sha256_update(&_sha256_ctx, sha_buffer, bytes_read);
}

unsigned char sha_output[32];
mbedtls_sha256_finish(&_sha256_ctx, sha_output);
mbedtls_sha256_free(&_sha256_ctx);
free(sha_buffer);

printf("SHA256 hash is: ");
for (size_t ix = 0; ix < sizeof(sha_output); ix++) {
    printf("%02x", sha_output[ix]);
}
printf("\n");

// Initialize a RSA context
mbedtls_rsa_context rsa;
mbedtls_rsa_init(&rsa, MBEDTLS_RSA_PKCS_V15, 0);

// Read the modulus and exponent from the update_certs file
mbedtls_mpi_read_string(&rsa.N, 16, UPDATE_CERT_MODULUS);
mbedtls_mpi_read_string(&rsa.E, 16, UPDATE_CERT_EXPONENT);
rsa.len = (mbedtls_mpi_bitlen(&rsa.N) + 7) >> 3;

if (sig_res->get_body_length() != rsa.len) {
    printf("Invalid RSA signature format\n");
    // on error, remove the update file
    remove(FULL_UPDATE_FILE_PATH);
    return;
}

// Verify if the signature contains the SHA256 hash of the firmware, signed by private key
int ret = mbedtls_rsa_pkcs1_verify(&rsa, NULL, NULL, MBEDTLS_RSA_PUBLIC, MBEDTLS_MD_SHA256, 20,
                                   sha_output, (const unsigned char*)sig_res->get_body());
mbedtls_rsa_free(&rsa);

if (ret != 0) {
    printf("RSA signature does not match!\n");
    remove(FULL_UPDATE_FILE_PATH); // on error, remove the update file
    return;
} else {
    printf("RSA signature matches!\n");
}
Make sure to replace 192.168.2.1 (twice!) with the IP address of your computer (make sure the board and your computer are on the same network), so your board can download the new files.
Then build and flash the application.
Now it's time to create a new firmware version and sign it with our private key. We'll need to re-build the application, so change some things in the main.cpp file (for example the printf message when the application starts, or the LED that blinks), and rebuild the application.
Now create a new folder update-files, copy mbed-os-example-fota-http_application.bin into this folder, and rename the file to update.bin. Then sign the update with our private key:

# create the update-files folder
$ mkdir -p update-files

# copy the application
$ cp BUILD/*/GCC_ARM/mbed-os-example-fota-http_application.bin update-files/update.bin

# sign the application
$ openssl dgst -sha256 -sign update.key update-files/update.bin > update-files/update.sig
Now start a web server, so the device can download the files, for example via:
# starts a simple web server on port 8000
$ python -m SimpleHTTPServer
When we attach a serial monitor to the board (on baud rate 9,600) we can see the application running. When pressing the button on the development board the application downloads, and checks the firmware update signature before flashing the new binary.
Received 182053 bytes Received 192773 bytes Received 203493 bytes Done downloading: 200 - OK SHA256 hash is: 69e38a81582d751cd7a29d37e5354546b37290edc25bef475a9b7d5c651c0187 RSA signature matches! Rebooting...
Delta updates
That worked, but we're sending the full new binary over the line, even when we’ve only changed a small part of the application. That's wasteful, especially on networks with low throughput like LoRaWAN. We can do better! A way of doing this is through binary diffing. We compare the old and the new file, and generate a patch file, which can be applied to the original file. This way we only have to send the patch file to the device.
A downside of binary diffing is that it often uses a lot of memory. Bsdiff, a widely used binary diff tool, requires us to load both the old file and the patch file into memory. That's a problem on embedded devices where memory is scarce. To mitigate this we released JANPatch (short for Jojo AlterNative Patch), a patching library which can run in a few hundred bytes of memory. It's based on JojoDiff's patch format, and is licensed under the Apache 2.0 license.
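The idea behind any binary patch format can be shown with a toy example: the patch records only the bytes that changed, and the device replays those changes over its copy of the old file. A hypothetical Python sketch (JojoDiff's real format also handles insertions, deletions, and run-length copies; this toy version assumes equal-length files):

```python
def make_patch(old, new):
    # record (offset, replacement byte) pairs; toy format, equal-length files only
    assert len(old) == len(new)
    return [(i, new[i]) for i in range(len(old)) if old[i] != new[i]]

def apply_patch(old, patch):
    # replay the recorded byte changes over the old image
    out = bytearray(old)
    for offset, value in patch:
        out[offset] = value
    return bytes(out)

old = b"firmware v1.0 blinks LED1"
new = b"firmware v1.1 blinks LED2"
patch = make_patch(old, new)
print(len(patch), "bytes differ")  # the patch is far smaller than the file
assert apply_patch(old, patch) == new
```

For a real firmware image where only a small part of the code changed, the patch is a tiny fraction of the full binary, which is exactly what makes delta updates attractive on slow links.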
Adding JANPatch
Add the library to the application with Mbed CLI via:
$ mbed add
Then reference the library in source/main-https.cpp under the defines:
#include "janpatch.h"
Change the FULL_UPDATE_FILE_PATH define to:

#define FULL_UPDATE_FILE_PATH "/" SD_MOUNT_PATH "/" MBED_CONF_APP_UPDATE_FILE
#define DIFF_SOURCE_FILE_PATH "/" SD_MOUNT_PATH "/" MBED_CONF_APP_UPDATE_FILE ".source"
#define DIFF_UPDATE_FILE_PATH "/" SD_MOUNT_PATH "/" MBED_CONF_APP_UPDATE_FILE ".update"

void patch_progress(uint8_t pct) {
    printf("Patch progress: %d%%\n", pct);
}
And change the download command to download the diff instead, by changing /update.bin to update.diff (in the line that starts with HttpRequest* req =).
Now we can apply the diff to our running application when we download it. Insert the following code in the check_for_update function (right below delete req;):
// patch the file...
FILE *source = fopen(DIFF_SOURCE_FILE_PATH, "rb");
FILE *diff = fopen(FULL_UPDATE_FILE_PATH, "rb");
FILE *target = fopen(DIFF_UPDATE_FILE_PATH, "wb");

// fread/fwrite buffers, minimum size is 1 byte
unsigned char* source_buffer = (unsigned char*)malloc(8 * 1024);
unsigned char* diff_buffer = (unsigned char*)malloc(8 * 1024);
unsigned char* target_buffer = (unsigned char*)malloc(8 * 1024);

janpatch_ctx ctx = {
    // provide buffers
    { source_buffer, 8 * 1024 },
    { diff_buffer, 8 * 1024 },
    { target_buffer, 8 * 1024 },
    // define functions which can perform basic IO
    // on POSIX, use:
    &fread,
    &fwrite,
    &fseek,
    &ftell,
    &patch_progress
};

int r = janpatch(ctx, source, diff, target);
printf("janpatch returned %d\n", r);

free(source_buffer);
free(diff_buffer);
free(target_buffer);

fclose(source);
fclose(diff);
fclose(target);

if (r == 0) {
    // move the target file to original location...
    remove(FULL_UPDATE_FILE_PATH);
    rename(DIFF_UPDATE_FILE_PATH, FULL_UPDATE_FILE_PATH);
} else {
    printf("Failed to patch binary...\n");
    return;
}
Now, compile and build the application, and flash it on the board.
Putting the base file in place
Both the device and our computer (where we create the diff), need to have a common base file, which we use to create a diff from (on the computer) and apply the diff to (on the development board).
This base file will be the current application. First, copy mbed-os-example-fota-http_application.bin to the update-files directory and rename it to original.bin:
$ cp BUILD/*/GCC_ARM/mbed-os-example-fota-http_application.bin update-files/original.bin
Second, copy mbed-os-example-fota-http_application.bin to the SD card, and rename it to mbed-os-example-blinky_application.bin.source.
If we forget either of these steps, the patching process will not know what to apply the patch to. In this article we only need to do these steps once, but in a real scenario we'd update the common base file after every update.
Generating patches
Time to generate a patch file. As before, we'll need to compile a new application, so make a small change to main.cpp and recompile.
2. Open a terminal, and navigate to the folder where we extracted JojoDiff:
$ cd src/
$ make
3. Copy mbed-os-example-fota-http_application.bin to the update-files directory and name it update.bin.
$ cp BUILD/*/GCC_ARM/mbed-os-example-fota-http_application.bin update-files/update.bin
4. Sign the new binary. We should sign the full binary, and not the diff, to ensure that the resulting binary (after patching) is valid:
$ openssl dgst -sha256 -sign update.key update-files/update.bin > update-files/update.sig
5. And generate the diff:
$ ~/Downloads/jdiff081/src/jdiff update-files/original.bin update-files/update.bin update-files/update.diff
6. Now restart the webserver.
$ python -m SimpleHTTPServer
7. And run the application. Now we will only need to download and apply the patch file!
Note: Patching might take up to half a minute, depending on the size of the update, and the speed of the SD card. Progress information is printed over the serial port.
Final thoughts
In this article we gave some insights into the challenges with secure firmware updates. We used Mbed TLS to verify that data was sent by a trusted party, and JANPatch to implement delta updates on a microcontroller. We implemented these features to enable efficient firmware updates over very slow networks, but they should be helpful for anyone writing a firmware update service. Note that we added some security to the example, but it's still too insecure to deploy in a real-life scenario. If you're interested in the subject, Brendan Moran is speaking at TechCon about the subject: "Building Firmware Updates: the devil is in the details". If you're looking for a one-stop solution, Arm has a fully secure update client and bootloader available as part of Mbed Cloud.
We'll be releasing all parts of the LoRaWAN firmware update service in about two weeks, so keep an eye on the Mbed developer blog. If you're going to the LoRa All-Members meeting in Suzhou, we'll be doing a tech talk about firmware updates on Thursday 19 October from 11:30 - 12:30. And if you're at TechCon come and find us at the Mbed booth, and make sure to visit our session "Enabling firmware updates over LPWAN" on Thursday 26 October at 11:30 AM in Ballroom G!
-
Jan Jongboom is Developer Evangelist at Arm, and is actively working on standardizing firmware updates over LoRaWAN. While writing this blog post he realized that sitting in a Starbucks in Tokyo with a stack of development boards and Ethernet cables is not awkward at all.
https://os.mbed.com/blog/entry/towards-fota-lora-crypto-delta-updates/
Refract lets you inject your own dependencies, such as your Redux store, your router, your API, and so on.
To expose any dependencies for use inside Refract, you simply pass them as props into any component wrapped with the withEffects higher-order component. Your dependencies will then be available as part of the initialProps object, which is the first argument in your aperture, handler, and errorHandler.
In order to use Refract with Redux, you will need to expose your Redux store as a dependency to your Refract components.
To do so, simply pass your store into your component as props, exactly as you do when passing your store to the Redux Provider component:
import { Provider } from 'react-redux'
import { withEffects } from 'refract-rxjs'
import configureStore from './configureStore'
import App from './components/App'

const store = configureStore()
const AppWithEffects = withEffects(aperture, { handler })(App)

ReactDOM.render(
    <Provider store={store}>
        <AppWithEffects store={store} />
    </Provider>,
    document.getElementById('root')
)
This pattern is likely to change in future when Redux moves to React's new context API.
Because dependencies are simply passed into Refract as props, you can easily add any dependency you need - all you have to do is add more props!
To see an example of what a useful dependency might look like, take a look at the API dependency recipe.
You might have noticed a problem with the approach outlined above: how do you pass these dependencies down to any ComponentWithEffects which are far further down the React component tree?
A naïve approach would be to pass all dependencies down through your app as props, but that would be a pain to maintain.
Instead, we recommend using React's new context API, which is perfect for passing this kind of information through to any arbitrary child within your app.
For an explanation of how to do this, and a tip for how to make the code clean and simple, take a look at the dependency-injection recipe.
https://refract.js.org/usage/injecting-dependencies
With all of our diligence, AbiWord 2.0 is bound to be a huge success. With more than 2.5 million downloads within the past year and a half, AbiWord is a huge project, and a large force in both the free and non-free software communities. In short, we're a viable contender. Our goal with 2.2 is to improve upon our past successes. Our goal is to not fall into complacency. Our goal is to not fall into obsolescence.
It is my opinion that we can best achieve these goals through a rather speedy 2.2 release. This release should happen about 6-9 months after the 2.0 release. This release should be concentrated on bug fixes. This release should be concerned with feature polish. Above all, this release should be a strengthening of our architecture as a whole. Through improving our internals and organization, we can offer a highly polished, feature-rich product.
Some people have been lamenting the lack of an "AbiSuite" as promised by Sourcegear Corporation, 5 years ago. I, however, contend that we do have an AbiSuite. We have brought to market a premier word processor. And a thesaurus. And the Enchant spell checking library. And wv. And libgsf. And ots. And Uspell. And libwpd. And now, we've got the beginnings of a presentation program - Criawips. We've done our job, and then some. We've built a rich family of applications and libraries, and built them well. Let us build upon these successes and platforms, and in doing so, bring the best word processor to the market.
Obviously improve our existing importers and exporters. This is especially true for the OpenOffice and RTF filters.
Add support for more filters. This is especially important for the MS Office 11 XML format, as it stands to be a major player in the market.
Better clipboard support. We want every imp/exp to be clipboard capable.
Honesty parameter. Basically, a rating of how true the imported format represents/translates to ABW
Specify a list of mime types during construction, query them later
Pass arguments to imp/exp using some mechanism (CSS like, maybe?)
Use LibGSF exclusively in all Imp/Exp, so that all files and transports look equal; so that we handle more transport types, and so that we can handle MSWord, OpenOffice, and WordPerfect files on all of our platforms. Transport types include: Memory, Disk IO, VFS, WebDav, IStreams, Bonobo Components, ...
See also: Import/Export
Edit->Paste Special, and then choose one of the available formats
Remove Edit->Paste Unformatted after Paste Special is implemented
Order clipboard formats when pasting according to honesty (unless otherwise overridden by Paste Special)
More sophisticated (and more XP) clipboard handling, including a better API for retrieving data types, and a way to lazy-populate clipboard contents (while copying/cutting)
Fault tolerant clipboard. As we support more file formats (like HTML), the odds of our encountering a problem while pasting go up. Ideally, we'd like to just suck down the next available format and try again.
Enable multi doc-range clipboard - mainly for table columns and non-contiguous selections
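The honesty ordering and the fault-tolerant fallback fit together naturally: try each available clipboard format from most to least faithful, and fall through to the next one when an importer fails. A hypothetical Python sketch (the format names, ratings, and importers are invented for illustration and are not AbiWord code):

```python
# Hypothetical registry: format name -> (honesty rating 0..1, importer function)
def import_rtf(data):
    raise ValueError("malformed RTF")  # simulate a broken paste

def import_html(data):
    return "imported:" + data

importers = {
    "text/rtf":   (0.9, import_rtf),
    "text/html":  (0.7, import_html),
    "text/plain": (0.5, lambda data: data),
}

def paste(clipboard):
    # order the available formats by honesty, most faithful first
    available = sorted(
        (fmt for fmt in clipboard if fmt in importers),
        key=lambda fmt: importers[fmt][0],
        reverse=True,
    )
    for fmt in available:
        try:
            return importers[fmt][1](clipboard[fmt])
        except Exception:
            continue  # fault tolerance: try the next available format
    return None

clipboard = {"text/rtf": "{\\rtf...}", "text/html": "<b>hi</b>"}
print(paste(clipboard))  # RTF import fails, so we fall back to HTML
```

A Paste Special dialog would simply bypass the sorted ordering and pick one format explicitly.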
We'll need the things above from Clipboard and Import/Export. We'll want to be able to drag+drop any format that we can import. We'll want to use GSF so that we don't have to hit a temporary file here. This will essentially be 100% complete when the GSF and mime work above is done.
Use Enchant exclusively on all platforms. Link here.
Clean up all known bugs
Correct speed regressions
Enable Paragraph borders and shading, stealing code from Gnumeric where possible
Images as backgrounds for things like [cells, tables, paragraphs, sections]
Text frames
Clean up mess left behind by 2.0
Feature polish; continued improvement and development
Remove all encoding-less strings, except for input/output translation.
Deprecate UT_String. Replace with UTF8 strings.
All internal encodings are UCS4 or UTF8
Refactor ev, xap, util as necessary - especially menu and toolbar code (wrt icons)
Cut down on code duplication
Remove duplicate map/hash classes.
Make things easier for Criawips to use
Further improvement; bring all dialog documentation up to 100% coverage
Better integration with native help browsers
Come up with a sane C API/ABI for a lot of this stuff. Incrementally make plugins use it. Plan on using a lot of opaque structures and C "namespaces". Mainly want to abstract away the more useful PD_Document, FV_View, XAP_App, and XAP_Frame bits
Enable on-demand loading of plugins. - Possibly look at the framework Gnumeric is using.
Use above C API/ABI to expose ourselves to the outside world. Build C API as needed, not all at once. Project: AbiCapi
PERL, Python, Scheme, ... using SWIG. This will be complete once the AbiCapi work has been done.
Re-organize how menu and toolbar icons are done
Unicows/NT UCS2 string support, including keyboard/input and output (i.e. dialogs)
Finish removing the last of the GTK+ deprecated stuff
More HIG work
Use LibEGG stuff where applicable (Menus, Toolbars, File dialog)
Icon theme integration (export our toolbar icons' names)
Comply with the recent files Freedesktop.org standard
Will have a working OSX port by 2.2
Maintain feature parity with Win32 and UNIX ports
Unfortunately, a lost cause. Unless Jun makes progress in leaps-and-bounds, remove this code.
Remove dead and unused modules
Restructure how plugins, docu, abidistfiles, etc... are done
Remove screenshots and other worthless binary garbage from CVS
Holds data about the loaded sounds.
#include <SoundManager.h>
Definition at line 191 of file SoundManager.h.
constructor [private] - don't call
Definition at line 1070 of file SoundManager.cc.
point to data in region (for convenience, only valid in SoundPlay)
Definition at line 194 of file SoundManager.h.
Referenced by SoundManager::mixChannel(), SoundManager::mixChannelAdditively(), and SoundManager::updateChannels().
size of the sound
Definition at line 195 of file SoundManager.h.
stores the path to the file, empty if from a buffer
Definition at line 201 of file SoundManager.h.
shared region - don't need to share among processes, just collect in SoundPlay
Definition at line 193 of file SoundManager.h.
reference counter
Definition at line 196 of file SoundManager.h.
serial number, allows us to verify that a given message buffer does indeed match this sound, and wasn't delayed in processing
Definition at line 197 of file SoundManager.h.
A lightweight and somewhat compatible Uri class.
Warning: This is not an official Google or Dart project.
Add this to your package's pubspec.yaml file:
dependencies:
  web_uri: "^0.1.0"
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:web_uri/web_uri.dart';
#include <stdio.h>
#include <string.h>

main()
{
    int chroot();
    char dir[80];

    strcpy(dir, "new/directory");
    chroot(dir);
}
Regrards....John
>Is there something I am not doing in order to make a system call? I would appreciate any help you can provide. Thanks in advance.
d.
-- No dogs were harmed in the creation of this .sig. A cat got sick, and somebody shot a duck, but that's it.
> strcpy(dir,"new/directory"); > chroot(dir); >}
2. Are you root when executing the program?
3. Checking errno if chroot() doesn't return 0 will give you a clue about what went wrong.
4. If you expect the program to change the root directory of the shell which invoked it, this will never happen, chroot() will affect only the process that called it.
This is a Unix specific question, so it doesn't belong to c.l.c. Post such questions to comp.unix.programmer.
Dan -- Dan Pop CERN, CN Division
Mail: CERN - PPE, Bat. 31 R-004, CH-1211 Geneve 23, Switzerland
John> I am trying to use a system call in Sun OS 4.1.3, John> chroot. Below is the pseudo code, the actual code of which John> compiles and runs without errors but does not work, ie, John> change the root directory.
John> #include <stdio.h>
John> #include <string.h>
John> main()
John> {
John>     int chroot();
John>     char dir[80];
John>     strcpy(dir, "new/directory");
John>     chroot(dir);
John> }
There's nothing wrong with this code (although BTW you can write 'chroot("new/directory")' directly), but the chroot function changes the root directory for the current process and all children of the current process. If you run this program from your shell it won't change your shell's root directory--that seems to be what you were expecting.
Jake
(Followups to comp.unix.programmer)
P.S. Next time you have a question about a SunOS system call, you might want to post it to comp.unix.programmer, comp.unix.questions or comp.sys.sun.misc.
It is quite common to have a program stopped in the debugger due to a crash or assertion caused by an object being in a bad state, and to track down the problem, you need to figure out how it got that way. In many cases, the call stack of the object’s creation can provide valuable clues, but trying to get that call stack can be a significant chore. The most obvious technique, setting a breakpoint at the object’s constructor, can become cumbersome when you have a large number of objects of the same type being created, only one of which has a problem.
Here, we will explore an alternative approach where, by adding a small amount of instrumentation code, we can examine the object in the watch or locals window at the point of the problem and immediately see the call stack of the object’s creation.
First, we need code to actually capture the stack trace from inside our object’s constructor. Fortunately, Windows has done most of the work for us by providing a function, CaptureStackBackTrace(), which walks the stack a given number of frames, and stores the address of each frame it sees in a void** buffer. We begin by wrapping the function inside of a StackTrace class, which captures the stack trace in its constructor and stores it in a member variable, as follows:
#include <Windows.h>

class StackTrace
{
private:
    enum { NFrames = 20 };
    int m_frameCount;
    void* m_frames[NFrames];

public:
    StackTrace()
    {
        m_frameCount = CaptureStackBackTrace(1, NFrames, m_frames, NULL);
    }
};
Now, all we have to do is stick one of these StackTrace objects inside of each class we are interested in recording the stack trace of. For example:
class MyObject
{
private:
    // Existing members...
    StackTrace stackTrace;

public:
    MyObject()
    {
        // Existing constructor code...
    }
};
Now, every time an instance of “MyObject” gets created, the stack trace of the creation, starting with the “MyObject” constructor, will be saved inside of the MyObject’s ‘stackTrace’ field. (To avoid adding unnecessary performance overhead to your application, it is recommended that you remove uses of the StackTrace class when you are finished investigating your problem, or wrap the use of the StackTrace class in “#ifdef _DEBUG” to exclude it from retail builds).
Everything we’ve done so far can be accomplished using any version of Visual Studio. However, when it comes to looking at the captured stack trace under the debugger and seeing something useful, Visual Studio 2013 is best. In prior releases, the contents of the stack trace would simply be a collection of opaque void*’s, like this:
In Visual Studio 2013, however, the stack trace looks like this:
You can even right-click on a specific frame of interest to navigate to the source or disassembly, like this:
What we’ve seen so far does not require any special effort to enable – whenever the Visual Studio 2013 debugger sees a pointer to code inside of a function, the debugger automatically shows the name of the function and the line number, and allows source and disassembly navigation.
However, if you are willing to write a natvis entry, you can make the experience even better, like this:
<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="">
  <Type Name="StackTrace">
    <Expand>
      <ExpandedItem>m_frames,[m_frameCount]stackTrace</ExpandedItem>
    </Expand>
  </Type>
  <Type Name="MyObject">
    <!-- Existing visualization for MyObject -->
    <Expand>
      <Item Name="[Stack Trace]">stackTrace</Item>
    </Expand>
  </Type>
</AutoVisualizer>
The above natvis entry does several things. First, it prominently calls out the stack trace of MyObject so you don’t have to dig it out of a potentially long field list. Secondly, the visualizer for the StackTrace class uses the array-length format specifier to avoid showing the unused sections of the stack trace buffer. Finally, it uses the special “,stackTrace” format specifier, which serves as a hint to the debugger that the contents of the member variable “m_frames” actually represent the frames of a stack trace. In particular, the “,stackTrace” format specifier causes the debugger to omit the memory addresses of the frames, showing only the function, and to collapse frames which represent non-user code into an “[External Code]” frame if JustMyCode is enabled. In this example, the “[External Code]” block refers to the frames from kernel32.dll and ntdll.dll that comprise the start of every Windows thread.
Give it a try!
Eric Feiveson is a developer on the Visual C++ team at Microsoft. If you have questions, please post them in the comments.
That's a neat trick. I've been using tracepoints and the $CALLSTACK macro. It'd be great if that macro took an argument to limit the number of frames it outputs. Also, having the output window recognize stack frames and support the rt-click navigate menu would be a big help.
For older versions of Visual Studio, tracepoints and $CALLSTACK was pretty much the only way of getting a human-readable stack trace at an object's constructor. Even now, there are still some scenarios where the callstack obtained through the CaptureStackBackTrace() trick isn't as good as $CALLSTACK. For example, the trick I described in this post will not give you managed frames if you are mixed-mode debugging, but $CALLSTACK will. Nevertheless, this technique does work for the large majority of cases, and hopefully, its relative ease and performance (a runtime call to CaptureStackBackTrace() is much faster than hitting a $CALLSTACK tracepoint in the debugger) will allow you to resort to $CALLSTACK a lot less often than before.
I like your suggestions on improving the $CALLSTACK output, though.
but what's the advantage of this compared to simply set a breakpoint in the object constructor and run the program then observe the calling stack and its arguments and everything?
It's mentioned in the article:
"The most obvious technique, setting a breakpoint at the object's constructor, can become cumbersome when you have a large number of objects of the same type being created, only one of which, has a problem."
It's a shame this doesn't really work for most of my cases which is mixed-mode debugging.
I tried the way the mixed-mode but I failed. Does anyone know another simple way to solve the problem?
The problem has now been fixed. Starting with Visual Studio 2015 Update 3, the instructions described in this post should work with mixed-mode debugging.
Now that we've got our race car and we can drive it around, we've found that we're able to drive it right off of the screen! To stop this, we want to add some sort of boundary to our game that will stop this from happening.
Here's the new code:
import pygame

def game_loop():
    x = (display_width * 0.45)
    y = (display_height * 0.8)
    x_change = 0

    gameExit = False

    while not gameExit:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                gameExit = True

        if x > display_width - car_width or x < 0:
            gameExit = True

        pygame.display.update()
        clock.tick(60)

game_loop()
pygame.quit()
quit()
First, we see that we now have a new variable:
car_width = 73
This variable is used in the rest of the program to know where both edges of the car are. The car's "location" really just means the location of the top left pixel of the car. Because of this, it is helpful to also know where the right side is.
Next, we see that we've changed the "main loop" quite a bit. Now we're calling this the game loop, and instead of the crashed variable exiting it, a "gameExit" variable will exit the loop. Take note of the variables that have now been moved within this loop.
Next up, we see the next major change is:
if x > display_width - car_width or x < 0: gameExit = True
This is our logic for whether or not the car has crossed over the left and right boundaries.
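The boundary test above can also be factored into a small pure function, which makes the edge cases easy to reason about. This is an illustrative sketch, not part of the tutorial's code: the hit_boundary name and the 800-pixel display width are made-up values for the demo.

```python
def hit_boundary(x, car_width, display_width):
    """Return True when any part of the car is off-screen.

    x is the x-coordinate of the car's top-left pixel, so the car
    spans [x, x + car_width); it fits while 0 <= x <= display_width - car_width.
    """
    return x > display_width - car_width or x < 0

# With the tutorial's car_width = 73 and an assumed 800px-wide display,
# the car fits anywhere from x = 0 to x = 727:
print(hit_boundary(0, 73, 800))    # False - flush against the left edge
print(hit_boundary(727, 73, 800))  # False - flush against the right edge
print(hit_boundary(728, 73, 800))  # True - right edge passes x = 800
print(hit_boundary(-1, 73, 800))   # True - off the left edge
```

Inside the game loop, the check would then read `if hit_boundary(x, car_width, display_width): gameExit = True`.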
Create pandas DataFrame by appending one row at a time
I understand that pandas is designed to load a fully populated DataFrame, but I need to create an empty DataFrame and then add rows, one by one.
What is the best way to do this ?
I successfully created an empty DataFrame with :
res = DataFrame(columns=('lib', 'qty1', 'qty2'))
Then I can add a new row and fill a field with :
res = res.set_value(len(res), 'qty1', 10.0)
It works but seems very odd :-/ (and it fails when adding a string value)
How can I add a new row to my DataFrame (with different columns type) ?
Answer #1:
You can use df.loc[i], where the row with index i will be what you specify it to be in the dataframe.
import pandas as pd
from numpy.random import randint

df = pd.DataFrame(columns=['lib', 'qty1', 'qty2'])
for i in range(5):
    df.loc[i] = ['name' + str(i)] + list(randint(10, size=2))

df

     lib qty1 qty2
0  name0    3    3
1  name1    2    4
2  name2    2    8
3  name3    2    1
4  name4    9    6
Answer #2:
In case you can get all data for the data frame upfront, there is a much faster approach than appending to a data frame:
- Create a list of dictionaries in which each dictionary corresponds to an input data row.
- Create a data frame from this list.
I had a similar task for which appending to a data frame row by row took 30 min, and creating a data frame from a list of dictionaries completed within seconds.
rows_list = []
for row in input_rows:
    dict1 = {}
    # get input row in dictionary format
    # key = col_name
    dict1.update(blah..)
    rows_list.append(dict1)

df = pd.DataFrame(rows_list)
Answer #3:
You could use pandas.concat() or DataFrame.append(). For details and examples, see Merge, join, and concatenate.
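Note that DataFrame.append() was deprecated and later removed in pandas 2.0, so on current versions only the concat route works. A minimal sketch, reusing the column names from the question (the row values are made up):

```python
import pandas as pd

df = pd.DataFrame(columns=['lib', 'qty1', 'qty2'])

# Wrap the new row in a one-row DataFrame, then concatenate.
new_row = pd.DataFrame([{'lib': 'name0', 'qty1': 10.0, 'qty2': 20.0}])
df = pd.concat([df, new_row], ignore_index=True)

print(df)
```

Strings and floats coexist happily here, which sidesteps the set_value problem from the question.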
Answer #4:
It’s been a long time, but I faced the same problem too. And found here a lot of interesting answers. So I was confused what method to use.
In the case of adding a lot of rows to dataframe I interested in speed performance. So I tried 4 most popular methods and checked their speed.
UPDATED IN 2019 using new versions of packages.
Also updated after @FooBar comment
SPEED PERFORMANCE
- Using .append (NPE’s answer)
- Using .loc (fred’s answer)
- Using .loc with preallocating (FooBar’s answer)
- Using dict and create DataFrame in the end (ShikharDua’s answer)
Results (in secs):
|------------|-------------|-------------|-------------|
| Approach   | 1000 rows   | 5000 rows   | 10 000 rows |
|------------|-------------|-------------|-------------|
| .append    | 0.69        | 3.39        | 6.78        |
|------------|-------------|-------------|-------------|
| .loc w/o   | 0.74        | 3.90        | 8.35        |
| prealloc   |             |             |             |
|------------|-------------|-------------|-------------|
| .loc with  | 0.24        | 2.58        | 8.70        |
| prealloc   |             |             |             |
|------------|-------------|-------------|-------------|
| dict       | 0.012       | 0.046       | 0.084       |
|------------|-------------|-------------|-------------|
Also thanks to @krassowski for useful comment – I updated the code.
So I use addition through the dictionary for myself.
Code:
import pandas as pd
import numpy as np
import time

del df1, df2, df3, df4
numOfRows = 1000

# append
startTime = time.perf_counter()
df1 = pd.DataFrame(np.random.randint(100, size=(5,5)), columns=['A', 'B', 'C', 'D', 'E'])
for i in range(1, numOfRows-4):
    df1 = df1.append(dict((a, np.random.randint(100)) for a in ['A','B','C','D','E']), ignore_index=True)
print('Elapsed time: {:6.3f} seconds for {:d} rows'.format(time.perf_counter() - startTime, numOfRows))
print(df1.shape)

# .loc w/o prealloc
startTime = time.perf_counter()
df2 = pd.DataFrame(np.random.randint(100, size=(5,5)), columns=['A', 'B', 'C', 'D', 'E'])
for i in range(1, numOfRows):
    df2.loc[i] = np.random.randint(100, size=(1,5))[0]
print('Elapsed time: {:6.3f} seconds for {:d} rows'.format(time.perf_counter() - startTime, numOfRows))
print(df2.shape)

# .loc with prealloc
df3 = pd.DataFrame(index=np.arange(0, numOfRows), columns=['A', 'B', 'C', 'D', 'E'])
startTime = time.perf_counter()
for i in range(1, numOfRows):
    df3.loc[i] = np.random.randint(100, size=(1,5))[0]
print('Elapsed time: {:6.3f} seconds for {:d} rows'.format(time.perf_counter() - startTime, numOfRows))
print(df3.shape)

# dict
startTime = time.perf_counter()
row_list = []
for i in range(0, 5):
    row_list.append(dict((a, np.random.randint(100)) for a in ['A','B','C','D','E']))
for i in range(1, numOfRows-4):
    dict1 = dict((a, np.random.randint(100)) for a in ['A','B','C','D','E'])
    row_list.append(dict1)
df4 = pd.DataFrame(row_list, columns=['A','B','C','D','E'])
print('Elapsed time: {:6.3f} seconds for {:d} rows'.format(time.perf_counter() - startTime, numOfRows))
print(df4.shape)
P.S. I believe, my realization isn’t perfect, and maybe there is some optimization.
Answer #5:
If you know the number of entries ex ante, you should preallocate the space by also providing the index (taking the data example from a different answer):
import pandas as pd
import numpy as np

# we know we're gonna have 5 rows of data
numberOfRows = 5

# create dataframe
df = pd.DataFrame(index=np.arange(0, numberOfRows), columns=('lib', 'qty1', 'qty2'))

# now fill it up row by row
for x in np.arange(0, numberOfRows):
    # loc or iloc both work here since the index is natural numbers
    df.loc[x] = [np.random.randint(-1, 1) for n in range(3)]

In[23]: df
Out[23]:
  lib qty1 qty2
0  -1   -1   -1
1   0    0    0
2  -1    0   -1
3   0   -1    0
4  -1    0    0
Speed comparison
In[30]: %timeit tryThis() # function wrapper for this answer
In[31]: %timeit tryOther() # function wrapper without index (see, for example, @fred)
1000 loops, best of 3: 1.23 ms per loop
100 loops, best of 3: 2.31 ms per loop
And – as from the comments – with a size of 6000, the speed difference becomes even larger:
Increasing the size of the array (12) and the number of rows (500) makes the speed difference more striking: 313ms vs 2.29s
Answer #6:
NEVER grow a DataFrame!
Yes, people have already explained that you should NEVER grow a DataFrame, and that you should append your data to a list and convert it to a DataFrame once at the end. But do you understand why?
Here are the most important reasons, taken from my post here.
- It is always cheaper/faster to append to a list and create a DataFrame in one go.
- Lists take up less memory and are a much lighter data structure to work with, append, and remove.
- dtypes are automatically inferred for your data. On the flip side, creating an empty frame of NaNs will automatically make them object, which is bad.
- An index is automatically created for you, instead of you having to take care to assign the correct index to the row you are appending.
This is The Right Way™ to accumulate your data
data = []
for a, b, c in some_function_that_yields_data():
    data.append([a, b, c])

df = pd.DataFrame(data, columns=['A', 'B', 'C'])
These options are horrible
append or concat inside a loop

append and concat aren't inherently bad in isolation. The problem starts when you iteratively call them inside a loop - this results in quadratic memory usage.
# Creates empty DataFrame and appends
df = pd.DataFrame(columns=['A', 'B', 'C'])
for a, b, c in some_function_that_yields_data():
    df = df.append({'A': a, 'B': b, 'C': c}, ignore_index=True)
    # This is equally bad:
    # df = pd.concat(
    #     [df, pd.Series({'A': a, 'B': b, 'C': c})],
    #     ignore_index=True)
Empty DataFrame of NaNs
Never create a DataFrame of NaNs, as the columns are initialized with object (a slow, un-vectorizable dtype).
# Creates DataFrame of NaNs and overwrites values.
df = pd.DataFrame(columns=['A', 'B', 'C'], index=range(5))
for a, b, c in some_function_that_yields_data():
    df.loc[len(df)] = [a, b, c]
The Proof is in the Pudding
Timing these methods is the fastest way to see just how much they differ in terms of their memory and utility.
Benchmarking code for reference.
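As a minimal, self-contained illustration of the difference, the two approaches can be timed side by side. This sketch uses pd.concat (since DataFrame.append has since been removed from pandas); the function names and row contents are made up for the demo:

```python
import time
import pandas as pd

def grow_in_loop(n):
    """Anti-pattern: concatenate one row at a time (quadratic copying)."""
    df = None
    for i in range(n):
        row = pd.DataFrame([{'A': i, 'B': i * 2, 'C': i * 3}])
        df = row if df is None else pd.concat([df, row], ignore_index=True)
    return df

def build_once(n):
    """Recommended: accumulate rows in a list, build the frame once."""
    data = [{'A': i, 'B': i * 2, 'C': i * 3} for i in range(n)]
    return pd.DataFrame(data)

n = 500
t0 = time.perf_counter(); slow = grow_in_loop(n); t_loop = time.perf_counter() - t0
t0 = time.perf_counter(); fast = build_once(n); t_once = time.perf_counter() - t0

assert slow.equals(fast)  # identical result, very different cost
print(f'concat-in-loop: {t_loop:.3f}s   build-once: {t_once:.3f}s')
```

On a typical machine, the gap between the two timings widens rapidly as n grows, which is exactly the quadratic behavior described above.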
It’s posts like this that remind me why I’m a part of this community. People understand the importance of teaching folks to get the right answer with the right code, not the right answer with wrong code. Now you might argue that it is not an issue to use loc or append if you’re only adding a single row to your DataFrame. However, people often look to this question to add more than just one row - often the requirement is to iteratively add a row inside a loop using data that comes from a function (see related question). In that case it is important to understand that iteratively growing a DataFrame is not a good idea.
Answer #7:
mycolumns = ['A', 'B']
df = pd.DataFrame(columns=mycolumns)
rows = [[1,2],[3,4],[5,6]]
for row in rows:
    df.loc[len(df)] = row
Answer #8:
You can append a single row as a dictionary using the ignore_index option.

f = pandas.DataFrame(data = {'Animal':['cow','horse'], 'Color':['blue', 'red']})

f
  Animal Color
0    cow  blue
1  horse   red

f.append({'Animal':'mouse', 'Color':'black'}, ignore_index=True)
  Animal  Color
0    cow   blue
1  horse    red
2  mouse  black
An example project integration of g3log, both statically and dynamically built, can be found at g3log_example_integration
LOG(INFO) << "streaming API is as easy as ABC or " << 123;
LOGF(WARNING, "Printf-style syntax is also %s", "available");
int less = 1;
int more = 2;

LOG_IF(INFO, (less<more)) << "If [true], then this text will be logged";

// or with printf-like syntax
LOGF_IF(INFO, (less<more), "if %d<%d then this text will be logged", less, more);
CHECK(false) will trigger a "fatal" message. It will be logged, and then the application will exit.
CHECK(less != more); // not FATAL
CHECK(less > more) << "CHECK(false) triggers a FATAL message";
Please look at API.markdown for detailed API documentation
Easy to use, clean syntax and a blazing fast logger.
All the slow log I/O disk access is done in a background thread. This ensures that the LOG caller can immediately continue with other tasks and do not have to wait for the LOG call to finish.
G3log provides logging, Design-by-Contract [#CHECK], and flush of log to file at shutdown. Buffered logs will be written to the sink before the application shuts down.
It is thread safe, so using it from multiple threads is completely fine.
It is CRASH SAFE. It will save the logs made so far to the sink before it shuts down. The logger will catch certain fatal events (Linux/OSX: signals, Windows: fatal OS exceptions and signals), so if your application crashes due to, say, a segmentation fault, SIGSEGV, it will log and save the crash and all previously buffered log entries before exiting.
It is cross platform. Tested and used by me or by clients on OSX, Windows, Ubuntu, CentOS
G3log and G2log are used worldwide in commercial products as well as hobby projects. G2log has been in use since early 2011.
The code is given for free as public domain. This gives the option to change, use, and do whatever with it, no strings attached.
Two versions of g3log exist.
Sinks are receivers of LOG calls. G3log comes with a default sink (the same as G3log uses) that can be used to save log to file. A sink can be of any class type without restrictions as long as it can either receive a LOG message as a std::string or as a g3::LogMessageMover.
The std::string comes pre-formatted. The g3::LogMessageMover is a wrapped struct that contains the raw data for custom handling in your own sink.
A sink is owned by the G3log and is added to the logger inside a
std::unique_ptr. The sink can be called though its public API through a handler which will asynchronously forward the call to the receiving sink.
It is crazy simple to create a custom sink. This example show what is needed to make a custom sink that is using custom log formatting but only using that for adding color to the default log formatting. The sink forwards the colored log to cout
// in file Customsink.hpp
#pragma once

#include <string>
#include <iostream>
#include <g3log/logmessage.hpp>

struct CustomSink {
  // Linux xterm color
  enum FG_Color {YELLOW = 33, RED = 31, GREEN = 32, WHITE = 97};

  FG_Color GetColor(const LEVELS level) const {
    if (level.value == WARNING.value) { return YELLOW; }
    if (level.value == DEBUG.value) { return GREEN; }
    if (g3::internal::wasFatal(level)) { return RED; }
    return WHITE;
  }

  void ReceiveLogMessage(g3::LogMessageMover logEntry) {
    auto level = logEntry.get()._level;
    auto color = GetColor(level);
    std::cout << "\033[" << color << "m"
              << logEntry.get().toString() << "\033[m" << std::endl;
  }
};

// in main.cpp, main() function
auto sinkHandle = logworker->addSink(std::make_unique<CustomSink>(),
                                     &CustomSink::ReceiveLogMessage);
You can safely remove and add sinks during the running of your program.
Keep in mind
Adding Sinks
auto sinkHandle1 = logworker->addSink(std::make_unique<CustomSink>(), &CustomSink::ReceiveLogMessage);
auto sinkHandle2 = logworker->addDefaultLogger(argv[0], path_to_log_file);

logworker->removeSink(std::move(sinkHandle1)); // this will in a thread-safe manner remove the sinkHandle1
logworker->removeAllSinks(); // this will in a thread-safe manner remove any sinks.
More sinks can be found in the repository github.com/KjellKod/g3sinks.
Example usage where a custom sink is added. A function is called through the sink handler to the actual sink object.
// main.cpp
#include <g3log/g3log.hpp>
#include <g3log/logworker.hpp>
#include <memory>
#include "CustomSink.h"

int main(int argc, char**argv) {
   using namespace g3;
   std::unique_ptr<LogWorker> logworker{ LogWorker::createLogWorker() };
   auto sinkHandle = logworker->addSink(std::make_unique<CustomSink>(),
                                        &CustomSink::ReceiveLogMessage);

   // initialize the logger before it can receive LOG calls
   initializeLogging(logworker.get());

   LOG(WARNING) << "This log call may or may not happen before"
                << " the sinkHandle->call below";

   // You can call public functions on your sink in a thread-safe manner.
   // The call is asynchronously executed on your custom sink.
   std::future<void> received = sinkHandle->call(&CustomSink::Foo, param1, param2);

   // If the LogWorker is initialized then at scope exit g3::internal::shutDownLogging() will be called.
   // This is important since it protects from LOG calls from static or other entities that will go out of
   // scope at a later time.
   //
   // It can also be called manually: g3::internal::shutDownLogging();
}

// some_file.cpp : To show how easy it is to get the logger to work
// in other parts of your software
#include <g3log/g3log.hpp>

void SomeFunction() {
   ...
   LOG(INFO) << "Hello World";
}
Example usage where the default file logger is used and a custom sink is added
// main.cpp
#include <g3log/g3log.hpp>
#include <g3log/logworker.hpp>
#include <memory>
#include "CustomSink.h"

int main(int argc, char**argv) {
   using namespace g3;
   auto worker = LogWorker::createLogWorker();
   auto defaultHandler = worker->addDefaultLogger(argv[0], path_to_log_file);

   // logger is initialized
   g3::initializeLogging(worker.get());

   LOG(DEBUG) << "Make log call, then add another sink";

   worker->addSink(std::make_unique<CustomSink>(), &CustomSink::ReceiveLogMessage);
   ...
}
git clone
cd g3log
mkdir build
cd build
Assume you have got your shiny C++14 compiler installed, you also need these tools to build g3log from source:
CMake (Required)
g3log uses CMake as a one-stop solution for configuring, building, installing, packaging and testing on Windows, Linux and OSX.
Git (Optional but Recommended)
When building g3log it uses git to calculate the software version from the commit history of this repository. If you don't want that, or your setup does not have access to git, or you download g3log source archive from the GitHub Releases page so that you do not have the commit history downloaded, you can instead pass in the version as part of the CMake build arguments. See this issue for more information.
cmake -DVERSION=1.3.2 ..
g3log provides following CMake options (and default values):
$ cmake -LAH # List non-advanced cached variables. See `cmake --help` for more details.
...
// Fatal (fatal-crashes/contract) examples
ADD_FATAL_EXAMPLE:BOOL=ON

// g3log performance test
ADD_G3LOG_BENCH_PERFORMANCE:BOOL=OFF

// g3log unit tests
ADD_G3LOG_UNIT_TEST:BOOL=OFF

// Use DBUG logging level instead of DEBUG.
// By default DEBUG is the debugging level
CHANGE_G3LOG_DEBUG_TO_DBUG:BOOL=OFF

// Specifies the build type on single-configuration generators.
// Possible values are empty, Debug, Release, RelWithDebInfo, MinSizeRel, …
CMAKE_BUILD_TYPE:STRING=

// Install path prefix, prepended onto install directories.
// This variable defaults to /usr/local on UNIX
// and c:/Program Files/${PROJECT_NAME} on Windows.
CMAKE_INSTALL_PREFIX:PATH=

// The prefix used in the built package.
// On Linux, if this option is not set:
// 1) If CMAKE_INSTALL_PREFIX is given, then it will be
//    set with the value of CMAKE_INSTALL_PREFIX by g3log.
// 2) Otherwise, it will be set as /usr/local by g3log.
CPACK_PACKAGING_INSTALL_PREFIX:PATH=

// Enable Visual Studio break point when receiving a fatal exception.
// In __DEBUG mode only
DEBUG_BREAK_AT_FATAL_SIGNAL:BOOL=OFF

// Vectored exception / crash handling with improved stack trace
ENABLE_FATAL_SIGNALHANDLING:BOOL=ON

// Vectored exception / crash handling with improved stack trace
ENABLE_VECTORED_EXCEPTIONHANDLING:BOOL=ON

// iOS version of library.
G3_IOS_LIB:BOOL=OFF

// Log full filename
G3_LOG_FULL_FILENAME:BOOL=OFF

// Build shared library
G3_SHARED_LIB:BOOL=ON

// Build shared runtime library MSVC
G3_SHARED_RUNTIME:BOOL=ON

// Turn ON/OFF log levels.
// A disabled level will not push logs of that level to the sink.
// By default dynamic logging is disabled
USE_DYNAMIC_LOGGING_LEVELS:BOOL=OFF

// Use dynamic memory for message buffer during log capturing
USE_G3_DYNAMIC_MAX_MESSAGE_SIZE:BOOL=OFF
...
For additional option context and comments please also see Options.cmake
If you want to leave everything as it was, then you should:
cmake ..
You may also specify one or more of those options listed above from the command line. For example, on Windows:
cmake .. -G "Visual Studio 15 2017" -DG3_SHARED_LIB=OFF -DCMAKE_INSTALL_PREFIX=C:/g3log -DADD_G3LOG_UNIT_TEST=ON -DADD_FATAL_EXAMPLE=OFF
will use the Visual Studio 2017 solution generator, build g3log as a static library, install headers and libraries to C:\g3log when installing from source, enable unit testing, and skip building the fatal example.
MinGW users on Windows may find they should use a different generator:
cmake .. -G "MinGW Makefiles"
By default, headers and libraries will be installed to /usr/local on Linux when installed from the build tree via make install. You may override this with:
cmake .. -DCMAKE_INSTALL_PREFIX=/usr
This will install g3log to /usr instead of /usr/local.
Linux/OSX package maintainers may be interested in the
CPACK_PACKAGING_INSTALL_PREFIX. For example:
cmake .. -DCPACK_PACKAGING_INSTALL_PREFIX=/usr/local
Once the configuration is done, you may build g3log with:
# Suppose you are still in the `build` directory. I won't repeat it anymore!
cmake --build . --config Release
You may also build it with a system-specific way.
On Linux, OSX and MinGW:
make
On Windows:
msbuild g3log.sln /p:Configuration=Release
Windows users can also open the generated Visual Studio solution file and build it happily.
Install from source in a CMake way:
cmake --build . --target install
Linux users may also use:
sudo make install
You may also create a package first and install g3log with it. See the next section.
A CMake way:
cmake --build . --config Release --target package
or
cpack -C Release
if the whole library has been built in the previous step. It will generate a ZIP package on Windows, and a DEB package on Linux.
Linux users may also use a Linux way:
make package
If you want to use a different package generator, you should specify a
-G option.
On Windows:
cpack -C Release -G "NSIS;7Z"

This will create an installable NSIS package and a 7z package.
Note: To use the NSIS generator, you should install
NSIS first.
On Linux:
cpack -C Release -G TGZ
This will create a .tar.gz archive for you.
Once done, you may install or uncompress the package file to the target machine. For example, on Debian or Ubuntu:
sudo dpkg -i g3log-<version>-Linux.deb
will install the g3log library to
CPACK_PACKAGING_INSTALL_PREFIX.
By default, tests will not be built. To enable unit testing, you should turn on
ADD_G3LOG_UNIT_TEST.
Once the build process has completed, you can run the tests with:
ctest -C Release
or:
make test
for Linux users, or, for a detailed gtest output of all the tests:
cd build; ../scripts/runAllTests.sh
g3log comes with a CMake module. Once installed, it can be found under
${CMAKE_INSTALL_PREFIX}/lib/cmake/g3log. Users can use g3log in a CMake-based project this way:
find_package(g3log CONFIG REQUIRED)
target_link_libraries(main PRIVATE g3log)
To make sure that CMake can find g3log, you also need to tell CMake where to search for it:
cmake .. -DCMAKE_PREFIX_PATH=<g3log's install prefix>
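Putting find_package and CMAKE_PREFIX_PATH together, a minimal consumer CMakeLists.txt might look like the sketch below (the project name, target name, and source file are illustrative, not part of g3log):

```cmake
# Hypothetical downstream project linking against an installed g3log
cmake_minimum_required(VERSION 3.10)
project(g3log_consumer CXX)

# Locates g3logConfig.cmake under <prefix>/lib/cmake/g3log;
# point CMAKE_PREFIX_PATH at g3log's install prefix if needed
find_package(g3log CONFIG REQUIRED)

add_executable(main main.cpp)
target_link_libraries(main PRIVATE g3log)
```

Configure it with the -DCMAKE_PREFIX_PATH option shown above when g3log is installed to a non-default prefix.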
Most of the API that you need for using g3log is described in this readme. For more API documentation and examples please continue to read the API readme. Examples of what you will find here are:
G3log aims to keep the background logging to sinks with as little logging overhead as possible, and with as small a "worst case latency" as possible. For this reason g3log is a good logger for many systems that deal with critical tasks. Depending on the platform, the average logging overhead will differ. On my 2010 laptop the average call, when doing extreme performance testing, was about 2 µs.
The worst case latency is kept stable with no extreme peaks, in spite of any sudden extreme pressure. I have a blog post regarding comparing worst case latency for g3log and other loggers which might be of interest. You can find it here:
If you like this logger (or not), it would be nice to get some feedback. That way I can improve g3log and g2log, and it is also nice to see whether anyone is using it.

If you have ANY questions or problems, please do not hesitate to contact me on my blog or at Hedstrom at KjellKod dot cc.
This logger is available for free and all of its source code is public domain. A great way of saying thanks is to send a donation. It would go a long way not only to show your support but also to boost continued development.
Cheers
Kjell (a.k.a. KjellKod)
|
https://awesomeopensource.com/project/KjellKod/g3log
|
CC-MAIN-2021-10
|
refinedweb
| 2,103
| 55.95
|
The great thing about the POP mail protocol is that it is a well-documented open standard, making writing a mail client to collect mail from a POP box a relatively painless process. Armed with basic knowledge of POP or SMTP, it is possible to write proxies which do a variety of useful things, such as filtering out spam or junk mail, or providing an e-mail answering machine service. Unfortunately, things are not so straightforward when trying to write a standalone client for Hotmail, the world's largest provider of free, Web-based e-mail.
Outlook Express uses an undocumented protocol commonly referred to as HTTPMail, allowing a client to access Hotmail using a set of HTTP/1.1 extensions. This article explains some of the features of HTTPMail, and how best to connect to Hotmail using a C# client. The sample source code accompanying this article uses COM interop to leverage XMLHTTP as the transport service. The XMLHTTP component provides a complete HTTP implementation including authentication together with ability to set custom headers before sending HTTP requests to the server.
The default HTTPMail gateway for Hotmail boxes is located at a fixed URL. Although undocumented, the HTTPMail protocol is actually a standard WebDAV service. As we are using C#, we could use the TCP and HTTP classes provided by the .NET framework within the System.Net namespace. Since we are working with WebDAV, however, it is simpler to use XMLHTTP to connect to Hotmail under C#. Referencing the MSXML2 component provides an interop assembly that exposes XMLHTTP to managed code.
Once the URLs for the inbox and outbox which are valid for this session have been determined, it is possible to send and retrieve e-mail.
Given the URL of a mailbox folder (such as the Inbox folder), we can direct a WebDAV request to the folder's URL in order to list the mail items within the folder. The sample console application defines a managed type
MailItem, used to store mail information for a folder item. Folder enumeration begins by issuing a WebDAV request to the folder URL; the response contains a set of fields identifying each
MailItem, including the
<D:href> tag, which will later allow us to retrieve the item. We can again use
System.XML.XmlTextReader to parse this XML text stream. We first initialise the stream reader:
// Holders.
MailItem mailItem = null;

// Load the Xml.
StringReader reader = new StringReader(folderInfo);
XmlTextReader xml = new XmlTextReader(reader);
The reader is then advanced node by node, populating a MailItem for each entry in the listing. Once enumeration is complete, each MailItem carries the URL needed to fetch the corresponding message.
In order to retrieve mail, the
LoadMail() method (see above) performs a HTTP/1.1
GET request. Similarly, a
POST is sent to the sendmsg URL in order to dispatch a message. The request body is built up as an RFC 821 formatted message, finishing with the mail body itself:

// Dump mail body.
postBody += body;
To send the mail, we need to set the
Content-Type request header to "message/rfc821", indicating that this request contains a body which follows RFC 821. We
POST the request body generated above to the sendmsg URL obtained during connection time:
// Open the connection.
xmlHttp_.open("POST", sendUrl_, false, null, null);

// Send the request.
xmlHttp_.setRequestHeader("Content-Type", "message/rfc821");
xmlHttp_.send(postBody);
Given a valid destination mailbox, Hotmail will send the mail to our desired location.
Hotmail is the world's largest provider of free, Web-based e-mail. However, the only non-web mail client with direct access to Hotmail is Outlook Express. Since the HTTPMail protocol is undocumented, other vendors are discouraged from providing a similar service. In this article we saw how the protocol can nevertheless be driven from a C# client to connect to Hotmail, enumerate folders, and send and retrieve mail.
Subject: Re: [boost] Namespace policy: putting names in boost::detail
From: Mathias Gaunard (mathias.gaunard_at_[hidden])
Date: 2011-07-04 13:44:08
On 04/07/2011 18:53, Vicente Botet wrote:
>
> Mathias Gaunard-2 wrote:
>> [...]
> What kind of dispatch and specializations do you need to share that they can
> not be on simd? Maybe some more context will help.
NT2 and Boost.SIMD use a fully externalizable function dispatching and
specialization mechanism.
In NT2, we have some functions, like plus, multiplies, etc. that work on
a variety of types: scalars, simd packs, expression trees, tables,
matrices, polynomials, etc.
Each function can thus be specialized many times, sometimes just for
particular scalars or simd pack types for example.
All specializations of all functions are provided as specializations of
the same class.
The dispatching uses overloading to select the best specialization
available using partial ordering.
It already relies on ADL tricks so I don't think we can be very flexible
with regards to namespaces.
Boost.SIMD will only contain the scalar and simd packs specializations,
and we would need to add the other versions of NT2.
Actually, I think it might be better to put all that in the namespace
boost::detail::dispatch (or even boost::dispatch), as some kind of
"pending" library
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Using multiple monitors in C# is actually a rather simple job, since the monitors are combined into one big rectangle to draw on. But you can also use one single monitor if you like.
In this article, I will give a brief explanation of how to discover how many monitors the user has and how high the resolution of each monitor is.
The right information about monitors can be found in the Screen class of System.Windows.Forms namespace. In this class, you can find all sorts of information about how many monitors there are, and the working area of each monitor. Also, the device name can be recovered from this class.
This is pretty simple. In the Screen class, you will find a property called AllScreens. This is an array of Screen objects which represent the monitors of your system. If you use the Length property of this array, you will find the number of monitors connected to your system.
At each index in the array, there is a Screen class which contains the following information:
DeviceName
WorkingArea
Bounds
Primary
The DeviceName looks like this: \\.\DISPLAY1. This can be used in combination with various Windows API calls, which I won't explain here. The WorkingArea is the actual area that can be used to display graphical data, like Windows forms and other things. This area is calculated out of the Bounds minus the size of Windows taskbar. The Bounds property is the actual resolution that the monitor currently has. The Primary property is false when the screen isn't the primary one. If it is the primary screen, this property will return true.
You can check which screen you are on using a simple piece of code:
Screen scrn = Screen.FromControl(this);
In the source code, you will find a small project that demonstrates all the functions explained in this article. If you compile and run the project, you can move the form from one screen to another, and the application will give the information about the screen you are on.
NAME
pthread_kill - send a signal to a thread
SYNOPSIS
#include <signal.h>

int pthread_kill(pthread_t thread, int sig);

Compile and link with -pthread.
DESCRIPTION
The pthread_kill() function sends the signal sig to thread, a thread in the same process as the caller. The signal is asynchronously directed to thread.

RETURN VALUE
On success, pthread_kill() returns 0; on error, it returns an error number, and no signal is sent.
ERRORS
EINVAL An invalid signal was specified.

ESRCH  No thread with the ID thread could be found.
CONFORMING TO
POSIX.1-2001.
NOTES
Signal dispositions are process-wide: if a signal handler is installed, the handler will be invoked in the thread thread, but if the disposition of the signal is "stop", "continue", or "terminate", this action will affect the whole process.
SEE ALSO
kill(2), sigaction(2), sigpending(2), pthread_self(3), pthread_sigmask(3), raise(3), pthreads(7), signal(7)
COLOPHON
This page is part of release 3.35 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
Plot slicing image with custom indicator in master frame cause unexpected result
I have developed a custom indicator which is plotted in the master frame. In order to reduce the size of the saved image, I added start and end parameters to the plot function like this: plot(plotter=myPlotter, start=startbar, end=endbar). However, I found that sometimes the image will be compressed in the y axis, since the value of my indicator may be larger than the maximum value between startbar and endbar. I'm not sure if there is an easy solution to fix this; if yes, please let me know, thanks.
I made a quick fix in plot.py, updating drawtag as follows:

def drawtag(self, ax, x, y, facecolor, edgecolor, alpha=0.9, **kwargs):
    label = y
    ylim = ax.get_ylim()
    if y > ylim[1]:
        y = ylim[1]
    txt = ax.text(x, y, '%.2f' % label,
                  va='center', ha='left',
                  fontsize=self.pinf.sch.subtxtsize,
                  bbox=dict(boxstyle=tag_box_style,
                            facecolor=facecolor,
                            edgecolor=edgecolor,
                            alpha=alpha),
                  # 3.0 is the minimum default for text
                  zorder=self.pinf.zorder[ax] + 3.0,
                  **kwargs)
This fixes my problem, but I'm sure there are better solutions. I'm new to backtrader and python. This is my plotted image before the fix.
- backtrader administrators last edited by
Not sure if the image matches your description, given that there is no indicator plotted along the data.
The thing is, that everything that gets plotted on an axis contributes to the size of the
y-scale. If your indicator goes far away from data, you may want to plot it independently with
subplot=True and it won't then distort the scaling of the data.
Yes, the image matches my problem, as you can see the label 21.28 is far away from the top. subplot=True will plot everything in a new frame, this is not what I wanted in this case. I hope everything is plotted on the K lines, and this is always good if I don't use start and end parameters in plot function.
- backtrader administrators last edited by
That 21.28 is not easy to spot if not looking for it ... now the situation is clear.
Scalar::Lazy - Yet another lazy evaluation in Perl
$Id: Lazy.pm,v 0.3 2008/06/01 17:09:08 dankogai Exp dankogai $
use Scalar::Lazy;

my $scalar = lazy { 1 };
print $scalar; # you don't have to force

# Y-combinator made easy
my $zm = sub { my $f = shift;
    sub { my $x = shift;
          lazy { $f->($x->($x)) } }->(
    sub { my $x = shift;
          lazy { $f->($x->($x)) } })
};
my $fact = $zm->(sub { my $f = shift;
                       sub { my $n = shift;
                             $n < 2 ? 1 : $n * $f->($n - 1) } });
print $fact->(10); # 3628800
The classical way to implement lazy evaluation in an eager-evaluating languages (including perl, of course) is to wrap the value with a closure:
sub delay {
    my $value = shift;
    sub { $value }
}
my $l = delay(42);
Then evaluate the closure whenever you need it.
my $v = $l->();
Marking the variable lazy can be easier with prototypes:
sub delay(&) { $_[0] }
my $l = delay { 42 }
But forcing the value is pain in the neck.
This module makes it easier by making the value auto-forcing.
Check the source. That's what the source is for.
There are various CPAN modules that do what this does. But I found the others too complicated. Hey, the whole code is only 25 lines long! (Well, was until 0.03.) It nicely fits in a good old terminal screen.
The closest module is Scalar::Defer, a brainchild of Audrey Tang. But I didn't like the way it (ab)?uses namespace.
Data::Thunk depends too many modules.
And Data::Lazy is overkill.
All I needed was auto-forcing and this module does just that.
This module exports two functions: lazy and delay.
lazy { value }
is really:
Scalar::Lazy->new(sub { value });
You can optionally set the second parameter. If set, the value becomes constant. The following example illustrates the difference.

my $x = 0;
my $once = lazy { ++$x } 'init'; # $once is always 1
is $once, 1, 'once';
is $once, 1, 'once';
my $succ = lazy { ++$x }; # $succ always increments $x
isnt $succ, 1, 'succ';
is $succ, 3, 'succ';
Makes a lazy variable which auto-forces on demand.
You don't really need to call this method (that's the whole point of this module!) but if you want, you can
my $l = lazy { 1 }; my $v = $l->force;
Dan Kogai,
<dankogai at dan.co.jp>
Please report any bugs or feature requests to
bug-scalar-lazy at rt.cpan.org, or through the CPAN request tracker for Scalar::Lazy.
You can also look for information at:
Highly inspired by Scalar::Defer by Audrey Tang.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
LOCKF(3) Linux Programmer's Manual LOCKF(3)
lockf - apply, test or remove a POSIX lock on an open file
#include <unistd.h>

int lockf(int fd, int cmd, off_t len);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

lockf():
    _XOPEN_SOURCE >= 500
        || /* Glibc since 2.19: */ _DEFAULT_SOURCE
        || /* Glibc <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
DESCRIPTION
lockf() applies, tests or removes a POSIX lock on a section of an open file, as specified by cmd (F_LOCK, F_TLOCK, F_ULOCK, or F_TEST). The section starts at the current file position and extends len bytes.

RETURN VALUE
On success, zero is returned. On error, -1 is returned, and errno is set to indicate the error.

ATTRIBUTES
┌───────────┬───────────────┬─────────┐
│ Interface │ Attribute     │ Value   │
├───────────┼───────────────┼─────────┤
│ lockf()   │ Thread safety │ MT-Safe │
└───────────┴───────────────┴─────────┘
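The following sketch (not part of the original page; lock_demo and the scratch-file template are illustrative) takes, tests and releases a whole-file lock on a private temporary file:

```c
#define _XOPEN_SOURCE 700   /* exposes lockf() and mkstemp() */
#include <stdlib.h>
#include <unistd.h>

/* Returns 0 when lock, test and unlock all succeed. */
int lock_demo(void)
{
    char path[] = "/tmp/lockf_demo_XXXXXX";
    int fd = mkstemp(path);                 /* private scratch file */
    if (fd == -1)
        return -1;

    int rc = -1;
    /* len == 0 locks from the current offset to end of file */
    if (lockf(fd, F_LOCK, 0) == 0 &&
        lockf(fd, F_TEST, 0) == 0 &&        /* no conflict: we hold it */
        lockf(fd, F_ULOCK, 0) == 0)
        rc = 0;

    close(fd);
    unlink(path);
    return rc;
}
```

Note that F_TEST succeeds here because POSIX locks never conflict with the process that holds them; a second process testing the same region would see EACCES or EAGAIN instead.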
CONFORMING TO
POSIX.1-2001, POSIX.1-2008, SVr4.

COLOPHON
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.

GNU                          2021-03-22                          LOCKF(3)
Pages that refer to this page: fcntl(2), flock(2), flockfile(3), system_data_types(7), lslocks(8)
Here’s an introduction to Pandas, an open source software library that’s written in Python for data manipulation and analysis. Pandas facilitates the manipulation of numerical tables and the time series.
In recent times, it has been proven again and again that data has become an increasingly important resource. Now, with the Internet boom, large volumes of data are being generated every second. To stay ahead of the competition, companies need efficient ways of analysing this data, which can be represented as a matrix, using Python’s mathematical package, NumPy.
The problem with NumPy is that it doesn’t have sufficient data analysis tools built into it. This is where Pandas comes in. It is a data analysis package, which is built to integrate with NumPy arrays. Pandas has a lot of functionality, but we will cover only a small portion of it in this article.
Getting started
Installing Pandas is a one-step process if you use Pip. Run the following command to install Pandas.
sudo pip install pandas
If you face any difficulties, consult the official Pandas installation documentation. You can now try importing Pandas into your Python environment by issuing the following command:
import pandas
In this tutorial, we will be using data from Weather Underground. The dataset for this article, a CSV file containing a year of daily weather observations, can be imported into Pandas using:

data = pandas.read_csv('weather_year.csv')
The read_csv function creates a dataframe. A dataframe is a tabular representation of the data read. You can get a summary of the dataset by printing the object. The output of the print is as follows:
data
<class pandas.core.frame.DataFrame>
Int64Index: 366 entries, 0 to 365
Data columns:
EDT                          366 non-null values
Max TemperatureF             366 non-null values
Mean TemperatureF            366 non-null values
Min TemperatureF             366 non-null values
Max Dew PointF               366 non-null values
MeanDew PointF               366 non-null values
Min DewpointF                366 non-null values
Max Humidity                 366 non-null values
Mean Humidity                366 non-null values
Min Humidity                 366 non-null values
Max Sea Level PressureIn     366 non-null values
Mean Sea Level PressureIn    366 non-null values
Min Sea Level PressureIn     366 non-null values
Max VisibilityMiles          366 non-null values
Mean VisibilityMiles         366 non-null values
Min VisibilityMiles          366 non-null values
Max Wind SpeedMPH            366 non-null values
Mean Wind SpeedMPH           366 non-null values
Max Gust SpeedMPH            365 non-null values
PrecipitationIn              366 non-null values
CloudCover                   366 non-null values
Events                       162 non-null values
WindDirDegrees               366 non-null values
dtypes: float64(4), int64(16), object(3)
As you can see, there are 366 entries in the given dataframe. You can get the column names using data.columns.
The output of the command is given below:
data.columns
Index([EDT, ..., WindDirDegrees], dtype=object)
To print a particular column of the dataframe, you can simply index it as data['EDT'] for a single column or data[['EDT','Max Humidity']] for multiple columns. The output for data['EDT'] is:
data['EDT']
0      2012-3-10
1      2012-3-11
2      2012-3-12
3      2012-3-13
4      2012-3-14
5      2012-3-15
6      2012-3-16
...
361    2013-3-6
362    2013-3-7
363    2013-3-8
364    2013-3-9
365    2013-3-10
Name: EDT, Length: 366
And the output for data[['EDT','Max Humidity']] is:

data[['EDT','Max Humidity']]

<class pandas.core.frame.DataFrame>
Int64Index: 366 entries, 0 to 365
Data columns:
EDT             366 non-null values
Max Humidity    366 non-null values
dtypes: int64(1), object(1)
Sometimes, it may be useful to only view a part of the data, just so that you can get a sense of what kind of data you are dealing with. Here you can use the head and tail functions to view the start and end of your dataframe:
data['Max Humidity'].head()

0    74
1    78
2    90
3    93
4    93
Name: Max Humidity

Note: The head and tail functions take a parameter which sets the number of rows to be displayed, and can be used as data['Max Humidity'].head(n), where 'n' is the number of rows. The default is 5.
Working with columns
Now that we have a basis on which to work with our dataframe, we can explore various useful functions provided by Pandas like std to compute the standard deviation, mean to compute the average value, sum to compute the sum of all elements in a column, etc. So if you want to compute the mean of the Max Humidity column, for instance, you can use the following commands:
data['Max Humidity'].mean()
90.027322404371589

data['Max Humidity'].sum()
32950

data['Max Humidity'].std()
9.10843757197798
Note: Most of the Pandas functions ignore NaNs, by default. These regularly occur in data and a convenient way of handling them must be established. This topic is covered more in detail later in this article.
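As a quick, self-contained illustration of this default (using a small synthetic Series rather than the weather data — the values here are made up):

```python
import numpy as np
import pandas as pd

# Reductions skip NaN by default
s = pd.Series([1.0, np.nan, 3.0])

print(s.mean())               # 2.0 -- the NaN is ignored: (1 + 3) / 2
print(s.mean(skipna=False))   # nan -- force NaN propagation instead
```

Passing skipna=False is the way to make a missing value poison the result when that is what you actually want.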
The std and sum function can be used in a similar manner. Also, rather than running these functions on individual columns, you can run them on the entire dataframe, as follows:
data.mean()
Max TemperatureF             66.803279
Mean TemperatureF            55.683060
Min TemperatureF             44.101093
Max Dew PointF               49.549180
MeanDew PointF               44.057377
Min DewpointF                37.980874
Max Humidity                 90.027322
Mean Humidity                67.860656
Min Humidity                 45.193989
Max Sea Level PressureIn     30.108907
Mean Sea Level PressureIn    30.022705
Min Sea Level PressureIn     29.936831
Max VisibilityMiles           9.994536
Mean VisibilityMiles          8.732240
Min VisibilityMiles           5.797814
Max Wind SpeedMPH            16.418033
Mean Wind SpeedMPH            6.057377
Max Gust SpeedMPH            22.764384
CloudCover                    2.885246
WindDirDegrees              189.704918
Using apply for bulk operations
As we have already seen, functions like mean, std and sum work on entire columns, but sometimes it may be useful to apply our own functions to entire columns of the dataframe. For this purpose, Pandas provides the apply function, which takes an anonymous function as a parameter and applies it to every element in the column. In this example, let us try to get the square of every element in a column. We can do this with the following code:
data['Max Humidity'].apply(lambda d: d**2)
0      5476
1      6084
2      8100
3      8649
4      8649
5      8100
...
361    8464
362    7225
363    7744
364    5625
365    2916
Name: Max Humidity, Length: 366
Note: In the lambda function, the parameter d is implicitly passed by Pandas, and contains each element of the column.
Now you may wonder why you can’t just do this with a loop. Well, the answer is that this operation was written in one single line, which saves code writing time and is much easier to read.
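For simple arithmetic like squaring, it is also worth knowing that apply is not the only option: Pandas columns support NumPy-style vectorized operators, so the same result can be written directly on the column. A sketch with made-up humidity values:

```python
import pandas as pd

df = pd.DataFrame({'Max Humidity': [74, 78, 90]})

squared_apply = df['Max Humidity'].apply(lambda d: d ** 2)
squared_vector = df['Max Humidity'] ** 2   # vectorized, same result

print(squared_vector.tolist())   # [5476, 6084, 8100]
```

apply earns its keep when the function has no vectorized equivalent; for plain arithmetic the vectorized form is both shorter and faster.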
Dealing with NaN values
Pandas provides a function called isnull, which returns a ‘True’ or ‘False’ value depending on whether the value of an element in the column is NaN or None. These values are treated as missing values from the dataset, and so it is always convenient to deal with them separately. We can use the apply function to test every element in a column to see if any NaNs are present. You can use the following command:
e = data['Events'].apply(lambda d: pandas.isnull(d))
e
0      True
1      False
2      False
3      True
4      True
5      False
...
361    False
362    True
363    True
364    True
365    True
Name: Events, Length: 366
As you can see, a list of Booleans was returned, representing values that are NaN. Now there are two options of how to deal with the NaN values. First, you can choose to drop all rows with NaN values using the dropna function, in the following manner:
data.dropna(subset=['Events'])
<class pandas.core.frame.DataFrame>
Int64Index: 162 entries, 1 to 361
Data columns:
EDT                          162 non-null values
Max TemperatureF             162 non-null values
Mean TemperatureF            162 non-null values
Min TemperatureF             162 non-null values
Max Dew PointF               162 non-null values
MeanDew PointF               162 non-null values
Min DewpointF                162 non-null values
Max Humidity                 162 non-null values
Mean Humidity                162 non-null values
Min Humidity                 162 non-null values
Max Sea Level PressureIn     162 non-null values
Mean Sea Level PressureIn    162 non-null values
Min Sea Level PressureIn     162 non-null values
Max VisibilityMiles          162 non-null values
Mean VisibilityMiles         162 non-null values
Min VisibilityMiles          162 non-null values
Max Wind SpeedMPH            162 non-null values
Mean Wind SpeedMPH           162 non-null values
Max Gust SpeedMPH            162 non-null values
PrecipitationIn              162 non-null values
CloudCover                   162 non-null values
Events                       162 non-null values
WindDirDegrees               162 non-null values
dtypes: float64(4), int64(16), object(3)
As you can see, there are only 162 rows, which don’t contain NaNs in the column Events. The other option you have is to replace the NaN values with something easier to deal with using the fillna function. You can do this in the following manner:
data['Events'].fillna('')
0
1      Rain
2      Rain
3
4
5      Rain-Thunderstorm
6
7      Fog-Thunderstorm
8      Rain
...
362
363
364
365
Name: Events, Length: 366
Accessing individual rows
So far we have discussed methods dealing with indexing entire columns, but what if you want to access a specific row in your dataframe? Well, Pandas provides a function called irow, which lets you get the value of a specific row. You can use it as follows:
data.irow(0)
EDT                          2012-3-10
Max TemperatureF             56
Mean TemperatureF            40
Min TemperatureF             24
Max Dew PointF               24
MeanDew PointF               20
Min DewpointF                16
Max Humidity                 74
Mean Humidity                50
Min Humidity                 26
Max Sea Level PressureIn     30.53
Mean Sea Level PressureIn    30.45
Min Sea Level PressureIn     30.34
Max VisibilityMiles          10
Mean VisibilityMiles         10
Min VisibilityMiles          10
Max Wind SpeedMPH            13
Mean Wind SpeedMPH           6
Max Gust SpeedMPH            17
PrecipitationIn              0.00
CloudCover                   0
Events                       NaN
WindDirDegrees               138
Name: 0
Note: Indices start from 0 for indexing the rows.
Filtering
Sometimes you may need to find the rows of special interest to you. Let's suppose we want to find the data points in our dataframe which have a mean temperature greater than 40 and less than 50. You can filter out values from your dataframe using the following syntax:
data[(data['Mean TemperatureF']>40) & (data['Mean TemperatureF']<50)]
<class 'pandas.core.frame.DataFrame'>
Int64Index: 51 entries, 1 to 364
Data columns:
EDT                          51 non-null values
Max TemperatureF             51 non-null values
Mean TemperatureF            51 non-null values
Min TemperatureF             51 non-null values
Max Dew PointF               51 non-null values
MeanDew PointF               51 non-null values
Min DewpointF                51 non-null values
Max Humidity                 51 non-null values
Mean Humidity                51 non-null values
Min Humidity                 51 non-null values
Max Sea Level PressureIn     51 non-null values
Mean Sea Level PressureIn    51 non-null values
Min Sea Level PressureIn     51 non-null values
Max VisibilityMiles          51 non-null values
Mean VisibilityMiles         51 non-null values
Min VisibilityMiles          51 non-null values
Max Wind SpeedMPH            51 non-null values
Mean Wind SpeedMPH           51 non-null values
Max Gust SpeedMPH            51 non-null values
PrecipitationIn              51 non-null values
CloudCover                   51 non-null values
Events                       23 non-null values
WindDirDegrees               51 non-null values
dtypes: float64(4), int64(16), object(3)
Note: The conditions data['Mean TemperatureF']>40 and data['Mean TemperatureF']<50 each return a Boolean NumPy array, and we must use brackets to separate them before using the & operator, or else you will get an error message saying that the expression is ambiguous.
Now you can easily get meaningful data from your dataframe by simply filtering out the data that you aren’t interested in. This provides you with a very powerful technique that you can use in conjunction with higher Pandas functions to understand your data.
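To make the pattern concrete end to end, here is a self-contained sketch (with made-up temperatures, not the weather dataset) that combines such a filter with an aggregate:

```python
import pandas as pd

df = pd.DataFrame({'Mean TemperatureF': [35, 42, 45, 48, 55]})

# Boolean masks combined with & select only the 40-50 band ...
band = df[(df['Mean TemperatureF'] > 40) & (df['Mean TemperatureF'] < 50)]

# ... and any column function can then run on just those rows
print(len(band))                           # 3
print(band['Mean TemperatureF'].mean())    # 45.0
```

The filtered frame is a regular DataFrame, so everything shown earlier (mean, std, apply, and so on) works on it unchanged.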
Getting data out
You can easily write data out using the to_csv function, which writes your dataframe out as a CSV file:

data.to_csv('weather-mod.csv')

Want tab-separated output instead? No problem:

data.to_csv('data/weather-mod.tsv', sep='\t')
Note: Generally, the dataframe can be indexed by any Boolean NumPy array; in a sense, only rows whose value is True will be retained. For example, if we use the variable e (e = data['Events'].apply(lambda d: pandas.isnull(d))), which flags all rows that have NaN values for data['Events'], as data[e], we will get a dataframe containing only the rows that have NaN values for data['Events'].
You can also write your data out in other formats that can be found on the Pandas doc.
This article only covers the basics of what can be done with Pandas. It also supports a lot of higher level functions like plotting data to give a better feel of the data being dealt with. If you want to learn more about Pandas, check the online documentation. It is very readable, user-friendly and is a great place to get a better understanding of how Pandas works. Frameworks like Pandas let a Python application take advantage of such data analysis tools easily. There are also other languages which support data analysis and you may want to check them out. These include R, MATLAB, Julia and Octave. To wrap up, these languages and packages greatly increase your understanding of data. In a world where data is becoming increasingly important, it is critical that we deal with our data smartly.
Documentation Improvements
A list of ideas (concrete or theoretical) on ways to improve documentation.
The origins of this page are this mailing list thread but people should feel free to add their own ideas...
Contents
Website
Need more obvious info for people just getting started, both on and ... most stuff is hidden in left nav
- where/how to download
- getting started tutorial
- Old news should be purged more often
In Depth Docs on Key Concepts
- There needs to be some docs that explain what analysis is at the top level, similar to the current Scoring documentation.
- Performance is another topic which would really benefit from a 'best practice' guide.
Tutorial
- The demo/tutorial needs to be brought into the current Lucene century.
See LUCENE-805
- Most important part of this, I think is the "big picture" overview of why and when and how.
Javadocs
- automated tools to help identify missing javadocs?
- is there any way to easily allow annotation of javadocs (a la PHP and MySQL)?
- Need package level docs for every package- see:
- Need class level docs for every public class
- Need method level docs for every public method - particularly all methods used in any tutorial or "In Depth" doc (ie: scoring.html, and any similar docs that get written)
- "core plus contribs" nature of javadocs hard to understand for new users ... plethora of classes can be overwhelming and hard to navigate - see: LUCENE-897
- better auditing of all javadocs in a class needs to be done when applying patches (docs elsewhere in the class may refer to things that have changed)
Wiki
- A best practices page on the Wiki would be great.
- Glossary of terms, etc.
Misc Process Issues
- Should we focus more on wiki docs or committed docs?
- wiki is easier for community to contribute to
- wiki pages can't be included in releases
- Before doing a release, we have 1-2 weeks of code freeze, and we focus on documentation and cleaning up bugs in JIRA.
- Get the Hudson JIRA integration stuff hooked in so we can know if patches are good faster, meaning we can turn around documentation patches, and others, faster
- How do we leverage vast amounts of info in mailing list archives?
https://wiki.apache.org/lucene-java/Documentation_Improvements?action=diff
This method is equivalent to the StreamWriter(String, Boolean) constructor overload. If the file specified by path does not exist, it is created. If the file does exist, write operations to the StreamWriter append text to the file. Additional threads are permitted to read the file while it is open. The following example appends text to a file; the method creates a new file if the file doesn't exist. However, the directory named temp on drive C must exist for the example to complete successfully.
using System;
using System.IO;

class Test
{
    public static void Main()
    {
        string path = @"c:\temp\MyTest.txt";

        // This text is added only once to the file.
        if (!File.Exists(path))
        {
            // Create a file to write to.
            using (StreamWriter sw = File.CreateText(path))
            {
                sw.WriteLine("Hello");
                sw.WriteLine("And");
                sw.WriteLine("Welcome");
            }
        }

        // This text is always added, making the file longer over time
        // if it is not deleted.
        using (StreamWriter sw = File.AppendText(path))
        {
            sw.WriteLine("This");
            sw.WriteLine("is Extra");
            sw.WriteLine("Text");
        }

        // Open the file to read from.
        using (StreamReader sr = File.OpenText(path))
        {
            string s = "";
            while ((s = sr.ReadLine()) != null)
            {
                Console.WriteLine(s);
            }
        }
    }
}
Requires FileIOPermission for appending to the specified file. Associated enumeration: FileIOPermissionAccess.Append
Universal Windows Platform: Available since 10
.NET Framework: Available since 1.1
Silverlight: Available since 2.0
Windows Phone Silverlight: Available since 7.0
https://msdn.microsoft.com/en-us/library/vstudio/system.io.file.appendtext.aspx
Crazy? Probably, but it won’t be the first time that has been suggested.
First, let me offer some background. Recently I have had the opportunity to experience what content management systems do and how they are utilized. Products like Documentum and Alfresco are meant for general use. By their nature these systems are less efficient and more complex than something built for a specific purpose. For some agencies this works out well; they don't have IT organizations that could develop it in house. For many, an ECM is their system of record (SOR), and this is a good solution. When the system of record lies outside the ECM there is less to be gained. There may be an existing workflow that doesn't match the general flow that the ECM defines. I felt there had to be a simpler way. How difficult could it be? I am not suggesting to build for general use; instead, build only what is needed.
Note: The model I used is an EHR, Electronic Health Record.
The core
Basically there are only three pieces: a database to store metadata, a file system to store the content, and processes for creating, retrieving, updating and deleting (CRUD) the information.
Other stuff
Thumbnails: For an EHR system this would not likely be needed. There are not a great variety of document types where a “preview” would be useful. This could be a requirement in other applications.
Transformation: EHR systems use a small number of standard file formats. It is required to show the HL7 data in its native format; converting the data to an image or PDF is not done. But this could be a requirement in other applications.
Version: This could be useful in any content store.
Starting out old school
My first thought was to go with what I know: Java, Spring Boot and JPA. Start with a database. Since this will be on AWS, MariaDB is a good place to start; it's MySQL compatible and free to start with. For an EHR system the content is the patient data, stored in HL7 format. Since the system of record is the content, the database doesn't have to be very complex. Two or three tables is more than enough.
Create an app
Using Eclipse I created a new Spring Boot, JPA application (including Hibernate). Eclipse also generated the entities from the database and some of the support code. A few hours later I had a Spring Boot CRUD app that could read and write to a database. S3 would be the choice for the content since the app would be running on AWS. Fortunately AWS offers nice Java support for S3.
With this done I had a basic content management system. AWS suggests Elastic Beanstalk for deploying applications. It's not the simplest thing, but it does work. My REST service was very simple: a JSON file for metadata and the HL7 (XML) file for content.
This was not a production-ready system, but with AWS it was pretty quick and simple to get something working.
But….
Something didn't feel right. This is the same process/framework that everyone is doing; it's not new. Since I am studying for the AWS exams, shouldn't I consider this from the AWS point of view?
Lambda
If you want to know what AWS thinks is the future, think Lambda: "Run code without thinking about servers. Pay for only the compute time you consume." Lambda is what powers Lex and Alexa. I won't repeat what AWS says about Lambda, but AWS is putting a lot of effort into this.
Building a content management system based around Lambda
I still need a database (or do I?) and a file system to store the content. I already have the database and S3 from the Java project, no need to start over. What is missing is the CRUD app that I built with Java.
Since they are going to sit idle until needed, Lambda functions should be lightweight and quick to start up. AWS allows Lambda functions to be written in JavaScript (Node.js), Python, Java or C#. Java and C# seem too heavy; Spring and Hibernate don't fit into this picture. I felt this left two options, JavaScript or Python. Both have their advantages (I use both). I went with Python. I learned later that JavaScript is the only choice for some third-party tools. As in the Java application, I have chosen to write the content and the metadata to S3. The metadata is written to the database as well. S3 has an option to add "metadata" to the object. By writing the data as a file I could leverage Solr to search content and metadata! In theory this eliminates the need for a database.
AWS has support and examples for creating Lambda functions in Python. "pymysql" and "boto3" are Python libraries for MySQL and S3. Both are available and do not require the developer to add them.
Python is deployed to Lambda as a deployment package. This is simply a zip file with your Python code and any external libraries not already supported by AWS. The trick to this is getting the Python file and Lambda handler names correct. I used contentHealthLambda.py and contentHealthLambdaHandler as the handler function. Below is how they are used in the Lambda configuration.
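As a sketch, the packaging step can be reproduced with the standard library alone. The file and handler names are the illustrative ones above; a real package would also vendor any third-party libraries such as those pip-installed into the build directory.

```python
# Sketch: assemble a Lambda deployment package as a plain zip file.
# The module must sit at the archive root so Lambda can import it.
import os
import zipfile

os.makedirs("build", exist_ok=True)
with open("build/contentHealthLambda.py", "w") as f:
    f.write('def contentHealthLambdaHandler(event, context):\n    return "got it"\n')

with zipfile.ZipFile("contentHealthLambda.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("build/contentHealthLambda.py", arcname="contentHealthLambda.py")
    # Libraries AWS does not bundle would be added the same way, e.g.
    # everything placed under build/ by "pip install pymysql -t build".

with zipfile.ZipFile("contentHealthLambda.zip") as zf:
    print(zf.namelist())  # ['contentHealthLambda.py']
```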
The code
Note: The code I am including is basic. Almost all of the error handling has been ignored.
Standard Python imports
import pymysql
import datetime
import boto3
from cStringIO import StringIO
The lambda handler function definition
def contentHealthLambdaHandler(event, context):
Lambda passes parameters as a Python Dictionary. I am passing in two parameters, metaData and content.
content = event['content']
metaData = event['metaData']
The metadata is patient information (patientNumber, patientFirstName, patientLastName, ...) in JSON format. I have left out the parsing of this as it's trivial.
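Were the parsing included, it might look like this; the field values are invented, and only the field names come from the text above.

```python
# Hypothetical parse of the patient-information JSON described above.
import json

metaData = '{"patientNumber": "123456", "patientFirstName": "Frank", "patientLastName": "Smith"}'
patient = json.loads(metaData)
print(patient["patientNumber"])    # 123456
print(patient["patientLastName"])  # Smith
```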
Make sure the parameters are included:
if 'metaData' in event and 'content' in event:
    content = event['content']
    metaData = event['metaData']
else:
    return "error"
Create an S3 resource. This is used to write or read to S3.
s3 = boto3.resource('s3')
Store the data to S3. In this case the bucket is fixed but it could be passed as a parameter.
target_bucket = "com.contentstore"
Create a file name.
target_file = metaData+"_md"+str(createDateTime)
# temp_handle.read() reads like a file handle
temp_handle = StringIO(metaData)
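As a side note, the same trick works in Python 3 with io.StringIO (cStringIO is Python 2 only); the wrapped string reads like a file handle:

```python
# io.StringIO wraps a string so consumers expecting a file handle can read it.
from io import StringIO

temp_handle = StringIO("patient metadata payload")
print(temp_handle.read())  # patient metadata payload
```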
Create or get the bucket
bucket = s3.Bucket(target_bucket)
Write to the bucket
result = bucket.put_object( Key=target_file, Body=temp_handle.read())
That is all there is: six lines of code (error handling not included). Six more lines are required for storing the content to S3. I did not test with a very large file and there may be more effort required in those cases; I have not noticed anyone talking about additional issues.
Write to the database
The connect string is familiar to anyone who has done Java database coding before.
conn = pymysql.connect(host='myurl',user='ec2user',passwd='xxxxyyy',db='mydb')
My schema contains two tables, a base document and a patient document. Since the patient document has a foreign key to the base document, I have to store them separately. There are Python ORMs that would probably handle this, but it's so simple that basic SQL will suffice. All of the database code is wrapped in a try-except clause; if any of the executes fail, the commit will never happen.
Store the base document
cursor = conn.cursor()
baseDocumentInsert = "INSERT INTO documentbase(createDateTime,description,contentURL) VALUES(%s,%s,%s)"
args = (createDateTime, "healthRecord", result.bucket_name + "/" + result.key)
cursor.execute(baseDocumentInsert, args)
Store the patient document. Note that “cursor.lastrowid” is the id to the base document and will be the patient document foreign key.
patientDocumentInsert = "INSERT INTO patientdocument(patientNumber,patientFirstName,patientLastName,docBase) VALUES(%s,%s,%s,%s)"
args = (metaData, "Frank", "smith", cursor.lastrowid)
cursor.execute(patientDocumentInsert, args)
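The lastrowid-as-foreign-key pattern can be checked locally with sqlite3 standing in for pymysql (the schema here is reduced to the columns mentioned):

```python
# Demonstrates cursor.lastrowid linking a child row to its parent document,
# using sqlite3 as a local stand-in for pymysql.
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE documentbase (id INTEGER PRIMARY KEY, description TEXT)")
cursor.execute("CREATE TABLE patientdocument (id INTEGER PRIMARY KEY, "
               "patientNumber TEXT, docBase INTEGER REFERENCES documentbase(id))")

cursor.execute("INSERT INTO documentbase (description) VALUES (?)", ("healthRecord",))
base_id = cursor.lastrowid  # id of the base document just inserted
cursor.execute("INSERT INTO patientdocument (patientNumber, docBase) VALUES (?, ?)",
               ("123456", base_id))
conn.commit()

print(cursor.execute("SELECT patientNumber, docBase FROM patientdocument").fetchone())
# ('123456', 1)
```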
If nothing has failed, commit the changes.
conn.commit()
This is the complete function. It will store content and metadata to S3 and metadata to the database.
import pymysql
import datetime
import boto3
from cStringIO import StringIO

def contentHealthLambdaHandler(event, context):
    if 'metaData' in event and 'content' in event:
        print " metaData: " + event['metaData']
        print " content: " + event['content']
        content = event['content']
        metaData = event['metaData']
    else:
        return "error"

    createDateTime = datetime.datetime.now()
    s3 = boto3.resource('s3')

    # store the meta data
    contents = metaData
    target_bucket = "com.contentstore"
    target_file = metaData + "_md" + str(createDateTime)
    fake_handle = StringIO(contents)
    bucket = s3.Bucket(target_bucket)
    # fake_handle.read() reads like a file handle
    result = bucket.put_object(Key=target_file, Body=fake_handle.read())

    # store the content data
    contents = content
    target_bucket = "com.contentstore"
    target_file = metaData + str(createDateTime)
    fake_handle = StringIO(contents)
    bucket = s3.Bucket(target_bucket)
    result = bucket.put_object(Key=target_file, Body=fake_handle.read())

    conn = pymysql.connect(host='myurl', user='ec2user', passwd='xxxxx', db='mydb')
    try:
        cursor = conn.cursor()
        baseDocumentInsert = "INSERT INTO documentbase(createDateTime,description,contentURL) VALUES(%s,%s,%s)"
        args = (createDateTime, "healthRecord", result.bucket_name + "/" + result.key)
        cursor.execute(baseDocumentInsert, args)
        print "baseDocumentInsert id " + str(cursor.lastrowid)
        patientDocumentInsert = "INSERT INTO patientdocument(patientNumber,patientFirstName,patientLastName,docBase) VALUES(%s,%s,%s,%s)"
        args = (metaData, "Frank", "smith", cursor.lastrowid)
        cursor.execute(patientDocumentInsert, args)
        conn.commit()
        cursor.execute("SELECT * FROM patientdocument")
        # print all the rows
        for row in cursor.fetchall():
            print row
        conn.close()
    except Exception as e:
        print(e)
    return "got it"
Testing
The first level of testing is done from the Lambda AWS console
Cloudwatch logs all of the output so you can easily see what happened.
Rest service
In order to use the Lambda function it needs to be exposed as a REST endpoint. This is done using the API Gateway. The process is well documented so I won't go into it. The API Gateway setup can be done separately, as I did, or at the time the Lambda function is created.
This link walks you through the process: Build an API to Expose a Lambda Function
Testing the new rest service
The simplest way to do this is using Postman. When you create the REST endpoint, the API Gateway console will supply an Invoke URL. This can be used in Postman to test the new service.
The other way to test is using a Python client. The central part of the client is:
requests.post('', data), where 'data' is a JSON string.
The content is in HL7 format. The metadata is simply a randomly generated patient id. The code below creates twenty POST requests to upload content and metadata to the Lambda service.
starttime = datetime.datetime.now()
for i in range(1, 20):
    id = str(randint(100000, 400000))  # generating a random patient ID
    print(id)
    data = json.dumps({'metaData': id, 'content': 'data removed for simplicity '})
    r = req = requests.post('', data)
endtime = datetime.datetime.now()
print str(endtime - starttime)
The result is less than 1 sec per POST call. The data is small (3K), but this was done from my home laptop into AWS. I would expect better rates in a "real" environment.
The content data in S3
AWS Maria DB
One issue with Lambda is that it is slower to respond the first time, since it has to spin up. I am not clear on the time window where the function is active vs. idle; it's something I need to look into.
Other stuff
As mentioned earlier there are functions needed beyond storing data.
- Read or search for meta data. def contentHealthLambdaHandlerRead(event, context):
- def contentHealthLambdaHandlerReadContent(event, context):
Transformation
This involves converting various document formats into one standard format, likely PDF. Other ECMs use third-party tools to do this work. Using Lambda would not prevent using a similar third-party tool, but I would prefer that conversion be done beforehand, in the code calling the REST service; it's not an integral part of the content store. Another way to achieve this is to use a Lambda trigger to start the transformation. ImageMagick or LibreOffice can be used to convert the files as they are written to S3.
Thumbnails
This is also where third-party tools come into play. Lambda has a great way to handle this: triggers. A function is set up to trigger on a file added to S3, and it handles the process of creating the image or images. The examples of this use something like ImageMagick. The only issue I found is that it is currently only usable in Lambda with JavaScript. It's not a big deal, but I'd have to part from Python for a while.
Versions
S3 can version documents automatically. AWS lifecycle rules can use versions to move data to other storage options such as Glacier. "boto3" supports S3 versions, so it's possible to filter and return information based on versions.
Conclusion
Lambda is becoming AWS's path forward. They continue to improve it and add features.
This effort was for educational purposes. But it shows how the tools we have today can make building great software so much simpler.
https://rickerg.com/2017/09/12/content-management-with-aws-lambda/
An XSLT stylesheet is an XML document. It can and generally should have an XML declaration. It can have a document type declaration, although most stylesheets do not. The root element of this document is either stylesheet or transform ; these are synonyms for each other, and you can use either. They both have the same possible children and attributes. They both mean the same thing to an XSLT processor.
The stylesheet and transform elements, like all other XSLT elements, are in the http://www.w3.org/1999/XSL/Transform namespace. This namespace is customarily mapped to the xsl prefix so that you write xsl:transform or xsl:stylesheet rather than simply transform or stylesheet.
This namespace URI must be exactly correct. If even so much as a single character is wrong, the stylesheet processor will output the stylesheet itself instead of either the input document or the transformed input document. There's a reason for this (see Section 2.3 of the XSLT 1.0 specification, Literal Result Element as Stylesheet , if you really want to know), but the bottom line is that this weird behavior looks very much like a bug in the XSLT processor if you're not expecting it. If you ever do see your stylesheet processor spitting your stylesheet back out at you, the problem is almost certainly an incorrect namespace URI.
In addition to the xmlns:xsl attribute declaring this prefix mapping, the root element must have a version attribute with the value 1.0 . Thus, a minimal XSLT stylesheet, with only the root element and nothing else, is as shown in Example 8-2.
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
</xsl:stylesheet>
Perhaps a little surprisingly, this is a complete XSLT stylesheet; an XSLT processor can apply it to an XML document to produce an output document. Example 8-3 shows the effect of applying this stylesheet to Example 8-1.
<?xml version="1.0" encoding="utf-8"?>
Alan Turing
computer scientist
mathematician
cryptographer
Richard P Feynman
physicist
Playing the bongoes
You can see that the output consists of a text declaration plus the text of the input document. In this case, the output is a well-formed external parsed entity, but it is not itself a complete XML document.
Markup from the input document has been stripped. The net effect of applying an empty stylesheet, like Example 8-2, to any XML document is to reproduce the content but not the markup of the input document. To change that, we'll need to add template rules to the stylesheet telling the XSLT processor how to handle the specific elements in the input document. In the absence of explicit template rules, an XSLT processor falls back on built-in rules that have the effect shown here.
https://flylib.com/books/en/1.132.1.75/1/
Analyze FRAP movies with a Jython script
Here is a Jython script that does the analysis of a FRAP movie. It was developed during the Image Processing School Pilsen 2009, and updated to modern Fiji.
Once the user has loaded a good FRAP movie, well aligned with no drift, and has specified the ROI for the FRAP zone and another for the control zone, it should be possible to automate the analysis of the FRAP curve. This is what this script aims to do:
- Load a movie
- Draw a ROI for the FRAP zone, and store it as the first ROI in the ROI manager (by pressing the T key)
- Do the same for a control zone, out of the FRAP zone
Then load this script in the Script Editor, choose Language › Python, and run it. It will measure the FRAP intensity for all frames, try to find the FRAP frame (by finding the one with the minimal FRAP ROI intensity), and fit the FRAP curve by an increasing exponential. The parameters of the fit can then be read in the log window, and the FRAP curve and its fit are plotted. Careful: the background is taken as the intensity in the FRAP region just after the FRAP pulse.
This script uses only ImageJ functions for everything, but could be tuned to use the fancier plotting libraries included in Fiji, such as JFreeChart.
import java.awt.Color as Color
from ij import WindowManager as WindowManager
from ij.plugin.frame import RoiManager as RoiManager
from ij.process import ImageStatistics as ImageStatistics
from ij.measure import Measurements as Measurements
from ij import IJ as IJ
from ij.measure import CurveFitter as CurveFitter
from ij.gui import Plot as Plot
from ij.gui import PlotWindow as PlotWindow
import math

# Get ROIs
roi_manager = RoiManager.getInstance()
roi_list = roi_manager.getRoisAsArray()
# We assume first one is FRAP roi, the 2nd one is normalizing roi.
roi_FRAP = roi_list[0]
roi_norm = roi_list[1]

# Specify up to what frame to fit and plot.
n_slices = 30

# Get current image plus and image processor
current_imp = WindowManager.getCurrentImage()
stack = current_imp.getImageStack()
calibration = current_imp.getCalibration()

#############################################
# Collect intensity values

# Create empty lists of numbers
If = []  # FRAP values
In = []  # Norm values

# Loop over each slice of the stack
for i in range(0, n_slices):
    # Get the current slice
    ip = stack.getProcessor(i + 1)
    # Put the ROI on it
    ip.setRoi(roi_FRAP)
    # Make a measurement in it
    stats = ImageStatistics.getStatistics(ip, Measurements.MEAN, calibration)
    mean = stats.mean
    # Store the measurement in the list
    If.append(mean)
    # Do the same for non-FRAPed area
    ip.setRoi(roi_norm)
    stats = ImageStatistics.getStatistics(ip, Measurements.MEAN, calibration)
    mean = stats.mean
    In.append(mean)

# Gather image parameters
frame_interval = calibration.frameInterval
time_units = calibration.getTimeUnit()
IJ.log('For image ' + current_imp.getTitle())
IJ.log('Time interval is ' + str(frame_interval) + ' ' + time_units)

# Find minimal intensity value in FRAP and bleach frame
min_intensity = min(If)
bleach_frame = If.index(min_intensity)
IJ.log('FRAP frame is ' + str(bleach_frame + 1) + ' at t = ' + str(bleach_frame * frame_interval) + ' ' + time_units)

# Compute mean pre-bleach intensity
mean_If = 0.0
mean_In = 0.0
for i in range(bleach_frame):  # will loop until the bleach time
    mean_If = mean_If + If[i]
    mean_In = mean_In + In[i]
mean_If = mean_If / bleach_frame
mean_In = mean_In / bleach_frame

# Calculate normalized curve
normalized_curve = []
for i in range(n_slices):
    normalized_curve.append((If[i] - min_intensity) / (mean_If - min_intensity) * mean_In / In[i])

x = [i * frame_interval for i in range(n_slices)]
y = normalized_curve

xtofit = [i * frame_interval for i in range(n_slices - bleach_frame)]
ytofit = normalized_curve[bleach_frame:n_slices]

# Fitter
fitter = CurveFitter(xtofit, ytofit)
fitter.doFit(CurveFitter.EXP_RECOVERY_NOOFFSET)
IJ.log("Fit FRAP curve by " + fitter.getFormula())
param_values = fitter.getParams()
IJ.log(fitter.getResultString())

# Overlay fit curve, with oversampling (for plot)
xfit = [(t / 10.0 + bleach_frame) * frame_interval for t in range(10 * len(xtofit))]
yfit = []
for xt in xfit:
    yfit.append(fitter.f(fitter.getParams(), xt - xfit[0]))

plot = Plot("Normalized FRAP curve for " + current_imp.getTitle(), "Time (" + time_units + ')', "NU", [], [])
plot.setLimits(0, max(x), 0, 1.2)
plot.setLineWidth(2)
plot.setColor(Color.BLACK)
plot.addPoints(x, y, Plot.LINE)
plot.addPoints(x, y, PlotWindow.X)
plot.setColor(Color.RED)
plot.addPoints(xfit, yfit, Plot.LINE)
plot.setColor(Color.black)
plot_window = plot.show()

# Output FRAP parameters
thalf = math.log(2) / param_values[1]
mobile_fraction = param_values[0]
str1 = ('Half-recovery time = %.2f ' + time_units) % thalf
IJ.log(str1)
str2 = "Mobile fraction = %.1f %%" % (100 * mobile_fraction)
IJ.log(str2)
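The normalization at the heart of the script can be checked on synthetic numbers with plain Python; the values below are invented, and the formula is the one used in the script.

```python
# FRAP normalization on synthetic data: pre-bleach frames rescale to ~1,
# the bleach frame to 0 (background is the post-bleach minimum).
If = [100.0, 100.0, 20.0, 60.0, 80.0]  # FRAP ROI means; bleach at index 2
In = [50.0, 50.0, 50.0, 50.0, 50.0]    # control ROI means (constant here)

min_intensity = min(If)
bleach_frame = If.index(min_intensity)
mean_If = sum(If[:bleach_frame]) / bleach_frame
mean_In = sum(In[:bleach_frame]) / bleach_frame

normalized = [(v - min_intensity) / (mean_If - min_intensity) * mean_In / In[i]
              for i, v in enumerate(If)]
print(normalized[0])             # 1.0
print(normalized[bleach_frame])  # 0.0
```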
https://imagej.net/Analyze_FRAP_movies_with_a_Jython_script
I don't know if I'm missing something, but the plugin from sublimator reveals a strange behavior of view.replace() and view.erase(). (Have a look at the comments.)
all spaces left to | should not be replaced as len(sel) = 1 (because of the inserted space above)
it prints:| |1
View.erase behaves the same: all preceding spaces will be erased. No tabs, real spaces.
The behaviour you're seeing comes from running invertSelection: Any selection containing one or more empty selection regions (the usual case) is not truly invertible, such that running invertSelection twice will generally not return you to where you started.
The attached snippet tries to work around this by inserting spaces, but fails to update the selection regions to include the spaces, so there are still empty selection regions present.
Mea culpa! I was quite sure there was no bug, but I didn't realize there was a space at the end of the region.
Thanks sublimator, and no, you're not wasting my time! I'm learning from your solutions. It may be a long and exciting quest.
...and the quest succeeds. I filtered the input from sublimator and jps and changed the regex of remove_trailing. The regex now requires whitespace followed by a '\n' or '\r' as EOL marker. That covers Windows and Unix files.
#! python
# -*- coding: utf-8 -*-
"""
TextCommand:
stripSpace [preceding]
"""
import functools
import re
import textwrap
import sublime
import sublimeplugin
remove_trailing = functools.partial (re.compile (r"[ \t]+([\r\n])", re.M).sub, r'\1')
class StripSpaceCommand (sublimeplugin.TextCommand):
def stripRegion (self, view, region, operation):
view.replace (region, operation (view.substr (region)))
def run (self, view, args):
operation = (textwrap.dedent if 'preceding' in args else remove_trailing)
if not view.hasNonEmptySelectionRegion():
view.sel().add (sublime.Region (0, view.size()))
for region in view.sel():
self.stripRegion (view, region, operation)
def onPreSave (self, view):
view.runCommand ('invertSelection')
for sel in view.sel():
stripped = remove_trailing(view.substr (sel))
view.replace (sel, stripped)
for sel in view.sel():
end_pt = sel.end()
# if len(sel) == 0, cursor jumps to bof, therefore...
if end_pt != view.size():
view.insert (end_pt, ' ')
view.runCommand ('invertSelection')
for sel in view.sel():
if len (sel) == 1:
view.replace (sel, '')
elif len (sel) > 1:
view.erase (sublime.Region (sel.begin(), sel.begin()+1))
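For reference, the effect of remove_trailing can be checked standalone; the pattern below is my reconstruction of the garbled regex above (strip spaces/tabs just before a line ending, keeping the EOL character itself):

```python
# Strips trailing spaces/tabs before each line ending, preserving the EOL.
import functools
import re

remove_trailing = functools.partial(re.compile(r"[ \t]+([\r\n])", re.M).sub, r'\1')
print(repr(remove_trailing("foo  \nbar\t\r\nbaz")))  # 'foo\nbar\r\nbaz'
```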
https://forum.sublimetext.com/t/view-replace-view-erase-bug/190
using namespace std;

This tells the C++ compiler that it should use identifiers from the std library namespace. Without it, the first line in the main() block would need to be prefixed with std::.

std::cout << "Hello World";

Without the namespace line, every instance of cout would need the std:: prefix or the code will not compile.

cout << "Hello World";

The statement cout is the equivalent of printf in C. If you prefer to use printf then add the following line:

#include <stdio.h>

and change the cout to

printf ("Hello World") ;

C++ was designed to be compatible with C, which is why printf() still works. You can use either. In a later tutorial, I will show all the formatting possibilities with cout.
On the next page : Another example
http://cplus.about.com/od/learning1/ss/clessonone_2.htm
I have some code which creates a random number; the user has to guess it in 10 tries. The first cout is the option to "Cheat? (Y/N)", which will display the secret number (the random number), mostly just for me to see what's happening.
If the user selects Y the number is displayed. But it will also be displayed if they select N. This is one point of confusion.
After this they are prompted to enter a guess. If the guess is wrong they will be asked if they want to try again. Whether they say Y or N, the program just quits and outputs nothing.
Here is the code:
Here are some sample outputs/inputs after the code. Code:
#include <stdlib.h>
#include <iostream>
#include <cstdlib>
#include <time.h>
using namespace std;
int main()
{
srand ( time(NULL) );
int number = (rand() % 15) + 1;
int guess;
int trycount = 0;
char choice;
cout << "Cheat? (Y/N) \n";
cin >> choice;
if (choice == 'Y' || 'y')
cout << "The secret number is " << number << ".\n";
else if (choice == 'N')
cout << "Good for you.";
cout << "Please enter a guess: ";
cin >> guess;
while (guess != number && trycount < 10)
{
int choice;
cout << "Wrong, guess again? (Y/N): ";
cin >> choice;
if (choice == 'Y')
{
cout << "Guess again: ";
cin >> guess;
}
else if (choice != 'Y')
break;
{
if (guess < number)
cout << "Too low \n";
else if (guess > number)
cout << "Too high \n";
else (guess == number);
cout << "You guessed the number!";
}
trycount++;
}
return 0;
}
Cheat? (Y/N)
Y
The secret number is 13.
Please enter a guess: 6
Wrong, guess again? (Y/N): Y
The Debugger has exited with status 0.
----------
Cheat? (Y/N)
N
The secret number is 11.
Please enter a guess: 7
Wrong, guess again? (Y/N): N
The Debugger has exited with status 0.
Can anyone give me a hint about what I am missing? I feel like it is a simple solution but I've tried it for a bit and I can't seem to figure out what is going wrong. Thank you!
http://cboard.cprogramming.com/cplusplus-programming/129322-if-else-if-will-not-return-else-if-when-true-printable-thread.html
Agenda
See also: IRC log
<trackbot> Date: 02 October 2009
<dug> yves - I'm blocked again - could you look at the logs?
<Bob> scribe: Ram Jeyaraman
<Yves> dug, I only have access to the black lists
<Yves> and the ip you gave was not in it
<dug> 195.212.29.67
<dug> try that one
<dug> yves, is 195.212.29.67 in the blacklist
<Yves> yep, clearing it
<Yves> done
<dug> fixed! thanks
<Ram> scribe ram
<Bob> scribenick: Ram
<Bob> agenda:
<Bob> proposed xsd/wsdl at
RESOLUTION: XSD/WSDL from Doug looks good. No objections to publishing WS-Fragment as a FPWD.
<Bob>
<dug> proposal:
Doug: The text in 7.2 is the old
text; says you can advertise the policy.
... In the new text, I changed policy to metadata.
... This is an initial pass at what we discussed day before.
Asir: Two comments.
... These two paragraphs there are a continuation of the para under 7.2. When a dialect URI is defined we need to provide an identifier. Not defining is fine but we need to call it out.
Doug: There were some reading
flow issues; so i did it this way.
... I am fine with rearranging it if it improves the flow.
Asir: There is one semantic change relating to the identifier; identifier is not defined.
No objections so far on the suggested change.
Jeff: why does this proposal still talk about XML SOAP URI?
Doug: We don't have a standard version for it.
Bob: The WSDL dialect is defined in the MEX specification.
Jeff: Should the dialect URI use a WS-RA namespace?
Bob: You should consider filing an issue.
<dug>
Jeff: When are the operations exposed.
Asir: When the applications want to expose them.
<Bob> from comment #2 of Issue-6694 (directional decision during the August F2F)
<Bob> RESOLUTION: (directional) everything is implicitly defined with WS-Policy
<Bob> assertions, optionality of operations to be indicated by assertions or some
<Bob> other appropriate WS-Policy mechanism. In addition the wsdl will be in the
<Bob> specs
Katy: If a decision had been previously made not to put explicitly in the WSDL, the quoted resolution does not contradict it.
Wu: My understanding is that by using the policy assertion the endpoint is declaring implicit support for all the operations.
<gpilz> s/indicate that a certain security algorithm is needed to use the WS-Transfer operations./indicate that a specific security mechanism is used to protect the WS-Transfer operations for this endpoint./
Asir: To make a concrete change: change 'while' to 'when';
Gil: Should there be a statement that operations cannot explicitly appear when the policy assertion is used?
Asir: This is in the first paragraph of section 7.
Wu: I want to see a clear declaration that I support WS-Transfer operations.
Bob agrees with Asir's comment.
Wu does not agree with Gil's point.
Bob: We have already agreed that if the policy assertion is present, then all operations are implicitly supported.
Wu: The policy assertion section covers many things beyond just the implicit behavior.
<Yves> implicit declaration doesn't forbid explicit and more tuned declaration
Doug agrees.
<Yves> implicit means "it's not defined, resort to using this"
Katy: The text already covers all
the possible options. Transfer is implicitly defined,
explicitly supported, or not supported.
... Not supported implies Not a Transfer resource.
... It strikes me that there has been a clear decision about not openly supporting the explicit option.
Bob: The specification does not
require use of policy or WSDL.
... Although we have said that policy is not the exclusive way, but if policy is used, that implicitly defines the operations.
... True or not?
Martin: It is ambiguous whether it is implicit or explicit.
Bob: The meaning of the existence of the policy indicates Transfer is supported. Implicitly or explicitly supported is undefined.
<Wu> There should be a specific policy assertion to indicate that the operations are implicitly supported.
Bob: Do we agree that when the policy assertion is present, the operations are implicitly defined?
Doug: Whether policy is used or not has no bearing on whether it is implicit or explicit.
Bob: If the policy assertion exist, what does it mean?
Doug: All it means is, service is advertising support for transfer. Meaning, transfer operations are supported.
Asir: The policy assertion
indicates you are supporting the required operations and with
the policy params it supports the optional operations.
... When an endpoint supports policy, and Transfer, and uses policy assertion, it is indicative of the operations being supported implicitly.
Bob: We don't want to revisit earlier agreements. The resolution to 6694 indicates that when policy assertion is engaged implicit support of operations is expressed.
Gil is describing an use case.
Gil is defining an application WSDL that contains the Transfer operations. The policy appears in the WSDL as well.
Asir: It is redundant.
Gil: Is it illegal?
Doug: The policy assertion is independent of implicit/explicit.
Bob: We had previously discussed
that what is infrastructure for one person
is an application for another person.
... The resolution text for 6694 stands.
Katy: This was a mechanism for operations that do not explicitly appear in the WSDL.
Dave: The red text in the proposal is capturing the cases that are not covered by the resolution to 6694.
Martin: If you have a policy and dialect explicitly defined, there is no clarity.
Bob: That is a separate
discussion.
... Does the proposed modification sufficiently addressed 6721?
<paul> I took myself off the queue. I personally don't agree that WSTransfer is purely 100% infrastructure, but that is a side issue
Asir: Let us remove the phrase "While the WS-Transfer operations are not exposed in an endpoint's WSDL" from the red text in the proposal.
<asir> Paul - I don't agree that Transfer is just infrastructure. Bob said it well ... one man's infrastructure is another man's app
<paul> i thought we resolved this on wednesday?
Doug: Use a MUST: "While the WS-Transfer operations MUST NOT be exposed in an endpoint's WSDL"
Dave: This is not core to
6721.
... I am fine with the text as it is. I suggest using a separate issue to revisit or make adjustments to previous resolutions.
... I am not too particularly attached to one particular manifestation.
Martin: I don't like the word 'version'.
Delete the "own version of the" phrase.
No objections to Martin's change.
Doug: I don't agree that changing
the first paragraph should be handled via a separate
issue.
... I think it is in the spirit of the earlier resolutions.
Bob: We agreed to the resolution
to 6694 irrespective of the various (mis)interpretations.
... If people have an issue with the agreed to text for 6694 let us reopen that issue.
... I suggest that we do NOT elaborate on the 6694 resolution any further since it is not central to issue 6721.
Asir: The second paragraph in section 7 is not central to 6721.
Bob: Is there any objection to agreeing to just the part below.
Proposed text/extract: "an endpoint MAY choose to expose its own version of the WS-Transfer WSDL by using the following WS-MetadataExchange Dialect URI:"
""
"This version of the WS-Transfer WSDL can be annotated to indicate any endpoint specific metadata that might be needed by clients interacting with this service. For example, the WSDL MAY have policy assertions to indicate that a particular security mechanism is used to protect the WS-Transfer operations for this endpoint."
Dave: There is need to clarify about dialects.
Doug: One or more WSDL need to be considered.
Bob: Does the above text work for every one?
<Yves> annotating... meaning SAWSDL?
<Bob> s/\/:/
No objections to the amendment from Gil noted above.
Doug to post a new doc with some revisions:
<Wu> how about s/is used to protect /for/ the WS-Transfer ...
<dug>
<Katy> Hi Yves, please could you unblock 195.212.29.75?
<Yves> done
<Katy> thanks
<Yves> but ask your system people to fix this ;)
<Katy> I will try, something has certainly changed in the last few months
The proposed resolution will apply to all specifications except MEX. Specifically, the resolution applies only to the Transfer, Enumeration, and Eventing specs.
This also applies to MEX.
<Bob> proposal is (i.e. retrievable by using a WS-MetadataExchange GetMetadata with a Dialect URI of). An endpoint MAY choose to expose the WS-Transfer WSDL by using the following WS-MetadataExchange Dialect:
<Bob> Dialect URI @Identifier value
<Bob> Not defined
.
<dug>
The latest modified version of the proposal in above.
No objections to the above proposed resolution.
Issue 6721 is resolved.
RESOLUTION 6721 is represented by comment #6 in bugzilla.
<Bob> proposal at
No objections to closing 7013 with no action.
Resolution 7013 closed with no action.
<dug>
Concrete proposal is at
s/concrete proposal/concrete manifestion of proposal/
s/concrete manifestation of proposal/concrete manifestation of resolution/
<Yves> optional but mandatory features looks weird
Bob: Any objections to the concrete manifestation of the resolution?
We will revisit this after lunch.
<dug> eventing:
We are continuing to look for changes made by the editor relating to correctly describing optional elements/features.
Wu has a question about making fault reason as optional.
Wu is satisfied with the explanation.
<Bob> lunch break will re-start at 1:00
<Katy> Message for those dialing in: The folk here have gone on a quick tour of Hursley site so we will commence a little after 1pm
<Katy> I guess it'll be about 1.20 before we restart
<Katy> (i.e. 23 mins from now in duration time)
<asoldano> ok, thanks Katy
<Bob> we are slowly gathering...
<Bob> scribe: Gilbert Pilz
<Bob> scribenick: gpilz
Bob: has everyone considered the
latest text?
... does anyone need more time?
Asir: it's not very clear what is
optional and what is not
... maybe we need to do a little homework before accepting such a global statement
Katy: concern is with optional operations
Asir: suggest we do more homework and be ready to discuss next meeting
<scribe> ACTION: Ram and Katy review latest text for 7207 and determine whether there is any ambiguity [recorded in]
<trackbot> Created ACTION-117 - And Katy review latest text for 7207 and determine whether there is any ambiguity [on Ram Jeyaraman - due 2009-10-09].
Bob: there is a proposed resolution
<Bob>
DaveS: describes proposal
<dug> pong test
<Bob> my pong broke ages ago :-)
Martin: seems bloated - having to have 3 time values to specify an expiration
DaveS: "exact" seems to be a minority case
<Bob> yes
Martin: most programming uses exact times - hints etc. are the exception
<Yves> +1 to differentiate hints and non-hints
Gil: would like to flip hint and non-hint syntax
Daves: taking away default values
makes the whole thing more complicated
... right now, with a default min=0 and a default max=infinity, things always make sense
Doug: it depends upon your point
of view
... for example, what happens if I include a min attribute, but no max attribute
Wu: like this proposal
<Yves> Dave's proposal, using attribute to give multiple non-hints and value for the hint (one target) seems optimal
Wu: if it's difficult to support "any" because of schema, don't do it (no one seemed to care about any)
Ram: I like the proposal
<Bob> Chair notes that a +1 would suffice
<Yves> gil, the client might always trash or not process things that are not getting back within the wanted range
<Yves> (if @min and @max are not supported)
Gil: need faults etc. to handle "wrong" cases
yves: that's just how we got here in the first place . . . we wanted something better than Subscribe, check SubscribeResponse, Unsubscribe
Asir: some question whether empty
tag can be specified for "any" case
... all we have to say is 'nillable="true"'
(some discussion about how nillable works, and the difference between an empty tag and xsi:nil)
<Yves> nillable optional elements... oh joy :)
Martin: there's a big deal being
made about replicating current behavior
... is there agreement about what the current behavior actually is?
Doug: what is the fault for when the three values are hosed?
(all) some sender fault
Doug: we need an exact fault
<asir> For folks who would like to understand how to use nillable and xsi:nil, please see IBM/David Fallside's excellent documentation,
Doug: why nillable is not appropriate
Bob: is anybody speaking against this general approach?
(all): no
Katy: the problem is not with the number of characters, it's the processing involved with comparing three values
<Yves> <Expires @exact=1h>1h</Expires> ?
Katy: we need a shorthand that means "exact"
Yves: it's not checking the number of characters, it's comparing three values
think about 3 xs:dateTimes each with different timezones
<Bob> Yves, Touche
compare all three and figure out if they are the same
<dug> still shorter than a 1 gig xml file
<Yves> gil, see proposal above ot have an @exact as a shorthand for @min @max
<asir> Yep .. XML Schema says that you normalize dateTime and then compare
<asir> ordering is defined in XML Schema
Bob: can we agree to provide a shorthand for exact?
<Katy> How about <Expires @exact>1h</Expires> ?
Wu: seems to make sense
<asir> if you are using a schema library, you don't have to do anything
<MartinC> +katy
<MartinC> +1 to katy
Daves: don't think a shorthand is necessary
Bob: (repeating) can we agree to provide a shorthand for exact?
Ram: Katy, I think your concern is all the extra processing of comparing three values
Katy: it's not a make or break thing
Bob: is it acceptable to leave it as it is
Katy: if no one else cares, I'm willing to bend
Bob: anyone else care?
Doug: I do, I'd like to make the obvious case as easy as possible
<MartinC> reverse it to <expires non-exact> 1hr </expires>
<Yves> so by default you expect that all clocks are synchronized perfectly?
Bob: add the @exact attribute to the proposal - "@exact is shorthand for min, max, and value all being equal"
Yves: no
<Yves> @exact="xsi:boolean"
<Ashok> Default ?
<MartinC> q
suggest wse:InvalidExpirationTime
it already exists in the spec
<dug>
<Ashok> INF is defined in XML Schema
<asir> Yep
If this attribute value is "true" the @min and @max attributes MUST be ignored and the wse:Expires element evaluated as if @min, @max, and the value of wse:Expires had identical values.
<Yves> sounds good
<asoldano> +1
<DaveS> The default value is "false" in which case this attribute has no effect. If this attribute value is "true" both @min and @max attributes MUST be ignored and are assumed to have the same value as the wse:Expires element.
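In code terms, the rule in DaveS's default-value text above works out to something like the following sketch (a hypothetical helper, not from any spec or implementation; durations are shown as plain seconds rather than xs:duration values):

```python
def resolve_expires(value, min_=0, max_=float("inf"), exact=False):
    """Resolve the effective (min, max, value) bounds for wse:Expires.

    When @exact is true, @min and @max are ignored and assumed to
    have the same value as the wse:Expires element itself; otherwise
    the defaults min=0 and max=infinity apply.
    """
    if exact:
        return (value, value, value)
    return (min_, max_, value)
```

With the defaults, a bare `<wse:Expires>` value always yields a sensible range; with `@exact` the three values collapse to one.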
If the wse:Expires element is not present and the event source is not able to grant an indefinite subscription, it MUST generate a wse:ExpirationTimeExceeded fault.
<Ram> +q
this has to go away : If the wse:Expires element is not present and the event source is not able to grant an indefinite subscription, it MUST generate a wse:ExpirationTimeExceeded fault.
<asir> can use the same 'Expires' tag by disallowing min, max and exact attributes in a *Response
<Yves> well why would those occur in a response?
<asir> they won't
<Yves> and at worst if they occur in a response it will be ignored
<Bob> If they occur in a responce, it should be directed at a random w3 server
<Yves> ...and be blocked ;)
<asir> :-)
<DaveS> Need a new response type wse:GrantedExpires
<DaveS> <wse:GrantedExpires>
<DaveS> (xs:dateTime | xs:duration)
<DaveS> </wse:GrantedExpires> ?
<DaveS> The value of this element indicates the expiration time (or duration) granted by the event source. If this element is missing the expires time is indefinite.
<Wu> If expir element is missing, min=0, max=inf, expir=inf?
<Wu> In other words, they all take their default value
(all): discuss the use of a new GrantedExpires element in the SubscribeResponse
<Ram> The <GrantedExpires> must have the same schema type definition as the existing <Expires>
Li: would like Event Source to provide policy for min and max supported expiration times
<dug>
Ram: what about Renew
operation?
... exact same semantics?
(all): must be identical
<dug>
RESOLUTION: comment
#5 resolves 7586 - also applies to Renew
... apply the same resolution to Enumeration
Note: above issue is 7587
Complete notes: 7586 and 7588 were addressed yesterday (action to develop proposal that includes the use of both dateTime and duration)
<Bob> acl li
the above two issues are, in fact, 7478 and 7587
Doug: discusses mixing data types of @min, @max and Expires
<dug>
<dug>
RESOLUTION: 6407 resolved as proposed
note with the standard policy yadda, yadda stuff added
<Katy> 7553 and 7554 Proposal here
<Katy> Subscription
<Katy> A registration of interest in receiving Notification messages from an
<Katy> Event Source. Subscriptions may be created, renewed, expired or
<Katy> cancelled.
<Katy> If the subscription is not active, the request MUST fail and the subscription manager MAY generate a wse:UnknownSubscription fault.
<Bob> 7554 was consolidated with 7553
Katy: (explains proposal)
<Katy> Subscription
<Katy> A registration of interest in receiving Notification messages from an
<Katy> Event Source. Subscriptions may be created, renewed, expired or
<Katy> cancelled. A Subscription is active when it has been created but has not been expired or cancelled.
<Katy> If the subscription manager chooses not to renew this subscription, the request MUST fail, and the subscription manager MUST generate a SOAP 1.1 fault or a SOAP 1.2 Receiver fault indicating that the renewal was not accepted.
<li> yes, soap 1.1 fault => soap 1.1 Server fault
<li> soap 1.1 Server == soap 1.2 Receiver
<Katy> The following element MAY be used to convey additional information in the detail element of a SOAP 1.1 fault or a SOAP 1.2 receiver fault.
Doug: The following element MAY be used to convey additional information in the detail element of a fault.
<Katy>
Bob: Any objections to resolving 7553, 7554 with above
Wu: Would like more time
Bob: Meeting of 10/06 is
cancelled
... Next concall will be 10/13/2009
This is scribe.perl Revision: 1.135 of Date: 2009/03/02 03:52:20 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/XSD/RESOLUTION: XSD/ FAILED: s/indicate that a certain security algorithm is needed to use the WS-Transfer operations./indicate that a specific security mechanism is used to protect the WS-Transfer operations for this endpoint./ Succeeded: s/indicate that a certain security algorithm is needed to use the WS-Transfer operations./indicate that a particular security mechanism is used to protect the WS-Transfer operations for this endpoint./ FAILED: s/\/:/ FAILED: s/concrete proposal/concrete manifestion of proposal/ FAILED: s/concrete manifestation of proposal/concrete manifestation of resolution/ Found Scribe: Ram Jeyaraman Found ScribeNick: Ram Found Scribe: Gilbert Pilz Found ScribeNick: gpilz Scribes: Ram Jeyaraman, Gilbert Pilz ScribeNicks: Ram, gpilz Default Agenda: WARNING: No meeting chair found! You should specify the meeting chair like this: <dbooth> Chair: dbooth Found Date: 02 Oct 2009 Guessing minutes URL: People with action items: ram[End of scribe.perl diagnostic output]
http://www.w3.org/2009/10/02-ws-ra-minutes.html
Add .py and .pyw to PATHEXT on Windows
Happy holidays everyone!
While reading the following cookbook entry I realized many people probably don't know there is an easier way to invoke Python scripts just like batch files and other executables on your PATH under Windows. I went ahead and added a comment to the cookbook and I've included the description below and some additional details.
There is a much simpler way than wrapping a Python script in a batch file. Simply add .py and .pyw to the PATHEXT environment variable on Windows NT, 2000, and XP (possibly Win9x and ME too, but I can't test that).
Open the System Control Panel, select the Advanced tab and then click the Environment Variables... button to bring up the dialog. PATHEXT is listed under the System variables. Once you've made the change, if you type set and press return in the command shell you should see your change:
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.PY;.PYW
Now if you have a program called hello.py in one of the directories on your PATH, you can invoke it by just typing hello and return. Here's a simple hello.py test script showing that you do get command-line args as well.
import sys
print "hello", sys.argv
I ran it at the command prompt (note that I stuck hello.py in my c:\posix directory which was already on my PATH).
C:\>hello world
hello ['C:\\posix\\hello.py', 'world']
Of course one of the nice things about using Python scripts instead of batch files or scripts written in VBScript and JavaScript is that the scripts are portable to Unix and other operating systems as long as you don't use any Windows-specific extensions such as COM; the Python standard libs provide a lot of functions and methods to hide platform differences such as path separators.
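For example, the standard os.path module hides path-separator differences, so the same script runs unchanged on Windows and Unix (a small illustration using modern print-function syntax; the snippets above use the older Python 2 print statement):

```python
import os.path

# Build a path without hard-coding "\\" or "/"; os.path.join uses
# whichever separator is appropriate for the platform it runs on.
config = os.path.join("data", "settings.ini")
print(config)  # data\settings.ini on Windows, data/settings.ini on Unix

# Splitting a path is equally portable.
head, tail = os.path.split(config)
```

The same applies to line endings, environment variables, and temp directories, all of which have portable helpers in the standard library.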
If you do need to access COM or manipulate the registry in your scripts, you'll want to install Mark Hammond's win32all extension. Among other things, I use win32all to manipulate the Outlook object model with Python instead of having to use VBA or VBScript.
Associating file extensions with scripts
Note that you still won't be able to select a .py or .pyw script in the Explorer to open files of a particular extension. In order to do that, you'll need to use a variation of the instructions for associating .py and .pyw files with the codeEditor found on the codeEditor wiki page: just substitute your program for the codeEditor, and the extensions such as .txt or .jpg, etc. that you want to open. I use this trick so that the PythonCard pictureViewer sample can open image files on my machine and the textEditor and codeEditor tools can open text files including HTML and XML; the codeEditor is particularly good for very large HTML and XML files which can cause HomeSite and other HTML editors to slow down.
http://radio.weblogs.com/0102677/categories/python/2002/12/25.html
Introduction: Black Box Timelapse
Black Box Timelapse is a simultaneous timelapse recorder and player, which I built using a Raspberry Pi. It is battery-operated and so I can bring it to different places and set it up.
Why not use an iPhone? Simple: the iPhone looks like a device and so people respond to it like a device, rather than an art object, which invites curiosity.
Overview on How it works
The video is displayed on this small screen and essentially stacks images over time. You can also plug in an HDMI cable to a monitor or projector if you want to see it in a larger format.
It automatically runs when you plug it in or activate the battery. Right now, the camera takes a new image every 10 seconds and cycles after about 800 images, such that the timelapse will always show the last 2 hours of what happened in a space.
In version 2, this will be controlled with potentiometers, so that you control how often the camera will take images and how quickly it will cycle.
This Instructable will show you both the software configuration and software code as well as the physical fabrication steps I did to make this project come to life.
Step 1: Basic Raspberry Pi Set Up
We're going to make sure we have a few things in order.
First, make sure your Raspberry Pi is properly configured with the basic setup using my Ultimate Raspberry Pi Configuration Guide Instructable.
After this, we will want to make sure that we follow this Instructable on how to mount a USB Thumb Drive on the Raspberry Pi.
These two guides will make it so that we can save timelapse images onto a USB drive and that the Raspberry Pi will be easily configured. We will also add the ability to use the camera on the Raspberry Pi and later on, automatically launch a Python script upon startup.
Step 2: Gather Components
These are the components I'm using, which took awhile to research and figure out.
- Raspberry Pi
- Small TFT monitor, which I ordered from Adafruit for $45.
- Rechargeable 12V battery, with USB output*
- USB battery that outputs 2A. Currently, I'm using this one from Adafruit, which has been reliable, though a bit heavy
- small USB 3.0 dongle
- RCA male-to-male coupler
- Micro USB for running out to the battery
- Perf board with: 3-position switch, leads for recharging and GPIO.
- GPIO (not used for version 1)
* This 12V battery ended up having problems with the USB output. I was hoping to have one battery run the show, but the Raspberry Pi ended up spiking the power needs when the camera was taking a picture and then the battery would cause a voltage drop, forcing the Raspberry Pi to reboot
Step 3: Battery Charging Mechanism
The electronics portion of this Instructable isn't as well-developed as I'd like. I'm going to improve upon this in the future.
What I ended up doing was making a perf board that includes a GPIO with an ADC chip to control the potentiometers. However, I ran out of time to write the software for potentiometers, so will have to come back to this. So, for the perf board, we are just using a charging circuit for the 12V battery.
This brand of battery has the same input and output jack, so I rigged up a simple mechanism which has a three-position switch. In the up position, the 12V battery supplies power to the monitor via the video cables.
In the down position, it will look for the charger via the female connector.
In the neutral or middle position, no power will be active.
I tie all the grounds together (video, battery, charger) and the switch will connect the positive wire of the battery to either to the video monitor, the charger or nothing.
The USB battery isn't part of this circuit and has an on/off switch.
Step 4: Install Picamera and Activate
We will be using the Picamera libraries, which provide a very easy-to-use Python interface into the camera module. Full details of the package are here.
First, we do some housekeeping. From the command prompt, do your standard updates and upgrades, type in:
sudo apt-get update
then:
sudo apt-get upgrade
you will see lines of Linux install code and will have to wait awhile.
Now, install the package itself with:
sudo apt-get install python-picamera
Finally, we want to enable the camera itself. Type in:
sudo raspi-config
This will bring a configuration menu, where you can Enable the camera, which is disabled by default. Enable the camera and reboot.
Step 5: Test Picamera
At this point, you'll want to have the Pi hooked into a monitor, rather than ssh.
Make sure your Raspberry Pi camera is properly connected to your Raspberry Pi. I keep referring to this video guide for proper orientation of the ribbon cable.
Create a simple Raspberry Pi preview script
nano cam_preview.py
now, type in this script:
-----
import time
import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    while True:
        time.sleep(10)
-----
ctrl-X, then Y and return, will save
run it:
sudo python cam_preview.py
What you should see is the camera showing a preview image.
Step 6: Diagram Code
This is the full flowchart of the code interaction. I wrote this out to help figure out how the interaction model will work.
Even though I didn't use the pots for this version, the flowchart technique is super-helpful, especially when it comes down to working with the Raspberry Pi, which can be tedious to program.
Step 7: Write the Python Code
This didn't take a super long time to do, but did take some wrangling.
Looking at my original flowchart, I followed most of the steps.
Everything is on GitHub, here.
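As a rough sketch of what the capture loop described earlier amounts to (the real code is in the GitHub repo above; the function and file names here are hypothetical): one frame every 10 seconds, written into a fixed ring of about 800 filenames so the oldest frames are overwritten and the USB drive always holds roughly the last 2 hours.

```python
import os
import time

MAX_FRAMES = 800   # ring size: ~2 hours of history at 10 s per frame
INTERVAL = 10      # seconds between captures

def frame_name(count, max_frames=MAX_FRAMES):
    """Map a running capture count onto a fixed set of filenames,
    so the oldest frame is overwritten once the ring is full."""
    return "frame_%04d.jpg" % (count % max_frames)

def run(output_dir="/media/usbdrive"):
    import picamera  # only available on the Raspberry Pi itself
    with picamera.PiCamera() as camera:
        count = 0
        while True:
            camera.capture(os.path.join(output_dir, frame_name(count)))
            count += 1
            time.sleep(INTERVAL)
```

Because the filenames are numbered, the frames on the drive can later be reassembled into a video as a sequenced image set.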
Step 8: Setup Python Code to Auto-launch on Startup
Using my Instructable: Launch Python script on startup, I built a launcher.sh script and edited the crontab to automatically start the Python code.
The escape key will exit the script.
Step 9: Make an Internal Camera Mount
I based my camera mount on an existing STL model of a Raspberry Pi camera enclosure on Thingiverse.
Mine will be mounted inside the black box and needs to have "wings" so that it can be attached to the interior of the box.
Using 123D Design, I added these components and then 3D printed the mount, attaching the camera inside and sealing the two pieces of the mount with epoxy.
Step 10: Design Fabricated Parts
I set up a basic design using Illustrator of what I want the Black Box Timelapse to look like. It consists of a monitor, a camera and inside, a Raspberry Pi and various batteries for mobile use.
The main idea is to:
(1) use digital fabrication techniques as much as possible, i.e. no saws and absolutely minimal sanding and touch-up work.
(2) make a hand-painted aesthetic, using wood with household latex paint.
(3) try to make this as small as possible. Final dimensions of the armature are 8" x 7.5" x 5", with the panels adding another 1/4" in every direction.
The Panels
These will be made of 1/8" wood, which will be attached to an interior armature. The look that I'm going for is a hand-painted look which will show the 1/8" edges. These will be painted in white, while the faces of the panels will be painted black, kind of like a magician's box.
The back panel has holes for venting, and all of the electronic components: the switch, the female plug for the 12V charger and the potentiometers (to be added in the future), and an LED, also to be added down the road.
The Armature
This is cut from 3/8" wood, which is the maximum thickness that the laser can cut. I went through a few iterations to figure out how to get the camera mount to fit next to the monitor.
Interior Pieces
These are also cut from 1/8" wood, though not painted. We'll have an interior shelf for the perf board and USB battery to sit in.
Process
It took me a little while to get the exact fittings for these. I did a fair amount of cardboard prototyping to nail these down.
Step 11: Build Internal Armature
After cutting the 3/8" plywood on the laser-cutter, I aligned the armature pieces with blue tape.
Without any jig, which wasn't really necessary, I began nailing this together with a brad nailer, using 5/8" nails. It assembled reasonably square in just 5 minutes.
Step 12: Cardboard Prototype the Panels
Once the armature was ready, I began preparing the panels. Essential to fitting everything together was a series of cardboard prototypes, which is both cheaper and faster than consuming all the wood.
Step 13: Paint 1/8" Panel Black
I had several 30" x 20" panels of 1/8" birch ply for the cutting. With the cardboard prototypes finished, I was ready to cut them.
It's a lot easier to paint one big panel than hand-painting 6 smaller ones, so I painted it before cutting this piece with 1 coat of primer and 2 coats of low-VOC latex paint.
Step 14: Lasercut Painted Panels
Careful on this step to get everything just right.
Now, I laser-cut the painted panels.
Additionally, the top one will hold magnets and I laser-etched the underside to fit 1/4"x1/16" magnets, which we will insert later.
Step 15: Clean Edges on the Belt Sander
This infringes on the idea of digital fabrication by using traditional shop tools, but we need clean surfaces for paint and primer.
Step 16: Touch-up Paint on Panels
The laser cutter creates some burnt edges, so you need to touch-up the paint on the panel faces. This doesn't take long and a small foam brush will do the trick.
Step 17: Paint White Edges + Touchup
This step is a lot more tedious than it looks. I ended up getting into the zen of paint and learned oodles about foam brush painting techniques.
Step 18: Glue Screen Holder
I glued the screen mount onto the underside and adjusted the position with my fingers before the glue dried to get the alignment just right.
In future iterations, such as version 2, I will 3D print a better screen holder. The Adafruit TFT 913 monitor has flimsy cabling and tends to break or sever easily. This technique doesn't protect it, but works for now.
Step 19: Glue Shelf Holder
I etched the underside with locations for where the shelf holder will go, which made it easy for figuring out where to apply the glue.
Step 20: Brad Nail Panels Onto Armature
I was patient on this step and took my time with alignment. While the armature can be a bit wonky, the panels need to be just right.
Going for the hand-painted/hand-crafted look means that it doesn't have to be exact, but just kind of close.
But I was pleased: it came out really well!
Step 21: Final Paint Touchup
Spackle, then sand, then paint over the holes where the brad nails went. This is standard touch-up procedure and went quickly.
Step 22: Add Magnets!
With a guide that I cut out from 1/8" wood, I used a transfer punch to figure out where to drill the magnet holes in the armature.
For the top panel, I used a hammer (no steel) to pound the magnets into the underside of the panel. They fit in very cleanly. No adhesive needed.
Step 23: Near Completed Enclosure
This isn't really a step, but more of a look-at-how it went.
This took way longer than I planned to make. Software is easy, fabrication is hard.
Step 24: Epoxy Magnets to Armature
I used epoxy with toothpicks and rubber gloves to adhere the magnets into the armature frame.
Step 25: Final Assembly
Admittedly this step, which I realize is one of the more important ones is ill-documented in terms of photographs.
Sorry, I'll fix this soon (deadlines, deadlines).
The important parts:
- Drill a hole for the camera mount that matches the hole on the panels.
- I used button-head screws (M2) to attach the camera mount to the box. No external bolts were needed. The screws did a self-tapping maneuver, saving me some steps.
- The monitor is press-fit into the opening
- USB battery and perf board go on the top shelf
- Everything else goes underneath
- Velcro the bottoms of the Raspberry Pi, perf board, 12V battery and USB battery to the shelf and their respective places
- Make all cable attachments: starting with monitor and working your way backwards
- Put the 12V charger plug into the hole in the back panel (press-fit)
- Put the 3-way charging switch into the hole in the black panel
Step 26: Final Project
Ba-da-da-da-da. Done!
Version 1 of the Black Box Timelapse is finished.
Below are some of the preliminary videos that I shot with it. The way the USB drive is formatted, you can remove the drive from the Raspberry Pi and assemble the video as sequenced image files.
Things that will change in version 2
* Adding support for changing the frame rate with potentiometers; the circuit is finished but the code is not
* Rebuild the armature so that we can use press-fit magnets instead of epoxy, which is messy
* Some better documentation, especially for the final assembly
I hope this was helpful!
Scott Kildall
For more electronics projects, you can find me here:@kildall or
Timelapse of the Laser/3D room at Pier 9
Timelapse of my artist talk at Pier 9 on May 13th, 2014
4 Discussions
Making this
Thanks for the amazing Instructable
Seriously cool.
This looks great! I agree with you that this draws more attention as an artwork piece than a recognizable device. I am looking to build a Pi-driven photo booth for parties and events, and your Instructable gave me some new ideas :)
Opened 10 years ago
Closed 10 years ago
Last modified 8 years ago
#359 closed defect (duplicate)
Simplified assignment and lookup for related fields
Description
Further to #122, I'd love to see a simpler syntax for direct assignment and reference of objects in related tables. It'd eliminate the last big separation (in my mind, at least) between how people work with normal Python objects and how they need to work with Django database-backed objects. All you'd have left is .save(), which I'm all for keeping.
from django.core import meta

class Person(meta.Model):
    name = meta.CharField(maxlength=200)

class CourtCase(meta.Model):
    plaintiff = meta.ForeignKey(Person)
    defendant = meta.ForeignKey(Person)

me = persons.get_object(name__exact="garthk")
you = persons.get_object(name__exact="hugo-")
case = courtcases.CourtCase()
case.defendant = me
case.plaintiff = you
case.save()
print case.defendant.name
We'd also want to retain support for case.defendant_id = me.id for those who tend to think more in terms of the database structures than the Python structures. I'm definitely in the latter category, but I know plenty of people in the former.
Change History (6)
comment:1 Changed 10 years ago by Manuzhai
comment:2 Changed 10 years ago by rmunn@…
I'd also like to see this happen. Since I'm about to drop off the face of the 'Net for at least a week, though, I can't commit to doing any work on the implementation. :-)
comment:3 Changed 10 years ago by anonymous
- Owner changed from adrian to anonymous
- Status changed from new to assigned
comment:4 Changed 10 years ago by adrian
- Owner changed from anonymous to adrian
- Status changed from assigned to new
comment:5 Changed 10 years ago by adrian
- Status changed from new to assigned
comment:6 Changed 10 years ago by adrian
- Resolution set to duplicate
- Status changed from assigned to closed
Closing this ticket because it's been superseded by the DescriptorFields discussion.
I agree this would be much nicer. I may look into implementation later.
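The descriptor idea that superseded this ticket can be sketched in plain Python. This is an illustration of the mechanism only, not Django's actual implementation; all class and attribute names here are hypothetical:

```python
class RelatedObject(object):
    """Descriptor that keeps an object attribute and its *_id field in sync."""

    def __init__(self, name):
        self.cache_name = "_" + name
        self.attname = name + "_id"

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        return instance.__dict__.get(self.cache_name)

    def __set__(self, instance, obj):
        # Assigning the object also records the raw id, so both styles work.
        instance.__dict__[self.cache_name] = obj
        instance.__dict__[self.attname] = obj.id


class Person(object):
    def __init__(self, id, name):
        self.id, self.name = id, name


class CourtCase(object):
    plaintiff = RelatedObject("plaintiff")
    defendant = RelatedObject("defendant")
```

With this in place, assigning case.defendant = me sets case.defendant_id as a side effect, which is exactly the dual-access behavior the ticket asks to retain.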
Find Duplicate Files This is a simple script to search a directory tree for all files with duplicate content.
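The approach such a script typically takes can be sketched in Python: walk the tree, hash each file's content, and group paths whose hashes collide. This is an illustrative reconstruction, not the blog post's actual code:

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Return lists of paths whose file contents are byte-for-byte identical."""
    by_hash = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in chunks so large files don't exhaust memory.
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            by_hash[digest.hexdigest()].append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]
```

Comparing file sizes first would avoid hashing files that cannot possibly match, but the hash-everything version is simpler to follow.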
The package maptools includes new functions to label points and lines. Line labelling: the lineLabel function produces and draws text.
Especially I encourage you...
The STL transform function can be used to pass a single function over a vector. Here we use a simple function square(). #include <Rcpp.h> using namespace Rcpp; inline double square(double x) { return x*x ; } // ] std::vector<...
Ok, I was supposed to take a break, but Frédéric, professor in Tours, came back to me this morning with a tickling question. He asked me what were the odds that the Champions League draw produces exactly the same pairings from the practice draw, and the official one (see e.g. dailymail.co.uk/…). To be honest, I don’t know much about soccer, so...
After the work I did for my last post, I wanted to practice doing multiple classification. I first thought of using the famous iris dataset, but felt that was a little boring. Ideally, I wanted to look for a practice …
I’m going to continue with my ‘making data visually appealing to the masses’ kick. I happen to like graphics and graphing data. I also happen to like American football (For the record, however, I’m a soccer player first, a rugby …
TurboGears Upgrade Guide
- Overview
- Updating from 0.9x to 0.9a5
- Updating from 0.9a1 to 0.9a2
- Updating from 0.8 to 0.9
- Updating from 0.5 to 0.8
Overview
The easiest way to upgrade TurboGears is to download the latest tgsetup.py and run it! This will update all of the parts of TurboGears that need updating.
Please also be aware of backwards compatibility issues. These are addressed in the remainder of the guide.
Updating from 0.9x to 0.9a5
Updating from 0.9a1 to 0.9a2
Updating from 0.8 to 0.9
There are a number of changes that will need to be made to your project in order to upgrade to TurboGears 0.9.
Upgrade Project Files and Configuration Files
- controllers.py Some methods have been added for the new Identity feature, and turbogears.controllers.RootController is used instead of the deprecated version. It is probably easiest to say 'n' (don't overwrite). Later, if you like, you can create a new project (using 'tg-admin quickstart') to get a pure version of controllers.py and merge in these changes.
- model.py You can probably leave this alone (answer 'n'). The only change is to import some files for Identity, and you can copy this import from a newly quickstarted project later if you plan on using Identity.
- master.kid Changes to this file aren't critical, so 'n' is probably a safe choice. Again, there are Identity specific changes that you may want to merge in later.
- setup.py This one you should probably answer 'b' (backup). If you have made changes (which isn't likely), you can merge them in later.
- dev.cfg and prod.cfg The config files have been separated out into deployment config (held in these two files which you originally had) and an application config that would be the same regardless of where the application is deployed. You don't need to make this transition if you don't want to. You can safely press "n" to leave these files alone.
Make Static File Paths Absolute
CherryPy 2.2 requires that paths to static files be absolute. While you might think this would prevent projects from being deployed on different machines, TurboGears provides a function 'absfile' to help maintain portability. If your package name is 'big.
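The portability trick here can be sketched in plain Python. This is just the general idea, not TurboGears' actual absfile implementation; the function name and signature below are made up for illustration:

```python
import os

def absolute_static_path(package_file, relative):
    """Anchor a static-file path to the package's on-disk location.

    Because the base directory is computed from the package itself, the
    resulting absolute path stays correct when the project is deployed on
    a different machine.
    """
    package_dir = os.path.dirname(os.path.abspath(package_file))
    return os.path.join(package_dir, relative)
```

Passing a module's __file__ as package_file yields an absolute path that satisfies CherryPy's requirement without hard-coding any machine-specific prefix.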
Methods Must Explicitly Allow JSON
In order to prevent accidentally exposing information that perhaps everyone shouldn't see, exposed methods no longer automatically provide JSON output when the URL contains 'tg.
Changing from std. to tg. in your templates
The std object that appears in your template namespace that holds useful values and functions has been renamed tg. You should be able to do a search and replace in your template files to swap std. with tg..
Other Incompatibilities
There are a couple other changes that probably won't impact most projects, but will cause things to not work if you are using them.
- The server.webpath configuration variable will not only properly set outgoing URLs, but will also "fix" incoming URLs if TurboGears is running at some path underneath another webserver. If you were previously running a CherryPy filter to handle this, you no longer need to.
- Previously, if you were using a FormEncode Schema for validation for an exposed method, validation would fail if a value was missing but the method had a default value for that parameter. Now, if that value is missing, the method will get called with the default just as it is normally called in Python.
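The change in default handling can be illustrated with a small sketch in plain Python (hypothetical names, not TurboGears internals): when a validated value is missing, it is simply not passed, so Python's own default-argument machinery takes over.

```python
def call_with_available(method, values):
    """Invoke method with only the values that survived validation.

    Keys whose value is None are treated as "missing" and dropped, so the
    method's own default fills in rather than validation failing outright.
    """
    present = {key: val for key, val in values.items() if val is not None}
    return method(**present)

def search(query, limit=10):
    # A stand-in for an exposed controller method with a default parameter.
    return (query, limit)
```

Here call_with_available(search, {"query": "docs", "limit": None}) invokes search with the default limit, which mirrors the 0.9 behavior described above.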
Deprecated Items
The following items will work in 0.9, but will be changing in the future. You should migrate away from any usage of these items as you are able to.
- For clarity's sake, turbogears.controllers.Root is now turbogears.controllers.RootController.
- Error handling has been greatly improved. Use of the validation_error method has been deprecated. It will still be called if it exists, but a DeprecationWarning will be displayed.
- turbogears.tests.util has been moved to turbogears.testutil.
- CherryPy has a list of names that are deprecated in order to comply with PEP-8.
- expose(html=...) is now deprecated in favor of expose(template=...).
Updating from 0.5 to 0.8
The good old space race is back on. Wouldn’t it be great if we could help out some of these incredibly ambitious companies by letting them reuse our code to find out whether or not their spaceship can leave the Earth too? We can do that by creating a function that encapsulates our logic. Before we do that, let’s understand what a function is in mathematics first.
You might be wondering why I keep taking you back to the wonderful world of mathematics every time I try to explain a concept in Elm. That’s because Elm is a functional programming language which gives it a strong mathematical foundation. Elm implements concepts from mathematics without distorting them. So by understanding how they work in mathematics first, it’s easier to understand them later in Elm. As a bonus, you know where these concepts originally came from. This doesn’t mean you need to have a strong background in mathematics to learn Elm. Everything you need to learn Elm is covered in this book including the mathematical concepts.
A function, in mathematics, is a relationship from a set of inputs to a set of possible outputs where each input is mapped to exactly one output. Functions in Elm also work the same way.
Let’s write a function in Elm called escapeEarth that takes a value from an input set of velocities and maps it to a value from an output set of instructions.
> escapeEarth velocity = \
|   if velocity > 11.186 then \
|     "Godspeed" \
|   else \
|     "Come back"
<function>
We can now call this function with different velocities to find out whether or not a spaceship can leave the Earth.
> escapeEarth 11.2
"Godspeed"
> escapeEarth 11
"Come back"
Like an if expression, we can assign the value returned by a function to a constant.

> whatToDo = escapeEarth 11.2
"Godspeed"
> whatToDo
"Godspeed"
Function Syntax
Functions are so critical to Elm applications that Elm provides an incredibly straightforward syntax for creating them. No ceremony is required, such as a special keyword or curly braces, which are common in many languages.
Function Application
Once we create a function, we need to apply it to a value to get a desired output. In mathematics, function application is the act of applying a function to a value from the input set to obtain a corresponding value from the output set. You have already seen an example of a function application above: escapeEarth 11. When a function is applied, its name should be separated from its argument with whitespace. If a function takes multiple arguments, they too should be separated from each other with whitespace.
- Parameter vs Argument
- The terms parameter and argument are often used interchangeably although they are not the same. There is no harm in doing so, but just to clear things up, an argument is used to supply a value when applying the function (e.g. escapeEarth 11) whereas a parameter is used to hold onto that value in the function definition (e.g. escapeEarth velocity).
Earlier we learned that all Elm applications are built by stitching together expressions. How do functions fit into this organizational structure? Functions are values too like numbers and strings. Since all values are expressions, that makes functions expressions as well. Because of this, functions can be returned as a result of a computation. They can be passed around like any other value. For example, we can give them to another function as an argument. Functions that take other functions as arguments or return a function are called Higher Order Functions. We will see many examples of higher order functions soon.
Functions with Multiple Parameters
The escapeEarth function above takes only one argument: velocity. But, we can give it as many arguments as we want. Let’s add one more parameter to its definition so that it can tell us whether or not our horizontal speed is fast enough to stay in an orbit.
The next code we will write is a bit difficult to type in the repl without making any mistakes. So we will write it in a file instead. Create a new directory called elm-examples in the root directory (beginning-elm) of the Elm project we created in the last chapter. Inside that directory create a file named Playground.elm. We will use this file to experiment with various concepts in Elm. Here is how the directory structure should look so far.
- Filename
- Elm style conventions dictate that filenames are also written in Camel Case. Unlike constants, the first letter in a filename should be written in uppercase though. The first letter of each subsequent concatenated word should be capitalized. Elm doesn’t throw an error if we don’t follow this convention, but there is no reason not to follow it.
Put the following code in Playground.elm.

module Playground exposing (..)

import Html


escapeEarth velocity speed =
    if velocity > 11.186 then
        "Godspeed"
    else if speed == 7.67 then
        "Stay in orbit"
    else
        "Come back"


main =
    Html.text (escapeEarth 11.2 7.2)
The first two lines define a new module called Playground and import the Html package. Don’t worry about module and import yet; we’ll cover them later. Next is our function escapeEarth. We give it one more parameter called speed. If the velocity is not greater than 11.186, it will use the else if branch to compare if the speed is equal to 7.67. If yes, it returns “Stay in orbit”. Otherwise, it falls to the else branch.
The last function is main. The execution of all Elm applications starts with this function. It’s a regular function like any other; it just happens to be the entry point for an application. We apply a function called Html.text to the result produced by yet another function application: escapeEarth 11.2 7.2. Html.text takes a string and displays it in a browser. It’s important to surround escapeEarth and its arguments with parentheses. Otherwise, Elm will think we are passing three arguments to Html.text, which takes only one argument. This is because Elm uses whitespace to separate a function name from its arguments. We can chain as many function applications as we want.
Go to the beginning-elm directory in your terminal and run elm-reactor. After that, open the URL it serves in a browser. You should see elm-examples as one of the directories listed in the File Navigation section. Click on it. You should see the Playground.elm file. If you click that file too, elm-reactor will compile the code in it and display the result. You should see the string “Godspeed” in the browser.
Partial Function Application
When we applied the escapeEarth function above, we pulled the value for speed out of thin air. What if it’s actually computed by another function? Add the following function definitions right above main.

speed distance time =
    distance / time


time startTime endTime =
    endTime - startTime


main =
    ...
speed takes two parameters: distance covered by a spaceship and travel time. Time in turn is computed by another function called time. Here’s how the main function looks when we delegate the calculation of speed to our newly created functions:

main =
    Html.text (escapeEarth 11 (speed 7.67 (time 2 3)))
Yikes! Chaining multiple function applications looks hideous and hard to read. But what if we write it this way instead:
main =
    time 2 3
        |> speed 7.67
        |> escapeEarth 11
        |> Html.text
Ah, much better! To compile the new code, refresh the page in your browser. You should see the string “Stay in orbit”.
The nicely formatted code above is possible because Elm allows partial application of functions. Let’s play with some examples in repl to understand what partial functions are. Create a function called multiply with two parameters in the repl.
> multiply a b = a * b
<function>
When we apply it to two arguments, we get the expected result.
> multiply 3 4
12
But, what if we only give it the first argument?
> multiply 3
<function>
Hmm… It returns a function instead of giving us an error. Let’s capture that function in a constant.
> multiplyByThree = multiply 3
<function>
Let’s see what happens if we apply this intermediate function to the second (and final) argument.
> multiplyByThree 4
12
> multiplyByThree 5
15
multiplyByThree is a partial function. When Elm sees that we haven’t given enough arguments to a function, instead of complaining it applies the function to given arguments and returns a new function that can be applied to the remaining arguments at a later point in time. This has many practical benefits. You will see plenty of examples of partial function application throughout the book.
Forward Function Application
Let’s get back to that fancy |> operator that made our code look so pretty. It’s called the forward function application operator. It is very useful for avoiding parentheses. It pipes the result from the previous expression to the next one.

The forward function application operator takes the result from the previous expression and passes it as the last argument to the next function application. For example, the first expression in the chain above (time 2 3) generates the number 1 as the result, which gets passed to the speed function as the last argument.
Backward Function Application
There is another operator that works similarly to |>, but in backward order. It’s called the backward function application operator and is represented by this symbol: <|. Let’s deviate from our spaceship saga for a bit and create some trivial functions to try out the <| operator. Add the following function definitions right above main.

add a b =
    a + b


multiply c d =
    c * d


divide e f =
    e / f


main =
    ...
add, multiply, and divide are functions that do exactly what their names suggest. Let’s create an expression that uses these functions. Modify the main function to this:

main =
    Html.text (toString (add 5 (multiply 10 (divide 30 10))))
Ugh. Even more parentheses. Refresh the page and you should see 35. The add function returns a number, but Html.text expects a string. That’s why we need to use the toString function to convert a number into a string. It turns any kind of value into a string.

> toString 42
"42"
> toString 7.8
"7.8"
> toString [5, 10]
"[5,10]"
> toString { a = 3, b = 6 }
"{ a = 3, b = 6 }"
Let’s use our knowledge of the |> operator to turn the expression in main into a beautiful chain.

main =
    divide 30 10
        |> multiply 10
        |> add 5
        |> toString
        |> Html.text
We can also write it using the <| operator.

main =
    Html.text <| toString <| add 5 <| multiply 10 <| divide 30 10
Not bad, huh?
Working backwards to forwards can be confusing. Just because you can do it in Elm doesn’t mean you should. If you feel like you’re in Bizarro World, where down is up and up is down, go ahead and stick with the forward application (|>). Elm provides other helper operators like these; you can learn more about them in Elm’s core documentation.
Operators are Functions Too
In Elm, all computations happen through the use of functions. As it so happens, all operators in Elm are functions too. They differ from normal functions in three ways:
Naming
Operators cannot have letters or numbers in their names, whereas normal functions can. +++ is an illegal operator in Elm, but we can legitimize it by defining it ourselves. Add the following definition right above main in Playground.elm.

(+++) first second =
    first ++ second


main =
    ...
We have given +++ the same behavior as the ++ operator, which is already defined in Elm. ++ is used to concatenate two strings. Notice how the operator is surrounded by parentheses in its definition. We have to do that when we define custom operators using the function syntax. By default, custom operators have the highest precedence (9) and are left-associative. Like any other built-in operator, we can apply +++ to its arguments.

main =
    Html.text ("Peanut butter " +++ "and jelly")
If you refresh the page, you should see “Peanut butter and jelly”. Let’s see what happens if we add a letter to a custom operator’s name.

(+a+) first second =
    first ++ second


main =
    ...
Elm doesn’t like that.
What about normal functions? Can we include a special character in their name?

ad+d a b =
    a + b


main =
    ...

Nope. Elm doesn’t like that either. Before moving on, you should remove the invalid operator and function definitions (listed above) from Playground.elm.
Number of Arguments
Operators accept exactly two arguments whereas there is no limit to how many arguments a normal function can have.
Application Style
Operators are applied by writing the first argument, followed by the operator, followed by the second argument. This style of application is called infix-style.
> 2 + 5
7
Normal functions are applied by writing the function name, followed by its arguments. This style of application is called prefix-style.
> add a b = a + b
<function>
> add 2 5
7
We can also apply operators in prefix-style if we so choose to.
> (+) 2 5
7
The operators must be surrounded by parentheses. We can’t apply normal functions in infix-style. But, we can create something that resembles infix-style by using |>.

> 2 |> add 5
7
Hi all,
I'm discovering a really strange behavior, and I spent all the day looking
for a possible error, with no luck.
I got a (quite complex) piece of code, which give me an unexpected result.
Please, look at the following fragment, where the third line is commented:
return
  for $rel at $count in $relationships
  return (
    (: if ($count ne 6) then () else :)
    let $uuid := $rel/mm:hasObject/@rdf:resource
    let $el := $ont/*/rdf:Description[@rdf:about eq $uuid]
    let $object :=
      if (count($el/mm:isInstanceOf) gt 0) then
        element rdf:Description {
          $el/@*,
          $ont/*/rdf:Description[@rdf:about eq $el/mm:isInstanceOf/@rdf:resource]/*
        }
      else
        $el
    return
      support:getName($rel, $object, $context, false())
  )
If I run this piece of code the result is:
Scelta ente
Scelta ontologia
Tipi di dati di base (W3C-XML)
xs:byte
Accesso al sistema
mm:menu
gestione sistema
while, uncommenting the third line, the result is:
menu favoriti
In the first run, I got seven lines (the sixth is incorrect), while in the
second run I got just one line (which is correct) corresponding to the the
sixth line of the commented version : WHY are the different?
Every suggestion is appreciated. I ran dozens of tests, rebuilt the indexes, and reloaded the DB, with no luck.
Please help with some ideas.
Running 1.4.0-rev10440-20091111 on windows 7 x32 on vmware workstation
TIA
Paolo
I would really appreciate if somebody can provide a code snippet for data
replication between two eXist servers without using the REST interface
(which is disabled for security reasons).
Thomas
Ok, we found the code, but no fix..... yet
On 1 Mar 2011, at 19:24, Dannes Wessels wrote:
--
Dannes Wessels
eXist-db Open Source Native XML Database
e: dannes@...
w:
Hi,
On 28 Feb 2011, at 22:49, William Summers wrote:
I can't imagine tomcat has influence here............
D.
--
Dannes Wessels
eXist-db Open Source Native XML Database
e: dannes@...
w:
Fair enough, it makes sense why it is disabled in the first place.
But it would help a lot to manually trigger a versioning event.
Unfortunately, I don't know how to implement this function.
So thanks in advance for your support and your help!
Markus
>>> Wolfgang Meier <wolfgang@...> 23.2.2011 11:43 >>>
> So just to be clear, do something like:
>
> update replace $old with $new
>
> ..doesn't cause a new version to be created?
Yes. The versioning trigger is not enabled for node level updates.
Creating a new revision for every small update would be overkill.
Markus is probably right that we should provide a way to manually
create a new version from an XQuery. It should be possible to
implement a simple function which does that, but I'll need to check
against the source code.
Wolfgang
> The problem here is how the xquery parser deals with the sequence
> constructor: it recursively translates the expression into nested pairs. We
> have to add a normalization step to flatten this. I'll put it on my todo.
Apart from causing stack overflows, sequence constructors did consume
too much memory. I fixed this in trunk yesterday. There should be no
limit to the length of the sequence now.
Wolfgang
On Tue, Mar 1, 2011 at 2:59 AM, Ryan Graham <rxgraham@...> wrote:
> I wanted to see if anyone can provide some advice on logging practices --
> just general practices that seem to be working well for you all. In my
> scenario, I've setup a small pipeline processing mechanism where documents
> are passed through any number of XSLT stylesheets using
> transform:transform(). I typically store the results into a new collection
> in the database. I was wondering:
>
> - If using transform:transform(), where are my <xsl:message> calls
> going?
>
> My guess, to console.
>
> - Do I have access to eXist modules/functions in my stylesheet when
> using transform:transform() if I declare the namespaces?
>
>
> - Is it better to use util:log() / util:log-app()?
> - Is a custom solution using XUpdate and logging directly to the
> database acceptable?
>
> No, you can't ... only with eXist's XSLT will you be able to access all of eXist's features.
--
Dmitriy Shabanov
Java developers interested in building Web services will find the AXIS toolkit well worth a look.
What makes Web services promising and practical is infrastructure and standards. Given the pervasiveness of the Web and a handful of standards (for an overview of the prevailing Web services standards, see the accompanying article on page 22), it’s now tractable for any business to interoperate with any other business. That’s the promise of Web services.
And, as it turns out, the larger the investment you’ve made in the Web, the better suited you are to deploying Web services. Instead of replacing the technology you’ve worked hard to deploy, Web services can leverage it. Just think of Web services as another outlet for the expertise — code, servers, infrastructure, and bandwidth — you’ve already amassed.
Axis Powers
Apache AXIS is a substantial and comprehensive open source Java (and eventually C++) toolkit for building and deploying Web service clients and servers. Based on standards (HTTP, the Simple Object Access Protocol (SOAP), Web Services Description Language (WSDL), and XML), AXIS includes APIs, tools, and lots of sample code that you’ll find invaluable whether you’re deploying your first simple Web service, a full-blown commercial service, or a Java applet that interacts with another vendor’s Web service.
For example, if you’re developing a Web service client, you’ll need to know how to communicate with a remote Web service. Specifically, you’ll need to know the service’s URL, its service and method names, and the types and number of parameters for each method. In the realm of Web services, all of that information is captured in a service’s WSDL file. AXIS offers a tool, called WSDL2java, that interprets WSDL files and emits Java code that encapsulates all Web service intercommunication. (Where you previously wrote tens of lines of code to send SOAP messages manually with Apache SOAP, AXIS can reduce the effort to invoke a remote procedure to just two or three calls to create and initialize two objects.)
On the other hand, if you’re developing the server code for a new Web service, AXIS can help there, too. As mentioned above, all Web services describe themselves with WSDL files. Rather than write WSDL files from scratch (which have to accurately reflect the public methods of your Java classes), AXIS’s java2WSDL tool can generate WSDL files directly from Java source code. Or, given a WSDL file from another Web service, WSDL2java can generate server stub code, too. (In fact, from a single Java source file, you can create a client and a server and all of the files you need to deploy a Web service in less than an hour. We’ll do that very thing here in just a little while.) If you’re having problems debugging your service, AXIS’s handy tcpmon tool can be used to monitor and display all incoming and outgoing traffic.
Finally, AXIS makes deploying and managing Web services a snap. The fastest way to create an AXIS Web service is to simply drop a Java source file into the AXIS Web applications directory. The other technique, Web Services Deployment Descriptors (WSDD), is about as easy to use, but gives you more control and more flexibility. For example, WSDD files can enable or disable individual methods in your Web service.
AXIS also offers a number of system administration tools that make management of Web services more tractable. Services can be deployed and un-deployed using AXIS’s AdminClient tool, and to help others consume your Web services, AXIS automatically generates WSDL files from any service deployed on your site.
Of course, a Web services toolkit would be worthless if it didn’t interoperate well with other Web services. AXIS is tested rigorously with other Web services implementations and works well with almost all of them, including Microsoft’s .NET (in fact, the AXIS developers maintain an impressive interoperability “scorecard” online).
Let’s take a hands-on look at AXIS, and focus on the many features and tools that facilitate the deployment of new Web services. As we go to press, AXIS Beta 2 is the most recent release. All significant features have been implemented and the AXIS team is planning a beta release each month, targeting an official release by the end of this summer. To begin, let’s get Tomcat and AXIS installed on our Linux Web server.
Take a Spin on Axis
All of the examples in this article were developed on Red Hat Linux 7.1 (running on a Toshiba Satellite laptop). The Java code is based on AXIS Beta 2 and Java JDK 1.4. The code was developed and debugged in the Sun ONE Studio 4 Community Edition integrated development environment. Apache’s Jakarta Tomcat 4.0.3 was used as the Web server and servlet engine.
If you want to follow along and try the examples on your own machine, use the instructions in the next few sections to install and configure AXIS.
INSTALL TOMCAT
If you don’t have Tomcat installed, go to the Apache Web site and download the latest production version of the server (as we went to press, the latest production version of Tomcat was 4.0.3). At a minimum, you will need to download these Java packages: jakarta-regexp-1.2, servlet-2.3, xerces-j 1.4.4, Tomcat 4.0.3 itself, and the Tomcat 4.0.3 default Web applications.
Once you’ve downloaded the files, use rpm to install them (assuming you’re using Red Hat). After running the installs, you should have several new directories and files. On Red Hat, the Tomcat server is installed by default into /var/tomcat4. The init.d script is installed to /etc/init.d/tomcat4. The startup configuration script can be found in /etc/tomcat4/conf/tomcat4.conf. For the rest of this article, we’ll refer to /var/tomcat4 as TOMCAT_HOME.
CONFIGURE THE TOMCAT SCRIPT
Before you launch Tomcat, you must edit the Tomcat start-up script to point to your installation of the JDK. Go to the directory /etc/tomcat4/conf and edit the file tomcat4.conf. Change the JAVA_HOME variable to point to your copy of JDK 1.4. On the laptop, the line was JAVA_HOME=/usr/java/j2sdk1.4.0. The other script variables, CATALINA_HOME, JASPER_HOME, and CATALINA_TMPDIR, should already be set correctly.
Finally, if you are using JDK 1.4, you must add the following line to the end of your startup script or AXIS will not work (if you are using JDK 1.3, you can safely skip this step).
JAVA_ENDORSED_DIRS=”$CATALINA_HOME”/bin:
“$CATALINA_HOME”/common/lib:
“$CATALINA_HOME”/webapps/axis/WEB-INF/lib
START AND TEST TOMCAT
After editing tomcat4.conf, the only task left is to start Tomcat. As root, use the init.d script to launch the server:
# /etc/init.d/tomcat4 start
Starting tomcat4: [ OK ]
To test that Tomcat is running, point your Web browser to http://localhost:8180/. (The port on your system may be 8080, depending on what other servers you have installed. If you're not sure what port to use, open the file TOMCAT_HOME/conf/server.xml, and look for the port number associated with the non-SSL standalone server.) You should see the default Tomcat web page.
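If you prefer to check from code rather than a browser, a quick connectivity test can tell you whether anything is listening on the port at all. This is a sketch of our own, not part of Tomcat or AXIS; the host and port follow this article's setup and may differ on your machine.

```java
import java.io.IOException;
import java.net.Socket;

// Minimal TCP connectivity check: can we open a socket to the server?
public class PortCheck {
    static boolean isListening(String host, int port) {
        try {
            Socket s = new Socket(host, port);
            s.close();
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 8180 is the port used throughout this article; yours may be 8080.
        System.out.println(isListening("localhost", 8180)
            ? "Tomcat is up" : "no listener on port 8180");
    }
}
```

Note that this only proves a listener exists; it says nothing about whether Tomcat is serving pages correctly.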
INSTALL AXIS
The AXIS toolkit is distributed as a collection of jar files. To install AXIS on your server, go to the Apache XML Web site (xml.apache.org) and download the latest release. The release will be in a tar file named something like xml-axis.tar.gz. Unzip and untar the file into a temporary directory. We'll refer to this temporary directory as AXIS_HOME (it's probably useful to set a shell variable to record AXIS_HOME).
Copy the entire AXIS_HOME/webapps/axis directory to the directory TOMCAT_HOME/webapps. Next, copy the Xerces XML parser jar (located at TOMCAT_HOME/common/lib/xerces.jar) to TOMCAT_HOME/webapps/axis/WEB-INF/lib. The following commands show what to copy:
# cp -pr $AXIS_HOME/webapps/axis $TOMCAT_HOME/webapps
# cp $TOMCAT_HOME/common/lib/xerces.jar $TOMCAT_HOME/webapps/axis/WEB-INF/lib
At this point, AXIS is installed and ready to use on the server. To test AXIS, point your browser to http://localhost:8180/axis/. You should see a page like the one shown in Figure One.
SET UP YOUR DEVELOPMENT ENVIRONMENT
To develop clients with AXIS, you will need to add several of the AXIS jar files to your Java CLASSPATH. If you’ve defined the AXIS_HOME environment variable in your shell, then a command like the following will add everything you need.
% export CLASSPATH=\
$AXIS_HOME/src/axis/lib/axis.jar:\
$AXIS_HOME/src/axis/lib/jaxrpc.jar:\
$AXIS_HOME/src/axis/lib/commons-logging.jar:\
$AXIS_HOME/src/axis/lib/tt-bytecode.jar:\
$AXIS_HOME/src/axis/lib/wsdl4j.jar:\
$TOMCAT_HOME/common/lib/xerces.jar:\
$AXIS_HOME/src/axis:\
$CLASSPATH
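A frequent source of NoClassDefFoundError at this stage is a CLASSPATH entry that doesn't actually exist on disk (a typo in AXIS_HOME, say). The small helper below, which is our own sketch and not part of AXIS, flags any missing entries:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.StringTokenizer;

// Report classpath entries that do not exist on the filesystem.
public class ClasspathCheck {
    static String[] missingEntries(String classpath) {
        ArrayList missing = new ArrayList();
        StringTokenizer st = new StringTokenizer(classpath, File.pathSeparator);
        while (st.hasMoreTokens()) {
            String entry = st.nextToken();
            if (!new File(entry).exists()) {
                missing.add(entry);
            }
        }
        return (String[]) missing.toArray(new String[0]);
    }

    public static void main(String[] args) {
        // Check the classpath the JVM was actually started with.
        String[] bad = missingEntries(System.getProperty("java.class.path"));
        for (int i = 0; i < bad.length; i++) {
            System.out.println("missing: " + bad[i]);
        }
        if (bad.length == 0) {
            System.out.println("all classpath entries exist");
        }
    }
}
```

Run it with the same CLASSPATH you exported above; any "missing:" line points at a path to fix before you go further.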
At this point AXIS should be working on the server, and your shell should be able to build the Java examples. Now, let’s build a Web service and a client and try them out.
Instant Web Service — Just Add Java
One of the nicest parts of AXIS is its “instant Web service” feature called Java Web Service (JWS) — just take a Java file, rename it, and drop it into TOMCAT_HOME/webapps/axis to make all of the (public) methods in the class callable through Web services.
The magic of JWS is defined in the TOMCAT_HOME/webapps/axis/WEB-INF/web.xml file using the following configuration:
<servlet-mapping>
<servlet-name>AxisServlet</servlet-name>
<url-pattern>*.jws</url-pattern>
</servlet-mapping>
Whenever Tomcat — or any other J2EE-compliant server — is prompted for a .jws file in the AXIS context, the AxisServlet is invoked.
For our example, we're going to deploy a famous quotes service. Initially, the service will offer two methods, quote() and count() (we'll add complexity as we go along). This is a very simple service, but you can easily replace it with a more substantial class that uses JDBC or another technique for persistence. Listing One shows Quote.java, a class that implements the quote() and count() methods.
Listing One: The Quote class
import java.util.HashMap;
import java.util.Map;

public class Quote {
    private HashMap quotes = null;

    public Quote() {
        quotes = new HashMap();
        quotes.put("Groucho Marx",
            "Time flies like an arrow. Fruit flies like a banana.");
        quotes.put("Mae West",
            "When women go wrong, men go right after them.");
        quotes.put("Mark Twain",
            "Go to Heaven for the climate, Hell for the company.");
    }

    public String quote(String name) {
        String quote;
        if (name == null || name.length() == 0
                || (quote = (String) quotes.get(name)) == null) {
            quote = "No quotes.";
        }
        return (quote);
    }

    public int count() {
        return quotes.size();
    }
}
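Before copying the file to the server, it's worth exercising the class locally. The harness below inlines a copy of the Quote class from Listing One so that it compiles on its own; it's a local sanity check of ours, not part of the deployment procedure.

```java
import java.util.HashMap;

// Local sanity check for the Quote class before deploying it as a JWS.
// The class from Listing One is reproduced here so this file is self-contained.
public class QuoteLocalTest {
    static class Quote {
        private HashMap quotes = new HashMap();
        Quote() {
            quotes.put("Groucho Marx", "Time flies like an arrow. Fruit flies like a banana.");
            quotes.put("Mae West", "When women go wrong, men go right after them.");
            quotes.put("Mark Twain", "Go to Heaven for the climate, Hell for the company.");
        }
        String quote(String name) {
            String q;
            if (name == null || name.length() == 0
                    || (q = (String) quotes.get(name)) == null) {
                q = "No quotes.";
            }
            return q;
        }
        int count() { return quotes.size(); }
    }

    public static void main(String[] args) {
        Quote q = new Quote();
        System.out.println(q.count());            // 3
        System.out.println(q.quote("Mark Twain"));
        System.out.println(q.quote("Nobody"));    // "No quotes."
    }
}
```

Once the methods behave as expected locally, you can deploy with confidence that any remote failure is in the plumbing, not the logic.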
To deploy Quote as a Web service, simply copy the Quote.java file to TOMCAT_HOME/webapps/axis/Quote.jws. As mentioned above, the .jws extension is special: if AXIS finds a .jws file, it will automatically compile the file and make the public methods in the class available as a Web service.
Note that Listing One is in the default package. Do not place your classes in a specific package if you want to deploy them via JWS.
The server-side coding is nice and simple. The hefty coding is done in the client. Listing Two shows QuoteClient.java.
Listing Two: The famous quotes client

1  package linuxmag.aug02.example1;
2
3  import org.apache.axis.client.Call;
4  import org.apache.axis.client.Service;
5  import org.apache.axis.encoding.XMLType;
6  import org.apache.axis.utils.Options;
7
8  import javax.xml.rpc.ParameterMode;
9
10 public class QuoteClient
11 {
12     public static void main(String [] args) throws Exception {
13         String host = "http://localhost:";
14         String servicepath = "/axis/Quote.jws";
15         Options options = new Options(args);
16         int port = options.getPort();
17         String endpoint = host + port + servicepath;
18         String method = null;
19
20         args = options.getRemainingArgs();
21
22         if (args == null || (!(method = args[0]).equals("quote")
23                 && !method.equals("count"))) {
24             System.err.println("Usage:");
25             System.err.println("  QuoteClient count");
26             System.err.println("  QuoteClient quote name");
27             return;
28         }
29
30         String op1 = null;
31
32         if (method.equals("quote")) {
33             op1 = args[1];
34         }
35
36         String ret = null;
37         Service service = new Service();
38         Call call = (Call) service.createCall();
39
40         call.setTargetEndpointAddress(new java.net.URL(endpoint));
41         call.setOperationName(method);
42
43         if (op1 != null) {
44             call.addParameter("op1", XMLType.XSD_STRING, ParameterMode.IN);
45             call.setReturnType(XMLType.XSD_STRING);
46             ret = (String) call.invoke(new Object [] {op1});
47         } else {
48             call.setReturnType(XMLType.XSD_INT);
49             ret = ((Integer) call.invoke(new Object [] {})).toString();
50         }
51
52         System.out.println("Got result : " + ret);
53     }
54 }
To run the client, put QuoteClient.java in AXIS_HOME/src/axis/linuxmag/aug02/example1 and do the following:
% cd $AXIS_HOME/src/axis/linuxmag/aug02/example1
% javac QuoteClient.java
% java linuxmag.aug02.example1.QuoteClient \
    -p8180 quote "Groucho Marx"
Got result : Time flies like an arrow.
Fruit flies like a banana.
Let's look at the client in some detail. Lines 3-8 bring in the additional classes required for an AXIS client. Line 13 names the host; here we're using localhost and we've appended a colon so we can specify the exact port to connect to. Line 14 specifies the path to the Web service. Because Quote.jws does not name a package, the path simply points to the /axis/ directory. Line 15 uses an AXIS convenience class named Options to find, store, and remove common AXIS arguments on the command line. For example, you can specify the port number you want to connect to with the -p flag.
Lines 22-28 are typical argument checking code.
Lines 37 and 38 create two critical classes. Service is the starting point to access any SOAP Web service. Call is used to invoke the service. Lines 40-41 set the endpoint — the destination for our SOAP message — and name the method we want to invoke. Lines 44-46 and lines 48-49 set the parameters for the remote call, and specify the type of the return value. The method quote returns a String, specified with XMLType.XSD_STRING. The method count returns an Integer, denoted with XMLType.XSD_INT. The return value of call.invoke() is Object, which we cast to the corresponding return type of each method.
Figure Two shows the SOAP request and response (with the HTTP headers) for calling the quote method. (Some of the SOAP-ENV:Envelope elements have been omitted to save space; everything else appears intact.) Notice that endpoint appears as part of the HTTP POST; quote, the remote method name, is the only element of the SOAP-ENV:Body; and, the arguments to the remote method are the sub-elements of quote.
Finally, notice that the xsi:type attribute of the quoteReturn element is xsd:string. AXIS uses the xsi:type attribute to deserialize the return value captured in the XML response into the correct Java class. In this case, the return value was deserialized into a String. (The opposite process of converting Java to XML, as is performed to generate the request, is called serialization.)
The xsi:type is generated by the Web service and not by call.setReturnType(XMLType.XSD_STRING). So, why is the setReturnType() call even needed? Some Web services do not label return values with types. In those cases, the call to setReturnType() is a hint to AXIS, telling it to convert whatever the return value is to a Java String. In our example, the setReturnType() is indeed redundant, but we can’t always predict what a Web service will return.
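To make the idea concrete, here is a toy sketch — our own code, not AXIS internals — of how a type label like xsi:type can drive deserialization: the label selects which Java class the element text becomes.

```java
// Toy illustration of xsi:type-driven deserialization. A real SOAP stack
// maintains a full type-mapping registry; this covers just three XSD types.
public class XsiTypeDemo {
    static Object deserialize(String xsiType, String text) {
        if ("xsd:string".equals(xsiType)) {
            return text;
        } else if ("xsd:int".equals(xsiType)) {
            return Integer.valueOf(text);
        } else if ("xsd:boolean".equals(xsiType)) {
            return Boolean.valueOf(text);
        }
        throw new IllegalArgumentException("no mapping for " + xsiType);
    }

    public static void main(String[] args) {
        // The analogue of reading <quoteReturn xsi:type="xsd:string">...</quoteReturn>
        System.out.println(deserialize("xsd:string", "hello").getClass().getName()); // java.lang.String
        System.out.println(deserialize("xsd:int", "3").getClass().getName());        // java.lang.Integer
    }
}
```

setReturnType() plays the role of the type label when the response omits xsi:type: it tells the client side which branch of this mapping to use.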
If you want to see the outgoing requests and the incoming responses to your Web service, AXIS offers a special tool (actually a Java class) called tcpmon that acts as a proxy between two ports. See the sidebar "Debugging with tcpmon" for more information on this valuable tool.
So, that's the near-instant way to deploy a Web service and a general technique to write a client. While the JWS technique offers convenience, it has some drawbacks: the Java source file must live on the server, the class cannot belong to a package, and you get no control over how the service is deployed or configured. For that control, you need a Web Service Deployment Descriptor (WSDD) file.
Exert Control with WSDD
WSDD files let you leverage all of the advanced features of AXIS. AXIS is quite extensible, and as you add components to your Web service, you’ll use a WSDD file to pull it together.
For example, we’ve seen the HTTP transport, but you could add an SMTP or FTP transport to process requests originating through those protocols. Or, you can add a logging mechanism to your service by adding an AXIS handler. Configured correctly, the logging handler would be called before your code every time your Web service receives a request.
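A sketch of what such a handler configuration might look like is shown below, assuming AXIS's bundled LogHandler; check the AXIS documentation for the handler classes and options your release actually supports.

```xml
<deployment xmlns="http://xml.apache.org/axis/wsdd/"
            xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
  <service name="Quote" provider="java:RPC">
    <requestFlow>
      <!-- Runs before the service code on every incoming request -->
      <handler type="java:org.apache.axis.handlers.LogHandler"/>
    </requestFlow>
    <parameter name="className" value="linuxmag.aug02.quoteservice.Quote"/>
    <parameter name="allowedMethods" value="*"/>
  </service>
</deployment>
```

The requestFlow element is the key: any handlers listed there are invoked, in order, before your service method runs.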
Deploying a Web service via a WSDD file is a little more time consuming than JWS, but still pretty simple. Here’s how to deploy the quote service with WSDD:
1. Unlike JWS, you should provide a package name for your classes when deploying with WSDD. In this case, add the line package linuxmag.aug02.quoteservice; to the top of Quote.java.
2. Run javac to compile Quote.java, yielding Quote.class.
3. Create a new directory linuxmag/aug02/quoteservice in TOMCAT_HOME/webapps/axis/WEB-INF/classes and copy the file Quote.class to it.
4. Deploy the Web service by submitting the WSDD file shown in Listing Three to AXIS. A special utility called AdminClient does the work. The command below shows what to do.
Listing Three: The WSDD file for the famous quotes service
1 <deployment
2     xmlns="http://xml.apache.org/axis/wsdd/"
3     xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
4
5   <service name="Quote" provider="java:RPC">
6     <parameter name="className" value="linuxmag.aug02.quoteservice.Quote"/>
7     <parameter name="allowedMethods" value="*"/>
8   </service>
9 </deployment>
% java org.apache.axis.client.AdminClient \
    -p8180 deploy.wsdd
[INFO] AdminClient - Processing file deploy.wsdd
<Admin>Done processing</Admin>
5. If you want to see if the service has been “registered” properly with AXIS, use the list option of AdminClient. You should see something similar to Figure Three (the output has been greatly abbreviated to save space). The list option also shows any JWS services you may have deployed.
Figure Three: Listing available Web services
% java org.apache.axis.client.AdminClient -p8180 list
…
<handler name="URLMapper" type="java:org.apache.axis.handlers.http.URLMapper"/>
<service name="Quote" provider="java:RPC">
  <parameter name="allowedMethods" value="*"/>
  <parameter name="className" value="linuxmag.aug02.quoteservice.Quote"/>
</service>
…
Of course, you can also run a client to test the Web service. Let's do that. Copy the code from Listing Two to a new class, QuoteClientWSDD, and adapt it to the new service. Change servicepath to "/axis/services/Quote", since services deployed via WSDD are reached under /axis/services/ rather than as .jws files, and set the operation name with a qualified name (this requires importing javax.xml.namespace.QName):

call.setOperationName(new QName("Quote", "quote"));

call.setOperationName(new QName("Quote", "count"));
Compile and run the edited class as shown in Figure Four.
Figure Four: Building and running the modified client

% javac QuoteClientWSDD.java
% java linuxmag.aug02.example1.QuoteClientWSDD -p8180 quote "Mae West"
Got result : When women go wrong, men go right after them.
% java linuxmag.aug02.example1.QuoteClientWSDD -p8180 count
Got result : 3
Again, that’s it. We’ve deployed a Web service and a complementary client in just a few minutes. In general, you will follow a similar procedure to deploy all your Web services via WSDD. (You might use JWS for debugging or quick projects to test interoperability.)
By the way, if you ever want to undeploy a service, create and submit another WSDD file that looks like the following:
<undeployment
    xmlns="http://xml.apache.org/axis/wsdd/">
  <service name="Quote"/>
</undeployment>
WSDL: Web Services All The Time
So far, we’ve seen tools and code that make deployment of Web services easy, but we haven’t seen a way to make client coding any easier. What we need is something to separate the client from the particulars of the server. What we need is WSDL.
WSDL, or Web Services Description Language, is an XML-based syntax that describes a Web service in an abstract but regular form. Given a WSDL file, a client can dynamically call a Web service. Better yet, with a WSDL file and a few clever AXIS tools, you can even create Java stub code for your client and your server.
For example, if you still have the original Quote.jws file deployed in AXIS, point your browser to http://localhost:8180/axis/Quote.jws?wsdl (appending ?wsdl to a service's URL asks AXIS for its description). What you see in the browser is a WSDL description of your Web service, generated automatically by AXIS.
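Abbreviated, the generated document looks something like the sketch below. The element names and namespaces here are illustrative, not copied output; your file will contain more detail, including binding and service sections.

```xml
<wsdl:definitions targetNamespace="http://localhost:8180/axis/Quote.jws"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:tns="http://localhost:8180/axis/Quote.jws">
  <wsdl:message name="quoteRequest">
    <wsdl:part name="name" type="xsd:string"/>
  </wsdl:message>
  <wsdl:message name="quoteResponse">
    <wsdl:part name="quoteReturn" type="xsd:string"/>
  </wsdl:message>
  <wsdl:portType name="Quote">
    <wsdl:operation name="quote" parameterOrder="name">
      <wsdl:input message="tns:quoteRequest"/>
      <wsdl:output message="tns:quoteResponse"/>
    </wsdl:operation>
  </wsdl:portType>
  <!-- binding and service elements omitted -->
</wsdl:definitions>
```

The portType is the part a client cares about: it names each operation and ties it to typed request and response messages, which is exactly the information the code generators below consume.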
For our last example, let’s extend the famous quotes server, deploy it, and use AXIS tools to build a WSDL file and a client.
1. Make a directory AXIS_HOME/src/axis/linuxmag/aug02/quote. Copy the original Quote.java file (shown in Listing One) into that directory.
2. Edit Quote.java. At the top of the file, add package linuxmag.aug02.quote; and add the two new methods shown in Figure Five to the Quote class (they use java.util.Set and java.util.Iterator, so import those as well).
Figure Five: Two new methods for the Web service
public String[] contents() {
String[] s = new String[quotes.size()];
Set keys = quotes.keySet();
int j = 0;
for (Iterator i = keys.iterator(); i.hasNext(); ) {
String name = (String) i.next();
s[j++] = (String) quotes.get(name);
}
return s;
}
public long time() {
return System.currentTimeMillis();
}
3. Java2WSDL is a special AXIS client that interprets a Java class and creates a WSDL file that describes all of the methods in the class. Once we have the WSDL file for our Quote class, we can use another AXIS utility, WSDL2Java to create client and server code stubs. To create a WSDL file from our class, run this command:
% java org.apache.axis.wsdl.Java2WSDL \
    -o quote.wsdl \
    -l"http://localhost:8180/axis/services/Quote" \
    -n "urn:Quote" \
    -p"linuxmag.aug02.quote" "urn:Quote" \
    linuxmag.aug02.quote.Quote
The output of the command (the -o switch) is quote.wsdl. The -l option specifies the URL (the endpoint) of the Web service. -n is the target namespace of the WSDL file. -p indicates a mapping from the package to a namespace. The last argument is the Java class (or a Java interface) that defines the methods of the Web service.
4. Before we generate any code, move the original Quote.java file to Quote.java.orig. If you don’t, it will get clobbered in the next step.
5. The AXIS utility WSDL2Java can generate stub code for the server and client. For the server, the utility generates all the code you need to deploy a service — all you’ll need to do is fill in the actual service code itself. For the client, the code to invoke the service remotely is reduced to three lines. WSDL2Java also generates a simple WSDD file to deploy your Web service.
Run the following java command to generate the server and client stubs in the current directory.

% java org.apache.axis.wsdl.WSDL2Java \
    -o . -d Session -s -S true \
    -Nurn:Quote linuxmag.aug02.quote \
    quote.wsdl
Run ls in the current directory to see a list of files. QuoteService.java, QuoteSoapBindingSkeleton.java, QuoteSoapBindingImpl.java, and deploy.wsdd (plus a matching undeploy.wsdd) are used on the server. QuoteServiceLocator.java and QuoteSoapBindingStub.java are used on the client. Quote.java is a Java interface and is used on the client and server.
6. Open the file QuoteSoapBindingImpl.java and replace its methods with those from Quote.java.orig. Make sure to import all of the packages you need and define all of the fields. QuoteSoapBindingImpl.java is where your code and the Web service code meet.
7. As root, create a new directory TOMCAT_HOME/webapps/axis/WEB-INF/classes/linuxmag/aug02/quote, and run commands to compile, copy, and deploy your service, as shown in Figure Six.
Figure Six: Deploying the new Web service
% javac *.java
% su
# cp Quote.class QuoteSoapBindingImpl.class QuoteSoapBindingSkeleton.class \
    $TOMCAT_HOME/webapps/axis/WEB-INF/classes/linuxmag/aug02/quote
# exit
% java org.apache.axis.client.AdminClient -p 8180 deploy.wsdd
[INFO] AdminClient - Processing file deploy.wsdd
<Admin>Done processing</Admin>
8. Copy the code in Figure Seven to create the file named Test.java.
Figure Seven: A Web service client built from stub code
package linuxmag.aug02.quote;

import java.util.Date;

public class Test {
    public static void main(String[] args) throws Exception {
        QuoteServiceLocator l = new QuoteServiceLocator();
        QuoteSoapBindingStub stub = (QuoteSoapBindingStub) l.getQuote();
        System.out.println("Time on server: " + (new Date(stub.time())).toString());
        System.out.println(stub.quote("Groucho Marx") + " — Groucho Marx");
        System.out.println("Number of quotes: " + stub.count());
        String[] quotes = stub.contents();
        for (int i = 0; i < quotes.length; i++) {
            System.out.println(quotes[i]);
        }
    }
}
Notice that only two calls, the first two lines of the main method, are needed to set up before we invoke the service.
9. Compile and run the client.
% javac Test.java
% java linuxmag.aug02.quote.Test
Time on server: Tue Jun 18 01:59:20 PDT 2002
Time flies like an arrow. Fruit flies like
a banana. — Groucho Marx
Number of quotes: 3
When women go wrong, men go right after them.
Time flies like an arrow. Fruit flies like a
banana.
Go to Heaven for the climate, Hell for the
company.
And, that’s all folks. WSDL2Java reduced a complex programming task to some very simple code. You can follow the same procedure to talk to any Web service. All you need is the Web service’s WSDL file.
Now, Get Off Your Axis…
AXIS has more tricks and features to discover. Although we’ve seen an expansive set of tools and techniques, we’ve only scratched the surface. To see more samples and get more information on other AXIS concepts, read the AXIS User’s Guide. There is also an in-depth architecture guide if you’re curious about the guts of AXIS.
A good toolkit is a software developer’s best friend. Armed with a set of classes, build tools, off-the-shelf infrastructure, and a few helpful examples, a programmer is ready to type away at any gauntlet management throws down. Add AXIS to your arsenal.
Debugging with tcpmon
To help you debug your Web service and Web client, AXIS includes a helpful utility class named org.apache.axis.utils.tcpmon. tcpmon listens for connections on a given port on the localhost, and forwards incoming messages to another port on another server. By inserting itself between the two ports, tcpmon can show you all incoming and outgoing messages. A picture of tcpmon is shown here.
Assuming you have your CLASSPATH and DISPLAY variables set correctly, the following commands will launch a tcpmon X window.
% alias tcpmon=”java org.apache.axis.utils.tcpmon”
% tcpmon &
Once the window appears, enter a port number that’s not in use in the Listen Port # field. Choose Act as a Listener… and enter the hostname and port number of your Tomcat server in the Target hostname and Target port # fields, respectively. In the figure, the incoming port was 8181 and the hostname and port for Tomcat was localhost and 8180, respectively. Again, your port numbers may be different. Next, click the Add button and then click on the tab for your port number.
Incoming SOAP requests are shown in the topmost window, and responses are shown in the bottom window.